| column | dtype | stats |
| --- | --- | --- |
| repo_name | string | length 9-75 |
| topic | string | 30 classes |
| issue_number | int64 | 1-203k |
| title | string | length 1-976 |
| body | string | length 0-254k |
| state | string | 2 classes |
| created_at | string | length 20 |
| updated_at | string | length 20 |
| url | string | length 38-105 |
| labels | list | length 0-9 |
| user_login | string | length 1-39 |
| comments_count | int64 | 0-452 |
tqdm/tqdm
jupyter
1,279
[Feature Request] get a reference to a tqdm instance inside a loop
For those of us who frequently use `tqdm.write`, update descriptions, and use other functionality, it would be very nice to get a reference to the tqdm instance. Right now, if you want to, let's say, change a description, your boilerplate code would probably look something like this:

```python
from tqdm import tqdm
from utils import some_functionality_over_iterable_elements

iterable = range(100)
progress_bar = tqdm(total=len(iterable), desc="default description")
for element in iterable:
    result = some_functionality_over_iterable_elements(element)
    # based on some kind of condition, change the description
    if result is some_condition:
        progress_bar.set_description("changed_description")
    progress_bar.update(1)
progress_bar.close()
```

Would it not be nice to set a tqdm instance name and get a reference to it (by its name) without closing it? Something like this:

```python
import tqdm
from utils import some_functionality_over_iterable_elements

for element in tqdm.tqdm(range(100), desc="default_description", name="prog_bar1"):
    result = some_functionality_over_iterable_elements(element)
    # based on some kind of condition, change the description
    if result is some_condition:
        tqdm.get_instance_by_name("prog_bar1").set_description("changed_description")
```

Or, even better, by lazily referencing instances in the order in which they were created:

```python
import tqdm
from utils import some_functionality_over_iterable_elements

for element in tqdm.tqdm(range(100), desc="default_description"):
    result = some_functionality_over_iterable_elements(element)
    # based on some kind of condition, change the description
    if result is some_condition:
        # by default fetches the latest tqdm instance, but may also take an integer
        # to select which one (in case multiple instances are in use)
        tqdm.get_instance_by_id().set_description("changed_description")
```

This could, in many cases, greatly increase code readability and declutter/reduce the needed boilerplate.
open
2021-12-01T14:35:11Z
2022-06-18T06:51:15Z
https://github.com/tqdm/tqdm/issues/1279
[ "p4-enhancement-future 🧨" ]
tloki
0
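The named-instance registry proposed in the tqdm issue above can be sketched in a few lines. `get_instance_by_name` is the issue's hypothetical API, and `FakeBar` is a stand-in for a tqdm bar so the sketch has no dependency on tqdm itself:

```python
# Minimal sketch of the proposed named-instance registry.
# `get_instance_by_name` is hypothetical; `FakeBar` stands in for a tqdm bar.
_registry = {}


class FakeBar:
    def __init__(self, name=None, desc=""):
        self.desc = desc
        if name is not None:
            _registry[name] = self  # register under the user-chosen name

    def set_description(self, desc):
        self.desc = desc


def get_instance_by_name(name):
    return _registry[name]


bar = FakeBar(name="prog_bar1", desc="default_description")
get_instance_by_name("prog_bar1").set_description("changed_description")
print(bar.desc)  # changed_description: the change is visible through the original reference
```

In current tqdm, the usual workaround is simply to keep the reference yourself, e.g. `for element in (bar := tqdm(range(100))): ...` on Python 3.8+.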
FlareSolverr/FlareSolverr
api
1,427
[yggtorrent] (updating) FlareSolverr was unable to process the request, please check FlareSolverr logs. Message: Error: Error solving the challenge. Timeout after 55.0 seconds.
### Have you checked our README?

- [X] I have checked the README

### Have you followed our Troubleshooting?

- [X] I have followed your Troubleshooting

### Is there already an issue for your problem?

- [X] I have checked older issues, open and closed

### Have you checked the discussions?

- [x] I have read the Discussions

### Have you ACTUALLY checked all these?

YES

### Environment

```markdown
- FlareSolverr version:
- Last working FlareSolverr version:
- Operating system:
- Are you using Docker: [yes/no]
- FlareSolverr User-Agent (see log traces or / endpoint):
- Are you using a VPN: [yes/no]
- Are you using a Proxy: [yes/no]
- Are you using Captcha Solver: [yes/no]
- If using captcha solver, which one:
- URL to test this issue:
```

### Description

How do I enable logs? I don't know how, because I use your app on Unraid. I only have this:

![image](https://github.com/user-attachments/assets/d52b7de1-3a79-4128-a0ba-235579abb8fc)

### Logged Error Messages

```text
An error occurred while updating this indexer

FlareSolverr was unable to process the request, please check FlareSolverr logs. Message: Error: Error solving the challenge. Timeout after 55.0 seconds.

Click here to open an issue on GitHub for FlareSolverr.
```

### Screenshots

![image](https://github.com/user-attachments/assets/b9943af4-ca6d-4bf1-b13e-d6be8fe2126f)
closed
2024-12-27T16:29:57Z
2024-12-27T17:12:36Z
https://github.com/FlareSolverr/FlareSolverr/issues/1427
[ "duplicate" ]
Logidroid
0
fastapi/sqlmodel
fastapi
507
Relationship from Model Data and inheritance
### First Check

- [X] I added a very descriptive title to this issue.
- [X] I used the GitHub search to find a similar issue and didn't find it.
- [X] I searched the SQLModel documentation, with the integrated search.
- [X] I already searched in Google "How to X in SQLModel" and didn't find any information.
- [X] I already read and followed all the tutorial in the docs and didn't find an answer.
- [X] I already checked if it is not related to SQLModel but to [Pydantic](https://github.com/samuelcolvin/pydantic).
- [X] I already checked if it is not related to SQLModel but to [SQLAlchemy](https://github.com/sqlalchemy/sqlalchemy).

### Commit to Help

- [X] I commit to help with one of those options 👆

### Example Code

```python
from typing import List, Optional

from sqlmodel import Field, SQLModel


class SurveyDTO(SQLModel):
    responses: List["ResponsesDTO"] = []
    # other fields..


class ResponsesDTO(SQLModel):
    code: int
    response: str


class SurveyTable(SurveyDTO, table=True):
    id: Optional[int] = Field(default=None, primary_key=True)
    # how to manage relationship from DTO?


class ResponsesTable(ResponsesDTO, table=True):
    id: Optional[int] = Field(default=None, primary_key=True)
    # how to manage relationship from DTO?
```

In FastAPI:

```python
@app.post(endpoint_paths.SURVEY)
def post_survey(
    session: Session = Depends(get_session),
    survey: SurveyDTO = Body(..., embed=True),
) -> Response:
    # save the survey
    ...
```

### Description

I am trying to create a one-to-many relationship through inheritance from a model class to a table class. I don't understand how to create the relationship with the `List[ResponsesDTO]` in the table without duplicating code. Maybe I am missing something? Thank you for your help :)

### Operating System

Windows

### Operating System Details

_No response_

### SQLModel Version

0.08

### Python Version

3.8

### Additional Context

Seems related to #18 and #469
open
2022-11-23T11:20:44Z
2022-11-24T10:47:39Z
https://github.com/fastapi/sqlmodel/issues/507
[ "question" ]
Franksign
3
streamlit/streamlit
streamlit
10,005
Incorrect type hint for `value` param of st.date_input. Given `NullableScalarDateValue` should be `DateValue`
### Checklist

- [X] I have searched the [existing issues](https://github.com/streamlit/streamlit/issues) for similar issues.
- [X] I added a very descriptive title to this issue.
- [X] I have provided sufficient information below to help reproduce this issue.

### Summary

I believe that the type hint for `value` is incorrect. Please see this code snippet taken [from here](https://github.com/streamlit/streamlit/blob/1.41.0/lib/streamlit/elements/widgets/time_widgets.py#L574):

```python
@gather_metrics("date_input")
def date_input(
    self,
    label: str,
    value: NullableScalarDateValue | None = "today",
    min_value: NullableScalarDateValue = None,
    max_value: NullableScalarDateValue = None,
```

The type hint of `value` is the same as for the min and max values, even though `value` should accept a sequence, too.

### Reproducible Code Example

```python
import datetime

import streamlit as st

today: datetime.date = datetime.date.today()
past: datetime.date = today - datetime.timedelta(days=30)

st.date_input("Date Range", value=(past, today))
```

### Steps To Reproduce

mypy <path_to_file_with_example>.py

### Expected Behavior

Success: no issues found

### Current Behavior

Error message:

```
error: Argument "value" has incompatible type "tuple[date, date]"; expected "date | datetime | str | Literal['today'] | None"  [arg-type]
```

### Is this a regression?

- [X] Yes, this used to work in a previous version.

### Debug info

- Streamlit version: 1.41.0
- Python version: 3.12.8
- mypy version: 1.13.0
- Operating System: -
- Browser: -

### Additional Information

_No response_
closed
2024-12-12T11:14:31Z
2024-12-12T16:12:15Z
https://github.com/streamlit/streamlit/issues/10005
[ "type:bug", "priority:P3", "feature:st.date_input" ]
vladislavlh
4
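The fix the reporter asks for amounts to widening the annotation of `value` so it also accepts a sequence of dates. A sketch of such a union plus a normalizer follows; the alias names are hypothetical, not Streamlit's actual type aliases:

```python
import datetime
from typing import Sequence, Tuple, Union

# Hypothetical widened alias: `value` may be a scalar date-ish value OR a
# sequence of them (e.g. a (start, end) range); min/max stay scalar-only.
ScalarDate = Union[datetime.date, datetime.datetime, str, None]
DateValue = Union[ScalarDate, Sequence[ScalarDate]]


def normalize_value(value: DateValue) -> Tuple[ScalarDate, ...]:
    """Return the value as a tuple, whether it came in as a scalar or a range."""
    # datetime.datetime is a subclass of datetime.date, so one check covers both
    if value is None or isinstance(value, (datetime.date, str)):
        return (value,)
    return tuple(value)


today = datetime.date.today()
past = today - datetime.timedelta(days=30)
print(normalize_value((past, today)))  # a 2-tuple range
print(normalize_value("today"))       # a 1-tuple scalar
```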
autogluon/autogluon
computer-vision
4,825
[timeseries] Add clone_for_deployment to TimeSeriesPredictor
## Description

- Add the equivalent of [TabularPredictor.clone_for_deployment](https://auto.gluon.ai/stable/api/autogluon.tabular.TabularPredictor.clone_for_deployment.html). Currently, the trained predictor folder can be quite large and contains a lot of redundant information (e.g., a training data copy), which makes it hard to use this artifact for deployment.

## References

- https://auto.gluon.ai/stable/api/autogluon.tabular.TabularPredictor.clone_for_deployment.html
open
2025-01-22T15:26:07Z
2025-01-22T15:26:07Z
https://github.com/autogluon/autogluon/issues/4825
[ "enhancement", "module: timeseries" ]
shchur
0
pywinauto/pywinauto
automation
1,103
The window has not been focused due to COMError: (-2147467259, 'Unspecified error', (None, None, None, 0, None))
## Expected Behavior

No warning. And/or better feedback and understanding of what is going on.

## Actual Behavior

I start my app with pywinauto and get the top window, then I carry on automating: fetch the menus and toolbars and start on my merry way. It's working quite well. But my output is often polluted, with one instance and one instance only, usually early, around the time top_window is fetched (not long after, mostly, and sometimes later), with a message that reads:

```
The window has not been focused due to COMError: (-2147467259, 'Unspecified error', (None, None, None, 0, None))
```

I can see that this is generated by `set_focus()` in `uiawrapper.py`, but it says nothing of utility. Fixes I would recommend might include:

1. Reporting the error with its hex code, not as a decimal number, as that is how the codes are documented: https://docs.microsoft.com/en-us/windows/win32/com/com-error-codes-1
2. Including more information about which element we were trying to focus when the error arose, like its name or even a complete element_info dump.
3. Helping us understand what role COM plays in this, with a quick and simple pointer. Not least because I actually talk to the same app over its COM interface as well and wasn't aware that pywinauto used one. To what end is pywinauto talking over COM? Or is this a windll-internal thing? Either way, I would love to understand it when it arises.

## Steps to Reproduce the Problem

Not 100% sure it's easy to reproduce, as it could be very contextual to this app or my platform. It's a COM error after all, and an unspecified one. I have a feeling, not a knowing, just a feeling, that it arises when I interact with the PC while my automation script is running. If I stand back and let it do its work, and don't touch my PC, I don't see this warning (I think).

If I continue to do something on the side, my automation actually works fine; nothing fails. Every now and then it grabs my mouse pointer and whisks it away to click some menu on my app on the other monitor, but on the whole it all works fine. I don't do this often, but I am often doing a quick interaction, moving a window after I started it ...

## Specifications

- Pywinauto version: 0.6.8
- Python version and bitness: 3.8.3, 64-bit
- Platform and OS: Windows 10
open
2021-08-03T04:49:37Z
2024-11-12T20:57:00Z
https://github.com/pywinauto/pywinauto/issues/1103
[]
bernd-wechner
3
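The reporter's first suggestion (report HRESULTs in hex, as Microsoft documents them) is a one-liner: masking the signed 32-bit value gives `0x80004005`, which the linked COM error-code page lists as E_FAIL, "Unspecified error", matching the message in the issue.

```python
def hresult_hex(code: int) -> str:
    """Render a signed 32-bit HRESULT the way Microsoft documents it."""
    return f"0x{code & 0xFFFFFFFF:08X}"


# -2147467259 is the decimal form of E_FAIL ("Unspecified error")
print(hresult_hex(-2147467259))  # 0x80004005
```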
wemake-services/wemake-django-template
pytest
2,275
Gitlab-CI pipeline configuration link is dead
this one: https://gitlab.com/sobolevn/wemake-django-template/-/pipelines ![Capture d’écran 2023-07-13 à 13 40 19](https://github.com/wemake-services/wemake-django-template/assets/439279/e6611209-e57f-472a-b1e0-4dfe01e85518)
closed
2023-07-13T11:40:35Z
2024-07-13T11:46:30Z
https://github.com/wemake-services/wemake-django-template/issues/2275
[]
deronnax
0
flasgger/flasgger
flask
409
Extract summary from docstring without specification
I'd like to request a feature. Currently, when the specification is missing from the docstring, the endpoint is shown without a summary or description. I would love to see a summary extracted from the first line of the docstring, even when there is no OpenAPI specification below it.
open
2020-06-23T17:37:55Z
2020-06-23T17:37:55Z
https://github.com/flasgger/flasgger/issues/409
[]
m-aciek
0
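The requested behavior, taking the summary from the first line of a view's docstring, can be sketched in a few lines; this is an illustration, not flasgger's actual implementation:

```python
import inspect


def summary_from_docstring(func) -> str:
    """Return the first non-empty line of a function's docstring, or ''."""
    doc = inspect.getdoc(func)  # dedents and strips the raw docstring
    if not doc:
        return ""
    return doc.splitlines()[0].strip()


def list_pets():
    """List all registered pets.

    A longer description that would not become the summary.
    """


print(summary_from_docstring(list_pets))  # List all registered pets.
```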
fastapi/sqlmodel
fastapi
535
[M2M] Query dependent incl. `link_model` fields
### First Check

- [X] I added a very descriptive title to this issue.
- [X] I used the GitHub search to find a similar issue and didn't find it.
- [X] I searched the SQLModel documentation, with the integrated search.
- [X] I already searched in Google "How to X in SQLModel" and didn't find any information.
- [X] I already read and followed all the tutorial in the docs and didn't find an answer.
- [X] I already checked if it is not related to SQLModel but to [Pydantic](https://github.com/samuelcolvin/pydantic).
- [X] I already checked if it is not related to SQLModel but to [SQLAlchemy](https://github.com/sqlalchemy/sqlalchemy).

### Commit to Help

- [X] I commit to help with one of those options 👆

### Example Code

```python
from typing import List, Optional

from sqlalchemy.orm import joinedload
from sqlmodel import Field, Relationship, Session, SQLModel, create_engine, select


class Membership(SQLModel, table=True):
    team_id: Optional[int] = Field(
        default=None, foreign_key="team.id", primary_key=True
    )
    hero_id: Optional[int] = Field(
        default=None, foreign_key="hero.id", primary_key=True
    )
    salary: int
    is_disabled: bool = False


class TeamBase(SQLModel):
    id: Optional[int]
    name: str
    headquarters: str


class Team(TeamBase, table=True):
    id: Optional[int] = Field(default=None, primary_key=True)
    heroes: List["Hero"] = Relationship(back_populates="teams", link_model=Membership)


class HeroBase(SQLModel):
    id: Optional[int]
    name: str
    secret_name: str
    age: Optional[int] = None


class Hero(HeroBase, table=True):
    id: Optional[int] = Field(default=None, primary_key=True)
    teams: List[Team] = Relationship(back_populates="heroes", link_model=Membership)


class HeroMembership(HeroBase):
    salary: int
    is_disabled: bool


class TeamDetail(TeamBase):
    heroes: List[HeroMembership] = []


sqlite_file_name = "database.db"
sqlite_url = f"sqlite:///{sqlite_file_name}"

engine = create_engine(sqlite_url, echo=True)


def create_db_and_tables():
    SQLModel.metadata.create_all(engine)


def fetch_team(session, id: int = 1) -> TeamDetail:
    with Session(engine) as session:
        query = (
            select(Team)
            .join(Team.heroes)
            .where(Team.id == id)
            .options(joinedload(Team.heroes))
        )
        """
        NOTE: the SQL query generated is below:

        SELECT team.*, hero.*
        FROM team
        JOIN membership AS membership_1 ON team.id = membership_1.team_id
        JOIN hero ON hero.id = membership_1.hero_id
        LEFT OUTER JOIN (membership AS membership_2
            JOIN hero AS user_1 ON user_1.id = membership_2.hero_id)
            ON team.id = membership_2.team_id
        WHERE team.id = :team_id

        TODO: how to fetch additional fields from the link table since it is
        clearly accessed anyways?
        """
        team_details = session.exec(query).first()
        Team.update_forward_refs()
        return team_details


def create_heroes():
    with Session(engine) as session:
        team = fetch_team(engine)
        print(team)


def main():
    create_db_and_tables()
    create_heroes()


if __name__ == "__main__":
    main()
```

### Description

- Create Hero model
- Create Team model
- Create `link_model`, `Membership`, with some additional fields
- Try to fetch a particular `team` with a list of `heroes`, incl. additional membership field(s) per hero

### Operating System

macOS

### Operating System Details

_No response_

### SQLModel Version

0.0.8

### Python Version

Python 3.10.9

### Additional Context

Here's the SQL query generated, which fetches the response correctly EXCEPT for the additional membership field(s) per hero:

```sql
SELECT team.*, hero.*
FROM team
JOIN membership AS membership_1 ON team.id = membership_1.team_id
JOIN hero ON hero.id = membership_1.hero_id
LEFT OUTER JOIN (membership AS membership_2
    JOIN hero AS user_1 ON user_1.id = membership_2.hero_id)
    ON team.id = membership_2.team_id
WHERE team.id = :team_id
```
open
2023-01-23T14:41:34Z
2024-04-02T20:51:44Z
https://github.com/fastapi/sqlmodel/issues/535
[ "question" ]
Pk13055
3
JoeanAmier/XHS-Downloader
api
172
How do I set the download directory?
I didn't see anywhere in the documentation how to set the download directory. I tried it myself, but every attempt resulted in a download error.
open
2024-09-07T13:18:35Z
2024-09-07T13:44:36Z
https://github.com/JoeanAmier/XHS-Downloader/issues/172
[]
kouronan
1
opengeos/streamlit-geospatial
streamlit
92
Demo site not working
Please check the link to the demo site that you've provided on your GitHub; it is not working.
closed
2022-11-05T18:49:22Z
2022-11-05T18:54:55Z
https://github.com/opengeos/streamlit-geospatial/issues/92
[]
shadmanshaikh
0
ploomber/ploomber
jupyter
907
Injecting context into a task
Engineers often build abstractions to provide data scientists with ready-to-use environments so they can start coding right away. Typical use cases are pre-configured database connectors, experiment tracker connectors, cloud storage configuration, etc. The idea is that a data scientist should not worry about setup and should just call an existing function to do things.

We've already seen companies build these kinds of abstractions on top of Ploomber (for function-based pipelines); however, the solutions aren't elegant, since Ploomber expects functions to have a pre-defined signature (product, upstream, and other parameters). So they end up creating decorators to circumvent this issue.

We should support a way of passing *pre-configured* objects to tasks. For example:

```python
def my_task(upstream, product, parameter, mlflow_client):
    pass
```

In the example above, upstream and product are the typical arguments. `parameter` is passed via the `params` key in `pipeline.yaml`, but `mlflow_client` is part of the pre-configured context. In `pipeline.yaml`, this could look like this (however, this should also be supported in the Python API, so we'll need to modify the DAG implementation):

```yaml
context:
  mlflow_client: clients.initialize_mlflow
...
```

Potentially, we should also allow hook functions to request context variables. It should also be possible to request file and database clients, since in some cases users might want to interact with them via Python, and it makes sense to make them accessible.

We got this from a user on Slack:

> is it possible to pass the db client to a py file? I can import it manually but was wondering if ploomber could help with that
open
2022-07-08T20:15:02Z
2023-03-20T20:40:32Z
https://github.com/ploomber/ploomber/issues/907
[]
edublancas
0
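The injection described in the ploomber issue above, passing a task only the context entries its signature declares, can be sketched with `inspect.signature`; the helper and context names here are illustrative, not Ploomber's API:

```python
import inspect


def call_with_context(task, context, **kwargs):
    """Pass through only the context entries the task's signature asks for."""
    params = inspect.signature(task).parameters
    wanted = {k: v for k, v in context.items() if k in params}
    return task(**kwargs, **wanted)


# Hypothetical pre-configured context, as an engineer might register it
context = {"mlflow_client": "fake-mlflow-client", "db_client": "fake-db-client"}


def my_task(product, mlflow_client):
    # only mlflow_client is injected; db_client is silently ignored
    return product, mlflow_client


print(call_with_context(my_task, context, product="out.csv"))
```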
CorentinJ/Real-Time-Voice-Cloning
deep-learning
761
How can I train a model for the Turkish language with the Mozilla Common Voice Dataset?
First of all, hello! I want to train synthesizer, vocoder, and encoder models with the Mozilla Common Voice dataset for the Turkish language, but I am quite confused about how to proceed. I would love it if you could help me with this. Thank you very much in advance.
closed
2021-05-21T20:54:59Z
2023-02-03T13:28:33Z
https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/761
[]
AyseEe501
9
pyg-team/pytorch_geometric
deep-learning
8,963
Torch Compilation does not work with torch scatter and torch cluster
### 🐛 Describe the bug

`torch_geometric.nn.radius_graph` does not support torch.compile.

### Versions

```
[pip3] numpy==1.26.4
[pip3] pytorch-lightning==2.1.3
[pip3] torch==2.1.0
[pip3] torch-cluster==1.6.3+pt21cu121
[pip3] torch-ema==0.3
[pip3] torch_geometric==2.5.0
[pip3] torch-runstats==0.2.0
[pip3] torch-scatter==2.1.2+pt21cu121
[pip3] torch-sparse==0.6.18+pt21cu121
[pip3] torch-spline-conv==1.2.2+pt21cu121
[pip3] torchaudio==2.1.0
[pip3] torchmetrics==1.2.1
[pip3] torchvision==0.16.0
[pip3] triton==2.1.0
[conda] blas 2.116 mkl conda-forge
[conda] blas-devel 3.9.0 16_linux64_mkl conda-forge
[conda] libblas 3.9.0 16_linux64_mkl conda-forge
[conda] libcblas 3.9.0 16_linux64_mkl conda-forge
[conda] liblapack 3.9.0 16_linux64_mkl conda-forge
[conda] liblapacke 3.9.0 16_linux64_mkl conda-forge
[conda] libopenvino-pytorch-frontend 2023.3.0 h59595ed_0 conda-forge
[conda] mkl 2022.1.0 h84fe81f_915 conda-forge
[conda] mkl-devel 2022.1.0 ha770c72_916 conda-forge
[conda] mkl-include 2022.1.0 h84fe81f_915 conda-forge
[conda] numpy 1.26.4 py311h64a7726_0 conda-forge
[conda] pytorch 2.1.0 py3.11_cuda12.1_cudnn8.9.2_0 pytorch
[conda] pytorch-cuda 12.1 ha16c6d3_5 pytorch
[conda] pytorch-lightning 2.1.3 pyhd8ed1ab_0 conda-forge
[conda] pytorch-mutex 1.0 cuda pytorch
[conda] torch-cluster 1.6.3+pt21cu121 pypi_0 pypi
[conda] torch-ema 0.3 pypi_0 pypi
[conda] torch-geometric 2.5.0 pypi_0 pypi
[conda] torch-runstats 0.2.0 pypi_0 pypi
[conda] torch-scatter 2.1.2+pt21cu121 pypi_0 pypi
[conda] torch-sparse 0.6.18+pt21cu121 pypi_0 pypi
[conda] torch-spline-conv 1.2.2+pt21cu121 pypi_0 pypi
[conda] torchaudio 2.1.0 py311_cu121 pytorch
[conda] torchmetrics 1.2.1 pyhd8ed1ab_0 conda-forge
[conda] torchtriton 2.1.0 py311 pytorch
[conda] torchvision 0.16.0 py311_cu121 pytorch
```
closed
2024-02-24T23:48:50Z
2024-02-28T10:41:24Z
https://github.com/pyg-team/pytorch_geometric/issues/8963
[ "bug", "compile" ]
liyy2
1
yzhao062/pyod
data-science
366
What preprocessing is required for a mixed-type dataset (continuous and categorical)?
Hello PyOD developers,

I have a question regarding outlier detection. If my dataset is a combination of continuous (numerical) and categorical values, what are the steps to feed this data to PyOD models? Is there any preprocessing required for the continuous (numerical) and categorical values, such as encoding (categorical columns) or normalization (continuous)?
closed
2022-01-18T19:19:02Z
2022-03-05T14:38:05Z
https://github.com/yzhao062/pyod/issues/366
[]
Abhinav43
5
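A common recipe for the question above is to one-hot encode the categorical columns and scale the numeric ones before fitting a detector. In practice scikit-learn's `OneHotEncoder` and `MinMaxScaler` are the usual tools; the helpers below are a dependency-free illustration of what those two steps compute:

```python
def min_max_scale(values):
    """Scale a numeric column into [0, 1]."""
    lo, hi = min(values), max(values)
    span = (hi - lo) or 1.0  # avoid division by zero on constant columns
    return [(v - lo) / span for v in values]


def one_hot(values):
    """One-hot encode a categorical column; returns (categories, encoded rows)."""
    cats = sorted(set(values))
    return cats, [[1 if v == c else 0 for c in cats] for v in values]


ages = [20, 30, 40]
scaled = min_max_scale(ages)
cats, encoded = one_hot(["red", "blue", "red"])
print(scaled)   # [0.0, 0.5, 1.0]
print(cats)     # ['blue', 'red']
print(encoded)  # [[0, 1], [1, 0], [0, 1]]
```

The scaled numeric columns and the one-hot columns are then concatenated row-wise into the feature matrix passed to the detector's `fit`.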
gradio-app/gradio
data-visualization
10,343
ChatInterface/ChatBot not clearing when examples are present
### Describe the bug

When trying to use the ChatInterface or ChatBot components and populating them with examples, I'm unable to programmatically clear the component. The included minimal example (below) demonstrates the problem:

1. Click one of the examples
2. Click the clear button
3. Click on another one of the examples
4. Notice that the original history is still in place

Note: clicking the garbage can icon in the top right of the component works as expected

### Have you searched existing issues? 🔎

- [X] I have searched and found no existing issues

### Reproduction

```python
import gradio as gr

with gr.Blocks(fill_height=True) as demo:
    chat_interface = gr.ChatInterface(
        fn=lambda m, h: "Hello World",
        examples=[
            ["Foo"],
            ["Bar"],
            ["Baz"],
        ],
        type="messages",
    )
    gr.ClearButton(components=chat_interface.chatbot)

demo.launch()
```

### Screenshot

_No response_

### Logs

_No response_

### System Info

Note: this seems to be system agnostic, as I also get the same problem in the Gradio Playground: [here](https://www.gradio.app/playground?demo=Blank&code=aW1wb3J0IGdyYWRpbyBhcyBncgoKd2l0aCBnci5CbG9ja3MoZmlsbF9oZWlnaHQ9VHJ1ZSkgYXMgZGVtbzoKICAgIGNoYXRfaW50ZXJmYWNlID0gZ3IuQ2hhdEludGVyZmFjZSgKICAgICAgICBmbj1sYW1iZGEgbSwgaDogIkhlbGxvIFdvcmxkIiwKICAgICAgICBleGFtcGxlcz1bCiAgICAgICAgICAgIFsiRm9vIl0sCiAgICAgICAgICAgIFsiQmFyIl0sCiAgICAgICAgICAgIFsiQmF6Il0sCiAgICAgICAgXSwKICAgICAgICB0eXBlPSJtZXNzYWdlcyIsCiAgICApCiAgICBnci5DbGVhckJ1dHRvbihjb21wb25lbnRzPWNoYXRfaW50ZXJmYWNlLmNoYXRib3QpCgpkZW1vLmxhdW5jaCgpCg%3D%3D&reqs=)

```shell
Gradio Environment Information:
------------------------------
Operating System: Linux
gradio version: 5.9.1
gradio_client version: 1.5.2

------------------------------------------------
gradio dependencies in your environment:

aiofiles: 23.2.1
anyio: 4.8.0
audioop-lts is not installed.
fastapi: 0.115.6
ffmpy: 0.5.0
gradio-client==1.5.2 is not installed.
httpx: 0.28.1
huggingface-hub: 0.27.1
jinja2: 3.1.5
markupsafe: 2.1.5
numpy: 2.2.1
orjson: 3.10.13
packaging: 24.2
pandas: 2.2.3
pillow: 11.1.0
pydantic: 2.10.4
pydub: 0.25.1
python-multipart: 0.0.20
pyyaml: 6.0.2
ruff: 0.8.6
safehttpx: 0.1.6
semantic-version: 2.10.0
starlette: 0.41.3
tomlkit: 0.13.2
typer: 0.15.1
typing-extensions: 4.12.2
urllib3: 2.3.0
uvicorn: 0.34.0
authlib; extra == 'oauth' is not installed.
itsdangerous; extra == 'oauth' is not installed.

gradio_client dependencies in your environment:

fsspec: 2024.12.0
httpx: 0.28.1
huggingface-hub: 0.27.1
packaging: 24.2
typing-extensions: 4.12.2
websockets: 14.1
```

### Severity

Blocking usage of gradio
closed
2025-01-13T11:53:31Z
2025-01-15T15:40:13Z
https://github.com/gradio-app/gradio/issues/10343
[ "enhancement", "needs designing" ]
grahamwhiteuk
8
encode/databases
asyncio
570
`IndexError` on force_rollback of Transaction in tests.
MRE:

```python
import databases
import pytest


@pytest.fixture(scope="session")
def db():
    return databases.Database("postgres://...")


@pytest.fixture()
async def transaction(db):
    await db.connect()
    async with db.transaction(force_rollback=True):
        yield db


async def test_example(transaction):
    await transaction.execute("select 1")
```

```
___________________________________ ERROR at teardown of test_example ___________________________________

db = <databases.core.Database object at 0x7f121c90ac80>

    @pytest.fixture()
    async def transaction(db):
        await db.connect()
>       async with db.transaction(force_rollback=True):

tests/query/test_core.py:306:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
.venv/lib/python3.10/site-packages/databases/core.py:435: in __aexit__
    await self.rollback()
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self = <databases.core.Transaction object at 0x7f121ca7d210>

    async def rollback(self) -> None:
        print("rollback called")
        async with self._connection._transaction_lock:
>           assert self._connection._transaction_stack[-1] is self
E           IndexError: list index out of range

.venv/lib/python3.10/site-packages/databases/core.py:482: IndexError
```

I have included prints in the `__aenter__` and `__aexit__` of the Transaction class:

```
__aenter__
Connection: <databases.core.Connection object at 0x7f121ca7d0c0>
TransactionStack: [<databases.core.Transaction object at 0x7f121ca7d210>]
--------------------------------------- Captured stdout teardown ----------------------------------------
__aexit__
Connection: <databases.core.Connection object at 0x7f121c4496c0>
TransactionStack: []
```

Somehow the transaction connection on entry != the connection on exit. Can someone point out what I am doing wrong, or how this could be possible?
open
2023-09-08T11:55:54Z
2024-12-16T07:32:26Z
https://github.com/encode/databases/issues/570
[ "bug" ]
EdgyNortal
16
graphdeco-inria/gaussian-splatting
computer-vision
853
Any plans to make Gaussian splatting open source?
open
2024-06-17T05:27:36Z
2024-06-18T00:17:47Z
https://github.com/graphdeco-inria/gaussian-splatting/issues/853
[]
rkakash59
1
vi3k6i5/flashtext
nlp
36
Include links to other projects?
`flashtext` is great, thank you for building, documenting, and writing a post about it! I've incorporated it into our [pipeline](https://github.com/NIHOPA/NLPre) and saw a 60x speedup with respect to the regex matching we were doing. If you'd like to link back to our project (as proof yours is being used in the wild), feel free, or I can submit a PR for that. If not, thanks again for the quick drop-in library!
closed
2017-12-08T20:50:22Z
2018-01-26T15:25:21Z
https://github.com/vi3k6i5/flashtext/issues/36
[]
thoppe
4
dynaconf/dynaconf
django
596
Bug/expected? Issue with django override_settings
Hi, I'm using dynaconf with Django and hit an issue when using the `override_settings` decorator in tests.

```python
# all correct imports defined; here is just a piece of the test itself
@override_settings(ANY_CONFIG_PARAM='SOME_VAL')
def test_method_that_use_settings(self):
    # this can actually be any code that calls a method where settings.ANY_OTHER_VALUE is used;
    # in my case it was the django api test client
    print(settings.ANY_OTHER_VALUE)
    self.assertTrue(True)  # just to have any assert
```

And I got `AttributeError`. Last part of the stacktrace:

```
# the trace before this depends on where settings.ANY_OTHER_VALUE is used; the lines below are the same for all cases
File "/home/igerasin/projects/project-alpha/venv/lib/python3.8/site-packages/dynaconf/base.py", line 158, in __getattr__
    self._wrapped._fresh
File "/home/igerasin/projects/project-alpha/venv/lib/python3.8/site-packages/django/conf/__init__.py", line 242, in __getattr__
    raise AttributeError
AttributeError
```

Is this expected behavior? How can I correctly use `override_settings` with `dynaconf`? Thanks!
closed
2021-06-05T15:59:52Z
2021-08-20T14:24:17Z
https://github.com/dynaconf/dynaconf/issues/596
[ "bug", "HIGH", "django", "backport3.1.5" ]
ivan-gerasin
2
RobertCraigie/prisma-client-py
asyncio
249
Mapped columns cannot be used with type safe raw queries
<!-- Thanks for helping us improve Prisma Client Python! 🙏 Please follow the sections in the template and provide as much information as possible about your problem, e.g. by enabling additional logging output. See https://prisma-client-py.readthedocs.io/en/stable/reference/logging/ for how to enable additional logging output. -->

## Bug description

<!-- A clear and concise description of what the bug is. -->

If a column is mapped to a different name at the database level, then pydantic will raise a ValidationError when attempting to construct the model object:

```prisma
model User {
  id      String   @id @default(cuid())
  name    String   @map("username")
  email   String?  @unique
  posts   Post[]
  profile Profile?
}
```

```py
query = '''
SELECT *
FROM User
WHERE User.id = ?
'''
found = await client.query_first(query, user.id, model=User)
```

```
/private/tmp/tox/prisma-client-py/py39/lib/python3.9/site-packages/prisma/client.py:308: in query_first
    results = await self.query_raw(query, *args, model=model)
/private/tmp/tox/prisma-client-py/py39/lib/python3.9/site-packages/prisma/client.py:338: in query_raw
    return [model.parse_obj(r) for r in result]
/private/tmp/tox/prisma-client-py/py39/lib/python3.9/site-packages/prisma/client.py:338: in <listcomp>
    return [model.parse_obj(r) for r in result]
pydantic/main.py:511: in pydantic.main.BaseModel.parse_obj
    ???
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
>   ???
E   pydantic.error_wrappers.ValidationError: 1 validation error for User
E   name
E     field required (type=value_error.missing)
```

## How to reproduce

<!-- Steps to reproduce the behavior: 1. Go to '...' 2. Change '....' 3. Run '....' 4. See error -->

- Add the `@map` declaration to the test user model
- Migrate the database (`prisma db push`)
- Run `tox -e py39 -- -x tests/test_raw_queries.py::test_query_first_model`

## Expected behavior

<!-- A clear and concise description of what you expected to happen. -->

No errors should be raised; the column name should be correctly transformed.

## Environment & setup

<!-- In which environment does the problem occur -->

- OS: Mac OS
- Database: SQLite
- Python version: 3.9.9
- Prisma version: 3.8.1

## Additional context

This is not currently solvable by us, as Prisma does not give us the field name at the database level. If they did, we could simply create an alias when defining the `BaseModel`:

```py
class User(BaseModel):
    ...
    name: str = Field(alias='username')
    ...
```
open
2022-01-27T06:02:46Z
2024-12-08T18:11:33Z
https://github.com/RobertCraigie/prisma-client-py/issues/249
[ "bug/2-confirmed", "kind/bug", "level/advanced", "priority/medium" ]
RobertCraigie
2
dmlc/gluon-cv
computer-vision
1,374
Is the `self.mode == 'linear'` check in lr_scheduler.py a mistake?
In the script `utils/lr_scheduler.py`, line 129 reads `self.mode == 'linear'`, but in the argument parser, the lr mode is only ever one of 'step', 'poly', or 'cosine'. Is there something wrong in line 129? Thanks!
closed
2020-07-17T12:25:39Z
2020-07-17T14:54:24Z
https://github.com/dmlc/gluon-cv/issues/1374
[]
CVAPPS24
1
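For context on the gluon-cv question above: a 'linear' mode, if it were wired up, would interpolate the learning rate down with training progress. The sketch below is just an illustration of that formula, not GluonCV's actual scheduler class:

```python
def linear_lr(base_lr, iteration, total_iters, target_lr=0.0):
    """Linearly interpolate from base_lr at iteration 0 to target_lr at the end."""
    progress = min(iteration / total_iters, 1.0)
    return base_lr + (target_lr - base_lr) * progress


print(linear_lr(0.1, 0, 100))    # 0.1 at the start
print(linear_lr(0.1, 50, 100))   # ~0.05 at the midpoint
print(linear_lr(0.1, 100, 100))  # 0.0 at the end
```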
autogluon/autogluon
computer-vision
4,994
[BUG] AutoGluon does not recognize catboost-dev version
Using AutoGluon's version check:

```python
from autogluon.core.utils import show_versions
show_versions()
```

The reported catboost version is None, but it should be catboost-dev 1.2.7 or similar.

## Steps

1. Uninstall catboost via `pip uninstall catboost`.
2. Install a catboost dev build from https://github.com/catboost/catboost/actions: select a recent commit and you will see build artifacts for your OS (Windows, macOS, or Linux). Example run with all variants: https://github.com/catboost/catboost/actions/runs/13963454030. Download and install via `pip install <file>`.
3. Run the code above to check versions.
open
2025-03-21T19:06:40Z
2025-03-22T10:20:12Z
https://github.com/autogluon/autogluon/issues/4994
[ "bug: unconfirmed", "Needs Triage" ]
celestinoxp
0
arogozhnikov/einops
tensorflow
3
Provide a way to skip cupy testing
CI will not have GPUs, thus we need to test without cupy.
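A minimal sketch of the usual optional-dependency guard for this situation (a generic pattern, not necessarily the exact mechanism einops adopted):

```python
# Skip GPU-backend tests when cupy is not installed.
try:
    import cupy  # noqa: F401
    HAS_CUPY = True
except ImportError:
    HAS_CUPY = False

def backends_to_test():
    """Return backend names, including cupy only when it is importable."""
    backends = ["numpy"]
    if HAS_CUPY:
        backends.append("cupy")
    return backends

print(backends_to_test())
```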
closed
2018-10-30T20:56:44Z
2018-10-31T21:34:10Z
https://github.com/arogozhnikov/einops/issues/3
[]
arogozhnikov
0
MycroftAI/mycroft-core
nlp
2,435
feature: prioritized language preferences
Users might like to specify a language or languages that Mycroft should fall back on _before_ English. Try the skill in Spanish, no dice, try it in Portuguese, still nothing, try Italian, _nope_, okay just use English. That sort of thing.
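The requested fallback order can be sketched in a few lines; `skill_supports` is a hypothetical predicate standing in for a real skill-language check:

```python
# Try the user's preferred languages in order; fall back to English last.
def pick_language(preferences, skill_supports):
    """Return the first preferred language the skill supports, else 'en-us'."""
    for lang in preferences:
        if skill_supports(lang):
            return lang
    return "en-us"

supported = {"it-it"}
prefs = ["es-es", "pt-pt", "it-it"]
print(pick_language(prefs, supported.__contains__))  # it-it
```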
closed
2019-12-21T11:15:32Z
2024-09-08T08:37:24Z
https://github.com/MycroftAI/mycroft-core/issues/2435
[]
ChanceNCounter
4
Asabeneh/30-Days-Of-Python
pandas
231
There is a mistake
Error message:

```
Traceback (most recent call last):
  File "**/Asabeneh_30-Days-Of-Python/04_Day_Strings/day_4.py", line 171, in <module>
    print(challenge.digit())  # True
AttributeError: 'str' object has no attribute 'digit'
```

This should be `isdigit()`.
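A quick check of the corrected method (the value of `challenge` here is illustrative, not the one from the course file):

```python
# str has isdigit(), not digit(); it is True only for all-digit strings.
challenge = '30DaysOfPython'
print(challenge.isdigit())  # False -- it contains letters
print('30'.isdigit())       # True
```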
closed
2022-05-25T08:04:38Z
2023-07-08T22:17:59Z
https://github.com/Asabeneh/30-Days-Of-Python/issues/231
[]
jerry-00
0
sqlalchemy/sqlalchemy
sqlalchemy
11,160
make public version of _sentinel_value_resolver and document, *OR* rework sentinel logic to use pre-bindparam types + result processed
### Describe the bug

Get an alignment error when doing a bulk insert in Postgres, leading to the following error:

```
InvalidRequestError: Can't match sentinel values in result set to parameter sets; key '3e5d7de2-56ab-4256-9871-71e048dbb5bc' was not found. There may be a mismatch between the datatype passed to the DBAPI driver vs. that which it returns in a result row. Ensure the given Python value matches the expected result type *exactly*, taking care to not rely upon implicit conversions which may occur such as when using strings in place of UUID or integer values, etc.
```

This is probably the same thing causing #11063, but I'm able to reproduce the bug; see the code below.

### Optional link from https://docs.sqlalchemy.org which documents the behavior that is expected

_No response_

### SQLAlchemy Version in Use

2.0.28

### DBAPI (i.e. the database driver)

asyncpg

### Database Vendor and Major Version

PostgreSQL 16.2

### Python Version

3.11

### Operating system

OSX

### To Reproduce

```python
from operator import attrgetter
from sqlalchemy.ext.asyncio import create_async_engine, async_sessionmaker, AsyncSession
from sqlalchemy.orm import DeclarativeBase, Mapped, mapped_column
from sqlalchemy import Column, DateTime, String, func
from sqlalchemy.dialects.mssql import UNIQUEIDENTIFIER
from sqlalchemy.dialects.postgresql import UUID
from sqlalchemy.types import TypeDecorator, CHAR
import asyncio
import uuid

engine = create_async_engine("postgresql+asyncpg://postgres:postgres@localhost:5432/test_db", echo=True)
AsyncSessionLocal = async_sessionmaker(engine, class_=AsyncSession, expire_on_commit=False)


class GUID(TypeDecorator):
    """Platform-independent GUID type.

    Uses PostgreSQL's UUID type or MSSQL's UNIQUEIDENTIFIER,
    otherwise uses CHAR(32), storing as stringified hex values.
    """

    impl = CHAR
    cache_ok = True
    _default_type = CHAR(32)
    _uuid_as_str = attrgetter("hex")

    def load_dialect_impl(self, dialect):
        if dialect.name == "postgresql":
            return dialect.type_descriptor(UUID())
        elif dialect.name == "mssql":
            return dialect.type_descriptor(UNIQUEIDENTIFIER())
        else:
            return dialect.type_descriptor(self._default_type)

    def process_bind_param(self, value, dialect):
        if value is None:
            return value
        elif dialect.name in ("postgresql", "mssql"):
            return str(value)
        else:
            if not isinstance(value, uuid.UUID):
                value = uuid.UUID(value)
            return self._uuid_as_str(value)

    def process_result_value(self, value, dialect):
        if value is None:
            return value
        else:
            if not isinstance(value, uuid.UUID):
                value = uuid.UUID(value)
            return value


class Base(DeclarativeBase):
    type_annotation_map = {
        uuid.UUID: GUID,
    }


class Item(Base):
    __tablename__ = 'items'

    name = Column(String)
    item_id: Mapped[uuid.UUID] = mapped_column(primary_key=True, default=uuid.uuid4)
    created_at = Column(DateTime(timezone=True), nullable=False, server_default=func.now())
    updated_at = Column(DateTime(timezone=True), nullable=False, server_default=func.now(), onupdate=func.now())
    collection_id = Column(String, default="default")


async def create_tables():
    async with engine.begin() as conn:
        await conn.run_sync(Base.metadata.create_all)


async def drop_tables():
    async with engine.begin() as conn:
        await conn.run_sync(Base.metadata.drop_all)


async def insert_item():
    async with AsyncSessionLocal() as session:
        async with session.begin():
            item = Item(name='This is an item')
            session.add(item)
        await session.commit()


async def insert_two_items():
    async with AsyncSessionLocal() as session:
        async with session.begin():
            item1 = Item(name='This is item 1')
            item2 = Item(name='This is item 2')
            session.add(item1)
            session.add(item2)
        await session.commit()


async def async_main():
    await drop_tables()
    await create_tables()
    await insert_item()       # this works
    await insert_two_items()  # this fails
    await drop_tables()


if __name__ ==
"__main__": asyncio.run(async_main()) ``` ### Error ``` # Copy the complete stack trace and error message here, including SQL log output if applicable. --------------------------------------------------------------------------- KeyError Traceback (most recent call last) File ~/venv/lib/python3.11/site-packages/sqlalchemy/engine/default.py:892, in DefaultDialect._deliver_insertmanyvalues_batches(self, cursor, statement, parameters, generic_setinputsizes, context) 890 else: 891 # single-column sentinel with no value resolver --> 892 ordered_rows = [ 893 rows_by_sentinel[ 894 parameters[_sentinel_param_key] # type: ignore # noqa: E501 895 ] 896 for parameters in imv_batch.batch 897 ] 898 except KeyError as ke: 899 # see test_insert_exec.py:: 900 # IMVSentinelTest::test_sentinel_cant_match_keys 901 # for coverage / demonstration File ~/venv/lib/python3.11/site-packages/sqlalchemy/engine/default.py:893, in <listcomp>(.0) 890 else: 891 # single-column sentinel with no value resolver 892 ordered_rows = [ --> 893 rows_by_sentinel[ 894 parameters[_sentinel_param_key] # type: ignore # noqa: E501 895 ] 896 for parameters in imv_batch.batch 897 ] 898 except KeyError as ke: 899 # see test_insert_exec.py:: 900 # IMVSentinelTest::test_sentinel_cant_match_keys 901 # for coverage / demonstration KeyError: '3e5d7de2-56ab-4256-9871-71e048dbb5bc' The above exception was the direct cause of the following exception: InvalidRequestError Traceback (most recent call last) Cell In[3], line 1 ----> 1 await async_main() Cell In[1], line 100, in async_main() 98 await create_tables() 99 await insert_item() --> 100 await insert_two_items() 101 await drop_tables() Cell In[1], line 94, in insert_two_items() 92 session.add(item1) 93 session.add(item2) ---> 94 await session.commit() File ~/venv/lib/python3.11/site-packages/sqlalchemy/ext/asyncio/session.py:1000, in AsyncSession.commit(self) 992 async def commit(self) -> None: 993 """Commit the current transaction in progress. 994 995 .. 
seealso:: (...) 998 "commit" 999 """ -> 1000 await greenlet_spawn(self.sync_session.commit) File ~/venv/lib/python3.11/site-packages/sqlalchemy/util/_concurrency_py3k.py:199, in greenlet_spawn(fn, _require_await, *args, **kwargs) 197 result = context.throw(*sys.exc_info()) 198 else: --> 199 result = context.switch(value) 200 finally: 201 # clean up to avoid cycle resolution by gc 202 del context.driver File ~/venv/lib/python3.11/site-packages/sqlalchemy/orm/session.py:1972, in Session.commit(self) 1969 if trans is None: 1970 trans = self._autobegin_t() -> 1972 trans.commit(_to_root=True) File <string>:2, in commit(self, _to_root) File ~/venv/lib/python3.11/site-packages/sqlalchemy/orm/state_changes.py:139, in _StateChange.declare_states.<locals>._go(fn, self, *arg, **kw) 137 self._next_state = _StateChangeStates.CHANGE_IN_PROGRESS 138 try: --> 139 ret_value = fn(self, *arg, **kw) 140 except: 141 raise File ~/venv/lib/python3.11/site-packages/sqlalchemy/orm/session.py:1257, in SessionTransaction.commit(self, _to_root) 1255 if self._state is not SessionTransactionState.PREPARED: 1256 with self._expect_state(SessionTransactionState.PREPARED): -> 1257 self._prepare_impl() 1259 if self._parent is None or self.nested: 1260 for conn, trans, should_commit, autoclose in set( 1261 self._connections.values() 1262 ): File <string>:2, in _prepare_impl(self) File ~/venv/lib/python3.11/site-packages/sqlalchemy/orm/state_changes.py:139, in _StateChange.declare_states.<locals>._go(fn, self, *arg, **kw) 137 self._next_state = _StateChangeStates.CHANGE_IN_PROGRESS 138 try: --> 139 ret_value = fn(self, *arg, **kw) 140 except: 141 raise File ~/venv/lib/python3.11/site-packages/sqlalchemy/orm/session.py:1232, in SessionTransaction._prepare_impl(self) 1230 if self.session._is_clean(): 1231 break -> 1232 self.session.flush() 1233 else: 1234 raise exc.FlushError( 1235 "Over 100 subsequent flushes have occurred within " 1236 "session.commit() - is an after_flush() hook " 1237 "creating new 
objects?" 1238 ) File ~/venv/lib/python3.11/site-packages/sqlalchemy/orm/session.py:4296, in Session.flush(self, objects) 4294 try: 4295 self._flushing = True -> 4296 self._flush(objects) 4297 finally: 4298 self._flushing = False File ~/venv/lib/python3.11/site-packages/sqlalchemy/orm/session.py:4431, in Session._flush(self, objects) 4428 transaction.commit() 4430 except: -> 4431 with util.safe_reraise(): 4432 transaction.rollback(_capture_exception=True) File ~/venv/lib/python3.11/site-packages/sqlalchemy/util/langhelpers.py:146, in safe_reraise.__exit__(self, type_, value, traceback) 144 assert exc_value is not None 145 self._exc_info = None # remove potential circular references --> 146 raise exc_value.with_traceback(exc_tb) 147 else: 148 self._exc_info = None # remove potential circular references File ~/venv/lib/python3.11/site-packages/sqlalchemy/orm/session.py:4392, in Session._flush(self, objects) 4390 self._warn_on_events = True 4391 try: -> 4392 flush_context.execute() 4393 finally: 4394 self._warn_on_events = False File ~/venv/lib/python3.11/site-packages/sqlalchemy/orm/unitofwork.py:466, in UOWTransaction.execute(self) 464 else: 465 for rec in topological.sort(self.dependencies, postsort_actions): --> 466 rec.execute(self) File ~/venv/lib/python3.11/site-packages/sqlalchemy/orm/unitofwork.py:642, in SaveUpdateAll.execute(self, uow) 640 @util.preload_module("sqlalchemy.orm.persistence") 641 def execute(self, uow): --> 642 util.preloaded.orm_persistence.save_obj( 643 self.mapper, 644 uow.states_for_mapper_hierarchy(self.mapper, False, False), 645 uow, 646 ) File ~/venv/lib/python3.11/site-packages/sqlalchemy/orm/persistence.py:93, in save_obj(base_mapper, states, uowtransaction, single) 81 update = _collect_update_commands( 82 uowtransaction, table, states_to_update 83 ) 85 _emit_update_statements( 86 base_mapper, 87 uowtransaction, (...) 
90 update, 91 ) ---> 93 _emit_insert_statements( 94 base_mapper, 95 uowtransaction, 96 mapper, 97 table, 98 insert, 99 ) 101 _finalize_insert_update_commands( 102 base_mapper, 103 uowtransaction, (...) 119 ), 120 ) File ~/venv/lib/python3.11/site-packages/sqlalchemy/orm/persistence.py:1143, in _emit_insert_statements(base_mapper, uowtransaction, mapper, table, insert, bookkeeping, use_orm_insert_stmt, execution_options) 1140 if do_executemany: 1141 multiparams = [rec[2] for rec in records] -> 1143 result = connection.execute( 1144 statement, multiparams, execution_options=execution_options 1145 ) 1147 if use_orm_insert_stmt is not None: 1148 if return_result is None: File ~/venv/lib/python3.11/site-packages/sqlalchemy/engine/base.py:1421, in Connection.execute(self, statement, parameters, execution_options) 1419 raise exc.ObjectNotExecutableError(statement) from err 1420 else: -> 1421 return meth( 1422 self, 1423 distilled_parameters, 1424 execution_options or NO_OPTIONS, 1425 ) File ~/venv/lib/python3.11/site-packages/sqlalchemy/sql/elements.py:514, in ClauseElement._execute_on_connection(self, connection, distilled_params, execution_options) 512 if TYPE_CHECKING: 513 assert isinstance(self, Executable) --> 514 return connection._execute_clauseelement( 515 self, distilled_params, execution_options 516 ) 517 else: 518 raise exc.ObjectNotExecutableError(self) File ~/venv/lib/python3.11/site-packages/sqlalchemy/engine/base.py:1643, in Connection._execute_clauseelement(self, elem, distilled_parameters, execution_options) 1631 compiled_cache: Optional[CompiledCacheType] = execution_options.get( 1632 "compiled_cache", self.engine._compiled_cache 1633 ) 1635 compiled_sql, extracted_params, cache_hit = elem._compile_w_cache( 1636 dialect=dialect, 1637 compiled_cache=compiled_cache, (...) 
1641 linting=self.dialect.compiler_linting | compiler.WARN_LINTING, 1642 ) -> 1643 ret = self._execute_context( 1644 dialect, 1645 dialect.execution_ctx_cls._init_compiled, 1646 compiled_sql, 1647 distilled_parameters, 1648 execution_options, 1649 compiled_sql, 1650 distilled_parameters, 1651 elem, 1652 extracted_params, 1653 cache_hit=cache_hit, 1654 ) 1655 if has_events: 1656 self.dispatch.after_execute( 1657 self, 1658 elem, (...) 1662 ret, 1663 ) File ~/venv/lib/python3.11/site-packages/sqlalchemy/engine/base.py:1847, in Connection._execute_context(self, dialect, constructor, statement, parameters, execution_options, *args, **kw) 1844 context.pre_exec() 1846 if context.execute_style is ExecuteStyle.INSERTMANYVALUES: -> 1847 return self._exec_insertmany_context(dialect, context) 1848 else: 1849 return self._exec_single_context( 1850 dialect, context, statement, parameters 1851 ) File ~/venv/lib/python3.11/site-packages/sqlalchemy/engine/base.py:2036, in Connection._exec_insertmany_context(self, dialect, context) 2031 preserve_rowcount = context.execution_options.get( 2032 "preserve_rowcount", False 2033 ) 2034 rowcount = 0 -> 2036 for imv_batch in dialect._deliver_insertmanyvalues_batches( 2037 cursor, 2038 str_statement, 2039 effective_parameters, 2040 generic_setinputsizes, 2041 context, 2042 ): 2043 if imv_batch.processed_setinputsizes: 2044 try: File ~/venv/lib/python3.11/site-packages/sqlalchemy/engine/default.py:902, in DefaultDialect._deliver_insertmanyvalues_batches(self, cursor, statement, parameters, generic_setinputsizes, context) 892 ordered_rows = [ 893 rows_by_sentinel[ 894 parameters[_sentinel_param_key] # type: ignore # noqa: E501 895 ] 896 for parameters in imv_batch.batch 897 ] 898 except KeyError as ke: 899 # see test_insert_exec.py:: 900 # IMVSentinelTest::test_sentinel_cant_match_keys 901 # for coverage / demonstration --> 902 raise exc.InvalidRequestError( 903 f"Can't match sentinel values in result set to " 904 f"parameter sets; key 
{ke.args[0]!r} was not " 905 "found. " 906 "There may be a mismatch between the datatype " 907 "passed to the DBAPI driver vs. that which it " 908 "returns in a result row. Ensure the given " 909 "Python value matches the expected result type " 910 "*exactly*, taking care to not rely upon implicit " 911 "conversions which may occur such as when using " 912 "strings in place of UUID or integer values, etc. " 913 ) from ke 915 result.extend(ordered_rows) 917 else: InvalidRequestError: Can't match sentinel values in result set to parameter sets; key '3e5d7de2-56ab-4256-9871-71e048dbb5bc' was not found. There may be a mismatch between the datatype passed to the DBAPI driver vs. that which it returns in a result row. Ensure the given Python value matches the expected result type *exactly*, taking care to not rely upon implicit conversions which may occur such as when using strings in place of UUID or integer values, etc. ``` ### Additional context I'm using SQLModel, which uses the [GUID](https://docs.sqlalchemy.org/en/20/core/custom_types.html#backend-agnostic-guid-type) type. I created the example above with SQLAlchemy only. The error seems to be caused by the columns `created_at` and `updated_at`. 
With those columns, the SQL generated is the following:

```
INFO sqlalchemy.engine.Engine BEGIN (implicit)
INFO sqlalchemy.engine.Engine INSERT INTO items (name, item_id, collection_id) VALUES ($1::VARCHAR, $2::UUID, $3::VARCHAR), ($4::VARCHAR, $5::UUID, $6::VARCHAR) RETURNING items.created_at, items.updated_at, items.item_id
INFO sqlalchemy.engine.Engine [generated in 0.00009s (insertmanyvalues) 1/1 (ordered)] ('This is item 1', '3e5d7de2-56ab-4256-9871-71e048dbb5bc', 'default', 'This is item 2', '6d769ab8-88d0-416f-9184-fc3729642c3e', 'default')
INFO sqlalchemy.engine.Engine ROLLBACK
```

If you remove those columns, the SQL generated is the following, and there is no error:

```
INFO sqlalchemy.engine.Engine BEGIN (implicit)
INFO sqlalchemy.engine.Engine INSERT INTO items (name, item_id, collection_id) VALUES ($1::VARCHAR, $2::UUID, $3::VARCHAR), ($4::VARCHAR, $5::UUID, $6::VARCHAR)
INFO sqlalchemy.engine.Engine [generated in 0.00018s (insertmanyvalues) 1/1 (unordered)] ('This is item 1', 'c7f2a76d-aff1-41b9-b621-8a2a0ae62ed9', 'default', 'This is item 2', '38d5444a-7b1f-4486-98ca-1f2bdbb2337f', 'default')
INFO sqlalchemy.engine.Engine COMMIT
```

I'm guessing it's `RETURNING items.created_at, items.updated_at, items.item_id` that is causing the error.
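The failure mode reduces to a plain-Python sketch that matches the error message's own explanation: the sentinel lookup keys rows by whatever type the driver returns (asyncpg hands `RETURNING` values back as `uuid.UUID`), while the bound parameter is the `str` produced by `GUID.process_bind_param`, so the dict lookup misses:

```python
import uuid

# The driver returns the sentinel column as uuid.UUID objects...
sentinel = uuid.UUID("3e5d7de2-56ab-4256-9871-71e048dbb5bc")
rows_by_sentinel = {sentinel: ("row data",)}

# ...but the custom type bound the parameter as a str,
# so matching parameters back to rows raises KeyError.
bound_param = str(sentinel)
print(bound_param in rows_by_sentinel)             # False
print(uuid.UUID(bound_param) in rows_by_sentinel)  # True -- types must match exactly
```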
closed
2024-03-15T04:08:16Z
2024-03-19T07:32:41Z
https://github.com/sqlalchemy/sqlalchemy/issues/11160
[ "bug", "expected behavior", "insertmanyvalues" ]
shabani1
17
miguelgrinberg/flasky
flask
533
How to convert a FlaskForm to JSON
Excuse me, but how can I convert a FlaskForm to JSON? I want to use Vue instead of AngularJS, and it seems Vue needs JSON.
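A WTForms `FlaskForm` exposes its field values as a dict via `form.data`, which can be serialized directly. A sketch, using a plain dict as a stand-in for `form.data` so it runs without Flask installed:

```python
import json

# form.data on a FlaskForm is a plain dict of field values; this stand-in
# dict mimics it (field names here are illustrative).
form_data = {"name": "Leon", "email": "leon@example.com", "csrf_token": "..."}

# Drop the CSRF token before sending the payload to a Vue frontend.
payload = {k: v for k, v in form_data.items() if k != "csrf_token"}
print(json.dumps(payload))
```

Inside a Flask view you would typically return `jsonify(payload)` instead of calling `json.dumps` yourself.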
open
2022-03-23T13:43:56Z
2022-03-23T14:07:53Z
https://github.com/miguelgrinberg/flasky/issues/533
[ "question" ]
LeonWolfe
1
keras-team/autokeras
tensorflow
924
StructuredDataClassifier predict does not work (trained structured-data model cannot be reloaded)
<!--- **If you are reporting a bug:** * Verify that your issue is not being currently addressed by other issues or pull requests. * Please note that Auto-Keras is only compatible with **Python 3.6**. * Tag the issue with the `bug report` tag. --> ### Bug Description /home/zy/anaconda3/envs/tf2/bin/python3.6 /home/zy/PycharmProjects/dl_exchange/auto_ml_kaers.py {'dot': '/usr/bin/dot', 'twopi': '/usr/bin/twopi', 'neato': '/usr/bin/neato', 'circo': '/usr/bin/circo', 'fdp': '/usr/bin/fdp', 'sfdp': '/usr/bin/sfdp'} Using TensorFlow backend. <class 'numpy.ndarray'> <class 'numpy.ndarray'> Traceback (most recent call last): File "/home/zy/PycharmProjects/dl_exchange/auto_ml_kaers.py", line 87, in <module> clf.predict(x=x_test,batch_size=999) File "/home/zy/anaconda3/envs/tf2/lib/python3.6/site-packages/autokeras/task.py", line 542, in predict **kwargs) File "/home/zy/anaconda3/envs/tf2/lib/python3.6/site-packages/autokeras/auto_model.py", line 285, in predict preprocess_graph, model = self.tuner.get_best_model() File "/home/zy/anaconda3/envs/tf2/lib/python3.6/site-packages/autokeras/tuner.py", line 111, in get_best_model preprocess_graph, keras_graph, model = self.get_best_models()[0] File "/home/zy/anaconda3/envs/tf2/lib/python3.6/site-packages/kerastuner/engine/tuner.py", line 231, in get_best_models return super(Tuner, self).get_best_models(num_models) File "/home/zy/anaconda3/envs/tf2/lib/python3.6/site-packages/kerastuner/engine/base_tuner.py", line 238, in get_best_models models = [self.load_model(trial) for trial in best_trials] File "/home/zy/anaconda3/envs/tf2/lib/python3.6/site-packages/kerastuner/engine/base_tuner.py", line 238, in <listcomp> models = [self.load_model(trial) for trial in best_trials] File "/home/zy/anaconda3/envs/tf2/lib/python3.6/site-packages/autokeras/tuner.py", line 93, in load_model preprocess_graph, keras_graph = self.hyper_graph.build_graphs( AttributeError: 'NoneType' object has no attribute 'build_graphs' 
<autokeras.task.StructuredDataClassifier object at 0x7ff818300518>

My code:

```python
clf = ak.StructuredDataClassifier(max_trials=16, name='autokeras', overwrite=False, directory='ak/auto-kears1')
clf.predict(x=x_test, batch_size=999)
```

The model has finished training. When loading it again for prediction, it fails to load. In addition, GPU utilization is inexplicably low, only about 3%.

### Setup Details

- OS type and version:
- Python: 3.6.9
- autokeras: 1.0.0
- scikit-learn:
- numpy: 1.18.1
- keras: 2.3.1
- scipy:
- tensorflow: 2.0.0
- keras-tuner: 1.0.1

### Additional context

```python
model = clf.export_model()
save_model(model, model_save_path)
model = load_model('xxx.h5')
```

does not work either; loading the saved model with Keras also fails. If more information is needed, you can contact me at QQ 402868327.
closed
2020-01-24T12:45:16Z
2020-04-02T08:01:09Z
https://github.com/keras-team/autokeras/issues/924
[ "bug report", "wontfix" ]
thefreeman007
2
slackapi/bolt-python
fastapi
336
Passing External Data Selections Between Select Menus
I have one form for users with 2 blocks that are external select menus. The first block has the user select one option (call this option 1), and the second block depends on option 1 to show a multi-select menu corresponding to option 1. The user then submits the form. My problem is that I can't access the value the user selected in the first block from within the second block's options handler.

Code example:

```python
@slackApp.options("block1")
def show_option1(ack, body):
    # CODE DISPLAYS EXTERNAL OPTIONS HERE
    ack(options=options)

@slackApp.options("block2")
def show_option2(ack, body):
    # CODE NEEDS TO GET SELECTED OPTION FROM BLOCK 1 HERE. HOW?
    ack(options=options)

@slackApp.action({"block_id": "block1", "action_id": "option1"})
def ack_block1(ack, body, client):
    ack()

@slackApp.action({"block_id": "block2", "action_id": "option2"})
def ack_block2(ack, body, client):
    ack()
```

I have tried something like `body["view"]["state"]["values"]` to grab the view's value from block1, but it seems the view's state doesn't persist across blocks.

#### The `slack_bolt` version
slack-sdk==3.5.0
slack-bolt==1.5.0

#### Python runtime version
python==2.7.16

#### OS info
ProductName: Mac OS X
ProductVersion: 10.15.7
BuildVersion: 19H524
Darwin Kernel Version 19.6.0
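For what it's worth, newer `block_suggestion` (options-request) payloads for modals can carry the view's current state, so a defensive walk over `body` is one way to sketch this. The payload shape below is an assumption reconstructed from memory of Slack's payload format and should be verified against a real request:

```python
# Hypothetical options-request payload fragment.
body = {
    "view": {
        "state": {
            "values": {
                "block1": {
                    "option1": {"selected_option": {"value": "fruit"}}
                }
            }
        }
    }
}

def selected_in_block(body, block_id, action_id):
    """Safely walk view.state.values to the selected option value, or None."""
    values = body.get("view", {}).get("state", {}).get("values", {})
    selected = values.get(block_id, {}).get(action_id, {}).get("selected_option")
    return selected["value"] if selected else None

print(selected_in_block(body, "block1", "option1"))  # fruit
```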
closed
2021-05-10T20:10:00Z
2023-02-11T20:52:34Z
https://github.com/slackapi/bolt-python/issues/336
[ "question" ]
mariebarrramsey
7
InstaPy/InstaPy
automation
6,049
Problem running on DigitalOcean
## Expected Behavior

To work.

## Current Behavior

```
InstaPy Version: 0.6.13
Workspace in use: "/root/InstaPy"
INFO [2021-01-24 10:32:07] [] Session started!
INFO [2021-01-24 10:32:07] [] -- Connection Checklist [1/2] (Internet Connection Status)
INFO [2021-01-24 10:32:07] [] - Internet Connection Status: ok
INFO [2021-01-24 10:32:07] [] - Current IP is "...." and it's from "...."
INFO [2021-01-24 10:32:07] [] -- Connection Checklist [2/2] (Hide Selenium Extension)
INFO [2021-01-24 10:32:07] [] - window.navigator.webdriver response: None
INFO [2021-01-24 10:32:07] [] - Hide Selenium Extension: ok
INFO [2021-01-24 10:32:15] [] - Cookie file not found, creating cookie...
INFO [2021-01-24 10:32:54] [] Timed out with failure while explicitly waiting until visibility of element located!
^C
INFO [2021-01-24 10:32:58] [] Sessional Live Report:
	|> No any statistics to show
```

## InstaPy configuration

Running InstaPy on DigitalOcean. When I run quickstart.py on the machine, it gets stuck after:

```
INFO [2021-01-24 10:32:15] [] - Cookie file not found, creating cookie...
```

and then it times out. I have run the same setup on my Ubuntu laptop, and there it works perfectly.
closed
2021-01-24T10:38:28Z
2021-02-23T20:37:48Z
https://github.com/InstaPy/InstaPy/issues/6049
[]
pereverges
3
deepspeedai/DeepSpeed
machine-learning
7,019
[REQUEST] option to shard weights only in each node
**Is your feature request related to a problem? Please describe.**
Multi-node training with ZeRO stage 3 is too slow.

**Describe the solution you'd like**
Avoid gathering weights across nodes; shard weights only across the GPUs within each node.
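For reference, ZeRO++'s hierarchical partitioning (hpZ) targets exactly this: a secondary copy of the weights sharded only within each node, so all-gathers stay intra-node. A config sketch (the key name should be double-checked against the DeepSpeed docs for your version):

```python
# Sketch of a ZeRO++ hpZ config: setting zero_hpz_partition_size to the
# number of GPUs per node keeps the secondary weight shards node-local.
GPUS_PER_NODE = 8

ds_config = {
    "zero_optimization": {
        "stage": 3,
        # secondary partition group size == GPUs per node -> intra-node all-gather
        "zero_hpz_partition_size": GPUS_PER_NODE,
    },
}
print(ds_config["zero_optimization"]["zero_hpz_partition_size"])  # 8
```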
closed
2025-02-08T16:31:09Z
2025-02-23T05:38:51Z
https://github.com/deepspeedai/DeepSpeed/issues/7019
[ "enhancement" ]
cyr0930
4
electricitymaps/electricitymaps-contrib
data-visualization
7,689
Tests failing in master via CI system
![image](https://github.com/user-attachments/assets/9ddb3407-dd89-49dc-9a04-eec1439cd942)

I'm not sure, but I saw this in three separate recent PRs doing different things, so I figure it's unrelated to the PRs and something that showed up recently due to a change in external dependencies or similar.
closed
2025-01-06T00:59:04Z
2025-01-06T15:14:54Z
https://github.com/electricitymaps/electricitymaps-contrib/issues/7689
[]
consideRatio
1
Ehco1996/django-sspanel
django
432
Make users' subscription links and ref addresses resettable
That would be more sensible.
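A resettable subscription/ref link boils down to regenerating a random token and persisting it on the user record; a stdlib sketch (the function name is illustrative, not django-sspanel's API):

```python
import secrets

def reset_token(nbytes: int = 24) -> str:
    """Generate a fresh URL-safe token, invalidating the old link."""
    return secrets.token_urlsafe(nbytes)

old, new = reset_token(), reset_token()
print(old != new)  # True -- each reset yields a new link
```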
closed
2020-11-04T23:41:15Z
2021-09-13T00:09:19Z
https://github.com/Ehco1996/django-sspanel/issues/432
[ "enhancement" ]
Ehco1996
1
pytest-dev/pytest-cov
pytest
259
Tracking: Ensure subprocess code coverage use case is considered in pth deprecation
[bpo-33944](https://bugs.python.org/issue33944) has discussion on deprecation of pth files. Currently pytest-cov uses a pth file to ensure that statements executed in Python subprocesses are included in code coverage calculations. Per the linked issue the pth functionality isn't going to get ripped out of Python without warning, but we need to make sure to provide input so that our use case is considered for future releases.
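For context, the mechanism at stake: any line in a `.pth` file beginning with `import` is executed when the containing directory is processed as a site dir (which happens at interpreter startup for site-packages). That is how pytest-cov bootstraps coverage in subprocesses. A runnable sketch of the mechanism itself, not pytest-cov's actual `.pth` contents:

```python
# Demonstrate that "import ..." lines in a .pth file are executed
# when the directory is processed via site.addsitedir.
import os
import site
import tempfile

d = tempfile.mkdtemp()
with open(os.path.join(d, "demo.pth"), "w") as f:
    f.write("import os; os.environ['PTH_DEMO_RAN'] = '1'\n")

site.addsitedir(d)  # processes demo.pth, executing its import line
print(os.environ.get("PTH_DEMO_RAN"))  # 1
```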
open
2019-01-22T03:47:05Z
2019-03-15T06:47:32Z
https://github.com/pytest-dev/pytest-cov/issues/259
[]
chrahunt
2
marimo-team/marimo
data-visualization
3,888
Custom Anywidget updates in Jupyter but not in Marimo
### Describe the bug

Hi, I have a simple anywidget [here](https://github.com/habemus-papadum/anydiff) that shows the diff of two code snippets using CodeMirror 6. I have a [Jupyter notebook](https://github.com/habemus-papadum/anydiff/blob/main/src/anydiff/demo.ipynb) that shows the widget updating as various traitlets are changed, and the widget behaves as expected.

When I try to use the [widget in marimo](https://github.com/habemus-papadum/anydiff/blob/main/src/anydiff/marimo_demo.py) via the `mo.ui.anywidget` wrapper, the widget renders correctly initially but fails to update if I edit the cell and re-run. So this is not an issue of traitlet syncing not working; rather, replacing the entire widget fails. The JavaScript for the widget is [here](https://github.com/habemus-papadum/anydiff/blob/main/src/anydiff/index.js).

### Reproduction

```
git clone https://github.com/habemus-papadum/anydiff.git
uv sync --frozen
source .venv/bin/activate
marimo edit src/anydiff/marimo_demo.py
```

Update the value of `code1` and re-run the cell; the output does not update.
### Logs ```shell ``` ### System Info ```shell anywidget==0.9.13 graphviz-anywidget==0.5.0 jupyter==1.1.1 jupyter-console==6.6.3 jupyter-events==0.11.0 jupyter-lsp==2.2.5 jupyter_client==8.6.3 jupyter_core==5.7.2 jupyter_server==2.15.0 jupyter_server_terminals==0.5.3 jupyterlab==4.3.4 jupyterlab_pygments==0.3.0 jupyterlab_server==2.27.3 jupyterlab_widgets==3.0.13 notebook==7.3.2 notebook_shim==0.2.4 System: OS: macOS 15.3.1 CPU: (10) arm64 Apple M1 Max Memory: 184.88 MB / 32.00 GB Shell: 5.9 - /bin/zsh Browsers: Chrome: 133.0.6943.127 Safari: 18.3 ``` ### Severity annoyance ### Environment <details> ``` { "marimo": "0.11.5", "OS": "Darwin", "OS Version": "24.3.0", "Processor": "arm", "Python Version": "3.12.7", "Binaries": { "Browser": "133.0.6943.127", "Node": "v23.6.0" }, "Dependencies": { "click": "8.1.8", "docutils": "0.21.2", "itsdangerous": "2.2.0", "jedi": "0.19.2", "markdown": "3.7", "narwhals": "1.25.0", "packaging": "24.2", "psutil": "5.9.8", "pygments": "2.18.0", "pymdown-extensions": "10.14.3", "pyyaml": "6.0.2", "ruff": "0.9.4", "starlette": "0.45.3", "tomlkit": "0.13.2", "typing-extensions": "4.12.2", "uvicorn": "0.34.0", "websockets": "14.2" }, "Optional Dependencies": { "altair": "5.5.0", "anywidget": "0.9.13", "duckdb": "1.1.3", "pandas": "2.2.3", "polars": "1.18.0", "pyarrow": "18.1.0" }, "Experimental Flags": {} } ``` </details> ### Code to reproduce https://github.com/habemus-papadum/anydiff/blob/e68a893c9d695a22a31774d91281f9890bb632c1/src/anydiff/marimo_demo.py#L27
closed
2025-02-23T20:37:13Z
2025-02-25T15:17:42Z
https://github.com/marimo-team/marimo/issues/3888
[ "bug" ]
habemus-papadum
2
KaiyangZhou/deep-person-reid
computer-vision
465
About the validation set
I see each dataset is divided into three subsets: train, query, and gallery. So where is the validation set?
open
2021-10-14T03:06:06Z
2021-10-14T03:06:06Z
https://github.com/KaiyangZhou/deep-person-reid/issues/465
[]
Thangbluee
0
automagica/automagica
automation
80
Error while running `pip install Automagica -U`
```
ERROR: Could not find a version that satisfies the requirement opencv-python==3.4.2.17 (from automagica) (from versions: 3.4.8.29, 4.1.2.30)
ERROR: No matching distribution found for opencv-python==3.4.2.17 (from automagica)
```
closed
2019-12-11T10:12:26Z
2020-01-24T14:12:06Z
https://github.com/automagica/automagica/issues/80
[]
andrewwoo0902
4
nerfstudio-project/nerfstudio
computer-vision
3,426
Unable to install tiny-cuda-nn on Ubuntu 24.04
I am trying to install nerfstudio on Ubuntu 24.04 using CUDA 11.8 and Python 3.8. I always run into issues while trying to install tiny-cuda-nn (see below). I have tried different Python versions along with CUDA 12.6, but I have not managed to resolve the issue. Has anyone else encountered this?

```
(nerfstudio) user@workstation:~$ pip install ninja git+https://github.com/NVlabs/tiny-cuda-nn/#subdirectory=bindings/torch
Collecting git+https://github.com/NVlabs/tiny-cuda-nn/#subdirectory=bindings/torch
  Cloning https://github.com/NVlabs/tiny-cuda-nn/ to /tmp/pip-req-build-d_yk1qvn
  Running command git clone --filter=blob:none --quiet https://github.com/NVlabs/tiny-cuda-nn/ /tmp/pip-req-build-d_yk1qvn
  Resolved https://github.com/NVlabs/tiny-cuda-nn/ to commit c91138bcd4c6877c8d5e60e483c0581aafc70cce
  Running command git submodule update --init --recursive -q
  Preparing metadata (setup.py) ... error
  error: subprocess-exited-with-error

  × python setup.py egg_info did not run successfully.
  │ exit code: 1
  ╰─> [1 lines of output]
      ERROR: Can not execute `setup.py` since setuptools is not available in the build environment.
      [end of output]

  note: This error originates from a subprocess, and is likely not a problem with pip.
error: metadata-generation-failed

× Encountered error while generating package metadata.
╰─> See above for output.

note: This is an issue with the package mentioned above, not pip.
hint: See above for details.
```
open
2024-09-13T12:21:03Z
2025-01-31T16:04:16Z
https://github.com/nerfstudio-project/nerfstudio/issues/3426
[]
vasl12
3
comfyanonymous/ComfyUI
pytorch
7,165
After update, ComfyUI-Manager has disappeared
### Expected Behavior after update (2025-03-10), ComfyUI-Manager button is disappered ### Actual Behavior here is my UI: ![Image](https://github.com/user-attachments/assets/b059d2ee-8df8-413c-aea9-6bb3ef728bea) ### Steps to Reproduce here is my UI: ![Image](https://github.com/user-attachments/assets/b059d2ee-8df8-413c-aea9-6bb3ef728bea) ### Debug Logs ```powershell here is my setup log: (comfyui) D:\projects\ComfyUI>python main.py Adding extra search path checkpoints E:\models\ckpts\Stable-diffusion Adding extra search path clip E:\models\clip Adding extra search path clip_vision D:\projects\ComfyUI\models\clip_vision Adding extra search path controlnet E:\models\controlnets Adding extra search path loras E:\models\loras Adding extra search path vae E:\models\VAEs Adding extra search path diffusers E:\models\ckpts\diffusers_models Adding extra search path diffusers_models E:\models\ckpts\diffusers_models [START] Security scan [DONE] Security scan ## ComfyUI-Manager: installing dependencies done. ** ComfyUI startup time: 2025-03-10 18:15:53.787 ** Platform: Windows ** Python version: 3.11.11 | packaged by Anaconda, Inc. | (main, Dec 11 2024, 16:34:19) [MSC v.1929 64 bit (AMD64)] ** Python executable: C:\\AppData\Local\anaconda3\envs\comfyui\python.exe ** ComfyUI Path: D:\projects\ComfyUI ** ComfyUI Base Folder Path: D:\projects\ComfyUI ** User directory: D:\projects\ComfyUI\user ** ComfyUI-Manager config path: D:\projects\ComfyUI\user\default\ComfyUI-Manager\config.ini ** Log path: D:\projects\ComfyUI\user\comfyui.log Prestartup times for custom nodes: 3.5 seconds: D:\projects\ComfyUI\custom_nodes\ComfyUI-Manager Checkpoint files will always be loaded safely. 
Total VRAM 24564 MB, total RAM 130265 MB pytorch version: 2.5.1+cu124 Set vram state to: NORMAL_VRAM Device: cuda:0 NVIDIA GeForce RTX 4090 : cudaMallocAsync Using pytorch attention ComfyUI version: 0.3.26 ComfyUI frontend version: 1.13.1 [Prompt Server] web root: C:\Users\==\AppData\Local\anaconda3\envs\comfyui\Lib\site-packages\comfyui_frontend_package\static ### Loading: ComfyUI-Manager (V3.30.3) [ComfyUI-Manager] network_mode: public ### ComfyUI Version: v0.3.26-3-g67c7184b | Released on '2025-03-10' [ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/alter-list.json [ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/model-list.json [ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/github-stats.json [ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/extension-node-map.json [ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/custom-node-list.json Import times for custom nodes: 0.0 seconds: D:\projects\ComfyUI\custom_nodes\websocket_image_save.py 0.3 seconds: D:\projects\ComfyUI\custom_nodes\sd-ppp 0.5 seconds: D:\projects\ComfyUI\custom_nodes\ComfyUI-Manager 1.2 seconds: D:\projects\ComfyUI\custom_nodes\000 Starting server To see the GUI go to: http://127.0.0.1:8188 FETCH ComfyRegistry Data: 5/57 FETCH ComfyRegistry Data: 10/57 FETCH ComfyRegistry Data: 15/57 FETCH ComfyRegistry Data: 20/57 FETCH ComfyRegistry Data: 25/57 FETCH ComfyRegistry Data: 30/57 ``` ### Other _No response_
closed
2025-03-10T10:20:57Z
2025-03-10T12:18:21Z
https://github.com/comfyanonymous/ComfyUI/issues/7165
[ "Potential Bug" ]
Reginald-L
6
apache/airflow
machine-learning
47,274
Clearing Task Instances Intermittently Throws HTTP 500 Error
### Apache Airflow version AF3 beta1 ### If "Other Airflow 2 version" selected, which one? _No response_ ### What happened? When we try to clear task instance it's throws Intermittently. **Logs:** ``` NFO: 192.168.207.1:54306 - "POST /public/dags/etl_dag/clearTaskInstances HTTP/1.1" 500 Internal Server Error ERROR: Exception in ASGI application + Exception Group Traceback (most recent call last): | File "/usr/local/lib/python3.9/site-packages/starlette/_utils.py", line 76, in collapse_excgroups | yield | File "/usr/local/lib/python3.9/site-packages/starlette/middleware/base.py", line 178, in __call__ | recv_stream.close() | File "/usr/local/lib/python3.9/site-packages/anyio/_backends/_asyncio.py", line 767, in __aexit__ | raise BaseExceptionGroup( | exceptiongroup.ExceptionGroup: unhandled errors in a TaskGroup (1 sub-exception) +-+---------------- 1 ---------------- | Traceback (most recent call last): | File "/usr/local/lib/python3.9/site-packages/uvicorn/protocols/http/httptools_impl.py", line 409, in run_asgi | result = await app( # type: ignore[func-returns-value] | File "/usr/local/lib/python3.9/site-packages/fastapi/applications.py", line 1054, in __call__ | await super().__call__(scope, receive, send) | File "/usr/local/lib/python3.9/site-packages/starlette/applications.py", line 112, in __call__ | await self.middleware_stack(scope, receive, send) | File "/usr/local/lib/python3.9/site-packages/starlette/middleware/errors.py", line 187, in __call__ | raise exc | File "/usr/local/lib/python3.9/site-packages/starlette/middleware/errors.py", line 165, in __call__ | await self.app(scope, receive, _send) | File "/usr/local/lib/python3.9/site-packages/starlette/middleware/gzip.py", line 29, in __call__ | await responder(scope, receive, send) | File "/usr/local/lib/python3.9/site-packages/starlette/middleware/gzip.py", line 126, in __call__ | await super().__call__(scope, receive, send) | File "/usr/local/lib/python3.9/site-packages/starlette/middleware/gzip.py", 
line 46, in __call__ | await self.app(scope, receive, self.send_with_compression) | File "/usr/local/lib/python3.9/site-packages/starlette/middleware/cors.py", line 93, in __call__ | await self.simple_response(scope, receive, send, request_headers=headers) | File "/usr/local/lib/python3.9/site-packages/starlette/middleware/cors.py", line 144, in simple_response | await self.app(scope, receive, send) | File "/usr/local/lib/python3.9/site-packages/starlette/middleware/base.py", line 178, in __call__ | recv_stream.close() | File "/usr/local/lib/python3.9/contextlib.py", line 137, in __exit__ | self.gen.throw(typ, value, traceback) | File "/usr/local/lib/python3.9/site-packages/starlette/_utils.py", line 82, in collapse_excgroups | raise exc | File "/usr/local/lib/python3.9/site-packages/starlette/middleware/base.py", line 175, in __call__ | response = await self.dispatch_func(request, call_next) | File "/opt/airflow/airflow/api_fastapi/core_api/middleware.py", line 28, in dispatch | response = await call_next(request) | File "/usr/local/lib/python3.9/site-packages/starlette/middleware/base.py", line 153, in call_next | raise app_exc | File "/usr/local/lib/python3.9/site-packages/starlette/middleware/base.py", line 140, in coro | await self.app(scope, receive_or_disconnect, send_no_error) | File "/usr/local/lib/python3.9/site-packages/starlette/middleware/exceptions.py", line 62, in __call__ | await wrap_app_handling_exceptions(self.app, conn)(scope, receive, send) | File "/usr/local/lib/python3.9/site-packages/starlette/_exception_handler.py", line 53, in wrapped_app | raise exc | File "/usr/local/lib/python3.9/site-packages/starlette/_exception_handler.py", line 42, in wrapped_app | await app(scope, receive, sender) | File "/usr/local/lib/python3.9/site-packages/starlette/routing.py", line 714, in __call__ | await self.middleware_stack(scope, receive, send) | File "/usr/local/lib/python3.9/site-packages/starlette/routing.py", line 734, in app | await 
route.handle(scope, receive, send) | File "/usr/local/lib/python3.9/site-packages/starlette/routing.py", line 288, in handle | await self.app(scope, receive, send) | File "/usr/local/lib/python3.9/site-packages/starlette/routing.py", line 76, in app | await wrap_app_handling_exceptions(app, request)(scope, receive, send) | File "/usr/local/lib/python3.9/site-packages/starlette/_exception_handler.py", line 53, in wrapped_app | raise exc | File "/usr/local/lib/python3.9/site-packages/starlette/_exception_handler.py", line 42, in wrapped_app | await app(scope, receive, sender) | File "/usr/local/lib/python3.9/site-packages/starlette/routing.py", line 73, in app | response = await f(request) | File "/usr/local/lib/python3.9/site-packages/fastapi/routing.py", line 301, in app | raw_response = await run_endpoint_function( | File "/usr/local/lib/python3.9/site-packages/fastapi/routing.py", line 214, in run_endpoint_function | return await run_in_threadpool(dependant.call, **values) | File "/usr/local/lib/python3.9/site-packages/starlette/concurrency.py", line 37, in run_in_threadpool | return await anyio.to_thread.run_sync(func) | File "/usr/local/lib/python3.9/site-packages/anyio/to_thread.py", line 56, in run_sync | return await get_async_backend().run_sync_in_worker_thread( | File "/usr/local/lib/python3.9/site-packages/anyio/_backends/_asyncio.py", line 2461, in run_sync_in_worker_thread | return await future | File "/usr/local/lib/python3.9/site-packages/anyio/_backends/_asyncio.py", line 962, in run | result = context.run(func, *args) | File "/opt/airflow/airflow/api_fastapi/core_api/routes/public/task_instances.py", line 651, in post_clear_task_instances | dag = dag.partial_subset( | File "/opt/airflow/task_sdk/src/airflow/sdk/definitions/dag.py", line 811, in partial_subset | dag.task_dict = { | File "/opt/airflow/task_sdk/src/airflow/sdk/definitions/dag.py", line 812, in <dictcomp> | t.task_id: _deepcopy_task(t) | File 
"/opt/airflow/task_sdk/src/airflow/sdk/definitions/dag.py", line 808, in _deepcopy_task | return copy.deepcopy(t, memo) | File "/usr/local/lib/python3.9/copy.py", line 153, in deepcopy | y = copier(memo) | File "/opt/airflow/task_sdk/src/airflow/sdk/definitions/baseoperator.py", line 1188, in __deepcopy__ | object.__setattr__(result, k, v) | AttributeError: can't set attribute +------------------------------------ During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/usr/local/lib/python3.9/site-packages/uvicorn/protocols/http/httptools_impl.py", line 409, in run_asgi result = await app( # type: ignore[func-returns-value] File "/usr/local/lib/python3.9/site-packages/fastapi/applications.py", line 1054, in __call__ await super().__call__(scope, receive, send) File "/usr/local/lib/python3.9/site-packages/starlette/applications.py", line 112, in __call__ await self.middleware_stack(scope, receive, send) File "/usr/local/lib/python3.9/site-packages/starlette/middleware/errors.py", line 187, in __call__ raise exc File "/usr/local/lib/python3.9/site-packages/starlette/middleware/errors.py", line 165, in __call__ await self.app(scope, receive, _send) File "/usr/local/lib/python3.9/site-packages/starlette/middleware/gzip.py", line 29, in __call__ await responder(scope, receive, send) File "/usr/local/lib/python3.9/site-packages/starlette/middleware/gzip.py", line 126, in __call__ await super().__call__(scope, receive, send) File "/usr/local/lib/python3.9/site-packages/starlette/middleware/gzip.py", line 46, in __call__ await self.app(scope, receive, self.send_with_compression) File "/usr/local/lib/python3.9/site-packages/starlette/middleware/cors.py", line 93, in __call__ await self.simple_response(scope, receive, send, request_headers=headers) File "/usr/local/lib/python3.9/site-packages/starlette/middleware/cors.py", line 144, in simple_response await self.app(scope, receive, send) File 
"/usr/local/lib/python3.9/site-packages/starlette/middleware/base.py", line 178, in __call__ recv_stream.close() File "/usr/local/lib/python3.9/contextlib.py", line 137, in __exit__ self.gen.throw(typ, value, traceback) File "/usr/local/lib/python3.9/site-packages/starlette/_utils.py", line 82, in collapse_excgroups raise exc File "/usr/local/lib/python3.9/site-packages/starlette/middleware/base.py", line 175, in __call__ response = await self.dispatch_func(request, call_next) File "/opt/airflow/airflow/api_fastapi/core_api/middleware.py", line 28, in dispatch response = await call_next(request) File "/usr/local/lib/python3.9/site-packages/starlette/middleware/base.py", line 153, in call_next raise app_exc File "/usr/local/lib/python3.9/site-packages/starlette/middleware/base.py", line 140, in coro await self.app(scope, receive_or_disconnect, send_no_error) File "/usr/local/lib/python3.9/site-packages/starlette/middleware/exceptions.py", line 62, in __call__ await wrap_app_handling_exceptions(self.app, conn)(scope, receive, send) File "/usr/local/lib/python3.9/site-packages/starlette/_exception_handler.py", line 53, in wrapped_app raise exc File "/usr/local/lib/python3.9/site-packages/starlette/_exception_handler.py", line 42, in wrapped_app await app(scope, receive, sender) File "/usr/local/lib/python3.9/site-packages/starlette/routing.py", line 714, in __call__ await self.middleware_stack(scope, receive, send) File "/usr/local/lib/python3.9/site-packages/starlette/routing.py", line 734, in app await route.handle(scope, receive, send) File "/usr/local/lib/python3.9/site-packages/starlette/routing.py", line 288, in handle await self.app(scope, receive, send) File "/usr/local/lib/python3.9/site-packages/starlette/routing.py", line 76, in app await wrap_app_handling_exceptions(app, request)(scope, receive, send) File "/usr/local/lib/python3.9/site-packages/starlette/_exception_handler.py", line 53, in wrapped_app raise exc File 
"/usr/local/lib/python3.9/site-packages/starlette/_exception_handler.py", line 42, in wrapped_app await app(scope, receive, sender) File "/usr/local/lib/python3.9/site-packages/starlette/routing.py", line 73, in app response = await f(request) File "/usr/local/lib/python3.9/site-packages/fastapi/routing.py", line 301, in app raw_response = await run_endpoint_function( File "/usr/local/lib/python3.9/site-packages/fastapi/routing.py", line 214, in run_endpoint_function return await run_in_threadpool(dependant.call, **values) File "/usr/local/lib/python3.9/site-packages/starlette/concurrency.py", line 37, in run_in_threadpool return await anyio.to_thread.run_sync(func) File "/usr/local/lib/python3.9/site-packages/anyio/to_thread.py", line 56, in run_sync return await get_async_backend().run_sync_in_worker_thread( File "/usr/local/lib/python3.9/site-packages/anyio/_backends/_asyncio.py", line 2461, in run_sync_in_worker_thread return await future File "/usr/local/lib/python3.9/site-packages/anyio/_backends/_asyncio.py", line 962, in run result = context.run(func, *args) File "/opt/airflow/airflow/api_fastapi/core_api/routes/public/task_instances.py", line 651, in post_clear_task_instances dag = dag.partial_subset( File "/opt/airflow/task_sdk/src/airflow/sdk/definitions/dag.py", line 811, in partial_subset dag.task_dict = { File "/opt/airflow/task_sdk/src/airflow/sdk/definitions/dag.py", line 812, in <dictcomp> t.task_id: _deepcopy_task(t) File "/opt/airflow/task_sdk/src/airflow/sdk/definitions/dag.py", line 808, in _deepcopy_task return copy.deepcopy(t, memo) File "/usr/local/lib/python3.9/copy.py", line 153, in deepcopy y = copier(memo) File "/opt/airflow/task_sdk/src/airflow/sdk/definitions/baseoperator.py", line 1188, in __deepcopy__ object.__setattr__(result, k, v) AttributeError: can't set attribute ``` ### What you think should happen instead? 
Task instances endpoint should not throw HTTP500 ### How to reproduce As I mentioned, its intermittent you need to try clearing task instance couple of times from UI and you will observe this issue ### Operating System Linux ### Versions of Apache Airflow Providers _No response_ ### Deployment Other ### Deployment details _No response_ ### Anything else? _No response_ ### Are you willing to submit PR? - [ ] Yes I am willing to submit a PR! ### Code of Conduct - [x] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
open
2025-03-02T10:59:30Z
2025-03-12T07:18:15Z
https://github.com/apache/airflow/issues/47274
[ "kind:bug", "priority:high", "area:core", "AIP-84", "area:task-sdk", "affected_version:3.0.0beta" ]
vatsrahul1001
11
microsoft/nni
tensorflow
5,286
free(): invalid pointer Aborted (core dumped)
While I was trying to use NAS I got this error:

```
free(): invalid pointer
Aborted (core dumped)
```

My code is:

```python
import torch
import torch.nn.functional as F
import nni.retiarii.nn.pytorch as nn
from nni.retiarii import model_wrapper

@model_wrapper  # this decorator should be put on the outermost class
class Net(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv1 = nn.Conv2d(1, 32, 3, 1)
        self.conv2 = nn.Conv2d(32, 64, 3, 1)
        self.dropout1 = nn.Dropout(0.25)
        self.dropout2 = nn.Dropout(0.5)
        self.fc1 = nn.Linear(9216, 128)
        self.fc2 = nn.Linear(128, 10)

    def forward(self, x):
        x = F.relu(self.conv1(x))
        x = F.max_pool2d(self.conv2(x), 2)
        x = torch.flatten(self.dropout1(x), 1)
        x = self.fc2(self.dropout2(F.relu(self.fc1(x))))
        output = F.log_softmax(x, dim=1)
        return output
```

**Environment**:
- NNI version: 2.10
- Training service (local|remote|pai|aml|etc): local
- Client OS: Ubuntu 20.04
- Python version: 3.8
- PyTorch version: 1.13.0+cu117
- Is conda/virtualenv/venv used?: venv
- Is running in Docker?: No
closed
2022-12-17T19:48:40Z
2022-12-21T13:41:16Z
https://github.com/microsoft/nni/issues/5286
[]
Armanasq
3
tensorflow/datasets
numpy
5,030
[data request] EGO4D
* Name of dataset: EGO4D * URL of dataset: [<url>](https://ego4d-data.org/docs/start-here/) * License of dataset: [<license type>](https://ego4d-data.org/pdfs/Ego4D-Licenses-Draft.pdf) * Short description of dataset and use case(s): Ego4D is a large-scale egocentric video dataset focused on daily life interactions and activities. Folks who would also like to see this dataset in `tensorflow/datasets`, please thumbs-up so the developers can know which requests to prioritize. And if you'd like to contribute the dataset (thank you!), see our [guide to adding a dataset](https://github.com/tensorflow/datasets/blob/master/docs/add_dataset.md).
open
2023-07-25T02:13:20Z
2023-07-25T12:27:47Z
https://github.com/tensorflow/datasets/issues/5030
[ "dataset request" ]
XinyangHan
1
Avaiga/taipy
data-visualization
1,593
[DOCS] Getting Started with Taipy Page --Run Command
### Issue Description It would be nice if the Getting Started page had a line or two showing the command to start the Taipy application, so that new developers could get started right away. Right now the code ends with the line below and one has to navigate to https://docs.taipy.io/en/latest/manuals/cli/run/ if __name__ == "__main__": Gui(page=page).run(title="Dynamic chart") ### Screenshots or Examples (if applicable) ![taipy](https://github.com/user-attachments/assets/3ebc3bb8-7a89-40bb-aed0-1ef5223c3c7d) ### Proposed Solution (optional) It would be nice to have something like this in the Getting Started page (https://docs.taipy.io/en/latest/getting_started/) ![taipy1](https://github.com/user-attachments/assets/3d33712d-7971-4e18-8f57-07a143cbe4fe) ### Code of Conduct - [X] I have checked the [existing issues](https://github.com/Avaiga/taipy/issues?q=is%3Aissue+). - [ ] I am willing to work on this issue (optional)
closed
2024-07-27T09:47:07Z
2024-10-03T22:41:36Z
https://github.com/Avaiga/taipy/issues/1593
[ "📈 Improvement", "📄 Documentation", "good first issue", "🟩 Priority: Low" ]
pravintargaryen
6
widgetti/solara
flask
795
feat: reconnect to a different server may not be an error
cc @nmearl In Cosmic data stories https://www.cosmicds.cfa.harvard.edu/ a reconnect will restore the state the user was on from a database. This means that if a machine goes down (due to scaling down) and a browser reconnects, it may not be an error. Solara cannot know this, so we should have an option to configure it. Naming this might be hard (maybe `SOLARA_RECONNECT_IS_ERROR=False`).
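A minimal sketch of how such an opt-out flag could be read, assuming the hypothetical name `SOLARA_RECONNECT_IS_ERROR` (this is not an existing Solara setting, just an illustration of the proposed option):

```python
import os

def reconnect_is_error(default: bool = True) -> bool:
    # Hypothetical setting: treat a reconnect to a different server as an
    # error unless the environment variable opts out. The name and semantics
    # are a sketch of the proposal, not an existing Solara option.
    raw = os.environ.get("SOLARA_RECONNECT_IS_ERROR", str(default))
    return raw.strip().lower() in ("1", "true", "yes", "on")

os.environ["SOLARA_RECONNECT_IS_ERROR"] = "False"
print(reconnect_is_error())  # reconnects would restore state instead of failing
```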
open
2024-09-20T18:10:36Z
2024-09-20T18:10:36Z
https://github.com/widgetti/solara/issues/795
[]
maartenbreddels
0
tableau/server-client-python
rest-api
965
Not able to publish Workbook with Published data source
Tableau Server version is 2020.4.5 tableauserverclient version is 0.15.0 I have two questions Question 1) I am able to publish a workbook with a Live connection, but not able to publish a workbook with the published data source. The main issue is I have two connections one is a live connection and the second is a published data source. I tried to publish workbooks with a single connection, live is working but not a published data source. Question 2) Is there a way to convert the data source (connected to the workbook) from Dev to Prod while publishing the workbook? analytics.d-datalake.com is like Dev DB server address analytics.p-datalake.com is like Prod DB server address User name and password remain the same. I tried to convert it to Prod, even the workbook with a live connection is not working. Find the code I am using. import tableauserverclient as TSC from tableauserverclient import ConnectionCredentials, ConnectionItem server = TSC.Server("https://tableau.com/") tableau_auth = TSC.TableauAuth('username', 'password', 'sitename') with server.auth.sign_in(tableau_auth): all_projects, pagination_items = server.projects.get() default_project = next((project for project in all_projects if project.is_default()), None) all_connections = [] connection = ConnectionItem() connection.server_address = 'analytics.d-datalake.com' connection.connection_credentials = ConnectionCredentials('dbuser', "dbpassword", True) all_connections.append(connection) all_connections new_workbook = TSC.WorkbookItem(default_project.id) server.workbooks.publish(new_workbook, 'Security.twbx', TSC.Server.PublishMode.Overwrite , connections=all_connections) Thanks in advance
closed
2021-11-23T06:02:07Z
2023-02-15T08:07:17Z
https://github.com/tableau/server-client-python/issues/965
[ "help wanted" ]
pavankumartableau
2
allenai/allennlp
pytorch
5,128
HuggingFace Tokenizers and multiprocess worker parallelism warnings
If i use a multiprocess worker, i get the following warning when loading my data: ``` huggingface/tokenizers: The current process just got forked, after parallelism has already been used. Disabling parallelism to avoid deadlocks... To disable this warning, you can either: - Avoid using `tokenizers` before the fork if possible - Explicitly set the environment variable TOKENIZERS_PARALLELISM=(true | false) ``` My dataloader config looks like: ``` "data_loader": { "batch_sampler": { "type": "bucket", "batch_size" : 8, }, "num_workers": 4 }, ``` and I'm using a pretrained transformer tokenizer like so: ``` "tokenizer": { "type": "pretrained_transformer", "model_name": transformer_model, "add_special_tokens": false, "max_length": 128 }, ```
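As the warning text itself suggests, one common workaround is to set `TOKENIZERS_PARALLELISM` in the parent process before the DataLoader workers fork; a minimal sketch:

```python
import os

# Must be set before the HuggingFace tokenizers library is used and before
# the worker processes fork; otherwise the library prints this warning and
# disables its own parallelism to avoid deadlocks.
os.environ["TOKENIZERS_PARALLELISM"] = "false"
print(os.environ["TOKENIZERS_PARALLELISM"])
```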
closed
2021-04-16T02:28:19Z
2021-04-30T14:32:15Z
https://github.com/allenai/allennlp/issues/5128
[ "bug" ]
nelson-liu
2
iperov/DeepFaceLab
deep-learning
969
Hello guys, where can I download the oldest builds?
Hello guys, where can I download the oldest builds? They work better for me. Can somebody help me find where to download them? Thank you.
open
2020-12-12T00:08:02Z
2023-06-08T21:38:34Z
https://github.com/iperov/DeepFaceLab/issues/969
[]
tembel123456
1
feder-cr/Jobs_Applier_AI_Agent_AIHawk
automation
176
Generate HTML Failing at Education Section
Traceback (most recent call last): File "C:\Users\tacor\LinkedIn_AIHawk_automatic_job_application\src\linkedIn_easy_applier.py", line 63, in job_apply self._fill_application_form(job) File "C:\Users\tacor\LinkedIn_AIHawk_automatic_job_application\src\linkedIn_easy_applier.py", line 129, in _fill_application_form self.fill_up(job) File "C:\Users\tacor\LinkedIn_AIHawk_automatic_job_application\src\linkedIn_easy_applier.py", line 173, in fill_up self._process_form_element(element, job) File "C:\Users\tacor\LinkedIn_AIHawk_automatic_job_application\src\linkedIn_easy_applier.py", line 177, in _process_form_element self._handle_upload_fields(element, job) File "C:\Users\tacor\LinkedIn_AIHawk_automatic_job_application\src\linkedIn_easy_applier.py", line 194, in _handle_upload_fields self._create_and_upload_resume(element, job) File "C:\Users\tacor\LinkedIn_AIHawk_automatic_job_application\src\linkedIn_easy_applier.py", line 210, in _create_and_upload_resume raise Exception(f"Upload failed: \nTraceback:\n{tb_str}") Exception: Upload failed: Traceback: Traceback (most recent call last): File "C:\Users\tacor\LinkedIn_AIHawk_automatic_job_application\src\linkedIn_easy_applier.py", line 204, in _create_and_upload_resume f.write(base64.b64decode(self.resume_generator_manager.pdf_base64(job_description_text=job.description))) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\tacor\LinkedIn_AIHawk_automatic_job_application\virtual\Lib\site-packages\lib_resume_builder_AIHawk\manager_facade.py", line 78, in pdf_base64 self.resume_generator.create_resume_job_description_text(style_path, job_description_text, temp_html_path) File "C:\Users\tacor\LinkedIn_AIHawk_automatic_job_application\virtual\Lib\site-packages\lib_resume_builder_AIHawk\resume_generator.py", line 38, in create_resume_job_description_text self._create_resume(gpt_answerer, style_path, temp_html_path) File 
"C:\Users\tacor\LinkedIn_AIHawk_automatic_job_application\virtual\Lib\site-packages\lib_resume_builder_AIHawk\resume_generator.py", line 19, in _create_resume message = template.substitute(markdown=gpt_answerer.generate_html_resume(), style_path=style_path) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\tacor\LinkedIn_AIHawk_automatic_job_application\virtual\Lib\site-packages\lib_resume_builder_AIHawk\gpt_resume_job_description.py", line 313, in generate_html_resume f" {results['education']}\n" ~~~~~~~^^^^^^^^^^^^^ KeyError: 'education' Getting this error when it tries to generate and upload the resume. any ideas on how to debug?
closed
2024-08-31T00:44:53Z
2024-08-31T01:33:54Z
https://github.com/feder-cr/Jobs_Applier_AI_Agent_AIHawk/issues/176
[]
acortiv
1
miguelgrinberg/flasky
flask
201
local debugging of flask-mail with Gmail
I was just able to get the email support of [flasky (tag 10d)](https://github.com/miguelgrinberg/flasky/tree/10d) working by changing my Gmail settings. It seems that when flasky is run locally it is considered by Gmail to be some "less secure app" (like "Some Desktop mail clients like Microsoft Outlook and Mozilla Thunderbird") which [needs to be explicitly allowed to communicate](https://support.google.com/accounts/answer/6010255?hl=en). I had not set up email support for a website with Gmail for a while, but I cannot remember this issue. EDIT: This issue has already been addressed [here](https://github.com/miguelgrinberg/flasky/issues/141) as well. Did they change their security policy? I assume that I should be able to restore my settings to "forbid communication with less secure apps" after local debugging? BTW: The functionality works just fine! But there is a little inconsistency in the book [1st edition, 2014, page 72] which states `(venv) $ export FLASKY_ADMIN=<your-email-address>` for Unix users and `(venv) set FLASKY_ADMIN=<Gmail username>`, which could confuse some readers. I am not sure, but I think I was able to log in to Gmail without "@googlemail.com" or "@gmail.com" in the username.
closed
2016-10-27T09:40:16Z
2016-11-10T20:11:11Z
https://github.com/miguelgrinberg/flasky/issues/201
[ "question" ]
fkromer
7
vimalloc/flask-jwt-extended
flask
224
Multiple values JWT_ACCESS_COOKIE_PATH
I am looking into updating a project from Flask-JWT and want to use cookies in flask jwt extended. I serve my API by reverse proxy through nginx under /api/. For my flask routes they are following a v1, v2 etc - e.g. as served from my python app they are /v1 /v2, but from nginx they are /api/v1 /api/v2. How might I handle multiple paths with this config? e.g. I'd like /v1, /v2 JWT_ACCESS_COOKIE_PATH I tried passing an array but I get strange behavior.
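A possible workaround sketch (an assumption, not a documented flask-jwt-extended feature — `JWT_ACCESS_COOKIE_PATH` takes a single string): since browser cookie `Path` matching is prefix-based per RFC 6265, a single path that covers both versioned routes can work. Behind the reverse proxy the browser sees `/api/v1` and `/api/v2`, so `/api` would cover both:

```python
# Behind nginx the browser-visible routes are /api/v1 and /api/v2. Per
# RFC 6265, a cookie with Path=/api is sent for any request path of the
# form /api/..., so one cookie path can serve every API version.
external_paths = ["/api/v1", "/api/v2"]

def covering_cookie_path(paths):
    # Walk up from the first path until every path sits underneath it.
    candidate = paths[0]
    while candidate != "/" and not all(
        p == candidate or p.startswith(candidate + "/") for p in paths
    ):
        candidate = candidate.rsplit("/", 1)[0] or "/"
    return candidate

# Hypothetical Flask config dict; in a real app this would be app.config.
config = {"JWT_ACCESS_COOKIE_PATH": covering_cookie_path(external_paths)}
print(config["JWT_ACCESS_COOKIE_PATH"])  # "/api"
```

Note that `/v` would not work as a cookie path for `/v1` and `/v2` (cookie path matching requires a `/` boundary), which may explain the strange behavior seen when passing an array.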
closed
2019-01-21T21:29:03Z
2019-02-03T17:53:58Z
https://github.com/vimalloc/flask-jwt-extended/issues/224
[]
msmicker
2
pallets-eco/flask-sqlalchemy
sqlalchemy
413
Session creation looks like it has bugs
In Flask-SQLAlchemy, the session is scoped to `_app_ctx_stack.get_ident`. This is essentially `thread.get_ident` if you're not using greenlets, meaning that the session is scoped to the current thread. This seems to be counter to how the sqlalchemy docs say you should handle sessions with web requests. In a WSGI environment, you cannot assume that every request will be in a new thread, and this indicates that in Flask-SQLAlchemy, a session could be shared between multiple different requests (and possibly users). Is this how it was intended? Am I missing something here?
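For illustration, here is a minimal stdlib-only sketch of what thread-ident scoping means: each thread gets its own session object from a registry keyed by `threading.get_ident()`, so two requests served by the same worker thread would see the same session unless it is removed between requests. This mirrors the behavior described in the issue, not Flask-SQLAlchemy's actual code:

```python
import threading

class ScopedRegistry:
    """Miniature version of SQLAlchemy's scoped-session registry:
    one object per scope key, here the current thread ident."""

    def __init__(self, factory, scopefunc=threading.get_ident):
        self._factory = factory
        self._scopefunc = scopefunc
        self._registry = {}

    def __call__(self):
        key = self._scopefunc()
        if key not in self._registry:
            self._registry[key] = self._factory()
        return self._registry[key]

sessions = ScopedRegistry(factory=object)

# Within one thread, the same "session" comes back every time.
first = sessions()
assert sessions() is first

# A different thread gets a different session.
holder = []
t = threading.Thread(target=lambda: holder.append(sessions()))
t.start()
t.join()
print(holder[0] is first)  # False
```

SQLAlchemy's own `scoped_session` accepts a `scopefunc` argument, which is the usual knob for scoping sessions to something other than the current thread (e.g. the request context).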
closed
2016-08-11T12:34:01Z
2020-12-05T21:18:26Z
https://github.com/pallets-eco/flask-sqlalchemy/issues/413
[]
synic
3
ultralytics/ultralytics
deep-learning
19,620
YOLO11 SyntaxError: '-' is not a valid YOLO argument.
### Search before asking - [x] I have searched the Ultralytics YOLO [issues](https://github.com/ultralytics/ultralytics/issues) and found no similar bug report. ### Ultralytics YOLO Component Hyperparameter Tuning ### Bug When running model.tune() with YOLO11, I get this error. The model isn't failing in that my terminal doesn't cancel the run. I keep seeing iteration x/300 but also "Warning X training failure for hyperparameter tuning iteration 1. ### Environment From yolo checks: Ultralytics 8.3.74 🚀 Python-3.11.11 torch-2.0.1+cu118 CUDA:0 (NVIDIA RTX A1000 6GB Laptop GPU, 6144MiB) Setup complete ✅ (28 CPUs, 31.7 GB RAM, 139.8/475.7 GB disk) OS Windows-10-10.0.22631-SP0 Environment Windows Python 3.11.11 Install git RAM 31.69 GB Disk 139.8/475.7 GB CPU 13th Gen Intel Core(TM) i7-13850HX CPU count 28 GPU NVIDIA RTX A1000 6GB Laptop GPU, 6144MiB GPU count 1 CUDA 11.8 numpy ✅ 1.26.4<=2.1.1,>=1.23.0 matplotlib ✅ 3.10.0>=3.3.0 opencv-python ✅ 4.11.0.86>=4.6.0 pillow ✅ 11.1.0>=7.1.2 pyyaml ✅ 6.0.2>=5.3.1 requests ✅ 2.32.3>=2.23.0 scipy ✅ 1.15.1>=1.4.1 torch ✅ 2.0.1+cu118>=1.8.0 torch ✅ 2.0.1+cu118!=2.4.0,>=1.8.0; sys_platform == "win32" torchvision ✅ 0.15.2+cu118>=0.9.0 tqdm ✅ 4.67.1>=4.64.0 psutil ✅ 6.1.1 py-cpuinfo ✅ 9.0.0 pandas ✅ 2.2.3>=1.1.4 seaborn ✅ 0.13.2>=0.11.0 ultralytics-thop ✅ 2.0.14>=2.0.0 ### Minimal Reproducible Example Run from a python script: model = YOLO('yolo11n.pt') then model.tune(). I've tried adding my own search space, using the one in the Ultralytics example, and without any search space (so all the default ones). ### Additional I have no idea where this dash is coming from. These are screenshots of the error. ![Image](https://github.com/user-attachments/assets/56bd1a14-d619-48a9-a8ed-e1f2c14506bb) ![Image](https://github.com/user-attachments/assets/bc425ab5-73aa-45a5-9251-65ea039ee08d) ### Are you willing to submit a PR? - [ ] Yes I'd like to help by submitting a PR!
closed
2025-03-10T13:44:18Z
2025-03-10T17:51:11Z
https://github.com/ultralytics/ultralytics/issues/19620
[ "bug", "enhancement", "fixed" ]
scottschmidl
12
strawberry-graphql/strawberry
fastapi
2,941
tests on windows broken?
The hook tests on Windows seem to be broken. First, there are many deprecation warnings; second, my code change works for all other tests. PR on which the "tests on windows" job fails (there are also some others): https://github.com/strawberry-graphql/strawberry/pull/2938
closed
2023-07-12T13:22:52Z
2025-03-20T15:56:17Z
https://github.com/strawberry-graphql/strawberry/issues/2941
[ "bug" ]
devkral
2
pyro-ppl/numpyro
numpy
1,526
Sample an array of variable size
Hello! I have a naive question: is there a way to infer the size of an array and the elements present in this array? My objective is to recover past earthquakes on a fault given the concentration of 36Cl. To do so, I have to infer the number of earthquakes and then attribute an exhumation height to each earthquake. I also have a constraint on the elements of this array: the sum of the elements cannot exceed the fault scarp height. I have this code, with a much simpler forward function, that mimics what I need and has the objective of finding ruptures:

```
""" Base libraries """
import numpy as np
import matplotlib.pyplot as plt

""" Libraries used for the inversion """
from jax import random, lax
import numpyro
import numpyro.distributions as dist
from numpyro.infer import MCMC, NUTS
import jax.numpy as jnp

def f(x, a):
    return a*x

def g(x, breach, slope):
    y = jnp.zeros((len(x)))
    for i in range(0, len(breach)):
        x_u = jnp.arange(int(jnp.sum(breach[0:i])), int(breach[i] + jnp.sum(breach[0:i])), dtype=float)
        y = y.at[int(jnp.sum(breach[0:i])):int(breach[i] + jnp.sum(breach[0:i]))].set(f(x_u, slope[i]))
    return y

ruptures = jnp.array([20, 20, 40, 20])
slope = jnp.array([2, 5, 10, 20])
x_values = jnp.arange(0, 100, dtype=float)
y_values = g(x_values, ruptures, slope)

plt.plot(x_values, y_values, '.')
plt.xlabel('abscisse')
plt.ylabel('ordonnée')
plt.show()

def inverse_ruptures(obs):
    nb_ruptures = numpyro.sample('nb_ruptures', dist.Uniform(0, 50))
    len_segment = jnp.zeros((int(nb_ruptures)))
    alpha_coeff = jnp.zeros((int(nb_ruptures)))
    for i in range(0, int(nb_ruptures)):
        len_segment = len_segment.at[int(i)].set(int(numpyro.sample('len_segment' + str(i)+str(0), dist.Uniform(0, 50))))
        n = 1
        while jnp.sum(len_segment) >= len(x_values):
            len_segment = len_segment.at[int(i)].set(int(numpyro.sample('len_segment' + str(i)+str(n), dist.Uniform(0, 50))))
            n = n+1
        alpha_coeff = alpha_coeff.at[int(i)].set(numpyro.sample('alpha_coeff' + str(i), dist.Uniform(0, 50)))
    print(len_segment)
    y_fit_numpyro = g(x_values, len_segment, alpha_coeff)
    return numpyro.sample('obs', dist.Normal(y_fit_numpyro, 0.5), obs=obs)

rng_key = random.PRNGKey(0)
rng_key, rng_key_ = random.split(rng_key)
kernel = NUTS(inverse_ruptures)
mcmc = MCMC(kernel, num_warmup=500, num_samples=5000)
mcmc.run(rng_key, obs=y_values)
mcmc.print_summary()
posterior_samples = mcmc.get_samples()  # results as a dictionary
```

Thanks for your help
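One common way to satisfy such a sum constraint (a hedged sketch, independent of the posted model; inside numpyro itself one would typically fix a maximum number of ruptures and sample `dist.Dirichlet` proportions) is to draw segment *proportions* rather than independent lengths, so the scaled lengths sum to the scarp height by construction instead of being rejected in a `while` loop. The function name below is made up for illustration:

```python
import random

def sample_segment_lengths(n_segments, total_height, rng=None):
    """Draw segment lengths that are guaranteed to sum to total_height.

    Instead of sampling each length independently and re-drawing when the
    sum exceeds the scarp height, sample Dirichlet-style proportions
    (normalized Gamma draws) and scale them by the total height.
    """
    rng = rng or random.Random()
    gammas = [rng.gammavariate(1.0, 1.0) for _ in range(n_segments)]
    total = sum(gammas)
    return [total_height * g / total for g in gammas]

# Four segments that always add up to a 100 m scarp:
lengths = sample_segment_lengths(4, 100.0, random.Random(0))
assert abs(sum(lengths) - 100.0) < 1e-9
```

The same idea transfers directly to the probabilistic model: sampling one Dirichlet vector replaces the per-segment rejection loop, which also keeps the number of sample sites fixed, something NUTS requires anyway.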
closed
2023-01-26T19:47:06Z
2023-01-27T14:14:09Z
https://github.com/pyro-ppl/numpyro/issues/1526
[ "question" ]
Sarenrenegade
3
agronholm/anyio
asyncio
461
Would it make sense to make CancelScope._timeout_expired public?
I'm in a situation where I'd like to know whether the scope was cancelled by calling `.cancel()` or by the expiration of the deadline, and take a different action in each case. Kinda something like this.

```python
async with anyio.create_task_group() as task_group:

    @task_group.start_soon
    async def do_stuff():
        # ... do stuff ...
        # something happened that requires cancelling immediately
        task_group.cancel_scope.cancel()

    @task_group.start_soon
    async def keep_alive():
        # ... check some condition periodically ...
        while True:
            if await wait_for_some_condition():
                task_group.cancel_scope.deadline = anyio.current_time() + 1

# Use task_group.cancel_scope.cancel_called and task_group.cancel_scope._timeout_expired
# to determine the next action
```

It seems `_timeout_expired` is unambiguous and could be made public. Would it make sense to make it public?
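Until such an attribute is public, the distinction can be tracked without touching private state. A minimal sketch in plain Python (`TimedScope` and its methods are made-up names, not anyio's API):

```python
import time

class TimedScope:
    """Record *why* a scope ended: manual cancel vs. deadline expiry."""

    def __init__(self, deadline):
        self.deadline = deadline
        self.cancel_called = False
        self.timeout_expired = False

    def cancel(self):
        # Mirrors CancelScope.cancel(): an explicit, manual cancellation.
        self.cancel_called = True

    def finished(self, now=None):
        """True once the scope should end; marks timeout_expired if the
        deadline passed before any manual cancel."""
        now = time.monotonic() if now is None else now
        if not self.cancel_called and now >= self.deadline:
            self.timeout_expired = True
        return self.cancel_called or self.timeout_expired

scope = TimedScope(deadline=100.0)
assert not scope.finished(now=50.0)
assert scope.finished(now=100.0) and scope.timeout_expired
```

The caller can then branch on `scope.cancel_called` vs. `scope.timeout_expired`, which is exactly the distinction the issue asks anyio to expose.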
closed
2022-08-26T05:52:07Z
2022-08-26T08:45:10Z
https://github.com/agronholm/anyio/issues/461
[]
alegonz
4
marimo-team/marimo
data-science
3,587
Cell grouping for conditional cell set execution
### Description For context, I am developing a web application that processes CSV files for statistical analysis. The application follows multiple decision paths depending on data characteristics. For example, if the data is normalized, specific tests are applied; if not, different tests are implemented or user input is requested before proceeding. Each test may generate multiple outputs and might require linked inputs requiring separate cells, which may also reference each other. I want to be able to implement conditional logic for running groups of cells. Currently, adding conditions to individual cells is unsystematic and makes navigation difficult. While `mo.output.append` enables multiple outputs in one cell, I still run into issues when creating interactive features like user input for selecting columns in a t-test, because they require separate cells. ### Suggested solution I have two potential solutions in mind. One involves implementing cell levels using nested functions in Python, which would probably work well. The other approach would be to create a dictionary of functions and call them as needed. Both methods could probably solve this. ### Alternative _No response_ ### Additional context _No response_
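The dictionary-of-functions approach from the suggested solution can be sketched in plain Python (all names here are hypothetical illustrations, not marimo API):

```python
def parametric_branch(columns):
    # e.g. run a t-test when normality holds
    return "t-test on " + ", ".join(columns)

def nonparametric_branch(columns):
    # e.g. fall back to a rank-based test otherwise
    return "mann-whitney on " + ", ".join(columns)

# Map each data characteristic to the group of steps it should trigger.
BRANCHES = {True: parametric_branch, False: nonparametric_branch}

def analyze(columns, is_normal):
    # One cell dispatches; each branch bundles what would otherwise be
    # scattered per-cell conditions.
    return BRANCHES[is_normal](columns)

assert analyze(["height"], True).startswith("t-test")
```

This keeps the condition in a single place, though it does not by itself solve the interactive-input case the description raises, since widgets still need their own cells.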
open
2025-01-27T20:05:10Z
2025-01-27T21:34:48Z
https://github.com/marimo-team/marimo/issues/3587
[ "enhancement" ]
mimansajaiswal
3
tox-dev/tox
automation
3,247
Generated constraints.txt eats the first `c` letter of packages
## Issue

We use `constrain_package_deps = true` and, to our surprise, for dependencies in our `constraints.txt` that start with the letter `c`, that letter gets removed.

## Environment

Provide at least:
- OS: linux

## Minimal example

Use these files to reproduce it:

```ini
[tox]
min_version = 4.14.1
envlist = test

[testenv:test]
allowlist_externals = echo
skip_install = true
constrain_package_deps = true
deps =
    zope.interface
    -c constraints.txt
commands = echo 1
```

Download locally: https://dist.plone.org/release/6.0.7/requirements.txt

```shell
wget https://dist.plone.org/release/6.0.7/requirements.txt
```

And run tox:

```shell
tox -e test
```

After `tox` fails (on purpose), look at the generated `.tox/test/constraints.txt` and you will see that there are no `collective.XXX` version pins but rather `ollective.XXX` version pins.
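The symptom is consistent with a classic `str.lstrip` pitfall (whether tox's code actually does this is an assumption on my part): `lstrip` takes a *set of characters*, not a prefix, so stripping a `-c` option marker also eats a leading `c` from a package name:

```python
dep = "collective.dexterity==2.0"
# lstrip removes *any* leading "-", "c" or " " characters,
# not the literal "-c " prefix:
assert dep.lstrip("-c ") == "ollective.dexterity==2.0"

# A prefix-safe alternative (str.removeprefix, Python 3.9+):
opt = "-c constraints.txt"
assert opt.removeprefix("-c").strip() == "constraints.txt"

# A plain package name is left untouched by removeprefix:
assert dep.removeprefix("-c") == dep
```

If the constraint-writing code path does use `lstrip`-style stripping, switching to a prefix-aware removal would fix exactly the `collective` -> `ollective` corruption described above.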
closed
2024-03-20T13:21:38Z
2024-03-24T15:11:53Z
https://github.com/tox-dev/tox/issues/3247
[]
gforcada
2
hankcs/HanLP
nlp
964
Error in the Java source code
<!-- Notes and the version number are required, otherwise there will be no reply. If you want a quick answer, please fill in the template carefully. Thank you for your cooperation. --> ## Notes Please confirm the following: * I have carefully read the following documents and found no answer in any of them: - [Home page documentation](https://github.com/hankcs/HanLP) - [wiki](https://github.com/hankcs/HanLP/wiki) - [FAQ](https://github.com/hankcs/HanLP/wiki/FAQ) * I have searched my question via [Google](https://www.google.com/#newwindow=1&q=HanLP) and the [issue search](https://github.com/hankcs/HanLP/issues) and found no answer either. * I understand that the open-source community is a voluntary community formed out of shared interest and bears no responsibility or obligation. I will be polite and thank everyone who helps me. * [x] I put an x in these brackets to confirm the items above. ## Version The current latest version is: hanlp-1.6.8 The version I am using is: hanlp-1.6.8 ## My question In hanlp-1.6.8 the hmm2 segmentation model has been deprecated and no longer actually exists in `data`, but in the source of hanlp.java you can still obtain the hmm2 segmenter by passing the string "hmm2". Source -> hanlp.java
```
/**
 * HMM segmentation model
 *
 * @deprecated Deprecated, please use {@link PerceptronLexicalAnalyzer}
 */
public static String HMMSegmentModelPath = "data/model/segment/HMMSegmentModel.bin";
......
else if ("hmm2".equals(algorithm) || "二阶隐马".equals(algorithm))
    return new HMMSegment();
```
If you actually use hmm2, it raises the following error (the message says the HMM segmentation model file does not exist):
```
java.lang.IllegalArgumentExceptionPyRaisable: java.lang.IllegalArgumentException: 发生了异常:java.lang.IllegalArgumentException: HMM分词模型[ /home/font/anaconda3/lib/python3.6/site-packages/pyhanlp/static/data/model/segment/HMMSegmentModel.bin ]不存在
```
closed
2018-09-10T13:22:18Z
2018-09-15T15:47:38Z
https://github.com/hankcs/HanLP/issues/964
[ "improvement" ]
TianFengshou
1
akfamily/akshare
data-science
5,731
stock_zh_a_spot_em() frequently returns only 200 rows when run automatically, but over 5000 rows when run manually
**Environment:** 1. akshare version is 1.16.7 2. Python version is 3.8 **Attempted fix:** Added headers to the request to mimic a browser when fetching the data; hoping it succeeds tomorrow.
closed
2025-02-24T17:03:26Z
2025-02-25T09:43:25Z
https://github.com/akfamily/akshare/issues/5731
[ "bug" ]
akshare0613
4
aimhubio/aim
tensorflow
3,086
allow resize & move of panels
## 🚀 Feature Add the ability to resize/move the plots. See attached gifs which compare between wandb and aim ![ezgif-1-5d49c034b0](https://github.com/aimhubio/aim/assets/8983713/699d4015-af74-477e-b3c4-142f5b17ad73) ![ezgif-3-588612cc87](https://github.com/aimhubio/aim/assets/8983713/1c9e395e-e5ac-47fd-9841-741455609dcc)
open
2024-01-08T22:39:45Z
2024-01-08T22:39:55Z
https://github.com/aimhubio/aim/issues/3086
[ "type / enhancement" ]
orena1
0
gradio-app/gradio
machine-learning
10,818
Bitdefender Issue
### Describe the bug ![Image](https://github.com/user-attachments/assets/8db5d656-1182-4773-99ed-88fdde280028) ### Have you searched existing issues? 🔎 - [x] I have searched and found no existing issues ### Reproduction I have no clue ### Screenshot ![Image](https://github.com/user-attachments/assets/554fb1a7-26cd-4b58-b654-8c081c53054e) ### Logs ```shell ``` ### System Info ```shell PS G:\Projects2\Imagen Edit> gradio environment Gradio Environment Information: ------------------------------ Operating System: Windows gradio version: 5.15.0 gradio_client version: 1.7.0 ------------------------------------------------ gradio dependencies in your environment: aiofiles: 23.2.1 anyio: 4.8.0 audioop-lts is not installed. fastapi: 0.115.7 ffmpy: 0.4.0 gradio-client==1.7.0 is not installed. httpx: 0.28.1 huggingface-hub: 0.29.3 jinja2: 3.1.6 markupsafe: 2.1.5 numpy: 1.24.3 orjson: 3.10.7 packaging: 23.2 pandas: 2.2.3 pillow: 10.0.0 pydantic: 2.10.6 pydub: 0.25.1 python-multipart: 0.0.18 pyyaml: 6.0.2 ruff: 0.9.6 safehttpx: 0.1.6 semantic-version: 2.10.0 starlette: 0.45.3 tomlkit: 0.12.0 typer: 0.15.1 typing-extensions: 4.12.2 urllib3: 2.3.0 uvicorn: 0.34.0 authlib; extra == 'oauth' is not installed. itsdangerous; extra == 'oauth' is not installed. gradio_client dependencies in your environment: fsspec: 2023.10.0 httpx: 0.28.1 huggingface-hub: 0.29.3 packaging: 23.2 typing-extensions: 4.12.2 websockets: 14.2 PS G:\Projects2\Imagen Edit> ``` ### Severity Blocking usage of gradio
open
2025-03-17T17:42:00Z
2025-03-17T19:20:36Z
https://github.com/gradio-app/gradio/issues/10818
[ "bug" ]
PierrunoYT
1
browser-use/browser-use
python
1,067
It is cheaper to perform the operation using a browser on an Android phone.
### Problem Description Is it cheaper to use the browser of an Android phone to run it? ### Proposed Solution There are cloud phones that provide APIs to operate the browser in the cloud phone. Running in the cloud does not occupy local browser resources [www.androidcloud.ai](https://www.androidcloud.ai/) ### Alternative Solutions _No response_ ### Additional Context _No response_
open
2025-03-19T07:32:42Z
2025-03-19T07:35:03Z
https://github.com/browser-use/browser-use/issues/1067
[ "enhancement" ]
baishi678
0
RayVentura/ShortGPT
automation
53
🐛 [Bug]: MoviePy error: failed to read the duration of file %s.\n
### What happened? Follow the example to run Step 9 _editAndRenderShort Error File "/home/ecs-user/shortgpt/gui/video_automation_ui.py", line 95, in respond video_path = makeVideo(script, language.value, isVertical, progress=progress) File "/home/ecs-user/shortgpt/gui/video_automation_ui.py", line 46, in makeVideo for step_num, step_info in shortEngine.makeContent(): File "/home/ecs-user/shortgpt/shortGPT/engine/abstract_content_engine.py", line 72, in makeContent self.stepDict[currentStep]() File "/home/ecs-user/shortgpt/shortGPT/engine/content_video_engine.py", line 139, in _editAndRenderShort videoEditor.renderVideo(outputPath, logger=self.logger) File "/home/ecs-user/shortgpt/shortGPT/editing_framework/editing_engine.py", line 95, in renderVideo engine.generate_video(self.schema, outputPath, logger=logger) File "/home/ecs-user/shortgpt/shortGPT/editing_framework/core_editing_engine.py", line 53, in generate_video clip = self.process_video_asset(asset) File "/home/ecs-user/shortgpt/shortGPT/editing_framework/core_editing_engine.py", line 200, in process_video_asset clip = VideoFileClip(**params) File "/home/ecs-user/.local/lib/python3.10/site-packages/moviepy/video/io/VideoFileClip.py", line 88, in __init__ self.reader = FFMPEG_VideoReader(filename, pix_fmt=pix_fmt, File "/home/ecs-user/.local/lib/python3.10/site-packages/moviepy/video/io/ffmpeg_reader.py", line 35, in __init__ infos = ffmpeg_parse_infos(filename, print_infos, check_duration, File "/home/ecs-user/.local/lib/python3.10/site-packages/moviepy/video/io/ffmpeg_reader.py", line 289, in ffmpeg_parse_infos raise IOError(("MoviePy error: failed to read the duration of file %s.\n" ### What type of browser are you seeing the problem on? Chrome ### What type of Operating System are you seeing the problem on? 
Linux ### Python Version python 3.10 ### Application Version V0.0.15 ### Expected Behavior Get a generated video ### Error Message ```shell Error File "/home/ecs-user/shortgpt/gui/video_automation_ui.py", line 95, in respond video_path = makeVideo(script, language.value, isVertical, progress=progress) File "/home/ecs-user/shortgpt/gui/video_automation_ui.py", line 46, in makeVideo for step_num, step_info in shortEngine.makeContent(): File "/home/ecs-user/shortgpt/shortGPT/engine/abstract_content_engine.py", line 72, in makeContent self.stepDict[currentStep]() File "/home/ecs-user/shortgpt/shortGPT/engine/content_video_engine.py", line 139, in _editAndRenderShort videoEditor.renderVideo(outputPath, logger=self.logger) File "/home/ecs-user/shortgpt/shortGPT/editing_framework/editing_engine.py", line 95, in renderVideo engine.generate_video(self.schema, outputPath, logger=logger) File "/home/ecs-user/shortgpt/shortGPT/editing_framework/core_editing_engine.py", line 53, in generate_video clip = self.process_video_asset(asset) File "/home/ecs-user/shortgpt/shortGPT/editing_framework/core_editing_engine.py", line 200, in process_video_asset clip = VideoFileClip(**params) File "/home/ecs-user/.local/lib/python3.10/site-packages/moviepy/video/io/VideoFileClip.py", line 88, in __init__ self.reader = FFMPEG_VideoReader(filename, pix_fmt=pix_fmt, File "/home/ecs-user/.local/lib/python3.10/site-packages/moviepy/video/io/ffmpeg_reader.py", line 35, in __init__ infos = ffmpeg_parse_infos(filename, print_infos, check_duration, File "/home/ecs-user/.local/lib/python3.10/site-packages/moviepy/video/io/ffmpeg_reader.py", line 289, in ffmpeg_parse_infos raise IOError(("MoviePy error: failed to read the duration of file %s.\n" ^CKeyboard interruption in main thread... closing server. ``` ### Code to produce this issue. _No response_ ### Screenshots/Assets/Relevant links _No response_
open
2023-07-28T04:27:15Z
2023-10-07T03:26:01Z
https://github.com/RayVentura/ShortGPT/issues/53
[ "bug" ]
lilinrestart
3
Evil0ctal/Douyin_TikTok_Download_API
api
273
[BUG] endpoint closed
{ "status": "endpoint closed", "message": "此端点已关闭请在配置文件中开启/This endpoint is closed, please enable it in the configuration file" }
closed
2023-09-16T07:28:35Z
2023-09-16T07:29:39Z
https://github.com/Evil0ctal/Douyin_TikTok_Download_API/issues/273
[ "BUG", "enhancement" ]
diowcnx
1
StackStorm/st2
automation
5,917
core.remote action throwing an error when ssh connection using private key
## SUMMARY When running core.remote action to ssh to host using private key error is thrown ### STACKSTORM VERSION st2 --version st2 3.7.0, on Python 3.8.10 ##### OS, environment, install method K8s helm chart installation ## Steps to reproduce the problem `st2 run core.remote cmd=whoami hosts=xxx username=stanley private_key=/home/stanley/.ssh/stanley_rsa` ## Expected Results This action should run whoami command on remote host and print output. SSH connection using this private_key is working properly password less when running ssh command from action_runner pod: `ssh -i stanley_rsa stanley@xxx` Exactly the same private_key we have on our old stackstorm instance(st2 3.5dev (596c60c23), on Python 3.6.9) and same core.remote action is able to run successfully. I have also verified that paramiko library has different versions: - old stackstorm instance: paramiko-2.7.2.dist-info - new stackstorm instance: paramiko-2.10.1.dist-info ## Actual Results ``` st2 run core.remote cmd=whoami hosts=xxx username=stanley private_key=/home/stanley/.ssh/stanley_rsa . id: 63f779e3308e12af9365df26 action.ref: core.remote context.user: st2admin parameters: cmd: whoami hosts: xxx private_key: '********' username: stanley status: failed start_timestamp: Thu, 23 Feb 2023 14:36:19 UTC end_timestamp: Thu, 23 Feb 2023 14:36:21 UTC result: error: "Unable to connect to any one of the hosts: ['xxx']. connect_errors={ "xxx": { "failed": true, "succeeded": false, "timeout": false, "return_code": 255, "stdout": "", "stderr": "", "error": "Failed connecting to host xxx. 
q must be exactly 160, 224, or 256 bits long", "traceback": "Traceback (most recent call last):\ File \\"/opt/stackstorm/st2/lib/python3.8/site-packages/st2common/runners/parallel_ssh.py\\", line 278, in _connect\ client.connect()\ File \\"/opt/stackstorm/st2/lib/python3.8/site-packages/st2common/runners/paramiko_ssh.py\\", line 171, in connect\ self.client = self._connect(host=self.hostname, socket=self.bastion_socket)\ File \\"/opt/stackstorm/st2/lib/python3.8/site-packages/st2common/runners/paramiko_ssh.py\\", line 787, in _connect\ client.connect(**conninfo)\ File \\"/opt/stackstorm/st2/lib/python3.8/site-packages/paramiko/client.py\\", line 435, in connect\ self._auth(\ File \\"/opt/stackstorm/st2/lib/python3.8/site-packages/paramiko/client.py\\", line 682, in _auth\ self._transport.auth_publickey(username, key)\ File \\"/opt/stackstorm/st2/lib/python3.8/site-packages/paramiko/transport.py\\", line 1634, in auth_publickey\ return self.auth_handler.wait_for_response(my_event)\ File \\"/opt/stackstorm/st2/lib/python3.8/site-packages/paramiko/auth_handler.py\\", line 244, in wait_for_response\ raise e\ File \\"/opt/stackstorm/st2/lib/python3.8/site-packages/paramiko/transport.py\\", line 2163, in run\ handler(self.auth_handler, m)\ File \\"/opt/stackstorm/st2/lib/python3.8/site-packages/paramiko/auth_handler.py\\", line 375, in _parse_service_accept\ sig = self.private_key.sign_ssh_data(blob, algorithm)\ File \\"/opt/stackstorm/st2/lib/python3.8/site-packages/paramiko/dsskey.py\\", line 109, in sign_ssh_data\ key = dsa.DSAPrivateNumbers(\ File \\"/opt/stackstorm/st2/lib/python3.8/site-packages/cryptography/hazmat/primitives/asymmetric/dsa.py\\", line 244, in private_key\ return backend.load_dsa_private_numbers(self)\ File \\"/opt/stackstorm/st2/lib/python3.8/site-packages/cryptography/hazmat/backends/openssl/backend.py\\", line 826, in load_dsa_private_numbers\ dsa._check_dsa_private_numbers(numbers)\ File 
\\"/opt/stackstorm/st2/lib/python3.8/site-packages/cryptography/hazmat/primitives/asymmetric/dsa.py\\", line 282, in _check_dsa_private_numbers\ _check_dsa_parameters(parameters)\ File \\"/opt/stackstorm/st2/lib/python3.8/site-packages/cryptography/hazmat/primitives/asymmetric/dsa.py\\", line 274, in _check_dsa_parameters\ raise ValueError(\\"q must be exactly 160, 224, or 256 bits long\\")\ ValueError: q must be exactly 160, 224, or 256 bits long\ " } }" traceback: " File "/opt/stackstorm/st2/lib/python3.8/site-packages/st2actions/container/base.py", line 117, in _do_run runner.pre_run() File "/opt/stackstorm/st2/lib/python3.8/site-packages/st2common/runners/paramiko_ssh_runner.py", line 206, in pre_run self._parallel_ssh_client = ParallelSSHClient(**client_kwargs) File "/opt/stackstorm/st2/lib/python3.8/site-packages/st2common/runners/parallel_ssh.py", line 90, in __init__ connect_results = self.connect(raise_on_any_error=raise_on_any_error) File "/opt/stackstorm/st2/lib/python3.8/site-packages/st2common/runners/parallel_ssh.py", line 131, in connect raise NoHostsConnectedToException(msg) " ``` Any suggestion what needs to be changed and how to make core.remote action work correctly?
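The traceback goes through paramiko's `dsskey.py`, which suggests the key paramiko ends up loading is a DSA key, and newer `cryptography` releases reject DSA parameters outside the standard sizes. This diagnosis is inferred from the stack trace, not confirmed. A hedged first check is to inspect the key type and, if it really is DSA, replace it with a modern key (the path below defaults to the one from the report):

```shell
# Hypothetical key path; substitute the real stanley key as needed.
KEY=${KEY:-/home/stanley/.ssh/stanley_rsa}

# Show the key's type and fingerprint; "DSA" here would explain the
# "q must be exactly 160, 224, or 256 bits long" error.
ssh-keygen -l -f "$KEY" || echo "could not read $KEY"

# If it reports DSA, generate an Ed25519 replacement and install its
# .pub on the target hosts:
# ssh-keygen -t ed25519 -f /home/stanley/.ssh/stanley_ed25519 -N ''
```

If the key on the old 3.5 instance is DSA, that would also explain why it worked there: the older paramiko/cryptography pair was more permissive about DSA parameters.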
open
2023-03-02T13:09:45Z
2023-03-24T15:34:10Z
https://github.com/StackStorm/st2/issues/5917
[ "status:to be verified" ]
ciechonp
5
reloadware/reloadium
pandas
145
Unable to refresh on template changes
## Describe the bug I have a basic flask app set up with Tailwind. Reloadium works great for the python files, but isn't reloading changes to my templates. I'm running a watch command on CSS and Template files using pytailwindcss, which rebuilds CSS into my `static/dist` folder. The folder is being updated properly when I change classes in the template files, but Reloadium doesn't register a change, even when I make a change to `app.py`. ## To Reproduce Steps to reproduce the behavior: 1. Basic flask app setup using pytailwindcss and reloadium 2. Run tailwindcss in watch mode 3. Save template file ## Expected behavior Reloadium would refresh the browser with the updated template file. ## Desktop or remote (please complete the following information): - OS: MacOS - OS version: Ventura - M1 chip: yes - Reloadium package version: 1.1.0 - PyCharm plugin version: None - Editor: VsCode - Python Version: 3.11.2 - Python Architecture: [eg. 32bit, 64bit] - Run mode: Debug
open
2023-05-05T01:05:27Z
2023-07-21T18:21:02Z
https://github.com/reloadware/reloadium/issues/145
[ "enhancement" ]
sleithart
1
tfranzel/drf-spectacular
rest-api
543
@extend_schema for simple JSON responses
Hi: I have a simple JSON response in code like below: `return Response({'message': "this is a message"}, status=status.HTTP_400_BAD_REQUEST)` The above line generates the following HTTP response: ``` HTTP 400 ... {message : "this is a message"} ``` How do I use @extend_schema annotation to document this in OpenAPI format?
closed
2021-10-01T12:44:27Z
2021-10-04T20:36:39Z
https://github.com/tfranzel/drf-spectacular/issues/543
[]
hnataraj
3
fastapi/sqlmodel
fastapi
375
_sa_instance_state is dropped when SQLModels are nested. Is this expected?
### First Check

- [X] I added a very descriptive title to this issue.
- [X] I used the GitHub search to find a similar issue and didn't find it.
- [X] I searched the SQLModel documentation, with the integrated search.
- [X] I already searched in Google "How to X in SQLModel" and didn't find any information.
- [X] I already read and followed all the tutorial in the docs and didn't find an answer.
- [X] I already checked if it is not related to SQLModel but to [Pydantic](https://github.com/samuelcolvin/pydantic).
- [X] I already checked if it is not related to SQLModel but to [SQLAlchemy](https://github.com/sqlalchemy/sqlalchemy).

### Commit to Help

- [X] I commit to help with one of those options 👆

### Example Code

```python
from sqlmodel import SQLModel, Session, create_engine, Field

engine = create_engine("sqlite:///:memory:")


class A(SQLModel, table=True):
    id: int | None = Field(primary_key=True)
    x: int

    def blow_up(self, session: Session):
        self.x = 42
        session.add(self)


class B(SQLModel):
    a: A

    def blow_up(self, session: Session):
        self.a.blow_up(session)


if __name__ == "__main__":
    SQLModel.metadata.create_all(engine)
    with Session(engine) as session:
        a = A(x=1)
        session.add(a)
        # Placing `a` inside a `B` instance appears to copy the `a` instance,
        # stripping `_sa_instance_state` from it. Is this intentional?
        b = B(a=a)
        b.blow_up(session)
```

### Description

* Create a SQLModel instance (`a`)
* Create a non-table SQLModel (`b`) and nest `a` inside it
* Note that `b.a._sa_instance_state` is not present, whereas `a._sa_instance_state` is
* The call to `b.blow_up()` will result in SQLAlchemy failing to find instance state

```
File "/Users/me/.virtualenvs/sqlmodel-issue-u6SafBdE/lib/python3.10/site-packages/sqlalchemy/orm/attributes.py", line 2254, in set_attribute
    state, dict_ = instance_state(instance), instance_dict(instance)
AttributeError: 'A' object has no attribute '_sa_instance_state'
```

### Operating System

macOS

### Operating System Details

Monterey 12.4

### SQLModel Version

0.0.6

### Python Version

Python 3.10.4

### Additional Context

My question is: what _should_ the expected behavior here be? Stepping back: nesting pydantic models like this is pretty natural; that's why I found this behavior surprising. But: perhaps nesting is not-so-natural when working with SQLModels? I'm not sure. The upshot is that you can't perform database operations on nested instances, for example, by calling `b.a.some_method_that_does_database_stuff()`. (The behavior is unchanged if `B` derives directly from `pydantic.BaseModel`, too.) If the behavior we're seeing is not the desired behavior, I'm happy to contribute a PR, provided we have a clear understanding of what the right behavior should be. Thanks for all the hard work on FastAPI, Typer, and SQLModel!
open
2022-07-13T17:53:43Z
2024-06-18T09:33:29Z
https://github.com/fastapi/sqlmodel/issues/375
[ "question" ]
davepeck
4
pydata/pandas-datareader
pandas
421
World Bank remote data error: need new url?
I am running the Anaconda distribution of Python 3. I made sure that pandas_datareader was up to date by running:

`pip install -U pandas_datareader`

Then I tried running the following:

```
from pandas_datareader import wb
wb.download(indicator='SP.POP.TOTL', country='WLD', start=1800, end=2100)
```

Upon which I get the error, traceback:

```
Traceback (most recent call last):
  File "/Users/jche11/anaconda/lib/python3.6/site-packages/IPython/core/interactiveshell.py", line 2862, in run_code
    exec(code_obj, self.user_global_ns, self.user_ns)
  File "<ipython-input-2-1a0b0bf45d07>", line 2, in <module>
    wb.download(indicator='SP.POP.TOTL', country='WLD', start=1800, end=2100)
  File "/Users/jche11/anaconda/lib/python3.6/site-packages/pandas_datareader/wb.py", line 379, in download
    **kwargs).read()
  File "/Users/jche11/anaconda/lib/python3.6/site-packages/pandas_datareader/wb.py", line 163, in read
    return self._read()
  File "/Users/jche11/anaconda/lib/python3.6/site-packages/pandas_datareader/wb.py", line 172, in _read
    df = self._read_one_data(self.url + indicator, self.params)
  File "/Users/jche11/anaconda/lib/python3.6/site-packages/pandas_datareader/base.py", line 81, in _read_one_data
    out = self._get_response(url, params=params).json()
  File "/Users/jche11/anaconda/lib/python3.6/site-packages/pandas_datareader/base.py", line 139, in _get_response
    raise RemoteDataError('Unable to read URL: {0}'.format(url))
pandas_datareader._utils.RemoteDataError: Unable to read URL: http://api.worldbank.org/countries/WLD/indicators/SP.POP.TOTL?date=1800%3A2100&per_page=25000&format=json
```

Copying that last URL from the error message into a browser results in a request error from that page: http://api.worldbank.org/countries/WLD/indicators/SP.POP.TOTL?date=1800%3A2100&per_page=25000&format=json

I looked at the World Bank API [documentation](https://datahelpdesk.worldbank.org/knowledgebase/articles/898581-api-basic-call-structure), and it appears that the base URL is now http://api.worldbank.org/v2/

I modified the URL from the error message to add "/v2" after worldbank.org: http://api.worldbank.org/v2/countries/WLD/indicators/SP.POP.TOTL?date=1800%3A2100&per_page=25000&format=json

Copying that into a browser successfully returned data.
closed
2017-11-18T10:30:52Z
2018-01-18T22:30:20Z
https://github.com/pydata/pandas-datareader/issues/421
[]
SummerIsHere
7
iperov/DeepFaceLab
deep-learning
5,307
Face applied is too small
Hi, I reached the 7th step (Merging) using X64, and I found that the face (from the source image) applied onto the dst image is too small. Is there any way to solve this? (I know how to enlarge the face, but that is not the problem; the problem is that the face_src is a small square, and if enlarged it just becomes a big square, not a face shape.) Attached are a screenshot of the problem during merging and the data_src face. Thanks! ![Capture](https://user-images.githubusercontent.com/82266096/114267192-eec41b00-9a2c-11eb-9b66-faf8f2a293dd.PNG) ![168275759_4385127121517108_1369924073355071737_n](https://user-images.githubusercontent.com/82266096/114267213-f5eb2900-9a2c-11eb-8c5e-5b66e9e3c026.png)
open
2021-04-10T10:49:58Z
2023-06-08T22:22:25Z
https://github.com/iperov/DeepFaceLab/issues/5307
[]
deepfaker1
1
Gozargah/Marzban
api
922
502 bad gateway =grpc
Hello. After a day or two, clients start hitting a 502 error on gRPC connections, while every config except gRPC keeps working. If the panel is reinstalled, it works again, then stops again after two days. The server runs nginx in single-port mode, and the version is 4.9. I can't capture server logs either. I'd appreciate your guidance. Here are my nginx settings:

```
server {
    listen unix:/run/nginx-h1.socket proxy_protocol;
    location ~* ^\/(dashboard|api|sub|docs|openapi.json).* {
        proxy_pass http://unix:/run/marzban.socket;
        set_real_ip_from unix:;
        real_ip_header proxy_protocol;
        proxy_http_version 1.1;
        proxy_set_header Host $http_host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-Host $host;
        proxy_set_header X-Forwarded-Port $server_port;
    }
}
server {
    listen unix:/run/nginx-h2c.socket http2 proxy_protocol;
    set_real_ip_from unix:;
    real_ip_header proxy_protocol;
    location ~* ^\/(dashboard|api|sub|docs|openapi.json).* {
        proxy_pass http://unix:/run/marzban.socket;
        set_real_ip_from unix:;
        real_ip_header proxy_protocol;
        proxy_http_version 1.1;
        proxy_set_header Host $http_host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-Host $host;
        proxy_set_header X-Forwarded-Port $server_port;
    }
    grpc_set_header X-Real-IP $remote_addr;
    grpc_read_timeout 1h;
    grpc_send_timeout 1h;
    location /tg {
        if ($content_type !~ "application/grpc") {
            return 404;
        }
        client_body_buffer_size 512k;
        client_body_timeout 52w;
        client_max_body_size 0;
        grpc_pass unix:/run/xray-trojan-grpc.socket;
    }
    location /mg {
        if ($content_type !~ "application/grpc") {
            return 404;
```
closed
2024-04-04T21:54:27Z
2024-07-31T16:10:04Z
https://github.com/Gozargah/Marzban/issues/922
[ "Bug" ]
janahnovin
1
tox-dev/tox
automation
2,630
host_python is not printed part of the core configuration
See https://github.com/tox-dev/tox/blob/legacy/src/tox/session/commands/show_config.py#L57
closed
2022-12-08T03:28:08Z
2023-06-17T01:18:15Z
https://github.com/tox-dev/tox/issues/2630
[ "bug:normal", "help:wanted" ]
gaborbernat
0
nalepae/pandarallel
pandas
85
Bug, when nb_workers > nb_tasks with Memory File System
**Problem description** --------------------------------------------------------------------------- The process fails, when nb_workers > nb_tasks. **Example:** --------------------------------------------------------------------------- `pd.Series(np.arange(2)).parallel_apply(lambda x: x + 1)` INFO: Pandarallel will run on 32 workers. INFO: Pandarallel will use Memory file system to transfer data between the main process and `workers.` **Error:** --------------------------------------------------------------------------- ``` IndexError Traceback (most recent call last) <ipython-input-144-3c07ac8ca2ad> in <module> ----> 1 pd.Series(np.arange(2)).parallel_apply(lambda x: x + 1) ~/anaconda3/envs/py36/lib/python3.6/site-packages/pandarallel/pandarallel.py in closure(data, func, *args, **kwargs) 449 input_files, 450 output_files, --> 451 map_result, 452 ) 453 ~/anaconda3/envs/py36/lib/python3.6/site-packages/pandarallel/pandarallel.py in get_workers_result(use_memory_fs, nb_workers, show_progress_bar, nb_columns, queue, chunk_lengths, input_files, output_files, map_result) 375 if show_progress_bar: 376 progresses[worker_index] = chunk_lengths[worker_index] --> 377 progress_bars.update(progresses) 378 379 elif message_type is ERROR: ~/anaconda3/envs/py36/lib/python3.6/site-packages/pandarallel/utils/progress_bars.py in update(self, values) 116 """ 117 for index, value in enumerate(values): --> 118 bar, label = self.__bars[index].children 119 120 bar.value = value IndexError: list index out of range ```
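The mismatch in the traceback can be reproduced without pandarallel: there is one progress slot per worker but one bar per chunk, and with fewer chunks than workers the update loop indexes past the end of the bar list (the names below are illustrative, not pandarallel's internals):

```python
nb_workers = 32      # pool size reported by pandarallel
nb_chunks = 2        # the 2-element Series only splits into 2 chunks

bars = ["bar"] * nb_chunks          # one progress bar per chunk
progresses = [0] * nb_workers       # ...but one progress slot per worker

def update(values):
    for index, _ in enumerate(values):
        bars[index]                 # IndexError once index >= nb_chunks

try:
    update(progresses)
    raised = False
except IndexError:
    raised = True
assert raised

# Clamping the worker count to the number of chunks removes the mismatch:
effective = min(nb_workers, nb_chunks)
assert len([0] * effective) == len(bars)
```

If this matches pandarallel's internals, capping `nb_workers` at the number of chunks (or sizing both lists identically) would be the natural fix.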
closed
2020-04-03T19:58:43Z
2020-04-05T20:45:36Z
https://github.com/nalepae/pandarallel/issues/85
[]
Xetd71
4
pallets/flask
python
5,504
allow setting encoding in open_resource()
This is a duplicate of #1740 &mdash; that may have been closed for lack of a clear rationale, however, and I'd like to suggest it again with the following reasoning. The documentation currently gives this example for using `open_resource()`: ```python with app.open_resource("schema.sql") as f: conn.executescript(f.read()) ``` On Windows, however, this can fail to open a file encoded in UTF-8, which most are these days, and safer code looks like this: ```python with app.open_resource("schema.sql", mode="rb") as f: conn.executescript(f.read().decode("utf-8")) # type: ignore [attr-defined] ``` (The type comment is needed to prevent mypy from complaining about `f.read()` possibly being a string with no `.decode()` method, as it can't tell that the file was opened in 'rb' mode.) It would be cleaner and more flexible to be able to write: ```python with app.open_resource("schema.sql", encoding="utf-8") as f: conn.executescript(f.read()) ```
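The difference between the two patterns can be shown with plain `open` (stdlib only; `open_resource` essentially wraps the same call, though that is an assumption about its internals):

```python
import os
import tempfile

# Create a UTF-8 "resource" containing a non-ASCII character.
path = os.path.join(tempfile.mkdtemp(), "schema.sql")
with open(path, "w", encoding="utf-8") as f:
    f.write("-- café\nCREATE TABLE t (id INTEGER);\n")

# The portable pattern today: binary mode plus an explicit decode,
# immune to Windows' locale-dependent default text encoding.
with open(path, "rb") as f:
    sql = f.read().decode("utf-8")
assert "café" in sql

# The proposed API would reduce this to one keyword argument:
with open(path, encoding="utf-8") as f:
    assert f.read() == sql
```

The second form also keeps `f.read()` typed as `str`, which is what removes the need for the `# type: ignore` comment in the workaround above.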
closed
2024-06-20T16:05:45Z
2024-07-26T00:06:22Z
https://github.com/pallets/flask/issues/5504
[]
liffiton
0
PeterL1n/BackgroundMattingV2
computer-vision
85
inference question
When I'm using images for inference through the network, do I have to provide the background of the original image? I can't understand this; maybe I should read your paper.
closed
2021-04-13T02:53:15Z
2021-04-13T23:55:54Z
https://github.com/PeterL1n/BackgroundMattingV2/issues/85
[]
upperblacksmith
1
lukas-blecher/LaTeX-OCR
pytorch
226
Possible bug in computing positional embeddings for patches
Hi all - currently looking into fine-tuning this model and have run into an issue with images of varying sizes. For this example: max_height = 192, max_width = 672, patch_size = 16. The error-causing line is here:

```py
x += self.pos_embed[:, pos_emb_ind]
```

(pix2tex.models.hybrid line 25 in CustomVisionTransformer forward_features)

If I have an image of size 522 x 41, this line will throw an error. x consists of 99 patches (+ the cls token), making it size [100, 256]. However, the positional embedding indices are only 66 in length. I am currently investigating this issue but don't quite understand the formula used to compute how many positional embedding indices we are going to need. Right now it is computing 66 different indices when we should be getting 100. I think the issue arises when convolutions from the resnet embedder overlap and the formula doesn't account for this (it requires the image to be divisible by patch_size x patch_size for the formula to work). If anyone has any thoughts on how to fix this, let me know! I'm definitely no computer vision expert, but I believe a simple change to account for overlapping convolutions in the embedding may be enough to fix this!
open
2022-12-29T22:16:05Z
2022-12-30T16:22:05Z
https://github.com/lukas-blecher/LaTeX-OCR/issues/226
[]
ByrdOfAFeather
1
FujiwaraChoki/MoneyPrinter
automation
2
Error: 'videos'
```
[-] Error: 'videos'
127.0.0.1 - - [01/Feb/2024 11:49:43] "POST /api/generate HTTP/1.1" 200 -
```
The directory may be missing.
closed
2024-02-01T10:57:01Z
2024-02-01T13:35:09Z
https://github.com/FujiwaraChoki/MoneyPrinter/issues/2
[]
Hades1081
8
davidsandberg/facenet
computer-vision
459
Is SVM a good choice for large amount class classification?
I am trying to use an SVM to classify unknown images. In my use case, I have almost 1400 classes of people to classify. But when I apply an SVM in my pipeline, the performance is so bad that I really doubt whether SVM is the best choice. My pipeline is shown below.

1. Collect images for each class. The number of faces per person is not fixed, from about 600 at most down to 20 at least. The angle of the face is not fixed either.

2. Embed with facenet & remove outliers. After collecting all class images, feed them into facenet class by class. I remove outliers in each class using [Local Outlier Factor](http://scikit-learn.org/stable/modules/generated/sklearn.neighbors.LocalOutlierFactor.html) with n_neighbors=10.

3. Apply SVM as the final classifier. This is the stage where I run into the most difficulty. After surveying the internet, I think the reason may be that facenet outputs a 128-dimensional vector, which is much smaller than the number of classes.

Apart from the reason I mentioned (the mismatch between the embedding dimension and the number of classes), I think there are other possible causes of this problem:
- the bounding box is too small, or the resolution may be too low
- the algorithm I chose, or some parameter that could be tuned

I hope I have described my problem well enough for everyone to understand. If there is any other information I need to provide, please let me know. Thank you.
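For reference, a minimal runnable sketch of steps 2-3 on synthetic embeddings (all names and sizes are placeholders; the class count here is tiny, so it will not reproduce the 1400-class behavior, it only pins down the pipeline shape being discussed):

```python
import numpy as np
from sklearn.neighbors import LocalOutlierFactor
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)

# Stand-in for facenet output: 128-d embeddings for 3 identities
# (assumption: the real setup has ~1400 classes with 20-600 faces each)
centers = rng.normal(size=(3, 128))
X = np.vstack([c + 0.05 * rng.normal(size=(50, 128)) for c in centers])
y = np.repeat(np.arange(3), 50)

# Step 2 of the pipeline: drop per-class outliers with LOF, n_neighbors=10
keep = np.ones(len(X), dtype=bool)
for cls in np.unique(y):
    idx = np.flatnonzero(y == cls)
    flags = LocalOutlierFactor(n_neighbors=10).fit_predict(X[idx])  # -1 marks outliers
    keep[idx[flags == -1]] = False

# Step 3: a linear SVM on the surviving embeddings
clf = LinearSVC().fit(X[keep], y[keep])
acc = clf.score(X[keep], y[keep])
```

On well-separated synthetic clusters like these, a linear SVM fits easily; whether it scales to 1400 classes on 128-d embeddings is exactly the open question above.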
closed
2017-09-17T09:53:56Z
2017-11-07T02:49:00Z
https://github.com/davidsandberg/facenet/issues/459
[]
posutsai
4
modin-project/modin
pandas
7,203
Modin should work correctly with pandas, which uses pyarrow as a backend
The main known issue is that Modin's type predetermination system is based on NumPy types. Found in https://github.com/modin-project/modin/pull/7199.
closed
2024-04-19T11:14:04Z
2024-05-14T19:44:24Z
https://github.com/modin-project/modin/issues/7203
[ "new feature/request 💬", "P1" ]
anmyachev
1
xzkostyan/clickhouse-sqlalchemy
sqlalchemy
133
native driver and insert a literal_column value
**Describe the bug**
I'm trying to use alembic with the native driver, and when alembic inserts the new revision number I get a `KeyError` in `/usr/local/lib/python3.9/site-packages/clickhouse_driver/block.py`:

```
...
  File "/usr/local/lib/python3.9/site-packages/alembic/script/base.py", line 490, in run_env
    util.load_python_file(self.dir, "env.py")
  File "/usr/local/lib/python3.9/site-packages/alembic/util/pyfiles.py", line 97, in load_python_file
    module = load_module_py(module_id, path)
  File "/usr/local/lib/python3.9/site-packages/alembic/util/compat.py", line 182, in load_module_py
    spec.loader.exec_module(module)
  File "<frozen importlib._bootstrap_external>", line 855, in exec_module
  File "<frozen importlib._bootstrap>", line 228, in _call_with_frames_removed
  File "/opt/src/sgs2/alembic/clickhouse/env.py", line 83, in <module>
    run_migrations_online()
  File "/opt/src/sgs2/alembic/clickhouse/env.py", line 77, in run_migrations_online
    context.run_migrations()
  File "<string>", line 8, in run_migrations
  File "/usr/local/lib/python3.9/site-packages/alembic/runtime/environment.py", line 813, in run_migrations
    self.get_context().run_migrations(**kw)
  File "/usr/local/lib/python3.9/site-packages/alembic/runtime/migration.py", line 568, in run_migrations
    head_maintainer.update_to_step(step)
  File "/usr/local/lib/python3.9/site-packages/alembic/runtime/migration.py", line 753, in update_to_step
    self._insert_version(vers)
  File "/usr/local/lib/python3.9/site-packages/alembic/runtime/migration.py", line 693, in _insert_version
    self.context.impl._exec(
  File "/usr/local/lib/python3.9/site-packages/alembic/ddl/impl.py", line 146, in _exec
    return conn.execute(construct, multiparams)
  File "/usr/local/lib/python3.9/site-packages/sqlalchemy/engine/base.py", line 1262, in execute
    return meth(self, multiparams, params, _EMPTY_EXECUTION_OPTS)
  File "/usr/local/lib/python3.9/site-packages/sqlalchemy/sql/elements.py", line 329, in _execute_on_connection
    return connection._execute_clauseelement(
  File "/usr/local/lib/python3.9/site-packages/sqlalchemy/engine/base.py", line 1451, in _execute_clauseelement
    ret = self._execute_context(
  File "/usr/local/lib/python3.9/site-packages/sqlalchemy/engine/base.py", line 1813, in _execute_context
    self._handle_dbapi_exception(
  File "/usr/local/lib/python3.9/site-packages/sqlalchemy/engine/base.py", line 1998, in _handle_dbapi_exception
    util.raise_(exc_info[1], with_traceback=exc_info[2])
  File "/usr/local/lib/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_
    raise exception
  File "/usr/local/lib/python3.9/site-packages/sqlalchemy/engine/base.py", line 1750, in _execute_context
    self.dialect.do_executemany(
  File "/usr/local/lib/python3.9/site-packages/clickhouse_sqlalchemy/drivers/base.py", line 1132, in do_executemany
    cursor.executemany(statement, parameters, context=context)
  File "/usr/local/lib/python3.9/site-packages/clickhouse_sqlalchemy/drivers/native/connector.py", line 170, in executemany
    response = execute(
  File "/usr/local/lib/python3.9/site-packages/clickhouse_driver/client.py", line 242, in execute
    rv = self.process_insert_query(
  File "/usr/local/lib/python3.9/site-packages/clickhouse_driver/client.py", line 471, in process_insert_query
    rv = self.send_data(sample_block, data,
  File "/usr/local/lib/python3.9/site-packages/clickhouse_driver/client.py", line 522, in send_data
    block = block_cls(sample_block.columns_with_types, chunk,
  File "/usr/local/lib/python3.9/site-packages/clickhouse_driver/block.py", line 39, in __init__
    self.data = self.normalize(data or [])
  File "/usr/local/lib/python3.9/site-packages/clickhouse_driver/block.py", line 127, in normalize
    self._mutate_dicts_to_rows(data)
  File "/usr/local/lib/python3.9/site-packages/clickhouse_driver/block.py", line 161, in _mutate_dicts_to_rows
    data[i] = [row[name] for name in column_names]
  File "/usr/local/lib/python3.9/site-packages/clickhouse_driver/block.py", line 161, in <listcomp>
    data[i] = [row[name] for name in column_names]
KeyError: 'version_num'
```

As I understand it, this happens because `ClickHouseNativeCompiler` truncates the `INSERT` statement and `ClickHouseNativeCompiler.construct_params` doesn't return bind parameters (because `literal_column` doesn't bind anything, as far as I can see). I'm thinking I could check the insert statement for `literal_column` in `ClickHouseNativeCompiler.visit_insert` and conditionally truncate the query, but I'd like your opinion on that.

**To Reproduce**

```python
from sqlalchemy import (
    Column,
    types,
    func,
    Table,
    literal_column,
    MetaData,
    create_engine,
)

from clickhouse_sqlalchemy import make_session, engines

engine = create_engine("clickhouse+native://default:@localhost/default")
session = make_session(engine)

tbl = Table(
    "alembic_version",
    MetaData(),
    Column("version_num", types.String(), nullable=False),
    Column("dt", types.DateTime, server_default=func.now()),
    engines.ReplacingMergeTree(version="dt", order_by=func.tuple()),
)

version = "hello"
session.execute(tbl.insert().values(version_num=literal_column("'%s'" % version)))
```

**Expected behavior**
The insert should happen without exceptions.

**Versions**
- Version of package with the problem. I use the `feature-sa-1.4` branch, but I think this problem exists in master too.
```
alembic==1.6.2
SQLAlchemy==1.4.15
git+https://github.com/xzkostyan/clickhouse-sqlalchemy.git@feature-sa-1.4#egg=clickhouse-sqlalchemy
```
- Python version: 3.9.5
closed
2021-05-31T17:34:31Z
2022-02-03T13:31:35Z
https://github.com/xzkostyan/clickhouse-sqlalchemy/issues/133
[]
nikitka
1
christabor/flask_jsondash
plotly
36
Integrate plotlyjs
closed
2016-08-26T19:59:03Z
2016-09-09T22:23:53Z
https://github.com/christabor/flask_jsondash/issues/36
[ "enhancement" ]
christabor
0
modin-project/modin
data-science
7,355
BUG: Cpu count would be set incorrectly on a cluster.
On a cluster, the CPU count is by default set to the number of CPUs on the head node (the node where Modin initialization is performed). It should instead be set to the total number of CPUs in the cluster. This issue is similar to https://github.com/modin-project/modin/issues/2798
closed
2024-08-01T14:53:02Z
2024-08-01T16:33:13Z
https://github.com/modin-project/modin/issues/7355
[]
arunjose696
0
dpgaspar/Flask-AppBuilder
flask
2,257
babel-extract hangs when 'writing PO template file to ./babel/messages.pot'
### Environment Flask-Appbuilder version: 4.5.0 pip freeze output: aliyun-python-sdk-core==2.13.36 aliyun-python-sdk-kms==2.16.1 apispec==6.6.1 appdirs==1.4.4 async-timeout==4.0.3 atomicwrites==1.4.0 attrs==21.2.0 azure-core==1.24.2 azure-identity==1.10.0 Babel==2.15.0 bcrypt==4.0.1 blinker==1.6.2 Brotli==1.1.0 cachelib==0.9.0 cachetools==4.2.4 cattrs==1.9.0 cerence @ file:///C:/GL/ncs/tts/mobility-ncs-tts-tools/cerence-tts-oc2-proto-python certifi==2021.10.8 cffi==1.15.1 charset-normalizer==2.0.7 click==8.1.4 colorama==0.4.6 ConfigArgParse==1.7 crcmod==1.7 cryptography==37.0.4 dataclasses-json==0.6.7 deepmerge==1.1.0 Deprecated==1.2.14 dnspython==2.6.1 dohq-artifactory==0.7.742 email_validator==2.2.0 et-xmlfile==1.1.0 exceptiongroup==1.1.3 Faker==18.7.0 filelock==3.0.4 Flask==2.3.2 Flask-AppBuilder==4.5.0 Flask-Babel==2.0.0 Flask-BasicAuth==0.2.0 Flask-Caching==2.3.0 Flask-Cors==4.0.0 Flask-JSON==0.4.0 Flask-JWT-Extended==4.6.0 Flask-Limiter==3.7.0 Flask-Login==0.6.3 Flask-SQLAlchemy==2.5.1 Flask-WTF==1.2.1 future==0.18.3 gevent==24.2.1 geventhttpclient==2.0.11 gitdb==4.0.9 GitPython==3.1.24 google-api-core==2.8.2 google-api-python-client==2.51.0 google-auth==2.3.2 google-auth-httplib2==0.1.0 googleapis-common-protos==1.56.2 greenlet==3.0.3 grpcio==1.46.3 grpcio-tools==1.46.3 httplib2==0.20.1 idna==3.3 importlib-metadata==4.12.0 importlib-resources==5.10.0 iniconfig==1.1.1 itsdangerous==2.1.2 Jinja2==3.1.2 jmespath==0.10.0 jsonschema==4.16.0 kubernetes==19.15.0 limits==3.13.0 locust==2.18.4 lxml==4.6.3 markdown-it-py==3.0.0 MarkupSafe==2.1.1 marshmallow==3.21.3 marshmallow-sqlalchemy==0.28.2 mdurl==0.1.2 msal==1.18.0 msal-extensions==1.0.0 msgpack==1.0.8 mypy-extensions==1.0.0 mysqlclient @ file:///C:/Users/yang.lei/Downloads/mysqlclient-1.4.6-cp38-cp38-win32.whl#sha256=1f4d3c3fa7d7f1683071ad12096e5494179e7fa837c5210c3032c81d72f33e3a natsort==8.3.1 netmiko==4.2.0 nlu-spec-extract @ file:///C:/GL/ncs/tts/onecloud2.0/qa/nlu-spec-extract ntc_templates==4.0.1 
numpy==1.21.4 oauthlib==3.1.1 openpyxl==3.1.2 ordered-set==4.1.0 orjson==3.10.5 oss2==2.18.0 packaging==21.3 pandas==1.3.4 paramiko==3.3.1 pkgutil_resolve_name==1.3.10 pluggy==1.0.0 portalocker==2.5.1 prison==0.2.1 protobuf==3.20.1 psutil==5.9.8 py==1.11.0 pyasn1==0.4.8 pyasn1-modules==0.2.8 PyAudio @ file:///C:/NuanTools/py_tts/PyAudio-0.2.11-cp38-cp38-win32.whl pycparser==2.21 pycryptodome==3.18.0 pygame==2.1.2 Pygments==2.18.0 PyJWT==2.8.0 PyMuPDF==1.23.3 PyMuPDFb==1.23.3 PyMySQL==1.1.0 PyNaCl==1.5.0 PyOgg @ file:///C:/G/PyOgg pyparsing==2.4.7 pyproj==3.5.0 pyrsistent==0.18.1 pyserial==3.5 pytest==7.4.0 pytest-datadir==1.4.1 pytest-html==3.2.0 pytest-metadata==2.0.4 python-dateutil==2.8.2 python-gitlab==3.15.0 python-Levenshtein==0.12.2 pytz==2021.3 pywin32==304 PyYAML==6.0 pyzmq==25.1.2 qa @ file:///C:/GL/ncs/tts/onecloud2.0/qa/qa redis==5.0.1 redlock-py==1.0.8 requests==2.26.0 requests-cache==0.8.1 requests-oauthlib==1.3.0 requests-toolbelt==1.0.0 rich==13.7.1 roundrobin==0.0.4 rsa==4.7.2 rsqa @ file:///C:/GL/ncs/tts/onecloud2.0/qa/rsqa scp==0.14.5 six==1.16.0 smmap==5.0.0 sounddevice==0.4.6 soundfile==0.12.1 SQLAlchemy==1.4.52 SQLAlchemy-Utils==0.41.2 teamcity-messages==1.21 textfsm==1.1.3 thefuzz==0.19.0 tinydb==4.6.1 tomli==2.0.1 tts-grpc-python-client @ file:///C:/GL/ncs/tts/mobility-ncs-tts-tools/tts-grpc-python-client typing-inspect==0.9.0 typing_extensions==4.3.0 uritemplate==4.1.1 url-normalize==1.4.3 urllib3==1.26.7 websocket-client==1.2.1 Werkzeug==2.3.6 wrapt==1.16.0 WTForms==3.1.2 xhlib @ file:///C:/G/xhlib xmltodict==0.12.0 yamale==4.0.2 zhconv==1.4.3 zipp==3.8.1 zope.event==5.0 zope.interface==6.2 ### Describe the expected results flask fab babel-extract run successfully. ### Describe the actual results the last output hangs more than 10min, so i pressed ctrl+C, no luck, then i closed the terminal. 
```powershell PS C:\Users\yang.lei> C:\pv\tts\Scripts\activate.ps1 (tts) PS C:\Users\yang.lei> cd C:\G\fab (tts) PS C:\G\fab> $Env:FLASK_APP='app/__init__.py' (tts) PS C:\G\fab> flask fab babel-extract Starting Extractions config:./babel/babel.cfg input:. output:./babel/messages.pot keywords:('lazy_gettext', 'gettext', '_', '__') Starting Update target:app/translations Finish, you can start your translations (tts) PS C:\G\fab> extracting messages from a.py updating catalog app/translations\pt\LC_MESSAGES\messages.po based on ./babel/messages.pot extracting messages from config.py extracting messages from app\__init__.py extracting messages from app\data.py extracting messages from app\models.py extracting messages from app\views.py writing PO template file to ./babel/messages.pot ```
open
2024-07-04T02:35:57Z
2024-07-04T02:45:19Z
https://github.com/dpgaspar/Flask-AppBuilder/issues/2257
[]
LeiYangGH
1
jmcnamara/XlsxWriter
pandas
785
Documentation about changes regarding write_rich_string should include that they are backwards incompatible
Hi, I've recently upgraded from a version before Release 1.1.2 (October 20, 2018) to the latest 1.3.7, and I've been having compatibility issues with `write_rich_string` specifically. This used to be supported:

```
worksheet.write_rich_string(ROW, COL, string, format)
```

which now returns -5:

```
site-packages/xlsxwriter/worksheet.py:1030: UserWarning: You must specify more than 2 format/fragments for rich strings. Ignoring input in write_rich_string().
  warn("You must specify more than 2 format/fragments for rich "
```

The fix for me here is to use `write_string`, and to clearly check the return codes. I'd simply suggest adding to the changelog that this release (Release 1.1.2) introduced a backwards incompatibility for this function. I hope this saves some time for others.
closed
2021-02-04T15:57:45Z
2021-02-07T08:27:47Z
https://github.com/jmcnamara/XlsxWriter/issues/785
[ "wont_fix" ]
optinirr
2
ymcui/Chinese-LLaMA-Alpaca
nlp
412
How can I make the model output answers in a streaming fashion?
After merging the LoRA weights into the model and launching inference, the model outputs the whole answer at once. How can I configure it, or modify the code, so that the model outputs the answer one token at a time?
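If the inference script uses Hugging Face Transformers, the usual approach is `transformers.TextIteratorStreamer` passed as the `streamer` argument to `model.generate()` (with generation running in a background thread). The pattern reduces to a generator that yields tokens as they are produced; a framework-free sketch of that shape (all names here are placeholders, not the project's actual API):

```python
def generate_stream(prompt, answer_tokens):
    """Stand-in for streamed model.generate(): yield each token as soon as it is decoded."""
    for tok in answer_tokens:
        yield tok  # a real implementation runs one decoding step here

tokens = list("你好,世界")  # hypothetical answer, one character at a time
pieces = []
for tok in generate_stream("示例问题", tokens):
    pieces.append(tok)  # a console UI would do: print(tok, end="", flush=True)
streamed = "".join(pieces)
print(streamed)  # 你好,世界
```

The caller sees partial output as soon as each token arrives instead of waiting for the full answer.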
closed
2023-05-23T06:59:18Z
2023-06-03T22:01:59Z
https://github.com/ymcui/Chinese-LLaMA-Alpaca/issues/412
[ "stale" ]
jcxian
4
pywinauto/pywinauto
automation
487
Unable to select menu item from popup menu
Hi, I am very new to Python/pywinauto. While learning, I stumbled on selecting a popup menu item. We are using a Microsoft WPF application. I have installed Python 3 and the pywinauto 0.6.4 library. I am trying the following: right-click on the calendar to open the ContextMenu, which works fine. But after that I am unable to select the menu item. I have tried many ways but still can't select it. Here is the code I am trying. It looks like the context menu is not part of the window specification. I really need this to work.

```python
from pywinauto.application import Application
import time
import pywinauto
import pywinauto.controls.common_controls
import pywinauto.controls.uia_controls
import pywinauto.controls.win32_controls
```

This code is working fine:

```python
# connect to ...... window
app = Application(backend="uia").connect(path=r"C:\Program Files (x86)\.........\UIShell.exe")
dlg = app['abc']
w_handle = pywinauto.findwindows.find_windows(title=u'ABC | EFG | XYZ', class_name='Window')
window = app.window_(handle=w_handle)
MyApp1 = app.window(class_name='Window')
GroupB = MyApp1.GroupBox.window_text()
print("Today's date on calendar", GroupB)
Popupmenu1 = MyApp1.TabControl1.type_keys("{TAB 9}").right_click_input()
```

AFTER this code I am unable to select the menu item. I tried the following:

```python
pmenu = MyApp1.child_window(control_type="Menu", title="Context").window_text()
pmenuitem = MyApp1.child_window(control_type="menuitem", title=u"Schedule").window_text()
# MyApp1.PopupMenu.menu_item("Schedule").click_input()
pmenu.menu_select(pmenuitem).click_input()
```
Here is a screenshot of what I am trying:

![image](https://user-images.githubusercontent.com/38377501/38768426-b8190a34-3fc1-11e8-84e1-5dfdf9e1704d.png)

I used inspect.exe as the object spy:

![image](https://user-images.githubusercontent.com/38377501/38768485-7aaf8dd4-3fc2-11e8-80b2-9d477f82baff.png)
![image](https://user-images.githubusercontent.com/38377501/38768497-b39af412-3fc2-11e8-8590-3d9cc0ebed4f.png)
![image](https://user-images.githubusercontent.com/38377501/38768527-42ca6df2-3fc3-11e8-9046-1fc57754c1fb.png)

I hope I have explained it the right way. I would very much appreciate your help.

Thanks,
Smita
closed
2018-04-14T13:10:59Z
2018-04-19T12:39:00Z
https://github.com/pywinauto/pywinauto/issues/487
[ "question" ]
smitagodbole
0
jina-ai/serve
fastapi
5,516
Create a new github repository for load testing Jina Flow
Use [Locust](https://locust.io) as the load testing framework and Delivery Hero [Locust helm-charts](https://github.com/deliveryhero/helm-charts/tree/master/stable/locust) for generating Kubernetes deployment templates. Jina Flow deployment will also need to be generated from the repository.
closed
2022-12-13T16:03:45Z
2023-01-26T08:29:18Z
https://github.com/jina-ai/serve/issues/5516
[]
girishc13
4
scrapy/scrapy
web-scraping
6,049
`downloadermiddlewares.retry.BackwardsCompatibilityMetaclass` does not provide backward compatibility for middleware instances
# Description

Previously, `EXCEPTIONS_TO_RETRY` was an attribute of `RetryMiddleware`. This allowed:
- `RetryMiddleware` subclasses to access `EXCEPTIONS_TO_RETRY` via `cls.EXCEPTIONS_TO_RETRY`.
- `RetryMiddleware` instances and instances of its subclasses to access `EXCEPTIONS_TO_RETRY` via `self.EXCEPTIONS_TO_RETRY`.

In 2.10, `EXCEPTIONS_TO_RETRY` was removed and added as a property to `BackwardsCompatibilityMetaclass`. This added compatibility only for the first point.

# Steps to Reproduce

```python
class MyRetryMiddleware(RetryMiddleware):
    def process_exception(self, request, exception, spider):
        if isinstance(exception, self.EXCEPTIONS_TO_RETRY) and not request.meta.get('dont_retry', False):
            # update request
            return self._retry(request, exception, spider)
```

# Expected behavior

A warning about `EXCEPTIONS_TO_RETRY` deprecation.

# Actual behavior

AttributeError: 'MyRetryMiddleware' object has no attribute 'EXCEPTIONS_TO_RETRY'

# Versions

```
Scrapy       : 2.10.1
lxml         : 4.9.3.0
libxml2      : 2.10.3
cssselect    : 1.2.0
parsel       : 1.8.1
w3lib        : 2.1.2
Twisted      : 22.10.0
Python       : 3.11.4 (tags/v3.11.4:d2340ef, Jun 7 2023, 05:45:37) [MSC v.1934 64 bit (AMD64)]
pyOpenSSL    : 23.2.0 (OpenSSL 3.1.2 1 Aug 2023)
cryptography : 41.0.3
Platform     : Windows-10-10.0.19044-SP0
```
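The root cause can be shown in isolation: instance attribute lookup only searches `type(instance).__mro__` and never consults the metaclass, so a property defined on the metaclass is visible on the class but not on its instances. A minimal standalone sketch (all names are illustrative, not Scrapy's actual classes):

```python
class _CompatMeta(type):
    # illustrative stand-in for BackwardsCompatibilityMetaclass
    @property
    def EXCEPTIONS_TO_RETRY(cls):
        return (ValueError, KeyError)

class Middleware(metaclass=_CompatMeta):
    pass

# Class-level access goes through the metaclass, so this works:
assert Middleware.EXCEPTIONS_TO_RETRY == (ValueError, KeyError)

# Instance attribute lookup never reaches the metaclass, so this
# (and therefore `self.EXCEPTIONS_TO_RETRY`) raises AttributeError:
try:
    Middleware().EXCEPTIONS_TO_RETRY
    raised = False
except AttributeError:
    raised = True
assert raised
```

This is exactly why the metaclass-based shim covers the first bullet above but not the second.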
closed
2023-09-13T16:55:14Z
2023-09-15T13:51:06Z
https://github.com/scrapy/scrapy/issues/6049
[]
Prometheus3375
1
nerfstudio-project/nerfstudio
computer-vision
3,425
Can’t render splatfacto videos higher than 1080p
Splatfacto videos and images only render at 1920x1080, even if a higher resolution like 3840x2160 is set in the viewer and camera path .json. The viewer does pass the correct resolution to the .json when you press “generate command”, but it only renders in 1080p for some reason.
closed
2024-09-12T20:32:28Z
2024-09-12T20:47:58Z
https://github.com/nerfstudio-project/nerfstudio/issues/3425
[]
gradeeterna
1
keras-team/keras
deep-learning
20,360
Propose New Layer Type (Control) for Handling Forward and Backward (Gradient Flow) with Control Mask over samples, Supporting Backend Agnosticism
I want to propose a new layer type (`control`) that sits in `keras/src/layers/control/`, consisting of `SkipGrad`, `StopGrad`, and `Switch`:

* **StopGrad**: Not only a wrapper around `ops.stop_gradient()`, but also a controlled version for selected samples: the gradient for sample i is either allowed to propagate or blocked (e.g., due to a lack of a target for a certain task in the context of multi-task learning).
* **Switch**: A layer to control which of two layers (with the same shape) is used in the forward pass. The gradient flows to the selected layer but not the unselected one (e.g., use the real sample if the control mask is 1, otherwise the fake sample, in the context of a GAN).
* **SkipGrad**: A layer to control whether a layer should be skipped during backward propagation. Unlike stop-gradient, which also stops the gradient for all previous layers, `SkipGrad` still allows the gradient to flow to the previous layers.

These three controls can be understood intuitively using this truth table: ![image](https://github.com/user-attachments/assets/674d51df-fcbb-4187-9c6b-35dff65c538a)

As a proof of concept, I have already defined them when training a static GAN, where the training stages are controlled by a control mask. I have defined `SkipGrad`, `StopGrad`, and `Switch`, and all three have been used in the GAN case. ![Untitled](https://github.com/user-attachments/assets/b1631fdd-a1d2-420d-ba55-230851f91052)

Here is the result of training it locally for 100 epochs: ![gan_image_evolution_2(1)](https://github.com/user-attachments/assets/99fae2de-21f2-498b-9c00-52aa70f1b3f4)

As another use case, these control layers could be useful in reinforcement learning or multi-task learning, specifically domain-specific learning. In fact, I came up with this idea based on my main work, where I'm working on ECG signal data.
There is an abnormal case that needs more specific handling due to the lack of a target, while the normal case should still train as usual. I could indeed use manual gradient control, but controlling the gradient flow through a control mask is higher-level than overriding `train_step()`. Moreover, I don't see any docs on keras.io that talk about backend-agnostic control of gradient flow; the closest doc I can find is `ops.custom_gradient`. Perhaps it could be useful for other control layers in the future. Some works, such as SkipNet and ResNet, use control mechanisms for how forward and backward propagation should be handled, but we need a more general way to control how the gradient should flow. Here is the Google Colaboratory notebook of my POC above: [static_dcgan](https://colab.research.google.com/drive/1aUhoyvzhD9IA0gATBZ7FbpaTyauL4SxL?usp=sharing) So, what do you think? Let me know if I'm allowed to open a draft pull request.
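The per-sample `StopGrad` described above boils down to the identity y = m·x + (1−m)·stop_gradient(x), whose gradient with respect to x is exactly the mask m. A framework-free numerical check of that identity (the stopped copy is held constant, which is what stop_gradient means; all names are illustrative):

```python
import numpy as np

def gated(x, x_stopped, mask):
    # mask == 1: gradient flows through x; mask == 0: the value still passes,
    # but via the constant copy, so the gradient for that sample is blocked
    return mask * x + (1.0 - mask) * x_stopped

x = np.array([0.3, -1.2, 2.0, 0.7])
mask = np.array([1.0, 0.0, 1.0, 0.0])
const = x.copy()  # stand-in for ops.stop_gradient(x): same value, treated as a constant

# Finite-difference gradient of gated() w.r.t. x, holding the stopped copy fixed
eps = 1e-6
grad = np.array([
    (gated(x + eps * np.eye(len(x))[i], const, mask) - gated(x, const, mask))[i] / eps
    for i in range(len(x))
])
print(grad)  # ~= mask: the per-sample gradient equals the control mask
```

The forward value is unchanged (y == x elementwise), so only the backward pass is gated, which matches the truth-table semantics above.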
closed
2024-10-15T23:12:36Z
2024-10-17T23:50:17Z
https://github.com/keras-team/keras/issues/20360
[ "type:feature", "keras-team-review-pending" ]
ikhwanperwira
2
xlwings/xlwings
automation
1,744
Dictionary as input into UDF
This is more a question than an issue. I have a Python function that takes a dictionary as an input, and I would like to expose that function in Excel as a UDF. I am not sure how to get the Excel UDF to accept a two-column selection in a workbook as the dictionary needed by the Python function. Any ideas or help would be appreciated.
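One common pattern (a sketch only, not tested against a live workbook): in xlwings you would decorate the function with `@xw.func` and declare the argument with `@xw.arg("rows", ndim=2)` so that a two-column selection always arrives as a 2-D list of `[key, value]` rows, then fold those rows into a dict. The conversion step itself is plain Python:

```python
def rows_to_dict(rows):
    """Fold a two-column selection (a list of [key, value] rows) into a dict."""
    return {key: value for key, value in rows if key is not None}

# A selection like A1:B3 would arrive as a 2-D list, e.g.:
selection = [["alpha", 1.0], ["beta", 2.0], [None, None]]  # trailing blank row
print(rows_to_dict(selection))  # {'alpha': 1.0, 'beta': 2.0}
```

Skipping `None` keys handles the common case where the selected range includes empty trailing rows.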
closed
2021-10-21T11:58:02Z
2022-02-05T20:06:03Z
https://github.com/xlwings/xlwings/issues/1744
[]
sabualkaz
1
ivy-llc/ivy
pytorch
28,676
Fix Frontend Failing Test: tensorflow - mathematical_functions.jax.numpy.sinc
To-do List: https://github.com/unifyai/ivy/issues/27499
open
2024-03-24T12:06:37Z
2024-03-24T12:06:37Z
https://github.com/ivy-llc/ivy/issues/28676
[ "Sub Task" ]
ZJay07
0
deepset-ai/haystack
nlp
8,472
docs: docs for `LoggingTracer`
`LoggingTracer` was added in #8447 for easy inspection of what happens in Pipelines during experimentation. We should document this new tracer in [Logging docs](https://docs.haystack.deepset.ai/docs/logging) (and in [Tracing docs](https://docs.haystack.deepset.ai/docs/tracing)).
closed
2024-10-21T07:52:04Z
2024-10-23T14:21:03Z
https://github.com/deepset-ai/haystack/issues/8472
[ "type:documentation" ]
anakin87
1
FactoryBoy/factory_boy
sqlalchemy
138
Exception: AttributeError: 'module' object has no attribute 'fuzzy'
Got this strange exception with this code:

```python
import datetime

import factory

from client.factories import ClientFactory


class ProgramFactory(factory.django.DjangoModelFactory):
    FACTORY_FOR = 'program.Program'

    client = factory.SubFactory(ClientFactory)


class ProgramOutletFactory(factory.django.DjangoModelFactory):
    FACTORY_FOR = 'program.ProgramOutlet'

    program = factory.SubFactory(ProgramFactory)
    start_date = factory.fuzzy.FuzzyDate(datetime.date(2014, 1, 1), datetime.date(2014, 6, 1))
    end_date = factory.fuzzy.FuzzyDate(datetime.date(2014, 7, 1), datetime.date(2014, 12, 1))
```

If I include `import fuzzy` in `factory/__init__.py`, it works.
closed
2014-03-27T13:47:00Z
2015-07-25T12:20:20Z
https://github.com/FactoryBoy/factory_boy/issues/138
[]
syabro
4
clovaai/donut
computer-vision
171
vision encoder question
Why use Swin Transformer instead of BEiT? Just a question.
open
2023-03-30T02:29:12Z
2023-03-30T02:29:12Z
https://github.com/clovaai/donut/issues/171
[]
yysirs
0
pyqtgraph/pyqtgraph
numpy
3,003
parameterTree contents are not displayed on the correct position when the parameterTree widget has scroll bars
### Short description
I have a parameterTree which may become bigger than the space reserved for it in the layout. When this happens, scroll bars are shown, but part of the UI is displaced / does not scroll together with the rest of the UI. ![image](https://github.com/pyqtgraph/pyqtgraph/assets/151833162/4d4be00c-caae-4c29-9056-8cdc474d31c6) Scroll left: ![image](https://github.com/pyqtgraph/pyqtgraph/assets/151833162/5ab9dfdd-a483-470e-b448-3356c9e4f6b1) Scroll down; notice the first column has scrolled up but the second stays put, leading to a mismatch in the presented content: ![image](https://github.com/pyqtgraph/pyqtgraph/assets/151833162/122f4b14-befa-4ae9-8438-04175bc4cf7a)

### Code to reproduce
I stripped down the app to make a minimal example:

```python
from pyqtgraph.Qt import QtCore

try:
    from pyqtgraph.Qt.QtWidgets import QApplication, QGraphicsProxyWidget
except ModuleNotFoundError:
    # pyqtgraph < 0.13
    from pyqtgraph.Qt import QtWidgets as qwt
    QApplication = qwt.QApplication
    QGraphicsProxyWidget = qwt.QGraphicsProxyWidget

import pyqtgraph as pg
from pyqtgraph.parametertree import Parameter, ParameterTree

plot_params = []
for i in range(10):
    plot_params.append({"name": f"p{i}", "type": "int", "value": i})

parameters = [
    {"name": "Plot", "type": "group", "children": plot_params},
]


class Monitor:
    def __init__(self):
        self._app = QApplication([])
        self.paramtree = ParameterTree(showHeader=False)
        self._p = Parameter.create(
            name="Configuration", type="group", children=parameters
        )
        self.paramtree.setParameters(self._p, showTop=False)

        self.pw = pg.GraphicsView()
        self.pw.setWindowTitle("test")
        self.pw.show()

        self.ly = pg.GraphicsLayout()
        proxy = QGraphicsProxyWidget()
        proxy.setWidget(self.paramtree)
        proxy.setMinimumWidth(250)
        self.pltitem = pg.PlotItem()
        self.ly.addItem(self.pltitem, row=1, col=1)
        self.ly.addItem(proxy, row=1, col=2, rowspan=1, colspan=2)
        self.pw.setCentralWidget(self.ly)
        self.pw.resize(1800, 700)


if __name__ == "__main__":
    mon = Monitor()
    QApplication.instance().exec_()
```

Once launched, simply resize the window until the parameter tree shows a scroll bar; then it is easy to see the second column gets out of place: ![image](https://github.com/pyqtgraph/pyqtgraph/assets/151833162/7a88e9a1-0cc0-4724-ad45-3b3001ee9b1f)

### Tested environment(s)
I have reproduced this problem in:
* PyQtGraph version: 0.13.6 and 0.13.3
* Qt Python binding: PyQt5 5.15.6 Qt 5.15.2
* Python version: 3.11.5
* NumPy version: 1.25.1
* Operating system: Windows 10 Enterprise

Also in:
* PyQtGraph version: 0.12.1
* Qt Python binding: PyQt5 5.15.1 Qt 5.15.1
* Python version: 3.9.4
* NumPy version: 1.26.4
* Operating system: Windows 10 Enterprise
open
2024-04-24T09:25:19Z
2024-05-02T05:58:01Z
https://github.com/pyqtgraph/pyqtgraph/issues/3003
[ "help wanted", "parameterTree" ]
alberto-gomezcasado-AP
4
neuml/txtai
nlp
675
Pass options to underlying vector models
New vector models such as [Nomic Embed](https://huggingface.co/nomic-ai/nomic-embed-text-v1.5) have yet to be fully integrated into Hugging Face Transformers. In order to load, `trust_remote_code` needs to be set to true. Currently, there is no way to pass options to underlying models like the LLM pipeline framework. This change will enable similar functionality with vector model loading.
closed
2024-02-24T11:24:37Z
2024-02-28T02:22:17Z
https://github.com/neuml/txtai/issues/675
[]
davidmezzetti
0
yezyilomo/django-restql
graphql
279
When I deploy django-restql on AWS lambda, "?query={}" results into HTTP 400 error.
open
2021-09-13T14:33:37Z
2021-09-22T01:46:47Z
https://github.com/yezyilomo/django-restql/issues/279
[]
RadySonabu
6
aimhubio/aim
data-visualization
3,198
How do you get the git commit hash from a run when queried from a repository?
Basically, as the title says: how do you get the git commit hash from a run when queried from a repository? I know that I can store the git commit via

```Python
run = Run(
    repo = path_to_repo,
    experiment = "experiment_name",
    log_system_params = True
)
```

and I can see the Git Info Card in the web interface. However, when I try to query the git commit via the SDK:

```Python
repo = Repo(path_to_repo)
tag = "success"

# Run a basic query
query = f'"{tag}" in run.tags'
runs = [item.run for item in repo.query_runs(query, report_mode=0).iter_runs()]

for run in tqdm.tqdm(runs, desc="Extracting runs"):
    commit = run.__system_params.git_info.commit
    # more code
```

then I get the error `AttributeError: 'Run' object has no attribute '__system_params'`. Again, the same query from the web UI works just fine. I debugged through the `Run` object a little bit, but could not find any meaningful fields/members.

For reference, I am using aim `3.23.0`:
```
aim 3.23.0
aim-ui 3.23.0
aimrecords 0.0.7
aimrocks 0.5.2
```
closed
2024-07-29T08:27:23Z
2024-07-29T13:27:58Z
https://github.com/aimhubio/aim/issues/3198
[ "type / question" ]
sbuschjaeger
2