Dataset schema (one record per issue):
- repo_name: string (length 9-75)
- topic: string (30 classes)
- issue_number: int64 (1 to 203k)
- title: string (length 1-976)
- body: string (length 0-254k)
- state: string (2 classes)
- created_at: string (length 20)
- updated_at: string (length 20)
- url: string (length 38-105)
- labels: list (length 0-9)
- user_login: string (length 1-39)
- comments_count: int64 (0-452)
autokey/autokey
automation
685
clipboard.get_selection() gives previous selection when there's no selection
### Has this issue already been reported?
- [X] I have searched through the existing issues.

### Is this a question rather than an issue?
- [X] This is not a question.

### What type of issue is this?
Bug

### Which Linux distribution did you use?
Manjaro Linux Release 21.2.5, kernel 5.15 LTS

### Which AutoKey GUI did you use?
GTK

### Which AutoKey version did you use?
0.96.0-beta.10

### How did you install AutoKey?
autokey-git from the Arch Linux (AUR) repo

### Can you briefly describe the issue?
When no text is selected, clipboard.get_selection() returns the text that was selected when clipboard.get_selection() was previously invoked.

### Can the issue be reproduced?
Sometimes

### What are the steps to reproduce the issue?
1. Open a new file in the `geany` editor.
2. Write two lines there: `aaa` and `bbb`.
3. Select `aaa` and invoke a script like `~/.config/autokey/data/Sample Scripts/Selection Test.py`.
4. It shows that `aaa` was the selection.
5. Click on the `bbb` string (so that no text is selected).
6. Invoke the same script.
7. It again shows `aaa` as the selection.

### What should have happened?
Instead of returning `aaa` as the selection, there should be an exception saying nothing was selected.

### What actually happened?
The previous selection was returned even though no text was selected.

### Do you have screenshots?
_No response_

### Can you provide the output of the AutoKey command?
_No response_

### Anything else?
This bug does not occur in another editor: Mousepad (XFCE desktop).
closed
2022-03-25T17:33:05Z
2022-03-30T05:05:07Z
https://github.com/autokey/autokey/issues/685
[ "duplicate", "wontfix", "upstream bug", "scripting" ]
chang-zhao
5
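The behaviour in steps 3-7 above can be modelled with a toy clipboard to make the bug's effect concrete. This stub is purely illustrative (it is not AutoKey's implementation): it caches the last non-empty PRIMARY selection and never invalidates it on deselect, which reproduces the reported symptom.

```python
class ToyClipboard:
    """Models the reported behaviour: the GTK backend keeps returning
    the last non-empty PRIMARY selection after the user deselects."""

    def __init__(self):
        self._last = ""

    def select(self, text):
        self._last = text

    def deselect(self):
        pass  # the bug: the cached value is not invalidated

    def get_selection(self):
        return self._last


cb = ToyClipboard()
cb.select("aaa")                      # step 3: user selects "aaa"
assert cb.get_selection() == "aaa"    # step 4: script reports "aaa"
cb.deselect()                         # step 5: user clicks, nothing selected
assert cb.get_selection() == "aaa"    # step 7: stale "aaa" is returned again
```

A fix would have `deselect()` clear the cache (or `get_selection()` raise), which matches the expected behaviour described in the report.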
horovod/horovod
tensorflow
3760
Huge number of MPI_Allreduce calls when using NCCL
**Environment:** I used [NGC TensorFlow 1 container 22-09](https://docs.nvidia.com/deeplearning/frameworks/tensorflow-release-notes/rel-22-09.html#rel-22-09)

**Checklist:**
1. Did you search issues to find if somebody asked this question before?
2. If your question is about hang, did you read [this doc](https://github.com/horovod/horovod/blob/master/docs/running.rst)?
3. If your question is about docker, did you read [this doc](https://github.com/horovod/horovod/blob/master/docs/docker.rst)?
4. Did you check if your question is answered in the [troubleshooting guide](https://github.com/horovod/horovod/blob/master/docs/troubleshooting.rst)?

**Bug report:**
I tried running my application distributed with Horovod and got poor performance over the interconnect between two nodes, each running 2 A100 GPUs. Looking into it with NVIDIA Nsight Systems left me a bit confused: I thought Horovod used NCCL for allreduce and MPI only for some setup work, but my Nsight profile shows many long-running `MPI_Allreduce` operations and only a small number of short-running NCCL allreduces. Is this a problem in my setup? Can you please clarify for which purposes MPI and NCCL are used? Any hints to get better performance? I tried setting `HOROVOD_AUTOTUNE` but saw no improvement.

![Screenshot from 2022-10-30 15-04-52](https://user-images.githubusercontent.com/109974897/198883706-a3b9fd8b-7d51-4992-b0db-be740e751a11.png)
![Screenshot from 2022-10-30 15-05-19](https://user-images.githubusercontent.com/109974897/198883710-df41eb58-0ee7-4765-95e7-ae8d66e30b3e.png)
closed
2022-10-30T14:24:36Z
2023-01-15T22:35:06Z
https://github.com/horovod/horovod/issues/3760
[ "question", "wontfix" ]
Anlubi
2
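Horovod only routes tensor allreduces through NCCL when the installed wheel was built with NCCL support; otherwise it can silently fall back to MPI for the collectives, which would match the profile described above. A first diagnostic step is to check the build and, if needed, rebuild forcing NCCL. The commands below follow Horovod's install documentation; exact flags can vary by version, so treat this as a sketch to verify against the docs.

```shell
# Show which collective backends (NCCL, MPI, Gloo) the installed Horovod was built with
horovodrun --check-build

# If NCCL is not listed, rebuild forcing NCCL for GPU collectives
HOROVOD_GPU_OPERATIONS=NCCL pip install --no-cache-dir --force-reinstall horovod
```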
plotly/dash
plotly
2781
[BUG] Dash 2.16.0
Here's another example of an app that worked in 2.15 but not in 2.16. Might be related to some already reported issues, but it looks slightly different.

> Cannot read properties of undefined (reading 'DashIconify')

```
(This error originated from the built-in JavaScript code that runs Dash apps. Click to see the full stack trace or open your browser's console.)
TypeError: Cannot read properties of undefined (reading 'DashIconify')
    at dagcomponentfuncs.DMC_Button (http://127.0.0.1:8050/assets/dashAgGridComponentFunctions.js?m=1709410368.1990933:384:60)
    at renderWithHooks (http://127.0.0.1:8050/_dash-component-suites/dash/deps/react-dom@16.v2_16_0m1709588564.14.0.js:14938:20)
    at mountIndeterminateComponent (http://127.0.0.1:8050/_dash-component-suites/dash/deps/react-dom@16.v2_16_0m1709588564.14.0.js:17617:15)
    at beginWork (http://127.0.0.1:8050/_dash-component-suites/dash/deps/react-dom@16.v2_16_0m1709588564.14.0.js:18731:18)
    at HTMLUnknownElement.callCallback (http://127.0.0.1:8050/_dash-component-suites/dash/deps/react-dom@16.v2_16_0m1709588564.14.0.js:182:16)
    at Object.invokeGuardedCallbackDev (http://127.0.0.1:8050/_dash-component-suites/dash/deps/react-dom@16.v2_16_0m1709588564.14.0.js:231:18)
    at invokeGuardedCallback (http://127.0.0.1:8050/_dash-component-suites/dash/deps/react-dom@16.v2_16_0m1709588564.14.0.js:286:33)
    at beginWork$1 (http://127.0.0.1:8050/_dash-component-suites/dash/deps/react-dom@16.v2_16_0m1709588564.14.0.js:23338:9)
    at performUnitOfWork (http://127.0.0.1:8050/_dash-component-suites/dash/deps/react-dom@16.v2_16_0m1709588564.14.0.js:22292:14)
    at workLoopSync (http://127.0.0.1:8050/_dash-component-suites/dash/deps/react-dom@16.v2_16_0m1709588564.14.0.js:22265:24)
```

```python
import json

import dash_ag_grid as dag
from dash import Dash, html, dcc, Input, Output
import pandas as pd
import dash_mantine_components as dmc
import dash_iconify

data = {
    "ticker": ["AAPL", "MSFT", "AMZN", "GOOGL"],
    "company": ["Apple", "Microsoft", "Amazon", "Alphabet"],
    "price": [154.99, 268.65, 100.47, 96.75],
    "buy": ["Buy" for _ in range(4)],
    "sell": ["Sell" for _ in range(4)],
    "watch": ["Watch" for _ in range(4)],
}
df = pd.DataFrame(data)

columnDefs = [
    {"headerName": "Stock Ticker", "field": "ticker"},
    {"headerName": "Company", "field": "company", "filter": True},
    {
        "headerName": "Last Close Price",
        "type": "rightAligned",
        "field": "price",
        "valueFormatter": {"function": """d3.format("($,.2f")(params.value)"""},
    },
    {
        "field": "buy",
        "cellRenderer": "DMC_Button",
        "cellRendererParams": {
            "variant": "outline",
            "leftIcon": "ic:baseline-shopping-cart",
            "color": "green",
            "radius": "xl",
        },
    },
    {
        "field": "sell",
        "cellRenderer": "DMC_Button",
        "cellRendererParams": {
            "variant": "outline",
            "leftIcon": "ic:baseline-shopping-cart",
            "color": "red",
            "radius": "xl",
        },
    },
    {
        "field": "watch",
        "cellRenderer": "DMC_Button",
        "cellRendererParams": {"rightIcon": "ph:eye"},
    },
]

defaultColDef = {
    "resizable": True,
    "sortable": True,
    "editable": False,
}

grid = dag.AgGrid(
    id="custom-component-dmc-btn-grid",
    columnDefs=columnDefs,
    rowData=df.to_dict("records"),
    columnSize="autoSize",
    defaultColDef=defaultColDef,
)

app = Dash(__name__)

app.layout = html.Div(
    [
        dcc.Markdown("Example of cellRenderer with dash-mantine-components Button and DashIconify icons"),
        grid,
        html.Div(id="custom-component-dmc-btn-value-changed"),
    ],
    style={"margin": 20},
)

@app.callback(
    Output("custom-component-dmc-btn-value-changed", "children"),
    Input("custom-component-dmc-btn-grid", "cellRendererData"),
)
def showChange(n):
    return json.dumps(n)

if __name__ == "__main__":
    app.run_server(debug=True)

"""
Put the following in the dashAgGridComponentFunctions.js file in the assets folder
---------------

var dagcomponentfuncs = window.dashAgGridComponentFunctions = window.dashAgGridComponentFunctions || {};

dagcomponentfuncs.DMC_Button = function (props) {
    const {setData, data} = props;

    function onClick() {
        setData();
    }
    let leftIcon, rightIcon;
    if (props.leftIcon) {
        leftIcon = React.createElement(window.dash_iconify.DashIconify, {
            icon: props.leftIcon,
        });
    }
    if (props.rightIcon) {
        rightIcon = React.createElement(window.dash_iconify.DashIconify, {
            icon: props.rightIcon,
        });
    }
    return React.createElement(
        window.dash_mantine_components.Button,
        {
            onClick,
            variant: props.variant,
            color: props.color,
            leftIcon,
            rightIcon,
            radius: props.radius,
            style: {
                margin: props.margin,
                display: 'flex',
                justifyContent: 'center',
                alignItems: 'center',
            },
        },
        props.value
    );
};
"""
```
closed
2024-03-04T21:49:37Z
2024-03-04T23:28:17Z
https://github.com/plotly/dash/issues/2781
[]
AnnMarieW
4
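Whatever the root cause in 2.16 turns out to be, a defensive pattern for custom cellRenderers is to check the global namespace before dereferencing it, so a missing bundle degrades gracefully instead of throwing. This is a sketch of that guard, not the fix Dash shipped; `resolveComponent` is a hypothetical helper name.

```javascript
// Look up a component on a window-style global object without throwing
// when the bundle that defines it hasn't loaded yet.
function resolveComponent(globals, namespace, name) {
    const ns = globals[namespace];
    return ns && ns[name] ? ns[name] : null;
}

// Usage inside a cellRenderer (window stands in for the browser global):
//   const DashIconify = resolveComponent(window, 'dash_iconify', 'DashIconify');
//   if (!DashIconify) { /* render plain text instead of an icon */ }
```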
collerek/ormar
sqlalchemy
278
Nested model not included in Pydantic model
**Describe the bug**
I don't know if it comes from FastAPI, Pydantic or Ormar. When retrieving a Pydantic model from an Ormar model, the response example shown in the Swagger interface does not include a nested model. Same with the actual responses: they do not contain that nested model.

My models:

```python
# meta classes removed for conciseness

class Library(ormar.Model):
    id: int = ormar.Integer(primary_key=True)
    name: str = ormar.String(max_length=100)

class Package(ormar.Model):
    id: int = ormar.Integer(primary_key=True)
    library: Library = ormar.ForeignKey(Library, related_name="packages")  # <-- of interest
    version: str = ormar.String(max_length=100)

class Ticket(ormar.Model):
    id: int = ormar.Integer(primary_key=True)
    number: int = ormar.Integer()
    status: str = ormar.String(max_length=100)

class TicketPackage(ormar.Model):
    id: int = ormar.Integer(primary_key=True)
    status: str = ormar.String(max_length=100)
    ticket: Ticket = ormar.ForeignKey(Ticket, related_name="packages")
    package: Package = ormar.ForeignKey(Package, related_name="tickets")  # <-- of interest
```

My route:

```python
router = APIRouter(prefix="/tickets")

TicketPackageOut = TicketPackage.get_pydantic(exclude={"ticket"})

@router.get(
    "/{ticket_id}/packages",
    response_model=List[TicketPackageOut],
)
async def get_ticket_packages(ticket_id: int) -> List[TicketPackage]:
    return await TicketPackage.objects.select_related("package__library").filter(ticket__id=ticket_id).all()

# Note how I "select the related package library".
# If I print the return value of that query,
# I can see that the library objects are here
# with all their values (name) populated, as expected.
```

**Expected behavior**
The example and actual responses in Swagger should look like:

```json
[
  {
    "id": 0,
    "status": "string",
    "package": {
      "version": "string",
      "id": 0,
      "library": {
        "id": 0,
        "name": "string"
      }
    }
  }
]
```

**Actual behavior**
The responses look like:

```json
[
  {
    "id": 0,
    "status": "string",
    "package": {
      "version": "string",
      "id": 0
    }
  }
]
```

The package library is missing!

**Screenshots**

Example response | Actual response
--- | ---
![image](https://user-images.githubusercontent.com/3999221/126457728-6087270b-22f6-43aa-b049-862c03f7a76c.png) | ![image](https://user-images.githubusercontent.com/3999221/126457883-9725c888-59ce-4993-8bb6-ea675a80691b.png)

*You'll notice extra fields that I removed from the code above for conciseness.*

**Versions:**
- Database backend used: sqlite
- Python version: 3.8.11
- `ormar` version: 0.10.14
- `pydantic` version: 1.8.2
- if applicable, `fastapi` version: 0.65.2
closed
2021-07-21T08:43:45Z
2021-07-21T12:44:43Z
https://github.com/collerek/ormar/issues/278
[ "bug" ]
pawamoy
7
KaiyangZhou/deep-person-reid
computer-vision
190
Getting error while feeding custom dataset
I want to train the re-identification model on the VeRi vehicle dataset. I followed the instructions given here on how to use a custom dataset: https://kaiyangzhou.github.io/deep-person-reid/user_guide.html#use-your-own-dataset

When I try to train the "hacnn" model on this custom (VeRi) dataset, I am not able to train the model. Please find below the dataset statistics and the error that I am getting:

```
=> Loading train (source) dataset
=> Loaded VeRiDataset
  ----------------------------------------
  subset   | # ids | # images | # cameras
  ----------------------------------------
  train    |   575 |    37746 |        20
  query    |   200 |     1678 |        19
  gallery  |   200 |    11579 |        19
  ----------------------------------------
=> Loading test (target) dataset
=> Loaded VeRiDataset
  ----------------------------------------
  subset   | # ids | # images | # cameras
  ----------------------------------------
  train    |   575 |    37746 |        20
  query    |   200 |     1678 |        19
  gallery  |   200 |    11579 |        19
  ----------------------------------------

  **************** Summary ****************
  train            : ['veri_dataset']
  # train datasets : 1
  # train ids      : 575
  # train images   : 37746
  # train cameras  : 20
  test             : ['veri_dataset']
  *****************************************

=> Start training
/home/rajat/MyPC/DFKI/MasterThesis/Vehicle_Reidentification/veri/lib/python3.6/site-packages/torch/nn/functional.py:2457: UserWarning: nn.functional.upsample is deprecated. Use nn.functional.interpolate instead.
  warnings.warn("nn.functional.upsample is deprecated. Use nn.functional.interpolate instead.")
Traceback (most recent call last):
  File "train_torchreid.py", line 52, in <module>
    print_freq=10
  File "/home/rajat/MyPC/DFKI/MasterThesis/Vehicle_Reidentification/veri/lib/python3.6/site-packages/torchreid-0.7.8-py3.6-linux-x86_64.egg/torchreid/engine/engine.py", line 100, in run
    self.train(epoch, max_epoch, trainloader, fixbase_epoch, open_layers, print_freq)
  File "/home/rajat/MyPC/DFKI/MasterThesis/Vehicle_Reidentification/veri/lib/python3.6/site-packages/torchreid-0.7.8-py3.6-linux-x86_64.egg/torchreid/engine/image/softmax.py", line 99, in train
    loss = self._compute_loss(self.criterion, outputs, pids)
  File "/home/rajat/MyPC/DFKI/MasterThesis/Vehicle_Reidentification/veri/lib/python3.6/site-packages/torchreid-0.7.8-py3.6-linux-x86_64.egg/torchreid/engine/engine.py", line 302, in _compute_loss
    loss = DeepSupervision(criterion, outputs, targets)
  File "/home/rajat/MyPC/DFKI/MasterThesis/Vehicle_Reidentification/veri/lib/python3.6/site-packages/torchreid-0.7.8-py3.6-linux-x86_64.egg/torchreid/losses/__init__.py", line 21, in DeepSupervision
    loss += criterion(x, y)
  File "/home/rajat/MyPC/DFKI/MasterThesis/Vehicle_Reidentification/veri/lib/python3.6/site-packages/torch/nn/modules/module.py", line 493, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/rajat/MyPC/DFKI/MasterThesis/Vehicle_Reidentification/veri/lib/python3.6/site-packages/torchreid-0.7.8-py3.6-linux-x86_64.egg/torchreid/losses/cross_entropy_loss.py", line 47, in forward
    targets = torch.zeros(log_probs.size()).scatter_(1, targets.unsqueeze(1).data.cpu(), 1)
RuntimeError: Invalid index in scatter at /pytorch/aten/src/TH/generic/THTensorEvenMoreMath.cpp:551
```

Note: the same code works fine if I use the "market1501" dataset instead of my custom one. Please help me solve this issue. Thanks
closed
2019-06-16T20:01:32Z
2022-12-19T08:24:08Z
https://github.com/KaiyangZhou/deep-person-reid/issues/190
[]
Rajat-Mehta
8
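The `Invalid index in scatter` error above is typical when training labels are not contiguous integers starting at 0: `scatter_` writes into a one-hot tensor sized by the number of classes, so any raw identity label larger than `num_classes - 1` is out of range. The bundled Market-1501 loader relabels IDs, which would explain why that dataset works while a custom VeRi loader fails. A minimal relabelling sketch (pure Python; `relabel` is a hypothetical helper, not part of the torchreid API):

```python
def relabel(pids):
    """Map arbitrary identity labels to contiguous ints 0..N-1."""
    pid2label = {pid: idx for idx, pid in enumerate(sorted(set(pids)))}
    return [pid2label[pid] for pid in pids]


raw_pids = [776, 2, 776, 305, 2]      # e.g. raw VeRi vehicle IDs
labels = relabel(raw_pids)
assert labels == [2, 0, 2, 1, 0]      # contiguous, safe for scatter_ / cross-entropy
assert max(labels) == len(set(raw_pids)) - 1
```

Applying such a mapping in the custom dataset class (before the PIDs reach the loss) is the usual fix for this class of scatter errors.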
babysor/MockingBird
deep-learning
484
ImportError: dlopen: cannot load any more object with static TLS
**Summary**
Running web.py and demo_toolbox.py fails with the error in the title (`ImportError: dlopen: cannot load any more object with static TLS`).

**Env & To Reproduce**
- OS: CentOS 7
- Python: Anaconda + Python 3.8
- PyTorch: 1.11.0
- CUDA: 11.4

**Screenshots**
![image](https://user-images.githubusercontent.com/64410968/161378846-3ced5e99-8cee-4eae-ad75-52373fff525f.png)
closed
2022-04-02T10:22:30Z
2022-04-07T03:33:06Z
https://github.com/babysor/MockingBird/issues/484
[]
SchweitzerGAO
2
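The `cannot load any more object with static TLS` error is a known limitation of the old glibc shipped with CentOS 7: libraries that use static TLS (commonly `libgomp`, pulled in by PyTorch) must be loaded early in the process. Two commonly reported workarounds are sketched below; the library path is an assumption that varies by distro, so verify it on your system before relying on this.

```shell
# Option 1: preload the offending library so it claims a static TLS slot first
# (/usr/lib64/libgomp.so.1 is the typical CentOS 7 location -- confirm locally)
LD_PRELOAD=/usr/lib64/libgomp.so.1 python demo_toolbox.py

# Option 2: reorder imports so torch (which drags in libgomp) loads first,
# i.e. put `import torch` at the very top of web.py / demo_toolbox.py
```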
plotly/dash
data-visualization
3024
[BUG] Callbacks fail to regenerate on second webapp testing instance.
**Describe your context** Please provide us your environment, so we can easily reproduce the issue. ``` Package Version Editable project location ------------------------------------- ------------------------------- ---------------------------------------------------------- absl-py 2.1.0 aiohappyeyeballs 2.4.3 aiohttp 3.10.8 aiosignal 1.3.1 amply 0.1.6 amqp 5.2.0 annotated-types 0.7.0 anyio 4.6.0 argon2-cffi 23.1.0 argon2-cffi-bindings 21.2.0 arrow 1.3.0 arviz 0.20.0 asciitree 0.3.3 asttokens 2.4.1 astunparse 1.6.3 async-lru 2.0.4 async-timeout 4.0.3 attrs 24.2.0 Authlib 1.3.0 Babel 2.14.0 backoff 2.2.1 backports.tarfile 1.0.0 bcrypt 4.2.0 beautifulsoup4 4.12.3 billiard 4.2.1 bleach 6.1.0 blinker 1.8.2 bokeh 3.5.2 boto3 1.35.31 botocore 1.35.31 branca 0.7.2 Brotli 1.1.0 CacheControl 0.14.0 cached-property 1.5.2 cachelib 0.9.0 cachetools 5.5.0 celery 5.4.0 certifi 2024.8.30 cffi 1.17.1 cfgv 3.3.1 charset-normalizer 3.3.2 click 8.1.7 click-didyoumean 0.3.1 click-plugins 1.1.1 click-repl 0.3.0 cligj 0.7.2 cloud-store 1.2.8 cloudpickle 3.0.0 coiled 1.53.0 colorama 0.4.6 comm 0.2.2 cons 0.4.6 contourpy 1.3.0 cryptography 43.0.1 cycler 0.12.1 cytoolz 0.12.3 dash 2.18.1 dash_ag_grid 31.2.0 dash_auth 2.3.0 dash-breakpoints 0.1.0 dash-core-components 2.0.0 dash_cytoscape 1.0.2 dash-deck 0.0.1 dash-extensions 1.0.18 dash-html-components 2.0.0 dash_iconify 0.1.2 dash-intersection-observer 1.0.1 dash_leaflet 1.0.15 dash_mantine_components 0.12.1 dash-parallel-callback 0.6.3 dash-table 5.0.0 dash-testing-stub 0.0.2 dask 2024.5.2 dask-cluster 0.3.5 dask-expr 1.1.2 dask-gateway 2024.1.0 dataclass-wizard 0.22.3 debugpy 1.8.6 decorator 5.1.1 defusedxml 0.7.1 Deprecated 1.2.14 der-core 0.6.3 dill 0.3.9 distlib 0.3.8 distributed 2024.5.2 dm-tree 0.1.8 docutils 0.21.2 EditorConfig 0.12.4 enea-appshell 0.9.4 entrypoints 0.4 ephem 4.1.5 et-xmlfile 1.1.0 etuples 0.3.9 exceptiongroup 1.2.2 executing 2.1.0 fabric 3.2.2 Faker 30.1.0 fasteners 0.17.3 fastjsonschema 2.20.0 filelock 3.16.1 
fiona 1.9.6 firebase-admin 6.5.0 firestore-aio 1.12.1 Flask 2.2.5 Flask-Caching 2.3.0 Flask-Compress 1.15 flatbuffers 24.3.25 flatdict 4.0.1 folium 0.17.0 fonttools 4.54.1 fqdn 1.5.1 frozenlist 1.4.1 fsspec 2024.9.0 gast 0.5.5 gcsfs 2024.9.0.post1 GDAL 3.8.4 geobuf 1.1.1 geopandas 1.0.1 gilknocker 0.4.1 gitdb 4.0.11 GitPython 3.1.43 google-api-core 2.20.0 google-api-python-client 2.147.0 google-auth 2.35.0 google-auth-httplib2 0.2.0 google-auth-oauthlib 1.2.1 google-cloud-appengine-logging 1.3.0 google-cloud-audit-log 0.3.0 google-cloud-core 2.4.1 google-cloud-firestore 2.19.0 google-cloud-logging 3.11.2 google-cloud-secret-manager 2.20.1 google-cloud-storage 2.18.2 google-cloud-tasks 2.16.5 google-cloud-workflows 1.10.1 google-crc32c 1.1.2 google-pasta 0.2.0 google-resumable-media 2.7.2 googleapis-common-protos 1.65.0 gprof2dot 2024.6.6 grpc-google-iam-v1 0.13.1 grpcio 1.59.3 grpcio-status 1.59.3 gunicorn 23.0.0 h11 0.14.0 h2 4.1.0 h5netcdf 1.3.0 h5py 3.11.0 holidays 0.57 hpack 4.0.0 httpcore 1.0.6 httplib2 0.22.0 httpx 0.27.2 humanize 4.10.0 hyperframe 6.0.1 identify 2.6.1 idna 3.10 importlib_metadata 7.0.2 importlib_resources 6.4.5 iniconfig 2.0.0 invoke 2.2.0 ipykernel 6.29.5 ipython 8.27.0 ipywidgets 8.1.5 isoduration 20.11.0 itsdangerous 2.2.0 jaraco.classes 3.4.0 jaraco.context 5.3.0 jaraco.functools 4.0.0 jax 0.4.27 jaxlib 0.4.23.dev20240214 jedi 0.19.1 jeepney 0.8.0 Jinja2 3.1.4 jmespath 1.0.1 joblib 1.4.2 jsbeautifier 1.15.1 json5 0.9.25 jsondiff 2.0.0 jsonpointer 3.0.0 jsonschema 4.23.0 jsonschema-specifications 2023.12.1 jupyter_client 8.6.3 jupyter_core 5.7.2 jupyter-events 0.10.0 jupyter-lsp 2.2.5 jupyter_server 2.14.2 jupyter_server_terminals 0.5.3 jupyterlab 4.2.5 jupyterlab_pygments 0.3.0 jupyterlab_server 2.27.3 jupyterlab_widgets 3.0.13 keras 2.15.0 keyring 25.4.1 keyrings.google-artifactregistry-auth 1.1.1 kiwisolver 1.4.7 kombu 5.4.1 lazy-import 0.2.2 lazy_loader 0.4 llvmlite 0.43.0 locket 1.0.0 logical-unification 0.4.6 lxml 5.3.0 lz4 4.3.3 
mapclassify 2.8.1 Markdown 3.6 markdown-it-py 3.0.0 MarkupSafe 2.1.5 matplotlib 3.9.2 matplotlib-inline 0.1.7 mdurl 0.1.2 miniKanren 1.0.3 mistune 3.0.2 ml-dtypes 0.2.0 more-itertools 10.5.0 msgpack 1.1.0 msgspec 0.18.6 multidict 6.1.0 multimethod 1.9.1 multipledispatch 0.6.0 multiprocess 0.70.16 munkres 1.1.4 mypy-extensions 1.0.0 nbclient 0.10.0 nbconvert 7.16.4 nbformat 5.10.4 nest_asyncio 1.6.0 networkx 3.3 nodeenv 1.9.1 notebook_shim 0.2.4 numba 0.60.0 numcodecs 0.13.0 numpy 1.26.0 numpy-financial 1.0.0 nutpie 0.13.1 oauthlib 3.2.2 openpyxl 3.1.5 opentelemetry-api 1.27.0 opt_einsum 3.4.0 orjson 3.10.7 outcome 1.3.0.post0 overrides 7.7.0 packaging 24.1 pandas 2.2.3 pandera 0.20.4 pandocfilters 1.5.0 paramiko 3.5.0 parso 0.8.4 partd 1.4.2 patsy 0.5.6 percy 2.0.2 pexpect 4.9.0 pickleshare 0.7.5 pillow 10.3.0 pip 24.2 pip-requirements-parser 32.0.1 pixi-kernel 0.4.0 pkgutil_resolve_name 1.3.10 planning 1.2.1a3 platformdirs 4.3.6 plotly 5.24.1 plotly-resampler 0.10.0 pluggy 1.5.0 polars 1.2.1 polyfactory 2.17.0 pre_commit 3.8.0 prometheus_client 0.21.0 prompt_toolkit 3.0.48 proto-plus 1.23.0 protobuf 4.24.4 psutil 6.0.0 ptyprocess 0.7.0 PuLP 2.8.0 pure_eval 0.2.3 pvlib 0.11.1 pyarrow 15.0.0 pyarrow-hotfix 0.6 pyasn1 0.6.1 pyasn1_modules 0.4.1 pycparser 2.22 pydantic 2.9.2 pydantic_core 2.23.4 Pygments 2.18.0 PyJWT 2.9.0 pymc 5.16.2 PyNaCl 1.5.0 pyogrio 0.7.2 pyOpenSSL 24.2.1 pyparsing 3.1.4 pyproj 3.6.1 PySocks 1.7.1 pysolar 0.11 pytensor 2.25.4 pytest 8.3.3 pytest-mock 3.14.0 pytest-profiling 1.7.0 pytest-subtests 0.13.1 python-dateutil 2.9.0 python-dotenv 1.0.1 python-json-logger 2.0.7 python-slugify 8.0.4 pytz 2024.1 pyu2f 0.1.5 pyxlsb 1.0.10 PyYAML 6.0.2 pyzmq 26.2.0 referencing 0.35.1 requests 2.32.3 requests-oauthlib 2.0.0 retrying 1.3.3 rfc3339-validator 0.1.4 rfc3986-validator 0.1.1 rich 13.9.1 rpds-py 0.20.0 rsa 4.9 ruff 0.4.10 runtype 0.5.1 ruptures 1.1.9 s3transfer 0.10.2 scikit-learn 1.5.2 scipy 1.14.1 SecretStorage 3.3.3 selenium 4.2.0 Send2Trash 1.8.3 
setuptools 75.1.0 shapely 2.0.4 shellingham 1.5.4 six 1.16.0 smmap 5.0.0 sniffio 1.3.1 sortedcontainers 2.4.0 soupsieve 2.5 stack-data 0.6.2 statsmodels 0.14.3 tblib 3.0.0 tenacity 9.0.0 tensorboard 2.15.2 tensorboard-data-server 0.7.0 tensorflow 2.15.0 tensorflow_estimator 2.15.0 tensorflow-probability 0.23.0 termcolor 2.4.0 terminado 0.18.1 text-unidecode 1.3 tf_keras 2.15.0 threadpoolctl 3.5.0 tinycss2 1.3.0 toml 0.10.2 tomli 2.0.1 toolz 0.12.1 tornado 6.4.1 tqdm 4.66.5 trace-updater 0.0.9.1 traitlets 5.14.3 trio 0.26.2 trio-websocket 0.11.1 tsdownsample 0.1.3 typeguard 4.3.0 typer 0.12.5 typer-slim 0.12.5 types-python-dateutil 2.9.0.20240906 typing_extensions 4.12.2 typing-inspect 0.9.0 typing-utils 0.1.0 tzdata 2024.2 ukkonen 1.0.1 unicodedata2 15.1.0 Unidecode 1.3.8 uri-template 1.3.0 uritemplate 4.1.1 urllib3 2.2.3 vine 5.1.0 virtualenv 20.26.6 wcwidth 0.2.13 webcolors 24.8.0 webencodings 0.5.1 websocket-client 1.8.0 Werkzeug 2.2.3 wheel 0.44.0 widgetsnbextension 4.0.13 windpowerlib 0.2.1 wrapt 1.14.1 wsproto 1.2.0 xarray 2024.9.0 xarray-einstats 0.8.0 xlrd 2.0.1 xlsx2csv 0.8.3 XlsxWriter 3.2.0 xyzservices 2024.9.0 yarl 1.13.1 zarr 2.18.3 zict 3.0.0 zipp 3.20.2 zstandard 0.23.0
```

- If frontend related, tell us your Browser, Version and OS
  - OS: Windows
  - Browser: Chrome
  - Version: ChromeDriver 128.0.6613.137

**Describe the bug**
When writing pytests for a full webapp with several callbacks and pages, using `dash_duo` from the `dash[testing]` package, the first pytest works as expected. However, the callbacks fail to activate/regenerate on the second instance of `dash_duo`, so some callbacks never trigger and the second pytest fails. This causes very odd behaviour: the pytests succeed when run individually, but fail when run together. Very hard to debug.

**Expected behavior**
Consecutive tests should run as if the first test had not.

The following code from @RenaudLN fixes the issue:

```python
from copy import deepcopy

import dash._callback
import pytest

PYDF_CALLBACK_LIST = []
PYDF_CALLBACK_MAP = {}

@pytest.fixture(scope="session", autouse=True)
def init_test_session():
    """Store all registered callbacks."""
    global PYDF_CALLBACK_LIST, PYDF_CALLBACK_MAP  # noqa: PLW0603
    PYDF_CALLBACK_LIST = deepcopy(dash._callback.GLOBAL_CALLBACK_LIST)
    PYDF_CALLBACK_MAP = deepcopy(dash._callback.GLOBAL_CALLBACK_MAP)

@pytest.fixture(scope="function", autouse=True)
def reset_callbacks():
    """Add global callbacks back."""
    dash._callback.GLOBAL_CALLBACK_LIST = deepcopy(PYDF_CALLBACK_LIST)
    dash._callback.GLOBAL_CALLBACK_MAP = deepcopy(PYDF_CALLBACK_MAP)
```

**Screenshots**
closed
2024-10-03T00:34:32Z
2024-10-11T03:43:18Z
https://github.com/plotly/dash/issues/3024
[ "bug", "testing", "P3" ]
Andre-Medina
2
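The fixture in the report works because Dash registers callbacks in module-level globals, and module state persists across tests within one pytest process: after the first app instance consumes the registry, later instances see the altered state. The snapshot/restore idea can be demonstrated without Dash; `REGISTRY` below is a stand-in for `dash._callback.GLOBAL_CALLBACK_LIST`, and `run_app` for any framework call that mutates it.

```python
from copy import deepcopy

REGISTRY = ["cb_a", "cb_b"]           # populated once, at import time

def run_app(registry):
    """Simulates a framework run that consumes/mutates the global registry."""
    registry.clear()

snapshot = deepcopy(REGISTRY)         # session-scoped fixture: save pristine state

run_app(REGISTRY)                     # first test
assert REGISTRY == []                 # without restore, state has leaked

REGISTRY[:] = deepcopy(snapshot)      # function-scoped fixture: restore per test
run_app(REGISTRY)                     # second test now starts from a clean copy
```

The `deepcopy` matters: restoring a shared reference would let the next test mutate the snapshot itself.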
OpenInterpreter/open-interpreter
python
968
Best way for multi-session / multi-user support? Advice needed
First of all, thanks for a great project! I'm looking for a way to give several users access to the interpreter simultaneously, so I need a "sessions" feature. I saw PR "feat: Add support for containerized Code execution, and utilities ( upload / download fn ). #459", but the proposed solution seems very complex and overengineered. So I'm looking for other ways to accomplish this, and I want to share my thoughts and hear the community's advice.

What I want to get:
1. Several sessions can run simultaneously.
2. Sessions are more or less isolated, i.e. the user of one session can't write a prompt that affects the files and flow of other sessions.
3. Sessions persist, so that we can return to a dialogue in 1-2 weeks and continue it from the point we left it, with the files we uploaded/processed up to that moment.
4. Not too much overhead per session in terms of memory and disk space.

What solutions are possible?
1. We can make 1 session = 1 Docker container, but that seems unrealistic because of the huge memory and CPU overhead, plus a lot of DevOps headache.
2. We can have one Docker container in which we run several instances of Open Interpreter. But what about sessions and file storage? We can bind an external volume with subfolders, where 1 subfolder = 1 session and each folder is named with a session GUID. But how do we make Open Interpreter treat a particular folder as its root, so that no matter what a user writes in his prompt, there is no way for Open Interpreter to reach another session's subfolder with any file read/write operation from an executed Python script?
3. I'm thinking about using venv somehow to separate user sessions, but storing the same set of Python libraries several times (once per session) seems like overhead, and I don't know how fast it is to create a new venv instance for each session. We would still have the open problem of storing and separating the uploaded/processed files of each session.

What do you think is the best solution?
open
2024-01-25T17:00:53Z
2024-01-25T21:31:39Z
https://github.com/OpenInterpreter/open-interpreter/issues/968
[ "Enhancement" ]
KonstantinMastak
1
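For option 2 (one container, per-session subfolders), the isolation requirement in point 2 largely reduces to a path-traversal guard: every file operation must resolve inside the session's own directory. A minimal stdlib sketch of that guard follows; the folder layout and function names are illustrative, not Open Interpreter API.

```python
import os
import uuid

SESSIONS_ROOT = "/srv/oi-sessions"   # bind-mounted volume; illustrative path

def new_session(root=SESSIONS_ROOT):
    """Create a session directory named by a fresh GUID and return its id."""
    sid = uuid.uuid4().hex
    os.makedirs(os.path.join(root, sid), exist_ok=True)
    return sid

def resolve(sid, user_path, root=SESSIONS_ROOT):
    """Resolve a user-supplied path, refusing anything escaping the session dir."""
    base = os.path.realpath(os.path.join(root, sid))
    target = os.path.realpath(os.path.join(base, user_path))
    if os.path.commonpath([base, target]) != base:
        raise PermissionError("path escapes session " + sid + ": " + user_path)
    return target
```

`realpath` normalises `..` segments and symlinks before the check, so `resolve(sid, "../other-session/secret")` raises instead of reading a sibling session's files. This guards file paths only; isolating what executed Python code can do still needs OS-level sandboxing.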
sktime/pytorch-forecasting
pandas
1769
[BUG] Locally run tutorials do not show same results as web tutorials
**Describe the bug**
Running the tutorials (e.g. https://pytorch-forecasting.readthedocs.io/en/stable/tutorials/ar.html) locally does not produce the same results as the website tutorial. In the N-BEATS example, the graphs all show really nice and tidy predictions. When run on a modern Mac M2 or on the current Google Colab, all the predictions are flat and only vaguely indicative of a linear pattern, nowhere near as accurate as the online tutorials.

![Image](https://github.com/user-attachments/assets/27cd2844-313d-45fd-918a-94065a463542)
closed
2025-02-15T02:18:56Z
2025-02-17T21:23:40Z
https://github.com/sktime/pytorch-forecasting/issues/1769
[ "bug", "documentation" ]
twobitunicorn
3
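Part of the gap between local runs and published notebooks may be ordinary training nondeterminism: without fixed seeds, and on different backends (Apple's MPS vs. the hardware the tutorials were rendered on), two runs of the same notebook can legitimately diverge, though a genuine regression is also possible and would need maintainer confirmation. The reproducibility principle, stripped to the stdlib:

```python
import random

def train_stub(seed):
    """Stand-in for a training run whose result depends on RNG state."""
    rng = random.Random(seed)
    return [round(rng.random(), 6) for _ in range(3)]

assert train_stub(42) == train_stub(42)   # same seed -> reproducible result
assert train_stub(42) != train_stub(43)   # different seed -> different run
```

In the real tutorials the analogous step is seeding every RNG in play (Python, NumPy, torch) before training, e.g. via Lightning's seed utility, so that local reruns are at least self-consistent.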
Gozargah/Marzban
api
804
Add custom json configs in subscription
Hi. The new version of v2rayNG added the ability to import JSON configs from a subscription. If this capability were added to Marzban as well, so we could give users configs with Fragment and mux settings, it would help a lot.
closed
2024-02-18T16:09:46Z
2024-02-20T10:09:16Z
https://github.com/Gozargah/Marzban/issues/804
[ "Feature" ]
SLVRMGC
1
PokeAPI/pokeapi
api
303
Add Generation 7
It appears that [generation 7 pokemon](https://bulbapedia.bulbagarden.net/wiki/List_of_Pok%C3%A9mon_by_National_Pok%C3%A9dex_number#Generation_VII) are missing from the API.
closed
2017-09-27T13:15:56Z
2017-10-25T09:02:24Z
https://github.com/PokeAPI/pokeapi/issues/303
[ "enhancement" ]
JacobDB
4
AUTOMATIC1111/stable-diffusion-webui
pytorch
15516
[Bug]: NotImplementedError
### Checklist
- [x] The issue exists after disabling all extensions
- [ ] The issue exists on a clean installation of webui
- [ ] The issue is caused by an extension, but I believe it is caused by a bug in the webui
- [ ] The issue exists in the current version of the webui
- [ ] The issue has not been reported before recently
- [x] The issue has been reported before but has not been fixed yet

### What happened?
Something's broken in webui-user. I tried to roll back the Stable Diffusion version to the old version 1.8.0 by rewriting the git command, but it didn't work, and now the program has stopped generating images. The problem is definitely not with the computer, yet the error appears. In place of `git pull origin master` I ran `git reset --hard v1.8.0`, and everything broke. Has anyone figured out how to fix this error?

![image](https://github.com/AUTOMATIC1111/stable-diffusion-webui/assets/167031576/66e92bdb-7814-48ae-84db-7b42efcc0631)

### Steps to reproduce the problem
```
NotImplementedError: No operator found for `memory_efficient_attention_forward` with inputs:
    query : shape=(2, 4096, 8, 40) (torch.float16)
    key : shape=(2, 4096, 8, 40) (torch.float16)
    value : shape=(2, 4096, 8, 40) (torch.float16)
    attn_bias : <class 'NoneType'>
    p : 0.0
`decoderF` is not supported because:
    xFormers wasn't build with CUDA support
    attn_bias type is <class 'NoneType'>
    operator wasn't built - see `python -m xformers.info` for more info
`flshattF@0.0.0` is not supported because:
    xFormers wasn't build with CUDA support
    operator wasn't built - see `python -m xformers.info` for more info
`tritonflashattF` is not supported because:
    xFormers wasn't build with CUDA support
    operator wasn't built - see `python -m xformers.info` for more info
    triton is not available
`cutlassF` is not supported because:
    xFormers wasn't build with CUDA support
    operator wasn't built - see `python -m xformers.info` for more info
`smallkF` is not supported because:
    max(query.shape[-1] != value.shape[-1]) > 32
    xFormers wasn't build with CUDA support
    dtype=torch.float16 (supported: {torch.float32})
    operator wasn't built - see `python -m xformers.info` for more info
    unsupported embed per head: 40
```

### What should have happened?
I tried to roll back the program to the correct version; it made things even worse.

### What browsers do you use to access the UI ?
_No response_

### Sysinfo
No.

### Console logs
```Shell
venv "venv\Scripts\Python.exe"
fatal: No names found, cannot describe anything.
Python 3.10.9 (tags/v3.10.9:1dd9be6, Dec 6 2022, 20:01:21) [MSC v.1934 64 bit (AMD64)]
Version: 1.8.0-RC
Commit hash: bef51aed032c0aaa5cfd80445bc4cf0d85b408b5
Launching Web UI with arguments: --xformers --autolaunch --theme dark
WARNING:xformers:WARNING[XFORMERS]: xFormers can't load C++/CUDA extensions. xFormers was built for:
    PyTorch 2.1.2+cu121 with CUDA 1201 (you have 2.0.1+cu118)
    Python  3.10.11 (you have 3.10.9)
  Please reinstall xformers (see https://github.com/facebookresearch/xformers#installing-xformers)
  Memory-efficient attention, SwiGLU, sparse and more won't be available.
  Set XFORMERS_MORE_DETAILS=1 for more details
==============================================================================
You are running torch 2.0.1+cu118.
The program is tested to work with torch 2.1.2.
To reinstall the desired version, run with commandline flag --reinstall-torch.
Beware that this will cause a lot of large files to be downloaded, as well as
there are reports of issues with training tab on the latest version.

Use --skip-version-check commandline argument to disable this check.
==============================================================================
[-] ADetailer initialized. version: 24.3.1, num models: 10
ControlNet preprocessor location: D:\SDP\stable-diffusion-portable-main\extensions\sd-webui-controlnet\annotator\downloads
2024-04-14 23:22:40,873 - ControlNet - INFO - ControlNet v1.1.441
2024-04-14 23:22:40,952 - ControlNet - INFO - ControlNet v1.1.441
Loading weights [bfb82d76c7] from D:\SDP\stable-diffusion-portable-main\models\Stable-diffusion\949bb26a4c989cbf387d10c62c6e0fac.safetensors
[LyCORIS]-WARNING: LyCORIS legacy extension is now loaded, if you don't expext to see this message, please disable this extension.
2024-04-14 23:22:41,249 - ControlNet - INFO - ControlNet UI callback registered.
*** Error executing callback ui_tabs_callback for D:\SDP\stable-diffusion-portable-main\extensions\sd-webui-depth-lib\scripts\main.py
    Traceback (most recent call last):
      File "D:\SDP\stable-diffusion-portable-main\modules\script_callbacks.py", line 180, in ui_tabs_callback
        res += c.callback() or []
      File "D:\SDP\stable-diffusion-portable-main\extensions\sd-webui-depth-lib\scripts\main.py", line 47, in on_ui_tabs
        dataset = gr.Examples(examples=os.path.join(maps_path, t), inputs=[png_input_area],examples_per_page=24,label="Depth Maps", elem_id="examples")
      File "D:\SDP\stable-diffusion-portable-main\venv\lib\site-packages\gradio\helpers.py", line 58, in create_examples
        examples_obj = Examples(
      File "D:\SDP\stable-diffusion-portable-main\venv\lib\site-packages\gradio\helpers.py", line 209, in __init__
        self.processed_examples = [
      File "D:\SDP\stable-diffusion-portable-main\venv\lib\site-packages\gradio\helpers.py", line 210, in <listcomp>
        [
      File "D:\SDP\stable-diffusion-portable-main\venv\lib\site-packages\gradio\helpers.py", line 211, in <listcomp>
        component.postprocess(sample)
      File "D:\SDP\stable-diffusion-portable-main\venv\lib\site-packages\gradio\components\image.py", line 318, in postprocess
        return client_utils.encode_url_or_file_to_base64(y)
      File "D:\SDP\stable-diffusion-portable-main\venv\lib\site-packages\gradio_client\utils.py", line 387, in encode_url_or_file_to_base64
        return encode_file_to_base64(path)
      File "D:\SDP\stable-diffusion-portable-main\venv\lib\site-packages\gradio_client\utils.py", line 360, in encode_file_to_base64
        with open(f, "rb") as file:
    PermissionError: [Errno 13] Permission denied: 'tmp'

---
Creating model from config: D:\SDP\stable-diffusion-portable-main\configs\v1-inference.yaml
Running on local URL:  http://127.0.0.1:7860

To create a public link, set `share=True` in `launch()`.
Startup time: 10.4s (prepare environment: 2.1s, import torch: 2.7s, import gradio: 0.8s, setup paths: 0.7s, initialize shared: 0.2s, other imports: 0.4s, load scripts: 2.6s, create ui: 0.4s, gradio launch: 0.4s).
Loading VAE weights specified in settings: D:\SDP\stable-diffusion-portable-main\models\VAE\vae-ft-ema-560000-ema-pruned.safetensors
Applying attention optimization: xformers... done.
Model loaded in 4.5s (load weights from disk: 0.4s, create model: 0.5s, apply weights to model: 1.6s, load VAE: 1.0s, calculate empty prompt: 0.8s).
  0%|                                                                            | 0/20 [00:00<?, ?it/s]
*** Error completing request
*** Arguments: ('task(bcn55hk1gljk7yp)', <gradio.routes.Request object at 0x000001F1B68AF6A0>, 'dog', '(deformed, distorted, disfigured:1.3), poorly drawn, bad anatomy, wrong anatomy, extra limb, missing limb, floating limbs, (mutated hands and fingers:1.4), disconnected limbs, mutation, mutated, ugly, disgusting, blurry, amputation', [], 20, 'Euler a', 1, 1, 8, 512, 512, False, 0.7, 2, 'Latent', 0, 0, 0, 'Use same checkpoint', 'Use same sampler', '', '', [], 0, False, '', 0.8, -1, False, -1, 0, 0, 0, False, False, {'ad_model': 'face_yolov8n.pt', 'ad_model_classes': '', 'ad_prompt': '', 'ad_negative_prompt': '', 'ad_confidence': 0.3, 'ad_mask_k_largest': 0, 'ad_mask_min_ratio': 0, 'ad_mask_max_ratio': 1, 'ad_x_offset': 0, 'ad_y_offset': 0, 'ad_dilate_erode': 4, 'ad_mask_merge_invert': 'None', 'ad_mask_blur': 4, 'ad_denoising_strength': 0.4, 'ad_inpaint_only_masked': True, 'ad_inpaint_only_masked_padding': 32, 'ad_use_inpaint_width_height': False, 'ad_inpaint_width': 512, 'ad_inpaint_height': 512, 'ad_use_steps': False, 'ad_steps': 28, 'ad_use_cfg_scale': False, 'ad_cfg_scale': 7, 'ad_use_checkpoint': False, 'ad_checkpoint': 'Use same checkpoint', 'ad_use_vae': False, 'ad_vae': 'Use same VAE', 'ad_use_sampler': False, 'ad_sampler': 'DPM++ 2M Karras', 'ad_use_noise_multiplier': False, 'ad_noise_multiplier': 1, 'ad_use_clip_skip': False, 'ad_clip_skip': 1, 'ad_restore_face': False, 'ad_controlnet_model': 'None', 'ad_controlnet_module': 'None', 'ad_controlnet_weight': 1, 'ad_controlnet_guidance_start': 0, 'ad_controlnet_guidance_end': 1, 'is_api': ()}, {'ad_model': 'None', 'ad_model_classes': '', 'ad_prompt': '', 'ad_negative_prompt': '', 'ad_confidence': 0.3, 'ad_mask_k_largest': 0, 'ad_mask_min_ratio': 0, 'ad_mask_max_ratio': 1, 'ad_x_offset': 0, 'ad_y_offset': 0, 'ad_dilate_erode': 4, 'ad_mask_merge_invert': 'None', 'ad_mask_blur': 4, 'ad_denoising_strength': 0.4, 'ad_inpaint_only_masked': True,
'ad_inpaint_only_masked_padding': 32, 'ad_use_inpaint_width_height': False, 'ad_inpaint_width': 512, 'ad_inpaint_height': 512, 'ad_use_steps': False, 'ad_steps': 28, 'ad_use_cfg_scale': False, 'ad_cfg_scale': 7, 'ad_use_checkpoint': False, 'ad_checkpoint': 'Use same checkpoint', 'ad_use_vae': False, 'ad_vae': 'Use same VAE', 'ad_use_sampler': False, 'ad_sampler': 'DPM++ 2M Karras', 'ad_use_noise_multiplier': False, 'ad_noise_multiplier': 1, 'ad_use_clip_skip': False, 'ad_clip_skip': 1, 'ad_restore_face': False, 'ad_controlnet_model': 'None', 'ad_controlnet_module': 'None', 'ad_controlnet_weight': 1, 'ad_controlnet_guidance_start': 0, 'ad_controlnet_guidance_end': 1, 'is_api': ()}, False, '', 0, UiControlNetUnit(enabled=False, module='none', model='None', weight=1, image=None, resize_mode='Crop and Resize', low_vram=False, processor_res=-1, threshold_a=-1, threshold_b=-1, guidance_start=0, guidance_end=1, pixel_perfect=False, control_mode='Balanced', inpaint_crop_input_image=False, hr_option='Both', save_detected_map=True, advanced_weighting=None), UiControlNetUnit(enabled=False, module='none', model='None', weight=1, image=None, resize_mode='Crop and Resize', low_vram=False, processor_res=-1, threshold_a=-1, threshold_b=-1, guidance_start=0, guidance_end=1, pixel_perfect=False, control_mode='Balanced', inpaint_crop_input_image=False, hr_option='Both', save_detected_map=True, advanced_weighting=None), False, True, 3, 4, 0.15, 0.3, 'bicubic', 0.5, 2, True, False, False, False, 'positive', 'comma', 0, False, False, 'start', '', 1, '', [], 0, '', [], 0, '', [], True, False, False, False, False, False, False, 0, False, None, None, False, None, None, False, 50) {} Traceback (most recent call last): File "D:\SDP\stable-diffusion-portable-main\modules\call_queue.py", line 57, in f res = list(func(*args, **kwargs)) File "D:\SDP\stable-diffusion-portable-main\modules\call_queue.py", line 36, in f res = func(*args, **kwargs) File 
"D:\SDP\stable-diffusion-portable-main\modules\txt2img.py", line 110, in txt2img processed = processing.process_images(p) File "D:\SDP\stable-diffusion-portable-main\modules\processing.py", line 785, in process_images res = process_images_inner(p) File "D:\SDP\stable-diffusion-portable-main\extensions\sd-webui-controlnet\scripts\batch_hijack.py", line 59, in processing_process_images_hijack return getattr(processing, '__controlnet_original_process_images_inner')(p, *args, **kwargs) File "D:\SDP\stable-diffusion-portable-main\modules\processing.py", line 921, in process_images_inner samples_ddim = p.sample(conditioning=p.c, unconditional_conditioning=p.uc, seeds=p.seeds, subseeds=p.subseeds, subseed_strength=p.subseed_strength, prompts=p.prompts) File "D:\SDP\stable-diffusion-portable-main\modules\processing.py", line 1257, in sample samples = self.sampler.sample(self, x, conditioning, unconditional_conditioning, image_conditioning=self.txt2img_image_conditioning(x)) File "D:\SDP\stable-diffusion-portable-main\modules\sd_samplers_kdiffusion.py", line 234, in sample samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args=self.sampler_extra_args, disable=False, callback=self.callback_state, **extra_params_kwargs)) File "D:\SDP\stable-diffusion-portable-main\modules\sd_samplers_common.py", line 261, in launch_sampling return func() File "D:\SDP\stable-diffusion-portable-main\modules\sd_samplers_kdiffusion.py", line 234, in <lambda> samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args=self.sampler_extra_args, disable=False, callback=self.callback_state, **extra_params_kwargs)) File "D:\SDP\stable-diffusion-portable-main\venv\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context return func(*args, **kwargs) File "D:\SDP\stable-diffusion-portable-main\repositories\k-diffusion\k_diffusion\sampling.py", line 145, in sample_euler_ancestral denoised = model(x, sigmas[i] * s_in, 
**extra_args) File "D:\SDP\stable-diffusion-portable-main\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl return forward_call(*args, **kwargs) File "D:\SDP\stable-diffusion-portable-main\modules\sd_samplers_cfg_denoiser.py", line 237, in forward x_out = self.inner_model(x_in, sigma_in, cond=make_condition_dict(cond_in, image_cond_in)) File "D:\SDP\stable-diffusion-portable-main\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl return forward_call(*args, **kwargs) File "D:\SDP\stable-diffusion-portable-main\repositories\k-diffusion\k_diffusion\external.py", line 112, in forward eps = self.get_eps(input * c_in, self.sigma_to_t(sigma), **kwargs) File "D:\SDP\stable-diffusion-portable-main\repositories\k-diffusion\k_diffusion\external.py", line 138, in get_eps return self.inner_model.apply_model(*args, **kwargs) File "D:\SDP\stable-diffusion-portable-main\modules\sd_hijack_utils.py", line 18, in <lambda> setattr(resolved_obj, func_path[-1], lambda *args, **kwargs: self(*args, **kwargs)) File "D:\SDP\stable-diffusion-portable-main\modules\sd_hijack_utils.py", line 32, in __call__ return self.__orig_func(*args, **kwargs) File "D:\SDP\stable-diffusion-portable-main\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddpm.py", line 858, in apply_model x_recon = self.model(x_noisy, t, **cond) File "D:\SDP\stable-diffusion-portable-main\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl return forward_call(*args, **kwargs) File "D:\SDP\stable-diffusion-portable-main\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddpm.py", line 1335, in forward out = self.diffusion_model(x, t, context=cc) File "D:\SDP\stable-diffusion-portable-main\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl return forward_call(*args, **kwargs) File "D:\SDP\stable-diffusion-portable-main\modules\sd_unet.py", line 91, in UNetModel_forward return original_forward(self, 
x, timesteps, context, *args, **kwargs) File "D:\SDP\stable-diffusion-portable-main\repositories\stable-diffusion-stability-ai\ldm\modules\diffusionmodules\openaimodel.py", line 797, in forward h = module(h, emb, context) File "D:\SDP\stable-diffusion-portable-main\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl return forward_call(*args, **kwargs) File "D:\SDP\stable-diffusion-portable-main\repositories\stable-diffusion-stability-ai\ldm\modules\diffusionmodules\openaimodel.py", line 84, in forward x = layer(x, context) File "D:\SDP\stable-diffusion-portable-main\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl return forward_call(*args, **kwargs) File "D:\SDP\stable-diffusion-portable-main\repositories\stable-diffusion-stability-ai\ldm\modules\attention.py", line 334, in forward x = block(x, context=context[i]) File "D:\SDP\stable-diffusion-portable-main\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl return forward_call(*args, **kwargs) File "D:\SDP\stable-diffusion-portable-main\repositories\stable-diffusion-stability-ai\ldm\modules\attention.py", line 269, in forward return checkpoint(self._forward, (x, context), self.parameters(), self.checkpoint) File "D:\SDP\stable-diffusion-portable-main\repositories\stable-diffusion-stability-ai\ldm\modules\diffusionmodules\util.py", line 121, in checkpoint return CheckpointFunction.apply(func, len(inputs), *args) File "D:\SDP\stable-diffusion-portable-main\venv\lib\site-packages\torch\autograd\function.py", line 506, in apply return super().apply(*args, **kwargs) # type: ignore[misc] File "D:\SDP\stable-diffusion-portable-main\repositories\stable-diffusion-stability-ai\ldm\modules\diffusionmodules\util.py", line 136, in forward output_tensors = ctx.run_function(*ctx.input_tensors) File "D:\SDP\stable-diffusion-portable-main\repositories\stable-diffusion-stability-ai\ldm\modules\attention.py", line 272, in _forward x = 
self.attn1(self.norm1(x), context=context if self.disable_self_attn else None) + x File "D:\SDP\stable-diffusion-portable-main\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl return forward_call(*args, **kwargs) File "D:\SDP\stable-diffusion-portable-main\modules\sd_hijack_optimizations.py", line 496, in xformers_attention_forward out = xformers.ops.memory_efficient_attention(q, k, v, attn_bias=None, op=get_xformers_flash_attention_op(q, k, v)) File "D:\SDP\stable-diffusion-portable-main\venv\lib\site-packages\xformers\ops\fmha\__init__.py", line 223, in memory_efficient_attention return _memory_efficient_attention( File "D:\SDP\stable-diffusion-portable-main\venv\lib\site-packages\xformers\ops\fmha\__init__.py", line 321, in _memory_efficient_attention return _memory_efficient_attention_forward( File "D:\SDP\stable-diffusion-portable-main\venv\lib\site-packages\xformers\ops\fmha\__init__.py", line 337, in _memory_efficient_attention_forward op = _dispatch_fw(inp, False) File "D:\SDP\stable-diffusion-portable-main\venv\lib\site-packages\xformers\ops\fmha\dispatch.py", line 120, in _dispatch_fw return _run_priority_list( File "D:\SDP\stable-diffusion-portable-main\venv\lib\site-packages\xformers\ops\fmha\dispatch.py", line 63, in _run_priority_list raise NotImplementedError(msg) NotImplementedError: No operator found for `memory_efficient_attention_forward` with inputs: query : shape=(2, 4096, 8, 40) (torch.float16) key : shape=(2, 4096, 8, 40) (torch.float16) value : shape=(2, 4096, 8, 40) (torch.float16) attn_bias : <class 'NoneType'> p : 0.0 `decoderF` is not supported because: xFormers wasn't build with CUDA support attn_bias type is <class 'NoneType'> operator wasn't built - see `python -m xformers.info` for more info `flshattF@0.0.0` is not supported because: xFormers wasn't build with CUDA support operator wasn't built - see `python -m xformers.info` for more info `tritonflashattF` is not supported because: xFormers wasn't build 
with CUDA support operator wasn't built - see `python -m xformers.info` for more info triton is not available `cutlassF` is not supported because: xFormers wasn't build with CUDA support operator wasn't built - see `python -m xformers.info` for more info `smallkF` is not supported because: max(query.shape[-1] != value.shape[-1]) > 32 xFormers wasn't build with CUDA support dtype=torch.float16 (supported: {torch.float32}) operator wasn't built - see `python -m xformers.info` for more info unsupported embed per head: 40 --- ``` ### Additional information Everything was working fine just yesterday. Today the version has been updated. I don't know how to rollback. Only made things worse.
open
2024-04-14T20:33:12Z
2024-04-15T12:17:33Z
https://github.com/AUTOMATIC1111/stable-diffusion-webui/issues/15516
[ "bug-report" ]
An0m3l1ss
1
laurentS/slowapi
fastapi
25
Using slowapi in bigger application
In test.py from app is run: ``` import uvicorn from app.main import app if __name__ == "__main__": uvicorn.run("test:app", host="0.0.0.0", port=8000, reload=True) ``` In main.py ``` from app.api.api_v1.api import router as api_router from fastapi import FastAPI from slowapi import Limiter, _rate_limit_exceeded_handler from slowapi.util import get_remote_address from slowapi.errors import RateLimitExceeded def get_application() -> FastAPI: application = FastAPI(title=PROJECT_NAME, debug=DEBUG, version=VERSION) application.add_event_handler( "startup", create_start_app_handler(application)) application.add_event_handler( "shutdown", create_stop_app_handler(application)) application.add_exception_handler(RateLimitExceeded, _rate_limit_exceeded_handler) application.include_router(api_router, prefix=API_PREFIX) return application app = get_application() ``` In endpoint user.py, how to use @limiter.limit("5/minute") ? ``` from starlette.requests import Request router = APIRouter() @router.post('/show', tags=["show"], name="show:show") async def homepage(request: Request): return PlainTextResponse("test") ***code**** ``` In this use case how to use slowapi in endpoint. I need to **limit to 5 requests per minute per user only and block that user for a minute** Sleep and retry after a minute In api.py ``` from app.api.api_v1.endpoints.show import router as ShowRouter router = APIRouter() router.include_router(ShowRouter, tags=["show"], prefix="/show")
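With slowapi itself, the usual pattern (per its documentation) is to define one shared `Limiter(key_func=get_remote_address)` in a common module, assign it to `application.state.limiter` inside `get_application()`, register the `RateLimitExceeded` handler as already shown, and decorate the router endpoint with `@limiter.limit("5/minute")` — the decorated handler must accept a `request: Request` argument. The "5 per minute per user, then block for a minute" semantics asked for can be sketched library-free; this is an illustration of fixed-window limiting, not slowapi's implementation:

```python
import time

class FixedWindowLimiter:
    """Stdlib sketch of '5 requests per minute per user, then block
    until the window ends' -- illustration only, not slowapi's code."""
    def __init__(self, limit=5, window=60.0):
        self.limit = limit
        self.window = window
        self.hits = {}  # user -> (window_start, count)

    def allow(self, user, now=None):
        now = time.monotonic() if now is None else now
        start, count = self.hits.get(user, (now, 0))
        if now - start >= self.window:   # window expired: start a new one
            start, count = now, 0
        if count >= self.limit:          # over the limit: blocked until reset
            return False
        self.hits[user] = (start, count + 1)
        return True

lim = FixedWindowLimiter()
print([lim.allow("alice", now=0.0) for _ in range(6)])
# [True, True, True, True, True, False]
print(lim.allow("alice", now=61.0))  # True -- window expired, allowed again
```

With the slowapi decorator in place, the sixth request within the minute receives a 429 response from `_rate_limit_exceeded_handler` until the window expires.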
closed
2020-12-29T14:45:34Z
2023-08-12T11:52:44Z
https://github.com/laurentS/slowapi/issues/25
[]
himalacharya
7
nvbn/thefuck
python
632
git checkout new branch suggestion improvement
Currently, if I do the following: > git checkout my-new-branch error: pathspec 'my-new-branch' did not match any file(s) known to git. > fuck I get the following suggestion: > git branch my-new-branch && git checkout my-new-branch This works fine, but what I more likely meant (and it does the same thing) is: > git checkout -b my-new-branch This is much more succinct and should probably be the suggestion, in my opinion.
closed
2017-04-20T18:16:10Z
2018-01-01T23:30:34Z
https://github.com/nvbn/thefuck/issues/632
[]
JemarJones
0
simple-login/app
flask
1,904
You published a private key!
Hello! You have published a private key: https://github.com/simple-login/app/blob/master/local_data/private-pgp.asc This can cause a lot of security problems! @acasajus and @nguyenkims
closed
2023-10-03T20:16:49Z
2023-10-03T20:31:07Z
https://github.com/simple-login/app/issues/1904
[]
ghost
2
stanfordnlp/stanza
nlp
1,257
bracket chars not separated as punctuation in Spanish
**Describe the bug** When Stanza for Spanish processes square brackets containing text, the brackets are not recognized as punctuation. Instead, they remain attached to the text they were next to. The brackets are used in a transcript to indicate who is speaking. **To Reproduce** ``` from stanza.pipeline.core import Pipeline nlp_es_parse = Pipeline(lang="es", processors="tokenize") s="[Daniel Alarcón]: Esto es Radio Ambulante, desde NPR. Soy Daniel Alarcón." res=nlp_es_parse(s) ``` the first two entries (simplified) of *res* are: ``` { "id": 1, "text": "[Daniel", }, { "id": 2, "text": "Alarcón]", }, ``` **Expected behavior** I expect similar behavior as for English: ``` nlp_en_parse = Pipeline(lang="en", processors="tokenize") res=nlp_en_parse(s) ``` gives: ``` { "id": 1, "text": "[", }, { "id": 2, "text": "Daniel", }, { "id": 3, "text": "Alarcón", }, { "id": 4, "text": "]", },``` **Environment (please complete the following information):** - MacOS Monterey 12.1 (21C52) - Python 3.9.7 from Anaconda3 - stanza-1.5.0 - java version "1.8.0_371" **Additional context** none
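Until the Spanish tokenizer splits brackets itself, one possible stopgap (an assumption, not an official fix) is to pad the brackets with spaces before running the pipeline. Note this changes character offsets, so it only suits use cases that don't need them:

```python
import re

def pad_brackets(text: str) -> str:
    """Insert spaces around square brackets so a tokenizer that leaves
    them attached (as reported for the Spanish model) sees them as
    separate tokens; collapse any doubled spaces afterwards."""
    padded = re.sub(r"([\[\]])", r" \1 ", text)
    return re.sub(r"\s{2,}", " ", padded).strip()

s = "[Daniel Alarcon]: Esto es Radio Ambulante, desde NPR."
print(pad_brackets(s))
# [ Daniel Alarcon ] : Esto es Radio Ambulante, desde NPR.
```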
open
2023-06-07T23:15:15Z
2023-06-10T16:54:54Z
https://github.com/stanfordnlp/stanza/issues/1257
[ "bug" ]
psnider
6
microsoft/unilm
nlp
835
Running the EdgeLM model on XSum
Hi I am trying to use the EdgeLM model and was wondering 1. how I can run the model with pre-trained weights in interactive mode, I'd like to run it for a summarization task. 2. I was also wondering if the pre-trained weights can be used to replicate the results reported on the XSum dataset. 3. How to make use of the pre-trained 2k vocab file and sentencepiece model while attempting 1 and 2. 4. How can I run the model on CPU only.
open
2022-08-22T13:52:26Z
2022-08-25T09:33:20Z
https://github.com/microsoft/unilm/issues/835
[]
pramodith
2
developmentseed/lonboard
jupyter
371
Raise warning for input without CRS
Something like "input doesn't have CRS information; if nothing renders on the map, check your CRS"
closed
2024-02-16T19:16:59Z
2024-02-29T20:28:35Z
https://github.com/developmentseed/lonboard/issues/371
[]
kylebarron
0
mljar/mercury
jupyter
296
Navbar scrolling off the top of the page
When scrolling off the top of the page, the navbar comes down and covers the elements in the sidebar. And also when scrolling past the bottom of the page, the navbar tends to want to disappear as well (though this behavior is probably more ok than the behavior when scrolling past the top) Some pointers: https://stackoverflow.com/questions/37916122/how-can-i-keep-the-navbar-at-top-when-scrolling-up-past-the-top-of-the-page Super nit, but adds some polish!
closed
2023-05-27T06:50:40Z
2023-07-14T10:22:52Z
https://github.com/mljar/mercury/issues/296
[ "bug", "help wanted" ]
kapily
4
charlesq34/pointnet
tensorflow
138
What does param H5_BATCH_SIZE mean? And how does this parameter affect training?
I am not familiar with the HDF5 file format. I want to know what effect the parameter H5_BATCH_SIZE (in the file gen_indoor_h5.py) has on training. Does it need to be changed? And how should it be modified according to the data? Thanks!
open
2018-10-03T11:56:26Z
2018-10-03T11:56:26Z
https://github.com/charlesq34/pointnet/issues/138
[]
kxhit
0
piskvorky/gensim
nlp
2,649
Rename auto_examples directory to something more helpful
e.g. documentation
open
2019-10-24T14:17:41Z
2019-10-24T16:48:18Z
https://github.com/piskvorky/gensim/issues/2649
[ "documentation" ]
mpenkov
0
pennersr/django-allauth
django
3,637
Email registration unique constraint "users_user_email_key"
I'm using `allauth` with `email` as the username field. All the processes are working except when a user tries to register again with an email that is not verified. Steps to reproduce 1. Register an user with an email 2. Do not verify that email 3. Try to make registration again 4. See the error message as: ``` django.db.utils.IntegrityError: duplicate key value violates unique constraint "users_user_email_key" ``` After debugging what I found out is: 1. In `validate_email` method in `RegisterSerializer` we are checking if we already have a `verified` email such as: ``` def validate_email(self, email): email = get_adapter().clean_email(email) if allauth_account_settings.UNIQUE_EMAIL: if email and EmailAddress.objects.is_verified(email): <--------- Only checks if we have verified email that's why this conditional is skipped raise serializers.ValidationError( _('A user is already registered with this e-mail address.'), ) return email ``` 2. Email passes validation in 1st step, and is tried to save email to the 2nd `user` instance 3. But as email is already saved into the first `user` instance and waiting for verification we get the unique constraint `users_user_email_key` error My email field ``` email = models.EmailField("Email", unique=True) <---- I can remove unique constrain from here, but then django admin will fail to prevent duplicates ``` Settings: ``` ACCOUNT_AUTHENTICATION_METHOD = "email" ACCOUNT_EMAIL_REQUIRED = True ACCOUNT_USERNAME_REQUIRED = False ACCOUNT_USER_MODEL_USERNAME_FIELD = None ACCOUNT_EMAIL_VERIFICATION = "mandatory" ``` Package version ``` django-allauth==0.60.1 ```
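The mismatch described above — a serializer check that only rejects *verified* duplicates, sitting in front of a column-level unique constraint — can be reproduced with a minimal in-memory sketch (all names here are hypothetical, not allauth's actual API):

```python
class FakeDB:
    """Hypothetical stand-in for the users table + EmailAddress model."""
    def __init__(self):
        self.emails = {}  # email -> verified flag

    def is_verified(self, email):
        return self.emails.get(email, False)

    def save_user(self, email):
        if email in self.emails:  # unique constraint on the email column
            raise ValueError("duplicate key value violates unique constraint")
        self.emails[email] = False  # saved, but not yet verified

def validate_email(db, email):
    # Mirrors the serializer logic quoted above: only *verified*
    # duplicates are rejected, so an unverified one slips through.
    if db.is_verified(email):
        raise ValueError("A user is already registered with this e-mail address.")
    return email

db = FakeDB()
validate_email(db, "a@b.c"); db.save_user("a@b.c")  # first signup, unverified
validate_email(db, "a@b.c")                          # passes validation...
try:
    db.save_user("a@b.c")                            # ...but the DB refuses
except ValueError as e:
    print(e)
```

A fix along these lines would either also reject any existing email row regardless of verification status, or catch the duplicate and resend the verification mail instead of raising a 500.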
closed
2024-02-11T09:15:06Z
2024-02-12T20:33:31Z
https://github.com/pennersr/django-allauth/issues/3637
[]
madatbay
1
babysor/MockingBird
deep-learning
946
Is there a chat group? Let's get one going
Same as above
open
2023-08-11T06:26:23Z
2023-11-10T02:48:53Z
https://github.com/babysor/MockingBird/issues/946
[]
stars1324
6
onnx/onnx
machine-learning
6,607
Error in convert_from_ml_dtypes
```pytb .nox/test_onnx_weekly/lib/python3.11/site-packages/onnx/reference/ops/_op.py:91: in run res = self._run(x, y) .nox/test_onnx_weekly/lib/python3.11/site-packages/onnx/reference/ops/_op.py:139: in _run res = (convert_from_ml_dtypes(res[0]),) .nox/test_onnx_weekly/lib/python3.11/site-packages/onnx/reference/custom_element_types.py:50: in convert_from_ml_dtypes return array.view(dtype=dtype) E ValueError: Changing the dtype of a 0d array is only supported if the itemsize is unchanged ```
closed
2024-12-31T18:28:03Z
2025-01-07T06:36:39Z
https://github.com/onnx/onnx/issues/6607
[ "bug", "module: reference implementation", "contributions welcome" ]
justinchuby
1
STVIR/pysot
computer-vision
417
close
close
closed
2020-09-03T15:25:40Z
2020-09-05T15:43:26Z
https://github.com/STVIR/pysot/issues/417
[]
StrugglingForBetter
0
jupyter/nbgrader
jupyter
952
Unexpected behaviour of utils.find_all_files
<!-- Thanks for helping to improve nbgrader! If you are submitting a bug report or looking for support, please use the below template so we can efficiently solve the problem. If you are requesting a new feature, feel free to remove irrelevant pieces of the issue template. --> ### Operating system Ubuntu 16.04 ### `nbgrader --version` nbgrader version 0.5.4 ### `jupyterhub --version` (if used with JupyterHub) 0.8.1 ### `jupyter notebook --version` 5.4.1 ### Expected behavior By including '.git' or '.git/**' in CourseDirectory.ignore, anything under the git directory should be ignored. ### Actual behavior Anything in subdirectories of '.git' is included. ### Steps to reproduce the behavior $ mkdir -p foo/bar/qwe $ touch foo/bar/qwe/file.py $ /opt/conda/bin/python -c "from nbgrader.utils import find_all_files;print(find_all_files('foo', ['bar']))" ['foo/bar/qwe/file.py'] I'm sorry if this is expected behaviour but I found it surprising.
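One plausible explanation for the report, sketched with stdlib `fnmatch`/`os.walk` (this is not nbgrader's actual implementation): if ignore globs are matched only against basenames and matching directories are never pruned from the walk, files under an ignored directory still surface:

```python
import fnmatch
import os
import tempfile

def find_all_files(root, ignore_globs):
    """Walker reproducing the reported behaviour: globs are checked
    against file basenames only, and matching directories are NOT
    pruned, so their contents are still visited and returned."""
    found = []
    for dirpath, dirnames, filenames in os.walk(root):
        for name in filenames:
            if not any(fnmatch.fnmatch(name, g) for g in ignore_globs):
                found.append(os.path.join(dirpath, name))
    return found

def find_all_files_pruned(root, ignore_globs):
    """The expected behaviour: prune matching directories in place
    so os.walk never descends into them."""
    found = []
    for dirpath, dirnames, filenames in os.walk(root):
        dirnames[:] = [d for d in dirnames
                       if not any(fnmatch.fnmatch(d, g) for g in ignore_globs)]
        for name in filenames:
            if not any(fnmatch.fnmatch(name, g) for g in ignore_globs):
                found.append(os.path.join(dirpath, name))
    return found

with tempfile.TemporaryDirectory() as root:
    os.makedirs(os.path.join(root, "foo", "bar", "qwe"))
    open(os.path.join(root, "foo", "bar", "qwe", "file.py"), "w").close()
    print(find_all_files(root, ["bar"]))         # file.py still found
    print(find_all_files_pruned(root, ["bar"]))  # []
```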
closed
2018-05-01T15:15:10Z
2018-05-03T22:49:21Z
https://github.com/jupyter/nbgrader/issues/952
[ "bug" ]
hcastilho
1
sktime/pytorch-forecasting
pandas
1,082
Why must the first element of `mode` in `base_model.predict()` be "raw" if a tuple is specified?
https://github.com/jdb78/pytorch-forecasting/blob/master/pytorch_forecasting/models/base_model.py line 1179 Thanks!
open
2022-08-02T12:16:07Z
2022-08-02T12:16:07Z
https://github.com/sktime/pytorch-forecasting/issues/1082
[]
liaoyuhua
0
amidaware/tacticalrmm
django
1,955
[UI tweak] invert disks space and custom fields positions
**Is your feature request related to a problem? Please describe.** Currently it is impossible to get a clear read-through if you have a lot of custom fields or big values in them, due to the small space in the middle column of the UI. **Describe the solution you'd like** Invert the space that the disks are taking and give it to the custom fields, to allow longer fields and more of them to be displayed at once without scrolling. **Describe alternatives you've considered** Reports? A custom UI? **Additional context** ![image](https://github.com/user-attachments/assets/35835888-1402-4fbe-8d77-1f5e05ca9718) https://discord.com/channels/736478043522072608/889513877124042762/1268858255174799403
open
2024-08-03T06:39:53Z
2024-08-03T06:47:52Z
https://github.com/amidaware/tacticalrmm/issues/1955
[]
P6g9YHK6
1
ray-project/ray
tensorflow
50,679
[core] Cover cpplint for ray/src/ray/scheduling
## Description As part of the initiative to introduce cpplint into the pre-commit hook, we are gradually cleaning up C++ folders to ensure compliance with code style requirements. This issue focuses on cleaning up `ray/src/ray/scheduling`. ## Goal - Ensure all `.h` and `.cc` files in `ray/src/ray/scheduling` comply with cpplint rules. - Address or suppress all cpplint warnings. - Add `ray/src/ray/scheduling` to the pre-commit hook once it is clean. ### Steps to Complete 1. Checkout the latest main branch and install the pre-commit hook. 2. Manually modify all C++ files in `ray/src/ray/scheduling` to trigger cpplint (e.g., by adding a newline). 3. Run `git commit` to trigger cpplint and identify issues. 4. Fix the reported issues or suppress them using clang-tidy if necessary. 5. Once all warnings are resolved, update the pre-commit hook to include `ray/src/ray/scheduling`. This is a sub issue from #50583
closed
2025-02-18T03:31:31Z
2025-02-21T15:19:55Z
https://github.com/ray-project/ray/issues/50679
[ "enhancement", "core" ]
400Ping
2
Yorko/mlcourse.ai
plotly
619
Missing athlete_events.csv in data
Missing athlete_events.csv in data
closed
2019-09-15T18:34:13Z
2019-09-15T18:36:28Z
https://github.com/Yorko/mlcourse.ai/issues/619
[]
iptkachev
1
facebookresearch/fairseq
pytorch
5,177
RuntimeError
Given groups=1, weight of size [1024, 80, 5], expected input[11, 240, 2630] to have 80 channels, but got 240 channels instead #### Code <!-- Please paste a code snippet if your question requires it! --> fairseq-train $DATA_ROOT \ --config-yaml config.yaml --multitask-config-yaml config_multitask.yaml \ --task speech_to_speech --n-frames-per-step 5 \ --criterion speech_to_spectrogram \ --arch s2spect_transformer_fisher --decoder-normalize-before \ --dropout 0.1 --attention-dropout 0.1 --relu-dropout 0.1 \ --train-subset train --valid-subset dev \ --save-dir ${MODEL_DIR} \ --eval-inference --best-checkpoint-metric mcd_loss \ --lr 0.0005 --lr-scheduler inverse_sqrt --warmup-init-lr 1e-7 --warmup-updates 10000 \ --optimizer adam --adam-betas "(0.9,0.98)" --clip-norm 10.0 --weight-decay 1e-6 \ --max-update 400000 --max-tokens 80000 --max-tokens-valid 30000 --required-batch-size-multiple 1 \ --max-target-positions 3000 --update-freq 16 \ --seed 1 --fp16 --num-workers 8 #### What have you tried? #### What's your environment? - fairseq Version (e.g., 1.0 or main): - PyTorch Version (e.g., 1.0) - OS (e.g., Linux): - How you installed fairseq (`pip`, source): - Build command you used (if compiling from source): - Python version: - CUDA/cuDNN version: - GPU models and configuration: - Any other relevant information:
open
2023-05-30T22:23:52Z
2023-05-30T22:23:52Z
https://github.com/facebookresearch/fairseq/issues/5177
[ "question", "needs triage" ]
merethebest
0
uriyyo/fastapi-pagination
fastapi
1,134
Is there any way to limit the total value?
Hi again! I'm using motor. Is there any way to limit the Total value? I couldn't find a way other than applying a first `$limit` stage in the aggregate and then keeping `paginate_aggregate` as usual.
open
2024-04-21T08:57:59Z
2024-09-29T12:09:50Z
https://github.com/uriyyo/fastapi-pagination/issues/1134
[ "question" ]
jochman
5
cvat-ai/cvat
pytorch
9,112
How to use external S3 Bucket as CVAT Backend storage?
Hello! Currently my CVAT backend configuration uses a PV & PVC in K8s. I have deployed it via the Helm chart. The problem is that my cloud provider supports only the RWO access mode for storage, so this architecture causes scalability problems - backend pods must be launched strictly on the K8s node with this PV attached. So the solution, it seems to me, is to **use an S3 bucket** instead of PV storage. But I searched through the docs and couldn't understand how to use an S3 bucket for backend storage. Can you please help me find a solution for this? This is my current configuration. What do I have to change to **use an S3 bucket as the CVAT backend storage**? ``` cvat: backend: defaultStorage: enabled: true storageClassName: <my-cloud-provider-class-name> accessModes: - ReadWriteOnce size: "2600Gi" ``` My cvat backend version: v2.11.0
closed
2025-02-17T11:31:15Z
2025-02-18T05:52:08Z
https://github.com/cvat-ai/cvat/issues/9112
[ "question" ]
A-Kod
4
unit8co/darts
data-science
2,408
[FEATURE REQUEST] Add temporal_hidden_past and temporal_hidden_future hyperparams to TiDEModel
The implementation of TiDE currently uses `hidden_size` for both the hidden layer of the covariates encoder (applied timestep-wise) and for the dense encoder (applied after flattening). This does not seem right as in most cases this would result in either absurd expansion in the covariate encoder or extreme compression in the dense encoder. In the original Google Research paper the authors do not seem to comment on the hidden size of the covariate encoder, but reviewing their Github repo it appears that it was [fixed to 64.](https://github.com/google-research/google-research/blob/master/tide/train.py) ``` # Create model model_config = { 'model_type': 'dnn', 'hidden_dims': [FLAGS.hidden_size] * FLAGS.num_layers, 'time_encoder_dims': [64, 4], # hidden, output 'decoder_output_dim': FLAGS.decoder_output_dim, 'final_decoder_hidden': FLAGS.final_decoder_hidden, 'batch_size': dtl.batch_size, } ``` I think this should be exposed as a hyperparameter as in most cases it should be much smaller than the dense encoder `hidden_size`, as it seems to have been in the original experiments .
closed
2024-06-12T17:49:07Z
2024-07-19T10:18:34Z
https://github.com/unit8co/darts/issues/2408
[ "good first issue", "improvement" ]
eschibli
0
albumentations-team/albumentations
machine-learning
1,720
The bbox obtained with HorizontalFlip has a pixel difference
Here is my calculation process: bboxes = [[0, 0, 100, 100]] after HorizontalFlip bboxes_flipped = [[540, 0, 640, 100]] The height and width of the image are 480, 640, so the correct bbox_flipped should be [[539, 0, 639, 100]], which is a confusing issue for me. I hope it can be answered. Thanks!
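The one-pixel difference likely comes down to the coordinate convention: if `x_max` is exclusive (a width-style coordinate), a horizontal flip maps `[x_min, x_max)` to `[W - x_max, W - x_min)`, and `540` is correct; the expected `539`/`639` corresponds to treating `x_max` as the inclusive index of the last pixel. A sketch of both (assuming, not confirming, which convention the library uses):

```python
def flip_bbox_exclusive(bbox, width):
    """Horizontal flip when x_max is *exclusive*:
    [x_min, x_max) maps to [width - x_max, width - x_min)."""
    x_min, y_min, x_max, y_max = bbox
    return (width - x_max, y_min, width - x_min, y_max)

def flip_bbox_inclusive(bbox, width):
    """Horizontal flip when x_max is the *inclusive* index of the
    last covered pixel: each x maps to width - 1 - x."""
    x_min, y_min, x_max, y_max = bbox
    return (width - 1 - x_max, y_min, width - 1 - x_min, y_max)

print(flip_bbox_exclusive((0, 0, 100, 100), 640))  # (540, 0, 640, 100)
print(flip_bbox_inclusive((0, 0, 100, 100), 640))  # (539, 0, 639, 100)
```

Note the exclusive form is its own inverse: flipping `(540, 0, 640, 100)` again returns `(0, 0, 100, 100)`, so no pixel is gained or lost under that convention.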
closed
2024-05-11T06:06:21Z
2024-05-14T02:03:37Z
https://github.com/albumentations-team/albumentations/issues/1720
[ "bug" ]
the-return-of-the-return
2
JaidedAI/EasyOCR
pytorch
345
About text detection data generation for korean language
Could you please provide information about the text detection data generation process?
closed
2021-01-06T07:55:28Z
2022-03-02T09:24:28Z
https://github.com/JaidedAI/EasyOCR/issues/345
[]
bharatsubedi
0
dgtlmoon/changedetection.io
web-scraping
2,407
Default `User-Agent` header could cause unintended consequences
This is more of an informative message than a bug. The default user agent is configured in `/settings#fetching` and is configurable. Although, some sites can behave differently when a browser user agent is supplied. ``` ; curl 'https://jira.atlassian.com/rest/issueNav/1/issueTable' -H 'X-Atlassian-Token: no-check' -X POST -H 'User-Agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/87.0.4280.66 Safari/537.36' -w '%{http_code}\n' XSRF check failed404 ; curl 'https://jira.atlassian.com/rest/issueNav/1/issueTable' -H 'X-Atlassian-Token: no-check' -X POST -H 'User-Agent: curl' -w '%{http_code}\n' 400 ``` Found this the hard way.
open
2024-06-11T17:30:54Z
2024-06-13T11:08:25Z
https://github.com/dgtlmoon/changedetection.io/issues/2407
[ "triage" ]
Hritik14
2
yzhao062/pyod
data-science
337
COPOD explain_outlier method error
As mentioned in #258, explain_outlier is broken in the latest version. We are fixing this at the moment. As a temporary solution, you could use PyOD V0.9.0 or earlier. `pip install pyod==0.9.0`
closed
2021-09-01T16:41:11Z
2021-12-25T02:14:36Z
https://github.com/yzhao062/pyod/issues/337
[]
yzhao062
0
pydata/xarray
pandas
9,232
Deprecate vectors of size-2 in `cross`
### What is your issue? numpy 2 has deprecated passing 2D vectors to `np.cross`. We should presumably do the same, or fix it so that the numpy warning isn't raised. I deleted the doctests in https://github.com/pydata/xarray/pull/9177 to fix CI.
open
2024-07-11T09:27:13Z
2024-07-11T09:27:13Z
https://github.com/pydata/xarray/issues/9232
[]
dcherian
0
sebp/scikit-survival
scikit-learn
4
Example using `predict()`
More of a feature request than an issue, but I'm trying to use sksurv for some predictive modeling and am having a hard time interpreting the `predict` output for most of the estimators. For example, with `GradientBoostingSurvivalAnalysis` the `predict(X)` output is described as the hazard for X. The output from my test data, however, is often negative, which doesn't make sense for a traditional hazard rate. Is there any way to include a `predict` step in your current example notebook? More than anything, I would like to be able to predict the survival duration of items not in my training data (like `predict_expected` in the lifelines library). If that functionality doesn't exist, I would be happy to submit a pull request to that end if you can advise me a bit on how to interpret the current `predict` output. And a million thanks for making this code available and easy to install/use. This is cool stuff!
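For later readers: a common reading (an assumption here, not a statement from the sksurv docs quoted above) is that `predict` returns a relative risk score — only the ranking matters, so negative values are fine, and higher scores should correspond to shorter survival. A minimal pure-Python sketch of checking that ranking via a concordance-style count:

```python
def concordance(times, events, scores):
    """Fraction of comparable pairs where the higher risk score belongs to
    the subject with the shorter observed survival time.
    times/events/scores are parallel lists; event=1 means death observed."""
    num, den = 0, 0
    n = len(times)
    for i in range(n):
        for j in range(n):
            # A pair is comparable if i had an observed event before j's time.
            if events[i] == 1 and times[i] < times[j]:
                den += 1
                if scores[i] > scores[j]:
                    num += 1
                elif scores[i] == scores[j]:
                    num += 0.5
    return num / den

# Risk scores may be negative; only the ordering matters.
times = [2, 5, 7, 10]
events = [1, 1, 1, 1]
scores = [1.3, 0.2, -0.4, -1.1]  # decreasing with survival time -> perfect ranking
print(concordance(times, events, scores))  # 1.0
```

A score of 1.0 means the model's ordering matches the observed survival ordering perfectly; 0.5 would be random. This is only an interpretation aid, not a substitute for a `predict_survival_function`-style duration estimate.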
closed
2017-07-01T15:18:23Z
2017-09-07T19:50:01Z
https://github.com/sebp/scikit-survival/issues/4
[]
btengels
4
tflearn/tflearn
data-science
297
list index out of range when training 2 models in sequenece
Hi, I'm performing cross-validation, so I have my training call inside some loops (that vary the parameters). In pseudocode it's something like:

param1 = [1, 2, 3]
param2 = [5, 6, 7]
for i in param1:
    for j in param2:
        model = train_model(i, j)
        store_metrics(model)
model = select_best()

But for some reason the code only works for one iteration. If I manually call train_model(param1, param2) in different Python sessions, it works fine, so I know my problem is not related to parameter values. Instead it looks as if some cache, temporary files, or session files break my code, as if only one train_model call can be done per Python session. Any ideas? Something like clear_temps()?
closed
2016-08-23T07:05:20Z
2016-08-24T02:40:19Z
https://github.com/tflearn/tflearn/issues/297
[]
llealgt
2
nvbn/thefuck
python
600
Error when correcting `git push` to branch with quotes in name from fish
When called from fish, using thefuck to correct a missing upstream branch fails if the branch name contains quotes or special characters. Example below, ``` ~/my-app> git push fatal: The current branch feat/let's-do-this has no upstream branch. To push the current branch and set the remote as upstream, use git push --set-upstream origin feat/let's-do-this ~/my-app> fuck git push --set-upstream origin feat/let's-do-this [enter/↑/↓/ctrl+c] Unexpected end of string, quotes are not balanced - (line 1): begin; git push --set-upstream origin feat/let's-do-this ^ from sourcing file - called on line 60 of file /usr/local/Cellar/fish/2.4.0/share/fish/functions/eval.fish in function 'eval' called on line 1 of file - in function 'fuck' called on standard input source: Error while reading file '-' ```
closed
2017-01-30T19:45:42Z
2018-01-04T16:40:02Z
https://github.com/nvbn/thefuck/issues/600
[ "fish" ]
mintyfresh
1
frappe/frappe
rest-api
31,758
Wrong Setup Command
### Information about bug In the setup guide one of the first commands mentioned doesn’t work. The user is told „docker compose“ was the right command but in fact it’s „docker-compose“. This might cause people to get frustrated when installing it on their machine so I wanted to point this out. It’s a pretty small difference and thus hard to even notice. ### Module other ### Version ERPNext Version 15 ### Installation method None ### Relevant log output / Stack trace / Full Error Message. ```shell ```
closed
2025-03-13T14:45:15Z
2025-03-17T10:44:17Z
https://github.com/frappe/frappe/issues/31758
[ "bug" ]
LanzelotSniper
2
waditu/tushare
pandas
1,179
603185 financial report data is incomplete
603185 is missing the balance sheets for 2016 and 2017
closed
2019-10-25T03:07:45Z
2019-10-31T05:16:16Z
https://github.com/waditu/tushare/issues/1179
[]
256481788jianghao
1
ansible/awx
django
15,158
AWX 24.3.1 - quay.io/ansible/awx:24.3.1: not found
### Please confirm the following - [X] I agree to follow this project's [code of conduct](https://docs.ansible.com/ansible/latest/community/code_of_conduct.html). - [X] I have checked the [current issues](https://github.com/ansible/awx/issues) for duplicates. - [X] I understand that AWX is open source software provided for free and that I might not receive a timely response. - [X] I am **NOT** reporting a (potential) security vulnerability. (These should be emailed to `security@ansible.com` instead.) ### Bug Summary Error after updating to version 24.3.1. The image: quay.io/ansible/awx:24.3.1 is not available. ### AWX version 24.3.1. ### Select the relevant components - [ ] UI - [ ] UI (tech preview) - [ ] API - [ ] Docs - [ ] Collection - [ ] CLI - [ ] Other ### Installation method kubernetes ### Modifications no ### Ansible version _No response_ ### Operating system _No response_ ### Web browser _No response_ ### Steps to reproduce Install AWX 24.3.1. ### Expected results Running correctly ### Actual results Failed to pull image "quay.io/ansible/awx:24.3.1": rpc error: code = NotFound desc = failed to pull and unpack image "quay.io/ansible/awx:24.3.1": failed to resolve reference "quay.io/ansible/awx:24.3.1": quay.io/ansible/awx:24.3.1: not found ### Additional information _No response_
closed
2024-04-30T22:39:14Z
2024-05-01T02:28:50Z
https://github.com/ansible/awx/issues/15158
[ "type:bug", "needs_triage", "community" ]
moreirodamian
3
ipython/ipython
jupyter
14,237
Sphinx directive is broken without pickleshare
IPython 8.17.0 dropped `pickleshare` as a requirement (https://github.com/ipython/ipython/pull/14217). This apparently caused the IPython Sphinx directive to break with cryptic errors after upgrading: ```bash UsageError: %bookmark -d: Can't delete bookmark 'ipy_savedir' UsageError: %bookmark -d: Can't delete bookmark 'ipy_savedir' UsageError: %bookmark -d: Can't delete bookmark 'ipy_savedir' UsageError: %bookmark -d: Can't delete bookmark 'ipy_savedir' UsageError: %bookmark -d: Can't delete bookmark 'ipy_savedir' UsageError: Bookmark 'ipy_savedir' not found. Use '%bookmark -l' to see your bookmarks. UsageError: Bookmark 'ipy_thisdir' not found. Use '%bookmark -l' to see your bookmarks. UsageError: %bookmark -d: Can't delete bookmark 'ipy_thisdir' UsageError: %bookmark -d: Can't delete bookmark 'ipy_savedir' UsageError: Bookmark 'ipy_savedir' not found. Use '%bookmark -l' to see your bookmarks. UsageError: Bookmark 'ipy_thisdir' not found. Use '%bookmark -l' to see your bookmarks. UsageError: %bookmark -d: Can't delete bookmark 'ipy_thisdir' UsageError: %bookmark -d: Can't delete bookmark 'ipy_savedir' UsageError: %bookmark -d: Can't delete bookmark 'ipy_savedir' UsageError: Bookmark 'ipy_savedir' not found. Use '%bookmark -l' to see your bookmarks. UsageError: Bookmark 'ipy_thisdir' not found. Use '%bookmark -l' to see your bookmarks. UsageError: %bookmark -d: Can't delete bookmark 'ipy_thisdir' UsageError: %bookmark -d: Can't delete bookmark 'ipy_savedir' UsageError: %bookmark -d: Can't delete bookmark 'ipy_savedir' UsageError: Bookmark 'ipy_savedir' not found. Use '%bookmark -l' to see your bookmarks. UsageError: Bookmark 'ipy_thisdir' not found. Use '%bookmark -l' to see your bookmarks. 
UsageError: %bookmark -d: Can't delete bookmark 'ipy_thisdir' UsageError: %bookmark -d: Can't delete bookmark 'ipy_savedir' UsageError: %bookmark -d: Can't delete bookmark 'ipy_savedir' UsageError: %bookmark -d: Can't delete bookmark 'ipy_savedir' UsageError: Bookmark 'ipy_savedir' not found. Use '%bookmark -l' to see your bookmarks. UsageError: Bookmark 'ipy_thisdir' not found. Use '%bookmark -l' to see your bookmarks. UsageError: %bookmark -d: Can't delete bookmark 'ipy_thisdir' UsageError: %bookmark -d: Can't delete bookmark 'ipy_savedir' ``` The issue is fixed by `pip install pickleshare`. - [CI without pickleshare (failed)](https://github.com/team-ocean/veros/actions/runs/6744471166/job/18398441524) - [CI with pickleshare (passed)](https://github.com/team-ocean/veros/actions/runs/6777576176/job/18421459368?pr=550) - [Sphinx RST file causing the error](https://github.com/team-ocean/veros/blob/main/doc/tutorial/analysis.rst)
open
2023-11-06T23:08:26Z
2023-11-06T23:08:26Z
https://github.com/ipython/ipython/issues/14237
[]
dionhaefner
0
automl/auto-sklearn
scikit-learn
855
What is the difference between ensemble_size and ensemble_nbest?
Hi: Thank you for developing this project, which is very helpful to non-experts in machine learning (like me). I am not very clear about the meaning of these two parameters: `ensemble_size` && `ensemble_nbest`. My current perception is that `ensemble_size` is used to control the number of models in the ensemble, and `ensemble_nbest` is used to control the number of models added to the ensemble during each iteration. That is just my guess; please tell me the correct concept. Thanks
closed
2020-05-17T05:48:26Z
2020-05-18T16:12:03Z
https://github.com/automl/auto-sklearn/issues/855
[]
weir12
2
plotly/dash-table
plotly
956
Dropdown won't display
I am trying to create a dropdown feature inside a DataTable like this: ![Screenshot 2023-10-05 at 02 34 28](https://github.com/plotly/dash-table/assets/72039004/4dddb053-86a1-425e-aa29-dcd4b6f59aae) Unfortunately I cannot click on the dropdown button and then trigger the full menu showing. Instead I have to manually type in the value I want into the cell for the column. For the working code example from the main website, [here](https://dash.plotly.com/datatable/dropdowns), I get this HTML upon inspection - the .Select-menu-outer in particular seems to be what we're after. ![Screenshot 2023-10-05 at 02 36 47](https://github.com/plotly/dash-table/assets/72039004/b136159b-1106-41cc-8a15-40c5a02745fb) This doesn't appear to be a CSS override issue as I am unable to trigger this HTML by clicking on the dropdown menu button in the first place, and only have this HTML: ![Screenshot 2023-10-05 at 02 38 46](https://github.com/plotly/dash-table/assets/72039004/a8425cd0-1ef1-445d-99b4-cdfc8bc88ff7) This is my sanitised code with dummy values put for the data variable in case this is causing the error (the data gets updated by a function on uploading a separate CSV that overwrites the data) `html.Div(id='output-table', style={'display': 'block'}, children=[ dash_table.DataTable( id='table', editable=True, columns=[ {"name": "Source", "id": "Source"}, {"name": "Target", "id": "Target"}, {"name": "# of Calls", "id": "# of Calls"}, {"name": "color", "id": "color", 'presentation': 'dropdown', "editable": True} ], data=[ {'Source': 'A', 'Target': 'B', '# of Calls': 10, 'color': 'green'}, {'Source': 'B', 'Target': 'C', '# of Calls': 5, 'color': 'red'}, {'Source': 'C', 'Target': 'D', '# of Calls': 2, 'color': 'darkgrey'} ], dropdown={ 'color': { 'options': [ {'label': 'Happy path', 'value': 'green'}, {'label': 'Unhappy path', 'value': 'red'}, {'label': 'Neutral path', 'value': 'darkgrey'} ] } }, )` Any help on what is causing this?
open
2023-10-05T06:43:22Z
2023-10-05T06:43:22Z
https://github.com/plotly/dash-table/issues/956
[]
TKELKAR123
0
opengeos/leafmap
streamlit
688
Cannot get pixel values with inspector tool (notebook 89)
### Environment Information - leafmap version: 0.31.2 - Python version: 3.11 - Operating System: Windows 10 ### Description I completed notebook 89 without issues. However, when exploring the data in the interactive map, the inspector tool only provides the lat/lon location and provides a TypeError rather than the pixel value. The code is unchanged from notebook 89. ### What I Did ![image](https://github.com/opengeos/leafmap/assets/78507308/e6240e1d-2c89-4db9-ad7b-6eb3aa14a22c) ``` m = leafmap.Map() m.add_raster(satellite, band=[1, 2, 3], nodata=-1, layer_name="Landsat 7") m.add_raster(ndvi_image, colormap="Greens", layer_name="NDVI") m ```
closed
2024-02-20T15:59:23Z
2024-02-21T04:47:48Z
https://github.com/opengeos/leafmap/issues/688
[ "bug" ]
ZZMitch
2
iperov/DeepFaceLab
machine-learning
604
Where to find changelog
Where to find changelog without downloading the full DFL.exe? Couldn't locate it on github..?
closed
2020-02-04T08:14:17Z
2020-03-28T05:44:24Z
https://github.com/iperov/DeepFaceLab/issues/604
[]
wuffenberg
5
MentatInnovations/datastream.io
jupyter
19
Support more complicated matchings between data set columns and anomaly detectors
open
2018-02-20T06:22:44Z
2018-02-20T07:03:12Z
https://github.com/MentatInnovations/datastream.io/issues/19
[ "enhancement" ]
canagnos
0
jmcnamara/XlsxWriter
pandas
615
Feature request: Merge duplicate images
Title: Feature request: Merge duplicate images

Hi, I am using XlsxWriter 1.1.5 to insert the same image on many sheets, like this:

```python
import xlsxwriter

book = xlsxwriter.Workbook("test.xlsx")
for i in range(10):
    sheet = book.add_worksheet("Feuille %s" % (i + 1))
    sheet.insert_image("A1", "python-logo.png")
book.close()
```

It works well, BUT the image is copied 10 times in the file, which makes the file much bigger:

![2019-04-02_150033](https://user-images.githubusercontent.com/25034353/55404438-2e132380-5558-11e9-961f-a45bf247cc5d.png)

I use this code to detect duplicates (with a checksum), remove them and reduce the file size:

```python
from os import path as op
import tempfile
import hashlib
import shutil
import zipfile

from lxml import etree


def merge_images(file, file_dest):
    """
    remove duplicates in the xlsx file
    use it AFTER book.close()
    """
    checksums = {}
    exclude = set()
    tmp_dir = tempfile.mkdtemp()
    with zipfile.ZipFile(file) as xlsx:
        xlsx.extractall(tmp_dir)
        filenames = xlsx.namelist()
    for filename in (f for f in filenames if f.endswith(".rels")):
        path = op.join(tmp_dir, filename)
        root = etree.parse(path)
        for node in root.iter():
            target = node.get("Target")
            if target and target.endswith(".png"):
                img_path = op.realpath(op.join(op.dirname(path), "..", target))
                with open(img_path, "rb") as f:
                    md5 = hashlib.md5(f.read()).hexdigest()
                alt_target = checksums.get(md5)
                # If the checksum exists, replace the target and add the file to the exclude set
                if alt_target and alt_target != target:
                    exclude.add(img_path)
                    node.attrib["Target"] = alt_target
                else:
                    checksums[md5] = target
        with open(path, "wb") as f:
            f.write(etree.tostring(root))
    with zipfile.ZipFile(file_dest, "w", zipfile.ZIP_DEFLATED) as xlsx:
        for filename in filenames:
            path = op.realpath(op.join(tmp_dir, filename))
            # Do not write excluded images
            if path not in exclude:
                xlsx.write(path, filename)
    shutil.rmtree(tmp_dir)


merge_images("test.xlsx", "test_new.xlsx")
```
closed
2019-04-02T13:23:29Z
2019-12-23T01:11:36Z
https://github.com/jmcnamara/XlsxWriter/issues/615
[ "feature request", "medium term" ]
Linekio
6
521xueweihan/HelloGitHub
python
2,053
[Open-source self-nomination] 🧾 Resume Generator: an online resume generator — with a single resume data file you can preview, edit, and download online
## Project recommendation

- Project URL: https://github.com/visiky/resume
- Category: JS
- Future update plans: see https://github.com/visiky/resume/issues
- Description: an online resume generator that makes it easy to build an online resume and download it (supports both desktop and mobile).
- Why recommend it:
  - No need to fork the repository — you can preview, edit, and download the PDF resume online.
  - Three built-in templates, with customizable theme colors and custom sections; [resume template 1](https://visiky.github.io/resume/?user=visiky), [resume template 2](https://visiky.github.io/resume/?template=template2&user=visiky), [resume template 3](https://visiky.github.io/resume/?template=template3&user=visiky).
  - Internationalization support.
  - Online preview, making it easy to share an online resume at any time (the resume data is stored in your own repository).
  - Online editing of resume data, with import, export, and PDF download.
- Screenshot: ![image](https://user-images.githubusercontent.com/15646325/147800052-dbbac1ba-8404-4550-92a6-ba39fe5fa718.png)
closed
2021-12-31T02:54:03Z
2022-01-28T01:21:57Z
https://github.com/521xueweihan/HelloGitHub/issues/2053
[ "已发布", "JavaScript 项目" ]
visiky
2
huggingface/transformers
machine-learning
36,252
`padding_side` is of type `bool` when it should be `Literal['right', 'left']`
### System Info main branch ### Who can help? @amyeroberts ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction https://github.com/huggingface/transformers/blob/main/src/transformers/tokenization_utils_base.py#L2816 ### Expected behavior Should be `Literal['right', 'left']` as mentioned in the docs. https://huggingface.co/docs/transformers/en/main_classes/tokenizer#transformers.PreTrainedTokenizerFast.__call__.padding_side
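For readers unfamiliar with the typing involved, a minimal sketch (names are illustrative, not transformers' actual signature) of how such a parameter is typed with `Literal` so that static checkers reject booleans:

```python
from typing import Literal, get_type_hints, get_args

def pad(text: str, padding_side: Literal["right", "left"] = "right") -> str:
    # Illustrative only: pads with '*' on the requested side.
    if padding_side == "right":
        return text + "*"
    return "*" + text

hints = get_type_hints(pad)
print(get_args(hints["padding_side"]))  # ('right', 'left')
print(pad("ab", "left"))                # *ab
```

With this annotation, a checker like mypy flags `pad("ab", True)` at analysis time, which is the fix the issue asks for.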
closed
2025-02-18T10:12:48Z
2025-03-03T00:55:44Z
https://github.com/huggingface/transformers/issues/36252
[ "bug" ]
winstxnhdw
2
mckinsey/vizro
pydantic
846
ALL option doesn't reflect the right options
### Which package? vizro ### Package version 0.1.26 ### Description Not sure whether it's worth fixing now or waiting until we have the new and improved "tick all" functionality though. It's kind of a big bug but the fact that no one spotted it yet shows it can't be that important... There's already an obvious problem with the `ALL` option working "dynamically" like it does now: when you manually set `options` then I would expect `ALL` to mean "all the options you specified" but actually it means "all the options in the data". I don't know why we didn't realise this before because it's pretty obvious in hindsight 🤔 I guess this shows that no one really manually specifies `options` for a `vm.Filter`, which is useful to know anyway. ### How to Reproduce [PyCafe link](https://py.cafe/snippet/vizro/v1#c=H4sIAM93I2cAA41U627aMBR-lci_gkQjYOs6ITFpA9pVWi-iU6eNIGSSA7Hq2J7tABni3XecEEppqmJQfPm-c_Xx2ZBIxkC6JCx-A1h6VCnPSs_q3LMJEwvjycwGJR6KULBUSW09RUVMjYd_Fe8Pl-yflkGKKnkBLdMjSHFpeR7AWmkwpfQ6FHMt05Lg7diPbnMIBFObQAommEaSSz1dUp6Bqej9ux93owfnnaLGed5Dr4IBtfRS0xT8UHg4NuXkRkhKYki63vj5uIQeIinAhKR5DHyjCwysBrjJ5nMm6pA-fapVNZAis7UCUj6xWpG-lswYKmrFhhGnTNc6reVK1Gv8SXWtshsaUS1r47k_1jQ5WCOs5ZxZb0Qtk0Vuz1pBu9X03HReTC2cyrNq3SnWFfy5APDrRDrFBr8OuahMbUPRcHftbnsBeNfLNLjH1e6aLbMcehgdGOvdwsr7U46925HEqhGAiewd3D0qudJUJTst1ZizRaahp9bBjOojzI2yjgKDZbgrSv84DY2jLLohNUMHHC7Q1eRVpt1YI_JSUw0pR1JVyzVw8VpO0VM-K6x8y0QmMzM1EcU0li9rjAV7_TgcXV3fXk1Hw8G0__vrbUgO796Nw0CrdcVxmrXkmHJM9CXjFrSPNrPUxb8PwDPAIbLoMrL6CURPnBnrS-US5a7r-Q2-KO1Jo-EMFWWBbSmZSarjsjIG1dZ31YI63DQpmEWX8RvBLGM89vdyjUBnwkcGaRINfzOmsfNgtWCbLHrRlx6WZtD5WBBsrlz_LADcYvd8ZLAi3TnlBpoEYmaHgs44kqzO8ETlNpECRVQuYxbD2RKfwKegjcKc5thrSXdDlqANRky6HXRBSjuSqHJTGcOwkR0l6LUGJI33iKUzAxbBFYttQrrt81aTpEz8Krcfyt13YIsE7bgti1Fszji4ZBrQfbwnygToNyw46tms5CJFUaeXkO1k23ztxRsuPsthugKVv5Q-Ibr3I9p5-F40VSD17ic7re2L81NsYknjMeXvGa14zqr7bZtFVrG-xpPtf9-1mNiSBwAA) ### Output _No response_ ### Code of Conduct - [X] I agree to follow the [Code of 
Conduct](https://github.com/mckinsey/vizro/blob/main/CODE_OF_CONDUCT.md).
closed
2024-10-31T12:31:14Z
2025-02-12T17:36:21Z
https://github.com/mckinsey/vizro/issues/846
[ "Bug Report :bug:" ]
antonymilne
1
PokeAPI/pokeapi
graphql
418
null region for location/253
when I ask for location/253 it, says the region is 'null', but I think it should be 'johto'
closed
2019-03-01T10:53:48Z
2020-08-19T10:49:13Z
https://github.com/PokeAPI/pokeapi/issues/418
[]
jachymb
1
statsmodels/statsmodels
data-science
8,553
ENH: add var_weights or weights to RLM
(just a random idea) If we have prior information about heteroscedasticity, then we need to include var_weights in RLM. e.g. observations are averages of subsamples with different nobs or inherent heteroscedasticity as in GLM, discrete. also we might want to downweigh x-outliers. (as in literature for GLM) Then we need more general weights, like importance weights. How do those affect inference? **update** old issue for weights including RLM #505 I found issue searching for carroll ruppert because I looked at the reference again by chance Carroll, Raymond J., and David Ruppert. "Robust estimation in heteroscedastic linear models." The annals of statistics (1982): 429-441.
open
2022-12-02T22:00:36Z
2023-09-25T18:13:02Z
https://github.com/statsmodels/statsmodels/issues/8553
[ "type-enh", "comp-robust" ]
josef-pkt
2
littlecodersh/ItChat
api
774
How to get the avatar of a non-friend member in a group chat
The rough idea of the code is as follows:

img = itchat.get_head_img(chatroomUserName=i["UserName"])
file_image = open('0' + ".jpg", 'wb')
file_image.close()

where UserName is the username of a member in the chatroom's MemberList.
open
2018-12-29T02:51:45Z
2018-12-29T05:31:11Z
https://github.com/littlecodersh/ItChat/issues/774
[]
yaocanwei
1
saulpw/visidata
pandas
2,557
[xlsx] Not able to read data from file that is not local and no errors are reported
**Small description**

Not able to read an xlsx file from a location that is not a direct filename. No web or zip file contents are read.

**Data to reproduce**

Example:

```
vd https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/1179998/LSBS_Year_8__2022__Businesses_with_no_employees_Data.xlsx
```

**Steps to reproduce**

See example data. Trying to load from the URL returns no data, and does not report an error.

**Configuration**

VisiData 3.1dev latest develop. Python 3.12.6

**Additional context**

It looks like https://xlrd.readthedocs.io/en/latest/api.html#xlrd.open_workbook accepts a `file_contents` arg, which might make it possible to pass in data loaded from a zip file or the web. This might be an issue with other filetypes, too. Also, this might have been broken for some time.

https://github.com/saulpw/visidata/blob/90f7eeefe48c93f030fa1af7b6cfbbaf90f53653/visidata/loaders/xlsx.py#L37
closed
2024-10-13T21:09:47Z
2024-10-13T22:43:07Z
https://github.com/saulpw/visidata/issues/2557
[ "bug" ]
frosencrantz
2
peerchemist/finta
pandas
120
linear regression for Squeeze Momentum Indicator
- your code doesn't contain `linreg` compared with the one found from trading-view, may I ask why? - yours: ```python @classmethod def SQZMI(cls, ohlc: DataFrame, period: int = 20, MA: Series = None) -> DataFrame: """ Squeeze Momentum Indicator The Squeeze indicator attempts to identify periods of consolidation in a market. In general the market is either in a period of quiet consolidation or vertical price discovery. By identifying these calm periods, we have a better opportunity of getting into trades with the potential for larger moves. Once a market enters into a “squeeze”, we watch the overall market momentum to help forecast the market direction and await a release of market energy. :param pd.DataFrame ohlc: 'open, high, low, close' pandas DataFrame :period: int - number of periods to take into consideration :MA pd.Series: override internal calculation which uses SMA with moving average of your choice :return pd.Series: indicator calcs as pandas Series SQZMI['SQZ'] is bool True/False, if True squeeze is on. If false, squeeeze has fired. 
""" if not isinstance(MA, pd.core.series.Series): ma = pd.Series(cls.SMA(ohlc, period)) else: ma = None bb = cls.BBANDS(ohlc, period=period, MA=ma) kc = cls.KC(ohlc, period=period, kc_mult=1.5) comb = pd.concat([bb, kc], axis=1) def sqz_on(row): if row["BB_LOWER"] > row["KC_LOWER"] and row["BB_UPPER"] < row["KC_UPPER"]: return True else: return False comb["SQZ"] = comb.apply(sqz_on, axis=1) return pd.Series(comb["SQZ"], name="{0} period SQZMI".format(period)) ``` - trading-view: ```js // // @author LazyBear // List of all my indicators: https://www.tradingview.com/v/4IneGo8h/ // study(shorttitle = "SQZMOM_LB", title="Squeeze Momentum Indicator [LazyBear]", overlay=false) length = input(20, title="BB Length") mult = input(2.0,title="BB MultFactor") lengthKC=input(20, title="KC Length") multKC = input(1.5, title="KC MultFactor") useTrueRange = input(true, title="Use TrueRange (KC)", type=bool) // Calculate BB source = close basis = sma(source, length) dev = multKC * stdev(source, length) upperBB = basis + dev lowerBB = basis - dev // Calculate KC ma = sma(source, lengthKC) range = useTrueRange ? tr : (high - low) rangema = sma(range, lengthKC) upperKC = ma + rangema * multKC lowerKC = ma - rangema * multKC sqzOn = (lowerBB > lowerKC) and (upperBB < upperKC) sqzOff = (lowerBB < lowerKC) and (upperBB > upperKC) noSqz = (sqzOn == false) and (sqzOff == false) val = linreg(source - avg(avg(highest(high, lengthKC), lowest(low, lengthKC)),sma(close,lengthKC)), lengthKC,0) bcolor = iff( val > 0, iff( val > nz(val[1]), lime, green), iff( val < nz(val[1]), red, maroon)) scolor = noSqz ? blue : sqzOn ? black : gray plot(val, color=bcolor, style=histogram, linewidth=4) plot(0, color=scolor, style=cross, linewidth=2) ```
open
2021-06-28T18:15:34Z
2022-09-02T14:41:05Z
https://github.com/peerchemist/finta/issues/120
[]
tesla-cat
7
deepset-ai/haystack
machine-learning
9,085
Placeholder Issue for 2.12 milestone
Since we can't share the milestone tag between repos we are creating a placeholder one to collect issues that should be added to the 2.12 milestone as sub-issues here
open
2025-03-21T08:47:24Z
2025-03-21T08:48:15Z
https://github.com/deepset-ai/haystack/issues/9085
[]
sjrl
0
pytest-dev/pytest-html
pytest
492
Unable to fetch the path for theHTML report using the item.config.option.html in pytest-html
Hey, I'm just trying to build a new HTML report for the test cases and attach a screenshot for each failed test to my HTML report. I found this piece of code, but I was unable to get the path of the HTML report being created. Is there a better way to get the path without hard-coding it? Please share.

OS: Windows 64
pytest==6.2.5
pytest-html==3.1.1

```python
@pytest.hookimpl(hookwrapper=True)
def pytest_runtest_makereport(item, call):
    pytest_html = item.config.pluginmanager.getplugin("html")
    outcome = yield
    report = outcome.get_result()
    extra = getattr(report, "extra", [])
    if report.when == "call":
        extra.append(pytest_html.extras.url("https://www.saucedemo.com/"))
        xfail = hasattr(report, "wasxfail")
        if (report.skipped and xfail) or (report.failed and not xfail):
            report_dir = os.path.dirname(item.config.option.html)
            file = report.nodeid.replace("::", "_") + ".png"
            dest = os.path.join(report_dir, file)
            driver.save_screenshot(dest)
        extra.append(pytest_html.extras.html("<div>Additional HTML</div>"))
    report.extra = extra
```
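For anyone hitting this: pytest-html registers the `--html` option under the dest name `htmlpath`, so the value is usually available as `item.config.option.htmlpath` rather than `.html` (worth verifying against your pytest-html version). Deriving the screenshot destination from it is then plain path arithmetic, sketched here outside of any pytest hook:

```python
import os

def screenshot_dest(html_report_path, nodeid):
    # Place the screenshot next to the HTML report, named after the test nodeid.
    # e.g. "test_login.py::test_ok" -> "<report dir>/test_login.py_test_ok.png"
    report_dir = os.path.dirname(os.path.abspath(html_report_path))
    filename = nodeid.replace("::", "_") + ".png"
    return os.path.join(report_dir, filename)

dest = screenshot_dest("reports/report.html", "test_login.py::test_ok")
print(os.path.basename(dest))  # test_login.py_test_ok.png
```

In the hook above you would call this as `screenshot_dest(item.config.option.htmlpath, report.nodeid)` instead of hard-coding the directory.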
open
2022-01-25T08:03:28Z
2022-01-25T20:33:55Z
https://github.com/pytest-dev/pytest-html/issues/492
[]
pulkit-rajpal
1
fugue-project/fugue
pandas
477
[BUG] Should remove tests folder from Fugue package
In setup.py, the `find_packages` was used without `exclude`, so when building the Fugue package, it includes the folder `tests`. See this [slack chat](https://fugue-project.slack.com/archives/C015RGNUW77/p1685608389038899) for an example issue.
closed
2023-06-06T06:23:48Z
2023-06-06T06:58:30Z
https://github.com/fugue-project/fugue/issues/477
[ "bug" ]
goodwanghan
0
HumanSignal/labelImg
deep-learning
382
Bug: incorrectly rewrite the image file
<!-- -->

After saving the labels as a .xml file, if you do more labeling work and save again, the annotations will be wrongly saved as a .jpg file without any notification, which leads to the loss of the original image file.

- **OS:** Win10 Pro

![lanelimg_bug](https://user-images.githubusercontent.com/35131630/47256031-444e8b00-d4b4-11e8-82ab-47eaffdc418c.PNG)

- **PyQt version:**
closed
2018-10-20T13:13:57Z
2018-12-09T19:50:36Z
https://github.com/HumanSignal/labelImg/issues/382
[]
liziken
2
scikit-learn/scikit-learn
python
30,652
Inconsistent FutureWarning when using `force_int_remainder_cols=True` in `ColumnTransformer`
### Describe the bug Calling fit on a pipeline that includes a `ColumnTransformer` step with `remainder="passthrough"` and `force_int_remainder_cols=True` (the default value as in v1.6) raises a `FutureWarning: The format of the columns of the 'remainder' transformer in ColumnTransformer.transformers_ will change in version 1.7 to match the format of the other transformers.` Calling a cross-validation doesn't. ### Steps/Code to Reproduce ```python import pandas as pd from sklearn.compose import make_column_selector as selector from sklearn.model_selection import cross_validate from sklearn.pipeline import make_pipeline from sklearn.compose import ColumnTransformer from sklearn.preprocessing import OrdinalEncoder from sklearn.ensemble import HistGradientBoostingClassifier data = pd.DataFrame({ "quarters": ["Q1", "Q2", "Q3", "Q1", "Q3"], "profit": [4.20, 7.70, 9.20, 4.26, 1.84], "expenses": [3.32, 3.32, 3.32, 2.21, 2.21], } ) target = pd.Series([0, 1, 0, 1, 0]) categorical_columns_selector = selector(dtype_include=object) categorical_columns = categorical_columns_selector(data) categorical_preprocessor = OrdinalEncoder( handle_unknown="use_encoded_value", unknown_value=-1 ) preprocessor = ColumnTransformer( [("categorical", categorical_preprocessor, categorical_columns)], remainder="passthrough", ) model = make_pipeline(preprocessor, HistGradientBoostingClassifier()) model.fit(data, target) # raises FutureWarning cross_validate(model, data, target, cv=2) # does not raise FutureWarning ``` ### Expected Results Warning should be raised when cross-validating as well. At least for the first internal fit. ### Actual Results Warning is not raised when cross-validating. ### Versions ```shell Python dependencies: sklearn: 1.6.1 pip: 24.3.1 setuptools: 75.6.0 numpy: 2.2.0 scipy: 1.14.1 Cython: None pandas: 2.2.3 matplotlib: 3.10.0 joblib: 1.4.2 threadpoolctl: 3.5.0 ```
closed
2025-01-15T16:20:16Z
2025-01-20T14:43:42Z
https://github.com/scikit-learn/scikit-learn/issues/30652
[ "Bug" ]
ArturoAmorQ
3
google-deepmind/graph_nets
tensorflow
91
Any function to perform weighted average as a reducer?
By default, the network uses [unsorted_segment_sum](https://www.tensorflow.org/api_docs/python/tf/math/unsorted_segment_sum) in `node_block_opt` and `global_block_opt`. Is there an implementation of a reducer with learned weights that performs a weighted average (or any other learned aggregation)? I've tried to implement it, but it is too hard to replace `unsorted_segment_sum`.
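One standard construction (a sketch, not an official graph_nets API) expresses a learned weighted average as two segment sums: sum(w·x) / sum(w) per segment, where the per-node weights w come from a small learned network. Shown here in NumPy; `tf.math.unsorted_segment_sum` behaves the same way on these arguments, so the reducer drops in without replacing the segment-sum primitive:

```python
import numpy as np

def unsorted_segment_sum(data, segment_ids, num_segments):
    # NumPy stand-in for tf.math.unsorted_segment_sum.
    out = np.zeros((num_segments,) + data.shape[1:], dtype=data.dtype)
    np.add.at(out, segment_ids, data)
    return out

def weighted_segment_mean(data, weights, segment_ids, num_segments, eps=1e-8):
    # weights: per-node scalars (e.g. produced by a small learned MLP).
    w = weights[:, None]  # broadcast over the feature dimension
    num = unsorted_segment_sum(data * w, segment_ids, num_segments)
    den = unsorted_segment_sum(np.ones_like(data) * w, segment_ids, num_segments)
    return num / (den + eps)

data = np.array([[1.0], [3.0], [10.0]])
weights = np.array([1.0, 3.0, 1.0])  # node 1 counts 3x within segment 0
ids = np.array([0, 0, 1])
print(weighted_segment_mean(data, weights, ids, 2))
# segment 0: (1*1 + 3*3) / (1 + 3) = 2.5 ; segment 1: 10.0
```

Because both numerator and denominator are built from `unsorted_segment_sum`, gradients flow into `weights`, so the aggregation itself is learnable.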
closed
2019-10-31T16:56:52Z
2019-12-12T18:30:08Z
https://github.com/google-deepmind/graph_nets/issues/91
[]
JoaoLages
5
InstaPy/InstaPy
automation
5,849
Interact with users not following you
Is there a way to set up the set_relationship_bounds function so as to only interact with accounts that don't currently follow you?
closed
2020-10-27T09:33:34Z
2021-07-21T07:18:33Z
https://github.com/InstaPy/InstaPy/issues/5849
[ "wontfix" ]
ghost
1
opengeos/leafmap
jupyter
203
Leafmap (default backend) and Streamlit for vector tile layers
hi 👋

### Environment Information

- leafmap version: 0.7.7
- streamlit version: 1.5.1
- Python version: 3.9.0
- Operating System: MacOS

### Description

I want to run leafmap in streamlit with custom vector tile layers. The page is displayed but the browser console overflows with errors. Eventually it goes unresponsive with this error looping infinitely:

```bash
Uncaught (in promise) Error: Invalid LatLng object: (53.24878214338736, NaN)
    at new B (leaflet-src.js:1365:11)
    at Object.unproject (leaflet-src.js:1667:12)
    at Object.pointToLatLng (leaflet-src.js:1506:28)
    at n.unproject (leaflet-src.js:3963:29)
    at n.layerPointToLatLng (leaflet-src.js:3971:17)
    at n.containerPointToLatLng (leaflet-src.js:4028:17)
    at n._update (leaflet-src.js:5493:10)
    at n.whenReady (leaflet-src.js:4428:15)
    at n.onAdd (leaflet-src.js:5469:9)
    at n.addTo (leaflet-src.js:4757:44)
```

The error does not occur when falling back to folium instead, so my guess is that I'm not supposed to use the default plotting backend. However folium does not allow me to use `m.add_vector_tile_layer()`.

### What I Did

Streamlit app:

```python
from leafmap import leafmap

m = leafmap.Map(
    center=[53.25, 13.37],
    zoom=8,
    draw_control=False,
    measure_control=False,
    fullscreen_control=True,
    attribution_control=False,
)
m.to_streamlit()
```
closed
2022-02-21T19:28:38Z
2022-03-12T04:36:54Z
https://github.com/opengeos/leafmap/issues/203
[ "bug" ]
iwpnd
3
benbusby/whoogle-search
flask
1,030
[BUG] Replit.com Code Does not work Anymore
Hangs forever then stops after this is displayed in the console: Requirement already satisfied: toml in ./venv/lib/python3.8/site-packages (from pytest==6.2.5->-r requirements.txt (line 26)) (0.10.2) WARNING: pip is using a content-addressable pool to install files from. This experimental feature is enabled through --use-feature=content-addressable-pool and it is not ready for production. **Deployment Method** - [X] Replit.com (one-click deploy) **Version of Whoogle Search** - [X] Latest build from source (i.e. GitHub, Docker Hub, pip, etc) - Version 0.8.2
closed
2023-07-12T20:01:10Z
2023-07-12T22:26:21Z
https://github.com/benbusby/whoogle-search/issues/1030
[ "bug" ]
2zzly
2
joeyespo/grip
flask
142
Is it possible to do hot-reload?
Hey, I've been using grip for a long time and I love it. Recently I'm doing some React development, and [`react-hot-loader`](http://gaearon.github.io/react-hot-loader/) has been immensely helpful. Is it possible to do something similar in grip? Watching changes in `README.md` and live update the webpage.
closed
2015-10-03T01:58:42Z
2015-11-23T16:44:26Z
https://github.com/joeyespo/grip/issues/142
[]
octref
3
graphdeco-inria/gaussian-splatting
computer-vision
1,107
GPU memory increased by 4GiB when running rasterizer
It's a weird problem, which bothered me for a whole day, when running the code below: ``` self.raster_settings = GaussianRasterizationSettings( image_height=self.image_size, image_width=self.image_size, tanfovx=self.tanfov, tanfovy=self.tanfov, bg=self.bg, scale_modifier=1.0, viewmatrix=w2c, projmatrix=full_proj, sh_degree=0, campos=cam_center, prefiltered=False, debug=False, antialiasing=True ) self.rasterizer = GaussianRasterizer(raster_settings=self.raster_settings) rendered_image, _ ,_ = self.rasterizer( means3D = means3D, means2D = screenspace_points, colors_precomp = colors_precomp, opacities = opacities, scales = scales, rotations = rotations, cov3D_precomp = None) ``` CUDA memory increased by 4GiB, which is unbearable. That code is supposed to run several times; each time it runs, CUDA memory grows by another 4GiB until CUDA runs out of memory. I checked the shape of each variable, and it doesn't seem to be a problem with the variable sizes: every variable has the expected shape. ``` means3D.shape=torch.Size([50200, 3]) screenspace_points.shape=torch.Size([50200, 3]) colors_precomp.shape=torch.Size([50200, 3]) opacities.shape=torch.Size([50200, 1]) scales.shape=torch.Size([50200, 1]) rotations.shape=torch.Size([50200, 4]) ``` The output is also the expected shape, `rendered_image.shape=torch.Size([3, 1024, 1024])`. I have tried reinstalling diff_gaussian_rasterization and it doesn't help. Could you tell me some potential reasons that could cause this problem and their corresponding solutions? I would be grateful for your timely response!
open
2024-12-14T08:53:26Z
2024-12-14T08:53:26Z
https://github.com/graphdeco-inria/gaussian-splatting/issues/1107
[]
CHMimilanlan
0
minimaxir/textgenrnn
tensorflow
30
start_text?
Hi, Does textgenrnn have the ability to provide a start_text like in char-rnn? Thanks!
closed
2018-06-09T02:33:15Z
2018-06-10T02:16:40Z
https://github.com/minimaxir/textgenrnn/issues/30
[]
ihavetoomanyquestions
1
polarsource/polar
fastapi
4,466
Webhooks: Show Endpoint URL in event log table
### Description Show the endpoint url we send webhooks to in the log. Helpful confirmation & for debugging of which endpoint received a given webhook.
open
2024-11-13T16:06:10Z
2025-01-28T22:14:30Z
https://github.com/polarsource/polar/issues/4466
[ "enhancement", "contributor friendly", "ui" ]
birkjernstrom
6
kymatio/kymatio
numpy
271
DOC quickstart instructions don't show pip command correctly on website
@edouardoyallon ![image](https://user-images.githubusercontent.com/1306635/50116255-f5219e00-0249-11e9-90c3-ee1330124b56.png) It should have a grey background
closed
2018-12-17T21:20:31Z
2018-12-19T13:25:58Z
https://github.com/kymatio/kymatio/issues/271
[]
eickenberg
1
taverntesting/tavern
pytest
534
tavern tests to run concurrently
Hi Michael, We are in the last stage of finalizing whether or not to convert our current tests/infrastructure to using tavern. We need a proof of concept that tavern tests can run concurrently. For example, we have 400 tests in 400 separate files, and we want to run them concurrently as part of automation, so the total completion time is much shorter and the results are obtained much faster. Is there a feature in tavern to accomplish that? If not, what do you suggest? Thanks very much. Bharati
closed
2020-03-10T19:06:13Z
2020-05-02T00:54:34Z
https://github.com/taverntesting/tavern/issues/534
[]
Bharati74
4
jupyter-book/jupyter-book
jupyter
1,475
Dependency conflict with voila 0.2.x
### Describe the problem I'd like to install both `jupyter-book` (0.11.3) and `voila` (0.2.15) into the same conda environment. Currently this results in a dependency conflict. Tracking down the source of the conflict is a little tricky as the error messages from either mamba or conda are a bit confusing. However, I *think* it is due to conflicting constraints on the `nbconvert` package. All versions of `voila` in the 0.2.x series require `nbconvert >=6.0.0,<7`. Currently `jupyter-book` requires `myst-nb ~=0.12.0`. All versions of `myst-nb` in the 0.12.x series require `nbconvert >=5.5,<6`. ### Link to your repository or website _No response_ ### Steps to reproduce ```bash $ mamba create --name=test-jb-voila --dry-run --channel=conda-forge --override-channels --strict-channel-priority jupyter-book==0.11.3 voila==0.2.15 ``` ### The version of Python you're using _No response_ ### Your operating system _No response_ ### Versions of your packages 0.11.3 ### Additional context _No response_
closed
2021-09-28T16:50:41Z
2021-10-31T17:21:52Z
https://github.com/jupyter-book/jupyter-book/issues/1475
[ "bug" ]
alimanfoo
9
ultralytics/ultralytics
computer-vision
18,751
Batch processing not working as expected
### Search before asking - [x] I have searched the Ultralytics YOLO [issues](https://github.com/ultralytics/ultralytics/issues) and found no similar bug report. ### Ultralytics YOLO Component Export, Predict ### Bug Hello, I have a question about batch processing with the ultralytics cli. I have exported the yolo11n-seg.pt to .engine format with the following command: `yolo export model=yolo11n-seg.pt format=engine device=0 batch=10` When I then execute predict with the following command, an error message appears: `yolo predict task=segment model=yolo11n-seg.engine device=0 batch=10` ``` WARNING ⚠️ 'source' argument is missing. Using default 'source=/home/agxorin2/basler-agx/.venv/lib/python3.10/site-packages/ultralytics/assets'. Ultralytics 8.3.63 🚀 Python-3.10.12 torch-2.5.0a0+872d972e41.nv24.08 CUDA:0 (Orin, 62841MiB) Loading yolo11n-seg-b10.engine for TensorRT inference... [01/18/2025-10:52:04] [TRT] [I] Loaded engine size: 13 MiB [01/18/2025-10:52:04] [TRT] [I] [MemUsageChange] TensorRT-managed allocation in IExecutionContext creation: CPU +0, GPU +236, now: CPU 0, GPU 247 (MiB) Traceback (most recent call last): File "/home/agxorin2/basler-agx/.venv/bin/yolo", line 8, in <module> sys.exit(entrypoint()) File "/home/agxorin2/basler-agx/.venv/lib/python3.10/site-packages/ultralytics/cfg/__init__.py", line 983, in entrypoint getattr(model, mode)(**overrides) # default args from model File "/home/agxorin2/basler-agx/.venv/lib/python3.10/site-packages/ultralytics/engine/model.py", line 558, in predict return self.predictor.predict_cli(source=source) if is_cli else self.predictor(source=source, stream=stream) File "/home/agxorin2/basler-agx/.venv/lib/python3.10/site-packages/ultralytics/engine/predictor.py", line 188, in predict_cli for _ in gen: # sourcery skip: remove-empty-nested-block, noqa File "/home/agxorin2/basler-agx/.venv/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 36, in generator_context response = gen.send(None) File 
"/home/agxorin2/basler-agx/.venv/lib/python3.10/site-packages/ultralytics/engine/predictor.py", line 259, in stream_inference preds = self.inference(im, *args, **kwargs) File "/home/agxorin2/basler-agx/.venv/lib/python3.10/site-packages/ultralytics/engine/predictor.py", line 143, in inference return self.model(im, augment=self.args.augment, visualize=visualize, embed=self.args.embed, *args, **kwargs) File "/home/agxorin2/basler-agx/.venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1714, in _wrapped_call_impl return self._call_impl(*args, **kwargs) File "/home/agxorin2/basler-agx/.venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1725, in _call_impl return forward_call(*args, **kwargs) File "/home/agxorin2/basler-agx/.venv/lib/python3.10/site-packages/ultralytics/nn/autobackend.py", line 599, in forward assert im.shape == s, f"input size {im.shape} {'>' if self.dynamic else 'not equal to'} max model size {s}" AssertionError: input size torch.Size([2, 3, 640, 640]) not equal to max model size (10, 3, 640, 640) ``` I don't quite understand where the error comes from. If I run the whole thing via the Python SDK, the inference is executed, but I still get the following error at the end: ``` File ~/basler-agx/.venv/lib/python3.10/site-packages/ultralytics/nn/autobackend.py:599, in AutoBackend.forward(self, im, augment, visualize, embed) 596 self.bindings[name].data.resize_(tuple(self.context.get_binding_shape(i))) 598 s = self.bindings["images"].shape --> 599 assert im.shape == s, f"input size {im.shape} {'>' if self.dynamic else 'not equal to'} max model size {s}" 600 self.binding_addrs["images"] = int(im.data_ptr()) 601 self.context.execute_v2(list(self.binding_addrs.values())) AssertionError: input size torch.Size([6, 3, 640, 640]) not equal to max model size (10, 3, 640, 640) ``` The same happened when i set the batch size to 6. Only the execution with batch size 2 worked. I would be very grateful for any help. 
### Environment OS Linux-5.15.148-tegra-aarch64-with-glibc2.35 Environment Linux Python 3.10.12 Install git RAM 61.37 GB Disk 44.3/53.8 GB CPU Cortex-A78AE CPU count 12 GPU Orin, 62841MiB GPU count 1 CUDA 12.6 numpy ✅ 1.23.5>=1.23.0 numpy ✅ 1.23.5<2.0.0; sys_platform == "darwin" matplotlib ✅ 3.5.1>=3.3.0 opencv-python ✅ 4.10.0.84>=4.6.0 pillow ✅ 9.0.1>=7.1.2 pyyaml ✅ 5.4.1>=5.3.1 requests ✅ 2.32.3>=2.23.0 scipy ✅ 1.14.1>=1.4.1 torch ✅ 2.5.0a0+872d972e41.nv24.8>=1.8.0 torch ✅ 2.5.0a0+872d972e41.nv24.8!=2.4.0,>=1.8.0; sys_platform == "win32" torchvision ✅ 0.20.0a0+afc54f7>=0.9.0 tqdm ✅ 4.67.1>=4.64.0 psutil ✅ 6.1.1 py-cpuinfo ✅ 9.0.0 pandas ✅ 1.3.5>=1.1.4 seaborn ✅ 0.13.2>=0.11.0 ultralytics-thop ✅ 2.0.13>=2.0.0 ### Minimal Reproducible Example yolo export model=yolo11n-seg.pt format=engine device=0 batch=10 yolo predict task=segment model=yolo11n-seg.engine device=0 batch=10 ### Additional _No response_ ### Are you willing to submit a PR? - [ ] Yes I'd like to help by submitting a PR!
closed
2025-01-18T11:52:27Z
2025-01-19T04:37:18Z
https://github.com/ultralytics/ultralytics/issues/18751
[ "segment", "exports" ]
lassebro
4
benbusby/whoogle-search
flask
582
[BUG] Missing environment variables in whoogle.template.env
Some of the [referenced](https://github.com/benbusby/whoogle-search#environment-variables) environment variables are missing from [whoogle.template.env](https://github.com/benbusby/whoogle-search/blob/main/whoogle.template.env): - WHOOGLE_MINIMAL - WHOOGLE_RESULTS_PER_PAGE - WHOOGLE_AUTOCOMPLETE - EXPOSE_PORT **To Reproduce** Steps to reproduce the behavior: 1. Build from git or install latest release v0.7.0. 2. Compare the [README.md](https://github.com/benbusby/whoogle-search#environment-variables) section on `Environment Variables and Configuration` with the current source [template](https://github.com/benbusby/whoogle-search/blob/main/whoogle.template.env). 3. Notice the difference. **Deployment Method** - [ ] Heroku (one-click deploy) - [ ] Docker - [ ] `run` executable - [ ] pip/pipx - [ x] Other: local and remote installations via Arch Linux PKGBUILDs. **Version of Whoogle Search** - [ x] Build from git master based on the AUR [whoogle-git](https://aur.archlinux.org/packages/whoogle-git/) package. - [ ] Version [version number] - [ ] Not sure **Desktop (please complete the following information):** - OS: Arch Linux & Ubuntu LTS - Browser: Firefox - Version: 95.0.1
closed
2021-12-18T09:47:46Z
2021-12-19T18:43:04Z
https://github.com/benbusby/whoogle-search/issues/582
[ "bug" ]
glitsj16
0
ultralytics/ultralytics
machine-learning
19,805
Could not find a version that satisfies the requirement ai-edge-litert
### Search before asking - [x] I have searched the Ultralytics YOLO [issues](https://github.com/ultralytics/ultralytics/issues) and found no similar bug report. ### Ultralytics YOLO Component Export ### Bug ``` from ultralytics import YOLO # Load the YOLO11 model model = YOLO("yolo11n.pt") # Export the model to TFLite format model.export(format="tflite") ``` Just by running this code I get: ``` ERROR: Could not find a version that satisfies the requirement ai-edge-litert>=1.2.0 (from versions: none) ERROR: No matching distribution found for ai-edge-litert>=1.2.0 ``` ### Environment ``` Ultralytics 8.3.94 🚀 Python-3.10.11 torch-2.6.0+cpu CPU (AMD Ryzen 9 5950X 16-Core Processor) Setup complete ✅ (32 CPUs, 63.9 GB RAM, 308.9/1861.7 GB disk) OS Windows-10-10.0.26100-SP0 Environment Windows Python 3.10.11 Install pip Path C:\Users\ciko9\AppData\Local\Programs\Python\Python310\Lib\site-packages\ultralytics RAM 63.91 GB Disk 308.9/1861.7 GB CPU AMD Ryzen 9 5950X 16-Core Processor CPU count 32 GPU None GPU count None CUDA None numpy ✅ 2.1.1<=2.1.1,>=1.23.0 matplotlib ✅ 3.10.1>=3.3.0 opencv-python ✅ 4.11.0.86>=4.6.0 pillow ✅ 11.1.0>=7.1.2 pyyaml ✅ 6.0.2>=5.3.1 requests ✅ 2.32.3>=2.23.0 scipy ✅ 1.15.2>=1.4.1 torch ✅ 2.6.0>=1.8.0 torch ✅ 2.6.0!=2.4.0,>=1.8.0; sys_platform == "win32" torchvision ✅ 0.21.0>=0.9.0 tqdm ✅ 4.67.1>=4.64.0 psutil ✅ 7.0.0 py-cpuinfo ✅ 9.0.0 pandas ✅ 2.2.3>=1.1.4 seaborn ✅ 0.13.2>=0.11.0 ultralytics-thop ✅ 2.0.14>=2.0.0 ``` ### Minimal Reproducible Example Already shared above ### Additional _No response_ ### Are you willing to submit a PR? - [ ] Yes I'd like to help by submitting a PR!
open
2025-03-20T18:56:08Z
2025-03-21T12:28:17Z
https://github.com/ultralytics/ultralytics/issues/19805
[ "bug", "dependencies", "exports" ]
fracico-tech
5
thunlp/OpenPrompt
nlp
316
ModuleNotFoundError: No module named 'transformers.generation_utils'
Code: import numpy as np from sklearn.model_selection import KFold #import tensorflow as tf from sklearn import metrics from sklearn.metrics import accuracy_score from sklearn.metrics import f1_score from sklearn.metrics import precision_score from sklearn.metrics import recall_score from torch.utils.data import Dataset, DataLoader,SubsetRandomSampler from sklearn.metrics import precision_recall_fscore_support from transformers import AdamW, get_linear_schedule_with_warmup from openprompt import PromptDataLoader from torch import nn import os error: ModuleNotFoundError Traceback (most recent call last) Cell In[10], line 14 12 from sklearn.metrics import precision_recall_fscore_support 13 from transformers import AdamW, get_linear_schedule_with_warmup ---> 14 from openprompt import PromptDataLoader 15 from torch import nn 16 import os File ~/miniconda3/lib/python3.8/site-packages/openprompt/__init__.py:2 1 __version__ = "1.0.1" ----> 2 from .pipeline_base import PromptDataLoader, PromptModel, PromptForClassification, PromptForGeneration 3 from .utils import * 4 from .prompt_base import Template, Verbalizer File ~/miniconda3/lib/python3.8/site-packages/openprompt/pipeline_base.py:4 2 from torch.utils.data.sampler import RandomSampler 3 from transformers.configuration_utils import PretrainedConfig ----> 4 from transformers.generation_utils import GenerationMixin 5 import torch 6 import torch.nn as nn ModuleNotFoundError: No module named 'transformers.generation_utils'
open
2024-07-28T03:37:11Z
2024-09-23T08:48:17Z
https://github.com/thunlp/OpenPrompt/issues/316
[]
fengmingfeng
6
ResidentMario/geoplot
matplotlib
192
Voronoi plot with duplicate points fails with unhelpful message
Using geoplot 0.4.0 from Conda. I spent some time debugging an issue I had with the Voronoi plot, which turned out to be duplicate points in my input data. In this case the number of Voronoi regions is different from the number of input points, which breaks the plotting code (concretely `VoronoiPlot.draw()` fails on lines 1589-1591, where it tries to assign a series shorter than the target dataframe). I'm not sure if there's a way to handle this case gracefully, but I'd suggest a more informative error message; for example adding something like this at the end of `build_voronoi_polygons()`: ```python if len(df) != len(polygons): raise ValueError('Voronoi diagram has different number of cells than input points, aborting. Maybe you have duplicate points?') ```
closed
2019-12-11T14:41:59Z
2019-12-11T16:39:50Z
https://github.com/ResidentMario/geoplot/issues/192
[]
arnsholt
1
pydantic/pydantic-ai
pydantic
770
QUESTION: static `system_prompt` non deterministic? (or bug)
I have a hard-coded "static" `system_prompt` being set in my `Agent` constructor (just a string). When I run my script, **logfire** shows that ~25% of runs (it's intermittent) do NOT include this `system_prompt`. When it's not included, there's a generic agent that gets what is usually added to a "user" agent. When it works, there is both a system agent (with the static prompt, of course) and a user agent. I don't think this is a logfire issue, because without the system prompt (it's very necessary in this case) the script fails very reliably via the validation. When it is included, the script returns successfully. I couldn't find anything about this in the docs. Is there a force_system_prompt="true" or something?
open
2025-01-25T01:36:11Z
2025-01-25T02:09:12Z
https://github.com/pydantic/pydantic-ai/issues/770
[]
DavidLGoldberg
2
mirumee/ariadne
api
1,012
Deprecate fallback resolves
`make_executable_schema` now has `convert_names_case` option that solves the same issue better.
open
2023-01-20T17:12:30Z
2023-01-20T17:12:30Z
https://github.com/mirumee/ariadne/issues/1012
[ "deprecation" ]
rafalp
0
python-gino/gino
asyncio
501
Gino engine is not initialized.
* GINO version: 0.8.3 * Python version: 3.7 * asyncpg version: 0.18.3 * aiocontextvars version: 0.2.2 * PostgreSQL version: 9.4 ### Description I'm trying to create a simple API using aiohttp and Gino. The point is to make abstract generic views where I can only pass the model and it'll create views for it. ### What I Did I created models and ran migrations using db.gino.create_all() . It worked fine but when I pass the model to my Endpoint class where I need to make a query, it fails with: > gino.exceptions.UninitializedError: Gino engine is not initialized. My Endpoint class: ```python class ListEndpoint(Endpoint): def __init__(self, model): super().__init__() self.model = model async def get(self) -> Response: obj_list = await self.model.query.gino.all() if not obj_list: return Response( status=404, body=json.dumps({'Not found': 404}), content_type='application/json' ) data = await ModelSerializer(obj_list).to_json() return Response( status=200, body=data, content_type='application/json' ) ``` looks like it fails on: ```python obj_list = await self.model.query.gino.all() ``` I thought it was enough for engine config. DB_ADDRESS looks like this: > postgresql://admin:pass@localhost:5432/library But I also added config with 'dsn' on init_app. My code from main.py: ```python from aiohttp.web import Application, run_app from models import Country from resource import GenericResource from settings import DB_ADDRESS from gino.ext.aiohttp import Gino app = Application() db = Gino() db.init_app( app, config={ 'dsn': DB_ADDRESS } ) countries = GenericResource('countries', Country) countries.register(app.router) if __name__ == '__main__': run_app(app) ``` What's wrong? And I've also got a question, is this fine using async with db.with_bind every time I need to hit database? Thanks in advance!
closed
2019-06-18T23:29:47Z
2019-07-19T10:14:09Z
https://github.com/python-gino/gino/issues/501
[ "question" ]
hdgone
6
nonebot/nonebot2
fastapi
2,414
Plugin: nonebot_plugin_fgoavatarguess
### PyPI project name nonebot-plugin-fgoavatarguess ### Plugin import package name nonebot_plugin_fgoavatarguess ### Tags [FGO] ### Plugin configuration options _No response_
closed
2023-10-11T04:39:42Z
2023-10-11T04:45:39Z
https://github.com/nonebot/nonebot2/issues/2414
[ "Plugin" ]
influ3nza
1
microsoft/qlib
machine-learning
1,672
Limit orderbook data support?
## 🌟 Feature Description Is there any plan to support limit orderbook data?
open
2023-10-19T07:14:45Z
2023-10-19T07:14:45Z
https://github.com/microsoft/qlib/issues/1672
[ "enhancement" ]
xiaonengmiao
0
mirumee/ariadne
api
780
graphql-core v3.2 breaks directives
Since [this commit](https://github.com/graphql-python/graphql-core/commit/b9423b74ca9d22d009a1a4633b38b2bcd06c604b) in graphql-core replaced all lists with tuples, `schema.directives` is a tuple now. This causes an error in [ariadne.schema_visitor.each](https://github.com/mirumee/ariadne/blob/3921fb1d7cd156eb7b5dc39e9a0695508f187cdd/ariadne/schema_visitor.py#L58). It seems like a possible solution is to replace `isinstance(list_or_dict, list)` with `isinstance(list_or_dict, Sequence)`.
closed
2022-01-25T08:30:22Z
2022-02-18T17:33:24Z
https://github.com/mirumee/ariadne/issues/780
[]
sda97ghb
1
RobertCraigie/prisma-client-py
asyncio
673
Improve documentation for raw query parameters
<!-- Thanks for helping us improve Prisma Client Python! 🙏 Please follow the sections in the template and provide as much information as possible about your problem, e.g. by enabling additional logging output. See https://prisma-client-py.readthedocs.io/en/stable/reference/logging/ for how to enable additional logging output. --> ## Bug description <!-- A clear and concise description of what the bug is. --> One of the examples described in https://prisma-client-py.readthedocs.io/en/stable/reference/operations/ use string formatting for raw SQL queries. Although it is not explicitly documented, it seems that the character `?` would be replaced by subsequent positional arguments: ``` total = await db.execute_raw( ''' SELECT * FROM User WHERE User.id = ? ''', 'cksca3xm80035f08zjonuubik' ) ``` But this seems not to be working in practice: ``` RawQueryError: db error: ERROR: operator does not exist: character varying =? HINT: No operator matches the given name and argument type. You might need to add an explicit type cast. ``` How is string formatting supposed to be used in raw queries? ## How to reproduce <!-- Steps to reproduce the behavior: 1. Go to '...' 2. Change '....' 3. Run '....' 4. See error --> Just run the code snippet of the documentation. ## Expected behavior <!-- A clear and concise description of what you expected to happen. --> I would expect `?` to be replaced by the subsequent positional arguments provided to `query_raw` or `execute_raw`. ## Prisma information <!-- Your Prisma schema, Prisma Client Python queries, ... Do not include your database credentials when sharing your Prisma schema! --> The same happens with any schema. ## Environment & setup <!-- In which environment does the problem occur --> - OS: <!--[e.g. 
Mac OS, Windows, Debian, CentOS, ...]--> Ubuntu 20.04 - Database: <!--[PostgreSQL, MySQL, MariaDB or SQLite]--> PostgreSQL - Python version: <!--[Run `python -V` to see your Python version]--> 3.8.10 - Prisma version: 0.7.1 <!--[Run `prisma py version` to see your Prisma version and paste it between the ´´´]-->
open
2023-01-16T08:49:44Z
2023-01-23T16:13:08Z
https://github.com/RobertCraigie/prisma-client-py/issues/673
[ "topic: docs", "kind/docs" ]
ezorita
4
graphql-python/gql
graphql
122
Is there documentation besides the README?
Thanks for creating and maintaining this useful library! As a newcomer, I would like to ask if there is more comprehensive documentation for this library besides the README file in this repository? Similarly is there documentation on this library's API/callable objects, etc.? The current README seems to only cover a fraction of this library's functionality and I'd love to learn more!
closed
2020-07-29T20:51:35Z
2020-09-20T19:58:26Z
https://github.com/graphql-python/gql/issues/122
[ "type: question or discussion" ]
penyuan
3
mwaskom/seaborn
pandas
3,318
Add read-only permissions to ci.yaml GitHub workflow
Seaborn's ci.yaml workflow currently run with write-all permissions. This is dangerous, since it opens the project up to supply-chain attacks. [GitHub itself](https://docs.github.com/en/actions/security-guides/security-hardening-for-github-actions#using-secrets) recommends ensuring all workflows run with minimal permissions. I've taken a look at the workflow, and it doesn't seem to require any permissions other than `contents: read`. This issue can be solved in two ways: - add top-level read-only permissions to ci.yaml; and/or - set the default token permissions to read-only in the repo settings. I'll be sending a PR along with this issue that sets the top-level permissions. If you instead (or also) wish to modify the default token permissions: 1. Open the repo settings 2. Go to [Actions > General](https://github.com/mwaskom/seaborn/settings/actions) 3. Under "Workflow permissions", set them to "Read repository contents and packages permissions" --- **Disclosure:** My name is Pedro and I work with Google and the [Open Source Security Foundation (OpenSSF)](https://www.openssf.org/) to improve the supply-chain security of the open-source ecosystem.
closed
2023-04-12T14:08:36Z
2023-04-12T23:19:14Z
https://github.com/mwaskom/seaborn/issues/3318
[]
pnacht
0
nltk/nltk
nlp
2,544
grammar checker
How to check if a sentence is grammatically correct or not using NLTK
closed
2020-05-18T14:38:34Z
2020-06-16T03:46:00Z
https://github.com/nltk/nltk/issues/2544
[ "invalid" ]
SRIKARHI
1
babysor/MockingBird
deep-learning
761
Data preprocessing only detects a few entries, not all of them
I proofread roughly a hundred entries, but only 5 of them are detected; after renaming everything, the same fixed 5 are still the ones detected.
open
2022-10-07T21:41:24Z
2022-10-13T00:04:25Z
https://github.com/babysor/MockingBird/issues/761
[]
ydroxy
3
deepset-ai/haystack
pytorch
9,064
Make REQUEST_HEADERS in LinkContentFetcher customizable
``` from haystack.components.fetchers.link_content import LinkContentFetcher headers = ["Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/91.0.4472.124 Safari/537.36"] fetcher = LinkContentFetcher(user_agents=headers) streams = fetcher.run(urls=["https://zhuanlan.zhihu.com/p/670768194"])["streams"] ``` ---- This error occurred when executing the above code>>>>> HTTPError Traceback (most recent call last) Cell In[52], line 6 3 headers = ["Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/91.0.4472.124 Safari/537.36"] 4 fetcher = LinkContentFetcher(user_agents=headers) ----> 6 streams = fetcher.run(urls=["https://zhuanlan.zhihu.com/p/670768194"])["streams"] 8 streams File ~/anaconda3/envs/py311/lib/python3.11/site-packages/haystack/components/fetchers/link_content.py:152, in LinkContentFetcher.run(self, urls) 150 # don't use multithreading if there's only one URL 151 if len(urls) == 1: --> 152 stream_metadata, stream = self._fetch(urls[0]) 153 stream.meta.update(stream_metadata) 154 stream.mime_type = stream.meta.get("content_type", None) File ~/anaconda3/envs/py311/lib/python3.11/site-packages/haystack/components/fetchers/link_content.py:191, in LinkContentFetcher._fetch(self, url) 189 except Exception as e: 190 if self.raise_on_failure: --> 191 raise e 192 # less verbose log as this is expected to happen often (requests failing, blocked, etc.) 
193 logger.debug("Couldn't retrieve content from {url} because {error}", url=url, error=str(e)) File ~/anaconda3/envs/py311/lib/python3.11/site-packages/haystack/components/fetchers/link_content.py:185, in LinkContentFetcher._fetch(self, url) 183 stream: ByteStream = ByteStream(data=b"") 184 try: --> 185 response = self._get_response(url) 186 content_type = self._get_content_type(response) 187 handler: Callable = self._resolve_handler(content_type) File ~/anaconda3/envs/py311/lib/python3.11/site-packages/tenacity/__init__.py:336, in BaseRetrying.wraps.<locals>.wrapped_f(*args, **kw) 334 copy = self.copy() 335 wrapped_f.statistics = copy.statistics # type: ignore[attr-defined] --> 336 return copy(f, *args, **kw) File ~/anaconda3/envs/py311/lib/python3.11/site-packages/tenacity/__init__.py:475, in Retrying.__call__(self, fn, *args, **kwargs) 473 retry_state = RetryCallState(retry_object=self, fn=fn, args=args, kwargs=kwargs) 474 while True: --> 475 do = self.iter(retry_state=retry_state) 476 if isinstance(do, DoAttempt): 477 try: File ~/anaconda3/envs/py311/lib/python3.11/site-packages/tenacity/__init__.py:376, in BaseRetrying.iter(self, retry_state) 374 result = None 375 for action in self.iter_state.actions: --> 376 result = action(retry_state) 377 return result File ~/anaconda3/envs/py311/lib/python3.11/site-packages/tenacity/__init__.py:418, in BaseRetrying._post_stop_check_actions.<locals>.exc_check(rs) 416 retry_exc = self.retry_error_cls(fut) 417 if self.reraise: --> 418 raise retry_exc.reraise() 419 raise retry_exc from fut.exception() File ~/anaconda3/envs/py311/lib/python3.11/site-packages/tenacity/__init__.py:185, in RetryError.reraise(self) 183 def reraise(self) -> t.NoReturn: 184 if self.last_attempt.failed: --> 185 raise self.last_attempt.result() 186 raise self File ~/anaconda3/envs/py311/lib/python3.11/concurrent/futures/_base.py:449, in Future.result(self, timeout) 447 raise CancelledError() 448 elif self._state == FINISHED: --> 449 return 
self.__get_result() 451 self._condition.wait(timeout) 453 if self._state in [CANCELLED, CANCELLED_AND_NOTIFIED]: File ~/anaconda3/envs/py311/lib/python3.11/concurrent/futures/_base.py:401, in Future.__get_result(self) 399 if self._exception: 400 try: --> 401 raise self._exception 402 finally: 403 # Break a reference cycle with the exception in self._exception 404 self = None File ~/anaconda3/envs/py311/lib/python3.11/site-packages/tenacity/__init__.py:478, in Retrying.__call__(self, fn, *args, **kwargs) 476 if isinstance(do, DoAttempt): 477 try: --> 478 result = fn(*args, **kwargs) 479 except BaseException: # noqa: B902 480 retry_state.set_exception(sys.exc_info()) # type: ignore[arg-type] File ~/anaconda3/envs/py311/lib/python3.11/site-packages/haystack/components/fetchers/link_content.py:123, in LinkContentFetcher.__init__.<locals>.get_response(url) 121 headers["User-Agent"] = self.user_agents[self.current_user_agent_idx] 122 response = requests.get(url, headers=headers, timeout=timeout or 3) --> 123 response.raise_for_status() 124 return response File ~/anaconda3/envs/py311/lib/python3.11/site-packages/requests/models.py:1024, in Response.raise_for_status(self) 1019 http_error_msg = ( 1020 f"{self.status_code} Server Error: {reason} for url: {self.url}" 1021 ) 1023 if http_error_msg: -> 1024 raise HTTPError(http_error_msg, response=self) HTTPError: 403 Client Error: Forbidden for url: https://zhuanlan.zhihu.com/p/670768194 I checked the LinkContentFetcher defined by haystack for more available parameters. I want to know how to solve this problem. Maybe I need to add a request header, cancel the proxy, and bring in parameters. How to add the request header to this component
open
2025-03-19T09:26:42Z
2025-03-21T14:36:26Z
https://github.com/deepset-ai/haystack/issues/9064
[ "Contributions wanted!", "P3" ]
aappaappoo
1
Yorko/mlcourse.ai
matplotlib
658
Missing section
Missing section 2.3 and task about kde plot of the height feature.
closed
2020-03-22T12:20:44Z
2020-03-24T11:10:07Z
https://github.com/Yorko/mlcourse.ai/issues/658
[ "minor_fix" ]
dinazzz
1
CorentinJ/Real-Time-Voice-Cloning
deep-learning
315
The minimum cuda capability that we support is 3.5
Help me please. Is there any way to start a project with 2.1 cuda capability?

```
Found GPU0 GeForce GT 630M which is of cuda capability 2.1.
PyTorch no longer supports this GPU because it is too old. The minimum cuda capability that we support is 3.5.
warnings.warn(old_gpu_warn % (d, name, major, capability[1]))
Found 1 GPUs available. Using GPU 0 (GeForce GT 630M) of compute capability 2.1 with 2.1Gb total memory.
Preparing the encoder, the synthesizer and the vocoder...
Traceback (most recent call last):
  File "demo_cli.py", line 61, in <module>
    encoder.load_model(args.enc_model_fpath)
  File "C:\Users\kisel\Desktop\Real-Time-Voice-Cloning-master\encoder\inference.py", line 32, in load_model
    _model = SpeakerEncoder(_device, torch.device("cpu"))
  File "C:\Users\kisel\Desktop\Real-Time-Voice-Cloning-master\encoder\model.py", line 21, in __init__
    batch_first=True).to(device)
  File "C:\Users\kisel\anaconda3\envs\TestVoice\lib\site-packages\torch\nn\modules\module.py", line 386, in to
    return self._apply(convert)
  File "C:\Users\kisel\anaconda3\envs\TestVoice\lib\site-packages\torch\nn\modules\rnn.py", line 127, in _apply
    self.flatten_parameters()
  File "C:\Users\kisel\anaconda3\envs\TestVoice\lib\site-packages\torch\nn\modules\rnn.py", line 123, in flatten_parameters
    self.batch_first, bool(self.bidirectional))
RuntimeError: cuDNN error: CUDNN_STATUS_ARCH_MISMATCH
```
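Not from the original thread: a common workaround for GPUs below the supported compute capability is to force CPU execution. The device-selection helper below is a hypothetical sketch of that idea; the repo's `demo_cli.py` may expose its own flag for this, so check its help output first.

```python
# Sketch: fall back to CPU when the GPU's compute capability is too old
# for the installed PyTorch build (minimum 3.5 per the warning above).

MIN_CAPABILITY = (3, 5)

def pick_device(capability, cuda_available=True):
    """Return 'cuda' only when CUDA is usable and the device's
    (major, minor) compute capability meets the minimum."""
    if cuda_available and capability >= MIN_CAPABILITY:
        return "cuda"
    return "cpu"

# Hypothetical wiring with torch (not executed here):
# import torch
# if torch.cuda.is_available():
#     cap = torch.cuda.get_device_capability(0)
# else:
#     cap = (0, 0)
# device = torch.device(pick_device(cap, torch.cuda.is_available()))
```

On a GT 630M (capability 2.1) this would select the CPU, which is slow for this project but avoids the `CUDNN_STATUS_ARCH_MISMATCH` error entirely.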
closed
2020-04-08T15:44:19Z
2020-07-04T18:08:52Z
https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/315
[]
Snuffy4
2
sgl-project/sglang
pytorch
4330
[Bug] Memory Issue with --mem-fraction-static Parameter
### Checklist - [x] 1. I have searched related issues but cannot get the expected help. - [x] 2. The bug has not been fixed in the latest version. ### Describe the bug **Description:** I am experiencing a memory-related issue when setting the --mem-fraction-static parameter. **Setup:** GPU: NVIDIA L40s (48GB VRAM) x 2 (I use 1 when running.) CUDA Version: 12.8 PyTorch Version: 2.5.1 sglang Version: 0.4.3.post2 I am trying speculative decoding and when setting --mem-fraction-static to 0.85, I get a CUDA out of memory error. However, if I set it to 0.84, I receive the following error: RuntimeError: Not enough memory. Please try to increase --mem-fraction-static. This creates a situation where I am stuck between these two errors, unable to find a usable memory fraction. I also tried adding --chunked-prefill-size and --max-running-requests parameters as suggested in the documentation to avoid OOM errors, but this did not resolve the issue. **Errors:** 1. Run sglang with --mem-fraction-static 0.85 → CUDA OOM error occurs. 2. Run sglang with --mem-fraction-static 0.84 → RuntimeError: Not enough memory. Please try to increase --mem-fraction-static error occurs. **Expected Behavior:** I expect the model to allocate memory properly without getting stuck between these two errors. Any insights or workarounds would be greatly appreciated! 
### Reproduction

```
import requests
import os

from sglang import assistant_begin, assistant_end
from sglang import assistant, function, gen, system, user
from sglang import image
from sglang import RuntimeEndpoint, set_default_backend
from sglang.srt.utils import load_image
from sglang.test.test_utils import is_in_ci
from sglang.utils import print_highlight, terminate_process, wait_for_server

if is_in_ci():
    from sglang.docs.frontend.patch import launch_server_cmd
else:
    from sglang.utils import launch_server_cmd

server_process, port = launch_server_cmd(
    """
python3 -m sglang.launch_server --model /LLM/model/Nvidia-Llama-3.1-Nemotron-70B-Instruct-HF-AWQ-INT4 --speculative-algorithm EAGLE \
    --speculative-draft-model-path /LLM/model/Nvidia-Llama-3.1-Nemotron-70B-Instruct-HF-AWQ-INT4 --speculative-num-steps 3 \
    --speculative-eagle-topk 4 --speculative-num-draft-tokens 32 --mem-fraction-static 0.85 --max-running-requests 2 --chunked-prefill-size 256 \
    --enable-torch-compile --cuda-graph-max-bs 2
"""
)

wait_for_server(f"http://localhost:{port}")
print(f"Server started on http://localhost:{port}")
```

### Environment

```
Python: 3.11.11 (main, Dec 9 2024, 15:32:27) [GCC 8.5.0 20210514 (Red Hat 8.5.0-22)]
CUDA available: True
GPU 0,1: NVIDIA L40S
GPU 0,1 Compute Capability: 8.9
CUDA_HOME: /usr/local/cuda
NVCC: Cuda compilation tools, release 12.8, V12.8.61
CUDA Driver Version: 570.86.10
PyTorch: 2.5.1+cu124
sglang: 0.4.3.post2
sgl_kernel: 0.0.3.post6
flashinfer: 0.2.2.post1
triton: 3.1.0
transformers: 4.48.3
torchao: 0.9.0
numpy: 1.26.4
aiohttp: 3.11.13
fastapi: 0.115.9
hf_transfer: Module Not Found
huggingface_hub: 0.29.1
interegular: 0.3.3
modelscope: Module Not Found
orjson: 3.10.15
packaging: 24.2
psutil: 7.0.0
pydantic: 2.10.6
multipart: 0.0.20
zmq: 26.2.1
uvicorn: 0.34.0
uvloop: 0.21.0
vllm: 0.7.2
openai: 1.65.1
tiktoken: 0.9.0
anthropic: Module Not Found
litellm: Module Not Found
decord: 0.6.0

NVIDIA Topology:
        GPU0    GPU1    NIC0    CPU Affinity    NUMA Affinity   GPU NUMA ID
GPU0     X      SYS     SYS     0-23,48-71      0               N/A
GPU1    SYS      X      NODE    24-47,72-95     1               N/A
NIC0    SYS     NODE     X

Legend:
  X    = Self
  SYS  = Connection traversing PCIe as well as the SMP interconnect between NUMA nodes (e.g., QPI/UPI)
  NODE = Connection traversing PCIe as well as the interconnect between PCIe Host Bridges within a NUMA node
  PHB  = Connection traversing PCIe as well as a PCIe Host Bridge (typically the CPU)
  PXB  = Connection traversing multiple PCIe bridges (without traversing the PCIe Host Bridge)
  PIX  = Connection traversing at most a single PCIe bridge
  NV#  = Connection traversing a bonded set of # NVLinks

NIC Legend:
  NIC0: mlx5_bond_0

ulimit soft: 262144
```
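Not part of the original report: a hedged variant of the launch command that may free enough memory to escape the 0.84/0.85 deadlock. All flags already appear in the report; only the values change, and the specific values below are guesses rather than verified settings.

```shell
# Untested sketch: a lower-memory variant of the original launch command.
# Changes vs. the report: fewer draft tokens (16 instead of 32), a lower
# --mem-fraction-static (0.80), and --enable-torch-compile omitted, since
# compilation and CUDA graphs reserve memory outside the static pool.
# Whether these exact values fit on one L40S is an assumption.
python3 -m sglang.launch_server \
  --model /LLM/model/Nvidia-Llama-3.1-Nemotron-70B-Instruct-HF-AWQ-INT4 \
  --speculative-algorithm EAGLE \
  --speculative-draft-model-path /LLM/model/Nvidia-Llama-3.1-Nemotron-70B-Instruct-HF-AWQ-INT4 \
  --speculative-num-steps 3 \
  --speculative-eagle-topk 4 \
  --speculative-num-draft-tokens 16 \
  --mem-fraction-static 0.80 \
  --max-running-requests 2 \
  --chunked-prefill-size 256 \
  --cuda-graph-max-bs 2
```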
closed
2025-03-12T06:40:02Z
2025-03-12T23:08:01Z
https://github.com/sgl-project/sglang/issues/4330
[]
keskinberkem
2
Lightning-AI/pytorch-lightning
data-science
20282
Saving a checkpoint every n epochs does not work as expected
### Bug description

Hi! 👋 I'm trying to save a model checkpoint every n epochs. As my model trains, I want to save checkpoints so I can explore performance at intervals throughout the run. To do this, I'm leveraging the ModelCheckpoint class and creating a callback like the one below.

```python
checkpoint_callback = ModelCheckpoint(
    dirpath='checkpoints/every_10_epochs',
    filename='epoch-{epoch:02d}',
    every_n_epochs=10,
)
```

**It seems like this flag is not working. It only saves one checkpoint, not one every n epochs.**

Am I misunderstanding how the checkpointing is supposed to work, or is this a bug?

### What version are you seeing the problem on?

v2.4

### How to reproduce the bug

This is a minimal example running 50 epochs with every_n_epochs=10. I expect the checkpointing to save 5 checkpoints, but only 1 is saved.

```python
import pytorch_lightning as pl
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset
from pytorch_lightning.callbacks import ModelCheckpoint
import os

class MinimalModel(pl.LightningModule):
    def __init__(self):
        super().__init__()
        self.layer = nn.Linear(1, 1)

    def training_step(self, batch, batch_idx):
        x, y = batch
        y_hat = self.layer(x)
        loss = nn.functional.mse_loss(y_hat, y)
        self.log('train_loss', loss)
        return loss

    def configure_optimizers(self):
        return torch.optim.SGD(self.parameters(), lr=0.02)

# Create dummy data
x = torch.linspace(0, 1, 1000).unsqueeze(-1)
y = 3 * x + 0.5 + torch.randn_like(x) * 0.1
train_dataset = TensorDataset(x, y)
train_loader = DataLoader(train_dataset, batch_size=32)

# Create ModelCheckpoint callback
checkpoint_callback = ModelCheckpoint(
    dirpath='checkpoints/every_10_epochs',
    filename='epoch-{epoch:02d}',
    every_n_epochs=10,
)

# Create trainer with the callback
trainer = pl.Trainer(max_epochs=50, callbacks=[checkpoint_callback])

# Train model
model = MinimalModel()
trainer.fit(model, train_loader)

# Function to count models in a directory
def count_models(directory):
    return len([f for f in os.listdir(directory) if f.endswith('.ckpt')])

# Count and print the number of saved models
num_checkpoints = count_models('checkpoints/every_10_epochs')
print(f"Number of checkpoints saved: {num_checkpoints}")

# Test if the number of checkpoints is correct
expected_checkpoints = 5  # We expect checkpoints at epochs 10, 20, 30, 40, and 50
if num_checkpoints == expected_checkpoints:
    print("Test passed: Correct number of checkpoints saved.")
else:
    print(f"Test failed: Expected {expected_checkpoints} checkpoints, but found {num_checkpoints}.")

# Print paths of saved checkpoints
print("\nSaved checkpoints:")
for checkpoint in os.listdir('checkpoints/every_10_epochs'):
    if checkpoint.endswith('.ckpt'):
        print(os.path.join('checkpoints/every_10_epochs', checkpoint))
```

### Error messages and logs

```
`Trainer.fit` stopped: `max_epochs=50` reached.
Number of checkpoints saved: 1
Test failed: Expected 5 checkpoints, but found 1.

Saved checkpoints:
checkpoints/every_10_epochs/epoch-epoch=49.ckpt
```

### Environment

<details>
<summary>Current environment</summary>

```
#- PyTorch Lightning Version (e.g., 2.4.0): 2.4.0
#- PyTorch Version (e.g., 2.4): 2.4.1
#- Python version (e.g., 3.12): 3.11.9
#- OS (e.g., Linux): MacOS
#- CUDA/cuDNN version: NA
#- GPU models and configuration: ?
#- How you installed Lightning(`conda`, `pip`, source): poetry add Lightning (pip)
```

</details>

### More info

_No response_
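Not stated in the report, but likely relevant: `ModelCheckpoint.save_top_k` defaults to 1, so each periodic save replaces the previous file, which matches the single `epoch-epoch=49.ckpt` observed. The simulation below sketches that retention rule in plain Python; verify the default against your installed Lightning version.

```python
# Sketch of why only one file survives: ModelCheckpoint keeps at most
# `save_top_k` checkpoints (default 1), so each save at epochs 9, 19, ...
# evicts the previous file. save_top_k=-1 keeps every checkpoint.

def saved_checkpoints(max_epochs, every_n_epochs, save_top_k):
    kept = []
    for epoch in range(max_epochs):
        if (epoch + 1) % every_n_epochs == 0:
            kept.append(f"epoch-{epoch:02d}.ckpt")
            if save_top_k != -1 and len(kept) > save_top_k:
                kept.pop(0)  # oldest retained checkpoint is dropped
    return kept

# Hedged fix for the original callback (not executed here):
# checkpoint_callback = ModelCheckpoint(
#     dirpath='checkpoints/every_10_epochs',
#     filename='epoch-{epoch:02d}',
#     every_n_epochs=10,
#     save_top_k=-1,  # keep all periodic checkpoints
# )
```

With `save_top_k=1` the simulation ends with a single checkpoint for epoch 49, exactly the observed behavior; with `save_top_k=-1` it keeps all five.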
closed
2024-09-14T15:15:01Z
2024-09-16T18:40:37Z
https://github.com/Lightning-AI/pytorch-lightning/issues/20282
[ "bug", "needs triage", "ver: 2.4.x" ]
olly-writes-code
2
deepset-ai/haystack
pytorch
8399
Single automated test for pipeline YAML serde for all components
We need a test that enumerates all components, adds each of them to a pipeline, and performs a YAML serde round trip. This will help catch non-serializable types in init methods, etc.
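A rough sketch of what such a test could look like. It uses a dummy registry and JSON as a stand-in for YAML, since the failure mode is the same: non-serializable init arguments surface during the dump. A real version would enumerate actual Haystack components and round-trip via the pipeline's YAML serde; the names below are placeholders, not Haystack APIs.

```python
# Sketch: enumerate a registry of components, serialize each one's
# init parameters, and fail with the list of offenders.
import json

class DummyComponent:
    """Placeholder component; a real test would discover these."""
    def __init__(self, top_k=10, prefix=""):
        self.init_parameters = {"top_k": top_k, "prefix": prefix}

    def to_dict(self):
        return {"type": type(self).__name__,
                "init_parameters": self.init_parameters}

COMPONENT_REGISTRY = [DummyComponent]  # hypothetical: all components

def roundtrip(component_cls):
    original = component_cls().to_dict()
    try:
        restored = json.loads(json.dumps(original))
    except TypeError:  # non-serializable init argument
        return False
    return original == restored

def test_all_components_roundtrip():
    failures = [c.__name__ for c in COMPONENT_REGISTRY if not roundtrip(c)]
    assert not failures, f"non-serializable components: {failures}"
```

Parametrizing over the registry (e.g. with `pytest.mark.parametrize`) would report each failing component separately instead of one aggregate assertion.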
closed
2024-09-24T15:11:19Z
2024-10-18T08:55:54Z
https://github.com/deepset-ai/haystack/issues/8399
[ "topic:tests", "P1" ]
shadeMe
1
ResidentMario/missingno
pandas
56
Suggestion: Break up helper functions and plotting functions into separate files
`missingno.py` is getting pretty long in terms of SLOC, and the easiest way to divide the package into meaningful separate files would be to break the helper functions out into a `helper.py` file. There's no need for these functions to rest in the same file.
closed
2018-01-31T03:12:35Z
2018-02-02T02:27:54Z
https://github.com/ResidentMario/missingno/issues/56
[]
rhiever
1
aiortc/aiortc
asyncio
131
Server example: ICE connection failed
Every time I try to set up the server example, the ICE connection fails. I already tried using a STUN and a TURN server, but the result is the same:

```
config.iceServers = [
    { 'urls': 'stun:stun.l.google.com:19302?transport=tcp' },
    {
        'urls': 'turn:numb.viagenie.ca',
        'credential': 'muazkh',
        'username': 'webrtc@live.com'
    }
];
```

Is there a way to configure the server to use a STUN server? It always responds with its internal IP:

`a=candidate:9333c84bcc1b0bf56713df9036e6b4d9 1 udp 2130706431 172.17.0.2 53159 typ host`

Thx in advance
hendl
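Not from the original report: the `config.iceServers` snippet above only configures the browser side, so the Python peer still gathers only host candidates (hence the internal 172.17.0.2 address). A hedged sketch of giving the server side ICE servers too; the helper below just builds the list, and the aiortc wiring shown in comments should be checked against the aiortc API docs for your version.

```python
# Sketch: build an ICE server list for the Python peer. The browser's
# config.iceServers does not affect the aiortc side at all.

def make_ice_servers(stun_url, turn_url=None, username=None, credential=None):
    """Return ICE server descriptions as plain dicts."""
    servers = [{"urls": stun_url}]
    if turn_url:
        servers.append({
            "urls": turn_url,
            "username": username,
            "credential": credential,
        })
    return servers

# Hedged aiortc usage (not executed here; verify against aiortc docs):
# from aiortc import RTCConfiguration, RTCIceServer, RTCPeerConnection
# config = RTCConfiguration(iceServers=[
#     RTCIceServer(urls="stun:stun.l.google.com:19302"),
# ])
# pc = RTCPeerConnection(configuration=config)
```

With a STUN server configured on the Python side, the peer can also gather server-reflexive (`srflx`) candidates instead of only the Docker-internal host candidate.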
closed
2019-01-22T11:55:40Z
2019-01-22T16:46:54Z
https://github.com/aiortc/aiortc/issues/131
[ "invalid" ]
hendl
7