| repo_name | topic | issue_number | title | body | state | created_at | updated_at | url | labels | user_login | comments_count |
|---|---|---|---|---|---|---|---|---|---|---|---|
plotly/dash | plotly | 2,297 | [BUG] High CPU usage when typing in text field and plotly graphs on screen |
**Describe your context**
Please provide us your environment, so we can easily reproduce the issue.
- replace the result of `pip list | grep dash` below
```
dash 2.6.2
dash-bootstrap-components 1.2.1
dash-bootstrap-templates 1.0.4
dash-core-components 2.0.0
dash-dataframe-table 0.1.3
dash-extensions 0.1.5
dash-html-components 2.0.0
dash-renderer 1.9.0
dash-table 5.0.0
```
- if frontend related, tell us your Browser, Version and OS
- OS: Mac
- Browser Safari
**Describe the bug**
High CPU usage when typing characters in a text field if there are Plotly graphs on screen.
**Expected behavior**
Minimal CPU usage
**Screenshots**

**Simple example app**
https://github.com/astrowonk/streamlit_lag_example/blob/main/dash_example.py
**Discussion**
I tried to get some guidance via this [post on the community forum](https://community.plotly.com/t/high-cpu-usage-when-typing-in-text-fields-with-plotly-graphs-on-screen/68161) but did not get any feedback.
No callback is watching the TextArea. I can't think of any reason CPU usage should be high here. On older Macs/computers this creates a lot of input lag just typing.
| closed | 2022-11-03T13:29:58Z | 2024-07-24T17:38:49Z | https://github.com/plotly/dash/issues/2297 | [] | astrowonk | 1 |
google/trax | numpy | 1,685 | Reformer does not export to SavedModel format. | ### Description
...
### Environment information
tensorflow-gpu installed through anaconda, trax installed via pip.
```
OS: Ubuntu 20.04
$ pip freeze | grep trax
trax==1.3.9
$ pip freeze | grep tensor
tensorboard==2.6.0
tensorboard-data-server==0.6.1
tensorboard-plugin-wit==1.8.0
tensorflow==2.5.1
tensorflow-addons==0.13.0
tensorflow-datasets==4.4.0
tensorflow-estimator==2.5.0
tensorflow-gpu==2.6.0
tensorflow-hub==0.12.0
tensorflow-metadata==1.2.0
tensorflow-serving-api==1.15.0
tensorflow-text==2.5.0
$ pip freeze | grep jax
jax==0.2.17
jaxlib==0.1.69
$ python -V
Python 3.6.9 :: Anaconda, Inc.
```
### For bugs: reproduction and error logs
```
# Steps to reproduce:
...
```
```
# Error logs:
...
```
| open | 2021-08-17T02:17:12Z | 2021-08-17T03:22:44Z | https://github.com/google/trax/issues/1685 | [] | StewartSethA | 0 |
ionelmc/pytest-benchmark | pytest | 65 | defer disabled-by-xdist warning until a benchmark is actually disabled | Currently, if pytest-benchmark and pytest-xdist are both installed, `pytest -n $foo` will always cause pytest-benchmark to emit a disabled-by-xdist warning even if the current test suite does not include any benchmarks. Perhaps the warning could be deferred until a benchmark actually gets skipped? | open | 2017-01-08T18:32:40Z | 2019-01-02T22:13:30Z | https://github.com/ionelmc/pytest-benchmark/issues/65 | [
"bug",
"help wanted"
] | anntzer | 3 |
sktime/sktime | scikit-learn | 7,787 | [BUG] failure of `TestAllForecasters` collection due to `pytest` inheritance bug | Update with full suspected description.
The problem is `pytest` not collecting any tests at all from `TestAllForecasters`, due to what seems like a bug in `pytest`. Bug report here: https://github.com/pytest-dev/pytest/issues/13205
The reason, or at least the trigger, as identified by @yarnabrina, seems to be the multiple inheritance introduced in #6628, which adds more parent classes to `TestAllForecasters`, in addition to the class that contains `pytest_generate_tests`, `BaseFixtureGenerator`.
The exact condition that triggers this failure is not clear, but it seems to be a bug in `pytest`, which may surface if `pytest_generate_tests` is used in classes, combined with multiple, non-linear inheritance.
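The inheritance shape described can be sketched in plain Python to show that ordinary attribute lookup still finds the hook through the MRO, which is why the failed collection looks like a `pytest`-side bug rather than a missing hook. Class and hook names below mirror the issue text; this is not sktime's actual code:

```python
# Hypothetical sketch of the inheritance shape described above; class and
# hook names mirror the issue text, not sktime's actual definitions.
class BaseFixtureGenerator:
    # On a real pytest test class this method acts as the per-class
    # parametrization hook; here we only model its presence.
    def pytest_generate_tests(self, metafunc):
        pass

class PackageConfig:  # stand-in for the extra parent added in #6628
    pass

class TestAllForecasters(PackageConfig, BaseFixtureGenerator):
    pass

# Ordinary attribute lookup still finds the hook through the MRO:
print(hasattr(TestAllForecasters, "pytest_generate_tests"))  # True
print([c.__name__ for c in TestAllForecasters.__mro__])
```

Since the hook is reachable via normal Python lookup, whatever condition stops collection must lie in how `pytest` itself discovers per-class hooks under this kind of multiple inheritance.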
The reason for the `TestAllObjects` symptom reported below is different: it is simply due to `TestAllObjects` being imported, and thus being discovered both in its original module and in the location where it is imported. This was an unrelated bug that was also introduced via #6628.
---
Something is not right with the test framework and `EnsembleForecaster`
Some diagnostics via #7786
* a change in `EnsembleForecaster` does not seem to trigger its tests in either CI element.
* `all_estimators` correctly retrieves `EnsembleForecaster`
* `scitype` correctly identifies `EnsembleForecaster` as `"forecaster"` type
* `run_test_for_class` correctly returns `True` on a changed `EnsembleForecaster` including on the #7786 branch where tests do not run
Very odd: in my local tests, the vs code GUI inserts the tests from `TestAllObjects` where the tests from `TestAllForecasters` should be, and the tests from `TestAllForecasters` are not displayed or executed when requested.
 | closed | 2025-02-08T10:27:29Z | 2025-02-09T17:25:10Z | https://github.com/sktime/sktime/issues/7787 | [
"bug",
"module:forecasting",
"module:tests"
] | fkiraly | 4 |
geopandas/geopandas | pandas | 3,331 | BUG: autolim has no effect on point scatter plots | - [x] I have checked that this issue has not already been reported.
- [x] I have confirmed this bug exists on ~the latest version of~ geopandas 27de00c.
- [ ] (optional) I have confirmed this bug exists on the main branch of geopandas.
The bug will be reproducible on the main branch after #2817 is merged.
---
#### Code Sample, a copy-pastable example
```python
"""Test point plot preserving axes limits."""
import shapely
import geopandas
points = geopandas.GeoSeries(shapely.geometry.Point(i, i) for i in range(10))
ax = points[:5].plot()
ylim = ax.get_ylim()
points.plot(ax=ax, autolim=False)
assert ax.get_ylim() == ylim # AssertionError
```
#### Problem description
The `autolim` keyword allows disabling auto-scaling for polygon and linestring plots (#2602, #2817), but its application for point geometries depends on upstream changes in matplotlib scatter plots:
- matplotlib/matplotlib#15595
#### Expected Output
Setting `autolim=False` should prevent re-scaling axes limits, as it does for polygon and linestring plots.
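Until the upstream matplotlib change lands, a manual workaround is to snapshot the axes limits before the second plot call and restore them afterwards. The sketch below uses plain matplotlib scatter as a stand-in for the `GeoSeries.plot` calls, since the autoscaling being worked around is matplotlib's own behaviour:

```python
# Manual workaround sketch: save the limits, plot, then put them back.
# Plain matplotlib scatter stands in for the GeoSeries.plot calls here.
import matplotlib
matplotlib.use("Agg")  # headless backend, no display needed
import matplotlib.pyplot as plt

fig, ax = plt.subplots()
ax.scatter(range(5), range(5))
xlim, ylim = ax.get_xlim(), ax.get_ylim()

ax.scatter(range(10), range(10))  # this call would normally rescale
ax.set_xlim(xlim)                 # restore the saved limits
ax.set_ylim(ylim)

assert ax.get_ylim() == ylim      # limits preserved
```

This keeps the smaller extent even though the second call plots more points, which is the behaviour `autolim=False` should eventually provide for point plots.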
#### Output of ``geopandas.show_versions()``
<details>
SYSTEM INFO
-----------
python : 3.12.3 (main, Apr 23 2024, 09:16:07) [GCC 13.2.1 20240417]
executable : /usr/bin/python
machine : Linux-6.6.32-1-MANJARO-x86_64-with-glibc2.39
GEOS, GDAL, PROJ INFO
---------------------
GEOS : 3.12.0
GEOS lib : None
GDAL : 3.8.5
GDAL data dir: /usr/share/gdal
PROJ : 9.4.0
PROJ data dir: /usr/share/proj
PYTHON DEPENDENCIES
-------------------
geopandas : 1.0.0-alpha1+66.g27de00cf
numpy : 1.26.4
pandas : 1.5.3
pyproj : 3.6.1
shapely : 2.0.3
pyogrio : None
geoalchemy2: None
geopy : 2.4.1
matplotlib : 3.8.3
mapclassify: None
fiona : 1.9.6
psycopg : None
psycopg2 : None
pyarrow : None
</details>
| open | 2024-06-07T14:54:26Z | 2024-06-20T11:03:33Z | https://github.com/geopandas/geopandas/issues/3331 | [
"bug",
"upstream issue"
] | juseg | 1 |
strawberry-graphql/strawberry | django | 3,536 | field level relay results limit | ## Feature Request Type
- [ ] Core functionality
- [x] Alteration (enhancement/optimization) of existing feature(s)
- [x] New behavior
## Description
### Current:
The maximum of returned results for a relay connection defaults to 100 and can be changed by a schema wide setting: https://github.com/strawberry-graphql/strawberry/blob/b7f28815c116780127a9abdea42938bff5649057/strawberry/schema/config.py#L14
Example:
```py
MAX_RELAY_RESULTS = 777
schema_config = StrawberryConfig(relay_max_results=MAX_RELAY_RESULTS)
schema = Schema(
query=Query,
mutation=Mutation,
extensions=[
DjangoOptimizerExtension,
],
config=schema_config,
)
```
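A plain-Python sketch of the fallback semantics this request implies: a per-field value, when provided, would take precedence over this schema-wide default. Names are illustrative, not strawberry's API:

```python
# Illustrative names only; this is not strawberry's implementation.
SCHEMA_MAX_RESULTS = 100  # the current schema-wide default

def effective_max_results(field_max=None, schema_max=SCHEMA_MAX_RESULTS):
    """A per-field limit, when given, overrides the schema-wide one."""
    return field_max if field_max is not None else schema_max

print(effective_max_results())     # 100 (schema default applies)
print(effective_max_results(777))  # 777 (field-level override wins)
```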
### Improvement:
The maximum results returned can be overwritten on a per field level via a field setting.
Example:
```py
MAX_RELAY_ITEM_RESULTS = 777

@strawberry.type
class Query:
my_items: ListConnectionWithTotalCount[MyItemType] = strawberry_django.connection(
relay_max_results=MAX_RELAY_ITEM_RESULTS
)
``` | open | 2024-06-08T14:45:40Z | 2025-03-20T15:56:45Z | https://github.com/strawberry-graphql/strawberry/issues/3536 | [] | Eraldo | 5 |
deepset-ai/haystack | nlp | 8,341 | Component with Variadic input is not run if some of its inputs are not sent | **Describe the bug**
`Pipeline.run()` doesn't run expected Component with Variadic input if some of its senders do not send it any input.
**Expected behavior**
`Pipeline.run()` runs Component with Variadic input as expected.
**Additional context**
This has been reported by a user on Discord in [this thread](https://discord.com/channels/993534733298450452/1281538309016784938).
**To Reproduce**
The below snippet reproduces the issue, both `assert`s fails even though it shouldn't.
In both cases below the `joiner` doesn't run.
That's unexpected and must be fixed.
```
from typing import List

from haystack import Document, Pipeline, component
from haystack.components.joiners import DocumentJoiner

document_joiner = DocumentJoiner()


@component
class ConditionalDocumentCreator:
    def __init__(self, content: str):
        self._content = content

    @component.output_types(documents=List[Document], noop=None)
    def run(self, create_document: bool = False):
        if create_document:
            return {"documents": [Document(id=self._content, content=self._content)]}
        return {"noop": None}


pipeline = Pipeline()
pipeline.add_component("first_creator", ConditionalDocumentCreator(content="First document"))
pipeline.add_component("second_creator", ConditionalDocumentCreator(content="Second document"))
pipeline.add_component("third_creator", ConditionalDocumentCreator(content="Third document"))
pipeline.add_component("joiner", document_joiner)

pipeline.connect("first_creator.documents", "joiner.documents")
pipeline.connect("second_creator.documents", "joiner.documents")
pipeline.connect("third_creator.documents", "joiner.documents")

output = pipeline.run(data={"first_creator": {"create_document": True}, "third_creator": {"create_document": True}})
print(output)
assert output == {
    "second_creator": {"noop": None},
    "joiner": {
        "documents": [
            Document(id="First document", content="First document"),
            Document(id="Third document", content="Third document"),
        ]
    },
}

output = pipeline.run(data={"first_creator": {"create_document": True}, "second_creator": {"create_document": True}})
print(output)
assert output == {
    "third_creator": {"noop": None},
    "joiner": {
        "documents": [
            Document(id="First document", content="First document"),
            Document(id="Second document", content="Second document"),
        ]
    },
}
```
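Independent of Haystack, the joining behaviour the asserts expect can be stated in a few lines of plain Python: concatenate only the document lists that were actually sent, skipping senders that took the no-op branch. A minimal sketch (function name illustrative):

```python
def join_documents(*inputs):
    """Variadic join: keep only the document lists that actually arrived.

    A sender that took the "noop" branch contributes nothing and must not
    block the join from running.
    """
    joined = []
    for docs in inputs:
        if docs:
            joined.extend(docs)
    return joined

# First and third senders emitted documents, the second one did not:
print(join_documents(["First document"], None, ["Third document"]))
# ['First document', 'Third document']
```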
| closed | 2024-09-09T07:51:55Z | 2024-09-10T12:59:55Z | https://github.com/deepset-ai/haystack/issues/8341 | [
"P1"
] | silvanocerza | 1 |
pytest-dev/pytest-qt | pytest | 117 | MultiSignalBlocker.args | It actually would be nice to have an `.args` attribute for `MultiSignalBlocker` too, which is just a list of emitted argument lists, e.g. `[['foo', 2342], ['bar', 1234]]`.
I'll work on this soon (I hope :laughing:)
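The proposed attribute can be sketched independently of Qt: each emission appends its argument list, in emission order. Names are illustrative, not pytest-qt's implementation:

```python
# Illustrative sketch of the proposed .args attribute only.
class MultiSignalBlockerSketch:
    def __init__(self):
        self.args = []  # one entry per emission, in emission order

    def _on_signal_emitted(self, *signal_args):
        self.args.append(list(signal_args))

blocker = MultiSignalBlockerSketch()
blocker._on_signal_emitted("foo", 2342)
blocker._on_signal_emitted("bar", 1234)
print(blocker.args)  # [['foo', 2342], ['bar', 1234]]
```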
| closed | 2015-12-17T06:11:14Z | 2016-10-19T00:11:16Z | https://github.com/pytest-dev/pytest-qt/issues/117 | [] | The-Compiler | 8 |
jonaswinkler/paperless-ng | django | 464 | consumer error on scanned pdf | Hi Jonas,
thank you for providing paperless-ng! It is an awesome project. However, my paperless-ng installation's consumer (1.0, docker, debian 10) fails to ocr a scanned pdf.
System:
```
Operating System: Debian GNU/Linux 10 (buster)
Kernel: Linux 4.19.0-13-amd64
Architecture: x86-64
```
Docker:
`Docker version 20.10.2, build 2291f61`
Docker-compose:
`docker-compose version 1.27.4, build 40524192`
Steps to reproduce the problem:
1. Scan document (Brother ADS-1700W)
2. PDF (5 mb) goes to \\IP\scan
3. \\IP\scan resides on my nas, is mounted to /media/scan via fstab, uid,gid are same as docker user specified in docker-compose.env,
4. OCR Task fails
Expected behavior:
Successfull OCR of document
Error log of failed task:
```
Traceback (most recent call last):
  File "/usr/src/paperless/src/paperless_tesseract/parsers.py", line 176, in parse
    ocrmypdf.ocr(**ocr_args)
  File "/usr/local/lib/python3.7/site-packages/ocrmypdf/api.py", line 326, in ocr
    return run_pipeline(options=options, plugin_manager=plugin_manager, api=True)
  File "/usr/local/lib/python3.7/site-packages/ocrmypdf/_sync.py", line 368, in run_pipeline
    validate_pdfinfo_options(context)
  File "/usr/local/lib/python3.7/site-packages/ocrmypdf/_pipeline.py", line 193, in validate_pdfinfo_options
    raise InputFileError()
ocrmypdf.exceptions.InputFileError

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/src/paperless/src/documents/consumer.py", line 179, in try_consume_file
    document_parser.parse(self.path, mime_type, self.filename)
  File "/usr/src/paperless/src/paperless_tesseract/parsers.py", line 193, in parse
    raise ParseError(e)
documents.parsers.ParseError

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/local/lib/python3.7/site-packages/django_q/cluster.py", line 436, in worker
    res = f(*task["args"], **task["kwargs"])
  File "/usr/src/paperless/src/documents/tasks.py", line 73, in consume_file
    override_tag_ids=override_tag_ids)
  File "/usr/src/paperless/src/documents/consumer.py", line 196, in try_consume_file
    raise ConsumerError(e)
documents.consumer.ConsumerError
```
Error log of webserver when running docker-compose up:
```
webserver_1 | ERROR 2021-01-28 22:58:34,107 _pipeline This PDF has a user fillable form. --redo-ocr is not currently possible on such files.
webserver_1 | ERROR 2021-01-28 22:58:34,114 loggers Error while consuming document Scan_1_28012021_003254.pdf:
webserver_1 | 22:58:34 [Q] ERROR Failed [Scan_1_28012021_003254.pdf] - : Traceback (most recent call last):
webserver_1 |   File "/usr/src/paperless/src/paperless_tesseract/parsers.py", line 176, in parse
webserver_1 |     ocrmypdf.ocr(**ocr_args)
webserver_1 |   File "/usr/local/lib/python3.7/site-packages/ocrmypdf/api.py", line 326, in ocr
webserver_1 |     return run_pipeline(options=options, plugin_manager=plugin_manager, api=True)
webserver_1 |   File "/usr/local/lib/python3.7/site-packages/ocrmypdf/_sync.py", line 368, in run_pipeline
webserver_1 |     validate_pdfinfo_options(context)
webserver_1 |   File "/usr/local/lib/python3.7/site-packages/ocrmypdf/_pipeline.py", line 193, in validate_pdfinfo_options
webserver_1 |     raise InputFileError()
webserver_1 | ocrmypdf.exceptions.InputFileError
webserver_1 |
webserver_1 | During handling of the above exception, another exception occurred:
webserver_1 |
webserver_1 | Traceback (most recent call last):
webserver_1 |   File "/usr/src/paperless/src/documents/consumer.py", line 179, in try_consume_file
webserver_1 |     document_parser.parse(self.path, mime_type, self.filename)
webserver_1 |   File "/usr/src/paperless/src/paperless_tesseract/parsers.py", line 193, in parse
webserver_1 |     raise ParseError(e)
webserver_1 | documents.parsers.ParseError
webserver_1 |
webserver_1 | During handling of the above exception, another exception occurred:
webserver_1 |
webserver_1 | Traceback (most recent call last):
webserver_1 |   File "/usr/local/lib/python3.7/site-packages/django_q/cluster.py", line 436, in worker
webserver_1 |     res = f(*task["args"], **task["kwargs"])
webserver_1 |   File "/usr/src/paperless/src/documents/tasks.py", line 73, in consume_file
webserver_1 |     override_tag_ids=override_tag_ids)
webserver_1 |   File "/usr/src/paperless/src/documents/consumer.py", line 196, in try_consume_file
webserver_1 |     raise ConsumerError(e)
webserver_1 | documents.consumer.ConsumerError
```
I noticed the line: ` ERROR 2021-01-28 22:58:34,107 _pipeline This PDF has a user fillable form. --redo-ocr is not currently possible on such files.`
The PDF definitely has no user fillable form; it is a non-OCR scan of a printed document.
Steps I tried:
1. Rebuilding the image --> no success
2. Permissions --> okay, `ls -la` shows uid and gid are correct
3. OCR options --> no difference; skip, redo-ocr, or omitting them completely all behave the same
Do you have any ideas on this?
Edit: Manually adding the pdf via uploader throws the same error | closed | 2021-01-28T22:14:56Z | 2021-01-28T22:55:44Z | https://github.com/jonaswinkler/paperless-ng/issues/464 | [] | msrv | 1 |
cvat-ai/cvat | tensorflow | 9,023 | Return storage location not supporting shared storage | when adding tasks trough the API, i can define where it should be saved, including shared storage. but when i want the metadata from a specific task i only get either local or cloud_storage. why doesn't it include share as a possible option? i can define this with each uploading task, but don't get it returned with Metadata call. also can't see it as a possible source or target storage, but can select files from there. please help. here is my call:
```python
from pprint import pprint
from cvat_sdk.api_client import Configuration, ApiClient, exceptions
from cvat_sdk.api_client.models import *
# Set up an API client
# Read Configuration class docs for more info about parameters and authentication methods
configuration = Configuration(
host = "http://xx.xx.xx.xx:8080",
username = 'xxxxxxx',
password = 'xxxxxxxx',
)
with ApiClient(configuration) as api_client:
id = 9 # int | A unique integer value identifying this task.
try:
(data, response) = api_client.tasks_api.retrieve(
id,)
pprint(data)
text_file = open("Output_meta_data_all.txt", "w")
text_file.write(data.to_str())
text_file.close()
except exceptions.ApiException as e:
print("Exception when calling TasksApi.retrieve(): %s\n" % e)
```
All I need to know is whether the data resides on the file share or somewhere else. | closed | 2025-01-30T12:59:23Z | 2025-02-21T13:36:24Z | https://github.com/cvat-ai/cvat/issues/9023 | [] | Xterbione | 8 |
newpanjing/simpleui | django | 376 | The "original" row of TabularInline entries has too large a margin-top offset | **Bug description**
When using TabularInline, the vertical offset of the "original" info on inline entries is too large.
**Steps to reproduce**
1. Inline a TabularInline in a ModelAdmin
2. Open the change view; see the screenshot
![image](https://user-images.githubusercontent.com/15624240/118432955-28373e80-b70b-11eb-9bba-08002a27b2e5.png)
Related code:
https://github.com/newpanjing/simpleui/blob/c5fdc1b7a1cda33b240e73f62bffa48ff0ce00d5/simpleui/templates/admin/change_form.html#L14
**Environment**
1. Operating System:
(Windows/Linux/MacOS)....
2. Python Version: 3.8
3. Django Version: 3.2
4. SimpleUI Version: 2021.5.11
**Description**
1. A preliminary investigation suggests this is not caused by the Django version;
2. Suggested fix: change `margin-top: -50px;` to `-30px` and `height: 1.2em;` to `1.5em`; testing shows this works in most scenarios.
"bug"
] | yanhuixie | 1 |
akfamily/akshare | data-science | 5,806 | Temporary workaround for the Eastmoney (东财) interface | The Eastmoney (东方财富) interface appears to be under frequent modification at the moment, which caused the same problem to reappear this week. Feedback has been received from both Zhishi Xingqiu (知识星球) users and GitHub users; feel free to leave a comment in this issue, and better solutions are welcome. | closed | 2025-03-08T16:54:32Z | 2025-03-09T14:21:53Z | https://github.com/akfamily/akshare/issues/5806 | [] | albertandking | 10 |
lexiforest/curl_cffi | web-scraping | 29 | Pyinstaller Error | PyInstaller error in version 0.4.0:
```
Traceback (most recent call last):
  File "lt1.py", line 8, in <module>
  File "PyInstaller\loader\pyimod03_importers.py", line 540, in exec_module
  File "curl_cffi\__init__.py", line 39, in <module>
ImportError: DLL load failed while importing _wrapper:
```
qubvel-org/segmentation_models.pytorch | computer-vision | 60 | cannot import name 'cfg' | When I try to import the module, I get an error:
```python
cannot import name 'cfg'
```
Please help!
PaddlePaddle/models | nlp | 5,315 | no have kpi.py | PaddleNLP\pretrain_language_models\BERT
run ./_run_ce.sh

The full code is not shared; `kpi.py` is missing.
"user"
] | zhihuashan | 2 |
inducer/pudb | pytest | 136 | http://mathema.tician.de/software/pudb has gone 404 | The URL "http://mathema.tician.de/software/pudb" results in a 404 error
The URL is shown in the banner on GitHub. It was also the first link on the wiki page at http://wiki.tiker.net/PuDB, where I updated it to "http://pypi.python.org/pypi/pudb" as that is the URL listed as homepage in PyPi
| closed | 2015-04-30T10:38:03Z | 2015-05-01T16:03:56Z | https://github.com/inducer/pudb/issues/136 | [] | jalanb | 1 |
modelscope/modelscope | nlp | 508 | Version conflict reported when installing Paraformer | **Environment**
windows 10
python 3.7
**Goal**
Install the Paraformer environment locally.
The modelscope environment is already set up, but installing modelscope[audio] reports a version conflict.

| closed | 2023-08-29T08:06:57Z | 2024-07-08T01:53:20Z | https://github.com/modelscope/modelscope/issues/508 | [
"Stale"
] | stoneHah | 3 |
plotly/dash | plotly | 2,653 | dcc.Dropdown has inconsistent layout flow with other common input components | **Describe your context**
```
dash 2.13.0
dash-core-components 2.0.0
dash-html-components 2.0.0
dash-table 5.0.0
```
**Describe the bug**
Many of the go-to Dash Core Components have the CSS style `display` set to `inline-block`, with a notable exception of `dcc.Dropdown`. This means that without any custom styles, the dropdown component has inconsistent layout flow compared to other input controls it's likely to be found with. I know I can style the component to fix the issue, but this seems like overkill for simple demos where you just need some controls next to each other, and would be confusing for people getting started with Dash.
Here's an example:
```python
from dash import Dash, dcc, html
app = Dash(__name__)
app.layout = html.Div(
children=[
html.Label("Dropdown"),
dcc.Dropdown(),
dcc.DatePickerRange("DatePickerRange"),
html.Label("DatePickerSingle"),
dcc.DatePickerSingle(),
html.Label("Input"),
dcc.Input(),
],
)
app.run(port=8050)
```
Which gives the this layout:

This inconsistent flow layout that comes out of the box is too jarring, even for throwaway demo code, so I inevitably end up adding manual styling just for that one component to normalise things a bit:
```python
app2 = Dash(__name__)
app2.layout = html.Div(
children=[
html.Label("Dropdown"),
dcc.Dropdown(
style={
"display": "inline-block",
"width": 300,
"vertical-align": "middle",
}
),
dcc.DatePickerRange("DatePickerRange"),
html.Label("DatePickerSingle"),
dcc.DatePickerSingle(),
html.Label("Input"),
dcc.Input(),
],
)
app2.run(port=8051)
```

I'm wondering if there would be any appetite for trying to normalise the layout flow for `dcc.Dropdown`? I know there's the impact on the many existing Dash apps out there to consider, but I do think it would make for a better experience, also for people getting started with Dash too.
| open | 2023-10-05T15:35:07Z | 2024-08-13T19:38:28Z | https://github.com/plotly/dash/issues/2653 | [
"bug",
"P3"
] | ned2 | 0 |
erdewit/ib_insync | asyncio | 322 | Unable to fetch data for metals | I am so far unable to fetch data for metals, using the product listing located [here](https://www.interactivebrokers.com/en/index.php?f=2222&exch=idealpro_metals&showcategories=) for gold. There is sparse documentation on this, but I have tried:
```python
contract = Commodity('XAUUSD','IDEALPRO','USD')
```
and,
```python
contract = Contract(conId=69067924,exchange='IDEALPRO')
```
along with,
```python
bars = ib.reqHistoricalData(
contract, endDateTime='', durationStr='30 D',
barSizeSetting='1 hour', whatToShow='MIDPOINT', useRTH=True)
```
In either case, I receive an error stating that "No security definition has been found for the request".
Changing the exchange name to 'IDEALPRO Metals' gives me a `Invalid destination exchange` error.
Please advise. | closed | 2020-12-10T00:56:49Z | 2020-12-13T10:22:49Z | https://github.com/erdewit/ib_insync/issues/322 | [] | Shellcat-Zero | 1 |
Nemo2011/bilibili-api | api | 276 | [Question] How to fetch messages that @-mention me |
As the title says: I cannot find an interface for this. | closed | 2023-05-05T07:25:15Z | 2024-01-19T04:19:07Z | https://github.com/Nemo2011/bilibili-api/issues/276 | [
"question"
] | Hzxxxx2002 | 4 |
PokeAPI/pokeapi | api | 643 | Missing TypeScript Wrapper Reference on Poke-Api Website | Hi, I noticed that the TypeScript wrapper for the PokéAPI is not referenced on the official website. The two documentations are not identical; in addition, the TS wrapper has a logging feature.


| closed | 2021-08-19T14:42:29Z | 2021-10-22T18:00:09Z | https://github.com/PokeAPI/pokeapi/issues/643 | [] | moyzlevi | 1 |
pallets/quart | asyncio | 77 | Motor MongoDB driver integration with Quart | Hi, currently Flask doesn't support the Motor driver for MongoDB, but it supports MongoEngine, which internally uses the PyMongo driver and is not asynchronous. Since the Quart framework is built around asyncio, it would be much better if this paradigm were implemented in the MongoDB integration as well; Motor also runs on asyncio. Is it possible to implement MotorEngine instead of MongoEngine for Quart? | closed | 2019-09-10T08:45:49Z | 2022-07-06T00:23:50Z | https://github.com/pallets/quart/issues/77 | [] | harshakumar347 | 1 |
plotly/dash | dash | 2,698 | [BUG] pattern-matching callback output "children" interpret an output list as multiple outputs | Thank you so much for helping improve the quality of Dash!
We do our best to catch bugs during the release process, but we rely on your help to find the ones that slip through.
**Describe your context**
Python 3.7.11 in a venv on Ubuntu 22.04
- replace the result of `pip list | grep dash` below
```
dash 2.14.1
dash-bootstrap-components 1.2.1
dash-core-components 2.0.0
dash-html-components 2.0.0
dash-table 5.0.0
```
**Describe the bug**
The following pattern-matching callback handles a dynamic card:
- there can be an arbitrary amount of cards (variable id field: `card`, managed by a higher level callback that creates the card and works fine)
- each card has a container 'bbl-mdl-filters', which contains a customizable amounts of filters (variable id fields: `card` inherited from card and `filter` for the filter itself)
- there is a button to add a filter as an extra child of the above container
- each filter is a dash bootstrap row which contains 4 items: field, condition, value, delete-button (see screenshot below)
- the filter close button is functional to remove it from the children list
```
@app.callback(
    Output({'type': 'bbl-mdl-filters', 'card': ALL}, 'children'),
    Input({'type': 'bbl-mdl-filters-add', 'card': ALL}, 'n_clicks'),
    Input({'type': 'bbl-query-filter-close', 'card': ALL, 'filter': ALL}, 'n_clicks'),
    State({'type': 'bbl-mdl-filters', 'card': ALL}, 'children'),
    State('bbl-cards-container', 'children'),
    prevent_initial_call=True,
)
def add_delete_query_filters(n_add, n_close, filters, cards):
    """add or remove query filter (+ btn to add, close btn to remove)"""
    del n_close
    ctx = dash.callback_context
    filters = list(filter(lambda f: f, filters or []))
    if not ctx.triggered:
        return no_update
    type_ = ctx.triggered_id['type']
    card = ctx.triggered_id['card']
    row = ctx.triggered_id.get('row')
    if type_ == 'bbl-mdl-filters-add':
        if n_add:
            source = _find_child_value(cards, 'bbl-query-source', card)
            fund = _find_child_value(cards, 'bbl-query-fund', card)
            filters = filters or []
            filters.append(Query.row_for_filter(source, fund, card, len(filters)))
    elif type_ == 'bbl-query-filter-close':
        filters = [
            child
            for child in filters
            if child['props']['id']['row'] != row
        ]
    return filters
```
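One reading of the error, offered as an assumption rather than a confirmed diagnosis: with `Output({'type': 'bbl-mdl-filters', 'card': ALL}, 'children')`, Dash expects one return value per matched component, each value being that component's entire `children` list, so returning the children list bare gets counted element by element. A plain-Python simplification of the check, not Dash's actual `validate_multi_return`:

```python
# Hypothetical simplification of the validation in the stack trace below;
# validate_multi_return here is an illustrative stand-in, not Dash's code.
def validate_multi_return(output_spec, returned):
    if len(returned) != len(output_spec):
        raise ValueError(f"Expected {len(output_spec)}, got {len(returned)}")

spec_one_container = [{"id": {"card": 1, "type": "bbl-mdl-filters"},
                       "property": "children"}]
filters = ["filter-row-0", "filter-row-1"]  # two children of one container

# Returning the children list directly is read as two output values:
try:
    validate_multi_return(spec_one_container, filters)
except ValueError as exc:
    print(exc)  # Expected 1, got 2

# Wrapping it, one children list per matched container, validates:
validate_multi_return(spec_one_container, [filters])
```

Under that reading, returning `[filters]` instead of `filters` would match a single-container spec; this is inferred from the error text, not a confirmed fix.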
The unexpected behaviour:
The callback below behaves differently depending on the amount of filters returned:
- empty list: ~~OK~~ EDIT: error, does not let delete the row.
- one filter: OK (as per screenshot)
- more than one filter: **error** raised when marshalling what's returned vs the callback output specs. It interprets the list with two items as they were 2 separate outputs from the callback, where in reality is the list of children. Notably, **validate_multi_return** should not be called here (or should interpret the return differently).
Stack trace:
```
Traceback (most recent call last):
  File "<hidden-repo-path>/venv/lib/python3.7/site-packages/flask/app.py", line 2548, in __call__
    return self.wsgi_app(environ, start_response)
  File "<hidden-repo-path>/venv/lib/python3.7/site-packages/flask/app.py", line 2528, in wsgi_app
    response = self.handle_exception(e)
  File "<hidden-repo-path>/venv/lib/python3.7/site-packages/flask/app.py", line 2525, in wsgi_app
    response = self.full_dispatch_request()
  File "<hidden-repo-path>/venv/lib/python3.7/site-packages/flask/app.py", line 1822, in full_dispatch_request
    rv = self.handle_user_exception(e)
  File "<hidden-repo-path>/venv/lib/python3.7/site-packages/flask/app.py", line 1820, in full_dispatch_request
    rv = self.dispatch_request()
  File "<hidden-repo-path>/venv/lib/python3.7/site-packages/flask/app.py", line 1796, in dispatch_request
    return self.ensure_sync(self.view_functions[rule.endpoint])(**view_args)
  File "<hidden-repo-path>/venv/lib/python3.7/site-packages/dash/dash.py", line 1316, in dispatch
    callback_context=g,
  File "<hidden-repo-path>/venv/lib/python3.7/site-packages/dash/_callback.py", line 460, in add_context
    output_spec, flat_output_values, callback_id
  File "<hidden-repo-path>/venv/lib/python3.7/site-packages/dash/_validate.py", line 207, in validate_multi_return
    """
dash.exceptions.InvalidCallbackReturnValue: Invalid number of output values for {"card":["ALL"],"type":"bbl-mdl-filters"}.children item 0.
Expected 1, got 2
output spec: [{'id': {'card': 1, 'type': 'bbl-mdl-filters'}, 'property': 'children'}]
output value: [{'props': {'children': [{'props': {'children': {'props': {'options': ['asset_class', 'call_put', 'country_iso', 'country_of_risk', 'currency', 'description', 'exchange_code_composite', 'exchange_code_mic', 'exchange_code_short', 'fulcrum_ticker', 'id_base', 'id_bbg', 'id_maia', 'id_underlying', 'imm_based', 'inflation_swap', 'instrument_type', 'is_tradable', 'market', 'market_ticker', 'non_deliverable', 'non_notional_ccy', 'notional_ccy', 'ois', 'option_exercise_style', 'portfolio_type', 'region', 'settlement_ccy', 'short_name', 'strategy', 'tenor', 'ticker', 'upload_time'], 'placeholder': 'Field...', 'style': {'display': 'inline-block', 'width': '100%'}, 'id': {'type': 'bbl-query-filter-key', 'card': 1, 'filter': 0}}, 'type': 'Dropdown', 'namespace': 'dash_core_components'}, 'width': 5}, 'type': 'Col', 'namespace': 'dash_bootstrap_components'}, {'props': {'children': {'props': {'options': ['=', '!=', '<', '<=', '>', '>=', 're'], 'value': '=', 'placeholder': 'Operator...', 'style': {'display': 'inline-block', 'width': '100%'}, 'id': {'type': 'bbl-query-filter-op', 'card': 1, 'filter': 0}}, 'type': 'Dropdown', 'namespace': 'dash_core_components'}, 'width': 1}, 'type': 'Col', 'namespace': 'dash_bootstrap_components'}, {'props': {'children': {'props': {'placeholder': 'Value...', 'disabled': True, 'style': {'display': 'inline-block', 'width': '100%'}, 'id': {'type': 'bbl-query-filter-val', 'card': 1, 'filter': 0}}, 'type': 'Dropdown', 'namespace': 'dash_core_components'}}, 'type': 'Col', 'namespace': 'dash_bootstrap_components'}, {'props': {'children': {'props': {'children': {'props': {'children': None, 'className': 'fas fa-times'}, 'type': 'I', 'namespace': 'dash_html_components'}, 'id': {'type': 'bbl-query-filter-close', 'card': 1, 'filter': 0}, 'class_name': 'ml-auto close', 'color': 'danger', 'style': {'display': 'inline-block', 'aspect-ratio': '1', 'border-radius': '50%'}}, 'type': 'Button', 'namespace': 'dash_bootstrap_components'}, 'width': 1}, 
'type': 'Col', 'namespace': 'dash_bootstrap_components'}], 'id': {'type': 'bbl-query-filter-filter', 'card': 1, 'filter': 0}, 'style': {'display': 'inline-flex', 'width': '100%'}}, 'type': 'Row', 'namespace': 'dash_bootstrap_components'}, Row(children=[Col(children=Dropdown(options=['asset_class', 'call_put', 'country_iso', 'country_of_risk', 'currency', 'description', 'exchange_code_composite', 'exchange_code_mic', 'exchange_code_short', 'fulcrum_ticker', 'id_base', 'id_bbg', 'id_maia', 'id_underlying', 'imm_based', 'inflation_swap', 'instrument_type', 'is_tradable', 'market', 'market_ticker', 'non_deliverable', 'non_notional_ccy', 'notional_ccy', 'ois', 'option_exercise_style', 'portfolio_type', 'region', 'settlement_ccy', 'short_name', 'strategy', 'tenor', 'ticker', 'upload_time'], placeholder='Field...', style={'display': 'inline-block', 'width': '100%'}, id={'type': 'bbl-query-filter-key', 'card': 1, 'filter': 1}), width=5), Col(children=Dropdown(options=['=', '!=', '<', '<=', '>', '>=', 're'], value='=', placeholder='Operator...', style={'display': 'inline-block', 'width': '100%'}, id={'type': 'bbl-query-filter-op', 'card': 1, 'filter': 1}), width=1), Col(Dropdown(placeholder='Value...', disabled=True, style={'display': 'inline-block', 'width': '100%'}, id={'type': 'bbl-query-filter-val', 'card': 1, 'filter': 1})), Col(children=Button(children=I(className='fas fa-times'), id={'type': 'bbl-query-filter-close', 'card': 1, 'filter': 1}, class_name='ml-auto close', color='danger', style={'display': 'inline-block', 'aspect-ratio': '1', 'border-radius': '50%'}), width=1)], id={'type': 'bbl-query-filter-filter', 'card': 1, 'filter': 1}, style={'display': 'inline-flex', 'width': '100%'})]
```
**Expected behavior**
A clear and concise description of what you expected to happen.
**Screenshots**
A card, with only one filter.

| closed | 2023-11-21T12:05:29Z | 2023-11-21T14:12:42Z | https://github.com/plotly/dash/issues/2698 | [] | claudiocmp | 0 |
deezer/spleeter | tensorflow | 874 | My batch script worked 2 days ago, doesn't work now. No changes done. | for %%a in ("*.wav") do spleeter separate -p spleeter:2stems -i "%%a" -o output
Now it just shows the text 50-60 times and quits.
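For what it's worth, the same per-file loop can be expressed in Python, which makes the exact command being run explicit (a sketch only — the flags are copied verbatim from the batch line above):

```python
import subprocess
from pathlib import Path

def build_spleeter_cmd(wav, out_dir="output"):
    """Mirror one iteration of the batch loop as an argument list."""
    return ["spleeter", "separate", "-p", "spleeter:2stems", "-i", wav, "-o", out_dir]

for wav in sorted(Path(".").glob("*.wav")):
    print(build_spleeter_cmd(str(wav)))
    # subprocess.run(build_spleeter_cmd(str(wav)), check=True)  # uncomment to actually run
```

Printing the argument list before running it makes it easy to spot the quoting problems that batch `for` loops are prone to.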
How can that be? | open | 2023-10-21T14:19:39Z | 2023-10-21T15:14:34Z | https://github.com/deezer/spleeter/issues/874 | [
"bug",
"invalid"
] | manus693 | 3 |
vaexio/vaex | data-science | 1,969 | ArrowIndexError while using groupby | Hi Team,
We are using a Dask + Vaex combination for one of our use cases. After the recent Blake3 upgrade to 0.3.1 and Vaex upgrade to 4.8.0 (Dask version: 2022.01.0), we started getting the following error in `groupby` intermittently (sometimes it works fine, sometimes it fails with the error below):
```
df = df.groupby(by=group_by_attrs, sort=False, assume_sparse='auto').agg(agg_dict)

  File "/usr/local/python/python-3.9/std/lib64/python3.9/site-packages/vaex/dataframe.py", line 7000, in groupby
    return self._delay(delay, progressbar.exit_on(next(groupby._promise_by)))
  File "/usr/local/python/python-3.9/std/lib64/python3.9/site-packages/vaex/dataframe.py", line 1689, in _delay
    return task.get()
  File "/usr/local/python/python-3.9/std/lib64/python3.9/site-packages/aplus/__init__.py", line 170, in get
    raise self._reason
  File "/usr/local/python/python-3.9/std/lib64/python3.9/site-packages/vaex/promise.py", line 121, in callAndReject
    ret.fulfill(failure(r))
  File "/usr/local/python/python-3.9/std/lib64/python3.9/site-packages/vaex/progress.py", line 91, in error
    raise arg
  File "/usr/local/python/python-3.9/std/lib64/python3.9/site-packages/vaex/promise.py", line 121, in callAndReject
    ret.fulfill(failure(r))
  File "/usr/local/python/python-3.9/std/lib64/python3.9/site-packages/vaex/delayed.py", line 38, in _wrapped
    raise exc
  File "/usr/local/python/python-3.9/std/lib64/python3.9/site-packages/vaex/promise.py", line 121, in callAndReject
    ret.fulfill(failure(r))
  File "/usr/local/python/python-3.9/std/lib64/python3.9/site-packages/vaex/progress.py", line 91, in error
    raise arg
  File "/usr/local/python/python-3.9/std/lib64/python3.9/site-packages/vaex/promise.py", line 121, in callAndReject
    ret.fulfill(failure(r))
  File "/usr/local/python/python-3.9/std/lib64/python3.9/site-packages/vaex/delayed.py", line 38, in _wrapped
    raise exc
  File "/usr/local/python/python-3.9/std/lib64/python3.9/site-packages/vaex/promise.py", line 121, in callAndReject
    ret.fulfill(failure(r))
  File "/usr/local/python/python-3.9/std/lib64/python3.9/site-packages/vaex/delayed.py", line 38, in _wrapped
    raise exc
  File "/usr/local/python/python-3.9/std/lib64/python3.9/site-packages/vaex/promise.py", line 121, in callAndReject
    ret.fulfill(failure(r))
  File "/usr/local/python/python-3.9/std/lib64/python3.9/site-packages/vaex/delayed.py", line 38, in _wrapped
    raise exc
  File "/usr/local/python/python-3.9/std/lib64/python3.9/site-packages/vaex/promise.py", line 106, in callAndFulfill
    ret.fulfill(success(v))
  File "/usr/local/python/python-3.9/std/lib64/python3.9/site-packages/vaex/delayed.py", line 82, in call
    return f(*args_real, **kwargs_real)
  File "/usr/local/python/python-3.9/std/lib64/python3.9/site-packages/vaex/groupby.py", line 370, in process
    bin_values[parent.label] = parent.bin_values.take(indices)
  File "pyarrow/array.pxi", line 1157, in pyarrow.lib.Array.take
    return _pc().take(self, indices)
  File "/usr/local/python/python-3.9/std/lib64/python3.9/site-packages/pyarrow/compute.py", line 625, in take
    return call_function('take', [data, indices], options, memory_pool)
  File "pyarrow/_compute.pyx", line 528, in pyarrow._compute.call_function
    return func.call(args, options=options, memory_pool=memory_pool)
  File "pyarrow/_compute.pyx", line 327, in pyarrow._compute.Function.call
    result = GetResultValue(
  File "pyarrow/error.pxi", line 143, in pyarrow.lib.pyarrow_internal_check_status
    return check_status(status)
  File "pyarrow/error.pxi", line 126, in pyarrow.lib.check_status
    raise ArrowIndexError(message)

pyarrow.lib.ArrowIndexError: Index -137 out of bounds
```
Code structure (could not share the reproducer here, as we are using internal libraries):
```
# create dask cluster
# do computations and store data in cache in parallel using dask cluster
# these computations use vaex
```
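The traceback bottoms out in Arrow's `take` kernel rejecting the negative index `-137`. A plain-Python analogue of that bounds check (illustrative only, not pyarrow's actual implementation — here negative indices are rejected outright, matching the error text):

```python
def take(values, indices):
    """Bounds-checked take, loosely mirroring the Arrow kernel's error message."""
    out = []
    for i in indices:
        if not (0 <= i < len(values)):
            raise IndexError(f"Index {i} out of bounds")
        out.append(values[i])
    return out

print(take(["a", "b", "c"], [0, 2]))  # ['a', 'c']
```

A negative index reaching `take` suggests the grouper handed Arrow an invalid indices array upstream, which might explain why the failure is intermittent.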
Please note that the functionality was working fine with the previous versions of Blake (0.2.1) and Vaex (4.7.0) (Dask version: 2022.01.0). Would appreciate any help here. Thanks. | open | 2022-03-11T12:59:23Z | 2022-03-21T15:46:56Z | https://github.com/vaexio/vaex/issues/1969 | [] | khus07hboo | 2 |
BeanieODM/beanie | asyncio | 1,132 | Document._class_id is always overwritten | **Describe the bug**
`Document._class_id` is overwritten. Document initialization should check if it's set before assigning a value. This messes up the customization of document discriminators and leads to failing queries (update, find, link fetching...).
**To Reproduce**
Create a couple of documents (with inheritance pattern), set `Settings.class_id` for example to `"type"`, add `type: str = "my-doc-type-1"`, `type: str = "my-doc-type-2"`, ... properties to documents and also set `_class_id` on each document to the same value as the `type` property.
**Expected behavior**
Update, find, link fetching queries work.
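The fix direction described in the summary — check whether `_class_id` is already set before assigning it — can be sketched in plain Python, with no Beanie dependency (all names here are illustrative):

```python
def assign_class_id(cls, default):
    """Give a class a discriminator only if it didn't define its own."""
    if "_class_id" not in vars(cls):  # respect an explicitly set value
        cls._class_id = default

class Doc: ...

class TypedDoc(Doc):
    _class_id = "my-doc-type-1"  # custom discriminator that must survive initialization

assign_class_id(Doc, "Doc")
assign_class_id(TypedDoc, "Doc.TypedDoc")
print(Doc._class_id, TypedDoc._class_id)  # Doc my-doc-type-1
```

`vars(cls)` only sees attributes defined on the class itself, so an inherited discriminator still gets a fresh default while an explicitly set one is preserved.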
**Additional context**
I'll submit a PR with a fix that avoids overwriting `_class_id` if it's already set. It solves the issues. | open | 2025-02-24T10:00:20Z | 2025-02-24T11:36:20Z | https://github.com/BeanieODM/beanie/issues/1132 | [] | volfpeter | 0 |
microsoft/MMdnn | tensorflow | 335 | Mxnet->Tf | Hi kitstar,
I have run into the same issue:
Warning: MXNet Parser has not supported operator null with name data.
Warning: convert the null operator with name [data] into input layer.
Warning: MXNet Parser has not supported operator _minus_scalar with name _minusscalar0.
Warning: MXNet Parser has not supported operator _mul_scalar with name _mulscalar0.
Traceback (most recent call last):
File "/home/bearzhang/anaconda2/lib/python2.7/runpy.py", line 174, in _run_module_as_main
"__main__", fname, loader, pkg_name)
File "/home/bearzhang/anaconda2/lib/python2.7/runpy.py", line 72, in _run_code
exec code in run_globals
File "/home/bearzhang/anaconda2/lib/python2.7/site-packages/mmdnn/conversion/_script/convertToIR.py", line 159, in <module>
_main()
File "/home/bearzhang/anaconda2/lib/python2.7/site-packages/mmdnn/conversion/_script/convertToIR.py", line 154, in _main
ret = _convert(args)
File "/home/bearzhang/anaconda2/lib/python2.7/site-packages/mmdnn/conversion/_script/convertToIR.py", line 95, in _convert
parser.gen_IR()
File "/home/bearzhang/anaconda2/lib/python2.7/site-packages/mmdnn/conversion/mxnet/mxnet_parser.py", line 266, in gen_IR
func(current_node)
File "/home/bearzhang/anaconda2/lib/python2.7/site-packages/mmdnn/conversion/mxnet/mxnet_parser.py", line 487, in rename_Convolution
in_channel = self.IR_layer_map[IR_node.input[0]].attr["_output_shapes"].list.shape[0].dim[-1].size
KeyError: u'_mulscalar0'
can you help me?
| open | 2018-07-27T03:06:04Z | 2018-07-31T02:11:52Z | https://github.com/microsoft/MMdnn/issues/335 | [] | xiangdeyizhang | 1 |
waditu/tushare | pandas | 1,653 | Is the latest code no longer open-sourced? | The commits I can see in this repo are from a long time ago, and the interfaces no longer match the current code. Did this project stop open-sourcing its latest version a while ago? | open | 2022-05-22T08:57:08Z | 2022-05-22T08:57:08Z | https://github.com/waditu/tushare/issues/1653 | [] | hitflame | 0 |
Urinx/WeixinBot | api | 22 | [BUG] Infinite loop when selector is 6 | I see this is still a TODO in your code, so I'd like to ask what 6 stands for. When I run it I often hit 6, and then it goes into an infinite loop.
| open | 2016-02-24T08:58:46Z | 2017-07-31T15:53:01Z | https://github.com/Urinx/WeixinBot/issues/22 | [
"bug"
] | Zcc | 9 |
PrefectHQ/prefect | automation | 16,828 | Process workpool doesn't make a flow run | ### Bug summary
I am hosting a Prefect server on ECS/AWS and have set up a "Process" work pool on my local server.
When I tried to run a sample flow through the work pool, it crashed with the following log:
```
Failed to submit flow run 'a33827d9-ae22-408c-a914-cfd926fa50ba' to infrastructure.
Traceback (most recent call last):
File "/home/{virtualenv}/lib/python3.12/site-packages/prefect/workers/base.py", line 1009, in _submit_run_and_capture_errors
await self._give_worker_labels_to_flow_run(flow_run.id)
File "/home/{virtualenv}/lib/python3.12/site-packages/prefect/workers/base.py", line 1257, in _give_worker_labels_to_flow_run
await self._client.update_flow_run_labels(flow_run_id, labels)
File "/home/{virtualenv}/lib/python3.12/site-packages/prefect/client/orchestration/_flow_runs/client.py", line 897, in update_flow_run_labels
response = await self.request(
^^^^^^^^^^^^^^^^^^^
File "/home/{virtualenv}/lib/python3.12/site-packages/prefect/client/orchestration/base.py", line 46, in request
return await self._client.request(method, path, params=params, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/{virtualenv}/lib/python3.12/site-packages/httpx/_client.py", line 1540, in request
return await self.send(request, auth=auth, follow_redirects=follow_redirects)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/{virtualenv}/lib/python3.12/site-packages/prefect/client/base.py", line 355, in send
response.raise_for_status()
File "/home/{virtualenv}/lib/python3.12/site-packages/prefect/client/base.py", line 163, in raise_for_status
raise PrefectHTTPStatusError.from_httpx_error(exc) from exc.__cause__
prefect.exceptions.PrefectHTTPStatusError: Client error '404 Not Found' for url '{prefect_api_url}/flow_runs/a33827d9-ae22-408c-a914-cfd926fa50ba/labels'
Response: {'detail': 'Not Found'}
For more information check: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/404
```
I have anonymized {virtualenv} and {prefect_api_url}, but I believe they are correctly configured.
I confirmed that the same flow runs successfully on ECS.
I also verified that the flow runs successfully with the process work pool when using Prefect 3.0.0.
Currently, I am using version 3.1.13.
### Version info
```Text
Version: 3.1.13
API version: 0.8.4
Python version: 3.12.5
Git commit: 16e85ce3
Built: Fri, Jan 17, 2025 8:46 AM
OS/Arch: linux/x86_64
Profile: adhoc-dev
Server type: server
Pydantic version: 2.10.5
Integrations:
prefect-aws: 0.5.3
```
### Additional context
_No response_ | closed | 2025-01-23T16:37:35Z | 2025-01-24T01:56:26Z | https://github.com/PrefectHQ/prefect/issues/16828 | [
"bug"
] | YukioKaneda | 2 |
globaleaks/globaleaks-whistleblowing-software | sqlalchemy | 3,548 | Next button does not seem to move to next questionnaire tab | ### What version of GlobaLeaks are you using?
4.12.2
### What browser(s) are you seeing the problem on?
Chrome
### What operating system(s) are you seeing the problem on?
Windows
### Describe the issue
[Next] button does not seem to move to next questionnaire tab.
In the specific questionnaire there is a conditional dependence between questionnaire tab 2 and a question in tab 1.
When clicking the tabs it works, and submission also works
### Proposed solution
_No response_ | closed | 2023-07-23T17:09:48Z | 2023-07-23T20:03:32Z | https://github.com/globaleaks/globaleaks-whistleblowing-software/issues/3548 | [
"T: Bug",
"C: Client"
] | elbill | 1 |
Johnserf-Seed/TikTokDownload | api | 136 | Web version now released | # Web Version Project
[Johnserf-Seed/TikTokWeb](https://github.com/Johnserf-Seed/TikTokWeb)

| open | 2022-04-18T13:35:00Z | 2022-04-18T14:47:56Z | https://github.com/Johnserf-Seed/TikTokDownload/issues/136 | [
"小白必看(good first issue)"
] | Johnserf-Seed | 4 |
comfyanonymous/ComfyUI | pytorch | 6,380 | ControlNetFlux.forward() missing 1 required positional argument: 'y' | ### Your question


### Logs
_No response_
### Other
_No response_ | closed | 2025-01-07T13:13:35Z | 2025-01-11T15:38:24Z | https://github.com/comfyanonymous/ComfyUI/issues/6380 | [
"User Support"
] | Season0468 | 3 |
jowilf/starlette-admin | sqlalchemy | 297 | Bug: pagination size is not remembered | **Describe the bug**
If you change the pagination size to 25, the system does not remember this.
**To Reproduce**
Change the page size from 10 to 25. Then visit an entity, click around, go back to the list. The page size is back to 10.
**Environment (please complete the following information):**
- Starlette-Admin version: 0.11.2
**Additional context**
I suppose all other changes are probably lost and not remembered.
| closed | 2023-09-06T21:42:46Z | 2023-09-09T03:32:02Z | https://github.com/jowilf/starlette-admin/issues/297 | [
"bug"
] | sglebs | 0 |
the0demiurge/ShadowSocksShare | flask | 76 | close | closed | 2019-08-31T06:41:39Z | 2019-08-31T14:52:30Z | https://github.com/the0demiurge/ShadowSocksShare/issues/76 | [] | loewe0202 | 0 | |
ndleah/python-mini-project | data-visualization | 221 | Python projects | closed | 2024-02-11T12:09:02Z | 2024-06-02T06:03:40Z | https://github.com/ndleah/python-mini-project/issues/221 | [] | busssanwesh | 0 | |
BeanieODM/beanie | pydantic | 525 | [BUG] Default values break behaviour of 'Indexed(...)' | **Describe the bug**
When creating a field with the `Indexed(...)` type and a default value provided — in my case pydantic's `Field(...)` — the field is not added to MongoDB's indexes.
**To Reproduce**
```python
from beanie import Document, Indexed
from pydantic import Field
class Test(Document):
some_field: Indexed(str, unique=True) = Field("abc", min_length=3)
# then try to create a document...
```
**Expected behavior**
`some_field` was expected to appear in MongoDB's indexes.
**Additional context**
As discussed in Beanie's Discord server, it also would be great to rework `Indexed` to make it more pythonic: `Indexed(...)` to `Indexed[...]` | closed | 2023-04-01T20:54:19Z | 2023-05-05T23:52:28Z | https://github.com/BeanieODM/beanie/issues/525 | [
"bug"
] | yallxe | 1 |
widgetti/solara | flask | 850 | Inconsistent issue when removing a split map control from an ipyleaflet map | I am facing a very difficult issue to trace, so I came up with this [almost reproducible example](https://py.cafe/app/lopezv.oliver/solara-issue-850).
This animation shows the demo initially working as expected. Then, I refresh the app and there is an issue with the demo: the split-map control does not get removed correctly.

## Setup
Let's start with defining a custom class inheriting from ipyleaflet.Map that can dynamically change between a split map and a "stack" map (layers stacked on top of each other).
```python
import ipyleaflet
import traitlets
from traitlets import observe
class Map(ipyleaflet.Map,):
map_type = traitlets.Unicode().tag(sync=True)
@observe("map_type")
def _on_map_type_change(self, change):
if hasattr(self, "split_map_control"):
if change.new=="stack":
self.remove(self.split_map_control)
self.set_stack_mode()
if change.new=="split":
self.set_split_mode()
def set_stack_mode(self):
self.layers = tuple([
self.esri_layer,
self.topo_layer
])
def set_split_mode(self):
self.layers = ()
self.add(self.left_layer)
self.add(self.right_layer)
self.add(self.split_map_control)
def __init__(self, **kwargs):
super().__init__(**kwargs)
self.osm = self.layers[0]
esri_url=ipyleaflet.basemaps.Esri.WorldImagery.build_url()
topo_url = ipyleaflet.basemaps.OpenTopoMap.build_url()
self.left_layer = ipyleaflet.TileLayer(url = topo_url, name="left")
self.right_layer = ipyleaflet.TileLayer(url = esri_url, name="right")
self.topo_layer = ipyleaflet.TileLayer(url=topo_url, name="topo", opacity=0.25)
self.esri_layer = ipyleaflet.TileLayer(url=esri_url, name="esri")
self.stack_layers = [
self.esri_layer,
self.topo_layer,
]
self.split_map_control = ipyleaflet.SplitMapControl(
left_layer=self.left_layer,
right_layer=self.right_layer)
if self.map_type=="split":
self.set_split_mode()
if self.map_type=="stack":
self.set_stack_mode()
```
I haven't encountered the issue when testing the ipyleaflet code without solara.
Now let's add solara to the equation:
```python
import solara
import ipyleaflet
import traitlets
from traitlets import observe
zoom=solara.reactive(4)
map_type = solara.reactive("stack")
class Map(ipyleaflet.Map,):
.... (same code defining Map as above)
....
@solara.component
def Page():
with solara.ToggleButtonsSingle(value=map_type):
solara.Button("Stack", icon_name="mdi-layers-triple", value="stack", text=True)
solara.Button("Split", icon_name="mdi-arrow-split-vertical", value="split", text=True)
Map.element(
zoom=zoom.value,
on_zoom=zoom.set,
map_type=map_type.value
)
Page()
```
Could you please help diagnose this problem?
> Here's a [live version of an app](https://halo-maps.kaust.edu.sa/biodiversity) where I am facing this issue (also inconsistently! try refreshing until you encounter the issue). | open | 2024-11-05T07:01:22Z | 2024-11-17T14:08:02Z | https://github.com/widgetti/solara/issues/850 | [] | lopezvoliver | 2 |
yeongpin/cursor-free-vip | automation | 230 | Auto continue after 25 tool calls | Can we auto-continue after 25 tool calls? Could you please look into it? | closed | 2025-03-14T16:48:24Z | 2025-03-16T17:05:42Z | https://github.com/yeongpin/cursor-free-vip/issues/230 | [] | Kabi10 | 1 |
Miserlou/Zappa | django | 2,146 | How to invoke or schedule functions from a non-WSGI app | Context
I have some code for automation: https://github.com/manycoding/page-followers. I'd like to invoke functions to test from different files. I noticed that when I run `zappa invoke dev "anything but lambda_handler"`, it actually invokes `lambda_handler`. Same with scheduling.
## Expected Behavior
I'd like to be able to execute any function from any file deployed with zappa, not just `lambda_handler`.
## Actual Behavior
`lambda_handler` is executed
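One workaround pattern while this is open — not Zappa's API, just a hedged sketch — is to do the routing inside `lambda_handler` yourself, dispatching on a field of the invocation event:

```python
def task_a(event):
    return {"ran": "task_a"}

def task_b(event):
    return {"ran": "task_b"}

COMMANDS = {"task_a": task_a, "task_b": task_b}  # may be imported from any module

def lambda_handler(event, context=None):
    """Dispatch named commands, since the deployed entry point is fixed."""
    command = (event or {}).get("command")
    if command in COMMANDS:
        return COMMANDS[command](event)
    raise ValueError(f"unknown command: {command!r}")

print(lambda_handler({"command": "task_b"}))  # {'ran': 'task_b'}
```

Scheduled events can then carry a `command` key, so one fixed handler serves many functions.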
| closed | 2020-07-29T17:05:50Z | 2020-09-28T14:02:04Z | https://github.com/Miserlou/Zappa/issues/2146 | [] | manycoding | 3 |
zappa/Zappa | django | 922 | [Migrated] Add Docker Container Image Support | Originally from: https://github.com/Miserlou/Zappa/issues/2188 by [ian-whitestone](https://github.com/ian-whitestone)
Earlier this month, AWS [announced container image support](https://aws.amazon.com/blogs/aws/new-for-aws-lambda-container-image-support/) for AWS Lambda. This means you can now package and deploy lambda functions as container images, instead of using zip files. The container image based approach will solve a lot of headaches caused by the zip file approach, particularly with file sizes (container images can be up to 10GB) and the dependency issues we all know & love.
In an ideal end state, you should be able to call `zappa deploy` / `zappa update` / `zappa package` (etc.) and specify whether you want to use the traditional zip-based approach or new Docker container based approach. If choosing the latter, Zappa would automatically:
* Build the new docker image for you
* Not 100% sure how this would work yet. There is a [Python library for docker](https://docker-py.readthedocs.io/en/stable/) that could be used. Would need to detect the dependencies a user has in their virtual env. and then install them all in the Docker image creation flow.
* A simpler alternative could involve a user having a Dockerfile that they point Zappa to, and Zappa just executes the build for that.
* Pushes the docker image to Amazon's Container Registry solution
* Automatically creates new repository if one does not exist
* Creates the lambda function with the new Docker image
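The build → push → deploy flow listed above reduces to a handful of CLI invocations. A stdlib sketch that only assembles the commands — the image URI, function name, and role are hypothetical, and the AWS CLI flags should be double-checked against current documentation:

```python
def docker_build(tag, context="."):
    return ["docker", "build", "-t", tag, context]

def docker_push(tag):
    return ["docker", "push", tag]

def lambda_create(function, image_uri, role_arn):
    # --package-type Image marks this as a container-image deployment
    return ["aws", "lambda", "create-function",
            "--function-name", function,
            "--package-type", "Image",
            "--code", f"ImageUri={image_uri}",
            "--role", role_arn]

uri = "123456789012.dkr.ecr.us-east-1.amazonaws.com/my-app:latest"  # hypothetical
for cmd in (docker_build(uri), docker_push(uri),
            lambda_create("my-app", uri, "arn:aws:iam::123456789012:role/lambda-exec")):
    print(" ".join(cmd))
```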
For a MVP, we should take a BYOI (bring your own image) approach and just get `zappa deploy` and `zappa update` to deploy a lambda function using an existing Docker Image that complies with [these guidelines](https://docs.aws.amazon.com/lambda/latest/dg/images-create.html). | closed | 2021-02-20T13:24:34Z | 2024-04-13T19:36:38Z | https://github.com/zappa/Zappa/issues/922 | [
"no-activity",
"auto-closed"
] | jneves | 7 |
kynan/nbstripout | jupyter | 105 | Use within pre-commit on GitHub Actions fails with `.git/index.lock` error | This issue seems quite similar to https://github.com/kynan/nbstripout/issues/103
I run the following GitHub action which installs pre-commit and runs it on all files.
https://github.com/pymedphys/pymedphys/blob/9a484fd3273b2ea898924ec3efbc16b5bfde377a/.github/workflows/main.yml#L5-L19
Pay particular attention to the following two lines, where I make sure `.git/index.lock` doesn't exist and, both before and after running pre-commit, list the contents of the `.git` directory:
```bash
while [ -f .git/index.lock ]; do sleep 1; done; ls -hal .git
pre-commit run --all-files || ls -hal .git
```
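The shell wait loop above has a straightforward Python equivalent with an explicit timeout (a sketch; the poll interval and timeout are arbitrary choices):

```python
import os
import time

def wait_for_absence(path, timeout=30.0, poll=0.1):
    """Return True once `path` no longer exists, False if the timeout elapses."""
    deadline = time.monotonic() + timeout
    while os.path.exists(path):
        if time.monotonic() >= deadline:
            return False
        time.sleep(poll)
    return True

print(wait_for_absence(".git/index.lock", timeout=1.0))  # reports whether the lock cleared
```

A bounded wait avoids the hang that the shell `while` loop can turn into when a crashed git process leaves the lock file behind.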
The pre-commit config is here:
https://github.com/pymedphys/pymedphys/blob/9a484fd3273b2ea898924ec3efbc16b5bfde377a/.pre-commit-config.yaml#L4-L7
The result within GitHub Actions is the following error:
```
nbstripout...................................................Failed
hookid: nbstripout
Files were modified by this hook. Additional output:
fatal: Unable to create
'/home/runner/work/pymedphys/pymedphys/.git/index.lock': File exists.
Another git process seems to be running in this repository, e.g.
an editor opened by 'git commit'. Please make sure all processes
are terminated then try again. If it still fails, a git process
may have crashed in this repository earlier:
remove the file manually to continue.
```
See:
https://github.com/pymedphys/pymedphys/pull/486/checks?check_run_id=233766462#step:4:57
More than happy to provide more debugging info, just let me know.
Cheers,
Simon | closed | 2019-09-24T08:20:54Z | 2019-11-13T06:57:33Z | https://github.com/kynan/nbstripout/issues/105 | [
"type:bug",
"resolution:fixed"
] | SimonBiggs | 4 |
CorentinJ/Real-Time-Voice-Cloning | tensorflow | 769 | Do you have this code available on a Google Colab VM? | I'm new to Python, so I just found out that it makes no sense to touch apps that rely on CUDA configurations (especially on Macs, which are not compatible with Nvidia drivers as of macOS Catalina 10.15.x).
Any suggestions? If you help me get going, I can devote more time to training a Spanish version of this amazing script.
Best regards. | closed | 2021-06-06T05:24:37Z | 2021-08-20T13:15:20Z | https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/769 | [] | inglesuniversal | 2 |
apache/airflow | data-science | 47,511 | BranchPythonOperator.execute cannot be called outside TaskInstance! | ### Apache Airflow version
2.10.5
### If "Other Airflow 2 version" selected, which one?
_No response_
### What happened?
For some reason, BranchPythonOperator reports a warning:
```
[2025-03-07, 18:13:18 UTC] {local_task_job_runner.py:123} ▶ Pre task execution logs
[2025-03-07, 18:13:19 UTC] {baseoperator.py:424} WARNING - BranchPythonOperator.execute cannot be called outside TaskInstance!
[2025-03-07, 18:13:19 UTC] {python.py:240} INFO - Done. Returned value was: branch_b
[2025-03-07, 18:13:19 UTC] {branch.py:38} INFO - Branch into branch_b
[2025-03-07, 18:13:19 UTC] {skipmixin.py:233} INFO - Following branch ('branch_b',)
[2025-03-07, 18:13:19 UTC] {skipmixin.py:281} INFO - Skipping tasks [('branch_c', -1), ('branch_d', -1), ('branch_a', -1)]
[2025-03-07, 18:13:19 UTC] {taskinstance.py:341} ▶ Post task execution logs
```
It is from the `example_branch_operator` example.
### What you think should happen instead?
_No response_
### How to reproduce
Run example 'example_branch_operator'.
### Operating System
Ubuntu 22
### Versions of Apache Airflow Providers
It's a fresh installation (pip install airflow)
### Deployment
Virtualenv installation
### Deployment details
_No response_
### Anything else?
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [x] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| open | 2025-03-07T18:43:00Z | 2025-03-10T00:28:23Z | https://github.com/apache/airflow/issues/47511 | [
"kind:bug",
"area:core",
"needs-triage"
] | tomplus | 3 |
nerfstudio-project/nerfstudio | computer-vision | 2,808 | Convert final transform.json camera poses into camera-2-world format | I have a question about converting the final transform.json file into camera-2-world format after running ns-process-data. My ultimate goal is to calculate the angular difference between cameras. Is there any flag or function that I can set to keep or convert the transform.json result into camera-2-world coordinates? Alternatively, is there a way to calculate the angular difference between two cameras from the transform.json file? | open | 2024-01-23T10:53:17Z | 2024-01-23T10:53:17Z | https://github.com/nerfstudio-project/nerfstudio/issues/2808 | [] | aeskandari68 | 0 |
jumpserver/jumpserver | django | 14,913 | [Question] How to connect to the DB using DBeaver SSH Tunnel | ### Product Version
v4.5.0
### Product Edition
- [x] Community Edition
- [ ] Enterprise Edition
- [ ] Enterprise Trial Edition
### Installation Method
- [x] Online Installation (One-click command installation)
- [ ] Offline Package Installation
- [ ] All-in-One
- [ ] 1Panel
- [ ] Kubernetes
- [ ] Source Code
### Environment Information
In our environment, it is not possible to open port 3306 for direct database access due to our business model.
To manage the database, we use DBeaver with SSH tunneling to the target server. However, after registering the server in JumpServer and trying to connect using the SSH Guide from JS, the SSH tunnel in DBeaver does not work.
Using the credentials from SSH Guide

Error trying to connect

I tried using the SSH command as well:
ssh -L 3307:127.0.0.1:3306 JMS-6aca1df7-3a24-4b22-997d-cc3773b2c0a8@192.168.25.87 -p 2222
This is the error when I try to connect

The error in the server I'm trying to connect

I have these variables set in my config.txt:
ENABLE_LOCAL_PORT_FORWARD=true
ENABLE_VSCODE_SUPPORT=true
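For reference, the forward command tried above decomposes cleanly into its parts; a sketch that just reassembles it (user, host, and ports copied from this report, purely illustrative):

```python
def ssh_local_forward(user, host, ssh_port, local_port, remote_host, remote_port):
    """Build: ssh -L <local>:<remote_host>:<remote_port> <user>@<host> -p <ssh_port>"""
    return ["ssh", "-L", f"{local_port}:{remote_host}:{remote_port}",
            f"{user}@{host}", "-p", str(ssh_port)]

cmd = ssh_local_forward("JMS-6aca1df7-3a24-4b22-997d-cc3773b2c0a8",
                        "192.168.25.87", 2222, 3307, "127.0.0.1", 3306)
print(" ".join(cmd))  # matches the command attempted above
```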
### 🤔 Question Description
I want to know whether this is a limitation of JumpServer or a bug.
### Expected Behavior
_No response_
### Additional Information
_No response_ | closed | 2025-02-21T19:42:16Z | 2025-03-17T14:47:03Z | https://github.com/jumpserver/jumpserver/issues/14913 | [
"🤔 Question"
] | joaoixc | 4 |
yt-dlp/yt-dlp | python | 12,278 | Extract cookies from Edge for YouTube? | ### DO NOT REMOVE OR SKIP THE ISSUE TEMPLATE
- [x] I understand that I will be **blocked** if I *intentionally* remove or skip any mandatory\* field
### Checklist
- [x] I'm asking a question and **not** reporting a bug or requesting a feature
- [x] I've looked through the [README](https://github.com/yt-dlp/yt-dlp#readme)
- [x] I've verified that I have **updated yt-dlp to nightly or master** ([update instructions](https://github.com/yt-dlp/yt-dlp#update-channels))
- [x] I've searched [known issues](https://github.com/yt-dlp/yt-dlp/issues/3766) and the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar questions **including closed ones**. DO NOT post duplicates
- [x] I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue)
### Please make sure the question is worded well enough to be understood
Hi. I'm trying to extract cookies for YouTube, but nothing is working.
The cookie database at C:\Users\Alexander\AppData\Local\Microsoft\Edge\User Data\Profile 1\Network is locked.
I always launch Edge with --disable-features=LockProfileCookieDatabase, but it is not working.
### Provide verbose output that clearly demonstrates the problem
- [x] Run **your** yt-dlp command with **-vU** flag added (`yt-dlp -vU <your command line>`)
- [x] If using API, add `'verbose': True` to `YoutubeDL` params instead
- [x] Copy the WHOLE output (starting with `[debug] Command-line config`) and insert it below
### Complete Verbose Output
```shell
C:\Users\Alexander>python -myt_dlp -vU -N 10 -F --cookies-from-browser edge:"profile 1" -o "C:\Biathlon\%(title)s.%(ext)s" "https://www.youtube.com/watch?v=RXJK9rzbgXY&t=17s"
[debug] Command-line config: ['-vU', '-N', '10', '-F', '--cookies-from-browser', 'edge:profile 1', '-o', 'C:\\Biathlon\\%(title)s.%(ext)s', 'https://www.youtube.com/watch?v=RXJK9rzbgXY&t=17s']
[debug] Encodings: locale cp1251, fs utf-8, pref cp1251, out utf-8, error utf-8, screen utf-8
[debug] yt-dlp version nightly@2025.01.20.232744 from yt-dlp/yt-dlp-nightly-builds [9676b0571] (pip)
[debug] Python 3.13.0 (CPython AMD64 64bit) - Windows-11-10.0.26100-SP0 (OpenSSL 3.0.15 3 Sep 2024)
[debug] exe versions: ffmpeg 2024-09-26-git-f43916e217-full_build-www.gyan.dev (setts), ffprobe 2024-09-26-git-f43916e217-full_build-www.gyan.dev
[debug] Optional libraries: Cryptodome-3.21.0, brotli-1.1.0, certifi-2024.08.30, mutagen-1.47.0, requests-2.32.3, sqlite3-3.45.3, urllib3-2.2.3, websockets-13.1
[debug] Proxy map: {'http': 'http://127.0.0.1:10808', 'https': 'http://127.0.0.1:10808', 'ftp': 'http://127.0.0.1:10808'}
Extracting cookies from edge
[debug] Extracting cookies from: "C:\Users\Alexander\AppData\Local\Microsoft\Edge\User Data\profile 1\Network\Cookies"
[debug] Found local state file at "C:\Users\Alexander\AppData\Local\Microsoft\Edge\User Data\Local State"
[Cookies] Loading cookie 0/ 252ERROR: Failed to decrypt with DPAPI. See https://github.com/yt-dlp/yt-dlp/issues/10927 for more info
File "<frozen runpy>", line 198, in _run_module_as_main
File "<frozen runpy>", line 88, in _run_code
File "C:\Users\Alexander\AppData\Local\Programs\Python\Python313\Lib\site-packages\yt_dlp\__main__.py", line 17, in <module>
yt_dlp.main()
File "C:\Users\Alexander\AppData\Local\Programs\Python\Python313\Lib\site-packages\yt_dlp\__init__.py", line 1095, in main
_exit(*variadic(_real_main(argv)))
File "C:\Users\Alexander\AppData\Local\Programs\Python\Python313\Lib\site-packages\yt_dlp\__init__.py", line 993, in _real_main
with YoutubeDL(ydl_opts) as ydl:
File "C:\Users\Alexander\AppData\Local\Programs\Python\Python313\Lib\site-packages\yt_dlp\YoutubeDL.py", line 723, in __init__
self.print_debug_header()
File "C:\Users\Alexander\AppData\Local\Programs\Python\Python313\Lib\site-packages\yt_dlp\YoutubeDL.py", line 4081, in print_debug_header
write_debug(f'Request Handlers: {", ".join(rh.RH_NAME for rh in self._request_director.handlers.values())}')
File "C:\Users\Alexander\AppData\Local\Programs\Python\Python313\Lib\functools.py", line 1037, in __get__
val = self.func(instance)
File "C:\Users\Alexander\AppData\Local\Programs\Python\Python313\Lib\site-packages\yt_dlp\YoutubeDL.py", line 4255, in _request_director
return self.build_request_director(_REQUEST_HANDLERS.values(), _RH_PREFERENCES)
File "C:\Users\Alexander\AppData\Local\Programs\Python\Python313\Lib\site-packages\yt_dlp\YoutubeDL.py", line 4230, in build_request_director
cookiejar=self.cookiejar,
File "C:\Users\Alexander\AppData\Local\Programs\Python\Python313\Lib\functools.py", line 1037, in __get__
val = self.func(instance)
File "C:\Users\Alexander\AppData\Local\Programs\Python\Python313\Lib\site-packages\yt_dlp\YoutubeDL.py", line 4121, in cookiejar
return load_cookies(
File "C:\Users\Alexander\AppData\Local\Programs\Python\Python313\Lib\site-packages\yt_dlp\cookies.py", line 99, in load_cookies
extract_cookies_from_browser(browser_name, profile, YDLLogger(ydl), keyring=keyring, container=container))
File "C:\Users\Alexander\AppData\Local\Programs\Python\Python313\Lib\site-packages\yt_dlp\cookies.py", line 122, in extract_cookies_from_browser
return _extract_chrome_cookies(browser_name, profile, keyring, logger)
File "C:\Users\Alexander\AppData\Local\Programs\Python\Python313\Lib\site-packages\yt_dlp\cookies.py", line 331, in _extract_chrome_cookies
is_encrypted, cookie = _process_chrome_cookie(decryptor, *line)
File "C:\Users\Alexander\AppData\Local\Programs\Python\Python313\Lib\site-packages\yt_dlp\cookies.py", line 366, in _process_chrome_cookie
value = decryptor.decrypt(encrypted_value)
File "C:\Users\Alexander\AppData\Local\Programs\Python\Python313\Lib\site-packages\yt_dlp\cookies.py", line 551, in decrypt
return _decrypt_windows_dpapi(encrypted_value, self._logger).decode()
File "C:\Users\Alexander\AppData\Local\Programs\Python\Python313\Lib\site-packages\yt_dlp\cookies.py", line 1087, in _decrypt_windows_dpapi
logger.error(message)
File "C:\Users\Alexander\AppData\Local\Programs\Python\Python313\Lib\site-packages\yt_dlp\utils\_utils.py", line 5650, in error
self._ydl.report_error(message, is_error=is_error)
File "C:\Users\Alexander\AppData\Local\Programs\Python\Python313\Lib\site-packages\yt_dlp\YoutubeDL.py", line 1095, in report_error
self.trouble(f'{self._format_err("ERROR:", self.Styles.ERROR)} {message}', *args, **kwargs)
File "C:\Users\Alexander\AppData\Local\Programs\Python\Python313\Lib\site-packages\yt_dlp\YoutubeDL.py", line 1023, in trouble
tb_data = traceback.format_list(traceback.extract_stack())
ERROR: Failed to decrypt with DPAPI. See https://github.com/yt-dlp/yt-dlp/issues/10927 for more info
Traceback (most recent call last):
File "C:\Users\Alexander\AppData\Local\Programs\Python\Python313\Lib\site-packages\yt_dlp\cookies.py", line 99, in load_cookies
extract_cookies_from_browser(browser_name, profile, YDLLogger(ydl), keyring=keyring, container=container))
~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Alexander\AppData\Local\Programs\Python\Python313\Lib\site-packages\yt_dlp\cookies.py", line 122, in extract_cookies_from_browser
return _extract_chrome_cookies(browser_name, profile, keyring, logger)
File "C:\Users\Alexander\AppData\Local\Programs\Python\Python313\Lib\site-packages\yt_dlp\cookies.py", line 331, in _extract_chrome_cookies
is_encrypted, cookie = _process_chrome_cookie(decryptor, *line)
~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^
File "C:\Users\Alexander\AppData\Local\Programs\Python\Python313\Lib\site-packages\yt_dlp\cookies.py", line 366, in _process_chrome_cookie
value = decryptor.decrypt(encrypted_value)
File "C:\Users\Alexander\AppData\Local\Programs\Python\Python313\Lib\site-packages\yt_dlp\cookies.py", line 551, in decrypt
return _decrypt_windows_dpapi(encrypted_value, self._logger).decode()
~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Alexander\AppData\Local\Programs\Python\Python313\Lib\site-packages\yt_dlp\cookies.py", line 1088, in _decrypt_windows_dpapi
raise DownloadError(message) # force exit
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
yt_dlp.utils.DownloadError: Failed to decrypt with DPAPI. See https://github.com/yt-dlp/yt-dlp/issues/10927 for more info
``` | closed | 2025-02-04T18:15:28Z | 2025-02-04T21:42:23Z | https://github.com/yt-dlp/yt-dlp/issues/12278 | [
"duplicate",
"spam"
] | Cryosim | 1 |
RobertCraigie/prisma-client-py | asyncio | 1,024 | quaint error | <!--
Thanks for helping us improve Prisma Client Python! 🙏 Please follow the sections in the template and provide as much information as possible about your problem, e.g. by enabling additional logging output.
See https://prisma-client-py.readthedocs.io/en/stable/reference/logging/ for how to enable additional logging output.
-->
## Bug description
{"timestamp":"2024-08-20T17:27:18.918606Z","level":"ERROR","fields":{"message":"Error in PostgreSQL connection: Error { kind: Closed, cause: None }"},"target":"quaint::connector::postgres::native"}

## How to reproduce
Run the FastAPI server on Ubuntu 20.
<!--
Steps to reproduce the behavior:
1. Go to '...'
2. Change '....'
3. Run '....'
4. See error
-->
## Expected behavior
<!-- A clear and concise description of what you expected to happen. -->
## Prisma information
<!-- Your Prisma schema, Prisma Client Python queries, ...
Do not include your database credentials when sharing your Prisma schema! -->
```prisma
```
## Environment & setup
<!-- In which environment does the problem occur -->
- OS: <!--[e.g. Mac OS, Windows, Debian, CentOS, ...]-->
- Database: <!--[PostgreSQL, MySQL, MariaDB or SQLite]-->
- Python version: <!--[Run `python -V` to see your Python version]-->
- Prisma version:
<!--[Run `prisma py version` to see your Prisma version and paste it between the ´´´]-->
```
```
| open | 2024-08-27T18:36:22Z | 2024-10-08T23:37:30Z | https://github.com/RobertCraigie/prisma-client-py/issues/1024 | [] | vikyw89 | 2 |
charlesq34/pointnet | tensorflow | 268 | cannot download the modelnet40 | ---021-03-07 00:08:33-- https://shapenet.cs.stanford.edu/media/modelnet40_ply_hdf5_2048.zip;
Resolving shapenet.cs.stanford.edu (shapenet.cs.stanford.edu)... 171.67.77.19
Connecting to shapenet.cs.stanford.edu (shapenet.cs.stanford.edu)|171.67.77.19|:443... connected.
WARNING: cannot verify shapenet.cs.stanford.edu's certificate, issued by 'CN=InCommon RSA Server CA,OU=InCommon,O=Internet2,L=Ann Arbor,ST=MI,C=US':
Self-signed certificate encountered.
HTTP request sent, awaiting response... 404 Not Found
2021-03-07 00:08:34 ERROR 404: Not Found.
--2021-03-07 00:08:34-- http://unzip/
Resolving unzip (unzip)... failed: No such host is known. .
wget: unable to resolve host address 'unzip'
--2021-03-07 00:08:37-- http://modelnet40_ply_hdf5_2048.zip/
Resolving modelnet40_ply_hdf5_2048.zip (modelnet40_ply_hdf5_2048.zip)... failed: No such host is known. .
wget: unable to resolve host address 'modelnet40_ply_hdf5_2048.zip'
The syntax of the command is incorrect.
The directory name is invalid.
--2021-03-07 00:08:37-- https://shapenet.cs.stanford.edu/media/modelnet40_ply_hdf5_2048.zip;
Resolving shapenet.cs.stanford.edu (shapenet.cs.stanford.edu)... 171.67.77.19
Connecting to shapenet.cs.stanford.edu (shapenet.cs.stanford.edu)|171.67.77.19|:443... connected.
WARNING: cannot verify shapenet.cs.stanford.edu's certificate, issued by 'CN=InCommon RSA Server CA,OU=InCommon,O=Internet2,L=Ann Arbor,ST=MI,C=US':
Self-signed certificate encountered.
HTTP request sent, awaiting response... 404 Not Found
2021-03-07 00:08:38 ERROR 404: Not Found.
--2021-03-07 00:08:38-- http://unzip/
Resolving unzip (unzip)... failed: No such host is known. .
wget: unable to resolve host address 'unzip'
--2021-03-07 00:08:40-- http://modelnet40_ply_hdf5_2048.zip/
Resolving modelnet40_ply_hdf5_2048.zip (modelnet40_ply_hdf5_2048.zip)... failed: No such host is known. .
wget: unable to resolve host address 'modelnet40_ply_hdf5_2048.zip'
The syntax of the command is incorrect.
The directory name is invalid.
I NEED SOME HELP :( | open | 2021-03-06T16:13:51Z | 2024-11-07T15:13:16Z | https://github.com/charlesq34/pointnet/issues/268 | [] | noridayu1998 | 6 |
iperov/DeepFaceLab | deep-learning | 862 | Please add 2nd and 3rd pass to S3FD face detector just like in DFL 1.0 | Please add 2nd and 3rd pass to S3FD face detector just like in DFL 1.0
There are lots of false positive extracted images | open | 2020-08-15T01:14:37Z | 2020-08-15T01:14:37Z | https://github.com/iperov/DeepFaceLab/issues/862 | [] | justinjohn0306 | 0 |
vaexio/vaex | data-science | 1,938 | head() method not displaying result after doing a replace() with regex |
**Software information**
- Vaex version (`import vaex; vaex.__version__)`: 4.7.0
- Vaex was installed via: pip / conda-forge / from source : Vaex installed through conda-forge
- OS: macOS
- Conda 4.10.3
**Additional information**
Please state any supplementary information or provide additional context for the problem (e.g. screenshots, data, etc..).
Thanks for this great package. I'm not sure if this is a bug or how to classify it, but I thought it was unusual. I ran the following code to replace all empty strings in my data. The code ran quite well and fast without any error, but when I run the head() method to view the result, the process runs indefinitely. I figured it was because of the volume of data (8 million+ rows), but I repeated the task on smaller data (5 rows) and the same thing occurred. However, trying this task on a pandas dataframe gave me the result I wanted and I was able to view it. Thanks
```
%time var = sample.get_column_names()
for i in var:
    if i == 'ibe8579_8579' or i == 'ibe8592_8592' or i == 'raw_create_timestamp' or i == 'raw_load_date':
        sample[i]
    else:
        sample[i] = sample[i].str.replace(pat = r'^\s*$', repl = 'empty', regex = True)
head(2)
```
| closed | 2022-02-18T19:00:18Z | 2022-03-03T09:22:20Z | https://github.com/vaexio/vaex/issues/1938 | [] | omonmaxi | 9 |
ymcui/Chinese-LLaMA-Alpaca | nlp | 280 | Running via text-generation-webui raises a dtype error; output quality has some issues | Thank you for using the issue template. Please follow the steps below to provide the relevant information. Issues with relatively complete information will be handled first; thank you for your cooperation.
*Tip: put an x inside [ ] to check an item. Delete these two lines when asking. Keep only the applicable options and delete the rest.*
### Detailed problem description
Running via text-generation-webui raises an error; I tried to find a corresponding issue in this project:
`python server.py --model llama-13b-hf --lora chinese-alpaca-lora-13b`
或者
`python server.py --model llama-7b-hf --lora chinese-alpaca-lora-7b`
>Traceback (most recent call last):
File "/home/byy/projects/nlp/text-generation-webui/server.py", line 874, in <module>
add_lora_to_model(shared.args.lora)
File "/home/byy/projects/nlp/text-generation-webui/modules/LoRA.py", line 36, in add_lora_to_model
params['dtype'] = shared.model.dtype
AttributeError: 'LlamaCppModel' object has no attribute 'dtype'
Running without --lora also works, but the output quality is not great; I'm not sure how much this parameter matters. Results tested with llama.cpp are not very outstanding either.
### Screenshots or logs
`python server.py --model llama-13b-hf`
input:
>Below is an instruction that describes a task. Write a response that appropriately completes the request.
>### Instruction:
>根据一下才料写一篇人物介绍,多写一些:wcj,男,60岁,xx公司书记,任职期间推进公司数字化建设,取得优秀成果。
>### Response:
output:
>Below is an instruction that describes a task. Write a response that appropriately completes the request.
>### Instruction:
>根据一下才料写一篇人物介绍,多写一些:wcj,男,60岁,xx公司书记,任职期间推进公司数字化建设,取得优秀成果。
>### Response:
>王成杰(WCJ)是一位男性,现年60岁的高管。他于2021年担任xx公司的总裁一职,并在任期内积极推动公司数字化建设,取得了显>著的业绩提升和业务拓展。
`./main -m ../text-generation-webui/models/llama-13b-hf/ggml-model-q4_0.bin --color -f prompts/alpaca.txt -ins -c 2048 --temp 0.2 -n 256 --repeat_penalty 1.3`
> 根据一下才料写一篇人物介绍,多写一些:wcj,男,60岁,xx公司书记,任职期间推进公司数字化建设,取得优秀成果。
姓名:王清杰(WCJ) 性别:男性 年龄:60岁 职务: xx公司的副总经理、总裁等职务
### Required checks
- [x] Which model: Alpaca
- [x] Issue type:
- Model quantization and deployment issue (llama.cpp, text-generation-webui, LlamaChat)
- Output quality issue
- [x] Since related dependencies are updated frequently, please make sure the relevant steps in the [Wiki](https://github.com/ymcui/Chinese-LLaMA-Alpaca/wiki) were followed
- [x] I have read the [FAQ section](https://github.com/ymcui/Chinese-LLaMA-Alpaca/wiki/常见问题) and searched existing issues, and found no similar problem or solution
- [x] Third-party plugin issue: e.g. [llama.cpp](https://github.com/ggerganov/llama.cpp), [text-generation-webui](https://github.com/oobabooga/text-generation-webui), [LlamaChat](https://github.com/alexrozanski/LlamaChat), etc.; it is also recommended to look for solutions in the corresponding projects
| closed | 2023-05-09T09:24:33Z | 2023-05-20T22:02:07Z | https://github.com/ymcui/Chinese-LLaMA-Alpaca/issues/280 | [
"stale"
] | brealisty | 3 |
idealo/imagededup | computer-vision | 187 | RuntimeError: stack expects a non-empty TensorList | I get the following error when running:
```python
method_object = CNN()
duplicates = method_object.find_duplicates(image_dir=directory)
```
C:\Users\themc\AppData\Roaming\Python\Python39\site-packages\torchvision\models\_utils.py:208: UserWarning: The parameter 'pretrained' is deprecated since 0.13 and may be removed in the future, please use 'weights' instead.
warnings.warn(
C:\Users\themc\AppData\Roaming\Python\Python39\site-packages\torchvision\models\_utils.py:223: UserWarning: Arguments other than a weight enum or `None` for 'weights' are deprecated since 0.13 and may be removed in the future. The current behavior is equivalent to passing `weights=MobileNet_V3_Small_Weights.IMAGENET1K_V1`. You can also use `weights=MobileNet_V3_Small_Weights.DEFAULT` to get the most up-to-date weights.
warnings.warn(msg)
2023-01-06 00:42:34,495: INFO Initialized: MobileNet v3 pretrained on ImageNet dataset sliced at GAP layer
2023-01-06 00:42:34,496: INFO Start: Image encoding generation
2023-01-06 00:42:43,464: WARNING Invalid image file F:\imagecomparison\nice\images\best\best.py:
cannot identify image file 'F:\\imagecomparison\\nice\\images\\best\\best.py'
2023-01-06 00:42:43,465: WARNING Invalid image file F:\imagecomparison\nice\images\best\rank.py:
cannot identify image file 'F:\\imagecomparison\\nice\\images\\best\\rank.py'
2023-01-06 00:42:43,467: WARNING Invalid image file F:\imagecomparison\nice\images\best\ranked.xlsx:
cannot identify image file 'F:\\imagecomparison\\nice\\images\\best\\ranked.xlsx'
2023-01-06 00:42:43,467: WARNING Invalid image file F:\imagecomparison\nice\images\best\ranking_table.json:
cannot identify image file 'F:\\imagecomparison\\nice\\images\\best\\ranking_table.json'
Traceback (most recent call last):
File "F:\imagecomparison\nice\drank.py", line 696, in <module>
if __name__ == "__main__": main()
File "F:\imagecomparison\nice\drank.py", line 687, in main
find_duplicates1(args.figsize,args.photo_dir)
File "F:\imagecomparison\nice\drank.py", line 440, in find_duplicates1
duplicates = method_object.find_duplicates(image_dir=directory)
File "C:\Users\themc\AppData\Roaming\Python\Python39\site-packages\imagededup\methods\cnn.py", line 386, in find_duplicates
result = self._find_duplicates_dir(
File "C:\Users\themc\AppData\Roaming\Python\Python39\site-packages\imagededup\methods\cnn.py", line 327, in _find_duplicates_dir
self.encode_images(image_dir=image_dir, recursive=recursive)
File "C:\Users\themc\AppData\Roaming\Python\Python39\site-packages\imagededup\methods\cnn.py", line 220, in encode_images
return self._get_cnn_features_batch(image_dir, recursive)
File "C:\Users\themc\AppData\Roaming\Python\Python39\site-packages\imagededup\methods\cnn.py", line 121, in _get_cnn_features_batch
for ims, filenames, bad_images in self.dataloader:
File "C:\Users\themc\AppData\Roaming\Python\Python39\site-packages\torch\utils\data\dataloader.py", line 628, in __next__
data = self._next_data()
File "C:\Users\themc\AppData\Roaming\Python\Python39\site-packages\torch\utils\data\dataloader.py", line 671, in _next_data
data = self._dataset_fetcher.fetch(index) # may raise StopIteration
File "C:\Users\themc\AppData\Roaming\Python\Python39\site-packages\torch\utils\data\_utils\fetch.py", line 61, in fetch
return self.collate_fn(data)
File "C:\Users\themc\AppData\Roaming\Python\Python39\site-packages\imagededup\utils\data_generator.py", line 50, in _collate_fn
return torch.stack(ims), filenames, bad_images
RuntimeError: stack expects a non-empty TensorList
For some reason everything works fine if I run the same two lines in another file. Using another method_object also works.
| closed | 2023-01-05T23:50:53Z | 2023-01-15T18:02:01Z | https://github.com/idealo/imagededup/issues/187 | [] | 11TheM | 2 |
Avaiga/taipy | data-visualization | 2,082 | [🐛 BUG] Column headers of Taipy table are truncated | ### What went wrong? 🤔
When a table has a lot of columns, the column headers become truncated.

This is a new behavior that didn't exist in 3.1.

### Expected Behavior
We should see the entire name of the columns.
### Steps to Reproduce Issue
Run this code in 4.0:
```python
from taipy.gui import Gui
import taipy.gui.builder as tgb
data = {
"a_long_title": [1],
"b_long_title": [2],
"c_long_title": [3],
"d_long_title": [4],
"e_long_title": [5],
"f_long_title": [6],
"g_long_title": [7],
"h_long_title": [8],
"i_long_title": [9],
"j_long_title": [10],
"k_long_title": [11],
"l_long_title": [12],
"m_long_title": [13],
"n_long_title": [14],
"o_long_title": [15],
"p_long_title": [16],
"q_long_title": [17],
"r_long_title": [18],
"s_long_title": [19],
"t_long_title": [20],
"u_long_title": [21],
"v_long_title": [22],
"w_long_title": [23],
}
with tgb.Page() as page:
tgb.table("{data}")
Gui(page).run()
```
### Version of Taipy
4.0
### Acceptance Criteria
- [ ] Ensure new code is unit tested, and check code coverage is at least 90%.
- [ ] Create related issue in taipy-doc for documentation and Release Notes.
### Code of Conduct
- [X] I have checked the [existing issues](https://github.com/Avaiga/taipy/issues?q=is%3Aissue+).
- [ ] I am willing to work on this issue (optional) | closed | 2024-10-17T11:42:59Z | 2024-11-07T09:31:49Z | https://github.com/Avaiga/taipy/issues/2082 | [
"🖰 GUI",
"💥Malfunction",
"🟧 Priority: High"
] | FlorianJacta | 3 |
bmoscon/cryptofeed | asyncio | 563 | demo_liquidation.py timeout | **Describe the bug**
I'm using the demo_liquidation.py code. Receiving a timeout error and because of that am being rate limited.
2021-07-13 12:51:06,428 : WARNING : BITMEX.ws.1: received no messages within timeout, restarting connection
2021-07-13 12:51:07,413 : WARNING : BITMEX.ws.1: received no messages within timeout, restarting connection
2021-07-13 12:51:08,284 : WARNING : BITMEX.ws.1: Rate Limited - waiting 0 seconds to reconnect
2021-07-13 12:51:09,027 : WARNING : BITMEX.ws.1: Rate Limited - waiting 60 seconds to reconnect
**To Reproduce**
I just ran the liquidation_demo.py file. I had this working fine on my old system, but I got a new MacBook and now I'm just getting this error.
**Expected behavior**
No timeout errors
**Operating System:**
- macOS, Python 3.9.6, Cryptofeed 1.9.1
**Cryptofeed Version**
Cryptofeed 1.9.1, installed with pip | closed | 2021-07-13T17:21:29Z | 2021-07-16T05:49:20Z | https://github.com/bmoscon/cryptofeed/issues/563 | [
"bug"
] | jzay | 2 |
widgetti/solara | fastapi | 431 | ipyaggrid does not update when new dataframe is loaded via solara.FileDrop | Hi, nice work, I really like Solara!
I am implementing an exploration tool for data frames. The user can drag and drop a file into a file dropper element and the dataframe is displayed using ipyaggrid.
Whenever I load a different file, output elements such as solara.Markdown are updated (e.g. showing the size of the newly loaded data frame). But the table does not update to contain the new data, please see a minimal working example below. Am I missing something? Thank you for your support on this!
```
import ipyaggrid
import pandas as pd
import solara
from io import StringIO
filename = solara.reactive(None)
filesize = solara.reactive(0)
filecontent = solara.reactive(None)
df_dict = solara.reactive(None)
def generate_column_defs(dict_list):
return [{'field': key, 'filter': True} for key in dict_list[0].keys()] if dict_list else []
@solara.component
def FileDropper():
progress, set_progress = solara.use_state(0.)
def on_progress(value):
set_progress(value)
def on_file(file: solara.components.file_drop.FileInfo):
filename.value = file["name"]
filesize.value = file["size"]
f = file["file_obj"]
# todo: adjust code below to account for different file types (csv, tsv, MS Excel ...)
filecontent.value = pd.read_csv(StringIO(str(f.read(), "utf-8")), sep="\t")
solara.FileDrop(
label="Drag and drop a file here",
on_file=on_file,
on_total_progress=on_progress,
lazy=True,
)
solara.ProgressLinear(value=progress)
if progress == 100:
solara.Markdown(f"Loaded {filesize.value:n} bytes")
@solara.component
def Page():
with solara.Sidebar():
with solara.Card():
FileDropper()
if filecontent.value is not None:
df_dict.value = filecontent.value.to_dict(orient="records")
grid_options = {
"columnDefs": generate_column_defs(df_dict.value),
"defaultColDef": {
"sortable": True
},
"enableSorting": True,
"rowSelection": "multiple",
"enableRangeSelection": True,
"enableFilter": True,
"enableColumnResize": True
}
with solara.Card():
solara.Markdown(f"size: {filecontent.value.shape[0]}")
ipyaggrid.Grid.element(
grid_data=df_dict.value,
grid_options=grid_options,
columns_fit="auto",
theme="ag-theme-blue",
quick_filter=True,
export_mode="buttons",
export_csv=True,
export_excel=True,
export_to_df=True,
sync_grid=True
)
``` | open | 2023-12-26T21:35:48Z | 2024-03-01T13:24:22Z | https://github.com/widgetti/solara/issues/431 | [] | MaWeffm | 2 |
frappe/frappe | rest-api | 31,668 | Read-only BaseControl returns `undefined` instead of this.value when falsy | Not really a true bug, and probably specific to Dialogs and basic FieldGroups usage, but read-only fields can be used to store computed results where `0` or `false` might be a meaningful value; in that case `dialog.get_value(…)` doesn't return the correct value, and `dialog.get_values()` doesn't even list its key, which might lead to issues with custom integrations* or server-side default values.
*Example: doing a POST request directly using the result of `dialog.get_values()`.
---
The following should probably be: `return this.value ?? undefined;`.
https://github.com/frappe/frappe/blob/cc023453e8abee02bee154e52c50fb57848db0be/frappe/public/js/frappe/form/controls/base_control.js#L266
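To illustrate why `??` is the right operator here (a generic JavaScript sketch, not Frappe code): the nullish-coalescing operator only falls back on `null`/`undefined`, so falsy-but-meaningful values like `0` and `false` survive, whereas a plain truthiness fallback discards them.

```javascript
// Truthiness fallback: any falsy value (0, false, "", NaN) is replaced.
const truthy_fallback = (value) => value || undefined;
// Nullish fallback: only null and undefined are replaced.
const nullish_fallback = (value) => value ?? undefined;

console.log(truthy_fallback(0));      // undefined (value lost)
console.log(nullish_fallback(0));     // 0
console.log(nullish_fallback(false)); // false
console.log(nullish_fallback(null));  // undefined
```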
---
```js
async function test() {
const dialog = new frappe.ui.Dialog({
fields: [{ fieldname: "x", fieldtype: "Float" }],
});
await dialog.set_value("x", 0);
console.log("assertTrue", dialog.get_value("x"), "===", 0);
console.log("assertTrue", dialog.get_values(), "===", {x: 0});
dialog.set_df_property("x", "read_only", true);
console.warn("assertion failed:", dialog.get_value("x"));
console.warn("assertion failed:", dialog.get_values());
}
void test()
```
| open | 2025-03-12T09:04:13Z | 2025-03-12T09:04:13Z | https://github.com/frappe/frappe/issues/31668 | [] | cogk | 0 |
timkpaine/lantern | plotly | 80 | matplotlib support right y label | closed | 2017-10-18T02:05:21Z | 2018-02-05T21:29:06Z | https://github.com/timkpaine/lantern/issues/80 | [
"feature",
"matplotlib/seaborn"
] | timkpaine | 0 | |
autogluon/autogluon | data-science | 4,789 | [Feature Request]: Ensembling extra fitted models for `TimeSeriesPredictor` and `MultiModalPredictor` like for `TabularPredictor` with `fit_extra` method | ## Description
Sometimes we need to run several experiments one by one to gain insights for a project,
and we end up with many versions of AutoGluon models.
Now I am interested in ensembling previously fitted AutoGluon models together. This is different from the internal ensembling that each AutoGluon model performs itself.
## References
This logic is already implemented in `TabularPredictor`, see #4742
The api is `TabularPredictor.fit_extra` https://auto.gluon.ai/dev/api/autogluon.tabular.TabularPredictor.fit_extra.html
But so far, there is no such method for `TimeSeriesPredictor` and `MultiModalPredictor`.
As in your paper, TimeSeriesPredictor's Weighted Ensemble is also forward selection algorithm (Caruana et al., 2004), so the fit extra logic should be the same as `TabularPredictor`.
| open | 2025-01-12T18:33:05Z | 2025-01-13T20:26:22Z | https://github.com/autogluon/autogluon/issues/4789 | [
"enhancement",
"module: timeseries"
] | 2catycm | 3 |
ray-project/ray | pytorch | 51,445 | [<Ray component: java>] expose ObjectRef in DeploymentResponse class | ### Description
It is possible to convert a DeploymentResponse to an ObjectRef in Python, as described in
https://docs.ray.io/en/master/serve/model_composition.html#advanced-convert-a-deploymentresponse-to-a-ray-objectref. However, Ray Java lacks a similar capability, making certain use cases unsupported. It would be beneficial to expose ObjectRef in DeploymentResponse, similar to the approach in this PR https://github.com/ray-project/ray/pull/51444
### Use case
We build a streaming pipeline with Ray Java whose architecture is shown below; data passes from **Subscriber** -> **Processor** -> **Publisher**. The current issue is that a DeploymentResponse cannot be passed as a remote-call parameter. Could ObjectRef be exposed in DeploymentResponse, so that we can write `publisherHandle.method("handle").remote(response.getObjectRef());`?
```
public class Processor {
public Object handle(Object input) {
Object output = null;
return output;
}
}
public class Publisher {
public void handle(Object input) {
}
}
public class Subscriber {
DeploymentHandle processorHandle;
DeploymentHandle publisherHandle;
public void handle() {
while (true) {
DeploymentResponse response = processorHandle.method("handle").remote("");
publisherHandle.method("handle").remote(response);
}
}
}
```
| open | 2025-03-18T07:15:52Z | 2025-03-24T03:21:35Z | https://github.com/ray-project/ray/issues/51445 | [
"java",
"enhancement",
"triage",
"serve"
] | zhiqiwangebay | 1 |
vitalik/django-ninja | django | 514 | How to handle custom headers with the interactive openAPI doc | For authentication I need to pass a header `x-team` together with a Bearer token to API endpoints.
This all works fine, however, I haven't been able to figure out how to successfully integrate this with the OpenAPI docs endpoint.
To make it appear in the (Swagger) UI, I need to specify it as a parameter in the endpoint handler, e.g.
```
@router.get('/', response={
200: ResourceInfoCollectionSchemaOut,
response_codes_4xx: FailureSchema,
500: InternalServerErrorSchema})
def list_resources(
request,
team_uuid: Optional[UUID] = None,
is_active: Optional[bool] = None,
x_team: str | None = Header(default=None),
authorization: str | None = Header(default=None)
):
```
Obviously, I can't call the parameter `x-team`.
Doing this makes the header appear in the OpenAPI doc.
When making a request, the UI sends the header as `x_team`; however, that is stripped out by many (most) HTTP servers/proxies, including the Django development server.
How can I make this work?
Many thanks in advance.
| closed | 2022-07-27T00:38:52Z | 2023-12-16T15:19:02Z | https://github.com/vitalik/django-ninja/issues/514 | [] | bjerzyna | 3 |
Johnserf-Seed/TikTokDownload | api | 307 | 12345 | 12345 | closed | 2023-02-08T05:01:21Z | 2023-02-15T07:48:56Z | https://github.com/Johnserf-Seed/TikTokDownload/issues/307 | [
"故障(bug)",
"额外求助(help wanted)",
"无效(invalid)"
] | zzh151223 | 28 |
ydataai/ydata-profiling | jupyter | 1,316 | Using tsmode=True, check if data.index is a DateTime index, if so include it in the analysis | ### Missing functionality
I've used ProfileReport with tsmode=True on a pd.DataFrame indexed with a DateTime index. My assumption was that the report would provide some basic analysis on that index as well as on the values. The report indicates the number of missing values in all columns, which is great, but I could find no analysis on the index.
Namely, what I was expecting, specifically because my index has the 'right' type (unlike the case in #1292 as I understand it), is the following:
1. Detection of the frequency in the index time series
2. Detection of gaps in the index time series
### Proposed feature
When the data passed to ProfieReport with tsmode=True and the data is DateTime indexed, analyze the index for:
1. Detection of the frequency in the index time series
2. Detection of gaps in the index time series
### Alternatives considered
Possible implementation:
Add an `index=true` (default is false) item under the `vars` section of the default.yml file, this would entail the following:
```
# pseudocode:
if index:
data.reset_index(names='index') # or index_series = data.index.to_series()
check_freq(data.index)
check_gaps(data.index)
data.set_index('index')
[...proceed with current analysis]
```
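As a concrete sketch of what `check_freq` and `check_gaps` could do with pandas (the function names follow the pseudocode above; this is an illustration, not ydata-profiling's actual implementation):

```python
import pandas as pd

def check_freq(index: pd.DatetimeIndex):
    """Try to infer the sampling frequency of the index ('D', 'H', ...), or None."""
    return pd.infer_freq(index)

def check_gaps(index: pd.DatetimeIndex, freq: str) -> pd.DatetimeIndex:
    """Return the timestamps missing from the index, given the expected frequency."""
    expected = pd.date_range(index.min(), index.max(), freq=freq)
    return expected.difference(index)

# Daily index with two days removed, simulating a gap in the time series
idx = pd.date_range("2023-01-01", periods=10, freq="D").delete([3, 4])
print(check_freq(pd.date_range("2023-01-01", periods=10, freq="D")))  # D
print(check_gaps(idx, "D"))  # the two missing days
```

Here `freq` would come from `check_freq` whenever the frequency is inferable; the resulting gap report could then be surfaced alongside the existing missing-values analysis.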
### Additional context
In the case of time series, the data values could have 0 missing values, yet there could be gaps in their associated time series (my case). | closed | 2023-04-20T17:20:26Z | 2023-08-08T19:44:42Z | https://github.com/ydataai/ydata-profiling/issues/1316 | [
"feature request 💬"
] | CatChenal | 3 |
openapi-generators/openapi-python-client | rest-api | 613 | black posthook throws `ImportError: cannot import name 'izip_longest' from 'pathspec.compat'` | **Describe the bug**
During the `black .` post hook, openapi-python-client throws :
```
openapi-python-client update --url someurl.com/openapi.json
Updating my_project
Error(s) encountered while generating, client was not created
black failed
Traceback (most recent call last):
File "C:\Users\x\AppData\Local\Programs\Python\Python39\lib\runpy.py", line 197, in _run_module_as_main
return _run_code(code, main_globals, None,
File "C:\Users\x\AppData\Local\Programs\Python\Python39\lib\runpy.py", line 87,
in _run_code
exec(code, run_globals)
File "C:\Users\x\code\my_project\venv\Scripts\black.exe\__main__.py", line 4, in <module>
File "src\black\__init__.py", line 12, in <module>
File "C:\Users\x\code\my_project\venv\lib\site-packages\pathspec\__init__.py", line 26, in <module>
from .pathspec import PathSpec
File "C:\Users\x\code\my_project\venv\lib\site-packages\pathspec\pathspec.py", line 27, in <module>
from .compat import (
ImportError: cannot import name 'izip_longest' from 'pathspec.compat' (C:\Users\x\code\my_project\venv\lib\site-packages\pathspec\compat.py)
```
In a fresh environment, running `black .` on its own works fine at first. Then, running `openapi-python-client update` fails at the black step. The strange thing is that this error then persists when running `black .` on its own afterwards.
**To Reproduce**
Steps to reproduce the behavior:
1. Install dependencies as listed in Desktop section, using poetry
2. Run openapi-python-client update
3. See error during black post hook.
**Expected behavior**
I expect the `black .` to complete without errors.
**OpenAPI Spec File**
Unfortunately this is under NDA. However, the package is generated without issue, the problem comes from the `black` post hook.
**Desktop (please complete the following information):**
- OS: Windows 10 Business 19044.1645
[tool.poetry.dependencies]
python = "3.9"
httpx = "0.22.0"
attrs = "21.4.0"
python-dateutil = "2.8.2"
azure-identity = "1.10.0"
exdir = "0.4.2"
[tool.poetry.dev-dependencies]
flake8 = "4.0.1"
mypy = "0.950"
pytest = "7.1.2"
black = "22.3.0"
aiounittest = "1.4.1"
openapi-python-client= "0.11.1"
| open | 2022-05-11T18:28:11Z | 2022-05-11T18:32:12Z | https://github.com/openapi-generators/openapi-python-client/issues/613 | [
"🐞bug"
] | LaurentBergeron | 0 |
fastapi/sqlmodel | fastapi | 309 | Parent instance is not bound to a Session; lazy load operation of attribute cannot proceed | ### First Check
- [X] I added a very descriptive title to this issue.
- [X] I used the GitHub search to find a similar issue and didn't find it.
- [X] I searched the SQLModel documentation, with the integrated search.
- [X] I already searched in Google "How to X in SQLModel" and didn't find any information.
- [X] I already read and followed all the tutorial in the docs and didn't find an answer.
- [X] I already checked if it is not related to SQLModel but to [Pydantic](https://github.com/samuelcolvin/pydantic).
- [X] I already checked if it is not related to SQLModel but to [SQLAlchemy](https://github.com/sqlalchemy/sqlalchemy).
### Commit to Help
- [x] I commit to help with one of those options 👆
### Example Code
```python
class CharacterBase(SQLModel):
name: str
birthdate: Optional[date]
sex: str
height_metric: Optional[condecimal(max_digits=5, decimal_places=2)]
weight_metric: Optional[condecimal(max_digits=5, decimal_places=2)]
class CharacterRead(CharacterBase):
character_id: int
aliases: List["AliasBase"]
class Character(CharacterBase, table=True):
character_id: Optional[int] = Field(default=None, primary_key=True)
aliases: List["Alias"] = Relationship(back_populates="character")
occupations: List["Occupation"] = Relationship(back_populates="character")
creation_date: datetime = Field(default=datetime.utcnow())
update_date: datetime = Field(default=datetime.utcnow())
class AliasBase(SQLModel):
alias: str
class Alias(AliasBase, table=True):
alias_id: Optional[int] = Field(default=None, primary_key=True)
character_id: Optional[int] = Field(
default=None, foreign_key="character.character_id"
)
character: Optional[Character] = Relationship(back_populates="aliases")
@router.get("/{id}", response_model=models.CharacterRead)
def get_character_by_id(id: int):
with Session(engine) as session:
character = session.exec(
select(models.Character).where(models.Character.character_id == id)
).one()
return character
```
### Description
The character object can have multiple aliases besides its real name and I want to return those aliases with the character data. However, when I access the route I get the error `sqlalchemy.orm.exc.DetachedInstanceError: Parent instance <Character at 0x17c52c98dc0> is not bound to a Session; lazy load operation of attribute 'aliases' cannot proceed`. Which I don't understand why. On the [tutorial](https://sqlmodel.tiangolo.com/tutorial/fastapi/relationships/) it seems to work just fine and my entire function is wrapped within a `with Session(engine) as session:`
### Operating System
Windows
### Operating System Details
_No response_
### SQLModel Version
0.0.6
### Python Version
3.10.2
### Additional Context
_No response_ | closed | 2022-04-21T23:25:17Z | 2022-04-23T22:13:15Z | https://github.com/fastapi/sqlmodel/issues/309 | [
"question"
] | Maypher | 3 |
keras-team/keras | tensorflow | 20,953 | Function `openvino.core.custom_gradient()` should be a class | I'm not using this, but I saw by accident while browsing the code that `keras.backend.openvino.core.custom_gradient()` is a function that defines nested functions `__init__()` and `__call__()`, but doesn't do anything (in particular, it seems to return `None`):
https://github.com/keras-team/keras/blob/c03ae353f0702387bff1d9a899115c1b9daeca37/keras/src/backend/openvino/core.py#L598-L617
I think line 598 should be
```python
class custom_gradient:
```
Introduced by https://github.com/keras-team/keras/pull/19727/commits/5c401a92677995888de2b2c5d397fb78e6dddf32 in #19727. It was a function throwing a `NotImplementedError` before, but should have been changed to a class. | open | 2025-02-24T13:52:06Z | 2025-03-06T04:06:03Z | https://github.com/keras-team/keras/issues/20953 | [
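To illustrate why the `def` version is inert (toy names below, not Keras code): defining `__init__` and `__call__` inside a function just creates two locals and discards them, and the enclosing function falls through and returns `None`, whereas the class version actually wraps the function.

```python
def custom_gradient_as_function(fun):
    # Mirrors the reported bug: these nested defs are created and thrown away,
    # and the enclosing function implicitly returns None.
    def __init__(self, fun):
        self.fun = fun

    def __call__(self, *args, **kwargs):
        return self.fun(*args, **kwargs)


class custom_gradient_as_class:
    def __init__(self, fun):
        self.fun = fun

    def __call__(self, *args, **kwargs):
        return self.fun(*args, **kwargs)


print(custom_gradient_as_function(abs))   # None - decorating with this loses the function
print(custom_gradient_as_class(abs)(-3))  # 3 - the class version stays callable
```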
"type:Bug"
] | JulianJvn | 0 |
home-assistant/core | python | 140,781 | direction of camera streams, does not work | ### The problem
There is an option to rotate the camera stream 90 degrees left/right or 180 degrees, but it is not useful because it has no effect
### What version of Home Assistant Core has the issue?
core-2025.3.3
### What was the last working version of Home Assistant Core?
_No response_
### What type of installation are you running?
Home Assistant OS
### Integration causing the issue
_No response_
### Link to integration documentation on our website
_No response_
### Diagnostics information
_No response_
### Example YAML snippet
```yaml
```
### Anything in the logs that might be useful for us?
```txt
```
### Additional information
_No response_ | open | 2025-03-17T06:56:37Z | 2025-03-23T14:03:35Z | https://github.com/home-assistant/core/issues/140781 | [
"integration: camera"
] | rezueps | 3 |
mljar/mljar-supervised | scikit-learn | 303 | Better time limit for algorithm training | closed | 2021-01-25T08:06:21Z | 2021-01-25T20:00:29Z | https://github.com/mljar/mljar-supervised/issues/303 | [
"enhancement"
] | pplonski | 1 | |
pydata/xarray | numpy | 9,620 | Nightly Hypothesis tests failed | [Workflow Run URL](https://github.com/pydata/xarray/actions/runs/11318955291)
<details><summary>Python 3.12 Test Summary</summary>
```
properties/test_index_manipulation.py::DatasetTest::runTest: hypothesis.errors.FlakyFailure: Inconsistent results: An example failed on the first run but now succeeds (or fails with another error, or is for some reason not runnable). (1 sub-exception)
Falsifying example:
state = DatasetStateMachine()
state.init_ds(var=Variable(
data=array([6.97735636e+16]),
dims=['żzåäH'],
attrs={'ŵāºŻŋ': {'': None, 'ĺſÍĂł': False},
'óžŅſÒ': {'ŻOÀĜ': array([['}\x9fÉt\x0b', 'x)\U000ae574vu'],
['¶\x85Ê |', '\U000e8a69\U000c7509{`÷']], dtype='<U5'),
'čØŽÞo': None,
'nàM': 'Ō'},
'ĭ': {'īLIř': None,
'őOÇİ6': None,
'żžĠéĆ': 'Yï',
'ŻōŅ4r': 'ąĸ',
'': array([ -inf, -2.41930698e+16]),
'ſŻĮ': None,
'ų': '1À',
'ŃĀD': 'ðżŔ',
'Þſìćû': True}},
))
state.assert_invariants()
Draw 1: ['żzåäH']
> stacking ['żzåäH'] as ſ1
state.stack(create_index=False, data=data(...), newname='ſ1')
state.assert_invariants()
adding dimension coordinate йŶ
state.add_dim_coord(var=Variable(
data=array(['1969-12-31T23:59:56.242924246', '1970-01-01T00:00:00.000024524',
'1969-12-31T23:59:59.999946357', '1970-01-01T00:00:00.000000215',
'2005-12-20T07:21:30.571963408', '1969-12-31T23:59:59.999968231'],
dtype='datetime64[ns]'),
dims=['йŶ'],
attrs={'ÐżĔdŒ': True, '': False},
))
state.assert_invariants()
Draw 2: ['йŶ']
> stacking ['йŶ'] as š³Żh
state.stack(create_index=False, data=data(...), newname='š³Żh')
state.assert_invariants()
adding dimension coordinate ă
state.add_dim_coord(var=Variable(
data=array([ 238, 33549, 9223372036854775807,
33437, -5672900794703796655, 8457639148380062172],
dtype='timedelta64[ns]'),
dims=['ă'],
attrs={},
))
state.assert_invariants()
Draw 3: ['ă']
> stacking ['ă'] as ťĖ
state.stack(create_index=False, data=data(...), newname='ťĖ')
state.assert_invariants()
adding dimension coordinate 7óŸŻç
state.add_dim_coord(var=Variable(
data=array([4157794707, 371, 2440091218, 5774, 53,
16806], dtype=uint32),
dims=['7óŸŻç'],
attrs={'çhŁ': {'V': True, '': array([[6.10351562e-05-2.22507386e-313j],
[1.40129846e-45-3.33333333e-001j]]), 'EÉ': 'ŽÁþH', 'Ŧ': None, 'Ġĩ': False, 'žŝŬŻø': False, 'ſfą': None, 'ðQŤĩ': array([['ú\U000d2b93\U00044bab', ''],
['\x96£5ü', '']], dtype='<U4'), 'ųŽÅ': array([b'\xc5', b';'], dtype='|S2'), 'ijŻŁŻã': 'ăıż', 'ð': array([['', ''],
['', '\x9d']], dtype='>U3'), 'ŤŒ': 'WáŶʾ'}},
))
state.assert_invariants()
Draw 4: ['7óŸŻç']
> stacking ['7óŸŻç'] as 1ôŇĆ
state.stack(create_index=False, data=data(...), newname='1ôŇĆ')
state.assert_invariants()
adding dimension coordinate ŸĸÂqſ
state.add_dim_coord(var=Variable(
data=array([ -5456, 51517, 8787776349668474847,
43576], dtype='timedelta64[ns]'),
dims=['ŸĸÂqſ'],
attrs={},
))
state.assert_invariants()
adding dimension coordinate ª
state.add_dim_coord(var=Variable(
data=array([ 18392, 4260466259, 22621, 3303341823], dtype=uint32),
dims=['ª'],
attrs={'íŻÚŸœ': 'ÂĪßĵſ', 'ćŻė': None, 'ÚrÄſņ': array([[65523, 20189],
[54688, 4286]], dtype=uint16)},
))
state.assert_invariants()
adding dimension coordinate ķŘÀōú
state.add_dim_coord(var=Variable(
data=array(['', '\x9a\x84>\U000781caO'], dtype='<U14'),
dims=['ķŘÀōú'],
attrs={'ÕŕŸz2': {'2ĐſRª': 'żſężĤ', 'ČſŰÑÉ': None, 'ŇJiõŞ': 'ŷ', '': 'ż'}},
))
state.assert_invariants()
assign_coords: žſa
state.assign_coords(var=Variable(
data=array(['', 'Éi\x8ajz\U000d8bfaÀ', ',1J^À\U0003b229\U001094c0\uddd7',
'N\U0010e27eX\x13\x03ìf0', 'í'], dtype='<U8'),
dims=['žſa'],
attrs={},
))
state.assert_invariants()
Draw 5: ['ķŘÀōú']
> drop_indexes: ['ķŘÀōú']
state.drop_indexes(data=data(...))
state.assert_invariants()
adding dimension coordinate Îr
state.add_dim_coord(var=Variable(
data=array(['\U0006a7dc¸!', '\U001028a8&5ç'], dtype='<U15'),
dims=['Îr'],
attrs={'ÿŝfźö': {'ŽġžžŨ': 'O', 'SŢijìō': None}},
))
state.assert_invariants()
adding dimension coordinate ŽŃêńŢ
state.add_dim_coord(var=Variable(
data=array([18446744073709551614, 54242, 59560],
dtype=uint64),
dims=['ŽŃêńŢ'],
attrs={},
))
state.assert_invariants()
assign_coords: Q
state.assign_coords(var=Variable(
data=array(['2253-06-01T23:30:19.684719259', '1970-01-01T00:00:00.000000245',
'1969-12-31T23:59:59.999983953', '1970-01-01T00:00:00.000053418',
'1970-01-01T00:00:00.000000229', '1970-01-01T00:00:00.000062968'],
dtype='datetime64[ns]'),
dims=['Q'],
attrs={'gTſ': {'oVUļg': False, '': '', 'íÌŤZ': None},
'ŽſŻŲŻ': {},
'ĸĩL': {}},
))
state.assert_invariants()
adding dimension coordinate Ūʼn
state.add_dim_coord(var=Variable(data=array([-51867485], dtype=int32), dims=['Ūʼn'], attrs={}))
state.assert_invariants()
Draw 6: ['Ūʼn']
> stacking ['Ūʼn'] as ňŔvjſ
state.stack(create_index=True, data=data(...), newname='ňŔvjſ')
state.assert_invariants()
Draw 7: ['ŽŃêńŢ', 'žſa', 'Q']
> stacking ['ŽŃêńŢ', 'žſa', 'Q'] as cê
state.stack(create_index=True, data=data(...), newname='cê')
state.assert_invariants()
Draw 8: 'cê'
> unstacking cê
state.unstack(data=data(...))
state.assert_invariants()
state.teardown()
You can reproduce this example by temporarily adding @reproduce_failure('6.115.0', b'AXicRVcLtJVjGn7vX537qc5hpAuhUW5FFBWGkdyG6MLShWmiXIouIwpTSReDtDAYg5WSmlXTjIglyhhd5iBlHM1kMilFIUNTU+rM8+1da/Za++yz9//93/e+z/u8z/v8RMKE15W7Ny18eceB2czhweLdRG9SoWGqTNWEvyqx13RyxHmWjJzWkBC5JnlTUxjHVWr3s64gpiQiMVbZNWi1cTLTq0kseVD44UI9jJ5Lis2N8aIq9cQVU4cql41qHHaX+iCx3VSDuF62uE9ltfuZYuOoHLeIjTd5xmKVJRrl/jNqgsDucOkgXD1oefMr6mqjUcnmUapjQkYTqcgjEgMYCelgp474RbHWhqjVJhPWDyglY/M/Egv9IaxGvQnHP5OSqH6kenjQZZZ0sVoQJUW8ykhEw2abDWefTkj2aInRZMEkvpZtdbi5dMIFnwZ4llLy5KqELZGtAW2+tPd3d0TbygkldK5zw04qvoBUYsSCRfKnRPjre0gSE7v1IE8BxFcimPtcypNL2BhHYrIe15MOzmFZ6O1iD5jeSzgqUf+Jg8uFK8sTV1aWNMIJjDWSQ2gxrrS0xAGHD8KJgdWtKx+pz1Fs3HlLu6sAHA2sK4Y1sK4f42zmTSfzM1t4+3fc7Ck6aUzx4gfTryHLoNYfTKL+dKLKZQ+WTWn9elVfDuTNO0+kiduKV8f9iGMW20/E9+GIkBgZST5zGamMfHYQRdAAo13iLTTp2W59gKm+EXpyJCSnuDNeRyg62elt82OFE7CVpB3AO1SIzKKBbQmwBAejqUknTbQ57JcarXIBNX4DDtFW1ddEh4W+lAy1uTVxohfEV2I7GqQ+JyOlso1tKUouiYaGrQd8YdeG90W6h9iLDgBRqjQze2LQ5RxdKbHTDez3K52VCZj0zyny+jIsL+HE8njIHlJuxIq7tuRC54wQHbmDXlKNHRoKvYWaMniDghXOKqESpWUHge7IvGxN8d/E11/R6yThNxuKr3Ki/+54rP1jn68bO+i2Ad2MG7b8fO2OGROwEm2Tiy790ZdqNivobo2LkJrLWx5nqr0CeEAq7xq6Aos3WuxFVYTeQVCveJKbVU/N6Jp7XcqhxbcR54AfVOAVW0hb8/c9zmA/KmVSI+O+ws5iT6DXpC17fU4nSVaKJ5Y4l5aVlwiXl5Z6yvtk6moj7PQ50zkADqURak90Qa1VlAGsiAwK3l8fROLr80ASoYnlxa8Ty8cR3XMQik44gyY8Xbwy4elbmS57rcMZw65ev7BDbufRA2ccWPzcrzYc+mSbgnWHVQrjlEIoxVjoS6aniAZPttISUyvEiF/9YojH7rVfzMzvxg6uj8TPIzvTjBEf9WohQrUPEt5l0FRHxTthx05jQ+XcOUTnzgEEibNQ0n6oLhK+JXe63GLe3oF3gMGtUhRCKFAknyncLCGjVecUM+rgvOyIeYVX4sYgXjnTIVZ0DW7YvKnTrC34P6AiFeAnSaDSJheyT0RjVOOCu25PDm77bJUFuY0MfMX6TWLfOtojUej4nCzKrVieAMDx4dNy9cXGsBwh9oXrgiQguRbCbITUH2OdhRngbD9QYxRoL3N5u5nQt4vmE4pXLRWYHynC9oBvYNolaGQu3s1ylgoOU/pLRlqWZAjkHZwMCbSheZbQFKGXg5EX4MhtPQYrWIr3I46nqWWmSFOiplpBtVnd0apIjL7gqAcM0ZrpBmS5z+wK0QuA0frtWSFzK97JUoemn060pE2UFbpGqxjh0JeLitjWXN4ZcsOruhW/ruo2kjQQ/9jvR6UVM+7ZeOizPWV5mte5uGxeZ7b9QG0pa0/0/mkgN4Yjzpz2eaF3SM9CbXt8THiPoP3fPnglOCX9+hP16w9K8tubDzTkN3t3UEKdRiZwGP8OYt8v/jHYoT3FFibWX6BrXgPYFGw
PoNMkfovWYYxW7xR6MUaexl9xJ84rgl4m1/3tWaEWDd8150sv+6QUQkWlVW/ggPW9AIGXB3b0eDJikmgFBkL1w2uP7Dkk3bs6pBV4+7rYJ2I9RNdCwe4BndSudfp7IGHpZnlyQ+6ZHgUfwpuLdIduIj7kb/E7ZZTSX0RKNiPgJsR3sfdj8BVHap1TU9ETXO8K/wxM8h9SVi+BbUBMkM0F75VhwqVMUTgTW+UxVGWa2jCNO4PLp+1lW4bfT8rGxo5wHSCxDFvoouOZq+8bUjVzGfh+5Ie/pg31uVWkJUgVTTZeWPWvu8N/zNIDstQ3qe/AJNchYCaC+CpB2gLwg3+W7dExwFjpGo+dqkNUj2WQCqpnqApkD91sOlf8epY+AAi1NvNOaBnQqGiDajEd/69bpaA9H0bNlEv1SDahG4luLh/9zffnHXsPkJNnLYY4vQiZhWhiMmJDoNtMaHHYZNH/AJtCaS3jTcvZtmYorSb3d9hbZp8KTXW/OeRNJJP1CTXYJtrVMXOwI1DV0FKRs9mnZC6gnFkjhoe+Y3Za7hLz8Yq80DF0IzQhG8WSwkDMc9bkBI3FFpchD7NdYJpTaUUjXC8lOp2py8Sjtj9e9VA9tMVaICEakaUFRik7rDz1MPaHBLDNpkghWkgy9ilnOTLMA1Ty6Gw1FUtCHjXtDt6wxh0qfTIW5g+hg2tEepnWqf0eAWO0G/iGwkurkDKNGqVLMHJpktv52I/tFLbv8q2VKDCK5uiRFYgAsrqMY75Yb7BD9X3M29ALs5JCvoAWJjU8X2dEhanxYZ6e9oHr1DwjTPRWt+psxjxTBXXy5QW1tTMUKGew4U03mgzL0pmtOCTug8KU6630hsqTFnXuM6AOSitVX+Jok30gumCa6ulhPcUfKhhLtVLTz+Ca3kJraHTPLZJ9O2UvmvVTHme5yeQFqABi0edz3VGXE+AHwJXObs8bVJTptvmIEu6XVq5ENSvgSGLOBo0lrF2QUvWBD5vsm1nVdq7FaZ6ZsxWR9cZ+MLyogE8S28RxvEZDdmZgtGfjj/k/B2uzo8mCqF+hWeTwbA5ArSPZtwYtUbvdZG1mT9Lpsxs7N25cUlpZEEboPDTlYGsc6A1ZW7epKKnrNtVCW7bNLX7bNhdUjYCyKLBiG8G6CptTwa/9Y4FwaUmFcVlZ9g/Q5HEtF9Ved9W8p8Yia9p10E3sGsawO7xuC0/tQst7FX/svrBLLu+aecWva+b1xFelLQfv2RKYjEBs156De+zBVESRxdu5tYAE4zzv6NoqsbU0Xx7aLpMXPAV5REeKXsmYodaG8oQH+OAHWotMdwO57ENEl5ovcKrBr4DQ24I2eablV2OoJ7875cZFe07tV7nkhV7Nx217eJYeNrw/+zcs72V3DscCw9bfsxCBz8+I3aCZASqVmG4gQDYOAKGCdIOOpVw1nc9xh2Qx8Ox3RE6Bhrl9Cn9gtUEVLBvQnHgMiSkZX0z3AnnzY0s0CR0t8QjHIPM832DeQcldUcj5Bkyo/EyER6qOLq0RRIwX2oHOV18T+nHuPfmpWQv4lGxMo14Fp9XgrIT7Vf7NNBDSAB+j56e8GDN0R57+tiooSxNUWV7FAkwHi+FZTdCnsF3SMj9EOR0n8mp2F1yc95CkY/jJR1965t15z/cpxxFn8d2502lP0Tnp2Exs1UkaH2HziIJ3vf8HPFHh5nfzo916ji5iJ2pWAMgsPqh+RL4sx7GMgLUTjGa6t01liXE2u2XoiP8BO9kdhg==') as a decorator on your test case
```
</details>
| closed | 2024-10-14T00:30:14Z | 2024-10-29T14:31:02Z | https://github.com/pydata/xarray/issues/9620 | [
"topic-hypothesis"
] | github-actions[bot] | 1 |
man-group/arctic | pandas | 281 | a | #### Arctic Version
```
# 1.30
```
#### Arctic Store
```
# VersionStore
```
#### Platform and version
Linux
#### Description of problem and/or code sample that reproduces the issue
Just a test
| closed | 2016-11-07T17:57:48Z | 2016-11-07T17:57:56Z | https://github.com/man-group/arctic/issues/281 | [] | bmoscon | 0 |
pydata/xarray | numpy | 9,340 | Allow symbolic links between datatree nodes? | ### What is your issue?
[It would be nice](https://github.com/pydata/xarray/issues/4118#issuecomment-875121115) if the tree could support internal symbolic links between nodes.
---
If someone actually wants this then please speak up! Otherwise we won't prioritise it, because it's not at all simple. | open | 2024-08-13T16:08:27Z | 2024-08-14T16:33:02Z | https://github.com/pydata/xarray/issues/9340 | [
"design question",
"topic-backends",
"topic-DataTree"
] | TomNicholas | 2 |
iperov/DeepFaceLab | deep-learning | 916 | Moving directory results in "not found" (cosmetic error) + strange side effects afterwards | Hello,
first of all, let me tell you that the error I want to mention is more of a cosmetic problem than a real error!
Yesterday I moved some directories on my hard drive. One of those dirs was the DeepFaceLab directory. Today I wanted to test some ideas in my first workspace (the one with the initial test files) and just started the "7) merge" batch file.
Everything was fine, the UI loaded, I switched with the Tab key to look at the frame with the mask, and tried to change the blur of the mask (the first action I tried).
An error occurred and showed up on the console, telling me that the mask file could not be found (i.e. it searched in D:\ instead of E:\).
This was strange because everything seemed to work and show up in the UI. However, it seems to me that different code paths are used to load the files at some points.
I only observed this with the "7) merge" batch, but it could affect other batches too.
Further observation - side effects:
After moving the directory back to its old position I tried to continue my work. The merge worked fine now, but the "8) merge avi" batch now behaves somewhat strangely.
I was working with a 29.98 fps video but had used the default 25 fps days ago on my first tries without problems. The batch still converted my result to 29.98 fps. Thinking of it now, that was strange behaviour. It got even stranger after moving and removing the directory, because now it suddenly converts the video to 25 fps and I cannot use a float value in the console. It worked just some days ago - but how?!
I hope this was detailed enough.
For any more information, please ask anytime.
I will try to help this project, because it really is a nice one.
tl;dr:
- moving the directory gives a "file not found" error afterwards in specific situations and batches
- the 8th batch (merge everything) cannot receive float values for FPS, but the batch converted videos with the source FPS days ago without me noticing
Sincerely
Me | open | 2020-10-03T13:40:35Z | 2023-06-08T21:44:09Z | https://github.com/iperov/DeepFaceLab/issues/916 | [] | Void-Droid | 1 |
proplot-dev/proplot | data-visualization | 295 | Better warning dependency of cartopy package | ### Description
When I tried the basic geographic plot without cartopy installed
```
import proplot as pplt
pplt.subplots(proj='npstere')
```
I got this error:
```
ValueError: Unknown projection 'npstere'. Options are: '3d', 'basemap', 'cart', 'cartesian', 'cartopy', 'geo', 'geographic', 'polar', 'rect', 'rectilinar', 'three'.
```
It would be better to raise the dependency of cartopy package.
### Proplot version
0.9.2 | closed | 2021-10-11T12:51:22Z | 2021-10-15T21:30:18Z | https://github.com/proplot-dev/proplot/issues/295 | [
"enhancement"
] | zxdawn | 3 |
huggingface/peft | pytorch | 2,054 | Problem with model.merge_and_unload - the saved model is almost empty - 40kb | ### System Info
Ubuntu 22.04 all latest versions
### Who can help?
@BenjaminBossan @sayakpaul
### Information
- [ ] The official example scripts
- [x] My own modified scripts
### Tasks
- [x] An officially supported task in the `examples` folder
- [ ] My own task or dataset (give details below)
### Reproduction
I was trying to simply merge a lora model to the base, but it does not work, the saved model is always zero, and I tried it on my pc as well as on cloud gpu hosting and it is all the same, the saved model size is (almost) zero, while the merged model seem to have an ok size:
##here we merge the model with the adapter
###load model
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig
#base_model_id = "unsloth/gemma-2-9b-bnb-4bit"
base_model_id = "unsloth/gemma-2-2b"
#base_model_id = "unsloth/gemma-2-2b-bnb-4bit"
#base_model_id = "PY007/TinyLlama-1.1B-intermediate-step-715k-1.5T"
bnb_config = BitsAndBytesConfig(
load_in_4bit=True,
bnb_4bit_use_double_quant=True,
bnb_4bit_quant_type="nf4",
bnb_4bit_compute_dtype=torch.bfloat16
)
base_model = AutoModelForCausalLM.from_pretrained(
base_model_id, # Mistral, same as before
#quantization_config=bnb_config, # Same quantization config as before
device_map="auto",
trust_remote_code=True,
torch_dtype=torch.float16
#use_auth_token=True
)
#load peft
from peft import PeftModel
ft_model = PeftModel.from_pretrained(base_model, "/workspace/gemma2b-rlhf/checkpoint-6/")
ft_model.eval()
ft_model.to("cuda")
import torch
def get_model_size_in_gb(model):
# Initialize size counter
total_size = 0
# Iterate over all model parameters
for param in model.parameters():
total_size += param.numel() * param.element_size()
# Convert bytes to gigabytes
size_in_gb = total_size / (1024 ** 3)
return size_in_gb
if 1==1:
print("ft_model",get_model_size_in_gb(ft_model),ft_model)
ft_model.merge_and_unload()
print("merged model")
#ft_model.merge_adapter()
#.merge_and_unload()
print("ft_model",get_model_size_in_gb(ft_model),ft_model)
if 1==1:
import os
# Save the merged model to a folder called "full"
save_path = "/workspace/gemma2b-rlhf/checkpoint-12/full/"
os.makedirs(save_path, exist_ok=True)
ft_model.save_pretrained(save_path, safe_serialization=False)
```
### Expected behavior
The model size should be similar to the base model, in this case around 4-5 GB and not 40kb.
Not sure if merged model is ok when i print it, it is (4.869591236114502 is the size in GB, but when i save it , it is 40kb):
```
merged model
ft_model 4.869591236114502
PeftModelForCausalLM(
(base_model): LoraModel(
(model): Gemma2ForCausalLM(
(model): Gemma2Model(
(embed_tokens): Embedding(256000, 2304, padding_idx=0)
(layers): ModuleList(
(0-25): 26 x Gemma2DecoderLayer(
(self_attn): Gemma2Attention(
(q_proj): Linear(in_features=2304, out_features=2048, bias=False)
(k_proj): Linear(in_features=2304, out_features=1024, bias=False)
(v_proj): Linear(in_features=2304, out_features=1024, bias=False)
(o_proj): Linear(in_features=2048, out_features=2304, bias=False)
(rotary_emb): Gemma2RotaryEmbedding()
)
(mlp): Gemma2MLP(
(gate_proj): Linear(in_features=2304, out_features=9216, bias=False)
(up_proj): Linear(in_features=2304, out_features=9216, bias=False)
(down_proj): Linear(in_features=9216, out_features=2304, bias=False)
(act_fn): PytorchGELUTanh()
)
(input_layernorm): Gemma2RMSNorm((2304,), eps=1e-06)
(post_attention_layernorm): Gemma2RMSNorm((2304,), eps=1e-06)
(pre_feedforward_layernorm): Gemma2RMSNorm((2304,), eps=1e-06)
(post_feedforward_layernorm): Gemma2RMSNorm((2304,), eps=1e-06)
)
)
(norm): Gemma2RMSNorm((2304,), eps=1e-06)
)
(lm_head): Linear(in_features=2304, out_features=256000, bias=False)
)
)
)
``` | closed | 2024-09-07T22:57:29Z | 2024-10-16T15:03:56Z | https://github.com/huggingface/peft/issues/2054 | [] | Oxi84 | 4 |
Anjok07/ultimatevocalremovergui | pytorch | 567 | ValueError: Input signal length is too small=0 | Last Error Received:
Process: MDX-Net
If this error persists, please contact the developers with the error details.
Raw Error Details:
ValueError: "Input signal length=0 is too small to resample from 88200->44100"
Traceback Error: "
File "UVR.py", line 4716, in process_start
File "separate.py", line 286, in seperate
File "separate.py", line 869, in prepare_mix
File "librosa\util\decorators.py", line 88, in inner_f
File "librosa\core\audio.py", line 179, in load
File "librosa\util\decorators.py", line 88, in inner_f
File "librosa\core\audio.py", line 647, in resample
File "resampy\core.py", line 97, in resample
"
Error Time Stamp [2023-05-23 09:25:17]
Full Application Settings:
vr_model: Choose Model
aggression_setting: 10
window_size: 512
batch_size: 4
crop_size: 256
is_tta: False
is_output_image: False
is_post_process: False
is_high_end_process: False
post_process_threshold: 0.2
vr_voc_inst_secondary_model: No Model Selected
vr_other_secondary_model: No Model Selected
vr_bass_secondary_model: No Model Selected
vr_drums_secondary_model: No Model Selected
vr_is_secondary_model_activate: False
vr_voc_inst_secondary_model_scale: 0.9
vr_other_secondary_model_scale: 0.7
vr_bass_secondary_model_scale: 0.5
vr_drums_secondary_model_scale: 0.5
demucs_model: Choose Model
segment: Default
overlap: 0.25
shifts: 2
chunks_demucs: Auto
margin_demucs: 44100
is_chunk_demucs: False
is_chunk_mdxnet: False
is_primary_stem_only_Demucs: False
is_secondary_stem_only_Demucs: False
is_split_mode: True
is_demucs_combine_stems: True
demucs_voc_inst_secondary_model: No Model Selected
demucs_other_secondary_model: No Model Selected
demucs_bass_secondary_model: No Model Selected
demucs_drums_secondary_model: No Model Selected
demucs_is_secondary_model_activate: False
demucs_voc_inst_secondary_model_scale: 0.9
demucs_other_secondary_model_scale: 0.7
demucs_bass_secondary_model_scale: 0.5
demucs_drums_secondary_model_scale: 0.5
demucs_pre_proc_model: No Model Selected
is_demucs_pre_proc_model_activate: False
is_demucs_pre_proc_model_inst_mix: False
mdx_net_model: UVR-MDX-NET Karaoke 2
chunks: Auto
margin: 44100
compensate: Auto
is_denoise: False
is_invert_spec: False
is_mixer_mode: False
mdx_batch_size: Default
mdx_voc_inst_secondary_model: No Model Selected
mdx_other_secondary_model: No Model Selected
mdx_bass_secondary_model: No Model Selected
mdx_drums_secondary_model: No Model Selected
mdx_is_secondary_model_activate: False
mdx_voc_inst_secondary_model_scale: 0.9
mdx_other_secondary_model_scale: 0.7
mdx_bass_secondary_model_scale: 0.5
mdx_drums_secondary_model_scale: 0.5
is_save_all_outputs_ensemble: True
is_append_ensemble_name: False
chosen_audio_tool: Manual Ensemble
choose_algorithm: Min Spec
time_stretch_rate: 2.0
pitch_rate: 2.0
is_gpu_conversion: False
is_primary_stem_only: False
is_secondary_stem_only: False
is_testing_audio: False
is_add_model_name: False
is_accept_any_input: False
is_task_complete: False
is_normalization: False
is_create_model_folder: False
mp3_bit_set: 320k
save_format: WAV
wav_type_set: PCM_16
help_hints_var: False
model_sample_mode: False
model_sample_mode_duration: 30
demucs_stems: All Stems | open | 2023-05-22T23:26:46Z | 2023-05-22T23:26:46Z | https://github.com/Anjok07/ultimatevocalremovergui/issues/567 | [] | Sebba1976 | 0 |
waditu/tushare | pandas | 1,556 | The back-adjusted (hfq) data for 689009 is wrong | ID:125875
```
  ts_code    trade_date  open   high   low    close  pre_close  change  pct_chg  vol       amount
1 689009.SH  20210611    NA     NA     NA     NA     NA         0.12    0.1485   21024.22  172997.233
2 689009.SH  20210610    NA     NA     NA     NA     NA         -0.4    -0.4926  17102.09  139235.786
3 689009.SH  20210609    NA     NA     NA     NA     NA         -1.8    -2.1687  23130.47  192459.286
4 689009.SH  20210608    78.80  83.78  78.71  83.00  79.64      3.36    4.219    24142.71  197649.545
5 689009.SH  20210607    75.36  79.99  75.02  79.64  75.36      4.28    5.6794   21950.52  172864.544
6 689009.SH  20210604    79.00  79.00  74.90  75.36  77.80      -2.44   -3.1362  21581.36  165204.476
7 689009.SH  20210603    80.00  80.99  77.31  77.80  80.17      -2.37   -2.9562  16465.46  129032.589
8 689009.SH  20210602    78.20  82.45  75.18  80.17  77.50      2.67    3.4452   34803.36  276538.529
9 689009.SH  20210601    72.86  82.89  72.86  77.50  73.30      4.2     5.7299   38853.68  305349.244
```
| open | 2021-06-15T06:31:45Z | 2021-06-15T06:31:45Z | https://github.com/waditu/tushare/issues/1556 | [] | cuberoocp | 0 |
NullArray/AutoSploit | automation | 1,247 | Unhandled Exception (c745f1000) | Autosploit version: `4.0`
OS information: `Linux-4.15.0-76-generic-x86_64-with-Ubuntu-18.04-bionic`
Running context: `autosploit.py`
Error mesage: `[Errno 2] No such file or directory: '/opt/AutoSploit/hosts.txt'`
Error traceback:
```
Traceback (most recent call):
File "/opt/AutoSploit/lib/term/terminal.py", line 721, in terminal_main_display
self.__reload()
File "/opt/AutoSploit/lib/term/terminal.py", line 77, in __reload
self.loaded_hosts = open(lib.settings.HOST_FILE).readlines()
IOError: [Errno 2] No such file or directory: '/opt/AutoSploit/hosts.txt'
```
Metasploit launched: `True`
| closed | 2020-02-13T21:04:57Z | 2020-03-21T21:10:27Z | https://github.com/NullArray/AutoSploit/issues/1247 | [] | AutosploitReporter | 1 |
PablocFonseca/streamlit-aggrid | streamlit | 33 | General Question: plugging in custom components | Awesome library @PablocFonseca! A quick question for you:
My firm has an enterprise license for AGGrid, but we also have a number of custom "wrappers" around the library in order to modify the look/feel of components. If we wanted to plug these into what you have built here what would be the path forward to do so? Really appreciate your time.
(Full disclosure: I am not a frontend dev by any stretch of the imagination so this is uncharted waters for me) | closed | 2021-09-15T03:38:52Z | 2024-04-04T17:52:25Z | https://github.com/PablocFonseca/streamlit-aggrid/issues/33 | [
"enhancement"
] | scottweitzner | 3 |
tflearn/tflearn | data-science | 1,098 | Prediction is slower when model is loaded than if it is fitted during the process | Hello,
I have a strange issue: the `DNN.predict` method is noticeably slower when I load my model's weights than when I fit the model in the same process. I've also noticed that when I run predictions over a batch of images, it gets faster and faster to predict.
Here is my code
```python
class Reseau(object):
    def __init__(self, img_size, lr=-1, activation=" "):
        tf.logging.set_verbosity(tf.logging.ERROR)
        self.lr = lr
        self.activation = activation
        self.img_size = img_size
        self.alreadySaved = 0

    def setting(self, X, Y, test_x, test_y, nbEpoch):
        tflearn.init_graph(num_cores=32, gpu_memory_fraction=1)
        with tf.device("/device:GPU:0"):
            convnet = input_data(shape=[None, self.img_size, self.img_size, 3], name='input')
            convnet = conv_2d(convnet, 32, 5, activation=self.activation)
            convnet = max_pool_2d(convnet, 5)
            convnet = conv_2d(convnet, 64, 5, activation=self.activation)
            convnet = max_pool_2d(convnet, 5)
            convnet = conv_2d(convnet, 128, 5, activation=self.activation)
            convnet = max_pool_2d(convnet, 5)
            convnet = conv_2d(convnet, 64, 5, activation=self.activation)
            convnet = max_pool_2d(convnet, 5)
            convnet = conv_2d(convnet, 32, 5, activation=self.activation)
            convnet = max_pool_2d(convnet, 5)
            convnet = flatten(convnet)
            convnet = fully_connected(convnet, 1024, activation=self.activation, name='last')
            convnet = fully_connected(convnet, 1024, activation=self.activation, name='last')
            convnet = fully_connected(convnet, 1024, activation=self.activation, name='last')
            convnet = fully_connected(convnet, 1024, activation=self.activation, name='last')
            convnet = dropout(convnet, 0.8)
            convnet = fully_connected(convnet, 2, activation='softmax')
            convnet = regression(convnet, optimizer='adam', learning_rate=self.lr, loss='categorical_crossentropy', name='targets')

        self.model = tflearn.DNN(convnet, tensorboard_dir='log')
        if self.alreadySaved == 0:
            self.model.fit({'input': X}, {'targets': Y}, n_epoch=nbEpoch, validation_set=({'input': test_x}, {'targets': test_y}), snapshot_step=500, show_metric=True, run_id="model")
            self.model.save("./model")
        else:
            self.model.load("./model", weights_only=True)
        return self.model

    def predire(self, img, label):
        image = array(img).reshape(1, self.img_size, self.img_size, 3)
        model_out = self.model.predict(image)
        if np.argmax(model_out) == np.argmax(label):
            rep = 1
        else:
            rep = 0
        return rep
```
Here is a part of my main
```python
reseau.setting(X, Y, test_x, test_y, NB_EPOCH)
X = np.array([i[0] for i in test]).reshape(-1, IMG_SIZE, IMG_SIZE, 3)
Y = [i[1] for i in test]
cpt = 0
vrai = 0
start_time = time.time()
for i in range(20):
    cpt = 0
    vrai = 0
    start_time = time.time()
    for img in tqdm(X):
        prediction = reseau.predire(img, Y[cpt])
        cpt += 1
        if prediction == 1:
            vrai += 1
```
As you can see, I predict the same batch of images 20 times. The first pass is always slower than the following ones (without fitting, I predict 82 images per second at first and then 340 per second; with fitting, it's 255 images per second the first time and 340 per second after that).
I'm really out of ideas to fix this.
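The shape of the effect described here (a slow first prediction, then fast ones) is typical of lazy one-time initialization; whether tflearn's graph/session finalization is the actual cause is an assumption, but the pattern can be sketched with a toy stand-in:

```python
import time


class LazyModel:
    """Toy stand-in for a model whose first predict() pays a one-time setup cost."""

    def __init__(self):
        self._ready = False

    def predict(self, x):
        if not self._ready:
            time.sleep(0.05)  # simulate one-off initialization (graph build, allocation, ...)
            self._ready = True
        return x * 2


model = LazyModel()

t0 = time.perf_counter()
model.predict(1)
first = time.perf_counter() - t0

t0 = time.perf_counter()
model.predict(1)
later = time.perf_counter() - t0

print(first > later)  # True: the first call is dominated by setup
```

If this is what is happening, a single warm-up prediction on a dummy image right after loading the weights would move the one-time cost out of the measured loop.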
| open | 2018-11-09T11:20:27Z | 2018-11-09T11:20:27Z | https://github.com/tflearn/tflearn/issues/1098 | [] | AxelRagobert | 0 |
modin-project/modin | data-science | 6,535 | IO tests fail with the latest s3fs (2023.9.0) | ```python
import modin.experimental.pandas as pd
res = pd.read_csv_glob("s3://modin-datasets/testing/multiple_csv/", storage_options={"anon": True})
print(res)
```
this code works with `pip install s3fs==2023.6.0` and [fails](https://github.com/modin-project/modin/actions/runs/6081300714/job/16496861135#step:15:341) with `pip install s3fs==2023.9.0`
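Until the incompatibility is resolved, a plausible stopgap (the exact bound is an assumption based on the two versions above) is to pin `s3fs` in the environment, e.g. in `requirements.txt`:

```text
s3fs>=2023.6.0,<2023.9
```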
| closed | 2023-09-05T14:17:00Z | 2023-09-05T15:20:26Z | https://github.com/modin-project/modin/issues/6535 | [
"bug 🦗",
"P0"
] | dchigarev | 0 |
qubvel-org/segmentation_models.pytorch | computer-vision | 764 | Binary Segmentation | Anyone care to share a working example for binary segmentation? The examples shared no longer work. Thank you very much :) | closed | 2023-05-21T01:32:29Z | 2023-07-12T09:45:26Z | https://github.com/qubvel-org/segmentation_models.pytorch/issues/764 | [] | chefkrym | 2 |
marimo-team/marimo | data-science | 3,800 | Plotly requires narwhals>=1.15.1, marimo.io/p/dev project stays at version 1.10 | ### Describe the bug
I'm trying to plot with Plotly on marimo.io/p/dev, through the project dashboard. Plotly needs narwhals>=1.15.1, but the Pyodide environment comes with version 1.10.0. Uninstalling, then reinstalling narwhals==1.26 showed narwhals==1.26 in the side bar, but the environment always starts up with 1.10.0, and won't update the version in the session.

### Environment
<details>
Pyodide environment on marimo.io/p/dev
</details>
### Code to reproduce
```python
import marimo as mo
import micropip
micropip.uninstall("narwhals")
await micropip.install("narwhals==1.26")
import numpy as np
import plotly.graph_objects as go
N = 1000
t = np.linspace(0, 10, 100)
y = np.sin(t)
fig = go.Figure(data=go.Scatter(x=t, y=y, mode='markers'))
```

```
Traceback (most recent call last):
File "/lib/python3.12/site-packages/marimo/_runtime/executor.py", line 115, in execute_cell_async
await eval(cell.body, glbls)
Cell marimo://notebook.py#cell=cell-1, line 14, in <module>
fig = go.Figure(data=go.Scatter(x=t, y=y, mode='markers'))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/lib/python3.12/site-packages/plotly/graph_objs/_scatter.py", line 3634, in __init__
self["x"] = _v
~~~~^^^^^
File "/lib/python3.12/site-packages/plotly/basedatatypes.py", line 4860, in __setitem__
self._set_prop(prop, value)
File "/lib/python3.12/site-packages/plotly/basedatatypes.py", line 5199, in _set_prop
val = validator.validate_coerce(val)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/lib/python3.12/site-packages/_plotly_utils/basevalidators.py", line 410, in validate_coerce
v = copy_to_readonly_numpy_array(v)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/lib/python3.12/site-packages/_plotly_utils/basevalidators.py", line 97, in copy_to_readonly_numpy_array
v = nw.from_native(v, allow_series=True, pass_through=True)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
TypeError: from_native() got an unexpected keyword argument 'pass_through'
``` | closed | 2025-02-14T15:46:03Z | 2025-02-14T16:20:09Z | https://github.com/marimo-team/marimo/issues/3800 | [
"bug"
] | essicolo | 1 |
recommenders-team/recommenders | machine-learning | 1,930 | [BUG] AttributeError: 'dict' object has no attribute '__LIGHTFM_SETUP__' | ### Description
My CICD is failing when installing recommenders. Previously this code was working correctly.
Full stacktrace :
```
9s
Run python -m pip install --upgrade pip
Requirement already satisfied: pip in /opt/hostedtoolcache/Python/3.8.16/x64/lib/python3.8/site-packages (22.0.4)
Collecting pip
Downloading pip-23.1.2-py3-none-any.whl (2.1 MB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 2.1/2.1 MB 11.2 MB/s eta 0:00:00
Installing collected packages: pip
Attempting uninstall: pip
Found existing installation: pip 22.0.4
Uninstalling pip-22.0.4:
Successfully uninstalled pip-22.0.4
Successfully installed pip-23.1.2
Collecting pytest
Downloading pytest-7.3.1-py3-none-any.whl (320 kB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 320.5/320.5 kB 9.5 MB/s eta 0:00:00
Collecting iniconfig (from pytest)
Downloading iniconfig-2.0.0-py3-none-any.whl (5.9 kB)
Collecting packaging (from pytest)
Downloading packaging-23.1-py3-none-any.whl (48 kB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 48.9/48.9 kB 16.7 MB/s eta 0:00:00
Collecting pluggy<2.0,>=0.12 (from pytest)
Downloading pluggy-1.0.0-py2.py3-none-any.whl (13 kB)
Collecting exceptiongroup>=1.0.0rc8 (from pytest)
Downloading exceptiongroup-1.1.1-py3-none-any.whl (14 kB)
Collecting tomli>=1.0.0 (from pytest)
Downloading tomli-2.0.1-py3-none-any.whl (12 kB)
Installing collected packages: tomli, pluggy, packaging, iniconfig, exceptiongroup, pytest
Successfully installed exceptiongroup-1.1.1 iniconfig-2.0.0 packaging-23.1 pluggy-1.0.0 pytest-7.3.1 tomli-2.0.1
Collecting recommenders==1.1.1 (from -r requirements.txt (line 1))
Downloading recommenders-1.1.1-py3-none-any.whl (339 kB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 339.0/339.0 kB 7.3 MB/s eta 0:00:00
Collecting numpy>=1.19 (from recommenders==1.1.1->-r requirements.txt (line 1))
Downloading numpy-1.24.3-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (17.3 MB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 17.3/17.3 MB 69.9 MB/s eta 0:00:00
Collecting pandas<2,>1.0.3 (from recommenders==1.1.1->-r requirements.txt (line 1))
Downloading pandas-1.5.3-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (12.2 MB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 12.2/12.2 MB 111.9 MB/s eta 0:00:00
Collecting scipy<2,>=1.0.0 (from recommenders==1.1.1->-r requirements.txt (line 1))
Downloading scipy-1.10.1-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (34.5 MB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 34.5/34.5 MB 64.0 MB/s eta 0:00:00
Collecting tqdm<5,>=4.31.1 (from recommenders==1.1.1->-r requirements.txt (line 1))
Downloading tqdm-4.65.0-py3-none-any.whl (77 kB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 77.1/77.1 kB 27.5 MB/s eta 0:00:00
Collecting matplotlib<4,>=2.2.2 (from recommenders==1.1.1->-r requirements.txt (line 1))
Downloading matplotlib-3.7.1-cp38-cp38-manylinux_2_12_x86_64.manylinux2010_x86_64.whl (9.2 MB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 9.2/9.2 MB 125.1 MB/s eta 0:00:00
Collecting scikit-learn<1.0.3,>=0.22.1 (from recommenders==1.1.1->-r requirements.txt (line 1))
Downloading scikit_learn-1.0.2-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (26.7 MB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 26.7/26.7 MB 62.5 MB/s eta 0:00:00
Collecting numba<1,>=0.38.1 (from recommenders==1.1.1->-r requirements.txt (line 1))
Downloading numba-0.57.0-cp38-cp38-manylinux2014_x86_64.manylinux_2_17_x86_64.whl (3.6 MB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 3.6/3.6 MB 77.6 MB/s eta 0:00:00
Collecting lightfm<2,>=1.15 (from recommenders==1.1.1->-r requirements.txt (line 1))
Downloading lightfm-1.17.tar.gz (316 kB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 316.4/316.4 kB 69.7 MB/s eta 0:00:00
Installing build dependencies: started
Installing build dependencies: finished with status 'done'
Getting requirements to build wheel: started
Getting requirements to build wheel: finished with status 'error'
error: subprocess-exited-with-error
× Getting requirements to build wheel did not run successfully.
│ exit code: 1
╰─> [17 lines of output]
Traceback (most recent call last):
File "/opt/hostedtoolcache/Python/3.8.16/x64/lib/python3.8/site-packages/pip/_vendor/pyproject_hooks/_in_process/_in_process.py", line 353, in <module>
main()
File "/opt/hostedtoolcache/Python/3.8.16/x64/lib/python3.8/site-packages/pip/_vendor/pyproject_hooks/_in_process/_in_process.py", line 335, in main
json_out['return_val'] = hook(**hook_input['kwargs'])
File "/opt/hostedtoolcache/Python/3.8.16/x64/lib/python3.8/site-packages/pip/_vendor/pyproject_hooks/_in_process/_in_process.py", line 118, in get_requires_for_build_wheel
return hook(config_settings)
File "/tmp/pip-build-env-_9_g7ize/overlay/lib/python3.8/site-packages/setuptools/build_meta.py", line 341, in get_requires_for_build_wheel
return self._get_build_requires(config_settings, requirements=['wheel'])
File "/tmp/pip-build-env-_9_g7ize/overlay/lib/python3.8/site-packages/setuptools/build_meta.py", line 323, in _get_build_requires
self.run_setup()
File "/tmp/pip-build-env-_9_g7ize/overlay/lib/python3.8/site-packages/setuptools/build_meta.py", line 487, in run_setup
super(_BuildMetaLegacyBackend,
File "/tmp/pip-build-env-_9_g7ize/overlay/lib/python3.8/site-packages/setuptools/build_meta.py", line 338, in run_setup
exec(code, locals())
File "<string>", line 11, in <module>
AttributeError: 'dict' object has no attribute '__LIGHTFM_SETUP__'
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
error: subprocess-exited-with-error
```
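Not a fix, but the mechanism behind the message may be worth noting: the traceback ends in setuptools executing `setup.py` via `exec(code, locals())`, and when `exec` is given a globals dict without a `__builtins__` key, CPython binds `__builtins__` to the builtins module's *dict* rather than to the module itself. If line 11 of lightfm's `setup.py` does a numpy-style `__builtins__.__LIGHTFM_SETUP__ = True` (an assumption from the traceback, I have not checked the file), attribute assignment then fails exactly like this. A stdlib-only reproduction of the mechanism:

```python
import builtins

code = "__builtins__.__LIGHTFM_SETUP__ = True"  # assumed numpy-style guard

# As a top-level script, __builtins__ is the builtins *module*:
exec(code, {"__builtins__": builtins})  # attribute assignment works
del builtins.__LIGHTFM_SETUP__          # clean the flag up again

# The way setuptools' run_setup execs the file (fresh globals dict),
# CPython inserts builtins' *dict* under "__builtins__" instead:
err = None
try:
    exec(code, {})
except AttributeError as e:
    err = e
print(err)  # 'dict' object has no attribute '__LIGHTFM_SETUP__'
```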
### In which platform does it happen?
Github Actions CICD :
```
name: Run Python Tests

on:
  push:
    branches:
      - main
  pull_request:
  workflow_dispatch:

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - name: Set up Python
        uses: actions/setup-python@v4
        with:
          python-version: "3.8"
      - name: Install dependencies
        run: |
          python -m pip install --upgrade pip
          pip install pytest
          if [ -f requirements.txt ]; then pip install -r requirements.txt; fi
      - name: Run tests with pytest
        run: |
          pytest
```
requirements.txt:
```
recommenders==1.1.1
```
| closed | 2023-05-22T10:00:09Z | 2024-03-05T13:13:41Z | https://github.com/recommenders-team/recommenders/issues/1930 | [
"bug"
] | benoit360l | 2 |
flaskbb/flaskbb | flask | 394 | Create a FAQ with some common questions and pitfalls | For example those are issues that could easily prevented with a FAQ (or better docs :P):
#372
#389
| closed | 2018-01-13T17:42:56Z | 2018-04-15T07:47:50Z | https://github.com/flaskbb/flaskbb/issues/394 | [
"enhancement"
] | sh4nks | 1 |
ivy-llc/ivy | numpy | 28,312 | householder_product | I will implement this as a composition function.
#28311 will be good to have for better implementation.
Conversation of torch.linealg locked! | closed | 2024-02-17T17:29:27Z | 2024-02-17T17:32:46Z | https://github.com/ivy-llc/ivy/issues/28312 | [
"Sub Task"
] | ZenithFlux | 0 |
google-research/bert | tensorflow | 889 | "model_fn should return an EstimatorSpec." Error when running "predicting_movie_reviews_with_bert_on_tf_hub.ipynb" for fine-tuning BERT-Chinese Model | I tried to run "predicting_movie_reviews_with_bert_on_tf_hub.ipynb", but fine-tuning on a Chinese-text csv with corresponding labels. As such I loaded 'https://tfhub.dev/google/bert_chinese_L-12_H-768_A-12/1' model instead of English-model in the original code
I only made minimal changes from the original ipynb for data preparation, which was running OK. But during training, as in
`estimator.train(input_fn=train_input_fn, max_steps=num_train_steps)`
there was an error: "ValueError: model_fn should return an EstimatorSpec." Note that I didn't make any modification in `model_fn_builder(num_labels, learning_rate, num_train_steps, num_warmup_steps)`.
The full error log is as follows - it was running on CPU (8GB RAM) with Windows 10 environment:
```
Traceback (most recent call last):
  File "bert_classify.py", line 352, in <module>
    estimator.train(input_fn=train_input_fn, max_steps=num_train_steps)
  File "C:\Miniconda_python\lib\site-packages\tensorflow_estimator\python\estimator\estimator.py", line 370, in train
    loss = self._train_model(input_fn, hooks, saving_listeners)
  File "C:\Miniconda_python\lib\site-packages\tensorflow_estimator\python\estimator\estimator.py", line 1161, in _train_model
    return self._train_model_default(input_fn, hooks, saving_listeners)
  File "C:\Miniconda_python\lib\site-packages\tensorflow_estimator\python\estimator\estimator.py", line 1191, in _train_model_default
    features, labels, ModeKeys.TRAIN, self.config)
  File "C:\Miniconda_python\lib\site-packages\tensorflow_estimator\python\estimator\estimator.py", line 1153, in _call_model_fn
    raise ValueError('model_fn should return an EstimatorSpec.')
ValueError: model_fn should return an EstimatorSpec.
```
Just wondering what went wrong and needs to be done when loading BERT-Chinese model to fine-tune it for text classification task? Much appreciated | closed | 2019-10-28T13:25:56Z | 2020-02-01T08:08:57Z | https://github.com/google-research/bert/issues/889 | [] | xinxu75 | 2 |
ultralytics/ultralytics | python | 19,732 | Pre processing paramaters | ### Search before asking
- [x] I have searched the Ultralytics YOLO [issues](https://github.com/ultralytics/ultralytics/issues) and [discussions](https://github.com/orgs/ultralytics/discussions) and found no similar questions.
### Question
Hi, the pre-processing during inference should be applied to the image the same way as the pre-processing during training, right? So, are the parameters for padding, centering, and everything you used for preprocessing during training saved in the .pth file? Does `model.predict` pick these parameters up if they are there? Thanks!
### Additional
_No response_ | open | 2025-03-16T21:50:07Z | 2025-03-17T09:45:08Z | https://github.com/ultralytics/ultralytics/issues/19732 | [
"question"
] | roy-orfaig | 3 |
marshmallow-code/flask-marshmallow | rest-api | 143 | Flask SQLAlchemy Integration - Documentation Suggestion | Firstly, thank you for the great extension!!
I've run into an error that I'm sure others will have run into; it may be worth updating the docs with a warning about it.
Our structure was as follows:
- Each model has it's own module
- Each model module also contains a Schema and Manager for example UserModel, UserSchema, UserManager all defined within /models/user.py
Some background - with SQLAlchemy, with separate models, you need to import them all at runtime, before the DB is initialised, to avoid circular dependencies within relationships.
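A self-contained illustration of that failure mode, with throwaway module names (nothing here is SQLAlchemy-specific; it just shows why import order matters when two modules need each other at import time):

```python
import os
import sys
import tempfile

tmp = tempfile.mkdtemp()
files = {
    # circ_models imports circ_schemas before it finishes defining VALUE,
    # and circ_schemas imports circ_models right back:
    "circ_models.py": "import circ_schemas\nVALUE = 1\n",
    "circ_schemas.py": "import circ_models\nNEEDS = circ_models.VALUE\n",
}
for name, body in files.items():
    with open(os.path.join(tmp, name), "w") as f:
        f.write(body)

sys.path.insert(0, tmp)
err = None
try:
    import circ_models  # noqa: F401
except AttributeError as e:  # "partially initialized module" on Python 3.8+
    err = e
print(err)
```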
When the `UserSchema(ma.ModelSchema)` is hit during import `from app.models import *` (in bootstrap), this initialises the models and attempts to execute the relationships. At this stage, we may not have a relationship requirement (which SQLAlchemy avoids by using string-based relationships); however, as the `ma.ModelSchema` initialises the models, it creates errors such as this:
> sqlalchemy.exc.InvalidRequestError: When initializing mapper mapped class User->users, expression ‘Team’ failed to locate a name (“name ‘Team’ is not defined”). If this is a class name, consider adding this relationship() to the <class ‘app.models.user.User’> class after both dependent classes have been defined.
and, on subsequent loads:
> sqlalchemy.exc.InvalidRequestError: Table ‘users_teams’ is already defined for this MetaData instance. Specify ‘extend_existing=True’ to redefine options and columns on an existing Table object.
The solution to this is to simply build the UserSchemas in a different import namespace; we've now got:
```
/schemas/user_schema.py
/models/user.py
```
And no more circular issues - hopefully this helps someone else, went around in circles (pun intended) for a few hours before I realised it was the ModelSchema causing it.
Could the docs be updated to make a point of explaining that the ModelSchema initialises the model, and therefore it's a good idea for them to be in separate import destinations? | open | 2019-07-29T09:13:33Z | 2020-04-20T06:52:44Z | https://github.com/marshmallow-code/flask-marshmallow/issues/143 | [
"help wanted",
"docs"
] | williamjulianvicary | 4 |
Miserlou/Zappa | django | 2,089 | TypeError: 'NoneType' object is not callable |
## Context
First time installation of zappa and django
## Expected Behavior
Expected behavior is to show the basic Django page.
## Actual Behavior
When I accessed the page (https://y60g2h2ae6.execute-api.ap-northeast-2.amazonaws.com/dev/), the message below came up:
"{'message': 'An uncaught exception happened while servicing this request. You can investigate this with the `zappa tail` command.', 'traceback': ['Traceback (most recent call last):\\n', ' File \"/var/task/handler.py\", line 540, in handler\\n with Response.from_app(self.wsgi_app, environ) as response:\\n', ' File \"/var/task/werkzeug/wrappers/base_response.py\", line 287, in from_app\\n return cls(*_run_wsgi_app(app, environ, buffered))\\n', ' File \"/var/task/werkzeug/test.py\", line 1119, in run_wsgi_app\\n app_rv = app(environ, start_response)\\n', \"TypeError: 'NoneType' object is not callable\\n\"]}"
## Steps to Reproduce
1. virtualenv venv
2. source venv/bin/activate
3. pip install django
4. django-admin startproject testproject .
5. pip install zappa
6. zappa init
7. zappa deploy dev
## Your Environment
* Zappa version used: 0.51.0
* Operating System and Python version: mac os mojave 10.14.6, python 3.8.1
* The output of `pip freeze`:
```
argcomplete==1.11.1
asgiref==3.2.7
boto3==1.12.46
botocore==1.15.46
certifi==2020.4.5.1
cfn-flip==1.2.3
chardet==3.0.4
click==7.1.1
Django==3.0.5
docutils==0.15.2
durationpy==0.5
future==0.18.2
hjson==3.0.1
idna==2.9
jmespath==0.9.5
kappa==0.6.0
pip-tools==5.0.0
placebo==0.9.0
python-dateutil==2.6.1
python-slugify==4.0.0
pytz==2019.3
PyYAML==5.3.1
requests==2.23.0
s3transfer==0.3.3
six==1.14.0
sqlparse==0.3.1
text-unidecode==1.3
toml==0.10.0
tqdm==4.45.0
troposphere==2.6.0
urllib3==1.25.9
Werkzeug==0.16.1
wsgi-request-logger==0.4.6
zappa==0.51.0
```
* Your `zappa_settings.json`:
```json
{
    "dev": {
        "aws_region": "ap-northeast-2",
        "django_settings": "testproject.settings",
        "profile_name": "default",
        "project_name": "test-zappa-14",
        "runtime": "python3.8",
        "s3_bucket": "zappa-iba0235d3"
    }
}
```
} | open | 2020-04-27T03:02:06Z | 2020-05-02T22:09:56Z | https://github.com/Miserlou/Zappa/issues/2089 | [] | alphahacker | 3 |
recommenders-team/recommenders | data-science | 1,345 | Getting com.microsoft.aad.msal4j.AcquireTokenSilentSupplier failed error on getting access token from clientSecret | Hi Team,
I am trying to get an access token from a clientSecret. Please find the code snippet below.
```java
IClientCredential cred = ClientCredentialFactory.createFromSecret(clientSecret);
ConfidentialClientApplication app;
try {
    // Build the MSAL application object for a client credential flow
    app = ConfidentialClientApplication.builder(applicationId, cred).authority(authority).build();
} catch (MalformedURLException e) {
    System.out.println("Error creating confidential client: " + e.getMessage());
    return null;
}

IAuthenticationResult result;
try {
    SilentParameters silentParameters = SilentParameters.builder(scopeSet).build();
    result = app.acquireTokenSilently(silentParameters).join();
} catch (Exception ex) {
    if (ex.getCause() instanceof MsalException) {
        ClientCredentialParameters parameters =
                ClientCredentialParameters
                        .builder(scopeSet)
                        .build();
        // Try to acquire a token. If successful, you should see
        // the token information printed out to console
        result = app.acquireToken(parameters).join();
    } else {
        // Handle other exceptions accordingly
        System.out.println("Unable to authenticate = " + ex.getMessage());
        return null;
    }
}
```
But while running I am getting the error below; can you please suggest what changes I have to make?
I am using msal4j-1.9.1.jar.
```
[ForkJoinPool.commonPool-worker-1] ERROR com.microsoft.aad.msal4j.ConfidentialClientApplication - [Correlation ID: 7c7ce67e-*********]
Execution of class com.microsoft.aad.msal4j.AcquireTokenSilentSupplier failed.
com.microsoft.aad.msal4j.MsalClientException: java.net.SocketTimeoutException: connect timed out
```
| closed | 2021-03-15T08:39:29Z | 2021-03-16T16:51:59Z | https://github.com/recommenders-team/recommenders/issues/1345 | [
"help wanted"
] | SuryaAnand302 | 3 |
streamlit/streamlit | streamlit | 10,648 | adding the help parameter to a (Button?) widget pads it weirdly instead of staying to the left. | ### Checklist
- [x] I have searched the [existing issues](https://github.com/streamlit/streamlit/issues) for similar issues.
- [x] I added a very descriptive title to this issue.
- [x] I have provided sufficient information below to help reproduce this issue.
### Summary
```python
st.page_link('sub_pages/resources/lesson_plans.py', label='Lesson Plans', icon=':material/docs:', help='View and download lesson plans')
```
## Unexpected Outcome

---
## Expected

### Reproducible Code Example
```Python
import streamlit as st
st.page_link('sub_pages/resources/lesson_plans.py', label='Lesson Plans', icon=':material/docs:', help='View and download lesson plans')
```
### Steps To Reproduce
1. Add help to a button widget
### Expected Behavior
Button keeps help text and is on the left

### Current Behavior
Button keeps help text but is on the right

### Is this a regression?
- [x] Yes, this used to work in a previous version.
### Debug info
- Streamlit version: 1.43.0
- Python version: 3.12.2
- Operating System: Windows 11
- Browser: Chrome
### Additional Information
> Yes, this used to work in a previous version.
1.42.0 works | closed | 2025-03-05T09:56:06Z | 2025-03-07T21:21:05Z | https://github.com/streamlit/streamlit/issues/10648 | [
"type:bug",
"status:confirmed",
"priority:P1",
"feature:st.download_button",
"feature:st.button",
"feature:st.link_button",
"feature:st.page_link"
] | thehamish555 | 2 |
chezou/tabula-py | pandas | 215 | CalledProcessError Tabula-py |
When running tabula-py I get a "CalledProcessError":
CalledProcessError at /api/uploadvendorchargefile/
Command '['java', '-Dfile.encoding=UTF8', '-jar', '/home2/backend/appvenv/lib/python3.5/site-packages/tabula/tabula-1.0.2-jar-with-dependencies.jar', '--pages', '1', '--guess', '29168.pdf']' returned non-zero exit status 1
tabula.environment_info()
Python version:
3.5.1
Java version:
OpenJdk Version "1.8.0_232"
OpenJdk Runtime Environment (build 1.8.0_232-8u232-b09-0ubuntu~18.04.1-b09)
OpenJDK 64-Bit Server VM (build 25.232.b09)
tabula-py version: 1.3.1
platform: Ubuntu 18.04
Working properly on another server on ubuntu 16.04
# What did you do when you faced the problem?
Uninstall and Install tabula-py
Upgrade Java Version
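One generic way to dig further (not tabula-specific; the command here is a stand-in for the failing `java -jar ...` invocation from the traceback): `CalledProcessError` carries the child's exit status, and when output is captured it also carries the stderr, which usually contains the real Java-side message:

```python
import subprocess
import sys

# A child process that fails on purpose, standing in for the JVM call:
cmd = [sys.executable, "-c",
       "import sys; print('real cause here', file=sys.stderr); sys.exit(1)"]

err = None
try:
    subprocess.run(cmd, check=True, capture_output=True, text=True)
except subprocess.CalledProcessError as e:
    err = e

print(err.returncode)      # 1, like the tabula call
print(err.stderr.strip())  # whatever the child wrote to stderr
```

Running the exact `java -Dfile.encoding=UTF8 -jar ... --pages 1 --guess 29168.pdf` command from the error in a shell should surface the same underlying message.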
## Code:
```
import tabula
tabula.read_pdf(file_path, stream=True)
```
## Expected behavior:
```
Reading the PDF with tabula-py
Read a file which is uploaded from a REST API
```
## Actual behavior:
```
CalledProcessError at /api/uploadvendorchargefile/
Command '['java', '-Dfile.encoding=UTF8', '-jar', '/home2/backend/appvenv/lib/python3.5/site-packages/tabula/tabula-1.0.2-jar-with-dependencies.jar', '--pages', '1', '--guess', '29168.pdf']' returned non-zero exit status 1
```
| closed | 2020-01-29T05:14:17Z | 2020-01-29T05:14:30Z | https://github.com/chezou/tabula-py/issues/215 | [] | ghost | 1 |
django-import-export/django-import-export | django | 1,150 | Django import export: it's not importing all column's data from csv. | I'm using django-import-export and it's importing the first 3 columns and leaving the rest of the columns blank.
```python
from import_export import resources
#from import_export import instance_loaders
from import_export.admin import ImportExportModelAdmin
from .userB import User_Acadamic_Details_B
from import_export.fields import Field


class UserResourceB(resources.ModelResource):  # User_Acadamic_Details_A_Resource

    def get_export_headers(self):
        headers = super().get_export_headers()
        for i, h in enumerate(headers):
            if h == 'Course ID':
                headers[i] = "Course_ID"
            if h == 'Rollno':
                headers[i] = "Student_ID"
            if h == 'Name':
                headers[i] = 'Name'
            if h == 'Gender':
                headers[i] = 'Gender'
            if h == 'Dedication':
                headers[i] = 'Dedication'
            if h == 'Quiz 1 (Quiz 01(A))':
                headers[i] = 'Quiz1'
            if h == 'Quiz 2 (Quiz 02(A))':
                headers[i] = 'Quiz2'
            if h == 'Assignment 1 (Assignment no 1(A))':
                headers[i] = 'Assignment1'
            if h == 'Assignment 2 (Assignment no 2(A))':
                headers[i] = 'Assignment2'
            if h == 'Mid Exam (MID TERM EXAM(A))':
                headers[i] = 'Mid_Exam'
        return headers

    Course_ID = Field(attribute='Course_ID', column_name='Course ID')
    Student_ID = Field(attribute='Student_ID', column_name='Rollno')
    Name = Field(attribute='Name', column_name='Name')
    Gender = Field(attribute='Gender', column_name='Gender')
    Dedication = Field(attribute='Dedication', column_name='Dedication')
    Quiz1 = Field(attribute='Quiz1', column_name='Quiz 1 (Quiz 01(A))')
    Quiz2 = Field(attribute='Quiz2', column_name='Quiz 2 (Quiz 02(A))')
    Assignment1 = Field(attribute='Assignment1', column_name='Assignment 1 (Assignment no 1(A))')
    Assignment2 = Field(attribute='Assignment2', column_name='Assignment 2 (Assignment no 2(A))')
    Mid_Exam = Field(attribute='Mid_Exam', column_name='Mid Exam (MID TERM EXAM(A))')
    #Attendance = Field(attribute='Attendance', column_name='Password')

    class Meta:
        model = User_Acadamic_Details_B
        import_id_fields = ('Course_ID',)
        export_order = ('Course_ID', 'Student_ID', 'Name', 'Gender', 'Dedication', 'Quiz1', 'Quiz2', 'Assignment1', 'Assignment2', 'Mid_Exam', 'Mid_Exam')
        skip_unchanged = True
        report_skipped = True


class UserBAdmin(ImportExportModelAdmin):
    resource_class = UserResourceB
```
| closed | 2020-06-09T18:38:54Z | 2020-08-07T09:31:49Z | https://github.com/django-import-export/django-import-export/issues/1150 | [
"question"
] | AirnFire | 4 |
thunlp/OpenPrompt | nlp | 41 | TypeError: __init__() missing 1 required positional argument: 'tokenizer_wrapper_class' | When going through the [tutorial](https://thunlp.github.io/OpenPrompt/notes/examples.html)
In step 6, raised the errors below:
>>> data_loader = PromptDataLoader(
... dataset = dataset,
... tokenizer = bertTokenizer,
... template = promptTemplate,
... )
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: __init__() missing 1 required positional argument: 'tokenizer_wrapper_class' | closed | 2021-11-09T04:19:44Z | 2021-11-11T06:44:49Z | https://github.com/thunlp/OpenPrompt/issues/41 | [] | dongxiaohuang | 4 |
microsoft/hummingbird | scikit-learn | 66 | pytorch problem with pip install on Python3.7 or Python3.8 | Doing pip install hummingbird-ml on Python3.7 or Python3.8 a user reported:
```
ERROR: Could not find a version that satisfies the requirement torch>=1.4.0 (from hummingbird-ml) (from versions: 0.1.2, 0.1.2.post1, 0.1.2.post2)
ERROR: No matching distribution found for torch>=1.4.0 (from hummingbird-ml)
```
Looks like pytorch on pypi is 1.0.2 and on the conda main channel it’s 1.3.1
Maybe linking to the installation page for pytorch would be useful.
| closed | 2020-05-12T17:44:46Z | 2020-06-03T05:12:00Z | https://github.com/microsoft/hummingbird/issues/66 | [] | ksaur | 6 |
xinntao/Real-ESRGAN | pytorch | 857 | How to setting gt_size | My GT images are 512x512, and I want to train on 64x64 inputs upscaled to 256x256. When I set gt_size=512, it reports an error:
### ValueError: LQ (100, 100) is smaller than patch size (128, 128). Please remove None.
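The numbers in the message line up with the paired dataset cropping a `gt_size` patch from the GT image and a `gt_size / scale` patch from the LQ image (the scale of 4 is inferred from "train 64x64 upscale 256x256", not from an actual config):

```python
gt_size = 512
scale = 4                  # assumed from 64x64 -> 256x256

lq_patch = gt_size // scale
print(lq_patch)            # 128, matching "patch size (128, 128)"

lq_side = 100              # "LQ (100, 100)" from the error
print(lq_patch > lq_side)  # True: the crop cannot fit inside the LQ image
# Keeping gt_size <= scale * min(LQ side), e.g. gt_size = 256 here,
# would let the LQ crop fit.
```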
Also, what is the relationship between gt_size and the quality of the final generated image? | open | 2024-10-25T10:11:47Z | 2024-10-25T10:11:47Z | https://github.com/xinntao/Real-ESRGAN/issues/857 | [] | L-Teer | 0 |
influxdata/influxdb-client-python | jupyter | 432 | records with set time don't show up with flux | Hi,
I'm trying to insert data while manually setting the date field. I'm doing so as explained in
https://github.com/influxdata/influxdb-client-python/blob/20c867d6516511fe73b0cae36c1705131cfa92f7/influxdb_client/client/write/point.py#L158
(but with `datetime.now()`)
The trouble is that such entries doesn't show up in `flux` queries (in `influxQL` queries they do). As I'm using `influx 1.8`
```bash
$ apt show influxdb
Package: influxdb
Version: 1.8.10-1
Priority: extra
Section: default
Maintainer: support@influxdb.com
Installed-Size: 148 MB
Depends: curl
Homepage: https://influxdata.com
License: Proprietary
Vendor: InfluxData
Download-Size: 50,7 MB
APT-Manual-Installed: yes
APT-Sources: https://repos.influxdata.com/debian bullseye/stable arm64 Packages
Description: Distributed time-series database.
```
I'm not sure if this might be a reason for the problem (nevertheless this python lib should work with `InfluxDB 1.8+`...).
I've created a small example below.
This is the python code first checking if the database is emtpty, then inserting a record with time set and then a record where the time isn't set.
```py
from datetime import datetime
from influxdb_client import InfluxDBClient, Point
from influxdb_client.client.write_api import SYNCHRONOUS
def printRes(query_api, bucket):
tables = query_api.query(f'from(bucket:"{bucket}") |> range(start: -10m)')
for table in tables:
print(table)
for row in table.records:
print (row.values)
client = InfluxDBClient.from_config_file("config.ini")
query_api = client.query_api()
bucket = "test"
with client.write_api(write_options=SYNCHRONOUS) as write_api:
print("before:")
printRes(query_api, bucket)
print("\ninsert with time set")
now = datetime.now()
print(now)
p = Point("my_measurement").tag("location", "Prague").field("temperature", 25.3).time(now)
write_api.write(bucket=bucket, record=p)
printRes(query_api, bucket)
print("\ninsert without time set")
now = datetime.now()
print(now)
p = Point("my_measurement").tag("location", "Prague").field("temperature", 30.0)
write_api.write(bucket=bucket, record=p)
printRes(query_api, bucket)
client.close()
```
Here's the output of that python code. You can see the first record that was inserted isn't visible.
```py
before:
insert with time set
2022-04-22 23:44:23.622783
insert without time set
2022-04-22 23:44:23.711761
FluxTable() columns: 9, records: 1
{'result': '_result', 'table': 0, '_start': datetime.datetime(2022, 4, 22, 21, 34, 23, 724684, tzinfo=datetime.timezone.utc), '_stop': datetime.datetime(2022, 4, 22, 21, 44, 23, 724684, tzinfo=datetime.timezone.utc), '_time': datetime.datetime(2022, 4, 22, 21, 44, 23, 701321, tzinfo=datetime.timezone.utc), '_value': 30.0, '_field': 'temperature', '_measurement': 'my_measurement', 'location': 'Prague'}
```
Now on the influx server I checked with `influxQL`. And oh now both records are visible.
```
> select * from my_measurement
name: my_measurement
time location temperature
---- -------- -----------
1650663863701321959 Prague 30
1650671063622783000 Prague 25.3
```
To double check, I queried the database on the server with `flux` and this time the first record is missing as well.
```
> from(bucket:"test") |> range(start: -100m)
Result: _result
Table: keys: [_start, _stop, _field, _measurement, location]
_start:time _stop:time _field:string _measurement:string location:string _time:time _value:float
------------------------------ ------------------------------ ---------------------- ---------------------- ---------------------- ------------------------------ ----------------------------
2022-04-22T20:09:12.413264374Z 2022-04-22T21:49:12.413264374Z temperature my_measurement Prague 2022-04-22T21:44:23.701321959Z 30
```
So the record is obviously stored in the database, but somehow `flux` queries aren't able to show it.
I'm not quite sure what the problem is. I guess one possibility is that querying with `flux` is somehow buggy, but it might also be possible that something is wrong with this library and its insertion routine.
To be honest, I don't have much experience with `influx` (I only discovered it today). So maybe one of you can help me with this (and if the bug, assuming it is one and I'm not doing something wrong, isn't in this library, maybe you can help me find the right place to report it to `influxdb`).
PS: Don't wonder why I wanted to manually set the time, I just wanted entries to have the same timestamp (for some reason) | closed | 2022-04-22T22:06:17Z | 2022-05-18T08:59:09Z | https://github.com/influxdata/influxdb-client-python/issues/432 | [
"wontfix"
] | atticus-sullivan | 3 |
tensorpack/tensorpack | tensorflow | 894 | HOW to get information about config in class Model(DQNModel)? | Hi, nice to meet you.
I wonder how to get information about config, such as current learning rate or exploration probability, in class Model(DQNModel), when the program is training. I have tried some method but failed.
| closed | 2018-09-14T08:46:37Z | 2018-09-21T03:53:58Z | https://github.com/tensorpack/tensorpack/issues/894 | [
"examples"
] | silentobservers | 1 |
kynan/nbstripout | jupyter | 147 | Binary file when outputs are not cleared | I use SourceTree, where I can see the changes on the right by clicking on a file.
I have created a new file and run all cells.
If I click on the file, I see this:

I I stage it, I get this message:

If I clear the outputs in the notebook, it looks like this after staging:

I thought that nbstripout should suppress the outputs and wonder why it is not working.
I also have to say that this is not the case for all notebooks with plots. In another .ipynb-file it works as expected without cleaning the outputs.
| closed | 2021-03-02T14:56:04Z | 2022-10-02T09:59:17Z | https://github.com/kynan/nbstripout/issues/147 | [
"type:bug",
"help wanted",
"resolution:wontfix"
] | IsabellLehmann | 16 |
microsoft/nni | deep-learning | 5,012 | How to add skip connections in at least one type of search strategy? | For now I am only able to generate networks where data will flow through each layer one by one (when the searching process has finished). Is it possible to add in search space those skip connections like in ResNet, using `torch.concat` or `torch.sum` at the junction of the layers? | closed | 2022-07-21T19:58:58Z | 2022-07-25T09:58:16Z | https://github.com/microsoft/nni/issues/5012 | [] | alexeyshmelev | 4 |