repo_name (string) | topic (string, 30 classes) | issue_number (int64) | title (string) | body (string) | state (string, 2 classes) | created_at (string) | updated_at (string) | url (string) | labels (list) | user_login (string) | comments_count (int64) |
|---|---|---|---|---|---|---|---|---|---|---|---|
napari/napari | numpy | 6,869 | [test-bot] pip install --pre is failing | The --pre Test workflow failed on 2024-04-24 12:07 UTC
The most recent failing test was on ubuntu-latest py3.12 pyqt6
with commit: 9fcf63e69ac61b5dff0259c8618f828cc5169a9c
Full run: https://github.com/napari/napari/actions/runs/8816233366
(This post will be updated if another test fails, as long as this issue remains open.)
| closed | 2024-04-24T12:07:42Z | 2024-04-24T19:42:24Z | https://github.com/napari/napari/issues/6869 | [
"bug"
] | github-actions[bot] | 0 |
robusta-dev/robusta | automation | 1,681 | Sometimes `workload` is `undefined` in Notification Grouping | **Describe the bug**
Here is the Slack sink configuration:
```
- slack_sink:
    name: main_slack_sink
    slack_channel: alerts-xxxx
    api_key: "{{ env.MAIN_SLACK_SINK }}"
    grouping:
      group_by:
        - workload
        - severity
      interval: 86400
      notification_mode:
        summary:
          threaded: true
          by:
            - workload
            - severity
```
Notifications are sent, but if a pod was killed by OOM or the crash reason is `CrashLoopBackOff`, the workload is undefined:
```
Matching criteria: severity: HIGH, workload: (undefined)
```
**To Reproduce**
See above
**Expected behavior**
As far as I understand, the workload name is the name of a parent object ["Deployment", "ReplicaSet", "DaemonSet", "StatefulSet", "Pod", "Job"]:
```
event.operation in [K8sOperationType.CREATE, K8sOperationType.UPDATE]
and event.obj.kind in ["Deployment", "ReplicaSet", "DaemonSet", "StatefulSet", "Pod", "Job"]
```
I expect the name of the parent object to be assigned as the workload name.
**Screenshots**

**Desktop (please complete the following information):**
- OS: Linux
- Browser: Slack client
- Version [e.g. 22]
**Smartphone (please complete the following information):**
Not applicable.
**Additional context**
Add any other context about the problem here.
| open | 2025-01-07T11:50:35Z | 2025-01-08T05:07:26Z | https://github.com/robusta-dev/robusta/issues/1681 | [] | SergiiBieliaievskyi | 3 |
lanpa/tensorboardX | numpy | 420 | How to draw two curves in a graph? | Thanks. | closed | 2019-05-10T08:43:42Z | 2019-05-10T08:47:42Z | https://github.com/lanpa/tensorboardX/issues/420 | [] | lartpang | 1 |
numpy/numpy | numpy | 28,199 | Type casting behavior with enum.IntFlag changed between 2.0 and 2.1 | Consider the following example of an 8-bit `IntFlag` used together with a `np.int8` or an array of dtype `np.int8`:
```python
from enum import IntFlag, auto
import numpy as np

class PixelStatus(IntFlag):
    BIT0 = auto()
    BIT1 = auto()
    BIT2 = auto()
    BIT3 = auto()
    BIT4 = auto()
    BIT5 = auto()
    BIT6 = auto()
    BIT7 = auto()

print(f'numpy=={np.__version__}: {np.int8(0) | PixelStatus.BIT0 = !r}')
print(f'numpy=={np.__version__}: {np.int8(0) | 1 = !r}')
```
It seems the type casting handling of this changed between 2.0 and 2.1 (2.2 is same as 2.1):
Before:
```
numpy==2.0.2: np.int8(0) | PixelStatus.BIT0 = np.int8(1)
numpy==2.0.2: np.int8(0) | 1 = np.int8(1)
```
now:
```
numpy==2.1.3: np.int8(0) | PixelStatus.BIT0 = np.int64(1)
numpy==2.1.3: np.int8(0) | 1 = np.int8(1)
```
Is this intentional?
Maybe related to https://github.com/numpy/numpy/issues/27540 | closed | 2025-01-20T17:32:57Z | 2025-01-20T20:09:20Z | https://github.com/numpy/numpy/issues/28199 | [] | maxnoe | 2 |
mwaskom/seaborn | data-science | 3,162 | defaults to grid(False) for dotplots using seaborn objects so.Dot | Trying to plot with the seaborn objects.
Using so.Dot()
the following line, when commented out, reintroduces the grid options:
https://github.com/mwaskom/seaborn/blob/master/seaborn/_core/plot.py#L1659 | closed | 2022-11-27T21:11:43Z | 2023-03-27T20:18:54Z | https://github.com/mwaskom/seaborn/issues/3162 | [] | Xparx | 4 |
chainer/chainer | numpy | 7,704 | Flaky test: `chainer_tests/functions_tests/loss_tests/test_contrastive.py::TestContrastive` | https://jenkins.preferred.jp/job/chainer/job/chainer_pr/1513/TEST=CHAINERX_chainer-py3,label=mn1-p100/console | closed | 2019-07-04T12:18:12Z | 2019-08-15T18:45:03Z | https://github.com/chainer/chainer/issues/7704 | [
"cat:test",
"prio:high",
"pr-ongoing"
] | niboshi | 2 |
graphql-python/graphene-django | graphql | 705 | ☂️Graphene-Django v3 | This issue is to track v3 of Graphene-Django which will contain some breaking changes.
WIP branch: [`v3`](https://github.com/graphql-python/graphene-django/tree/v3) ([compare](https://github.com/graphql-python/graphene-django/compare/master...v3))
## Breaking changes
* [x] Upgrade to Graphene v3 (which also upgrades graphql-core to v3) (PR work in progress: https://github.com/graphql-python/graphene-django/pull/905)
* [x] Convert MultipleChoiceField to List of type String (#611)
* [x] Start raising `DeprecationWarnings` for using `only_fields` and `exclude_fields` (see https://github.com/graphql-python/graphene-django/pull/691) (PR #980)
* [x] Start warning if neither `fields` or `exclude` are defined on `DjangoObjectTypes` https://github.com/graphql-python/graphene-django/issues/710 (PR #981)
* [x] Default `CAMELCASE_ERRORS` setting to `True` (PR #789)
* [x] Rename `DJANGO_CHOICE_FIELD_ENUM_V3_NAMING` to `DJANGO_CHOICE_FIELD_ENUM_V2_NAMING` and default it to `False` (reference #860) (PR #982)
* [x] Convert decimal fields correctly: https://github.com/graphql-python/graphene-django/issues/91 | closed | 2019-07-09T14:12:00Z | 2022-09-26T12:08:46Z | https://github.com/graphql-python/graphene-django/issues/705 | [] | jkimbo | 96 |
roboflow/supervision | deep-learning | 846 | Supervision using Yolov7 | ### Search before asking
- [X] I have searched the Supervision [issues](https://github.com/roboflow/supervision/issues) and found no similar feature requests.
### Question
Hi,
Is it possible to run supervision using YOLOv7? If so, how?
Thanks.
### Additional
_No response_ | closed | 2024-02-02T19:27:16Z | 2024-02-03T11:51:04Z | https://github.com/roboflow/supervision/issues/846 | [
"question"
] | JSNN170 | 3 |
Asabeneh/30-Days-Of-Python | pandas | 533 | Cheap "airport" (proxy) recommendations for 2024: a roundup of cheap circumvention services | Below are 8 cheap circumvention proxy ("airport") services worth recommending:
**Most recommended long-established budget provider: [喵酥云](https://www.miaosu.xyz)
Operating since 2019 with a fairly reliable team. Monthly plan: 9 CNY for 200 GB, all relay nodes; backed by large domestic 5G entry points, its speed and stability put it in the first tier of providers. Recommended.**
1. 疾风云[1]: a new dedicated-line, high-value service launched in 2024 under an established provider; offers V2Ray, SSR and Trojan nodes, focused on quality at a low price. From 9.9 CNY/month; supports Alipay and WeChat Pay.
2. OUO: a niche relay service founded in 2023 using the Shadowsocks protocol, with custom clients for Windows, Mac and Android. From 10 CNY/month for 100 GB.
3. 一云梯: a new professional service founded in 2024 using the Trojan protocol over an all-IPLC dedicated-line network; its nodes unlock Netflix, Disney+ and other streaming platforms. From 15 CNY/month for 100 GB.
4. 小鸡快跑: a Trojan-protocol IPLC internal dedicated-line service with over 2 Gbps of spare bandwidth; stable and reliable. From 15 CNY/month for 100 GB.
5. TotoroCloud: a Shadowsocks-protocol IPLC dedicated-line service supporting one-click import into Clash, Shadowrocket and other clients. From 15 CNY/month.
6. SSRDOG: a service supporting pay-as-you-go; the light plan is 25 CNY/month for 150 GB.
7. FATCAT: a high-value service founded in 2023, using Shadowsocks over public-network tunnel relays.
8. 万城网络[4]: offers nodes in Singapore, Japan, the US, Argentina and other countries.
The services above are long-established, run stably, and offer good value for money, suitable for users on a tight budget. Choose according to your own needs and budget.
| closed | 2024-06-28T09:30:04Z | 2024-08-28T01:14:31Z | https://github.com/Asabeneh/30-Days-Of-Python/issues/533 | [] | ji11220 | 1 |
exaloop/codon | numpy | 177 | error: name 'bytearray' is not defined | It seems Codon does not understand bytes/bytearray/ByteString.
I tried to type with ByteString (from the typing module), but it just makes things worse.
What is the right way to write programs that manipulate bytes? | closed | 2023-01-14T14:59:03Z | 2024-11-10T19:39:09Z | https://github.com/exaloop/codon/issues/177 | [
"stdlib"
] | setop | 13 |
jupyter/nbgrader | jupyter | 1,165 | Which extensions should I enable? | The following is a list of extensions that can be enabled:
- nbextensions
  - create_assignment/main --section=notebook
  - formgrader/main --section=notebook
  - formgrader/main --section=tree
  - assignment_list/main --section=tree
  - jupyter-js-widgets/extension --section=notebook
- serverextension
  - nbgrader.server_extensions.formgrader
  - nbgrader.server_extensions.validate_assignment
  - nbgrader.server_extensions.assignment_list
I need to know which ones need to be enabled for an instructor and which ones for students.
| closed | 2019-07-22T21:30:42Z | 2019-08-21T19:13:57Z | https://github.com/jupyter/nbgrader/issues/1165 | [
"question"
] | mkzia | 2 |
aimhubio/aim | data-visualization | 2,441 | Authentication layer | According to https://github.com/aimhubio/aim/issues/1970, there is no auth layer in `aim`.
We are thinking of using it internally, and we want to use an external auth proxy (GCP IAP Proxy in our case). While login to the dashboard using a browser will be easy, I don't know how to inject an OAuth token when using the `aim` library.
Basically, I want to be able to provide a token when `aim_run = Run(repo='aim://11.22.66.55:53800')` is making calls to our remote tracking server that is behind an auth proxy layer.
(ping @cwognum who is working with me on this) | open | 2022-12-22T13:01:28Z | 2023-03-28T06:46:38Z | https://github.com/aimhubio/aim/issues/2441 | [
"type / enhancement"
] | hadim | 8 |
jina-ai/clip-as-service | pytorch | 148 | Prediction | How to make predictions with the pre-trained model using bert-as-service | closed | 2018-12-19T12:16:15Z | 2018-12-25T14:12:48Z | https://github.com/jina-ai/clip-as-service/issues/148 | [] | chandrupc | 1 |
keras-team/keras | machine-learning | 20,397 | Need to reenable coverage test on CI | I had to disable coverage testing here: https://github.com/keras-team/keras/blob/master/.github/workflows/actions.yml#L87-L101
because it was causing a failure on the torch backend CI. The failure was related to coverage file merging.
This does not appear to have been caused by a specific commit. None of our dependencies got updated around the time it started failing, best I can tell. Cause is entirely unclear.
We need to debug it and then reenable coverage testing. | closed | 2024-10-23T03:37:42Z | 2024-10-23T14:47:30Z | https://github.com/keras-team/keras/issues/20397 | [] | fchollet | 0 |
feature-engine/feature_engine | scikit-learn | 566 | [ENH] Possibility to fit base_selector with one feature for automated pipelines. | https://github.com/feature-engine/feature_engine/blob/f94cfaf2bec343b71c73a33b0c5d4e30eb7e7177/feature_engine/selection/base_selector.py#L121-L126
When using [SuperVectorizer (dirty_cat)](https://dirty-cat.github.io/stable/generated/dirty_cat.SuperVectorizer.html) to develop an AutoML solution, I ran into a situation where a master table has only one numerical feature, which raises an error here. In this case I was specifically using DropDuplicateFeatures and wanted the process to move on with the only feature given.
Proposal:
Add a parameter to base_selector that could prevent such raises for automated pipelines.
| closed | 2022-11-23T10:17:26Z | 2022-12-07T10:24:37Z | https://github.com/feature-engine/feature_engine/issues/566 | [] | MatheusHam | 5 |
jonaswinkler/paperless-ng | django | 1,458 | [BUG] PDF processing/archiving changes good text to unicode garbage | <!---
=> Before opening an issue, please check the documentation and see if it helps you resolve your issue: https://paperless-ng.readthedocs.io/en/latest/troubleshooting.html
=> Please also make sure that you followed the installation instructions.
=> Please search the issues and look for similar issues before opening a bug report.
=> If you would like to submit a feature request please submit one under https://github.com/jonaswinkler/paperless-ng/discussions/categories/feature-requests
=> If you encounter issues while installing of configuring Paperless-ng, please post that in the "Support" section of the discussions. Remember that Paperless successfully runs on a variety of different systems. If paperless does not start, it's probably an issue with your system, and not an issue of paperless.
=> Don't remove the [BUG] prefix from the title.
-->
**Describe the bug**
After uploading a PDF to Paperless-ng, the "archived" PDF/A output lacks good text metadata that is present in the original copy.
In my case, this was the original text, copyable when selected from the PDF:
```
Invoice Number: 551341-551342
Billing Date: November 29, 2021
```
After the processing/archiving done by Paperless, this is what I can copy:
```
■♥✈♦✐❝❡ ◆✉♠❜❡r✿ ✺✺✶✸✹✶✲✺✺✶✸✹✷
❇✐❧❧✐♥❣ ❉❛t❡✿ ◆♦✈❡♠❜❡r ✷✾✱ ✷✵✷✶
```
As this was an invoice (from rsync.net, if anyone has their own copy), it contains personal information and I do not want to post it publicly but I am happy to send a copy to any maintainer interested in investigating the issue.
I am running the latest (v1.5.0) Docker image in docker-compose | open | 2021-11-30T00:16:50Z | 2021-11-30T00:16:50Z | https://github.com/jonaswinkler/paperless-ng/issues/1458 | [] | dblitt | 0 |
LAION-AI/Open-Assistant | machine-learning | 3,140 | `Author` label is positioned very poorly | To be honest, I didn't even notice it for a very long time. I have pretty large screen and conversation tree for me looks like this:

As you can see, the distance between the message and the `Author` label is kinda ridiculous.
I think it would be better to put this label in this place instead:

| closed | 2023-05-12T20:52:30Z | 2023-05-12T23:03:41Z | https://github.com/LAION-AI/Open-Assistant/issues/3140 | [
"website",
"UI/UX"
] | DoctorKrolic | 1 |
OFA-Sys/Chinese-CLIP | computer-vision | 238 | How should a multi-node, multi-GPU setup be configured? | open | 2023-12-26T02:37:00Z | 2023-12-26T02:37:00Z | https://github.com/OFA-Sys/Chinese-CLIP/issues/238 | [] | chengjianjie | 0 |
koxudaxi/datamodel-code-generator | pydantic | 1,925 | JSON Schema: oneOf with const | **Describe the bug**
Does not generate an enumeration for oneOf + const. For example, autocomplete in JetBrains IDEs works fine for this case.
**To Reproduce**
1. create a schema
2. generate
3. get an incorrect result generation
Example schema:
```yaml
# definition
nodeJsModeEnum:
title: NodeJS mode
type: string
description: |
A long description here.
default: npm
oneOf:
- title: npm
const: npm
- title: yarn
const: yarn
- title: npm ci
const: npm_ci
# usage
properties:
mode:
$ref: "#/definitions/nodejsModeEnum"
```
Used commandline:
```
pdm run datamodel-codegen --input config/schema/cd.schema.yaml --input-file-type jsonschema --output src/config/cd_model.py --output-model-type pydantic_v2.BaseModel
# pyproject.toml options
#[tool.datamodel-codegen]
#field-constraints = true
#snake-case-field = true
#strip-default-none = false
#target-python-version = "3.11"
```
**Expected behavior**
```python
# expects
class NodeJsModeEnum(Enum):
    npm = 'npm'
    yarn = 'yarn'
    npm_ci = 'npm_ci'

# actual
class NodeJsModeEnum(RootModel[str]):
    root: str = Field(
        ..., description='...'
    )
```
**Version:**
- OS: MacOS
- Python version: 3.11
- datamodel-code-generator version: 0.25.5
| open | 2024-04-17T18:33:12Z | 2024-09-26T11:42:26Z | https://github.com/koxudaxi/datamodel-code-generator/issues/1925 | [
"enhancement"
] | roquie | 1 |
vanna-ai/vanna | data-visualization | 786 | Please add support for SiliconFlow (硅基流动) | Please add support for SiliconFlow (硅基流动). | closed | 2025-02-28T03:54:38Z | 2025-03-04T21:15:22Z | https://github.com/vanna-ai/vanna/issues/786 | [] | mingmars | 0 |
opengeos/leafmap | streamlit | 754 | extruding point data from geojson using leafmap.deck fails | <!-- Please search existing issues to avoid creating duplicates. -->
### Environment Information
- leafmap version: 0.32.1
- Python version: 3.10.12
- Operating System: Linux (ubuntu 22.04)
### Description
Trying to reproduce the pydeck code below using leafmap.deck, which uses 3d extrusion to represent numerical valued features from the geojson.
### What I Did
in vanilla pydeck, I can do:
```python
import geopandas as gpd
import pydeck as pdk

DATA_URL = 'https://data.source.coop/cboettig/conservation-policy/Inflation_Reduction_Act_Projects.geojson'
df = gpd.read_file(DATA_URL)

column_layer = pdk.Layer(
    "ColumnLayer",
    data=df,
    get_position=["LONGITUDE", "LATITUDE"],
    get_elevation="FUNDING_NUMERIC",
    get_fill_color=[256, 256, 0, 140],
    elevation_scale=.01,
    radius=10000,
    pickable=True,
    auto_highlight=True,
)

INITIAL_VIEW_STATE = pdk.ViewState(latitude=35, longitude=-100, zoom=4, max_zoom=16, pitch=45, bearing=0)

r = pdk.Deck(
    column_layer,
    initial_view_state=INITIAL_VIEW_STATE,
    map_style=pdk.map_styles.CARTO_ROAD,
)
r
```
I can't reproduce this in leafmap.deck, though. With no arguments, leafmap will plot the point observations as flat data (presumably using the geometry rather than the LATITUDE and LONGITUDE columns as in the pydeck example), but I cannot get it to plot the extruded points based on funding value like in the pydeck example. Here's one combination I have tried:
```python
import leafmap.deck as leafmap
import geopandas as gpd

m = leafmap.Map(center=(40, -100), zoom=3)

DATA_URL = 'https://data.source.coop/cboettig/conservation-policy/Inflation_Reduction_Act_Projects.geojson'
gdf = gpd.read_file(DATA_URL)

m.add_vector(
    gdf,
    get_position=["LONGITUDE", "LATITUDE"],
    get_elevation="FUNDING_NUMERIC",
    get_fill_color=[256, 256, 0, 140],
    elevation_scale=.01,
    radius=10000,
    pickable=True,
    auto_highlight=True,
)
m
```
which gives the error
```
Exception: 'GeoDataFrame' object has no attribute 'startswith'
```
| closed | 2024-06-12T21:06:36Z | 2024-06-13T14:50:10Z | https://github.com/opengeos/leafmap/issues/754 | [
"bug"
] | cboettig | 1 |
fbdesignpro/sweetviz | data-visualization | 171 | Unable to run report with pandas 2.1.2 | Hi,
I've managed to install and import the latest version of sweetviz (sweetviz-2.3.1-py3-none-any.whl). However, when trying to run the report I'm getting the following error:
AttributeError: 'DataFrame' object has no attribute 'iteritems'
I was able to apply a temporary workaround by doing "pd.DataFrame.iteritems = pd.DataFrame.items". However, I'm now getting another error, "None of ['index'] are in the columns", so I think some of the dependencies may not support the version of pandas I'm using. I'm using pandas 2.1.2.
Thanks | open | 2024-03-11T14:46:28Z | 2024-09-10T02:16:24Z | https://github.com/fbdesignpro/sweetviz/issues/171 | [] | CHALKEB | 1 |
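For reference, the temporary shim mentioned above can be written defensively so it is a no-op on pandas versions that still have `iteritems` (this only papers over the removal of `DataFrame.iteritems` in pandas 2.0; the later `None of ['index']` error likely needs a fix on the sweetviz side):

```python
import pandas as pd

# Restore the DataFrame.iteritems name (removed in pandas 2.0) as an
# alias of DataFrame.items, before importing/running sweetviz.
if not hasattr(pd.DataFrame, "iteritems"):
    pd.DataFrame.iteritems = pd.DataFrame.items

df = pd.DataFrame({"a": [1, 2], "b": [3, 4]})
assert [name for name, _ in df.iteritems()] == ["a", "b"]
```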
marcomusy/vedo | numpy | 995 | button clicks are triggered twice | Hi Marco,
Hope everything is great with you.
By the way the new update looks great.
Thank you very much for all the work on this library. I can't even tell you how much it is helping me.
With this new update I ran into what I believe is a bug: the button click is triggered twice when there is a callback for mouse clicks. If I set enable_picking=False for the mouse-click callback, the button click is triggered once, but then mouse clicks cannot pick anything on the screen.
Do you think this is a bug?
If not then I will try to create a workaround for my cases.
thanks in advance for your time and energy
Regards
```python
from vedo import Plotter, Mesh, dataurl, printc

def buttonfunc(evt, arg):
    print("timessss")

mesh = Mesh(dataurl + "magnolia.vtk").c("violet").flat()

plt = Plotter(axes=11)
bu = plt.add_button(
    buttonfunc,
    pos=(0.7, 0.05),     # x,y fraction from bottom left corner
    states=["click to hide", "click to show"],  # text for each state
    c=["w", "w"],        # font color for each state
    bc=["dg", "dv"],     # background color for each state
    font="courier",      # font type
    size=25,             # font size
    bold=True,           # bold font
    italic=False,        # non-italic font style
)

def _on_left_click_pressed(arg):
    print("left_click_times")
    return

plt.add_callback("LeftButtonPress", _on_left_click_pressed)
plt.show(mesh, __doc__).close()
```
| open | 2023-12-21T06:08:31Z | 2023-12-21T15:15:13Z | https://github.com/marcomusy/vedo/issues/995 | [
"bug"
] | smoothumut | 2 |
pallets-eco/flask-wtf | flask | 483 | Allow CSRF to be entirely disabled | Thanks to SameSite-by-default cookies, CSRF protection is pretty much redundant these days. However, if I strip out the call to CSRFProtect.init_app, Flask-WTF still generates and inserts a `csrf_token` field into forms, even if `WTF_CSRF_ENABLED` is set to False.
Would you accept a PR to make it so that a project that never calls `CSRFProtect.init_app` leaves `csrf`, `csrf_class`, and `csrf_context` as their empty defaults?
| open | 2021-11-04T01:59:15Z | 2023-07-25T20:22:29Z | https://github.com/pallets-eco/flask-wtf/issues/483 | [
"enhancement",
"csrf"
] | marksteward | 0 |
pydantic/pydantic | pydantic | 10,637 | Upgrade to `uv` instead of `pdm` | See references: https://github.com/astral-sh/uv
Also: https://github.com/pydantic/logfire/pull/480 | closed | 2024-10-16T20:24:39Z | 2024-11-08T16:18:33Z | https://github.com/pydantic/pydantic/issues/10637 | [
"help wanted",
"good first issue"
] | sydney-runkle | 6 |
microsoft/nni | data-science | 5,723 | Unsupported Layer nn.TransformerEncoderLayer | **Describe the bug**:
**Environment**:
- NNI version: 3.0
- Training service (local|remote|pai|aml|etc): local
- Python version: 3.10
- PyTorch version: 1.13
- Cpu or cuda version: Cuda/Cpu
**Reproduce the problem**
- How to reproduce: It seems the class [`TransformerEncoderLayer`](https://pytorch.org/docs/stable/generated/torch.nn.TransformerEncoderLayer.html) is not currently supported by NNI. I am attempting to prune the network [RepNet](https://github.com/materight/RepNet-pytorch/tree/main/repnet). Can someone help add support for it? | open | 2023-12-12T21:56:24Z | 2023-12-12T21:57:13Z | https://github.com/microsoft/nni/issues/5723 | [] | saeedashrraf | 0 |
marshmallow-code/marshmallow-sqlalchemy | sqlalchemy | 115 | Loading or validating nested objects fails when their ids are dump_only or hidden | My declarative database model looks like this:
```
class X(db.Model):
    id = db.Column(db.String(128), primary_key=True)
    yref = db.relationship('Y', backref='x', uselist=False, lazy='joined')

    def __init__(self, myid):
        self.id = myid

class Y(db.Model):
    id = db.Column(db.String(128), db.ForeignKey('x.id'), primary_key=True)
    value = db.Column('value', db.SmallInteger, nullable=False, default=0)

    def __init__(self, x_id, value=0):
        self.id = x_id
        self.value = value
```
I have two schemas like this:
```
class YSchema(ma.ModelSchema):
    value = fields.Integer()

    class Meta:
        model = models.YFields
        fields = ('value')

class XSchema(ma.ModelSchema):
    id = fields.String(dump_only=True)
    y = fields.Nested(YSchema, attribute='yref', many=False)

    class Meta:
        model = models.X
        fields = ('id', 'y')
```
When I use jsonify, I get output like this:
```
{
  "id": "X874",
  "y": {
    "value": 0
  }
}
```
Which is exactly what I want, but when I then try to modify y.value using the same json or try to validate the input like this:
```
result = models.x.query.filter_by(id=xid).first()
xschema.load(request.get_json(), instance=result)
xschema.validate(request.get_json())
```
It always results in this error:
```
Traceback (most recent call last):
File "/projectpath/lib/python3.4/site-packages/flask/app.py", line 1997, in __call__
return self.wsgi_app(environ, start_response)
File "/projectpath/lib/python3.4/site-packages/flask/app.py", line 1985, in wsgi_app
response = self.handle_exception(e)
File "/projectpath/lib/python3.4/site-packages/flask_restful/__init__.py", line 273, in error_router
return original_handler(e)
File "/projectpath/lib/python3.4/site-packages/flask/app.py", line 1540, in handle_exception
reraise(exc_type, exc_value, tb)
File "/projectpath/lib/python3.4/site-packages/flask/_compat.py", line 32, in reraise
raise value.with_traceback(tb)
File "/projectpath/lib/python3.4/site-packages/flask/app.py", line 1982, in wsgi_app
response = self.full_dispatch_request()
File "/projectpath/lib/python3.4/site-packages/flask/app.py", line 1614, in full_dispatch_request
rv = self.handle_user_exception(e)
File "/projectpath/lib/python3.4/site-packages/flask_restful/__init__.py", line 273, in error_router
return original_handler(e)
File "/projectpath/lib/python3.4/site-packages/flask/app.py", line 1517, in handle_user_exception
reraise(exc_type, exc_value, tb)
File "/projectpath/lib/python3.4/site-packages/flask/_compat.py", line 32, in reraise
raise value.with_traceback(tb)
File "/projectpath/lib/python3.4/site-packages/flask/app.py", line 1612, in full_dispatch_request
rv = self.dispatch_request()
File "/projectpath/lib/python3.4/site-packages/flask/app.py", line 1598, in dispatch_request
return self.view_functions[rule.endpoint](**req.view_args)
File "/projectpath/lib/python3.4/site-packages/flask_restful/__init__.py", line 480, in wrapper
resp = resource(*args, **kwargs)
File "/projectpath/lib/python3.4/site-packages/flask/views.py", line 84, in view
return self.dispatch_request(*args, **kwargs)
File "/projectpath/lib/python3.4/site-packages/flask_restful/__init__.py", line 595, in dispatch_request
resp = meth(*args, **kwargs)
File "/projectpath/project/project/views_api.py", line 36, in put
print(device_schema.validate(request.get_json()))
File "/projectpath/lib/python3.4/site-packages/marshmallow_sqlalchemy/schema.py", line 194, in validate
return super(ModelSchema, self).validate(data, *args, **kwargs)
File "/projectpath/lib/python3.4/site-packages/marshmallow/schema.py", line 620, in validate
_, errors = self._do_load(data, many, partial=partial, postprocess=False)
File "/projectpath/lib/python3.4/site-packages/marshmallow/schema.py", line 660, in _do_load
index_errors=self.opts.index_errors,
File "/projectpath/lib/python3.4/site-packages/marshmallow/marshalling.py", line 295, in deserialize
index=(index if index_errors else None)
File "/projectpath/lib/python3.4/site-packages/marshmallow/marshalling.py", line 68, in call_and_store
value = getter_func(data)
File "/projectpath/lib/python3.4/site-packages/marshmallow/marshalling.py", line 288, in <lambda>
data
File "/projectpath/lib/python3.4/site-packages/marshmallow/fields.py", line 265, in deserialize
output = self._deserialize(value, attr, data)
File "/projectpath/lib/python3.4/site-packages/marshmallow/fields.py", line 465, in _deserialize
data, errors = self.schema.load(value)
File "/projectpath/lib/python3.4/site-packages/marshmallow_sqlalchemy/schema.py", line 186, in load
ret = super(ModelSchema, self).load(data, *args, **kwargs)
File "/projectpath/lib/python3.4/site-packages/marshmallow/schema.py", line 580, in load
result, errors = self._do_load(data, many, partial=partial, postprocess=True)
File "/projectpath/lib/python3.4/site-packages/marshmallow/schema.py", line 685, in _do_load
original_data=data)
File "/projectpath/lib/python3.4/site-packages/marshmallow/schema.py", line 855, in _invoke_load_processors
data=data, many=many, original_data=original_data)
File "/projectpath/lib/python3.4/site-packages/marshmallow/schema.py", line 957, in _invoke_processors
data = utils.if_none(processor(data), data)
File "/projectpath/lib/python3.4/site-packages/marshmallow_sqlalchemy/schema.py", line 174, in make_instance
return self.opts.model(**data)
TypeError: __init__() missing 1 required positional argument: 'x_id'
```
The same also happens when I do send and expose y.id but make it dump_only (I don't want people to ever change it); it only works when it can be changed. I've poked around with the debugger and saw that the id just gets filtered out in the last steps, never reaching the constructor.
Directly (without Marshmallow) writing to the nested Y works just fine, like this for example:
```
result.yref.value = 93
db.session.merge(result)
db.session.commit()
```
Is there any way to do this that I'm missing, or is this simply not possible right now? | closed | 2017-07-09T05:45:16Z | 2025-01-12T05:04:20Z | https://github.com/marshmallow-code/marshmallow-sqlalchemy/issues/115 | [] | kshade | 2 |
AirtestProject/Airtest | automation | 290 | iOS 10: start_app reports an error; the IDE can connect to the phone and mirror its screen in real time, but running the script fails |
```
File "/Users/wjjn3033/dev/idestable/airtest/airtest/core/api.py", line 146, in start_app
File "/Applications/AirtestIDE.app/Contents/MacOS/airtest/core/ios/ios.py", line 264, in start_app
  self.driver.session(package)
File "/Users/wjjn3033/dev/idestable/venv_ide_qt511/lib/python3.6/site-packages/wda/__init__.py", line 280, in session
File "/Users/wjjn3033/dev/idestable/venv_ide_qt511/lib/python3.6/site-packages/wda/__init__.py", line 324, in __init__
File "/Users/wjjn3033/dev/idestable/venv_ide_qt511/lib/python3.6/site-packages/wda/__init__.py", line 101, in fetch
File "/Users/wjjn3033/dev/idestable/venv_ide_qt511/lib/python3.6/site-packages/wda/__init__.py", line 107, in _fetch_no_alert
File "/Users/wjjn3033/dev/idestable/venv_ide_qt511/lib/python3.6/site-packages/wda/__init__.py", line 75, in httpdo
File "/Users/wjjn3033/dev/idestable/venv_ide_qt511/lib/python3.6/site-packages/requests/api.py", line 49, in request
File "/Users/wjjn3033/dev/idestable/venv_ide_qt511/lib/python3.6/site-packages/requests/sessions.py", line 461, in request
File "/Users/wjjn3033/dev/idestable/venv_ide_qt511/lib/python3.6/site-packages/requests/sessions.py", line 573, in send
File "/Users/wjjn3033/dev/idestable/venv_ide_qt511/lib/python3.6/site-packages/requests/adapters.py", line 415, in send
requests.exceptions.ConnectionError: ('Connection aborted.', ConnectionResetError(54, 'Connection reset by peer'))
```
| open | 2019-03-04T06:29:09Z | 2019-07-01T11:28:44Z | https://github.com/AirtestProject/Airtest/issues/290 | [] | StevenXUzw | 2 |
iperov/DeepFaceLab | machine-learning | 5,562 | RTX 5000 GPUs supported? | Hi, I recently installed this on Linux and it runs fine, but I don't seem to get GPU support; it's crunching on CPU only. Is there anything I can do to get my RTX 5000s to work with this, or is it not supported? I tried to find some info but could not find anything about this | open | 2022-09-15T12:07:01Z | 2023-06-08T23:00:48Z | https://github.com/iperov/DeepFaceLab/issues/5562 | [] | 132nd-Entropy | 1 |
scikit-learn/scikit-learn | machine-learning | 30,909 | Improve `pos_label` switching for metrics | Supercedes #26758
Switching `pos_label` for metrics involves some manipulation of `predict_proba` (switching which column you pass) and `decision_function` (for binary, multiplying by -1), since you must pass the values for the positive class.
In discussions in #26758 we thought of two options:
* Add an example demonstrating what you need to do when switching `pos_label`
* Expose the (currently private) functions [`_process_decision_function`](https://github.com/scikit-learn/scikit-learn/blob/5eb676ac9afd4a5d90cdda198d174c2c8d2da226/sklearn/utils/_response.py#L76) and [`_process_predict_proba`](https://github.com/scikit-learn/scikit-learn/blob/5eb676ac9afd4a5d90cdda198d174c2c8d2da226/sklearn/utils/_response.py#L16)
This is a RFC to discuss if we prefer one, or both options.
cc @glemaitre and maybe @ogrisel ? | open | 2025-02-27T06:45:33Z | 2025-02-28T11:08:02Z | https://github.com/scikit-learn/scikit-learn/issues/30909 | [
"RFC"
] | lucyleeow | 5 |
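For context, the manipulation under discussion amounts to roughly the following — a simplified sketch of what the two private helpers do, with made-up names, not their actual implementations:

```python
import numpy as np

def select_proba_column(proba, classes, pos_label):
    # Pick the predict_proba column that corresponds to pos_label.
    col = np.flatnonzero(np.asarray(classes) == pos_label)[0]
    return proba[:, col]

def orient_decision_scores(scores, classes, pos_label):
    # Binary decision_function scores are relative to classes[1];
    # flip their sign when pos_label is the other class.
    scores = np.asarray(scores)
    return -scores if pos_label == classes[0] else scores

proba = np.array([[0.9, 0.1], [0.2, 0.8]])
classes = np.array([0, 1])
assert np.allclose(select_proba_column(proba, classes, pos_label=0), [0.9, 0.2])
assert np.allclose(orient_decision_scores([1.5, -2.0], classes, pos_label=0), [-1.5, 2.0])
```

Exposing something like this publicly would save users from re-deriving these two steps by hand.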
TracecatHQ/tracecat | pydantic | 56 | Batched S3 gzipped JSON reader | **Is your feature request related to a problem? Please describe.**
gzipped JSON files are still widely used to store security logs. For example:
- AWS CloudTrail logs in S3
- Okta System Logs
- GitHub streaming logs
**Describe the solution you'd like**
A function that takes an S3 prefix and returns a list of dicts (both unnested via stringification and nested).
**Describe alternatives you've considered**
For larger JSON or NDJSON files, we should have an external blob store and use references to serialize/deserialize during a flow run.
**Prior art**
- https://github.com/TracecatHQ/hunts/blob/main/notebooks/aws_flaws.ipynb
- https://docs.runreveal.com/Sources/S3-Sources
**Other considerations**
- There exist log data sources (e.g. Cloudflare audit logs) that can be accessed via S3 (if configured e.g. Cloudflare logpush) and via a REST API | closed | 2024-04-17T21:12:54Z | 2024-04-20T15:00:49Z | https://github.com/TracecatHQ/tracecat/issues/56 | [
"enhancement"
] | topher-lo | 0 |
TencentARC/GFPGAN | deep-learning | 391 | Why is the effect of processing photos locally not as good as online? | Why is the effect of processing photos locally not as good as online?
<img width="559" alt="image" src="https://github.com/TencentARC/GFPGAN/assets/18223385/689dea45-74f8-4cb2-ac7c-c04ae66eae44">
online:
https://replicate.com/xinntao/gfpgan/
<img width="257" alt="image" src="https://github.com/TencentARC/GFPGAN/assets/18223385/1e3248b1-2dac-4ad7-b837-9477300fced9">
why? | closed | 2023-06-10T16:25:16Z | 2023-06-13T04:36:26Z | https://github.com/TencentARC/GFPGAN/issues/391 | [] | hktalent | 7 |
PokeAPI/pokeapi | graphql | 1,179 | [BUG] Parameters should be case insensitive | Steps to Reproduce:
1. Submit a request with capitals. eg. https://pokeapi.co/api/v2/pokemon/Iron-Crown
2. Receive no reply (404)
Requests with capitals should still work. A very easy fix is to just lowercase the parameters.
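A minimal sketch of the suggested normalization (a hypothetical helper, not PokéAPI's actual code):

```python
def normalize_resource_name(raw: str) -> str:
    """Lowercase and trim a user-supplied resource name so lookups are case-insensitive."""
    return raw.strip().lower()

# "Iron-Crown" and "iron-crown" now resolve to the same lookup key
print(normalize_resource_name("Iron-Crown"))  # iron-crown
```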
| open | 2024-12-25T22:18:00Z | 2025-01-26T16:40:39Z | https://github.com/PokeAPI/pokeapi/issues/1179 | [] | pedwards95 | 3 |
PokemonGoF/PokemonGo-Bot | automation | 6,329 | I | I found a great File Transfer to share apps, music and files easily and fast! Download it on Google Play:
https://play.google.com/store/apps/details?id=share.file.transfer | open | 2025-01-27T07:09:27Z | 2025-01-27T07:09:27Z | https://github.com/PokemonGoF/PokemonGo-Bot/issues/6329 | [] | Kingdredakid5 | 0 |
huggingface/datasets | deep-learning | 6,614 | `datasets/downloads` cleanup tool | ### Feature request
Splitting off https://github.com/huggingface/huggingface_hub/issues/1997 - currently `huggingface-cli delete-cache` doesn't take care of cleaning `datasets` temp files
e.g. I discovered millions of files under the `datasets/downloads` cache, and I had to do:
```
sudo find /data/huggingface/datasets/downloads -type f -mtime +3 -exec rm {} \+
sudo find /data/huggingface/datasets/downloads -type d -empty -delete
```
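For reference, a rough Python equivalent of the two shell commands above (a sketch, not official tooling; the age threshold is an example):

```python
import time
from pathlib import Path

def prune_downloads(root: str, max_age_days: float = 3.0) -> int:
    """Delete files older than max_age_days under root, then drop empty directories."""
    cutoff = time.time() - max_age_days * 86400
    root_path = Path(root)
    removed = 0
    for p in root_path.rglob("*"):
        if p.is_file() and p.stat().st_mtime < cutoff:
            p.unlink()
            removed += 1
    # deepest directories first, so emptied parents are removed too
    for d in sorted((d for d in root_path.rglob("*") if d.is_dir()), reverse=True):
        if not any(d.iterdir()):
            d.rmdir()
    return removed

# prune_downloads("/data/huggingface/datasets/downloads")
```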
could the cleanup be integrated into `huggingface-cli` or a different tool provided to keep the folders tidy and not consume inodes and space
e.g. there were tens of thousands of `.lock` files - I don't know why they never get removed - lock files should be temporary for the duration of the operation requiring the lock and not remain after the operation finished, IMHO.
Also I think one should be able to nuke `datasets/downloads` w/o hurting the cache, but I think there are some datasets that rely on files extracted under this dir - or at least they did in the past - which is very difficult to manage since one has no idea what is safe to delete and what not.
Thank you
@Wauplin (requested to be tagged) | open | 2024-01-24T18:52:10Z | 2024-01-24T18:55:09Z | https://github.com/huggingface/datasets/issues/6614 | [
"enhancement"
] | stas00 | 0 |
ymcui/Chinese-LLaMA-Alpaca | nlp | 138 | Why does the tokenizer of the model after continued pre-training have one fewer pad token than after fine-tuning? | During pre-training, doesn't the data also need padding? Which pad token is used? | closed | 2023-04-12T10:30:14Z | 2023-04-18T06:54:56Z | https://github.com/ymcui/Chinese-LLaMA-Alpaca/issues/138 | [] | Porraio | 2 |
quantumlib/Cirq | api | 7,086 | Suggest eliminating labels pr/lgtm-with-nit and pr/needs-rebase | **Description of the issue**
The PR labels `pr/lgtm-with-nit` and `pr/needs-rebase` have hardly been used. The first was used 3 times 5 years ago; the latter was used once, 3 years ago. Purely as an attempt to reduce the number of labels in the project, I propose we eliminate these two.
<div align="center">
<img width="250" alt="Image" src="https://github.com/user-attachments/assets/e11baacc-36d9-444a-90dc-a64a2598c615" />
</div> | closed | 2025-02-24T00:05:23Z | 2025-03-19T18:20:25Z | https://github.com/quantumlib/Cirq/issues/7086 | [
"kind/health",
"triage/accepted"
] | mhucka | 1 |
sunscrapers/djoser | rest-api | 437 | Make init fails | https://github.com/sunscrapers/djoser/blob/80769a342d10c39ef9c39423d12b53fd72bb7c97/Pipfile#L37-L42
What do these lines mean? | closed | 2019-11-06T21:07:06Z | 2020-01-27T09:34:55Z | https://github.com/sunscrapers/djoser/issues/437 | [
"bug"
] | ozeranskii | 5 |
RobertCraigie/prisma-client-py | asyncio | 705 | Add fail-safe for downloading binaries. | ## Problem
Whenever I run the prisma command after installation, it downloads binaries. If I press CTRL+C partway through the download process, I can no longer download the binaries in the future unless there is a new update or I install NodeJS. It always frustrates me when this happens, because I have to wait until a future release to install them.
## Suggested solution
I suggest adding a fail-safe for downloading the binaries, because if the process is cancelled whilst the binaries are being downloaded, it leads to a state where the binaries haven't actually been downloaded.
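One common shape for such a fail-safe (a generic sketch under my own assumptions, not Prisma Client Python's actual downloader) is to write to a temporary file and rename it into place only on success, so an interrupted download never masquerades as a finished one:

```python
import os
import tempfile

def atomic_write(dest: str, data: bytes) -> None:
    """Write bytes to dest atomically: a CTRL+C mid-write leaves no half-finished dest."""
    fd, tmp = tempfile.mkstemp(dir=os.path.dirname(dest) or ".")
    try:
        with os.fdopen(fd, "wb") as f:
            f.write(data)
        os.replace(tmp, dest)  # atomic rename; dest appears only when complete
    except BaseException:
        os.remove(tmp)  # discard the partial file on interruption or error
        raise
```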
## Alternatives
I haven't currently considered anything else, sorry!
## Additional context
Issue #665. | closed | 2023-02-23T03:38:46Z | 2023-06-29T11:28:25Z | https://github.com/RobertCraigie/prisma-client-py/issues/705 | [
"bug/1-repro-available",
"kind/bug",
"level/intermediate",
"priority/medium",
"topic: binaries"
] | L-mbda | 7 |
Yorko/mlcourse.ai | plotly | 743 | AI learning | closed | 2023-04-23T07:20:16Z | 2023-05-03T09:07:35Z | https://github.com/Yorko/mlcourse.ai/issues/743 | [
"invalid"
] | manishikuma | 0 | |
microsoft/qlib | deep-learning | 1,143 | DoubleEnsemble doesn't Honor sub_weights When Computing Loss Values | Hello, I notice in `fit()`, `loss_values` is computed based on the average prediction of trained sub-models (i.e. `pred_ensemble`):
https://github.com/microsoft/qlib/blob/a87b02619aee4aff9c5aba23a66e614106050d75/qlib/contrib/model/double_ensemble.py#L90-L91
However, in `predict()`, the final output is the weighted sum of predictions of sub-models:
https://github.com/microsoft/qlib/blob/a87b02619aee4aff9c5aba23a66e614106050d75/qlib/contrib/model/double_ensemble.py#L245-L248
In the paper https://arxiv.org/abs/2010.01265, the authors don't use weights for sub-models when making predictions. In fact, it is mentioned that
> We note it is possible to set a weight for each sub-model or develop a stacked generalization ensemble (aka stacking). In general, a proper way to combine the sub-models can further improve the performance and we leave it as a future research direction.
My questions are:
1. should we honor the weights when computing `loss_values`?
2. without removing the flexibility of using custom `sub_weights`, is it better to use equal weights (i.e. `[1/6]*6`) as default values? https://github.com/microsoft/qlib/blob/a87b02619aee4aff9c5aba23a66e614106050d75/qlib/contrib/model/double_ensemble.py#L47
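For illustration, a small numeric sketch (not qlib code) of the two combination schemes in question:

```python
import numpy as np

def ensemble_predict(sub_preds, sub_weights=None):
    """Combine sub-model predictions: plain mean if no weights, weighted sum otherwise."""
    sub_preds = np.asarray(sub_preds, dtype=float)
    if sub_weights is None:
        return sub_preds.mean(axis=0)            # how fit() computes pred_ensemble
    w = np.asarray(sub_weights, dtype=float)
    return (w[:, None] * sub_preds).sum(axis=0)  # how predict() combines sub-models

preds = [[1.0, 2.0], [3.0, 4.0]]
print(ensemble_predict(preds))                # [2. 3.]
print(ensemble_predict(preds, [0.25, 0.75]))  # [2.5 3.5]
```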
| closed | 2022-06-21T20:50:52Z | 2022-10-21T15:05:43Z | https://github.com/microsoft/qlib/issues/1143 | [
"stale"
] | bridgream | 3 |
pytest-dev/pytest-qt | pytest | 206 | Issues when runnning tests with Xvfb | Hi,
I am trying to run some tests with pytest-qt on Travis, and to reproduce the same environment at my workstation I am using pytest-xvfb.
The following test works when running on my computer without Xvfb, but something very similar fails on Travis and also on my computer when using Xvfb.
```python
from PyQt5.QtWidgets import QLineEdit

def test_focus(qtbot):
    line_edit = QLineEdit()
    qtbot.addWidget(line_edit)
    with qtbot.waitExposed(line_edit):
        line_edit.show()
    line_edit.setFocus()
    qtbot.waitUntil(lambda: line_edit.hasFocus())
    assert line_edit.hasFocus()
```
I tried to follow the same solution as in this issue: https://github.com/pytest-dev/pytest-qt/issues/160 but no luck so far. | closed | 2018-04-27T01:15:31Z | 2021-06-03T13:56:28Z | https://github.com/pytest-dev/pytest-qt/issues/206 | [
"question :question:"
] | hhslepicka | 7 |
PokeAPI/pokeapi | api | 983 | Official Artwork is in kebab case |
The returned JSON from the PokeAPI uses kebab-case instead of snake_case.
This means that in order to access the official artwork, the object notation looks like this:
`data.sprites.other["official-artwork"].front_default`
Ideally, it should not use kebab case:
`data.sprites.other.official_artwork.front_default`
I understand that this change would absolutely break everything. It's a minor annoyance at best.
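In the meantime, a client-side workaround can rename the keys after decoding (a hedged sketch, not part of PokéAPI):

```python
def kebab_to_snake(obj):
    """Recursively rename kebab-case keys to snake_case in a decoded JSON payload."""
    if isinstance(obj, dict):
        return {k.replace("-", "_"): kebab_to_snake(v) for k, v in obj.items()}
    if isinstance(obj, list):
        return [kebab_to_snake(v) for v in obj]
    return obj

sprites = {"other": {"official-artwork": {"front_default": "url"}}}
print(kebab_to_snake(sprites)["other"]["official_artwork"]["front_default"])  # url
```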
healthchecks/healthchecks | django | 101 | Improve "Ping expected but not received" messages in Log pages | The log pages are limited to show no more than 10 consecutive "Ping expected but not received" messages. Otherwise, in some cases, the log pages could get excessively large. For example, when the period is 1 minute, and the check has been down for a month, that would be 43'200 log entries.
To make the log page less confusing I'm thinking about the following:
* if there's 10 or less "Ping expected but not received" entries, show them all
* if there's more than 10, then show the first three, followed by an entry saying "Last message repeated X times"
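A hypothetical helper (not the actual healthchecks code) showing the proposed collapsing rule:

```python
def collapse_repeats(entries, keep=3, limit=10):
    """Show all entries up to `limit`; otherwise keep the first `keep` plus a summary line."""
    if len(entries) <= limit:
        return list(entries)
    repeated = len(entries) - keep
    return list(entries[:keep]) + [f"Last message repeated {repeated} times"]

misses = ["Ping expected but not received"] * 12
print(collapse_repeats(misses)[-1])  # Last message repeated 9 times
```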
| closed | 2016-12-01T13:21:14Z | 2017-06-29T11:44:34Z | https://github.com/healthchecks/healthchecks/issues/101 | [] | cuu508 | 1 |
chezou/tabula-py | pandas | 40 | AttributeError: 'module' object has no attribute 'read_pdf' | # Summary of your issue
When importing the read_pdf method from tabula-py using
`from tabula import read_pdf`
as the example demonstrated
It shows the following error message
`AttributeError: 'module' object has no attribute 'read_pdf'`
# Environment
anaconda python 2.1.12 + tabula 0.9.0
Write and check your environment.
- [ ] `python --version`: ? anaconda python 2.1.12
- [ ] `java -version`: ? java 1.8.0_111
- [ ] OS and it's version: ? windows 10
- [ ] Your PDF URL:
| closed | 2017-06-11T03:26:25Z | 2017-07-18T10:34:43Z | https://github.com/chezou/tabula-py/issues/40 | [] | zlqs1985 | 4 |
google-research/bert | nlp | 455 | module 'tensorflow.python.platform.flags' has no attribute 'mark_flag_as_required' | when running the run_squad.py train example from the 'doc' (main .md file):
Traceback (most recent call last):
File "../run_squad.py", line 1280, in <module>
flags.mark_flag_as_required("vocab_file")
AttributeError: module 'tensorflow.python.platform.flags' has no attribute 'mark_flag_as_required'
Ubuntu 16
Python 3.5
TF 0.12.0rc0
I ve searched fro root cause but did nt find one : found that :
https://github.com/tensorflow/models/issues/2777
Requirement already satisfied: tensorflow-gpu in /usr/local/lib/python3.5/dist-packages (0.12.0rc0)
Requirement already satisfied: numpy>=1.11.0 in /usr/local/lib/python3.5/dist-packages (from tensorflow-gpu) (1.11.2)
Requirement already satisfied: six>=1.10.0 in /usr/lib/python3/dist-packages (from tensorflow-gpu) (1.10.0)
Requirement already satisfied: protobuf==3.1.0 in /usr/local/lib/python3.5/dist-packages (from tensorflow-gpu) (3.1.0)
Requirement already satisfied: wheel>=0.26 in /usr/lib/python3/dist-packages (from tensorflow-gpu) (0.29.0)
Requirement already satisfied: setuptools in /usr/local/lib/python3.5/dist-packages (from protobuf==3.1.0->tensorflow-gpu) (30.3.0)
| closed | 2019-02-25T18:02:56Z | 2019-10-28T08:35:54Z | https://github.com/google-research/bert/issues/455 | [] | WilliamTambellini | 2 |
gradio-app/gradio | deep-learning | 10,012 | Can't upload any file from iPhone | ### Describe the bug
I have to upload a file as input to my ML pipeline. I use the gr.File component and it works fine from desktop, but it doesn't work at all from iPhone. I can't upload any type of file from the latter; they are greyed out regardless of the browser I use (I tested Safari and Chrome).
### Have you searched existing issues? 🔎
- [X] I have searched and found no existing issues
### Reproduction
```python
import gradio as gr
with gr.Blocks() as demo:
input_file = gr.File()
demo.launch(
favicon_path="favicon.ico",
show_api=False,
server_name="0.0.0.0", # Allow external connections
server_port=7860 # Default Gradio port
)
```
### Screenshot
_No response_
### Logs
_No response_
### System Info
```shell
Gradio Environment Information:
------------------------------
Operating System: Darwin
gradio version: 5.6.0
gradio_client version: 1.4.3
------------------------------------------------
gradio dependencies in your environment:
aiofiles: 23.2.1
anyio: 4.6.2.post1
audioop-lts is not installed.
fastapi: 0.115.5
ffmpy: 0.3.0
gradio-client==1.4.3 is not installed.
httpx: 0.27.2
huggingface-hub: 0.26.2
jinja2: 3.1.4
markupsafe: 2.1.5
numpy: 2.1.3
orjson: 3.10.11
packaging: 24.2
pandas: 2.2.3
pillow: 10.2.0
pydantic: 2.9.2
pydub: 0.25.1
python-multipart==0.0.12 is not installed.
pyyaml: 6.0.2
ruff: 0.7.3
safehttpx: 0.1.1
semantic-version: 2.10.0
starlette: 0.41.2
tomlkit==0.12.0 is not installed.
typer: 0.13.0
typing-extensions: 4.12.2
urllib3: 2.2.3
uvicorn: 0.32.0
authlib; extra == 'oauth' is not installed.
itsdangerous; extra == 'oauth' is not installed.
gradio_client dependencies in your environment:
fsspec: 2024.10.0
httpx: 0.27.2
huggingface-hub: 0.26.2
packaging: 24.2
typing-extensions: 4.12.2
websockets: 12.0
```
### Severity
Blocking usage of gradio | closed | 2024-11-21T17:45:24Z | 2024-12-05T18:04:57Z | https://github.com/gradio-app/gradio/issues/10012 | [
"bug"
] | virtualmartire | 1 |
ansible/ansible | python | 84,732 | copy module does not preserve file ownership | ### Summary
I just encountered the issue described in https://github.com/ansible/ansible/pull/81592, which was closed indicating it's a doc bug that has been fixed. However, looking at the latest docs (https://docs.ansible.com/ansible/latest/collections/ansible/builtin/copy_module.html), under "owner," it says:
When left unspecified, it uses the current user unless you are root, in which case it can preserve the previous ownership.
But it DOES NOT preserve the previous ownership if left unspecified (as illustrated in the issue). More accurate wording, imo, is simply:
When left unspecified, it uses the current user.
### Issue Type
Documentation Report
### Component Name
lib/ansible/modules/copy.py
### Ansible Version
```console
ansible --version
ansible [core 2.15.8]
config file = /home/rowagn/git/oracle-core/automation/ansible/client/ansible.cfg
configured module search path = ['/home/rowagn/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /sso/sfw/virtualenv/ansible2_15_8/lib/python3.9/site-packages/ansible
ansible collection location = /home/rowagn/.ansible/collections:/usr/share/ansible/collections
executable location = /sso/sfw/virtualenv/ansible2_15_8/bin/ansible
python version = 3.9.18 (main, Jul 18 2024, 11:58:42) [GCC 8.5.0 20210514 (Red Hat 8.5.0-22)] (/sso/sfw/virtualenv/ansible2_15_8/bin/python3)
jinja version = 3.1.2
libyaml = True
```
### Configuration
```console
ansible-config dump --only-changed
CONFIG_FILE() = /home/rowagn/git/oracle-core/automation/ansible/client/ansible.cfg
DEFAULT_EXECUTABLE(/home/rowagn/git/oracle-core/automation/ansible/client/ansible.cfg) = /etc/ansible-wrapper
DEFAULT_TIMEOUT(/home/rowagn/git/oracle-core/automation/ansible/client/ansible.cfg) = 20
EDITOR(env: EDITOR) = /bin/vim
PAGER(env: PAGER) = /bin/less
RETRY_FILES_ENABLED(/home/rowagn/git/oracle-core/automation/ansible/client/ansible.cfg) = False
```
### OS / Environment
cat /etc/redhat-release
Red Hat Enterprise Linux release 8.10 (Ootpa)
### Additional Information
If this doc were improved, it would prevent folks from running into the issue described in https://github.com/ansible/ansible/pull/81592 (i.e., where we try to preserve ownership of copied files and it does not work).
### Code of Conduct
- [x] I agree to follow the Ansible Code of Conduct | open | 2025-02-19T18:36:05Z | 2025-03-04T15:06:54Z | https://github.com/ansible/ansible/issues/84732 | [
"module",
"needs_verified",
"affects_2.15"
] | rwagnergit | 1 |
alteryx/featuretools | scikit-learn | 2,521 | Refactor `can_stack_primitive_on_inputs` helper function | - Refactoring `can_stack_primitive_on_inputs` would help make `DeepFeatureSynthesis` more readable | closed | 2023-03-15T00:58:00Z | 2023-08-02T20:01:05Z | https://github.com/alteryx/featuretools/issues/2521 | [
"refactor"
] | sbadithe | 0 |
tfranzel/drf-spectacular | rest-api | 1,123 | Sporatic uncaught exception: AssertionError: Schema generation REQUIRES a view instance. (Hint: you accessed `schema` from the view class rather than an instance.) | **Describe the bug**
I have a /openapi endpoint, and occasionally the API fails with this exception. This happens a few times a day, but most of the time the spec generates successfully.
```
Traceback (most recent call last):
File "/home/app/.cache/pypoetry/virtualenvs/recovery-platform-BwZRr09X-py3.11/lib/python3.11/site-packages/asgiref/sync.py", line 534, in thread_handler
raise exc_info[1]
File "/home/app/.cache/pypoetry/virtualenvs/recovery-platform-BwZRr09X-py3.11/lib/python3.11/site-packages/django/core/handlers/exception.py", line 42, in inner
response = await get_response(request)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/app/.cache/pypoetry/virtualenvs/recovery-platform-BwZRr09X-py3.11/lib/python3.11/site-packages/asgiref/sync.py", line 534, in thread_handler
raise exc_info[1]
File "/home/app/.cache/pypoetry/virtualenvs/recovery-platform-BwZRr09X-py3.11/lib/python3.11/site-packages/django/core/handlers/base.py", line 253, in _get_response_async
response = await wrapped_callback(
^^^^^^^^^^^^^^^^^^^^^^^
File "/home/app/.cache/pypoetry/virtualenvs/recovery-platform-BwZRr09X-py3.11/lib/python3.11/site-packages/asgiref/sync.py", line 479, in __call__
ret: _R = await loop.run_in_executor(
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/app/.cache/pypoetry/virtualenvs/recovery-platform-BwZRr09X-py3.11/lib/python3.11/site-packages/asgiref/current_thread_executor.py", line 40, in run
result = self.fn(*self.args, **self.kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/app/.cache/pypoetry/virtualenvs/recovery-platform-BwZRr09X-py3.11/lib/python3.11/site-packages/asgiref/sync.py", line 538, in thread_handler
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "/home/app/.cache/pypoetry/virtualenvs/recovery-platform-BwZRr09X-py3.11/lib/python3.11/site-packages/django/views/decorators/csrf.py", line 56, in wrapper_view
return view_func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/app/.cache/pypoetry/virtualenvs/recovery-platform-BwZRr09X-py3.11/lib/python3.11/site-packages/django/views/generic/base.py", line 104, in view
return self.dispatch(request, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/app/.cache/pypoetry/virtualenvs/recovery-platform-BwZRr09X-py3.11/lib/python3.11/site-packages/django/utils/decorators.py", line 46, in _wrapper
return bound_method(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/app/.cache/pypoetry/virtualenvs/recovery-platform-BwZRr09X-py3.11/lib/python3.11/site-packages/django/views/decorators/cache.py", line 40, in _cache_controlled
response = viewfunc(request, *args, **kw)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/app/.cache/pypoetry/virtualenvs/recovery-platform-BwZRr09X-py3.11/lib/python3.11/site-packages/rest_framework/views.py", line 509, in dispatch
response = self.handle_exception(exc)
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/app/.cache/pypoetry/virtualenvs/recovery-platform-BwZRr09X-py3.11/lib/python3.11/site-packages/rest_framework/views.py", line 469, in handle_exception
self.raise_uncaught_exception(exc)
File "/home/app/.cache/pypoetry/virtualenvs/recovery-platform-BwZRr09X-py3.11/lib/python3.11/site-packages/rest_framework/views.py", line 480, in raise_uncaught_exception
raise exc
File "/home/app/.cache/pypoetry/virtualenvs/recovery-platform-BwZRr09X-py3.11/lib/python3.11/site-packages/rest_framework/views.py", line 506, in dispatch
response = handler(request, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/app/.cache/pypoetry/virtualenvs/recovery-platform-BwZRr09X-py3.11/lib/python3.11/site-packages/drf_spectacular/views.py", line 83, in get
return self._get_schema_response(request)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/app/.cache/pypoetry/virtualenvs/recovery-platform-BwZRr09X-py3.11/lib/python3.11/site-packages/drf_spectacular/views.py", line 91, in _get_schema_response
data=generator.get_schema(request=request, public=self.serve_public),
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/app/.cache/pypoetry/virtualenvs/recovery-platform-BwZRr09X-py3.11/lib/python3.11/site-packages/drf_spectacular/generators.py", line 268, in get_schema
paths=self.parse(request, public),
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/app/.cache/pypoetry/virtualenvs/recovery-platform-BwZRr09X-py3.11/lib/python3.11/site-packages/drf_spectacular/generators.py", line 239, in parse
operation = view.schema.get_operation(
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/app/.cache/pypoetry/virtualenvs/recovery-platform-BwZRr09X-py3.11/lib/python3.11/site-packages/drf_spectacular/utils.py", line 422, in get_operation
return super().get_operation(path, path_regex, path_prefix, method, registry)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/app/.cache/pypoetry/virtualenvs/recovery-platform-BwZRr09X-py3.11/lib/python3.11/site-packages/drf_spectacular/openapi.py", line 78, in get_operation
parameters = self._get_parameters()
^^^^^^^^^^^^^^^^^^^^^^
File "/home/app/.cache/pypoetry/virtualenvs/recovery-platform-BwZRr09X-py3.11/lib/python3.11/site-packages/drf_spectacular/openapi.py", line 248, in _get_parameters
**dict_helper(self._get_format_parameters()),
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/app/.cache/pypoetry/virtualenvs/recovery-platform-BwZRr09X-py3.11/lib/python3.11/site-packages/drf_spectacular/openapi.py", line 225, in _get_format_parameters
formats = self.map_renderers('format')
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/app/.cache/pypoetry/virtualenvs/recovery-platform-BwZRr09X-py3.11/lib/python3.11/site-packages/drf_spectacular/openapi.py", line 1153, in map_renderers
for r in self.view.get_renderers()
^^^^^^^^^
File "/home/app/.cache/pypoetry/virtualenvs/recovery-platform-BwZRr09X-py3.11/lib/python3.11/site-packages/rest_framework/schemas/inspectors.py", line 58, in view
assert self._view is not None, (
^^^^^^^^^^^^^^^^^^^^^^
AssertionError: Schema generation REQUIRES a view instance. (Hint: you accessed `schema` from the view class rather than an instance.)
```
**To Reproduce**
Cannot be reproduced locally, I just have production logs of it happening.
**Expected behavior**
API spec generates successfully every time
| open | 2023-12-07T02:20:48Z | 2025-02-06T16:16:08Z | https://github.com/tfranzel/drf-spectacular/issues/1123 | [] | bradydean | 9 |
microsoft/qlib | deep-learning | 1,781 | How to train futures market data? | open | 2024-04-29T15:48:06Z | 2024-04-29T15:48:06Z | https://github.com/microsoft/qlib/issues/1781 | [
"question"
] | samghwww | 0 | |
keras-rl/keras-rl | tensorflow | 8 | Add Python 3 compatibility | Just changed a print function-call and use range instead of xrange to make keras-rl run on both python 2.7 and 3.5 for me. See pull requests https://github.com/matthiasplappert/keras-rl/pull/4, https://github.com/matthiasplappert/keras-rl/pull/5, https://github.com/matthiasplappert/keras-rl/pull/6, https://github.com/matthiasplappert/keras-rl/pull/7.
| closed | 2016-08-02T19:16:35Z | 2016-08-03T14:34:26Z | https://github.com/keras-rl/keras-rl/issues/8 | [] | jorahn | 1 |
davidsandberg/facenet | tensorflow | 621 | Retrain the checkpoint | How do I retrain this [checkpoint](https://drive.google.com/file/d/0B5MzpY9kBtDVZ2RpVDYwWmxoSUk/edit) for better accuracy with Asian faces? | open | 2018-01-18T14:28:16Z | 2018-05-04T08:56:45Z | https://github.com/davidsandberg/facenet/issues/621 | [] | Zumbalamambo | 3 |
sktime/pytorch-forecasting | pandas | 834 | Understanding TFT output in raw prediction mode | Hi,
First, I really appreciate your work on this wonderful package.
I am working with the TFT model and trying to predict in raw mode on one sample (a dataframe which represents one sample, e.g. 15 rows = 15 steps), like this:
```
new_raw_predictions, new_x = best_tft.predict(df[:15], mode="raw", return_x=True)
```
The new_raw_predictions shape is (1, decoder_length, n_quantiles). I understand the quantile concept but still don't know how to correctly get the final prediction as **_one number_**. So should I:
- take the **average of quantiles** and get the **first step** of the decoder, like this:
```
torch.mean(new_raw_predictions['prediction'], dim=2).squeeze()[0]
```
- or take directly the **50 quantile** value:
```
new_raw_predictions['prediction'].squeeze()[0, n_quantiles//2]
```
Any help will be highly appreciated.
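For what it's worth, a tiny plain-Python illustration of the two options (made-up numbers, same indexing as the torch code above; with an odd, symmetric quantile list the middle index is the 50% quantile, which is the conventional point forecast):

```python
# fake raw prediction for one sample: decoder_length=2, n_quantiles=7
raw = [[0.0, 1.0, 2.0, 3.0, 4.0, 5.0, 6.0],
       [1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0]]

n_quantiles = len(raw[0])
first_step = raw[0]
avg_of_quantiles = sum(first_step) / n_quantiles  # option 1: mean over quantiles
median_quantile = first_step[n_quantiles // 2]    # option 2: take the 50% quantile
print(avg_of_quantiles, median_quantile)  # 3.0 3.0
```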
| closed | 2022-01-18T17:37:33Z | 2022-01-18T18:05:50Z | https://github.com/sktime/pytorch-forecasting/issues/834 | [] | ngokhoa96 | 1 |
521xueweihan/HelloGitHub | python | 2,211 | [Tool self-recommendation] A small tool that converts links into beautiful share images | ## Project recommendation
- Project URL: https://github.com/one-tab-group/bookmark.style
- Category: Vue
- Project description:
I recently published [bookmark.style](https://bookmark.style/), a tool I have long used myself that converts any link into a share image, on [ProductHunt](https://www.producthunt.com/posts/bookmark-style)
- Why it's recommended: bookmark.style is suitable for developers, creators, and WeChat official account writers; it can beautify your links and make them "speak for themselves"
- Screenshot:

| closed | 2022-05-18T06:45:23Z | 2022-05-25T10:59:05Z | https://github.com/521xueweihan/HelloGitHub/issues/2211 | [
"JavaScript 项目"
] | xiaoluoboding | 1 |
TvoroG/pytest-lazy-fixture | pytest | 52 | pytest-lazy-fixture breaks with Traits in factoryboy 3.2.0: 'Maybe' object has no attribute 'call' | After updating factoryboy to `3.2.0` my tests using `lazy_fixture` with fixtures that use `Trait` (in result using `Maybe`) raise `AttributeError: 'Maybe' object has no attribute 'call'`.
```
python_version = "3.8"
django = "~=3.0"
factory-boy = "~=3.2.0"
pytest = "~=5.4.3"
pytest-factoryboy = "~=2.1.0"
pytest-lazy-fixture = "~=0.6.3"
```
Attached is a full traceback from failed test case.
```
request = <FixtureRequest for <Function test_success>>
def fill(request):
item = request._pyfuncitem
fixturenames = getattr(item, "fixturenames", None)
if fixturenames is None:
fixturenames = request.fixturenames
if hasattr(item, 'callspec'):
for param, val in sorted_by_dependency(item.callspec.params, fixturenames):
if val is not None and is_lazy_fixture(val):
item.callspec.params[param] = request.getfixturevalue(val.name)
elif param not in item.funcargs:
item.funcargs[param] = request.getfixturevalue(param)
> _fillfixtures()
/home/django/venv/lib/python3.8/site-packages/pytest_lazyfixture.py:39:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
/home/django/venv/lib/python3.8/site-packages/pytest_factoryboy/fixture.py:188: in model_fixture
factoryboy_request.evaluate(request)
/home/django/venv/lib/python3.8/site-packages/pytest_factoryboy/plugin.py:83: in evaluate
self.execute(request, function, deferred)
/home/django/venv/lib/python3.8/site-packages/pytest_factoryboy/plugin.py:65: in execute
self.results[model][attr] = function(request)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
request = <SubRequest 'user' for <Function test_success>>
def deferred(request):
> declaration.call(instance, step, context)
E AttributeError: 'Maybe' object has no attribute 'call'
/home/django/venv/lib/python3.8/site-packages/pytest_factoryboy/fixture.py:294: AttributeError
```
Seems like it could be a problem in `pytest_factoryboy` itself but I've seen it raised only for tests using `lazy_fixture`. | open | 2021-05-13T12:05:29Z | 2021-06-24T15:41:47Z | https://github.com/TvoroG/pytest-lazy-fixture/issues/52 | [] | radekwlsk | 3 |
wkentaro/labelme | deep-learning | 868 | what is the way of using image-level flags | Hi, I couldn't find any example regarding "flags" usage. Is there any related example like labelme2voc for flags? What is the best way to use flags for training? Thanks. | closed | 2021-05-17T02:57:37Z | 2021-06-10T09:27:00Z | https://github.com/wkentaro/labelme/issues/868 | [] | neouyghur | 4 |
aio-libs/aiomysql | asyncio | 362 | two cursors in two async with but cursors id equ | On Ubuntu 18.04.1 LTS, python 3.6.5 aiomysql 0.0.19
Hello
I use a pool for testing; the example is like this:
```python
import asyncio

# `pool` is assumed to be created elsewhere with aiomysql.create_pool(...)

async def test():
    async with pool.acquire() as conn:
        async with conn.cursor() as cur:
            print('id(cur)=', id(cur))
            row = await cur.execute("select device_id from r_sms_upload_max_id limit 1")
            await conn.commit()

async def test1():
    await test()

async def test2():
    await test()

async def test3():
    await test1()
    await test2()

loop = asyncio.get_event_loop()
asyncio.ensure_future(test3(), loop=loop)
loop.run_forever()
```
The two id(cur) values are equal. Maybe `await cur.close()` does not really close the cursor, and a new acquire() returns the same cursor.
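One possible explanation for the matching ids (a general CPython behavior, not specific to aiomysql): `id()` values can be reused once an object is garbage-collected, so two cursors that never coexist may report the same id without being the same live object:

```python
class FakeCursor:
    pass

first_id = id(FakeCursor())   # the instance is freed right after id() returns
second_id = id(FakeCursor())  # CPython may reuse the freed slot for the new instance
print(first_id == second_id)  # often True in CPython, though not guaranteed
```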
| closed | 2018-12-05T09:20:04Z | 2018-12-27T08:12:45Z | https://github.com/aio-libs/aiomysql/issues/362 | [
"bug"
] | shop271 | 2 |
dropbox/PyHive | sqlalchemy | 111 | requests.exceptions.ConnectionError: ('Connection aborted.', BadStatusLine(' status 80',)) | Python connection to Hive.
I am using SQLAlchemy to connect to Hive:
```python
from sqlalchemy import *
from sqlalchemy.engine import create_engine
from sqlalchemy.schema import *

engine = create_engine('presto://localhost:10000/hive/default')
logs = Table('my_awesome_data', MetaData(bind=engine), autoload=True)
print select([func.count('*')], from_obj=logs).scalar()
```
I get this error message:
> requests.exceptions.ConnectionError: ('Connection aborted.', BadStatusLine('\x04\x00\x00\x00\x11Invalid status 80',))
| closed | 2017-04-30T14:38:39Z | 2017-05-01T07:39:07Z | https://github.com/dropbox/PyHive/issues/111 | [] | basebase | 1 |
litestar-org/litestar | asyncio | 4,002 | Bug: Setting the title of an Enum (+ msgspec Struct) does not work (in terms of OpenAPI gen) | ### Description
Setting the title of an Enum with `Body()` or `msgspec.Meta` like so will still not allow it be shown in the autogenerated OpenAPI spec **even if you try setting __schema_name__** does not work:
```
class UniversalClassificationScoringMethod(StrEnum):
AUTO = "auto"
CHUNK_MAX = "chunk_max"
CHUNK_AVG = "chunk_avg"
CHUNK_MIN = "chunk_min"
UniversalClassificationScoringMethod.__schema_name__ = "Universal classification scoring method"
UniversalClassificationScoringMethod = Annotated[
UniversalClassificationScoringMethod,
Meta(
title="Universal classification scoring method",
),
Body(
title="Universal classification scoring method",
description="A method for producing an overall confidence score for a universal classification.",
examples=[Example(value=UniversalClassificationScoringMethod.AUTO)],
),
]
```
**This also applies to msgspec Structs; however, `__schema_name__` does at least still work for Structs.**
### URL to code causing the issue
_No response_
### MCVE
```python
# Your MCVE code here
```
### Steps to reproduce
```bash
1. Go to '...'
2. Click on '....'
3. Scroll down to '....'
4. See error
```
### Screenshots
```bash
""
```
### Logs
```bash
```
### Litestar Version
2.14.0
### Platform
- [x] Linux
- [ ] Mac
- [ ] Windows
- [ ] Other (Please specify in the description above) | open | 2025-02-14T10:29:57Z | 2025-02-14T10:29:57Z | https://github.com/litestar-org/litestar/issues/4002 | [
"Bug :bug:"
] | umarbutler | 0 |
JaidedAI/EasyOCR | machine-learning | 865 | [CRAFT Error] Training custom model error | Thank you for providing the CRAFT training code.
When I am training the costume model, it is always giving the error message:
```
Error : operands could not be broadcast together with shapes (200,200) (69,0)
On generating affinity map, strange box came out. (width: 0, height: 69)
Error : operands could not be broadcast together with shapes (200,200) (234,0)
On generating affinity map, strange box came out. (width: 0, height: 234)
Error : operands could not be broadcast together with shapes (200,200) (112,0)
On generating affinity map, strange box came out. (width: 0, height: 112)
```
It seems that at some point the model predicts negative or floating-point coordinates, which the code then rounds to integers.
Could you help me understand which part is causing the error, and whether it affects the training or evaluation process?
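A dependency-free sketch (our reconstruction, not the project's code) of how rounding predicted box corners to integers can collapse a region to zero width, which would then fail to broadcast against the fixed (200, 200) map:

```python
def box_dims(corners):
    # corners: list of (x, y) floats predicted by the model.
    xs = [x for x, _ in corners]
    ys = [y for _, y in corners]
    width = int(round(max(xs))) - int(round(min(xs)))
    height = int(round(max(ys))) - int(round(min(ys)))
    # Negative spans (out-of-order or off-image predictions) clamp to 0.
    return max(width, 0), max(height, 0)

# Two x-coordinates that round to the same pixel give a zero-width box,
# matching the "(width: 0, height: 69)" message in the log above:
print(box_dims([(10.3, 5.0), (10.4, 74.0)]))  # -> (0, 69)
```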
| open | 2022-09-30T06:01:50Z | 2023-07-26T09:23:15Z | https://github.com/JaidedAI/EasyOCR/issues/865 | [] | ALIYoussef | 7 |
s3rius/FastAPI-template | fastapi | 111 | Possible Bug - `get_db_session` reference included in `{{cookiecutter.project_name}}web.gql.context.py`when using Tortoise ORM | Awesome project! Really love what you've done here!
I ran into what I think is a bug with the project generation logic. I think it's a simple fix / issue, so I'd be happy to contribute a PR, but wanted to run this by you first.
In the context.py file of the gql package, there's an import for `from {{cookiecutter.project_name}}.db.dependencies import get_db_session`:
https://github.com/s3rius/FastAPI-template/blob/156883798ab4ec54d97080c77a34d366b8484f95/fastapi_template/template/%7B%7Bcookiecutter.project_name%7D%7D/%7B%7Bcookiecutter.project_name%7D%7D/web/gql/context.py#L16-L18
This import does not appear to be used when using Tortoise ORM, however:
https://github.com/s3rius/FastAPI-template/blob/156883798ab4ec54d97080c77a34d366b8484f95/fastapi_template/template/%7B%7Bcookiecutter.project_name%7D%7D/%7B%7Bcookiecutter.project_name%7D%7D/web/gql/context.py#L38-L42
In fact, with Tortoise ORM, there doesn't appear to be a dependencies.py file at all, see [here](https://github.com/s3rius/FastAPI-template/tree/156883798ab4ec54d97080c77a34d366b8484f95/fastapi_template/template/%7B%7Bcookiecutter.project_name%7D%7D/%7B%7Bcookiecutter.project_name%7D%7D/db_tortoise).
Leaving the import statement throws an error with pytest:
```
=========================================================================================================== ERRORS ============================================================================================================
________________________________________________________________________________________________ ERROR collecting test session ________________________________________________________________________________________________
/usr/lib/python3.8/importlib/__init__.py:127: in import_module
return _bootstrap._gcd_import(name[level:], package, level)
<frozen importlib._bootstrap>:1014: in _gcd_import
???
<frozen importlib._bootstrap>:991: in _find_and_load
???
<frozen importlib._bootstrap>:975: in _find_and_load_unlocked
???
<frozen importlib._bootstrap>:671: in _load_unlocked
???
../../.cache/pypoetry/virtualenvs/gremlinengine--i0qMi2L-py3.8/lib/python3.8/site-packages/_pytest/assertion/rewrite.py:168: in exec_module
exec(co, module.__dict__)
GremlinEngine/conftest.py:19: in <module>
from GremlinEngine.web.application import get_app
GremlinEngine/web/application.py:24: in <module>
from GremlinEngine.web.graphql.router import gql_router
GremlinEngine/web/graphql/router.py:4: in <module>
from GremlinEngine.web.graphql import dummy, echo, redis
GremlinEngine/web/graphql/dummy/__init__.py:3: in <module>
from GremlinEngine.web.graphql.dummy.mutation import Mutation
GremlinEngine/web/graphql/dummy/mutation.py:5: in <module>
from GremlinEngine.web.graphql.context import Context
GremlinEngine/web/graphql/context.py:5: in <module>
from GremlinEngine.db.dependencies import get_db_session
E ModuleNotFoundError: No module named 'GremlinEngine.db.dependencies'
```
Removing the import appears to cause no other issues and resolves the error. It seems the if/else Jinja statement in the `gql/context.py` template that imports `dependencies` and `get_db_session` should be updated to also take into account which database is being used. This is my first in-depth foray into standalone Python ORMs (I've mostly been a Django guy), so sorry if I'm missing something here.
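A minimal illustration (ours) of the failure mode: Python resolves an import at import time, so a leftover import of a module the template never generated fails even though the imported name is never used:

```python
import importlib

def try_import(dotted_name):
    # Returns "ok" on success, else the exception class name.
    try:
        importlib.import_module(dotted_name)
        return "ok"
    except ModuleNotFoundError as exc:
        return type(exc).__name__

# The Tortoise variant has no db/dependencies.py, so this mirrors the
# pytest collection error above (package name taken from the report):
print(try_import("GremlinEngine.db.dependencies"))  # -> ModuleNotFoundError
```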
| closed | 2022-07-23T20:55:37Z | 2022-08-08T05:00:32Z | https://github.com/s3rius/FastAPI-template/issues/111 | [] | JSv4 | 5 |
CorentinJ/Real-Time-Voice-Cloning | pytorch | 995 | Not enough memory: synthesizer preprocess audio |
```
/home/fit/.local/lib/python3.6/site-packages/numba/core/errors.py:154: UserWarning: Insufficiently recent colorama version found. Numba requires colorama >= 0.3.9
  warnings.warn(msg)
Arguments:
    datasets_root: tts
    out_dir: tts/SV2TTS/synthesizer
    n_processes: 8
    skip_existing: False
    hparams:
    no_alignments: True
Using data from:
    tts/speech_laohac.lst
    tts/speech_vietts.lst
    tts/speech_vlsp.lst
custom: 0%| | 0/3 [00:00<?, ?speakers/s]/home/fit/.local/lib/python3.6/site-packages/scipy/signal/signaltools.py:1336: FutureWarning: Using a non-tuple sequence for multidimensional indexing is deprecated; use `arr[tuple(seq)]` instead of `arr[seq]`. In the future this will be interpreted as an array index, `arr[np.array(seq)]`, which will result either in an error or a different result.
  out = out_full[ind]
/home/fit/.local/lib/python3.6/site-packages/scipy/signal/signaltools.py:1336: FutureWarning: Using a non-tuple sequence for multidimensional indexing is deprecated; use `arr[tuple(seq)]` instead of `arr[seq]`. In the future this will be interpreted as an array index, `arr[np.array(seq)]`, which will result either in an error or a different result.
  out = out_full[ind]
/home/fit/.local/lib/python3.6/site-packages/scipy/signal/signaltools.py:1336: FutureWarning: Using a non-tuple sequence for multidimensional indexing is deprecated; use `arr[tuple(seq)]` instead of `arr[seq]`. In the future this will be interpreted as an array index, `arr[np.array(seq)]`, which will result either in an error or a different result.
  out = out_full[ind]
custom: 0%| | 0/3 [1:00:22<?, ?speakers/s]
multiprocessing.pool.RemoteTraceback:
"""
Traceback (most recent call last):
  File "/usr/lib/python3.6/multiprocessing/pool.py", line 119, in worker
    result = (True, func(*args, **kwds))
  File "/media/fit/9BA4-1634/TTS_NCKH/tts_vn-main/synthesizer/preprocess.py", line 77, in preprocess_speaker
    skip_existing, hparams))
  File "/media/fit/9BA4-1634/TTS_NCKH/tts_vn-main/synthesizer/preprocess.py", line 123, in process_utterance
    np.save(wav_fpath, wav, allow_pickle=False)
  File "<__array_function__ internals>", line 6, in save
  File "/home/fit/.local/lib/python3.6/site-packages/numpy/lib/npyio.py", line 524, in save
    file_ctx = open(file, "wb")
OSError: [Errno 28] No space left on device: 'tts/SV2TTS/synthesizer/audio/audio-011420.npy'
"""
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
  File "synthesizer_preprocess_audio.py", line 54, in <module>
    preprocess_dataset(**vars(args))
  File "/media/fit/9BA4-1634/TTS_NCKH/tts_vn-main/synthesizer/preprocess.py", line 35, in preprocess_dataset
    for speaker_metadata in tqdm(job, "custom", len(speaker_dirs), unit="speakers"):
  File "/home/fit/.local/lib/python3.6/site-packages/tqdm/std.py", line 1129, in __iter__
    for obj in iterable:
  File "/usr/lib/python3.6/multiprocessing/pool.py", line 735, in next
    raise value
OSError: [Errno 28] No space left on device: 'tts/SV2TTS/synthesizer/audio/audio-011420.npy'
```
| open | 2022-01-28T02:39:08Z | 2022-01-28T02:39:08Z | https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/995 | [] | doansangg | 0 |
strawberry-graphql/strawberry | fastapi | 3,613 | Hook for new results in subscription (and also on defer/stream) | From #3554
 | open | 2024-09-02T17:57:36Z | 2025-03-20T15:56:51Z | https://github.com/strawberry-graphql/strawberry/issues/3613 | [
"feature-request"
] | patrick91 | 0 |
plotly/jupyter-dash | dash | 55 | Standard demo with Binder fails | open | 2021-02-27T09:48:35Z | 2021-02-27T09:48:35Z | https://github.com/plotly/jupyter-dash/issues/55 | [] | spratzt | 0 | |
nonebot/nonebot2 | fastapi | 3,287 | Plugin: BotTap | ### PyPI project name
nonebot-plugin-bot-tap
### Plugin import package name
nonebot_plugin_bot_tap
### Tags
[{"label":"Bot","color":"#57a4ce"},{"label":"管理","color":"#57ce66"}]
### Plugin configuration
```dotenv
BOT_TAP_TOKEN="abc123456"
```
### Plugin test
- [ ] To re-run the plugin test, please check the checkbox on the left | closed | 2025-01-27T16:53:50Z | 2025-01-30T12:34:14Z | https://github.com/nonebot/nonebot2/issues/3287 | [
"Plugin",
"Publish"
] | XTxiaoting14332 | 3 |
plotly/plotly.py | plotly | 4,772 | ecdf with normed histogram | I really like `plotly.express.ecdf` and have been using it a lot in my daily work.
When I show ecdf plots in meetings, I usually show them with `marginal='histogram'`, since this is easier to understand for the non-data-scientists in the room.
However, since the amount of data varies, I would like a normalized histogram, i.e. one showing percent values.
I know this would be possible with subplots, but there are really a lot of ugly adjustments to make.
So a solution could be to show the percentage in the hover hint as well, or to support something like `histnorm` from `plotly.express.histogram`.
Example for easy testing:
```
import plotly.express as px
import numpy as np
import pandas as pd
# Generate random data
np.random.seed(42) # For reproducibility
data = np.random.normal(loc=0, scale=1, size=1000) # Normal distribution data
# Create a pandas dataframe
df = pd.DataFrame({'Values': data})
# Create ECDF plot with histogram
fig = px.ecdf(data,
ecdfnorm='percent',
marginal='histogram')
# Show the figure
fig.show()
```
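As a stopgap, the percent values could be computed by hand and attached to a manually built marginal; a dependency-free sketch of the normalization only (wiring it into a subplot is the "ugly adjustments" part):

```python
def percent_hist(values, bin_edges):
    # Counts per half-open bin [edge_i, edge_{i+1}), expressed as percentages.
    counts = [0] * (len(bin_edges) - 1)
    for v in values:
        for i in range(len(counts)):
            if bin_edges[i] <= v < bin_edges[i + 1]:
                counts[i] += 1
                break
    return [100.0 * c / len(values) for c in counts]

print(percent_hist([0.1, 0.2, 0.7, 1.5], [0, 0.5, 1.0, 2.0]))  # -> [50.0, 25.0, 25.0]
```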
| open | 2024-09-30T08:18:08Z | 2024-10-02T17:09:48Z | https://github.com/plotly/plotly.py/issues/4772 | [
"feature",
"P3"
] | luifire | 1 |
HumanSignal/labelImg | deep-learning | 164 | No module libs.lib | When I try to run labelImg (MacOS High Sierra / python 2.7 / pip) there is an error "No module libs.lib". Does anyone know how to solve this? | closed | 2017-09-25T23:05:22Z | 2017-09-27T09:58:46Z | https://github.com/HumanSignal/labelImg/issues/164 | [] | thanasissdr | 5 |
deepfakes/faceswap | deep-learning | 445 | Cannot extract video to image use by GUI v3.0 | Exception in Tkinter callback
Traceback (most recent call last):
File "C:\ProgramData\Miniconda3\lib\tkinter\__init__.py", line 1702, in __call__
return self.func(*args)
File "E:\Faceswap\faceswap-master\lib\gui\wrapper.py", line 62, in action_command
self.task.terminate()
File "E:\Faceswap\faceswap-master\lib\gui\wrapper.py", line 275, in terminate
pgid = os.getpgid(self.process.pid)
AttributeError: module 'os' has no attribute 'getpgid'
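The `AttributeError` arises because `os.getpgid` is Unix-only and does not exist on Windows; a guarded fallback might look like this (our suggestion, not the project's actual fix):

```python
import os

def process_group(pid):
    # os.getpgid exists only on Unix; fall back to the pid itself elsewhere.
    if hasattr(os, "getpgid"):
        return os.getpgid(pid)
    return pid

print(process_group(os.getpid()) > 0)  # -> True on both Unix and Windows
```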
Is a module missing, or is something else wrong? | closed | 2018-06-24T12:04:44Z | 2018-11-04T17:00:18Z | https://github.com/deepfakes/faceswap/issues/445 | [] | g0147 | 3 |
tox-dev/tox | automation | 3,060 | SyntaxError trying to invoke a Python 2 environment | ## Issue
Attempting to run `tox -e py27` fails to find Python 2.7 with `SyntaxError` in `py_info.py`.
## Environment
Provide at least:
- OS: Ubuntu 22.04 ([jaraco/multipy-tox](https://hub.docker.com/r/jaraco/multipy-tox)).
<details open>
<summary>Output of <code>pip list</code> of the host Python, where <code>tox</code> is installed</summary>
```console
<user>@1021d9b66966 /src main [2] # pipx runpip tox list
Package Version
------------- -------
cachetools 5.3.1
chardet 5.1.0
colorama 0.4.6
distlib 0.3.6
filelock 3.12.2
packaging 23.1
pip 23.1.2
platformdirs 3.8.1
pluggy 1.2.0
pyproject-api 1.5.2
setuptools 68.0.0
tox 4.6.4
virtualenv 20.23.1
wheel 0.40.0
```
</details>
## Output of running tox
<details open>
<summary>Output of <code>tox -rvv</code></summary>
```console
<user>@1021d9b66966 /src main [255] # tox -rvv -e py27
.pkg: 106 I find interpreter for spec PythonSpec(major=2, minor=7) [virtualenv/discovery/builtin.py:58]
.pkg: 106 D got python info of %s from (PosixPath('/usr/bin/python3.11'), PosixPath('/root/.local/share/virtualenv/py_info/1/ca3ed784184f1b3bb7c3539bfb45e71710cd27667424f92c2d5bb4df9c107c23.json')) [virtualenv/app_data/via_disk_folder.py:131]
.pkg: 107 I proposed PythonInfo(spec=CPython3.11.4.final.0-64, system=/usr/bin/python3.11, exe=/root/.local/pipx/venvs/tox/bin/python, platform=linux, version='3.11.4 (main, Jun 7 2023, 12:45:48) [GCC 11.3.0]', encoding_fs_io=utf-8-utf-8) [virtualenv/discovery/builtin.py:65]
.pkg: 107 D discover PATH[0]=/root/.local/bin [virtualenv/discovery/builtin.py:111]
.pkg: 109 D discover PATH[1]=/usr/local/sbin [virtualenv/discovery/builtin.py:111]
.pkg: 110 D discover PATH[2]=/usr/local/bin [virtualenv/discovery/builtin.py:111]
.pkg: 112 D got python info of %s from (PosixPath('/usr/local/bin/python'), PosixPath('/root/.local/share/virtualenv/py_info/1/4cd7ab41f5fca4b9b44701077e38c5ffd31fe66a6cab21e0214b68d958d0e462.json')) [virtualenv/app_data/via_disk_folder.py:131]
.pkg: 112 I proposed PathPythonInfo(spec=CPython3.11.4.final.0-64, exe=/usr/local/bin/python, platform=linux, version='3.11.4 (main, Jun 7 2023, 12:45:48) [GCC 11.3.0]', encoding_fs_io=utf-8-utf-8) [virtualenv/discovery/builtin.py:65]
.pkg: 112 D discover PATH[3]=/usr/sbin [virtualenv/discovery/builtin.py:111]
.pkg: 114 D discover PATH[4]=/usr/bin [virtualenv/discovery/builtin.py:111]
.pkg: 115 D get interpreter info via cmd: /usr/bin/python2.7 /root/.local/pipx/venvs/tox/lib/python3.11/site-packages/virtualenv/discovery/py_info.py phliDgDgBOGklhGFCz5Sp0F4toYaBbSe PuPPp6TsnrVXQpxAm127dy0S6dCtYeHX [virtualenv/discovery/cached_py_info.py:111]
.pkg: 124 I failed to query /usr/bin/python2.7 with code 1 err: ' File "/root/.local/pipx/venvs/tox/lib/python3.11/site-packages/virtualenv/discovery/py_info.py", line 24\n return list(OrderedDict.fromkeys(["", *os.environ.get("PATHEXT", "").lower().split(os.pathsep)]))\n ^\nSyntaxError: invalid syntax\n' [virtualenv/discovery/cached_py_info.py:34]
.pkg: 125 D discover PATH[5]=/sbin [virtualenv/discovery/builtin.py:111]
.pkg: 127 D discover PATH[6]=/bin [virtualenv/discovery/builtin.py:111]
.pkg: 128 D get interpreter info via cmd: /bin/python2.7 /root/.local/pipx/venvs/tox/lib/python3.11/site-packages/virtualenv/discovery/py_info.py jFa0rQSJrM4aUk6pPseKRGyQTcW6xYt9 GpK9nAxFC7Pe5adTo4P8IT56hngWlu4s [virtualenv/discovery/cached_py_info.py:111]
.pkg: 137 I failed to query /bin/python2.7 with code 1 err: ' File "/root/.local/pipx/venvs/tox/lib/python3.11/site-packages/virtualenv/discovery/py_info.py", line 24\n return list(OrderedDict.fromkeys(["", *os.environ.get("PATHEXT", "").lower().split(os.pathsep)]))\n ^\nSyntaxError: invalid syntax\n' [virtualenv/discovery/cached_py_info.py:34]
.pkg: 139 I find interpreter for spec PythonSpec(path=/root/.local/pipx/venvs/tox/bin/python) [virtualenv/discovery/builtin.py:58]
.pkg: 140 I proposed PythonInfo(spec=CPython3.11.4.final.0-64, system=/usr/bin/python3.11, exe=/root/.local/pipx/venvs/tox/bin/python, platform=linux, version='3.11.4 (main, Jun 7 2023, 12:45:48) [GCC 11.3.0]', encoding_fs_io=utf-8-utf-8) [virtualenv/discovery/builtin.py:65]
.pkg: 140 D accepted PythonInfo(spec=CPython3.11.4.final.0-64, system=/usr/bin/python3.11, exe=/root/.local/pipx/venvs/tox/bin/python, platform=linux, version='3.11.4 (main, Jun 7 2023, 12:45:48) [GCC 11.3.0]', encoding_fs_io=utf-8-utf-8) [virtualenv/discovery/builtin.py:67]
.pkg: 141 D filesystem is case-sensitive [virtualenv/info.py:26]
.pkg: 155 I find interpreter for spec PythonSpec(path=/root/.local/pipx/venvs/tox/bin/python) [virtualenv/discovery/builtin.py:58]
.pkg: 155 I proposed PythonInfo(spec=CPython3.11.4.final.0-64, system=/usr/bin/python3.11, exe=/root/.local/pipx/venvs/tox/bin/python, platform=linux, version='3.11.4 (main, Jun 7 2023, 12:45:48) [GCC 11.3.0]', encoding_fs_io=utf-8-utf-8) [virtualenv/discovery/builtin.py:65]
.pkg: 155 D accepted PythonInfo(spec=CPython3.11.4.final.0-64, system=/usr/bin/python3.11, exe=/root/.local/pipx/venvs/tox/bin/python, platform=linux, version='3.11.4 (main, Jun 7 2023, 12:45:48) [GCC 11.3.0]', encoding_fs_io=utf-8-utf-8) [virtualenv/discovery/builtin.py:67]
py27: 157 W remove tox env folder /tox/py27 [tox/tox_env/api.py:322]
.pkg: 157 W remove tox env folder /tox/.pkg [tox/tox_env/api.py:322]
py27: 158 W skipped because could not find python interpreter with spec(s): py27 [tox/session/cmd/run/single.py:49]
py27: SKIP (0.00 seconds)
evaluation failed :( (0.08 seconds)
```
</details>
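The `SyntaxError` in the log comes from virtualenv's `py_info.py` using star-unpacking inside a list literal, which Python 2.7 cannot parse; a 2/3-compatible spelling of that line would be (our sketch, not virtualenv's code):

```python
import os
from collections import OrderedDict

def path_exts():
    # Equivalent to the Python-3-only
    #   list(OrderedDict.fromkeys(["", *os.environ.get("PATHEXT", "").lower()
    #                              .split(os.pathsep)]))
    # but valid syntax on Python 2.7 as well.
    exts = os.environ.get("PATHEXT", "").lower().split(os.pathsep)
    return list(OrderedDict.fromkeys([""] + exts))

print(path_exts()[0] == "")  # -> True: the empty extension is always first
```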
I understand that tox no longer supports Python 2 for tox itself, but shouldn't tox still support orchestrating those environments? | closed | 2023-07-09T18:53:55Z | 2023-07-09T18:58:22Z | https://github.com/tox-dev/tox/issues/3060 | [] | jaraco | 1 |
adbar/trafilatura | web-scraping | 577 | Update XML-TEI reference data | The file `trafilatura/data/tei-schema-pickle.lzma` could be updated to the latest version of the TEI schema. | closed | 2024-04-29T15:45:59Z | 2024-07-24T17:24:22Z | https://github.com/adbar/trafilatura/issues/577 | [
"maintenance"
] | adbar | 0 |
hbldh/bleak | asyncio | 464 | Send signed byte array with Bleak | Is it possible to send a signed byte array with Bleak? | closed | 2021-02-25T17:06:05Z | 2021-02-25T17:07:41Z | https://github.com/hbldh/bleak/issues/464 | [] | bastianpedersen | 0 |
ResidentMario/geoplot | matplotlib | 142 | Improve default marker scaling in pointplot | The `scale` parameter in `geoplot` takes a scale and applies it to a marker. In the case of `sankey` (`linewidth`) and `cartogram` (`xfact` scaling) the manner and effect of the scaling is obvious. With `pointplot`, which uses `ax.scatter`, it is less so.
The `ax.scatter` function used by `pointplot` manages the size of the points plotted using the `s` parameter. If your marker is a circle, the `s` parameter is the *area* of the circle plotted; if it is some other marker, it is proportional to the bounding box of the shape. There is a [StackOverflow thread](https://stackoverflow.com/q/14827650/1993206) that is a helpful reference on this subject.
The current implementation takes the `dscale` value of the point and squares it, i.e. it uses the scale to determine the radius of the point. This is unsatisfactory because it has the effect of creating a perceptually exponential curve:

Notice how even though 8 mil is only 4x 2 mil, it appears to be an order of magnitude larger instead. That's because its area is actually 16x higher. And it's *area*, not *radius*, which matters to the viewer.
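A perceptually linear alternative would make the `s` argument (an area) directly proportional to the data value, so the radius grows only as its square root (sketch; function and parameter names are ours):

```python
def marker_areas(values, max_area=400.0):
    # matplotlib's scatter `s` is an area in points^2; scaling the *area*
    # linearly with the value keeps the visual impression proportional.
    top = max(values)
    return [max_area * v / top for v in values]

areas = marker_areas([2_000_000, 8_000_000])
print(areas[1] / areas[0])  # -> 4.0: 4x the value reads as 4x the ink, only 2x the radius
```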
We should move to a different scaling for `pointplot`. | open | 2019-07-05T15:29:16Z | 2019-07-05T20:34:40Z | https://github.com/ResidentMario/geoplot/issues/142 | [
"enhancement"
] | ResidentMario | 0 |
tatsu-lab/stanford_alpaca | deep-learning | 138 | Data License | Can we consider reverting the data license to include commercial use? Lots of models are being released using the alpaca dataset (e.g. https://www.databricks.com/blog/2023/03/24/hello-dolly-democratizing-magic-chatgpt-open-models.html). | open | 2023-03-24T15:51:43Z | 2023-04-08T08:51:19Z | https://github.com/tatsu-lab/stanford_alpaca/issues/138 | [] | mjsteele12 | 1 |
robusta-dev/robusta | automation | 1,657 | <install> statefulset, FailedScheduling | It is not easy for me to install `robusta`: when I install it using Helm, these pods cannot start:
```
alertmanager-robusta-kube-prometheus-st-alertmanager-0 0/2 Pending 0 18s
prometheus-robusta-kube-prometheus-st-prometheus-0 0/2 Pending 0 8s
```
The event is:
```
Warning FailedScheduling 49s default-scheduler 0/6 nodes are available: pod has unbound immediate PersistentVolumeClaims. preemption: 0/6 nodes are available: 6 Preemption is not helpful for scheduling.
```
and `kubectl get pv` shows nothing
what's wrong with it? | open | 2024-12-10T08:56:14Z | 2024-12-14T10:09:39Z | https://github.com/robusta-dev/robusta/issues/1657 | [] | wiluen | 19 |
SCIR-HI/Huatuo-Llama-Med-Chinese | nlp | 70 | Has llama's tokenizer been expanded? | closed | 2023-07-07T01:48:45Z | 2023-08-07T08:18:38Z | https://github.com/SCIR-HI/Huatuo-Llama-Med-Chinese/issues/70 | [] | zemu121 | 3 | |
ahmedfgad/GeneticAlgorithmPython | numpy | 201 | Parallelise adaptive mutation population fitness computations when `parallel_processing` is enabled | I was checking my CPU usage while running a GA instance with adaptive mutation and noticed it's actually off most of the time. The fitness function I use takes several seconds to compute for every solution, so the lack of parallel processing on this type of mutation really hurts the overall time per generation.
I had a look at the code and expected a simple copy-paste with some minor updates to enable parallel processing in the mutation, but the architecture of the package (i.e. `pygad.GA` is a child class of `pygad.utils.Mutation`) makes it difficult for the `Mutation` methods to know whether parallel processing is enabled without passing additional parameters. What might work is to add an `__init__` method to `Mutation` and call it from `GA.__init__`. The parallel processing setup could then be moved to `Mutation`, which is inherited by `GA`. I don't think this is the cleanest solution in terms of OOP best practices, but it would work.
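A sketch of the proposed restructuring (class and attribute names are illustrative, not PyGAD's actual internals): the base class owns the parallel-processing setup so its mutation methods can consult it directly, and `GA` inherits it.

```python
class Mutation:
    def __init__(self, parallel_processing=None):
        # Parallel-processing config now lives where mutation methods can see it.
        self.parallel_processing = parallel_processing

    def adaptive_mutation_mode(self):
        return "parallel" if self.parallel_processing else "serial"

class GA(Mutation):
    def __init__(self, parallel_processing=None):
        Mutation.__init__(self, parallel_processing=parallel_processing)

print(GA(parallel_processing=["thread", 8]).adaptive_mutation_mode())  # -> parallel
```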
I might be able to put some time into creating a PR for this sometime soon. | open | 2023-05-27T21:57:17Z | 2023-11-08T23:39:15Z | https://github.com/ahmedfgad/GeneticAlgorithmPython/issues/201 | [
"enhancement"
] | Ririshi | 3 |
deepfakes/faceswap | machine-learning | 585 | Link for training data couldn't be open. | Training Data
A pre-trained model is not required, but you can download the following pre-trained Cage/Trump training model:
Whole project with training images and trained model (~300MB): https://anonfile.com/p7w3m0d5be/face-swap.zip or click here to download
Some tips:
Reusing existing models will train much faster than starting from nothing.
If there is not enough training data, start with someone who looks similar, then switch the data.
The link for training data couldn't be opened. Could anyone help me to download the video? | closed | 2019-01-11T02:03:15Z | 2019-01-11T08:56:08Z | https://github.com/deepfakes/faceswap/issues/585 | [] | kayleeliyx | 1 |
ageitgey/face_recognition | machine-learning | 676 | X_img_path | What is X_img_path, and how do we make it?
| closed | 2018-11-17T03:22:49Z | 2018-11-18T01:00:01Z | https://github.com/ageitgey/face_recognition/issues/676 | [] | nhangox22 | 0 |
onnx/onnx | scikit-learn | 6,489 | [Feature request] Add Support for Exporting DCNv4 to ONNX for Complex Adaptive Networks | ### System information
1.16
### What is the problem that this feature solves?
I would like to suggest adding support for exporting DCNv4 to ONNX, as it would be highly beneficial for handling more complex and adaptive network architectures.
### Alternatives considered
_No response_
### Describe the feature
I need to implement adaptive networks using DCNv4 and deploy them for engineering detection applications.
### Will this influence the current api (Y/N)?
_No response_
### Feature Area
_No response_
### Are you willing to contribute it (Y/N)
None
### Notes
_No response_ | open | 2024-10-24T09:20:43Z | 2024-10-24T09:22:17Z | https://github.com/onnx/onnx/issues/6489 | [
"topic: enhancement"
] | jacques0266 | 0 |
donnemartin/system-design-primer | python | 1,009 | System Design for Dummies links are broken | The links were updated to the Wayback Machine (https://github.com/donnemartin/system-design-primer/pull/750) after the original domain expired. Now they're not accessible through the Wayback Machine either.
Readme link: https://github.com/donnemartin/system-design-primer?tab=readme-ov-file#step-2-review-the-scalability-article
Unaccessible links:
- https://web.archive.org/web/20221030091841/http://www.lecloud.net/tagged/scalability/chrono
- https://web.archive.org/web/20220530193911/https://www.lecloud.net/post/7295452622/scalability-for-dummies-part-1-clones
- https://web.archive.org/web/20220602114024/https://www.lecloud.net/post/7994751381/scalability-for-dummies-part-2-database
- https://web.archive.org/web/20230126233752/https://www.lecloud.net/post/9246290032/scalability-for-dummies-part-3-cache
- https://web.archive.org/web/20220926171507/https://www.lecloud.net/post/9699762917/scalability-for-dummies-part-4-asynchronism | open | 2024-10-21T18:44:09Z | 2024-12-02T01:13:13Z | https://github.com/donnemartin/system-design-primer/issues/1009 | [
"needs-review"
] | ykeremy | 0 |
microsoft/unilm | nlp | 1,201 | Unknown model while fine tuning BEIT3 | **Describe**
I am currently trying to run the fine-tuning script provided for BEIT3 on the VQA task.
I am passing the model argument as `beit_base_patch16_224`.
However, it throws this error:
- in create_model: raise RuntimeError('Unknown model (%s)' % model_name) (beit_base_patch16_224)
I checked `timm.list_models()`, and it shows the model is available.
Also, is the `beit_base_patch16_224` model available in timm a BEIT3 model or a BEIT1 model? | closed | 2023-07-14T10:54:50Z | 2023-07-14T11:05:58Z | https://github.com/microsoft/unilm/issues/1201 | [] | rahcode7 | 0 |
pyg-team/pytorch_geometric | deep-learning | 9,940 | GATConv ignores "edge_attr" without error message | ### 🛠 Proposed Refactor
### Description
The current version of `GATConv` and `GATv2Conv` handle the `edge_attr` parameter inconsistently in the **edge_update** function. When users forget to set `edge_dim` in the constructor but pass `edge_attr` in the forward function, `GATv2Conv` will raise an error, whereas `GATConv` will ignore `edge_attr` without informing the users, leading them to mistakenly believe that `edge_attr` has been used in the convolution.
I think it would be better to raise an error in this situation, and it is important to maintain consistency between `GATConv` and `GATv2Conv` in the handling of `edge_attr`.
For more details, please refer to [Issue #810](https://github.com/pyg-team/pytorch_geometric/issues/810).
### Reference
_Originally posted by @FrancisOWO in https://github.com/pyg-team/pytorch_geometric/issues/810#issuecomment-2568227758_

Hi, I noticed that in the latest version (2.7.0) of the code, this `assert` statement has been removed from `GATConv`, but it remains in `GATv2Conv`, as shown in the figure below.

However, removing the `assert` statement leads to an issue: if a user passes `edge_attr` without setting `edge_dim`, the current `GATConv` will ignore `edge_attr` without informing the user, leading them to mistakenly believe that `edge_attr` has been used in the convolution.
I was not aware of this issue while using `GATConv` either, until I encountered an `AssertionError` when using `GATv2Conv`. Therefore, I think it would be better to retain the `assert` statement, or alternatively, to raise an error with some explanatory message, such as: require `edge_dim` to be set in the constructor when `edge_attr` is used.
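A minimal standalone sketch of the requested guard (illustrative only, not PyTorch Geometric's actual code): raise instead of silently dropping edge features when `edge_dim` was never configured.

```python
class GATConvLike:
    def __init__(self, edge_dim=None):
        self.edge_dim = edge_dim

    def edge_update(self, edge_attr=None):
        # Fail loudly instead of silently ignoring edge features.
        if edge_attr is not None and self.edge_dim is None:
            raise ValueError(
                "edge_attr was passed but edge_dim was not set in the "
                "constructor; edge features would be silently ignored.")
        return "ok"

try:
    GATConvLike().edge_update(edge_attr=[0.5])
except ValueError:
    print("raised")  # -> raised
```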
I checked the commit history and found that this issue was caused by [Commit 025b1cb (GATConv: require edge_dim to be set)](https://github.com/pyg-team/pytorch_geometric/commit/025b1cb0c94eeac768d6facbb942d84c223c0b19).
Do you think it would be better to retain the `assert` statement or raise an error?
At the very least, `GATConv` and `GATv2Conv` should be consistent in this part of the code.
### Suggest a potential alternative/fix
I think it is important to maintain consistency between `GATConv` and `GATv2Conv` in the handling of `edge_attr`. It would be better to retain the assert statement, or alternatively, to raise an error with some explanatory message, such as: require `edge_dim` to be set in the constructor when `edge_attr` is used. | open | 2025-01-14T04:53:44Z | 2025-01-14T04:53:44Z | https://github.com/pyg-team/pytorch_geometric/issues/9940 | [
"refactor"
] | FrancisOWO | 0 |
alteryx/featuretools | scikit-learn | 1,869 | Add IsFederalHoliday transform primitive | closed | 2022-01-27T19:50:30Z | 2022-02-17T20:17:13Z | https://github.com/alteryx/featuretools/issues/1869 | [] | gsheni | 2 | |
pytest-dev/pytest-xdist | pytest | 413 | With different Python versions, rsync can hang | At the end of my investigation of this issue, I realized it happens because of my own mistake.
So I know pytest/xdist works correctly here.
My expectation would be that pytest could emit a warning message (or a notification) about the possible fix for this issue.
Without going into the details, the fix is to use the same Python version on both sides; in my case the correct pytest.ini is:
```
[pytest]
addopts =
--tx id=debian-latest-192.168.9.111//ssh="-i docker/ssh-config/id_rsa 192.168.9.111 -l root//python=python3"
rsyncdirs = .
```
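Since the root cause was a local Python 3 driving a remote `python` that resolved to 2.7, a small sanity check on the spec (our sketch; not part of pytest-xdist) could have flagged the configuration early:

```python
def compatible_spec(local_version, remote_version):
    # In this report, a Python 3 host paired with a remote `python` resolving
    # to 2.7 made rsync hang; requiring matching major versions avoids that.
    return local_version.split(".")[0] == remote_version.split(".")[0]

print(compatible_spec("3.6.9", "2.7.13"))  # -> False: the mismatch in this report
```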
So, I had been debugging the following case (for a while, until I found the root cause):
1. Use this ("wrong") pytest.ini:
```
[pytest]
addopts =
--tx id=debian-latest-192.168.9.111//ssh="-i docker/ssh-config/id_rsa 192.168.9.111 -l root//python=python"
rsyncdirs = .
```
2. Start pytest with this command:
```
python3 -m pytest --dist=each ...
```
3. Observe that pytest hangs, at (with PYTEST_DEBUG=1):
```
pytest_xdist_setupnodes [hook]
config: <_pytest.config.Config object at 0x7f6d09e91358>
specs: [<XSpec 'id=debian-latest-192.168.9.111//ssh=-i docker/ssh-config/id_rsa 192.168.9.111 -l root//python=python'>]
debian-latest-192.168.9.111 I finish pytest_xdist_setupnodes --> [] [hook]
setting up nodes [config:nodemanager]
early skip of rewriting module: thread [assertion]
pytest_xdist_newgateway [hook]
gateway: <Gateway id='debian-latest-192.168.9.111' receive-live, thread model, 0 active channels>
early skip of rewriting module: __builtin__ [assertion]
[debian-latest-192.168.9.111] linux2 Python 2.7.13 cwd: /root/pyexecnetcache
debian-latest-192.168.9.111 C finish pytest_xdist_newgateway --> [] [hook]
pytest_xdist_rsyncstart [hook]
source: /home/micek/.local/lib/python3.6/site-packages/py
gateways: [<Gateway id='debian-latest-192.168.9.111' receive-live, thread model, 1 active channels>]
finish pytest_xdist_rsyncstart --> [] [hook]
```
Stack trace log with `--fulltrace`:
```
config = <_pytest.config.Config object at 0x7f6d09e91358>, doit = <function _main at 0x7f6d0a39b598>
def wrap_session(config, doit):
"""Skeleton command line program"""
session = Session(config)
session.exitstatus = EXIT_OK
initstate = 0
try:
try:
config._do_configure()
initstate = 1
> config.hook.pytest_sessionstart(session=session)
/usr/local/lib/python3.6/dist-packages/_pytest/main.py:201:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <_HookCaller 'pytest_sessionstart'>, args = (), kwargs = {'session': <Session pytest_framework>}, notincall = set()
def __call__(self, *args, **kwargs):
if args:
raise TypeError("hook calling supports only keyword arguments")
assert not self.is_historic()
if self.argnames:
notincall = set(self.argnames) - set(['__multicall__']) - set(
kwargs.keys())
if notincall:
warnings.warn(
"Argument(s) {} which are declared in the hookspec "
"can not be found in this hook call"
.format(tuple(notincall)),
stacklevel=2,
)
> return self._hookexec(self, self._nonwrappers + self._wrappers, kwargs)
/usr/local/lib/python3.6/dist-packages/pluggy/hooks.py:258:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <_pytest.config.PytestPluginManager object at 0x7f6d0c2ebf28>, hook = <_HookCaller 'pytest_sessionstart'>
methods = [<HookImpl plugin_name='dsession', plugin=<xdist.dsession.DSession object at 0x7f6d0893f4a8>>, <HookImpl plugin_name='...6d08c079e8>>, <HookImpl plugin_name='logging-plugin', plugin=<_pytest.logging.LoggingPlugin object at 0x7f6d08c076d8>>]
kwargs = {'session': <Session pytest_framework>}
def _hookexec(self, hook, methods, kwargs):
# called from all hookcaller instances.
# enable_tracing will set its own wrapping function at self._inner_hookexec
> return self._inner_hookexec(hook, methods, kwargs)
/usr/local/lib/python3.6/dist-packages/pluggy/manager.py:67:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <pluggy._tracing._TracedHookExecution object at 0x7f6d0a3645f8>, hook = <_HookCaller 'pytest_sessionstart'>
hook_impls = [<HookImpl plugin_name='dsession', plugin=<xdist.dsession.DSession object at 0x7f6d0893f4a8>>, <HookImpl plugin_name='...6d08c079e8>>, <HookImpl plugin_name='logging-plugin', plugin=<_pytest.logging.LoggingPlugin object at 0x7f6d08c076d8>>]
kwargs = {'session': <Session pytest_framework>}
def __call__(self, hook, hook_impls, kwargs):
self.before(hook.name, hook_impls, kwargs)
outcome = _Result.from_call(lambda: self.oldcall(hook, hook_impls, kwargs))
self.after(outcome, hook.name, hook_impls, kwargs)
> return outcome.get_result()
/usr/local/lib/python3.6/dist-packages/pluggy/_tracing.py:82:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
> outcome = _Result.from_call(lambda: self.oldcall(hook, hook_impls, kwargs))
/usr/local/lib/python3.6/dist-packages/pluggy/_tracing.py:80:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
hook = <_HookCaller 'pytest_sessionstart'>
methods = [<HookImpl plugin_name='dsession', plugin=<xdist.dsession.DSession object at 0x7f6d0893f4a8>>, <HookImpl plugin_name='...6d08c079e8>>, <HookImpl plugin_name='logging-plugin', plugin=<_pytest.logging.LoggingPlugin object at 0x7f6d08c076d8>>]
kwargs = {'session': <Session pytest_framework>}
self._inner_hookexec = lambda hook, methods, kwargs: \
hook.multicall(
methods, kwargs,
> firstresult=hook.spec_opts.get('firstresult'),
)
/usr/local/lib/python3.6/dist-packages/pluggy/manager.py:61:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <xdist.dsession.DSession object at 0x7f6d0893f4a8>, session = <Session pytest_framework>
@pytest.mark.trylast
def pytest_sessionstart(self, session):
"""Creates and starts the nodes.
The nodes are setup to put their events onto self.queue. As
soon as nodes start they will emit the worker_workerready event.
"""
self.nodemanager = NodeManager(self.config)
> nodes = self.nodemanager.setup_nodes(putevent=self.queue.put)
/usr/local/lib/python3.6/dist-packages/xdist/dsession.py:81:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <xdist.workermanage.NodeManager object at 0x7f6d089aee80>, putevent = <bound method Queue.put of <queue.Queue object at 0x7f6d0895f198>>
def setup_nodes(self, putevent):
self.config.hook.pytest_xdist_setupnodes(config=self.config, specs=self.specs)
self.trace("setting up nodes")
nodes = []
for spec in self.specs:
> nodes.append(self.setup_node(spec, putevent))
/usr/local/lib/python3.6/dist-packages/xdist/workermanage.py:68:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <xdist.workermanage.NodeManager object at 0x7f6d089aee80>, spec = <XSpec 'id=debian-latest-192.168.9.111//ssh=-i docker/ssh-config/id_rsa 192.168.9.111 -l root//python=python'>
putevent = <bound method Queue.put of <queue.Queue object at 0x7f6d0895f198>>
def setup_node(self, spec, putevent):
gw = self.group.makegateway(spec)
self.config.hook.pytest_xdist_newgateway(gateway=gw)
> self.rsync_roots(gw)
/usr/local/lib/python3.6/dist-packages/xdist/workermanage.py:74:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <xdist.workermanage.NodeManager object at 0x7f6d089aee80>, gateway = <Gateway id='debian-latest-192.168.9.111' not-receiving, thread model, 0 active channels>
def rsync_roots(self, gateway):
"""Rsync the set of roots to the node's gateway cwd."""
if self.roots:
for root in self.roots:
> self.rsync(gateway, root, **self.rsyncoptions)
/usr/local/lib/python3.6/dist-packages/xdist/workermanage.py:61:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <xdist.workermanage.NodeManager object at 0x7f6d089aee80>, gateway = <Gateway id='debian-latest-192.168.9.111' not-receiving, thread model, 0 active channels>
source = local('/home/micek/.local/lib/python3.6/site-packages/py'), notify = None, verbose = 2, ignores = ['.*', '*.pyc', '*.pyo', '*~']
def rsync(self, gateway, source, notify=None, verbose=False, ignores=None):
"""Perform rsync to remote hosts for node."""
# XXX This changes the calling behaviour of
# pytest_xdist_rsyncstart and pytest_xdist_rsyncfinish to
# be called once per rsync target.
rsync = HostRSync(source, verbose=verbose, ignores=ignores)
spec = gateway.spec
if spec.popen and not spec.chdir:
# XXX This assumes that sources are python-packages
# and that adding the basedir does not hurt.
gateway.remote_exec(
"""
import sys ; sys.path.insert(0, %r)
"""
% os.path.dirname(str(source))
).waitclose()
return
if (spec, source) in self._rsynced_specs:
return
def finished():
if notify:
notify("rsyncrootready", spec, source)
rsync.add_target_host(gateway, finished=finished)
self._rsynced_specs.add((spec, source))
self.config.hook.pytest_xdist_rsyncstart(source=source, gateways=[gateway])
> rsync.send()
/usr/local/lib/python3.6/dist-packages/xdist/workermanage.py:148:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <xdist.workermanage.HostRSync object at 0x7f6d08983cc0>, raises = True
def send(self, raises=True):
""" Sends a sourcedir to all added targets. Flag indicates
whether to raise an error or return in case of lack of
targets
"""
if not self._channels:
if raises:
raise IOError("no targets available, maybe you "
"are trying call send() twice?")
return
# normalize a trailing '/' away
self._sourcedir = os.path.dirname(os.path.join(self._sourcedir, 'x'))
# send directory structure and file timestamps/sizes
self._send_directory_structure(self._sourcedir)
# paths and to_send are only used for doing
# progress-related callbacks
self._paths = {}
self._to_send = {}
# send modified file to clients
while self._channels:
print("self._channels - before self._receivequeue.get(): [%s]" % self._channels)
> channel, req = self._receivequeue.get()
/usr/local/lib/python3.6/dist-packages/execnet/rsync.py:128:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <queue.Queue object at 0x7f6d089836d8>, block = True, timeout = None
def get(self, block=True, timeout=None):
'''Remove and return an item from the queue.
If optional args 'block' is true and 'timeout' is None (the default),
block if necessary until an item is available. If 'timeout' is
a non-negative number, it blocks at most 'timeout' seconds and raises
the Empty exception if no item was available within that time.
Otherwise ('block' is false), return an item if one is immediately
available, else raise the Empty exception ('timeout' is ignored
in that case).
'''
with self.not_empty:
if not block:
if not self._qsize():
raise Empty
elif timeout is None:
while not self._qsize():
> self.not_empty.wait()
/usr/lib/python3.6/queue.py:164:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <Condition(<unlocked _thread.lock object at 0x7f6d08938d50>, 0)>, timeout = None
def wait(self, timeout=None):
"""Wait until notified or until a timeout occurs.
If the calling thread has not acquired the lock when this method is
called, a RuntimeError is raised.
This method releases the underlying lock, and then blocks until it is
awakened by a notify() or notify_all() call for the same condition
variable in another thread, or until the optional timeout occurs. Once
awakened or timed out, it re-acquires the lock and returns.
When the timeout argument is present and not None, it should be a
floating point number specifying a timeout for the operation in seconds
(or fractions thereof).
When the underlying lock is an RLock, it is not released using its
release() method, since this may not actually unlock the lock when it
was acquired multiple times recursively. Instead, an internal interface
of the RLock class is used, which really unlocks it even when it has
been recursively acquired several times. Another internal interface is
then used to restore the recursion level when the lock is reacquired.
"""
if not self._is_owned():
raise RuntimeError("cannot wait on un-acquired lock")
waiter = _allocate_lock()
waiter.acquire()
self._waiters.append(waiter)
saved_state = self._release_save()
gotit = False
try: # restore state no matter what (e.g., KeyboardInterrupt)
if timeout is None:
> waiter.acquire()
E KeyboardInterrupt
```
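For what it's worth, the call the process hangs in is a plain `queue.Queue.get()` with no timeout, which blocks forever if no item ever arrives. A standalone illustration (my own snippet, not execnet code) — passing a timeout turns the silent hang into a catchable `queue.Empty`:

```python
import queue

q = queue.Queue()  # nothing will ever be put into it

# q.get() with no timeout is what rsync's send() loop calls above:
# it would block this thread indefinitely.

# With a timeout, the same empty-queue situation raises instead of hanging:
try:
    q.get(timeout=0.1)
except queue.Empty:
    print("no item arrived; without the timeout this would hang forever")
```

If `_receivequeue.get()` accepted a timeout like this, the deadlock would at least surface as an exception instead of needing Ctrl-C.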
I also added my own debug prints near `execnet/rsync.py -> send()`:
```
self._channels - before self._receivequeue.get(): [{<Channel id=5 open>: <function NodeManager.rsync.<locals>.finished at 0x7f6d089c7e18>}]
channel - after self._receivequeue.get(): [<Channel id=5 open>]
req - after self._receivequeue.get(): [(b'send', (['_vendored_packages', 'apipkg.py'], None))]
self._channels - before self._receivequeue.get(): [{<Channel id=5 open>: <function NodeManager.rsync.<locals>.finished at 0x7f6d089c7e18>}]
channel - after self._receivequeue.get(): [<Channel id=5 open>]
req - after self._receivequeue.get(): [(b'send', (['_vendored_packages', '__init__.py'], None))]
self._channels - before self._receivequeue.get(): [{<Channel id=5 open>: <function NodeManager.rsync.<locals>.finished at 0x7f6d089c7e18>}]
channel - after self._receivequeue.get(): [<Channel id=5 open>]
req - after self._receivequeue.get(): [(b'send', (['_vendored_packages', 'iniconfig.py'], None))]
self._channels - before self._receivequeue.get(): [{<Channel id=5 open>: <function NodeManager.rsync.<locals>.finished at 0x7f6d089c7e18>}]
channel - after self._receivequeue.get(): [<Channel id=5 open>]
req - after self._receivequeue.get(): [(b'send', (['_log', 'log.py'], None))]
self._channels - before self._receivequeue.get(): [{<Channel id=5 open>: <function NodeManager.rsync.<locals>.finished at 0x7f6d089c7e18>}]
channel - after self._receivequeue.get(): [<Channel id=5 open>]
req - after self._receivequeue.get(): [(b'send', (['_log', '__init__.py'], None))]
self._channels - before self._receivequeue.get(): [{<Channel id=5 open>: <function NodeManager.rsync.<locals>.finished at 0x7f6d089c7e18>}]
channel - after self._receivequeue.get(): [<Channel id=5 open>]
req - after self._receivequeue.get(): [(b'send', (['_log', 'warning.py'], None))]
self._channels - before self._receivequeue.get(): [{<Channel id=5 open>: <function NodeManager.rsync.<locals>.finished at 0x7f6d089c7e18>}]
channel - after self._receivequeue.get(): [<Channel id=5 open>]
req - after self._receivequeue.get(): [(b'send', (['_error.py'], None))]
self._channels - before self._receivequeue.get(): [{<Channel id=5 open>: <function NodeManager.rsync.<locals>.finished at 0x7f6d089c7e18>}]
channel - after self._receivequeue.get(): [<Channel id=5 open>]
req - after self._receivequeue.get(): [(b'send', (['__metainfo.py'], None))]
self._channels - before self._receivequeue.get(): [{<Channel id=5 open>: <function NodeManager.rsync.<locals>.finished at 0x7f6d089c7e18>}]
channel - after self._receivequeue.get(): [<Channel id=5 open>]
req - after self._receivequeue.get(): [(b'send', (['_xmlgen.py'], None))]
self._channels - before self._receivequeue.get(): [{<Channel id=5 open>: <function NodeManager.rsync.<locals>.finished at 0x7f6d089c7e18>}]
channel - after self._receivequeue.get(): [<Channel id=5 open>]
req - after self._receivequeue.get(): [(b'send', (['_path', 'local.py'], None))]
self._channels - before self._receivequeue.get(): [{<Channel id=5 open>: <function NodeManager.rsync.<locals>.finished at 0x7f6d089c7e18>}]
channel - after self._receivequeue.get(): [<Channel id=5 open>]
req - after self._receivequeue.get(): [(b'send', (['_path', 'common.py'], None))]
self._channels - before self._receivequeue.get(): [{<Channel id=5 open>: <function NodeManager.rsync.<locals>.finished at 0x7f6d089c7e18>}]
channel - after self._receivequeue.get(): [<Channel id=5 open>]
req - after self._receivequeue.get(): [(b'send', (['_path', 'cacheutil.py'], None))]
self._channels - before self._receivequeue.get(): [{<Channel id=5 open>: <function NodeManager.rsync.<locals>.finished at 0x7f6d089c7e18>}]
channel - after self._receivequeue.get(): [<Channel id=5 open>]
req - after self._receivequeue.get(): [(b'send', (['_path', 'svnwc.py'], None))]
self._channels - before self._receivequeue.get(): [{<Channel id=5 open>: <function NodeManager.rsync.<locals>.finished at 0x7f6d089c7e18>}]
channel - after self._receivequeue.get(): [<Channel id=5 open>]
req - after self._receivequeue.get(): [(b'send', (['_path', '__init__.py'], None))]
self._channels - before self._receivequeue.get(): [{<Channel id=5 open>: <function NodeManager.rsync.<locals>.finished at 0x7f6d089c7e18>}]
channel - after self._receivequeue.get(): [<Channel id=5 open>]
req - after self._receivequeue.get(): [(b'send', (['_path', 'svnurl.py'], None))]
self._channels - before self._receivequeue.get(): [{<Channel id=5 open>: <function NodeManager.rsync.<locals>.finished at 0x7f6d089c7e18>}]
channel - after self._receivequeue.get(): [<Channel id=5 open>]
req - after self._receivequeue.get(): [(b'send', (['_io', 'terminalwriter.py'], None))]
self._channels - before self._receivequeue.get(): [{<Channel id=5 open>: <function NodeManager.rsync.<locals>.finished at 0x7f6d089c7e18>}]
channel - after self._receivequeue.get(): [<Channel id=5 open>]
req - after self._receivequeue.get(): [(b'send', (['_io', 'saferepr.py'], None))]
self._channels - before self._receivequeue.get(): [{<Channel id=5 open>: <function NodeManager.rsync.<locals>.finished at 0x7f6d089c7e18>}]
channel - after self._receivequeue.get(): [<Channel id=5 open>]
req - after self._receivequeue.get(): [(b'send', (['_io', '__init__.py'], None))]
self._channels - before self._receivequeue.get(): [{<Channel id=5 open>: <function NodeManager.rsync.<locals>.finished at 0x7f6d089c7e18>}]
channel - after self._receivequeue.get(): [<Channel id=5 open>]
req - after self._receivequeue.get(): [(b'send', (['_io', 'capture.py'], None))]
self._channels - before self._receivequeue.get(): [{<Channel id=5 open>: <function NodeManager.rsync.<locals>.finished at 0x7f6d089c7e18>}]
channel - after self._receivequeue.get(): [<Channel id=5 open>]
req - after self._receivequeue.get(): [(b'send', (['_code', 'code.py'], None))]
self._channels - before self._receivequeue.get(): [{<Channel id=5 open>: <function NodeManager.rsync.<locals>.finished at 0x7f6d089c7e18>}]
channel - after self._receivequeue.get(): [<Channel id=5 open>]
req - after self._receivequeue.get(): [(b'send', (['_code', '_assertionnew.py'], None))]
self._channels - before self._receivequeue.get(): [{<Channel id=5 open>: <function NodeManager.rsync.<locals>.finished at 0x7f6d089c7e18>}]
channel - after self._receivequeue.get(): [<Channel id=5 open>]
req - after self._receivequeue.get(): [(b'send', (['_code', '_assertionold.py'], None))]
self._channels - before self._receivequeue.get(): [{<Channel id=5 open>: <function NodeManager.rsync.<locals>.finished at 0x7f6d089c7e18>}]
channel - after self._receivequeue.get(): [<Channel id=5 open>]
req - after self._receivequeue.get(): [(b'send', (['_code', '_py2traceback.py'], None))]
self._channels - before self._receivequeue.get(): [{<Channel id=5 open>: <function NodeManager.rsync.<locals>.finished at 0x7f6d089c7e18>}]
channel - after self._receivequeue.get(): [<Channel id=5 open>]
req - after self._receivequeue.get(): [(b'send', (['_code', '__init__.py'], None))]
self._channels - before self._receivequeue.get(): [{<Channel id=5 open>: <function NodeManager.rsync.<locals>.finished at 0x7f6d089c7e18>}]
channel - after self._receivequeue.get(): [<Channel id=5 open>]
req - after self._receivequeue.get(): [(b'send', (['_code', 'source.py'], None))]
self._channels - before self._receivequeue.get(): [{<Channel id=5 open>: <function NodeManager.rsync.<locals>.finished at 0x7f6d089c7e18>}]
channel - after self._receivequeue.get(): [<Channel id=5 open>]
req - after self._receivequeue.get(): [(b'send', (['_code', 'assertion.py'], None))]
self._channels - before self._receivequeue.get(): [{<Channel id=5 open>: <function NodeManager.rsync.<locals>.finished at 0x7f6d089c7e18>}]
channel - after self._receivequeue.get(): [<Channel id=5 open>]
req - after self._receivequeue.get(): [(b'send', (['test.py'], None))]
self._channels - before self._receivequeue.get(): [{<Channel id=5 open>: <function NodeManager.rsync.<locals>.finished at 0x7f6d089c7e18>}]
channel - after self._receivequeue.get(): [<Channel id=5 open>]
req - after self._receivequeue.get(): [(b'send', (['__init__.py'], None))]
self._channels - before self._receivequeue.get(): [{<Channel id=5 open>: <function NodeManager.rsync.<locals>.finished at 0x7f6d089c7e18>}]
channel - after self._receivequeue.get(): [<Channel id=5 open>]
req - after self._receivequeue.get(): [(b'send', (['_version.py'], None))]
self._channels - before self._receivequeue.get(): [{<Channel id=5 open>: <function NodeManager.rsync.<locals>.finished at 0x7f6d089c7e18>}]
channel - after self._receivequeue.get(): [<Channel id=5 open>]
req - after self._receivequeue.get(): [(b'send', (['_builtin.py'], None))]
self._channels - before self._receivequeue.get(): [{<Channel id=5 open>: <function NodeManager.rsync.<locals>.finished at 0x7f6d089c7e18>}]
channel - after self._receivequeue.get(): [<Channel id=5 open>]
req - after self._receivequeue.get(): [(b'send', (['_std.py'], None))]
self._channels - before self._receivequeue.get(): [{<Channel id=5 open>: <function NodeManager.rsync.<locals>.finished at 0x7f6d089c7e18>}]
channel - after self._receivequeue.get(): [<Channel id=5 open>]
req - after self._receivequeue.get(): [(b'send', (['_process', 'forkedfunc.py'], None))]
self._channels - before self._receivequeue.get(): [{<Channel id=5 open>: <function NodeManager.rsync.<locals>.finished at 0x7f6d089c7e18>}]
channel - after self._receivequeue.get(): [<Channel id=5 open>]
req - after self._receivequeue.get(): [(b'send', (['_process', 'cmdexec.py'], None))]
self._channels - before self._receivequeue.get(): [{<Channel id=5 open>: <function NodeManager.rsync.<locals>.finished at 0x7f6d089c7e18>}]
channel - after self._receivequeue.get(): [<Channel id=5 open>]
req - after self._receivequeue.get(): [(b'send', (['_process', '__init__.py'], None))]
self._channels - before self._receivequeue.get(): [{<Channel id=5 open>: <function NodeManager.rsync.<locals>.finished at 0x7f6d089c7e18>}]
channel - after self._receivequeue.get(): [<Channel id=5 open>]
req - after self._receivequeue.get(): [(b'send', (['_process', 'killproc.py'], None))]
self._channels - before self._receivequeue.get(): [{<Channel id=5 open>: <function NodeManager.rsync.<locals>.finished at 0x7f6d089c7e18>}]
channel - after self._receivequeue.get(): [<Channel id=5 open>]
req - after self._receivequeue.get(): [(b'list_done', None)]
self._channels - before self._receivequeue.get(): [{<Channel id=5 open>: <function NodeManager.rsync.<locals>.finished at 0x7f6d089c7e18>}]
``` | open | 2019-02-06T09:03:28Z | 2019-02-06T09:03:28Z | https://github.com/pytest-dev/pytest-xdist/issues/413 | [] | mitzkia | 0 |
chaoss/augur | data-visualization | 2,656 | Libyears metric API | The canonical definition is here: https://chaoss.community/?p=3976 | open | 2023-11-30T18:08:09Z | 2024-09-14T16:02:46Z | https://github.com/chaoss/augur/issues/2656 | [
"API",
"first-timers-only"
] | sgoggins | 7 |
ymcui/Chinese-LLaMA-Alpaca | nlp | 42 | After merging the weights and quantizing, generation is very slow | Is it this slow for everyone else too? gpt4all runs fine for me, but after merging the Chinese LoRA weights and quantizing, generation is very slow. Is there something I have overlooked? | closed | 2023-04-03T10:06:08Z | 2023-04-10T01:01:16Z | https://github.com/ymcui/Chinese-LLaMA-Alpaca/issues/42 | [] | Trangle | 2 |
apachecn/ailearning | nlp | 183 | The downloaded PDF will not open | It reports: "Format error: not a PDF file, or the file is corrupted" | closed | 2017-10-25T02:20:20Z | 2017-10-25T02:57:20Z | https://github.com/apachecn/ailearning/issues/183 | [] | flymenn | 1 |
ghtmtt/DataPlotly | plotly | 221 | Errors during QGIS 3.12.2 install with apt on Ubuntu 20.04 | Yesterday I upgraded my Ubuntu 19.10 desktop to 20.04. During that, my QGIS repository was turned off and when I re-enabled it and redid the install of 3.12.2 I noticed the following error messages during the install:
```
Setting up python3-plotly (4.4.1+dfsg-1) ...
/usr/lib/python3/dist-packages/_plotly_utils/utils.py:214: SyntaxWarning: "is" with a literal. Did you mean "=="?
if (iso_string.split("-")[:3] is "00:00") or (iso_string.split("+")[0] is "00:00"):
/usr/lib/python3/dist-packages/_plotly_utils/utils.py:214: SyntaxWarning: "is" with a literal. Did you mean "=="?
if (iso_string.split("-")[:3] is "00:00") or (iso_string.split("+")[0] is "00:00"):
/usr/lib/python3/dist-packages/plotly/figure_factory/_candlestick.py:194: SyntaxWarning: "is" with a literal. Did you mean "=="?
if direction is "increasing":
/usr/lib/python3/dist-packages/plotly/figure_factory/_candlestick.py:199: SyntaxWarning: "is" with a literal. Did you mean "=="?
elif direction is "decreasing":
/usr/lib/python3/dist-packages/plotly/figure_factory/_ohlc.py:176: SyntaxWarning: "is" with a literal. Did you mean "=="?
if direction is "increasing":
/usr/lib/python3/dist-packages/plotly/figure_factory/_ohlc.py:179: SyntaxWarning: "is" with a literal. Did you mean "=="?
elif direction is "decreasing":
/usr/lib/python3/dist-packages/plotly/matplotlylib/renderer.py:460: SyntaxWarning: "is" with a literal. Did you mean "=="?
if props["offset_coordinates"] is "data":
/usr/lib/python3/dist-packages/plotly/matplotlylib/renderer.py:572: SyntaxWarning: "is not" with a literal. Did you mean "!="?
if props["coordinates"] is not "data":
```
To be clear, I haven't tried using the software; these messages were emitted during the `apt install` step.
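For anyone wondering what the warning actually means: `is` compares object identity rather than value, so comparisons like `direction is "increasing"` only work by accident of string interning. A standalone illustration (my own snippet, not plotly code) — the warnings are emitted while the files are byte-compiled and are harmless for the install itself:

```python
lit = "00:00"
built = ":".join(["00", "00"])  # same text, but constructed at runtime

print(built == lit)  # True  - value comparison, which is what the code intends
print(built is lit)  # False - identity comparison, which is what `is` does
```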
| closed | 2020-04-29T22:01:58Z | 2020-05-03T10:06:06Z | https://github.com/ghtmtt/DataPlotly/issues/221 | [] | monetschemist | 6 |
junyanz/pytorch-CycleGAN-and-pix2pix | computer-vision | 1,033 | cc | closed | 2020-05-19T20:50:10Z | 2020-05-25T10:21:07Z | https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix/issues/1033 | [] | kalai2033 | 0 | |
marshmallow-code/flask-marshmallow | sqlalchemy | 5 | Support file fields | It would be awesome if flask-marshmallow supported file upload fields along with a file size validator to go with it. I think this is within the scope of flask-marshmallow.
| closed | 2014-11-21T23:09:55Z | 2025-01-21T18:28:37Z | https://github.com/marshmallow-code/flask-marshmallow/issues/5 | [
"help wanted"
] | svenstaro | 8 |
marshmallow-code/marshmallow-sqlalchemy | sqlalchemy | 118 | how to serialize ForeignKey default? | I have a model with the field below:
```
corpus_id = Column(Integer, ForeignKey('corpus.id', ondelete='CASCADE'))
corpus = relationship('Corpus', backref=backref('documents', order_by=id, cascade='all, delete-orphan'), foreign_keys=corpus_id)
```
After serialization, it became:
```
{
"corpus": 1,
"created": "2017-09-12T05:50:07+00:00",
"id": 1,
"text": "text",
"title": "doc1",
}
```
I want `corpus_id`, not `corpus`. | closed | 2017-09-12T05:54:52Z | 2018-12-04T18:13:47Z | https://github.com/marshmallow-code/marshmallow-sqlalchemy/issues/118 | [] | eromoe | 4 |
piskvorky/gensim | data-science | 2,924 | Add perplexity computation as an evaluation metric for LDA and LdaSeqModel | I have been trying to compute perplexity, topic diversity, and topic quality for LDA and LdaSeqModel, but I can't seem to find any method that provides them | open | 2020-08-27T22:24:01Z | 2020-08-27T22:24:01Z | https://github.com/piskvorky/gensim/issues/2924 | [] | Emekaborisama | 0 |
ScrapeGraphAI/Scrapegraph-ai | machine-learning | 324 | Getting Langchain Output Parser Exception (Invalid JSON output) | **Describe the bug**
I am using the SmartScraperGraph to scrape data from a website, and it is giving me an "Invalid JSON output" error.
**To Reproduce**
This is my graph_config; for the rest of the code I am following the tutorial. I am using the latest release of ScrapeGraphAI.
The website source: https://www.sortlist.com/
prompt: Give me a summary of top 10 advertising agencies
```
graph_config = {
"llm": {
"model": "groq/llama3-8b-8192",
"api_key": groq_key,
"temperature": 0
},
"embeddings": {
"model": "ollama/nomic-embed-text",
"base_url": base_url, # set Ollama URL
},
"headless": False
}
```
**Screenshots**
```
Note that the output is a JSON object with a single property `links` which is an array of URLs.
Traceback (most recent call last):
File "/opt/miniconda3/envs/source-x-ai/lib/python3.10/site-packages/langchain_core/output_parsers/json.py", line 66, in parse_result
return parse_json_markdown(text)
File "/opt/miniconda3/envs/source-x-ai/lib/python3.10/site-packages/langchain_core/utils/json.py", line 147, in parse_json_markdown
return _parse_json(json_str, parser=parser)
File "/opt/miniconda3/envs/source-x-ai/lib/python3.10/site-packages/langchain_core/utils/json.py", line 160, in _parse_json
return parser(json_str)
File "/opt/miniconda3/envs/source-x-ai/lib/python3.10/site-packages/langchain_core/utils/json.py", line 120, in parse_partial_json
return json.loads(s, strict=strict)
File "/opt/miniconda3/envs/source-x-ai/lib/python3.10/json/__init__.py", line 359, in loads
return cls(**kw).decode(s)
File "/opt/miniconda3/envs/source-x-ai/lib/python3.10/json/decoder.py", line 337, in decode
obj, end = self.raw_decode(s, idx=_w(s, 0).end())
File "/opt/miniconda3/envs/source-x-ai/lib/python3.10/json/decoder.py", line 355, in raw_decode
raise JSONDecodeError("Expecting value", s, err.value) from None
json.decoder.JSONDecodeError: Expecting value: line 14 column 5 (char 519)
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/opt/miniconda3/envs/source-x-ai/lib/python3.10/site-packages/chainlit/utils.py", line 40, in wrapper
return await user_function(**params_values)
File "/Users/satyamkumar/development/pocs/python/webscraper-scrapegraph/test.py", line 64, in main
result = json.loads(user_scrapper_graph.run())
File "/opt/miniconda3/envs/source-x-ai/lib/python3.10/site-packages/scrapegraphai/graphs/smart_scraper_graph.py", line 118, in run
self.final_state, self.execution_info = self.graph.execute(inputs)
File "/opt/miniconda3/envs/source-x-ai/lib/python3.10/site-packages/scrapegraphai/graphs/base_graph.py", line 171, in execute
return self._execute_standard(initial_state)
File "/opt/miniconda3/envs/source-x-ai/lib/python3.10/site-packages/scrapegraphai/graphs/base_graph.py", line 110, in _execute_standard
result = current_node.execute(state)
File "/opt/miniconda3/envs/source-x-ai/lib/python3.10/site-packages/scrapegraphai/nodes/generate_answer_node.py", line 124, in execute
answer = map_chain.invoke({"question": user_prompt})
File "/opt/miniconda3/envs/source-x-ai/lib/python3.10/site-packages/langchain_core/runnables/base.py", line 3142, in invoke
output = {key: future.result() for key, future in zip(steps, futures)}
File "/opt/miniconda3/envs/source-x-ai/lib/python3.10/site-packages/langchain_core/runnables/base.py", line 3142, in <dictcomp>
output = {key: future.result() for key, future in zip(steps, futures)}
File "/opt/miniconda3/envs/source-x-ai/lib/python3.10/concurrent/futures/_base.py", line 458, in result
return self.__get_result()
File "/opt/miniconda3/envs/source-x-ai/lib/python3.10/concurrent/futures/_base.py", line 403, in __get_result
raise self._exception
File "/opt/miniconda3/envs/source-x-ai/lib/python3.10/concurrent/futures/thread.py", line 58, in run
result = self.fn(*self.args, **self.kwargs)
File "/opt/miniconda3/envs/source-x-ai/lib/python3.10/site-packages/langchain_core/runnables/base.py", line 2499, in invoke
input = step.invoke(
File "/opt/miniconda3/envs/source-x-ai/lib/python3.10/site-packages/langchain_core/output_parsers/base.py", line 169, in invoke
return self._call_with_config(
File "/opt/miniconda3/envs/source-x-ai/lib/python3.10/site-packages/langchain_core/runnables/base.py", line 1626, in _call_with_config
context.run(
File "/opt/miniconda3/envs/source-x-ai/lib/python3.10/site-packages/langchain_core/runnables/config.py", line 347, in call_func_with_variable_args
return func(input, **kwargs) # type: ignore[call-arg]
File "/opt/miniconda3/envs/source-x-ai/lib/python3.10/site-packages/langchain_core/output_parsers/base.py", line 170, in <lambda>
lambda inner_input: self.parse_result(
File "/opt/miniconda3/envs/source-x-ai/lib/python3.10/site-packages/langchain_core/output_parsers/json.py", line 69, in parse_result
raise OutputParserException(msg, llm_output=text) from e
langchain_core.exceptions.OutputParserException: Invalid json output: Here is the JSON output:
```
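From the last line of the trace, the model prepends prose ("Here is the JSON output:") before the JSON object, which no strict parser will accept. A minimal reproduction plus a strip-to-first-brace workaround (my own sketch, not ScrapeGraphAI or langchain code — the real fix may differ):

```python
import json

raw = 'Here is the JSON output:\n{"agencies": ["Agency A", "Agency B"]}'

try:
    json.loads(raw)
except json.JSONDecodeError as err:
    print("fails like the trace:", err)  # Expecting value: line 1 column 1 (char 0)

# Hypothetical workaround: discard everything before the first brace.
data = json.loads(raw[raw.index("{"):])
print(data["agencies"])  # ['Agency A', 'Agency B']
```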
**Desktop (please complete the following information):**
- OS: McOS M3 pro
- Brave
| closed | 2024-06-01T15:46:22Z | 2024-06-04T16:21:48Z | https://github.com/ScrapeGraphAI/Scrapegraph-ai/issues/324 | [] | nashugame | 7 |
reloadware/reloadium | pandas | 74 | PyCharm plugin 0.9.0 does not support Python 3.10.6 on M2 | **Describe the bug**
When I click the orange Run button, this error occurs:
It seems like your platform or Python version are not supported yet.
Windows, Linux, macOS and Python 64 bit >= 3.7 (>= 3.9 for M1) <= 3.10 are currently supported.
**Desktop (please complete the following information):**
- OS: MacOS
- OS version: 12.5.1
- M1 chip: M2
- Reloadium package version: None
- PyCharm plugin version: 0.9.0
- Editor: pycharm
- Python Version: 3.10.6
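One check worth running first (my own suggestion, not from the reloadium docs): confirm the architecture and word size of the interpreter PyCharm points at, since an x86_64 Python running under Rosetta on Apple silicon reports `x86_64` rather than `arm64` and can trip platform detection:

```python
import platform
import sys

print(platform.machine(), platform.python_version(),
      "64-bit" if sys.maxsize > 2**32 else "32-bit")
```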
| closed | 2022-11-29T06:56:26Z | 2022-11-29T23:09:58Z | https://github.com/reloadware/reloadium/issues/74 | [] | endimirion | 1 |
scikit-learn/scikit-learn | data-science | 30,457 | Add checking if tree criterion/splitter are classes | ### Describe the workflow you want to enable
When creating custom splitters, criteria & models that inherit from the respective _scikit-learn_ classes, a very convenient (albeit currently impossible) approach is to pass the splitter & criterion classes as parameters to the model constructor. The currently supported parameter types are strings (referencing splitters & criteria already in _scikit-learn_) or objects. Because the splitters & criteria depend on parameters only known at fitting time, class support is needed during parameter parsing.
### Describe your proposed solution
Checking if the splitter/criterion is a class and constructing it accordingly. A code solution is available [here](https://github.com/gilramot/scikit-learn/commit/ed1b4f3920f6f1aa073c620abd29046cc12a1214).
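To make the proposed check concrete, a minimal stdlib-only sketch (hypothetical names — the real change is in the linked commit):

```python
import inspect

class MyCriterion:
    """Stands in for a user-defined Criterion subclass."""
    def __init__(self, n_outputs, n_classes):
        self.n_outputs = n_outputs
        self.n_classes = n_classes

def resolve_criterion(criterion, n_outputs, n_classes):
    """Accept a string, an instance, or (the new case) a class."""
    if isinstance(criterion, str):
        return f"built-in criterion: {criterion}"   # existing string-lookup path
    if inspect.isclass(criterion):
        # New branch: the class is constructed with fit-time parameters.
        return criterion(n_outputs, n_classes)
    return criterion                                # already-constructed instance

crit = resolve_criterion(MyCriterion, n_outputs=1, n_classes=3)
print(type(crit).__name__, crit.n_classes)  # MyCriterion 3
```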
### Describe alternatives you've considered, if relevant
_No response_
### Additional context
_No response_ | open | 2024-12-10T19:27:31Z | 2024-12-16T10:40:32Z | https://github.com/scikit-learn/scikit-learn/issues/30457 | [
"New Feature",
"Needs Info"
] | gilramot | 1 |
CTFd/CTFd | flask | 2,168 | Fix SAFE_MODE | SAFE_MODE isn't working from config.ini apparently at the moment. | closed | 2022-08-23T15:19:00Z | 2022-08-23T19:24:53Z | https://github.com/CTFd/CTFd/issues/2168 | [
"easy"
] | ColdHeat | 0 |
mouredev/Hello-Python | fastapi | 62 | Boss | Teacher | open | 2024-05-30T17:14:23Z | 2024-05-30T17:14:23Z | https://github.com/mouredev/Hello-Python/issues/62 | [] | IaAndres | 0
dmlc/gluon-nlp | numpy | 1,533 | Upgrade the version of WikiExtractor used in nlp_data | ## Description
Currently, `nlp_data` uses wikiextractor version 0.1, but the latest release is https://pypi.org/project/wikiextractor/3.0.4/. It would be good to upgrade to the latest version.
@DOUDOU0314 If you have time, would you take a look?
| open | 2021-03-01T03:16:46Z | 2022-07-23T18:15:15Z | https://github.com/dmlc/gluon-nlp/issues/1533 | [
"enhancement"
] | sxjscience | 2 |
tfranzel/drf-spectacular | rest-api | 1,105 | Pydantic extension blueprint produces empty schema for models | **Describe the bug**
I've followed #1006 and tried using [the extension blueprint for Pydantic 2](https://drf-spectacular.readthedocs.io/en/latest/blueprints.html#pydantic) to generate a schema for some pydantic models we use (as both request and response objects).
But the resulting object definitions in Swagger are just an object name with an empty definition:
```
C{
}
```
But the actual YAML schema does appear to include the properties:
```yaml
components:
schemas:
C:
$defs:
C:
properties:
id:
title: Id
type: integer
b:
$ref: '#/components/schemas'
d:
$ref: '#/components/schemas'
required:
- id
- b
- d
title: C
type: object
allOf:
- $ref: '#/components/schemas'
```
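One plausible reading of the YAML above: Pydantic 2's `model_json_schema()` nests referenced models under `$defs` and points at them with local `$ref`s, so if an extension embeds that dict unchanged into a component, the references cannot be resolved and UIs render an empty object like `C{ }`. Below is a minimal, self-contained sketch of one possible workaround that hoists `$defs` and rewrites the references — the helper name and the `#/components/schemas/` target prefix are assumptions, not drf-spectacular API:

```python
def rewrite_defs(schema):
    """Hoist Pydantic-2-style "$defs" out of a schema dict and rewrite
    local "$ref"s to point at OpenAPI components instead."""
    defs = schema.pop("$defs", {})

    def walk(node):
        if isinstance(node, dict):
            ref = node.get("$ref", "")
            if ref.startswith("#/$defs/"):
                # Redirect the local reference to the components section.
                node["$ref"] = "#/components/schemas/" + ref.rsplit("/", 1)[1]
            for value in node.values():
                walk(value)
        elif isinstance(node, list):
            for value in node:
                walk(value)

    walk(schema)
    for definition in defs.values():
        walk(definition)  # nested defs may reference each other
    return schema, defs
```

The returned `defs` would then need to be registered as separate components so the rewritten references resolve.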
**To Reproduce**
I used the example test model and view from [this comment](https://github.com/tfranzel/drf-spectacular/issues/1006#issuecomment-1598961285).
The example generated is just
```
"string"
```
And the object schema definition is just:
```
C{
}
```
**Expected behavior**
I'd expect to see the fields from the pydantic model `C`, ideally working with nested pydantic models.
| closed | 2023-11-13T14:53:13Z | 2023-11-13T19:14:27Z | https://github.com/tfranzel/drf-spectacular/issues/1105 | [] | devanubis | 5 |