| repo_name | topic | issue_number | title | body | state | created_at | updated_at | url | labels | user_login | comments_count |
|---|---|---|---|---|---|---|---|---|---|---|---|
plotly/plotly.py | plotly | 4,395 | plotly express showing Unicode characters with to_json() | I took some code from the plotly examples on the website and ran it in a notebook.
```py
import plotly.express as px
df = px.data.tips()
fig = px.histogram(df, x="total_bill")
fig.show()
(fig.to_json())[:150]
```
The hover text in the image that shows up in the notebook is fine, but when I export the JSON and use it on my website there are Unicode escape sequences present in the JSON:
```
'{"data":[{"alignmentgroup":"True","bingroup":"x","hovertemplate":"total_bill=%{x}\\u003cbr\\u003ecount=%{y}\\u003cextra\\u003e\\u003c\\u002fextra\\u003e","le'
```
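For what it's worth, the `\u003c`-style sequences are standard JSON string escapes for `<`, `>` and `/`; any JSON parser turns them back into the original characters. A quick stdlib check (illustrative snippet, not from the original report):

```python
import json

# plotly escapes "<" and ">" inside hovertemplate strings; a JSON parser undoes it
escaped = '"total_bill=%{x}\\u003cbr\\u003ecount=%{y}"'
print(json.loads(escaped))  # total_bill=%{x}<br>count=%{y}
```

So the escapes are cosmetic in the serialized form only; the parsed values on the website should be identical to what the notebook shows.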

| closed | 2023-10-25T18:22:38Z | 2024-07-11T17:18:52Z | https://github.com/plotly/plotly.py/issues/4395 | [] | AbdealiLoKo | 1 |
sqlalchemy/sqlalchemy | sqlalchemy | 10,675 | syntax error with mysql bulk update via INSERT ... SELECT ... ON DUPLICATE KEY UPDATE | ### Discussed in https://github.com/sqlalchemy/sqlalchemy/discussions/10514
<div type='discussions-op-text'>
<sup>Originally posted by **anentropic** October 20, 2023</sup>
I am trying to bulk update via an insert on duplicate, but using `from_select` (instead of a list of values, as seen here https://github.com/sqlalchemy/sqlalchemy/discussions/9328).
I am getting an error from MySQL like:
```sql
(MySQLdb.ProgrammingError) (1064, "You have an error in your SQL syntax; check the manual that corresponds to
your MySQL server version for the right syntax to use near
'AS new ON DUPLICATE KEY UPDATE dealer_dealer_name = new.dealer_dealer_name, cust' at line 3")
[SQL: INSERT INTO mytable (...) SELECT dealer_dealer_names.value AS dealer_dealer_name, ...
FROM mytable
LEFT OUTER JOIN etl_anonymised_company_name AS dealer_dealer_names ON md5(mytable.dealer_dealer_name) = dealer_dealer_names.`key`
...<more similar joins>...
WHERE mytable.last_modified_date >= %s AND mytable.last_modified_date < %s
ORDER BY mytable.last_modified_date
AS new
ON DUPLICATE KEY UPDATE dealer_dealer_name = new.dealer_dealer_name, ...]
```
My SQLAlchemy code looks like:
```python
insert_clause = insert(MyTable).from_select(
    [col.name for col in insert_columns],
    make_select_query(insert_columns),
)
update_query = (
    insert_clause.on_duplicate_key_update({
        col_name: getattr(insert_clause.inserted, col_name)
        for col_name in update_columns
    })
)
with engine.begin() as conn:
    result = conn.execute(update_query)
```
The MySQL error seems to imply that it doesn't like how SQLAlchemy has aliased the select query `AS new` (it's not something I have done explicitly in my `select` instance).
If I print `str(update_query)` it looks different: there's no select alias, and the field updates look like `ON DUPLICATE KEY UPDATE dealer_dealer_name = VALUES(dealer_dealer_name)`.
I realise this isn't a minimal reproducible case at the moment; I just wondered if there's something obvious I should be doing differently.</div> | open | 2023-11-22T19:55:10Z | 2023-11-22T19:55:10Z | https://github.com/sqlalchemy/sqlalchemy/issues/10675 | [
"bug",
"mysql",
"PRs (with tests!) welcome",
"dml"
] | CaselIT | 0 |
deepinsight/insightface | pytorch | 2,188 | [ONNXRuntimeError] : 1 : FAIL : Non-zero status code returned while running BatchNormalization node. | I got this error while testing my new model on the http://iccv21-mfr.com/ server. I don't know the root cause of the problem. Is it caused by the model-to-ONNX converter, or by the version of onnxruntime running on the server? | open | 2022-12-07T01:06:31Z | 2022-12-08T05:33:38Z | https://github.com/deepinsight/insightface/issues/2188 | [] | Sengli11 | 1 |
Kanaries/pygwalker | pandas | 389 | Pygwalker in Streamlit Python 3.9 | Hi,
Has anyone tested pygwalker in Streamlit with Python 3.9 against Amazon Redshift?
Locally the code runs well, but when deployed to the cloud we get the error below.
All ideas appreciated, thanks!
ModuleNotFoundError: No module named '_sqlite3'
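As a side note (not from the original report), this error usually means the cloud interpreter was built without SQLite support; a quick way to check which situation you are in is:

```python
# check whether this Python build includes the _sqlite3 extension module
try:
    import sqlite3
    print("sqlite3 available, SQLite version", sqlite3.sqlite_version)
except ModuleNotFoundError:
    print("this Python build is missing the _sqlite3 extension")
```

If the import fails, the fix is on the environment side (a Python build with SQLite), not in pygwalker itself.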
| closed | 2024-01-10T13:03:50Z | 2024-01-18T00:54:14Z | https://github.com/Kanaries/pygwalker/issues/389 | [
"fixed but needs feedback"
] | ghost | 1 |
errbotio/errbot | automation | 1,216 | Error in the docs reported by Google search index. | http://errbot.io/en/4.2/_modules/errbot/backends/test.html is probably referenced somewhere but points to nothing. | closed | 2018-05-17T11:48:00Z | 2020-01-19T04:30:42Z | https://github.com/errbotio/errbot/issues/1216 | [
"type: documentation"
] | gbin | 1 |
NullArray/AutoSploit | automation | 963 | Divided by zero exception339 | Error: Attempted to divide by zero.339 | closed | 2019-04-19T16:03:52Z | 2019-04-19T16:35:38Z | https://github.com/NullArray/AutoSploit/issues/963 | [] | AutosploitReporter | 0 |
supabase/supabase-py | flask | 395 | Incorrect padding when setting session from URL encoded access_token | **Describe the bug**
I'm trying to handle the redirects for verify-user and password reset. I have grabbed the access_token and refresh_token from the URL and passed them to supabase.auth.set_session(access_token, refresh_token), and I immediately get an 'Incorrect padding' error.
I can decode the tokens on jwt.io with no problem, so I was confused about the issue. Anyway, GPT-4 came up with a custom solution that actually worked.
Code that reproduces the error:
```
access_token = 'foo'
refresh_token = 'bar'
session = supabase.auth.set_session(access_token, refresh_token)
response = supabase.auth.update_user({"password": new_password})
```
Potential solution:
```
import base64
import json

def custom_decode_jwt_payload(self, token: str):
    _, payload, _ = token.split(".")
    payload += "=" * (-len(payload) % 4)  # restore the stripped base64 padding
    payload = base64.urlsafe_b64decode(payload)
    return json.loads(payload)

# SyncGoTrueClient is the gotrue client class used internally by supabase-py
SyncGoTrueClient._decode_jwt = custom_decode_jwt_payload

session = supabase.auth.set_session(access_token, refresh_token)
response = supabase.auth.update_user({"password": new_password})
```
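The `-len(payload) % 4` trick pads the base64 segment back to a multiple of four characters, which `base64.urlsafe_b64decode` requires. A stdlib round-trip illustrating why this fixes the error (illustrative only, not supabase-py code):

```python
import base64
import json

claims = {"sub": "user-123"}
# JWT segments strip the trailing "=" padding, which is what breaks strict decoders
segment = base64.urlsafe_b64encode(json.dumps(claims).encode()).rstrip(b"=").decode()
padded = segment + "=" * (-len(segment) % 4)
print(json.loads(base64.urlsafe_b64decode(padded)))  # {'sub': 'user-123'}
```

Decoding `segment` directly (without re-padding) raises the same `binascii.Error: Incorrect padding`.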
**To Reproduce**
Steps to reproduce the behavior:
```
access_token = 'foo'
refresh_token = 'bar'
session = supabase.auth.set_session(access_token, refresh_token)
response = supabase.auth.update_user({"password": new_password})
```
**Expected behavior**
Expecting a session to be made, but it's erroring with 'Incorrect padding'.
| closed | 2023-03-16T12:49:32Z | 2023-09-17T15:12:05Z | https://github.com/supabase/supabase-py/issues/395 | [] | philmade | 2 |
tensorpack/tensorpack | tensorflow | 605 | SyncMultiGPUTrainerReplicated-shared GPUs hang | In short: two tensorpack processes, both using `SyncMultiGPUTrainerReplicated` and two same GPUs, hang.
To reproduce:
1. Choose an example using `SyncMultiGPUTrainerReplicated`, e.g. `tensorpack/examples/ResNet/imagenet-resnet.py`.
To make GPUs sharable, prevent one process from consuming all memory by appending the following snippet at the top of the script:
```python
import tensorflow as tf
tmp_config = tf.ConfigProto()
tmp_config.gpu_options.allow_growth = True
tmp_session = tf.Session(config=tmp_config)
```
2. Run the script twice.
2.a. Run once:
```bash
gpu=0,1; CUDA_VISIBLE_DEVICES=${gpu} python imagenet-resnet.py --gpu ${gpu} --fake
```
2.b. Wait until the first process completes a few epochs, then run the same line again.
As soon as the second process starts its training, both processes hang. It is impossible to kill either of them or to free the GPUs' memory and utilization, and no error is reported. The only solution is to restart the server.
I don't know whether `SyncMultiGPUTrainerReplicated` or the `allow_growth` snippet is misused in this case, or whether it's a bug.
Thanks for any help! | closed | 2018-01-23T15:39:17Z | 2018-06-15T08:05:20Z | https://github.com/tensorpack/tensorpack/issues/605 | [
"upstream issue"
] | arrowrowe | 3 |
adbar/trafilatura | web-scraping | 24 | Only one author extracted, even when there are multiple | Example article: https://www.nytimes.com/2020/10/19/us/politics/trump-ads-biden-election.html
This is authored by _Maggie Haberman, Shane Goldmacher and Michael Crowley_, but trafilatura will only show the first one. They are all in the JSON-LD so I think they should all be extracted, and author should be an array. | closed | 2020-10-22T18:04:34Z | 2020-11-06T15:20:48Z | https://github.com/adbar/trafilatura/issues/24 | [] | atestu | 4 |
man-group/arctic | pandas | 71 | Benchmarking | Hello,
It would be nice to provide some benchmark files.
nose-timer can help: https://github.com/mahmoudimus/nose-timer
Here is an example which can be extended to Arctic:
``` python
import time
import numpy as np
import numpy.ma as ma
import pandas as pd
pd.set_option('max_rows', 10)
pd.set_option('expand_frame_repr', False)
pd.set_option('max_columns', 12)
import pymongo
import monary
import xray
from odo import odo  # needed by Test04OdoPandasDataFrame below
URI_DEFAULT = 'mongodb://127.0.0.1:27017'
N_DEFAULT = 50000
def ticks(N):
    idx = pd.date_range('20150101', freq='ms', periods=N)
    bids = np.random.uniform(0.8, 1.0, N)
    spread = np.random.uniform(0, 0.0001, N)
    asks = bids + spread
    df_ticks = pd.DataFrame({'Bid': bids, 'Ask': asks}, index=idx)
    df_ticks['Symbol'] = 'CUR1/CUR2'
    df_ticks = df_ticks.reset_index()
    return df_ticks


class Test00Pandas:
    @classmethod
    def setupClass(cls):
        N = N_DEFAULT
        cls.df = ticks(N)

    def test_01_to_dict_01_records(self):
        d = self.df.to_dict('records')

    def test_01_to_dict_02_split(self):
        d = self.df.to_dict('split')


class Test01PyMongoPandasDataFrame:
    """
    PyMongo and Pandas DataFrame
    """
    @classmethod
    def setupClass(cls):
        N = N_DEFAULT
        URI = URI_DEFAULT
        cls.db_name = 'benchdb_pymongo'
        cls.collection_name = 'ticks'
        cls.df = ticks(N)
        cls.columns = ['Bid', 'Ask']
        cls.df = cls.df[cls.columns]
        cls.client = pymongo.MongoClient(URI)
        cls.client.drop_database(cls.db_name)
        cls.collection = cls.client[cls.db_name][cls.collection_name]

    def setUp(self):
        pass

    def tearDown(self):
        pass

    def test_01_store(self):
        print(self.df)
        self.collection.insert_many(self.df.to_dict('records'))
        #time.sleep(2)

    def test_02_retrieve(self):
        df_retrieved = pd.DataFrame(list(self.client[self.db_name][self.collection_name].find()))
        print(df_retrieved)


class Test02MonaryPandasDataFrame:
    """
    Monary and Pandas DataFrame
    """
    @classmethod
    def setupClass(cls):
        N = N_DEFAULT
        URI = URI_DEFAULT
        cls.db_name = 'benchdb_monary'
        cls.collection_name = 'ticks'
        cls.df = ticks(N)
        cls.columns = ['Bid', 'Ask']
        cls._client = pymongo.MongoClient(URI)
        cls._client.drop_database(cls.db_name)
        cls.m = monary.Monary(URI)

    def test_01_store(self):
        #ma.masked_array(self.df['Symbol'].values, self.df['Symbol'].isnull()),
        mparams = monary.MonaryParam.from_lists([
            ma.masked_array(self.df['Bid'].values, self.df['Bid'].isnull()),
            ma.masked_array(self.df['Ask'].values, self.df['Ask'].isnull())],
            self.columns)
        self.m.insert(self.db_name, self.collection_name, mparams)

    def test_02_retrieve(self):
        arrays = self.m.query(self.db_name, self.collection_name, {}, self.columns, ['float64', 'float64'])
        print(arrays)
        df_retrieved = pd.DataFrame(arrays)
        print(df_retrieved)


class Test03MonaryXrayDataset:
    """
    Monary and xray
    https://bitbucket.org/djcbeach/monary/issues/21/use-xraydataset-with-monary
    """
    @classmethod
    def setupClass(cls):
        N = N_DEFAULT
        URI = URI_DEFAULT
        cls.db_name = 'benchdb_monary_xray'
        cls.collection_name = 'ticks'
        cls._df = ticks(N)
        cls.ds = xray.Dataset.from_dataframe(cls._df)
        cls.columns = ['Bid', 'Ask']
        cls.ds = cls.ds[cls.columns]
        cls._client = pymongo.MongoClient(URI)
        cls._client.drop_database(cls.db_name)
        cls.m = monary.Monary(URI)

    def test_01_store(self):
        lst_cols = list(map(lambda col: self.ds[col].to_masked_array(), self.ds.data_vars))
        mparams = monary.MonaryParam.from_lists(lst_cols, list(self.ds.data_vars), ['float64', 'float64'])
        self.m.insert(self.db_name, self.collection_name, mparams)


class Test04OdoPandasDataFrame:
    """
    Pandas DataFrame and odo
    """
    @classmethod
    def setupClass(cls):
        N = N_DEFAULT
        URI = URI_DEFAULT
        cls.db_name = 'benchdb_odo'
        cls.collection_name = 'ticks'
        cls.df = ticks(N)
        cls.columns = ['Bid', 'Ask']
        cls.df = cls.df[cls.columns]
        cls.client = pymongo.MongoClient(URI)
        cls.client.drop_database(cls.db_name)
        cls.collection = cls.client[cls.db_name][cls.collection_name]

    def test_01_store(self):
        odo(self.df, self.collection)

    def test_02_retrieve(self):
        df_retrieved = odo(self.collection, pd.DataFrame)
```
it shows:
```
test_mongodb.Test02MonaryPandasDataFrame.test_02_retrieve: 3.7676s
test_mongodb.Test01PyMongoPandasDataFrameToDictRecords.test_01_store: 3.1900s
test_mongodb.Test04OdoPandasDataFrame.test_01_store: 3.0213s
test_mongodb.Test00Pandas.test_01_to_dict_01_records: 1.6180s
test_mongodb.Test02MonaryPandasDataFrame.test_01_store: 1.3025s
test_mongodb.Test03MonaryXrayDataset.test_01_store: 1.2680s
test_mongodb.Test00Pandas.test_01_to_dict_02_split: 1.2489s
test_mongodb.Test01PyMongoPandasDataFrameToDictRecords.test_02_retrieve: 0.5064s
test_mongodb.Test04OdoPandasDataFrame.test_02_retrieve: 0.4867s
```
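As a lighter-weight alternative to nose-timer, individual store/retrieve operations could also be timed with a small stdlib context manager (a sketch, not tied to Arctic's API):

```python
import time
from contextlib import contextmanager

@contextmanager
def timed(label):
    # print elapsed wall-clock time for the wrapped block
    start = time.perf_counter()
    yield
    print("%s: %.4fs" % (label, time.perf_counter() - start))

# usage: wrap any store/retrieve call, e.g. a pymongo insert_many or an Arctic write
with timed("example_workload"):
    total = sum(range(10**6))
```

This produces the same `name: seconds` style of output as the nose-timer report above.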
Pandas uses vbench https://github.com/pydata/vbench
| closed | 2015-12-27T11:03:30Z | 2016-04-29T12:08:58Z | https://github.com/man-group/arctic/issues/71 | [
"enhancement"
] | femtotrader | 3 |
microsoft/unilm | nlp | 1,526 | How to perform inference on a single image using fine-tuned LayoutLMv3 model? | I have fine-tuned a LayoutLMv3 model and now I want to utilize it for layout analysis and information extraction on a single image. I have successfully trained this model, but I'm facing some difficulties during the inference phase.
| open | 2024-04-19T09:00:11Z | 2024-07-26T06:09:08Z | https://github.com/microsoft/unilm/issues/1526 | [] | laminggg | 1 |
reiinakano/scikit-plot | scikit-learn | 102 | Add numerical digit precision parameter | Hi there,
I was wondering if there is a way of defining the digit numerical precision of values such as roc_auc.
To see what I mean, let me point you to the `sklearn` API, such as the one for [Classification Report](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.classification_report.html), where the parameter `digits` defines the precision at which values are presented.
This is especially important, for example, when one is training classifiers that are already in the top, say, +99.5% of accuracy/precision/recall/auc and we want to study differences amongst classifiers that are competing at the 0.1% level.
Namely, I noticed that digit precision is not consistent throughout `scikit-plot`: `roc_auc` is presented with three-digit precision, while `precision_recall` is presented with four-digit precision.
As you can imagine, for scientific publication purposes it's a bit *inelegant* to present bound metrics with different precision.
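To make the difference concrete, here is a small formatting sketch (plain Python, not scikit-plot code) showing how two classifiers at the +99.5% level collapse to the same value at three digits:

```python
auc_a, auc_b = 0.99543, 0.99512
print("{:.3f} {:.3f}".format(auc_a, auc_b))  # 0.995 0.995 -> indistinguishable
print("{:.4f} {:.4f}".format(auc_a, auc_b))  # 0.9954 0.9951 -> now separable
```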
Thanks! | open | 2019-05-31T07:51:25Z | 2019-07-08T10:02:12Z | https://github.com/reiinakano/scikit-plot/issues/102 | [
"enhancement",
"help wanted"
] | romanovzky | 1 |
AirtestProject/Airtest | automation | 1,206 | Airtest iOS connection API: connect_device cannot connect to multiple iOS devices | Currently, the Airtest script API connect_device can only bind 127.0.0.1:8100 when connecting an iOS device, so it cannot connect to multiple iOS devices. After starting WDA for multiple iOS devices via `tidevice -u uuid wdaproxy`, each device's WDA port is mapped to a different port on the macOS machine. However, even when connect_device is given 127.0.0.1 with different ports, in practice it always connects to the device at 127.0.0.1:8100, so multiple devices cannot be connected. | open | 2024-04-16T10:09:16Z | 2024-04-26T07:02:17Z | https://github.com/AirtestProject/Airtest/issues/1206 | [] | csushiye | 4 |
daleroberts/itermplot | matplotlib | 28 | Pandas | I was trying to follow along with the pandas plot tutorial, but none of the examples work.
Perhaps Pandas is not supported?
[pandas visualization tutorial](https://pandas.pydata.org/pandas-docs/version/0.18.1/visualization.html) | closed | 2017-11-25T15:17:12Z | 2021-09-07T13:27:42Z | https://github.com/daleroberts/itermplot/issues/28 | [] | michaelfresco | 3 |
csurfer/pyheat | matplotlib | 7 | Integration with Jupyter Notebooks | It would be really cool to integrate this within Jupyter notebooks through a magic command:

| closed | 2017-02-17T13:17:58Z | 2017-08-19T02:21:12Z | https://github.com/csurfer/pyheat/issues/7 | [
"enhancement"
] | ozroc | 2 |
mirumee/ariadne | api | 165 | Propose a pub/sub contract for resolvers and subscriptions | I propose that we propose a contract/interface that could be implemented over different transports to aid application authors with using pub/sub and observing changes.
I imagine the common pattern would be similar to the one below:
```python
from graphql.pyutils import EventEmitter, EventEmitterAsyncIterator
class PubSub:
    def __init__(self):
        self.emitter = EventEmitter()

    def subscribe(self, event_type):
        return EventEmitterAsyncIterator(self.emitter, event_type)

    async def publish(self, event_type, message):
        raise NotImplementedError()
```
A dummy implementation (useful for local development) could hook the `publish` method right into the emitter:
```python
class DummyPubSub(PubSub):
    async def publish(self, event_type, message):
        self.emitter.emit(event_type, message)
```
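For illustration only (not part of the proposal, and independent of `graphql.pyutils`), the same subscribe/publish contract can be exercised end-to-end with a stdlib-only stand-in backed by `asyncio.Queue`:

```python
import asyncio


class QueuePubSub:
    """Stdlib-only stand-in honouring the subscribe/publish contract."""

    def __init__(self):
        self.queues = {}

    def subscribe(self, event_type):
        queue = self.queues.setdefault(event_type, asyncio.Queue())

        async def iterator():
            while True:
                yield await queue.get()

        return iterator()

    async def publish(self, event_type, message):
        await self.queues.setdefault(event_type, asyncio.Queue()).put(message)


async def demo():
    pubsub = QueuePubSub()
    subscription = pubsub.subscribe("product_updated")
    await pubsub.publish("product_updated", "product-42")
    return await subscription.__anext__()

print(asyncio.run(demo()))  # product-42
```

Any transport-specific implementation would only need to swap the queue plumbing behind the same two methods.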
Another implementation could hook it to a Redis server:
```python
import asyncio
import aioredis
async def start_listening(redis, channel, emitter):
    listener = await redis.subscribe(channel)
    while (await listener.wait_message()):
        data = await listener.get_json()
        event_type, message = data
        emitter.emit(event_type, message)

async def stop_listening(redis, channel):
    await redis.unsubscribe(channel)

class RedisPubSub(PubSub):
    def __init__(self, redis, channel):
        super().__init__()
        self.redis = redis
        self.channel = channel
        asyncio.ensure_future(start_listening(self.redis, self.channel, self.emitter))

    def __del__(self):
        asyncio.ensure_future(stop_listening(self.redis, self.channel))

    async def publish(self, event_type, message):
        await self.redis.publish_json(self.channel, [event_type, message])
```
Similar implementations could happen for AWS SNS+SQS, Google Could Pub/Sub etc.
The tricky part is how to help with passing the object between resolvers and subscriptions. I think the most natural way would be to add it to the context. If we can come up with a standard name then it's easy to write a decorator that automatically unpacks it into a keyword argument:
```python
@mutation.field("updateProduct")
@with_pubsub
def resolve_update_product(parent, info, pubsub):
    ...
    pubsub.publish("product_updated", product.id)
``` | closed | 2019-05-08T16:03:09Z | 2024-01-23T17:43:34Z | https://github.com/mirumee/ariadne/issues/165 | [
"enhancement",
"decision needed"
] | patrys | 3 |
deeppavlov/DeepPavlov | tensorflow | 857 | Download weights from command line | Hi,
Is there a way to download the weights from the command line?
For example, when I do `python -m deeppavlov install squad_bert`, it only downloads the code, not the weights.
Lucas | closed | 2019-05-29T09:35:30Z | 2019-05-29T09:47:08Z | https://github.com/deeppavlov/DeepPavlov/issues/857 | [] | lcswillems | 5 |
sqlalchemy/alembic | sqlalchemy | 1,096 | Bug in docs example throws Error InvalidSchemaName even if the schema name is valid and exists. | **Describe the bug**
The Alembic tenant is not treated as case sensitive: if a schema name has uppercase characters, it throws an error (schema not found).
I have used this [alembic tutorial (cookbook) on using support for multiple schemas in Postgres](https://alembic.sqlalchemy.org/en/latest/cookbook.html#rudimental-schema-level-multi-tenancy-for-postgresql-databases).
I use Postgres and my schema name is `'John'`.
If I run `alembic -x tenant=John upgrade head` I get an error:
`sqlalchemy.exc.ProgrammingError: (psycopg2.errors.InvalidSchemaName) no schema has been selected to create in...`
It works if my schema name is lowercase 'john' and If I run `alembic -x tenant=john upgrade head` .
**How to fix it:**
In this line of the cookbook example:
`connection.execute("set search_path to %s" % current_tenant)`
wrap the `%s` in single or double quotes, like this:
`connection.execute("set search_path to '%s'" % current_tenant)`
or this
`connection.execute('set search_path to "%s"' % current_tenant)`
Solution hint:
This [answer](https://dba.stackexchange.com/a/195560/200208) on stackexchange.
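The underlying reason is PostgreSQL identifier case folding: an unquoted identifier is folded to lower case, while a double-quoted one keeps its case. A plain-Python sketch of the two renderings:

```python
tenant = "John"
print("set search_path to %s" % tenant)    # unquoted: Postgres folds John -> john
print('set search_path to "%s"' % tenant)  # quoted: the case "John" is preserved
```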
**Expected behavior**
The tenant (schema name) should be case sensitive.
**To Reproduce**
Use this [alembic tutorial (cookbook) on using support for multiple schemas in Postgres](https://alembic.sqlalchemy.org/en/latest/cookbook.html#rudimental-schema-level-multi-tenancy-for-postgresql-databases).
To get an error:
Create a Postgres schema named `'John'`.
Run `alembic -x tenant=John upgrade head`
To see it succeed:
Create a Postgres schema named `'john'`.
Run `alembic -x tenant=john upgrade head`
**Error**
```
(venv) PS C:\Users\Asus\Desktop\kerp_dev\backend\apps> alembic -x tenant=John upgrade head
INFO [alembic.runtime.migration] Context impl PostgresqlImpl.
INFO [alembic.runtime.migration] Will assume transactional DDL.
Traceback (most recent call last):
File "C:\Users\Asus\Desktop\kerp_dev\venv\lib\site-packages\sqlalchemy\engine\base.py", line 1802, in _execute_context
self.dialect.do_execute(
File "C:\Users\Asus\Desktop\kerp_dev\venv\lib\site-packages\sqlalchemy\engine\default.py", line 732, in do_execute
cursor.execute(statement, parameters)
psycopg2.errors.InvalidSchemaName: no schema has been selected to create in
LINE 2: CREATE TABLE alembic_version (
^
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "C:\Users\Asus\AppData\Local\Programs\Python\Python310\lib\runpy.py", line 196, in _run_module_as_main
return _run_code(code, main_globals, None,
File "C:\Users\Asus\AppData\Local\Programs\Python\Python310\lib\runpy.py", line 86, in _run_code
exec(code, run_globals)
File "C:\Users\Asus\Desktop\kerp_dev\venv\Scripts\alembic.exe\__main__.py", line 7, in <module>
File "C:\Users\Asus\Desktop\kerp_dev\venv\lib\site-packages\alembic\config.py", line 590, in main
CommandLine(prog=prog).main(argv=argv)
File "C:\Users\Asus\Desktop\kerp_dev\venv\lib\site-packages\alembic\config.py", line 584, in main
self.run_cmd(cfg, options)
File "C:\Users\Asus\Desktop\kerp_dev\venv\lib\site-packages\alembic\config.py", line 561, in run_cmd
fn(
File "C:\Users\Asus\Desktop\kerp_dev\venv\lib\site-packages\alembic\command.py", line 322, in upgrade
script.run_env()
File "C:\Users\Asus\Desktop\kerp_dev\venv\lib\site-packages\alembic\script\base.py", line 569, in run_env
util.load_python_file(self.dir, "env.py")
File "C:\Users\Asus\Desktop\kerp_dev\venv\lib\site-packages\alembic\util\pyfiles.py", line 94, in load_python_file
module = load_module_py(module_id, path)
File "C:\Users\Asus\Desktop\kerp_dev\venv\lib\site-packages\alembic\util\pyfiles.py", line 110, in load_module_py
spec.loader.exec_module(module) # type: ignore
File "<frozen importlib._bootstrap_external>", line 883, in exec_module
File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
File "C:\Users\Asus\Desktop\kerp_dev\backend\apps\alembic\env.py", line 99, in <module>
run_migrations_online()
File "C:\Users\Asus\Desktop\kerp_dev\backend\apps\alembic\env.py", line 93, in run_migrations_online
context.run_migrations()
File "<string>", line 8, in run_migrations
File "C:\Users\Asus\Desktop\kerp_dev\venv\lib\site-packages\alembic\runtime\environment.py", line 853, in run_migrations
self.get_context().run_migrations(**kw)
File "C:\Users\Asus\Desktop\kerp_dev\venv\lib\site-packages\alembic\runtime\migration.py", line 606, in run_migrations
self._ensure_version_table()
File "C:\Users\Asus\Desktop\kerp_dev\venv\lib\site-packages\alembic\runtime\migration.py", line 542, in _ensure_version_table
self._version.create(self.connection, checkfirst=True)
File "C:\Users\Asus\Desktop\kerp_dev\venv\lib\site-packages\sqlalchemy\sql\schema.py", line 950, in create
bind._run_ddl_visitor(ddl.SchemaGenerator, self, checkfirst=checkfirst)
File "C:\Users\Asus\Desktop\kerp_dev\venv\lib\site-packages\sqlalchemy\engine\base.py", line 2113, in _run_ddl_visitor
visitorcallable(self.dialect, self, **kwargs).traverse_single(element)
File "C:\Users\Asus\Desktop\kerp_dev\venv\lib\site-packages\sqlalchemy\sql\visitors.py", line 524, in traverse_single
return meth(obj, **kw)
File "C:\Users\Asus\Desktop\kerp_dev\venv\lib\site-packages\sqlalchemy\sql\ddl.py", line 893, in visit_table
self.connection.execute(
File "C:\Users\Asus\Desktop\kerp_dev\venv\lib\site-packages\sqlalchemy\engine\base.py", line 1289, in execute
return meth(self, multiparams, params, _EMPTY_EXECUTION_OPTS)
File "C:\Users\Asus\Desktop\kerp_dev\venv\lib\site-packages\sqlalchemy\sql\ddl.py", line 80, in _execute_on_connection
return connection._execute_ddl(
File "C:\Users\Asus\Desktop\kerp_dev\venv\lib\site-packages\sqlalchemy\engine\base.py", line 1381, in _execute_ddl
ret = self._execute_context(
File "C:\Users\Asus\Desktop\kerp_dev\venv\lib\site-packages\sqlalchemy\engine\base.py", line 1845, in _execute_context
self._handle_dbapi_exception(
File "C:\Users\Asus\Desktop\kerp_dev\venv\lib\site-packages\sqlalchemy\engine\base.py", line 2026, in _handle_dbapi_exception
util.raise_(
File "C:\Users\Asus\Desktop\kerp_dev\venv\lib\site-packages\sqlalchemy\util\compat.py", line 207, in raise_
raise exception
File "C:\Users\Asus\Desktop\kerp_dev\venv\lib\site-packages\sqlalchemy\engine\base.py", line 1802, in _execute_context
self.dialect.do_execute(
File "C:\Users\Asus\Desktop\kerp_dev\venv\lib\site-packages\sqlalchemy\engine\default.py", line 732, in do_execute
cursor.execute(statement, parameters)
sqlalchemy.exc.ProgrammingError: (psycopg2.errors.InvalidSchemaName) no schema has been selected to create in
LINE 2: CREATE TABLE alembic_version (
^
[SQL:
CREATE TABLE alembic_version (
version_num VARCHAR(32) NOT NULL,
CONSTRAINT alembic_version_pkc PRIMARY KEY (version_num)
)
]
(Background on this error at: https://sqlalche.me/e/14/f405)
```
**Versions.**
- OS: Windows 11 (build 22000.1042)
- Python: 3.10.7
- Alembic: 1.8.1
- SQLAlchemy: 1.4.29
- Database: Postgres 14
- DBAPI: psycopg2
| closed | 2022-10-09T10:58:11Z | 2022-10-17T12:53:35Z | https://github.com/sqlalchemy/alembic/issues/1096 | [
"bug",
"documentation"
] | alispa | 3 |
seleniumbase/SeleniumBase | pytest | 2,724 | SSL Errors on MacOS when downloading chromedriver | Running Sonoma 14.4.1, reset to factory defaults, with python 3.12.2
The error:
`ssl.SSLCertVerificationError: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:1000)`
Running `certifi.where()` yields the cacert.pem
`<redacted for length>/lib/python3.12/site-packages/certifi/cacert.pem`
which seems to contain valid certs upon visual inspection.
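One common workaround for frozen macOS apps (a sketch, under the assumption that the bundled interpreter cannot locate a system CA store) is to point the default SSL context at certifi's bundle explicitly before any downloads happen:

```python
import os
import ssl

try:
    import certifi  # ships the Mozilla CA certificate bundle
    os.environ["SSL_CERT_FILE"] = certifi.where()
except ImportError:
    pass  # fall back to whatever CA store the interpreter can find

context = ssl.create_default_context()
print(context.verify_mode)  # CERT_REQUIRED when verification is active
```

This does not change SeleniumBase itself; it only makes `urllib`'s default context resolve certificates from a known bundle.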
When I build the app bundle with Py2app, and then run the executable inside
<details>
<summary>I get this traceback</summary>
```python
Warning: uc_driver not found. Getting it now:
*** chromedriver to download = 124.0.6367.91 (Latest Stable)
Traceback (most recent call last):
File "urllib/request.pyc", line 1344, in do_open
File "http/client.pyc", line 1331, in request
File "http/client.pyc", line 1377, in _send_request
File "http/client.pyc", line 1326, in endheaders
File "http/client.pyc", line 1085, in _send_output
File "http/client.pyc", line 1029, in send
File "http/client.pyc", line 1472, in connect
File "ssl.pyc", line 455, in wrap_socket
File "ssl.pyc", line 1042, in _create
File "ssl.pyc", line 1320, in do_handshake
ssl.SSLCertVerificationError: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:1000)
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "seleniumbase/core/browser_launcher.pyc", line 3540, in get_local_driver
File "seleniumbase/undetected/__init__.pyc", line 130, in __init__
File "seleniumbase/undetected/patcher.pyc", line 108, in auto
File "seleniumbase/undetected/patcher.pyc", line 126, in fetch_release_number
File "urllib/request.pyc", line 215, in urlopen
File "urllib/request.pyc", line 515, in open
File "urllib/request.pyc", line 532, in _open
File "urllib/request.pyc", line 492, in _call_chain
File "urllib/request.pyc", line 1392, in https_open
File "urllib/request.pyc", line 1347, in do_open
urllib.error.URLError: <urlopen error [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:1000)>
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/Users/dylan/PycharmProjects/shy_drivers/apptest/dist/mac_shytest.app/Contents/Resources/__boot__.py", line 161, in <module>
File "/Users/dylan/PycharmProjects/shy_drivers/apptest/dist/mac_shytest.app/Contents/Resources/__boot__.py", line 84, in _run
File "/Users/dylan/PycharmProjects/shy_drivers/apptest/dist/mac_shytest.app/Contents/Resources/mac_shytest.py", line 5, in <module>
File "seleniumbase/plugins/driver_manager.pyc", line 516, in Driver
File "seleniumbase/core/browser_launcher.pyc", line 1632, in get_driver
File "seleniumbase/core/browser_launcher.pyc", line 3552, in get_local_driver
TypeError: argument of type 'SSLCertVerificationError' is not iterable
```
</details>
I noticed it errored after running `undetected/patcher`, so I created a Patcher object and ran:
```Python
firsttry = Patcher(version_main=0, force=False, executable_path=None)
firsttry.auto()
```
<details>
<summary>which yields this traceback</summary>
```python
Traceback (most recent call last):
File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/urllib/request.py", line 1344, in do_open
h.request(req.get_method(), req.selector, req.data, headers,
File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/http/client.py", line 1331, in request
self._send_request(method, url, body, headers, encode_chunked)
File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/http/client.py", line 1377, in _send_request
self.endheaders(body, encode_chunked=encode_chunked)
File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/http/client.py", line 1326, in endheaders
self._send_output(message_body, encode_chunked=encode_chunked)
File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/http/client.py", line 1085, in _send_output
self.send(msg)
File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/http/client.py", line 1029, in send
self.connect()
File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/http/client.py", line 1472, in connect
self.sock = self._context.wrap_socket(self.sock,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/ssl.py", line 455, in wrap_socket
return self.sslsocket_class._create(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/ssl.py", line 1042, in _create
self.do_handshake()
File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/ssl.py", line 1320, in do_handshake
self._sslobj.do_handshake()
ssl.SSLCertVerificationError: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:1000)
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/Users/dylan/PycharmProjects/shy_drivers/apptest/test1.py", line 306, in <module>
firsttry.auto()
File "/Users/dylan/PycharmProjects/shy_drivers/apptest/test1.py", line 108, in auto
release = self.fetch_release_number()
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/dylan/PycharmProjects/shy_drivers/apptest/test1.py", line 126, in fetch_release_number
return urlopen(self.url_repo + path).read().decode()
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/urllib/request.py", line 215, in urlopen
return opener.open(url, data, timeout)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/urllib/request.py", line 515, in open
response = self._open(req, data)
^^^^^^^^^^^^^^^^^^^^^
File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/urllib/request.py", line 532, in _open
result = self._call_chain(self.handle_open, protocol, protocol +
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/urllib/request.py", line 492, in _call_chain
result = func(*args)
^^^^^^^^^^^
File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/urllib/request.py", line 1392, in https_open
return self.do_open(http.client.HTTPSConnection, req,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/urllib/request.py", line 1347, in do_open
raise URLError(err)
urllib.error.URLError: <urlopen error [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:1000)>
Exception ignored in: <function Patcher.__del__ at 0x1027c8b80>
Traceback (most recent call last):
File "/Users/dylan/PycharmProjects/shy_drivers/apptest/test1.py", line 283, in __del__
AttributeError: 'NoneType' object has no attribute 'monotonic'
```
</details>
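For reference, one commonly suggested workaround for `CERTIFICATE_VERIFY_FAILED` on framework builds of Python (macOS) is to pass `urlopen` an SSL context backed by a known CA bundle. A minimal sketch, assuming the `certifi` package may or may not be installed (the URL is a placeholder, not from the report):

```python
import ssl
from urllib.request import urlopen

try:
    # certifi ships a CA bundle as a package; whether it is installed here
    # is an assumption, so fall back to the system certificates if not.
    import certifi
    context = ssl.create_default_context(cafile=certifi.where())
except ImportError:
    context = ssl.create_default_context()

# Placeholder URL for illustration only; pass the context explicitly:
# data = urlopen("https://example.com", context=context).read()
```

Passing the context explicitly avoids relying on the interpreter's default certificate store, which is what the traceback above suggests is missing.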
| closed | 2024-04-28T21:09:59Z | 2024-04-29T00:49:28Z | https://github.com/seleniumbase/SeleniumBase/issues/2724 | [
"external",
"can't reproduce",
"UC Mode / CDP Mode"
] | Dylgod | 1 |
timkpaine/lantern | plotly | 172 | add "superstore" like random data | closed | 2018-09-19T16:37:43Z | 2018-09-19T21:03:14Z | https://github.com/timkpaine/lantern/issues/172 | [
"feature",
"datasets"
] | timkpaine | 0 | |
pyqtgraph/pyqtgraph | numpy | 3,267 | ParameterTree drop-down list shows up blank | <!-- In the following, please describe your issue in detail! -->
<!-- If some sections do not apply, just remove them. -->
### Short description
<!-- This should summarize the issue. -->
If I configure a ParameterTree entry as a drop-down menu with a pre-defined list and a default value, the drop-down shows up empty.
### Code to reproduce
<!-- Please provide a minimal working example that reproduces the issue in the code block below.
Ideally, this should be a full example someone else could run without additional setup. -->
```python
import sys
from PyQt5.QtWidgets import QApplication, QMainWindow, QVBoxLayout, QWidget
import pyqtgraph as pg
from pyqtgraph.parametertree import Parameter, ParameterTree
class ParameterTreeApp(QMainWindow):
def __init__(self):
super().__init__()
# Set up the main window
self.setGeometry(100, 100, 400, 300)
# Create a central widget and layout
central_widget = QWidget()
layout = QVBoxLayout()
central_widget.setLayout(layout)
self.setCentralWidget(central_widget)
# Define parameters with a drop-down menu
params = [
{'name': 'Select Item', 'type': 'list', 'values': ['Option 1', 'Option 2', 'Option 3'], 'value': 'Option 1'},
]
# Create a Parameter object
self.parameter = Parameter.create(name='params', type='group', children=params)
# Create a ParameterTree and set the parameters
self.parameter_tree = ParameterTree()
self.parameter_tree.setParameters(self.parameter, showTop=False)
# Add the ParameterTree to the layout
layout.addWidget(self.parameter_tree)
if __name__ == '__main__':
app = QApplication(sys.argv)
window = ParameterTreeApp()
window.show()
sys.exit(app.exec_())
```
### Expected behavior
The drop-down should default to "Option 1", with the options "Option 1", "Option 2", and "Option 3" available in the list.
### Real behavior
The drop-down is empty, with no default value set.
<img width="398" alt="Image" src="https://github.com/user-attachments/assets/82ea6bc2-8022-49cd-a4fd-05276e1348cb" />
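One thing worth checking (an assumption, not a confirmed cause): recent pyqtgraph releases document the option list of a `'list'` parameter under the `'limits'` key rather than `'values'`. A Qt-free sketch of the two spellings:

```python
# Parameter specs only -- no Qt required to compare the two spellings.
# 'values' is the historical key; 'limits' is the key documented by
# recent pyqtgraph releases (that this difference matters here is an
# assumption).
old_style = {'name': 'Select Item', 'type': 'list',
             'values': ['Option 1', 'Option 2', 'Option 3'],
             'value': 'Option 1'}

new_style = {'name': 'Select Item', 'type': 'list',
             'limits': ['Option 1', 'Option 2', 'Option 3'],
             'value': 'Option 1'}

# Read the option list regardless of which spelling was used.
options = new_style.get('limits', new_style.get('values'))
```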
### Tested environment(s)
* PyQtGraph version: 0.13.7
* PyQt: 5.15.9
* Python version: 3.12.9
* Operating system: Red Hat 8
* Installation method: conda
| closed | 2025-02-27T16:57:39Z | 2025-02-27T19:09:40Z | https://github.com/pyqtgraph/pyqtgraph/issues/3267 | [] | echandler-anl | 2 |
sigmavirus24/github3.py | rest-api | 892 | branch.protect returns 404 instead of bool value | **Issue type**: bug
------
**Versions**
- Python 2.7
- pip 18.1
- github3.py 1.2.0
- requests 2.19.1
- uritemplate 0.3.0,
- python-dateutil 2.7.3
------
**Traceback**:
```
Traceback (most recent call last):
....
status_checks=['required_pull_request_reviews'])
File "/home/vozniak/projects/github/virtualenv/local/lib/python2.7/site-packages/github3/decorators.py", line 30, in auth_wrapper
return func(self, *args, **kwargs)
File "/home/vozniak/projects/github/virtualenv/local/lib/python2.7/site-packages/github3/repos/branch.py", line 116, in protect
json = self._json(resp, 200)
File "/home/vozniak/projects/github/virtualenv/local/lib/python2.7/site-packages/github3/models.py", line 156, in _json
raise exceptions.error_for(response)
github3.exceptions.NotFoundError: 404 Not Found
```
------
**Description**:
When trying to protect a branch I got a 404 error on this line:
https://github.com/sigmavirus24/github3.py/blob/master/src/github3/repos/branch.py#L116
reproducible example:
```python
master_branch.protect(enforcement='off', status_checks=['required_pull_request_reviews'])
```
------
*Generated with github3.py using the report_issue script*
| closed | 2018-10-08T13:05:13Z | 2021-11-01T01:08:44Z | https://github.com/sigmavirus24/github3.py/issues/892 | [] | VolVoz | 7 |
arnaudmiribel/streamlit-extras | streamlit | 21 | Add mentions | As in https://playground.streamlitapp.com/?q=github-mention
Worth trying with other pages, other social networks too | closed | 2022-09-22T08:17:31Z | 2022-09-22T19:37:30Z | https://github.com/arnaudmiribel/streamlit-extras/issues/21 | [
"enhancement"
] | arnaudmiribel | 0 |
OFA-Sys/Chinese-CLIP | computer-vision | 278 | Error when executing the sh script file: RuntimeError: NCCL error in: ../torch/lib/c10d/ProcessGroupNCCL.cpp:911, unhandled system error, NCCL version 2.7.8 | (CLIP) fumon@LAPTOP-2S5HFEN5:~/Chinese-CLIP-master/Chinese-CLIP-master$ bash run_scripts/muge_finetune_vit-b-16_rbt-base.sh DATAPATH
/home/fumon/anaconda3/envs/CLIP/lib/python3.8/site-packages/torch/distributed/launch.py:178: FutureWarning: The module torch.distributed.launch is deprecated
and will be removed in future. Use torch.distributed.run.
Note that --use_env is set by default in torch.distributed.run.
If your script expects `--local_rank` argument to be set, please
change it to read from `os.environ['LOCAL_RANK']` instead. See
https://pytorch.org/docs/stable/distributed.html#launch-utility for
further instructions
warnings.warn(
Loading vision model config from cn_clip/clip/model_configs/RN50.json
Loading text model config from cn_clip/clip/model_configs/RBT3-chinese.json
Traceback (most recent call last):
File "cn_clip/training/main.py", line 350, in <module>
main()
File "cn_clip/training/main.py", line 135, in main
model = torch.nn.parallel.DistributedDataParallel(model, device_ids=[args.local_device_rank], find_unused_parameters=find_unused_parameters)
File "/home/fumon/anaconda3/envs/CLIP/lib/python3.8/site-packages/torch/nn/parallel/distributed.py", line 496, in __init__
dist._verify_model_across_ranks(self.process_group, parameters)
RuntimeError: NCCL error in: ../torch/lib/c10d/ProcessGroupNCCL.cpp:911, unhandled system error, NCCL version 2.7.8
ncclSystemError: System call (socket, malloc, munmap, etc) failed.
ERROR:torch.distributed.elastic.multiprocessing.api:failed (exitcode: 1) local_rank: 0 (pid: 28405) of binary: /home/fumon/anaconda3/envs/CLIP/bin/python
Traceback (most recent call last):
File "/home/fumon/anaconda3/envs/CLIP/lib/python3.8/runpy.py", line 192, in _run_module_as_main
return _run_code(code, main_globals, None,
File "/home/fumon/anaconda3/envs/CLIP/lib/python3.8/runpy.py", line 85, in _run_code
exec(code, run_globals)
File "/home/fumon/anaconda3/envs/CLIP/lib/python3.8/site-packages/torch/distributed/launch.py", line 193, in <module>
main()
File "/home/fumon/anaconda3/envs/CLIP/lib/python3.8/site-packages/torch/distributed/launch.py", line 189, in main
launch(args)
File "/home/fumon/anaconda3/envs/CLIP/lib/python3.8/site-packages/torch/distributed/launch.py", line 174, in launch
run(args)
File "/home/fumon/anaconda3/envs/CLIP/lib/python3.8/site-packages/torch/distributed/run.py", line 689, in run
elastic_launch(
File "/home/fumon/anaconda3/envs/CLIP/lib/python3.8/site-packages/torch/distributed/launcher/api.py", line 116, in __call__
return launch_agent(self._config, self._entrypoint, list(args))
File "/home/fumon/anaconda3/envs/CLIP/lib/python3.8/site-packages/torch/distributed/launcher/api.py", line 244, in launch_agent
raise ChildFailedError(
torch.distributed.elastic.multiprocessing.errors.ChildFailedError:
cn_clip/training/main.py FAILED
Root Cause:
[0]:
time: 2024-03-26_21:46:34
rank: 0 (local_rank: 0)
exitcode: 1 (pid: 28405)
error_file: <N/A>
msg: "Process failed with exitcode 1"
Other Failures:
<NO_OTHER_FAILURES>
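For debugging, `ncclSystemError` frequently points at network interface selection rather than the training code itself. A hedged sketch of environment variables commonly set before launch (these are standard NCCL variable names; the interface name `eth0` is a placeholder, not taken from this report):

```python
import os

# Standard NCCL debugging knobs; must be set before torch.distributed
# initializes the process group.
os.environ.setdefault("NCCL_DEBUG", "INFO")          # log which NCCL call fails
os.environ.setdefault("NCCL_SOCKET_IFNAME", "eth0")  # placeholder interface name
os.environ.setdefault("NCCL_IB_DISABLE", "1")        # rule out InfiniBand paths
```

With `NCCL_DEBUG=INFO` the log usually names the exact socket or interface that the `unhandled system error` comes from.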
| open | 2024-03-25T14:24:00Z | 2024-03-26T16:25:44Z | https://github.com/OFA-Sys/Chinese-CLIP/issues/278 | [] | Fumon554 | 0 |
kennethreitz/responder | graphql | 242 | Ability to modify swagger strings | The built-in openapi support is great! Kudos to that. However, it would be nice if there were ways to modify more swagger strings such as page title, `default`, `description` etc. Not a very important feature but would be nice to have to make swagger docs more customizable.
I am thinking passing common variables within `responder.API` along with already existing "title" and "version"? | closed | 2018-11-20T09:42:18Z | 2019-03-13T00:22:21Z | https://github.com/kennethreitz/responder/issues/242 | [
"good first issue"
] | here0to0learn | 3 |
deeppavlov/DeepPavlov | tensorflow | 812 | Dialogue Bot for goal-oriented task issue. | from deeppavlov import build_model, configs
bot1 = build_model(configs.go_bot.gobot_dstc2, download=True)
----------------------------------------------------------------------
2019-04-22 06:18:37.365 INFO in 'deeppavlov.core.data.utils'['utils'] at line 63: Downloading from http://files.deeppavlov.ai/datasets/dstc2_v2.tar.gz to /root/.deeppavlov/downloads/dstc2_v2.tar.gz
100%|โโโโโโโโโโ| 506k/506k [00:00<00:00, 743kB/s]
2019-04-22 06:18:38.53 INFO in 'deeppavlov.core.data.utils'['utils'] at line 201: Extracting /root/.deeppavlov/downloads/dstc2_v2.tar.gz archive into /root/.deeppavlov/downloads/dstc2
2019-04-22 06:18:38.681 INFO in 'deeppavlov.core.data.utils'['utils'] at line 63: Downloading from http://files.deeppavlov.ai/embeddings/glove.6B.100d.txt to /root/.deeppavlov/downloads/embeddings/glove.6B.100d.txt
347MB [00:20, 17.1MB/s]
2019-04-22 06:18:59.505 INFO in 'deeppavlov.core.data.utils'['utils'] at line 63: Downloading from http://files.deeppavlov.ai/deeppavlov_data/slotfill_dstc2.tar.gz to /root/.deeppavlov/slotfill_dstc2.tar.gz
100%|โโโโโโโโโโ| 641k/641k [00:00<00:00, 951kB/s]
2019-04-22 06:19:00.185 INFO in 'deeppavlov.core.data.utils'['utils'] at line 201: Extracting /root/.deeppavlov/slotfill_dstc2.tar.gz archive into /root/.deeppavlov/models
2019-04-22 06:19:00.769 INFO in 'deeppavlov.core.data.utils'['utils'] at line 63: Downloading from http://files.deeppavlov.ai/deeppavlov_data/gobot_dstc2_v7.tar.gz to /root/.deeppavlov/gobot_dstc2_v7.tar.gz
100%|โโโโโโโโโโ| 969k/969k [00:01<00:00, 543kB/s]
2019-04-22 06:19:01.853 INFO in 'deeppavlov.core.data.utils'['utils'] at line 201: Extracting /root/.deeppavlov/gobot_dstc2_v7.tar.gz archive into /root/.deeppavlov/models
2019-04-22 06:19:01.874 INFO in 'deeppavlov.core.data.simple_vocab'['simple_vocab'] at line 103: [loading vocabulary from /root/.deeppavlov/models/gobot_dstc2/word.dict]
2019-04-22 06:19:01.878 WARNING in 'deeppavlov.core.models.serializable'['serializable'] at line 47: No load path is set for Sqlite3Database in 'infer' mode. Using save path instead
2019-04-22 06:19:01.880 INFO in 'deeppavlov.core.data.sqlite_database'['sqlite_database'] at line 63: Loading database from /root/.deeppavlov/downloads/dstc2/resto.sqlite.
2019-04-22 06:19:04.804 INFO in 'deeppavlov.core.data.simple_vocab'['simple_vocab'] at line 103: [loading vocabulary from /root/.deeppavlov/models/slotfill_dstc2/word.dict]
2019-04-22 06:19:04.814 INFO in 'deeppavlov.core.data.simple_vocab'['simple_vocab'] at line 103: [loading vocabulary from /root/.deeppavlov/models/slotfill_dstc2/tag.dict]
WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/op_def_library.py:263: colocate_with (from tensorflow.python.framework.ops) is deprecated and will be removed in a future version.
Instructions for updating:
Colocations handled automatically by placer.
Using TensorFlow backend.
WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/deeppavlov/core/layers/tf_layers.py:948: calling dropout (from tensorflow.python.ops.nn_ops) with keep_prob is deprecated and will be removed in a future version.
Instructions for updating:
Please use `rate` instead of `keep_prob`. Rate should be set to `rate = 1 - keep_prob`.
WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/deeppavlov/core/layers/tf_layers.py:66: conv1d (from tensorflow.python.layers.convolutional) is deprecated and will be removed in a future version.
Instructions for updating:
Use keras.layers.conv1d instead.
WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/deeppavlov/core/layers/tf_layers.py:69: batch_normalization (from tensorflow.python.layers.normalization) is deprecated and will be removed in a future version.
Instructions for updating:
Use keras.layers.batch_normalization instead.
WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/deeppavlov/models/ner/network.py:248: dense (from tensorflow.python.layers.core) is deprecated and will be removed in a future version.
Instructions for updating:
Use keras.layers.dense instead.
WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/deeppavlov/models/ner/network.py:259: softmax_cross_entropy_with_logits (from tensorflow.python.ops.nn_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Future major versions of TensorFlow will allow gradients to flow
into the labels input on backprop by default.
See `tf.nn.softmax_cross_entropy_with_logits_v2`.
WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/tensorflow/python/ops/math_ops.py:3066: to_int32 (from tensorflow.python.ops.math_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Use tf.cast instead.
WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/deeppavlov/core/models/tf_model.py:49: checkpoint_exists (from tensorflow.python.training.checkpoint_management) is deprecated and will be removed in a future version.
Instructions for updating:
Use standard file APIs to check for files with this prefix.
2019-04-22 06:19:06.539 INFO in 'deeppavlov.core.models.tf_model'['tf_model'] at line 50: [loading model from /root/.deeppavlov/models/slotfill_dstc2/model]
INFO:tensorflow:Restoring parameters from /root/.deeppavlov/models/slotfill_dstc2/model
/usr/local/lib/python3.6/dist-packages/fuzzywuzzy/fuzz.py:35: UserWarning: Using slow pure-python SequenceMatcher. Install python-Levenshtein to remove this warning
warnings.warn('Using slow pure-python SequenceMatcher. Install python-Levenshtein to remove this warning')
paramiko missing, opening SSH/SCP/SFTP paths will be disabled. `pip install paramiko` to suppress
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-7-7d0b0559004d> in <module>()
1 from deeppavlov import build_model, configs
2
----> 3 bot1 = build_model(configs.go_bot.gobot_dstc2, download=True)
/usr/local/lib/python3.6/dist-packages/deeppavlov/core/commands/infer.py in build_model(config, mode, load_trained, download, serialized)
59 component_serialized = None
60
---> 61 component = from_params(component_config, mode=mode, serialized=component_serialized)
62
63 if 'in' in component_config:
/usr/local/lib/python3.6/dist-packages/deeppavlov/core/common/params.py in from_params(params, mode, serialized, **kwargs)
95
96 # find the submodels params recursively
---> 97 config_params = {k: _init_param(v, mode) for k, v in config_params.items()}
98
99 try:
/usr/local/lib/python3.6/dist-packages/deeppavlov/core/common/params.py in <dictcomp>(.0)
95
96 # find the submodels params recursively
---> 97 config_params = {k: _init_param(v, mode) for k, v in config_params.items()}
98
99 try:
/usr/local/lib/python3.6/dist-packages/deeppavlov/core/common/params.py in _init_param(param, mode)
49 elif isinstance(param, dict):
50 if {'ref', 'class_name', 'config_path'}.intersection(param.keys()):
---> 51 param = from_params(param, mode=mode)
52 else:
53 param = {k: _init_param(v, mode) for k, v in param.items()}
/usr/local/lib/python3.6/dist-packages/deeppavlov/core/common/params.py in from_params(params, mode, serialized, **kwargs)
92 log.exception(e)
93 raise e
---> 94 cls = get_model(cls_name)
95
96 # find the submodels params recursively
/usr/local/lib/python3.6/dist-packages/deeppavlov/core/common/registry.py in get_model(name)
69 raise ConfigError("Model {} is not registered.".format(name))
70 return cls_from_str(name)
---> 71 return cls_from_str(_REGISTRY[name])
72
73
/usr/local/lib/python3.6/dist-packages/deeppavlov/core/common/registry.py in cls_from_str(name)
38 .format(name))
39
---> 40 return getattr(importlib.import_module(module_name), cls_name)
41
42
/usr/lib/python3.6/importlib/__init__.py in import_module(name, package)
124 break
125 level += 1
--> 126 return _bootstrap._gcd_import(name[level:], package, level)
127
128
/usr/lib/python3.6/importlib/_bootstrap.py in _gcd_import(name, package, level)
/usr/lib/python3.6/importlib/_bootstrap.py in _find_and_load(name, import_)
/usr/lib/python3.6/importlib/_bootstrap.py in _find_and_load_unlocked(name, import_)
/usr/lib/python3.6/importlib/_bootstrap.py in _load_unlocked(spec)
/usr/lib/python3.6/importlib/_bootstrap_external.py in exec_module(self, module)
/usr/lib/python3.6/importlib/_bootstrap.py in _call_with_frames_removed(f, *args, **kwds)
/usr/local/lib/python3.6/dist-packages/deeppavlov/models/embedders/glove_embedder.py in <module>()
17
18 import numpy as np
---> 19 from gensim.models import KeyedVectors
20 from overrides import overrides
21
/usr/local/lib/python3.6/dist-packages/gensim/__init__.py in <module>()
3 """
4
----> 5 from gensim import parsing, corpora, matutils, interfaces, models, similarities, summarization, utils # noqa:F401
6 import logging
7
/usr/local/lib/python3.6/dist-packages/gensim/models/__init__.py in <module>()
5
6 # bring model classes directly into package namespace, to save some typing
----> 7 from .coherencemodel import CoherenceModel # noqa:F401
8 from .hdpmodel import HdpModel # noqa:F401
9 from .ldamodel import LdaModel # noqa:F401
/usr/local/lib/python3.6/dist-packages/gensim/models/coherencemodel.py in <module>()
34 from gensim import interfaces, matutils
35 from gensim import utils
---> 36 from gensim.topic_coherence import (segmentation, probability_estimation,
37 direct_confirmation_measure, indirect_confirmation_measure,
38 aggregation)
/usr/local/lib/python3.6/dist-packages/gensim/topic_coherence/probability_estimation.py in <module>()
10 import logging
11
---> 12 from gensim.topic_coherence.text_analysis import (
13 CorpusAccumulator, WordOccurrenceAccumulator, ParallelWordOccurrenceAccumulator,
14 WordVectorsAccumulator)
/usr/local/lib/python3.6/dist-packages/gensim/topic_coherence/text_analysis.py in <module>()
19
20 from gensim import utils
---> 21 from gensim.models.word2vec import Word2Vec
22
23 logger = logging.getLogger(__name__)
/usr/local/lib/python3.6/dist-packages/gensim/models/word2vec.py in <module>()
119
120 from gensim.utils import keep_vocab_item, call_on_class_only
--> 121 from gensim.models.keyedvectors import Vocab, Word2VecKeyedVectors
122 from gensim.models.base_any2vec import BaseWordEmbeddingsModel
123
/usr/local/lib/python3.6/dist-packages/gensim/models/keyedvectors.py in <module>()
160 # If pyemd is attempted to be used, but isn't installed, ImportError will be raised in wmdistance
161 try:
--> 162 from pyemd import emd
163 PYEMD_EXT = True
164 except ImportError:
/usr/local/lib/python3.6/dist-packages/pyemd/__init__.py in <module>()
73
74 from .__about__ import *
---> 75 from .emd import emd, emd_with_flow, emd_samples
__init__.pxd in init pyemd.emd()
ValueError: numpy.ufunc size changed, may indicate binary incompatibility. Expected 216 from C header, got 192 from PyObject | closed | 2019-04-22T06:23:36Z | 2019-04-23T08:33:43Z | https://github.com/deeppavlov/DeepPavlov/issues/812 | [] | Pem14604 | 2 |
vllm-project/vllm | pytorch | 15,380 | [Usage][UT]:Why the answer is ' 0, 1' | ### Your current environment
INFO 03-24 14:31:22 [__init__.py:256] Automatically detected platform cuda.
Collecting environment information...
PyTorch version: 2.6.0+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.4 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.22.1
Libc version: glibc-2.35
Python version: 3.12.9 | packaged by Anaconda, Inc. | (main, Feb 6 2025, 18:56:27) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.15.0-86-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 12.4.131
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA A100-PCIE-40GB
Nvidia driver version: 550.107.02
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.9.1.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv.so.9.1.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn.so.9.1.0
/usr/lib/x86_64-linux-gnu/libcudnn_engines_precompiled.so.9.1.0
/usr/lib/x86_64-linux-gnu/libcudnn_engines_runtime_compiled.so.9.1.0
/usr/lib/x86_64-linux-gnu/libcudnn_graph.so.9.1.0
/usr/lib/x86_64-linux-gnu/libcudnn_heuristic.so.9.1.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops.so.9.1.0
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 80
On-line CPU(s) list: 0-79
Vendor ID: GenuineIntel
Model name: Intel Xeon Processor (Cascadelake)
CPU family: 6
Model: 85
Thread(s) per core: 2
Core(s) per socket: 20
Socket(s): 2
Stepping: 6
BogoMIPS: 5999.76
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology cpuid pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch cpuid_fault invpcid_single ssbd ibrs ibpb fsgsbase bmi1 hle avx2 smep bmi2 erms invpcid rtm avx512f avx512dq rdseed adx smap clflushopt clwb avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 arat pku ospke avx512_vnni
L1d cache: 2.5 MiB (80 instances)
L1i cache: 2.5 MiB (80 instances)
L2 cache: 160 MiB (40 instances)
L3 cache: 32 MiB (2 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0-39
NUMA node1 CPU(s): 40-79
Vulnerability Gather data sampling: Unknown: Dependent on hypervisor status
Vulnerability Itlb multihit: KVM: Mitigation: VMX unsupported
Vulnerability L1tf: Mitigation; PTE Inversion
Vulnerability Mds: Vulnerable: Clear CPU buffers attempted, no microcode; SMT Host state unknown
Vulnerability Meltdown: Vulnerable
Vulnerability Mmio stale data: Vulnerable: Clear CPU buffers attempted, no microcode; SMT Host state unknown
Vulnerability Retbleed: Mitigation; IBRS
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; IBRS, IBPB conditional, STIBP disabled, RSB filling, PBRSB-eIBRS Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Vulnerable: Clear CPU buffers attempted, no microcode; SMT Host state unknown
Versions of relevant libraries:
[pip3] numpy==1.26.4
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-cusparselt-cu12==0.6.2
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] pyzmq==26.3.0
[pip3] torch==2.6.0
[pip3] torchaudio==2.6.0
[pip3] torchvision==0.21.0
[pip3] transformers==4.49.0
[pip3] triton==3.2.0
[conda] numpy 1.26.4 pypi_0 pypi
[conda] nvidia-cublas-cu12 12.4.5.8 pypi_0 pypi
[conda] nvidia-cuda-cupti-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cuda-nvrtc-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cuda-runtime-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cudnn-cu12 9.1.0.70 pypi_0 pypi
[conda] nvidia-cufft-cu12 11.2.1.3 pypi_0 pypi
[conda] nvidia-curand-cu12 10.3.5.147 pypi_0 pypi
[conda] nvidia-cusolver-cu12 11.6.1.9 pypi_0 pypi
[conda] nvidia-cusparse-cu12 12.3.1.170 pypi_0 pypi
[conda] nvidia-cusparselt-cu12 0.6.2 pypi_0 pypi
[conda] nvidia-nccl-cu12 2.21.5 pypi_0 pypi
[conda] nvidia-nvjitlink-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-nvtx-cu12 12.4.127 pypi_0 pypi
[conda] pyzmq 26.3.0 pypi_0 pypi
[conda] torch 2.6.0 pypi_0 pypi
[conda] torchaudio 2.6.0 pypi_0 pypi
[conda] torchvision 0.21.0 pypi_0 pypi
[conda] transformers 4.49.0 pypi_0 pypi
[conda] triton 3.2.0 pypi_0 pypi
ROCM Version: Could not collect
Neuron SDK Version: N/A
vLLM Version: 0.8.0rc3.dev66+gffcfb77c7
vLLM Build Flags:
CUDA Archs: Not Set; ROCm: Disabled; Neuron: Disabled
GPU Topology:
GPU0 CPU Affinity NUMA Affinity GPU NUMA ID
GPU0 X 0-79 0-1 N/A
Legend:
X = Self
SYS = Connection traversing PCIe as well as the SMP interconnect between NUMA nodes (e.g., QPI/UPI)
NODE = Connection traversing PCIe as well as the interconnect between PCIe Host Bridges within a NUMA node
PHB = Connection traversing PCIe as well as a PCIe Host Bridge (typically the CPU)
PXB = Connection traversing multiple PCIe bridges (without traversing the PCIe Host Bridge)
PIX = Connection traversing at most a single PCIe bridge
NV# = Connection traversing a bonded set of # NVLinks
OMP_NUM_THREADS=10
MKL_NUM_THREADS=10
LD_LIBRARY_PATH=/root/autodl-tmp/miniconda/envs/vllm_wl/lib/python3.12/site-packages/cv2/../../lib64:
NCCL_CUMEM_ENABLE=0
TORCHINDUCTOR_COMPILE_THREADS=1
CUDA_MODULE_LOADING=LAZY
### How would you like to use vllm
When comparing the short outputs of HF and vLLM (greedy sampling) using the test [script](https://github.com/vllm-project/vllm/blob/v0.8.1/tests/basic_correctness/test_basic_correctness.py), I got the answer below, which I reproduced as follows:
```
from vllm import LLM, SamplingParams
prompt = "The following numbers of the sequence " + ", ".join(
str(i) for i in range(1024)) + " are:"
sampling_params = SamplingParams(temperature=0, max_tokens=5)
# Create an LLM.
llm = LLM(model="Qwen/Qwen2.5-7B-Instruct")
outputs = llm.generate(prompt, sampling_params)
# Print the outputs.
for output in outputs:
prompt = output.prompt
generated_text = output.outputs[0].text
print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")
```
and I got the generated text ' 0, 1'. I wonder why this is the answer?
### Before submitting a new issue...
- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions. | open | 2025-03-24T06:35:05Z | 2025-03-24T06:36:17Z | https://github.com/vllm-project/vllm/issues/15380 | [
"usage"
] | Potabk | 0 |
airtai/faststream | asyncio | 1,899 | Feat: add warning for NATS subscriber factory if user sets useless options | **Describe the bug**
The `extra_options` parameter is not utilized when using `pull_subscribe` in NATS.
Below are the function signatures from nats-py:
``` python
async def subscribe(
self,
subject: str,
queue: Optional[str] = None,
cb: Optional[Callback] = None,
durable: Optional[str] = None,
stream: Optional[str] = None,
config: Optional[api.ConsumerConfig] = None,
manual_ack: bool = False,
ordered_consumer: bool = False,
idle_heartbeat: Optional[float] = None,
flow_control: bool = False,
pending_msgs_limit: int = DEFAULT_JS_SUB_PENDING_MSGS_LIMIT,
pending_bytes_limit: int = DEFAULT_JS_SUB_PENDING_BYTES_LIMIT,
deliver_policy: Optional[api.DeliverPolicy] = None,
headers_only: Optional[bool] = None,
inactive_threshold: Optional[float] = None,
) -> PushSubscription:
async def pull_subscribe(
self,
subject: str,
durable: Optional[str] = None,
stream: Optional[str] = None,
config: Optional[api.ConsumerConfig] = None,
pending_msgs_limit: int = DEFAULT_JS_SUB_PENDING_MSGS_LIMIT,
pending_bytes_limit: int = DEFAULT_JS_SUB_PENDING_BYTES_LIMIT,
inbox_prefix: bytes = api.INBOX_PREFIX,
) -> JetStreamContext.PullSubscription:
```
**How to reproduce**
```python
import asyncio
from faststream import FastStream
from faststream.nats import PullSub, NatsBroker
from nats.js.api import DeliverPolicy
broker = NatsBroker()
app = FastStream(broker)
@broker.subscriber(subject="test", deliver_policy=DeliverPolicy.LAST, stream="test", pull_sub=PullSub())
async def handle_msg(msg: str): ...
if __name__ == "__main__":
asyncio.run(app.run())
```
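A minimal illustration of the mechanism (stub functions mirroring the signatures quoted above, no NATS install required): keyword arguments such as `deliver_policy` exist on `subscribe` but not on `pull_subscribe`, so for a pull consumer they can only travel inside a `ConsumerConfig`:

```python
import inspect

# Stubs trimmed to the relevant parameters of the nats-py signatures
# quoted above; purely illustrative, not the real library.
def subscribe(subject, deliver_policy=None, config=None): ...
def pull_subscribe(subject, config=None): ...

push_params = set(inspect.signature(subscribe).parameters)
pull_params = set(inspect.signature(pull_subscribe).parameters)

# deliver_policy has no dedicated parameter on the pull path.
assert "deliver_policy" in push_params
assert "deliver_policy" not in pull_params
```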
| closed | 2024-11-07T10:05:47Z | 2024-11-11T05:58:02Z | https://github.com/airtai/faststream/issues/1899 | [
"enhancement",
"good first issue",
"help wanted"
] | HHongSeungWoo | 4 |
sinaptik-ai/pandas-ai | data-visualization | 871 | index 0 is out of bounds for axis 0 with size 0 | ### System Info
pandasai - 1.5.15
Python - 3.9.13
### ๐ Describe the bug
```python
smart_df = SmartDataframe(
    df,
    config={"llm": llm, "custom_head": df.head(2)})

ques = 'Which companies are doing better than American Express in Waste category?'
ans = smart_df.chat(ques, output_type="dataframe")
print(ans)
```
'Unfortunately, I was not able to answer your question, because of the following error:\n\nindex 0 is out of bounds for axis 0 with size 0\n' | closed | 2024-01-12T08:23:36Z | 2024-06-01T00:21:02Z | https://github.com/sinaptik-ai/pandas-ai/issues/871 | [] | Devicharith | 1 |
facebookresearch/fairseq | pytorch | 5,004 | What is the license of the TTS models? | #### What is your question?
I have been testing your TTS system for both English and Spanish. For the latter, I'm using facebook/tts_transformer-es-css10.
Fairseq is MIT licensed, but I can't find anything about the model itself.
Where can I find information about under what license is this model registered?
Much grateful. | open | 2023-03-03T15:45:31Z | 2023-03-03T15:45:31Z | https://github.com/facebookresearch/fairseq/issues/5004 | [
"question",
"needs triage"
] | ADD-eNavarro | 0 |
labmlai/annotated_deep_learning_paper_implementations | machine-learning | 118 | bracket balance | There is no opening bracket after mu on the DDPM page:
https://nn.labml.ai/diffusion/ddpm/index.html

| closed | 2022-04-23T16:15:12Z | 2022-07-02T10:02:49Z | https://github.com/labmlai/annotated_deep_learning_paper_implementations/issues/118 | [
"documentation"
] | maloyan | 1 |
marcomusy/vedo | numpy | 658 | Normalized diverging colormap for Volume object | I want to plot a Volume object with a diverging colormap whose positive and negative fractions are unequal. In the case of a 2D plot with Python matplotlib I would create the colormap with `LinearSegmentedColormap` from `matplotlib.colors`, e.g.:
```python
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.colors import LinearSegmentedColormap

data = np.random.random([100, 100]) * 100 - 70
minimum = data.min()
maximum = data.max()
absmax = np.abs(data).max()
absmin = np.abs(data).min()
val = -minimum / (maximum - minimum)
m = LinearSegmentedColormap.from_list(
    "mycolormap",
    colors=[
        (0.0, [0.0, 0.0, 1.0]),
        (val, [1.0, 1.0, 1.0]),
        (1.0, [1.0, 0.0, 0.0])
    ]
)
plt.figure(figsize=(7, 6))
plt.pcolormesh(data, cmap=m)
plt.colorbar()
plt.show()
```

However, if I create the same colormap "m" for a 3D-Volume "vol" object and use it with "vol.cmap(m)" there is just a black-white coloring of the Volume and the colorbar is black, so fully transparent I would guess.

My questions are:
1. How can I create the diverging colormap for the Volume object?
2. How can I set the transparency of the colorbar to zero (or equally alpha to 1)?
3. How can I fix the colormap at a specific position of the screen (so it does not move with the 3D-view)?
| closed | 2022-06-08T07:15:58Z | 2022-07-16T16:42:10Z | https://github.com/marcomusy/vedo/issues/658 | [] | MesoBolt | 3 |
shaikhsajid1111/facebook_page_scraper | web-scraping | 115 | no post_url, skipping | Hello
When running the scraper, I repeatedly got the following error: "no post_url, skipping".
No posts were scraped.
I am using the "firefox" browser.
Is there a solution? | open | 2024-06-01T20:10:14Z | 2024-07-14T12:49:14Z | https://github.com/shaikhsajid1111/facebook_page_scraper/issues/115 | [] | saqtam66 | 9 |
oegedijk/explainerdashboard | dash | 80 | hide metrics table Model Performance Metric | Hi @oegedijk,
Is there some way to hide certain metrics in the model summary's Model Performance Metrics table? I read the documentation but could not find this functionality. In the source code the metrics are produced by the `ClassifierModelSummaryComponent` class, but that class doesn't have any parameter to hide them. E.g., the `hide_prauc` parameter only hides the PR AUC plot, not the `pr_auc_score` metric. Below is a snapshot of the table:

| closed | 2021-02-04T02:01:07Z | 2021-02-25T19:55:43Z | https://github.com/oegedijk/explainerdashboard/issues/80 | [] | mvpalheta | 8 |
sunscrapers/djoser | rest-api | 111 | Guidance to setup email sending | Is there guidance on setting up the email function for password reset and activation? Currently my implementation only save an email to the media folder and unable to send it out as email. It will be great to provide some tips on the documentation.
Much appreciate!
| closed | 2016-01-18T02:12:55Z | 2016-01-21T01:48:59Z | https://github.com/sunscrapers/djoser/issues/111 | [] | junhua | 0 |
Ehco1996/django-sspanel | django | 571 | Directly control relay nodes | closed | 2021-09-01T00:52:23Z | 2021-12-28T00:36:55Z | https://github.com/Ehco1996/django-sspanel/issues/571 | [] | Ehco1996 | 0 | |
onnx/onnxmltools | scikit-learn | 375 | lgb BUG | 
| open | 2020-03-16T02:13:13Z | 2020-04-15T10:54:47Z | https://github.com/onnx/onnxmltools/issues/375 | [] | yuanjie-ai | 1 |
scanapi/scanapi | rest-api | 503 | --browse option does not work on MacOS | ## Bug report
### Environment
- Operating System: MacOS
- Python version: 3.9.0
- ScanAPI version: main, unreleased
### Description of the bug
<!-- A clear and concise description of what the bug is. -->
`--browser` CLI flag does not open the browser automatically.
https://github.com/scanapi/scanapi/pull/496/
### Expected behavior?
<!-- A clear and concise description of what you expected to happen. -->
Open the report automatically in the browser
### How to reproduce the bug?
<!-- Steps to reproduce the issue. -->
Run `scan run -b` using a MacOS.
### Anything else we need to know?
<!-- Add any other additional details about the issue. -->
Probably it is missing two things:
- absolute path
- `file://` suffix
https://stackoverflow.com/a/22004572/8298081
https://stackoverflow.com/a/33426646/8298081
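Combining both fixes with `pathlib` might look like this (a sketch; the helper name is illustrative):

```python
from pathlib import Path

def report_url(path):
    # Resolve the (possibly relative) report path to an absolute path and
    # prepend the file:// scheme that webbrowser.open() needs on macOS.
    return Path(path).resolve().as_uri()

# report_url("scanapi-report.html")
# -> "file:///current/working/dir/scanapi-report.html"
```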
Testing manually, this works:
```python
>>> import webbrowser
>>> webbrowser.open("file:///Users/camilamaia/workspace/scanapi-org/examples/demo-api/scanapi-report.html")
```
The following don't work:
```python
>>> webbrowser.open("file://scanapi-report.html")
>>> webbrowser.open("/Users/camilamaia/workspace/scanapi-org/examples/demo-api/scanapi-report.html")
```
| closed | 2021-08-25T20:54:34Z | 2021-08-27T14:50:06Z | https://github.com/scanapi/scanapi/issues/503 | [
"Bug",
"CLI"
] | camilamaia | 4 |
zappa/Zappa | django | 549 | [Migrated] Unable to access json event data | Originally from: https://github.com/Miserlou/Zappa/issues/1458 by [joshlsullivan](https://github.com/joshlsullivan)
Hi there, when I deploy Zappa, I'm unable to access json data from the Lambda event. If I print the event data, this is what I get:
`[DEBUG] 2018-03-24T14:40:37.991Z 517bfc13-2f71-11e8-9ff3-ed7722cf9e11 Zappa Event: {'eventVersion': '1.0', 'eventName': 'edit_client_event', 'eventArgs': {'jobUUID': 'a5aa3a03-b290-4469-b7ce-711045a57dfb'}, 'auth': {'accountUUID': 'ce9fee13-3327-4bf2-9eb9-89930316690b', 'staffUUID': 'd5b495e7-e3ec-45ff-8ca6-214bfacd13cb'}}`
Here's how I was able to access the json data before deploying Zappa:
```python
def lambda_handler(event, context):
    print(event)
    job = event['eventArgs']['jobUUID']
```
Any ideas? | closed | 2021-02-20T12:22:36Z | 2024-04-13T16:37:17Z | https://github.com/zappa/Zappa/issues/549 | [
"no-activity",
"auto-closed"
] | jneves | 2 |
seleniumbase/SeleniumBase | web-scraping | 2,402 | Could not connect to the CAPTCHA service. Please try again. | Hello, Im using seleniumbase with uc=True.
The Problem is that I still get detected on a site where i want a bot to checkout. The message "Could not connect to the CAPTCHA service. Please try again." pops up and im not redirected to the checkout page. Is there a workaround? or some settings I have to add? | closed | 2023-12-31T13:42:45Z | 2023-12-31T15:05:50Z | https://github.com/seleniumbase/SeleniumBase/issues/2402 | [
"question",
"UC Mode / CDP Mode"
] | JakobReal-rgb | 1 |
LAION-AI/Open-Assistant | python | 2,849 | Admin interface: Change display name | Currently the display name field of a user in the admin interface is read-only.
Extend the functionality of the [admin/manage_user](https://github.com/LAION-AI/Open-Assistant/blob/main/website/src/pages/admin/manage_user/%5Bid%5D.tsx) page and allow editing of the display name. | closed | 2023-04-23T08:23:28Z | 2023-04-27T11:04:45Z | https://github.com/LAION-AI/Open-Assistant/issues/2849 | [
"website",
"good first issue"
] | andreaskoepf | 1 |
huggingface/datasets | nlp | 6,867 | Improve performance of JSON loader | As reported by @natolambert, loading regular JSON files with `datasets` shows poor performance.
The cause is that we use the `json` Python standard library instead of other faster libraries. See my old comment: https://github.com/huggingface/datasets/pull/2638#pullrequestreview-706983714
> There are benchmarks that compare different JSON packages, with the Standard Library one among the worst performant:
> - https://github.com/ultrajson/ultrajson#benchmarks
> - https://github.com/ijl/orjson#performance
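A drop-in runtime fallback to one of these faster libraries could be sketched like this (illustrative only — not the actual `datasets` implementation; `orjson`/`ujson` are optional):

```python
import json

# Prefer a faster JSON backend when one is installed, falling back to the
# standard library so there is no hard third-party dependency.
try:
    import orjson as _backend  # fastest of the three in most benchmarks
except ImportError:
    try:
        import ujson as _backend  # pandas already depends on this indirectly
    except ImportError:
        _backend = json

def loads(data):
    return _backend.loads(data)

print(loads('{"total_bill": 16.99}'))
```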
I remember having a discussion about this and it was decided that it was better not to include an additional dependency on a 3rd-party library.
However:
- We already depend on `pandas` and `pandas` depends on `ujson`: so we have an indirect dependency on `ujson`
- Even if the above were not the case, we always could include `ujson` as an optional extra dependency, and check at runtime if it is installed to decide which library to use, either json or ujson | closed | 2024-05-04T15:04:16Z | 2024-05-17T16:22:28Z | https://github.com/huggingface/datasets/issues/6867 | [
"enhancement"
] | albertvillanova | 5 |
Esri/arcgis-python-api | jupyter | 1,506 | clone_items() operation with copy_data=False stills copies data | **Describe the bug**
We have run into cases where the `clone_items()` operation with `copy_data = False` stills copies data from the source Portal to the target Portal.
**To Reproduce**
This happened for services published as dynamic map services from ArcMap that had feature access enabled. For the same Map Service there is a Feature Service endpoint. We wanted to copy a reference to the source feature service using `clone_items()`, but the result was a hosted feature service in the target portal with the data copied. The `copy_data` parameter was set to `False`.
**Expected behavior**
When `copy_data = False`, a reference to the source Arcgis Server is put in the cloned item in the target portal and no data is copied.
**Platform (please complete the following information):**
- OS: windows
- source: ArcGIS Enterprise 10.8.0
- target: ArcGIS Enterprise 10.9.1
- ArcGIS Pro 3.0.3
| closed | 2023-03-24T22:30:07Z | 2024-10-01T10:30:38Z | https://github.com/Esri/arcgis-python-api/issues/1506 | [
"bug"
] | mhogeweg | 2 |
amidaware/tacticalrmm | django | 1,598 | Feature Request: Cross platform scripting | Please add scripting/programming languages that are (relatively) easy to support across all platforms. Modern languages have the ability to embed files into the binary making them truly single binary applications. Deploying the application is a matter of downloading the release file, uncompressing it if necessary, and copying the binary to a location of your choosing. Tactical can use single binary applications to provide the same functionality across many platforms.
## Programming and Scripting Languages
[Deno][] is the successor to Node.js and provides a full TypeScript engine and runtime. Libraries can be imported from [NPM][npm: specifiers] or [CDNs][npm via CDNs]. Deno has a [language server][] to assist with coding. Deno is secure by default and [permissions][] need to be granted.
"[Nu][] draws inspiration from projects like PowerShell, functional programming languages, and modern CLI tools." While Deno is a full programming language, Nu is an interpreted shell. The Nu shell provides many modern functions such as [HTTP requests][], converting [to/from many formats][], working with [hashes][], and like PowerShell, work with data objects: [Dataframe][] and [Lazyframe][].
[RustPython][] is used to provide a working proof of concept. Similar to CPython, RustPython provides a Python interpreter, and unlike CPython, distribution is with a single binary. The project is young and they do not provide any releases. SSL is required to enable `pip` and `pip install` adds binary stubs to `/usr/local/bin` and installs to `/usr/local/lib/rustpython3.11`. For this reason (and until an alternative location can be configured) RustPython is not suitable for production.
## Proof of Concept
There are 3 pieces to the proof of concept. Minor details may change as I work through the full implementation.
1. The RustPython [install script][] for Linux and Mac computers. This downloads `rustpython`, installs `pip` and a couple necessary modules.
2. An [exec wrapper][] that downloads `deno` or `nushell`, downloads a script from a URL, and executes it.
3. A server hosting [your scripts][], preferably in a git repo.
This setup has the following benefits.
- The same script can be run across all platforms. Platforms being Windows, Mac and Linux. Other platforms/architectures are supported if the binary is available.
- The script can be versioned. Instead of referring to the `main` branch, you can refer to a tag or branch. A `develop` branch can be used in QA, and once the scripts have been verified, they can be merged to the `main` branch or tagged for production.
- This can be enhanced by leveraging custom fields expanded in the environmental variables.
- Script Manager shows only 1 version of each script, not 1 version for Windows and 1 version for Linux/macOS.
- Save time by writing the logic once in one language.
There are some down sides to this setup, some of which can be alleviated by native support in Tactical.
- All scripts have the same wrapper. If the wrapper needs updating, all scripts will need to be updated.
- Script parameters are passed through environmental variables. This gets messy when the variables for the script wrapper are mixed with the variables for the actual script.
- The binaries are hosted on my server (see the install script) due to GitHub charging for LFS usage > 1GB.
- Manually compiling RustPython across all platforms is fraught with errors. Compiling with Docker did not turn out well because then you are cross-compiling with external libraries (OpenSSL). Note: I'm not suggesting to include RustPython in this request. RustPython is used only to bootstrap the Python exec wrapper.
- The RustPython binaries are not statically compiled and may not work on other platforms.
## Proposal
Add support for Deno and Nu to Tactical. I believe this means adding two languages to the server, and support for downloading the `deno` and `nu` binaries to the endpoint. Updates can work like MeshCentral by providing the "approved" version on the server and updating for each release.
## Other considerations
RustPython may be able to solve issue #1470: Install TRMM python version on Mac and Linux.
The proof of concept partially solves issue #1206: Use Git repo for custom scripts. If the URL can be programmatically determined, the provider (GitHub, GitLab, Gitea, etc) and repo can be variables in the Global settings. The branch, and hence version or "tag", can be a custom variable that is expanded in the parameters. The only thing left is path and script name. The question becomes: do you download from the provider every time, or "fetch" a new version of the script in Script Manager?
[Deno]: https://github.com/denoland/deno
[npm: specifiers]: https://deno.land/manual@v1.36.1/node/npm_specifiers
[npm via CDNs]: https://deno.land/manual@v1.36.1/node/cdns
[language server]: https://deno.land/manual@v1.36.1/advanced/language_server#the-language-server
[permissions]: https://deno.land/manual@v1.36.1/basics/permissions
[Nu]: https://github.com/nushell/nushell
[HTTP requests]: https://www.nushell.sh/commands/categories/network.html
[to/from many formats]: https://www.nushell.sh/commands/categories/formats.html
[hashes]: https://www.nushell.sh/commands/categories/hash.html
[Dataframe]: https://www.nushell.sh/commands/categories/dataframe.html
[Lazyframe]: https://www.nushell.sh/commands/categories/lazyframe.html
[RustPython]: https://github.com/RustPython/RustPython
[install script]: https://github.com/NiceGuyIT/pimp-my-tactical/blob/main/scripts/scripts/unix-install-rustpython.sh
[exec wrapper]: https://github.com/NiceGuyIT/pimp-my-tactical/blob/main/scripts/scripts/all-exec-wrapper.py
[your scripts]: https://github.com/NiceGuyIT/pimp-my-tactical/tree/main/scripts/wrapper | closed | 2023-08-15T21:16:38Z | 2024-03-28T00:39:50Z | https://github.com/amidaware/tacticalrmm/issues/1598 | [] | NiceGuyIT | 2 |
pyppeteer/pyppeteer | automation | 84 | UnicodeDecodeError on Response body | Unable to obtain Response body for requests of non-text objects, such as images, as `Response.json()` and `Response.text()` throw UnicodeDecodeErrors. The following snippet produces output including `gif` and `png`:
```python
browser = await pyppeteer.launch()
try:
page = await browser.newPage()
@page.on('requestfinished')
async def handler(r):
if r.response.status == 200:
try:
data = await r.response.text()
except UnicodeDecodeError:
print(r.url.split('.')[-1])
await page.goto('https://www.google.com.au')
await page.waitFor(3000)
finally:
await browser.close()
```
It seems like something is attempting to interpret binary data as utf8 (and then complaining that it isn't valid utf8). The tracebacks include:
```
File "[..]site-packages/pyppeteer/network_manager.py", line 673, in text
return content.decode('utf-8')
```
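That `decode` call fails for any non-UTF-8 payload; the error is reproducible with plain bytes — for example, the PNG magic number:

```python
# The first bytes of every PNG file; 0x89 is an invalid UTF-8 start byte.
png_header = b"\x89PNG\r\n\x1a\n"

try:
    png_header.decode("utf-8")
except UnicodeDecodeError as exc:
    print(exc)
```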
Tested on pyppeteer 0.0.25, MacOS. | open | 2020-04-17T13:09:25Z | 2020-04-20T03:51:29Z | https://github.com/pyppeteer/pyppeteer/issues/84 | [
"fixed-in-2.1.1"
] | benjimin | 5 |
aimhubio/aim | data-visualization | 2,597 | Does aim server support horizontal scaling? | ## โQuestion
I have a single pod aim server deployed in my k8s cluster, and would like to understand if it's recommended to horizontally scale it to multiple pods, and whether there's any caveat in doing so.
My rationale:
1. Minimize downtime when I need to redeploy aim server, or when the underlying node is taken out for whatever reason.
2. Handle more concurrent runs
For (2) I know setting `--workers` to a larger value is also an option, but that doesn't really work for me because I'm using k8s ingress to route grpc requests from outside the k8s cluster, and there's no easy way to do that when we need multiple open ports on the same pod for the "workers" method.
| open | 2023-03-17T06:56:36Z | 2023-03-20T23:12:16Z | https://github.com/aimhubio/aim/issues/2597 | [
"type / question"
] | jiyuanq | 3 |
openapi-generators/openapi-python-client | fastapi | 224 | Add object-oriented client option | First of all, thank you for this great project, the code is very nice and I think it really has a lot of potential.
**Is your feature request related to a problem? Please describe.**
As mentioned in https://github.com/triaxtec/openapi-python-client/issues/171, the current approach arguably needs a little too much boilerplate that could be avoided by adding a more object oriented `Client`.
**Describe the solution you'd like**
* Add two new client interfaces, `SyncClient` and `AsyncClient` that would wrap the currently generated tag packages in objects, and expose them as properties.
With that approach, a developer could call an endpoint with as little code as:
```python
from petstore_client import SyncPetstoreClient # Named after the spec title, nicer if you import several auto-generated clients
client = SyncPetstoreClient("http://mypetstore")
pets = client.pets.list_pets()
```
* One issue is that we lose the ability to statically tell the user whether some endpoint requires an `AuthenticatedClient` or not... Some thinking required here I guess.
* Additionally, a default `base_url` could be provided through an environment variable named after the spec title, for instance: `SWAGGER_PETSTORE_BASE_URL`. It could be nice to prevent microservices from using different naming conventions.
For reference, my crude attempt at implementing those interfaces is available here: https://github.com/upciti/openapi-python-client/tree/feature/0-boilerplate-clients.
| closed | 2020-10-28T13:15:50Z | 2023-08-13T02:10:12Z | https://github.com/openapi-generators/openapi-python-client/issues/224 | [
"โจ enhancement"
] | fyhertz | 5 |
huggingface/datasets | nlp | 6,756 | Support SQLite files? | ### Feature request
Support loading a dataset from a SQLite file
https://huggingface.co/datasets/severo/test_iris_sqlite/tree/main
### Motivation
SQLite is a popular file format.
### Your contribution
See discussion on slack: https://huggingface.slack.com/archives/C04L6P8KNQ5/p1702481859117909 (internal)
In particular: a SQLite file can contain multiple tables, which might be matched to multiple configs. Maybe the detail of splits and configs should be defined in the README YAML, or use the same format as for ZIP files: `Iris.sqlite::Iris`.
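Enumerating the tables in a SQLite file — the first step for mapping tables to configs or `file.sqlite::table` references — needs only the standard library; a minimal sketch:

```python
import sqlite3

def list_tables(path):
    # Each table could become a config, or be addressed as "file.sqlite::table".
    with sqlite3.connect(path) as conn:
        rows = conn.execute(
            "SELECT name FROM sqlite_master WHERE type='table' ORDER BY name"
        ).fetchall()
    return [name for (name,) in rows]
```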
See dataset here: https://huggingface.co/datasets/severo/test_iris_sqlite
Note: should we also support DuckDB files? | closed | 2024-03-25T11:48:05Z | 2024-03-26T16:09:32Z | https://github.com/huggingface/datasets/issues/6756 | [
"enhancement"
] | severo | 3 |
chezou/tabula-py | pandas | 248 | Warning: Format 14 cmap table is not supported and will be ignored | While reading PDF file I am getting this as warning, and also some tables are not getting read.
WARNING: Format 14 cmap table is not supported and will be ignored.
If anybody here faced same issue or warning, please help. | closed | 2020-07-16T15:54:43Z | 2020-07-16T15:54:59Z | https://github.com/chezou/tabula-py/issues/248 | [] | MSOANCAH | 1 |
Kanaries/pygwalker | pandas | 353 | Create calculated measure in Pygwalker | **Is your feature request related to a problem? Please describe.**
I'm always frustrated when I want to flexibly create a calculated field in the UI.
**Describe the solution you'd like**
I would like to create a field and describe its result through SQL, like Superset does.
**Describe alternatives you've considered**
Not sure; I want to hear others' suggestions.
| closed | 2023-12-12T07:37:31Z | 2025-03-01T02:53:32Z | https://github.com/Kanaries/pygwalker/issues/353 | [
"enhancement"
] | longxiaofei | 6 |
tfranzel/drf-spectacular | rest-api | 840 | Download openapi json file locally during build | **Describe the bug**
I would like to download the drf-spectacular openapi schema into the project repository during my project build stage so that I can use it for CI/CD purposes later. Is that possible?
**To Reproduce**
python manage.py collectstatic
**Expected behavior**
Collectstatic or whatever other command downloads the file into local repository.
Thanks! | closed | 2022-10-24T21:25:10Z | 2024-12-05T11:37:31Z | https://github.com/tfranzel/drf-spectacular/issues/840 | [] | elaamrani | 4 |
geopandas/geopandas | pandas | 2,825 | DOC: avoid warning on geopandas import by setting USE_PYGEOS=0 env variable in readthedocs? | closed | 2023-03-08T13:28:07Z | 2023-03-08T14:45:07Z | https://github.com/geopandas/geopandas/issues/2825 | [
"documentation"
] | jorisvandenbossche | 1 | |
ets-labs/python-dependency-injector | asyncio | 184 | What is the purpose of containers? | The dependency-injector contains so called "containers".
I do not understand the purpose of containers. Why not just a class with fields initialized to providers (so that each field of the class would hold a provider) or just a dict whose values contain providers? | closed | 2018-02-07T20:57:34Z | 2018-02-12T08:16:09Z | https://github.com/ets-labs/python-dependency-injector/issues/184 | [
"question"
] | vporton | 2 |
aio-libs-abandoned/aioredis-py | asyncio | 1,225 | Necessary issues to resolve | EDIT I am in the process of moving aioredis to redis-py at RedisLabs. Apologies for the wait.
---
Several issues will be resolved by #1156 which will probably be included in 2.1.0. Issues to be resolved with potential fixes:
- [x] https://github.com/aio-libs/aioredis-py/issues/1115
- ~~Just delete the `__del__` magic method and get everyone to manually disconnect.~~
- Fixed in #1227
- [ ] https://github.com/aio-libs/aioredis-py/issues/1217
- ~~The fix should be included once the parallel PR in redis-py is merged~~
- Fixed in #1207
- [ ] https://github.com/aio-libs/aioredis-py/issues/1040
- Potentially can be resolved via the solution used at aredis
- [ ] https://github.com/aio-libs/aioredis-py/issues/1208
- Looking for a fix by @m-novikov. Any help at https://github.com/aio-libs/aioredis-py/pull/1216 would be super appreciated!
- [ ] https://github.com/aio-libs/aioredis-py/issues/778#issuecomment-984716024
- ~~Fixed by https://github.com/aio-libs/aioredis-py/issues/778#issuecomment-1000783249~~
- Fixed by #1156 Retry class and redis/redis-py#1832
Wishlist (not necessary to get into 2.0.1):
- [ ] https://github.com/aio-libs/aioredis-py/issues/1173
- [ ] https://github.com/aio-libs/aioredis-py/issues/1137
- Resolved by using Hiredis
- We should put something in the docs saying "INSTALL HIREDIS (RECOMMENDED)"
- [x] https://github.com/aio-libs/aioredis-py/issues/1103
- ~~I'd like to know if this is still an issue.~~
- To be fixed in #1256 . The issue is that redis-py has auto cleanup code in `__del__` but aioredis doesn't have the implicit capability since there is no async del method. | open | 2021-11-30T03:06:37Z | 2022-07-26T00:38:21Z | https://github.com/aio-libs-abandoned/aioredis-py/issues/1225 | [
"help wanted"
] | Andrew-Chen-Wang | 13 |
AUTOMATIC1111/stable-diffusion-webui | deep-learning | 15,473 | [Bug]: Batch size, batch count | never mind | closed | 2024-04-09T18:37:00Z | 2024-04-09T18:38:34Z | https://github.com/AUTOMATIC1111/stable-diffusion-webui/issues/15473 | [
"bug-report"
] | Petomai | 0 |
robinhood/faust | asyncio | 399 | [Question] Integrating Faust with FastAPI framework | Hi all,
Sorry for not following the issue template as this is not a bug issue but a question.
Description
I am currently investigating possibilities for integrating with [FastAPI](https://fastapi.tiangolo.com), and was wondering if anyone in the community already has experience with setting up a simple Faust service running alongside FastAPI?
Here is the duplicate issue on their [repo](https://github.com/tiangolo/fastapi/issues/408).
It seems like both Faust and FastAPI run on their own event loops. FastAPI is based on Uvicorn that is based on Uvloop. I am not sure whether It makes sense to attempt to synchronise the event loops here or simply run Faust as a separate service communicating with FastAPI.
Will appreciate any response :-) | open | 2019-08-09T14:38:45Z | 2020-12-01T05:51:01Z | https://github.com/robinhood/faust/issues/399 | [] | aorumbayev | 4 |
marimo-team/marimo | data-visualization | 3,712 | Cross Origin Assets not loading on Playground via AJAX | ### Describe the bug
I'm not sure that this is actually a bug, but it does seem like a nice thing to be able to do. Most mapping SDKs load mercator tiles with AJAX from servers on other domains. I think due to the CSP on the playground page, it is preventing any of those assets from being loaded due to them being cross origin.
Example (watch network tab): https://marimo.app/?slug=l1ny4w
Working version of the same package in their docs. https://python-visualization.github.io/folium/latest/getting_started.html
### Environment
<details>
```
Exhibits on Playground (link attached above).
```
</details>
### Code to reproduce
_No response_ | open | 2025-02-06T21:55:05Z | 2025-02-08T00:20:52Z | https://github.com/marimo-team/marimo/issues/3712 | [
"bug",
"upstream"
] | jtbaker | 1 |
onnx/onnx | deep-learning | 6,180 | Shape inference crash on Conv | # Bug Report
### Describe the bug
```
import onnx
import onnx.parser
model = """
<
ir_version: 9,
opset_import: ["" : 11]
>
graph (float[7,6,1,5] in0, float in1, float[7,2,3,2,1] in2) => () {
out0 = Conv <auto_pad = "NOTSET", group = 1> (in0, in1, in2)
}
"""
onnx.shape_inference.infer_shapes(onnx.parser.parse_model(model))
```
crashes with a segmentation fault.
### System information
- OS Platform and Distribution (*. Linux Ubuntu 20.04*): Linux Ubuntu 20.04
- ONNX version (*e.g. 1.13*): 1.16.1
- Python version: 3.10.12
### Expected behavior
No crash, but an error that the model is invalid | closed | 2024-06-14T10:41:22Z | 2024-07-08T22:46:54Z | https://github.com/onnx/onnx/issues/6180 | [
"bug"
] | mgehre-amd | 0 |
InstaPy/InstaPy | automation | 6,422 | Commenting issue! | Hello.My bot working good and typing comment into comment area but dont click the post comment button.
Here is my bot codes;
from instapy import InstaPy
from instapy import smart_run
session = InstaPy(username="tugrann", password="xxxxxxxxx")
with smart_run(session):
session.set_relationship_bounds(enabled=True,
delimit_by_numbers=True,
max_followers=699,
min_followers=50,
min_posts=10,
min_following=50)
session.set_do_comment(enabled=True, percentage=100)
session.set_comments(['Wow! Nice shot.Share it on @yaylasports'], media='Photo')
session.set_comments(['Wow! Nice video.Share it on @yaylasports'], media='Video')
session.set_do_like(enabled=False)
session.like_by_tags(['soccer'], amount=3)
session.end()
| open | 2021-12-04T17:56:19Z | 2022-02-16T10:22:15Z | https://github.com/InstaPy/InstaPy/issues/6422 | [] | tugran | 4 |
sammchardy/python-binance | api | 1,044 | how to cancel OCO order with orderListId | **Describe the bug**
--Trying to CANCEL an OCO order with the following command:
result = client.cancel_order(symbol=TRADE_SYMBOL, orderListId=10035)
--It seems there is an issue with the `orderListId` argument. The logs for the OCO order (pasted at the end) show `orderListId=10035`.
--The following error is received:
error from callback <function on_message at 0x0000023B2820C700>: APIError(code=-1104): Not all sent parameters were read; read '3' parameter(s) but was sent '4'.
**To Reproduce**
result = client.cancel_order(symbol=TRADE_SYMBOL,orderListId=a)
**Expected behavior**
Cancel the OCO order. It seems the issue is how to pass `orderListId`.
**Environment (please complete the following information):**
- Python version:
$ python -V
Python 3.9.5
- OS: Windows 10, gitbash
**Logs or Additional context**
{
"symbol": "BTCUSDT",
"orderId": 5676213,
"orderListId": 10035,
"clientOrderId": "wH0PkOkJ83mxRnKDVCPGxk",
"price": "41845.69000000",
"origQty": "0.00035100",
"executedQty": "0.00000000",
"cummulativeQuoteQty": "0.00000000",
"status": "NEW",
"timeInForce": "GTC",
"type": "STOP_LOSS_LIMIT",
"side": "SELL",
"stopPrice": "41845.69000000",
"icebergQty": "0.00000000",
"time": 1632553383102,
"updateTime": 1632553383102,
"isWorking": false,
"origQuoteOrderQty": "0.00000000"
},
{
"symbol": "BTCUSDT",
"orderId": 5676214,
"orderListId": 10035,
"clientOrderId": "UaLXJE6SXJq50b9aR7TXZK",
"price": "42828.10000000",
"origQty": "0.00035100",
"executedQty": "0.00000000",
"cummulativeQuoteQty": "0.00000000",
"status": "NEW",
"timeInForce": "GTC",
"type": "LIMIT_MAKER",
"side": "SELL",
"stopPrice": "0.00000000",
"icebergQty": "0.00000000",
"time": 1632553383102,
"updateTime": 1632553383102,
"isWorking": true,
"origQuoteOrderQty": "0.00000000"
}
| open | 2021-09-25T08:26:14Z | 2022-07-17T20:09:59Z | https://github.com/sammchardy/python-binance/issues/1044 | [] | adnan-ulhaque | 2 |
streamlit/streamlit | deep-learning | 10,481 | Make sidebar allowable in fragment? | ### Checklist
- [x] I have searched the [existing issues](https://github.com/streamlit/streamlit/issues) for similar feature requests.
- [x] I added a descriptive title and summary to this issue.
### Summary
Any chance we can allow sidebar to be usable in a fragment?
### Why?
It would be nice to have a viz control in the sidebar but the viz itself in the main page and it would be nice to not have to necessarily re-run the whole page when the control is changed
In my case I have some options in the sidebar and then a button below them. Once the user clicks the button, it creates some data based on the options and creates a plotly chart in the main page. It's convenient to have the button and options in the sidebar so they don't clutter the main page, but the entire main page isn't affected - just the chart - so it would be faster to only have that re-run
### How?
_No response_
### Additional Context
_No response_ | open | 2025-02-21T17:28:32Z | 2025-02-21T21:14:33Z | https://github.com/streamlit/streamlit/issues/10481 | [
"type:enhancement",
"feature:st.sidebar",
"feature:st.fragment"
] | msquaredds | 1 |
aleju/imgaug | machine-learning | 736 | bb.extract_from_image gives negative values | Hi,
I have a small image (60,60), with an even smaller bounding box. I want to extend the bounding box by a constant value (or until the image border is reached). I dont want zero-padding. I used:
`img2 = bb.extend(all_sides=20).extract_from_image(img, pad=False)`
This seems not to work when the bb overshoots to the top/left. The resulting bb is:
`BoundingBox(x1=7.0000, y1=-3.0000, x2=74.0000, y2=63.0000, label=None)`
and the extracted img2 has the shape:
`(0, 67, 3)`
Using padding fixes the error, but adds a border, obviously. I think this is a bug?
Best regards,
Maik | open | 2020-12-08T10:53:54Z | 2020-12-08T10:53:54Z | https://github.com/aleju/imgaug/issues/736 | [] | mfruhner | 0 |
minimaxir/textgenrnn | tensorflow | 238 | ImportError: cannot import name 'multi_gpu_model' from 'tensorflow.keras.utils' | Help please: I get `ImportError: cannot import name 'multi_gpu_model' from 'tensorflow.keras.utils'` when I run `from textgenrnn import textgenrnn`. | closed | 2021-10-12T22:43:24Z | 2021-12-30T01:39:48Z | https://github.com/minimaxir/textgenrnn/issues/238 | [] | ghost | 3 |
ploomber/ploomber | jupyter | 682 | Re-using tasks | (This issue discusses a few approaches for re-using tasks. The objective is to open the discussion to add a new example that showcases this)
## Re-using tasks in different `pipeline.yaml` files via `import_tasks_from`
This directive allows composing pipelines. [Typically used](https://docs.ploomber.io/en/latest/deployment/batch.html#composing-batch-pipelines) for composing training and serving pipelines. This is a good approach when the same task needs to appear in two separate `pipeline.yaml`
## Re-using the same `source:` in the same `pipeline.yaml`
Another use case is re-using the same task in the same `pipeline.yaml`. For example, let's say we have a `tasks.drop_columns` task that we want to apply to different datasets. By default Ploomber ties the tasks to their upstream since the names of the upstream tasks must appear in the source. Example:
```python
# tasks.py
def drop_columns(upstream, product):
df = pd.read_csv(upstream['train'])
# ...
```
To fix this, we can turn off `extract_upstream`:
```yaml
meta:
extract_upstream: false
tasks:
- source: tasks.train
...
- source: tasks.drop_columns
name: drop-columns-from-train
upstream: [train]
...
- source: tasks.test
...
- source: tasks.drop_columns
name: drop-columns-from-test
upstream: [test]
```
Then, in our `tasks.py`:
```python
# tasks.py
def drop_columns(upstream, product):
name = list(upstream)[0]
df = pd.read_csv(upstream[name])
# ...
```
Note that this new version doesn't refer to the upstream by name (since in one case the upstream will be `train` and in the second it will be `test`). Alternatively, we could use the shortcut `upstream.first`, which will return the product of the upstream dependency, regardless of what the name of the upstream task is.
[Here's an example](https://github.com/ploomber/projects/blob/reuse-tasks/cookbook/reuse-tasks/parallel-branches/pipeline.yaml)
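To make `upstream.first` concrete, here is a self-contained simulation (the `Upstream` class below is a minimal stand-in, not Ploomber's real implementation):

```python
class Upstream(dict):
    # Minimal stand-in that mimics Ploomber's upstream mapping.
    @property
    def first(self):
        # Return the product of the single upstream task, whatever its name.
        return next(iter(self.values()))

def drop_columns(upstream):
    # Works whether the single upstream is named "train" or "test".
    return f"processing {upstream.first}"

print(drop_columns(Upstream(train="train.csv")))  # processing train.csv
print(drop_columns(Upstream(test="test.csv")))    # processing test.csv
```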
This gets a bit trickier if the upstream products come in different shapes (e.g. one generates a single product `output.csv`, but another one generates multiple: `{'train': 'train.csv', 'test': 'test.csv'}`). In such a case, the task can be parametrized to know which upstream product to use.
```yaml
- source: tasks.drop_columns
name: drop-columns-from-test
upstream: [split_data]
params:
# process the "train" product generated by the "split_data" upstream task
product_key: train
```
[Here's an example](https://github.com/ploomber/projects/blob/reuse-tasks/cookbook/reuse-tasks/tree/pipeline.yaml)
Alternatively, @fferegrino suggested that a task may declare a specific product as upstream (as opposed to a task). However, this requires a good amount of work since the current implementation only allows establishing "upstream/downstream" relationships among tasks.
| closed | 2022-03-25T23:38:43Z | 2022-09-06T01:49:35Z | https://github.com/ploomber/ploomber/issues/682 | [] | edublancas | 0 |
arogozhnikov/einops | numpy | 274 | einops compatible with ONNX export? | Getting some einops related bugs when trying to export to ONNX.
```
/home/bryan/venv/gpu/lib/python3.10/site-packages/einops/packing.py:149: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
-1 if -1 in p_shape else prod(p_shape)
/home/bryan/venv/gpu/lib/python3.10/site-packages/einops/packing.py:154: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
if n_unknown_composed_axes > 1:
/home/bryan/venv/gpu/lib/python3.10/site-packages/einops/packing.py:166: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
if n_unknown_composed_axes == 0:
```
Are these safe to be ignored? | closed | 2023-08-09T02:27:54Z | 2023-08-10T05:42:29Z | https://github.com/arogozhnikov/einops/issues/274 | [
"question"
] | bryanhpchiang | 3 |
tartiflette/tartiflette | graphql | 639 | ERROR: Failed building wheel for tartiflette |
## Report a bug
Please provide the steps to reproduce your problem and, if possible, a full reproducible environment. **As we are working directly with containers, please provide the Dockerfile sample or the Docker image name**
* [ ] **Explain with a simple sentence the expected behavior**
* [ ] **Tartiflette version:** 0.1.0 and 1.4.1
* [ ] **Python version:** 3.8 and 3.11
* [ ] **Executed in docker:** No
* [ ] **Is it a regression from a previous versions?** No
I need to install tartiflette and tartiflette_asgi, but I get `Building wheel for tartiflette (pyproject.toml) did not run successfully` with `[WinError 2] The system cannot find the file specified`.
I had this problem with tartiflette, but pip3 install --only-binary tartiflette tartiflette helped me.
Now I have this problem with tartiflette_asgi. I tried `pip3 install --only-binary tartiflette_asgi tartiflette_asgi` and `pip3 install tartiflette_asgi --no-binary :all:`, but neither helps.
Windows 11; Python 3.8 and 3.10; setuptools 68.2.2; `cmake --version` reports cmake version 3.27.7.
`2023-10-23T21:15:05,639 Skipping link: not a file: https://pypi.org/simple/lark-parser/
2023-10-23T21:15:05,641 Given no hashes to check 2 links for project 'lark-parser': discarding no candidates
2023-10-23T21:15:05,642 Collecting lark-parser==0.12.0 (from tartiflette<1.5,>=1.0->tartiflette_asgi)
2023-10-23T21:15:05,643 Created temporary directory: C:\Users\Admin\AppData\Local\Temp\pip-unpack-wiayrbt2
2023-10-23T21:15:05,645 Using cached lark_parser-0.12.0-py2.py3-none-any.whl (103 kB)
2023-10-23T21:15:05,668 Requirement already satisfied: idna>=2.8 in c:\users\admin\appdata\local\programs\python\python311\lib\site-packages (from anyio<5,>=3.4.0->starlette<1.0,>=0.13->tartiflette_asgi) (3.4)
2023-10-23T21:15:05,671 Requirement already satisfied: sniffio>=1.1 in c:\users\admin\appdata\local\programs\python\python311\lib\site-packages (from anyio<5,>=3.4.0->starlette<1.0,>=0.13->tartiflette_asgi) (1.3.0)
2023-10-23T21:15:05,674 Requirement already satisfied: pycparser in c:\users\admin\appdata\local\programs\python\python311\lib\site-packages (from cffi<2.0.0,>=1.0.0->tartiflette<1.5,>=1.0->tartiflette_asgi) (2.21)
2023-10-23T21:15:05,681 Created temporary directory: C:\Users\Admin\AppData\Local\Temp\pip-unpack-n4cs3gf1
2023-10-23T21:15:05,682 Building wheels for collected packages: tartiflette
2023-10-23T21:15:05,684 Created temporary directory: C:\Users\Admin\AppData\Local\Temp\pip-wheel-815nh3t6
2023-10-23T21:15:05,685 Destination directory: C:\Users\Admin\AppData\Local\Temp\pip-wheel-815nh3t6
2023-10-23T21:15:05,687 Running command Building wheel for tartiflette (pyproject.toml)
2023-10-23T21:15:06,040 running bdist_wheel
2023-10-23T21:15:06,052 running build
2023-10-23T21:15:06,053 running build_py
2023-10-23T21:15:06,187 CMake Deprecation Warning at CMakeLists.txt:1 (CMAKE_MINIMUM_REQUIRED):
2023-10-23T21:15:06,187 Compatibility with CMake < 3.5 will be removed from a future version of
2023-10-23T21:15:06,187 CMake.
2023-10-23T21:15:06,188 Update the VERSION argument <min> value or use a ...<max> suffix to tell
2023-10-23T21:15:06,188 CMake that the project does not need compatibility with older versions.
2023-10-23T21:15:10,186 CMake Warning (dev) at CMakeLists.txt:10 (FIND_PACKAGE):
2023-10-23T21:15:10,186 Policy CMP0148 is not set: The FindPythonInterp and FindPythonLibs modules
2023-10-23T21:15:10,186 are removed. Run "cmake --help-policy CMP0148" for policy details. Use
2023-10-23T21:15:10,186 the cmake_policy command to set the policy and suppress this warning.
2023-10-23T21:15:10,186 This warning is for project developers. Use -Wno-dev to suppress it.
2023-10-23T21:15:10,438 error: [WinError 2] The system cannot find the specified file
2023-10-23T21:15:10,470 ERROR: Building wheel for tartiflette (pyproject.toml) exited with 1
2023-10-23T21:15:10,470 [bold magenta]full command[/]: [blue]'c:\users\admin\appdata\local\programs\python\python311\python.exe' 'c:\users\admin\appdata\local\programs\python\python311\Lib\site-packages\pip\_vendor\pyproject_hooks\_in_process\_in_process.py' build_wheel 'C:\Users\Admin\AppData\Local\Temp\tmps5v5k_0k'[/]
2023-10-23T21:15:10,470 [bold magenta]cwd[/]: C:\Users\Admin\AppData\Local\Temp\pip-install-3ettkc94\tartiflette_7d5a2e1d9b3447749b84b90c152742ea
2023-10-23T21:15:10,470 ERROR: Failed building wheel for tartiflette
2023-10-23T21:15:10,470 Failed to build tartiflette
2023-10-23T21:15:10,470 ERROR: Could not build wheels for tartiflette, which is required to install pyproject.toml-based projects
2023-10-23T21:15:10,470 Exception information:
2023-10-23T21:15:10,470 Traceback (most recent call last):
2023-10-23T21:15:10,470 File "c:\users\admin\appdata\local\programs\python\python311\Lib\site-packages\pip\_internal\cli\base_command.py", line 180, in exc_logging_wrapper
2023-10-23T21:15:10,470 status = run_func(*args)
2023-10-23T21:15:10,470 ^^^^^^^^^^^^^^^
2023-10-23T21:15:10,470 File "c:\users\admin\appdata\local\programs\python\python311\Lib\site-packages\pip\_internal\cli\req_command.py", line 245, in wrapper
2023-10-23T21:15:10,470 return func(self, options, args)
2023-10-23T21:15:10,470 ^^^^^^^^^^^^^^^^^^^^^^^^^
2023-10-23T21:15:10,470 File "c:\users\admin\appdata\local\programs\python\python311\Lib\site-packages\pip\_internal\commands\install.py", line 429, in run
2023-10-23T21:15:10,470 raise InstallationError(
2023-10-23T21:15:10,470 pip._internal.exceptions.InstallationError: Could not build wheels for tartiflette, which is required to install pyproject.toml-based projects
2023-10-23T21:15:10,548 Remote version of pip: 23.3
2023-10-23T21:15:10,548 Local version of pip: 23.3
2023-10-23T21:15:10,564 Was pip installed by pip? True
2023-10-23T21:15:10,564 Removed build tracker: 'C:\\Users\\Admin\\AppData\\Local\\Temp\\pip-build-tracker-jb7gpks9'
`
| open | 2023-10-23T18:18:31Z | 2023-10-23T18:18:51Z | https://github.com/tartiflette/tartiflette/issues/639 | [] | ILugaro | 0 |
pytest-dev/pytest-selenium | pytest | 50 | Implement cloud providers as plugins | I think it would make sense for the various cloud providers (Sauce Labs, BrowserStack, TestingBot) to be reimplemented as plugins. This would give more flexibility than the current implementation as each provider has different features and API models. I'm not sure of the best approach, but suspect that we could implement custom hooks in the main plugin to support these additional driver plugins.
| closed | 2016-01-19T19:59:30Z | 2016-02-24T11:18:00Z | https://github.com/pytest-dev/pytest-selenium/issues/50 | [
"enhancement"
] | davehunt | 2 |
gee-community/geemap | jupyter | 1,017 | Map.addLayerControl() doesn't seem to be working | <!-- Please search existing issues to avoid creating duplicates. -->
### Environment Information
- geemap version: 0.13.1
- Python version: 3.9.12 (conda 4.12.0)
- Operating System: Windows 11
### Description
I'm new to geemap and was looking around a bit and following along the instructions on this page:
[https://geemap.org/notebooks/geemap_and_folium]
### What I Did
in cell [18]
```
Map.addLayerControl()
Map
```
No layercontrol appeared in the top-right of the map, as I was expecting (like in folium/leaflet)
In the later steps, when adding the various basemaps, they couldn't be found either.
It seems something is broken, or I am doing something quite wrong :(
this is the final image after executing cell [21]. pretty bare :(

| closed | 2022-04-13T19:07:42Z | 2022-10-11T09:01:41Z | https://github.com/gee-community/geemap/issues/1017 | [
"bug"
] | meesterp | 5 |
mlfoundations/open_clip | computer-vision | 599 | Batch Inferencing | How to batch inference to get
image_features, text_features.
Facing dimension issue
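For the padding step specifically, the underlying idea can be sketched framework-free (illustrative only; real tokenizers return tensors, not lists):

```python
def pad_sequences(seqs, pad_value=0):
    # Right-pad token-id lists so every sequence has the same length,
    # allowing them to be stacked into one rectangular batch.
    longest = max(len(s) for s in seqs)
    return [s + [pad_value] * (longest - len(s)) for s in seqs]

print(pad_sequences([[1, 2], [3, 4, 5]]))  # [[1, 2, 0], [3, 4, 5]]
```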
```python
# Stack all the images into a single tensor
image_tensors = torch.stack([preprocessor_openCLIP(img) for img in crop_imgs], dim=0)
print('img shape batch ', image_tensors.shape)

# Tokenize the query strings
text_inputs = [tokenizer_openCLIP(query) for query in query_strings]
print(text_inputs)

max_length = max([input_tensor.size(0) for input_tensor in text_inputs])
print(max_length)

padded_text_inputs = pad_text_inputs(text_inputs, max_length)
text_inputs_tensor = torch.stack(padded_text_inputs, dim=1)  # Convert to a tensor
print('text shape batch ', text_inputs_tensor.shape)
text_inputs_tensor_permuted = text_inputs_tensor.permute(1, 0, 2)

with torch.no_grad(), torch.cuda.amp.autocast():
    image_features = model_openCLIP.encode_image(image_tensors)
    print(333, image_features.shape)
    text_features = model_openCLIP.encode_text(text_inputs_tensor_permuted)
    image_features /= image_features.norm(dim=-1, keepdim=True)
    text_features /= text_features.norm(dim=-1, keepdim=True)
    text_probs = (100.0 * image_features @ text_features.T).softmax(dim=-1)

results_list = torch.flatten(text_probs, start_dim=0, end_dim=1).cpu().tolist()
```
| closed | 2023-08-17T13:39:42Z | 2023-09-15T22:08:37Z | https://github.com/mlfoundations/open_clip/issues/599 | [] | nilesh23041999 | 1 |
seleniumbase/SeleniumBase | web-scraping | 3,051 | Please add "tel:" to def assert_no_404_errors(self, multithreaded=True, timeout=None): | Hello,
You have a great exception handler for "data:", "mailto:", etc. links.
Please also add "tel:" to the list, so that tests don't fail in this case.
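For illustration, the kind of scheme filter being requested (the names below are hypothetical, not SeleniumBase's actual code):

```python
IGNORED_SCHEMES = ("data:", "mailto:", "tel:")

def should_check_link(link):
    # Skip non-HTTP schemes that can never return a 404.
    return not link.lower().startswith(IGNORED_SCHEMES)

print(should_check_link("tel:+4912345678"))        # False
print(should_check_link("https://example.com/x"))  # True
```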
Best,
Thomas | closed | 2024-08-23T11:16:54Z | 2024-08-29T03:15:18Z | https://github.com/seleniumbase/SeleniumBase/issues/3051 | [
"enhancement"
] | Th0mas89 | 2 |
jupyterlab/jupyter-ai | jupyter | 677 | Contributor documentation: Add guidance on contributing a new provider | <!-- Welcome! Thank you for contributing. These HTML comments will not render in the issue, but you can delete them once you've read them if you prefer! -->
<!--
Thanks for thinking of a way to improve JupyterLab. If this solves a problem for you, then it probably solves that problem for lots of people! So the whole community will benefit from this request.
Before creating a new feature request please search the issues for relevant feature requests.
-->
### Problem
Open source contributors are opening PRs to add new providers to Jupyter AI. This is extremely helpful to everybody, and we greatly appreciate others taking the time to write and test these. However, contributors have had difficulty doing so due to subtle and mostly undocumented details of how the Python source works.
The contributor documentation should have a new section that:
1. Describes how to contribute a new provider, end-to-end,
2. Indicates that contributors need to run `jlpm dev-install` again to make new providers show in the UI after declaring the entry point in `pyproject.toml`, and
3. Indicates that contributors should define providers in separate files to keep third-party dependencies optional.
| open | 2024-03-05T23:56:41Z | 2024-03-06T02:28:54Z | https://github.com/jupyterlab/jupyter-ai/issues/677 | [
"documentation"
] | dlqqq | 1 |
nschloe/tikzplotlib | matplotlib | 303 | Multiple Errors with Plotting |
I'm trying to save the plot generated by the attached (in the zip file) as a tikz file.
[thermophysical_properties.zip](https://github.com/nschloe/matplotlib2tikz/files/3317378/thermophysical_properties.zip)
The plot should look like the attached pdf.
[Thermophysical.pdf](https://github.com/nschloe/matplotlib2tikz/files/3317379/Thermophysical.pdf)
I'm using the following code to insert it into a latex document (after renaming it to a .tikz file)
```latex
\begin{figure}
\centering
\input{thermophysical_plots.tikz}
\caption{Plots thermophysical properties used in the dynamic model}
\label{fig:thermophysical_diagram}
\end{figure}
```
However, this is what I get:

Which looks nothing like the expected output.
Note, the matplotlib .pgf exporter doesn't work for this either.
| closed | 2019-06-23T00:43:55Z | 2019-10-23T18:35:26Z | https://github.com/nschloe/tikzplotlib/issues/303 | [] | terryphi | 2 |
qubvel-org/segmentation_models.pytorch | computer-vision | 866 | get_preprocessing_fn | preprocess_input = get_preprocessing_fn('resnet18', pretrained='imagenet')
...
img = preprocess_input(img)
What range of pixels should i put there? [0, 1] or [0,255]. Does it depends on specific preprocessing function? Or it's always the same rule? | closed | 2024-03-26T08:24:23Z | 2024-05-26T10:52:41Z | https://github.com/qubvel-org/segmentation_models.pytorch/issues/866 | [
"Stale"
] | isayoften | 2 |
iperov/DeepFaceLab | deep-learning | 5,590 | Autopilot Updating Notes | Hello ✨ The content you have summarized is very comprehensive. May I share my own notes as well? They contain my understanding of autonomous driving, shared with everyone; I hope we can all keep improving the related content together ✨ Thank you
[Autopilot-Updating-Notes](https://github.com/nwaysir/Autopilot-Updating-Notes) | open | 2022-11-26T03:59:53Z | 2023-06-17T16:46:43Z | https://github.com/iperov/DeepFaceLab/issues/5590 | [] | gotonote | 2 |
robotframework/robotframework | automation | 4,924 | WHILE `on_limit` missing from listener v2 attributes | WHILE loops got an `on_limit` option for controlling what to do if the loop limit is reached in RF 6.1 (#4562). It seems we forgot to add that to the attributes passed to `start/end_keyword` methods of the listener v2 API. The User Guide claims it would be there which makes the situation worse. | closed | 2023-11-02T16:19:29Z | 2023-11-07T09:15:05Z | https://github.com/robotframework/robotframework/issues/4924 | [
"bug",
"priority: medium",
"alpha 1",
"effort: small"
] | pekkaklarck | 0 |
sinaptik-ai/pandas-ai | pandas | 1,141 | Provide custom chart name to save in charts_directory while chatting with the PandasAI | ### ๐ The feature
Passing custom_path to save charts is available but giving custom names to charts is still missing.
So, I am requesting this feature to be added.
### Motivation, pitch
I am working on a project where I need to save the charts generated through PandasAI with custom names to display to the user as required.
### Alternatives
_No response_
### Additional context
_No response_ | closed | 2024-05-02T04:48:59Z | 2024-08-08T16:04:31Z | https://github.com/sinaptik-ai/pandas-ai/issues/1141 | [] | satyamj3 | 0 |
SciTools/cartopy | matplotlib | 2,036 | Ordnance Survey WMTS Out of Date | ### Description
The Web Tile Retrieval class for Ordnance Survey's map data uses an out-of-date API and so does not work when you try to use it. This class exists in `cartopy.io.img_tiles`.
OS has a new API service called the [OS Data Hub](https://osdatahub.os.uk/) that has replaced the previous API, so we should update cartopy to reflect the new service.
The new class can use the [OS Maps API](https://osdatahub.os.uk/docs/wmts/overview) WMS instead.
I'll work on an updated version and submit it for a pull request.
| closed | 2022-04-20T14:13:57Z | 2022-12-02T09:02:34Z | https://github.com/SciTools/cartopy/issues/2036 | [
"Type: Infrastructure",
"Component: Raster source"
] | dchirst | 0 |
gradio-app/gradio | data-science | 10,702 | Cannot get selected row in a sorted list (missing documentation) | ### Describe the bug
After much experimentation, I cannot get the gr.DataFrame listener `show_selected()` to determine which row was clicked after the table is sorted, or the underlying df is modified. target.index[0] always shows the visual row that was clicked, regardless of any changes in the underlying data, and there doesn't seem to be a way to figure out which row in the model df was selected.
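A framework-agnostic sketch of the usual workaround: carry a stable row id in the model so a click on the sorted view can be mapped back to the model row (the names here are hypothetical, not Gradio API):

```python
# Model rows carry a stable id; the view may be re-sorted freely.
model = [
    {"id": 0, "name": "Alice", "age": 25},
    {"id": 1, "name": "Bob", "age": 30},
    {"id": 2, "name": "Charlie", "age": 35},
]

view = sorted(model, key=lambda r: r["age"], reverse=True)  # UI sort

def selected_model_row(view, clicked_idx, model):
    stable_id = view[clicked_idx]["id"]  # the id survives sorting
    return next(r for r in model if r["id"] == stable_id)

print(selected_model_row(view, 0, model)["name"])  # Charlie
```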
### Have you searched existing issues? ๐
- [x] I have searched and found no existing issues
### Reproduction
```python
import gradio as gr
import pandas as pd
class DataModel:
def __init__(self):
self.df = pd.DataFrame({"Name": ["Alice", "Bob", "Charlie"], "Age": [25, 30, 35]})
def get_data(self):
return self.df.copy() # Ensures we always work on a copy
model = DataModel()
def show_selected(evt: gr.SelectData, displayed_df):
row_idx = evt.index[0] # Get row index from UI (sorted table)
sorted_df = displayed_df.reset_index(drop=True) # Reset index to match UI order
selected_row = sorted_df.iloc[row_idx].to_dict() # Extract full row as dictionary
return f"Selected Row: {selected_row}"
with gr.Blocks() as demo:
gr.Markdown("# Click a Row in the Table (Sorting Now Works Perfectly)")
df_view = gr.DataFrame(value=model.get_data(), interactive=True)
output_text = gr.Textbox(label="Clicked Row Data")
# Ensure selection retrieves the correct full row
df_view.select(show_selected, inputs=df_view, outputs=output_text)
demo.launch()
```
### Screenshot
<img width="760" alt="Image" src="https://github.com/user-attachments/assets/13a297c0-1fd3-4182-9db8-d343f9506c16" />
### Logs
```shell
```
### System Info
```shell
Safari Version 18.3 (20620.2.4.11.5)
```
### Severity
I cannot work around it :( | closed | 2025-03-01T02:27:00Z | 2025-03-06T15:54:02Z | https://github.com/gradio-app/gradio/issues/10702 | [
"docs/website"
] | rbpasker | 1 |
globaleaks/globaleaks-whistleblowing-software | sqlalchemy | 3,761 | Allow formatting of the text in the Disclaimer field. | ### Proposal
The information that can be given in this field can be extensive and it would be interesting if the text could be given a certain format in order to differentiate different sections of the disclaimer.
I think allowing a few HTML tags like `<b>`, `<strong>`, `<i>`, and `<h1>` to `<h6>` would be enough to give some formatting to this field and make it easier to read.
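A minimal sketch of the tag allowlisting this implies (illustrative only, not GlobaLeaks code; a regex-based filter is not a hardened sanitizer):

```python
import re

ALLOWED = {"b", "strong", "i", "h1", "h2", "h3", "h4", "h5", "h6"}

def keep_allowed_tags(text):
    # Drop any tag whose name is not in the allowlist; keep its inner text.
    def repl(match):
        name = match.group(1).lower()
        return match.group(0) if name in ALLOWED else ""
    return re.sub(r"</?([A-Za-z0-9]+)[^>]*>", repl, text)

print(keep_allowed_tags("<b>bold</b> and <script>evil()</script>"))
# -> <b>bold</b> and evil()
```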
### Motivation and context
Allowing some formatting in this field would make it easier to read. | closed | 2023-11-07T14:55:10Z | 2023-11-07T18:00:31Z | https://github.com/globaleaks/globaleaks-whistleblowing-software/issues/3761 | [] | v-j-f | 1 |
2noise/ChatTTS | python | 796 | How to improve inference latency performance? | ```
2024-10-22 03:26:36.033 | INFO | app:generate_audio:73 - Refined text: ['but since [uv_break] ๆณข ๅก [uv_break] like [uv_break] like ้ ๆณ, like pocari sweat, [uv_break] the drink. [uv_break], and [uv_break] ไธ ๆน ๆฐ ๆ, [uv_break] eastern cultures and peoples, are super different,']
2024-10-22 03:26:36.033 | INFO | app:generate_audio:78 - Start voice inference.
text: 16%|█▌ | 62/384(max) [00:01, 56.04it/s]
code: 30%|███ | 606/2048(max) [00:10, 55.61it/s]
2024-10-22 03:26:48.069 | INFO | app:generate_audio:91 - Inference completed.
```
This simple sentence took 12 seconds on Nvidia Tesla T4. Is it correct to assume ChatTTS is not suitable for situations that require low "Time To First Audio(TTFA)"? | open | 2024-10-22T13:31:06Z | 2024-10-30T13:29:58Z | https://github.com/2noise/ChatTTS/issues/796 | [
"documentation",
"help wanted",
"algorithm",
"performance"
] | twocode | 1 |
apify/crawlee-python | web-scraping | 389 | Would be great with a user guide. | Would be great with a user guide.
"Just" drag the .gerberset on Main.py does nothing.
_Originally posted by @martin323232 in https://github.com/CRImier/Panelizer2PnP/issues/1_ | closed | 2024-08-02T01:45:17Z | 2024-08-02T06:58:54Z | https://github.com/apify/crawlee-python/issues/389 | [] | Koppom94 | 0 |
pallets/quart | asyncio | 111 | Conceptual Theory | Flask is not ASGI framework, but it supports async and await keywords in their routes, what does that mean. Will that not make flask an async. Can you compare performance if flask used with async-await keywords and using a new ASGI framework like Quart? | closed | 2020-10-16T16:43:44Z | 2022-07-05T01:58:52Z | https://github.com/pallets/quart/issues/111 | [] | jaytimbadia | 3 |
litestar-org/litestar | api | 3,840 | Bug: WebSocket connection fails due to 'GET' method being sent instead of None (Litestar expects None) | ### Description
When using Litestar with Socketify as the ASGI server for handling WebSocket connections, I encountered a MethodNotAllowedException with the following traceback. The error seems to stem from the fact that Socketify is sending a 'GET' method in the ASGI scope, whereas Litestar expects the method to be None for WebSocket connections.
Additional Information: Upon investigation, it seems that Socketify is passing 'GET' in the ASGI scope for WebSocket upgrades. However, Litestar expects the method to be None for WebSocket connections. The code for the ASGI implementation in Socketify at line 106 shows that the method is being set to 'GET'.
https://github.com/cirospaciari/socketify.py/blob/main/src/socketify/asgi.py
` "method": ffi.unpack(info.method, info.method_size).decode("utf8"), `
It would be helpful if Litestar could gracefully handle this scenario or if Socketify could adjust its ASGI scope generation for WebSocket connections to comply with the expected behavior.
### URL to code causing the issue
https://github.com/litestar-org/litestar/blob/main/litestar/_asgi/routing_trie/traversal.py
### MCVE
```python
from litestar import Litestar, websocket_listener
from socketify import ASGI
@websocket_listener("/")
async def handler(data: str) -> str:
return data
litestar_app = Litestar([handler], debug=True)
if __name__ == "__main__":
app = ASGI(litestar_app)
app.listen(8000, lambda config: print("Listening on port http://localhost:%d now\n" % config.port))
app.run()
```
### Steps to reproduce
```bash
1. Run the code provided above.
2. Initiate a WebSocket connection to ws://localhost:8000.
3. Observe the error in the logs.
Expected behavior: The WebSocket connection should be established successfully, and Litestar should handle the connection without throwing a MethodNotAllowedException.
```
### Screenshots
_No response_
### Logs
```bash
Listening on port http://localhost:8000 now
ERROR - 2024-10-25 11:05:06,635 - litestar - config - Uncaught exception (connection_type=websocket, path=/):
Traceback (most recent call last):
File "C:\Users\gangstand\Desktop\ws.chat\.venv\Lib\site-packages\litestar\_asgi\routing_trie\traversal.py", line 136, in parse_path_to_route
asgi_app, handler = parse_node_handlers(node=root_node.children[path], method=method)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\gangstand\Desktop\ws.chat\.venv\Lib\site-packages\litestar\_asgi\routing_trie\traversal.py", line 82, in parse_node_handlers
return node.asgi_handlers[method]
~~~~~~~~~~~~~~~~~~^^^^^^^^
KeyError: 'GET'
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "C:\Users\gangstand\Desktop\ws.chat\.venv\Lib\site-packages\litestar\middleware\_internal\exceptions\middleware.py", line 159, in __call__
await self.app(scope, receive, capture_response_started)
File "C:\Users\gangstand\Desktop\ws.chat\.venv\Lib\site-packages\litestar\_asgi\asgi_router.py", line 90, in __call__
asgi_app, route_handler, scope["path"], scope["path_params"], path_template = self.handle_routing(
^^^^^^^^^^^^^^^^^^^^
File "C:\Users\gangstand\Desktop\ws.chat\.venv\Lib\site-packages\litestar\_asgi\asgi_router.py", line 115, in handle_routing
return parse_path_to_route(
^^^^^^^^^^^^^^^^^^^^
File "C:\Users\gangstand\Desktop\ws.chat\.venv\Lib\site-packages\litestar\_asgi\routing_trie\traversal.py", line 173, in parse_path_to_route
raise MethodNotAllowedException() from e
litestar.exceptions.http_exceptions.MethodNotAllowedException: 405: Method Not Allowed
```
### Litestar Version
[tool.poetry]
name = "app"
version = "0.1.0"
description = "WebSocket connection fails due to 'GET' method being sent instead of None (Litestar expects None)"
authors = ["gangstand <ganggstand@gmail.com>"]
[tool.poetry.dependencies]
python = "^3.12"
socketify = "^0.0.28"
litestar = "^2.12.1"
granian = "^1.6.1"
uvicorn = "^0.32.0"
websockets = "^13.1"
[tool.poetry.dev-dependencies]
ruff = "*"
isort = "*"
mypy = "*"
[build-system]
requires = ["poetry-core"]
build-backend = "poetry.core.masonry.api"
[tool.ruff]
fix = true
unsafe-fixes = true
line-length = 120
[tool.ruff.format]
docstring-code-format = true
[tool.ruff.lint]
select = ["ALL"]
ignore = ["EM", "FBT", "TRY003", "D1", "D203", "D213", "G004", "FA", "COM812", "ISC001", "PLR0913"]
[tool.ruff.lint.isort]
no-lines-before = ["standard-library", "local-folder"]
known-third-party = []
known-local-folder = []
lines-after-imports = 2
[tool.ruff.lint.extend-per-file-ignores]
"tests/*.py" = ["S101", "S311"]
[tool.coverage.report]
exclude_also = ["if typing.TYPE_CHECKING:"]
### Platform
- [ ] Linux
- [ ] Mac
- [X] Windows
- [ ] Other (Please specify in the description above) | closed | 2024-10-25T08:24:42Z | 2025-03-20T15:55:01Z | https://github.com/litestar-org/litestar/issues/3840 | [
"Bug :bug:"
] | gangstand | 3 |
encode/databases | asyncio | 197 | Please include docs and tests directories in the tarball | Hi,
Thanks for writing databases! I use it at work (after trying several other solutions).
Could you re-add docs/ and tests/ directories to the tarball published on PyPI?
I use this tarball to generate [Debian package](https://packages.debian.org/python3-databases) and I want to ship .md files and run tests during build. | closed | 2020-04-27T12:34:23Z | 2020-04-28T06:37:19Z | https://github.com/encode/databases/issues/197 | [] | p1otr | 2 |
hzwer/ECCV2022-RIFE | computer-vision | 91 | Transparent PNG support | Seeing that recently EXR support was added, is it possible to support transparency (alpha channel) for PNG input and output (using `--img --png`) for inference_video.py?
This would enable interpolation of transparent GIFs. | closed | 2021-01-11T15:26:08Z | 2022-12-11T09:58:53Z | https://github.com/hzwer/ECCV2022-RIFE/issues/91 | [] | n00mkrad | 19 |
saleor/saleor | graphql | 17,178 | Bug: Stripe payment gateway not found - Unhandled Runtime Error Error: No available payment gateways | ### What are you trying to achieve?
Stripe checkout form on default storefront checkout page
### Steps to reproduce the problem
Install the default storefront, enable the stripe plugin for the channel in the admin and generate the webhook, then visit the default storefront and stripe payment form doesn't load
### What did you expect to happen?
Stripe card form doesn't load in checkout page
### Logs
2024-12-18 01:43:45,637 WARNING saleor.payment.gateways.stripe.webhooks Invalid signature for Stripe webhook [PID:12:ThreadPoolExecutor-45_0]
2024-12-18 01:43:45,638 WARNING django.request Bad Request: /plugins/channel/channel-pln/saleor.payments.stripe/webhooks/ [PID:12:ThreadPoolExecutor-46_0]
2024-12-18 01:45:44,063 WARNING saleor.payment.gateways.stripe.webhooks Invalid signature for Stripe webhook [PID:9:ThreadPoolExecutor-19_0]
2024-12-18 01:45:44,065 WARNING django.request Bad Request: /plugins/channel/default-channel/saleor.payments.stripe/webhooks/ [PID:9:ThreadPoolExecutor-20_0]
2024-12-18 17:32:59,066 DEBUG saleor.payment.gateways.stripe.webhooks Processing new Stripe webhook [PID:10:ThreadPoolExecutor-40_0]
2024-12-18 17:32:59,067 DEBUG saleor.payment.gateways.stripe.webhooks Processing new Stripe webhook [PID:12:ThreadPoolExecutor-127_0]
2024-12-18 17:32:59,073 WARNING saleor.payment.gateways.stripe.webhooks Payment for PaymentIntent was not found [PID:12:ThreadPoolExecutor-127_0]
### Environment
Saleor version: …
OS and version: …
latest | open | 2024-12-18T19:29:41Z | 2025-03-18T10:45:37Z | https://github.com/saleor/saleor/issues/17178 | [
"bug",
"triage"
] | chillpilllike | 3 |
LibrePhotos/librephotos | django | 696 | Integrate pull request preview environments | I would like to support LibrePhotos by implementing [Uffizzi](https://github.com/UffizziCloud/uffizzi) preview environments.
Disclaimer: I work on [Uffizzi](https://github.com/UffizziCloud/uffizzi).
Uffizzi is a Open Source full stack previews engine and our platform is available completely free for LibrePhotos (and all open source projects). This will provide maintainers with preview environments of every PR in the cloud, which enables faster iterations and reduces time to merge. You can see the open source repos which are currently using Uffizzi over [here](https://uffizzi.notion.site)
Uffizzi is purpose-built for the task of previewing PRs and it integrates with your workflow to deploy preview environments in the background without any manual steps for maintainers or contributors.
We can go ahead and create an Initial PoC for you right away if you think there is value in this proposal.
TODO:
- [ ] Initial PoC
cc @waveywaves
| open | 2022-12-12T14:46:29Z | 2023-01-16T08:37:56Z | https://github.com/LibrePhotos/librephotos/issues/696 | [
"enhancement"
] | jpthurman | 0 |
plotly/dash | flask | 2,813 | How does dash combine with flask jwt? | My previous project used Flask-JWT for authentication. After switching to Dash, how can I keep supporting JWT? | closed | 2024-03-25T08:31:51Z | 2024-04-02T18:01:37Z | https://github.com/plotly/dash/issues/2813 | [] | jaxonister | 1 |
davidsandberg/facenet | tensorflow | 457 | Retrain final layer and export frozen graph | I'm trying to build a real-time facial recognition app ([inspired by this repo](https://github.com/datitran/object_detector_app)), which uses Tensorflow object detectors within a video stream. I was able to detect faces, but not _differentiate_ them, which motivated me to discover facenet. The app allows us to load a frozen Tensorflow graph (`.pb`), and so I'm trying to figure out how I can do so with facenet.
**What I've done**
I've managed to run `classifier.py` using the pretrained resnet on the LFW dataset, but I'm trying to avoid having the SVC layer that comes with the classifier (saved in `.pkl`). I'm new to Tensorflow, and would appreciate any help on this matter.
**What I've tried**
I think the script I'm looking for is `train_tripletloss.py`, but I'm not entirely sure. It seems like I can specify the file path to a pretrained model using `args.pretrained_model`, as well as the images I require for retraining using `args.data_dir`. I think the script handles most of the model training, before `save_variables_and_metagraph` runs to save the progress thus far. However, there are only two files being written: the `.ckpt` and `.meta` files. How do I obtain a frozen graph from here?
I tried using Tensorflow's `export_inference_graph.py` ([link here](https://github.com/tensorflow/models/blob/master/object_detection/export_inference_graph.py)), but that script requires a pipeline config file, and runs that using a specified checkpoint before creating a frozen graph. I do not, however, have the pipeline config file.
I'm pretty lost at this point in time, any help whatsoever would be really great! Thank you. | open | 2017-09-13T14:28:49Z | 2018-04-18T07:21:49Z | https://github.com/davidsandberg/facenet/issues/457 | [] | thisisandreeeee | 1 |
nschloe/tikzplotlib | matplotlib | 87 | Legend title support | So far, mpl2tikz does not support legend titles. Consider this mwe,
``` python
import numpy as np
import matplotlib.pyplot as plt
plt.plot([1,2], label="foo")
plt.plot([1,3], label="bar")
plt.legend(loc="lower right", title="title")
from matplotlib2tikz import save as tikz_save
tikz_save("legend_title_mwe.tex")
```
the title is not displayed.
Adding a title to a legend in pgfplots has been discussed [here](http://tex.stackexchange.com/questions/2311/add-a-legend-title-to-the-legend-box-with-pgfplots).
Have you encountered this before/found an idea for a workaround?
Thanks, pylipp
EDIT:
A possible way could be to add
``` python
title_text = obj.get_title().get_text()
if title_text != "None":
texts.append('%s' % title_text)
```
in [draw_legend()](https://github.com/nschloe/matplotlib2tikz/blob/f605bfae12272a2e70b885dd2267f98773a9ae63/matplotlib2tikz/legend.py) before querying any other of the object's texts.
Eventually, `\addlegendimage{empty legend}` has to be added before the first `\addplot`.
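For reference, the pgfplots side of that workaround (from the linked TeX.SE thread) looks roughly like this; the plots and title text are placeholders:

```latex
\begin{tikzpicture}
\begin{axis}[legend pos=south east]
  % Reserve the first legend slot for the title text.
  \addlegendimage{empty legend}
  \addplot {x};
  \addplot {2*x};
  % Negative hspace pulls the title over the empty image's gap.
  \addlegendentry{\hspace{-.6cm}\textbf{title}}
  \addlegendentry{foo}
  \addlegendentry{bar}
\end{axis}
\end{tikzpicture}
```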
| closed | 2016-02-29T17:35:35Z | 2019-03-19T20:06:10Z | https://github.com/nschloe/tikzplotlib/issues/87 | [] | pylipp | 1 |
aiogram/aiogram | asyncio | 1,073 | Add possibility to get message by given chat_id and message_id | ### aiogram version
3.x
### Problem
I can't find a way to get a message object by a given chat_id and message_id,
but there are situations where this can be useful.
The Telegram API [has such a method](https://core.telegram.org/method/messages.getMessages)
Pyrogram [also](https://docs.pyrogram.org/api/methods/get_messages)
### Possible solution
Add an async function
`get_messages(chat_id: int, message_id: int | Iterable[int]) -> Message | list[Message]`
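A minimal sketch of how the proposed function could normalize its `message_id` argument; the helper name is hypothetical and not part of aiogram:

```python
from collections.abc import Iterable

def normalize_message_ids(message_id: "int | Iterable[int]") -> list[int]:
    """Accept a single id or an iterable of ids, as in the proposed signature."""
    if isinstance(message_id, int):
        return [message_id]
    if isinstance(message_id, Iterable):
        return [int(m) for m in message_id]
    raise TypeError(f"unsupported message_id: {message_id!r}")

# Both call styles normalize to a list of ids:
assert normalize_message_ids(42) == [42]
assert normalize_message_ids((1, 2, 3)) == [1, 2, 3]
```

The real method would then fetch each id and return a single `Message` or a list of `Message` accordingly.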
### Alternatives
_No response_
### Code example
_No response_
### Additional information
_No response_ | closed | 2022-11-26T20:58:24Z | 2022-11-27T06:31:54Z | https://github.com/aiogram/aiogram/issues/1073 | [
"enhancement",
"wontfix",
"3.x"
] | DustinByfuglien | 1 |
microsoft/nni | data-science | 5,623 | This is my config_list; I have determined the names of the conv modules to be pruned, but it will still prune other conv modules which are not in the op_names list. Why? | config_list = [{
'sparsity': 0.6,
'op_types':['Conv2d'],
'op_names':['conv1',
'layer1.0.conv1.0','layer1.0.conv2.pwconv','layer1.0.conv3.0','layer1.0.downsample.0',
'layer1.1.conv1.0','layer1.1.conv2.pwconv','layer1.1.conv3.0',
'layer1.2.conv1.0','layer1.2.conv2.pwconv','layer1.2.conv3.0',
'layer2.0.conv1.0','layer2.0.conv2.pwconv','layer2.0.conv3.0','layer1.0.downsample.0',
'layer2.1.conv1.0','layer2.1.conv2.pwconv','layer2.1.conv3.0',
'layer2.2.conv1.0','layer2.2.conv2.pwconv','layer2.2.conv3.0',
'layer2.3.conv1.0','layer2.3.conv2.pwconv','layer2.3.conv3.0',
'layer3.0.conv1.0','layer3.0.conv2.pwconv','layer3.0.conv3.0','layer3.0.downsample.0',
'layer3.1.conv1.0','layer3.1.conv2.pwconv','layer3.1.conv3.0',
'layer3.2.conv1.0','layer3.2.conv2.pwconv','layer3.2.conv3.0',
'layer3.3.conv1.0','layer3.3.conv2.pwconv','layer3.3.conv3.0',
'layer3.4.conv1.0','layer3.4.conv2.pwconv','layer3.4.conv3.0',
'layer3.5.conv1.0','layer3.5.conv2.pwconv','layer3.5.conv3.0',
'layer4.0.conv1.0','layer4.0.conv2.pwconv','layer4.0.conv3.0','layer4.0.downsample.0',
'layer4.1.conv1.0','layer4.1.conv2.pwconv','layer4.1.conv3.0',
'layer4.2.conv1.0','layer4.2.conv2.pwconv','layer4.2.conv3.0',]
}] | open | 2023-06-28T14:20:08Z | 2023-06-30T02:37:40Z | https://github.com/microsoft/nni/issues/5623 | [] | yang-ming-uc | 0 |
gunthercox/ChatterBot | machine-learning | 1,495 | Allows statements to be excluded if text contains any word in a provided list | * The `filter` method on each storage adapter should accept a keyword argument `exclude_text_words`.
* If `exclude_text_words` is provided (a list of words to exclude), the statements returned by the filter method should not include statements whose text contains one of the specified words. | closed | 2018-11-18T16:35:09Z | 2018-11-25T14:53:06Z | https://github.com/gunthercox/ChatterBot/issues/1495 | [
"feature"
] | gunthercox | 0 |
encode/uvicorn | asyncio | 2,008 | Improve GitHub templates (issues, PRs and discussions) | People should first create discussions, and the discussion should provide an MRE, if it's supposed to be a bug report. | closed | 2023-06-14T10:31:43Z | 2023-07-07T06:37:38Z | https://github.com/encode/uvicorn/issues/2008 | [
"good first issue"
] | Kludex | 0 |
polarsource/polar | fastapi | 5,299 | Create BillingEntry model | `BillingEntry` is an intermediate data ledger bridging the gap between `Event` and `OrderItem`.<br><br>It's filled during a billing period to keep track of the "things" we need to invoice when the next cycle starts.<br><br>More details in #5114 | open | 2025-03-18T13:33:31Z | 2025-03-18T13:33:31Z | https://github.com/polarsource/polar/issues/5299 | [
"v1.5"
] | frankie567 | 0 |
napari/napari | numpy | 7,513 | Add test coverage for test matrix job without numba | ## ๐งฐ Task
We don't have codecov set up for running napari without numba / without compiled backends. See https://github.com/napari/napari/pull/7346#discussion_r1911619401. We should set that up because a substantial fraction of our users might experience napari that way. | closed | 2025-01-11T02:48:36Z | 2025-01-15T23:32:01Z | https://github.com/napari/napari/issues/7513 | [
"task"
] | jni | 2 |
ploomber/ploomber | jupyter | 256 | Notebooks saved from NotebookRunner.develop() have verbose metadata | an empty papermill metadata entry is added:
```python
# +
x = 1
```
becomes:
```python
# + {"papermill": {}}
x = 1
```
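For illustration, a hedged sketch of the cleanup being asked for (this function is hypothetical, not part of ploomber): strip an empty `papermill` mapping from a jupytext light-format cell marker line.

```python
import json
import re

def clean_cell_marker(line: str) -> str:
    """Drop an empty ``papermill`` mapping from a ``# +`` cell marker line."""
    match = re.match(r'^(# \+)\s*(\{.*\})\s*$', line)
    if not match:
        return line  # not a marker with metadata; leave untouched
    meta = json.loads(match.group(2))
    if meta.get("papermill") == {}:
        del meta["papermill"]
    if not meta:
        return match.group(1)  # back to a bare "# +" marker
    return f"{match.group(1)} {json.dumps(meta)}"

assert clean_cell_marker('# + {"papermill": {}}') == '# +'
assert clean_cell_marker('# +') == '# +'
```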
| closed | 2020-09-18T21:09:53Z | 2020-12-30T22:43:51Z | https://github.com/ploomber/ploomber/issues/256 | [] | edublancas | 0 |
MaartenGr/BERTopic | nlp | 1,559 | auto_reduce_topic fails when all documents are outliers |
auto_reduce_topic assumes that there is at least one unique non-outlier topic and throws an error if there isn't. | open | 2023-10-04T17:08:43Z | 2023-10-05T10:53:46Z | https://github.com/MaartenGr/BERTopic/issues/1559 | [] | aw578 | 1 |
AUTOMATIC1111/stable-diffusion-webui | deep-learning | 16,490 | [Bug]: xFormers 0.0.28 does not support AMD GPUs; the GPU runs but produces nothing but errors | ### Checklist
- [ ] The issue exists after disabling all extensions
- [X] The issue exists on a clean installation of webui
- [ ] The issue is caused by an extension, but I believe it is caused by a bug in the webui
- [X] The issue exists in the current version of the webui
- [X] The issue has not been reported before recently
- [ ] The issue has been reported before but has not been fixed yet
### What happened?
xFormers 0.0.28 does not support my AMD GPU: the GPU appears to run the steps, but generation produces nothing but an error.

### Steps to reproduce the problem
1. I git cloned the A1111 SD WebUI and managed to install PyTorch 2.4.1 with ROCm 6.1, and it worked well.
2. Then I installed xformers 0.0.28.post1, and the trouble began: when I fill in the prompt and click the Generate button, the terminal and the GPU activity show the process running, but after a few seconds it ends with an error.
3. When I run a benchmark in vlad's System Info extension, it reports this error:

penny@Neko:~/stable-diffusion-webui$ '/home/penny/stable-diffusion-webui/webui.sh' --reinstall-xformers --xformers
################################################################
Install script for stable-diffusion + Web UI
Tested on Debian 11 (Bullseye), Fedora 34+ and openSUSE Leap 15.4 or newer.
################################################################
################################################################
Running on penny user
################################################################
################################################################
Repo already cloned, using it as install directory
################################################################
################################################################
Create and activate python venv
################################################################
################################################################
Launching launch.py...
################################################################
glibc version is 2.35
Cannot locate TCMalloc. Do you have tcmalloc or google-perftool installed on your system? (improves CPU memory usage)
Python 3.10.12 (main, Jul 29 2024, 16:56:48) [GCC 11.4.0]
Version: v1.10.1
Commit hash: 82a973c04367123ae98bd9abdf80d9eda9b910e2
Installing xformers
Launching Web UI with arguments: --reinstall-xformers --xformers
WARNING:xformers:WARNING[XFORMERS]: xFormers can't load C++/CUDA extensions. xFormers was built for:
PyTorch 2.4.1+cu121 with CUDA 1201 (you have 2.4.1+rocm6.1)
Python 3.10.15 (you have 3.10.12)
Please reinstall xformers (see https://github.com/facebookresearch/xformers#installing-xformers)
Memory-efficient attention, SwiGLU, sparse and more won't be available.
Set XFORMERS_MORE_DETAILS=1 for more details
*** Error running preload() for /home/penny/stable-diffusion-webui/extensions/stable-diffusion-webui-wd14-tagger/preload.py
Traceback (most recent call last):
File "/home/penny/stable-diffusion-webui/modules/script_loading.py", line 30, in preload_extensions
module = load_module(preload_script)
File "/home/penny/stable-diffusion-webui/modules/script_loading.py", line 13, in load_module
module_spec.loader.exec_module(module)
File "<frozen importlib._bootstrap_external>", line 883, in exec_module
File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
File "/home/penny/stable-diffusion-webui/extensions/stable-diffusion-webui-wd14-tagger/preload.py", line 4, in <module>
from modules.shared import models_path
ImportError: cannot import name 'models_path' from partially initialized module 'modules.shared' (most likely due to a circular import) (/home/penny/stable-diffusion-webui/modules/shared.py)
Tag Autocomplete: Could not locate model-keyword extension, Lora trigger word completion will be limited to those added through the extra networks menu.
sd-webui-prompt-all-in-one background API service started successfully.
*** Error loading script: tagger.py
Traceback (most recent call last):
File "/home/penny/stable-diffusion-webui/modules/scripts.py", line 515, in load_scripts
script_module = script_loading.load_module(scriptfile.path)
File "/home/penny/stable-diffusion-webui/modules/script_loading.py", line 13, in load_module
module_spec.loader.exec_module(module)
File "<frozen importlib._bootstrap_external>", line 883, in exec_module
File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
File "/home/penny/stable-diffusion-webui/extensions/stable-diffusion-webui-wd14-tagger/scripts/tagger.py", line 5, in <module>
from tagger.ui import on_ui_tabs
File "/home/penny/stable-diffusion-webui/extensions/stable-diffusion-webui-wd14-tagger/tagger/ui.py", line 10, in <module>
from webui import wrap_gradio_gpu_call
ImportError: cannot import name 'wrap_gradio_gpu_call' from 'webui' (/home/penny/stable-diffusion-webui/webui.py)
Loading weights [7c819b6d13] from /home/penny/stable-diffusion-webui/models/Stable-diffusion/majicmixRealistic_v7.safetensors
Running on local URL: http://127.0.0.1:7860/
Creating model from config: /home/penny/stable-diffusion-webui/configs/v1-inference.yaml
/home/penny/stable-diffusion-webui/venv/lib/python3.10/site-packages/huggingface_hub/file_download.py:1150: FutureWarning: resume_download is deprecated and will be removed in version 1.0.0. Downloads always resume when possible. If you want to force a new download, use force_download=True.
warnings.warn(
Applying attention optimization: sdp-no-mem... done.
Model loaded in 1.8s (load weights from disk: 0.3s, create model: 0.2s, apply weights to model: 0.9s, calculate empty prompt: 0.1s).
To create a public link, set share=True in launch().
Startup time: 10.4s (prepare environment: 3.1s, import torch: 1.6s, import gradio: 0.3s, setup paths: 2.5s, other imports: 0.2s, load scripts: 0.3s, create ui: 0.2s, gradio launch: 2.2s).
正在现有浏览器会话中打开。 (Opening in an existing browser session.)
WARNING:root:Sampler Scheduler autocorrection: "Euler a" -> "Euler a", "None" -> "Automatic"
/home/penny/stable-diffusion-webui/modules/safe.py:156: FutureWarning: You are using torch.load with weights_only=False (the current default value), which uses the default pickle module implicitly. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling (See https://github.com/pytorch/pytorch/blob/main/SECURITY.md#untrusted-models for more details). In a future release, the default value for weights_only will be flipped to True. This limits the functions that could be executed during unpickling. Arbitrary objects will no longer be allowed to be loaded via this mode unless they are explicitly allowlisted by the user via torch.serialization.add_safe_globals. We recommend you start setting weights_only=True for any use case where you don't have full control of the loaded file. Please open an issue on GitHub for any issues related to this experimental feature.
return unsafe_torch_load(filename, *args, **kwargs)
0%| | 0/20 [00:00<?, ?it/s]/usr/lib/python3.10/contextlib.py:103: FutureWarning: torch.backends.cuda.sdp_kernel() is deprecated. In the future, this context manager will be removed. Please see torch.nn.attention.sdpa_kernel() for the new context manager, with updated signature.
self.gen = func(*args, **kwds)
100%|███████████████████████████████████████████| 20/20 [00:01<00:00, 18.65it/s]
ERROR:sd:SD-System-Info benchmark error: 1 No operator found for memory_efficient_attention_forward with inputs:
query : shape=(1, 4096, 1, 512) (torch.float16)
key : shape=(1, 4096, 1, 512) (torch.float16)
value : shape=(1, 4096, 1, 512) (torch.float16)
attn_bias : <class 'NoneType'>
p : 0.0
ckF is not supported because:
max(query.shape[-1], value.shape[-1]) > 256
operator wasn't built - see python -m xformers.info for more info
WARNING:root:Sampler Scheduler autocorrection: "Euler a" -> "Euler a", "None" -> "Automatic"
100%|███████████████████████████████████████████| 20/20 [00:00<00:00, 20.65it/s]
ERROR:sd:SD-System-Info benchmark error: 1 No operator found for memory_efficient_attention_forward with inputs:
query : shape=(1, 4096, 1, 512) (torch.float16)
key : shape=(1, 4096, 1, 512) (torch.float16)
value : shape=(1, 4096, 1, 512) (torch.float16)
attn_bias : <class 'NoneType'>
p : 0.0
ckF is not supported because:
max(query.shape[-1], value.shape[-1]) > 256
operator wasn't built - see python -m xformers.info for more info
WARNING:root:Sampler Scheduler autocorrection: "Euler a" -> "Euler a", "None" -> "Automatic"
100%|███████████████████████████████████████████| 20/20 [00:01<00:00, 11.10it/s]
ERROR:sd:SD-System-Info benchmark error: 2 No operator found for memory_efficient_attention_forward with inputs:
query : shape=(1, 4096, 1, 512) (torch.float16)
key : shape=(1, 4096, 1, 512) (torch.float16)
value : shape=(1, 4096, 1, 512) (torch.float16)
attn_bias : <class 'NoneType'>
p : 0.0
ckF is not supported because:
max(query.shape[-1], value.shape[-1]) > 256
operator wasn't built - see python -m xformers.info for more info
WARNING:root:Sampler Scheduler autocorrection: "Euler a" -> "Euler a", "None" -> "Automatic"
100%|███████████████████████████████████████████| 20/20 [00:03<00:00, 5.75it/s]
ERROR:sd:SD-System-Info benchmark error: 4 No operator found for memory_efficient_attention_forward with inputs:
query : shape=(1, 4096, 1, 512) (torch.float16)
key : shape=(1, 4096, 1, 512) (torch.float16)
value : shape=(1, 4096, 1, 512) (torch.float16)
attn_bias : <class 'NoneType'>
p : 0.0
ckF is not supported because:
max(query.shape[-1], value.shape[-1]) > 256
operator wasn't built - see python -m xformers.info for more info
4.when i generate the picture in txt2img ,here is the code:
*** Error completing request 4.43s/it]
*** Arguments: ('task(femn6y84yofgpcx)', <gradio.routes.Request object at 0x7b50d04eb640>, '1girl, ', '', [], 1, 1, 7, 512, 512, False, 0.7, 2, 'Latent', 0, 0, 0, 'Use same checkpoint', 'Use same sampler', 'Use same scheduler', '', '', [], 0, 20, 'DPM++ 2M', 'Automatic', False, '', 0.8, -1, False, -1, 0, 0, 0, 'NONE:0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0\nALL:1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1\nINS:1,1,1,1,0,0,0,0,0,0,0,0,0,0,0,0,0\nIND:1,0,0,0,1,1,1,0,0,0,0,0,0,0,0,0,0\nINALL:1,1,1,1,1,1,1,0,0,0,0,0,0,0,0,0,0\nMIDD:1,0,0,0,1,1,1,1,1,1,1,1,0,0,0,0,0\nOUTD:1,0,0,0,0,0,0,0,1,1,1,1,0,0,0,0,0\nOUTS:1,0,0,0,0,0,0,0,0,0,0,0,1,1,1,1,1\nOUTALL:1,0,0,0,0,0,0,0,1,1,1,1,1,1,1,1,1\nALL0.5:0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5', True, 0, 'values', '0,0.25,0.5,0.75,1', 'Block ID', 'IN05-OUT05', 'none', '', '0.5,1', 'BASE,IN00,IN01,IN02,IN03,IN04,IN05,IN06,IN07,IN08,IN09,IN10,IN11,M00,OUT00,OUT01,OUT02,OUT03,OUT04,OUT05,OUT06,OUT07,OUT08,OUT09,OUT10,OUT11', 1.0, 'black', '20', False, 'ATTNDEEPON:IN05-OUT05:attn:1\n\nATTNDEEPOFF:IN05-OUT05:attn:0\n\nPROJDEEPOFF:IN05-OUT05:proj:0\n\nXYZ:::1', False, False, False, False, 'positive', 'comma', 0, False, False, 'start', '', 1, '', [], 0, '', [], 0, '', [], True, False, False, False, False, False, False, 0, False) {}
Traceback (most recent call last):
File "/home/penny/stable-diffusion-webui/modules/call_queue.py", line 74, in f
res = list(func(*args, **kwargs))
File "/home/penny/stable-diffusion-webui/modules/call_queue.py", line 53, in f
res = func(*args, **kwargs)
File "/home/penny/stable-diffusion-webui/modules/call_queue.py", line 37, in f
res = func(*args, **kwargs)
File "/home/penny/stable-diffusion-webui/modules/txt2img.py", line 109, in txt2img
processed = processing.process_images(p)
File "/home/penny/stable-diffusion-webui/modules/processing.py", line 847, in process_images
res = process_images_inner(p)
File "/home/penny/stable-diffusion-webui/modules/processing.py", line 1002, in process_images_inner
x_samples_ddim = decode_latent_batch(p.sd_model, samples_ddim, target_device=devices.cpu, check_for_nans=True)
File "/home/penny/stable-diffusion-webui/modules/processing.py", line 632, in decode_latent_batch
sample = decode_first_stage(model, batch[i:i + 1])[0]
File "/home/penny/stable-diffusion-webui/modules/sd_samplers_common.py", line 76, in decode_first_stage
return samples_to_images_tensor(x, approx_index, model)
File "/home/penny/stable-diffusion-webui/modules/sd_samplers_common.py", line 58, in samples_to_images_tensor
x_sample = model.decode_first_stage(sample.to(model.first_stage_model.dtype))
File "/home/penny/stable-diffusion-webui/modules/sd_hijack_utils.py", line 22, in <lambda>
setattr(resolved_obj, func_path[-1], lambda *args, **kwargs: self(*args, **kwargs))
File "/home/penny/stable-diffusion-webui/modules/sd_hijack_utils.py", line 36, in __call__
return self.__orig_func(*args, **kwargs)
File "/home/penny/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 116, in decorate_context
return func(*args, **kwargs)
File "/home/penny/stable-diffusion-webui/repositories/stable-diffusion-stability-ai/ldm/models/diffusion/ddpm.py", line 826, in decode_first_stage
return self.first_stage_model.decode(z)
File "/home/penny/stable-diffusion-webui/repositories/stable-diffusion-stability-ai/ldm/models/autoencoder.py", line 90, in decode
dec = self.decoder(z)
File "/home/penny/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1553, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/home/penny/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1562, in _call_impl
return forward_call(*args, **kwargs)
File "/home/penny/stable-diffusion-webui/repositories/stable-diffusion-stability-ai/ldm/modules/diffusionmodules/model.py", line 631, in forward
h = self.mid.attn_1(h)
File "/home/penny/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1553, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/home/penny/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1562, in _call_impl
return forward_call(*args, **kwargs)
File "/home/penny/stable-diffusion-webui/repositories/stable-diffusion-stability-ai/ldm/modules/diffusionmodules/model.py", line 258, in forward
out = xformers.ops.memory_efficient_attention(q, k, v, attn_bias=None, op=self.attention_op)
File "/home/penny/stable-diffusion-webui/venv/lib/python3.10/site-packages/xformers/ops/fmha/__init__.py", line 301, in memory_efficient_attention
return _memory_efficient_attention(
File "/home/penny/stable-diffusion-webui/venv/lib/python3.10/site-packages/xformers/ops/fmha/__init__.py", line 462, in _memory_efficient_attention
return _memory_efficient_attention_forward(
File "/home/penny/stable-diffusion-webui/venv/lib/python3.10/site-packages/xformers/ops/fmha/__init__.py", line 481, in _memory_efficient_attention_forward
op = _dispatch_fw(inp, False)
File "/home/penny/stable-diffusion-webui/venv/lib/python3.10/site-packages/xformers/ops/fmha/dispatch.py", line 135, in _dispatch_fw
return _run_priority_list(
File "/home/penny/stable-diffusion-webui/venv/lib/python3.10/site-packages/xformers/ops/fmha/dispatch.py", line 76, in _run_priority_list
raise NotImplementedError(msg)
NotImplementedError: No operator found for memory_efficient_attention_forward with inputs:
query : shape=(1, 4096, 1, 512) (torch.float16)
key : shape=(1, 4096, 1, 512) (torch.float16)
value : shape=(1, 4096, 1, 512) (torch.float16)
attn_bias : <class 'NoneType'>
p : 0.0
ckF is not supported because:
max(query.shape[-1], value.shape[-1]) > 256
operator wasn't built - see python -m xformers.info for more info
### What should have happened?
Since xFormers now has AMD ROCm support, I hope the dev branch can quickly add support for this feature.
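The mismatch warning in the logs above ("built for PyTorch 2.4.1+cu121 ... you have 2.4.1+rocm6.1") can be checked by comparing the local build tags of the two version strings; a small sketch (the function names are hypothetical):

```python
def build_tag(version: str) -> str:
    """Return the local build tag of a PEP 440 version, e.g. '2.4.1+rocm6.1' -> 'rocm6.1'."""
    return version.partition("+")[2]

def builds_match(torch_version: str, xformers_built_for: str) -> bool:
    """True when the xFormers wheel was compiled against the same backend as the installed torch."""
    return build_tag(torch_version) == build_tag(xformers_built_for)

# The situation in this report: a CUDA wheel running on a ROCm torch build.
assert not builds_match("2.4.1+rocm6.1", "2.4.1+cu121")
```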
### What browsers do you use to access the UI ?
Microsoft Edge
### Sysinfo
[sysinfo-2024-09-16-12-25.json](https://github.com/user-attachments/files/17012841/sysinfo-2024-09-16-12-25.json)
### Console logs
```Shell
penny@Neko:~/stable-diffusion-webui$ '/home/penny/stable-diffusion-webui/webui.sh' --xformers
################################################################
Install script for stable-diffusion + Web UI
Tested on Debian 11 (Bullseye), Fedora 34+ and openSUSE Leap 15.4 or newer.
################################################################
################################################################
Running on penny user
################################################################
################################################################
Repo already cloned, using it as install directory
################################################################
################################################################
Create and activate python venv
################################################################
################################################################
Launching launch.py...
################################################################
glibc version is 2.35
Cannot locate TCMalloc. Do you have tcmalloc or google-perftool installed on your system? (improves CPU memory usage)
Python 3.10.12 (main, Jul 29 2024, 16:56:48) [GCC 11.4.0]
Version: v1.10.1
Commit hash: 82a973c04367123ae98bd9abdf80d9eda9b910e2
Launching Web UI with arguments: --xformers
WARNING:xformers:WARNING[XFORMERS]: xFormers can't load C++/CUDA extensions. xFormers was built for:
PyTorch 2.4.1+cu121 with CUDA 1201 (you have 2.4.1+rocm6.1)
Python 3.10.15 (you have 3.10.12)
Please reinstall xformers (see https://github.com/facebookresearch/xformers#installing-xformers)
Memory-efficient attention, SwiGLU, sparse and more won't be available.
Set XFORMERS_MORE_DETAILS=1 for more details
*** Error running preload() for /home/penny/stable-diffusion-webui/extensions/stable-diffusion-webui-wd14-tagger/preload.py
Traceback (most recent call last):
File "/home/penny/stable-diffusion-webui/modules/script_loading.py", line 30, in preload_extensions
module = load_module(preload_script)
File "/home/penny/stable-diffusion-webui/modules/script_loading.py", line 13, in load_module
module_spec.loader.exec_module(module)
File "<frozen importlib._bootstrap_external>", line 883, in exec_module
File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
File "/home/penny/stable-diffusion-webui/extensions/stable-diffusion-webui-wd14-tagger/preload.py", line 4, in <module>
from modules.shared import models_path
ImportError: cannot import name 'models_path' from partially initialized module 'modules.shared' (most likely due to a circular import) (/home/penny/stable-diffusion-webui/modules/shared.py)
---
Tag Autocomplete: Could not locate model-keyword extension, Lora trigger word completion will be limited to those added through the extra networks menu.
sd-webui-prompt-all-in-one background API service started successfully.
*** Error loading script: tagger.py
Traceback (most recent call last):
File "/home/penny/stable-diffusion-webui/modules/scripts.py", line 515, in load_scripts
script_module = script_loading.load_module(scriptfile.path)
File "/home/penny/stable-diffusion-webui/modules/script_loading.py", line 13, in load_module
module_spec.loader.exec_module(module)
File "<frozen importlib._bootstrap_external>", line 883, in exec_module
File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
File "/home/penny/stable-diffusion-webui/extensions/stable-diffusion-webui-wd14-tagger/scripts/tagger.py", line 5, in <module>
from tagger.ui import on_ui_tabs
File "/home/penny/stable-diffusion-webui/extensions/stable-diffusion-webui-wd14-tagger/tagger/ui.py", line 10, in <module>
from webui import wrap_gradio_gpu_call
ImportError: cannot import name 'wrap_gradio_gpu_call' from 'webui' (/home/penny/stable-diffusion-webui/webui.py)
---
Loading weights [7c819b6d13] from /home/penny/stable-diffusion-webui/models/Stable-diffusion/majicmixRealistic_v7.safetensors
Running on local URL: http://127.0.0.1:7860
Creating model from config: /home/penny/stable-diffusion-webui/configs/v1-inference.yaml
/home/penny/stable-diffusion-webui/venv/lib/python3.10/site-packages/huggingface_hub/file_download.py:1150: FutureWarning: `resume_download` is deprecated and will be removed in version 1.0.0. Downloads always resume when possible. If you want to force a new download, use `force_download=True`.
warnings.warn(
Applying attention optimization: Doggettx... done.
Model loaded in 1.8s (load weights from disk: 0.3s, create model: 0.2s, apply weights to model: 0.9s, calculate empty prompt: 0.2s).
To create a public link, set `share=True` in `launch()`.
Startup time: 8.7s (prepare environment: 1.4s, import torch: 1.6s, import gradio: 0.3s, setup paths: 2.5s, other imports: 0.2s, load scripts: 0.3s, create ui: 0.2s, gradio launch: 2.2s).
正在现有浏览器会话中打开。 (Opening in an existing browser session.)
/home/penny/stable-diffusion-webui/modules/safe.py:156: FutureWarning: You are using `torch.load` with `weights_only=False` (the current default value), which uses the default pickle module implicitly. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling (See https://github.com/pytorch/pytorch/blob/main/SECURITY.md#untrusted-models for more details). In a future release, the default value for `weights_only` will be flipped to `True`. This limits the functions that could be executed during unpickling. Arbitrary objects will no longer be allowed to be loaded via this mode unless they are explicitly allowlisted by the user via `torch.serialization.add_safe_globals`. We recommend you start setting `weights_only=True` for any use case where you don't have full control of the loaded file. Please open an issue on GitHub for any issues related to this experimental feature.
return unsafe_torch_load(filename, *args, **kwargs)
100%|███████████████████████████████████████████| 20/20 [00:01<00:00, 18.34it/s]
*** Error completing request████████████████████| 20/20 [00:00<00:00, 20.39it/s]
*** Arguments: ('task(5mb4apnjy4i0lh3)', <gradio.routes.Request object at 0x7796f0f434c0>, '1girl, ', '', [], 1, 1, 7, 512, 512, False, 0.7, 2, 'Latent', 0, 0, 0, 'Use same checkpoint', 'Use same sampler', 'Use same scheduler', '', '', [], 0, 20, 'DPM++ 2M', 'Automatic', False, '', 0.8, -1, False, -1, 0, 0, 0, 'NONE:0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0\nALL:1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1\nINS:1,1,1,1,0,0,0,0,0,0,0,0,0,0,0,0,0\nIND:1,0,0,0,1,1,1,0,0,0,0,0,0,0,0,0,0\nINALL:1,1,1,1,1,1,1,0,0,0,0,0,0,0,0,0,0\nMIDD:1,0,0,0,1,1,1,1,1,1,1,1,0,0,0,0,0\nOUTD:1,0,0,0,0,0,0,0,1,1,1,1,0,0,0,0,0\nOUTS:1,0,0,0,0,0,0,0,0,0,0,0,1,1,1,1,1\nOUTALL:1,0,0,0,0,0,0,0,1,1,1,1,1,1,1,1,1\nALL0.5:0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5', True, 0, 'values', '0,0.25,0.5,0.75,1', 'Block ID', 'IN05-OUT05', 'none', '', '0.5,1', 'BASE,IN00,IN01,IN02,IN03,IN04,IN05,IN06,IN07,IN08,IN09,IN10,IN11,M00,OUT00,OUT01,OUT02,OUT03,OUT04,OUT05,OUT06,OUT07,OUT08,OUT09,OUT10,OUT11', 1.0, 'black', '20', False, 'ATTNDEEPON:IN05-OUT05:attn:1\n\nATTNDEEPOFF:IN05-OUT05:attn:0\n\nPROJDEEPOFF:IN05-OUT05:proj:0\n\nXYZ:::1', False, False, False, False, 'positive', 'comma', 0, False, False, 'start', '', 1, '', [], 0, '', [], 0, '', [], True, False, False, False, False, False, False, 0, False) {}
Traceback (most recent call last):
File "/home/penny/stable-diffusion-webui/modules/call_queue.py", line 74, in f
res = list(func(*args, **kwargs))
File "/home/penny/stable-diffusion-webui/modules/call_queue.py", line 53, in f
res = func(*args, **kwargs)
File "/home/penny/stable-diffusion-webui/modules/call_queue.py", line 37, in f
res = func(*args, **kwargs)
File "/home/penny/stable-diffusion-webui/modules/txt2img.py", line 109, in txt2img
processed = processing.process_images(p)
File "/home/penny/stable-diffusion-webui/modules/processing.py", line 847, in process_images
res = process_images_inner(p)
File "/home/penny/stable-diffusion-webui/modules/processing.py", line 1002, in process_images_inner
x_samples_ddim = decode_latent_batch(p.sd_model, samples_ddim, target_device=devices.cpu, check_for_nans=True)
File "/home/penny/stable-diffusion-webui/modules/processing.py", line 632, in decode_latent_batch
sample = decode_first_stage(model, batch[i:i + 1])[0]
File "/home/penny/stable-diffusion-webui/modules/sd_samplers_common.py", line 76, in decode_first_stage
return samples_to_images_tensor(x, approx_index, model)
File "/home/penny/stable-diffusion-webui/modules/sd_samplers_common.py", line 58, in samples_to_images_tensor
x_sample = model.decode_first_stage(sample.to(model.first_stage_model.dtype))
File "/home/penny/stable-diffusion-webui/modules/sd_hijack_utils.py", line 22, in <lambda>
setattr(resolved_obj, func_path[-1], lambda *args, **kwargs: self(*args, **kwargs))
File "/home/penny/stable-diffusion-webui/modules/sd_hijack_utils.py", line 36, in __call__
return self.__orig_func(*args, **kwargs)
File "/home/penny/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 116, in decorate_context
return func(*args, **kwargs)
File "/home/penny/stable-diffusion-webui/repositories/stable-diffusion-stability-ai/ldm/models/diffusion/ddpm.py", line 826, in decode_first_stage
return self.first_stage_model.decode(z)
File "/home/penny/stable-diffusion-webui/repositories/stable-diffusion-stability-ai/ldm/models/autoencoder.py", line 90, in decode
dec = self.decoder(z)
File "/home/penny/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1553, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/home/penny/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1562, in _call_impl
return forward_call(*args, **kwargs)
File "/home/penny/stable-diffusion-webui/repositories/stable-diffusion-stability-ai/ldm/modules/diffusionmodules/model.py", line 631, in forward
h = self.mid.attn_1(h)
File "/home/penny/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1553, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/home/penny/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1562, in _call_impl
return forward_call(*args, **kwargs)
File "/home/penny/stable-diffusion-webui/repositories/stable-diffusion-stability-ai/ldm/modules/diffusionmodules/model.py", line 258, in forward
out = xformers.ops.memory_efficient_attention(q, k, v, attn_bias=None, op=self.attention_op)
File "/home/penny/stable-diffusion-webui/venv/lib/python3.10/site-packages/xformers/ops/fmha/__init__.py", line 301, in memory_efficient_attention
return _memory_efficient_attention(
File "/home/penny/stable-diffusion-webui/venv/lib/python3.10/site-packages/xformers/ops/fmha/__init__.py", line 462, in _memory_efficient_attention
return _memory_efficient_attention_forward(
File "/home/penny/stable-diffusion-webui/venv/lib/python3.10/site-packages/xformers/ops/fmha/__init__.py", line 481, in _memory_efficient_attention_forward
op = _dispatch_fw(inp, False)
File "/home/penny/stable-diffusion-webui/venv/lib/python3.10/site-packages/xformers/ops/fmha/dispatch.py", line 135, in _dispatch_fw
return _run_priority_list(
File "/home/penny/stable-diffusion-webui/venv/lib/python3.10/site-packages/xformers/ops/fmha/dispatch.py", line 76, in _run_priority_list
raise NotImplementedError(msg)
NotImplementedError: No operator found for `memory_efficient_attention_forward` with inputs:
query : shape=(1, 4096, 1, 512) (torch.float16)
key : shape=(1, 4096, 1, 512) (torch.float16)
value : shape=(1, 4096, 1, 512) (torch.float16)
attn_bias : <class 'NoneType'>
p : 0.0
`ckF` is not supported because:
max(query.shape[-1], value.shape[-1]) > 256
operator wasn't built - see `python -m xformers.info` for more info
---
WARNING:root:Sampler Scheduler autocorrection: "Euler a" -> "Euler a", "None" -> "Automatic"
100%|███████████████████████████████████████████| 20/20 [00:00<00:00, 20.09it/s]
ERROR:sd:SD-System-Info benchmark error: 1 No operator found for `memory_efficient_attention_forward` with inputs:
query : shape=(1, 4096, 1, 512) (torch.float16)
key : shape=(1, 4096, 1, 512) (torch.float16)
value : shape=(1, 4096, 1, 512) (torch.float16)
attn_bias : <class 'NoneType'>
p : 0.0
`ckF` is not supported because:
max(query.shape[-1], value.shape[-1]) > 256
operator wasn't built - see `python -m xformers.info` for more info
WARNING:root:Sampler Scheduler autocorrection: "Euler a" -> "Euler a", "None" -> "Automatic"
100%|███████████████████████████████████████████| 20/20 [00:00<00:00, 20.30it/s]
ERROR:sd:SD-System-Info benchmark error: 1 No operator found for `memory_efficient_attention_forward` with inputs:
query : shape=(1, 4096, 1, 512) (torch.float16)
key : shape=(1, 4096, 1, 512) (torch.float16)
value : shape=(1, 4096, 1, 512) (torch.float16)
attn_bias : <class 'NoneType'>
p : 0.0
`ckF` is not supported because:
max(query.shape[-1], value.shape[-1]) > 256
operator wasn't built - see `python -m xformers.info` for more info
WARNING:root:Sampler Scheduler autocorrection: "Euler a" -> "Euler a", "None" -> "Automatic"
100%|███████████████████████████████████████████| 20/20 [00:01<00:00, 11.12it/s]
ERROR:sd:SD-System-Info benchmark error: 2 No operator found for `memory_efficient_attention_forward` with inputs:
query : shape=(1, 4096, 1, 512) (torch.float16)
key : shape=(1, 4096, 1, 512) (torch.float16)
value : shape=(1, 4096, 1, 512) (torch.float16)
attn_bias : <class 'NoneType'>
p : 0.0
`ckF` is not supported because:
max(query.shape[-1], value.shape[-1]) > 256
operator wasn't built - see `python -m xformers.info` for more info
WARNING:root:Sampler Scheduler autocorrection: "Euler a" -> "Euler a", "None" -> "Automatic"
100%|███████████████████████████████████████████| 20/20 [00:03<00:00,  5.71it/s]
ERROR:sd:SD-System-Info benchmark error: 4 No operator found for `memory_efficient_attention_forward` with inputs:
query : shape=(1, 4096, 1, 512) (torch.float16)
key : shape=(1, 4096, 1, 512) (torch.float16)
value : shape=(1, 4096, 1, 512) (torch.float16)
attn_bias : <class 'NoneType'>
p : 0.0
`ckF` is not supported because:
max(query.shape[-1], value.shape[-1]) > 256
operator wasn't built - see `python -m xformers.info` for more info
```
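Every failure in the log above is the same dispatch check: the fp16 `ckF` kernel rejects any attention input whose head dimension exceeds 256, and the VAE decoder's mid-block attention uses head dim 512. A minimal pure-Python sketch of that constraint — the helper name is made up, and only the rule itself (`max(query.shape[-1], value.shape[-1]) <= 256`) comes from the error message, not from xformers internals:

```python
# Sketch of the shape constraint reported in the traceback. The function name
# is hypothetical; the 256 limit is quoted from the xformers error text.

def ck_forward_supported(q_shape, v_shape, max_head_dim=256):
    """Mimic the `ckF` dispatch check for (B, M, H, K)-shaped fp16 inputs."""
    head_dim = max(q_shape[-1], v_shape[-1])
    if head_dim > max_head_dim:
        return False, f"max(query.shape[-1], value.shape[-1]) = {head_dim} > {max_head_dim}"
    return True, "ok"

# The VAE mid-block attention from the log, (1, 4096, 1, 512): rejected.
print(ck_forward_supported((1, 4096, 1, 512), (1, 4096, 1, 512)))
# A typical UNet attention head dim (64): accepted.
print(ck_forward_supported((2, 4096, 8, 64), (2, 4096, 8, 64)))
```

Running `python -m xformers.info`, as the message suggests, shows which kernels were actually built into this ROCm wheel; if the 512-dim operator simply wasn't built, decoding the VAE without xformers (e.g. webui's SDP attention fallback) is a plausible workaround — treat that as a suggestion, not a confirmed fix.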
### Additional information
Radeon RX 7900XTX ROCm6.1 | open | 2024-09-16T12:28:22Z | 2024-12-17T04:38:16Z | https://github.com/AUTOMATIC1111/stable-diffusion-webui/issues/16490 | ["bug-report"] | PennyFranklin | 6 |
deezer/spleeter | tensorflow | 72 | About pretrained models | <!-- Please respect the title [Discussion] tag. -->
How many steps did you train the 2stems/4stems models for? Did you train them using the config file you provided? I trained a 2stems model myself using the default config file and the musdb18 dataset, but can't get clean vocal output. | closed | 2019-11-10T04:41:46Z | 2019-11-14T22:50:13Z | https://github.com/deezer/spleeter/issues/72 | ["question", "model", "training"] | DickyQi | 1 |