| repo_name | topic | issue_number | title | body | state | created_at | updated_at | url | labels | user_login | comments_count |
|---|---|---|---|---|---|---|---|---|---|---|---|
unionai-oss/pandera | pandas | 1,164 | When throwing validation errors, print all errors, not just the first one | **Is your feature request related to a problem? Please describe.**
I am writing data schemas and validating that they actually fit our source data, then tweaking them (e.g. "oh, this column is actually nullable"). However, with a table with many rows I have to repeatedly run validation, change the code, and run again, because I only get one error at a time.
**Describe the solution you'd like**
That the failure includes all errors, not just the first Pandera finds.
**Describe alternatives you've considered**
N/A
**Additional context**
N/A
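For context, here is a plain-Python sketch (not pandera's implementation) of the collect-all-errors behavior being requested. pandera exposes this via `schema.validate(df, lazy=True)`, which raises a `SchemaErrors` exception aggregating every failure case instead of stopping at the first one:

```python
# Sketch: run every check against every row and collect all failures,
# rather than raising on the first failing check.
def validate_all(rows, checks):
    errors = []
    for i, row in enumerate(rows):
        for name, check in checks.items():
            if not check(row):
                errors.append(f"row {i}: failed {name}")
    return errors

rows = [{"age": 30}, {"age": -1}, {"age": None}]
checks = {
    "age_present": lambda r: r["age"] is not None,
    "age_positive": lambda r: r["age"] is None or r["age"] > 0,
}
print(validate_all(rows, checks))
# ['row 1: failed age_positive', 'row 2: failed age_present']
```

With lazy validation, one run surfaces every schema problem at once, so the schema can be fixed in a single pass.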
| open | 2023-04-23T07:16:39Z | 2023-04-24T19:49:41Z | https://github.com/unionai-oss/pandera/issues/1164 | [
"enhancement"
] | C0DK | 3 |
TencentARC/GFPGAN | deep-learning | 72 | How to train a model that does not colorize | I am retraining with my own image data (FFHQ without beautification), but the colors of the output images change: black-and-white photos also get colorized. If I want to get rid of this behavior, what should I modify? Is turning off color jitter enough? | closed | 2021-09-24T07:29:25Z | 2021-09-29T18:29:09Z | https://github.com/TencentARC/GFPGAN/issues/72 | [] | jorjiang | 0 |
plotly/dash-html-components | dash | 120 | Backwards incompatibility with 1.0.0 | It seems the public module `dash_html_components.version` was removed with no deprecation warning. | closed | 2019-06-20T20:28:06Z | 2019-06-21T17:29:23Z | https://github.com/plotly/dash-html-components/issues/120 | [] | moorepants | 1 |
tatsu-lab/stanford_alpaca | deep-learning | 235 | How to modify llama-Xb-hf/tokenizer_config.json from HuggingFace? | As the title says, I found that the content of llama-Xb-hf/tokenizer_config.json looks like the following,
```
{"bos_token": "", "eos_token": "",
"model_max_length": 1000000000000000019884624838656,
"tokenizer_class": "LLaMATokenizer", "unk_token": ""}
```
How did your team modify this file so that the experiment can be run successfully?
Here is my modification. Is this correct?
```
{"bos_token": "<s>", "eos_token": "</s>",
"model_max_length": 1000000000000000019884624838656,
"tokenizer_class": "LlamaTokenizer", "unk_token": "<unk>"}
``` | open | 2023-04-21T00:46:47Z | 2023-04-23T08:32:00Z | https://github.com/tatsu-lab/stanford_alpaca/issues/235 | [] | foreveronehundred | 1 |
dpgaspar/Flask-AppBuilder | rest-api | 1,437 | Trouble alpha sorting Multi Select field data and options in Edit View | ### Problem
I have a Product object with a view that shows a many-to-many multi select field.
The field holds a list of countries where there are often 20-30 countries selected. When they are rendered in the select field, they are ordered by db ID. **I want them ordered by their name field.**
My attempted solutions failed and I am looking for help. If someone could review this issue and get back to me that would be extremely helpful. My company uses Flask-AppBuilder on many projects and I am always looking for ways to improve the code base. Thanks,
### Attempted Solutions
**Creating a custom widget**
This solution effectively ordered the options, but not the field values; it fails to even display the selected data for the record.
_Before the change_

_After the change_

_Code for this change_
```
def country_query():
    return db.session.query(ECatalogCountry)

# ---------------------------

class ECatalogProductView(ModelView):
    datamodel = SQLAInterface(ECatalogProduct)
    # collapsed view config. All can be found below in the code section
    edit_form_extra_fields = {
        "commercially_available_countries": QuerySelectField(
            "Commercially Available Countries",
            query_factory=country_query,
            widget=SelectMany2ManyAlphaWidget(),
        )
    }

# ---------------------------

class SelectMany2ManyAlphaWidget(widgets.Select):
    extra_classes = None

    def __init__(self, extra_classes=None, style=None):
        self.extra_classes = extra_classes
        self.style = style or u"width:250px"
        return super(SelectMany2ManyAlphaWidget, self).__init__()

    def __call__(self, field, **kwargs):
        kwargs["class"] = u"my_select2 form-control"
        if self.extra_classes:
            kwargs["class"] = kwargs["class"] + " " + self.extra_classes
        kwargs["style"] = self.style
        kwargs["data-placeholder"] = _("Select Value")
        kwargs["multiple"] = u"true"
        if "name_" in kwargs:
            field.name = kwargs["name_"]
        kwargs.setdefault('id', field.id)
        if self.multiple:
            kwargs['multiple'] = True
        if 'required' not in kwargs and 'required' in getattr(field, 'flags', []):
            kwargs['required'] = True
        html = ['<select %s>' % html_params(name=field.name, **kwargs)]
        iter_choices = list(field.iter_choices())
        iter_choices.sort(key=lambda x: x[1].name)
        for val, label, selected in iter_choices:
            html.append(self.render_option(val, label, selected))
        html.append('</select>')
        return HTMLString(''.join(html))
```
### Code
**Models**
Summary: Products can be related to Countries in 2 many-many relationships.
```
class ECatalogProduct(Model):
    id = Column(Integer, primary_key=True)
    display_name = Column(String(255))
    commercially_available_countries = relationship("ECatalogCountry", secondary="e_catalog_regulatory_availability")
    regulatory_available_countries = relationship("ECatalogCountry", secondary="e_catalog_commercial_availability")

    def __repr__(self):
        return self.display_name


class ECatalogCountry(Model):
    __tablename__ = 'e_catalog_country'
    id = Column(Integer, primary_key=True)
    name = Column(String(255))
    iso_code = Column(String(5))

    def __repr__(self):
        return self.name


class ECatalogCommercialAvailability(Model):
    __tablename__ = 'e_catalog_commercial_availability'
    id = Column(Integer, primary_key=True)
    product_id = Column(ForeignKey(u'e_catalog_product.id', ondelete=u'CASCADE'), index=True)
    country_id = Column(ForeignKey(u'e_catalog_country.id', ondelete=u'CASCADE'), index=True)
    product = relationship("ECatalogProduct")
    country = relationship("ECatalogCountry")
```
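A likely fix for the field-value ordering (a hedged sketch — the model and table names below are illustrative, not the poster's actual schema): SQLAlchemy's `relationship()` accepts an `order_by` argument, so the loaded collection comes back sorted by the related column rather than by database id, and no custom widget is needed. A self-contained demo against in-memory SQLite:

```python
# Sketch: ordering a many-to-many collection by the related table's
# name column via relationship(order_by=...).
from sqlalchemy import Column, ForeignKey, Integer, String, Table, create_engine
from sqlalchemy.orm import declarative_base, relationship, sessionmaker

Base = declarative_base()

assoc = Table(
    "product_country", Base.metadata,
    Column("product_id", ForeignKey("product.id"), primary_key=True),
    Column("country_id", ForeignKey("country.id"), primary_key=True),
)

class Country(Base):
    __tablename__ = "country"
    id = Column(Integer, primary_key=True)
    name = Column(String(255))

class Product(Base):
    __tablename__ = "product"
    id = Column(Integer, primary_key=True)
    # order_by makes the loaded collection come back sorted by name,
    # regardless of insertion order or database id.
    countries = relationship("Country", secondary=assoc, order_by="Country.name")

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)
session = sessionmaker(bind=engine)()
session.add(Product(countries=[Country(name="Sweden"), Country(name="Austria"), Country(name="Mexico")]))
session.commit()

product = session.query(Product).first()
print([c.name for c in product.countries])  # ['Austria', 'Mexico', 'Sweden']
```

Applied to the models above, this would mean passing `order_by="ECatalogCountry.name"` to the two country relationships.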
**Views (This is the full view and most of this is not necessary)**
```
class ECatalogProductPLMView(ModelView):
    datamodel = SQLAInterface(ECatalogProduct)
    label_columns = e_catalog_product_label_columns
    list_columns = ["display_name", "article_number", "solution", "created_on"]
    edit_widget = FormWithSectionDescriptions
    search_columns = [
        "article_number",
        "code",
        "commercially_available_countries",
        "created_on",
        "deleted",
        "disabled_in_ecatalog",
        "disease_states",
        "display_name",
        "product_type_segment",
        "solution",
        "test_category"
    ]

    # ---- SECTIONS ----
    general_product_data_fields = [
        "display_name",
        "short_copy",
        "package_size",
        "loinc",
        "disabled_in_ecatalog"
    ]
    sap_synced_data_fields = [
        "article_number",
        "bar_code",
        "code",
        "solution",
        "test_type",
        "test_category",
        "deleted"
    ]
    country_availability_fields_edit = [
        "commercially_available_countries",
        "default"
    ]
    country_availability_fields_show = [
        "commercially_available_countries",
        "regulatory_available_countries",
        "default"
    ]
    related_product_data_fields = [
        "product_type_segment",
        "methods",
        "phadia_systems",
        "assay_specific_reagent"
    ]
    immunocap_product_data_fields = [
        "results_reported",
        "immunocap_size"
    ]
    elia_product_data_fields = [
        "antigen",
        "cutoff_negative",
        "cutoff_equivocal",
        "cutoff_positive",
        "disease_states",
        "dilution",
        "elia_size",
        "reference_material",
        "short_name"
    ]
    phadia_product_data_fields = [
        "calibration_curve",
        "connection",
        "dimensions",
        "onboard_carrier_storage",
        "peak_capacity",
        "remote_support",
        "runs",
        "system_description"
    ]
    resource_section_links_fields = [
        "clinical_disease_page",
        "guidelines_link",
        "product_brochure_dam_image_relative_path",
        "testing_algorithm_modal_content_relative_path",
        "show_dfu_link",
        "show_las_link",
        "show_coa_link",
        "show_sds_link",
        "show_prime_link",
        "show_lab_community_link",
        "show_elia_community_link",
        "show_immunocap_community_link",
        "show_quality_club_link"
    ]
    seo_fields = [
        "meta_title",
        "meta_description",
    ]
    allergen_encyclopedia_fields = [
        "allergen_encyclopedia_whole_allergen",
        "allergen_encyclopedia_allergen_component",
    ]

    # ---- FIELD SETS ----
    default_fieldsets = [
        ('General Product Data', {
            "fields": general_product_data_fields,
            "expanded": True
        }),
        ('SAP Synced Data', {
            "fields": sap_synced_data_fields,
            "expanded": True,
            "description": "SAP controls and overwrites this data. Do not change."
        }),
        ('Country Availability', {
            "fields": country_availability_fields_edit,
            "expanded": True,
            "description": "Enables the product by country in E-Catalog. Default makes the product visible in the Other Country Option. "
        }),
        ('Related Product Data', {
            "fields": related_product_data_fields,
            "expanded": True,
            "description": "Used to determine which products display on the Product Detail page in E Catalog. "
        }),
        ('ImmunoCAP Product Data', {
            "fields": immunocap_product_data_fields,
            "expanded": True
        }),
        ('EliA Product Data', {
            "fields": elia_product_data_fields,
            "expanded": True
        }),
        ('Phadia Product Data', {
            "fields": phadia_product_data_fields,
            "expanded": True
        }),
        ('Resource Section Links', {
            "fields": resource_section_links_fields,
            "expanded": True
        }),
        ('Product Detail Page SEO Override Data', {
            "fields": seo_fields,
            "expanded": True,
            "description": "Meta title and description are set automatically for Product Detail pages. These field override the SEO fields on the Product Detail Page."
        })
    ]
    show_fieldsets = [
        ('General Product Data', {
            "fields": general_product_data_fields,
            "expanded": True
        }),
        ('SAP Synced Data', {
            "fields": sap_synced_data_fields,
            "expanded": True
        }),
        ('Country Availability', {
            "fields": country_availability_fields_show,
            "expanded": True
        }),
        ('Related Product Data', {
            "fields": related_product_data_fields,
            "expanded": True
        }),
        ('ImmunoCAP Product Data', {
            "fields": immunocap_product_data_fields,
            "expanded": True
        }),
        ('EliA Product Data', {
            "fields": elia_product_data_fields,
            "expanded": True
        }),
        ('Phadia Product Data', {
            "fields": phadia_product_data_fields,
            "expanded": True
        }),
        ('Resource Section Links', {
            "fields": resource_section_links_fields,
            "expanded": True
        }),
        ('Product Detail Page SEO Override Data', {
            "fields": seo_fields,
            "expanded": True
        })
    ]

    # ---- OTHER FIELDSETS ----
    edit_fieldsets = default_fieldsets
    add_fieldsets = default_fieldsets

    # ---- FIELD DESCRIPTIONS ----
    description_columns = {
        "dilution": "MUST MATCH DFU",
        "product_type_segment": "MANDATORY",
        "default": "Shows this product when the country filter is set to other. "
    }
``` | closed | 2020-07-14T17:56:19Z | 2020-10-23T14:29:13Z | https://github.com/dpgaspar/Flask-AppBuilder/issues/1437 | [
"stale"
] | jnorton2 | 2 |
babysor/MockingBird | pytorch | 390 | Error during dataset preprocessing | 1. If I create a new folder under the train directory and put the audio files in it, pre.py just hangs forever:
Using data from:
d:\asoul\aidatatang_200zh\corpus\train
aidatatang_200zh: 0%| | 0/1 [00:00<?, ?speakers/s]
2. If I put the audio files directly under the train directory, I get the following error:
Using data from:
d:\asoul\aidatatang_200zh\corpus\train
aidatatang_200zh: 100%|█████████████████████████████████████████████████████| 1226/1226 [00:05<00:00, 241.65speakers/s]
The dataset consists of 0 utterances, 0 mel frames, 0 audio timesteps (0.00 hours).
Traceback (most recent call last):
File "D:\MockingBird\pre.py", line 74, in <module>
preprocess_dataset(**vars(args))
File "D:\MockingBird\synthesizer\preprocess.py", line 88, in preprocess_dataset
print("Max input length (text chars): %d" % max(len(m[5]) for m in metadata))
ValueError: max() arg is an empty sequence | closed | 2022-02-15T09:35:40Z | 2023-07-01T08:26:25Z | https://github.com/babysor/MockingBird/issues/390 | [] | yrsn509 | 3 |
junyanz/pytorch-CycleGAN-and-pix2pix | pytorch | 878 | Can a self-supervised technique be used for generating paired images? | Hi,
I have used your pix2pix GAN model with my custom dataset, but the process of generating image pairs takes a lot of time. I have very little data, around 100 image pairs. Even when I tried CycleGAN, the results were not great. I came across the term self-supervised learning. My question is whether I can apply self-supervision to image translation tasks, since I have so little data.
Avaiga/taipy | data-visualization | 1,862 | CouchBase Datanode | ### Description
Couchbase is a widely used NoSQL database. The purpose of this issue is to implement a Couchbase Datanode, similar to the MongoDatanode.
More information in the comments: https://github.com/Avaiga/taipy/issues/1862#issuecomment-2404783206
| closed | 2024-09-30T16:01:11Z | 2024-11-25T16:21:05Z | https://github.com/Avaiga/taipy/issues/1862 | [
"Core",
"🟨 Priority: Medium",
"✨New feature",
"🔒 Staff only",
"Core: ⚙️ Configuration",
"Core: 📁 Data node"
] | jrobinAV | 10 |
serengil/deepface | deep-learning | 552 | Error: Tensorflow | I was trying to use the terminal tool and got this error:
`ModuleNotFoundError: No module named 'tensorflow.python.trackable`
Python version: 2.7.18
OS: Ubuntu 20.04
```
deepface analyze -img_path ~/SMILE_5307421.jpg
Traceback (most recent call last):
File "/home/hannibal/.local/bin/deepface", line 5, in <module>
from deepface.DeepFace import cli
File "/home/hannibal/.local/lib/python3.8/site-packages/deepface/DeepFace.py", line 16, in <module>
from deepface.basemodels import VGGFace, OpenFace, Facenet, Facenet512, FbDeepFace, DeepID, DlibWrapper, ArcFace, Boosting, SFaceWrapper
File "/home/hannibal/.local/lib/python3.8/site-packages/deepface/basemodels/VGGFace.py", line 5, in <module>
from deepface.commons import functions
File "/home/hannibal/.local/lib/python3.8/site-packages/deepface/commons/functions.py", line 24, in <module>
from tensorflow.keras.preprocessing.image import load_img, save_img, img_to_array
File "/home/hannibal/.local/lib/python3.8/site-packages/keras/api/_v2/keras/__init__.py", line 12, in <module>
from keras import __version__
File "/home/hannibal/.local/lib/python3.8/site-packages/keras/__init__.py", line 21, in <module>
from keras import models
File "/home/hannibal/.local/lib/python3.8/site-packages/keras/models/__init__.py", line 18, in <module>
from keras.engine.functional import Functional
File "/home/hannibal/.local/lib/python3.8/site-packages/keras/engine/functional.py", line 27, in <module>
from keras.dtensor import layout_map as layout_map_lib
File "/home/hannibal/.local/lib/python3.8/site-packages/keras/dtensor/layout_map.py", line 25, in <module>
from keras.dtensor import lazy_variable
File "/home/hannibal/.local/lib/python3.8/site-packages/keras/dtensor/lazy_variable.py", line 26, in <module>
from tensorflow.python.trackable import base as trackable
ModuleNotFoundError: No module named 'tensorflow.python.trackable'
``` | closed | 2022-09-03T00:52:00Z | 2022-09-03T15:47:01Z | https://github.com/serengil/deepface/issues/552 | [
"dependencies"
] | goldentechie | 1 |
babysor/MockingBird | pytorch | 583 | vocoder pt | I'm training a HiFi-GAN vocoder at the moment, and I was wondering what the difference is between **do_hifigan.pt** and **g_hifigan.pt**.
Also, I noticed that the ground_truth argument in the vocoder_train.py script is not actually used in the training script. Am I wrong?
flasgger/flasgger | rest-api | 252 | @swag_from("relative_path.yml") throws ImportError: No module named 'home' when using venv environments | EDIT: SCRATCH THAT. This was an unfortunate side effect of mixing up `-` and `_` in file names. Flasgger couldn't find the file, and then tried to import it as a module, which worked with importlib due to _ and - import rules. There's still a problem in using imp, but I just hit an unfortunate corner case, so I'm closing this :)
Visiting http://127.0.0.1:5050/apidocs/ throws the following exception when using `@swag_from("test-relative_path.yml", validation=True)`
`imp.find_module` has been deprecated since Python 3.4 and can be replaced by `importlib.util.find_spec`.
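As a stdlib-only illustration (not flasgger's actual code), the supported replacement locates modules without raising on a missing name:

```python
# imp.find_module raises ImportError for a missing module (as in the
# traceback below); importlib.util.find_spec simply returns None instead.
import importlib.util

spec = importlib.util.find_spec("json")
print(spec.origin)  # filesystem path of the stdlib json package

missing = importlib.util.find_spec("no_such_module_xyz")
print(missing)  # None
```

Returning `None` for an unknown name makes the "is this a module path or a file path?" check explicit instead of exception-driven.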
```
Traceback (most recent call last):
File "./lib/python3.6/site-packages/flask/app.py", line 2309, in __call__
return self.wsgi_app(environ, start_response)
File "./lib/python3.6/site-packages/flask/app.py", line 2295, in wsgi_app
response = self.handle_exception(e)
File "./lib/python3.6/site-packages/flask/app.py", line 1741, in handle_exception
reraise(exc_type, exc_value, tb)
File "./lib/python3.6/site-packages/flask/_compat.py", line 35, in reraise
raise value
File "./lib/python3.6/site-packages/flask/app.py", line 2292, in wsgi_app
response = self.full_dispatch_request()
File "./lib/python3.6/site-packages/flask/app.py", line 1815, in full_dispatch_request
rv = self.handle_user_exception(e)
File "./lib/python3.6/site-packages/flask/app.py", line 1718, in handle_user_exception
reraise(exc_type, exc_value, tb)
File "./lib/python3.6/site-packages/flask/_compat.py", line 35, in reraise
raise value
File "./lib/python3.6/site-packages/flask/app.py", line 1813, in full_dispatch_request
rv = self.dispatch_request()
File "./lib/python3.6/site-packages/flask/app.py", line 1799, in dispatch_request
return self.view_functions[rule.endpoint](**req.view_args)
File "./lib/python3.6/site-packages/flask/views.py", line 88, in view
return self.dispatch_request(*args, **kwargs)
File "./lib/python3.6/site-packages/flask/views.py", line 158, in dispatch_request
return meth(*args, **kwargs)
File "./lib/python3.6/site-packages/flasgger/base.py", line 108, in get
return jsonify(self.loader())
File "./lib/python3.6/site-packages/flasgger/base.py", line 331, in get_apispecs
doc_dir=self.config.get('doc_dir'))
File "./lib/python3.6/site-packages/flasgger/utils.py", line 146, in get_specs
method, sanitizer, endpoint=rule.endpoint, verb=verb)
File "./lib/python3.6/site-packages/flasgger/utils.py", line 522, in parse_docstring
full_doc = load_from_file(swag_path, swag_type)
File "./lib/python3.6/site-packages/flasgger/utils.py", line 490, in load_from_file
site_package = imp.find_module(path[0])[1]
File "/usr/lib/python3.6/imp.py", line 297, in find_module
raise ImportError(_ERR_MSG.format(name), name=name)
ImportError: No module named 'home'
``` | closed | 2018-10-11T10:31:09Z | 2018-10-12T06:33:00Z | https://github.com/flasgger/flasgger/issues/252 | [] | jkgeyti | 4 |
desec-io/desec-stack | rest-api | 599 | Add Webapp to Feature List on Landing Page | A friend and I are looking for a new DNS host after learning that gratisdns.dk is moving all of their 200,000 domains to one.com, and one of the promising options is deSEC.
But the «Docs» link on the front page only discusses how to do things using curl and the API.
Is it really not possible to manage domains (creating domains, updating records, adding or removing records) by logging in to the website?
I am not prepared to create an account without knowing whether we would have to write our own scripts just to manage domain names. :-)
May I suggest you create documentation on how to use the web interface to manage domains, if that is a supported feature.
"enhancement",
"help wanted",
"prio: medium",
"gui"
] | solbu | 3 |
awtkns/fastapi-crudrouter | fastapi | 130 | Use CRUD functions internally | Is it possible to call the CRUD functions inside the application from other parts of the code? How do I do that?
ie:
`user = users.get(f"/user/{user_id}")`
or something like that? | open | 2022-01-02T17:13:04Z | 2022-09-11T20:23:24Z | https://github.com/awtkns/fastapi-crudrouter/issues/130 | [] | Zaffer | 2 |
ultralytics/ultralytics | machine-learning | 19,298 | Training Time | ### Search before asking
- [x] I have searched the Ultralytics YOLO [issues](https://github.com/ultralytics/ultralytics/issues) and [discussions](https://github.com/orgs/ultralytics/discussions) and found no similar questions.
### Question
Hi, I have trained and tuned the YOLOv8 model many times before, but I am tuning it again now and each epoch is taking around 8 minutes, whereas previously it took around 3 minutes on the same system. Does anyone know why this is happening? I'm on version 8.3.17 of ultralytics.
### Additional
_No response_ | open | 2025-02-18T17:12:21Z | 2025-03-12T17:27:36Z | https://github.com/ultralytics/ultralytics/issues/19298 | [
"question",
"detect"
] | rhalder2023 | 35 |
serengil/deepface | deep-learning | 507 | Videos stored on local hard drive | Hi Sefik,
I wanted to point out that the ability to run stored videos, instead of just a webcam, through the stream function would be a good feature to add to this repo.
hyperspy/hyperspy | data-visualization | 2,917 | AttributeError raised by spikes_removal_tool | Hello Everyone,
On the `release_next_minor` version of hyperspy, I have been facing a weird issue with interactive spikes removal.
Upon clicking "Find next", the widget freezes and an AttributeError is raised.
```python
File "C:\Users\NicolasTappy\Miniconda3\envs\hsd\lib\site-packages\traitsui\qt4\ui_base.py", line 55, in perform
self.ui.do_undoable(handler.perform, self.ui.info, self.action, None)
File "C:\Users\NicolasTappy\Miniconda3\envs\hsd\lib\site-packages\traitsui\ui.py", line 645, in do_undoable
action(*args, **kw)
File "C:\Users\NicolasTappy\Miniconda3\envs\hsd\lib\site-packages\traitsui\handler.py", line 214, in perform
method(info)
File "C:\Users\NicolasTappy\Miniconda3\envs\hsd\lib\site-packages\hyperspy_gui_traitsui\tools.py", line 413, in find
obj.find()
File "c:\users\nicolastappy\documents\git\hyperspy\hyperspy\signal_tools.py", line 1810, in find
self.signal._plot.pointer._set_indices(
AttributeError: 'NoneType' object has no attribute '_set_indices'
```
To reproduce:
```
dd = hs.datasets.artificial_data.get_luminescence_signal(navigation_dimension=0,uniform=False)
dd.spikes_removal_tool(interactive=True)
#Click on "find next" button
```
Note that spikes removal works as it should in non-interactive mode.
Any idea what broke? I'm not sure at this point that it isn't an issue only on my end, but I have experienced it on a clean from-source installation. | closed | 2022-04-01T12:59:12Z | 2022-04-02T09:59:02Z | https://github.com/hyperspy/hyperspy/issues/2917 | [
"type: bug"
] | LMSC-NTappy | 2 |
sqlalchemy/sqlalchemy | sqlalchemy | 10,662 | use quote_plus to encode for string url | ### Describe the bug
sqlalchemy.engine.url.URL's [render_as_string](https://github.com/sqlalchemy/sqlalchemy/blob/main/lib/sqlalchemy/engine/url.py#L629) function [url encodes](https://github.com/sqlalchemy/sqlalchemy/blob/main/lib/sqlalchemy/engine/url.py#L910C39-L910C43) some but not all characters for passwords.
I expect the output of `render_as_string` to url encode all necessary characters for a db url.
### Optional link from https://docs.sqlalchemy.org which documents the behavior that is expected
_No response_
### SQLAlchemy Version in Use
1.4.41
### DBAPI (i.e. the database driver)
n/a
### Database Vendor and Major Version
n/a
### Python Version
3.8
### Operating system
Linux
### To Reproduce
```python
from sqlalchemy import create_engine
import urllib
password = urllib.parse.quote_plus("notareal[password]")
db_url = f"postgresql+pg8000://scott:{password}@localhost:5432/mydatabase"
print(db_url)
# postgresql+pg8000://scott:notareal%5Bpassword%5D@localhost:5432/mydatabase
engine = create_engine(db_url)
print(engine.url.render_as_string(hide_password=False))
# postgresql+pg8000://scott:notareal[password]@localhost:5432/mydatabase
# The password is no longer url encoded.
```
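For reference, a stdlib-only sketch of why the escaping matters here: once the bracket is percent-encoded, `urlparse` no longer mistakes the password for an IPv6 host section.

```python
from urllib.parse import quote_plus, urlparse

password = "notareal[password]"
encoded = quote_plus(password)
print(encoded)  # notareal%5Bpassword%5D

# With the encoded password, urlparse splits the URL cleanly.
parts = urlparse(f"postgresql://scott:{encoded}@localhost:5432/mydatabase")
print(parts.hostname, parts.port)  # localhost 5432
```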
### Error
I am using AWS xray which monkey patches `Session`. The [monkey patched code](https://github.com/aws/aws-xray-sdk-python/blob/master/aws_xray_sdk/ext/sqlalchemy_core/patch.py#L17C27-L17C28) uses `urllib.parse.urlparse` on the string returned by URL's `render_as_string`.
One of my passwords included a square bracket. Because square brackets are not url encoded by `render_as_string`, this leads to `urlparse` interpreting the db url as an invalid IPv6 url. This breaks xray functionality and logs an error:
```
[ERROR] Error parsing sql metadata.
Traceback (most recent call last):
File "/var/task/aws_xray_sdk/ext/sqlalchemy_core/patch.py", line 22, in _sql_meta
url = urlparse(str(engine_instance.engine.url))
File "/var/lang/lib/python3.8/urllib/parse.py", line 384, in urlparse
splitresult = urlsplit(url, scheme, allow_fragments)
File "/var/lang/lib/python3.8/urllib/parse.py", line 486, in urlsplit
raise ValueError("Invalid IPv6 URL")
```
### Additional context
Is there any reason why sqlalchemy is only [url encoding specific characters](https://github.com/sqlalchemy/sqlalchemy/blob/main/lib/sqlalchemy/engine/url.py#L909-L910) instead of using `urllib.parse.quoteplus`? | closed | 2023-11-20T17:53:07Z | 2024-01-09T16:28:09Z | https://github.com/sqlalchemy/sqlalchemy/issues/10662 | [
"bug",
"engine"
] | jachien | 6 |
biolab/orange3 | data-visualization | 6,052 | CSV file autoformat error | In the "CSV File import" widget, the automatic type assigned to numbers truncates values to N e+16.
In this example, it's also strange that the column _"Feat2"_ is shown as descriptive after the "Group by" action. Interposing a "Select column" does not give any improvement.
Here attached, the files to reproduce the error.
[Orange - Check 6052.zip](https://github.com/biolab/orange3/files/9038539/Orange.-.Check.6052.zip) | closed | 2022-07-04T10:18:21Z | 2022-07-05T21:44:04Z | https://github.com/biolab/orange3/issues/6052 | [
"bug report"
] | hydrastarmaster | 4 |
alteryx/featuretools | data-science | 2,458 | Add AgeToDesignation primitive | The following are the American Medical Associations’ age designations:
- Neonates or newborns (birth to 1 month)
- Infants (1 month to 1 year)
- Children (1 year through 12 years)
- Adolescents (13 years through 17 years. They may also be referred to as teenagers depending on the context.)
- Adults (18 years or older)
- Older adults (65 and older)* | open | 2023-01-20T17:03:05Z | 2023-06-26T19:16:19Z | https://github.com/alteryx/featuretools/issues/2458 | [] | gsheni | 0 |
babysor/MockingBird | pytorch | 445 | 使用WaveRNN报错 | 报错如下
> Traceback (most recent call last):
File "D:\Apps\Anaconda3\lib\site-packages\flask\app.py", line 2447, in wsgi_app
response = self.full_dispatch_request()
File "D:\Apps\Anaconda3\lib\site-packages\flask\app.py", line 1952, in full_dispatch_request
rv = self.handle_user_exception(e)
File "D:\Apps\Anaconda3\lib\site-packages\flask_restx\api.py", line 672, in error_router
return original_handler(e)
File "D:\Apps\Anaconda3\lib\site-packages\flask\app.py", line 1821, in handle_user_exception
reraise(exc_type, exc_value, tb)
File "D:\Apps\Anaconda3\lib\site-packages\flask\_compat.py", line 39, in reraise
raise value
File "D:\Apps\Anaconda3\lib\site-packages\flask\app.py", line 1950, in full_dispatch_request
rv = self.dispatch_request()
File "D:\Apps\Anaconda3\lib\site-packages\flask\app.py", line 1936, in dispatch_request
return self.view_functions[rule.endpoint](**req.view_args)
File "D:\Users\Jerry\Documents\Jerry\MockingBird-main\web\__init__.py", line 118, in synthesize
write(out, sample_rate, wav.astype(np.float32))
AttributeError: 'tuple' object has no attribute 'astype'
127.0.0.1 - - [2022-03-09 14:13:44] "POST /api/synthesize HTTP/1.1" 500 426 14.539417 | open | 2022-03-09T06:14:50Z | 2022-03-17T16:59:07Z | https://github.com/babysor/MockingBird/issues/445 | [] | JerryZRF | 2 |
plotly/dash-bio | dash | 292 | APP QA 2: Oncoprint | - [ ] Should be a Dash DAQ color picker:

- [ ] From the explainer, it's still hard to understand how to interpret this chart
| closed | 2019-03-31T05:23:41Z | 2019-04-24T15:10:20Z | https://github.com/plotly/dash-bio/issues/292 | [
"App QA"
] | jackparmer | 1 |
OpenBB-finance/OpenBB | python | 6,921 | [🕹️] [oss.gg hackathon] Starry-eyed supporter | ### What side quest or challenge are you solving?
Starry-eyed supporter
### Points
150
### Description
Got five friends to star OpenBB repository.
### Provide proof that you've completed the task





| closed | 2024-10-31T04:48:58Z | 2024-11-02T07:40:46Z | https://github.com/OpenBB-finance/OpenBB/issues/6921 | [] | Shrinivasdumbali | 1 |
PrefectHQ/prefect | data-science | 16,910 | consolidate use of `pendulum` so that it can be replaced | this will be a long-lived issue that's a follow on to https://github.com/PrefectHQ/prefect/pull/16356
### Describe the current behavior
`pendulum` is no longer actively maintained and is blocking our ability to support 3.13
see https://github.com/pydantic/pydantic-extra-types/issues/239
### Describe the proposed behavior
create an intermediate API in `prefect.types._datetime` that will act as an interface as we explore pendulum alternatives
### Example Use
_No response_
### Additional context
_No response_ | open | 2025-01-30T18:24:08Z | 2025-02-12T04:44:45Z | https://github.com/PrefectHQ/prefect/issues/16910 | [
"development"
] | zzstoatzz | 1 |
DistrictDataLabs/yellowbrick | matplotlib | 329 | Cannot import Rank 1-D | Import error
<img width="1102" alt="screen shot 2018-03-10 at 3 54 07 pm" src="https://user-images.githubusercontent.com/24282993/37246647-51c0268e-247b-11e8-8aaa-8ce3784ef9bc.png">
| closed | 2018-03-10T20:54:47Z | 2018-03-28T20:56:12Z | https://github.com/DistrictDataLabs/yellowbrick/issues/329 | [
"priority: high"
] | wagner2010 | 6 |
nerfstudio-project/nerfstudio | computer-vision | 3,601 | How to render images with the same pose of GT from checkpoint. | Can anybody help me
I have trained my model , and I want to render some images to caculate PSNR, SSIM and LPIPS. The pose has been optimised in my metheod. I have tried "ns-render dataset " .Maybe there is a gap , The LIPIS is ok , but PSNR and SSIM is very bad . | open | 2025-02-21T07:09:41Z | 2025-02-22T07:22:11Z | https://github.com/nerfstudio-project/nerfstudio/issues/3601 | [] | sunbeam-217 | 2 |
zappa/Zappa | django | 1,285 | Manually created API Gateway method not working after zappa update | ## Context
We use zappa to deploy our Django api. I wanted to enable caching on one of the endpoints in API Gateway. In API Gateway I created the required resources and GET method and linked it to my Lambda function. After that in the Method request tab I created the necessary URL query string parameters and in the Integration request tab enabled Lambda proxy integration.
This all works fine. Requests with a given query parameter are cached correctly. However, when I run `zappa update` to push some new code, the method I created isn't working anymore. To get it working again, I have to remove the method, create it again and the deploy it (all is done in API Gateway). Is there a way to circumvent this using Zappa or should I look into AWS settings for this?
## Expected Behavior
I expect the endpoint method to keep working, because no setting has changed (I re-create the method with the exact same settings)
## Actual Behavior
See above
## Your Environment
* Zappa version used: 0.57.0
* Operating System and Python version: Python 3.10
* The output of `pip freeze`:
```
aiohttp==3.9.0
aiosignal==1.3.1
annotated-types==0.5.0
argcomplete==3.1.1
asgiref==3.7.2
async-timeout==4.0.3
attrs==23.1.0
aws-psycopg2==1.3.8
beautifulsoup4==4.12.2
bleach==6.0.0
blis==0.7.10
boto3==1.28.37
botocore==1.31.37
cachetools==5.3.1
catalogue==2.0.9
certifi==2023.7.22
cffi==1.15.1
cfgv==3.4.0
cfn-flip==1.3.0
charset-normalizer==3.2.0
click==8.1.7
colorama==0.4.6
confection==0.1.1
cryptography==41.0.4
cymem==2.0.7
defusedxml==0.7.1
distlib==0.3.7
dj-rest-auth==4.0.1
Django==4.2.7
django-allauth==0.54.0
django-annoying==0.10.6
django-constance==3.1.0
django-cors-headers==4.2.0
django-extensions==3.2.3
django-generate-secret-key==1.0.2
django-jsonform==2.19.0
django-resized==1.0.2
django-s3-storage==0.14.0
django-sql-dashboard==1.1
djangorestframework==3.14.0
drf-spectacular==0.26.4
durationpy==0.5
exceptiongroup==1.1.3
filelock==3.12.3
frozenlist==1.4.0
google-api-core==2.11.1
google-api-python-client==2.48.0
google-auth==2.22.0
google-auth-httplib2==0.1.0
google-cloud-core==2.3.3
google-cloud-texttospeech==2.14.1
google-cloud-translate==3.12.0
googleapis-common-protos==1.60.0
grpcio==1.53.0
grpcio-status==1.53.0
grpcio-tools==1.53.0
hjson==3.1.0
httplib2==0.22.0
identify==2.5.27
idna==3.4
inflection==0.5.1
iniconfig==2.0.0
Jinja2==3.1.2
jmespath==1.0.1
jsonschema==4.19.0
jsonschema-specifications==2023.7.1
kappa==0.6.0
langcodes==3.3.0
Markdown==3.4.4
MarkupSafe==2.1.3
multidict==6.0.4
murmurhash==1.0.9
nodeenv==1.8.0
numpy==1.25.2
oauthlib==3.2.2
openai==1.3.5
packaging==23.1
param==1.13.0
pathy==0.10.2
Pillow==10.0.1
placebo==0.9.0
platformdirs==3.10.0
pluggy==1.3.0
pre-commit==3.3.3
preshed==3.0.8
proto-plus==1.22.3
protobuf==4.21.12
psycopg2-binary==2.9.7
pyasn1==0.5.0
pyasn1-modules==0.3.0
pycparser==2.21
pydantic==2.3.0
pydantic_core==2.6.3
pyfiglet==0.8.post1
PyJWT==2.8.0
pyparsing==3.1.1
pyphen==0.14.0
pytest==7.4.0
pytest-django==4.5.2
python-dateutil==2.8.2
python-dotenv==1.0.0
python-resize-image==1.1.20
python-slugify==8.0.1
python3-openid==3.2.0
pytz==2023.3
PyYAML==6.0.1
referencing==0.30.2
requests==2.31.0
requests-oauthlib==1.3.1
rpds-py==0.10.0
rsa==4.9
s3transfer==0.6.2
shortuuid==1.0.11
six==1.16.0
smart-open==6.3.0
soupsieve==2.4.1
spacy==3.6.1
spacy-legacy==3.0.12
spacy-loggers==1.0.4
spacy-syllables==3.0.2
sqlparse==0.4.4
srsly==2.4.7
stability-sdk==0.8.4
termcolor==1.1.0
text-unidecode==1.3
thinc==8.1.12
toml==0.10.2
tomli==2.0.1
tqdm==4.66.1
troposphere==4.4.1
typer==0.9.0
typing_extensions==4.7.1
tzdata==2023.3
uritemplate==4.1.1
urllib3==1.26.18
vcrpy==5.1.0
virtualenv==20.24.3
wasabi==1.1.2
webencodings==0.5.1
werkzeug>=3.0.1
# windows-curses==2.3.1
yarl==1.9.2
zappa==0.57.0
```
* Your `zappa_settings.json`:
```
{
"development": {
"aws_region": "eu-central-1",
"django_settings": "my_project.settings",
"project_name": "app",
"runtime": "python3.10",
"s3_bucket": "zappa-lambda-my_project",
"environment_variables": {
"ENVIRONMENT": "DEV",
"SETTINGS_FILE": "envs/.dev.json"
},
"exclude": ["venv", "*.sqlite3", "scripts", "datadump.json"],
"slim_handler": true,
"keep_warm": true,
"timeout_seconds": 300,
"memory_size": 2000,
"log_level": "DEBUG",
"cloudwatch_log_level": "DEBUG"
},
"production": {
"aws_region": "eu-central-1",
"django_settings": "my_project.settings",
"project_name": "zappa-lambda-my_project",
"runtime": "python3.10",
"s3_bucket": "zappa-lambda-my_project",
"environment_variables": {
"ENVIRONMENT": "PROD",
"SETTINGS_FILE": "envs/.prod.json"
},
"exclude": ["venv", "*.sqlite3", "scripts", "datadump.json"],
"slim_handler": true,
"keep_warm": true,
"timeout_seconds": 600,
"memory_size": 3000,
"log_level": "ERROR",
"cloudwatch_log_level": "INFO"
}
}
``` | closed | 2023-11-23T12:28:28Z | 2024-04-13T20:36:59Z | https://github.com/zappa/Zappa/issues/1285 | [
"no-activity",
"auto-closed"
] | KenSentMe | 4 |
deepset-ai/haystack | nlp | 8,587 | `ChatMessage` - introduce `text` property | (motivation in #8583)
- Introduce a `text` property that mirrors `content`
- If users/applications directly access the `content` attribute, show a deprecation warning telling that `content` will be removed in 2.9.0 and to use `text` instead
- Update all Haystack components to access `text` instead of `content` | closed | 2024-11-26T16:43:18Z | 2024-11-28T10:18:40Z | https://github.com/deepset-ai/haystack/issues/8587 | [] | anakin87 | 1 |
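A minimal sketch of the shim described above (an illustration of the property-plus-warning pattern, not Haystack's actual `ChatMessage` implementation):

```python
import warnings
from dataclasses import dataclass

@dataclass
class ChatMessage:
    _content: str

    @property
    def text(self) -> str:
        """New accessor mirroring the underlying content."""
        return self._content

    @property
    def content(self) -> str:
        """Legacy accessor; warns until removal in 2.9.0."""
        warnings.warn(
            "`content` is deprecated and will be removed in Haystack 2.9.0; "
            "use `text` instead.",
            DeprecationWarning,
            stacklevel=2,
        )
        return self._content
```

The real `ChatMessage` also carries a role and metadata; the point here is only that `text` reads the same data while direct `content` access triggers the deprecation warning.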
matplotlib/matplotlib | matplotlib | 29,067 | [Bug]: `secondary_xaxis` produces ticks at incorrect locations | ### Bug summary
It is possible I'm doing this incorrectly, but for a very simple example `secondary_xaxis` puts tick marks at incorrect locations. Modifying slightly the interpolation example from here https://matplotlib.org/stable/gallery/subplots_axes_and_figures/secondary_axis.html:
### Code for reproduction
```Python
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.ticker import AutoMinorLocator

fig, ax = plt.subplots(constrained_layout=True)
xdata = np.arange(0, 11, 0.4)
ydata = np.random.randn(len(xdata))
ax.plot(xdata, ydata, label='Plotted data')
ax.set_xlabel('X [m]')
ax.legend()
xnew = xdata**2
def forward(x):
return np.interp(x, xdata, xnew)
def inverse(x):
return np.interp(x, xnew, xdata)
secax = ax.secondary_xaxis('top', functions=(forward, inverse))
secax.xaxis.set_minor_locator(AutoMinorLocator())
secax.set_xlabel('$X_{other}$')
plt.show()
```
### Actual outcome
<img width="627" alt="image" src="https://github.com/user-attachments/assets/cb45f32e-4f53-4f6a-ad9d-4eed2c948c35">
### Expected outcome
Notice that e.g. 0 on the lower axis is not aligned with 0 on the top and 10 on the bottom is not aligned with 100 on the top.
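One possible culprit (an assumption on my part, not confirmed): `np.interp` does not extrapolate — inputs outside the data range are clamped to the endpoint values — while `secondary_xaxis` evaluates `forward`/`inverse` over the full view limits, which extend slightly past the data because of the default axes margins. A quick check:

```python
import numpy as np

xdata = np.arange(0, 11, 0.4)
xnew = xdata**2

def forward(x):
    return np.interp(x, xdata, xnew)

# The view limits extend past the data range, but np.interp clamps there:
print(forward(-0.5))  # same as forward(0.0) — no extrapolation below the data
print(forward(11.5))  # clamped to xnew[-1], same as forward(xdata[-1])
```

If that is indeed the cause, defining the transforms analytically over the whole axis range (e.g. `x**2` and `np.sqrt`) should sidestep it.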
### Additional information
_No response_
### Operating system
OS/X
### Matplotlib Version
3.9.2
### Matplotlib Backend
module://matplotlib_inline.backend_inline
### Python version
3.10.14
### Jupyter version
7.2.2
### Installation
pip | closed | 2024-11-04T14:34:54Z | 2024-11-21T20:44:19Z | https://github.com/matplotlib/matplotlib/issues/29067 | [
"Documentation: tutorials"
] | dkweiss31 | 9 |
plotly/dash-bio | dash | 246 | ManhattanPlot is buggy and missing some features | The ManhattanPlot app is missing some features of the other apps, such as:
* the option to download a sample dataset/upload a dataset
* multiple datasets to choose from
* any graph coloring options
In addition, isolating a trace and subsequently changing the threshold causes all of the traces (including the hidden traces) to display.
| closed | 2019-03-19T16:07:31Z | 2021-05-04T20:27:48Z | https://github.com/plotly/dash-bio/issues/246 | [
"App QA"
] | shammamah-zz | 1 |
nltk/nltk | nlp | 2,586 | PunktTokenizer: Inconsistency in two snippets, different languages | I am trying to work with the following example snippet [I found on StackOverflow](https://stackoverflow.com/questions/29746635/nltk-sentence-tokenizer-custom-sentence-starters)
**Example 1: Works**
```py
from nltk.tokenize.punkt import PunktSentenceTokenizer, PunktLanguageVars
class BulletPointLangVars(PunktLanguageVars):
sent_end_chars = ('.', '?', '!', '•')
tokenizer = PunktSentenceTokenizer(lang_vars = BulletPointLangVars())
sentences = tokenizer.tokenize(u"• I am a sentence • I am another sentence")
for sentence in sentences:
print(sentence)
```
The above works, and provides the expected output.
Edit: After some more debugging, the above fails if I remove the space preceding •.
Now I'm trying the same thing with minor modifications, using a Unicode full stop from a different language.
**Example 2: Fails**
```py
from nltk.tokenize.punkt import PunktSentenceTokenizer, PunktLanguageVars
class BulletPointLangVars(PunktLanguageVars):
sent_end_chars = ('.', '?', '!', '\u0964')
tokenizer = PunktSentenceTokenizer(lang_vars = BulletPointLangVars())
sentences = tokenizer.tokenize(u"উপরাষ্ট্রপতি শ্রী এম ভেঙ্কাইয়া নাইডু সোমবার আই আই টি দিল্লির হীরক জয়ন্তী উদযাপনের উদ্বোধন করেছেন। অনলাইনের মাধ্যমে এই অনুষ্ঠানে কেন্দ্রীয় মানব সম্পদ উন্নয়নমন্ত্রী শ্রী রমেশ পোখরিয়াল ‘নিশাঙ্ক’ উপস্থিত ছিলেন। এই উপলক্ষ্যে উপরাষ্ট্রপতি হীরকজয়ন্তীর লোগো এবং ২০৩০-এর জন্য প্রতিষ্ঠানের লক্ষ্য ও পরিকল্পনার নথি প্রকাশ করেছেন। অনুষ্ঠানে বক্তব্য রাখতে গিয়ে শ্রী নাইডু বলেছেন, জলবায়ু পরিবর্তন থেকে স্বাস্থ্য সমস্যা ౼ মানবজাতি আজ যে সমস্ত সমস্যার মুখোমুখি হচ্ছে, সেগুলিকে সমাধানের জন্য আইআইটি সহ অন্যান্য উচ্চ শিক্ষা প্রতিষ্ঠানের গবেষণার উপর গুরুত্ব দেওয়া উচিৎ। দেশের সমস্যাগুলির স্থিতিশীল সমাধানের ভারতীয় প্রতিষ্ঠানগুলি যখন এমন কিছু কাজ করবে, যাতে সমাজের উপর তার ইতিবাচক প্রভাব পরে, তাহলেই সেগুলি বিশ্বে সর্বশ্রেষ্ঠ প্রতিষ্ঠান হয়ে উঠতে পারবে। সামাজিক বিভিন্ন সমস্যার সমাধান খুজতে বেসরকারি ক্ষেত্রকে, শিক্ষাক্ষেত্রে গবেষণা ও উন্নয়নের জন্য খোলা মনে বিনিয়োগের তিনি আহ্বান জানিয়েছেন। আইআইটির ছাত্রছাত্রীদের গ্রামীণ ভারত ও কৃষকদের নানা সমসয়ার সমাধান ছাড়াও কি করে পুষ্টিকর ও প্রোটিন সমৃদ্ধ শস্য উৎপ দন বৃদ্ধি করা যায়, শ্রী নাইডু সেই বিষয়গুলি নিয়েও কাজ করার পরামর্শ দেন। উচ্চশিক্ষা প্রতিষ্ঠানগুলিকে এককভাবে নয়, শিল্প সংস্থাগুলির সঙ্গে জোট বেঁধে অতাধুনিক প্রযুক্তি উদ্ভাবন করতে হবে। এর ফলে দ্রুত ও ফলাফল ভিত্তিক নানা প্রকল্প বাস্তবায়নে সুবিধে হবে। নতুন শিক্ষানীতির প্রসঙ্গে উপরাষ্ট্রপতি বলেছেন, ভারতকে আন্তর্জাতিক শিক্ষা প্রতিষ্ঠানের গন্তব্যে পরিণত করার জন্য এই নীতি সহায়ক হবে। উচ্চ শিক্ষা প্রতিষ্ঠানের মানোন্নয়নে সরকার, বিশ্ববিদ্যালয়, শিক্ষাবিদ ও বেসরকারী প্রতিষ্ঠানগুলিকে একযোগে কাজ করতে হবে। আইআইটি দিল্লি শিল্পোদ্যোগ গড়ার কেন্দ্র হয়ে উঠছে বলে উপরাষ্ট্রপতি সন্তোষ প্রকাশ করেছেন। এই প্রসঙ্গে তিনি মানব সম্পদ উন্নয়ন মন্ত্রকের ‘উন্নত ভারত অভিযান’ কর্মসূচীতে দিল্লি আইআইটি অনুঘটকের ভূমিকা পালন করায় এই প্রতিষ্ঠানের প্রশংসা করেছেন। শ্রী পোখরিয়াল, এই অনুষ্ঠানের উদ্বোধন করার জন্য উপরাষ্ট্রপতির প্রতি কৃতজ্ঞতা জানিয়েছেন। তিনি বলেছেন, আমাদের দেশের ছাত্রছাত্রীদের জন্য একটি আধুনিক ও উন্নত শিক্ষা ব্যবস্থা গড়ে তুলতে২০২০-র নতুন জাতীয় শিক্ষানীতি সহায়ক হবে। 
আইআইটি দিল্লির গৌরবময় ৬০ বছরের উল্লেখ করে তিনি বলেছেন সারা দেশ যখন ভিড-১৯ মহামারীর বিরুদ্ধে লড়াই করছে, তখন আইআইটি দিল্লি নানাভাবে কারিগরি সহায়তা দিয়েছে, যা সময়োপযোগী ও মূল্যবান। বিগত ৫ বছরে এই শিক্ষা প্রতিষ্ঠানের শিক্ষক শিক্ষিকা ও ছাত্রছাত্রীরা ৫০০-র বেশী পেটেন্টের আবেদন করেছেন এবং তাঁদের ১০হাজারের বেশী গবেষণা পত্র বিভিন্ন আন্তর্জাতিক পত্র পত্রিকায় ছাপানো হয়েছে। ২০১৬ সালে সরকার এই প্রতিষ্ঠানকে যেখানে গবেষণার জন্য ১০০ কোটি টাকা দিয়েছিল, ২০১৯ সালে তা বেড়ে হয়েছে ৪০০ কোটি টাকা। দিল্লি আইআইটির ডিরেক্টর অধ্যাপক ভি রামগোপাল রাও জানিয়েছেন, ২০৩০ সালের যে লক্ষ মাত্রা এই শিক্ষা প্রতিষ্ঠা নিয়েছে, তার ফলে ছাত্রছাত্রী, প্রাক্তনী, শিক্ষক শিক্ষিকা ও কর্মীবর্গদের জীবনে ইতিবাচক প্রভাব পড়বে এবং আগামী দিনে দেশের প্রগতিতে তা সহায়ক হবে। অনুষ্ঠানের দ্বিতীয় পর্বে অধ্যাপক দেবাঙ্খখর, অধ্যাপক এম বালাকৃষ্ণাণ বক্তব্য রাখেন। ‘আই আইটি দিল্লিঃ ৬০ বছরের উৎকর্ষতার স্মৃতিচারণা ও ভবিষ্যৎ পরিকল্পনা’ শীর্ষক এক আলোচনায় এই প্রতিষ্ঠানের প্রাক্তন ডিরেক্টর অধ্যাপক ভি এস রাজু, অধ্যাপক আর এস শিরোহী, অধ্যাপক সুরেন্দ্র প্রসাদ ও অধ্যাপক আর কে শেভগাওকর অংশ নেন।")
for sentence in sentences:
print(sentence)
```
`\u0964` corresponds to the Devanagari full stop (I have tried putting the normal one as well). I am not getting results similar to example 1. What could be going wrong here?
autogluon/autogluon | data-science | 4,841 | [BUG] References to Python 3.8 in workflow files may break builds | **Describe the bug**
Some GitHub workflow configuration files still reference Python 3.8, which is no longer supported by AutoGluon. These outdated references may lead to errors during the build process or unintended issues in the CI/CD pipeline.
The following files and lines contain references to Python 3.8:
https://github.com/autogluon/autogluon/blob/f1bd5f42b2da0099c8d7319f38f811127446d9af/.github/workflows/pythonpublish.yml#L23
https://github.com/autogluon/autogluon/blob/f1bd5f42b2da0099c8d7319f38f811127446d9af/.github/workflows/pythonpublish_testpypi.yml#L19
https://github.com/autogluon/autogluon/blob/f1bd5f42b2da0099c8d7319f38f811127446d9af/.github/workflows/pypi_release.yml#L22
**Proposed solution**
Update the workflow files to reference supported Python versions only (e.g., 3.9+). Additionally, review the affected files to ensure all configurations are up-to-date with AutoGluon's current requirements. | closed | 2025-01-26T10:06:07Z | 2025-01-29T20:13:24Z | https://github.com/autogluon/autogluon/issues/4841 | [
"code cleanup",
"Needs Triage"
] | celestinoxp | 0 |
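A quick way to audit the workflows for stale version pins (the directory layout below is a stand-in for the real repo):

```shell
# Recreate a minimal workflow file containing the offending pin, then grep for it.
mkdir -p demo/.github/workflows
cat > demo/.github/workflows/pythonpublish.yml <<'EOF'
jobs:
  deploy:
    steps:
      - uses: actions/setup-python@v4
        with:
          python-version: 3.8
EOF

# List every workflow line still referencing Python 3.8:
grep -rn "python-version: 3\.8" demo/.github/workflows/
```

Running the same `grep` against the actual `.github/workflows/` directory would surface all three lines listed above in one pass.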
deepfakes/faceswap | machine-learning | 582 | ERROR :Caught exception in child process: 14128 | GUI Extract error
### GUI log
Loading...
01/08/2019 21:48:29 INFO Log level set to: INFO
01/08/2019 21:48:31 INFO Output Directory: F:\Python\faceswap-master\output
01/08/2019 21:48:31 INFO Input Video: F:\Python\faceswap-master\input\1.mp4
01/08/2019 21:48:31 INFO Loading Detect from Mtcnn plugin...
01/08/2019 21:48:31 INFO Loading Align from Fan plugin...
01/08/2019 21:48:31 INFO NB: Parallel processing disabled.You may get faster extraction speeds by enabling it with the -mp switch
01/08/2019 21:48:31 INFO Starting, this may take a while...
01/08/2019 21:48:32 INFO Initializing MTCNN Detector...
**01/08/2019 21:48:32 ERROR Caught exception in child process: 14128**
01/08/2019 21:49:31 INFO Waiting for Detector... Time out in 4 minutes
01/08/2019 21:50:31 INFO Waiting for Detector... Time out in 3 minutes
01/08/2019 21:51:31 INFO Waiting for Detector... Time out in 2 minutes
01/08/2019 21:52:31 INFO Waiting for Detector... Time out in 1 minutes
### crash_report
01/08/2019 21:48:32 Detector.run MainThread mtcnn initialize INFO Initializing MTCNN Detector...
01/08/2019 21:48:32 Detector.run MainThread _base run ERROR Caught exception in child process: 14128
01/08/2019 21:49:31 MainProcess MainThread extract launch_detector INFO Waiting for Detector... Time out in 4 minutes
01/08/2019 21:50:31 MainProcess MainThread extract launch_detector INFO Waiting for Detector... Time out in 3 minutes
01/08/2019 21:51:31 MainProcess MainThread extract launch_detector INFO Waiting for Detector... Time out in 2 minutes
01/08/2019 21:52:31 MainProcess MainThread extract launch_detector INFO Waiting for Detector... Time out in 1 minutes
Traceback (most recent call last):
File "F:\Python\faceswap-master\lib\cli.py", line 90, in execute_script
process.process()
File "F:\Python\faceswap-master\scripts\extract.py", line 49, in process
self.run_extraction()
File "F:\Python\faceswap-master\scripts\extract.py", line 143, in run_extraction
self.run_detection(to_process)
File "F:\Python\faceswap-master\scripts\extract.py", line 194, in run_detection
self.plugins.launch_detector()
File "F:\Python\faceswap-master\scripts\extract.py", line 379, in launch_detector
raise ValueError("Error initializing Detector")
ValueError: Error initializing Detector
============ System Information ============
git_branch: Not Found
git_commits: Not Found
gpu_cuda: 9.0
gpu_cudnn: 7.4.2
gpu_devices: GPU_0: GeForce GTX 750
gpu_driver: 417.22
gpu_vram: GPU_0: 1024MB
os_machine: AMD64
os_platform: Windows-10-10.0.17134-SP0
os_release: 10
py_command: F:\Python\faceswap-master\faceswap.py extract -i F:/Python/faceswap-master/input/1.mp4 -o F:/Python/faceswap-master/output -l 0.6 --serializer json -D mtcnn -A fan -mtms 20 -mtth 0.6 0.7 0.7 -mtsc 0.709 -sz 256 -L INFO
py_conda_version: N/A
py_implementation: CPython
py_version: 3.6.6
py_virtual_env: False
sys_cores: 4
sys_processor: Intel64 Family 6 Model 60 Stepping 3, GenuineIntel
sys_ram: Total: 8129MB, Available: 3269MB, Used: 4860MB, Free: 3269MB
-------------------------------
| closed | 2019-01-08T14:19:09Z | 2019-01-11T07:49:28Z | https://github.com/deepfakes/faceswap/issues/582 | [] | dream80 | 3 |
plotly/dash | data-visualization | 2,702 | Pages should do recursive search for image files | When creating meta tags, Pages only searches the root of the assets folder for image files. It would be better to do a recursive search since it's common to use folders within `/assets` in larger projects.
This would still be consistent with the docs which states:
> The image value must be the name of a file inside the assets folder.
> If you don't specify image, Pages checks for an image that meets one of these criteria (in order) and uses the first one it finds....
Currently it's necessary to specify the image file name ie `image="images/app.png"` when using subfolder in `/assets`. This change would make that step unnecessary.
I could do the PR to fix this if you like :slightly_smiling_face:
| closed | 2023-11-27T15:22:21Z | 2023-11-28T21:04:37Z | https://github.com/plotly/dash/issues/2702 | [] | AnnMarieW | 2 |
AUTOMATIC1111/stable-diffusion-webui | pytorch | 16,642 | [Bug]: REinstalling mmcv on every launch | ### Checklist
- [ ] The issue exists after disabling all extensions
- [ ] The issue exists on a clean installation of webui
- [ ] The issue is caused by an extension, but I believe it is caused by a bug in the webui
- [ ] The issue exists in the current version of the webui
- [ ] The issue has not been reported before recently
- [X] The issue has been reported before but has not been fixed yet
### What happened?
Whenever I launch ./webui.sh on Ubuntu 22.04 the script outputs: "Installing dependencies" and then uninstalls mmcv, mmdet and openmim.
It then proceeds to reinstall the three packages but installs mmcv 2.2 and immediately fails with an AssertionError saying mmcv must be >= 2.0.0rc4 and < 2.2.0.
If I uninstall 2.2 and manually reinstall 2.0.0rc4 in the venv, on the next launch it uninstalls the package and groundhog day...
When passing --disable-all-extensions then the script does not install dependencies.
### Steps to reproduce the problem
pip uninstall mmcv
pip install mmcv==2.0.0rc4
./webui.sh
### What should have happened?
If manual installation of mmcv does not meet the requirement, then why is the script upgrading to 2.2 every time?
### What browsers do you use to access the UI ?
Other
### Sysinfo
{
"Platform": "Linux-5.15.0-125-generic-x86_64-with-glibc2.35",
"Python": "3.10.12",
"Version": "v1.10.1",
"Commit": "82a973c04367123ae98bd9abdf80d9eda9b910e2",
"Git status": "On branch master\nYour branch is up to date with 'origin/master'.\n\nChanges not staged for commit:\n (use \"git add <file>...\" to update what will be committed)\n (use \"git restore <file>...\" to discard changes in working directory)\n\tmodified: requirements.txt\n\tmodified: webui-user.bat\n\tmodified: webui-user.sh\n\nUntracked files:\n (use \"git add <file>...\" to include in what will be committed)\n\t.webui-user.bat.un~\n\t.webui-user.sh.un~\n\t.webui.sh.un~\n\t=2.0.0\n\t=3.0.0\n\tapi_out/\n\thtml/img/\n\tnohup.out\n\tscripts/detect_extension.py\n\nno changes added to commit (use \"git add\" and/or \"git commit -a\")",
"Script path": "/data/stable-diffusion-webui",
"Data path": "/data/stable-diffusion-webui",
"Extensions dir": "/data/stable-diffusion-webui/extensions",
"Checksum": "2fba5f7e0e22457b806051af343991029b79abbfe12017f5cdbbd81626cc07aa",
"Commandline": [
"launch.py",
"--medvram",
"--xformers",
"--no-half-vae"
],
"Torch env info": {
"torch_version": "2.1.2+cu121",
"is_debug_build": "False",
"cuda_compiled_version": "12.1",
"gcc_version": "(Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0",
"clang_version": null,
"cmake_version": null,
"os": "Ubuntu 22.04.5 LTS (x86_64)",
"libc_version": "glibc-2.35",
"python_version": "3.10.12 (main, Sep 11 2024, 15:47:36) [GCC 11.4.0] (64-bit runtime)",
"python_platform": "Linux-5.15.0-125-generic-x86_64-with-glibc2.35",
"is_cuda_available": "True",
"cuda_runtime_version": null,
"cuda_module_loading": "LAZY",
"nvidia_driver_version": "535.216.01",
"nvidia_gpu_models": "GPU 0: NVIDIA GeForce RTX 3060 Ti",
"cudnn_version": null,
"pip_version": "pip3",
"pip_packages": [
"numpy==1.26.2",
"open-clip-torch==2.20.0",
"pytorch-lightning==1.9.4",
"torch==2.1.2+cu121",
"torchdiffeq==0.2.3",
"torchmetrics==1.5.2",
"torchsde==0.2.6",
"torchvision==0.16.2+cu121",
"triton==2.1.0"
],
"conda_packages": null,
"hip_compiled_version": "N/A",
"hip_runtime_version": "N/A",
"miopen_runtime_version": "N/A",
"caching_allocator_config": "garbage_collection_threshold:0.9,max_split_size_mb:512",
"is_xnnpack_available": "True",
"cpu_info": [
"Architecture: x86_64",
"CPU op-mode(s): 32-bit, 64-bit",
"Address sizes: 46 bits physical, 48 bits virtual",
"Byte Order: Little Endian",
"CPU(s): 48",
"On-line CPU(s) list: 0-47",
"Vendor ID: GenuineIntel",
"Model name: Intel(R) Xeon(R) CPU E5-2697 v2 @ 2.70GHz",
"CPU family: 6",
"Model: 62",
"Thread(s) per core: 2",
"Core(s) per socket: 12",
"Socket(s): 2",
"Stepping: 4",
"CPU max MHz: 3500.0000",
"CPU min MHz: 1200.0000",
"BogoMIPS: 5400.22",
"Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm cpuid_fault epb pti intel_ppin ssbd ibrs ibpb stibp tpr_shadow vnmi flexpriority ept vpid fsgsbase smep erms xsaveopt dtherm ida arat pln pts md_clear flush_l1d",
"Virtualization: VT-x",
"L1d cache: 768 KiB (24 instances)",
"L1i cache: 768 KiB (24 instances)",
"L2 cache: 6 MiB (24 instances)",
"L3 cache: 60 MiB (2 instances)",
"NUMA node(s): 2",
"NUMA node0 CPU(s): 0-11,24-35",
"NUMA node1 CPU(s): 12-23,36-47",
"Vulnerability Gather data sampling: Not affected",
"Vulnerability Itlb multihit: KVM: Mitigation: VMX disabled",
"Vulnerability L1tf: Mitigation; PTE Inversion; VMX conditional cache flushes, SMT vulnerable",
"Vulnerability Mds: Mitigation; Clear CPU buffers; SMT vulnerable",
"Vulnerability Meltdown: Mitigation; PTI",
"Vulnerability Mmio stale data: Unknown: No mitigations",
"Vulnerability Reg file data sampling: Not affected",
"Vulnerability Retbleed: Not affected",
"Vulnerability Spec rstack overflow: Not affected",
"Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp",
"Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization",
"Vulnerability Spectre v2: Mitigation; Retpolines; IBPB conditional; IBRS_FW; STIBP conditional; RSB filling; PBRSB-eIBRS Not affected; BHI Not affected",
"Vulnerability Srbds: Not affected",
"Vulnerability Tsx async abort: Not affected"
]
},
"Exceptions": [
{
"exception": "MMCV==2.2.0 is used but incompatible. Please install mmcv>=2.0.0rc4, <2.2.0.",
"traceback": [
[
"/data/stable-diffusion-webui/modules/scripts.py, line 515, load_scripts",
"script_module = script_loading.load_module(scriptfile.path)"
],
[
"/data/stable-diffusion-webui/modules/script_loading.py, line 13, load_module",
"module_spec.loader.exec_module(module)"
],
[
"<frozen importlib._bootstrap_external>, line 883, exec_module",
""
],
[
"<frozen importlib._bootstrap>, line 241, _call_with_frames_removed",
""
],
[
"/data/stable-diffusion-webui/extensions/dddetailer/scripts/dddetailer.py, line 975, <module>",
"from mmdet.apis import inference_detector, init_detector"
],
[
"/data/stable-diffusion-webui/venv/lib/python3.10/site-packages/mmdet/__init__.py, line 17, <module>",
"and mmcv_version < digit_version(mmcv_maximum_version)), \\"
]
]
}
],
"CPU": {
"model": "x86_64",
"count logical": 48,
"count physical": 24
},
"RAM": {
"total": "126GB",
"used": "7GB",
"free": "4GB",
"active": "21GB",
"inactive": "98GB",
"buffers": "696MB",
"cached": "114GB",
"shared": "192MB"
},
"Extensions": [
{
"name": "adetailer",
"path": "/data/stable-diffusion-webui/extensions/adetailer",
"commit": "8ddf919e1e5c234d6398e83f9cee6565acf550f3",
"branch": "main",
"remote": "https://github.com/Bing-su/adetailer.git"
},
{
"name": "canvas-zoom",
"path": "/data/stable-diffusion-webui/extensions/canvas-zoom",
"commit": "b9c3cff892d448f8825186500aeca710f243752a",
"branch": "main",
"remote": "https://github.com/richrobber2/canvas-zoom.git"
},
{
"name": "controlnet",
"path": "/data/stable-diffusion-webui/extensions/controlnet",
"commit": "",
"branch": null,
"remote": null
},
{
"name": "dddetailer",
"path": "/data/stable-diffusion-webui/extensions/dddetailer",
"commit": "f82fe8980ebc5c93f8b5c2c7a36133dc3422c098",
"branch": "master",
"remote": "https://github.com/Bing-su/dddetailer"
},
{
"name": "multidiffusion-upscaler-for-automatic1111",
"path": "/data/stable-diffusion-webui/extensions/multidiffusion-upscaler-for-automatic1111",
"commit": "22798f6822bc9c8a905b51da8954ee313b973331",
"branch": "main",
"remote": "https://github.com/pkuliyi2015/multidiffusion-upscaler-for-automatic1111.git"
},
{
"name": "sd-webui-aspect-ratio-helper",
"path": "/data/stable-diffusion-webui/extensions/sd-webui-aspect-ratio-helper",
"commit": "99fcf9b0a4e3f8c8cac07b12d17b66f12297b828",
"branch": "main",
"remote": "https://github.com/thomasasfk/sd-webui-aspect-ratio-helper.git"
},
{
"name": "sd-webui-controlnet",
"path": "/data/stable-diffusion-webui/extensions/sd-webui-controlnet",
"commit": "56cec5b2958edf3b1807b7e7b2b1b5186dbd2f81",
"branch": "main",
"remote": "https://github.com/Mikubill/sd-webui-controlnet.git"
},
{
"name": "sd-webui-infinite-image-browsing",
"path": "/data/stable-diffusion-webui/extensions/sd-webui-infinite-image-browsing",
"commit": "7215a4cadfc14151a3ef8e036ecb0ba8e27d8a68",
"branch": "main",
"remote": "https://github.com/zanllp/sd-webui-infinite-image-browsing.git"
},
{
"name": "sd-webui-inpaint-anything",
"path": "/data/stable-diffusion-webui/extensions/sd-webui-inpaint-anything",
"commit": "91568a8c5f581c15fd5439dba5e25bdc49c563b1",
"branch": "main",
"remote": "https://github.com/Uminosachi/sd-webui-inpaint-anything.git"
},
{
"name": "ultimate-upscale-for-automatic1111",
"path": "/data/stable-diffusion-webui/extensions/ultimate-upscale-for-automatic1111",
"commit": "2322caa480535b1011a1f9c18126d85ea444f146",
"branch": "master",
"remote": "https://github.com/Coyote-A/ultimate-upscale-for-automatic1111.git"
}
],
"Inactive extensions": [],
"Environment": {
"COMMANDLINE_ARGS": "--medvram --xformers --no-half-vae",
"GIT": "git",
"GRADIO_ANALYTICS_ENABLED": "False"
},
"Config": {
"samples_save": false,
"samples_format": "png",
"samples_filename_pattern": "",
"save_images_add_number": true,
"grid_save": false,
"grid_format": "png",
"grid_extended_filename": true,
"grid_only_if_multiple": true,
"grid_prevent_empty_spots": false,
"grid_zip_filename_pattern": "",
"n_rows": -1,
"enable_pnginfo": true,
"save_txt": false,
"save_images_before_face_restoration": false,
"save_images_before_highres_fix": false,
"save_images_before_color_correction": false,
"save_mask": false,
"save_mask_composite": false,
"jpeg_quality": 80,
"webp_lossless": false,
"export_for_4chan": true,
"img_downscale_threshold": 4.0,
"target_side_length": 4000,
"img_max_size_mp": 200,
"use_original_name_batch": true,
"use_upscaler_name_as_suffix": true,
"save_selected_only": true,
"save_init_img": false,
"temp_dir": "",
"clean_temp_dir_at_start": false,
"outdir_samples": "",
"outdir_txt2img_samples": "outputs/txt2img-images",
"outdir_img2img_samples": "outputs/img2img-images",
"outdir_extras_samples": "outputs/extras-images",
"outdir_grids": "",
"outdir_txt2img_grids": "outputs/txt2img-grids",
"outdir_img2img_grids": "outputs/img2img-grids",
"outdir_save": "log/images",
"outdir_init_images": "outputs/init-images",
"save_to_dirs": true,
"grid_save_to_dirs": true,
"use_save_to_dirs_for_ui": false,
"directories_filename_pattern": "[date]",
"directories_max_prompt_words": 8,
"ESRGAN_tile": 192,
"ESRGAN_tile_overlap": 8,
"realesrgan_enabled_models": [
"R-ESRGAN 4x+",
"R-ESRGAN 4x+ Anime6B"
],
"upscaler_for_img2img": null,
"face_restoration_model": "CodeFormer",
"code_former_weight": 0.5,
"face_restoration_unload": false,
"show_warnings": true,
"memmon_poll_rate": 8,
"samples_log_stdout": false,
"multiple_tqdm": true,
"print_hypernet_extra": false,
"list_hidden_files": true,
"unload_models_when_training": false,
"pin_memory": false,
"save_optimizer_state": false,
"save_training_settings_to_txt": true,
"dataset_filename_word_regex": "",
"dataset_filename_join_string": " ",
"training_image_repeats_per_epoch": 1,
"training_write_csv_every": 500,
"training_xattention_optimizations": false,
"training_enable_tensorboard": false,
"training_tensorboard_save_images": false,
"training_tensorboard_flush_every": 120,
"sd_model_checkpoint": "v1-5-pruned-emaonly.safetensors [6ce0161689]",
"sd_checkpoint_cache": 0,
"sd_vae_checkpoint_cache": 0,
"sd_vae": "vaeFtMse840000EmaPruned_vaeFtMse840k.safetensors",
"sd_vae_as_default": true,
"sd_unet": "Automatic",
"inpainting_mask_weight": 1.0,
"initial_noise_multiplier": 1.0,
"img2img_color_correction": false,
"img2img_fix_steps": false,
"img2img_background_color": "#ffffff",
"enable_quantization": false,
"enable_emphasis": true,
"enable_batch_seeds": true,
"comma_padding_backtrack": 20,
"CLIP_stop_at_last_layers": 1,
"upcast_attn": false,
"randn_source": "GPU",
"cross_attention_optimization": "Automatic",
"s_min_uncond": 0.0,
"token_merging_ratio": 0.0,
"token_merging_ratio_img2img": 0.0,
"token_merging_ratio_hr": 0.0,
"pad_cond_uncond": false,
"experimental_persistent_cond_cache": false,
"use_old_emphasis_implementation": false,
"use_old_karras_scheduler_sigmas": false,
"no_dpmpp_sde_batch_determinism": false,
"use_old_hires_fix_width_height": false,
"dont_fix_second_order_samplers_schedule": false,
"hires_fix_use_firstpass_conds": false,
"interrogate_keep_models_in_memory": false,
"interrogate_return_ranks": false,
"interrogate_clip_num_beams": 1,
"interrogate_clip_min_length": 24,
"interrogate_clip_max_length": 48,
"interrogate_clip_dict_limit": 1500,
"interrogate_clip_skip_categories": [],
"interrogate_deepbooru_score_threshold": 0.5,
"deepbooru_sort_alpha": true,
"deepbooru_use_spaces": true,
"deepbooru_escape": true,
"deepbooru_filter_tags": "",
"extra_networks_show_hidden_directories": true,
"extra_networks_hidden_models": "Always",
"extra_networks_default_view": "cards",
"extra_networks_default_multiplier": 1.0,
"extra_networks_card_width": 300.0,
"extra_networks_card_height": 200.0,
"extra_networks_add_text_separator": " ",
"ui_extra_networks_tab_reorder": "",
"sd_hypernetwork": "None",
"localization": "None",
"gradio_theme": "Default",
"img2img_editor_height": 720,
"return_grid": true,
"return_mask": false,
"return_mask_composite": false,
"do_not_show_images": false,
"send_seed": true,
"send_size": true,
"font": "",
"js_modal_lightbox": true,
"js_modal_lightbox_initially_zoomed": true,
"js_modal_lightbox_gamepad": false,
"js_modal_lightbox_gamepad_repeat": 250,
"show_progress_in_title": true,
"samplers_in_dropdown": true,
"dimensions_and_batch_together": true,
"keyedit_precision_attention": 0.1,
"keyedit_precision_extra": 0.05,
"keyedit_delimiters": ".,\\/!?%^*;:{}=`~()",
"quicksettings_list": [
"sd_model_checkpoint",
"sd_vae",
"CLIP_stop_at_last_layers",
"face_restoration",
"save_images_before_face_restoration",
"face_restoration_model",
"img2img_color_correction",
"initial_noise_multiplier"
],
"ui_tab_order": [],
"hidden_tabs": [],
"ui_reorder_list": [],
"hires_fix_show_sampler": false,
"hires_fix_show_prompts": false,
"disable_token_counters": false,
"add_model_hash_to_info": true,
"add_model_name_to_info": true,
"add_version_to_infotext": true,
"disable_weights_auto_swap": true,
"infotext_styles": "Apply if any",
"show_progressbar": true,
"live_previews_enable": true,
"live_previews_image_format": "png",
"show_progress_grid": true,
"show_progress_every_n_steps": 10,
"show_progress_type": "Approx NN",
"live_preview_content": "Prompt",
"live_preview_refresh_period": 1000,
"hide_samplers": [],
"eta_ddim": 0.0,
"eta_ancestral": 1.0,
"ddim_discretize": "uniform",
"s_churn": 0.0,
"s_tmin": 0.0,
"s_noise": 1.0,
"k_sched_type": "Automatic",
"sigma_min": 0.0,
"sigma_max": 0.0,
"rho": 0.0,
"eta_noise_seed_delta": 0,
"always_discard_next_to_last_sigma": false,
"uni_pc_variant": "bh1",
"uni_pc_skip_type": "time_uniform",
"uni_pc_order": 3,
"uni_pc_lower_order_final": true,
"postprocessing_enable_in_main_ui": [],
"postprocessing_operation_order": [],
"upscaling_max_images_in_cache": 5,
"disabled_extensions": [],
"disable_all_extensions": "none",
"restore_config_state_file": "",
"sd_checkpoint_hash": "6ce0161689b3853acaa03779ec93eafe75a02f4ced659bee03f50797806fa2fa",
"ldsr_steps": 100,
"ldsr_cached": false,
"SCUNET_tile": 256,
"SCUNET_tile_overlap": 8,
"SWIN_tile": 192,
"SWIN_tile_overlap": 8,
"lora_functional": false,
"sd_lora": "None",
"lora_preferred_name": "Alias from file",
"lora_add_hashes_to_infotext": true,
"extra_options": [],
"extra_options_accordion": false,
"canvas_hotkey_zoom": "Alt",
"canvas_hotkey_adjust": "Ctrl",
"canvas_hotkey_move": "F",
"canvas_hotkey_fullscreen": "S",
"canvas_hotkey_reset": "R",
"canvas_hotkey_overlap": "O",
"canvas_show_tooltip": true,
"canvas_disabled_functions": [
"Overlap"
],
"sd_vae_overrides_per_model_preferences": false,
"SWIN_torch_compile": false,
"hypertile_enable_unet": false,
"hypertile_enable_unet_secondpass": false,
"hypertile_max_depth_unet": 3,
"hypertile_max_tile_unet": 256,
"hypertile_swap_size_unet": 3,
"hypertile_enable_vae": false,
"hypertile_max_depth_vae": 3,
"hypertile_max_tile_vae": 128,
"hypertile_swap_size_vae": 3,
"save_images_replace_action": "Replace",
"grid_text_active_color": "#000000",
"grid_text_inactive_color": "#999999",
"grid_background_color": "#ffffff",
"save_incomplete_images": false,
"notification_audio": true,
"notification_volume": 100,
"auto_backcompat": true,
"use_old_scheduling": false,
"use_downcasted_alpha_bar": false,
"refiner_switch_by_sample_steps": false,
"extra_networks_dir_button_function": false,
"extra_networks_card_text_scale": 1,
"extra_networks_card_show_desc": true,
"extra_networks_card_description_is_html": false,
"extra_networks_card_order_field": "Path",
"extra_networks_card_order": "Ascending",
"extra_networks_tree_view_style": "Dirs",
"extra_networks_tree_view_default_enabled": true,
"extra_networks_tree_view_default_width": 180.0,
"textual_inversion_print_at_load": false,
"textual_inversion_add_hashes_to_infotext": true,
"lora_show_all": true,
"lora_hide_unknown_for_versions": [],
"lora_in_memory_limit": 0,
"lora_not_found_warning_console": true,
"lora_not_found_gradio_warning": true,
"pad_cond_uncond_v0": false,
"persistent_cond_cache": true,
"batch_cond_uncond": true,
"fp8_storage": "Disable",
"cache_fp16_weight": false,
"s_tmax": 0,
"sgm_noise_multiplier": false,
"sd_noise_schedule": "Default",
"sd_checkpoints_limit": 1,
"sd_checkpoints_keep_in_cpu": true,
"emphasis": "Original",
"tiling": false,
"hires_fix_refiner_pass": "second pass",
"enable_prompt_comments": true,
"sdxl_crop_top": 0.0,
"sdxl_crop_left": 0.0,
"sdxl_refiner_low_aesthetic_score": 2.5,
"sdxl_refiner_high_aesthetic_score": 6.0,
"auto_vae_precision_bfloat16": true,
"auto_vae_precision": false,
"sd_vae_encode_method": "Full",
"sd_vae_decode_method": "Full",
"img2img_extra_noise": 0,
"img2img_sketch_default_brush_color": "#ffffff",
"img2img_inpaint_mask_brush_color": "#ffffff",
"img2img_inpaint_sketch_default_brush_color": "#ffffff",
"img2img_batch_show_results_limit": 32,
"overlay_inpaint": true,
"sd_webui_modal_lightbox_icon_opacity": 1,
"sd_webui_modal_lightbox_toolbar_opacity": 0.9,
"gallery_height": "",
"open_dir_button_choice": "Subdirectory",
"add_vae_name_to_info": true,
"add_vae_hash_to_info": true,
"add_user_name_to_info": false,
"infotext_skip_pasting": [],
"live_preview_allow_lowvram_full": false,
"live_preview_fast_interrupt": false,
"js_live_preview_in_modal_lightbox": false,
"keyedit_delimiters_whitespace": [
"Tab",
"Carriage Return",
"Line Feed"
],
"keyedit_move": true,
"include_styles_into_token_counters": true,
"extra_options_txt2img": [],
"extra_options_img2img": [],
"extra_options_cols": 1,
"compact_prompt_box": false,
"sd_checkpoint_dropdown_use_short": false,
"txt2img_settings_accordion": false,
"img2img_settings_accordion": false,
"interrupt_after_current": true,
"gradio_themes_cache": true,
"enable_reloading_ui_scripts": false,
"api_enable_requests": true,
"api_forbid_local_requests": true,
"api_useragent": "",
"prioritized_callbacks_app_started": [],
"prioritized_callbacks_model_loaded": [],
"prioritized_callbacks_ui_settings": [],
"prioritized_callbacks_infotext_pasted": [],
"prioritized_callbacks_script_unloaded": [],
"prioritized_callbacks_before_ui": [],
"prioritized_callbacks_list_optimizers": [],
"prioritized_callbacks_before_token_counter": [],
"prioritized_callbacks_script_before_process": [],
"prioritized_callbacks_script_process": [],
"prioritized_callbacks_script_post_sample": [],
"prioritized_callbacks_script_on_mask_blend": [],
"prioritized_callbacks_script_postprocess_maskoverlay": [],
"auto_launch_browser": "Local",
"enable_console_prompts": false,
"show_gradio_deprecation_warnings": true,
"enable_upscale_progressbar": true,
"disable_mmap_load_safetensors": false,
"hide_ldm_prints": true,
"dump_stacks_on_signal": false,
"face_restoration": false,
"postprocessing_disable_in_extras": [],
"postprocessing_existing_caption_action": "Ignore",
"dat_enabled_models": [
"DAT x2",
"DAT x3",
"DAT x4"
],
"DAT_tile": 192,
"DAT_tile_overlap": 8,
"set_scale_by_when_changing_upscaler": false,
"canvas_hotkey_shrink_brush": "Q",
"canvas_hotkey_grow_brush": "W",
"canvas_auto_expand": true,
"canvas_blur_prompt": false,
"prioritized_callbacks_ui_tabs": [],
"arh_javascript_aspect_ratio_show": true,
"arh_javascript_aspect_ratio": "1:1, 3:2, 4:3, 5:4, 16:9",
"arh_ui_javascript_selection_method": "Aspect Ratios Dropdown",
"arh_hide_accordion_by_default": true,
"arh_expand_by_default": false,
"arh_ui_component_order_key": "MaxDimensionScaler, MinDimensionScaler, PredefinedAspectRatioButtons, PredefinedPercentageButtons",
"arh_show_max_width_or_height": false,
"arh_max_width_or_height": 1024.0,
"arh_show_min_width_or_height": false,
"arh_min_width_or_height": 1024.0,
"arh_show_predefined_aspect_ratios": false,
"arh_predefined_aspect_ratio_use_max_dim": false,
"arh_predefined_aspect_ratios": "1:1, 4:3, 16:9, 9:16, 21:9",
"arh_show_predefined_percentages": false,
"arh_predefined_percentages": "25, 50, 75, 125, 150, 175, 200",
"arh_predefined_percentages_display_key": "Incremental/decremental percentage (-50%, +50%)",
"dd_save_previews": false,
"outdir_ddetailer_previews": "extensions/dddetailer/outputs/masks-previews",
"dd_save_masks": false,
"outdir_ddetailer_masks": "extensions/dddetailer/outputs/masks",
"control_net_detectedmap_dir": "detected_maps",
"control_net_models_path": "",
"control_net_modules_path": "",
"control_net_unit_count": 3,
"control_net_model_cache_size": 2,
"control_net_inpaint_blur_sigma": 7,
"control_net_no_detectmap": false,
"control_net_detectmap_autosaving": false,
"control_net_allow_script_control": false,
"control_net_sync_field_args": true,
"controlnet_show_batch_images_in_ui": false,
"controlnet_increment_seed_during_batch": false,
"controlnet_disable_openpose_edit": false,
"controlnet_disable_photopea_edit": false,
"controlnet_photopea_warning": true,
"controlnet_ignore_noninpaint_mask": false,
"controlnet_clip_detector_on_cpu": false,
"controlnet_control_type_dropdown": false,
"ad_max_models": 4,
"ad_extra_models_dir": "",
"ad_save_previews": false,
"ad_save_images_before": false,
"ad_only_selected_scripts": true,
"ad_script_names": "dynamic_prompting,dynamic_thresholding,lora_block_weight,negpip,wildcard_recursive,wildcards",
"ad_bbox_sortby": "None",
"ad_same_seed_for_each_tab": false,
"prioritized_callbacks_after_component": [],
"prioritized_callbacks_on_reload": [],
"prioritized_callbacks_script_before_process_batch": [],
"prioritized_callbacks_script_postprocess": [],
"prioritized_callbacks_script_postprocess_batch": [],
"prioritized_callbacks_script_after_component": [],
"prioritized_callbacks_script_postprocess_image": [],
"canvas_zoom_undo_extra_key": "Ctrl",
"canvas_zoom_hotkey_undo": "Z",
"canvas_zoom_inc_brush_size": "]",
"canvas_zoom_dec_brush_size": "[",
"canvas_zoom_hotkey_open_colorpanel": "Q",
"canvas_zoom_hotkey_pin_colorpanel": "T",
"canvas_zoom_hotkey_dropper": "A",
"canvas_zoom_hotkey_fill": "X",
"canvas_zoom_hotkey_transparency": "C",
"canvas_zoom_hide_btn": true,
"canvas_zoom_mask_clear": true,
"canvas_zoom_enable_integration": true,
"canvas_zoom_brush_size": 200,
"canvas_zoom_brush_size_change": 5,
"canvas_zoom_transparency_level": 70,
"canvas_zoom_brush_opacity": false,
"canvas_zoom_inpaint_label": true,
"canvas_zoom_inpaint_warning": true,
"canvas_zoom_inpaint_change_btn_color": false,
"canvas_zoom_inpaint_btn_color": "#C33227",
"canvas_zoom_brush_outline": false,
"canvas_zoom_add_buttons": false,
"canvas_zoom_draw_staight_lines": false,
"canvas_zoom_inpaint_brushcolor": "#000000",
"canvas_zoom_disabled_functions": [
"Overlap"
],
"ad_save_images_dir": "",
"ad_dynamic_denoise_power": 0,
"ad_match_inpaint_bbox_size": "Off",
"inpaint_anything_save_folder": "inpaint-anything",
"inpaint_anything_sam_oncpu": false,
"inpaint_anything_offline_inpainting": false,
"inpaint_anything_padding_fill": 127,
"inpain_anything_sam_models_dir": ""
},
"Startup": {
"total": 71.07646226882935,
"records": {
"initial startup": 0.006743669509887695,
"prepare environment/checks": 0.00014019012451171875,
"prepare environment/git version info": 0.016297340393066406,
"prepare environment/torch GPU test": 2.610452651977539,
"prepare environment/clone repositores": 0.05684089660644531,
"prepare environment/install requirements": 4.551353693008423,
"prepare environment/run extensions installers/controlnet": 0.001188516616821289,
"prepare environment/run extensions installers/sd-webui-controlnet": 0.14631366729736328,
"prepare environment/run extensions installers/sd-webui-inpaint-anything": 5.030036687850952,
"prepare environment/run extensions installers/canvas-zoom": 2.4369723796844482,
"prepare environment/run extensions installers/dddetailer": 20.710227251052856,
"prepare environment/run extensions installers/sd-webui-aspect-ratio-helper": 0.0001430511474609375,
"prepare environment/run extensions installers/ultimate-upscale-for-automatic1111": 2.765655517578125e-05,
"prepare environment/run extensions installers/multidiffusion-upscaler-for-automatic1111": 2.3126602172851562e-05,
"prepare environment/run extensions installers/sd-webui-infinite-image-browsing": 0.2711906433105469,
"prepare environment/run extensions installers/adetailer": 5.347283363342285,
"prepare environment/run extensions installers": 33.94344687461853,
"prepare environment": 41.17867851257324,
"launcher": 0.0021791458129882812,
"import torch": 4.757909536361694,
"import gradio": 1.0641181468963623,
"setup paths": 1.707580804824829,
"import ldm": 0.0037512779235839844,
"import sgm": 5.4836273193359375e-06,
"initialize shared": 0.2651069164276123,
"other imports": 0.6259610652923584,
"opts onchange": 0.0006053447723388672,
"setup SD model": 9.799003601074219e-05,
"setup codeformer": 0.0008230209350585938,
"setup gfpgan": 0.007240772247314453,
"set samplers": 4.410743713378906e-05,
"list extensions": 0.002903461456298828,
"restore config state file": 1.52587890625e-05,
"list SD models": 0.026834487915039062,
"list localizations": 0.0003001689910888672,
"load scripts/custom_code.py": 0.004204988479614258,
"load scripts/detect_extension.py": 0.0006046295166015625,
"load scripts/img2imgalt.py": 0.0004169940948486328,
"load scripts/loopback.py": 0.00022220611572265625,
"load scripts/outpainting_mk_2.py": 0.0002722740173339844,
"load scripts/poor_mans_outpainting.py": 0.0002155303955078125,
"load scripts/postprocessing_codeformer.py": 0.0001819133758544922,
"load scripts/postprocessing_gfpgan.py": 0.00016450881958007812,
"load scripts/postprocessing_upscale.py": 0.0002722740173339844,
"load scripts/prompt_matrix.py": 0.000225067138671875,
"load scripts/prompts_from_file.py": 0.00023698806762695312,
"load scripts/sd_upscale.py": 0.000202178955078125,
"load scripts/xyz_grid.py": 0.002648591995239258,
"load scripts/ldsr_model.py": 0.420351505279541,
"load scripts/lora_script.py": 0.1832737922668457,
"load scripts/scunet_model.py": 0.027283668518066406,
"load scripts/swinir_model.py": 0.02671217918395996,
"load scripts/hotkey_config.py": 0.00019049644470214844,
"load scripts/extra_options_section.py": 0.00028014183044433594,
"load scripts/hypertile_script.py": 0.053967952728271484,
"load scripts/postprocessing_autosized_crop.py": 0.00022292137145996094,
"load scripts/postprocessing_caption.py": 0.0001900196075439453,
"load scripts/postprocessing_create_flipped_copies.py": 0.0001735687255859375,
"load scripts/postprocessing_focal_crop.py": 0.0009653568267822266,
"load scripts/postprocessing_split_oversized.py": 0.0001895427703857422,
"load scripts/soft_inpainting.py": 0.0004904270172119141,
"load scripts/!adetailer.py": 0.5497946739196777,
"load scripts/config.py": 0.00032830238342285156,
"load scripts/dddetailer.py": 16.50908327102661,
"load scripts/tilediffusion.py": 0.0053141117095947266,
"load scripts/tileglobal.py": 0.0011188983917236328,
"load scripts/tilevae.py": 0.0008006095886230469,
"load scripts/sd_webui_aspect_ratio_helper.py": 0.09862327575683594,
"load scripts/adapter.py": 0.00047659873962402344,
"load scripts/api.py": 0.3110339641571045,
"load scripts/batch_hijack.py": 0.0005223751068115234,
"load scripts/cldm.py": 0.001079559326171875,
"load scripts/controlnet.py": 0.5039541721343994,
"load scripts/controlnet_diffusers.py": 0.00034165382385253906,
"load scripts/controlnet_lllite.py": 0.00029468536376953125,
"load scripts/controlnet_lora.py": 0.0002913475036621094,
"load scripts/controlnet_model_guess.py": 0.0005085468292236328,
"load scripts/controlnet_sparsectrl.py": 0.00032448768615722656,
"load scripts/controlnet_version.py": 0.0001399517059326172,
"load scripts/enums.py": 0.002435445785522461,
"load scripts/external_code.py": 0.0001876354217529297,
"load scripts/global_state.py": 0.00030803680419921875,
"load scripts/hook.py": 0.0006806850433349609,
"load scripts/infotext.py": 0.00020742416381835938,
"load scripts/logging.py": 0.0004165172576904297,
"load scripts/lvminthin.py": 0.0006203651428222656,
"load scripts/movie2movie.py": 0.0002644062042236328,
"load scripts/supported_preprocessor.py": 0.0021371841430664062,
"load scripts/utils.py": 0.00033664703369140625,
"load scripts/xyz_grid_support.py": 0.0003867149353027344,
"load scripts/iib_setup.py": 0.10164618492126465,
"load scripts/inpaint_anything.py": 0.5150072574615479,
"load scripts/ultimate-upscale.py": 0.0007646083831787109,
"load scripts/comments.py": 0.0320131778717041,
"load scripts/refiner.py": 0.0002295970916748047,
"load scripts/sampler.py": 0.00019979476928710938,
"load scripts/seed.py": 0.000244140625,
"load scripts": 19.36632752418518,
"load upscalers": 0.0024099349975585938,
"refresh VAE": 0.001795053482055664,
"refresh textual inversion templates": 7.104873657226562e-05,
"scripts list_optimizers": 0.000644683837890625,
"scripts list_unets": 5.125999450683594e-05,
"reload hypernetworks": 0.0009043216705322266,
"initialize extra networks": 0.010030269622802734,
"scripts before_ui_callback": 0.00232696533203125,
"create ui": 1.5808982849121094,
"gradio launch": 0.33333706855773926,
"add APIs": 0.023990869522094727,
"app_started_callback/lora_script.py": 0.001112222671508789,
"app_started_callback/!adetailer.py": 0.0014429092407226562,
"app_started_callback/api.py": 0.014088869094848633,
"app_started_callback/iib_setup.py": 0.08627462387084961,
"app_started_callback": 0.10292911529541016
}
},
"Packages": [
"absl-py==2.1.0",
"accelerate==0.21.0",
"addict==2.4.0",
"aenum==3.1.15",
"aiofiles==23.2.1",
"aiohappyeyeballs==2.4.3",
"aiohttp==3.10.10",
"aiosignal==1.3.1",
"albumentations==1.4.3",
"aliyun-python-sdk-core==2.16.0",
"aliyun-python-sdk-kms==2.16.5",
"altair==5.4.1",
"antlr4-python3-runtime==4.9.3",
"anyio==3.7.1",
"async-timeout==4.0.3",
"attrs==24.2.0",
"av==13.1.0",
"basicsr==1.4.2",
"blendmodes==2022",
"certifi==2024.8.30",
"cffi==1.17.1",
"chardet==5.2.0",
"charset-normalizer==3.4.0",
"clean-fid==0.1.35",
"click==8.1.7",
"clip @ https://github.com/openai/CLIP/archive/d50d76daa670286dd6cacf3bcd80b5e4823fc8e1.zip#sha256=b5842c25da441d6c581b53a5c60e0c2127ebafe0f746f8e15561a006c6c3be6a",
"colorama==0.4.6",
"coloredlogs==15.0.1",
"colorlog==6.9.0",
"contourpy==1.3.0",
"controlnet-aux==0.0.9",
"crcmod==1.7",
"cryptography==43.0.3",
"cssselect2==0.7.0",
"cycler==0.12.1",
"Cython==3.0.11",
"deprecation==2.1.0",
"depth_anything @ https://github.com/huchenlei/Depth-Anything/releases/download/v1.0.0/depth_anything-2024.1.22.0-py2.py3-none-any.whl#sha256=26c1d38b8c3c306b4a2197d725a4b989ff65f7ebcf4fb5a96a1b6db7fbd56780",
"depth_anything_v2 @ https://github.com/MackinationsAi/UDAV2-ControlNet/releases/download/v1.0.0/depth_anything_v2-2024.7.1.0-py2.py3-none-any.whl#sha256=6848128867d1f7c7519d88df0f88bfab89100dc5225259c4d7cb90325c308c9f",
"diffusers==0.31.0",
"diskcache==5.6.3",
"dsine @ https://github.com/sdbds/DSINE/releases/download/1.0.2/dsine-2024.3.23-py3-none-any.whl#sha256=b9ea3bacce09f9b3f7fb4fa12471da7e465b2f9a60412711105a9238db280442",
"easydict==1.13",
"einops==0.4.1",
"embreex==2.17.7.post5",
"exceptiongroup==1.2.2",
"facexlib==0.3.0",
"fastapi==0.94.0",
"ffmpy==0.4.0",
"filelock==3.14.0",
"filterpy==1.4.5",
"flatbuffers==24.3.25",
"fonttools==4.54.1",
"frozenlist==1.5.0",
"fsspec==2024.10.0",
"ftfy==6.3.1",
"future==1.0.0",
"fvcore==0.1.5.post20221221",
"geffnet==1.0.2",
"gitdb==4.0.11",
"GitPython==3.1.32",
"glob2==0.5",
"gradio==3.41.2",
"gradio_client==0.5.0",
"grpcio==1.67.1",
"h11==0.12.0",
"handrefinerportable @ https://github.com/huchenlei/HandRefinerPortable/releases/download/v1.0.1/handrefinerportable-2024.2.12.0-py2.py3-none-any.whl#sha256=1e6c702905919f4c49bcb2db7b20d334e8458a7555cd57630600584ec38ca6a9",
"httpcore==0.15.0",
"httpx==0.24.1",
"huggingface-hub==0.26.2",
"humanfriendly==10.0",
"hydra-core==1.3.2",
"idna==3.10",
"imageio==2.36.0",
"importlib_metadata==8.5.0",
"importlib_resources==6.4.5",
"inflection==0.5.1",
"insightface==0.7.3",
"iopath==0.1.9",
"jax==0.4.35",
"jaxlib==0.4.35",
"Jinja2==3.1.4",
"jmespath==0.10.0",
"joblib==1.4.2",
"jsonmerge==1.8.0",
"jsonschema==4.23.0",
"jsonschema-specifications==2024.10.1",
"kiwisolver==1.4.7",
"kornia==0.6.7",
"lark==1.1.2",
"lazy_loader==0.4",
"lightning-utilities==0.11.8",
"llvmlite==0.43.0",
"lmdb==1.5.1",
"loguru==0.7.2",
"lxml==5.3.0",
"manifold3d==2.5.1",
"mapbox_earcut==1.0.2",
"Markdown==3.7",
"markdown-it-py==3.0.0",
"MarkupSafe==2.1.5",
"matplotlib==3.9.2",
"mdurl==0.1.2",
"mediapipe==0.10.15",
"ml_dtypes==0.5.0",
"mmcv==2.2.0",
"mmdet==3.3.0",
"mmengine==0.10.5",
"model-index==0.1.11",
"mpmath==1.3.0",
"multidict==6.1.0",
"narwhals==1.13.3",
"networkx==3.4.2",
"numba==0.60.0",
"numpy==1.26.2",
"omegaconf==2.2.3",
"onnx==1.17.0",
"onnxruntime-gpu==1.20.0",
"open-clip-torch==2.20.0",
"opencv-contrib-python==4.10.0.84",
"opencv-python==4.10.0.84",
"opencv-python-headless==4.10.0.84",
"opendatalab==0.0.10",
"openmim==0.3.9",
"openxlab==0.1.2",
"opt_einsum==3.4.0",
"ordered-set==4.1.0",
"orjson==3.10.11",
"oss2==2.17.0",
"packaging==24.2",
"pandas==2.2.3",
"piexif==1.1.3",
"Pillow==9.5.0",
"pillow-avif-plugin==1.4.3",
"pip==24.3.1",
"platformdirs==4.3.6",
"portalocker==2.10.1",
"prettytable==3.12.0",
"propcache==0.2.0",
"protobuf==4.25.5",
"psutil==5.9.5",
"py-cpuinfo==9.0.0",
"pycocotools==2.0.8",
"pycollada==0.8",
"pycparser==2.22",
"pycryptodome==3.21.0",
"pydantic==1.10.17",
"pydub==0.25.1",
"Pygments==2.18.0",
"pyparsing==3.2.0",
"python-dateutil==2.9.0.post0",
"python-dotenv==1.0.1",
"python-multipart==0.0.17",
"pytorch-lightning==1.9.4",
"pytz==2023.4",
"PyWavelets==1.7.0",
"PyYAML==6.0.2",
"referencing==0.35.1",
"regex==2024.11.6",
"reportlab==4.2.5",
"requests==2.28.2",
"resize-right==0.0.2",
"rich==13.4.2",
"rpds-py==0.21.0",
"Rtree==1.3.0",
"safetensors==0.4.2",
"scikit-image==0.21.0",
"scikit-learn==1.5.2",
"scipy==1.14.1",
"seaborn==0.13.2",
"segment-anything==1.0",
"semantic-version==2.10.0",
"sentencepiece==0.2.0",
"setuptools==60.2.0",
"shapely==2.0.6",
"six==1.16.0",
"smmap==5.0.1",
"sniffio==1.3.1",
"sounddevice==0.5.1",
"spandrel==0.3.4",
"spandrel_extra_arches==0.1.1",
"starlette==0.26.1",
"svg.path==6.3",
"svglib==1.5.1",
"sympy==1.13.3",
"tabulate==0.9.0",
"tb-nightly==2.19.0a20241110",
"tensorboard-data-server==0.7.2",
"termcolor==2.5.0",
"terminaltables==3.1.10",
"threadpoolctl==3.5.0",
"tifffile==2024.9.20",
"timm==0.6.7",
"tinycss2==1.4.0",
"tokenizers==0.13.3",
"tomesd==0.1.3",
"tomli==2.0.2",
"torch==2.1.2+cu121",
"torchdiffeq==0.2.3",
"torchmetrics==1.5.2",
"torchsde==0.2.6",
"torchvision==0.16.2+cu121",
"tqdm==4.65.2",
"trampoline==0.1.2",
"transformers==4.30.2",
"trimesh==4.5.2",
"triton==2.1.0",
"typing_extensions==4.12.2",
"tzdata==2024.2",
"ultralytics==8.3.28",
"ultralytics-thop==2.0.11",
"urllib3==1.26.20",
"uvicorn==0.32.0",
"vhacdx==0.0.8.post1",
"wcwidth==0.2.13",
"webencodings==0.5.1",
"websockets==11.0.3",
"Werkzeug==3.1.3",
"xatlas==0.0.9",
"xformers==0.0.23.post1",
"xxhash==3.5.0",
"yacs==0.1.8",
"yapf==0.40.2",
"yarl==1.17.1",
"zipp==3.21.0"
]
}
### Console logs
```Shell
Installing dependencies
.
.
Successfully uninstalled mmcv
Installing mmcv
AssertionError: 2.0.0rc4 is required
```
### Additional information
none | open | 2024-11-11T05:54:36Z | 2024-11-12T08:53:19Z | https://github.com/AUTOMATIC1111/stable-diffusion-webui/issues/16642 | [
"bug-report"
] | venzen | 1 |
keras-team/keras | data-science | 20,388 | Inconsistent loss/metrics with jax backend | Training an LSTM-based model with `mean_squared_error` loss, I got the following training results, for which the math doesn't add up: the logged loss (MSE) and metric (RMSE) values are inconsistent (the reported RMSE is not the square root of the reported MSE).
Would anyone have an insight as to what could be happening here? Thank you in advance.
<img width="1859" alt="Screenshot 2024-10-21 at 23 33 39" src="https://github.com/user-attachments/assets/f60b95bc-5e07-45c4-8cee-5e33bbcc7e0c">
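As a quick sanity check of what "consistent" would mean here (with made-up numbers, not the values from the screenshot), the logged RMSE metric should be the square root of the logged MSE loss:

```python
import math

# Hypothetical logged values for one epoch (illustration only).
mse_loss = 0.0625
rmse_metric = 0.25

# For consistent logs, metric == sqrt(loss) must hold.
assert math.isclose(rmse_metric, math.sqrt(mse_loss))
print("consistent:", rmse_metric, math.sqrt(mse_loss))
```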
| closed | 2024-10-21T21:37:52Z | 2024-11-12T12:39:49Z | https://github.com/keras-team/keras/issues/20388 | [
"stat:awaiting response from contributor",
"stale",
"type:Bug"
] | dkgaraujo | 9 |
Lightning-AI/LitServe | fastapi | 443 | cannot pickle '_io.BufferedRandom' object | litserve 0.2.4
## 🐛 Bug
<!-- A clear and concise description of what the bug is. -->
### To Reproduce
I meet the " cannot pickle '_io.BufferedRandom' object" when I request a simple file
<!-- If you have a code sample, error messages, stack traces, please provide it here as well -->
The server code is
```
import litserve as ls
class SimpleLitAPI(ls.LitAPI):
# Called once at startup. Setup models, DB connections, etc...
def setup(self, device):
self.model = lambda x: x**2
self.deviceid = f"我的设备 { device }"
# Convert the request payload to model input.
def decode_request(self, request):
file_bytes = request['file'].file.read()
print('接收文件的bytes:',len(file_bytes))
print(self.deviceid)
return "1"
# Run inference on the the model, return the output.
def predict(self, x):
return ""
# Convert the model output to a response payload.
def encode_response(self, output):
return {"output": output}
if __name__ =="__main__":
# LitServe scales your API!
server = ls.LitServer(SimpleLitAPI(),timeout= False , accelerator="gpu" , devices=[0,0,1,1], track_requests=True)
server.run(port=8999)
```
The request code is
```
import requests

url = "http://127.0.0.1:8999/predict"
# send the file as a byte stream
file_path = "aaaa.pdf"
# read the file content as bytes
with open(file_path, 'rb') as file:
    file_bytes = file.read()
# send the bytes to the API
response = requests.post(url, files={'file': ('filename.pdf', file_bytes, 'application/pdf')})
out = response.json()
```
The bug is
```
ERROR: Exception in ASGI application
Traceback (most recent call last):
File "/home/kemove/miniconda3/envs/minerU120/lib/python3.10/site-packages/uvicorn/protocols/http/httptools_impl.py", line 409, in run_asgi
result = await app( # type: ignore[func-returns-value]
File "/home/kemove/miniconda3/envs/minerU120/lib/python3.10/site-packages/uvicorn/middleware/proxy_headers.py", line 60, in __call__
return await self.app(scope, receive, send)
File "/home/kemove/miniconda3/envs/minerU120/lib/python3.10/site-packages/fastapi/applications.py", line 1054, in __call__
await super().__call__(scope, receive, send)
File "/home/kemove/miniconda3/envs/minerU120/lib/python3.10/site-packages/starlette/applications.py", line 112, in __call__
await self.middleware_stack(scope, receive, send)
File "/home/kemove/miniconda3/envs/minerU120/lib/python3.10/site-packages/starlette/middleware/errors.py", line 187, in __call__
raise exc
File "/home/kemove/miniconda3/envs/minerU120/lib/python3.10/site-packages/starlette/middleware/errors.py", line 165, in __call__
await self.app(scope, receive, _send)
File "/home/kemove/miniconda3/envs/minerU120/lib/python3.10/site-packages/litserve/middlewares.py", line 69, in __call__
await self.app(scope, receive, send)
File "/home/kemove/miniconda3/envs/minerU120/lib/python3.10/site-packages/litserve/middlewares.py", line 69, in __call__
await self.app(scope, receive, send)
File "/home/kemove/miniconda3/envs/minerU120/lib/python3.10/site-packages/starlette/middleware/gzip.py", line 29, in __call__
await responder(scope, receive, send)
File "/home/kemove/miniconda3/envs/minerU120/lib/python3.10/site-packages/starlette/middleware/gzip.py", line 126, in __call__
await super().__call__(scope, receive, send)
File "/home/kemove/miniconda3/envs/minerU120/lib/python3.10/site-packages/starlette/middleware/gzip.py", line 46, in __call__
await self.app(scope, receive, self.send_with_compression)
File "/home/kemove/miniconda3/envs/minerU120/lib/python3.10/site-packages/starlette/middleware/exceptions.py", line 62, in __call__
await wrap_app_handling_exceptions(self.app, conn)(scope, receive, send)
File "/home/kemove/miniconda3/envs/minerU120/lib/python3.10/site-packages/starlette/_exception_handler.py", line 53, in wrapped_app
raise exc
File "/home/kemove/miniconda3/envs/minerU120/lib/python3.10/site-packages/starlette/_exception_handler.py", line 42, in wrapped_app
await app(scope, receive, sender)
File "/home/kemove/miniconda3/envs/minerU120/lib/python3.10/site-packages/starlette/routing.py", line 714, in __call__
await self.middleware_stack(scope, receive, send)
File "/home/kemove/miniconda3/envs/minerU120/lib/python3.10/site-packages/starlette/routing.py", line 734, in app
await route.handle(scope, receive, send)
File "/home/kemove/miniconda3/envs/minerU120/lib/python3.10/site-packages/starlette/routing.py", line 288, in handle
await self.app(scope, receive, send)
File "/home/kemove/miniconda3/envs/minerU120/lib/python3.10/site-packages/starlette/routing.py", line 76, in app
await wrap_app_handling_exceptions(app, request)(scope, receive, send)
File "/home/kemove/miniconda3/envs/minerU120/lib/python3.10/site-packages/starlette/_exception_handler.py", line 53, in wrapped_app
raise exc
File "/home/kemove/miniconda3/envs/minerU120/lib/python3.10/site-packages/starlette/_exception_handler.py", line 42, in wrapped_app
await app(scope, receive, sender)
File "/home/kemove/miniconda3/envs/minerU120/lib/python3.10/site-packages/starlette/routing.py", line 73, in app
response = await f(request)
File "/home/kemove/miniconda3/envs/minerU120/lib/python3.10/site-packages/fastapi/routing.py", line 301, in app
raw_response = await run_endpoint_function(
File "/home/kemove/miniconda3/envs/minerU120/lib/python3.10/site-packages/fastapi/routing.py", line 212, in run_endpoint_function
return await dependant.call(**values)
File "/home/kemove/miniconda3/envs/minerU120/lib/python3.10/site-packages/litserve/server.py", line 359, in predict
self.request_queue.put_nowait((response_queue_id, uid, time.monotonic(), payload))
File "<string>", line 2, in put_nowait
File "/home/kemove/miniconda3/envs/minerU120/lib/python3.10/multiprocessing/managers.py", line 817, in _callmethod
conn.send((self._id, methodname, args, kwds))
File "/home/kemove/miniconda3/envs/minerU120/lib/python3.10/multiprocessing/connection.py", line 206, in send
self._send_bytes(_ForkingPickler.dumps(obj))
File "/home/kemove/miniconda3/envs/minerU120/lib/python3.10/multiprocessing/reduction.py", line 51, in dumps
cls(buf, protocol).dump(obj)
TypeError: cannot pickle '_io.BufferedRandom' object
```
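The root cause visible in the traceback: the raw request payload, which still holds the uploaded file's open handle, is put on a multiprocessing queue (`self.request_queue.put_nowait(...)` in `server.py`) before `decode_request` ever runs in the worker, and open file objects cannot be pickled. A minimal stdlib reproduction of that constraint, independent of LitServe:

```python
import os
import pickle
import tempfile

fd, path = tempfile.mkstemp()
os.close(fd)
f = open(path, "w+b")  # an _io.BufferedRandom, like the spooled upload file
print(type(f).__name__)

try:
    pickle.dumps(f)  # this is what the queue does under the hood
    error = None
except TypeError as exc:
    error = str(exc)
print(error)

f.close()
os.remove(path)
```

So reading the bytes inside `decode_request` happens too late; the payload already fails while crossing the process boundary.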
| closed | 2025-03-04T06:50:20Z | 2025-03-20T22:58:26Z | https://github.com/Lightning-AI/LitServe/issues/443 | [
"bug",
"help wanted"
] | ywh-my | 5 |
yunjey/pytorch-tutorial | deep-learning | 115 | pretrained files | Sorry, can you provide a download link for your pretrained files? I can't open the one in your README. | open | 2018-05-15T13:52:15Z | 2020-07-07T08:43:15Z | https://github.com/yunjey/pytorch-tutorial/issues/115 | [] | ultimate-fly | 1 |
CorentinJ/Real-Time-Voice-Cloning | pytorch | 1,069 | Result is not what i expected | Maybe i'm missing something, but i've got some voice sample from videogame Thief II, and i used this file to make my text sound like character from the game. It doesn't. I even recorded the process, take a look?
https://youtu.be/lDbpoaaBJSo
| open | 2022-05-25T19:46:01Z | 2022-05-31T23:33:29Z | https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/1069 | [] | vorob1 | 3 |
litestar-org/litestar | asyncio | 3,893 | Enhancement: CLI - Better error message for invalid `--app` string | ### Description
A condition is missing for the case where `app_path` does not contain a colon.
```
Using Litestar app from env: 'invalid'
Traceback (most recent call last):
File "/home/henry/miniconda3/envs/facefusion/bin/litestar", line 8, in <module>
sys.exit(run_cli())
File "/home/henry/miniconda3/envs/facefusion/lib/python3.10/site-packages/litestar/__main__.py", line 6, in run_cli
litestar_group()
File "/home/henry/miniconda3/envs/facefusion/lib/python3.10/site-packages/rich_click/rich_command.py", line 367, in __call__
return super().__call__(*args, **kwargs)
File "/home/henry/miniconda3/envs/facefusion/lib/python3.10/site-packages/click/core.py", line 1157, in __call__
return self.main(*args, **kwargs)
File "/home/henry/miniconda3/envs/facefusion/lib/python3.10/site-packages/rich_click/rich_command.py", line 151, in main
with self.make_context(prog_name, args, **extra) as ctx:
File "/home/henry/miniconda3/envs/facefusion/lib/python3.10/site-packages/litestar/cli/_utils.py", line 224, in make_context
self._prepare(ctx)
File "/home/henry/miniconda3/envs/facefusion/lib/python3.10/site-packages/litestar/cli/_utils.py", line 206, in _prepare
env = ctx.obj = LitestarEnv.from_env(ctx.params.get("app_path"), ctx.params.get("app_dir"))
File "/home/henry/miniconda3/envs/facefusion/lib/python3.10/site-packages/litestar/cli/_utils.py", line 112, in from_env
loaded_app = _load_app_from_path(app_path)
File "/home/henry/miniconda3/envs/facefusion/lib/python3.10/site-packages/litestar/cli/_utils.py", line 276, in _load_app_from_path
module_path, app_name = app_path.split(":")
ValueError: not enough values to unpack (expected 2, got 1)
```
Either add a condition to `_load_app_from_path` or introduce a `safe_split` utility/helper.
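A sketch of what that guard could look like (hypothetical code, not Litestar's actual implementation; the function name and message wording are made up):

```python
def split_app_path(app_path: str) -> tuple[str, str]:
    """Split '<module>:<attribute>' with a friendly error instead of a raw ValueError."""
    module_path, sep, app_name = app_path.partition(":")
    if not sep or not module_path or not app_name:
        raise SystemExit(
            f"Invalid app path {app_path!r}: expected '<module_path>:<attribute>', "
            "e.g. 'my_app.main:app'."
        )
    return module_path, app_name

print(split_app_path("my_app.main:app"))
```

`str.partition` never raises on a missing separator, so the unpacking `ValueError` disappears and the message becomes actionable.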
### URL to code causing the issue
_No response_
### MCVE
```shell
litestar --app invalid
```
### Steps to reproduce
_No response_
### Screenshots
```bash
""
```
### Logs
_No response_
### Litestar Version
2.13.0final0
### Platform
- [X] Linux
- [ ] Mac
- [ ] Windows
- [ ] Other (Please specify in the description above) | closed | 2024-12-07T13:27:44Z | 2025-03-20T15:55:03Z | https://github.com/litestar-org/litestar/issues/3893 | [
"Enhancement"
] | henryruhs | 3 |
ploomber/ploomber | jupyter | 499 | consider setting static_analysis to False by default | Users have reported problems with this; it isn't clear to new users what the errors mean (for example, when adding a new parameter in `pipeline.yaml` but not in the script/notebook).
Another alternative would be to have three modes for static analysis: one that checks code + params, another that only checks code, and one that checks nothing. Only checking code could be the default. | closed | 2022-01-28T13:38:33Z | 2022-02-09T04:30:40Z | https://github.com/ploomber/ploomber/issues/499 | [] | edublancas | 2 |
ploomber/ploomber | jupyter | 640 | ploomber {cmd} --help does not work if missing entry point | Commands that require an entry point (i.e. `pipeline.yaml`) throw an error if the entry point is missing. This is expected. However, this should not happen when the user passes `--help`:
```
ploomber build --help
```
Prints:
```
usage: ploomber build [-h] [--log LOG] [--log-file LOG_FILE] [--entry-point ENTRY_POINT] [--force] [--skip-upstream] [--partially PARTIALLY] [--debug]
ploomber build: error: Unable to find a pipeline entry point. Use --entry-point/-e to pass a entry point's location directly or place it in a standard location.
``` | open | 2022-03-06T02:21:39Z | 2024-07-09T19:32:09Z | https://github.com/ploomber/ploomber/issues/640 | [] | edublancas | 1 |
iterative/dvc | machine-learning | 9,731 | Support OmegaConf custom resolvers when using Hydra integration | It seems that currently custom resolvers (described here: https://omegaconf.readthedocs.io/en/2.1_branch/custom_resolvers.html) are not supported when using DVC with Hydra integration.
I would like to do the following:
```yaml
pipeline:
-
func: some_transform_name
params:
some_param: ${cv2:BORDER_CONSTANT}
other_param: ${cv2:INTER_LINEAR}
```
By writing a custom resolver, we can resolve the `cv2` namespace to load `BORDER_CONSTANT` and `INTER_LINEAR` dynamically (and do error checking as well) during Hydra composition, which means we don't have to do this in our application.
Normally, you can add custom resolvers like this:
```python
import hydra
from omegaconf.dictconfig import DictConfig
from omegaconf import OmegaConf
OmegaConf.register_new_resolver("test", lambda x: f'{x} AAA')
@hydra.main(version_base=None, config_path=".", config_name="params")
def main(cfg: DictConfig) -> None:
print(cfg)
if __name__ == '__main__':
main()
```
but that is not possible, because DVC's Hydra composition is called here https://github.com/iterative/dvc/blob/2ef2caafd3f7c540caed4cb60fe3d9ff0255caf9/dvc/utils/hydra.py#L16-L53 before the DVC stage defined in `dvc.yaml` runs. Is it even the same process? It seems it would be quite tricky to implement this. I'd be happy to implement it.
| closed | 2023-07-13T09:02:42Z | 2024-01-23T14:13:15Z | https://github.com/iterative/dvc/issues/9731 | [
"feature request",
"p3-nice-to-have",
"A: hydra"
] | asiron | 17 |
biolab/orange3 | pandas | 6929 | File reader improvement |
**What's your use case?**
It seems to me that file readers are not easily combinable: if there is more than one file format for some software (an example for `Orange` would be **Excel**), each reader has to be implemented separately, and if the readers live in different classes, they will appear as different options in the `File` widget.
This can become a bit extreme when handling many file types, as shown in the picture below in `Quasar` - notice the multiple readers for the same instrument type such as **Agilent** or **NeaSpec**.
<img width="864" alt="image" src="https://github.com/user-attachments/assets/7910909e-7a1c-4859-8008-f1f607e7f93d">
**What's your proposed solution?**
It would be nice to be able to add meta readers that aggregate individual reader classes, so that the individual readers do not show up as separate options in the list of file formats.
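For illustration, such an aggregating reader could be sketched in plain Python like this (the class and method names are hypothetical, not Orange's actual reader API; `_CsvStub` is a dummy per-format reader):

```python
class MetaReader:
    """Hypothetical meta-reader: one entry in the File widget that
    dispatches to per-format readers by file extension."""

    def __init__(self, description, readers):
        self.description = description
        # map every extension each sub-reader declares to that reader
        self._by_extension = {
            ext: reader for reader in readers for ext in reader.extensions
        }

    @property
    def extensions(self):
        return tuple(self._by_extension)

    def read(self, path):
        ext = "." + path.rsplit(".", 1)[-1]
        try:
            reader = self._by_extension[ext]
        except KeyError:
            raise ValueError(f"No reader registered for {ext!r}")
        return reader.read(path)


class _CsvStub:
    extensions = (".csv",)

    def read(self, path):
        return f"csv:{path}"


reader = MetaReader("Excel-family", [_CsvStub()])
print(reader.read("data.csv"))  # csv:data.csv
```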
**Are there any alternative solutions?**
Not that I can think of. | closed | 2024-11-13T17:49:25Z | 2024-11-15T11:08:16Z | https://github.com/biolab/orange3/issues/6929 | [] | borondics | 3 |
s3rius/FastAPI-template | graphql | 220 | Request: Please add support/templating for dapr.io | First of all, thank you for creating such a wonderful, feature-rich cookiecutter template.
I have a use case where I want to use [dapr.io](https://dapr.io/) and its Python SDK so I can use a uniform API for pub/sub, DB, caching, etc., without being concerned about the vendor for each of those components.
As a request - would you be able to add support for generating a FastAPI template using the dapr Python SDK? | closed | 2024-07-19T20:15:30Z | 2024-11-16T17:03:26Z | https://github.com/s3rius/FastAPI-template/issues/220 | [] | cicdguy | 3 |
google-deepmind/sonnet | tensorflow | 152 | Difference between sonnet and tensorflow | Hi, I am using Sonnet version 1.32. I find the relationship between Sonnet and TensorFlow confusing. In Sonnet, we don't have placeholders; from this perspective, it seems to work like PyTorch. But when I check the output of a network, it is still a tensor. What's happening there? | closed | 2019-11-17T16:24:45Z | 2020-03-27T17:14:15Z | https://github.com/google-deepmind/sonnet/issues/152 | [] | huiwenzhang | 1 |
voila-dashboards/voila | jupyter | 1019 | Voila won't recognize certain labextensions bundled with python packages, reverts to older version on CDN |
## Description
For certain labextensions, which are bundled alongside their Python packages as of JupyterLab 3, Voilà fails to find the bundled files. Notably, this affects `k3d 2.11.0`: the files bundled with the package are not found, which causes Voilà to fetch the same packages from a fallback CDN instead.
I infer that this has happened due to the following output in the browser console:
```
Loading failed for the <script> with source “http://localhost:8866/voila/k3d.js”. localhost:8866:1:1
Falling back to https://cdn.jsdelivr.net/npm/ for k3d@2.11.0
```
## Reproduce
0. Have voila, JupyterLab, k3d and plotly (versions listed under Context below) installed via pip
1. I ran voila as `voila postproc_notebook/demonstration.ipynb --VoilaConfiguration.file_whitelist="['.*']" --debug --enable_nbextensions=True` (this was to confirm that it wasn't a permission error that caused the .js files not to be found)
2. For me it was enough to have a notebook with the following to trigger the above error messages in the browser:
```
import k3d
k3d.plot()
```
I believe this occurs because these packages in particular use hex strings as filenames instead of the human-readable names that Voilà expects. This works fine in JupyterLab, but Voilà is unable to recognize files named this way.
For example, in the k3d PyPI package, `/k3d-2.11.0/k3d/labextension/static/` contains the following files instead of the human-readable `k3d.js` that Voilà might expect.
```
04adc05e48582c893f35f05dfdc35cbc.ttf 275.c8499736016d941f1889.js 457.d3bb10b542890824a13e.js 703345f865eeee24c474248079a0ba93.ttf 83915f6ea43188e031f15a41a2a13d0e.woff2 993cef711838adabce3a8b5b1c4a1901.woff fc4e48b59849688ac61c0d6daa6c3894.woff2
0e5b99ca96d68358cdbbf0eb132e0bf3.woff 2bd0c2b5932c7e74bc69ff108c7746e3.woff2 457.d3bb10b542890824a13e.js.LICENSE.txt 70d540a088e8e125414815b6afda0e3d.ttf 891.109065ac671f0d003d69.js 9f78515e97ad0ff068507c103e83409d.ttf fd4eccab43b2f46bac37538fc682154d.woff2
1b0be9b9502d481bf047c11fd3afce41.ttf 2ffefb11f0c8412c11682bd0e1413b69.woff2 459e524c11f9d9848bd73bfffdc62077.ttf 7534552259d59c1cba3e86a3c774b8e8.ttf 891.109065ac671f0d003d69.js.LICENSE.txt a33d4c9142212ef3479287135a87faac.woff remoteEntry.cbabe2951651b85fc36b.js
1d2e94d7e1264031867f942653f8139a.woff 317.e40e5431c1666b84d0e7.js 49cc6a3cab050d7c2ec2336cb804e1fc.woff2 7712cfa8ed8093a0c556b7ff8abf14f9.ttf 8b9b3524a9cd80e00610682ca9d48b7a.ttf ac5aad6c4efef1a3d20b75397b7e6218.ttf style.js
1de15e70fec550ef4554de45895cd073.woff2 31faa94a6e7e3e4dc3a777c1244a3d0b.woff 4ad93799ba7ea7199a6f27826a40b061.ttf 77291f2c01508dbfa5a0e8fc8de4acb6.woff2 8cb4c7b5986d922a2dcdb6599a6106a9.woff b6a56b14d09ea3eb5f01e0bbd2b20101.woff third-party-licenses.json
1e909f1c2ba50ad8581dc75d86559eda.ttf 321.7b5f5245fbc6f15897eb.js 4e9fb7097be319b4a3a323dca0626460.woff 775f93f04f3a0bdcfecbc62e733847f1.ttf 8f5c4dcd24a0f3aa86380f0b3562eccd.woff c0ad9a0fcd3872a71585a52543d02054.ttf
209b10f4e35040ca859c96177c458cc6.woff2 3402ceebbaf069244380e282045d5615.woff 53.ec5e883b1147830de421.js 7c3661bed01acdb90f0d0d6e8c2af175.woff2 96a57080955dae1ca302c76abb8af909.woff e8c40ca220bf98110e3d2a9bccc040b4.woff
225.23e516d804513bf6e4f9.js 36d2c2a98402f4c0cb1029a69f2b9806.ttf 589.e04b6a39d6914950e62b.js 7e6803e0645cc02029a2b4b9b5c12dd0.woff2 97.742d9029c9745fc9556c.js ec522b9ccc3e18028de1440991b38119.ttf
23820bbae1b543ae8cb70fe44d40809f.woff 374a109b61e7c419be444a6ad27c8f8c.woff 591.5d8b822758d7a0045515.js 820.1b30f420b58cbcb87d88.js 981.d91ee8671445e94e67d0.js f453c078392f0c23335ea604c6765a12.ttf
263.9a70a7499f2d43c3d665.js 385.6a21e33f8a31d9a4253b.js 591.5d8b822758d7a0045515.js.LICENSE.txt 820.1b30f420b58cbcb87d88.js.LICENSE.txt 981.d91ee8671445e94e67d0.js.LICENSE.txt f93eac2f0543333d30a6abf9262ec864.ttf
263.9a70a7499f2d43c3d665.js.LICENSE.txt 3c431f15b18a392c1711d7de01edf4f3.woff2 6f4b7338e13e491465211e73cc3d9ab2.woff2 834915271cbece10d426ea41e479cfff.ttf 987.92c153a0cd76fadc526b.js f9f7662953c4ef2ee65e1f3a935f9017.woff2
```
Note that JupyterLab seems to work fine with these filenames.
## Expected behavior
On the surface level, the behavior of the labextensions seems fine, because the fallback CDN works and the widgets function. Nevertheless, I would like to use the bundled versions of the labextensions, since these are the latest versions, and to avoid the fallback to an external CDN.
## Context
- voila version: `voila==0.2.16`
- Operating System and version: Ubuntu 20.04
- Browser and version: Firefox 93.0 (64-bit) (but similar on Chrome)
<details><summary>Jupyter Troubleshoot</summary>
<pre>
WARNING: You are using pip version 20.2.3; however, version 21.3.1 is available.
You should consider upgrading via the '/home/pbakker/.pyenv/versions/3.8.7/envs/jl/bin/python3.8 -m pip install --upgrade pip' command.
$PATH:
/home/pbakker/.pyenv/versions/jl/bin
/home/pbakker/.pyenv/libexec
/home/pbakker/.pyenv/plugins/python-build/bin
/home/pbakker/.pyenv/plugins/pyenv-virtualenv/bin
/home/pbakker/.pyenv/plugins/python-build/bin
/home/pbakker/.pyenv/plugins/pyenv-virtualenv/bin
/home/pbakker/.pyenv/shims
/home/pbakker/.pyenv/bin
/home/pbakker/.local/bin
/usr/local/sbin
/usr/local/bin
/usr/sbin
/usr/bin
/sbin
/bin
/usr/games
/usr/local/games
/snap/bin
sys.path:
/home/pbakker/.pyenv/versions/3.8.7/envs/jl/bin
/home/pbakker/.pyenv/versions/3.8.7/lib/python38.zip
/home/pbakker/.pyenv/versions/3.8.7/lib/python3.8
/home/pbakker/.pyenv/versions/3.8.7/lib/python3.8/lib-dynload
/home/pbakker/.pyenv/versions/3.8.7/envs/jl/lib/python3.8/site-packages
/home/pbakker/eal-code/susipop
sys.executable:
/home/pbakker/.pyenv/versions/3.8.7/envs/jl/bin/python3.8
sys.version:
3.8.7 (default, May 20 2021, 12:43:16)
[GCC 9.3.0]
platform.platform():
Linux-5.11.0-40-generic-x86_64-with-glibc2.29
which -a jupyter:
/home/pbakker/.pyenv/versions/jl/bin/jupyter
/home/pbakker/.pyenv/shims/jupyter
pip list:
Package Version Location
------------------- ---------------------------------------------------- ------------------------------
anyio 3.3.3
argon2-cffi 21.1.0
attrs 21.2.0
Babel 2.9.1
backcall 0.2.0
bleach 4.1.0
cairocffi 1.3.0
CairoSVG 2.5.2
certifi 2021.10.8
cffi 1.14.6
charset-normalizer 2.0.7
click 8.0.3
cssselect2 0.4.1
cycler 0.10.0
debugpy 1.5.0
decorator 5.1.0
deepdiff 5.5.0
defusedxml 0.7.1
dill 0.3.4
docutils 0.16
entrypoints 0.3
et-xmlfile 1.1.0
ffmpeg-python 0.2.0
filelock 3.0.12
future 0.18.2
idna 3.2
imageio 2.9.0
ipykernel 6.4.1
ipysheet 0.4.4
ipython 7.28.0
ipython-genutils 0.2.0
ipywidgets 7.6.3
jedi 0.18.0
Jinja2 3.0.2
json5 0.9.6
jsonschema 3.2.0
jupyter-client 6.1.12
jupyter-core 4.8.1
jupyter-server 1.11.1
jupyterlab 3.1.13
jupyterlab-pygments 0.1.2
jupyterlab-server 2.8.2
jupyterlab-widgets 1.0.2
k3d 2.11.0
kaleido 0.2.1
kiwisolver 1.3.2
llvmlite 0.36.0
MarkupSafe 2.0.1
matplotlib 3.4.3
matplotlib-inline 0.1.3
mistune 0.8.4
mixpanel 4.9.0
mpmath 1.2.1
nbclassic 0.3.2
nbclient 0.5.4
nbconvert 6.2.0
nbformat 5.1.3
nest-asyncio 1.5.1
networkx 2.6.3
notebook 6.4.4
numba 0.53.1
numpy 1.20.3
openpyxl 3.0.7
ordered-set 4.0.2
ovito 3.5.4
packaging 21.0
pandas 1.2.4
pandoc-include 0.7.3
pandocfilters 1.5.0
panflute 1.12.5
parso 0.8.2
pexpect 4.8.0
pickleshare 0.7.5
Pillow 8.2.0
pip 20.2.3
plotly 5.3.1
prometheus-client 0.11.0
prompt-toolkit 3.0.20
psutil 5.8.0
ptyprocess 0.7.0
pycparser 2.20
Pygments 2.10.0
pylatexenc 2.10
pypandoc 1.5
pyparsing 2.4.7
pyrsistent 0.18.0
PySide2 5.15.2
python-dateutil 2.8.2
pytz 2021.3
PyWavelets 1.1.1
PyYAML 5.4.1
pyzmq 22.3.0
rcpopcore 0.5.34.dev0+fe174be87425f872336a48054f4e5ccfa2239b5e
requests 2.26.0
requests-unixsocket 0.2.0
scikit-image 0.18.1
scipy 1.6.3
Send2Trash 1.8.0
sentry-sdk 1.1.0
setuptools 49.2.1
Shapely 1.7.1
shiboken2 5.15.2
six 1.16.0
sniffio 1.2.0
susimetadata 0.5.0.dev0+702cba80076301bc915e28f205be2dc2af3b3925
susipop 0.5.39 /home/pbakker/eal-code/susipop
susitk 0.6.5+fd307b7d3eef8287cba2e0ec5cd9beffa56f60be
sympy 1.7.1
tenacity 8.0.1
terminado 0.12.1
testpath 0.5.0
tifffile 2021.10.10
tinycss2 1.1.0
tornado 6.1
tqdm 4.61.1
traitlets 5.1.0
traittypes 0.2.1
urllib3 1.26.7
voila 0.2.16
vtk 9.0.1
wcwidth 0.2.5
webencodings 0.5.1
websocket-client 1.2.1
wheel 0.37.0
widgetsnbextension 3.5.1
</pre>
</details>
<details><summary>Command Line Output</summary>
<pre>
/home/pbakker/.pyenv/versions/3.8.7/envs/jl/lib/python3.8/site-packages/traitlets/traitlets.py:2562: FutureWarning: --VoilaConfiguration.file_whitelist=['.*'] for containers is deprecated in traitlets 5.0. You can pass `--VoilaConfiguration.file_whitelist item` ... multiple times to add items to a list.
[Voila] Looking for voila in /etc/jupyter
[Voila] Looking for voila in /usr/local/etc/jupyter
[Voila] Looking for voila in /home/pbakker/.pyenv/versions/3.8.7/envs/jl/etc/jupyter
[Voila] Looking for voila in /home/pbakker/.jupyter
[Voila] Looking for voila in /home/pbakker/eal-code/susipop
[Voila] using template: lab
[Voila] template paths:
/home/pbakker/.pyenv/versions/3.8.7/envs/jl/share/jupyter/voila/templates/lab
/home/pbakker/.pyenv/versions/3.8.7/envs/jl/share/jupyter/nbconvert/templates/lab
/home/pbakker/.pyenv/versions/3.8.7/envs/jl/share/jupyter/voila/templates/base
/home/pbakker/.pyenv/versions/3.8.7/envs/jl/share/jupyter/nbconvert/templates/base
/home/pbakker/.local/share/jupyter
/home/pbakker/.local/share/jupyter/voila/templates
/home/pbakker/.local/share/jupyter/nbconvert/templates
/home/pbakker/.pyenv/versions/3.8.7/envs/jl/share/jupyter
/home/pbakker/.pyenv/versions/3.8.7/envs/jl/share/jupyter/voila/templates
/home/pbakker/.pyenv/versions/3.8.7/envs/jl/share/jupyter/nbconvert/templates
/usr/local/share/jupyter
/usr/local/share/jupyter/voila/templates
/usr/local/share/jupyter/nbconvert/templates
/usr/share/jupyter
/usr/share/jupyter/voila/templates
/usr/share/jupyter/nbconvert/templates
[Voila] static paths:
/home/pbakker/.local/share/jupyter/voila/templates/lab/static
/home/pbakker/.local/share/jupyter/nbconvert/templates/lab/static
/home/pbakker/.pyenv/versions/3.8.7/envs/jl/share/jupyter/voila/templates/lab/static
/home/pbakker/.pyenv/versions/3.8.7/envs/jl/share/jupyter/nbconvert/templates/lab/static
/usr/local/share/jupyter/voila/templates/lab/static
/usr/local/share/jupyter/nbconvert/templates/lab/static
/usr/share/jupyter/voila/templates/lab/static
/usr/share/jupyter/nbconvert/templates/lab/static
/home/pbakker/.local/share/jupyter/voila/templates/base/static
/home/pbakker/.local/share/jupyter/nbconvert/templates/base/static
/home/pbakker/.pyenv/versions/3.8.7/envs/jl/share/jupyter/voila/templates/base/static
/home/pbakker/.pyenv/versions/3.8.7/envs/jl/share/jupyter/nbconvert/templates/base/static
/usr/local/share/jupyter/voila/templates/base/static
/usr/local/share/jupyter/nbconvert/templates/base/static
/usr/share/jupyter/voila/templates/base/static
/usr/share/jupyter/nbconvert/templates/base/static
[Voila] Using /tmp to store connection files
[Voila] Storing connection files in /tmp/voila_92yemngm.
[Voila] Serving static files from /home/pbakker/.pyenv/versions/3.8.7/envs/jl/lib/python3.8/site-packages/voila/static.
[Voila] Voilà is running at:
http://localhost:8866/
[Voila] Paths used for configuration of notebook:
/etc/jupyter/nbconfig/notebook.json
[Voila] Paths used for configuration of notebook:
/usr/local/etc/jupyter/nbconfig/notebook.json
[Voila] Paths used for configuration of notebook:
/home/pbakker/.pyenv/versions/3.8.7/envs/jl/etc/jupyter/nbconfig/notebook.d/ipysheet.json
/home/pbakker/.pyenv/versions/3.8.7/envs/jl/etc/jupyter/nbconfig/notebook.d/jupyterlab-plotly.json
/home/pbakker/.pyenv/versions/3.8.7/envs/jl/etc/jupyter/nbconfig/notebook.d/voila.json
/home/pbakker/.pyenv/versions/3.8.7/envs/jl/etc/jupyter/nbconfig/notebook.d/widgetsnbextension.json
/home/pbakker/.pyenv/versions/3.8.7/envs/jl/etc/jupyter/nbconfig/notebook.json
[Voila] Paths used for configuration of notebook:
/home/pbakker/.jupyter/nbconfig/notebook.json
[Voila] Paths used for configuration of notebook:
/etc/jupyter/serverconfig/notebook.json
[Voila] Paths used for configuration of notebook:
/usr/local/etc/jupyter/serverconfig/notebook.json
[Voila] Paths used for configuration of notebook:
/home/pbakker/.pyenv/versions/3.8.7/envs/jl/etc/jupyter/serverconfig/notebook.json
[Voila] Paths used for configuration of notebook:
/home/pbakker/.jupyter/serverconfig/notebook.json
[Voila] WARNING | Notebook demonstration.ipynb is not trusted
[Voila] Found kernel python3 in /home/pbakker/.pyenv/versions/3.8.7/envs/jl/share/jupyter/kernels
[Voila] Template paths:
/home/pbakker/.pyenv/versions/3.8.7/envs/jl/share/jupyter/voila/templates/lab
/home/pbakker/.pyenv/versions/3.8.7/envs/jl/share/jupyter/nbconvert/templates/lab
/home/pbakker/.pyenv/versions/3.8.7/envs/jl/share/jupyter/voila/templates/base
/home/pbakker/.pyenv/versions/3.8.7/envs/jl/share/jupyter/nbconvert/templates/base
/home/pbakker/.local/share/jupyter
/home/pbakker/.local/share/jupyter/voila/templates
/home/pbakker/.local/share/jupyter/nbconvert/templates
/home/pbakker/.pyenv/versions/3.8.7/envs/jl/share/jupyter
/home/pbakker/.pyenv/versions/3.8.7/envs/jl/share/jupyter/voila/templates
/home/pbakker/.pyenv/versions/3.8.7/envs/jl/share/jupyter/nbconvert/templates
/usr/local/share/jupyter
/usr/local/share/jupyter/voila/templates
/usr/local/share/jupyter/nbconvert/templates
/usr/share/jupyter
/usr/share/jupyter/voila/templates
/usr/share/jupyter/nbconvert/templates
[Voila] Applying preprocessor: TagRemovePreprocessor
[Voila] Applying preprocessor: RegexRemovePreprocessor
[Voila] Applying preprocessor: coalesce_streams
[Voila] Applying preprocessor: HighlightMagicsPreprocessor
[Voila] Applying preprocessor: CSSHTMLHeaderPreprocessor
[Voila] Attempting to load template index.html.j2
[Voila] template_paths: /home/pbakker/.pyenv/versions/3.8.7/envs/jl/share/jupyter/voila/templates/lab:/home/pbakker/.pyenv/versions/3.8.7/envs/jl/share/jupyter/nbconvert/templates/lab:/home/pbakker/.pyenv/versions/3.8.7/envs/jl/share/jupyter/voila/templates/base:/home/pbakker/.pyenv/versions/3.8.7/envs/jl/share/jupyter/nbconvert/templates/base:/home/pbakker/.local/share/jupyter:/home/pbakker/.local/share/jupyter/voila/templates:/home/pbakker/.local/share/jupyter/nbconvert/templates:/home/pbakker/.pyenv/versions/3.8.7/envs/jl/share/jupyter:/home/pbakker/.pyenv/versions/3.8.7/envs/jl/share/jupyter/voila/templates:/home/pbakker/.pyenv/versions/3.8.7/envs/jl/share/jupyter/nbconvert/templates:/usr/local/share/jupyter:/usr/local/share/jupyter/voila/templates:/usr/local/share/jupyter/nbconvert/templates:/usr/share/jupyter:/usr/share/jupyter/voila/templates:/usr/share/jupyter/nbconvert/templates
[Voila] Starting kernel (async): ['/home/pbakker/.pyenv/versions/3.8.7/envs/jl/bin/python3.8', '-m', 'ipykernel_launcher', '-f', '/tmp/voila_92yemngm/kernel-05d7eb09-dc78-4b5e-af11-2409d3f08fec.json']
[Voila] Connecting to: tcp://127.0.0.1:50819
[Voila] Connecting to: tcp://127.0.0.1:52963
[Voila] Kernel started: 05d7eb09-dc78-4b5e-af11-2409d3f08fec
[Voila] Kernel args: {'kernel_name': 'python3', 'env': {'SHELL': '/bin/bash', 'SESSION_MANAGER': 'local/pbakker-ThinkPad-E14-Gen-2:@/tmp/.ICE-unix/1983,unix/pbakker-ThinkPad-E14-Gen-2:/tmp/.ICE-unix/1983', 'PYENV_HOOK_PATH': '/home/pbakker/.pyenv/pyenv.d:/usr/local/etc/pyenv.d:/etc/pyenv.d:/usr/lib/pyenv/hooks:/home/pbakker/.pyenv/plugins/pyenv-virtualenv/etc/pyenv.d', 'QT_ACCESSIBILITY': '1', 'COLORTERM': 'truecolor', 'PYENV_SHELL': 'bash', 'XDG_CONFIG_DIRS': '/etc/xdg/xdg-ubuntu:/etc/xdg', 'PYENV_ACTIVATE_SHELL': '1', 'XDG_MENU_PREFIX': 'gnome-', 'GNOME_DESKTOP_SESSION_ID': 'this-is-deprecated', 'SENTRY_ENVIRONMENT': 'production', 'LC_ADDRESS': 'nl_NL.UTF-8', 'GNOME_SHELL_SESSION_MODE': 'ubuntu', 'LC_NAME': 'nl_NL.UTF-8', 'SSH_AUTH_SOCK': '/run/user/1000/keyring/ssh', 'PYENV_VIRTUALENV_DISABLE_PROMPT': '1', 'XMODIFIERS': '@im=ibus', 'DESKTOP_SESSION': 'ubuntu', 'LC_MONETARY': 'nl_NL.UTF-8', 'SSH_AGENT_PID': '1941', 'PYENV_VERSION': 'jl', 'GTK_MODULES': 'gail:atk-bridge', 'PWD': '/home/pbakker/eal-code/susipop', 'XDG_SESSION_DESKTOP': 'ubuntu', 'LOGNAME': 'pbakker', 'XDG_SESSION_TYPE': 'x11', 'GPG_AGENT_INFO': '/run/user/1000/gnupg/S.gpg-agent:0:1', 'SUSIPOP': '/home/pbakker/eal-code/susipop/', 'XAUTHORITY': '/run/user/1000/gdm/Xauthority', 'WINDOWPATH': '2', 'HOME': '/home/pbakker', 'USERNAME': 'pbakker', 'SENTRY_DSN': 'https://4573f501f5944ec6939fc5f09c74a605@sentry.dev.rheocube.net/7', 'IM_CONFIG_PHASE': '1', 'LANG': 'en_US.UTF-8', 'LC_PAPER': 'nl_NL.UTF-8', 'USER_UUID': '1b70869c-cdeb-42e4-8318-cfd692284998', 'LS_COLORS': 
'rs=0:di=01;34:ln=01;36:mh=00:pi=40;33:so=01;35:do=01;35:bd=40;33;01:cd=40;33;01:or=40;31;01:mi=00:su=37;41:sg=30;43:ca=30;41:tw=30;42:ow=34;42:st=37;44:ex=01;32:*.tar=01;31:*.tgz=01;31:*.arc=01;31:*.arj=01;31:*.taz=01;31:*.lha=01;31:*.lz4=01;31:*.lzh=01;31:*.lzma=01;31:*.tlz=01;31:*.txz=01;31:*.tzo=01;31:*.t7z=01;31:*.zip=01;31:*.z=01;31:*.dz=01;31:*.gz=01;31:*.lrz=01;31:*.lz=01;31:*.lzo=01;31:*.xz=01;31:*.zst=01;31:*.tzst=01;31:*.bz2=01;31:*.bz=01;31:*.tbz=01;31:*.tbz2=01;31:*.tz=01;31:*.deb=01;31:*.rpm=01;31:*.jar=01;31:*.war=01;31:*.ear=01;31:*.sar=01;31:*.rar=01;31:*.alz=01;31:*.ace=01;31:*.zoo=01;31:*.cpio=01;31:*.7z=01;31:*.rz=01;31:*.cab=01;31:*.wim=01;31:*.swm=01;31:*.dwm=01;31:*.esd=01;31:*.jpg=01;35:*.jpeg=01;35:*.mjpg=01;35:*.mjpeg=01;35:*.gif=01;35:*.bmp=01;35:*.pbm=01;35:*.pgm=01;35:*.ppm=01;35:*.tga=01;35:*.xbm=01;35:*.xpm=01;35:*.tif=01;35:*.tiff=01;35:*.png=01;35:*.svg=01;35:*.svgz=01;35:*.mng=01;35:*.pcx=01;35:*.mov=01;35:*.mpg=01;35:*.mpeg=01;35:*.m2v=01;35:*.mkv=01;35:*.webm=01;35:*.ogm=01;35:*.mp4=01;35:*.m4v=01;35:*.mp4v=01;35:*.vob=01;35:*.qt=01;35:*.nuv=01;35:*.wmv=01;35:*.asf=01;35:*.rm=01;35:*.rmvb=01;35:*.flc=01;35:*.avi=01;35:*.fli=01;35:*.flv=01;35:*.gl=01;35:*.dl=01;35:*.xcf=01;35:*.xwd=01;35:*.yuv=01;35:*.cgm=01;35:*.emf=01;35:*.ogv=01;35:*.ogx=01;35:*.aac=00;36:*.au=00;36:*.flac=00;36:*.m4a=00;36:*.mid=00;36:*.midi=00;36:*.mka=00;36:*.mp3=00;36:*.mpc=00;36:*.ogg=00;36:*.ra=00;36:*.wav=00;36:*.oga=00;36:*.opus=00;36:*.spx=00;36:*.xspf=00;36:', 'GL_TOKEN_PASS': '7eosPvdBZgEE_GR66pW4', 'XDG_CURRENT_DESKTOP': 'ubuntu:GNOME', 'VIRTUAL_ENV': '/home/pbakker/.pyenv/versions/3.8.7/envs/jl', 'GL_TOKEN_NAME': 'gitlab-pypi', 'STARSHIP_SHELL': 'bash', 'VTE_VERSION': '6003', 'PIP_EXTRA_INDEX_URL': 'https://gitlab-pypi:7eosPvdBZgEE_GR66pW4@gitlab.com/api/v4/projects/27162757/packages/pypi/simple', 'GNOME_TERMINAL_SCREEN': '/org/gnome/Terminal/screen/9261fcca_c6ce_4f7f_82b1_bbc3139c36f0', 'PYBIND_HEADERS': 
'/home/pbakker/.pyenv/versions/3.8.7/envs/susipop/lib/python3.8/site-packages/pybind11/include/pybind11/', 'INVOCATION_ID': '482a30f49db54d219192654ec7e65b39', 'MANAGERPID': '1720', 'PYENV_DIR': '/home/pbakker/eal-code/susipop', 'STARSHIP_SESSION_KEY': '7542181522662230', 'LESSCLOSE': '/usr/bin/lesspipe %s %s', 'XDG_SESSION_CLASS': 'user', 'LC_IDENTIFICATION': 'nl_NL.UTF-8', 'TERM': 'xterm-256color', 'LESSOPEN': '| /usr/bin/lesspipe %s', 'USER': 'pbakker', 'GNOME_TERMINAL_SERVICE': ':1.108', 'DISPLAY': ':0', 'PYENV_VIRTUAL_ENV': '/home/pbakker/.pyenv/versions/3.8.7/envs/jl', 'SHLVL': '1', 'LC_TELEPHONE': 'nl_NL.UTF-8', 'QT_IM_MODULE': 'ibus', 'LC_MEASUREMENT': 'nl_NL.UTF-8', 'PYV': '/home/pbakker/.pyenv/versions/susipop/lib/python3.8/site-packages', 'XDG_RUNTIME_DIR': '/run/user/1000', 'JIRA_ID': '609a8973b050a70069960b79', 'PYENV_ROOT': '/home/pbakker/.pyenv', 'LC_TIME': 'nl_NL.UTF-8', 'EMAIL': 'p.bakker@electricant.com', 'JOURNAL_STREAM': '8:54159', 'XDG_DATA_DIRS': '/usr/share/ubuntu:/usr/local/share/:/usr/share/:/var/lib/snapd/desktop', 'PATH': '/home/pbakker/.pyenv/versions/jl/bin:/home/pbakker/.pyenv/libexec:/home/pbakker/.pyenv/plugins/python-build/bin:/home/pbakker/.pyenv/plugins/pyenv-virtualenv/bin:/home/pbakker/.pyenv/plugins/python-build/bin:/home/pbakker/.pyenv/plugins/pyenv-virtualenv/bin:/home/pbakker/.pyenv/shims:/home/pbakker/.pyenv/bin:/home/pbakker/.local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin', 'GDMSESSION': 'ubuntu', 'GITLAB_TOKEN': 'us2xg7sF2jfwbd5BssGS', 'DBUS_SESSION_BUS_ADDRESS': 'unix:path=/run/user/1000/bus', 'PYPI_REPO_ID': '27162757', 'LC_NUMERIC': 'nl_NL.UTF-8', 'OLDPWD': '/home/pbakker', 'PYDEVD_USE_FRAME_EVAL': 'NO', 'SCRIPT_NAME': '/', 'PATH_INFO': '', 'QUERY_STRING': '', 'SERVER_SOFTWARE': 'voila/0.2.16', 'SERVER_PROTOCOL': 'HTTP/1.1', 'SERVER_PORT': '8866', 'SERVER_NAME': 'localhost'}, 'cwd': '/home/pbakker/eal-code/susipop/postproc_notebook'}
[Voila] connecting iopub channel to tcp://127.0.0.1:52963
[Voila] Connecting to: tcp://127.0.0.1:52963
[Voila] connecting shell channel to tcp://127.0.0.1:47031
[Voila] Connecting to: tcp://127.0.0.1:47031
[Voila] connecting stdin channel to tcp://127.0.0.1:35627
[Voila] Connecting to: tcp://127.0.0.1:35627
[Voila] connecting heartbeat channel to tcp://127.0.0.1:52333
[Voila] connecting control channel to tcp://127.0.0.1:50819
[Voila] Connecting to: tcp://127.0.0.1:50819
[Voila] activity on 05d7eb09-dc78-4b5e-af11-2409d3f08fec: status (busy)
[Voila] activity on 05d7eb09-dc78-4b5e-af11-2409d3f08fec: status (idle)
[Voila] Executing cell:
import k3d
k3d.plot()
[Voila] activity on 05d7eb09-dc78-4b5e-af11-2409d3f08fec: status (busy)
[Voila] activity on 05d7eb09-dc78-4b5e-af11-2409d3f08fec: execute_input
[Voila] msg_type: status
[Voila] content: {'execution_state': 'busy'}
[Voila] msg_type: execute_input
[Voila] content: {'code': 'import k3d\nk3d.plot()', 'execution_count': 1}
[Voila] activity on 05d7eb09-dc78-4b5e-af11-2409d3f08fec: comm_open
[Voila] activity on 05d7eb09-dc78-4b5e-af11-2409d3f08fec: comm_open
[Voila] activity on 05d7eb09-dc78-4b5e-af11-2409d3f08fec: comm_msg
[Voila] activity on 05d7eb09-dc78-4b5e-af11-2409d3f08fec: comm_msg
[Voila] activity on 05d7eb09-dc78-4b5e-af11-2409d3f08fec: comm_msg
[Voila] msg_type: comm_open
[Voila] content: {'data': {'state': {'_model_module': '@jupyter-widgets/base', '_model_module_version': '1.2.0', '_model_name': 'LayoutModel', '_view_count': None, '_view_module': '@jupyter-widgets/base', '_view_module_version': '1.2.0', '_view_name': 'LayoutView', 'align_content': None, 'align_items': None, 'align_self': None, 'border': None, 'bottom': None, 'display': None, 'flex': None, 'flex_flow': None, 'grid_area': None, 'grid_auto_columns': None, 'grid_auto_flow': None, 'grid_auto_rows': None, 'grid_column': None, 'grid_gap': None, 'grid_row': None, 'grid_template_areas': None, 'grid_template_columns': None, 'grid_template_rows': None, 'height': None, 'justify_content': None, 'justify_items': None, 'left': None, 'margin': None, 'max_height': None, 'max_width': None, 'min_height': None, 'min_width': None, 'object_fit': None, 'object_position': None, 'order': None, 'overflow': None, 'overflow_x': None, 'overflow_y': None, 'padding': None, 'right': None, 'top': None, 'visibility': None, 'width': None}, 'buffer_paths': []}, 'comm_id': '35e11ba8639c4eabaf54c60f4fc9f06a', 'target_name': 'jupyter.widget', 'target_module': None}
[Voila] activity on 05d7eb09-dc78-4b5e-af11-2409d3f08fec: comm_msg
[Voila] activity on 05d7eb09-dc78-4b5e-af11-2409d3f08fec: comm_msg
[Voila] activity on 05d7eb09-dc78-4b5e-af11-2409d3f08fec: comm_msg
[Voila] msg_type: comm_open
[Voila] content: {'data': {'state': {'_backend_version': '2.11.0', '_dom_classes': [], '_model_module': 'k3d', '_model_module_version': '2.11.0', '_model_name': 'PlotModel', '_view_count': None, '_view_module': 'k3d', '_view_module_version': '2.11.0', '_view_name': 'PlotView', 'antialias': 0, 'auto_rendering': True, 'axes': ['x', 'y', 'z'], 'axes_helper': 0.0, 'background_color': 0, 'camera': [], 'camera_animation': [], 'camera_auto_fit': True, 'camera_damping_factor': 0.0, 'camera_fov': 0.0, 'camera_mode': '', 'camera_no_pan': False, 'camera_no_rotate': False, 'camera_no_zoom': False, 'camera_pan_speed': 0.0, 'camera_rotate_speed': 0.0, 'camera_zoom_speed': 0.0, 'clipping_planes': [], 'colorbar_object_id': -1, 'colorbar_scientific': False, 'fps': 0.0, 'fps_meter': True, 'grid': [-1, -1, -1, 1, 1, 1], 'grid_auto_fit': True, 'grid_color': 0, 'grid_visible': True, 'height': 0, 'label_color': 0, 'layout': 'IPY_MODEL_35e11ba8639c4eabaf54c60f4fc9f06a', 'lighting': 0.0, 'manipulate_mode': '', 'menu_visibility': True, 'mode': '', 'name': None, 'object_ids': [], 'rendering_steps': 1, 'screenshot': '', 'screenshot_scale': 0.0, 'snapshot': '', 'snapshot_type': '', 'time': 0.0, 'voxel_paint_color': 0}, 'buffer_paths': []}, 'comm_id': '4b56acf9a4f045f08db2048ad3404579', 'target_name': 'jupyter.widget', 'target_module': None}
[Voila] activity on 05d7eb09-dc78-4b5e-af11-2409d3f08fec: comm_msg
[Voila] activity on 05d7eb09-dc78-4b5e-af11-2409d3f08fec: comm_msg
[Voila] activity on 05d7eb09-dc78-4b5e-af11-2409d3f08fec: comm_msg
[Voila] msg_type: comm_msg
[Voila] content: {'data': {'method': 'update', 'state': {'antialias': 3}, 'buffer_paths': []}, 'comm_id': '4b56acf9a4f045f08db2048ad3404579'}
[Voila] activity on 05d7eb09-dc78-4b5e-af11-2409d3f08fec: comm_msg
[Voila] activity on 05d7eb09-dc78-4b5e-af11-2409d3f08fec: comm_msg
[Voila] activity on 05d7eb09-dc78-4b5e-af11-2409d3f08fec: comm_msg
[Voila] msg_type: comm_msg
[Voila] content: {'data': {'method': 'update', 'state': {'fps_meter': False}, 'buffer_paths': []}, 'comm_id': '4b56acf9a4f045f08db2048ad3404579'}
[Voila] activity on 05d7eb09-dc78-4b5e-af11-2409d3f08fec: comm_msg
[Voila] activity on 05d7eb09-dc78-4b5e-af11-2409d3f08fec: comm_msg
[Voila] activity on 05d7eb09-dc78-4b5e-af11-2409d3f08fec: comm_msg
[Voila] msg_type: comm_msg
[Voila] content: {'data': {'method': 'update', 'state': {'fps': 25.0}, 'buffer_paths': []}, 'comm_id': '4b56acf9a4f045f08db2048ad3404579'}
[Voila] activity on 05d7eb09-dc78-4b5e-af11-2409d3f08fec: comm_msg
[Voila] activity on 05d7eb09-dc78-4b5e-af11-2409d3f08fec: comm_msg
[Voila] activity on 05d7eb09-dc78-4b5e-af11-2409d3f08fec: comm_msg
[Voila] msg_type: comm_msg
[Voila] content: {'data': {'method': 'update', 'state': {'background_color': 16777215}, 'buffer_paths': []}, 'comm_id': '4b56acf9a4f045f08db2048ad3404579'}
[Voila] activity on 05d7eb09-dc78-4b5e-af11-2409d3f08fec: display_data
[Voila] activity on 05d7eb09-dc78-4b5e-af11-2409d3f08fec: status (idle)
[Voila] msg_type: comm_msg
[Voila] content: {'data': {'method': 'update', 'state': {'grid_color': 15132390}, 'buffer_paths': []}, 'comm_id': '4b56acf9a4f045f08db2048ad3404579'}
[Voila] msg_type: comm_msg
[Voila] content: {'data': {'method': 'update', 'state': {'label_color': 4473924}, 'buffer_paths': []}, 'comm_id': '4b56acf9a4f045f08db2048ad3404579'}
[Voila] msg_type: comm_msg
[Voila] content: {'data': {'method': 'update', 'state': {'screenshot_scale': 2.0}, 'buffer_paths': []}, 'comm_id': '4b56acf9a4f045f08db2048ad3404579'}
[Voila] msg_type: comm_msg
[Voila] content: {'data': {'method': 'update', 'state': {'height': 512}, 'buffer_paths': []}, 'comm_id': '4b56acf9a4f045f08db2048ad3404579'}
[Voila] msg_type: comm_msg
[Voila] content: {'data': {'method': 'update', 'state': {'lighting': 1.5}, 'buffer_paths': []}, 'comm_id': '4b56acf9a4f045f08db2048ad3404579'}
[Voila] msg_type: comm_msg
[Voila] content: {'data': {'method': 'update', 'state': {'camera_rotate_speed': 1.0}, 'buffer_paths': []}, 'comm_id': '4b56acf9a4f045f08db2048ad3404579'}
[Voila] msg_type: comm_msg
[Voila] content: {'data': {'method': 'update', 'state': {'camera_zoom_speed': 1.2}, 'buffer_paths': []}, 'comm_id': '4b56acf9a4f045f08db2048ad3404579'}
[Voila] msg_type: comm_msg
[Voila] content: {'data': {'method': 'update', 'state': {'camera_pan_speed': 0.3}, 'buffer_paths': []}, 'comm_id': '4b56acf9a4f045f08db2048ad3404579'}
[Voila] msg_type: comm_msg
[Voila] content: {'data': {'method': 'update', 'state': {'camera_fov': 60.0}, 'buffer_paths': []}, 'comm_id': '4b56acf9a4f045f08db2048ad3404579'}
[Voila] msg_type: comm_msg
[Voila] content: {'data': {'method': 'update', 'state': {'axes_helper': 1.0}, 'buffer_paths': []}, 'comm_id': '4b56acf9a4f045f08db2048ad3404579'}
[Voila] msg_type: comm_msg
[Voila] content: {'data': {'method': 'update', 'state': {'mode': 'view'}, 'buffer_paths': []}, 'comm_id': '4b56acf9a4f045f08db2048ad3404579'}
[Voila] msg_type: comm_msg
[Voila] content: {'data': {'method': 'update', 'state': {'snapshot_type': 'full'}, 'buffer_paths': []}, 'comm_id': '4b56acf9a4f045f08db2048ad3404579'}
[Voila] msg_type: comm_msg
[Voila] content: {'data': {'method': 'update', 'state': {'camera_mode': 'trackball'}, 'buffer_paths': []}, 'comm_id': '4b56acf9a4f045f08db2048ad3404579'}
[Voila] msg_type: comm_msg
[Voila] content: {'data': {'method': 'update', 'state': {'manipulate_mode': 'translate'}, 'buffer_paths': []}, 'comm_id': '4b56acf9a4f045f08db2048ad3404579'}
[Voila] msg_type: display_data
[Voila] content: {'data': {'text/plain': "Plot(antialias=3, axes=['x', 'y', 'z'], axes_helper=1.0, background_color=16777215, camera_animation=[], camer…", 'application/vnd.jupyter.widget-view+json': {'version_major': 2, 'version_minor': 0, 'model_id': '4b56acf9a4f045f08db2048ad3404579'}}, 'metadata': {}, 'transient': {}}
[Voila] msg_type: status
[Voila] content: {'execution_state': 'idle'}
[Voila] Executing cell:
import plotly.graph_objects as go
go.FigureWidget()
[Voila] activity on 05d7eb09-dc78-4b5e-af11-2409d3f08fec: status (busy)
[Voila] activity on 05d7eb09-dc78-4b5e-af11-2409d3f08fec: execute_input
[Voila] msg_type: status
[Voila] content: {'execution_state': 'busy'}
[Voila] msg_type: execute_input
[Voila] content: {'code': 'import plotly.graph_objects as go\ngo.FigureWidget()', 'execution_count': 2}
WARNING:tornado.access:404 GET /voila/files/favicon.ico (127.0.0.1) 0.92ms
[Voila] activity on 05d7eb09-dc78-4b5e-af11-2409d3f08fec: comm_open
[Voila] msg_type: comm_open
[Voila] content: {'data': {'state': {'_config': {}, '_data': [], '_dom_classes': [], '_js2py_layoutDelta': {}, '_js2py_pointsCallback': {}, '_js2py_relayout': {}, '_js2py_restyle': {}, '_js2py_traceDeltas': {}, '_js2py_update': {}, '_last_layout_edit_id': 0, '_last_trace_edit_id': 0, '_layout': {}, '_model_module': 'jupyterlab-plotly', '_model_module_version': '^5.3.1', '_model_name': 'FigureModel', '_py2js_addTraces': {}, '_py2js_animate': {}, '_py2js_deleteTraces': {}, '_py2js_moveTraces': {}, '_py2js_relayout': {}, '_py2js_removeLayoutProps': {}, '_py2js_removeTraceProps': {}, '_py2js_restyle': {}, '_py2js_update': {}, '_view_count': None, '_view_module': 'jupyterlab-plotly', '_view_module_version': '^5.3.1', '_view_name': 'FigureView'}, 'buffer_paths': []}, 'comm_id': '0bdc5f3194814186b8f7fb4bd2aa0b42', 'target_name': 'jupyter.widget', 'target_module': None}
[Voila] activity on 05d7eb09-dc78-4b5e-af11-2409d3f08fec: comm_msg
[Voila] msg_type: comm_msg
[Voila] content: {'data': {'method': 'update', 'state': {'_config': {'plotlyServerURL': 'https://plot.ly'}}, 'buffer_paths': []}, 'comm_id': '0bdc5f3194814186b8f7fb4bd2aa0b42'}
[Voila] activity on 05d7eb09-dc78-4b5e-af11-2409d3f08fec: comm_msg
[Voila] activity on 05d7eb09-dc78-4b5e-af11-2409d3f08fec: comm_msg
[Voila] activity on 05d7eb09-dc78-4b5e-af11-2409d3f08fec: comm_msg
[Voila] activity on 05d7eb09-dc78-4b5e-af11-2409d3f08fec: comm_msg
[Voila] msg_type: comm_msg
[Voila] content: {'data': {'method': 'update', 'state': {'_last_layout_edit_id': 1}, 'buffer_paths': []}, 'comm_id': '0bdc5f3194814186b8f7fb4bd2aa0b42'}
[Voila] activity on 05d7eb09-dc78-4b5e-af11-2409d3f08fec: comm_msg
[Voila] activity on 05d7eb09-dc78-4b5e-af11-2409d3f08fec: display_data
[Voila] activity on 05d7eb09-dc78-4b5e-af11-2409d3f08fec: status (idle)
[Voila] msg_type: comm_msg
[Voila] content: {'data': {'method': 'update', 'state': {'_py2js_relayout': {'relayout_data': {'template': {'data': {'barpolar': [{'marker': {'line': {'color': '#E5ECF6', 'width': 0.5}, 'pattern': {'fillmode': 'overlay', 'size': 10, 'solidity': 0.2}}, 'type': 'barpolar'}], 'bar': [{'error_x': {'color': '#2a3f5f'}, 'error_y': {'color': '#2a3f5f'}, 'marker': {'line': {'color': '#E5ECF6', 'width': 0.5}, 'pattern': {'fillmode': 'overlay', 'size': 10, 'solidity': 0.2}}, 'type': 'bar'}], 'carpet': [{'aaxis': {'endlinecolor': '#2a3f5f', 'gridcolor': 'white', 'linecolor': 'white', 'minorgridcolor': 'white', 'startlinecolor': '#2a3f5f'}, 'baxis': {'endlinecolor': '#2a3f5f', 'gridcolor': 'white', 'linecolor': 'white', 'minorgridcolor': 'white', 'startlinecolor': '#2a3f5f'}, 'type': 'carpet'}], 'choropleth': [{'colorbar': {'outlinewidth': 0, 'ticks': ''}, 'type': 'choropleth'}], 'contourcarpet': [{'colorbar': {'outlinewidth': 0, 'ticks': ''}, 'type': 'contourcarpet'}], 'contour': [{'colorbar': {'outlinewidth': 0, 'ticks': ''}, 'colorscale': [[0.0, '#0d0887'], [0.1111111111111111, '#46039f'], [0.2222222222222222, '#7201a8'], [0.3333333333333333, '#9c179e'], [0.4444444444444444, '#bd3786'], [0.5555555555555556, '#d8576b'], [0.6666666666666666, '#ed7953'], [0.7777777777777778, '#fb9f3a'], [0.8888888888888888, '#fdca26'], [1.0, '#f0f921']], 'type': 'contour'}], 'heatmapgl': [{'colorbar': {'outlinewidth': 0, 'ticks': ''}, 'colorscale': [[0.0, '#0d0887'], [0.1111111111111111, '#46039f'], [0.2222222222222222, '#7201a8'], [0.3333333333333333, '#9c179e'], [0.4444444444444444, '#bd3786'], [0.5555555555555556, '#d8576b'], [0.6666666666666666, '#ed7953'], [0.7777777777777778, '#fb9f3a'], [0.8888888888888888, '#fdca26'], [1.0, '#f0f921']], 'type': 'heatmapgl'}], 'heatmap': [{'colorbar': {'outlinewidth': 0, 'ticks': ''}, 'colorscale': [[0.0, '#0d0887'], [0.1111111111111111, '#46039f'], [0.2222222222222222, '#7201a8'], [0.3333333333333333, '#9c179e'], [0.4444444444444444, '#bd3786'], 
[0.5555555555555556, '#d8576b'], [0.6666666666666666, '#ed7953'], [0.7777777777777778, '#fb9f3a'], [0.8888888888888888, '#fdca26'], [1.0, '#f0f921']], 'type': 'heatmap'}], 'histogram2dcontour': [{'colorbar': {'outlinewidth': 0, 'ticks': ''}, 'colorscale': [[0.0, '#0d0887'], [0.1111111111111111, '#46039f'], [0.2222222222222222, '#7201a8'], [0.3333333333333333, '#9c179e'], [0.4444444444444444, '#bd3786'], [0.5555555555555556, '#d8576b'], [0.6666666666666666, '#ed7953'], [0.7777777777777778, '#fb9f3a'], [0.8888888888888888, '#fdca26'], [1.0, '#f0f921']], 'type': 'histogram2dcontour'}], 'histogram2d': [{'colorbar': {'outlinewidth': 0, 'ticks': ''}, 'colorscale': [[0.0, '#0d0887'], [0.1111111111111111, '#46039f'], [0.2222222222222222, '#7201a8'], [0.3333333333333333, '#9c179e'], [0.4444444444444444, '#bd3786'], [0.5555555555555556, '#d8576b'], [0.6666666666666666, '#ed7953'], [0.7777777777777778, '#fb9f3a'], [0.8888888888888888, '#fdca26'], [1.0, '#f0f921']], 'type': 'histogram2d'}], 'histogram': [{'marker': {'pattern': {'fillmode': 'overlay', 'size': 10, 'solidity': 0.2}}, 'type': 'histogram'}], 'mesh3d': [{'colorbar': {'outlinewidth': 0, 'ticks': ''}, 'type': 'mesh3d'}], 'parcoords': [{'line': {'colorbar': {'outlinewidth': 0, 'ticks': ''}}, 'type': 'parcoords'}], 'pie': [{'automargin': True, 'type': 'pie'}], 'scatter3d': [{'line': {'colorbar': {'outlinewidth': 0, 'ticks': ''}}, 'marker': {'colorbar': {'outlinewidth': 0, 'ticks': ''}}, 'type': 'scatter3d'}], 'scattercarpet': [{'marker': {'colorbar': {'outlinewidth': 0, 'ticks': ''}}, 'type': 'scattercarpet'}], 'scattergeo': [{'marker': {'colorbar': {'outlinewidth': 0, 'ticks': ''}}, 'type': 'scattergeo'}], 'scattergl': [{'marker': {'colorbar': {'outlinewidth': 0, 'ticks': ''}}, 'type': 'scattergl'}], 'scattermapbox': [{'marker': {'colorbar': {'outlinewidth': 0, 'ticks': ''}}, 'type': 'scattermapbox'}], 'scatterpolargl': [{'marker': {'colorbar': {'outlinewidth': 0, 'ticks': ''}}, 'type': 'scatterpolargl'}], 
'scatterpolar': [{'marker': {'colorbar': {'outlinewidth': 0, 'ticks': ''}}, 'type': 'scatterpolar'}], 'scatter': [{'marker': {'colorbar': {'outlinewidth': 0, 'ticks': ''}}, 'type': 'scatter'}], 'scatterternary': [{'marker': {'colorbar': {'outlinewidth': 0, 'ticks': ''}}, 'type': 'scatterternary'}], 'surface': [{'colorbar': {'outlinewidth': 0, 'ticks': ''}, 'colorscale': [[0.0, '#0d0887'], [0.1111111111111111, '#46039f'], [0.2222222222222222, '#7201a8'], [0.3333333333333333, '#9c179e'], [0.4444444444444444, '#bd3786'], [0.5555555555555556, '#d8576b'], [0.6666666666666666, '#ed7953'], [0.7777777777777778, '#fb9f3a'], [0.8888888888888888, '#fdca26'], [1.0, '#f0f921']], 'type': 'surface'}], 'table': [{'cells': {'fill': {'color': '#EBF0F8'}, 'line': {'color': 'white'}}, 'header': {'fill': {'color': '#C8D4E3'}, 'line': {'color': 'white'}}, 'type': 'table'}]}, 'layout': {'annotationdefaults': {'arrowcolor': '#2a3f5f', 'arrowhead': 0, 'arrowwidth': 1}, 'autotypenumbers': 'strict', 'coloraxis': {'colorbar': {'outlinewidth': 0, 'ticks': ''}}, 'colorscale': {'diverging': [[0, '#8e0152'], [0.1, '#c51b7d'], [0.2, '#de77ae'], [0.3, '#f1b6da'], [0.4, '#fde0ef'], [0.5, '#f7f7f7'], [0.6, '#e6f5d0'], [0.7, '#b8e186'], [0.8, '#7fbc41'], [0.9, '#4d9221'], [1, '#276419']], 'sequential': [[0.0, '#0d0887'], [0.1111111111111111, '#46039f'], [0.2222222222222222, '#7201a8'], [0.3333333333333333, '#9c179e'], [0.4444444444444444, '#bd3786'], [0.5555555555555556, '#d8576b'], [0.6666666666666666, '#ed7953'], [0.7777777777777778, '#fb9f3a'], [0.8888888888888888, '#fdca26'], [1.0, '#f0f921']], 'sequentialminus': [[0.0, '#0d0887'], [0.1111111111111111, '#46039f'], [0.2222222222222222, '#7201a8'], [0.3333333333333333, '#9c179e'], [0.4444444444444444, '#bd3786'], [0.5555555555555556, '#d8576b'], [0.6666666666666666, '#ed7953'], [0.7777777777777778, '#fb9f3a'], [0.8888888888888888, '#fdca26'], [1.0, '#f0f921']]}, 'colorway': ['#636efa', '#EF553B', '#00cc96', '#ab63fa', '#FFA15A', '#19d3f3', 
'#FF6692', '#B6E880', '#FF97FF', '#FECB52'], 'font': {'color': '#2a3f5f'}, 'geo': {'bgcolor': 'white', 'lakecolor': 'white', 'landcolor': '#E5ECF6', 'showlakes': True, 'showland': True, 'subunitcolor': 'white'}, 'hoverlabel': {'align': 'left'}, 'hovermode': 'closest', 'mapbox': {'style': 'light'}, 'paper_bgcolor': 'white', 'plot_bgcolor': '#E5ECF6', 'polar': {'angularaxis': {'gridcolor': 'white', 'linecolor': 'white', 'ticks': ''}, 'bgcolor': '#E5ECF6', 'radialaxis': {'gridcolor': 'white', 'linecolor': 'white', 'ticks': ''}}, 'scene': {'xaxis': {'backgroundcolor': '#E5ECF6', 'gridcolor': 'white', 'gridwidth': 2, 'linecolor': 'white', 'showbackground': True, 'ticks': '', 'zerolinecolor': 'white'}, 'yaxis': {'backgroundcolor': '#E5ECF6', 'gridcolor': 'white', 'gridwidth': 2, 'linecolor': 'white', 'showbackground': True, 'ticks': '', 'zerolinecolor': 'white'}, 'zaxis': {'backgroundcolor': '#E5ECF6', 'gridcolor': 'white', 'gridwidth': 2, 'linecolor': 'white', 'showbackground': True, 'ticks': '', 'zerolinecolor': 'white'}}, 'shapedefaults': {'line': {'color': '#2a3f5f'}}, 'ternary': {'aaxis': {'gridcolor': 'white', 'linecolor': 'white', 'ticks': ''}, 'baxis': {'gridcolor': 'white', 'linecolor': 'white', 'ticks': ''}, 'bgcolor': '#E5ECF6', 'caxis': {'gridcolor': 'white', 'linecolor': 'white', 'ticks': ''}}, 'title': {'x': 0.05}, 'xaxis': {'automargin': True, 'gridcolor': 'white', 'linecolor': 'white', 'ticks': '', 'title': {'standoff': 15}, 'zerolinecolor': 'white', 'zerolinewidth': 2}, 'yaxis': {'automargin': True, 'gridcolor': 'white', 'linecolor': 'white', 'ticks': '', 'title': {'standoff': 15}, 'zerolinecolor': 'white', 'zerolinewidth': 2}}}}, 'layout_edit_id': 1, 'source_view_id': None}}, 'buffer_paths': []}, 'comm_id': '0bdc5f3194814186b8f7fb4bd2aa0b42'}
[Voila] msg_type: comm_msg
[Voila] content: {'data': {'method': 'update', 'state': {'_py2js_relayout': None}, 'buffer_paths': []}, 'comm_id': '0bdc5f3194814186b8f7fb4bd2aa0b42'}
[Voila] msg_type: comm_msg
[Voila] content: {'data': {'method': 'update', 'state': {'_last_layout_edit_id': 0}, 'buffer_paths': []}, 'comm_id': '0bdc5f3194814186b8f7fb4bd2aa0b42'}
[Voila] msg_type: comm_msg
[Voila] content: {'data': {'method': 'update', 'state': {'_view_count': 0}, 'buffer_paths': []}, 'comm_id': '0bdc5f3194814186b8f7fb4bd2aa0b42'}
[Voila] msg_type: display_data
[Voila] content: {'data': {'text/plain': "FigureWidget({\n 'data': [], 'layout': {'template': '...'}\n})", 'application/vnd.jupyter.widget-view+json': {'version_major': 2, 'version_minor': 0, 'model_id': '0bdc5f3194814186b8f7fb4bd2aa0b42'}}, 'metadata': {}, 'transient': {}}
[Voila] msg_type: status
[Voila] content: {'execution_state': 'idle'}
[Voila] Skipping non-executing cell 2
[Voila] Path ipysheet/extension.js served from /home/pbakker/.pyenv/versions/3.8.7/envs/jl/share/jupyter/nbextensions/ipysheet/extension.js
[Voila] Path jupyterlab-plotly/extension.js served from /home/pbakker/.pyenv/versions/3.8.7/envs/jl/share/jupyter/nbextensions/jupyterlab-plotly/extension.js
[Voila] Initializing websocket connection /api/kernels/05d7eb09-dc78-4b5e-af11-2409d3f08fec/channels
[Voila] Requesting kernel info from 05d7eb09-dc78-4b5e-af11-2409d3f08fec
[Voila] Connecting to: tcp://127.0.0.1:47031
[Voila] activity on 05d7eb09-dc78-4b5e-af11-2409d3f08fec: status (busy)
[Voila] activity on 05d7eb09-dc78-4b5e-af11-2409d3f08fec: status (idle)
[Voila] Received kernel info: {'status': 'ok', 'protocol_version': '5.3', 'implementation': 'ipython', 'implementation_version': '7.28.0', 'language_info': {'name': 'python', 'version': '3.8.7', 'mimetype': 'text/x-python', 'codemirror_mode': {'name': 'ipython', 'version': 3}, 'pygments_lexer': 'ipython3', 'nbconvert_exporter': 'python', 'file_extension': '.py'}, 'banner': "Python 3.8.7 (default, May 20 2021, 12:43:16) \nType 'copyright', 'credits' or 'license' for more information\nIPython 7.28.0 -- An enhanced Interactive Python. Type '?' for help.\n", 'help_links': [{'text': 'Python Reference', 'url': 'https://docs.python.org/3.8'}, {'text': 'IPython Reference', 'url': 'https://ipython.org/documentation.html'}, {'text': 'NumPy Reference', 'url': 'https://docs.scipy.org/doc/numpy/reference/'}, {'text': 'SciPy Reference', 'url': 'https://docs.scipy.org/doc/scipy/reference/'}, {'text': 'Matplotlib Reference', 'url': 'https://matplotlib.org/contents.html'}, {'text': 'SymPy Reference', 'url': 'http://docs.sympy.org/latest/index.html'}, {'text': 'pandas Reference', 'url': 'https://pandas.pydata.org/pandas-docs/stable/'}]}
[Voila] Opening websocket /api/kernels/05d7eb09-dc78-4b5e-af11-2409d3f08fec/channels
[Voila] Getting buffer for 05d7eb09-dc78-4b5e-af11-2409d3f08fec
[Voila] Connecting to: tcp://127.0.0.1:52963
[Voila] Connecting to: tcp://127.0.0.1:47031
[Voila] Connecting to: tcp://127.0.0.1:50819
[Voila] Connecting to: tcp://127.0.0.1:35627
[Voila] Connecting to: tcp://127.0.0.1:47031
[Voila] Nudge: attempt 1 on kernel 05d7eb09-dc78-4b5e-af11-2409d3f08fec
[Voila] activity on 05d7eb09-dc78-4b5e-af11-2409d3f08fec: status (busy)
[Voila] activity on 05d7eb09-dc78-4b5e-af11-2409d3f08fec: status (idle)
[Voila] Nudge: IOPub received: 05d7eb09-dc78-4b5e-af11-2409d3f08fec
[Voila] Nudge: resolving iopub future: 05d7eb09-dc78-4b5e-af11-2409d3f08fec
[Voila] Nudge: shell info reply received: 05d7eb09-dc78-4b5e-af11-2409d3f08fec
[Voila] Nudge: resolving shell future: 05d7eb09-dc78-4b5e-af11-2409d3f08fec
[Voila] activity on 05d7eb09-dc78-4b5e-af11-2409d3f08fec: status (busy)
[Voila] activity on 05d7eb09-dc78-4b5e-af11-2409d3f08fec: status (idle)
[Voila] activity on 05d7eb09-dc78-4b5e-af11-2409d3f08fec: status (busy)
[Voila] activity on 05d7eb09-dc78-4b5e-af11-2409d3f08fec: status (idle)
[Voila] activity on 05d7eb09-dc78-4b5e-af11-2409d3f08fec: status (busy)
[Voila] activity on 05d7eb09-dc78-4b5e-af11-2409d3f08fec: comm_msg
[Voila] activity on 05d7eb09-dc78-4b5e-af11-2409d3f08fec: status (idle)
[Voila] activity on 05d7eb09-dc78-4b5e-af11-2409d3f08fec: status (busy)
[Voila] activity on 05d7eb09-dc78-4b5e-af11-2409d3f08fec: comm_msg
[Voila] activity on 05d7eb09-dc78-4b5e-af11-2409d3f08fec: status (idle)
[Voila] activity on 05d7eb09-dc78-4b5e-af11-2409d3f08fec: status (busy)
[Voila] activity on 05d7eb09-dc78-4b5e-af11-2409d3f08fec: comm_msg
[Voila] activity on 05d7eb09-dc78-4b5e-af11-2409d3f08fec: status (idle)
[Voila] Path jupyterlab-plotly/index.js served from /home/pbakker/.pyenv/versions/3.8.7/envs/jl/share/jupyter/nbextensions/jupyterlab-plotly/index.js
WARNING:tornado.access:404 GET /voila/files/voila/k3d.js (127.0.0.1) 0.48ms
[Voila] activity on 05d7eb09-dc78-4b5e-af11-2409d3f08fec: status (busy)
[Voila] activity on 05d7eb09-dc78-4b5e-af11-2409d3f08fec: status (idle)
[Voila] activity on 05d7eb09-dc78-4b5e-af11-2409d3f08fec: status (busy)
[Voila] activity on 05d7eb09-dc78-4b5e-af11-2409d3f08fec: comm_msg
[Voila] activity on 05d7eb09-dc78-4b5e-af11-2409d3f08fec: comm_msg
[Voila] activity on 05d7eb09-dc78-4b5e-af11-2409d3f08fec: status (idle)
[Voila] activity on 05d7eb09-dc78-4b5e-af11-2409d3f08fec: status (busy)
[Voila] activity on 05d7eb09-dc78-4b5e-af11-2409d3f08fec: status (idle)
[Voila] activity on 05d7eb09-dc78-4b5e-af11-2409d3f08fec: status (busy)
[Voila] activity on 05d7eb09-dc78-4b5e-af11-2409d3f08fec: comm_msg
[Voila] activity on 05d7eb09-dc78-4b5e-af11-2409d3f08fec: comm_msg
[Voila] activity on 05d7eb09-dc78-4b5e-af11-2409d3f08fec: comm_msg
[Voila] activity on 05d7eb09-dc78-4b5e-af11-2409d3f08fec: comm_msg
[Voila] activity on 05d7eb09-dc78-4b5e-af11-2409d3f08fec: status (idle)
[Voila] activity on 05d7eb09-dc78-4b5e-af11-2409d3f08fec: status (busy)
[Voila] activity on 05d7eb09-dc78-4b5e-af11-2409d3f08fec: comm_msg
[Voila] activity on 05d7eb09-dc78-4b5e-af11-2409d3f08fec: status (idle)
[Voila] Paths used for configuration of notebook:
/etc/jupyter/nbconfig/notebook.json
[Voila] Paths used for configuration of notebook:
/usr/local/etc/jupyter/nbconfig/notebook.json
[Voila] Paths used for configuration of notebook:
/home/pbakker/.pyenv/versions/3.8.7/envs/jl/etc/jupyter/nbconfig/notebook.d/ipysheet.json
/home/pbakker/.pyenv/versions/3.8.7/envs/jl/etc/jupyter/nbconfig/notebook.d/jupyterlab-plotly.json
/home/pbakker/.pyenv/versions/3.8.7/envs/jl/etc/jupyter/nbconfig/notebook.d/voila.json
/home/pbakker/.pyenv/versions/3.8.7/envs/jl/etc/jupyter/nbconfig/notebook.d/widgetsnbextension.json
/home/pbakker/.pyenv/versions/3.8.7/envs/jl/etc/jupyter/nbconfig/notebook.json
[Voila] Paths used for configuration of notebook:
/home/pbakker/.jupyter/nbconfig/notebook.json
[Voila] Paths used for configuration of notebook:
/etc/jupyter/serverconfig/notebook.json
[Voila] Paths used for configuration of notebook:
/usr/local/etc/jupyter/serverconfig/notebook.json
[Voila] Paths used for configuration of notebook:
/home/pbakker/.pyenv/versions/3.8.7/envs/jl/etc/jupyter/serverconfig/notebook.json
[Voila] Paths used for configuration of notebook:
/home/pbakker/.jupyter/serverconfig/notebook.json
[Voila] WARNING | Notebook demonstration.ipynb is not trusted
[Voila] Found kernel python3 in /home/pbakker/.pyenv/versions/3.8.7/envs/jl/share/jupyter/kernels
[Voila] Template paths:
/home/pbakker/.pyenv/versions/3.8.7/envs/jl/share/jupyter/voila/templates/lab
/home/pbakker/.pyenv/versions/3.8.7/envs/jl/share/jupyter/nbconvert/templates/lab
/home/pbakker/.pyenv/versions/3.8.7/envs/jl/share/jupyter/voila/templates/base
/home/pbakker/.pyenv/versions/3.8.7/envs/jl/share/jupyter/nbconvert/templates/base
/home/pbakker/.local/share/jupyter
/home/pbakker/.local/share/jupyter/voila/templates
/home/pbakker/.local/share/jupyter/nbconvert/templates
/home/pbakker/.pyenv/versions/3.8.7/envs/jl/share/jupyter
/home/pbakker/.pyenv/versions/3.8.7/envs/jl/share/jupyter/voila/templates
/home/pbakker/.pyenv/versions/3.8.7/envs/jl/share/jupyter/nbconvert/templates
/usr/local/share/jupyter
/usr/local/share/jupyter/voila/templates
/usr/local/share/jupyter/nbconvert/templates
/usr/share/jupyter
/usr/share/jupyter/voila/templates
/usr/share/jupyter/nbconvert/templates
[Voila] Applying preprocessor: TagRemovePreprocessor
[Voila] Applying preprocessor: RegexRemovePreprocessor
[Voila] Applying preprocessor: coalesce_streams
[Voila] Applying preprocessor: HighlightMagicsPreprocessor
[Voila] Applying preprocessor: CSSHTMLHeaderPreprocessor
[Voila] Attempting to load template index.html.j2
[Voila] template_paths: /home/pbakker/.pyenv/versions/3.8.7/envs/jl/share/jupyter/voila/templates/lab:/home/pbakker/.pyenv/versions/3.8.7/envs/jl/share/jupyter/nbconvert/templates/lab:/home/pbakker/.pyenv/versions/3.8.7/envs/jl/share/jupyter/voila/templates/base:/home/pbakker/.pyenv/versions/3.8.7/envs/jl/share/jupyter/nbconvert/templates/base:/home/pbakker/.local/share/jupyter:/home/pbakker/.local/share/jupyter/voila/templates:/home/pbakker/.local/share/jupyter/nbconvert/templates:/home/pbakker/.pyenv/versions/3.8.7/envs/jl/share/jupyter:/home/pbakker/.pyenv/versions/3.8.7/envs/jl/share/jupyter/voila/templates:/home/pbakker/.pyenv/versions/3.8.7/envs/jl/share/jupyter/nbconvert/templates:/usr/local/share/jupyter:/usr/local/share/jupyter/voila/templates:/usr/local/share/jupyter/nbconvert/templates:/usr/share/jupyter:/usr/share/jupyter/voila/templates:/usr/share/jupyter/nbconvert/templates
[Voila] Starting kernel (async): ['/home/pbakker/.pyenv/versions/3.8.7/envs/jl/bin/python3.8', '-m', 'ipykernel_launcher', '-f', '/tmp/voila_92yemngm/kernel-49256b08-1855-4c4a-8776-e1908410e475.json']
[Voila] Connecting to: tcp://127.0.0.1:49579
[Voila] Connecting to: tcp://127.0.0.1:41835
[Voila] Kernel started: 49256b08-1855-4c4a-8776-e1908410e475
[Voila] Kernel args: {'kernel_name': 'python3', 'env': {'SHELL': '/bin/bash', 'SESSION_MANAGER': 'local/pbakker-ThinkPad-E14-Gen-2:@/tmp/.ICE-unix/1983,unix/pbakker-ThinkPad-E14-Gen-2:/tmp/.ICE-unix/1983', 'PYENV_HOOK_PATH': '/home/pbakker/.pyenv/pyenv.d:/usr/local/etc/pyenv.d:/etc/pyenv.d:/usr/lib/pyenv/hooks:/home/pbakker/.pyenv/plugins/pyenv-virtualenv/etc/pyenv.d', 'QT_ACCESSIBILITY': '1', 'COLORTERM': 'truecolor', 'PYENV_SHELL': 'bash', 'XDG_CONFIG_DIRS': '/etc/xdg/xdg-ubuntu:/etc/xdg', 'PYENV_ACTIVATE_SHELL': '1', 'XDG_MENU_PREFIX': 'gnome-', 'GNOME_DESKTOP_SESSION_ID': 'this-is-deprecated', 'SENTRY_ENVIRONMENT': 'production', 'LC_ADDRESS': 'nl_NL.UTF-8', 'GNOME_SHELL_SESSION_MODE': 'ubuntu', 'LC_NAME': 'nl_NL.UTF-8', 'SSH_AUTH_SOCK': '/run/user/1000/keyring/ssh', 'PYENV_VIRTUALENV_DISABLE_PROMPT': '1', 'XMODIFIERS': '@im=ibus', 'DESKTOP_SESSION': 'ubuntu', 'LC_MONETARY': 'nl_NL.UTF-8', 'SSH_AGENT_PID': '1941', 'PYENV_VERSION': 'jl', 'GTK_MODULES': 'gail:atk-bridge', 'PWD': '/home/pbakker/eal-code/susipop', 'XDG_SESSION_DESKTOP': 'ubuntu', 'LOGNAME': 'pbakker', 'XDG_SESSION_TYPE': 'x11', 'GPG_AGENT_INFO': '/run/user/1000/gnupg/S.gpg-agent:0:1', 'SUSIPOP': '/home/pbakker/eal-code/susipop/', 'XAUTHORITY': '/run/user/1000/gdm/Xauthority', 'WINDOWPATH': '2', 'HOME': '/home/pbakker', 'USERNAME': 'pbakker', 'SENTRY_DSN': 'https://4573f501f5944ec6939fc5f09c74a605@sentry.dev.rheocube.net/7', 'IM_CONFIG_PHASE': '1', 'LANG': 'en_US.UTF-8', 'LC_PAPER': 'nl_NL.UTF-8', 'USER_UUID': '1b70869c-cdeb-42e4-8318-cfd692284998', 'LS_COLORS': 
'rs=0:di=01;34:ln=01;36:mh=00:pi=40;33:so=01;35:do=01;35:bd=40;33;01:cd=40;33;01:or=40;31;01:mi=00:su=37;41:sg=30;43:ca=30;41:tw=30;42:ow=34;42:st=37;44:ex=01;32:*.tar=01;31:*.tgz=01;31:*.arc=01;31:*.arj=01;31:*.taz=01;31:*.lha=01;31:*.lz4=01;31:*.lzh=01;31:*.lzma=01;31:*.tlz=01;31:*.txz=01;31:*.tzo=01;31:*.t7z=01;31:*.zip=01;31:*.z=01;31:*.dz=01;31:*.gz=01;31:*.lrz=01;31:*.lz=01;31:*.lzo=01;31:*.xz=01;31:*.zst=01;31:*.tzst=01;31:*.bz2=01;31:*.bz=01;31:*.tbz=01;31:*.tbz2=01;31:*.tz=01;31:*.deb=01;31:*.rpm=01;31:*.jar=01;31:*.war=01;31:*.ear=01;31:*.sar=01;31:*.rar=01;31:*.alz=01;31:*.ace=01;31:*.zoo=01;31:*.cpio=01;31:*.7z=01;31:*.rz=01;31:*.cab=01;31:*.wim=01;31:*.swm=01;31:*.dwm=01;31:*.esd=01;31:*.jpg=01;35:*.jpeg=01;35:*.mjpg=01;35:*.mjpeg=01;35:*.gif=01;35:*.bmp=01;35:*.pbm=01;35:*.pgm=01;35:*.ppm=01;35:*.tga=01;35:*.xbm=01;35:*.xpm=01;35:*.tif=01;35:*.tiff=01;35:*.png=01;35:*.svg=01;35:*.svgz=01;35:*.mng=01;35:*.pcx=01;35:*.mov=01;35:*.mpg=01;35:*.mpeg=01;35:*.m2v=01;35:*.mkv=01;35:*.webm=01;35:*.ogm=01;35:*.mp4=01;35:*.m4v=01;35:*.mp4v=01;35:*.vob=01;35:*.qt=01;35:*.nuv=01;35:*.wmv=01;35:*.asf=01;35:*.rm=01;35:*.rmvb=01;35:*.flc=01;35:*.avi=01;35:*.fli=01;35:*.flv=01;35:*.gl=01;35:*.dl=01;35:*.xcf=01;35:*.xwd=01;35:*.yuv=01;35:*.cgm=01;35:*.emf=01;35:*.ogv=01;35:*.ogx=01;35:*.aac=00;36:*.au=00;36:*.flac=00;36:*.m4a=00;36:*.mid=00;36:*.midi=00;36:*.mka=00;36:*.mp3=00;36:*.mpc=00;36:*.ogg=00;36:*.ra=00;36:*.wav=00;36:*.oga=00;36:*.opus=00;36:*.spx=00;36:*.xspf=00;36:', 'GL_TOKEN_PASS': '7eosPvdBZgEE_GR66pW4', 'XDG_CURRENT_DESKTOP': 'ubuntu:GNOME', 'VIRTUAL_ENV': '/home/pbakker/.pyenv/versions/3.8.7/envs/jl', 'GL_TOKEN_NAME': 'gitlab-pypi', 'STARSHIP_SHELL': 'bash', 'VTE_VERSION': '6003', 'PIP_EXTRA_INDEX_URL': 'https://gitlab-pypi:7eosPvdBZgEE_GR66pW4@gitlab.com/api/v4/projects/27162757/packages/pypi/simple', 'GNOME_TERMINAL_SCREEN': '/org/gnome/Terminal/screen/9261fcca_c6ce_4f7f_82b1_bbc3139c36f0', 'PYBIND_HEADERS': 
'/home/pbakker/.pyenv/versions/3.8.7/envs/susipop/lib/python3.8/site-packages/pybind11/include/pybind11/', 'INVOCATION_ID': '482a30f49db54d219192654ec7e65b39', 'MANAGERPID': '1720', 'PYENV_DIR': '/home/pbakker/eal-code/susipop', 'STARSHIP_SESSION_KEY': '7542181522662230', 'LESSCLOSE': '/usr/bin/lesspipe %s %s', 'XDG_SESSION_CLASS': 'user', 'LC_IDENTIFICATION': 'nl_NL.UTF-8', 'TERM': 'xterm-256color', 'LESSOPEN': '| /usr/bin/lesspipe %s', 'USER': 'pbakker', 'GNOME_TERMINAL_SERVICE': ':1.108', 'DISPLAY': ':0', 'PYENV_VIRTUAL_ENV': '/home/pbakker/.pyenv/versions/3.8.7/envs/jl', 'SHLVL': '1', 'LC_TELEPHONE': 'nl_NL.UTF-8', 'QT_IM_MODULE': 'ibus', 'LC_MEASUREMENT': 'nl_NL.UTF-8', 'PYV': '/home/pbakker/.pyenv/versions/susipop/lib/python3.8/site-packages', 'XDG_RUNTIME_DIR': '/run/user/1000', 'JIRA_ID': '609a8973b050a70069960b79', 'PYENV_ROOT': '/home/pbakker/.pyenv', 'LC_TIME': 'nl_NL.UTF-8', 'EMAIL': 'p.bakker@electricant.com', 'JOURNAL_STREAM': '8:54159', 'XDG_DATA_DIRS': '/usr/share/ubuntu:/usr/local/share/:/usr/share/:/var/lib/snapd/desktop', 'PATH': '/home/pbakker/.pyenv/versions/jl/bin:/home/pbakker/.pyenv/libexec:/home/pbakker/.pyenv/plugins/python-build/bin:/home/pbakker/.pyenv/plugins/pyenv-virtualenv/bin:/home/pbakker/.pyenv/plugins/python-build/bin:/home/pbakker/.pyenv/plugins/pyenv-virtualenv/bin:/home/pbakker/.pyenv/shims:/home/pbakker/.pyenv/bin:/home/pbakker/.local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin', 'GDMSESSION': 'ubuntu', 'GITLAB_TOKEN': 'us2xg7sF2jfwbd5BssGS', 'DBUS_SESSION_BUS_ADDRESS': 'unix:path=/run/user/1000/bus', 'PYPI_REPO_ID': '27162757', 'LC_NUMERIC': 'nl_NL.UTF-8', 'OLDPWD': '/home/pbakker', 'PYDEVD_USE_FRAME_EVAL': 'NO', 'SCRIPT_NAME': '/', 'PATH_INFO': '', 'QUERY_STRING': '', 'SERVER_SOFTWARE': 'voila/0.2.16', 'SERVER_PROTOCOL': 'HTTP/1.1', 'SERVER_PORT': '8866', 'SERVER_NAME': 'localhost'}, 'cwd': '/home/pbakker/eal-code/susipop/postproc_notebook'}
[Voila] connecting iopub channel to tcp://127.0.0.1:41835
[Voila] Connecting to: tcp://127.0.0.1:41835
[Voila] connecting shell channel to tcp://127.0.0.1:43555
[Voila] Connecting to: tcp://127.0.0.1:43555
[Voila] connecting stdin channel to tcp://127.0.0.1:44063
[Voila] Connecting to: tcp://127.0.0.1:44063
[Voila] connecting heartbeat channel to tcp://127.0.0.1:33609
[Voila] connecting control channel to tcp://127.0.0.1:49579
[Voila] Connecting to: tcp://127.0.0.1:49579
[Voila] activity on 49256b08-1855-4c4a-8776-e1908410e475: status (starting)
[Voila] activity on 49256b08-1855-4c4a-8776-e1908410e475: status (busy)
[Voila] activity on 49256b08-1855-4c4a-8776-e1908410e475: status (idle)
[Voila] Executing cell:
import k3d
k3d.plot()
[Voila] activity on 49256b08-1855-4c4a-8776-e1908410e475: status (busy)
[Voila] activity on 49256b08-1855-4c4a-8776-e1908410e475: execute_input
[Voila] msg_type: status
[Voila] content: {'execution_state': 'busy'}
[Voila] msg_type: execute_input
[Voila] content: {'code': 'import k3d\nk3d.plot()', 'execution_count': 1}
[Voila] activity on 49256b08-1855-4c4a-8776-e1908410e475: comm_open
[Voila] activity on 49256b08-1855-4c4a-8776-e1908410e475: comm_open
[Voila] activity on 49256b08-1855-4c4a-8776-e1908410e475: comm_msg
[Voila] activity on 49256b08-1855-4c4a-8776-e1908410e475: comm_msg
[Voila] activity on 49256b08-1855-4c4a-8776-e1908410e475: comm_msg
[Voila] msg_type: comm_open
[Voila] content: {'data': {'state': {'_model_module': '@jupyter-widgets/base', '_model_module_version': '1.2.0', '_model_name': 'LayoutModel', '_view_count': None, '_view_module': '@jupyter-widgets/base', '_view_module_version': '1.2.0', '_view_name': 'LayoutView', 'align_content': None, 'align_items': None, 'align_self': None, 'border': None, 'bottom': None, 'display': None, 'flex': None, 'flex_flow': None, 'grid_area': None, 'grid_auto_columns': None, 'grid_auto_flow': None, 'grid_auto_rows': None, 'grid_column': None, 'grid_gap': None, 'grid_row': None, 'grid_template_areas': None, 'grid_template_columns': None, 'grid_template_rows': None, 'height': None, 'justify_content': None, 'justify_items': None, 'left': None, 'margin': None, 'max_height': None, 'max_width': None, 'min_height': None, 'min_width': None, 'object_fit': None, 'object_position': None, 'order': None, 'overflow': None, 'overflow_x': None, 'overflow_y': None, 'padding': None, 'right': None, 'top': None, 'visibility': None, 'width': None}, 'buffer_paths': []}, 'comm_id': 'f41852f7db9e475d831124ddd975d7d1', 'target_name': 'jupyter.widget', 'target_module': None}
[Voila] activity on 49256b08-1855-4c4a-8776-e1908410e475: comm_msg
[Voila] activity on 49256b08-1855-4c4a-8776-e1908410e475: comm_msg
[Voila] activity on 49256b08-1855-4c4a-8776-e1908410e475: comm_msg
[Voila] msg_type: comm_open
[Voila] content: {'data': {'state': {'_backend_version': '2.11.0', '_dom_classes': [], '_model_module': 'k3d', '_model_module_version': '2.11.0', '_model_name': 'PlotModel', '_view_count': None, '_view_module': 'k3d', '_view_module_version': '2.11.0', '_view_name': 'PlotView', 'antialias': 0, 'auto_rendering': True, 'axes': ['x', 'y', 'z'], 'axes_helper': 0.0, 'background_color': 0, 'camera': [], 'camera_animation': [], 'camera_auto_fit': True, 'camera_damping_factor': 0.0, 'camera_fov': 0.0, 'camera_mode': '', 'camera_no_pan': False, 'camera_no_rotate': False, 'camera_no_zoom': False, 'camera_pan_speed': 0.0, 'camera_rotate_speed': 0.0, 'camera_zoom_speed': 0.0, 'clipping_planes': [], 'colorbar_object_id': -1, 'colorbar_scientific': False, 'fps': 0.0, 'fps_meter': True, 'grid': [-1, -1, -1, 1, 1, 1], 'grid_auto_fit': True, 'grid_color': 0, 'grid_visible': True, 'height': 0, 'label_color': 0, 'layout': 'IPY_MODEL_f41852f7db9e475d831124ddd975d7d1', 'lighting': 0.0, 'manipulate_mode': '', 'menu_visibility': True, 'mode': '', 'name': None, 'object_ids': [], 'rendering_steps': 1, 'screenshot': '', 'screenshot_scale': 0.0, 'snapshot': '', 'snapshot_type': '', 'time': 0.0, 'voxel_paint_color': 0}, 'buffer_paths': []}, 'comm_id': '72f279fcc94e4531be7a3654fc9fc920', 'target_name': 'jupyter.widget', 'target_module': None}
[Voila] activity on 49256b08-1855-4c4a-8776-e1908410e475: comm_msg
[Voila] activity on 49256b08-1855-4c4a-8776-e1908410e475: comm_msg
[Voila] activity on 49256b08-1855-4c4a-8776-e1908410e475: comm_msg
[Voila] msg_type: comm_msg
[Voila] content: {'data': {'method': 'update', 'state': {'antialias': 3}, 'buffer_paths': []}, 'comm_id': '72f279fcc94e4531be7a3654fc9fc920'}
[Voila] activity on 49256b08-1855-4c4a-8776-e1908410e475: comm_msg
[Voila] activity on 49256b08-1855-4c4a-8776-e1908410e475: comm_msg
[Voila] activity on 49256b08-1855-4c4a-8776-e1908410e475: comm_msg
[Voila] msg_type: comm_msg
[Voila] content: {'data': {'method': 'update', 'state': {'fps_meter': False}, 'buffer_paths': []}, 'comm_id': '72f279fcc94e4531be7a3654fc9fc920'}
[Voila] activity on 49256b08-1855-4c4a-8776-e1908410e475: comm_msg
[Voila] activity on 49256b08-1855-4c4a-8776-e1908410e475: comm_msg
[Voila] activity on 49256b08-1855-4c4a-8776-e1908410e475: comm_msg
[Voila] msg_type: comm_msg
[Voila] content: {'data': {'method': 'update', 'state': {'fps': 25.0}, 'buffer_paths': []}, 'comm_id': '72f279fcc94e4531be7a3654fc9fc920'}
[Voila] activity on 49256b08-1855-4c4a-8776-e1908410e475: comm_msg
[Voila] activity on 49256b08-1855-4c4a-8776-e1908410e475: comm_msg
[Voila] activity on 49256b08-1855-4c4a-8776-e1908410e475: comm_msg
[Voila] msg_type: comm_msg
[Voila] content: {'data': {'method': 'update', 'state': {'background_color': 16777215}, 'buffer_paths': []}, 'comm_id': '72f279fcc94e4531be7a3654fc9fc920'}
[Voila] activity on 49256b08-1855-4c4a-8776-e1908410e475: display_data
[Voila] activity on 49256b08-1855-4c4a-8776-e1908410e475: status (idle)
[Voila] msg_type: comm_msg
[Voila] content: {'data': {'method': 'update', 'state': {'grid_color': 15132390}, 'buffer_paths': []}, 'comm_id': '72f279fcc94e4531be7a3654fc9fc920'}
[Voila] msg_type: comm_msg
[Voila] content: {'data': {'method': 'update', 'state': {'label_color': 4473924}, 'buffer_paths': []}, 'comm_id': '72f279fcc94e4531be7a3654fc9fc920'}
[Voila] msg_type: comm_msg
[Voila] content: {'data': {'method': 'update', 'state': {'screenshot_scale': 2.0}, 'buffer_paths': []}, 'comm_id': '72f279fcc94e4531be7a3654fc9fc920'}
[Voila] msg_type: comm_msg
[Voila] content: {'data': {'method': 'update', 'state': {'height': 512}, 'buffer_paths': []}, 'comm_id': '72f279fcc94e4531be7a3654fc9fc920'}
[Voila] msg_type: comm_msg
[Voila] content: {'data': {'method': 'update', 'state': {'lighting': 1.5}, 'buffer_paths': []}, 'comm_id': '72f279fcc94e4531be7a3654fc9fc920'}
[Voila] msg_type: comm_msg
[Voila] content: {'data': {'method': 'update', 'state': {'camera_rotate_speed': 1.0}, 'buffer_paths': []}, 'comm_id': '72f279fcc94e4531be7a3654fc9fc920'}
[Voila] msg_type: comm_msg
[Voila] content: {'data': {'method': 'update', 'state': {'camera_zoom_speed': 1.2}, 'buffer_paths': []}, 'comm_id': '72f279fcc94e4531be7a3654fc9fc920'}
[Voila] msg_type: comm_msg
[Voila] content: {'data': {'method': 'update', 'state': {'camera_pan_speed': 0.3}, 'buffer_paths': []}, 'comm_id': '72f279fcc94e4531be7a3654fc9fc920'}
[Voila] msg_type: comm_msg
[Voila] content: {'data': {'method': 'update', 'state': {'camera_fov': 60.0}, 'buffer_paths': []}, 'comm_id': '72f279fcc94e4531be7a3654fc9fc920'}
[Voila] msg_type: comm_msg
[Voila] content: {'data': {'method': 'update', 'state': {'axes_helper': 1.0}, 'buffer_paths': []}, 'comm_id': '72f279fcc94e4531be7a3654fc9fc920'}
[Voila] msg_type: comm_msg
[Voila] content: {'data': {'method': 'update', 'state': {'mode': 'view'}, 'buffer_paths': []}, 'comm_id': '72f279fcc94e4531be7a3654fc9fc920'}
[Voila] msg_type: comm_msg
[Voila] content: {'data': {'method': 'update', 'state': {'snapshot_type': 'full'}, 'buffer_paths': []}, 'comm_id': '72f279fcc94e4531be7a3654fc9fc920'}
[Voila] msg_type: comm_msg
[Voila] content: {'data': {'method': 'update', 'state': {'camera_mode': 'trackball'}, 'buffer_paths': []}, 'comm_id': '72f279fcc94e4531be7a3654fc9fc920'}
[Voila] msg_type: comm_msg
[Voila] content: {'data': {'method': 'update', 'state': {'manipulate_mode': 'translate'}, 'buffer_paths': []}, 'comm_id': '72f279fcc94e4531be7a3654fc9fc920'}
[Voila] msg_type: display_data
[Voila] content: {'data': {'text/plain': "Plot(antialias=3, axes=['x', 'y', 'z'], axes_helper=1.0, background_color=16777215, camera_animation=[], camer…", 'application/vnd.jupyter.widget-view+json': {'version_major': 2, 'version_minor': 0, 'model_id': '72f279fcc94e4531be7a3654fc9fc920'}}, 'metadata': {}, 'transient': {}}
[Voila] msg_type: status
[Voila] content: {'execution_state': 'idle'}
[Voila] Executing cell:
import plotly.graph_objects as go
go.FigureWidget()
[Voila] activity on 49256b08-1855-4c4a-8776-e1908410e475: status (busy)
[Voila] activity on 49256b08-1855-4c4a-8776-e1908410e475: execute_input
[Voila] msg_type: status
[Voila] content: {'execution_state': 'busy'}
[Voila] msg_type: execute_input
[Voila] content: {'code': 'import plotly.graph_objects as go\ngo.FigureWidget()', 'execution_count': 2}
WARNING:tornado.access:404 GET /voila/templates/lab/static/voila.js.map (127.0.0.1) 2.06ms
[Voila] activity on 49256b08-1855-4c4a-8776-e1908410e475: comm_open
[Voila] msg_type: comm_open
[Voila] content: {'data': {'state': {'_config': {}, '_data': [], '_dom_classes': [], '_js2py_layoutDelta': {}, '_js2py_pointsCallback': {}, '_js2py_relayout': {}, '_js2py_restyle': {}, '_js2py_traceDeltas': {}, '_js2py_update': {}, '_last_layout_edit_id': 0, '_last_trace_edit_id': 0, '_layout': {}, '_model_module': 'jupyterlab-plotly', '_model_module_version': '^5.3.1', '_model_name': 'FigureModel', '_py2js_addTraces': {}, '_py2js_animate': {}, '_py2js_deleteTraces': {}, '_py2js_moveTraces': {}, '_py2js_relayout': {}, '_py2js_removeLayoutProps': {}, '_py2js_removeTraceProps': {}, '_py2js_restyle': {}, '_py2js_update': {}, '_view_count': None, '_view_module': 'jupyterlab-plotly', '_view_module_version': '^5.3.1', '_view_name': 'FigureView'}, 'buffer_paths': []}, 'comm_id': '59f7b4a878fb48669276661401a726ca', 'target_name': 'jupyter.widget', 'target_module': None}
[Voila] activity on 49256b08-1855-4c4a-8776-e1908410e475: comm_msg
[Voila] msg_type: comm_msg
[Voila] content: {'data': {'method': 'update', 'state': {'_config': {'plotlyServerURL': 'https://plot.ly'}}, 'buffer_paths': []}, 'comm_id': '59f7b4a878fb48669276661401a726ca'}
[Voila] activity on 49256b08-1855-4c4a-8776-e1908410e475: comm_msg
[Voila] activity on 49256b08-1855-4c4a-8776-e1908410e475: comm_msg
[Voila] activity on 49256b08-1855-4c4a-8776-e1908410e475: comm_msg
[Voila] activity on 49256b08-1855-4c4a-8776-e1908410e475: comm_msg
[Voila] activity on 49256b08-1855-4c4a-8776-e1908410e475: comm_msg
[Voila] msg_type: comm_msg
[Voila] content: {'data': {'method': 'update', 'state': {'_last_layout_edit_id': 1}, 'buffer_paths': []}, 'comm_id': '59f7b4a878fb48669276661401a726ca'}
[Voila] activity on 49256b08-1855-4c4a-8776-e1908410e475: display_data
[Voila] msg_type: comm_msg
[Voila] content: {'data': {'method': 'update', 'state': {'_py2js_relayout': {'relayout_data': {'template': {'data': {'barpolar': [{'marker': {'line': {'color': '#E5ECF6', 'width': 0.5}, 'pattern': {'fillmode': 'overlay', 'size': 10, 'solidity': 0.2}}, 'type': 'barpolar'}], 'bar': [{'error_x': {'color': '#2a3f5f'}, 'error_y': {'color': '#2a3f5f'}, 'marker': {'line': {'color': '#E5ECF6', 'width': 0.5}, 'pattern': {'fillmode': 'overlay', 'size': 10, 'solidity': 0.2}}, 'type': 'bar'}], 'carpet': [{'aaxis': {'endlinecolor': '#2a3f5f', 'gridcolor': 'white', 'linecolor': 'white', 'minorgridcolor': 'white', 'startlinecolor': '#2a3f5f'}, 'baxis': {'endlinecolor': '#2a3f5f', 'gridcolor': 'white', 'linecolor': 'white', 'minorgridcolor': 'white', 'startlinecolor': '#2a3f5f'}, 'type': 'carpet'}], 'choropleth': [{'colorbar': {'outlinewidth': 0, 'ticks': ''}, 'type': 'choropleth'}], 'contourcarpet': [{'colorbar': {'outlinewidth': 0, 'ticks': ''}, 'type': 'contourcarpet'}], 'contour': [{'colorbar': {'outlinewidth': 0, 'ticks': ''}, 'colorscale': [[0.0, '#0d0887'], [0.1111111111111111, '#46039f'], [0.2222222222222222, '#7201a8'], [0.3333333333333333, '#9c179e'], [0.4444444444444444, '#bd3786'], [0.5555555555555556, '#d8576b'], [0.6666666666666666, '#ed7953'], [0.7777777777777778, '#fb9f3a'], [0.8888888888888888, '#fdca26'], [1.0, '#f0f921']], 'type': 'contour'}], 'heatmapgl': [{'colorbar': {'outlinewidth': 0, 'ticks': ''}, 'colorscale': [[0.0, '#0d0887'], [0.1111111111111111, '#46039f'], [0.2222222222222222, '#7201a8'], [0.3333333333333333, '#9c179e'], [0.4444444444444444, '#bd3786'], [0.5555555555555556, '#d8576b'], [0.6666666666666666, '#ed7953'], [0.7777777777777778, '#fb9f3a'], [0.8888888888888888, '#fdca26'], [1.0, '#f0f921']], 'type': 'heatmapgl'}], 'heatmap': [{'colorbar': {'outlinewidth': 0, 'ticks': ''}, 'colorscale': [[0.0, '#0d0887'], [0.1111111111111111, '#46039f'], [0.2222222222222222, '#7201a8'], [0.3333333333333333, '#9c179e'], [0.4444444444444444, '#bd3786'], 
[0.5555555555555556, '#d8576b'], [0.6666666666666666, '#ed7953'], [0.7777777777777778, '#fb9f3a'], [0.8888888888888888, '#fdca26'], [1.0, '#f0f921']], 'type': 'heatmap'}], 'histogram2dcontour': [{'colorbar': {'outlinewidth': 0, 'ticks': ''}, 'colorscale': [[0.0, '#0d0887'], [0.1111111111111111, '#46039f'], [0.2222222222222222, '#7201a8'], [0.3333333333333333, '#9c179e'], [0.4444444444444444, '#bd3786'], [0.5555555555555556, '#d8576b'], [0.6666666666666666, '#ed7953'], [0.7777777777777778, '#fb9f3a'], [0.8888888888888888, '#fdca26'], [1.0, '#f0f921']], 'type': 'histogram2dcontour'}], 'histogram2d': [{'colorbar': {'outlinewidth': 0, 'ticks': ''}, 'colorscale': [[0.0, '#0d0887'], [0.1111111111111111, '#46039f'], [0.2222222222222222, '#7201a8'], [0.3333333333333333, '#9c179e'], [0.4444444444444444, '#bd3786'], [0.5555555555555556, '#d8576b'], [0.6666666666666666, '#ed7953'], [0.7777777777777778, '#fb9f3a'], [0.8888888888888888, '#fdca26'], [1.0, '#f0f921']], 'type': 'histogram2d'}], 'histogram': [{'marker': {'pattern': {'fillmode': 'overlay', 'size': 10, 'solidity': 0.2}}, 'type': 'histogram'}], 'mesh3d': [{'colorbar': {'outlinewidth': 0, 'ticks': ''}, 'type': 'mesh3d'}], 'parcoords': [{'line': {'colorbar': {'outlinewidth': 0, 'ticks': ''}}, 'type': 'parcoords'}], 'pie': [{'automargin': True, 'type': 'pie'}], 'scatter3d': [{'line': {'colorbar': {'outlinewidth': 0, 'ticks': ''}}, 'marker': {'colorbar': {'outlinewidth': 0, 'ticks': ''}}, 'type': 'scatter3d'}], 'scattercarpet': [{'marker': {'colorbar': {'outlinewidth': 0, 'ticks': ''}}, 'type': 'scattercarpet'}], 'scattergeo': [{'marker': {'colorbar': {'outlinewidth': 0, 'ticks': ''}}, 'type': 'scattergeo'}], 'scattergl': [{'marker': {'colorbar': {'outlinewidth': 0, 'ticks': ''}}, 'type': 'scattergl'}], 'scattermapbox': [{'marker': {'colorbar': {'outlinewidth': 0, 'ticks': ''}}, 'type': 'scattermapbox'}], 'scatterpolargl': [{'marker': {'colorbar': {'outlinewidth': 0, 'ticks': ''}}, 'type': 'scatterpolargl'}], 
'scatterpolar': [{'marker': {'colorbar': {'outlinewidth': 0, 'ticks': ''}}, 'type': 'scatterpolar'}], 'scatter': [{'marker': {'colorbar': {'outlinewidth': 0, 'ticks': ''}}, 'type': 'scatter'}], 'scatterternary': [{'marker': {'colorbar': {'outlinewidth': 0, 'ticks': ''}}, 'type': 'scatterternary'}], 'surface': [{'colorbar': {'outlinewidth': 0, 'ticks': ''}, 'colorscale': [[0.0, '#0d0887'], [0.1111111111111111, '#46039f'], [0.2222222222222222, '#7201a8'], [0.3333333333333333, '#9c179e'], [0.4444444444444444, '#bd3786'], [0.5555555555555556, '#d8576b'], [0.6666666666666666, '#ed7953'], [0.7777777777777778, '#fb9f3a'], [0.8888888888888888, '#fdca26'], [1.0, '#f0f921']], 'type': 'surface'}], 'table': [{'cells': {'fill': {'color': '#EBF0F8'}, 'line': {'color': 'white'}}, 'header': {'fill': {'color': '#C8D4E3'}, 'line': {'color': 'white'}}, 'type': 'table'}]}, 'layout': {'annotationdefaults': {'arrowcolor': '#2a3f5f', 'arrowhead': 0, 'arrowwidth': 1}, 'autotypenumbers': 'strict', 'coloraxis': {'colorbar': {'outlinewidth': 0, 'ticks': ''}}, 'colorscale': {'diverging': [[0, '#8e0152'], [0.1, '#c51b7d'], [0.2, '#de77ae'], [0.3, '#f1b6da'], [0.4, '#fde0ef'], [0.5, '#f7f7f7'], [0.6, '#e6f5d0'], [0.7, '#b8e186'], [0.8, '#7fbc41'], [0.9, '#4d9221'], [1, '#276419']], 'sequential': [[0.0, '#0d0887'], [0.1111111111111111, '#46039f'], [0.2222222222222222, '#7201a8'], [0.3333333333333333, '#9c179e'], [0.4444444444444444, '#bd3786'], [0.5555555555555556, '#d8576b'], [0.6666666666666666, '#ed7953'], [0.7777777777777778, '#fb9f3a'], [0.8888888888888888, '#fdca26'], [1.0, '#f0f921']], 'sequentialminus': [[0.0, '#0d0887'], [0.1111111111111111, '#46039f'], [0.2222222222222222, '#7201a8'], [0.3333333333333333, '#9c179e'], [0.4444444444444444, '#bd3786'], [0.5555555555555556, '#d8576b'], [0.6666666666666666, '#ed7953'], [0.7777777777777778, '#fb9f3a'], [0.8888888888888888, '#fdca26'], [1.0, '#f0f921']]}, 'colorway': ['#636efa', '#EF553B', '#00cc96', '#ab63fa', '#FFA15A', '#19d3f3', 
'#FF6692', '#B6E880', '#FF97FF', '#FECB52'], 'font': {'color': '#2a3f5f'}, 'geo': {'bgcolor': 'white', 'lakecolor': 'white', 'landcolor': '#E5ECF6', 'showlakes': True, 'showland': True, 'subunitcolor': 'white'}, 'hoverlabel': {'align': 'left'}, 'hovermode': 'closest', 'mapbox': {'style': 'light'}, 'paper_bgcolor': 'white', 'plot_bgcolor': '#E5ECF6', 'polar': {'angularaxis': {'gridcolor': 'white', 'linecolor': 'white', 'ticks': ''}, 'bgcolor': '#E5ECF6', 'radialaxis': {'gridcolor': 'white', 'linecolor': 'white', 'ticks': ''}}, 'scene': {'xaxis': {'backgroundcolor': '#E5ECF6', 'gridcolor': 'white', 'gridwidth': 2, 'linecolor': 'white', 'showbackground': True, 'ticks': '', 'zerolinecolor': 'white'}, 'yaxis': {'backgroundcolor': '#E5ECF6', 'gridcolor': 'white', 'gridwidth': 2, 'linecolor': 'white', 'showbackground': True, 'ticks': '', 'zerolinecolor': 'white'}, 'zaxis': {'backgroundcolor': '#E5ECF6', 'gridcolor': 'white', 'gridwidth': 2, 'linecolor': 'white', 'showbackground': True, 'ticks': '', 'zerolinecolor': 'white'}}, 'shapedefaults': {'line': {'color': '#2a3f5f'}}, 'ternary': {'aaxis': {'gridcolor': 'white', 'linecolor': 'white', 'ticks': ''}, 'baxis': {'gridcolor': 'white', 'linecolor': 'white', 'ticks': ''}, 'bgcolor': '#E5ECF6', 'caxis': {'gridcolor': 'white', 'linecolor': 'white', 'ticks': ''}}, 'title': {'x': 0.05}, 'xaxis': {'automargin': True, 'gridcolor': 'white', 'linecolor': 'white', 'ticks': '', 'title': {'standoff': 15}, 'zerolinecolor': 'white', 'zerolinewidth': 2}, 'yaxis': {'automargin': True, 'gridcolor': 'white', 'linecolor': 'white', 'ticks': '', 'title': {'standoff': 15}, 'zerolinecolor': 'white', 'zerolinewidth': 2}}}}, 'layout_edit_id': 1, 'source_view_id': None}}, 'buffer_paths': []}, 'comm_id': '59f7b4a878fb48669276661401a726ca'}
[Voila] activity on 49256b08-1855-4c4a-8776-e1908410e475: status (idle)
[Voila] msg_type: comm_msg
[Voila] content: {'data': {'method': 'update', 'state': {'_py2js_relayout': None}, 'buffer_paths': []}, 'comm_id': '59f7b4a878fb48669276661401a726ca'}
[Voila] msg_type: comm_msg
[Voila] content: {'data': {'method': 'update', 'state': {'_last_layout_edit_id': 0}, 'buffer_paths': []}, 'comm_id': '59f7b4a878fb48669276661401a726ca'}
[Voila] msg_type: comm_msg
[Voila] content: {'data': {'method': 'update', 'state': {'_view_count': 0}, 'buffer_paths': []}, 'comm_id': '59f7b4a878fb48669276661401a726ca'}
[Voila] msg_type: display_data
[Voila] content: {'data': {'text/plain': "FigureWidget({\n 'data': [], 'layout': {'template': '...'}\n})", 'application/vnd.jupyter.widget-view+json': {'version_major': 2, 'version_minor': 0, 'model_id': '59f7b4a878fb48669276661401a726ca'}}, 'metadata': {}, 'transient': {}}
[Voila] msg_type: status
[Voila] content: {'execution_state': 'idle'}
[Voila] Skipping non-executing cell 2
[Voila] Websocket closed 05d7eb09-dc78-4b5e-af11-2409d3f08fec:13ea8268-4df4-44e7-b9f4-6c835fea22bd
[Voila] Starting buffering for 05d7eb09-dc78-4b5e-af11-2409d3f08fec:13ea8268-4df4-44e7-b9f4-6c835fea22bd
[Voila] Clearing buffer for 05d7eb09-dc78-4b5e-af11-2409d3f08fec
[Voila] Clearing buffer for 05d7eb09-dc78-4b5e-af11-2409d3f08fec
[Voila] Kernel shutdown: 05d7eb09-dc78-4b5e-af11-2409d3f08fec
[Voila] Connecting to: tcp://127.0.0.1:50819
</pre>
</details>
<details><summary>Browser Output</summary>
<pre>
Loaded classic notebook extension "ipysheet/extension". localhost:8866:324:15
Loaded classic notebook extension "jupyterlab-plotly/extension". localhost:8866:332:15
Starting WebSocket: ws://localhost:8866/api/kernels/64748502-665f-4276-b5d7-fd94b80cb7bf localhost:8866:30:15
Falling back to https://cdn.jsdelivr.net/npm/ for k3d@2.11.0 voila.js:502:54527
Loading failed for the <script> with source “http://localhost:8866/voila/k3d.js”. localhost:8866:1:1
K3D: (UNMASKED_VENDOR_WEBGL) Intel Renderer.js:60:12
K3D: (UNMASKED_RENDERER_WEBGL) Intel(R) HD Graphics Renderer.js:61:12
WEBGL_debug_renderer_info is deprecated in Firefox and will be removed. Please use RENDERER. Renderer.js:59:25
Source map error: Error: request failed with status 404
Resource URL: http://localhost:8866/voila/templates/lab/static/voila.js?v=7bd473184d96bb801a79c00c9268efef0fbd3a6f03cb9b0889be3db8225dd4281151995ac5ee643ed2489fdf5a79162f3a67a9e5f674e2117b85fdd672f0d3c5
Source Map URL: voila.js.map
</pre>
</details>
### If using JupyterLab
- JupyterLab version: 3.1.13
| closed | 2021-11-01T08:32:20Z | 2023-08-03T13:50:36Z | https://github.com/voila-dashboards/voila/issues/1019 | [
"bug"
] | Archemedes | 5 |
litestar-org/litestar | api | 3,762 | Bug: Swagger and Redoc docs don't work for Piccolo ORM in Litestar>2.11.0 | ### Description
Piccolo ORM has a feature for scaffolding simple ASGI applications for various ASGI frameworks. I notice that
`Swagger` and `Redoc` docs do not work with the latest version of Litestar. The latest working version is `Litestar==2.11.0`.
For scaffolding ASGI apps, we don't use `PiccoloDTO` but Piccolo's internal tool ([create_pydantic_model](https://piccolo-orm.readthedocs.io/en/latest/piccolo/serialization/index.html)) to create a Pydantic model from a Piccolo table. The generated model has an `extra` property, but Litestar's [Schema](https://github.com/litestar-org/litestar/blob/b18774922fedc86089d143d9a5484f393826557d/litestar/openapi/spec/schema.py#L41) does not have an `extra` key, which causes a `ValueError` to be raised. I tried two things and after that everything works.
1. adding an `extra` key to a Schema like this
```python
class Schema(BaseSchemaObject):
    ...
    extra: dict[str, Any] | None = None
```
2. or excluding the `extra` key from checking [here](https://github.com/litestar-org/litestar/blob/main/litestar/_openapi/schema_generation/schema.py#L595-L598) like this.
```python
if not hasattr(schema, schema_key) and schema_key != "extra":
    raise ValueError(
        f"`schema_extra` declares key `{schema_key}` which does not exist in `Schema` object"
    )
```
I don't know if that's good enough, but these are just ideas. Any solution that enables Piccolo ORM to work with the latest Litestar version would be great. Thanks in advance.
### URL to code causing the issue
_No response_
### MCVE
```python
https://github.com/sinisaos/simple-piccolo
```
### Steps to reproduce
```bash
1. Clone repository
2. Install requirements
3. Start app with `python litestar_app.py`
4. Go to `http://localhost:8000/schema/swagger` and see error
```
### Screenshots
```bash
""
```
### Logs
```bash
ERROR - 2024-09-28 07:35:34,331 - litestar - config - Uncaught exception (connection_type=http, path=/schema/swagger):
Traceback (most recent call last):
File "/home/rkl/dev/piccolo_env/lib/python3.11/site-packages/litestar/middleware/_internal/exceptions/middleware.py", line 159, in __call__
await self.app(scope, receive, capture_response_started)
File "/home/rkl/dev/piccolo_env/lib/python3.11/site-packages/litestar/_asgi/asgi_router.py", line 100, in __call__
await asgi_app(scope, receive, send)
File "/home/rkl/dev/piccolo_env/lib/python3.11/site-packages/litestar/routes/http.py", line 80, in handle
response = await self._get_response_for_request(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/rkl/dev/piccolo_env/lib/python3.11/site-packages/litestar/routes/http.py", line 132, in _get_response_for_request
return await self._call_handler_function(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/rkl/dev/piccolo_env/lib/python3.11/site-packages/litestar/routes/http.py", line 152, in _call_handler_function
response_data, cleanup_group = await self._get_response_data(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/rkl/dev/piccolo_env/lib/python3.11/site-packages/litestar/routes/http.py", line 195, in _get_response_data
data = route_handler.fn(**parsed_kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/rkl/dev/piccolo_env/lib/python3.11/site-packages/litestar/_openapi/plugin.py", line 161, in _handler
return plugin_.render(request, self.provide_openapi_schema())
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/rkl/dev/piccolo_env/lib/python3.11/site-packages/litestar/_openapi/plugin.py", line 99, in provide_openapi_schema
self._openapi_schema = self.provide_openapi().to_schema()
^^^^^^^^^^^^^^^^^^^^^^
File "/home/rkl/dev/piccolo_env/lib/python3.11/site-packages/litestar/_openapi/plugin.py", line 94, in provide_openapi
self._openapi = self._build_openapi()
^^^^^^^^^^^^^^^^^^^^^
File "/home/rkl/dev/piccolo_env/lib/python3.11/site-packages/litestar/_openapi/plugin.py", line 83, in _build_openapi
path_item = create_path_item_for_route(context, route)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/rkl/dev/piccolo_env/lib/python3.11/site-packages/litestar/_openapi/path_item.py", line 139, in create_path_item_for_route
return path_item_factory.create_path_item()
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/rkl/dev/piccolo_env/lib/python3.11/site-packages/litestar/_openapi/path_item.py", line 44, in create_path_item
operation = self.create_operation_for_handler_method(route_handler, HttpMethod(http_method))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/rkl/dev/piccolo_env/lib/python3.11/site-packages/litestar/_openapi/path_item.py", line 68, in create_operation_for_handler_method
request_body = create_request_body(
^^^^^^^^^^^^^^^^^^^^
File "/home/rkl/dev/piccolo_env/lib/python3.11/site-packages/litestar/_openapi/request_body.py", line 49, in create_request_body
schema = schema_creator.for_field_definition(data_field)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/rkl/dev/piccolo_env/lib/python3.11/site-packages/litestar/_openapi/schema_generation/schema.py", line 333, in for_field_definition
result = self.for_plugin(field_definition, plugin_for_annotation)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/rkl/dev/piccolo_env/lib/python3.11/site-packages/litestar/_openapi/schema_generation/schema.py", line 515, in for_plugin
schema = plugin.to_openapi_schema(field_definition=field_definition, schema_creator=self)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/rkl/dev/piccolo_env/lib/python3.11/site-packages/litestar/contrib/pydantic/pydantic_schema_plugin.py", line 235, in to_openapi_schema
return self.for_pydantic_model(field_definition=field_definition, schema_creator=schema_creator)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/rkl/dev/piccolo_env/lib/python3.11/site-packages/litestar/contrib/pydantic/pydantic_schema_plugin.py", line 252, in for_pydantic_model
return schema_creator.create_component_schema(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/rkl/dev/piccolo_env/lib/python3.11/site-packages/litestar/_openapi/schema_generation/schema.py", line 645, in create_component_schema
schema.properties = {k: self.for_field_definition(v) for k, v in property_fields.items()}
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/rkl/dev/piccolo_env/lib/python3.11/site-packages/litestar/_openapi/schema_generation/schema.py", line 645, in <dictcomp>
schema.properties = {k: self.for_field_definition(v) for k, v in property_fields.items()}
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/rkl/dev/piccolo_env/lib/python3.11/site-packages/litestar/_openapi/schema_generation/schema.py", line 361, in for_field_definition
return self.process_schema_result(field_definition, result) if isinstance(result, Schema) else result
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/rkl/dev/piccolo_env/lib/python3.11/site-packages/litestar/_openapi/schema_generation/schema.py", line 596, in process_schema_result
raise ValueError(
ValueError: `schema_extra` declares key `extra` which does not exist in `Schema` object
```
### Litestar Version
2.12.1
### Platform
- [X] Linux
- [ ] Mac
- [ ] Windows
- [ ] Other (Please specify in the description above) | closed | 2024-09-28T06:04:21Z | 2025-03-20T15:54:56Z | https://github.com/litestar-org/litestar/issues/3762 | [
"Bug :bug:"
] | sinisaos | 2 |
ymcui/Chinese-LLaMA-Alpaca | nlp | 522 | What does the unpermute function do after model merging? | Why do wq and wk need their dimensions swapped? I couldn't find any corresponding permute code earlier in the codebase. Are there any reference materials?
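My current understanding of the mapping (a pure-Python sketch of the row reordering only, assuming the layout used by Hugging Face's `convert_llama_weights_to_hf.py`, where the two rotary-embedding halves of each head are interleaved — this is an illustration, not the project's actual code, which operates on whole weight tensors with `view`/`transpose`):

```python
def permute_rows(n_rows, n_heads):
    """Row-index map applied to wq/wk when converting to the HF layout:
    within each head, (half, idx) -> interleaved (idx, half)."""
    half = n_rows // n_heads // 2
    mapping = [0] * n_rows
    for h in range(n_heads):
        for s in range(2):              # which rotary half (real/imag)
            for d in range(half):
                src = h * 2 * half + s * half + d
                dst = h * 2 * half + d * 2 + s
                mapping[dst] = src
    return mapping

def unpermute_rows(n_rows, n_heads):
    """Inverse mapping: restores the original half-major layout."""
    fwd = permute_rows(n_rows, n_heads)
    inv = [0] * n_rows
    for dst, src in enumerate(fwd):
        inv[src] = dst
    return inv

rows = list(range(8))                   # pretend each int is one weight row
fwd = permute_rows(8, n_heads=2)
permuted = [rows[i] for i in fwd]
back = [permuted[i] for i in unpermute_rows(8, n_heads=2)]
assert back == rows                     # unpermute undoes permute
```

Please correct me if this is wrong — I'd still appreciate pointers to where the forward permute happens.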
Thanks a lot 🙏 | closed | 2023-06-06T09:59:40Z | 2023-06-07T02:08:03Z | https://github.com/ymcui/Chinese-LLaMA-Alpaca/issues/522 | [] | angonger | 1
ets-labs/python-dependency-injector | flask | 387 | Q: Singleton Provider Referencing a Singleton Provider | Amazing library! Thank you for building this!
One question I had was in regard to Singletons referencing other Singleton providers and how [reset](https://python-dependency-injector.ets-labs.org/api/providers.html?highlight=reset#dependency_injector.providers.Singleton.reset) comes into play.
For example if I have the following:
```
from dependency_injector import containers, providers

class MyContainer(containers.DeclarativeContainer):
    database = providers.Singleton(
        MyDatabase,
        connection_string="some connection string",
    )
    service = providers.Singleton(
        MyService,
        database=database,
    )
```
The "MyDatabase" singleton injected into "MyService" will always be the same instance unless [reset](https://python-dependency-injector.ets-labs.org/api/providers.html?highlight=reset#dependency_injector.providers.Singleton.reset) is called on both the database and the service providers.
```
In [30]: id(MyContainer.database())
Out[30]: 4370026304
In [31]: id(MyContainer.database())
Out[31]: 4370026304 # <- Same reference. Good!
In [32]: id(MyContainer.service()._database)
Out[32]: 4370026304 # <- Still same reference. Good!
In [33]: MyContainer.database.reset()
In [34]: id(MyContainer.database())
Out[34]: 4370026355 # <- New reference, as expected.
In [35]: id(MyContainer.service()._database)
Out[35]: 4370026304 # <- Ruhoh. Still holding the same reference. Meaning we have two MyDatabase instances floating around.
```
Obviously if I don't call reset on database there is no problem, or if I do, I can reset database then reset the service.
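To make sure I understand the caching, here is a toy stand-in I wrote — not the library's actual implementation, just a sketch of "build once, cache forever" with lazily resolved dependencies — that reproduces the behavior above:

```python
class Singleton:
    """Toy stand-in for providers.Singleton: builds once, caches forever."""
    def __init__(self, factory, **deps):
        self._factory = factory
        self._deps = deps                    # other providers, resolved lazily
        self._instance = None

    def __call__(self):
        if self._instance is None:
            kwargs = {name: dep() for name, dep in self._deps.items()}
            self._instance = self._factory(**kwargs)
        return self._instance

    def reset(self):
        self._instance = None


class MyDatabase: ...

class MyService:
    def __init__(self, database):
        self._database = database


database = Singleton(MyDatabase)
service = Singleton(MyService, database=database)

first_db = service()._database
assert first_db is database()               # both see the same instance

database.reset()                            # reset only the inner provider
assert service()._database is first_db      # service still caches the old one

service.reset()                             # reset the outer provider too...
assert service()._database is database()    # ...and it re-wires the new database
```

So the stale reference seems to follow directly from the outer provider caching the fully built object, dependencies included — is that the right mental model?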
My question is, does the library itself ever call "reset" on itself? If so, how and when would it do that? My concern is using Singletons like this may inadvertently keep multiple active connections to my database alive.
Thanks for any information you can provide! | closed | 2021-02-03T01:11:54Z | 2021-02-04T13:18:59Z | https://github.com/ets-labs/python-dependency-injector/issues/387 | [
"question"
] | cameroncurrey | 4 |
httpie/cli | api | 722 | --ssl — TLS 1.3 & Python 3.7 compatibility | Now that TLS1.3 is out **[1]** it would be great to add that to the list of supported ssl parameters.
` [--ssl {ssl2.3,tls1,tls1.1,tls1.2}] [--cert CERT]`
**[1]** https://tools.ietf.org/html/rfc8446
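For implementers: since Python 3.7 the standard library already exposes what's needed, so a new `tls1.3` choice could map onto `ssl.TLSVersion` instead of the deprecated `ssl.PROTOCOL_TLSv1_*` constants (a sketch of the stdlib side only, not httpie's actual code):

```python
import ssl

# Python 3.7+ exposes whether the linked OpenSSL build supports TLS 1.3.
print("TLS 1.3 available:", ssl.HAS_TLSv1_3)

# Modern way to pin a protocol floor on a context; a `--ssl tls1.3`
# choice could map onto exactly this attribute.
ctx = ssl.create_default_context()
if ssl.HAS_TLSv1_3:
    ctx.minimum_version = ssl.TLSVersion.TLSv1_3
print("minimum version:", ctx.minimum_version.name)
```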
| open | 2018-10-17T10:04:07Z | 2023-12-19T19:12:50Z | https://github.com/httpie/cli/issues/722 | [] | jaimejim | 4 |
cupy/cupy | numpy | 8,633 | Can't compile on Arch Linux with `cudnn 9.2.1.18` and cuda `12.6.1` | ### Description
I'm getting this error both when installing with `pip` and when building manually:
```
INFO:root:g++ -fno-strict-overflow -Wsign-compare -DNDEBUG -g -O3 -Wall -march=x86-64 -mtune=generic -O3 -pipe -fno-plt -fexceptions -Wp,-D_FORTIFY_SOURCE=3 -Wformat -Werror=format-security -fstack-clash-protection -fcf-protection -fno-omit-frame-pointer -mno-omit-le07:13:06 [279/10040]
-ffile-prefix-map=/build/python/src=/usr/src/debug/python -flto=auto -ffat-lto-objects -march=x86-64 -mtune=generic -O3 -pipe -fno-plt -fexceptions -Wp,-D_FORTIFY_SOURCE=3 -Wformat -Werror=format-security -fstack-clash-protection -fcf-protection -fno-omit-frame-pointer -mno-omit-leaf-frame-po
inter -g -ffile-prefix-map=/build/python/src=/usr/src/debug/python -flto=auto -march=x86-64 -mtune=generic -O3 -pipe -fno-plt -fexceptions -Wp,-D_FORTIFY_SOURCE=3 -Wformat -Werror=format-security -fstack-clash-protection -fcf-protection -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -g
-ffile-prefix-map=/build/python/src=/usr/src/debug/python -flto=auto -fPIC -D_FORCE_INLINES=1 -DCUPY_CACHE_KEY=18afc59f6755fc202de57c0ee0fc8be2fa13eeef -DCUPY_CUB_VERSION_CODE=200600 -DCUPY_JITIFY_VERSION_CODE=-1 -I/tmp/pip-install-ugxqad27/cupy_b62d9ff2f4a24ef782094f3c58376b2f/cupy/_core/inc
lude/cupy/_cccl/libcudacxx -I/tmp/pip-install-ugxqad27/cupy_b62d9ff2f4a24ef782094f3c58376b2f/cupy/_core/include/cupy/_cccl/thrust -I/tmp/pip-install-ugxqad27/cupy_b62d9ff2f4a24ef782094f3c58376b2f/cupy/_core/include/cupy/_cccl/cub -I/tmp/pip-install-ugxqad27/cupy_b62d9ff2f4a24ef782094f3c58376b
2f/cupy/_core/include -I/opt/cuda/include -I/home/lie/projects/actnlzz_music/env/include -I/usr/include/python3.12 -c cupy_backends/cuda/libs/cudnn.cpp -o build/temp.linux-x86_64-cpython-312/cupy_backends/cuda/libs/cudnn.o
cupy_backends/cuda/libs/cudnn.cpp:2358:71: error: ‘cudnnRNNPaddingMode_t’ was not declared in this scope; did you mean ‘cudnnPaddingMode_t’?
2358 | static CYTHON_INLINE PyObject* __Pyx_PyInt_From_cudnnRNNPaddingMode_t(cudnnRNNPaddingMode_t value);
| ^~~~~~~~~~~~~~~~~~~~~
| cudnnPaddingMode_t
cupy_backends/cuda/libs/cudnn.cpp: In function ‘size_t __pyx_f_13cupy_backends_4cuda_4libs_5cudnn_createPersistentRNNPlan(size_t, int, int, int)’:
cupy_backends/cuda/libs/cudnn.cpp:26660:3: error: ‘cudnnPersistentRNNPlan_t’ was not declared in this scope
26660 | cudnnPersistentRNNPlan_t __pyx_v_plan;
| ^~~~~~~~~~~~~~~~~~~~~~~~
cupy_backends/cuda/libs/cudnn.cpp:26677:155: error: ‘__pyx_v_plan’ was not declared in this scope; did you mean ‘__pyx_k_plan’?
26677 | NNPlan(((cudnnRNNDescriptor_t)__pyx_v_rnnDesc), ((int)__pyx_v_minibatch), ((cudnnDataType_t)__pyx_v_dataType), (&__pyx_v_plan));
| ^~~~~~~~~~~~
| __pyx_k_plan
cupy_backends/cuda/libs/cudnn.cpp:26677:20: error: ‘cudnnCreatePersistentRNNPlan’ was not declared in this scope
26677 | __pyx_v_status = cudnnCreatePersistentRNNPlan(((cudnnRNNDescriptor_t)__pyx_v_rnnDesc), ((int)__pyx_v_minibatch), ((cudnnDataType_t)__pyx_v_dataType), (&__pyx_v_plan));
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~~
cupy_backends/cuda/libs/cudnn.cpp: In function ‘PyObject* __pyx_f_13cupy_backends_4cuda_4libs_5cudnn_setPersistentRNNPlan(size_t, size_t, int)’:
cupy_backends/cuda/libs/cudnn.cpp:26848:89: error: ‘cudnnPersistentRNNPlan_t’ was not declared in this scope
26848 | __pyx_v_status = cudnnSetPersistentRNNPlan(((cudnnRNNDescriptor_t)__pyx_v_rnnDesc), ((cudnnPersistentRNNPlan_t)__pyx_v_plan));
| ^~~~~~~~~~~~~~~~~~~~~~~~
cupy_backends/cuda/libs/cudnn.cpp:26848:114: error: expected ‘)’ before ‘__pyx_v_plan’
26848 | __pyx_v_status = cudnnSetPersistentRNNPlan(((cudnnRNNDescriptor_t)__pyx_v_rnnDesc), ((cudnnPersistentRNNPlan_t)__pyx_v_plan));
| ~ ^~~~~~~~~~~~
| )
cupy_backends/cuda/libs/cudnn.cpp: In function ‘PyObject* __pyx_f_13cupy_backends_4cuda_4libs_5cudnn_destroyPersistentRNNPlan(size_t, int)’:
cupy_backends/cuda/libs/cudnn.cpp:26999:52: error: ‘cudnnPersistentRNNPlan_t’ was not declared in this scope
26999 | __pyx_v_status = cudnnDestroyPersistentRNNPlan(((cudnnPersistentRNNPlan_t)__pyx_v_plan));
| ^~~~~~~~~~~~~~~~~~~~~~~~
cupy_backends/cuda/libs/cudnn.cpp:26999:77: error: expected ‘)’ before ‘__pyx_v_plan’
26999 | __pyx_v_status = cudnnDestroyPersistentRNNPlan(((cudnnPersistentRNNPlan_t)__pyx_v_plan));
| ~ ^~~~~~~~~~~~
| )
cupy_backends/cuda/libs/cudnn.cpp: In function ‘PyObject* __pyx_f_13cupy_backends_4cuda_4libs_5cudnn_setRNNDescriptor_v6(intptr_t, size_t, int, int, size_t, int, int, int, int, int, int)’:
cupy_backends/cuda/libs/cudnn.cpp:27329:20: error: ‘cudnnSetRNNDescriptor_v6’ was not declared in this scope; did you mean ‘cudnnSetRNNDescriptor_v8’?
27329 | __pyx_v_status = cudnnSetRNNDescriptor_v6(((cudnnHandle_t)__pyx_v_handle), ((cudnnRNNDescriptor_t)__pyx_v_rnnDesc), __pyx_v_hiddenSize, __pyx_v_numLayers, ((cudnnDropoutDescriptor_t)__pyx_v_dropoutDesc), ((cudnnRNNInputMode_t)__pyx_v_inputMode), ((cudnnDirectionMode_t)__pyx_v_direction), ((cudnnRNNMode_t)__pyx_v_mode), ((cudnnRNNAlgo_t)__pyx_v_algo), ((cudnnDataType_t)__pyx_v_dataType));
| ^~~~~~~~~~~~~~~~~~~~~~~~
| cudnnSetRNNDescriptor_v8
cupy_backends/cuda/libs/cudnn.cpp: In function ‘PyObject* __pyx_f_13cupy_backends_4cuda_4libs_5cudnn_setRNNPaddingMode(size_t, int, int)’:
cupy_backends/cuda/libs/cudnn.cpp:27568:86: error: ‘cudnnRNNPaddingMode_t’ was not declared in this scope; did you mean ‘cudnnPaddingMode_t’?
27568 | __pyx_v_status = cudnnSetRNNPaddingMode(((cudnnRNNDescriptor_t)__pyx_v_rnnDesc), ((cudnnRNNPaddingMode_t)__pyx_v_paddingMode));
| ^~~~~~~~~~~~~~~~~~~~~
| cudnnPaddingMode_t
cupy_backends/cuda/libs/cudnn.cpp:27568:108: error: expected ‘)’ before ‘__pyx_v_paddingMode’
27568 | __pyx_v_status = cudnnSetRNNPaddingMode(((cudnnRNNDescriptor_t)__pyx_v_rnnDesc), ((cudnnRNNPaddingMode_t)__pyx_v_paddingMode));
| ~ ^~~~~~~~~~~~~~~~~~~
| )
cupy_backends/cuda/libs/cudnn.cpp: In function ‘PyObject* __pyx_f_13cupy_backends_4cuda_4libs_5cudnn_getRNNPaddingMode(size_t, int)’:
cupy_backends/cuda/libs/cudnn.cpp:27703:3: error: ‘cudnnRNNPaddingMode_t’ was not declared in this scope; did you mean ‘cudnnPaddingMode_t’?
27703 | cudnnRNNPaddingMode_t __pyx_v_paddingMode;
| ^~~~~~~~~~~~~~~~~~~~~
| cudnnPaddingMode_t
cupy_backends/cuda/libs/cudnn.cpp:27720:86: error: ‘__pyx_v_paddingMode’ was not declared in this scope; did you mean ‘__pyx_k_paddingMode’?
27720 | __pyx_v_status = cudnnGetRNNPaddingMode(((cudnnRNNDescriptor_t)__pyx_v_rnnDesc), (&__pyx_v_paddingMode));
| ^~~~~~~~~~~~~~~~~~~
| __pyx_k_paddingMode
cupy_backends/cuda/libs/cudnn.cpp:27720:20: error: ‘cudnnGetRNNPaddingMode’ was not declared in this scope; did you mean ‘cudnnPaddingMode_t’?
27720 | __pyx_v_status = cudnnGetRNNPaddingMode(((cudnnRNNDescriptor_t)__pyx_v_rnnDesc), (&__pyx_v_paddingMode));
| ^~~~~~~~~~~~~~~~~~~~~~
| cudnnPaddingMode_t
cupy_backends/cuda/libs/cudnn.cpp:27741:53: error: ‘__Pyx_PyInt_From_cudnnRNNPaddingMode_t’ cannot be used as a function
27741 | __pyx_t_1 = __Pyx_PyInt_From_cudnnRNNPaddingMode_t(__pyx_v_paddingMode); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 2094, __pyx_L1_error)
| ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~
cupy_backends/cuda/libs/cudnn.cpp: In function ‘PyObject* __pyx_f_13cupy_backends_4cuda_4libs_5cudnn_getRNNWorkspaceSize(intptr_t, size_t, int, size_t, int)’:
cupy_backends/cuda/libs/cudnn.cpp:28514:20: error: ‘cudnnGetRNNWorkspaceSize’ was not declared in this scope; did you mean ‘cudnnGetRNNWeightSpaceSize’?
28514 | __pyx_v_status = cudnnGetRNNWorkspaceSize(((cudnnHandle_t)__pyx_v_handle), ((cudnnRNNDescriptor_t)__pyx_v_rnnDesc), __pyx_v_seqLength, ((cudnnTensorDescriptor_t *)__pyx_v_xDesc), (&__pyx_v_sizeInBytes));
| ^~~~~~~~~~~~~~~~~~~~~~~~
| cudnnGetRNNWeightSpaceSize
cupy_backends/cuda/libs/cudnn.cpp: In function ‘PyObject* __pyx_f_13cupy_backends_4cuda_4libs_5cudnn_getRNNTrainingReserveSize(intptr_t, size_t, int, size_t, int)’:
cupy_backends/cuda/libs/cudnn.cpp:28700:20: error: ‘cudnnGetRNNTrainingReserveSize’ was not declared in this scope
28700 | __pyx_v_status = cudnnGetRNNTrainingReserveSize(((cudnnHandle_t)__pyx_v_handle), ((cudnnRNNDescriptor_t)__pyx_v_rnnDesc), __pyx_v_seqLength, ((cudnnTensorDescriptor_t *)__pyx_v_xDesc), (&__pyx_v_sizeInBytes));
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
cupy_backends/cuda/libs/cudnn.cpp: In function ‘PyObject* __pyx_f_13cupy_backends_4cuda_4libs_5cudnn_getRNNParamsSize(intptr_t, size_t, size_t, int, int)’:
cupy_backends/cuda/libs/cudnn.cpp:28886:20: error: ‘cudnnGetRNNParamsSize’ was not declared in this scope; did you mean ‘cudnnGetRNNTempSpaceSizes’?
28886 | __pyx_v_status = cudnnGetRNNParamsSize(((cudnnHandle_t)__pyx_v_handle), ((cudnnRNNDescriptor_t)__pyx_v_rnnDesc), ((cudnnTensorDescriptor_t)__pyx_v_xDesc), (&__pyx_v_sizeInBytes), ((cudnnDataType_t)__pyx_v_dataType));
| ^~~~~~~~~~~~~~~~~~~~~
| cudnnGetRNNTempSpaceSizes
cupy_backends/cuda/libs/cudnn.cpp: In function ‘PyObject* __pyx_f_13cupy_backends_4cuda_4libs_5cudnn_getRNNLinLayerMatrixParams(intptr_t, size_t, int, size_t, size_t, size_t, int, size_t, size_t, int)’:
cupy_backends/cuda/libs/cudnn.cpp:29071:20: error: ‘cudnnGetRNNLinLayerMatrixParams’ was not declared in this scope
29071 | __pyx_v_status = cudnnGetRNNLinLayerMatrixParams(((cudnnHandle_t)__pyx_v_handle), ((cudnnRNNDescriptor_t)__pyx_v_rnnDesc), __pyx_v_layer, ((cudnnTensorDescriptor_t)__pyx_v_xDesc), ((cudnnFilterDescriptor_t)__pyx_v_wDesc), ((void *)__pyx_v_w), __pyx_v_linLayerID, ((cudnnFilterDescriptor_t)__pyx_v_linLayerMatDesc), ((void **)__pyx_v_linLayerMat));
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
cupy_backends/cuda/libs/cudnn.cpp: In function ‘PyObject* __pyx_f_13cupy_backends_4cuda_4libs_5cudnn_getRNNLinLayerBiasParams(intptr_t, size_t, int, size_t, size_t, size_t, int, size_t, size_t, int)’:
cupy_backends/cuda/libs/cudnn.cpp:29299:20: error: ‘cudnnGetRNNLinLayerBiasParams’ was not declared in this scope
29299 | __pyx_v_status = cudnnGetRNNLinLayerBiasParams(((cudnnHandle_t)__pyx_v_handle), ((cudnnRNNDescriptor_t)__pyx_v_rnnDesc), __pyx_v_layer, ((cudnnTensorDescriptor_t)__pyx_v_xDesc), ((cudnnFilterDescriptor_t)__pyx_v_wDesc), ((void *)__pyx_v_w), __pyx_v_linLayerID, ((cudnnFilterDescriptor_t)__pyx_v_linLayerBiasDesc), ((void **)__pyx_v_linLayerBias));
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~
cupy_backends/cuda/libs/cudnn.cpp: In function ‘PyObject* __pyx_f_13cupy_backends_4cuda_4libs_5cudnn_RNNForwardInference(intptr_t, size_t, int, size_t, size_t, size_t, size_t, size_t, size_t, size_t, size_t, size_t, size_t, size_t, size_t, size_t, size_t, size_t, size_t, int)’:
cupy_backends/cuda/libs/cudnn.cpp:29553:26: error: ‘cudnnRNNForwardInference’ was not declared in this scope
29553 | __pyx_v_status = cudnnRNNForwardInference(((cudnnHandle_t)__pyx_v_handle), ((cudnnRNNDescriptor_t)__pyx_v_rnnDesc), __pyx_v_seqLength, ((cudnnTensorDescriptor_t *)__pyx_v_xDesc), ((void *)__pyx_v_x), ((cudnnTensorDescriptor_t)__pyx_v_hxDesc), ((void *)__pyx_v_hx), ((cudnnTensorDescriptor_t)__pyx_v_cxDesc), ((void *)__pyx_v_cx), ((cudnnFilterDescriptor_t)__pyx_v_wDesc), ((void *)__pyx_v_w), ((cudnnTensorDescriptor_t *)__pyx_v_yDesc), ((void *)__pyx_v_y), ((cudnnTensorDescriptor_t)__pyx_v_hyDesc), ((void *)__pyx_v_hy), ((cudnnTensorDescriptor_t)__pyx_v_cyDesc), ((void *)__pyx_v_cy), ((void *)__pyx_v_workspace), __pyx_v_workSpaceSizeInBytes);
| ^~~~~~~~~~~~~~~~~~~~~~~~
cupy_backends/cuda/libs/cudnn.cpp: In function ‘PyObject* __pyx_f_13cupy_backends_4cuda_4libs_5cudnn_RNNForwardTraining(intptr_t, size_t, int, size_t, size_t, size_t, size_t, size_t, size_t, size_t, size_t, size_t, size_t, size_t, size_t, size_t, size_t, size_t, size_t, size_t, size_t, int)’:
cupy_backends/cuda/libs/cudnn.cpp:29937:26: error: ‘cudnnRNNForwardTraining’ was not declared in this scope; did you mean ‘cudnnRNNForward’?
29937 | __pyx_v_status = cudnnRNNForwardTraining(((cudnnHandle_t)__pyx_v_handle), ((cudnnRNNDescriptor_t)__pyx_v_rnnDesc), __pyx_v_seqLength, ((cudnnTensorDescriptor_t *)__pyx_v_xDesc), ((void *)__pyx_v_x), ((cudnnTensorDescriptor_t)__pyx_v_hxDesc), ((void *)__pyx_v_hx), ((cudnnTensorDescriptor_t)__pyx_v_cxDesc), ((void *)__pyx_v_cx), ((cudnnFilterDescriptor_t)__pyx_v_wDesc), ((void *)__pyx_v_w), ((cudnnTensorDescriptor_t *)__pyx_v_yDesc), ((void *)__pyx_v_y), ((cudnnTensorDescriptor_t)__pyx_v_hyDesc), ((void *)__pyx_v_hy), ((cudnnTensorDescriptor_t)__pyx_v_cyDesc), ((void *)__pyx_v_cy), ((void *)__pyx_v_workspace), __pyx_v_workSpaceSizeInBytes, ((void *)__pyx_v_reserveSpace), __pyx_v_reserveSpaceSizeInBytes);
| ^~~~~~~~~~~~~~~~~~~~~~~
| cudnnRNNForward
cupy_backends/cuda/libs/cudnn.cpp: In function ‘PyObject* __pyx_f_13cupy_backends_4cuda_4libs_5cudnn_RNNBackwardData(intptr_t, size_t, int, size_t, size_t, size_t, size_t, size_t, size_t, size_t, size_t, size_t, size_t, size_t, size_t, size_t, size_t, size_t, size_t, size_t, size_t, size_t, size_t, size_t, size_t, size_t, size_t, int)’:
cupy_backends/cuda/libs/cudnn.cpp:30343:26: error: ‘cudnnRNNBackwardData’ was not declared in this scope; did you mean ‘cudnnRNNBackwardData_v8’?
30343 | __pyx_v_status = cudnnRNNBackwardData(((cudnnHandle_t)__pyx_v_handle), ((cudnnRNNDescriptor_t)__pyx_v_rnnDesc), __pyx_v_seqLength, ((cudnnTensorDescriptor_t *)__pyx_v_yDesc), ((void *)__pyx_v_y), ((cudnnTensorDescriptor_t *)__pyx_v_dyDesc), ((void *)__pyx_v_dy), ((cudnnTensorDescriptor_t)__pyx_v_dhyDesc), ((void *)__pyx_v_dhy), ((cudnnTensorDescriptor_t)__pyx_v_dcyDesc), ((void *)__pyx_v_dcy), ((cudnnFilterDescriptor_t)__pyx_v_wDesc), ((void *)__pyx_v_w), ((cudnnTensorDescriptor_t)__pyx_v_hxDesc), ((void *)__pyx_v_hx), ((cudnnTensorDescriptor_t)__pyx_v_cxDesc), ((void *)__pyx_v_cx), ((cudnnTensorDescriptor_t *)__pyx_v_dxDesc), ((void *)__pyx_v_dx), ((cudnnTensorDescriptor_t)__pyx_v_dhxDesc), ((void *)__pyx_v_dhx), ((cudnnTensorDescriptor_t)__pyx_v_dcxDesc), ((void *)__pyx_v_dcx), ((void *)__pyx_v_workspace), __pyx_v_workSpaceSizeInBytes, ((void *)__pyx_v_reserveSpace), __pyx_v_reserveSpaceSizeInBytes);
| ^~~~~~~~~~~~~~~~~~~~
| cudnnRNNBackwardData_v8
cupy_backends/cuda/libs/cudnn.cpp: In function ‘PyObject* __pyx_f_13cupy_backends_4cuda_4libs_5cudnn_RNNBackwardWeights(intptr_t, size_t, int, size_t, size_t, size_t, size_t, size_t, size_t, size_t, size_t, size_t, size_t, size_t, size_t, int)’:
cupy_backends/cuda/libs/cudnn.cpp:30815:26: error: ‘cudnnRNNBackwardWeights’ was not declared in this scope; did you mean ‘cudnnRNNBackwardWeights_v8’?
30815 | __pyx_v_status = cudnnRNNBackwardWeights(((cudnnHandle_t)__pyx_v_handle), ((cudnnRNNDescriptor_t)__pyx_v_rnnDesc), __pyx_v_seqLength, ((cudnnTensorDescriptor_t *)__pyx_v_xDesc), ((void *)__pyx_v_x), ((cudnnTensorDescriptor_t)__pyx_v_hxDesc), ((void *)__pyx_v_hx), ((cudnnTensorDescriptor_t *)__pyx_v_yDesc), ((void *)__pyx_v_y), ((void *)__pyx_v_workspace), __pyx_v_workSpaceSizeInBytes, ((cudnnFilterDescriptor_t)__pyx_v_dwDesc), ((void *)__pyx_v_dw), ((void *)__pyx_v_reserveSpace), __pyx_v_reserveSpaceSizeInBytes);
| ^~~~~~~~~~~~~~~~~~~~~~~
| cudnnRNNBackwardWeights_v8
cupy_backends/cuda/libs/cudnn.cpp: In function ‘PyObject* __pyx_f_13cupy_backends_4cuda_4libs_5cudnn_RNNForwardInferenceEx(intptr_t, size_t, size_t, size_t, size_t, size_t, size_t, size_t, size_t, size_t, size_t, size_t, size_t, size_t, size_t, size_t, size_t, size_t, size_t, size_t, size_t, size_t, size_t, size_t, size_t, size_t, int)’:
cupy_backends/cuda/libs/cudnn.cpp:31155:26: error: ‘cudnnRNNForwardInferenceEx’ was not declared in this scope
31155 | __pyx_v_status = cudnnRNNForwardInferenceEx(((cudnnHandle_t)__pyx_v_handle), ((cudnnRNNDescriptor_t)__pyx_v_rnnDesc), ((cudnnRNNDataDescriptor_t)__pyx_v_xDesc), ((void const *)__pyx_v_x), ((cudnnTensorDescriptor_t)__pyx_v_hxDesc), ((void const *)__pyx_v_hx), ((cudnnTensorDescriptor_t)__pyx_v_cxDesc), ((void const *)__pyx_v_cx), ((cudnnFilterDescriptor_t)__pyx_v_wDesc), ((void const *)__pyx_v_w), ((cudnnRNNDataDescriptor_t)__pyx_v_yDesc), ((void *)__pyx_v_y), ((cudnnTensorDescriptor_t)__pyx_v_hyDesc), ((void *)__pyx_v_hy), ((cudnnTensorDescriptor_t)__pyx_v_cyDesc), ((void *)__pyx_v_cy), ((cudnnRNNDataDescriptor_t)__pyx_v_kDesc), ((void const *)__pyx_v_keys), ((cudnnRNNDataDescriptor_t)__pyx_v_cDesc), ((void *)__pyx_v_cAttn), ((cudnnRNNDataDescriptor_t)__pyx_v_iDesc), ((void *)__pyx_v_iAttn), ((cudnnRNNDataDescriptor_t)__pyx_v_qDesc), ((void *)__pyx_v_queries), ((void *)__pyx_v_workSpace), __pyx_v_workSpaceSizeInBytes);
| ^~~~~~~~~~~~~~~~~~~~~~~~~~
cupy_backends/cuda/libs/cudnn.cpp: In function ‘PyObject* __pyx_f_13cupy_backends_4cuda_4libs_5cudnn_RNNForwardTrainingEx(intptr_t, size_t, size_t, size_t, size_t, size_t, size_t, size_t, size_t, size_t, size_t, size_t, size_t, size_t, size_t, size_t, size_t, size_t, size_t, size_t, size_t, size_t, size_t, size_t, size_t, size_t, size_t, size_t, int)’:
cupy_backends/cuda/libs/cudnn.cpp:31616:26: error: ‘cudnnRNNForwardTrainingEx’ was not declared in this scope
31616 | __pyx_v_status = cudnnRNNForwardTrainingEx(((cudnnHandle_t)__pyx_v_handle), ((cudnnRNNDescriptor_t)__pyx_v_rnnDesc), ((cudnnRNNDataDescriptor_t)__pyx_v_xDesc), ((void const *)__pyx_v_x), ((cudnnTensorDescriptor_t)__pyx_v_hxDesc), ((void const *)__pyx_v_hx), ((cudnnTensorDescriptor_t)__pyx_v_cxDesc), ((void const *)__pyx_v_cx), ((cudnnFilterDescriptor_t)__pyx_v_wDesc), ((void const *)__pyx_v_w), ((cudnnRNNDataDescriptor_t)__pyx_v_yDesc), ((void *)__pyx_v_y), ((cudnnTensorDescriptor_t)__pyx_v_hyDesc), ((void *)__pyx_v_hy), ((cudnnTensorDescriptor_t)__pyx_v_cyDesc), ((void *)__pyx_v_cy), ((cudnnRNNDataDescriptor_t)__pyx_v_kDesc), ((void const *)__pyx_v_keys), ((cudnnRNNDataDescriptor_t)__pyx_v_cDesc), ((void *)__pyx_v_cAttn), ((cudnnRNNDataDescriptor_t)__pyx_v_iDesc), ((void *)__pyx_v_iAttn), ((cudnnRNNDataDescriptor_t)__pyx_v_qDesc), ((void *)__pyx_v_queries), ((void *)__pyx_v_workSpace), __pyx_v_workSpaceSizeInBytes, ((void *)__pyx_v_reserveSpace), __pyx_v_reserveSpaceSizeInBytes);
| ^~~~~~~~~~~~~~~~~~~~~~~~~
cupy_backends/cuda/libs/cudnn.cpp: In function ‘PyObject* __pyx_f_13cupy_backends_4cuda_4libs_5cudnn_RNNBackwardDataEx(intptr_t, size_t, size_t, size_t, size_t, size_t, size_t, size_t, size_t, size_t, size_t, size_t, size_t, size_t, size_t, size_t, size_t, size_t, size_t, size_t, size_t, size_t, size_t, size_t, size_t, size_t, size_t, size_t, size_t, size_t, int)’:
cupy_backends/cuda/libs/cudnn.cpp:32099:26: error: ‘cudnnRNNBackwardDataEx’ was not declared in this scope; did you mean ‘cudnnRNNBackwardData_v8’?
32099 | __pyx_v_status = cudnnRNNBackwardDataEx(((cudnnHandle_t)__pyx_v_handle), ((cudnnRNNDescriptor_t)__pyx_v_rnnDesc), ((cudnnRNNDataDescriptor_t)__pyx_v_yDesc), ((void const *)__pyx_v_y), ((cudnnRNNDataDescriptor_t)__pyx_v_dyDesc), ((void const *)__pyx_v_dy), ((cudnnRNNDataDescriptor_t)__pyx_v_dcDesc), ((void const *)__pyx_v_dcAttn), ((cudnnTensorDescriptor_t)__pyx_v_dhyDesc), ((void const *)__pyx_v_dhy), ((cudnnTensorDescriptor_t)__pyx_v_dcyDesc), ((void const *)__pyx_v_dcy), ((cudnnFilterDescriptor_t)__pyx_v_wDesc), ((void const *)__pyx_v_w), ((cudnnTensorDescriptor_t)__pyx_v_hxDesc), ((void const *)__pyx_v_hx), ((cudnnTensorDescriptor_t)__pyx_v_cxDesc), ((void const *)__pyx_v_cx), ((cudnnRNNDataDescriptor_t)__pyx_v_dxDesc), ((void *)__pyx_v_dx), ((cudnnTensorDescriptor_t)__pyx_v_dhxDesc), ((void *)__pyx_v_dhx), ((cudnnTensorDescriptor_t)__pyx_v_dcxDesc), ((void *)__pyx_v_dcx), ((cudnnRNNDataDescriptor_t)__pyx_v_dkDesc), ((void *)__pyx_v_dkeys), ((void *)__pyx_v_workSpace), __pyx_v_workSpaceSizeInBytes, ((void *)__pyx_v_reserveSpace), __pyx_v_reserveSpaceSizeInBytes);
| ^~~~~~~~~~~~~~~~~~~~~~
| cudnnRNNBackwardData_v8
cupy_backends/cuda/libs/cudnn.cpp: In function ‘PyObject* __pyx_f_13cupy_backends_4cuda_4libs_5cudnn_RNNBackwardWeightsEx(intptr_t, size_t, size_t, size_t, size_t, size_t, size_t, size_t, size_t, size_t, size_t, size_t, size_t, size_t, int)’:
cupy_backends/cuda/libs/cudnn.cpp:32604:26: error: ‘cudnnRNNBackwardWeightsEx’ was not declared in this scope; did you mean ‘cudnnRNNBackwardWeights_v8’?
32604 | __pyx_v_status = cudnnRNNBackwardWeightsEx(((cudnnHandle_t)__pyx_v_handle), ((cudnnRNNDescriptor_t)__pyx_v_rnnDesc), ((cudnnRNNDataDescriptor_t)__pyx_v_xDesc), ((void const *)__pyx_v_x), ((cudnnTensorDescriptor_t)__pyx_v_hxDesc), ((void const *)__pyx_v_hx), ((cudnnRNNDataDescriptor_t)__pyx_v_yDesc), ((void const *)__pyx_v_y), ((void *)__pyx_v_workSpace), __pyx_v_workSpaceSizeInBytes, ((cudnnFilterDescriptor_t)__pyx_v_dwDesc), ((void *)__pyx_v_dw), ((void *)__pyx_v_reserveSpace), __pyx_v_reserveSpaceSizeInBytes);
| ^~~~~~~~~~~~~~~~~~~~~~~~~
| cudnnRNNBackwardWeights_v8
cupy_backends/cuda/libs/cudnn.cpp: At global scope:
cupy_backends/cuda/libs/cudnn.cpp:43366:32: error: redefinition of ‘PyObject* __Pyx_PyInt_From_cudnnRNNPaddingMode_t’
43366 | static CYTHON_INLINE PyObject* __Pyx_PyInt_From_cudnnRNNPaddingMode_t(cudnnRNNPaddingMode_t value) {
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
cupy_backends/cuda/libs/cudnn.cpp:2358:32: note: ‘PyObject* __Pyx_PyInt_From_cudnnRNNPaddingMode_t’ previously defined here
2358 | static CYTHON_INLINE PyObject* __Pyx_PyInt_From_cudnnRNNPaddingMode_t(cudnnRNNPaddingMode_t value);
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
cupy_backends/cuda/libs/cudnn.cpp:43366:71: error: ‘cudnnRNNPaddingMode_t’ was not declared in this scope; did you mean ‘cudnnPaddingMode_t’?
43366 | static CYTHON_INLINE PyObject* __Pyx_PyInt_From_cudnnRNNPaddingMode_t(cudnnRNNPaddingMode_t value) {
| ^~~~~~~~~~~~~~~~~~~~~
| cudnnPaddingMode_t
```
### To Reproduce
```py
# Write the code here
```
### Installation
Source (`pip install cupy`)
### Environment
```
Arch Linux with `cudnn 9.2.1.18` and cuda `12.6.1`
```
### Additional Information
_No response_ | closed | 2024-10-01T05:03:06Z | 2024-10-31T01:53:32Z | https://github.com/cupy/cupy/issues/8633 | [
"issue-checked"
] | actionless | 10 |
feature-engine/feature_engine | scikit-learn | 809 | Request to use polars dataframe package directly | Thank you for creating a great package.
polars is a next-generation high-performance dataframe package that will replace pandas.
After converting polars to pandas using to_pandas(), you can use the feature-engine package.
However, it would be even better if you could directly process polars dataframe in the feature-engine package!
Have a nice week :) | open | 2024-09-02T23:18:19Z | 2024-10-31T00:24:25Z | https://github.com/feature-engine/feature_engine/issues/809 | [] | jnhyeon | 3 |
kymatio/kymatio | numpy | 194 | Inconsistency in the position of J across Scattering*d constructors | A follow-up on a discussion with @janden and @eickenberg in #158
At the moment, the order of positional arguments in the scattering constructors is:
Scattering1d: `T, J, Q`
Scattering2d: `M, N, J`
Scattering3d: `M, N, O, J, L, sigma_0`
The only parameter that is truly common to each of these classes is `J`. It is also the only one in which we really can't provide a good default value. Lastly, we could make the case that the parameters which are relative to input shape (`T`, `M`, `N`, `O`) could be inferred directly from `J` by default. This would pave the way for allowing variable-sized inputs at runtime.
That's clearly a lot of work, and I don't want to schedule all that at once. We can work towards this progressively. However, one necessary change to the API is the order of positional parameters. I would recommend
Scattering1d: `J, T, Q`
Scattering2d: `J, M, N`
Scattering3d: `J, M, N, O, L, sigma_0`
That way, in the future we will be able to progressively include simpler constructors in a backwards compatible way, and/or with appropriate deprecation warnings.
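As a rough illustration of where this ordering could lead (a hypothetical sketch, not the actual kymatio API — the default rule below is a placeholder):

```python
from typing import Optional

class Scattering1D:
    """Hypothetical constructor: J comes first, so the shape parameter T
    can later become optional and be inferred when omitted."""

    def __init__(self, J: int, T: Optional[int] = None, Q: int = 1):
        self.J = J
        # Placeholder default: the smallest length supporting J octaves.
        self.T = T if T is not None else 2 ** J
        self.Q = Q

s = Scattering1D(6)       # only J is required
print(s.J, s.T, s.Q)      # 6 64 1
```

Passing `T` explicitly keeps today's behavior, so such a change stays backwards compatible as long as the positional order is settled first.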
This is a big API decision but it is not a lot of work on the part of the code. There are only a few places in the docs and examples in which these constructors are called with positional parameters.
| closed | 2018-11-24T22:50:59Z | 2018-11-27T04:58:41Z | https://github.com/kymatio/kymatio/issues/194 | [
"API"
] | lostanlen | 1 |
Avaiga/taipy | data-visualization | 2,381 | Remove login visual element | ### Description
This issue consists of removing the login visual element from Taipy Community.
Indeed, it covers a use case involving authentication, which is part of the Taipy Enterprise package.
In parallel with this breaking change, another issue should be opened to add a login visual element to taipy-enterprise that can integrate deeply with authentication and authorization.
### Acceptance Criteria
- [ ] If applicable, a new demo code is provided to show the new feature in action.
- [ ] Integration tests exhibiting how the functionality works are added.
- [ ] Any new code is covered by a unit tested.
- [ ] Check code coverage is at least 90%.
- [ ] Related issue(s) in taipy-doc are created for documentation and Release Notes are updated.
### Code of Conduct
- [x] I have checked the [existing issues](https://github.com/Avaiga/taipy/issues?q=is%3Aissue+).
- [ ] I am willing to work on this issue (optional) | open | 2025-01-06T10:17:51Z | 2025-01-31T13:26:20Z | https://github.com/Avaiga/taipy/issues/2381 | [
"🖰 GUI",
"🟧 Priority: High",
"✨New feature",
"🔒 Staff only",
"Enterprise",
"Enterprise: 🙍🏼User management"
] | jrobinAV | 1 |
deepfakes/faceswap | machine-learning | 1,256 | Failed to convert image: 'data_dst_000001.png' UnboundLocalError: local variable 'out' referenced before assignment | path: \plugins\convert\mask\mask_blend.py -line 161
```python
mask = self._get_mask(detected_face, predicted_mask, centering, sub_crop_offset)
raw_mask = mask.copy()
if self._mask_type != "none":
    out = self._erode(mask) if self._do_erode else mask
    out = np.minimum(out, self._box)
logger.trace(  # type: ignore
    "mask shape: %s, raw_mask shape: %s", mask.shape, raw_mask.shape)
return out, raw_mask
```
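The failure mode in isolation (a minimal standalone repro I wrote, not faceswap code): when the `if` body is skipped, `out` is never bound, so the `return` raises.

```python
def blend(mask_type: str):
    # Same shape as the snippet above: `out` is only assigned
    # inside the conditional branch.
    if mask_type != "none":
        out = mask_type.upper()
    return out  # UnboundLocalError when mask_type == "none"

print(blend("box"))   # BOX
try:
    blend("none")
except UnboundLocalError as exc:
    print("raised:", exc)
```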
I changed it to:
```python
mask = self._get_mask(detected_face, predicted_mask, centering, sub_crop_offset)
raw_mask = mask.copy()
if self._mask_type != "none":
    mask = self._erode(mask) if self._do_erode else mask
    mask = np.minimum(mask, self._box)
logger.trace(  # type: ignore
    "mask shape: %s, raw_mask shape: %s", mask.shape, raw_mask.shape)
return mask, raw_mask
```
| closed | 2022-08-13T12:03:35Z | 2022-08-18T18:36:24Z | https://github.com/deepfakes/faceswap/issues/1256 | [] | LongHuW | 1 |
kornia/kornia | computer-vision | 2,845 | [Bug] RandomJPEG fails if sides are not divisible by 16 | ### Describe the bug

RandomJPEG fails if sides are not divisible by 16
### Reproduction steps
```bash
In [1]: from kornia.augmentation import RandomJPEG
In [2]: import torch
...: rng = torch.manual_seed(0)
...: images = 0.1904 * torch.ones(2, 3, 33, 37)
...: aug = RandomJPEG(jpeg_quality=(1.0, 50.0), p=1.)
...: images_jpeg = aug(images)
---------------------------------------------------------------------------
Exception Traceback (most recent call last)
Cell In[2], line 5
3 images = 0.1904 * torch.ones(2, 3, 33, 37)
4 aug = RandomJPEG(jpeg_quality=(1.0, 50.0), p=1.)
----> 5 images_jpeg = aug(images)
File ~/anaconda3/envs/albumentations_benchmark/lib/python3.10/site-packages/torch/nn/modules/module.py:1511, in Module._wrapped_call_impl(self, *args, **kwargs)
1509 return self._compiled_call_impl(*args, **kwargs) # type: ignore[misc]
1510 else:
-> 1511 return self._call_impl(*args, **kwargs)
File ~/anaconda3/envs/albumentations_benchmark/lib/python3.10/site-packages/torch/nn/modules/module.py:1520, in Module._call_impl(self, *args, **kwargs)
1515 # If we don't have any hooks, we want to skip the rest of the logic in
1516 # this function, and just call forward.
1517 if not (self._backward_hooks or self._backward_pre_hooks or self._forward_hooks or self._forward_pre_hooks
1518 or _global_backward_pre_hooks or _global_backward_hooks
1519 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1520 return forward_call(*args, **kwargs)
1522 try:
1523 result = None
File ~/anaconda3/envs/albumentations_benchmark/lib/python3.10/site-packages/kornia/augmentation/base.py:210, in _BasicAugmentationBase.forward(self, input, params, **kwargs)
206 params["batch_prob"] = tensor([True] * batch_shape[0])
208 params, flags = self._process_kwargs_to_params_and_flags(params, self.flags, **kwargs)
--> 210 output = self.apply_func(in_tensor, params, flags)
211 return self.transform_output_tensor(output, input_shape) if self.keepdim else output
File ~/anaconda3/envs/albumentations_benchmark/lib/python3.10/site-packages/kornia/augmentation/_2d/base.py:129, in RigidAffineAugmentationBase2D.apply_func(self, in_tensor, params, flags)
126 flags = self.flags
128 trans_matrix = self.generate_transformation_matrix(in_tensor, params, flags)
--> 129 output = self.transform_inputs(in_tensor, params, flags, trans_matrix)
130 self._transform_matrix = trans_matrix
132 return output
File ~/anaconda3/envs/albumentations_benchmark/lib/python3.10/site-packages/kornia/augmentation/base.py:261, in _AugmentationBase.transform_inputs(self, input, params, flags, transform, **kwargs)
259 self.validate_tensor(in_tensor)
260 if to_apply.all():
--> 261 output = self.apply_transform(in_tensor, params, flags, transform=transform)
262 elif not to_apply.any():
263 output = self.apply_non_transform(in_tensor, params, flags, transform=transform)
File ~/anaconda3/envs/albumentations_benchmark/lib/python3.10/site-packages/kornia/augmentation/_2d/intensity/jpeg.py:56, in RandomJPEG.apply_transform(self, input, params, flags, transform)
53 def apply_transform(
54 self, input: Tensor, params: Dict[str, Tensor], flags: Dict[str, Any], transform: Optional[Tensor] = None
55 ) -> Tensor:
---> 56 jpeg_output: Tensor = jpeg_codec_differentiable(input, params["jpeg_quality"])
57 return jpeg_output
File ~/anaconda3/envs/albumentations_benchmark/lib/python3.10/site-packages/kornia/utils/image.py:231, in perform_keep_shape_image.<locals>._wrapper(input, *args, **kwargs)
229 input_shape = input.shape
230 input = _to_bchw(input) # view input as (B, C, H, W)
--> 231 output = f(input, *args, **kwargs)
232 if len(input_shape) == 3:
233 output = output[0]
File ~/anaconda3/envs/albumentations_benchmark/lib/python3.10/site-packages/kornia/enhance/jpeg.py:440, in jpeg_codec_differentiable(image_rgb, jpeg_quality, quantization_table_y, quantization_table_c)
438 # Check shape of inputs
439 KORNIA_CHECK_SHAPE(image_rgb, ["*", "3", "H", "W"])
--> 440 KORNIA_CHECK(
441 (image_rgb.shape[-1] % 16 == 0) and (image_rgb.shape[-2] % 16 == 0),
442 f"image dimension must be divisible by 16. Got the shape {image_rgb.shape}.",
443 )
444 KORNIA_CHECK_SHAPE(jpeg_quality, ["B"])
445 # Add batch dimension to quantization tables if needed
File ~/anaconda3/envs/albumentations_benchmark/lib/python3.10/site-packages/kornia/core/check.py:103, in KORNIA_CHECK(condition, msg, raises)
101 if not condition:
102 if raises:
--> 103 raise Exception(f"{condition} not true.\n{msg}")
104 return False
105 return True
Exception: False not true.
image dimension must be divisible by 16. Got the shape torch.Size([2, 3, 33, 37]).
```
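Until non-multiple-of-16 sizes are supported, a workaround I'd try (my own sketch, not an official kornia recommendation) is padding up to the next multiple of 16 before `RandomJPEG` and cropping back afterwards. The size arithmetic:

```python
def next_multiple(n: int, multiple: int = 16) -> int:
    """Smallest value >= n that is divisible by `multiple`."""
    return n + (-n) % multiple

# The failing (2, 3, 33, 37) batch would need padding to 48x48,
# e.g. via torch.nn.functional.pad(..., mode="reflect"), then a
# crop back to [..., :33, :37] after the augmentation.
print(next_multiple(33), next_multiple(37))  # 48 48
```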
### Expected behavior
-
### Environment
```shell
-
```
### Additional context
_No response_ | closed | 2024-03-15T20:24:19Z | 2024-07-01T17:47:42Z | https://github.com/kornia/kornia/issues/2845 | [
"help wanted"
] | ternaus | 6 |
jonaswinkler/paperless-ng | django | 1,682 | Custom css - overrides.css is not loaded. | Hello,
I just discovered this great program and thank the developers for that.
My installation was done with Docker and works very well. I want to customize the interface a little, especially the primary color. I followed the documentation by creating an overrides.css file in the media folder, but it is not loaded.
Am I doing something wrong?
Thanks a lot! | open | 2022-03-06T16:32:43Z | 2022-03-06T16:33:08Z | https://github.com/jonaswinkler/paperless-ng/issues/1682 | [] | go-ten | 0 |
django-import-export/django-import-export | django | 1,891 | Importing AND displaying related models at importing preview | I have this Course model (also note the many-to-many fields):
```python
class Course(models.Model):
    class Meta:
        unique_together = ('name', 'year', 'semester')

    name = models.CharField(primary_key=True, max_length=250, null=False, blank=False)
    year = models.IntegerField(null=False)
    semester = models.IntegerField(null=False)
    active = models.BooleanField(default=True)
    _students = models.ManyToManyField(to=User, blank=True)
    _teachers = models.ManyToManyField(to=User, blank=False, related_name='courses_teacher')

    def add_teacher(self, teacher: 'Teacher'):
        self._teachers.add(teacher)

    def add_student(self, user: 'User'):
        self._students.add(user)
```
For extracting the students from an Excel sheet (which has a horrific format), I used this custom resource model:
```python
class StudentResource(resources.ModelResource):
    username = Field(attribute='username', column_name='STUDENT_ID')
    first_name = Field(attribute='first_name', column_name='NAME')

    class Meta:
        model = User
        skip_unchanged = True
        report_skipped = False
        import_id_fields = ('username',)
        fields = ('STUDENT_ID', 'NAME')

    def import_data(self, dataset, **kwargs):
        """
        clean data from the list, only extract students
        """
        subsection = dataset[23:len(dataset) - 4]
        data = tablib.Dataset()
        # distribute data for students
        data.headers = ['NAME', 'STUDENT_ID']
        for i in subsection:
            data.append([i[1], str(i[8])])
        cleaned_data = data
        return super().import_data(cleaned_data, **kwargs)

    def after_save_instance(self, instance, row, **kwargs):
        print(instance)
        # distribute data for students
        group, _ = Student.create_group()
        group.user_set.add(instance)
```
As you can see, in import_data() I'm only extracting students from the sheet.
Is there a way to extract and display other data (which would belong to other models) from the same Excel sheet in the import preview?
In this particular case, I would also need to extract other data from the dataset — something like, say, `course_name = data[13][3]` or `teacher_name = data[17][3]`. Currently I'm running out of ideas, since the overridden import_data method just returns the result of the super call (which only receives a single dataset as a parameter).
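What I have in mind is roughly this (untested sketch; the cell positions are placeholders I made up, and `rows` stands for the raw sheet — a `tablib.Dataset` supports the same indexing):

```python
def extract_course_header(rows) -> dict:
    """Pull course-level cells out of the raw sheet before slicing
    the student rows (placeholder indices, not the real layout)."""
    return {
        'course_name': rows[13][3],
        'teacher_name': rows[17][3],
    }

# Tiny fake sheet just to exercise the indices:
sheet = [[None] * 9 for _ in range(25)]
sheet[13][3] = 'Algorithms'
sheet[17][3] = 'Dr. Smith'
print(extract_course_header(sheet))
# {'course_name': 'Algorithms', 'teacher_name': 'Dr. Smith'}
```

But even if I collect these values, I don't see a hook for pushing them into the preview.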
If there's a way to do this, would it be possible to display multiple models in the preview? | closed | 2024-06-27T10:31:08Z | 2024-07-20T19:10:37Z | https://github.com/django-import-export/django-import-export/issues/1891 | [
| closed | 2024-06-27T10:31:08Z | 2024-07-20T19:10:37Z | https://github.com/django-import-export/django-import-export/issues/1891 | [
"question"
] | hector-macias1 | 1 |
zappa/Zappa | django | 1,132 | Not recognizing virtualenv created with pyenv. | <!--- Provide a general summary of the issue in the Title above -->
## Context
If .python-version does not exist in the current path, the virtual environment of pyenv is not recognized.
<!--- Provide a more detailed introduction to the issue itself, and why you consider it to be a bug -->
<!--- Also, please make sure that you are running Zappa _from a virtual environment_ and are using Python 3.6/3.7/3.8 -->
## Expected Behavior
<!--- Tell us what should happen -->
```shell
> zappa update dev
Calling update for stage dev..
Downloading and installing dependencies..
...
```
## Actual Behavior
```shell
# .python-version is not in that path, but pyenv works correctly.
> pwd
/Users/username/Documents/GitHub/myenv/myproject
> pyenv versions
* my-env (set by /Users/username/Documents/GitHub/my-env/.python-version)
...
> zappa update dev
Calling update for stage dev..
Error: Zappa requires an active virtual environment!
Learn more about virtual environments here: http://docs.python-guide.org/en/latest/dev/virtualenvs/
```
<!--- Tell us what happens instead -->
## Possible Fix
In addition to detecting the .python-version file in the current path, you can check the current virtual environment by using the `pyenv version` command.
<!--- Not obligatory, but suggest a fix or reason for the bug -->
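A rough sketch of that check (the helper names are mine, not Zappa's actual code; it assumes `pyenv version` prints the active environment name first, as in the output above):

```python
import subprocess

def parse_pyenv_version(output: str) -> str:
    """Extract the environment name from `pyenv version` output,
    e.g. 'my-env (set by /Users/u/.python-version)' -> 'my-env'."""
    return output.split()[0]

def in_pyenv_virtualenv() -> bool:
    """Detect an active pyenv virtualenv even when no .python-version
    file exists in the current working directory."""
    try:
        out = subprocess.check_output(["pyenv", "version"], text=True)
    except (OSError, subprocess.CalledProcessError):
        return False  # pyenv missing or misconfigured
    name = parse_pyenv_version(out)
    # Bare interpreter versions ("3.9.12") and "system" are not virtualenvs.
    return name != "system" and not name[0].isdigit()
```

Zappa could fall back to a check like this when neither `VIRTUAL_ENV` nor a local `.python-version` file is found.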
## Steps to Reproduce
<!--- Provide a link to a live example, or an unambiguous set of steps to -->
<!--- reproduce this bug include code to reproduce, if relevant -->
1.
2.
3.
## Your Environment
<!--- Include as many relevant details about the environment you experienced the bug in -->
* Zappa version used: 0.54.1
* Operating System and Python version: 3.9.12
* The output of `pip freeze`:
```
argcomplete==2.0.0
awscli==1.22.87
boto3==1.21.32
botocore==1.24.32
certifi==2021.10.8
cfn-flip==1.3.0
charset-normalizer==2.0.12
click==8.1.2
colorama==0.4.3
docutils==0.15.2
durationpy==0.5
Faker==13.3.4
Flask==2.1.1
flask-validation-extended==0.1.7
future==0.18.2
hjson==3.0.2
idna==3.3
importlib-metadata==4.11.3
itsdangerous==2.1.2
Jinja2==3.1.1
jmespath==1.0.0
kappa==0.6.0
MarkupSafe==2.1.1
placebo==0.9.0
pyasn1==0.4.8
python-dateutil==2.8.2
python-dotenv==0.20.0
python-slugify==6.1.1
PyYAML==5.4.1
requests==2.27.1
rsa==4.7.2
s3transfer==0.5.2
six==1.16.0
slack-sdk==3.15.2
text-unidecode==1.3
toml==0.10.2
tqdm==4.64.0
troposphere==4.0.0
urllib3==1.26.9
Werkzeug==2.1.1
wsgi-request-logger==0.4.6
zappa==0.54.1
zipp==3.8.0
```
* Link to your project (optional): I don't think this is necessary.
* Your `zappa_settings.json`: I don't think this is necessary.
| closed | 2022-05-10T08:08:11Z | 2022-12-01T10:02:40Z | https://github.com/zappa/Zappa/issues/1132 | [
"bug",
"next-release-candidate"
] | iml1111 | 1 |
AUTOMATIC1111/stable-diffusion-webui | deep-learning | 16,732 | [Bug]: ImportError: cannot import name 'computed_field' from 'pydantic' | ### Checklist
- [ ] The issue exists after disabling all extensions
- [X] The issue exists on a clean installation of webui
- [ ] The issue is caused by an extension, but I believe it is caused by a bug in the webui
- [X] The issue exists in the current version of the webui
- [X] The issue has not been reported before recently
- [ ] The issue has been reported before but has not been fixed yet
### What happened?
WebUI failed to run because of ImportError: cannot import name 'computed_field' from 'pydantic' (/usr/local/lib/python3.10/dist-packages/pydantic/__init__.cpython-310-x86_64-linux-gnu.so)
### Steps to reproduce the problem
Clean install WebUI in Colab
### What should have happened?
Worked normally
### What browsers do you use to access the UI ?
Microsoft Edge
### Sysinfo
[sysinfo.txt](https://github.com/user-attachments/files/18196970/sysinfo.txt)
### Console logs
```Shell
Cloning into 'stable-diffusion-webui'...
remote: Enumerating objects: 34819, done.
remote: Counting objects: 100% (28/28), done.
remote: Compressing objects: 100% (20/20), done.
remote: Total 34819 (delta 19), reused 8 (delta 8), pack-reused 34791 (from 2)
Receiving objects: 100% (34819/34819), 35.45 MiB | 19.49 MiB/s, done.
Resolving deltas: 100% (24319/24319), done.
/content/stable-diffusion-webui
Python 3.10.12 (main, Nov 6 2024, 20:22:13) [GCC 11.4.0]
Version: v1.10.1
Commit hash: 82a973c04367123ae98bd9abdf80d9eda9b910e2
Installing clip
Installing open_clip
Cloning assets into /content/stable-diffusion-webui/repositories/stable-diffusion-webui-assets...
Cloning into '/content/stable-diffusion-webui/repositories/stable-diffusion-webui-assets'...
remote: Enumerating objects: 20, done.
remote: Counting objects: 100% (20/20), done.
remote: Compressing objects: 100% (18/18), done.
remote: Total 20 (delta 0), reused 20 (delta 0), pack-reused 0 (from 0)
Receiving objects: 100% (20/20), 132.70 KiB | 2.88 MiB/s, done.
Cloning Stable Diffusion into /content/stable-diffusion-webui/repositories/stable-diffusion-stability-ai...
Cloning into '/content/stable-diffusion-webui/repositories/stable-diffusion-stability-ai'...
remote: Enumerating objects: 580, done.
remote: Counting objects: 100% (2/2), done.
remote: Compressing objects: 100% (2/2), done.
remote: Total 580 (delta 0), reused 0 (delta 0), pack-reused 578 (from 2)
Receiving objects: 100% (580/580), 73.44 MiB | 38.15 MiB/s, done.
Resolving deltas: 100% (283/283), done.
Cloning Stable Diffusion XL into /content/stable-diffusion-webui/repositories/generative-models...
Cloning into '/content/stable-diffusion-webui/repositories/generative-models'...
remote: Enumerating objects: 1064, done.
remote: Counting objects: 100% (477/477), done.
remote: Compressing objects: 100% (124/124), done.
remote: Total 1064 (delta 376), reused 353 (delta 353), pack-reused 587 (from 1)
Receiving objects: 100% (1064/1064), 53.60 MiB | 33.08 MiB/s, done.
Resolving deltas: 100% (562/562), done.
Cloning K-diffusion into /content/stable-diffusion-webui/repositories/k-diffusion...
Cloning into '/content/stable-diffusion-webui/repositories/k-diffusion'...
remote: Enumerating objects: 1345, done.
remote: Counting objects: 100% (646/646), done.
remote: Compressing objects: 100% (86/86), done.
remote: Total 1345 (delta 604), reused 561 (delta 560), pack-reused 699 (from 1)
Receiving objects: 100% (1345/1345), 239.07 KiB | 5.31 MiB/s, done.
Resolving deltas: 100% (944/944), done.
Cloning BLIP into /content/stable-diffusion-webui/repositories/BLIP...
Cloning into '/content/stable-diffusion-webui/repositories/BLIP'...
remote: Enumerating objects: 277, done.
remote: Counting objects: 100% (183/183), done.
remote: Compressing objects: 100% (46/46), done.
remote: Total 277 (delta 145), reused 137 (delta 137), pack-reused 94 (from 1)
Receiving objects: 100% (277/277), 7.04 MiB | 27.29 MiB/s, done.
Resolving deltas: 100% (152/152), done.
Installing requirements
Launching Web UI with arguments: --share --disable-console-progressbars --disable-safe-unpickle --no-half-vae --skip-torch-cuda-test --no-half
Traceback (most recent call last):
File "/content/stable-diffusion-webui/launch.py", line 48, in <module>
main()
File "/content/stable-diffusion-webui/launch.py", line 44, in main
start()
File "/content/stable-diffusion-webui/modules/launch_utils.py", line 465, in start
import webui
File "/content/stable-diffusion-webui/webui.py", line 13, in <module>
initialize.imports()
File "/content/stable-diffusion-webui/modules/initialize.py", line 17, in imports
import pytorch_lightning # noqa: F401
File "/usr/local/lib/python3.10/dist-packages/pytorch_lightning/__init__.py", line 35, in <module>
from pytorch_lightning.callbacks import Callback # noqa: E402
File "/usr/local/lib/python3.10/dist-packages/pytorch_lightning/callbacks/__init__.py", line 28, in <module>
from pytorch_lightning.callbacks.pruning import ModelPruning
File "/usr/local/lib/python3.10/dist-packages/pytorch_lightning/callbacks/pruning.py", line 31, in <module>
from pytorch_lightning.core.module import LightningModule
File "/usr/local/lib/python3.10/dist-packages/pytorch_lightning/core/__init__.py", line 16, in <module>
from pytorch_lightning.core.module import LightningModule
File "/usr/local/lib/python3.10/dist-packages/pytorch_lightning/core/module.py", line 47, in <module>
from pytorch_lightning.loggers import Logger
File "/usr/local/lib/python3.10/dist-packages/pytorch_lightning/loggers/__init__.py", line 22, in <module>
from pytorch_lightning.loggers.wandb import WandbLogger # noqa: F401
File "/usr/local/lib/python3.10/dist-packages/pytorch_lightning/loggers/wandb.py", line 36, in <module>
import wandb
File "/usr/local/lib/python3.10/dist-packages/wandb/__init__.py", line 21, in <module>
from wandb import sdk as wandb_sdk
File "/usr/local/lib/python3.10/dist-packages/wandb/sdk/__init__.py", line 28, in <module>
from .wandb_init import _attach, init
File "/usr/local/lib/python3.10/dist-packages/wandb/sdk/wandb_init.py", line 39, in <module>
from . import wandb_login, wandb_setup
File "/usr/local/lib/python3.10/dist-packages/wandb/sdk/wandb_login.py", line 19, in <module>
from .wandb_settings import Settings
File "/usr/local/lib/python3.10/dist-packages/wandb/sdk/wandb_settings.py", line 25, in <module>
from pydantic import (
ImportError: cannot import name 'computed_field' from 'pydantic' (/usr/local/lib/python3.10/dist-packages/pydantic/__init__.cpython-310-x86_64-linux-gnu.so)
```
### Additional information
The WebUI is run in Colab | closed | 2024-12-19T11:25:20Z | 2024-12-19T12:18:08Z | https://github.com/AUTOMATIC1111/stable-diffusion-webui/issues/16732 | [
"bug-report"
] | nagikoru | 1 |
fastapi-users/fastapi-users | fastapi | 245 | Register router treat email as case-sensitive | Hi,
I've just noticed that the default provided router for the /register POST endpoint treats email addresses as case-sensitive.
As a result, two "identical" email addresses written with different cases are treated as different users.
e.g. during registration, John.Doe@company.com is treated as a different user than john.doe@company.com
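For reference, a sketch of the kind of normalization that would make the check case-insensitive (`normalize_email` is a hypothetical helper, not part of fastapi-users):

```python
def normalize_email(email: str) -> str:
    """Lowercase an address so lookups/uniqueness are case-insensitive.

    Strictly, RFC 5321 only guarantees the domain part is case-insensitive;
    lowercasing the whole address is the common pragmatic choice.
    """
    local, _, domain = email.partition("@")
    return f"{local.lower()}@{domain.lower()}"


print(normalize_email("John.Doe@company.com"))  # john.doe@company.com
```

Applying something like this both before storing and before lookups would make the two spellings resolve to the same user.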
It may be misleading :-) | closed | 2020-07-06T11:56:46Z | 2020-07-09T16:49:18Z | https://github.com/fastapi-users/fastapi-users/issues/245 | [
"enhancement"
] | MariusMez | 8 |
agronholm/anyio | asyncio | 304 | `anyio.to_thread.run_sync()` hangs IPython when using top-level await expressions | Not sure if this is a problem with IPython, `anyio` or something else.
If you launch an IPython shell, like so:
```bash
$ ipython
Python 3.10.0b2+ (heads/3.10:9c89d62, Jun 2 2021, 20:22:16) [GCC 10.3.0]
Type 'copyright', 'credits' or 'license' for more information
IPython 7.24.1 -- An enhanced Interactive Python. Type '?' for help.
In [1]:
```
And paste this example in, it will run once:
```python3
from time import sleep
from anyio.to_thread import run_sync
def sync_func(time: float):
sleep(time)
print(f'Slept {time} seconds.')
await run_sync(sync_func, 0.5)
```
After it finishes executing, if you await another `anyio.to_thread.run_sync()` coroutine, it will hang the session:
```python3
In [6]: await run_sync(sync_func, 0.5)
```
Here's the stacktrace when you hit Ctrl-C:
```python3
In [6]: await run_sync(sync_func, 0.5)
^C---------------------------------------------------------------------------
KeyboardInterrupt Traceback (most recent call last)
~/.pyenv/versions/3.10-dev/lib/python3.10/site-packages/IPython/core/async_helpers.py in __call__(self, coro)
26 import asyncio
27
---> 28 return asyncio.get_event_loop().run_until_complete(coro)
29
30 def __str__(self):
~/.pyenv/versions/3.10-dev/lib/python3.10/asyncio/base_events.py in run_until_complete(self, future)
626 future.add_done_callback(_run_until_complete_cb)
627 try:
--> 628 self.run_forever()
629 except:
630 if new_task and future.done() and not future.cancelled():
~/.pyenv/versions/3.10-dev/lib/python3.10/asyncio/base_events.py in run_forever(self)
593 events._set_running_loop(self)
594 while True:
--> 595 self._run_once()
596 if self._stopping:
597 break
~/.pyenv/versions/3.10-dev/lib/python3.10/asyncio/base_events.py in _run_once(self)
1843 timeout = min(max(0, when - self.time()), MAXIMUM_SELECT_TIMEOUT)
1844
-> 1845 event_list = self._selector.select(timeout)
1846 self._process_events(event_list)
1847
~/.pyenv/versions/3.10-dev/lib/python3.10/selectors.py in select(self, timeout)
467 ready = []
468 try:
--> 469 fd_event_list = self._selector.poll(timeout, max_ev)
470 except InterruptedError:
471 return ready
KeyboardInterrupt:
```
If you use [`asyncio.to_thread()`](https://docs.python.org/3/library/asyncio-task.html#asyncio.to_thread), then awaiting coroutines in succession works without a problem:
```python3
In [7]: from asyncio import to_thread
In [14]: await to_thread(sync_func, 0.5)
Slept 0.5 seconds.
In [15]: await to_thread(sync_func, 0.5)
Slept 0.5 seconds.
```
Using top-level await expressions when [launching Python via `python3 -m asyncio`](https://piccolo-orm.com/blog/top-level-await-in-python/) works, though.
It also happens on Python 3.9.
| closed | 2021-06-03T01:05:47Z | 2021-06-15T08:14:49Z | https://github.com/agronholm/anyio/issues/304 | [
"bug",
"asyncio"
] | alexdelorenzo | 3 |
jupyter/nbgrader | jupyter | 1,052 | Try Azure Pipelines for greater test speed | We've found that Azure Pipelines is much faster on tests than Travis for CPython. NumFOCUS projects are currently free on Azure Pipelines. This may help with tests esp. on Windows. | closed | 2018-12-15T18:15:24Z | 2019-11-02T16:30:03Z | https://github.com/jupyter/nbgrader/issues/1052 | [
"enhancement",
"good first issue"
] | willingc | 0 |
aio-libs/aiomysql | asyncio | 5 | Impossible to install via PyPI | Hi,
I can't install aiomysql via PyPI with pip.
It's OK via the Git repository.
You should add a `MANIFEST.in` file to include the RST files.
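A minimal `MANIFEST.in` covering the files that `setup.py` reads could look like this (file names taken from the traceback below):

```
include README.rst
include CHANGES.rst
```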
Log message:
```
(pyvenv) lg@steroids:~/tmp$ pip install aiomysql
Downloading/unpacking aiomysql
Downloading aiomysql-0.0.1.tar.gz (44kB): 44kB downloaded
Running setup.py (path:/home/lg/tmp/pyvenv/build/aiomysql/setup.py) egg_info for package aiomysql
Traceback (most recent call last):
File "<string>", line 17, in <module>
File "/home/lg/tmp/pyvenv/build/aiomysql/setup.py", line 54, in <module>
long_description='\n\n'.join((read('README.rst'), read('CHANGES.rst'))),
File "/home/lg/tmp/pyvenv/build/aiomysql/setup.py", line 20, in read
return open(os.path.join(os.path.dirname(__file__), f)).read().strip()
FileNotFoundError: [Errno 2] No such file or directory: '/home/lg/tmp/pyvenv/build/aiomysql/CHANGES.rst'
Complete output from command python setup.py egg_info:
Traceback (most recent call last):
File "<string>", line 17, in <module>
File "/home/lg/tmp/pyvenv/build/aiomysql/setup.py", line 54, in <module>
long_description='\n\n'.join((read('README.rst'), read('CHANGES.rst'))),
File "/home/lg/tmp/pyvenv/build/aiomysql/setup.py", line 20, in read
return open(os.path.join(os.path.dirname(__file__), f)).read().strip()
FileNotFoundError: [Errno 2] No such file or directory: '/home/lg/tmp/pyvenv/build/aiomysql/CHANGES.rst'
```
| closed | 2015-02-17T21:39:23Z | 2015-02-17T22:26:13Z | https://github.com/aio-libs/aiomysql/issues/5 | [] | ludovic-gasc | 2 |
microsoft/nni | machine-learning | 4,954 | NNI v2.9 Iteration Plan | - Release manager: @ultmaster
- Release start date: 6.22
- Feature freeze date (at most 30 days): ~7.22~ 8.1
- First test package (1 week since feature freeze): ~7.29~ ~8.12~ 8.16
- Release date (3 weeks since feature freeze): ~8.12~ 9.7
## Top selling points
- Official announcement of NAS search space hub
- Support training / search of AutoFormer (BigNAS-styled one-shot NAS)
- Support pruning of transformer: User experience and performance preliminary optimization (masks generation/movement pruning)
### Other nominations
- Pythonic HPO search space factory
- Sklearn and Framework-independent tutorial
- Compression supports Lightning: no need to rewrite code when users use lightning in the first place
## Other items
### NAS
- [ ] P0 - Official announcement of search space hub @ultmaster
- [x] Setup cluster
- [x] A table summarizing the performance of popular one-shot strategies ~7.8~ ~7.29~ 8.1 #5034
- [x] Documentation 8.1 #5035
- [x] #5051
- [ ] #5049
- [x] #5050
- [ ] #5052
- [ ] #5053
- [ ] P0 - FLOPs estimator for NAS @ultmaster
- [x] design review ~7.11~ 7.18
- [x] P0 - Support training ("pao tong", i.e. getting it to run end-to-end) of AutoFormer (BigNAS-styled one-shot NAS) Renjie ~#4965~ #4987
- [ ] P0 - Support evolution search of AutoFormer ~7.27~ ~8.1~ 8.5 #5054
- [ ] P0 - Promote nni.retiarii to nni.nas @ultmaster ~7.1~ 7.8
- [x] #4976
- [x] Move retiarii to NAS #5014
- [ ] NasExperiment interface (deferred)
- [ ] P1 - Refactor of nni.nas.nn @ultmaster
- [ ] design review (3rd round)
- [ ] P1 - Tutorial for one-shot NAS. (depends on 4760) #4509 @ultmaster @JiahangXu ~4.27~ ~5.6~ ??
- [ ] P2 - Strategy refactor: strategies respect resource constraints
- [ ] P0 - Refactor experiment @QuanluZhang 7.8
- [ ] Support view of NAS experiment ~7.1~ #4985
- [ ] Support resume
- [ ] Support new interface of experiment
### Compression @J-shang
- [x] P0 - Support Block Sparse #4932
- [x] P0 - Evaluator - step 1 : add evaluator #4950
- [x] P0 - Evaluator - step 2 : support evaluator ~7.6~ 7.11 #4992
- [ ] P0 - Movement Pruner Improvement & give a good example (transformer) ~7.20~ ~7.22~ 7.29
- [ ] P1 - Evaluator - step 3 : tutorial and doc ~7.13~ ~7.15~ 7.26 #5016
- [x] P1 - Balance Sparse Improvement #5033
- [ ] ~P1 - Combining all of the above features~
- [ ] P2 - ConfigList Improvement
- [ ] P2 - Wrapper Refactor : support pruning target / exclude refactor / block sparse
- [ ] P2 - Improve Compression Experiment (If have time...)
- [x] P1 - Improve UT & IT
#### Speedup @louis-j
- [ ] Auto-generate speed up ops
- [ ] First draft ~7.11~ 7.13 #4996
- [ ] Test different layers and torch versions
### Quantization
- [ ] Quantization refactor @QuanluZhang ~7.22~ ~7.29~ 8.1
### HPO @liuzhe-lz
- [ ] P1 - Scikit-learn tutorial (1 day) ~6.27~ ~7.4~
- [ ] P1 - Framework independent tutorial (1 day)
- [ ] P1 - Pythonic search space API (2 days)
### Experiment @liuzhe-lz
- [ ] P0 - websocket reconnect
- [ ] P2 - logging refactor
- [ ] dispatcher log (1 day)
- [ ] trial log (2 days)
- [ ] P3 - Split NNI manager client APIs and experiment management APIs (2 days)
### WebUI
P0 - 15 days
- [x] P0 - #4973 6.30
- [x] #4975
- [x] P0 - Bug fix: after clicking compare button, the dropdown button (for selecting different keys) is not shown. 7.8 #4990
- [x] Fix issue #4969 ~7.13~ 7.15 #5011
- [x] P0 - Refactor: make React function component use TS interface, use TS interface to describe props. 7.27 #5029
- [x] P0 - Show error/warning messages at the bottom right of the page with small popup windows. 7.28 #5029
- [ ] Discuss: search box
- [ ] Design: the location of trial log URL/button
- [ ] Manage/control experiments on WebUI (maybe create experiments)
- [ ] Project view on WebUI
- [ ] Reduce WebUI latency / Improve trial status consistency
- [ ] WebUI based on WebSocket
### Training service
- Training service refactor 7.25
- [ ] Local training service on PoC entrance 7.10 (review meeting 7.11)
- [ ] Compatibility layer between new training service and old infrastructure
- [ ] New remote training service
- [ ] Remote reconnect #514
- [ ] New infrastructure
- [ ] Compatibility layer between new infrastructure and old training services
- [ ] (stretch) Raw k8s training service
- P0 - Support custom docker registry
- [ ] Kubeflow
- [ ] FrameworkController
- [ ] OpenPAI
- [ ] Support of Microsoft-internal training service
### Pipelines
- [x] P0 - 1ES related stuffs
- [x] Create image and pool @QuanluZhang @ultmaster
- [x] Use Microsoft-hosted pool for full tests, temporarily @J-shang
- [x] apt-get locked by unattended-upgrade
- [x] NAS experiment hang on Windows for unknown reason
- [x] GPU sometimes offline after several minutes (local linux)
- [x] Need a new compliant image (new resource group required)
- [x] Windows doesn't have GPU driver
- [ ] Move tuner test to full test HPO
- [ ] K8S pipeline @liuzhe-lz @QuanluZhang
## Deferred from last release
- [ ] P3 - step 6 - global REST handler register (depends on WebSocket and experiment management refactor)
- [ ] P1 pruning config list extension @J-shang
- [ ] design review ~4.22~ | closed | 2022-06-22T02:27:28Z | 2022-09-23T02:11:24Z | https://github.com/microsoft/nni/issues/4954 | [
"iteration-plan",
"nnidev"
] | ultmaster | 4 |
K3D-tools/K3D-jupyter | jupyter | 188 | Plot a textured surface | Hi,
Great library!
I was wondering if there is a way to plot a textured surface using the `3d.surface` function, something similar to Matlab `surf(Z,C)` functionality. | closed | 2019-11-07T21:45:20Z | 2024-05-10T19:32:12Z | https://github.com/K3D-tools/K3D-jupyter/issues/188 | [] | eladrich | 7 |
LAION-AI/Open-Assistant | python | 2,682 | Open Assistant AI | closed | 2023-04-17T22:24:16Z | 2023-04-18T08:01:45Z | https://github.com/LAION-AI/Open-Assistant/issues/2682 | [] | Lotusfan70 | 0 | |
plotly/plotly.py | plotly | 4,341 | remove `Loading [MathJax]/extensions/MathMenu.js` message | I am working with plotly but the output image shows the watermark:
My code:
```python
df = pd.read_json(os.path.join(path_rq12, 'macro-topics.json'))
categories = []
frequency_p = []
frequency_k = []
for index, group in df.groupby('Challenge_topic_macro'):
categories.append(macro_topic_indexing[index])
frequency_p.append(len(group[group['Challenge_type'] == 'problem']))
frequency_k.append(len(group[group['Challenge_type'] == 'knowledge']))
# Create a stacked bar chart
fig = go.Figure(data=[
go.Bar(name='Problem', x=categories, y=frequency_p, text=frequency_p, textposition='outside'),
go.Bar(name='Knowledge', x=categories, y=frequency_k, text=frequency_k, textposition='outside')
])
# Change the bar mode
fig.update_layout(
barmode='group',
xaxis_title="Macro-topic Name",
yaxis_title="Post Number",
xaxis=dict(title_font=dict(size=18)),
yaxis=dict(title_font=dict(size=18)),
)
fig.show()
fig.write_image(os.path.join(path_rq12, 'Macro-topics frequency histogram.pdf'))
```
My output image:

OS: Ubuntu 20.04
Python: 3.10.9
Plotly: 5.16.1 | open | 2023-08-27T18:45:36Z | 2024-08-12T21:05:47Z | https://github.com/plotly/plotly.py/issues/4341 | [
"bug",
"P3"
] | zhimin-z | 4 |
pydantic/FastUI | fastapi | 121 | DarkMode | a toggle somewhere for switching to dark mode would be awesome | closed | 2023-12-22T14:20:43Z | 2023-12-22T14:45:05Z | https://github.com/pydantic/FastUI/issues/121 | [] | shroominic | 1 |
mirumee/ariadne | api | 1,079 | GraphiQL Explorer error message for subscriptions after switching to graphql-ws | Hi,
I am using the latest Ariadne 0.19 and just switched from the deprecated `subscriptions-transport-ws` protocol to `graphql-ws` for my GraphQL subscriptions. The change went smoothly for my client application, but now GraphiQL explorer displays the following error message, when I try to test a subscription via GraphiQL:
```json
{
"errors": [
{
"message": "Your GraphiQL createFetcher is not properly configured for websocket subscriptions yet. Please provide subscriptionUrl, wsClient or legacyClient option first.",
"stack": "Error: Your GraphiQL createFetcher is not properly configured for websocket subscriptions yet. Please provide subscriptionUrl, wsClient or legacyClient option first.\n at https://unpkg.com/graphiql/graphiql.min.js:2:737044\n at https://unpkg.com/graphiql/graphiql.min.js:2:567238\n at onClick (https://unpkg.com/graphiql/graphiql.min.js:2:638040)\n at HTMLUnknownElement.callCallback (https://unpkg.com/react-dom@17/umd/react-dom.development.js:3942:16)\n at Object.invokeGuardedCallbackDev (https://unpkg.com/react-dom@17/umd/react-dom.development.js:3991:18)\n at invokeGuardedCallback (https://unpkg.com/react-dom@17/umd/react-dom.development.js:4053:33)\n at invokeGuardedCallbackAndCatchFirstError (https://unpkg.com/react-dom@17/umd/react-dom.development.js:4067:27)\n at executeDispatch (https://unpkg.com/react-dom@17/umd/react-dom.development.js:8273:5)\n at processDispatchQueueItemsInOrder (https://unpkg.com/react-dom@17/umd/react-dom.development.js:8305:9)\n at processDispatchQueue (https://unpkg.com/react-dom@17/umd/react-dom.development.js:8318:7)"
}
]
}
```
Here is how I initialize my GraphQL ASGI app:
```python
from ariadne.asgi import GraphQL
from ariadne.asgi.handlers import GraphQLTransportWSHandler
from ariadne.explorer import ExplorerGraphiQL
graphql = GraphQL(
schema,
debug=True,
context_value=get_context_value,
websocket_handler=GraphQLTransportWSHandler(),
explorer=ExplorerGraphiQL(title="My API Explorer", explorer_plugin=True),
)
```
| open | 2023-04-26T12:46:24Z | 2024-03-25T08:22:50Z | https://github.com/mirumee/ariadne/issues/1079 | [
"to do"
] | fabiangfd | 2 |
giotto-ai/giotto-tda | scikit-learn | 127 | Make static plotting functions take a Mapper pipeline as input instead of a Mapper graph | #### Description
For consistency with the interactive plotting API, we might wish to change `create_network_2d` and `create_network_3d` to take a `MapperPipeline` object `pipe` as first argument instead of a graph. As per (#126), we might also wish to clone this pipeline to avoid side effects.
| closed | 2019-12-23T15:45:01Z | 2020-01-16T12:53:31Z | https://github.com/giotto-ai/giotto-tda/issues/127 | [
"enhancement",
"discussion",
"mapper"
] | ulupo | 2 |
satwikkansal/wtfpython | python | 323 | A f**king problem with sys.path | ## Problem
If you import the same file or object via different paths, it will end up with a different id.
It looks like this:
The directory tree:
```bash
.
├── hole
│ ├── __init__.py
│ ├── base.py
│ └── caller.py
└── main.py
```
hole/__init__.py
```python
import sys
from pathlib import Path
# allow imported as third-party package
__PATH = str(Path(__file__).parent)
if __PATH not in sys.path:
sys.path.append(__PATH)
```
hole/base.py
```python
class Base:
shared = []
@staticmethod
def add(x):
Base.shared.append(x)
```
hole/caller.py
```python
from base import Base
def caller():
Base.add(1)
print(Base.shared)
```
main.py
```python
from hole import caller
from hole.base import Base
caller.caller()
print(Base.shared)
```
After running `python3 main.py`, you will get this output:
```bash
[1]
[]
```
## Why????
Because the `sys.path` hack causes the file to be imported as two different modules, two different objects are used.
We can clearly observe this distinction with `Base.__dict__`
Add `print(Base.__dict__)` after `print(Base.shared)` in `main.py` and `hole/caller.py`
The output:
```
[1]
{'__module__': 'base', 'shared': [1], 'add': <staticmethod(<function Base.add at 0x1007fd900>)>, '__dict__': <attribute '__dict__' of 'Base' objects>, '__weakref__': <attribute '__weakref__' of 'Base' objects>, '__doc__': None}
[]
{'__module__': 'hole.base', 'shared': [], 'add': <staticmethod(<function Base.add at 0x1007fda20>)>, '__dict__': <attribute '__dict__' of 'Base' objects>, '__weakref__': <attribute '__weakref__' of 'Base' objects>, '__doc__': None}
```
Okay, we can see that the call in `hole/caller.py` outputs `'__module__': 'base'` while the call in `main.py` outputs `'__module__': 'hole.base'`.
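The duplication can also be observed directly in `sys.modules`: Python caches modules by their fully qualified name, so `base` and `hole.base` become two independent module objects. Here is a self-contained sketch that rebuilds a minimal version of the `hole/` package in a temporary directory (the file contents are trimmed to the essentials):

```python
import os
import sys
import tempfile

root = tempfile.mkdtemp()
pkg = os.path.join(root, "hole")
os.makedirs(pkg)

# hole/__init__.py -- the sys.path hack from above
with open(os.path.join(pkg, "__init__.py"), "w") as f:
    f.write("import sys, os\nsys.path.append(os.path.dirname(__file__))\n")

# hole/base.py -- a trimmed-down Base
with open(os.path.join(pkg, "base.py"), "w") as f:
    f.write("class Base:\n    shared = []\n")

sys.path.insert(0, root)
import hole       # runs the sys.path hack in __init__.py
import base       # loaded under the top-level name "base"
import hole.base  # loaded AGAIN under the name "hole.base"

print('base' in sys.modules, 'hole.base' in sys.modules)  # True True
print(base.Base is hole.base.Base)  # False: two independent classes
```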
## How to fix it?
If we need to use the class attribute in different files and want them to use the same object, there are two options:
> Attention: If there is no interference from `sys.path`, you don't need to care
### Option 1
Add a function that returns the `Base` class in a Python file which lives in the `hole` directory and uses `from base import Base`.
For example, add `get_base` in `hole/caller.py`:
```python
from base import Base
def caller():
Base.add(1)
print(Base.shared)
print(Base.__dict__)
def get_base():
return Base
```
And when you want to use the class attributes of `Base`, first call `get_base` to get the `Base` object whose `__module__` is `base`, and then call `base.xxx`.
### Option 2
Import `Base` like you did in `caller.py`
> Attention: Pylance will report missing imports, lmao
main.py
```python
from hole import caller
from base import Base
caller.caller()
print(Base.shared)
print(Base.__dict__)
``` | open | 2023-12-14T09:09:45Z | 2024-10-16T08:15:56Z | https://github.com/satwikkansal/wtfpython/issues/323 | [] | R4v3nl0 | 1 |
K3D-tools/K3D-jupyter | jupyter | 437 | compatibility with jlab4 ? | quick question as this is not outlined in global README.md, but is the jupyterlab extension expected to be compatible with jupyterlab 4 as published earlier in June ?
thanks in advance ! | closed | 2023-11-28T13:19:02Z | 2024-01-04T15:03:31Z | https://github.com/K3D-tools/K3D-jupyter/issues/437 | [] | parmentelat | 2 |
albumentations-team/albumentations | deep-learning | 1,797 | GaussNoise broken in 1.4.9 | ## Describe the bug
`GaussNoise` does not give the same results in 1.4.9 as in 1.4.8.
### To Reproduce
```py
import cv2
import albumentations as A
import numpy as np
# Generate an image using numpy which is a simple color gradient
image = np.zeros((128, 128, 3), dtype=np.uint8)
image[:, :, 0] = np.arange(0, 128)[:, None]
image[:, :, 1] = np.arange(0, 128)[None, :]
image[:, :, 2] = 128
# Save the augmented image
cv2.imwrite("albu_raw_img.png", image)
# Define the augmentation pipeline
transform = A.Compose([
A.GaussNoise(p=1),
])
# Apply the transformation
augmented = transform(image=image)
# Get the augmented image
augmented_image = augmented["image"]
# Save the augmented image
cv2.imwrite("albu_aug1.png", augmented_image)
```
### Expected behavior
I guess this is clear? :D It should be the same.
### Actual behavior
`GaussNoise` completely noises the image.
### Screenshots
The input:

Output with 1.4.9:

Output with 1.4.8:

| closed | 2024-06-19T09:30:49Z | 2024-06-19T22:17:28Z | https://github.com/albumentations-team/albumentations/issues/1797 | [
"bug"
] | voegtlel | 12 |
TencentARC/GFPGAN | deep-learning | 152 | Are the decoders finetuned? | From the training script I don't believe the decoders are being fine-tuned, but when I play with the Colab code I am getting weird results.
In the colab code, if I make conditions empty, it should return the results without SFT, however, the results are bad.
```python
image, _ = self.stylegan_decoder(
    [style_code],
    [],
    return_latents=return_latents,
    input_is_latent=True,
    randomize_noise=randomize_noise)
```

This is the result from setting conditions to empty using the test images. If decoders are not being fine-tuned, this should give proper face results. | open | 2022-01-24T18:09:05Z | 2023-07-31T07:35:03Z | https://github.com/TencentARC/GFPGAN/issues/152 | [] | mchong6 | 1 |
omnilib/aiomultiprocess | asyncio | 155 | I use it in linux and windows and the program gets stuck after 1-2 days of running, see the process is not killed, it just hibernates | ### I use uvicorn to start my FastAPI program. It has a long-running task in it, and I use aiomultiprocess to speed up my httpx requests, but it always stops after the program runs for a while. I can't find the problem; I've tried some configurations but nothing helped.
### Details
```python
import httpx
from aiomultiprocess import Pool
from elasticsearch import AsyncElasticsearch
from fastapi import APIRouter, status, Depends, BackgroundTasks

from models import Article

api = APIRouter()


async def requests(data):
    async with httpx.AsyncClient() as sess:
        resp = await sess.post(settings.URL, data=data)
        return resp.text


async def create_datas(data: list):
    list_article = [Article(**item) for item in data]
    await Article.bulk_create(list_article)


# This function will be called multiple times in the service
async def run_tasks(start_id, es=None):
    data = await get_datas(start_id, es)  # get data from es
    if data:
        data_list = []
        async with Pool(processes=6,
                        maxtasksperchild=800,
                        childconcurrency=6
                        ) as pool:
            async for result in pool.map(requests, data):
                data_list.append(result)
        await create_datas([i for i in data_list if i])


async def task_process(start, es):
    while start < 140000000:
        start_time = time.time()
        await run_tasks(start, es)
        all_time = time.time() - start_time
        LOG.info(f'id: {start} - {start + 800} The time spent: {all_time}')
        start += 800


@api.get('/tasks/')
async def trans2(background_tasks: BackgroundTasks, start: int, es: AsyncElasticsearch = Depends(get_es)):
    if start:
        background_tasks.add_task(task_process, start, es)
    return my_response(data='', message='starting', code=200, sta=status.HTTP_200_OK)
```
* OS: Windows or ubuntu
* Python version: 3.8
* aiomultiprocess version: 0.9.0
How can I configure it so that it can run stably?
| open | 2022-05-06T03:52:31Z | 2022-05-06T03:52:31Z | https://github.com/omnilib/aiomultiprocess/issues/155 | [] | dyuzhou | 0 |
graphdeco-inria/gaussian-splatting | computer-vision | 328 | CondaEnvException: Pip failed |
```
note: This is an issue with the package mentioned above, not pip.
hint: See above for details.
failed

CondaEnvException: Pip failed

(base) C:\Users\abc6\gaussian-splatting>
``` | open | 2023-10-17T08:57:18Z | 2023-11-01T07:48:55Z | https://github.com/graphdeco-inria/gaussian-splatting/issues/328 | [] | Vaidik501 | 2 |
deepspeedai/DeepSpeed | pytorch | 5,602 | [BUG] Zero3 causes AttributeError: 'NoneType' object has no attribute 'numel' in continual training | I was training a LLaVA model using DeepSpeed ZeRO-3. What I want to do is continually train the model on different datasets.
I create the LLaVA model, and in the for-loop I create a new dataset and a new trainer, then call `trainer.train()`.
At the first iteration of the for-loop, the training works properly.
However, at the second iteration, I got 1 warning and 1 error:
warning: `"Invalidate trace cache @ step XX: expected module XX, but got module XX"`
error: `AttributeError: 'NoneType' object has no attribute 'numel'` at the same location as this [issue](https://github.com/microsoft/DeepSpeed/issues/5019#issue-2101914020)
Screenshot:


But when I simply change the deepspeed config to use zero2 instead of zero3, no error occurs.
I want to use zero3 for larger batch size training. Can you help me out with this?
**System info**
- OS: Ubuntu 20.04
- GPU count and types: single node 4 RTX A6000 gpus
- Python version: 3.10.0
`zero3.json` that I used:
```json
{
"fp16": {
"enabled": "auto",
"loss_scale": 0,
"loss_scale_window": 1000,
"initial_scale_power": 16,
"hysteresis": 2,
"min_loss_scale": 1
},
"bf16": {
"enabled": "auto"
},
"train_micro_batch_size_per_gpu": "auto",
"train_batch_size": "auto",
"gradient_accumulation_steps": "auto",
"zero_optimization": {
"stage": 3,
"overlap_comm": true,
"contiguous_gradients": true,
"sub_group_size": 1e9,
"reduce_bucket_size": "auto",
"stage3_prefetch_bucket_size": "auto",
"stage3_param_persistence_threshold": "auto",
"stage3_max_live_parameters": 1e9,
"stage3_max_reuse_distance": 1e9,
"stage3_gather_16bit_weights_on_model_save": true
}
}
```
`zero2.json`:
```json
{
"fp16": {
"enabled": "auto",
"loss_scale": 0,
"loss_scale_window": 1000,
"initial_scale_power": 16,
"hysteresis": 2,
"min_loss_scale": 1
},
"bf16": {
"enabled": "auto"
},
"train_micro_batch_size_per_gpu": "auto",
"train_batch_size": "auto",
"gradient_accumulation_steps": "auto",
"zero_optimization": {
"stage": 2,
"overlap_comm": true,
"contiguous_gradients": true,
"sub_group_size": 1e9,
"reduce_bucket_size": "auto"
}
}
``` | closed | 2024-06-03T02:56:58Z | 2025-03-10T16:06:21Z | https://github.com/deepspeedai/DeepSpeed/issues/5602 | [
"bug",
"training"
] | thkimYonsei | 9 |
miguelgrinberg/flasky | flask | 375 | 8e: send confirmed email failed. | This morning, the code ran normally (it could send email).
But this afternoon, with no change to the code, the app can't send email.
This is the traceback:
```
127.0.0.1 - - [11/Aug/2018 14:03:11] "GET /auth/confirm HTTP/1.1" 302 -
127.0.0.1 - - [11/Aug/2018 14:03:11] "GET / HTTP/1.1" 302 -
127.0.0.1 - - [11/Aug/2018 14:03:11] "GET /auth/unconfirmed HTTP/1.1" 200 -
Exception in thread Thread-2:
Traceback (most recent call last):
  File "/home/marin/anaconda3/envs/flasky/lib/python3.6/threading.py", line 916, in _bootstrap_inner
    self.run()
  File "/home/marin/anaconda3/envs/flasky/lib/python3.6/threading.py", line 864, in run
    self._target(*self._args, **self._kwargs)
  File "/home/marin/PycharmProjects/flask-hand/app/email.py", line 9, in send_async_email
    mail.send(msg)
  File "/home/marin/anaconda3/envs/flasky/lib/python3.6/site-packages/flask_mail.py", line 415, in send
    with self.connect() as connection:
  File "/home/marin/anaconda3/envs/flasky/lib/python3.6/site-packages/flask_mail.py", line 123, in __enter__
    self.host = self.configure_host()
  File "/home/marin/anaconda3/envs/flasky/lib/python3.6/site-packages/flask_mail.py", line 137, in configure_host
    host = smtplib.SMTP(self.mail.server, self.mail.port)
  File "/home/marin/anaconda3/envs/flasky/lib/python3.6/smtplib.py", line 251, in __init__
    (code, msg) = self.connect(host, port)
  File "/home/marin/anaconda3/envs/flasky/lib/python3.6/smtplib.py", line 336, in connect
    self.sock = self._get_socket(host, port, self.timeout)
  File "/home/marin/anaconda3/envs/flasky/lib/python3.6/smtplib.py", line 307, in _get_socket
    self.source_address)
  File "/home/marin/anaconda3/envs/flasky/lib/python3.6/socket.py", line 722, in create_connection
    raise err
  File "/home/marin/anaconda3/envs/flasky/lib/python3.6/socket.py", line 713, in create_connection
    sock.connect(sa)
OSError: [Errno 101] Network is unreachable
```
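Since the traceback bottoms out in `socket.create_connection`, it may help to probe SMTP reachability directly, independent of Flask-Mail. A stdlib sketch (hypothetical helper; substitute your `MAIL_SERVER`/`MAIL_PORT`):

```python
import smtplib

def smtp_reachable(host: str, port: int, timeout: float = 10.0) -> bool:
    """Quick connectivity probe for an SMTP server (diagnostic sketch only)."""
    try:
        with smtplib.SMTP(host, port, timeout=timeout) as smtp:
            smtp.noop()  # harmless no-op command once connected
        return True
    except (OSError, smtplib.SMTPException):
        return False

# From the machine where the app runs, e.g.:
# print(smtp_reachable("smtp.googlemail.com", 587))  # use your MAIL_SERVER/MAIL_PORT
print(smtp_reachable("host.invalid", 25))  # unresolvable name -> False
```

If this returns `False` for your mail server too, the problem is network connectivity (DNS, firewall, proxy) rather than the application code.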
| closed | 2018-08-11T06:12:48Z | 2018-10-14T22:16:29Z | https://github.com/miguelgrinberg/flasky/issues/375 | [
"question"
] | Kevin-Zhang225 | 4 |
pyg-team/pytorch_geometric | deep-learning | 9,560 | TorchScript compilation of MessagePassing _check_input on torch 1.10.2 | ### 🐛 Describe the bug
The following causes an issue on torch 1.10.2 because, if `size` is provided, the output type will be `List[int]`, but otherwise the output will be `List[Optional[int]]`. I've also tested this on torch 2.3.0, where it works, so the problem is specific to older versions of torch. Unfortunately I can't use a newer version of torch for my project, but perhaps this could be fixed by a simple change like modifying the type annotation of `size` to `Optional[Tuple[Optional[int], Optional[int]]]`?
```python
import torch
import torch_geometric.nn as geom_nn
import torch_geometric.datasets as geom_datasets
from torch_geometric.loader import DataLoader
dataset = geom_datasets.TUDataset("./", "MUTAG", use_edge_attr=True)
data = dataset[0]
x, edge_index, edge_attr = data.x, data.edge_index, data.edge_attr
model = geom_nn.GATv2Conv(-1, 32, edge_dim=edge_attr.shape[1])
out = model(x, edge_index, edge_attr=edge_attr)
torch.jit.script(model)
```
```
f"'{edge_index.size(0)}')")
return list(size) if size is not None else [None, None]
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ <--- HERE
raise ValueError(
'GATv2Conv._check_input' is being compiled since it was called from 'GATv2Conv.edge_updater'
File "/tmp/torch_geometric.nn.conv.gatv2_conv_GATv2Conv_edge_updater_0_kv505_.py", line 136
) -> Tensor:
mutable_size = self._check_input(edge_index, size)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ <--- HERE
kwargs = self.edge_collect(
'GATv2Conv.edge_updater' is being compiled since it was called from 'GATv2Conv.forward__0'
File "torch_geometric/nn/conv/gatv2_conv.py", line 298
# edge_updater_type: (x: PairTensor, edge_attr: OptTensor)
alpha = self.edge_updater(edge_index, x=(x_l, x_r),
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
edge_attr=edge_attr)
~~~~~~~~~~~~~~~~~~~ <--- HERE
# propagate_type: (x: PairTensor, alpha: Tensor)
```
### Versions
Collecting environment information...
PyTorch version: 1.10.2+cu102
Is debug build: False
CUDA used to build PyTorch: 10.2
ROCM used to build PyTorch: N/A
OS: Debian GNU/Linux 10 (buster) (x86_64)
GCC version: (Debian 8.3.0-6) 8.3.0
Clang version: 13.0.1-6~deb10u4
CMake version: version 3.29.3
Libc version: glibc-2.28
Python version: 3.9.16 (main, Jul 8 2024, 22:08:41) [GCC 8.3.0] (64-bit runtime)
Python platform: Linux-5.10.147+-x86_64-with-glibc2.28
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 48 bits physical, 48 bits virtual
CPU(s): 96
On-line CPU(s) list: 0-95
Thread(s) per core: 2
Core(s) per socket: 24
Socket(s): 2
NUMA node(s): 2
Vendor ID: AuthenticAMD
CPU family: 25
Model: 1
Model name: AMD EPYC 7B13
Stepping: 0
CPU MHz: 2449.998
BogoMIPS: 4899.99
Hypervisor vendor: KVM
Virtualization type: full
L1d cache: 32K
L1i cache: 32K
L2 cache: 512K
L3 cache: 32768K
NUMA node0 CPU(s): 0-23,48-71
NUMA node1 CPU(s): 24-47,72-95
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid tsc_known_freq pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand hypervisor lahf_lm cmp_legacy cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw topoext invpcid_single ssbd ibrs ibpb stibp vmmcall fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 clzero xsaveerptr arat npt nrip_save umip vaes vpclmulqdq rdpid fsrm
Versions of relevant libraries:
[pip3] flake8==7.0.0
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.19.5
[pip3] onnx==1.11.0
[pip3] onnxconverter-common==1.13.0
[pip3] pytorch-fast-transformers==0.4.0
[pip3] pytorch-lightning==1.5.8
[pip3] torch==1.10.2
[pip3] torch_geometric==2.5.3
[pip3] torch_scatter==2.1.2
[pip3] torchmetrics==0.7.3
[pip3] torchvision==0.11.3
[conda] Could not collect
| closed | 2024-08-02T02:10:19Z | 2024-08-07T06:22:17Z | https://github.com/pyg-team/pytorch_geometric/issues/9560 | [
"bug"
] | MFairley | 0 |
docarray/docarray | pydantic | 1,555 | Url types are not aware of extension during validation | URL types do not validate the file extension. For instance, if a URL points to an audio file (`.wav`), `ImageUrl` can still accept it.
In the screenshot below, a field is defined as `item: Union[ImageUrl, AudioUrl, str]`. When it is initialized as `MyClass(item='link to audio')`, the value is validated against `ImageUrl`, because `ImageUrl` considers a `.wav` file a valid image.

| closed | 2023-05-19T10:53:25Z | 2023-06-27T14:02:11Z | https://github.com/docarray/docarray/issues/1555 | [] | alaeddine-13 | 1 |
glumpy/glumpy | numpy | 293 | Shader library support to GLSL version > 140 | The shader library yields this error when compiling GLSL versions higher than 140:
```
Error in Fragment shader 5 (<string>)
-> error C7616: global function texture1D is removed after version 140
...
365 vec3 colormap_user(float t)
366 {
367 return texture1D(colormap, t).rgb;
368 }
370 vec3 colormap_user(float t, vec3 under, vec3 over)
...
```
WebGL 2.0 is based on OpenGL ES 3.0, whose shading language (GLSL ES 3.00) likewise drops the legacy `texture1D`/`texture2D` calls.
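Concretely, since GLSL 1.30 (and in GLSL ES 3.00) the overloaded `texture` builtin replaces the `texture1D`/`texture2D` family, so the change would be roughly (untested sketch):

```glsl
vec3 colormap_user(float t)
{
    // `texture` dispatches on the sampler type, replacing deprecated texture1D
    return texture(colormap, t).rgb;
}
```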
I propose changing the `texture1D` call to the generic `texture` function in `library/colormaps/user.glsl` (and any other files as necessary). | closed | 2021-07-06T14:57:15Z | 2021-07-23T01:23:03Z | https://github.com/glumpy/glumpy/issues/293 | [] | jstreibel | 3
Lightning-AI/pytorch-lightning | deep-learning | 19,950 | Autocast "cache_enabled=True" failing | ### Bug description
The autocast argument `cache_enabled=True` is actually not caching the layer weights when using a Trainer.
### What version are you seeing the problem on?
v2.2
### How to reproduce the bug
```python
from pathlib import Path
import pytorch_lightning as pl
import torch
from pytorch_lightning.profilers import PyTorchProfiler
TRACE_DIR = Path("~/traces").expanduser()
AUTOCAST_TO = torch.float16
DEVICE = "cuda:1"
class Module(pl.LightningModule):
def __init__(self):
super().__init__()
self.l1 = torch.nn.Linear(1000, 1000, bias=True)
self.l2 = torch.nn.Linear(1000, 100, bias=True)
def forward(self, x):
return self.l2(self.l1(x))
def training_step(self, batch, batch_idx):
x, y = batch
y_hat = self(x)
loss = torch.nn.functional.mse_loss(y_hat, y)
return loss
def configure_optimizers(self):
return torch.optim.Adam(self.parameters(), lr=0.02)
x = torch.randn(2000, 1000, device=DEVICE, dtype=torch.float32)
y = torch.randn(2000, 100, device=DEVICE, dtype=torch.float32)
dl = torch.utils.data.DataLoader(list(zip(x, y)), batch_size=32)
model = Module()
schedule = torch.profiler.schedule(wait=6, warmup=2, active=4, repeat=2)
profiler = PyTorchProfiler(
schedule=schedule,
dirpath=str(TRACE_DIR),
filename="lightning_autocast",
sort_by_key="cuda_time",
profile_memory=True,
with_stack=False,
with_flops=False,
with_modules=True,
row_limit=100,
)
trainer = pl.Trainer(accelerator="cuda", precision=16, devices=[1], profiler=profiler, max_steps=40)
trainer.fit(model, dl)
```
### Error messages and logs
The above training script produces the following trace, where there are 3 calls to `aten::to` before the first linear layer (one each for the input, weight, and bias). The second linear layer has only 2 calls to `aten::to`, as its input is already in the right dtype.
<img width="1146" alt="image" src="https://github.com/Lightning-AI/pytorch-lightning/assets/15252203/916aa90e-5594-4bf3-95cb-0d4b2119a668">
What should be expected is one (or zero) call to `aten::to`, as the weights should be cached in the right dtype. Example:
<img width="1052" alt="image" src="https://github.com/Lightning-AI/pytorch-lightning/assets/15252203/96bbc999-b209-4ed9-9605-fc0ed9670057">
### Environment
<details>
<summary>Current environment</summary>
```
#- Lightning Component (e.g. Trainer, LightningModule, LightningApp, LightningWork, LightningFlow):
#- PyTorch Lightning Version (e.g., 1.5.0):
#- Lightning App Version (e.g., 0.5.2):
#- PyTorch Version (e.g., 2.0):
#- Python version (e.g., 3.9):
#- OS (e.g., Linux):
#- CUDA/cuDNN version:
#- GPU models and configuration:
#- How you installed Lightning(`conda`, `pip`, source):
#- Running environment of LightningApp (e.g. local, cloud):
```
</details>
### More info
Looking at the code base, `autocast` is used with its default `cache_enabled=True`. Not sure why the cache is not being used. | open | 2024-06-05T18:15:54Z | 2024-06-05T18:58:43Z | https://github.com/Lightning-AI/pytorch-lightning/issues/19950 | [
"bug",
"needs triage"
] | thomassajot | 1 |
neuml/txtai | nlp | 171 | Add reindex method to embeddings | With the addition of #168, txtai embedding indices can now be re-run through indexing given that the data is available.
This method will:
- Read all database records
- Write the records to a new ANN index with new configuration | closed | 2021-12-14T00:32:07Z | 2021-12-19T21:31:31Z | https://github.com/neuml/txtai/issues/171 | [] | davidmezzetti | 0 |
jpadilla/django-rest-framework-jwt | django | 332 | Invalid signature | When inputting a JWT key on https://jwt.io/ it actually tells me that the signature is invalid?
Is anyone else having the same issue?
I am using this library through https://github.com/Tivix/django-rest-auth | closed | 2017-05-14T18:42:36Z | 2017-06-12T20:54:15Z | https://github.com/jpadilla/django-rest-framework-jwt/issues/332 | [] | philippeluickx | 2 |
mitmproxy/pdoc | api | 106 | Option to export only members in subclasses that differ from their parent's version | I have class A which has a lot of members, and class B which extends A and overrides/adds a single function or variable. I'd prefer the documentation for B to only show documentation for that one member, and not reproduce all the other members from A which are identical. In cases where class A has several trivial subclasses, documentation files contain massive amounts of completely redundant information. Is it possible to detect these cases?
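For what it's worth, detecting which members a subclass actually (re)defines looks doable with plain introspection — a rough sketch with a hypothetical helper name, not pdoc's API:

```python
def overridden_members(cls):
    """Names defined or redefined directly on `cls`, i.e. members that differ
    from every base class version (hypothetical helper, not pdoc's API)."""
    inherited = {}
    for base in cls.__mro__[1:]:
        for name, value in vars(base).items():
            inherited.setdefault(name, value)
    return {
        name
        for name, value in vars(cls).items()
        if not name.startswith("__")
        and (name not in inherited or inherited[name] is not value)
    }

class A:
    def f(self):
        return 1

    def g(self):
        return 2

class B(A):
    def g(self):  # the only member B redefines
        return 3

print(overridden_members(B))  # {'g'}
```

A documentation generator could then emit full docs only for these names and a one-line "inherited from A" note for everything else.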
| closed | 2016-05-26T23:52:30Z | 2021-01-20T08:04:51Z | https://github.com/mitmproxy/pdoc/issues/106 | [] | JPLeBreton | 4 |
deezer/spleeter | tensorflow | 143 | Is this only for stereo audio? can i use for mono audio too? |
| closed | 2019-11-28T07:10:45Z | 2019-11-28T08:57:03Z | https://github.com/deezer/spleeter/issues/143 | [
"question"
] | rameezrehman83 | 1 |
explosion/spaCy | nlp | 13,690 | 403 Server Error: Downloading `en_core_web_sm` fails due to "Compatibility table not found for Spacy v3.7.5" |
## How to reproduce the behaviour
Not sure if it's easily reproducible (it isn't even for me consistently), but during `docker build`, downloading the `en_core_web_sm` package fails with a 403 Server error on occasion. With the `--no-cache` option during `docker build`, it still fails occasionally, and retrying the build often succeeds, but not always. This only started happening sometime early this week or late last week; there's been no code change at all for the build or for requirements, and I've never faced this issue prior to now. spaCy claims it can't find the compatibility table, but clearly it can when it doesn't fail.
### Failing command
```
python -m spacy download en_core_web_sm
```
### Error message
```
81.32 ✘ Server error (403)
81.32 Couldn't fetch compatibility table. Please find a package for your spaCy
81.32 installation (v3.7.5), and download it manually. For more details, see the
81.32 documentation: https://spacy.io/usage/models
```
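Until the root cause is clear, the workaround I'm leaning on in the Docker build is simply retrying the download; a sketch with a hypothetical `run_with_retries` helper:

```python
import subprocess
import sys
import time

def run_with_retries(cmd: list[str], attempts: int = 3, delay: float = 5.0) -> None:
    """Re-run a flaky command a few times before giving up (workaround sketch)."""
    for attempt in range(1, attempts + 1):
        if subprocess.run(cmd).returncode == 0:
            return
        if attempt < attempts:
            time.sleep(delay)
    raise RuntimeError(f"command failed after {attempts} attempts: {cmd}")

# e.g. during the image build:
# run_with_retries([sys.executable, "-m", "spacy", "download", "en_core_web_sm"])
```

This masks the intermittent 403 rather than explaining it, so it's only a stopgap.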
## Your Environment
* Operating System: Ubuntu 24.04 LTS
* Python Version Used: 3.11
* spaCy Version Used: 3.7.5
* Environment Information: Conda environment within Docker, spaCy is downloaded with `pip install -r requirements.txt`
| open | 2024-11-13T13:40:51Z | 2024-12-09T08:52:27Z | https://github.com/explosion/spaCy/issues/13690 | [] | mkh1991 | 9 |
ansible/awx | django | 15,228 | Improve User Feedback and Terminology Consistency in Ansible AWX UI | ### Please confirm the following
- [X] I agree to follow this project's [code of conduct](https://docs.ansible.com/ansible/latest/community/code_of_conduct.html).
- [X] I have checked the [current issues](https://github.com/ansible/awx/issues) for duplicates.
- [X] I understand that AWX is open source software provided for free and that I might not receive a timely response.
### Feature type
Enhancement to Existing Feature
### Feature Summary
During a heuristic evaluation of the Ansible AWX web UI using Nielsen's 10 Usability Heuristics, several issues were identified that impact user experience and efficiency. This issue outlines key findings and proposes enhancements to address these usability problems.
1. **Visibility of System Status**
   - Problem: The system often fails to provide timely feedback during longer tasks, such as launching a job template.
   - Proposed Solution: Implement a loading spinner or status bar to indicate the initiation and progress of tasks, improving user confidence.
2. **Match Between System and the Real World**
   - Problem: Technical jargon such as “playbook” and “inventory” may be unclear to new users.
   - Proposed Solution: Provide tooltips or a glossary to help users understand technical terms.
3. **User Control and Freedom**
   - Problem: There are limited options for undoing actions, such as canceling or undoing a job execution.
   - Proposed Solution: Implement an undo feature or a confirmation prompt before critical actions to enhance user control.
4. **Consistency and Standards**
   - Problem: Inconsistent terminology, such as using “hosts” and “nodes” interchangeably, can confuse users.
   - Proposed Solution: Ensure consistent use of terminology across different sections of the interface.
5. **Error Prevention**
   - Problem: Lack of preventive measures for common mistakes, such as validating input fields in playbooks.
   - Proposed Solution: Implement real-time validation to prevent syntax errors and other common mistakes.
6. **Help Users Recognize, Diagnose, and Recover from Errors**
   - Problem: Error messages are often unclear and do not provide specific details or corrective actions.
   - Proposed Solution: Enhance error messages with detailed information and suggested fixes to reduce user frustration.
### Select the relevant components
- [X] UI
- [ ] API
- [ ] Docs
- [ ] Collection
- [ ] CLI
- [ ] Other
### Steps to reproduce
1. **Visibility of System Status**
   1. Navigate to the "Templates" section and launch a job template.
   2. Observe whether the interface provides immediate feedback about the job initiation and its progress.
2. **Match Between System and the Real World**
   1. Explore different sections such as "Projects" and "Inventories".
   2. Identify any technical jargon that may be unclear without context or explanation.
3. **User Control and Freedom**
   1. Execute a job from the "Templates" section.
   2. Attempt to find an option to cancel or undo the job execution.
4. **Consistency and Standards**
   1. Review the terminology used in various sections like "Hosts" and "Nodes".
   2. Check if the terms are used consistently across the interface.
5. **Error Prevention**
   1. Create or edit a playbook in the "Projects" section with incorrect syntax or invalid inputs.
   2. Observe if there is any real-time validation or feedback provided.
6. **Help Users Recognize, Diagnose, and Recover from Errors**
   1. Trigger an error by saving a playbook with invalid syntax or missing required fields.
   2. Examine the error message for clarity and helpfulness in diagnosing and resolving the issue.
### Current results
1. **Visibility of System Status**: The system does not provide immediate feedback when a job is launched, leading to uncertainty about the task's initiation and progress.
2. **Match Between System and the Real World**: Technical terms such as “playbook” and “inventory” are presented without explanation, which can be unclear to new users.
3. **User Control and Freedom**: There are limited options to cancel or undo job executions, reducing user control over critical actions.
4. **Consistency and Standards**: Inconsistent use of terminology, with terms like “hosts” and “nodes” used interchangeably across different sections, causing potential confusion.
5. **Error Prevention**: Input fields are not validated in real-time, allowing syntax errors and other common mistakes to go unnoticed until later.
6. **Help Users Recognize, Diagnose, and Recover from Errors**: Error messages lack detail and do not provide specific information or suggested corrective actions, making it difficult for users to resolve issues.
### Suggested feature result
1. **Visibility of System Status**: The system should provide immediate feedback through a loading spinner or status bar when a job is launched, indicating the task's progress.
2. **Match Between System and the Real World**: Technical terms such as “playbook” and “inventory” should have tooltips or a glossary accessible from the interface to help new users understand their meanings.
3. **User Control and Freedom**: Users should have the option to cancel or undo job executions easily, possibly through an undo feature or a confirmation prompt before critical actions.
4. **Consistency and Standards**: Terminology like “hosts” and “nodes” should be used consistently across all sections of the interface to avoid confusion.
5. **Error Prevention**: The system should validate input fields in real-time, providing immediate feedback for syntax errors or other common mistakes.
6. **Help Users Recognize, Diagnose, and Recover from Errors**: Error messages should be detailed, providing specific information about the error and suggestions for corrective actions.
### Additional information
This issue is based on findings from a heuristic evaluation conducted using Nielsen's 10 Usability Heuristics. Addressing these issues would significantly enhance the user experience and efficiency of the Ansible AWX web UI.
| closed | 2024-05-25T04:58:19Z | 2024-06-05T15:51:11Z | https://github.com/ansible/awx/issues/15228 | [
"type:enhancement",
"component:ui",
"needs_triage",
"community"
] | akashthemosh | 0 |
adap/flower | tensorflow | 4,361 | How can I implement a YOLO model using the Flower framework? | ### What is your question?
## How to Pass Weights as Parameters in Flower?
I’m trying to use the Flower framework to train a YOLO model in a federated learning setting. I’m having trouble figuring out how to properly pass the model weights as parameters between the server and clients.
Here’s what I’ve tried so far:
- I’ve converted the YOLO model weights to a list of NumPy arrays.
```python
from typing import List

import torch
from flwr.client import NumPyClient
from flwr.common import NDArray  # import paths assumed; adjust to your flwr version
from ultralytics import YOLO
from ultralytics.utils.metrics import DetMetrics

# load_model() is a user-defined helper that returns a YOLO model

class RobotClient(NumPyClient):
def __init__(self, data, epochs):
self.model: YOLO = load_model()
self.data = data
self.epochs = epochs
def get_parameters(self, config):
return [param.data.numpy() for param in self.model.state_dict().values()]
def set_parameters(self, parameters: List[NDArray]):
state_dict = {
key: torch.tensor(value)
for key, value in zip(self.model.state_dict().keys(), parameters)
}
self.model.load_state_dict(state_dict)
def fit(self, parameters: List[NDArray], config):
self.set_parameters(parameters)
self.model.train(data=self.data, epochs=self.epochs)
return self.get_parameters(config), 10, {}
def evaluate(self, parameters: List[NDArray], config):
self.set_parameters(parameters)
        metrics: DetMetrics = self.model.val()
        accuracy = metrics.box.map
        loss = metrics.fitness
return loss, 10, {"accuracy": accuracy}
```
However, I’m encountering errors during training, and I suspect it’s related to how the weights are being handled.
```log
ERROR : Client raised an exception.
Traceback (most recent call last):
File "/home/seoyc/Project/work/mlops/flower_study/venv/lib/python3.12/site-packages/flwr/client/app.py", line 536, in start_client_internal
reply_message = client_app(message=message, context=context)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/seoyc/Project/work/mlops/flower_study/venv/lib/python3.12/site-packages/flwr/client/client_app.py", line 143, in __call__
return self._call(message, context)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/seoyc/Project/work/mlops/flower_study/venv/lib/python3.12/site-packages/flwr/client/client_app.py", line 126, in ffn
out_message = handle_legacy_message_from_msgtype(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/seoyc/Project/work/mlops/flower_study/venv/lib/python3.12/site-packages/flwr/client/message_handler/message_handler.py", line 129, in handle_legacy_message_from_msgtype
fit_res = maybe_call_fit(
^^^^^^^^^^^^^^^
File "/home/seoyc/Project/work/mlops/flower_study/venv/lib/python3.12/site-packages/flwr/client/client.py", line 255, in maybe_call_fit
return client.fit(fit_ins)
^^^^^^^^^^^^^^^^^^^
File "/home/seoyc/Project/work/mlops/flower_study/venv/lib/python3.12/site-packages/flwr/client/numpy_client.py", line 259, in _fit
results = self.numpy_client.fit(parameters, ins.config) # type: ignore
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/seoyc/Project/work/mlops/flower_study/src/flower_yolo/client.py", line 35, in fit
self.set_parameters(parameters)
File "/home/seoyc/Project/work/mlops/flower_study/src/flower_yolo/client.py", line 32, in set_parameters
self.model.load_state_dict(state_dict)
File "/home/seoyc/Project/work/mlops/flower_study/venv/lib/python3.12/site-packages/torch/nn/modules/module.py", line 2584, in load_state_dict
raise RuntimeError(
RuntimeError: Error(s) in loading state_dict for YOLO:
While copying the parameter named "model.model.0.conv.weight", whose dimensions in the model are torch.Size([16, 3, 3, 3]) and whose dimensions in the checkpoint are torch.Size([16, 3, 3, 3]), an exception occurred : ('Inplace update to inference tensor outside InferenceMode is not allowed.You can make a clone to get a normal tensor before doing inplace update.See https://github.com/pytorch/rfcs/pull/17 for more details.',).
While copying the parameter named "model.model.0.conv.bias", whose dimensions in the model are torch.Size([16]) and whose dimensions in the checkpoint are torch.Size([16]), an exception occurred : ('Inplace update to inference tensor outside InferenceMode is not allowed.You can make a clone to get a normal tensor before doing inplace update.See https://github.com/pytorch/rfcs/pull/17 for more details.',).
size mismatch for model.model.1.conv.weight: copying a param with shape torch.Size([16]) from checkpoint, the shape in current model is torch.Size([32, 16, 3, 3]).
size mismatch for model.model.1.conv.bias: copying a param with shape torch.Size([16]) from checkpoint, the shape in current model is torch.Size([32]).
size mismatch for model.model.2.cv1.conv.weight: copying a param with shape torch.Size([16]) from checkpoint, the shape in current model is torch.Size([32, 32, 1, 1]).
size mismatch for model.model.2.cv1.conv.bias: copying a param with shape torch.Size([]) from checkpoint, the shape in current model is torch.Size([32]).
size mismatch for model.model.2.cv2.conv.weight: copying a param with shape torch.Size([32, 16, 3, 3]) from checkpoint, the shape in current model is torch.Size([64, 48, 1, 1]).
size mismatch for model.model.2.cv2.conv.bias: copying a param with shape torch.Size([32]) from checkpoint, the shape in current model is torch.Size([64]).
size mismatch for model.model.2.m.0.cv1.conv.weight: copying a param with shape torch.Size([32]) from checkpoint, the shape in current model is torch.Size([8, 16, 3, 3]).
size mismatch for model.model.2.m.0.cv1.conv.bias: copying a param with shape torch.Size([32]) from checkpoint, the shape in current model is torch.Size([8]).
size mismatch for model.model.2.m.0.cv2.conv.weight: copying a param with shape torch.Size([32]) from checkpoint, the shape in current model is torch.Size([16, 8, 3, 3]).
size mismatch for model.model.2.m.0.cv2.conv.bias: copying a param with shape torch.Size([]) from checkpoint, the shape in current model is torch.Size([16]).
size mismatch for model.model.3.conv.weight: copying a param with shape torch.Size([32, 32, 1, 1]) from checkpoint, the shape in current model is torch.Size([64, 64, 3, 3]).
size mismatch for model.model.3.conv.bias: copying a param with shape torch.Size([32]) from checkpoint, the shape in current model is torch.Size([64]).
size mismatch for model.model.4.cv1.conv.weight: copying a param with shape torch.Size([32]) from checkpoint, the shape in current model is torch.Size([64, 64, 1, 1]).
size mismatch for model.model.4.cv1.conv.bias: copying a param with shape torch.Size([32]) from checkpoint, the shape in current model is torch.Size([64]).
size mismatch for model.model.4.cv2.conv.weight: copying a param with shape torch.Size([32]) from checkpoint, the shape in current model is torch.Size([128, 96, 1, 1]).
size mismatch for model.model.4.cv2.conv.bias: copying a param with shape torch.Size([]) from checkpoint, the shape in current model is torch.Size([128]).
size mismatch for model.model.4.m.0.cv1.conv.weight: copying a param with shape torch.Size([64, 48, 1, 1]) from checkpoint, the shape in current model is torch.Size([16, 32, 3, 3]).
size mismatch for model.model.4.m.0.cv1.conv.bias: copying a param with shape torch.Size([64]) from checkpoint, the shape in current model is torch.Size([16]).
size mismatch for model.model.4.m.0.cv2.conv.weight: copying a param with shape torch.Size([64]) from checkpoint, the shape in current model is torch.Size([32, 16, 3, 3]).
size mismatch for model.model.4.m.0.cv2.conv.bias: copying a param with shape torch.Size([64]) from checkpoint, the shape in current model is torch.Size([32]).
size mismatch for model.model.5.conv.weight: copying a param with shape torch.Size([64]) from checkpoint, the shape in current model is torch.Size([128, 128, 3, 3]).
size mismatch for model.model.5.conv.bias: copying a param with shape torch.Size([]) from checkpoint, the shape in current model is torch.Size([128]).
size mismatch for model.model.6.cv1.conv.weight: copying a param with shape torch.Size([8, 16, 3, 3]) from checkpoint, the shape in current model is torch.Size([128, 128, 1, 1]).
size mismatch for model.model.6.cv1.conv.bias: copying a param with shape torch.Size([8]) from checkpoint, the shape in current model is torch.Size([128]).
size mismatch for model.model.6.cv2.conv.weight: copying a param with shape torch.Size([8]) from checkpoint, the shape in current model is torch.Size([128, 192, 1, 1]).
size mismatch for model.model.6.cv2.conv.bias: copying a param with shape torch.Size([8]) from checkpoint, the shape in current model is torch.Size([128]).
size mismatch for model.model.6.m.0.cv1.conv.weight: copying a param with shape torch.Size([8]) from checkpoint, the shape in current model is torch.Size([32, 64, 1, 1]).
size mismatch for model.model.6.m.0.cv1.conv.bias: copying a param with shape torch.Size([]) from checkpoint, the shape in current model is torch.Size([32]).
size mismatch for model.model.6.m.0.cv2.conv.weight: copying a param with shape torch.Size([16, 8, 3, 3]) from checkpoint, the shape in current model is torch.Size([32, 64, 1, 1]).
size mismatch for model.model.6.m.0.cv2.conv.bias: copying a param with shape torch.Size([16]) from checkpoint, the shape in current model is torch.Size([32]).
size mismatch for model.model.6.m.0.cv3.conv.weight: copying a param with shape torch.Size([16]) from checkpoint, the shape in current model is torch.Size([64, 64, 1, 1]).
size mismatch for model.model.6.m.0.cv3.conv.bias: copying a param with shape torch.Size([16]) from checkpoint, the shape in current model is torch.Size([64]).
size mismatch for model.model.6.m.0.m.0.cv1.conv.weight: copying a param with shape torch.Size([16]) from checkpoint, the shape in current model is torch.Size([32, 32, 3, 3]).
size mismatch for model.model.6.m.0.m.0.cv1.conv.bias: copying a param with shape torch.Size([]) from checkpoint, the shape in current model is torch.Size([32]).
size mismatch for model.model.6.m.0.m.0.cv2.conv.weight: copying a param with shape torch.Size([64, 64, 3, 3]) from checkpoint, the shape in current model is torch.Size([32, 32, 3, 3]).
size mismatch for model.model.6.m.0.m.0.cv2.conv.bias: copying a param with shape torch.Size([64]) from checkpoint, the shape in current model is torch.Size([32]).
size mismatch for model.model.6.m.0.m.1.cv1.conv.weight: copying a param with shape torch.Size([64]) from checkpoint, the shape in current model is torch.Size([32, 32, 3, 3]).
size mismatch for model.model.6.m.0.m.1.cv1.conv.bias: copying a param with shape torch.Size([64]) from checkpoint, the shape in current model is torch.Size([32]).
size mismatch for model.model.6.m.0.m.1.cv2.conv.weight: copying a param with shape torch.Size([64]) from checkpoint, the shape in current model is torch.Size([32, 32, 3, 3]).
size mismatch for model.model.6.m.0.m.1.cv2.conv.bias: copying a param with shape torch.Size([]) from checkpoint, the shape in current model is torch.Size([32]).
size mismatch for model.model.7.conv.weight: copying a param with shape torch.Size([64, 64, 1, 1]) from checkpoint, the shape in current model is torch.Size([256, 128, 3, 3]).
size mismatch for model.model.7.conv.bias: copying a param with shape torch.Size([64]) from checkpoint, the shape in current model is torch.Size([256]).
size mismatch for model.model.8.cv1.conv.weight: copying a param with shape torch.Size([64]) from checkpoint, the shape in current model is torch.Size([256, 256, 1, 1]).
size mismatch for model.model.8.cv1.conv.bias: copying a param with shape torch.Size([64]) from checkpoint, the shape in current model is torch.Size([256]).
size mismatch for model.model.8.cv2.conv.weight: copying a param with shape torch.Size([64]) from checkpoint, the shape in current model is torch.Size([256, 384, 1, 1]).
size mismatch for model.model.8.cv2.conv.bias: copying a param with shape torch.Size([]) from checkpoint, the shape in current model is torch.Size([256]).
size mismatch for model.model.8.m.0.cv1.conv.weight: copying a param with shape torch.Size([128, 96, 1, 1]) from checkpoint, the shape in current model is torch.Size([64, 128, 1, 1]).
size mismatch for model.model.8.m.0.cv1.conv.bias: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([64]).
size mismatch for model.model.8.m.0.cv2.conv.weight: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([64, 128, 1, 1]).
size mismatch for model.model.8.m.0.cv2.conv.bias: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([64]).
size mismatch for model.model.8.m.0.cv3.conv.weight: copying a param with shape torch.Size([128]) from checkpoint, the shape in current model is torch.Size([128, 128, 1, 1]).
size mismatch for model.model.8.m.0.cv3.conv.bias: copying a param with shape torch.Size([]) from checkpoint, the shape in current model is torch.Size([128]).
size mismatch for model.model.8.m.0.m.0.cv1.conv.weight: copying a param with shape torch.Size([16, 32, 3, 3]) from checkpoint, the shape in current model is torch.Size([64, 64, 3, 3]).
size mismatch for model.model.8.m.0.m.0.cv1.conv.bias: copying a param with shape torch.Size([16]) from checkpoint, the shape in current model is torch.Size([64]).
...
```
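For reference, size mismatches like the above usually come from the flattened parameter list and the model's `state_dict` keys getting out of sync (e.g. zero-dim buffers such as `num_batches_tracked` being dropped or reordered). The usual Flower pattern keeps the two strictly aligned — a minimal framework-free sketch (the plain dict stands in for a real PyTorch `state_dict`; the names and shapes are invented):

```python
# Order-preserving round-trip of model parameters, as in the typical
# Flower get_parameters/set_parameters pair. The dict below stands in
# for a PyTorch state_dict (names and shapes are made up).
state_dict = {
    "conv1.weight": [[1.0, 2.0], [3.0, 4.0]],
    "conv1.bias": [0.5, 0.5],
    "bn1.num_batches_tracked": 7,  # zero-dim buffers must be kept too
}

def get_parameters(sd):
    # Python dicts preserve insertion order, so values() matches keys()
    return list(sd.values())

def set_parameters(sd, parameters):
    # Restore by zipping against the SAME key order used for export
    if len(parameters) != len(sd):
        raise ValueError("parameter count mismatch")
    return dict(zip(sd.keys(), parameters))

restored = set_parameters(state_dict, get_parameters(state_dict))
assert restored == state_dict
```

If the export side skips any entry (or iterates keys in a different order than the import side), every subsequent tensor is loaded into the wrong slot, which produces exactly this kind of cascade of shape errors.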
Could someone provide guidance or examples on how to correctly pass YOLO model weights as parameters in Flower? Any help would be greatly appreciated! | closed | 2024-10-24T04:20:44Z | 2025-02-11T16:13:46Z | https://github.com/adap/flower/issues/4361 | [
"bug",
"part: examples",
"stale"
] | wkqco33 | 7 |
plotly/dash-table | plotly | 195 | editing the dropdown doesn't trigger an update on `data_timestamp`? | See examples here: https://dash-docs-pr-232.herokuapp.com/datatable/dropdowns | closed | 2018-11-01T15:07:10Z | 2018-11-01T16:31:43Z | https://github.com/plotly/dash-table/issues/195 | [] | chriddyp | 1 |
healthchecks/healthchecks | django | 346 | API @authorize decorator doesn't allow read-only for Single Check | We need to rewrite the `@authorize` decorator to allow for a read-only API key to do a GET request on the `single` view.
Current thought would be to merge the write and read-only decorators and handle the request based on the `request.method` parameter.
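A merged decorator along those lines might look roughly like the sketch below (illustration only — the attribute names like `api_key_readonly` and the `FakeRequest` stand-in for Django's `HttpRequest` are my own, not the project's actual code):

```python
from functools import wraps

class ApiError(Exception):
    pass

def authorize(view):
    """Merged decorator sketch: full-access keys may do anything,
    read-only keys may only issue GET requests."""
    @wraps(view)
    def wrapper(request, *args, **kwargs):
        if request.api_key is None:
            raise ApiError("wrong api key")
        if request.api_key_readonly and request.method != "GET":
            raise ApiError("read-only key cannot " + request.method)
        return view(request, *args, **kwargs)
    return wrapper

class FakeRequest:
    """Stand-in for Django's HttpRequest, for illustration only."""
    def __init__(self, method, api_key="secret", readonly=False):
        self.method = method
        self.api_key = api_key
        self.api_key_readonly = readonly

@authorize
def single(request):
    return {"status": "up"}

# A read-only key can now read a single check
assert single(FakeRequest("GET", readonly=True)) == {"status": "up"}
```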
Without this, you currently need to use a full-access key to read a single check. | closed | 2020-03-24T03:19:30Z | 2020-03-24T14:14:50Z | https://github.com/healthchecks/healthchecks/issues/346 | [] | jameskirsop | 1 |
sigmavirus24/github3.py | rest-api | 425 | Notes are required for Oauth, but optional in the api | To reproduce, follow the [documentation](https://github3py.readthedocs.org/en/develop/examples/oauth.html) but omit the `note`.
Relevant github docs here: https://developer.github.com/v3/oauth_authorizations/#create-a-new-authorization
(Tested with 2-factor-auth. Not sure if the problem exists with "normal" logins.)
Possible solutions:
1) Make it no longer optional.
2) Keep it optional, but create a default unique message
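For option 2, a unique default note could be generated client-side when the caller omits one — a rough sketch (the function name and note prefix are my own, not github3.py API):

```python
import uuid

def default_note():
    """Generate a unique default note so repeated authorization
    requests don't collide on GitHub's per-app note uniqueness."""
    return "github3.py token %s" % uuid.uuid4().hex[:8]
```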
| open | 2015-07-30T21:20:10Z | 2016-11-15T19:39:36Z | https://github.com/sigmavirus24/github3.py/issues/425 | [] | miketwo | 1
PokeAPI/pokeapi | graphql | 387 | What units are being used for the height and weight values? | I noticed that I was pulling in values for the pokemon without units and was about to add them, but I can't figure out which units the values are measured in. For example, Bulbasaur's height value pulls in as 7, but his height is neither 7 inches, 7 feet, 7 meters, or 7 yards. The official Pokemon website says that Bulbasaur's height is 2 feet 4 inches, so I feel like either I'm missing something obvious (very possible) or the heights are wrong. | closed | 2018-10-27T04:37:50Z | 2025-03-08T23:18:35Z | https://github.com/PokeAPI/pokeapi/issues/387 | [] | SilasOtoko | 4 |
encode/apistar | api | 525 | how to use bootstrap with apistar? | Hi, I see Bootstrap files in the repository. What is the best way to set up a Bootstrap project with apistar? Could anybody show an example? | closed | 2018-05-10T08:03:40Z | 2018-07-03T14:58:02Z | https://github.com/encode/apistar/issues/525 | [] | avonar | 2
davidsandberg/facenet | computer-vision | 625 | How to choose the far_target | I was wondering whether the far_target is chosen randomly or in some principled way. I am currently running FaceNet with my own face dataset, but I got a bad validation rate (14%) when using 1e-3 as the far_target, and in this case my threshold was 0.25, which is strange. I thought the threshold should be 1.021. | closed | 2018-01-22T13:30:12Z | 2018-04-01T21:29:09Z | https://github.com/davidsandberg/facenet/issues/625 | [] | LiuNull | 3
fugue-project/fugue | pandas | 50 | [BUG] datetime column (pd.DataFrame) returned in Transformer is causing spark error | **Minimal Code To Reproduce**
```python
import pandas as pd
from fugue_sql import FugueSQLWorkflow
from fugue_spark import SparkExecutionEngine

# schema: a:datetime
def t(sdf: pd.DataFrame) -> pd.DataFrame:
    sdf["a"] = pd.to_datetime(sdf["a"])
    return sdf

with FugueSQLWorkflow(SparkExecutionEngine()) as dag:
    dag.df([["2020-01-01"]], "a:str").transform(t).show()
```
**Describe the bug**
```
TypeError: field a: TimestampType can not accept object Timestamp('2020-01-01 00:00:00') in type <class 'pandas._libs.tslibs.timestamps.Timestamp'>
```
**Expected behavior**
This should work.
Should add datetime tests into general execution engine test suites.
**Environment (please complete the following information):**
- Backend: pandas/dask/ray? spark
- Backend version: 3
- Python version: 3.6.9
- OS: linux/windows linux
| closed | 2020-09-27T00:46:34Z | 2020-09-27T06:24:56Z | https://github.com/fugue-project/fugue/issues/50 | [
"version dependent",
"spark"
] | goodwanghan | 1 |
inducer/pudb | pytest | 312 | Changing the keyboard shortcut to invoke pudb | Would it be possible to change the keyboard shortcut?
The documentation states that we should hit `Ctrl+c`.
The problem is that this signal (`SIGINT`) is also used by Django's runserver to kill the web server.
Therefore, when I hit `Ctrl+c` I land in the debugger, and then (after pressing `c` to continue) the web server stops. | closed | 2018-09-14T14:46:47Z | 2018-10-24T17:23:52Z | https://github.com/inducer/pudb/issues/312 | [] | jaepetto | 4
mirumee/ariadne | graphql | 210 | Example not working [similar to #177] | Using Ariadne==0.5.0 and Uvicorn==0.8.3, following the example at https://ariadnegraphql.org/docs/django-integration, where the `GraphQL` app is placed in a list called `http_routes` and used in a `ProtocolTypeRouter`:
```
http_routes = []
http_routes.append(path("graphql/", GraphQL(schema, debug=True)))
router = ProtocolTypeRouter({
"http": URLRouter(http_routes),
"channel": ChannelNameRouter({
"channelone": ChannelOneConsumer
})
})
```
When Apollo-Federation calls this endpoint, I get:
```
Traceback (most recent call last):
File "/usr/local/lib/python3.7/site-packages/uvicorn/protocols/http/httptools_impl.py", line 368, in run_asgi
result = await app(self.scope, self.receive, self.send)
File "/usr/local/lib/python3.7/site-packages/uvicorn/middleware/asgi2.py", line 6, in __call__
instance = self.app(scope)
File "/usr/local/lib/python3.7/site-packages/channels/routing.py", line 58, in __call__
return self.application_mapping[scope["type"]](scope)
File "/usr/local/lib/python3.7/site-packages/channels/routing.py", line 144, in __call__
"kwargs": {**outer.get("kwargs", {}), **kwargs},
TypeError: __call__() missing 2 required positional arguments: 'receive' and 'send'
[2019-07-04 11:54:07 +0800] [497] [INFO] ('172.21.0.3', 50320) - "POST /graphql/ HTTP/1.0" 500
```
The comment at https://github.com/mirumee/ariadne/issues/177#issuecomment-493950544 says that Ariadne should be using ASGI3, yet the traceback shows uvicorn still going through its ASGI2 path (`uvicorn/middleware/asgi2.py`).
Btw I am using channels==2.2.0
What should I do to get around this? | closed | 2019-07-04T04:09:31Z | 2019-09-13T12:42:08Z | https://github.com/mirumee/ariadne/issues/210 | [] | ghost | 6 |
unit8co/darts | data-science | 2,543 | [QUESTION] NaN handling in model.fit() | I am facing this error when using the XGBoost model, since I have NaN values in my target TimeSeries object.
```
Check failed: valid: Label contains NaN, infinity or a value too large
```
I have seen multiple solutions suggested here:
- fill missing values by darts.utils.missing_values.fill_missing_values()
- use a custom RangeIndex to replace DateTimeIndex (https://github.com/unit8co/darts/issues/2500)
- using sample_weights (https://github.com/unit8co/darts/issues/2294)
However, I would much rather have Darts' `fit()` ignore any data slices containing NaN during training.
E.g., for the time series [1, 2, 3, NaN, 5, 6, 7] fitted to a model with lag=2,
I would like the following behaviour:
data slice 1: [1, 2] >>> 3
data slice 2: [2, 3] >>> NaN (ignore this during .fit())
data slice 3: [3, NaN] >>> 5 (ignore this during .fit())
data slice 4: [NaN, 5] >>> 6 (ignore this during .fit())
data slice 5: [5, 6] >>> 7
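To my knowledge this isn't built into Darts' regression models, but as a workaround the lagged training matrix can be built manually and any sample whose window or target contains NaN masked out before fitting the underlying sklearn/XGBoost estimator. A rough numpy sketch (the function name is mine):

```python
import numpy as np

def make_lagged_samples(series, lag):
    """Build (X, y) training pairs with the given lag, dropping any
    sample whose input window or target value contains NaN."""
    series = np.asarray(series, dtype=float)
    X = np.stack([series[i : i + lag] for i in range(len(series) - lag)])
    y = series[lag:]
    keep = ~(np.isnan(X).any(axis=1) | np.isnan(y))
    return X[keep], y[keep]

# For the example above, only slices 1 and 5 survive:
X, y = make_lagged_samples([1, 2, 3, np.nan, 5, 6, 7], lag=2)
# X == [[1., 2.], [5., 6.]], y == [3., 7.]
```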
May I know if Darts currently supports the above? | closed | 2024-09-26T07:53:51Z | 2024-09-27T13:55:54Z | https://github.com/unit8co/darts/issues/2543 | [
"question"
] | SafetyMary | 2 |