Dataset schema (field, dtype, observed range):

repo_name         string    length 9–75
topic             string    30 classes
issue_number      int64     1–203k
title             string    length 1–976
body              string    length 0–254k
state             string    2 classes
created_at        string    length 20
updated_at        string    length 20
url               string    length 38–105
labels            list      length 0–9
user_login        string    length 1–39
comments_count    int64     0–452
feature-engine/feature_engine
scikit-learn
8
add check for NA in categorical encoders
All categorical encoders need to check whether the data set contains missing values before either training on or transforming a data set.
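A minimal sketch (not feature_engine code; the function name is made up) of the kind of pre-fit/pre-transform validation the issue asks for:

```python
import math

def check_no_missing(values, variable="x"):
    """Raise if any value is missing (None or NaN), mirroring the
    validation the issue asks encoders to run before fit/transform."""
    for v in values:
        if v is None or (isinstance(v, float) and math.isnan(v)):
            raise ValueError(
                f"Variable '{variable}' contains missing values. "
                "Impute them before encoding."
            )
    return True
```

In practice each encoder would run a check like this over every variable it is asked to encode, both in `fit` and in `transform`.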
closed
2019-09-04T08:05:52Z
2020-04-19T09:53:23Z
https://github.com/feature-engine/feature_engine/issues/8
[]
solegalli
0
freqtrade/freqtrade
python
11,523
Is StaticPairList effective for newly launched pairs?
## Describe your environment

* Operating system: Windows 11
* Python Version: 3.10.15
* CCXT version: 4.4.33
* Freqtrade Version: 2024.11-dev-d0f326b93

## Your question

I set `pair_whitelist` to `[".*/USDT:USDT"]` and `pairlists.method` to `"StaticPairList"` in the config file, without freqai. So while the strategy is running, if the exchange launches some new pairs, can I trade these new pairs without doing anything? If not, what do I need to do to be able to trade all futures pairs supported on the exchange in real time?
closed
2025-03-17T15:39:07Z
2025-03-17T16:48:17Z
https://github.com/freqtrade/freqtrade/issues/11523
[ "Question" ]
Chen-Shuai-CS
1
remsky/Kokoro-FastAPI
fastapi
43
GPU image does not use GPU
I am using the following image: ghcr.io/remsky/kokoro-fastapi-gpu:v0.0.5post1, with the unmodified `docker-compose.yml` from the repo.

```
kokoro-tts-1  | INFO:     Started server process [1]
kokoro-tts-1  | INFO:     Waiting for application startup.
kokoro-tts-1  | 11:30:33 AM | INFO | Loading TTS model and voice packs...
kokoro-tts-1  | 11:30:33 AM | INFO | CUDA available: False
kokoro-tts-1  | 11:30:33 AM | INFO | Initializing model on cpu
```

The image runs as `appuser` instead of root, and on Linux you need to be in the proper group to use the NVIDIA driver (even when using nvidia-container-toolkit).

```
devilan@darkstar:~/git/Kokoro-FastAPI (master)
$ docker exec -it b5ed10b82d7f /bin/bash
appuser@b5ed10b82d7f:/app$ nvidia-smi
Failed to initialize NVML: Insufficient Permissions
```

If this app is not meant to run as root, you should at least point this out clearly in the README. There are some methods to resolve this problem, but each of them compromises the system a bit. You can `chmod 666 /dev/nvidia*` so there is no access restriction on the device, but not everyone would like to do so.
closed
2025-01-13T12:12:31Z
2025-01-13T13:21:07Z
https://github.com/remsky/Kokoro-FastAPI/issues/43
[]
DevilaN
2
aws/aws-sdk-pandas
pandas
2,399
`wr.dynamodb.read_items` gives a pyarrow error
### Describe the bug

I'm not quite sure where this problem is arising, but I am just attempting to read 10 rows from a database table. It looks like the error is thrown before my query is even run: from trying to resolve the table metadata itself?

```py
Traceback (most recent call last):
  File "/home/louis/dev/testing/wrangler/wrangler_demo.py", line 3, in <module>
    items = wr.dynamodb.read_items(
  File "/home/louis/miniconda3/envs/wr310/lib/python3.10/site-packages/awswrangler/_utils.py", line 174, in inner
    return func(*args, **kwargs)
  File "/home/louis/miniconda3/envs/wr310/lib/python3.10/site-packages/awswrangler/dynamodb/_read.py", line 635, in read_items
    return _read_items(
  File "/home/louis/miniconda3/envs/wr310/lib/python3.10/site-packages/awswrangler/dynamodb/_read.py", line 384, in _read_items
    return _read_items_scan(
  File "/home/louis/miniconda3/envs/wr310/lib/python3.10/site-packages/awswrangler/dynamodb/_read.py", line 341, in _read_items_scan
    return _utils.table_refs_to_df(items, arrow_kwargs)
  File "/home/louis/miniconda3/envs/wr310/lib/python3.10/site-packages/awswrangler/_distributed.py", line 105, in wrapper
    return cls.dispatch_func(func)(*args, **kw)
  File "/home/louis/miniconda3/envs/wr310/lib/python3.10/site-packages/awswrangler/_utils.py", line 882, in table_refs_to_df
    return _table_to_df(pa.concat_tables(tables, promote=True), kwargs=kwargs)
  File "pyarrow/table.pxi", line 5371, in pyarrow.lib.concat_tables
  File "pyarrow/error.pxi", line 144, in pyarrow.lib.pyarrow_internal_check_status
  File "pyarrow/error.pxi", line 100, in pyarrow.lib.check_status
pyarrow.lib.ArrowInvalid: Unable to merge: Field user_id has incompatible types: decimal128(4, 0) vs decimal128(6, 0
```

### How to Reproduce

```py
import awswrangler as wr

items = wr.dynamodb.read_items(
    table_name="my-table",
    max_items_evaluated=10,  # limit the number of items to 10 for testing
    columns=["foo_id", "user_id", "bar_id"],  # specify the columns to read
)
print(items)
```

### Expected behavior

The columns in this query are all N (Number) and should be integer dtype. I can't even get it to run, though. I looked at what the error was implying: it seems to refer to the arrow decimal128 type's precision (a scale of 0 means there are no digits after the decimal point), and my interpretation is that it indicates there are numbers returned at different orders of magnitude (powers of 10)?

I just created a conda environment to test this library out as an alternative to boto3 client access to DynamoDB. Happy to try any suggestions to fix it.

### Your project

_No response_

### Screenshots

_No response_

### OS

Linux

### Python version

3.10

### AWS SDK for pandas version

3.2.1

### Additional context

```sh
$ pip list
Package           Version
----------------- -------
awswrangler       3.2.1
boto3             1.28.7
botocore          1.31.7
jmespath          1.0.1
numpy             1.25.1
packaging         23.1
pandas            2.0.3
pip               23.1.2
pyarrow           12.0.1
python-dateutil   2.8.2
pytz              2023.3
s3transfer        0.6.1
setuptools        67.8.0
six               1.16.0
typing_extensions 4.7.1
tzdata            2023.3
urllib3           1.26.16
wheel             0.38.4
```
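As an illustration of that reading of the error (not awswrangler internals; the helper name is made up): DynamoDB numbers arrive as Python `Decimal`s, and a decimal type's precision is just its count of significant digits, so values at different orders of magnitude infer different `decimal128(p, 0)` types:

```python
from decimal import Decimal

def inferred_precision(n: int) -> int:
    """Number of digits in the value, i.e. the precision p that a
    decimal128(p, 0) type would need to hold it (scale 0 = integer)."""
    return len(Decimal(n).as_tuple().digits)

# user_id values with 4 vs 6 digits would infer decimal128(4, 0) vs
# decimal128(6, 0), which concat refuses to merge without type promotion.
```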
closed
2023-07-20T16:24:28Z
2023-07-21T14:23:44Z
https://github.com/aws/aws-sdk-pandas/issues/2399
[ "bug" ]
lmmx
1
apache/airflow
automation
47,858
Toggle to exclude removed tasks from the grid is not available
### Apache Airflow version

2.10.5

### If "Other Airflow 2 version" selected, which one?

_No response_

### What happened?

As per the [AIP-65 doc](https://cwiki.apache.org/confluence/display/AIRFLOW/AIP-65%3A+Improve+DAG+history+in+UI), we should be able to toggle the Grid view to exclude removed tasks. As per the current implementation, I do not see that.

### What you think should happen instead?

_No response_

### How to reproduce

1. Execute DAG runs
2. Remove tasks
3. Check the Grid

### Operating System

Linux

### Versions of Apache Airflow Providers

_No response_

### Deployment

Microsoft ADF Managed Airflow

### Deployment details

_No response_

### Anything else?

_No response_

### Are you willing to submit PR?

- [ ] Yes I am willing to submit a PR!

### Code of Conduct

- [x] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
open
2025-03-17T11:05:58Z
2025-03-17T11:38:14Z
https://github.com/apache/airflow/issues/47858
[ "kind:bug", "priority:medium", "area:core", "area:UI", "affected_version:3.0.0beta" ]
vatsrahul1001
2
litestar-org/litestar
api
3,812
Bug: new docs theme makes code hard to read
### Description

Link: https://docs.litestar.dev/3-dev/usage/dto/1-abstract-dto.html

<img width="1352" alt="Снимок экрана 2024-10-16 в 10 53 20" src="https://github.com/user-attachments/assets/69db6f46-0349-45d7-ac2f-e6e08d2c07ed">

Right now it is very hard to read the code, since it is yellow on yellow. For some people who have problems recognising yellow colours, it might even be impossible to read these examples. Please consider adding a more contrasting theme, or keeping the old one.

### URL to code causing the issue

_No response_

### MCVE

_No response_

### Steps to reproduce

_No response_

### Screenshots

_No response_

### Logs

_No response_

### Litestar Version

main

### Platform

- [ ] Linux
- [ ] Mac
- [ ] Windows
- [ ] Other (Please specify in the description above)
open
2024-10-16T07:56:18Z
2025-03-20T15:55:00Z
https://github.com/litestar-org/litestar/issues/3812
[ "Bug :bug:", "Documentation :books:" ]
sobolevn
2
tensorly/tensorly
numpy
27
Dangerous dependency handling during testing.
There is no guarantee that the dependencies used during testing will match the dependencies the project actually needs. Currently we test using a conda environment that gets set up in the following manner:

```
conda create -q -n test-environment python=$TRAVIS_PYTHON_VERSION numpy scipy
```

But there is no guarantee that the versions used by conda will match the versions in requirements.txt.
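One generic way to catch such drift (a sketch, not part of tensorly's CI; the function name is made up) is to compare the installed versions against pinned `pkg==x.y.z` lines at test time, so CI fails loudly on a mismatch:

```python
from importlib import metadata

def check_requirements(lines):
    """Return (name, wanted, installed) for every pinned requirement
    whose installed version differs (installed is None if missing)."""
    mismatches = []
    for line in lines:
        line = line.strip()
        if not line or line.startswith("#") or "==" not in line:
            continue  # skip comments, blanks, and unpinned requirements
        name, wanted = line.split("==", 1)
        try:
            installed = metadata.version(name)
        except metadata.PackageNotFoundError:
            installed = None
        if installed != wanted:
            mismatches.append((name, wanted, installed))
    return mismatches
```

A test-suite setup step could call this on the contents of requirements.txt and fail if the list is non-empty.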
closed
2018-01-13T03:53:25Z
2018-01-24T22:40:04Z
https://github.com/tensorly/tensorly/issues/27
[]
jesuscast
1
openapi-generators/openapi-python-client
rest-api
288
Support non-schema components and references (e.g. parameters)
**Describe the bug**
When building the client with a path parameter in the yaml file that is passed through a reference, it does not consider the parameter, and the generated client does not have those referenced parameters.

**To Reproduce**
Steps to reproduce the behavior:
1. Take any OpenAPI sample yaml file that has path parameters in it.
2. Pass a parameter as a reference, for example:
   ```yaml
   parameters:
     - $ref: '#/components/parameters/sampleparam'
   ```
3. Generate the client using the command `openapi-python-client generate --path <yaml_path>`

**Expected behavior**
It should consider path parameters passed through a reference the same as the parameters it considers when they are passed the standard way.

**OpenAPI Spec File**
Any OpenAPI spec file could be used.

**Desktop (please complete the following information):**
- OS: macOS 10.15.6
- Python Version: 3.7.3
- openapi-python-client 0.7.3

**Additional context**
It does not raise any error while building the client, but it does not have the referenced parameters in the generated client.
closed
2021-01-07T07:53:39Z
2022-08-13T17:38:49Z
https://github.com/openapi-generators/openapi-python-client/issues/288
[ "✨ enhancement" ]
Aniketghumed
12
dmlc/gluon-cv
computer-vision
1,657
feature vector size does not change as num-segments increase
I'm extracting video features using:

```
python feat_extract.py --data-list video_list.txt --model i3d_resnet50_v1_kinetics400 --save-dir feats --num-segments 10
```

Changing the `num-segments` parameter value does not change the dimensionality of the output feature. Both `--num-segments 1` and `--num-segments 10` result in `1 x 2048` feature dims.

Could someone please verify that this is the expected behavior? I assumed that `--num-segments 10` would divide the input video into 10 shorter clips and extract the features separately for each clip, resulting in shape `10 x 2048`.

Here is the line that I'm referring to: https://github.com/dmlc/gluon-cv/blob/ab03ca04c588342be5cd659c3f96011c0146ac4f/scripts/action-recognition/feat_extract.py#L199
closed
2021-05-10T18:50:57Z
2021-05-17T13:57:08Z
https://github.com/dmlc/gluon-cv/issues/1657
[]
R2D2oid
1
langmanus/langmanus
automation
33
Tests pass, but after starting the server, accessing the page returns 404
Running `uv run main.py` works fine, but after starting the server with `uv run server.py`, visiting xxx:8000 in a browser returns 404 Not Found. The service is deployed on a remote server and accessed from my local machine.
closed
2025-03-18T15:53:54Z
2025-03-19T03:44:08Z
https://github.com/langmanus/langmanus/issues/33
[]
xiaobai3310
2
feder-cr/Jobs_Applier_AI_Agent_AIHawk
automation
664
Ignore
Ignore, very sorry
closed
2024-10-29T17:04:55Z
2024-10-29T22:44:38Z
https://github.com/feder-cr/Jobs_Applier_AI_Agent_AIHawk/issues/664
[ "help wanted" ]
Alkhalid3
0
python-gino/gino
sqlalchemy
40
GINO query methods should accept raw SQL
So that the user could get model objects from raw SQL. For example:

```python
users = await db.text('SELECT * FROM users WHERE id > :num').gino.model(User).return_model(True).all(num=28, bind=db.bind)
```
closed
2017-08-30T02:28:27Z
2017-08-30T03:16:49Z
https://github.com/python-gino/gino/issues/40
[ "help wanted", "task" ]
fantix
1
luispedro/mahotas
numpy
61
mh.labeled.bbox strange behavior
I encounter the following strange behavior, where mh.labeled.bbox is all zeros:

```python
aaa = array([[[2256,  402,  402],
        [2256,  402,  402],
        [2256,  402,  402]],
       [[2256,  402,  402],
        [2256,  402,  402],
        [2256,  402,  402]],
       [[2256,  402,  402],
        [2256,  402,  402],
        [2256,  402,  402]]], dtype=uint32)

mh.labeled.bbox(aaa)
array([[0, 0, 0, 0, 0, 0],
       [0, 0, 0, 0, 0, 0],
       [0, 0, 0, 0, 0, 0],
       ...,
       [0, 0, 0, 0, 0, 0],
       [0, 0, 0, 0, 0, 0],
       [0, 0, 0, 0, 0, 0]])

mh.labeled.bbox(aaa).shape
(2257, 6)

mh.bbox(aaa == 2256)  # but this works
array([0, 3, 0, 3, 0, 1])
```
closed
2015-06-02T18:18:48Z
2015-06-04T07:59:26Z
https://github.com/luispedro/mahotas/issues/61
[]
haehn
3
MycroftAI/mycroft-core
nlp
2,889
Adding last spoken sentence
I made an assistant using AIML and have now switched to Mycroft. However, one feature I miss is the ability to re-speak what was said last. This is a very useful feature, especially when one is using headphones or is busy doing something else and missed what was said. I used a variable in the txt2wav.py file called prevline which stores the last sentence.
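The prevline idea can be sketched as a tiny cache (illustrative only, not Mycroft code; the class and method names are made up): remember the last sentence sent to TTS so a "say that again" intent can replay it.

```python
class SpeechCache:
    """Remember the last spoken sentence so it can be repeated on request."""

    def __init__(self):
        self._last = None

    def record(self, sentence: str) -> None:
        # Called whenever a sentence is handed to the TTS engine.
        self._last = sentence

    def repeat(self) -> str:
        # Called by a hypothetical "repeat" intent handler.
        if self._last is None:
            return "I haven't said anything yet."
        return self._last
```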
closed
2021-04-28T20:01:25Z
2021-04-28T23:13:07Z
https://github.com/MycroftAI/mycroft-core/issues/2889
[ "enhancement" ]
hanzala123
6
2noise/ChatTTS
python
237
[Share] The ChatTTS Forge project
Sharing my project, which implements most of the API needs: https://github.com/lenML/ChatTTS-Forge. I also experimentally wrote an SSML syntax to support customized generation for long texts, plus prompt injection to steer the generation style. You can try it online on Hugging Face (it's very fast): https://huggingface.co/spaces/lenML/ChatTTS-Forge. PRs and issues welcome!
closed
2024-06-03T16:24:09Z
2024-06-25T11:38:13Z
https://github.com/2noise/ChatTTS/issues/237
[ "ad" ]
zhzLuke96
4
521xueweihan/HelloGitHub
python
2,468
[Self-recommended project] - Smalltalk
## Recommended project

- Project URL: https://github.com/tinystruct/smalltalk
- Category: Java
- Project title: Smalltalk
- Project description: smalltalk is a sample project based on the @tinystruct framework. It supports the development of both C/S applications and B/S web applications, and allows you to interact with the ChatGPT language model developed by OpenAI through a command-line interface (CLI) or a web interface.
- Highlights: It allows you to interact with the ChatGPT language model developed by OpenAI through a command-line interface (CLI) or a web interface.
- Sample code: (optional)
- Screenshot: (optional) gif/png/jpg
  <img src="https://raw.githubusercontent.com/tinystruct/smalltalk/master/screenshot.png" />
- Follow-up update plan: continuous updates...
closed
2023-01-13T13:33:52Z
2023-01-21T23:06:26Z
https://github.com/521xueweihan/HelloGitHub/issues/2468
[]
m0ver
1
mwaskom/seaborn
pandas
3,387
DOC: warning-text of old documentation mentions link to new version but there is none
At https://seaborn.pydata.org/archive/0.11/generated/seaborn.scatterplot.html there is supposed to be a clickable link in the red box, but I can't find anything. I believe the link should point to https://seaborn.pydata.org/generated/seaborn.scatterplot.html
closed
2023-06-14T17:25:13Z
2023-09-29T10:52:28Z
https://github.com/mwaskom/seaborn/issues/3387
[ "docs" ]
julian-goettingen
6
ScrapeGraphAI/Scrapegraph-ai
machine-learning
214
OMP: Error #15: Initializing libiomp5md.dll, but found libomp140.x86_64.dll already initialized
**Describe the bug**
OMP: Error #15: Initializing libiomp5md.dll, but found libomp140.x86_64.dll already initialized.
OMP: Hint This means that multiple copies of the OpenMP runtime have been linked into the program. That is dangerous, since it can degrade performance or cause incorrect results. The best thing to do is to ensure that only a single OpenMP runtime is linked into the process, e.g. by avoiding static linking of the OpenMP runtime in any library. As an unsafe, unsupported, undocumented workaround you can set the environment variable KMP_DUPLICATE_LIB_OK=TRUE to allow the program to continue to execute, but that may cause crashes or silently produce incorrect results. For more information, please see http://www.intel.com/software/products/support/.

**To Reproduce**
Steps to reproduce the behavior: target various websites in the following script, for example this one: https://github.com/VinciGit00/Scrapegraph-ai/releases

```python
from scrapegraphai.graphs import SmartScraperGraph
from scrapegraphai.utils import prettify_exec_info

graph_config = {
    "llm": {
        "model": "ollama/mistral",
        "temperature": 1,
        "format": "json",
        "model_tokens": 2000,
        "base_url": "http://localhost:11434",
    },
    "embeddings": {
        "model": "ollama/nomic-embed-text",
        "temperature": 0,
        "base_url": "http://localhost:11434",
    }
}

smart_scraper_graph = SmartScraperGraph(
    prompt="List me all the news with their description.",
    source="https://github.com/VinciGit00/Scrapegraph-ai/releases",
    config=graph_config
)

result = smart_scraper_graph.run()
print(result)
```

**Expected behavior**
A list of news in JSON format.

**Desktop:**
- OS: Windows 10
closed
2024-05-10T23:49:21Z
2024-08-08T08:34:51Z
https://github.com/ScrapeGraphAI/Scrapegraph-ai/issues/214
[ "bug" ]
elmoBG8
11
miguelgrinberg/flasky
flask
317
Suggestion: ENV setup for Chapter 7 Unittest
For Chapter 7, just a suggestion to include a note about setting FLASK_APP to flasky.py before the unit-test output, since it has always been hello.py up to this point, and one might easily forget that was set and wonder why the `flask test` command throws an error.

Include this:

```
(venv) $ set FLASK_APP=flasky.py
```

Before this output:

```
(venv) $ flask test
test_app_exists (test_basics.BasicsTestCase) ... ok
test_app_is_testing (test_basics.BasicsTestCase) ... ok

----------------------------------------------------------------------
Ran 2 tests in 0.001s

OK
```

Thank you.
closed
2017-11-23T21:57:18Z
2017-12-10T19:58:52Z
https://github.com/miguelgrinberg/flasky/issues/317
[ "bug" ]
ericchou1
2
coqui-ai/TTS
pytorch
3,067
[Bug] tts_to_file gives TypeError: Invalid file: None
### Describe the bug

When using the xtts-1 model on Windows (Python 3.11.6), every time I run the `tts_to_file` function it gives the error `TypeError: Invalid file: None`.

### To Reproduce

On Windows with Python 3.11.6, with torch, torchaudio (not sure if needed, but just to be sure) and TTS installed, run this snippet:

```python
import torch
from TTS.api import TTS

tts = TTS("tts_models/multilingual/multi-dataset/xtts_v1").to("cuda" if torch.cuda.is_available() else "cpu")
# Any combination of parameters gives the same error.
tts.tts_to_file("Hello, world!", language="en")
# Expected error: "TypeError: Invalid file: None"
```

### Expected behavior

The audio output should be written to output.wav, or the specified file name.

### Logs

_No response_

### Environment

```shell
- 🐸TTS Version: 0.17.8
- PyTorch Version: 2.1.0+cpu
- Python Version: 3.11.6
- OS: Windows 11
- CUDA/cuDNN version: null
- GPU models and configuration: AMD Ryzen 7 5700G with Radeon Graphics
- How you installed PyTorch: pip on a virtual environment
```

### Additional context

_No response_
closed
2023-10-13T14:15:20Z
2023-11-10T05:34:00Z
https://github.com/coqui-ai/TTS/issues/3067
[ "bug" ]
perrylets
9
pallets/flask
python
5,056
make blinker a required dependency
It's now part of the Pallets-Eco community, https://github.com/pallets-eco. Making it required will simplify Flask's support, it's a bit of a hack right now that typing doesn't like.
closed
2023-04-12T21:55:49Z
2023-04-28T00:05:53Z
https://github.com/pallets/flask/issues/5056
[]
davidism
0
cobrateam/splinter
automation
751
Chrome: screenshot_as_png() Not callable from browser or browser.driver
Hello, I was interested in being able to directly call [screenshot_as_png()](https://github.com/cobrateam/splinter/blob/9a79baf73f10d79e9b3d7695745c937154018874/splinter/driver/webdriver/__init__.py#L978) from the browser object instead of browser.screenshot, which would require me to add an extra IO step of reading the image. I'm using the Chrome driver and getting the error:

```
AttributeError: 'WebDriver' object has no attribute 'screenshot_as_png'
```

But browser.screenshot('filename') seems to work fine; am I missing something? I even tried `browser.element_class.get_screenshot_as_png()`, because I believe `.screenshot()` depends on `.screenshot_as_png()`, if I'm not mistaken.
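For what it's worth, a hedged sketch of the in-memory alternative: `screenshot_as_png` lives on splinter *elements*, but the underlying Selenium WebDriver (exposed as `browser.driver` in splinter's WebDriver-backed browsers) has a `get_screenshot_as_png()` method that returns raw PNG bytes, so a small wrapper (the helper name here is made up) avoids the temp-file round trip:

```python
def screenshot_bytes(browser):
    """Return a full-page screenshot as raw PNG bytes.

    Assumes `browser` is a splinter WebDriver-backed browser, so that
    `browser.driver` is the underlying Selenium WebDriver, whose
    `get_screenshot_as_png()` returns the image as bytes directly.
    """
    return browser.driver.get_screenshot_as_png()
```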
closed
2019-12-23T15:02:54Z
2020-01-11T06:45:54Z
https://github.com/cobrateam/splinter/issues/751
[ "question" ]
gtamba
2
strawberry-graphql/strawberry
django
3,479
Ability to disable auto-camelcasing per field
Let's say I want to keep auto-camelcasing enabled for my whole service _except_ for a couple of fields, e.g. maybe I have a translations service and want to write a schema/query like this:

```graphql
query {
  getTranslations(string: "Hello world") {
    en_US
    fr_FR
  }
}
```

Ideally, I could write resolvers like this:

```python
@strawberry.type
class Translations:
    @strawberry.field(disable_auto_camelcase=True)
    def en_US(self) -> str: ...

    @strawberry.field(disable_auto_camelcase=True)
    def fr_FR(self) -> str: ...
```

or maybe even for a whole type?

```python
@strawberry.type(disable_auto_camelcase=True)
class Translations:
    @strawberry.field
    def en_US(self) -> str: ...

    @strawberry.field
    def fr_FR(self) -> str: ...
```

Thanks!

## Feature Request Type

- [x] Alteration (enhancement/optimization) of existing feature(s)
closed
2024-04-30T15:53:02Z
2025-03-20T15:56:42Z
https://github.com/strawberry-graphql/strawberry/issues/3479
[]
magicmark
3
blacklanternsecurity/bbot
automation
1,601
How to add multiple api keys per service?
**Describe the bug**
Most of the modules (services) allow you to sign up for a free account with a limited quota per API token. How can I insert more than one token? Does the tool know how to read several tokens for one module and rotate them, so that they don't run out quickly and impact the results of a scan? Something like that:

```yaml
modules:
  shodan_dns:
    api_keys:
      - 'api_key_1'
      - 'api_key_2'
```
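As an illustration of the requested behaviour (a generic sketch, not bbot internals; the `KeyRotator` name is made up), keys could be handed out round-robin so no single free-tier quota is exhausted first:

```python
from itertools import cycle

class KeyRotator:
    """Hand out API keys round-robin; purely illustrative."""

    def __init__(self, keys):
        if not keys:
            raise ValueError("at least one API key is required")
        self._cycle = cycle(keys)

    def next_key(self) -> str:
        # Each call returns the next key in the list, wrapping around.
        return next(self._cycle)
```

A module would call `next_key()` before each API request instead of reading a single configured key.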
closed
2024-07-29T16:45:58Z
2024-10-02T19:00:03Z
https://github.com/blacklanternsecurity/bbot/issues/1601
[ "enhancement", "high-priority" ]
DrorDvash
7
robinhood/faust
asyncio
68
Command to delete state for old app version
The app version flag makes it easy to start applications from scratch. However this results in a bunch of state in rocksdb/kafka for older app versions that stick around for a long time. We should try to fix this.
closed
2018-02-22T23:01:43Z
2018-11-27T22:05:37Z
https://github.com/robinhood/faust/issues/68
[ "Issue Type: Enhancement" ]
vineetgoel
0
matplotlib/mplfinance
matplotlib
134
Customizing background and grid
Hello! I'm just getting started with this library, and I'm surprised at how easy to use it is while looking beautiful and modern. I'm trying to add some more customization to my chart, but I'm struggling with some parameters: I would like to change the color of the background, and I would like to remove the grid from my chart, but I don't know where to set those values. In addition, with the old mpl_finance, I was able to set an `alpha` parameter; can I do something similar with mplfinance? Thanks in advance!
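Not an mplfinance-specific answer, but since mplfinance draws on matplotlib Axes underneath, here is a minimal matplotlib sketch of the three knobs being asked about (background colour, grid, alpha); the colour and alpha values are purely illustrative:

```python
import matplotlib
matplotlib.use("Agg")  # headless backend so this runs without a display
import matplotlib.pyplot as plt

fig, ax = plt.subplots()
ax.set_facecolor("#182030")   # assumed dark background colour, purely illustrative
ax.grid(False)                # turn the grid off
fig.patch.set_alpha(0.5)      # figure-level transparency, akin to mpl_finance's `alpha`
```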
closed
2020-05-16T13:12:58Z
2020-05-17T10:54:07Z
https://github.com/matplotlib/mplfinance/issues/134
[ "question" ]
Sile25
3
OpenInterpreter/open-interpreter
python
1,453
How to use with gemini api key
### Is your feature request related to a problem? Please describe.

_No response_

### Describe the solution you'd like

How can I use the API key in https://aistudio.google.com/ ?

### Describe alternatives you've considered

_No response_

### Additional context

_No response_
closed
2024-09-12T14:10:44Z
2025-02-15T13:33:13Z
https://github.com/OpenInterpreter/open-interpreter/issues/1453
[]
hyroxtvvv
3
huggingface/datasets
numpy
7,079
HfHubHTTPError: 500 Server Error: Internal Server Error for url:
### Describe the bug

Newly uploaded datasets, since yesterday, yield an error. Old datasets work fine. It seems the datasets API server returns a 500.

I'm getting the same error when I invoke `load_dataset` with my dataset. There is a long discussion about it here, but I'm not sure anyone from Hugging Face has seen it: https://discuss.huggingface.co/t/hfhubhttperror-500-server-error-internal-server-error-for-url/99580/1

### Steps to reproduce the bug

This API URL: https://huggingface.co/api/datasets/neoneye/simon-arc-shape-v4-rev3 responds with:

```
{"error":"Internal Error - We're working hard to fix this as soon as possible!"}
```

### Expected behavior

Return no error with newer datasets. With older datasets I can load the datasets fine.

### Environment info

# Browser

When I access the API in the browser: https://huggingface.co/api/datasets/neoneye/simon-arc-shape-v4-rev3

```
{"error":"Internal Error - We're working hard to fix this as soon as possible!"}
```

### Request headers

```
Accept                     text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,*/*;q=0.8
Accept-Encoding            gzip, deflate, br, zstd
Accept-Language            en-US,en;q=0.5
Connection                 keep-alive
Host                       huggingface.co
Priority                   u=1
Sec-Fetch-Dest             document
Sec-Fetch-Mode             navigate
Sec-Fetch-Site             cross-site
Upgrade-Insecure-Requests  1
User-Agent                 Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:127.0) Gecko/20100101 Firefox/127.0
```

### Response headers

```
X-Firefox-Spdy                  h2
access-control-allow-origin     https://huggingface.co
access-control-expose-headers   X-Repo-Commit,X-Request-Id,X-Error-Code,X-Error-Message,X-Total-Count,ETag,Link,Accept-Ranges,Content-Range
content-length                  80
content-type                    application/json; charset=utf-8
cross-origin-opener-policy      same-origin
date                            Fri, 26 Jul 2024 19:09:45 GMT
etag                            W/"50-9qrwU+BNI4SD0Fe32p/nofkmv0c"
referrer-policy                 strict-origin-when-cross-origin
vary                            Origin
via                             1.1 1624c79cd07e6098196697a6a7907e4a.cloudfront.net (CloudFront)
x-amz-cf-id                     SP8E7n5qRaP6i9c9G83dNAiOzJBU4GXSrDRAcVNTomY895K35H0nJQ==
x-amz-cf-pop                    CPH50-C1
x-cache                         Error from cloudfront
x-error-message                 Internal Error - We're working hard to fix this as soon as possible!
x-powered-by                    huggingface-moon
x-request-id                    Root=1-66a3f479-026417465ef42f49349fdca1
```
closed
2024-07-27T08:21:03Z
2024-09-20T13:26:25Z
https://github.com/huggingface/datasets/issues/7079
[]
neoneye
17
dsdanielpark/Bard-API
nlp
206
Response Error - Unable to get response
I executed the following code to use the Bard API:

```python
from bardapi import Bard, BardCookies

cookie_dict = {
    "__Secure-1PSID": "xxxxxxxxxxxxxxxxxxxxxx",
    "__Secure-1PAPISID": "xxxxxxxxxxxxxxxxxxxxx",
    "__Secure-1PSIDCC": "xxxxxxxxxxxxxxxxxxxxx",
}

bard = BardCookies(cookie_dict=cookie_dict)
result = bard.get_answer("how to use ChatGPT?")
print(result['content'])
```

I am still getting a response error:

```
Response Error: b')]}\'\n\n38\n[["wrb.fr",null,null,null,null,[9]]]\n56\n[["di",367],["af.httprm",367,"2847306205436971809",7]]\n25\n[["e",4,null,null,131]]\n'. Unable to get response. Please double-check the cookie values and verify your network environment or google account.
```
closed
2023-10-10T06:25:19Z
2024-01-18T15:48:43Z
https://github.com/dsdanielpark/Bard-API/issues/206
[]
MananAg-1784
4
quokkaproject/quokka
flask
69
Error upload image Content -> Image
Admin -> Content -> Image Error: Failed to create model. decoder jpeg not available
closed
2013-10-23T09:48:06Z
2015-07-16T02:56:41Z
https://github.com/quokkaproject/quokka/issues/69
[]
jniltinho
6
man-group/arctic
pandas
69
tickstore queries are slow
Arctic says that it can query millions of rows per second per client, but when I tried to use it in our team, I found it only does thousands of rows per second. Here is the code. Has anyone had the same problem, or am I using it the wrong way?

```python
@property
def arctic(self):
    if not self._arctic:
        log.info("init arctic")
        mongo_conn = MongoDB()
        self._arctic = Arctic(mongo_host=mongo_conn.client)
        library = self._arctic.list_libraries()
        if self.tick_db not in library:
            self._arctic.initialize_library(self.tick_db, lib_type=arctic.TICK_STORE)
        if self.bar_db not in library:
            self._arctic.initialize_library(self.bar_db, lib_type=arctic.TICK_STORE)
    return self._arctic

...
# res is a dict of tick data
index = self.int_to_date(tick_time)
data = pd.DataFrame(res, [index])
self.arctic[self.tick_db].write(symbol, data)
...
```

```python
>>> now = time.time(); ac['tick'].read('IF1601', date_range=dr); print(time.time() - now)
```

Output:

```
[4021 rows x 26 columns]
3.56284999847
```

Thanks.
closed
2015-12-24T01:47:22Z
2015-12-31T08:28:24Z
https://github.com/man-group/arctic/issues/69
[]
zoe0316
5
paperless-ngx/paperless-ngx
django
7,969
[BUG] PAPERLESS_CONSUMER_POLLING doesn't seem to be taking effect.
### Description **ENVIRONMENT** I have paperless deployed as a set of containers on a server. My core volumes are mounted to the service via NFS and mapped to the containers via the docker-compose.yml file. ``` volumes: - /mnt/files/paperless-ngx:/usr/src/paperless/data - /mnt/files/paperless-ngx:/usr/src/paperless/media - ./export:/usr/src/paperless/export - /mnt/files/paperless-ngx/uploads:/usr/src/paperless/consume ``` I know that as my "consume folder" is mapped via NFS, I will need to enable `PAPERLESS_CONSUMER_POLLING=<num>`, which I have done in the docker-compose.env file. I can see these settings through the docker gui as well. (see attached screenshot) ``` PAPERLESS_URL=<url> PAPERLESS_TIME_ZONE=Etc/UTC PAPERLESS_OCR_LANGUAGE=eng PAPERLESS_SECRET_KEY=<secret key> PAPERLESS_CONSUMER_POLLING=30 ``` <img width="602" alt="2024-10-20_23-34-45" src="https://github.com/user-attachments/assets/fbb76034-cafa-4417-9750-40275b2d0391"> **ISSUE:** I don't believe the `PAPERLESS_CONSUMER_POLLING` varible is taking effect; on 2 counts: 1. Files dropped in to the mapped **Consume** folder are not being processed, within the expected timeframe. 2. If I restart the webserver, the files are being processed on startup 3. On startup, I'm seeing the following message in the logs: `[2024-10-20 22:28:34,134] [INFO] [paperless.management.consumer] Using inotify to watch directory for changes: /usr/src/paperless/consume` Based on the [Configuration Documentation](https://docs.paperless-ngx.com/configuration/#PAPERLESS_CONSUMER_POLLING), I would have expected `inotify` to be disabled when `PAPERLESS_CONSUMER_POLLING` is enabled. Please can you let me know if I'm doing something wrong or if this is a possible bug? This is an amazing product and I'd love to see this feature working as expected without me restarting the web server every time I want to process uploaded files. Many thanks in advance. 
Bavo ### Steps to reproduce explained above ### Webserver logs ```bash [2024-10-20 22:25:39,220] [INFO] [paperless.management.consumer] Using inotify to watch directory for changes: /usr/src/paperless/consume [2024-10-20 22:28:13,995] [INFO] [paperless.management.consumer] Received SIGINT, stopping inotify [2024-10-20 22:28:14,003] [DEBUG] [paperless.management.consumer] Consumer exiting. [2024-10-20 22:28:20,130] [INFO] [paperless.auth] Login failed for user `bavo4` from IP `31.51.121.161`. [2024-10-20 22:28:34,134] [INFO] [paperless.management.consumer] Using inotify to watch directory for changes: /usr/src/paperless/consume ``` ### Browser logs _No response_ ### Paperless-ngx version 2.12.1 ### Host OS Ubuntu 24.04 - Docker 25.0.2, build 29cf629 ### Installation method Docker - official image ### System status ```json { "pngx_version": "2.12.1", "server_os": "Linux-6.8.0-47-generic-x86_64-with-glibc2.36", "install_type": "docker", "storage": { "total": 11892816101376, "available": 2433494769664 }, "database": { "type": "postgresql", "url": "paperless", "status": "OK", "error": null, "migration_status": { "latest_migration": "paperless_mail.0011_remove_mailrule_assign_tag_squashed_0024_alter_mailrule_name_and_more", "unapplied_migrations": [] } }, "tasks": { "redis_url": "redis://broker:6379", "redis_status": "OK", "redis_error": null, "celery_status": "OK", "index_status": "OK", "index_last_modified": "2024-10-20T22:25:11.855280Z", "index_error": null, "classifier_status": "OK", "classifier_last_trained": null, "classifier_error": null } } ``` ### Browser _No response_ ### Configuration changes _No response_ ### Please confirm the following - [X] I believe this issue is a bug that affects all users of Paperless-ngx, not something specific to my installation. - [X] This issue is not about the OCR or archive creation of a specific file(s). Otherwise, please see above regarding OCR tools. 
- [X] I have already searched for relevant existing issues and discussions before opening this report.
- [X] I have updated the title field above with a concise description.
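The documented precedence can be illustrated with a small sketch; `consumer_backend` below is a hypothetical helper (not Paperless code) mirroring the rule that a positive `PAPERLESS_CONSUMER_POLLING` should select polling over inotify:

```python
def consumer_backend(env):
    # Mirrors the documented rule: a positive PAPERLESS_CONSUMER_POLLING
    # value selects filesystem polling; otherwise inotify is used.
    polling = int(env.get("PAPERLESS_CONSUMER_POLLING", "0"))
    return f"polling every {polling}s" if polling > 0 else "inotify"

print(consumer_backend({"PAPERLESS_CONSUMER_POLLING": "30"}))  # polling expected
print(consumer_backend({}))                                    # falls back to inotify
```

The log line reported above corresponds to the `inotify` branch being taken despite the variable being set, which is what suggests the setting is not reaching the consumer process.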
closed
2024-10-20T22:44:34Z
2024-10-20T22:56:54Z
https://github.com/paperless-ngx/paperless-ngx/issues/7969
[ "not a bug" ]
bavo14
0
huggingface/diffusers
deep-learning
10,741
FluxControlNetImg2ImgPipeline doesn't support generating more than one image
### Describe the bug

The FluxControlNetImg2ImgPipeline does not support generating more than one image. The error encountered is:

RuntimeError: Sizes of tensors must match except in dimension 1. Expected size 2 but got size 1 for tensor number 1 in the list.

I figured out that `control_mode` needs to be sent as a list of control_mode values, matching the requested number of images specified by the `num_images_per_prompt` parameter. As I see it, in the file pipeline_flux_controlnet_image_to_image.py, at line 818, the following code needs to be added:

```python
if control_mode is not None:
    if batch_size * num_images_per_prompt > 1:
        control_mode = [control_mode] * batch_size * num_images_per_prompt
    control_mode = torch.tensor(control_mode).to(device, dtype=torch.long)
    control_mode = control_mode.reshape([-1, 1])
```

Does this make sense? Would you like a PR for this fix?

### Reproduction

FluxControlNetImg2ImgPipeline with `num_images_per_prompt=2`

### System Info

diffusers = 0.32.2

### Who can help?

@sayakpaul @yiyixuxu
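The shape logic behind the proposed fix can be checked in isolation. A minimal sketch using plain lists in place of `torch.tensor`, assuming a scalar `control_mode` and `num_images_per_prompt=2`:

```python
batch_size, num_images_per_prompt = 1, 2
control_mode = 0  # a single scalar mode, as passed by the caller

# Broadcast the scalar to one entry per generated image, then shape it
# as a column (the reshape([-1, 1]) in the proposed patch), so each
# sample in the batch carries its own mode value.
modes = [control_mode] * (batch_size * num_images_per_prompt)
column = [[m] for m in modes]
print(column)  # one [mode] row per image
```

Without the broadcast, the mode tensor keeps a single row while the latents have `num_images_per_prompt` rows, which is the size mismatch the error message reports.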
open
2025-02-06T16:24:17Z
2025-03-09T15:02:48Z
https://github.com/huggingface/diffusers/issues/10741
[ "bug", "stale" ]
liorRabkin
1
sanic-org/sanic
asyncio
2,862
New websockets, handling disconnects
### Is there an existing issue for this?

- [X] I have searched the existing issues

### Is your feature request related to a problem? Please describe.

I'm using version 23.6.0, so it's the new websockets. I have followed the [guide](https://sanic.dev/en/guide/advanced/websockets.html#routing) to build something like this:

```python
async def handler(request, ws):
    while True:
        data = "hello!"
        try:
            await ws.send(data)
            data = await ws.recv()
        except Exception as e:
            print(e)
            break
    clean_up()

bp.add_websocket_route(handler, '/ws')
```

I discovered that when the client disconnects, no exceptions are thrown in the code above, making it difficult for me to clean things up. What's worse, execution of this handler somehow stops on a disconnect, and no subsequent code gets run. I read the source code and figured out the following mechanism to handle a disconnect:

```python
def ws_disconnect(my_mess, fut):
    clean_up(my_mess)

async def handler(request, ws):
    ws.connection_lost_waiter.add_done_callback(
        functools.partial(ws_disconnect, "my_mess"))
    ...
```

It works, as the WebsocketImplProtocol calls it upon disconnect. But I can't help thinking there should be a better mechanism for websocket cleanup. What am I supposed to do to handle disconnects?

### Describe the solution you'd like

Keep the handler running upon disconnect, and give it a chance to handle the exception or check a "connected" attribute.

### Additional context

_No response_
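The callback workaround described above can be exercised without Sanic at all; in this self-contained asyncio sketch, a plain future stands in for `connection_lost_waiter`:

```python
import asyncio
import functools

cleaned = []

def ws_disconnect(context, fut):
    # Runs once the "connection lost" future completes; cleanup goes here.
    cleaned.append(context)

async def main():
    waiter = asyncio.get_running_loop().create_future()
    waiter.add_done_callback(functools.partial(ws_disconnect, "session-42"))
    waiter.set_result(None)   # simulate the client disconnecting
    await asyncio.sleep(0)    # yield so the loop runs the done callback

asyncio.run(main())
print(cleaned)
```

`add_done_callback` schedules the callback on the event loop rather than invoking it inline, which is why a single extra `await` is enough for it to fire.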
closed
2023-11-30T14:50:39Z
2023-12-01T03:06:27Z
https://github.com/sanic-org/sanic/issues/2862
[ "feature request" ]
heshiming
2
deepspeedai/DeepSpeed
machine-learning
6,526
[REQUEST] parallelize zero_to_fp32.py to use multiple cpu-cores and threads
When https://github.com/microsoft/DeepSpeed/blob/c27483933d50a693fef9c48418d2664cf6a6a6f8/deepspeed/utils/zero_to_fp32.py was written 3 years ago, models were small and converted quickly. Now with 70B+ models the conversion can take hours. The original script uses a single CPU core.

Here is a possible implementation algorithm: multiple cores could be utilized by loading all shards into CPU memory and then firing off multiple threads, each recomposing a single layer. The user could specify how many cores to use, or by default all cores would be used, so that `n_threads == cores`. I think the total memory usage here would still be `2x model size * dtype`, just like in the original script.

Possible additional changes:

- Using `safetensors` would be a bonus, because then each tensor could be written separately and there is no need to wait for the whole model to be unsharded before writing a single torch tensor. This could also become an option for low-RAM nodes, where each layer is unsharded sequentially and total memory usage would be `1x model size * dtype` + `max layer size * dtype`, which for a large model would be a huge memory saving, at the cost of not parallelizing - or perhaps using just 1-2 threads, which would already speed things up.
- Switching to the universal checkpoint API would be another bonus, because the original is very clunky and difficult to understand/maintain.

cc: @tjruwase
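The per-layer threading idea above can be sketched with the standard library alone; shard contents here are plain lists standing in for tensors, and the `torch.cat` step is replaced by list concatenation:

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical stand-in for loaded ZeRO shards: each maps a parameter
# name to that rank's slice of the flattened tensor.
shards = [
    {"layer.0.weight": [1, 2], "layer.1.weight": [10]},
    {"layer.0.weight": [3, 4], "layer.1.weight": [20]},
]

def unshard(name):
    # Reassemble one parameter by concatenating its slice from every shard
    # (in the real script this would be a torch.cat over rank partitions).
    merged = []
    for shard in shards:
        merged.extend(shard[name])
    return name, merged

# One task per parameter; max_workers would default to the CPU core count.
with ThreadPoolExecutor(max_workers=4) as pool:
    state_dict = dict(pool.map(unshard, shards[0]))

print(state_dict["layer.0.weight"])
```

Since every worker only reads the shared shard dicts and writes its own output, no locking is needed, and peak memory stays at roughly the shards plus the assembled state dict, matching the `2x` estimate above.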
open
2024-09-11T22:52:42Z
2024-10-14T03:10:41Z
https://github.com/deepspeedai/DeepSpeed/issues/6526
[ "enhancement" ]
stas00
4
docarray/docarray
fastapi
1,137
Handle da inside da in stack mode
```python
from docarray import BaseDocument, DocumentArray
from docarray.typing import AnyTensor


class Image(BaseDocument):
    tensor: AnyTensor


class Video(BaseDocument):
    images: DocumentArray[Image]


da = DocumentArray[Video](
    [Video(images=DocumentArray[Image]([Image()]))]
)

da.stack()
da[0].images  # is not stacked, but should be
```

In this case `da.stack()` will not stack the tensor in `Image` because of the nested DocumentArray. This will probably be solved by introducing the concept of dimensions in DocumentArray.
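One possible shape for the fix is a recursive stack, sketched here with toy stand-in classes (`DA` and `Doc` are not DocArray types, just minimal mocks of the behaviour):

```python
class DA(list):
    """Toy stand-in for DocumentArray with a stack flag."""
    def __init__(self, items):
        super().__init__(items)
        self.stacked = False

    def stack(self):
        self.stacked = True


class Doc:
    def __init__(self, **fields):
        self.__dict__.update(fields)


def stack_recursive(da):
    # Stack this array, then descend into any nested DA fields of its docs.
    da.stack()
    for doc in da:
        for value in vars(doc).values():
            if isinstance(value, DA):
                stack_recursive(value)


video = Doc(images=DA([Doc(tensor=None)]))
da = DA([video])
stack_recursive(da)
print(da.stacked, da[0].images.stacked)
```

In the real library the recursion would inspect the document's schema rather than instance attributes, but the traversal order (outer array first, then each nested array) is the same.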
closed
2023-02-15T13:45:41Z
2023-02-21T01:37:05Z
https://github.com/docarray/docarray/issues/1137
[]
AnneYang720
0
onnx/onnx
pytorch
6,152
Missing type support in parser for various types (float16, bfloat16, ...)
# Bug Report

### Is the issue related to model conversion?

No.

### Describe the bug

The parser has only partial support for data type parsing: https://github.com/onnx/onnx/blob/093a8d335a66ea136eb1f16b3a1ce6237ee353ab/onnx/defs/parser.cc#L436

The missing types are:

```
TensorProto_DataType_FLOAT16
TensorProto_DataType_BFLOAT16
TensorProto_DataType_FLOAT8E4M3FN
TensorProto_DataType_FLOAT8E4M3FNUZ
TensorProto_DataType_FLOAT8E5M2
TensorProto_DataType_FLOAT8E5M2FNUZ
TensorProto_DataType_COMPLEX64
TensorProto_DataType_COMPLEX128
```

### System information

System independent.

### Reproduction instructions

Find a sample model and instructions for float16 below; note however that it would be great to support all types, not just float16. The error will be the same for the other types.

Sample model: [gemmfloat16.onnxtxt](https://github.com/user-attachments/files/15513562/gemmfloat16.onnxtxt.txt)

Sample instructions:

```
import onnx
from pathlib import Path

onnx.parser.parse_model(Path("gemmfloat16.onnxtxt.txt").read_text())
```

Result: `onnx.parser.ParseError: b'[ParseError at position (...)]\nError context: <float16[4, 4] weight = {...}, float16[4] bias = {...}>\nUnhandled type: %d10'`

(Note: The `.onnxtxt` format is not allowed to be uploaded, therefore the sample is `.onnxtxt.txt`.)

### Expected behavior

ONNX model is parsed successfully.
open
2024-05-31T12:50:30Z
2025-01-24T15:02:17Z
https://github.com/onnx/onnx/issues/6152
[ "bug", "module: parser", "contributions welcome" ]
TinaAMD
1
open-mmlab/mmdetection
pytorch
11,820
YOLOX input normalisation question
Greetings! https://github.com/open-mmlab/mmdetection/blob/cfd5d3a985b0249de009b67d04f37263e11cdf3d/configs/yolox/yolox_s_8xb8-300e_coco.py#L16 I cannot see any normalisation step for this model (or any YOLOX model, actually). Can you please point me to it? Best regards!
open
2024-06-30T20:16:36Z
2024-06-30T20:16:51Z
https://github.com/open-mmlab/mmdetection/issues/11820
[]
dmitrysarov
0
PaddlePaddle/ERNIE
nlp
171
Is the BERT used in the paper retrained by you, or the one released by Google?
This detail is not mentioned in the paper.
closed
2019-06-19T08:14:08Z
2019-06-21T08:08:06Z
https://github.com/PaddlePaddle/ERNIE/issues/171
[]
rainarch
2
scikit-learn-contrib/metric-learn
scikit-learn
272
Inconsistent behaviour of SDML when using skggm with a fixed seed
#### Description

I was using SDML_Supervised() for a subsequent 2D visualization with UMAP (similar to t-SNE) and got large differences in the results on every fit over the same data. Fixing the seed doesn't make a difference. I tracked the problem down to the call to quic() made when skggm is installed; reviewing their code, I found there is a fixed seed, but the results from that function still vary on every call.

Note: I am using the latest version of skggm; I will try to reproduce later with the version indicated in the documentation.

#### Steps/Code to Reproduce

```python
from metric_learn import SDML_Supervised
from sklearn.datasets import load_wine
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split
import numpy as np

wine = load_wine()
X, y = load_wine(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)
scaler = StandardScaler()
X_train = scaler.fit_transform(X_train)
X_test = scaler.transform(X_test)
SDML = SDML_Supervised(random_state=42)
X_transform = SDML.fit_transform(X_train, y_train)
print(np.sum(np.abs(X_transform - SDML.fit_transform(X_train, y_train))))
```

#### Expected Results

The two fit instances of SDML should produce the same result, so the printed difference should be zero.

#### Actual Results

Large numbers, on the order of 100 to 300.

#### Versions

Linux-5.0.0-37-generic-x86_64-with-Ubuntu-18.04-bionic
Python 3.6.9 (default, Nov 7 2019, 10:44:02) [GCC 8.3.0]
NumPy 1.18.1
SciPy 1.4.1
Scikit-Learn 0.22.1
Metric-Learn 0.5.0
Skggm 0.2.8
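A generic way to probe this kind of non-determinism is to reseed before every fit and compare the outputs; `noisy_fit` here is just an illustrative stand-in for the SDML call, using the stdlib RNG instead of NumPy's:

```python
import random

def is_deterministic(fit, x, n_runs=3, seed=42):
    # Re-seed the global RNG before every run; a solver that consumes
    # global random state the same way each run will give equal outputs,
    # while one with its own unseeded internal state will diverge.
    outputs = []
    for _ in range(n_runs):
        random.seed(seed)
        outputs.append(tuple(fit(x)))
    return len(set(outputs)) == 1

def noisy_fit(x):
    return [v + random.random() for v in x]

print(is_deterministic(noisy_fit, [1.0, 2.0]))
```

If SDML still diverges under this protocol, the randomness is coming from inside quic() (or from a state it holds across calls) rather than from the global seed, which matches the behaviour described above.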
closed
2020-01-13T09:38:45Z
2020-01-13T15:28:05Z
https://github.com/scikit-learn-contrib/metric-learn/issues/272
[]
grudloff
4
saulpw/visidata
pandas
1,499
Test fails in Guix build: file missing from distribution tarball?
**Steps to reproduce** 1. Check out my [updated Visidata 2.10 package for Guix](https://github.com/ryanprior/guix/blob/update-guix-2.10/gnu/packages/spreadsheet.scm#L86-L119) 2. Run `pre-inst-env guix build visidata` **Expected result** Package build should complete with no errors. **Actual result with screenshot** During the test phase, pytest errors out complaining that it can't find a necessary file: ``` self = PosixPath('/tmp/guix-build-visidata-2.10.drv-0/visidata-2.10/visidata/../sample_data/sample.tsv') name = '/tmp/guix-build-visidata-2.10.drv-0/visidata-2.10/visidata/../sample_data/sample.tsv' flags = 524288, mode = 438 def _opener(self, name, flags, mode=0o666): # A stub for the opener argument to built-in open() > return self._accessor.open(self, flags, mode) E FileNotFoundError: [Errno 2] No such file or directory: '/tmp/guix-build-visidata-2.10.drv-0/visidata-2.10/visidata/../sample_data/sample.tsv' /gnu/store/65i3nhcwmz0p8rqbg48gaavyky4g4hwk-python-3.9.9/lib/python3.9/pathlib.py:1120: FileNotFoundError ``` <details> <summary>View full package build output (733 lines)</summary> <pre> starting phase `set-SOURCE-DATE-EPOCH' phase `set-SOURCE-DATE-EPOCH' succeeded after 0.0 seconds starting phase `set-paths' environment variable `PATH' set to 
`/gnu/store/slsh0qjv5j68xda2bb6h8gsxwyi1j25a-python-wrapper-3.9.9/bin:/gnu/store/7frqm5ijy66f81hr8i1j6791k84lds9w-python-pytest-6.2.5/bin:/gnu/store/6pzbvnfvgdxfar4qhms0d81w9d6n1ylp-python-xlrd-2.0.1/bin:/gnu/store/g2ajyl8xk9aarxrgjbng2hkj3qm2v0z2-tar-1.34/bin:/gnu/store/iixwcv3k49ks1rf34pjgfzmzyhhgwng3-gzip-1.10/bin:/gnu/store/s3hl12jxz9ybs7nsy7kq7ybzz7qnzmsg-bzip2-1.0.8/bin:/gnu/store/c8isj4jq6knv0icfgr43di6q3nvdzkx7-xz-5.2.5/bin:/gnu/store/4ic6244i3ca4b4rxc2wnrgllsidyishv-file-5.39/bin:/gnu/store/ahmmvw21p11ik80lg1f953y7fd8bqkjm-diffutils-3.8/bin:/gnu/store/z39hnrwds1dgcbpfgj8dnv2cngjb2xbl-patch-2.7.6/bin:/gnu/store/39rsx3nl4c31952jybbjb8d6idr5hx7r-findutils-4.8.0/bin:/gnu/store/690qz3fg334dpwn3pn6k59n4wc943p2b-gawk-5.1.0/bin:/gnu/store/wxgv6i8g0p24q5gcyzd0yr07s8kn9680-sed-4.8/bin:/gnu/store/xjwp2hsd9256icjjybfrmznppjicywf6-grep-3.6/bin:/gnu/store/d251rfgc9nm2clzffzhgiipdvfvzkvwi-coreutils-8.32/bin:/gnu/store/55cbpsi18mahg131nmiya6km5b4mscfa-make-4.3/bin:/gnu/store/4y5m9lb8k3qkb1y9m02sw9w9a6hacd16-bash-minimal-5.1.8/bin:/gnu/store/s2pg5k98fl2g2szg9dykxyd9zl3xihv9-ld-wrapper-0/bin:/gnu/store/rc781v4k0drhaqn90xfwwpspki5x0bvf-binutils-2.37/bin:/gnu/store/069aq2v993kpc41yabp5b6vm4wb9jkhg-gcc-10.3.0/bin:/gnu/store/5h2w4qi9hk1qzzgi1w83220ydslinr4s-glibc-2.33/bin:/gnu/store/5h2w4qi9hk1qzzgi1w83220ydslinr4s-glibc-2.33/sbin:/gnu/store/65i3nhcwmz0p8rqbg48gaavyky4g4hwk-python-3.9.9/bin:/gnu/store/pwzh4npbxhm1rqrbg9lra99wx6sinkmf-python-charset-normalizer-2.0.11/bin' environment variable `GUIX_PYTHONPATH' set to 
`/gnu/store/7frqm5ijy66f81hr8i1j6791k84lds9w-python-pytest-6.2.5/lib/python3.9/site-packages:/gnu/store/xs8pxa4rr2zkb2hr5nhkxr7ijxxgmqna-python-dateutil-2.8.2/lib/python3.9/site-packages:/gnu/store/gjv07rwkais79cr0m2vy4ia2xb3irwx5-python-requests-2.27.1/lib/python3.9/site-packages:/gnu/store/pv2gvq3im4czlpdrh0pb7jjkkib19qvy-python-lxml-4.6.3/lib/python3.9/site-packages:/gnu/store/x2maq7qxliy26mzfq1nic7mpjmyx8r7j-python-openpyxl-3.0.9/lib/python3.9/site-packages:/gnu/store/6pzbvnfvgdxfar4qhms0d81w9d6n1ylp-python-xlrd-2.0.1/lib/python3.9/site-packages:/gnu/store/65i3nhcwmz0p8rqbg48gaavyky4g4hwk-python-3.9.9/lib/python3.9/site-packages:/gnu/store/rh5pmm5ralyl06pgfr83qlfxaav6svjr-python-wcwidth-0.1.8/lib/python3.9/site-packages:/gnu/store/hmyqhci2vvrnvjwm32l26kwgasz7in1w-python-toml-0.10.2/lib/python3.9/site-packages:/gnu/store/xqvr1b5065idb5y8jxgr42cdkiwj6l64-python-six-bootstrap-1.16.0/lib/python3.9/site-packages:/gnu/store/ls2xsqbwsha2ap65dh09f9v0q0v43d91-python-py-1.10.0/lib/python3.9/site-packages:/gnu/store/jl1g2qqlg9sjxrh649x8zp2ysy4fmwh5-python-pluggy-0.13.1/lib/python3.9/site-packages:/gnu/store/jbb1l7nqy3dskqy8i835p8wbi871dmsy-python-packaging-bootstrap-21.3/lib/python3.9/site-packages:/gnu/store/driz9p0pv29s9dlpd7r8m5r65wiia30z-python-more-itertools-8.2.0/lib/python3.9/site-packages:/gnu/store/sww1f0qbddpnj7p1pivrsva83xn7c711-python-iniconfig-1.1.1/lib/python3.9/site-packages:/gnu/store/wp31hr5sia5wydha04ijqiz2kdhck4y0-python-attrs-bootstrap-21.2.0/lib/python3.9/site-packages:/gnu/store/3bjjwwwbniv92j0cg8kp1h5k2q3c42n3-python-six-1.16.0/lib/python3.9/site-packages:/gnu/store/9bzm9zhbw6zk9ynhzx1qhzhzardd419w-python-urllib3-1.26.8/lib/python3.9/site-packages:/gnu/store/hj74j5jjqr55qy8ldvs23rlxanr3f1l7-python-idna-3.3/lib/python3.9/site-packages:/gnu/store/pwzh4npbxhm1rqrbg9lra99wx6sinkmf-python-charset-normalizer-2.0.11/lib/python3.9/site-packages:/gnu/store/sgs95j3njwkkw752iy4i8rycn37cs69z-python-certifi-2021.10.8/lib/python3.9/site-packages:/gnu/store/qmkk3jp
m693wajkb1p1w5njf6z7rmpcm-python-jdcal-1.4/lib/python3.9/site-packages:/gnu/store/s3jjk7n2hpwwn6vcxk9l9v19ns4mpnb3-python-et-xmlfile-1.0.1/lib/python3.9/site-packages:/gnu/store/kw41zfr5s3ay2xva287lcpgzqa7bhvh1-python-pyparsing-3.0.6/lib/python3.9/site-packages:/gnu/store/rm5hqml7c77psn8b7gsz4hqabb3xxkq6-python-pysocks-1.7.1/lib/python3.9/site-packages:/gnu/store/0fz1vanf4n5c4b5b5jh3p9sk36p6kfix-python-pyopenssl-21.0.0/lib/python3.9/site-packages:/gnu/store/rgkhhsivbqw737d5zrpj1ql540m37wh8-python-cryptography-3.4.8/lib/python3.9/site-packages:/gnu/store/xr7l6fkj6fwvxyvamxllsp7x0zm4j3ag-python-iso8601-1.0.2/lib/python3.9/site-packages:/gnu/store/4hzdy7w8mnxpi3054m46xd00ixqfhb6g-python-cffi-1.14.4/lib/python3.9/site-packages:/gnu/store/cnzvnhjqwgiqdlyyrkrpb4vkydxm4din-python-asn1crypto-1.4.0/lib/python3.9/site-packages:/gnu/store/1mdg7xc4zx0i9s0kd0hwwq98bgab51s1-python-pycparser-2.21/lib/python3.9/site-packages' environment variable `PYTHONTZPATH' unset environment variable `BASH_LOADABLES_PATH' unset environment variable `C_INCLUDE_PATH' set to `/gnu/store/s3hl12jxz9ybs7nsy7kq7ybzz7qnzmsg-bzip2-1.0.8/include:/gnu/store/c8isj4jq6knv0icfgr43di6q3nvdzkx7-xz-5.2.5/include:/gnu/store/4ic6244i3ca4b4rxc2wnrgllsidyishv-file-5.39/include:/gnu/store/690qz3fg334dpwn3pn6k59n4wc943p2b-gawk-5.1.0/include:/gnu/store/55cbpsi18mahg131nmiya6km5b4mscfa-make-4.3/include:/gnu/store/rc781v4k0drhaqn90xfwwpspki5x0bvf-binutils-2.37/include:/gnu/store/069aq2v993kpc41yabp5b6vm4wb9jkhg-gcc-10.3.0/include:/gnu/store/5h2w4qi9hk1qzzgi1w83220ydslinr4s-glibc-2.33/include:/gnu/store/65i3nhcwmz0p8rqbg48gaavyky4g4hwk-python-3.9.9/include:/gnu/store/6mjww4iz4xdan74d5bbjfh7il8rngfkk-linux-libre-headers-5.10.35/include' environment variable `CPLUS_INCLUDE_PATH' set to 
`/gnu/store/s3hl12jxz9ybs7nsy7kq7ybzz7qnzmsg-bzip2-1.0.8/include:/gnu/store/c8isj4jq6knv0icfgr43di6q3nvdzkx7-xz-5.2.5/include:/gnu/store/4ic6244i3ca4b4rxc2wnrgllsidyishv-file-5.39/include:/gnu/store/690qz3fg334dpwn3pn6k59n4wc943p2b-gawk-5.1.0/include:/gnu/store/55cbpsi18mahg131nmiya6km5b4mscfa-make-4.3/include:/gnu/store/rc781v4k0drhaqn90xfwwpspki5x0bvf-binutils-2.37/include:/gnu/store/069aq2v993kpc41yabp5b6vm4wb9jkhg-gcc-10.3.0/include/c++:/gnu/store/069aq2v993kpc41yabp5b6vm4wb9jkhg-gcc-10.3.0/include:/gnu/store/5h2w4qi9hk1qzzgi1w83220ydslinr4s-glibc-2.33/include:/gnu/store/65i3nhcwmz0p8rqbg48gaavyky4g4hwk-python-3.9.9/include:/gnu/store/6mjww4iz4xdan74d5bbjfh7il8rngfkk-linux-libre-headers-5.10.35/include' environment variable `LIBRARY_PATH' set to `/gnu/store/7frqm5ijy66f81hr8i1j6791k84lds9w-python-pytest-6.2.5/lib:/gnu/store/xs8pxa4rr2zkb2hr5nhkxr7ijxxgmqna-python-dateutil-2.8.2/lib:/gnu/store/gjv07rwkais79cr0m2vy4ia2xb3irwx5-python-requests-2.27.1/lib:/gnu/store/pv2gvq3im4czlpdrh0pb7jjkkib19qvy-python-lxml-4.6.3/lib:/gnu/store/x2maq7qxliy26mzfq1nic7mpjmyx8r7j-python-openpyxl-3.0.9/lib:/gnu/store/6pzbvnfvgdxfar4qhms0d81w9d6n1ylp-python-xlrd-2.0.1/lib:/gnu/store/s3hl12jxz9ybs7nsy7kq7ybzz7qnzmsg-bzip2-1.0.8/lib:/gnu/store/c8isj4jq6knv0icfgr43di6q3nvdzkx7-xz-5.2.5/lib:/gnu/store/4ic6244i3ca4b4rxc2wnrgllsidyishv-file-5.39/lib:/gnu/store/690qz3fg334dpwn3pn6k59n4wc943p2b-gawk-5.1.0/lib:/gnu/store/rc781v4k0drhaqn90xfwwpspki5x0bvf-binutils-2.37/lib:/gnu/store/5h2w4qi9hk1qzzgi1w83220ydslinr4s-glibc-2.33/lib:/gnu/store/4jdghmc65q7i7ib89zmvq66l0ghf7jc4-glibc-2.33-static/lib:/gnu/store/fnr1z6xsan0437r0yg48d0y8k32kqxby-glibc-utf8-locales-2.33/lib:/gnu/store/65i3nhcwmz0p8rqbg48gaavyky4g4hwk-python-3.9.9/lib:/gnu/store/rh5pmm5ralyl06pgfr83qlfxaav6svjr-python-wcwidth-0.1.8/lib:/gnu/store/hmyqhci2vvrnvjwm32l26kwgasz7in1w-python-toml-0.10.2/lib:/gnu/store/xqvr1b5065idb5y8jxgr42cdkiwj6l64-python-six-bootstrap-1.16.0/lib:/gnu/store/ls2xsqbwsha2ap65dh09f9v0q0v43d91-python-py-1.10.0/l
ib:/gnu/store/jl1g2qqlg9sjxrh649x8zp2ysy4fmwh5-python-pluggy-0.13.1/lib:/gnu/store/jbb1l7nqy3dskqy8i835p8wbi871dmsy-python-packaging-bootstrap-21.3/lib:/gnu/store/driz9p0pv29s9dlpd7r8m5r65wiia30z-python-more-itertools-8.2.0/lib:/gnu/store/sww1f0qbddpnj7p1pivrsva83xn7c711-python-iniconfig-1.1.1/lib:/gnu/store/wp31hr5sia5wydha04ijqiz2kdhck4y0-python-attrs-bootstrap-21.2.0/lib:/gnu/store/3bjjwwwbniv92j0cg8kp1h5k2q3c42n3-python-six-1.16.0/lib:/gnu/store/9bzm9zhbw6zk9ynhzx1qhzhzardd419w-python-urllib3-1.26.8/lib:/gnu/store/hj74j5jjqr55qy8ldvs23rlxanr3f1l7-python-idna-3.3/lib:/gnu/store/pwzh4npbxhm1rqrbg9lra99wx6sinkmf-python-charset-normalizer-2.0.11/lib:/gnu/store/sgs95j3njwkkw752iy4i8rycn37cs69z-python-certifi-2021.10.8/lib:/gnu/store/qmkk3jpm693wajkb1p1w5njf6z7rmpcm-python-jdcal-1.4/lib:/gnu/store/s3jjk7n2hpwwn6vcxk9l9v19ns4mpnb3-python-et-xmlfile-1.0.1/lib:/gnu/store/kw41zfr5s3ay2xva287lcpgzqa7bhvh1-python-pyparsing-3.0.6/lib:/gnu/store/rm5hqml7c77psn8b7gsz4hqabb3xxkq6-python-pysocks-1.7.1/lib:/gnu/store/0fz1vanf4n5c4b5b5jh3p9sk36p6kfix-python-pyopenssl-21.0.0/lib:/gnu/store/rgkhhsivbqw737d5zrpj1ql540m37wh8-python-cryptography-3.4.8/lib:/gnu/store/xr7l6fkj6fwvxyvamxllsp7x0zm4j3ag-python-iso8601-1.0.2/lib:/gnu/store/4hzdy7w8mnxpi3054m46xd00ixqfhb6g-python-cffi-1.14.4/lib:/gnu/store/cnzvnhjqwgiqdlyyrkrpb4vkydxm4din-python-asn1crypto-1.4.0/lib:/gnu/store/1mdg7xc4zx0i9s0kd0hwwq98bgab51s1-python-pycparser-2.21/lib' environment variable `GUIX_LOCPATH' set to `/gnu/store/fnr1z6xsan0437r0yg48d0y8k32kqxby-glibc-utf8-locales-2.33/lib/locale' phase `set-paths' succeeded after 0.1 seconds starting phase `install-locale' using 'en_US.utf8' locale for category "LC_ALL" phase `install-locale' succeeded after 0.0 seconds starting phase `unpack' visidata-2.10/ visidata-2.10/LICENSE.gpl3 visidata-2.10/MANIFEST.in visidata-2.10/PKG-INFO visidata-2.10/README.md visidata-2.10/bin/ visidata-2.10/bin/vd visidata-2.10/setup.cfg visidata-2.10/setup.py visidata-2.10/visidata/ 
visidata-2.10/visidata/__init__.py visidata-2.10/visidata/__main__.py visidata-2.10/visidata/_input.py visidata-2.10/visidata/_open.py visidata-2.10/visidata/_types.py visidata-2.10/visidata/_urlcache.py visidata-2.10/visidata/aggregators.py visidata-2.10/visidata/basesheet.py visidata-2.10/visidata/bezier.py visidata-2.10/visidata/canvas.py visidata-2.10/visidata/canvas_text.py visidata-2.10/visidata/choose.py visidata-2.10/visidata/clipboard.py visidata-2.10/visidata/cliptext.py visidata-2.10/visidata/cmdlog.py visidata-2.10/visidata/color.py visidata-2.10/visidata/colorsheet.py visidata-2.10/visidata/column.py visidata-2.10/visidata/customdate.py visidata-2.10/visidata/ddw/ visidata-2.10/visidata/ddw/input.ddw visidata-2.10/visidata/ddwplay.py visidata-2.10/visidata/deprecated.py visidata-2.10/visidata/describe.py visidata-2.10/visidata/editor.py visidata-2.10/visidata/errors.py visidata-2.10/visidata/expr.py visidata-2.10/visidata/extensible.py visidata-2.10/visidata/fill.py visidata-2.10/visidata/form.py visidata-2.10/visidata/freeze.py visidata-2.10/visidata/freqtbl.py visidata-2.10/visidata/graph.py visidata-2.10/visidata/help.py visidata-2.10/visidata/incr.py visidata-2.10/visidata/join.py visidata-2.10/visidata/keys.py visidata-2.10/visidata/layout.py visidata-2.10/visidata/loaders/ visidata-2.10/visidata/loaders/__init__.py visidata-2.10/visidata/loaders/_pandas.py visidata-2.10/visidata/loaders/archive.py visidata-2.10/visidata/loaders/arrow.py visidata-2.10/visidata/loaders/csv.py visidata-2.10/visidata/loaders/eml.py visidata-2.10/visidata/loaders/fixed_width.py visidata-2.10/visidata/loaders/frictionless.py visidata-2.10/visidata/loaders/geojson.py visidata-2.10/visidata/loaders/graphviz.py visidata-2.10/visidata/loaders/hdf5.py visidata-2.10/visidata/loaders/html.py visidata-2.10/visidata/loaders/http.py visidata-2.10/visidata/loaders/imap.py visidata-2.10/visidata/loaders/json.py visidata-2.10/visidata/loaders/lsv.py 
visidata-2.10/visidata/loaders/markdown.py visidata-2.10/visidata/loaders/mbtiles.py visidata-2.10/visidata/loaders/mysql.py visidata-2.10/visidata/loaders/npy.py visidata-2.10/visidata/loaders/odf.py visidata-2.10/visidata/loaders/pandas_freqtbl.py visidata-2.10/visidata/loaders/parquet.py visidata-2.10/visidata/loaders/pcap.py visidata-2.10/visidata/loaders/pdf.py visidata-2.10/visidata/loaders/png.py visidata-2.10/visidata/loaders/postgres.py visidata-2.10/visidata/loaders/rec.py visidata-2.10/visidata/loaders/sas.py visidata-2.10/visidata/loaders/shp.py visidata-2.10/visidata/loaders/spss.py visidata-2.10/visidata/loaders/sqlite.py visidata-2.10/visidata/loaders/texttables.py visidata-2.10/visidata/loaders/tsv.py visidata-2.10/visidata/loaders/ttf.py visidata-2.10/visidata/loaders/unzip_http.py visidata-2.10/visidata/loaders/usv.py visidata-2.10/visidata/loaders/vcf.py visidata-2.10/visidata/loaders/vds.py visidata-2.10/visidata/loaders/xlsb.py visidata-2.10/visidata/loaders/xlsx.py visidata-2.10/visidata/loaders/xml.py visidata-2.10/visidata/loaders/xword.py visidata-2.10/visidata/loaders/yaml.py visidata-2.10/visidata/macos.py visidata-2.10/visidata/macros.py visidata-2.10/visidata/main.py visidata-2.10/visidata/mainloop.py visidata-2.10/visidata/man/ visidata-2.10/visidata/man/vd.1 visidata-2.10/visidata/man/vd.txt visidata-2.10/visidata/man/visidata.1 visidata-2.10/visidata/melt.py visidata-2.10/visidata/memory.py visidata-2.10/visidata/menu.py visidata-2.10/visidata/metasheets.py visidata-2.10/visidata/misc.py visidata-2.10/visidata/modify.py visidata-2.10/visidata/motd.py visidata-2.10/visidata/movement.py visidata-2.10/visidata/path.py visidata-2.10/visidata/pivot.py visidata-2.10/visidata/plugins.py visidata-2.10/visidata/pyobj.py visidata-2.10/visidata/regex.py visidata-2.10/visidata/save.py visidata-2.10/visidata/search.py visidata-2.10/visidata/selection.py visidata-2.10/visidata/settings.py visidata-2.10/visidata/sheets.py 
visidata-2.10/visidata/shell.py visidata-2.10/visidata/slide.py visidata-2.10/visidata/sort.py visidata-2.10/visidata/statusbar.py visidata-2.10/visidata/tests/ visidata-2.10/visidata/tests/__init__.py visidata-2.10/visidata/tests/conftest.py visidata-2.10/visidata/tests/test_commands.py visidata-2.10/visidata/tests/test_edittext.py visidata-2.10/visidata/tests/test_path.py visidata-2.10/visidata/textsheet.py visidata-2.10/visidata/threads.py visidata-2.10/visidata/transpose.py visidata-2.10/visidata/undo.py visidata-2.10/visidata/unfurl.py visidata-2.10/visidata/utils.py visidata-2.10/visidata/vdobj.py visidata-2.10/visidata/vendor/ visidata-2.10/visidata/vendor/appdirs.py visidata-2.10/visidata/window.py visidata-2.10/visidata/wrappers.py visidata-2.10/visidata.egg-info/ visidata-2.10/visidata.egg-info/PKG-INFO visidata-2.10/visidata.egg-info/SOURCES.txt visidata-2.10/visidata.egg-info/dependency_links.txt visidata-2.10/visidata.egg-info/entry_points.txt visidata-2.10/visidata.egg-info/requires.txt visidata-2.10/visidata.egg-info/top_level.txt phase `unpack' succeeded after 0.0 seconds starting phase `ensure-no-mtimes-pre-1980' phase `ensure-no-mtimes-pre-1980' succeeded after 0.0 seconds starting phase `enable-bytecode-determinism' phase `enable-bytecode-determinism' succeeded after 0.0 seconds starting phase `ensure-no-cythonized-files' phase `ensure-no-cythonized-files' succeeded after 0.0 seconds starting phase `patch-usr-bin-file' phase `patch-usr-bin-file' succeeded after 0.0 seconds starting phase `patch-source-shebangs' patch-shebang: ./bin/vd: changing `/usr/bin/env python3' to `/gnu/store/65i3nhcwmz0p8rqbg48gaavyky4g4hwk-python-3.9.9/bin/python3' patch-shebang: ./setup.py: changing `/usr/bin/env python3' to `/gnu/store/65i3nhcwmz0p8rqbg48gaavyky4g4hwk-python-3.9.9/bin/python3' phase `patch-source-shebangs' succeeded after 0.0 seconds starting phase `patch-generated-file-shebangs' phase `patch-generated-file-shebangs' succeeded after 0.0 seconds starting 
phase `build' running "python setup.py" with command "build" and parameters () running build running build_py file visidata.py (for module visidata) not found creating build creating build/lib creating build/lib/visidata copying visidata/cmdlog.py -> build/lib/visidata copying visidata/_input.py -> build/lib/visidata copying visidata/help.py -> build/lib/visidata copying visidata/customdate.py -> build/lib/visidata copying visidata/_types.py -> build/lib/visidata copying visidata/layout.py -> build/lib/visidata copying visidata/modify.py -> build/lib/visidata copying visidata/__main__.py -> build/lib/visidata copying visidata/main.py -> build/lib/visidata copying visidata/plugins.py -> build/lib/visidata copying visidata/sheets.py -> build/lib/visidata copying visidata/mainloop.py -> build/lib/visidata copying visidata/aggregators.py -> build/lib/visidata copying visidata/menu.py -> build/lib/visidata copying visidata/pivot.py -> build/lib/visidata copying visidata/misc.py -> build/lib/visidata copying visidata/keys.py -> build/lib/visidata copying visidata/basesheet.py -> build/lib/visidata copying visidata/settings.py -> build/lib/visidata copying visidata/choose.py -> build/lib/visidata copying visidata/threads.py -> build/lib/visidata copying visidata/column.py -> build/lib/visidata copying visidata/canvas_text.py -> build/lib/visidata copying visidata/describe.py -> build/lib/visidata copying visidata/bezier.py -> build/lib/visidata copying visidata/form.py -> build/lib/visidata copying visidata/colorsheet.py -> build/lib/visidata copying visidata/path.py -> build/lib/visidata copying visidata/transpose.py -> build/lib/visidata copying visidata/motd.py -> build/lib/visidata copying visidata/errors.py -> build/lib/visidata copying visidata/freeze.py -> build/lib/visidata copying visidata/search.py -> build/lib/visidata copying visidata/deprecated.py -> build/lib/visidata copying visidata/freqtbl.py -> build/lib/visidata copying visidata/vdobj.py -> 
build/lib/visidata copying visidata/window.py -> build/lib/visidata copying visidata/memory.py -> build/lib/visidata copying visidata/shell.py -> build/lib/visidata copying visidata/canvas.py -> build/lib/visidata copying visidata/editor.py -> build/lib/visidata copying visidata/metasheets.py -> build/lib/visidata copying visidata/incr.py -> build/lib/visidata copying visidata/melt.py -> build/lib/visidata copying visidata/clipboard.py -> build/lib/visidata copying visidata/regex.py -> build/lib/visidata copying visidata/macos.py -> build/lib/visidata copying visidata/graph.py -> build/lib/visidata copying visidata/selection.py -> build/lib/visidata copying visidata/wrappers.py -> build/lib/visidata copying visidata/movement.py -> build/lib/visidata copying visidata/statusbar.py -> build/lib/visidata copying visidata/__init__.py -> build/lib/visidata copying visidata/join.py -> build/lib/visidata copying visidata/_urlcache.py -> build/lib/visidata copying visidata/save.py -> build/lib/visidata copying visidata/_open.py -> build/lib/visidata copying visidata/undo.py -> build/lib/visidata copying visidata/cliptext.py -> build/lib/visidata copying visidata/textsheet.py -> build/lib/visidata copying visidata/pyobj.py -> build/lib/visidata copying visidata/fill.py -> build/lib/visidata copying visidata/sort.py -> build/lib/visidata copying visidata/color.py -> build/lib/visidata copying visidata/extensible.py -> build/lib/visidata copying visidata/ddwplay.py -> build/lib/visidata copying visidata/macros.py -> build/lib/visidata copying visidata/unfurl.py -> build/lib/visidata copying visidata/slide.py -> build/lib/visidata copying visidata/expr.py -> build/lib/visidata copying visidata/utils.py -> build/lib/visidata creating build/lib/visidata/loaders copying visidata/loaders/xlsb.py -> build/lib/visidata/loaders copying visidata/loaders/sas.py -> build/lib/visidata/loaders copying visidata/loaders/ttf.py -> build/lib/visidata/loaders copying visidata/loaders/parquet.py 
-> build/lib/visidata/loaders copying visidata/loaders/tsv.py -> build/lib/visidata/loaders copying visidata/loaders/xml.py -> build/lib/visidata/loaders copying visidata/loaders/odf.py -> build/lib/visidata/loaders copying visidata/loaders/xword.py -> build/lib/visidata/loaders copying visidata/loaders/json.py -> build/lib/visidata/loaders copying visidata/loaders/mysql.py -> build/lib/visidata/loaders copying visidata/loaders/markdown.py -> build/lib/visidata/loaders copying visidata/loaders/_pandas.py -> build/lib/visidata/loaders copying visidata/loaders/geojson.py -> build/lib/visidata/loaders copying visidata/loaders/pandas_freqtbl.py -> build/lib/visidata/loaders copying visidata/loaders/pdf.py -> build/lib/visidata/loaders copying visidata/loaders/graphviz.py -> build/lib/visidata/loaders copying visidata/loaders/vds.py -> build/lib/visidata/loaders copying visidata/loaders/xlsx.py -> build/lib/visidata/loaders copying visidata/loaders/yaml.py -> build/lib/visidata/loaders copying visidata/loaders/pcap.py -> build/lib/visidata/loaders copying visidata/loaders/arrow.py -> build/lib/visidata/loaders copying visidata/loaders/shp.py -> build/lib/visidata/loaders copying visidata/loaders/frictionless.py -> build/lib/visidata/loaders copying visidata/loaders/postgres.py -> build/lib/visidata/loaders copying visidata/loaders/eml.py -> build/lib/visidata/loaders copying visidata/loaders/lsv.py -> build/lib/visidata/loaders copying visidata/loaders/rec.py -> build/lib/visidata/loaders copying visidata/loaders/png.py -> build/lib/visidata/loaders copying visidata/loaders/mbtiles.py -> build/lib/visidata/loaders copying visidata/loaders/vcf.py -> build/lib/visidata/loaders copying visidata/loaders/npy.py -> build/lib/visidata/loaders copying visidata/loaders/__init__.py -> build/lib/visidata/loaders copying visidata/loaders/archive.py -> build/lib/visidata/loaders copying visidata/loaders/html.py -> build/lib/visidata/loaders copying visidata/loaders/fixed_width.py -> 
build/lib/visidata/loaders copying visidata/loaders/imap.py -> build/lib/visidata/loaders copying visidata/loaders/http.py -> build/lib/visidata/loaders copying visidata/loaders/texttables.py -> build/lib/visidata/loaders copying visidata/loaders/unzip_http.py -> build/lib/visidata/loaders copying visidata/loaders/sqlite.py -> build/lib/visidata/loaders copying visidata/loaders/usv.py -> build/lib/visidata/loaders copying visidata/loaders/csv.py -> build/lib/visidata/loaders copying visidata/loaders/hdf5.py -> build/lib/visidata/loaders copying visidata/loaders/spss.py -> build/lib/visidata/loaders package init file 'visidata/vendor/__init__.py' not found (or not a regular file) creating build/lib/visidata/vendor copying visidata/vendor/appdirs.py -> build/lib/visidata/vendor creating build/lib/visidata/tests copying visidata/tests/conftest.py -> build/lib/visidata/tests copying visidata/tests/test_commands.py -> build/lib/visidata/tests copying visidata/tests/test_edittext.py -> build/lib/visidata/tests copying visidata/tests/test_path.py -> build/lib/visidata/tests copying visidata/tests/__init__.py -> build/lib/visidata/tests running egg_info writing visidata.egg-info/PKG-INFO writing dependency_links to visidata.egg-info/dependency_links.txt writing entry points to visidata.egg-info/entry_points.txt writing requirements to visidata.egg-info/requires.txt writing top-level names to visidata.egg-info/top_level.txt file visidata.py (for module visidata) not found reading manifest file 'visidata.egg-info/SOURCES.txt' reading manifest template 'MANIFEST.in' adding license file 'LICENSE.gpl3' writing manifest file 'visidata.egg-info/SOURCES.txt' creating build/lib/visidata/ddw copying visidata/ddw/input.ddw -> build/lib/visidata/ddw creating build/lib/visidata/man copying visidata/man/vd.1 -> build/lib/visidata/man copying visidata/man/vd.txt -> build/lib/visidata/man copying visidata/man/visidata.1 -> build/lib/visidata/man file visidata.py (for module visidata) not 
found warning: build_py: byte-compiling is disabled, skipping. running build_scripts creating build/scripts-3.9 copying and adjusting bin/vd -> build/scripts-3.9 changing mode of build/scripts-3.9/vd from 644 to 755 phase `build' succeeded after 0.4 seconds starting phase `install' running "python setup.py" with command "install" and parameters ("--prefix=/gnu/store/7rh1wh9kmi6852zs6j1zn4ll2w2y2mny-visidata-2.10" "--no-compile" "--single-version-externally-managed" "--root=/") running install running build running build_py file visidata.py (for module visidata) not found package init file 'visidata/vendor/__init__.py' not found (or not a regular file) running egg_info writing visidata.egg-info/PKG-INFO writing dependency_links to visidata.egg-info/dependency_links.txt writing entry points to visidata.egg-info/entry_points.txt writing requirements to visidata.egg-info/requires.txt writing top-level names to visidata.egg-info/top_level.txt file visidata.py (for module visidata) not found reading manifest file 'visidata.egg-info/SOURCES.txt' reading manifest template 'MANIFEST.in' adding license file 'LICENSE.gpl3' writing manifest file 'visidata.egg-info/SOURCES.txt' file visidata.py (for module visidata) not found warning: build_py: byte-compiling is disabled, skipping. 
running build_scripts
running install_lib
creating /gnu/store/7rh1wh9kmi6852zs6j1zn4ll2w2y2mny-visidata-2.10
creating /gnu/store/7rh1wh9kmi6852zs6j1zn4ll2w2y2mny-visidata-2.10/lib
creating /gnu/store/7rh1wh9kmi6852zs6j1zn4ll2w2y2mny-visidata-2.10/lib/python3.9
creating /gnu/store/7rh1wh9kmi6852zs6j1zn4ll2w2y2mny-visidata-2.10/lib/python3.9/site-packages
creating /gnu/store/7rh1wh9kmi6852zs6j1zn4ll2w2y2mny-visidata-2.10/lib/python3.9/site-packages/visidata
[cut 281 lines of file install messages…]
phase `install' succeeded after 0.4 seconds
starting phase `add-install-to-pythonpath'
phase `add-install-to-pythonpath' succeeded after 0.0 seconds
starting phase `add-install-to-path'
phase `add-install-to-path' succeeded after 0.0 seconds
starting phase `wrap'
find-files: /gnu/store/7rh1wh9kmi6852zs6j1zn4ll2w2y2mny-visidata-2.10/sbin: No such file or directory
phase `wrap' succeeded after 0.0 seconds
starting phase `check'
============================= test session starts ==============================
platform linux -- Python 3.9.9, pytest-6.2.5, py-1.10.0, pluggy-0.13.1
rootdir: /tmp/guix-build-visidata-2.10.drv-0/visidata-2.10
plugins: hypothesis-6.0.2
collected 21 items

visidata/tests/test_commands.py F                                        [  4%]
visidata/tests/test_edittext.py ...................                      [ 95%]
visidata/tests/test_path.py .                                            [100%]

=================================== FAILURES ===================================
________________________ TestCommands.test_baseCommands ________________________

self = <visidata.tests.test_commands.TestCommands object at 0x7ffff5aa78b0>
mock_screen = <Mock id='140737315126000'>

    def test_baseCommands(self, mock_screen):
        'exec each global command at least once'
        cmdlist = visidata.vd.commands
        vs = visidata.Sheet('test_commands')
        vs.reload()
        vd = visidata.vd
        nerrs = 0
        ntotal = 0
        for longname in cmdlist.keys():
            if not isTestableCommand(longname, cmdlist):
                continue
            ntotal += 1
            print(longname)
>           self.runOneTest(mock_screen, longname)

visidata/tests/test_commands.py:93:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
visidata/tests/test_commands.py:119: in runOneTest
    vs.reload.__wrapped__(vs)
visidata/sheets.py:944: in reload
    self.setCols(list(self.optlines(itsource, 'header')))
visidata/sheets.py:930: in optlines
    yield next(it)
visidata/loaders/tsv.py:45: in iterload
    with self.source.open_text(encoding=self.options.encoding) as fp:
visidata/path.py:222: in open_text
    return self.open(mode=mode, encoding=encoding or vd.options.encoding, errors=vd.options.encoding_errors, newline=newline)
visidata/path.py:262: in open
    return FileProgress(path, fp=self._path.open(*args, **kwargs), **kwargs)
/gnu/store/65i3nhcwmz0p8rqbg48gaavyky4g4hwk-python-3.9.9/lib/python3.9/pathlib.py:1252: in open
    return io.open(self, mode, buffering, encoding, errors, newline,
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self = PosixPath('/tmp/guix-build-visidata-2.10.drv-0/visidata-2.10/visidata/../sample_data/sample.tsv')
name = '/tmp/guix-build-visidata-2.10.drv-0/visidata-2.10/visidata/../sample_data/sample.tsv'
flags = 524288, mode = 438

    def _opener(self, name, flags, mode=0o666):
        # A stub for the opener argument to built-in open()
>       return self._accessor.open(self, flags, mode)
E       FileNotFoundError: [Errno 2] No such file or directory: '/tmp/guix-build-visidata-2.10.drv-0/visidata-2.10/visidata/../sample_data/sample.tsv'

/gnu/store/65i3nhcwmz0p8rqbg48gaavyky4g4hwk-python-3.9.9/lib/python3.9/pathlib.py:1120: FileNotFoundError
----------------------------- Captured stdout call -----------------------------
open-config
=============================== warnings summary ===============================
visidata/tests/test_commands.py::TestCommands::test_baseCommands
  /gnu/store/65i3nhcwmz0p8rqbg48gaavyky4g4hwk-python-3.9.9/lib/python3.9/site-packages/pkg_resources/__init__.py:1130: DeprecationWarning: Use of .. or absolute path in a resource path is not allowed and will raise exceptions in a future release.
    return get_provider(package_or_requirement).get_resource_filename(

-- Docs: https://docs.pytest.org/en/stable/warnings.html
=========================== short test summary info ============================
FAILED visidata/tests/test_commands.py::TestCommands::test_baseCommands - Fil...
=================== 1 failed, 20 passed, 1 warning in 0.52s ====================
error: in phase 'check': uncaught exception:
%exception #<&invoke-error program: "pytest" arguments: () exit-status: 1 term-signal: #f stop-signal: #f>
phase `check' failed after 1.2 seconds
command "pytest" failed with status 1
</pre>
</details>

**Additional context**

The tests seem to be looking for a file called `sample.tsv`. I checked the distribution tarball on PyPI and it doesn't seem to have that file in it. Maybe adding the file would fix the issue?
closed
2022-08-31T01:38:46Z
2022-09-12T00:55:11Z
https://github.com/saulpw/visidata/issues/1499
[ "bug", "fixed" ]
ryanprior
3
autogluon/autogluon
scikit-learn
3,892
Hyperparameter Optimization for Time Series stopped working
**Describe the bug**

I carefully installed autogluon for Windows-Pip-GPU (https://auto.gluon.ai/stable/install.html). autogluon works perfectly without HPO (fast training to best results). When I try to HPO `PatchTST` I get this `prediction-net-state.pt` error, which does not happen when I HPO on autogluon for CPU. I am on a Windows 11 laptop with all the latest updates. Below I attach all the relevant information for you to check the error.

```Bash
Beginning AutoGluon training... Time limit = 7200s
AutoGluon will save models to 'autogluon-m4-hourly'
=================== System Info ===================
AutoGluon Version:  1.0.0
Python Version:     3.9.18
Operating System:   Windows
Platform Machine:   AMD64
Platform Version:   10.0.22631
CPU Count:          32
GPU Count:          1
Memory Avail:       15.75 GB / 31.69 GB (49.7%)
Disk Space Avail:   769.92 GB / 952.89 GB (80.8%)
===================================================
Fitting with arguments:
{'enable_ensemble': False,
 'eval_metric': MASE,
 'hyperparameter_tune_kwargs': {'num_trials': 2, 'scheduler': 'local', 'searcher': 'random'},
 'hyperparameters': {'PatchTST': {'nhead': Categorical[2, 4]}},
 'known_covariates_names': [],
 'num_val_windows': 2,
 'prediction_length': 48,
 'quantile_levels': [0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9],
 'random_seed': 123,
 'refit_every_n_windows': 1,
 'refit_full': False,
 'target': 'target',
 'time_limit': 7200,
 'verbosity': 2}
Inferred time series frequency: 'H'
Provided train_data has 148060 rows, 200 time series. Median time series length is 700 (min=700, max=960).
Provided dataset contains following columns:
	target: 'target'
AutoGluon will gauge predictive performance using evaluation metric: 'MASE'
	This metric's sign has been flipped to adhere to being higher_is_better. The metric score can be multiplied by -1 to get the metric value.
===================================================
Starting training. Start time is 2024-01-29 17:14:49
Models that will be trained: ['PatchTST']
Hyperparameter tuning model PatchTST. Tuning model for up to 7198.9s of the 7198.9s remaining.
Traceback (most recent call last):
  File "c:\Users\diego.villacreses\OneDrive - Universidad de Las Américas\Desktop\MT\forecast_ventas\notebooks\tmp2.py", line 19, in <module>
    predictor.fit(
  File "C:\Users\diego.villacreses\AppData\Local\miniconda3\envs\ag\lib\site-packages\autogluon\core\utils\decorators.py", line 31, in _call
    return f(*gargs, **gkwargs)
  File "C:\Users\diego.villacreses\AppData\Local\miniconda3\envs\ag\lib\site-packages\autogluon\timeseries\predictor.py", line 681, in fit
    self._learner.fit(
  File "C:\Users\diego.villacreses\AppData\Local\miniconda3\envs\ag\lib\site-packages\autogluon\timeseries\learner.py", line 63, in fit
    return self._fit(
  File "C:\Users\diego.villacreses\AppData\Local\miniconda3\envs\ag\lib\site-packages\autogluon\timeseries\learner.py", line 118, in _fit
    self.trainer.fit(
  File "C:\Users\diego.villacreses\AppData\Local\miniconda3\envs\ag\lib\site-packages\autogluon\timeseries\trainer\auto_trainer.py", line 63, in fit
    self._train_multi(
  File "C:\Users\diego.villacreses\AppData\Local\miniconda3\envs\ag\lib\site-packages\autogluon\timeseries\trainer\abstract_trainer.py", line 587, in _train_multi
    model_names_trained += self.tune_model_hyperparameters(
  File "C:\Users\diego.villacreses\AppData\Local\miniconda3\envs\ag\lib\site-packages\autogluon\timeseries\trainer\abstract_trainer.py", line 448, in tune_model_hyperparameters
    model_hpo = self.load_model(model_hpo_name, path=model_path, model_type=type(model))
  File "C:\Users\diego.villacreses\AppData\Local\miniconda3\envs\ag\lib\site-packages\autogluon\timeseries\trainer\abstract_trainer.py", line 170, in load_model
    return model_type.load(path=os.path.join(self.path, path), reset_paths=self.reset_paths)
  File "C:\Users\diego.villacreses\AppData\Local\miniconda3\envs\ag\lib\site-packages\autogluon\timeseries\models\multi_window\multi_window_model.py", line 225, in load
    model.most_recent_model = model.model_base_type.load(
  File "C:\Users\diego.villacreses\AppData\Local\miniconda3\envs\ag\lib\site-packages\autogluon\timeseries\models\gluonts\abstract_gluonts.py", line 208, in load
    model.gts_predictor = PyTorchPredictor.deserialize(Path(path) / cls.gluonts_model_path)
  File "C:\Users\diego.villacreses\AppData\Local\miniconda3\envs\ag\lib\site-packages\gluonts\torch\model\predictor.py", line 117, in deserialize
    state_dict = torch.load(
  File "C:\Users\diego.villacreses\AppData\Local\miniconda3\envs\ag\lib\site-packages\torch\serialization.py", line 791, in load
    with _open_file_like(f, 'rb') as opened_file:
  File "C:\Users\diego.villacreses\AppData\Local\miniconda3\envs\ag\lib\site-packages\torch\serialization.py", line 271, in _open_file_like
    return _open_file(name_or_buffer, mode)
  File "C:\Users\diego.villacreses\AppData\Local\miniconda3\envs\ag\lib\site-packages\torch\serialization.py", line 252, in __init__
    super().__init__(open(name, mode))
FileNotFoundError: [Errno 2] No such file or directory: 'C:\\Users\\diego.villacreses\\notebooks\\autogluon-m4-hourly\\models\\PatchTST\\d196b_00000\\W1\\gluon_ts\\prediction-net-state.pt'
```

**To Reproduce**

```Python
import pandas as pd
from autogluon.timeseries import TimeSeriesDataFrame, TimeSeriesPredictor
from autogluon.common import space

df = pd.read_csv("https://autogluon.s3.amazonaws.com/datasets/timeseries/m4_hourly_subset/train.csv")
train_data = TimeSeriesDataFrame.from_data_frame(
    df,
    id_column="item_id",
    timestamp_column="timestamp"
)

predictor = TimeSeriesPredictor(
    prediction_length=48,
    path="autogluon-m4-hourly",
    target="target",
    eval_metric="MASE",
)

predictor.fit(
    train_data,
    hyperparameters={
        "PatchTST": {"nhead": space.Categorical(2, 4)},
    },
    hyperparameter_tune_kwargs={
        "num_trials": 2,
        "scheduler": "local",  # local, FIFO ASHA
        "searcher": "random",  # auto, bayes, random
    },
    enable_ensemble=False,
    time_limit=60*60*2,  # total training time in seconds
    num_val_windows=2,
)
print(predictor.leaderboard())
```

**Screenshots / Logs**

<!-- If applicable, add screenshots or logs to help explain your problem. -->

**Installed Versions**

```Bash
absl-py 2.1.0 accelerate 0.21.0 aiohttp 3.9.3 aiohttp-cors 0.7.0 aiosignal 1.3.1 aliyun-python-sdk-core 2.14.0 aliyun-python-sdk-kms 2.16.2 ansicon 1.89.0 antlr4-python3-runtime 4.9.3 anyio 4.2.0 arrow 1.3.0 async-timeout 4.0.3 attrs 23.2.0 autogluon 1.0.0 autogluon.common 1.0.0 autogluon.core 1.0.0 autogluon.features 1.0.0 autogluon.multimodal 1.0.0 autogluon.tabular 1.0.0 autogluon.timeseries 1.0.0 backoff 2.2.1 beautifulsoup4 4.12.3 blessed 1.20.0 blis 0.7.11 boto3 1.34.30 botocore 1.34.30 cachetools 5.3.2 catalogue 2.0.10 catboost 1.2.2 certifi 2023.11.17 cffi 1.16.0 charset-normalizer 3.3.2 click 8.1.7 cloudpathlib 0.16.0 cloudpickle 3.0.0 colorama 0.4.6 colorful 0.5.6 confection 0.1.4 contourpy 1.2.0 crcmod 1.7 croniter 1.4.1 cryptography 42.0.1 cycler 0.12.1 cymem 2.0.8 datasets 2.10.1 dateutils 0.6.12 deepdiff 6.7.1 defusedxml 0.7.1 dill 0.3.6 distlib 0.3.8 editor 1.6.6 et-xmlfile 1.1.0 evaluate 0.4.1 exceptiongroup 1.2.0 fastai 2.7.13 fastapi 0.109.0 fastcore 1.5.29 fastdownload 0.0.7 fastprogress 1.0.3 filelock 3.13.1 fonttools 4.47.2 frozenlist 1.4.1 fsspec 2023.12.2 future 0.18.3 gdown 5.0.1 gluonts 0.14.3 google-api-core 2.15.0 google-auth 2.27.0 google-auth-oauthlib 1.2.0 googleapis-common-protos 1.62.0 gpustat 1.1.1 graphviz 0.20.1 grpcio 1.60.0 h11 0.14.0 huggingface-hub 0.20.3 hyperopt 0.2.7 idna 3.6 imageio 2.33.1 importlib-metadata 7.0.1 importlib-resources 6.1.1 inquirer 3.2.3 itsdangerous 2.1.2 Jinja2 3.1.3 jinxed 1.2.1 jmespath 0.10.0 joblib 1.3.2 jsonschema 4.17.3 kiwisolver 1.4.5 langcodes 3.3.0 lazy_loader 0.3 lightgbm 4.1.0 lightning 2.0.9.post0 lightning-cloud 0.5.62 lightning-utilities 0.10.1 llvmlite 0.41.1
Markdown 3.5.2 markdown-it-py 3.0.0 MarkupSafe 2.1.4 matplotlib 3.8.2 mdurl 0.1.2 mlforecast 0.10.0 model-index 0.1.11 mpmath 1.3.0 msgpack 1.0.7 multidict 6.0.4 multiprocess 0.70.14 murmurhash 1.0.10 networkx 3.2.1 nlpaug 1.1.11 nltk 3.8.1 nptyping 2.4.1 numba 0.58.1 numpy 1.24.4 nvidia-ml-py 12.535.133 nvidia-ml-py3 7.352.0 oauthlib 3.2.2 omegaconf 2.2.3 opencensus 0.11.4 opencensus-context 0.1.3 opendatalab 0.0.10 openmim 0.3.9 openpyxl 3.1.2 openxlab 0.0.34 ordered-set 4.1.0 orjson 3.9.12 oss2 2.17.0 packaging 23.2 pandas 2.1.4 patsy 0.5.6 pillow 10.2.0 pip 23.3.2 platformdirs 3.11.0 plotly 5.18.0 preshed 3.0.9 prometheus-client 0.19.0 protobuf 4.23.4 psutil 5.9.8 py-spy 0.3.14 py4j 0.10.9.7 pyarrow 6.0.1 pyasn1 0.5.1 pyasn1-modules 0.3.0 pycparser 2.21 pycryptodome 3.20.0 pydantic 1.10.14 Pygments 2.17.2 PyJWT 2.8.0 pyparsing 3.1.1 pyrsistent 0.20.0 PySocks 1.7.1 pytesseract 0.3.10 python-dateutil 2.8.2 python-multipart 0.0.6 pytorch-lightning 2.0.9.post0 pytorch-metric-learning 1.7.3 pytz 2023.4 PyWavelets 1.5.0 pywin32 306 PyYAML 6.0.1 ray 2.6.3 readchar 4.0.5 regex 2023.12.25 requests 2.28.2 requests-oauthlib 1.3.1 responses 0.18.0 rich 13.4.2 rsa 4.9 runs 1.2.2 s3transfer 0.10.0 safetensors 0.4.2 scikit-image 0.20.0 scikit-learn 1.4.0 scipy 1.9.1 sentencepiece 0.1.99 seqeval 1.2.2 setuptools 60.2.0 six 1.16.0 smart-open 6.4.0 sniffio 1.3.0 soupsieve 2.5 spacy 3.7.2 spacy-legacy 3.0.12 spacy-loggers 1.0.5 srsly 2.4.8 starlette 0.35.1 starsessions 1.3.0 statsforecast 1.4.0 statsmodels 0.14.1 sympy 1.12 tabulate 0.9.0 tenacity 8.2.3 tensorboard 2.15.1 tensorboard-data-server 0.7.2 tensorboardX 2.6.2.2 text-unidecode 1.3 thinc 8.2.2 threadpoolctl 3.2.0 tifffile 2024.1.30 timm 0.9.12 tokenizers 0.13.3 toolz 0.12.1 torch 2.0.1+cu118 torchmetrics 1.1.2 torchvision 0.15.2+cu118 tqdm 4.65.2 traitlets 5.14.1 transformers 4.31.0 typer 0.9.0 types-python-dateutil 2.8.19.20240106 typing_extensions 4.9.0 tzdata 2023.4 urllib3 1.26.18 utilsforecast 0.0.10 uvicorn 
0.27.0.post1 virtualenv 20.21.0 wasabi 1.1.2 wcwidth 0.2.13 weasel 0.3.4 websocket-client 1.7.0 websockets 12.0 Werkzeug 3.0.1 wheel 0.42.0 window-ops 0.0.14 xgboost 2.0.3 xmod 1.8.1 xxhash 3.4.1 yarl 1.9.4 zipp 3.17.0 ``` ```python from autogluon.core.utils import show_versions show_versions() ``` ```Bash date : 2024-01-29 time : 17:22:48.832973 python : 3.9.18.final.0 OS : Windows OS-release : 10 Version : 10.0.22631 machine : AMD64 processor : Intel64 Family 6 Model 183 Stepping 1, GenuineIntel num_cores : 32 cpu_ram_mb : 32452.703125 cuda version : None num_gpus : 1 gpu_ram_mb : [7963] avail_disk_size_mb : None accelerate : 0.21.0 async-timeout : 4.0.3 autogluon : 1.0.0 autogluon.common : 1.0.0 autogluon.core : 1.0.0 autogluon.features : 1.0.0 autogluon.multimodal : 1.0.0 autogluon.tabular : 1.0.0 autogluon.timeseries : 1.0.0 boto3 : 1.34.30 catboost : 1.2.2 defusedxml : 0.7.1 evaluate : 0.4.1 fastai : 2.7.13 gluonts : 0.14.3 hyperopt : 0.2.7 imodels : None jinja2 : 3.1.3 joblib : 1.3.2 jsonschema : 4.17.3 lightgbm : 4.1.0 lightning : 2.0.9.post0 matplotlib : 3.8.2 mlforecast : 0.10.0 networkx : 3.2.1 nlpaug : 1.1.11 nltk : 3.8.1 nptyping : 2.4.1 numpy : 1.24.4 nvidia-ml-py3 : 7.352.0 omegaconf : 2.2.3 onnxruntime-gpu : None openmim : 0.3.9 orjson : 3.9.12 pandas : 2.1.4 Pillow : 10.2.0 psutil : 5.9.8 PyMuPDF : None pytesseract : 0.3.10 pytorch-lightning : 2.0.9.post0 pytorch-metric-learning: 1.7.3 ray : 2.6.3 requests : 2.28.2 scikit-image : 0.20.0 scikit-learn : 1.4.0 scikit-learn-intelex : None scipy : 1.9.1 seqeval : 1.2.2 setuptools : 60.2.0 skl2onnx : None statsforecast : 1.4.0 statsmodels : 0.14.1 tabpfn : None tensorboard : 2.15.1 text-unidecode : 1.3 timm : 0.9.12 torch : 2.0.1+cu118 torchmetrics : 1.1.2 torchvision : 0.15.2+cu118 tqdm : 4.65.2 transformers : 4.31.0 utilsforecast : 0.0.10 vowpalwabbit : None xgboost : 2.0.3 ```
closed
2024-01-29T22:24:03Z
2024-03-01T14:49:15Z
https://github.com/autogluon/autogluon/issues/3892
[ "bug: unconfirmed", "module: timeseries" ]
DiegoDVillacreses
3
plotly/dash
plotly
3,074
Memory Leakage When Navigating Away from Home Page with dash.register_page()
**Describe your context**

There are issues with memory leakage in our Dash application. Specifically, each time users navigate away from the home page, we observe a memory leak of about 15MB. Objects are getting detached but not fully removed, and each time the user changes tabs, new objects are created while the old ones are not removed.

- replace the result of `pip list | grep dash` below

```
dash==2.17.0
gunicorn==20.0.4
pandas>=1.1.5
```

- if frontend related, tell us your Browser, Version and OS
  - Browser chrome

**Describe the bug**

- Navigation: We use dash.register_page() to add a new page to our application. Users navigate away from the home page using dcc.Location and dcc.Link
- Component Setup: The home page layout includes an ag-Grid component within a dcc.Tabs component.
- Callbacks: A callback renders content dynamically based on the selected tab within dcc.Tabs.

**Expected behavior**

Remove detached objects

**Screenshots**

![Image](https://github.com/user-attachments/assets/565169ea-eb15-410e-8af8-4b72a82fe2c4)
open
2024-11-13T15:38:29Z
2024-11-15T15:09:23Z
https://github.com/plotly/dash/issues/3074
[ "bug", "P2" ]
andre996
0
MaartenGr/BERTopic
nlp
1,890
Tracking the source index of new topics when merging models
Hi Maarten, I'm playing with the merge_models feature, which is very useful, but I'm wondering if there is a way for a merged model to keep track of the index of new topics added to it from their original models. One use case of this is if I have some other metadata relating to my topics before the merge, and I want to link that metadata to the topics after the merge.

At the moment I'm doing

```python
from umap import UMAP
from bertopic import BERTopic
from datasets import load_dataset
import numpy as np
import pandas as pd

dataset = load_dataset("CShorten/ML-ArXiv-Papers")["train"]
abstracts_1 = dataset["abstract"][:5_000]
abstracts_2 = dataset["abstract"][5_000:10_000]

# Create topic models
umap_model = UMAP(n_neighbors=15, n_components=5, min_dist=0.0, metric='cosine', random_state=42)
topic_model_1 = BERTopic(umap_model=umap_model, min_topic_size=20).fit(abstracts_1)
topic_model_2 = BERTopic(umap_model=umap_model, min_topic_size=20).fit(abstracts_2)

# calculate topic stat
topic_info_1 = topic_model_1.get_topic_info()
topic_info_1['Topic stat'] = np.random.randint(1, 10, topic_info_1.shape[0])
topic_info_2 = topic_model_2.get_topic_info()
topic_info_2['Topic stat'] = np.random.randint(1, 10, topic_info_2.shape[0])

merged_model = BERTopic.merge_models([topic_model_1, topic_model_2], min_similarity=0.9)
merged_info = merged_model.get_topic_info()

# map new merged topics to original model 2
new_topic_nums = merged_info['Name'][len(topic_info_1):]
new_topic_nums = new_topic_nums.str.split("_", n=1).str[0].astype('int')
all_old_stats = topic_info_1['Topic stat']
selected_new_stats = topic_info_2['Topic stat'].loc[topic_info_2['Topic'].isin(new_topic_nums)]
merged_stats = pd.concat([all_old_stats, selected_new_stats]).tolist()
merged_info['Topic stat'] = merged_stats
```

But this seems really hacky and gets really complicated when merging more than 2 models. It would be great to have a dictionary or something that mapped each sequential merge e.g.

```
{
    "1": {          # merge 1
        "5": 53,    # topic num in original model: topic num in merged model
        "19": 54,
        ...
    },
    "2": {          # merge 2
        "7": 61,
        "12": 62,
        ...
    },
    ...
}
```

or something like that. I know there's already a topic mapper used for other purposes. Not sure if that could be utilised?
open
2024-03-27T04:47:30Z
2024-04-01T23:28:46Z
https://github.com/MaartenGr/BERTopic/issues/1890
[]
zilch42
2
statsmodels/statsmodels
data-science
8,929
ENH: GAM combining different penalizations, splines
We don't have an option to combine two different spline types in GAM
https://stackoverflow.com/questions/76541891/use-both-cyclic-and-b-spline-smoothers-in-statsmodels-gam

We had this question once before, soon after I had merged GAM.

Relatedly, we don't have a simple way to add different penalties, Penalty classes, either for overlapping or non-overlapping params.
open
2023-06-24T13:13:50Z
2024-07-23T12:54:44Z
https://github.com/statsmodels/statsmodels/issues/8929
[ "type-enh", "topic-penalization", "prio-elev", "comp-gam" ]
josef-pkt
0
lepture/authlib
flask
379
Auth required after fetch_token with AsyncOAuth2Client
**Describe the bug** Documentation shows client.get, .post, etc being used immediately after client.fetch_token() but this does not seem to work. **To Reproduce** ``` from authlib.integrations.httpx_client.oauth2_client import OAuth2Client, OAuth2Auth oauth_client = AsyncOAuth2Client(...) token = await oauth_client.fetch_token( API_TOKEN_URL, authorization_response=str(request.url), grant_type='authorization_code', ) user_request = await oauth_client.get(API_URL + "/users") ``` _**401 here**_ However, if you use: ``` user_request = await oauth_client.get(API_URL + "/users", auth=OAuth2Auth(client.token)) ... or ... client.auth = OAuth2Auth(client.token) user_request = await oauth_client.get(API_URL + "/users") ``` _**200 here**_ **Expected behavior** As per the docs suggest, I should be able to use .get, .post, etc immediately after using fetch_token() or maybe the docs should be updated for the methods above? I don't know how it's supposed to work. The docs make it seem like I should be able to use .get immediately against an endpoint requiring authentication. **Environment:** - OS: Linux - Python Version: 3.9.6 - Authlib Version: 0.15.4
closed
2021-08-27T04:34:59Z
2021-10-18T12:16:57Z
https://github.com/lepture/authlib/issues/379
[ "bug" ]
elijahsgh
4
torchbox/wagtail-grapple
graphql
39
example app docker build fails
![example](https://user-images.githubusercontent.com/10539855/70173238-11575e80-1698-11ea-86c7-c8feb189c283.png)
closed
2019-12-04T19:14:59Z
2022-02-25T16:58:11Z
https://github.com/torchbox/wagtail-grapple/issues/39
[ "status: Needs info" ]
easherma
4
clovaai/donut
computer-vision
38
Need code for SROIE custom dataset
Hi Neha, kindly send me the code for DONUT using a custom dataset.
open
2022-08-29T08:10:05Z
2022-08-29T08:10:05Z
https://github.com/clovaai/donut/issues/38
[]
SankarSennan
0
TencentARC/GFPGAN
deep-learning
83
Better hair enhancement
Is it possible for this tool to enhance all of the subject's hair and not only the hair that's around their face?
open
2021-10-19T15:25:11Z
2021-10-19T15:25:11Z
https://github.com/TencentARC/GFPGAN/issues/83
[]
ivellios1988
0
vllm-project/vllm
pytorch
14,747
[Installation]: Cannot compile vLLM from source on XPU
### Your current environment

<details>
<summary>The output of `python collect_env.py`</summary>

```text
Your output of `python collect_env.py` here
```

</details>

### 🐛 Describe the bug

Compiling vLLM from source for XPU or building the dockerfile results in failure each time you attempt to run vLLM.

```
python3 -m vllm.entrypoints.openai.api_server --model /llm/models/qwen2.5-1.5b-instruct --device xpu --max_model_len 1024
```

```
[W313 09:42:19.085066709 OperatorEntry.cpp:154] Warning: Warning only once for all operators, other operators may also be overridden.
Overriding a previously registered kernel for the same operator and the same dispatch key
  operator: aten::_validate_compressed_sparse_indices(bool is_crow, Tensor compressed_idx, Tensor plain_idx, int cdim, int dim, int nnz) -> ()
    registered at /pytorch/build/aten/src/ATen/RegisterSchema.cpp:6
  dispatch key: XPU
  previous kernel: registered at /pytorch/build/aten/src/ATen/RegisterCPU.cpp:30477
       new kernel: registered at /build/intel-pytorch-extension/build/Release/csrc/gpu/csrc/aten/generated/ATen/RegisterXPU.cpp:468 (function operator())
INFO 03-13 09:42:22 [api_server.py:912] vLLM API server version 0.7.4.dev389+g72e4bd5e
INFO 03-13 09:42:22 [api_server.py:913] args: Namespace(host=None, port=8000, uvicorn_log_level='info', allow_credentials=False, allowed_origins=['*'], allowed_methods=['*'], allowed_headers=['*'], api_key=None, lora_modules=None, prompt_adapters=None, chat_template=None, chat_template_content_format='auto', response_role='assistant', ssl_keyfile=None, ssl_certfile=None, ssl_ca_certs=None, enable_ssl_refresh=False, ssl_cert_reqs=0, root_path=None, middleware=[], return_tokens_as_token_ids=False, disable_frontend_multiprocessing=False, enable_request_id_headers=False, enable_auto_tool_choice=False, tool_call_parser=None, tool_parser_plugin='', model='/llm/models/qwen2.5-1.5b-instruct', task='auto', tokenizer=None, hf_config_path=None, skip_tokenizer_init=False, revision=None, code_revision=None, tokenizer_revision=None, tokenizer_mode='auto', trust_remote_code=False, allowed_local_media_path=None, download_dir=None, load_format='auto', config_format=<ConfigFormat.AUTO: 'auto'>, dtype='auto', kv_cache_dtype='auto', max_model_len=1024, guided_decoding_backend='xgrammar', logits_processor_pattern=None, model_impl='auto', distributed_executor_backend=None, pipeline_parallel_size=1, tensor_parallel_size=1, enable_expert_parallel=False, max_parallel_loading_workers=None, ray_workers_use_nsight=False, block_size=None, enable_prefix_caching=None, disable_sliding_window=False, use_v2_block_manager=True, num_lookahead_slots=0, seed=None, swap_space=4, cpu_offload_gb=0, gpu_memory_utilization=0.9, num_gpu_blocks_override=None, max_num_batched_tokens=None, max_num_partial_prefills=1, max_long_partial_prefills=1, long_prefill_token_threshold=0, max_num_seqs=None, max_logprobs=20, disable_log_stats=False, quantization=None, rope_scaling=None, rope_theta=None, hf_overrides=None, enforce_eager=False, max_seq_len_to_capture=8192, disable_custom_all_reduce=False, tokenizer_pool_size=0, tokenizer_pool_type='ray', tokenizer_pool_extra_config=None, limit_mm_per_prompt=None, mm_processor_kwargs=None, disable_mm_preprocessor_cache=False, enable_lora=False, enable_lora_bias=False, max_loras=1, max_lora_rank=16, lora_extra_vocab_size=256, lora_dtype='auto', long_lora_scaling_factors=None, max_cpu_loras=None, fully_sharded_loras=False, enable_prompt_adapter=False, max_prompt_adapters=1, max_prompt_adapter_token=0, device='xpu', num_scheduler_steps=1, use_tqdm_on_load=True, multi_step_stream_outputs=True, scheduler_delay_factor=0.0, enable_chunked_prefill=None, speculative_model=None, speculative_model_quantization=None, num_speculative_tokens=None, speculative_disable_mqa_scorer=False, speculative_draft_tensor_parallel_size=None, speculative_max_model_len=None, speculative_disable_by_batch_size=None, ngram_prompt_lookup_max=None, ngram_prompt_lookup_min=None, spec_decoding_acceptance_method='rejection_sampler', typical_acceptance_sampler_posterior_threshold=None, typical_acceptance_sampler_posterior_alpha=None, disable_logprobs_during_spec_decoding=None, model_loader_extra_config=None, ignore_patterns=[], preemption_mode=None, served_model_name=None, qlora_adapter_name_or_path=None, show_hidden_metrics_for_version=None, otlp_traces_endpoint=None, collect_detailed_traces=None, disable_async_output_proc=False, scheduling_policy='fcfs', scheduler_cls='vllm.core.scheduler.Scheduler', override_neuron_config=None, override_pooler_config=None, compilation_config=None, kv_transfer_config=None, worker_cls='auto', worker_extension_cls='', generation_config='auto', override_generation_config=None, enable_sleep_mode=False, calculate_kv_scales=False, additional_config=None, enable_reasoning=False, reasoning_parser=None, disable_log_requests=False, max_log_len=None, disable_fastapi_docs=False, enable_prompt_tokens_details=False)
INFO 03-13 09:42:22 [api_server.py:209] Started engine process with PID 301
[W313 09:42:24.376887599 OperatorEntry.cpp:154] Warning: Warning only once for all operators, other operators may also be overridden.
  Overriding a previously registered kernel for the same operator and the same dispatch key
  operator: aten::_validate_compressed_sparse_indices(bool is_crow, Tensor compressed_idx, Tensor plain_idx, int cdim, int dim, int nnz) -> ()
    registered at /pytorch/build/aten/src/ATen/RegisterSchema.cpp:6
  dispatch key: XPU
  previous kernel: registered at /pytorch/build/aten/src/ATen/RegisterCPU.cpp:30477
       new kernel: registered at /build/intel-pytorch-extension/build/Release/csrc/gpu/csrc/aten/generated/ATen/RegisterXPU.cpp:468 (function operator())
INFO 03-13 09:42:26 [__init__.py:256] Automatically detected platform xpu.
INFO 03-13 09:42:27 [config.py:576] This model supports multiple tasks: {'classify', 'generate', 'reward', 'embed', 'score'}. Defaulting to 'generate'.
WARNING 03-13 09:42:27 [_logger.py:68] bfloat16 is only supported on Intel Data Center GPU, Intel Arc GPU is not supported yet. Your device is Intel(R) Arc(TM) A770 Graphics, which is not supported. will fallback to float16
WARNING 03-13 09:42:27 [_logger.py:68] CUDA graph is not supported on XPU, fallback to the eager mode.
INFO 03-13 09:42:31 [config.py:576] This model supports multiple tasks: {'reward', 'generate', 'embed', 'classify', 'score'}. Defaulting to 'generate'.
WARNING 03-13 09:42:31 [_logger.py:68] bfloat16 is only supported on Intel Data Center GPU, Intel Arc GPU is not supported yet. Your device is Intel(R) Arc(TM) A770 Graphics, which is not supported. will fallback to float16
WARNING 03-13 09:42:31 [_logger.py:68] CUDA graph is not supported on XPU, fallback to the eager mode.
INFO 03-13 09:42:31 [llm_engine.py:235] Initializing a V0 LLM engine (v0.7.4.dev389+g72e4bd5e) with config: model='/llm/models/qwen2.5-1.5b-instruct', speculative_config=None, tokenizer='/llm/models/qwen2.5-1.5b-instruct', skip_tokenizer_init=False, tokenizer_mode=auto, revision=None, override_neuron_config=None, tokenizer_revision=None, trust_remote_code=False, dtype=torch.float16, max_seq_len=1024, download_dir=None, load_format=LoadFormat.AUTO, tensor_parallel_size=1, pipeline_parallel_size=1, disable_custom_all_reduce=False, quantization=None, enforce_eager=True, kv_cache_dtype=auto, device_config=xpu, decoding_config=DecodingConfig(guided_decoding_backend='xgrammar', reasoning_backend=None), observability_config=ObservabilityConfig(show_hidden_metrics=False, otlp_traces_endpoint=None, collect_model_forward_time=False, collect_model_execute_time=False), seed=None, served_model_name=/llm/models/qwen2.5-1.5b-instruct, num_scheduler_steps=1, multi_step_stream_outputs=True, enable_prefix_caching=False, chunked_prefill_enabled=False, use_async_output_proc=True, disable_mm_preprocessor_cache=False, mm_processor_kwargs=None, pooler_config=None, compilation_config={"splitting_ops":[],"compile_sizes":[],"cudagraph_capture_sizes":[256,248,240,232,224,216,208,200,192,184,176,168,160,152,144,136,128,120,112,104,96,88,80,72,64,56,48,40,32,24,16,8,4,2,1],"max_capture_size":256}, use_cached_outputs=True,
INFO 03-13 09:42:31 [xpu.py:36] Cannot use None backend on XPU.
INFO 03-13 09:42:31 [xpu.py:42] Using IPEX attention backend.
WARNING 03-13 09:42:31 [_logger.py:68] Failed to import from vllm._C with ModuleNotFoundError("No module named 'vllm._C'")
INFO 03-13 09:42:31 [importing.py:16] Triton not installed or not compatible; certain GPU-related functions will not be available.
INFO 03-13 09:42:31 [parallel_state.py:948] rank 0 in world size 1 is assigned as DP rank 0, PP rank 0, TP rank 0
2025:03:13-09:42:31:( 301) |CCL_WARN| value of CCL_ATL_TRANSPORT changed to be ofi (default:mpi)
2025:03:13-09:42:31:( 301) |CCL_WARN| value of CCL_LOCAL_RANK changed to be 0 (default:-1)
2025:03:13-09:42:31:( 301) |CCL_WARN| value of CCL_LOCAL_SIZE changed to be 1 (default:-1)
2025:03:13-09:42:31:( 301) |CCL_WARN| value of CCL_PROCESS_LAUNCHER changed to be none (default:hydra)
2025:03:13-09:42:32:( 439) |CCL_WARN| no membind support for NUMA node 0, skip thread membind
2025:03:13-09:42:32:( 301) |CCL_WARN| device_family is unknown, topology discovery could be incorrect, it might result in suboptimal performance
2025:03:13-09:42:32:( 301) |CCL_WARN| pidfd is not supported, fallbacks to drmfd exchange mode
2025:03:13-09:42:32:( 301) |CCL_ERROR| ze_fd_manager.cpp:214 fill_device_fds: condition fds[dev_idx] > 0 failed
open failed: fd: -1, errno: No such file or directory
ERROR 03-13 09:42:32 [engine.py:411] oneCCL: ze_fd_manager.cpp:214 fill_device_fds: EXCEPTION: open failed: fd: -1, errno: No such file or directory
ERROR 03-13 09:42:32 [engine.py:411] Traceback (most recent call last):
ERROR 03-13 09:42:32 [engine.py:411]   File "/vllm/lib/python3.12/site-packages/vllm-0.7.4.dev389+g72e4bd5e.xpu-py3.12.egg/vllm/engine/multiprocessing/engine.py", line 402, in run_mp_engine
ERROR 03-13 09:42:32 [engine.py:411]     engine = MQLLMEngine.from_engine_args(engine_args=engine_args,
ERROR 03-13 09:42:32 [engine.py:411]              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
ERROR 03-13 09:42:32 [engine.py:411]   File "/vllm/lib/python3.12/site-packages/vllm-0.7.4.dev389+g72e4bd5e.xpu-py3.12.egg/vllm/engine/multiprocessing/engine.py", line 125, in from_engine_args
ERROR 03-13 09:42:32 [engine.py:411]     return cls(ipc_path=ipc_path,
ERROR 03-13 09:42:32 [engine.py:411]            ^^^^^^^^^^^^^^^^^^^^^^
ERROR 03-13 09:42:32 [engine.py:411]   File "/vllm/lib/python3.12/site-packages/vllm-0.7.4.dev389+g72e4bd5e.xpu-py3.12.egg/vllm/engine/multiprocessing/engine.py", line 77, in __init__
ERROR 03-13 09:42:32 [engine.py:411]     self.engine = LLMEngine(*args, **kwargs)
ERROR 03-13 09:42:32 [engine.py:411]                   ^^^^^^^^^^^^^^^^^^^^^^^^^^
ERROR 03-13 09:42:32 [engine.py:411]   File "/vllm/lib/python3.12/site-packages/vllm-0.7.4.dev389+g72e4bd5e.xpu-py3.12.egg/vllm/engine/llm_engine.py", line 274, in __init__
ERROR 03-13 09:42:32 [engine.py:411]     self.model_executor = executor_class(vllm_config=vllm_config, )
ERROR 03-13 09:42:32 [engine.py:411]                           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
ERROR 03-13 09:42:32 [engine.py:411]   File "/vllm/lib/python3.12/site-packages/vllm-0.7.4.dev389+g72e4bd5e.xpu-py3.12.egg/vllm/executor/executor_base.py", line 52, in __init__
ERROR 03-13 09:42:32 [engine.py:411]     self._init_executor()
ERROR 03-13 09:42:32 [engine.py:411]   File "/vllm/lib/python3.12/site-packages/vllm-0.7.4.dev389+g72e4bd5e.xpu-py3.12.egg/vllm/executor/uniproc_executor.py", line 46, in _init_executor
ERROR 03-13 09:42:32 [engine.py:411]     self.collective_rpc("init_device")
ERROR 03-13 09:42:32 [engine.py:411]   File "/vllm/lib/python3.12/site-packages/vllm-0.7.4.dev389+g72e4bd5e.xpu-py3.12.egg/vllm/executor/uniproc_executor.py", line 56, in collective_rpc
ERROR 03-13 09:42:32 [engine.py:411]     answer = run_method(self.driver_worker, method, args, kwargs)
ERROR 03-13 09:42:32 [engine.py:411]              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
ERROR 03-13 09:42:32 [engine.py:411]   File "/vllm/lib/python3.12/site-packages/vllm-0.7.4.dev389+g72e4bd5e.xpu-py3.12.egg/vllm/utils.py", line 2238, in run_method
ERROR 03-13 09:42:32 [engine.py:411]     return func(*args, **kwargs)
ERROR 03-13 09:42:32 [engine.py:411]            ^^^^^^^^^^^^^^^^^^^^^
ERROR 03-13 09:42:32 [engine.py:411]   File "/vllm/lib/python3.12/site-packages/vllm-0.7.4.dev389+g72e4bd5e.xpu-py3.12.egg/vllm/worker/worker_base.py", line 604, in init_device
ERROR 03-13 09:42:32 [engine.py:411]     self.worker.init_device()  # type: ignore
ERROR 03-13 09:42:32 [engine.py:411]     ^^^^^^^^^^^^^^^^^^^^^^^^^
ERROR 03-13 09:42:32 [engine.py:411]   File "/vllm/lib/python3.12/site-packages/vllm-0.7.4.dev389+g72e4bd5e.xpu-py3.12.egg/vllm/worker/xpu_worker.py", line 82, in init_device
ERROR 03-13 09:42:32 [engine.py:411]     self.init_worker_distributed_environment()
ERROR 03-13 09:42:32 [engine.py:411]   File "/vllm/lib/python3.12/site-packages/vllm-0.7.4.dev389+g72e4bd5e.xpu-py3.12.egg/vllm/worker/xpu_worker.py", line 180, in init_worker_distributed_environment
ERROR 03-13 09:42:32 [engine.py:411]     torch.distributed.all_reduce(torch.zeros(1).xpu())
ERROR 03-13 09:42:32 [engine.py:411]   File "/vllm/lib/python3.12/site-packages/torch/distributed/c10d_logger.py", line 81, in wrapper
ERROR 03-13 09:42:32 [engine.py:411]     return func(*args, **kwargs)
ERROR 03-13 09:42:32 [engine.py:411]            ^^^^^^^^^^^^^^^^^^^^^
ERROR 03-13 09:42:32 [engine.py:411]   File "/vllm/lib/python3.12/site-packages/torch/distributed/distributed_c10d.py", line 2806, in all_reduce
ERROR 03-13 09:42:32 [engine.py:411]     work = group.allreduce([tensor], opts)
ERROR 03-13 09:42:32 [engine.py:411]            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
ERROR 03-13 09:42:32 [engine.py:411] RuntimeError: oneCCL: ze_fd_manager.cpp:214 fill_device_fds: EXCEPTION: open failed: fd: -1, errno: No such file or directory
Process SpawnProcess-1:
Traceback (most recent call last):
  File "/root/.local/share/uv/python/cpython-3.12.9-linux-x86_64-gnu/lib/python3.12/multiprocessing/process.py", line 314, in _bootstrap
    self.run()
  File "/root/.local/share/uv/python/cpython-3.12.9-linux-x86_64-gnu/lib/python3.12/multiprocessing/process.py", line 108, in run
    self._target(*self._args, **self._kwargs)
  File "/vllm/lib/python3.12/site-packages/vllm-0.7.4.dev389+g72e4bd5e.xpu-py3.12.egg/vllm/engine/multiprocessing/engine.py", line 413, in run_mp_engine
    raise e
  File "/vllm/lib/python3.12/site-packages/vllm-0.7.4.dev389+g72e4bd5e.xpu-py3.12.egg/vllm/engine/multiprocessing/engine.py", line 402, in run_mp_engine
    engine = MQLLMEngine.from_engine_args(engine_args=engine_args,
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/vllm/lib/python3.12/site-packages/vllm-0.7.4.dev389+g72e4bd5e.xpu-py3.12.egg/vllm/engine/multiprocessing/engine.py", line 125, in from_engine_args
    return cls(ipc_path=ipc_path,
           ^^^^^^^^^^^^^^^^^^^^^^
  File "/vllm/lib/python3.12/site-packages/vllm-0.7.4.dev389+g72e4bd5e.xpu-py3.12.egg/vllm/engine/multiprocessing/engine.py", line 77, in __init__
    self.engine = LLMEngine(*args, **kwargs)
                  ^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/vllm/lib/python3.12/site-packages/vllm-0.7.4.dev389+g72e4bd5e.xpu-py3.12.egg/vllm/engine/llm_engine.py", line 274, in __init__
    self.model_executor = executor_class(vllm_config=vllm_config, )
                          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/vllm/lib/python3.12/site-packages/vllm-0.7.4.dev389+g72e4bd5e.xpu-py3.12.egg/vllm/executor/executor_base.py", line 52, in __init__
    self._init_executor()
  File "/vllm/lib/python3.12/site-packages/vllm-0.7.4.dev389+g72e4bd5e.xpu-py3.12.egg/vllm/executor/uniproc_executor.py", line 46, in _init_executor
    self.collective_rpc("init_device")
  File "/vllm/lib/python3.12/site-packages/vllm-0.7.4.dev389+g72e4bd5e.xpu-py3.12.egg/vllm/executor/uniproc_executor.py", line 56, in collective_rpc
    answer = run_method(self.driver_worker, method, args, kwargs)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/vllm/lib/python3.12/site-packages/vllm-0.7.4.dev389+g72e4bd5e.xpu-py3.12.egg/vllm/utils.py", line 2238, in run_method
    return func(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^
  File "/vllm/lib/python3.12/site-packages/vllm-0.7.4.dev389+g72e4bd5e.xpu-py3.12.egg/vllm/worker/worker_base.py", line 604, in init_device
    self.worker.init_device()  # type: ignore
    ^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/vllm/lib/python3.12/site-packages/vllm-0.7.4.dev389+g72e4bd5e.xpu-py3.12.egg/vllm/worker/xpu_worker.py", line 82, in init_device
    self.init_worker_distributed_environment()
  File "/vllm/lib/python3.12/site-packages/vllm-0.7.4.dev389+g72e4bd5e.xpu-py3.12.egg/vllm/worker/xpu_worker.py", line 180, in init_worker_distributed_environment
    torch.distributed.all_reduce(torch.zeros(1).xpu())
  File "/vllm/lib/python3.12/site-packages/torch/distributed/c10d_logger.py", line 81, in wrapper
    return func(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^
  File "/vllm/lib/python3.12/site-packages/torch/distributed/distributed_c10d.py", line 2806, in all_reduce
    work = group.allreduce([tensor], opts)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
RuntimeError: oneCCL: ze_fd_manager.cpp:214 fill_device_fds: EXCEPTION: open failed: fd: -1, errno: No such file or directory
Traceback (most recent call last):
  File "<frozen runpy>", line 198, in _run_module_as_main
  File "<frozen runpy>", line 88, in _run_code
  File "/vllm/lib/python3.12/site-packages/vllm-0.7.4.dev389+g72e4bd5e.xpu-py3.12.egg/vllm/entrypoints/openai/api_server.py", line 992, in <module>
    uvloop.run(run_server(args))
  File "/vllm/lib/python3.12/site-packages/uvloop/__init__.py", line 109, in run
    return __asyncio.run(
           ^^^^^^^^^^^^^^
  File "/root/.local/share/uv/python/cpython-3.12.9-linux-x86_64-gnu/lib/python3.12/asyncio/runners.py", line 195, in run
    return runner.run(main)
           ^^^^^^^^^^^^^^^^
  File "/root/.local/share/uv/python/cpython-3.12.9-linux-x86_64-gnu/lib/python3.12/asyncio/runners.py", line 118, in run
    return self._loop.run_until_complete(task)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "uvloop/loop.pyx", line 1518, in uvloop.loop.Loop.run_until_complete
  File "/vllm/lib/python3.12/site-packages/uvloop/__init__.py", line 61, in wrapper
    return await main
           ^^^^^^^^^^
  File "/vllm/lib/python3.12/site-packages/vllm-0.7.4.dev389+g72e4bd5e.xpu-py3.12.egg/vllm/entrypoints/openai/api_server.py", line 947, in run_server
    async with build_async_engine_client(args) as engine_client:
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/root/.local/share/uv/python/cpython-3.12.9-linux-x86_64-gnu/lib/python3.12/contextlib.py", line 210, in __aenter__
    return await anext(self.gen)
           ^^^^^^^^^^^^^^^^^^^^^
  File "/vllm/lib/python3.12/site-packages/vllm-0.7.4.dev389+g72e4bd5e.xpu-py3.12.egg/vllm/entrypoints/openai/api_server.py", line 139, in build_async_engine_client
    async with build_async_engine_client_from_engine_args(
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/root/.local/share/uv/python/cpython-3.12.9-linux-x86_64-gnu/lib/python3.12/contextlib.py", line 210, in __aenter__
    return await anext(self.gen)
           ^^^^^^^^^^^^^^^^^^^^^
  File "/vllm/lib/python3.12/site-packages/vllm-0.7.4.dev389+g72e4bd5e.xpu-py3.12.egg/vllm/entrypoints/openai/api_server.py", line 233, in build_async_engine_client_from_engine_args
    raise RuntimeError(
RuntimeError: Engine process failed to start. See stack trace for the root cause.
```

### Before submitting a new issue...

- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.
open
2025-03-13T09:47:58Z
2025-03-24T15:55:18Z
https://github.com/vllm-project/vllm/issues/14747
[ "installation" ]
HumerousGorgon
10
jwkvam/bowtie
plotly
207
query params
Use Case:

1. Allow deeplinking and sharing links.

- [ ] for appropriate controllers, option to use query param
- [ ] when loading page, if there are query params load them
- [ ] query params should take precedence over stored state
open
2018-02-20T21:27:48Z
2018-09-23T04:27:25Z
https://github.com/jwkvam/bowtie/issues/207
[ "enhancement" ]
jwkvam
0
Gozargah/Marzban
api
1,600
Update notification and one-click update via web interface.
I would like to be notified of a new version release directly in the web interface and the ability to update at the click of a button from the same. For example, add an icon here to symbolize an available update and when you click on it you will see the new available version, a link to the changes and a button to update. ![image](https://github.com/user-attachments/assets/ca1c7515-9b8f-4b74-903b-28491236112e)
closed
2025-01-12T07:32:20Z
2025-01-12T11:29:16Z
https://github.com/Gozargah/Marzban/issues/1600
[ "Duplicate" ]
OwnerLink
2
pytorch/pytorch
numpy
149,194
LSTM slow on PackedSequence
### 🐛 Describe the bug

Using LSTM with `PackedSequence` input is very slow. This effect is extreme at high sequence lengths, see the table below. Given that PackedSequence is the only way to get correct output and state for a sequence with non-homogeneous length, I think this is a big challenge in usability of RNNs. From less detailed experiments, a similar slowdown occurred for GRU. Below is a script that reproduces it, both on GPU and CPU. It has commented sections for plotting and profiling.

Here is how much slower using PackedSequence is:

| L | Packed / LSTM Forward (%) | Packed / LSTM Backward (%) |
|------|---------------------------|----------------------------|
| 10 | 526.90 | 277.19 |
| 20 | 739.90 | 460.88 |
| 50 | 1162.63 | 381.99 |
| 100 | 1506.77 | 395.81 |
| 200 | 2300.32 | 590.51 |
| 500 | 5967.95 | 1715.68 |
| 1000 | 9583.25 | 2793.80 |
| 2000 | 10983.58 | 5242.34 |
| 4000 | 11384.78 | 8090.40 |

```python
import time

import torch
import torch.nn as nn
import numpy as np
from functools import lru_cache


# Define the LSTM model
class SimpleLSTM(nn.Module):
    def __init__(self, input_size, hidden_size):
        super().__init__()
        self.lstm = nn.LSTM(input_size, hidden_size, num_layers=2)

    def forward(self, x):
        # x shape: (sequence_length, batch_size, input_size)
        out, _ = self.lstm(x)
        return out


from torch.nn.utils.rnn import pack_padded_sequence, pad_packed_sequence


@lru_cache
def get_triangular_lengths(B: int, T: int) -> torch.Tensor:
    return torch.from_numpy(np.linspace(1, T, num=B).round().astype(np.int64))


class PackedLSTM(nn.Module):
    def __init__(self, input_size: int, hidden_size: int):
        super().__init__()
        self.lstm = nn.LSTM(input_size, hidden_size, num_layers=2)

    def forward(self, x: torch.Tensor):
        # x shape: (sequence_length, batch_size, input_size)
        T, B, D = x.shape
        # lengths shape: (batch_size,)
        lengths = get_triangular_lengths(B, T)
        packed_x = pack_padded_sequence(x, lengths.cpu(), enforce_sorted=False)
        packed_out, _ = self.lstm(packed_x)
        out, _ = pad_packed_sequence(packed_out)
        return out


def benchmark(lstm_cls, input_size, hidden_size, batch_size, seq_len, quiet: bool = False):
    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
    model = lstm_cls(input_size, hidden_size).to(device)
    model.train()  # Set the model to training mode
    loss_fn = nn.MSELoss()

    # Generate random input data and target
    n_repeats = round((100_000 / seq_len) ** (3 / 4)) + 1
    fwd = []
    bckwd = []
    for i in range(n_repeats):
        input_data = torch.randn(seq_len, batch_size, input_size, device=device)
        target = torch.randn(
            seq_len, batch_size, hidden_size, device=device
        )  # Random target for loss computation

        # Measure the time taken for a forward pass
        start_time = time.time()
        output = model(input_data)
        forward_time = time.time() - start_time
        fwd.append(forward_time)

        # Measure the time taken for a backward pass
        loss = loss_fn(output, target)  # Compute loss
        model.zero_grad()
        start_time = time.time()
        loss.backward()
        backward_time = time.time() - start_time
        bckwd.append(backward_time)

    # Print the results
    if not quiet:
        print(
            f"{lstm_cls.__name__} on {device}: Seq Length: {seq_len}, Forward: {forward_time:.5f} seconds, Backward: {backward_time:.5f} seconds"
        )
    return sum(fwd) / n_repeats, sum(bckwd) / n_repeats


# Parameters
input_size = 16  # Number of input features
hidden_size = 128  # Number of LSTM units
batch_size = 32  # Number of sequences to process in parallel
sequence_lengths = [
    10,
    20,
    50,
    100,
    200,
    500,
    1000,
    2000,
    4000,
]  # Different sequence lengths to benchmark

# Run the benchmark
for cls in [PackedLSTM, SimpleLSTM]:
    forward_times = []
    backward_times = []
    for seq_len in sequence_lengths:
        benchmark(cls, input_size, hidden_size, batch_size, seq_len, quiet=True)
        forward_time, backward_time = benchmark(
            cls, input_size, hidden_size, batch_size, seq_len
        )
        forward_times.append(forward_time)
        backward_times.append(backward_time)
    print(f"forward_times_{cls.__name__} = {forward_times}")
    print(f"backward_times_{cls.__name__} = {backward_times}")

# # Plotting the results
# plt.figure(figsize=(10, 5))
# plt.plot(sequence_lengths, forward_times, label="Forward Time", marker="o")
# plt.plot(sequence_lengths, backward_times, label="Backward Time", marker="o")
# plt.xlabel("Sequence Length")
# plt.ylabel("Time (seconds)")
# plt.title(f"{cls.__name__} Forward and Backward Pass Time vs Sequence Length")
# plt.legend()
# plt.grid()
# plt.ylim(-10, 185)
# # plt.xscale('log')  # Use logarithmic scale for better visualization
# # plt.yscale('log')  # Use logarithmic scale for better visualization
# plt.show()

# import cProfile
# import io
# import pstats
#
#
# def profile_function(f, *args, **kwargs):
#     pr = cProfile.Profile()
#     pr.enable()
#     result = f(*args, **kwargs)
#     pr.disable()
#
#     s = io.StringIO()
#     ps = pstats.Stats(pr, stream=s).sort_stats("cumulative")
#     ps.print_stats()
#
#     print(s.getvalue())  # Print the profiling results
#     return result  # Return the original function result
#
#
# profile_function(benchmark, PackedLSTM, input_size, hidden_size, batch_size, 1_000)
```

### Observations, Implications

I see multiple posts about this in forums and stack overflow:

https://stackoverflow.com/questions/72073853/pytorch-pack-padded-sequence-is-extremely-slow
https://discuss.pytorch.org/t/gru-training-very-slow-with-sequence-packing/192222
https://discuss.pytorch.org/t/pytorch-pack-padded-sequence-is-really-slow/150508

It must be that most people a) don't use PackedSequence in the first place, or b) didn't use big values of T in their timeseries and didn't mind the ~3-5 times slowdown for small T. Otherwise this is a big blocker.

I'm using PackedSequence to deal with sometimes short sequences in a replay buffer in an RL context. I would just use forward on the padded sequence, but then I can't get the correct final state. The problem is that in RL, I want to get the final state on history, and then do a single step forward from that step on different possible inputs (critic Q(s,a) in SAC, for example).

Profiling has shown that most time is spent in forward / backward methods, not in packing / unpacking.

### Versions

Collecting environment information...
PyTorch version: 2.6.0+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A

OS: Red Hat Enterprise Linux release 8.10 (Ootpa) (x86_64)
GCC version: (GCC) 8.5.0 20210514 (Red Hat 8.5.0-22)
Clang version: Could not collect
CMake version: Could not collect
Libc version: glibc-2.28

Python version: 3.11.7 (main, Dec 15 2023, 18:12:31) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-4.18.0-553.22.1.el8_10.x86_64-x86_64-with-glibc2.28
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA A100-SXM4-80GB MIG 1g.10gb Device 0:
Nvidia driver version: 560.35.03
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True

CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 96
On-line CPU(s) list: 0-95
Thread(s) per core: 1
Core(s) per socket: 48
Socket(s): 2
NUMA node(s): 2
Vendor ID: AuthenticAMD
CPU family: 25
Model: 1
Model name: AMD EPYC 7643 48-Core Processor
Stepping: 1
CPU MHz: 2300.000
CPU max MHz: 3640.9170
CPU min MHz: 1500.0000
BogoMIPS: 4591.43
Virtualization: AMD-V
L1d cache: 32K
L1i cache: 32K
L2 cache: 512K
L3 cache: 32768K
NUMA node0 CPU(s): 0-47
NUMA node1 CPU(s): 48-95
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf pni pclmulqdq monitor ssse3 fma cx16 pcid sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 invpcid_single hw_pstate ssbd mba ibrs ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 invpcid cqm rdt_a rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local clzero irperf xsaveerptr wbnoinvd amd_ppin brs arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold v_vmsave_vmload vgif v_spec_ctrl umip pku ospke vaes vpclmulqdq rdpid overflow_recov succor smca sme sev sev_es

Versions of relevant libraries:
[pip3] mypy==1.8.0
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.26.3
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-cusparselt-cu12==0.6.2
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] optree==0.10.0
[pip3] torch==2.6.0
[pip3] triton==3.2.0
[conda] numpy 1.26.3 pypi_0 pypi
[conda] nvidia-cublas-cu12 12.4.5.8 pypi_0 pypi
[conda] nvidia-cuda-cupti-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cuda-nvrtc-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cuda-runtime-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cudnn-cu12 9.1.0.70 pypi_0 pypi
[conda] nvidia-cufft-cu12 11.2.1.3 pypi_0 pypi
[conda] nvidia-curand-cu12 10.3.5.147 pypi_0 pypi
[conda] nvidia-cusolver-cu12 11.6.1.9 pypi_0 pypi
[conda] nvidia-cusparse-cu12 12.3.1.170 pypi_0 pypi
[conda] nvidia-cusparselt-cu12 0.6.2 pypi_0 pypi
[conda] nvidia-nccl-cu12 2.21.5 pypi_0 pypi
[conda] nvidia-nvjitlink-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-nvtx-cu12 12.4.127 pypi_0 pypi
[conda] optree 0.10.0 pypi_0 pypi
[conda] torch 2.6.0 pypi_0 pypi
[conda] triton 3.2.0 pypi_0 pypi

cc @mikaylagawarecki
open
2025-03-14T14:01:34Z
2025-03-17T15:20:08Z
https://github.com/pytorch/pytorch/issues/149194
[ "module: rnn", "triaged", "topic: performance" ]
ikamensh
0
neuml/txtai
nlp
243
Add notebook for Embeddings SQL functions
Add notebook for Embeddings SQL functions
closed
2022-03-09T12:34:35Z
2022-03-09T12:36:43Z
https://github.com/neuml/txtai/issues/243
[]
davidmezzetti
0
deepfakes/faceswap
deep-learning
1,017
Check failed: vec.size() == NDIMS
05/03/2020 20:04:39 INFO No existing state file found. Generating.
05/03/2020 20:04:42 INFO Creating new 'original' model in folder: 'E:\AiLearning\project\model'
05/03/2020 20:04:42 INFO Loading Trainer from Original plugin...
05/03/2020 20:04:42 INFO Enabled TensorBoard Logging
2020-05-03 20:05:01.916652: F .\tensorflow/core/util/bcast.h:111] Check failed: vec.size() == NDIMS (1 vs. 2)
Process exited.

why? help me
closed
2020-05-03T12:08:43Z
2020-08-03T07:42:08Z
https://github.com/deepfakes/faceswap/issues/1017
[]
chunxingque
1
jupyter-widgets-contrib/ipycanvas
jupyter
58
CanvasView and clear_rect
Clearing a rectangle is not visible in previous views of the widget. To be more precise construct a canvas widget and show it:

```
from ipycanvas import Canvas

canvas = Canvas(size=(200, 200))
canvas.fill_rect(0, 0, 100, 100)
canvas
```

Now clear a rectangle:

```
canvas.clear_rect(0, 0, 20, 20)
```

This does not result in a change of the visible canvas widget. Showing another instance of the widget by putting

```
canvas
```

in a new cell shows the cleared rectangle however.

Is this intentional? It seems to be due to the fact that the views are updated by drawing the model canvas on them, but clearing a rectangle makes it transparent. Putting `this.clear()` before the call to `drawImage` in `updateCanvas` fixes this for me.
closed
2019-12-20T15:56:44Z
2019-12-22T09:46:24Z
https://github.com/jupyter-widgets-contrib/ipycanvas/issues/58
[]
JeremiasE
5
jupyterlab/jupyter-ai
jupyter
865
Remove "Replace selection" checkbox
### Problem

Same as #864. Current "replace selection" checkbox implementation is likely broken by #859.

### Proposed Solution

- Remove the "replace selection" checkbox
- Allow for text selection to be replaced if a text selection is active in the code action toolbar (which currently only allows for the active cell to be replaced).
- Implement a message-global "Replace selection" action within a hamburger menu at the top of each message (e.g. next to the timestamp). This should observe the same behavior as the Replace button in the code action toolbar, i.e. replace text if a text selection is active, replace cell if there exists an active cell, and render as disabled if neither condition is satisfied.
closed
2024-06-30T13:53:42Z
2024-07-12T23:19:43Z
https://github.com/jupyterlab/jupyter-ai/issues/865
[ "enhancement", "priority" ]
dlqqq
0
tensorflow/tensor2tensor
machine-learning
1,232
How much more memory and computation does Universal Transformer require than vanilla Transformer for translation?
If I'm not misunderstanding, Universal Transformer uses bigger but shared encoder and decoder blocks. So, with the same number of parameters for the whole model, each Universal Transformer block will need much more memory and computation than a vanilla Transformer with the same number of layers (or recurrent steps for UT). Can you share the batch size for Universal Transformer and vanilla Transformer during training on the same GPU, and the speed of inference? @MostafaDehghani
closed
2018-11-17T04:05:56Z
2018-11-21T09:45:26Z
https://github.com/tensorflow/tensor2tensor/issues/1232
[]
wenyong-h
2
jpadilla/django-rest-framework-jwt
django
104
Should getting a token require POST?
This is mostly academic, but retrieving a JWT token has no side-effects on the server, so I reckon it should be possible with a GET request. Thoughts?
closed
2015-04-24T05:53:43Z
2015-04-24T12:30:45Z
https://github.com/jpadilla/django-rest-framework-jwt/issues/104
[]
AlexHill
1
mwaskom/seaborn
pandas
2,841
RFE: move from husl to hsluv
Looks like the `husl` module is no longer maintained (last release was in 2015) and `hsluv` can be used instead:

https://pypi.org/project/hsluv/
https://github.com/hsluv/hsluv-python/
closed
2022-06-08T17:15:27Z
2022-07-16T12:30:24Z
https://github.com/mwaskom/seaborn/issues/2841
[]
kloczek
22
zwczou/weixin-python
flask
61
AttributeError: invalid attribute "notify_url"
File "/home/tzy/venvs/wxpay/lib/python2.7/site-packages/weixin/msg.py", line 216, in __getattr__
    raise AttributeError('invalid attribute "' + key + '"')
AttributeError: invalid attribute "notify_url"

How can this problem be solved?

python == 2.7.5, using the Flask framework. The error is raised when calling the pay_jsapi interface.
closed
2020-03-03T04:28:37Z
2020-03-30T02:44:34Z
https://github.com/zwczou/weixin-python/issues/61
[]
Near-Tam
1
sinaptik-ai/pandas-ai
data-visualization
1,665
MySQL columns type and result type problems
# Issue Title: pandasai Chat Function Returns Non-numeric Value for Expected Numeric Result

## Description

When using the `pai.chat` function to query data, the function expects a numeric value but returns a non-numeric value, resulting in an `InvalidOutputValueMismatch` exception. Additionally, I would like to know if the column type definitions in pandasai can be consistent with MySQL types such as `varchar(32)` and `datetime`.

## Code Example

```python
import pandasai as pai
from pandasai_local import LocalLLM
import matplotlib
import matplotlib.pyplot as plt
import logging

logging.basicConfig(
    level=logging.INFO,
    format='%(asctime)s - %(name)s - %(levelname)s - %(message)s',
    handlers=[
        logging.FileHandler('app.log', encoding='utf-8'),
        logging.StreamHandler()
    ]
)

lm_studio_llm = LocalLLM(api_base="http://localhost:11434/v1", model="deepseek-coder-v2:16b", api_key="ollama")
pai.config.set({"llm": lm_studio_llm})

ndtzeqk = pai.load("example/sys-mkt-tender-count")
yggrxx = pai.load("example/hr-employee-personal")
zbxxdjd = pai.load("example/sys-mkt-bidding-wf")
gcxmxx = pai.load("example/sys-mkt-project")

respose = pai.chat("业主合同管理中拓展额是多少", ndtzeqk, yggrxx, zbxxdjd, gcxmxx)
print(respose)
```

## Error Message

/home/alan/anaconda3/envs/d_a2/bin/python /home/alan/DataE_v3/api_sql_v3.py
Dataset loaded successfully.
Dataset loaded successfully.
Dataset loaded successfully.
Dataset loaded successfully.
Dataset loaded successfully.
Traceback (most recent call last):
  File "/home/alan/DataE_v3/api_sql_v3.py", line 27, in <module>
    respose = pai.chat("豪方东园燃气工程业主合同的建安拓展额是多少?结果以数值类型输出", ndtzeqk, yggrxx, zbxxdjd, gcxmxx, yzhtgl)
  ...
  File "/home/alan/anaconda3/envs/d_a2/lib/python3.11/site-packages/pandasai/core/response/parser.py", line 43, in _validate_response
    raise InvalidOutputValueMismatch(
pandasai.exceptions.InvalidOutputValueMismatch: Invalid output: Expected a numeric value for result type 'number', but received a non-numeric value.

## Additional Questions

Can the type definition of columns in pandasai be consistent with MySQL types such as varchar(32) and datetime?

## Expected Behavior

The `pai.chat` function should return a numeric value when a numeric result is expected, and column type definitions should be compatible with MySQL types.

## Possible Solution

I'm not sure whether the issue is with pandasai or the llm.
open
2025-03-10T09:35:00Z
2025-03-10T12:25:21Z
https://github.com/sinaptik-ai/pandas-ai/issues/1665
[]
Alan-zhong
2
gradio-app/gradio
data-science
10,661
ModuleNotFoundError: No module named 'websockets.asyncio' when importing supabase
### Describe the bug I am using supabase as the backend for my project and hosting it on Hugging Face. It was working about a month ago, but now I'm getting this runtime error when I try to import supabase: "ModuleNotFoundError: No module named 'websockets.asyncio'". I have added asyncio and websockets to my requirements.txt file. I tried to add websockets.asycio to the requirements file as well, but that caused a build error. I also did a full factory rebuild, and that didn't help either. Note that everything works when I run the code locally; the problem is only when I run it on Hugging Face. Thanks in advance! ### Have you searched existing issues? 🔎 - [x] I have searched and found no existing issues ### Reproduction ```python import gradio as gr import supabase ``` ### Screenshot _No response_ ### Logs ```shell ===== Application Startup at 2025-02-23 19:10:40 ===== Traceback (most recent call last): File "/home/user/app/workshops.py", line 11, in <module> import supabase File "/usr/local/lib/python3.10/site-packages/supabase/__init__.py", line 13, in <module> from realtime import AuthorizationError, NotConnectedError File "/usr/local/lib/python3.10/site-packages/realtime/__init__.py", line 9, in <module> from ._async.client import AsyncRealtimeClient File "/usr/local/lib/python3.10/site-packages/realtime/_async/client.py", line 13, in <module> from websockets.asyncio.client import ClientConnection, connect ModuleNotFoundError: No module named 'websockets.asyncio' Traceback (most recent call last): File "/home/user/app/workshops.py", line 11, in <module> import supabase File "/usr/local/lib/python3.10/site-packages/supabase/__init__.py", line 13, in <module> from realtime import AuthorizationError, NotConnectedError File "/usr/local/lib/python3.10/site-packages/realtime/__init__.py", line 9, in <module> from ._async.client import AsyncRealtimeClient File "/usr/local/lib/python3.10/site-packages/realtime/_async/client.py", line 13, in <module> from 
websockets.asyncio.client import ClientConnection, connect ModuleNotFoundError: No module named 'websockets.asyncio' ``` ### System Info ```shell As mentioned earlier, the issue is only when I run the code on Hugging Face, which is presumably running the latest version of Gradio. ``` ### Severity Blocking usage of gradio
closed
2025-02-23T19:16:44Z
2025-03-03T02:53:53Z
https://github.com/gradio-app/gradio/issues/10661
[ "bug", "needs repro" ]
loganpearce17
5
jupyterhub/zero-to-jupyterhub-k8s
jupyter
3,463
Jupyterhub stable docker image not working out of the box
Hello, Apologies in advance for polluting this repo with my request but it sounds like it should work out of the box. I'm trying to do a very basic setup on the latest version of jupyterhub and I'm stuck on something very simple. But as this is the standard docker image, I'm like what's going on... it should work right? I'm using minikube on a MacBook Pro M1 with docker image: jupyterhub/k8s-hub:3.3.7 I'm getting this: ``` File "/usr/local/etc/jupyterhub/jupyterhub_config.py", line 9, in <module> from kubernetes import client ModuleNotFoundError: No module named 'kubernetes' ``` Now if I add: ``` RUN pip3 install kubernetes ``` in the docker file I then get: ``` File "/usr/local/etc/jupyterhub/z2jh.py", line 6, in <module> from collections import Mapping ImportError: cannot import name 'Mapping' from 'collections' (/usr/local/lib/python3.11/collections/__init__.py) ``` And I'm stuck. I thought it was a Python version issue but clearly that image specifies Python 3.11 in its configuration. Let me know if I'm missing something obvious. Is 3.3.7 the latest stable release? 4.0.0 tags are all dev it seems... Thank you for your support. ``` python3 --version Python 3.11.9 ```
closed
2024-07-19T08:16:27Z
2024-07-21T22:39:31Z
https://github.com/jupyterhub/zero-to-jupyterhub-k8s/issues/3463
[ "support" ]
ThomasLabstep
3
MycroftAI/mycroft-core
nlp
2,656
skill keep reloading indefinitely if there are files with a future date
Due to a temporary problem in my system, I ended up with a skill whose `__init__.py` modification time was set in the future. This caused that skill to be constantly reloaded by mycroft-core, and unloaded just after loading completed. It took some hours of debugging to understand this was actually the problem. Perhaps skills containing files with modification dates in the future should just be prevented from loading, with a debug log entry about it?
closed
2020-08-12T14:48:15Z
2020-09-22T11:25:36Z
https://github.com/MycroftAI/mycroft-core/issues/2656
[]
notmart
1
mlfoundations/open_clip
computer-vision
918
Error loading state_dict using pre-trained checkpoint with CustomTextCLIP model
Dear developers: I recently reinstalled the latest environment for open_clip. When I tried to train new data using a pre-trained checkpoint and model, I encountered an error. The error message indicates that almost all keys in the state_dict are missing or not recognized. Training parameters are: train_command = [ 'python', '-m', 'open_clip_train.main', '--save-frequency', '1', '--zeroshot-frequency', '1', '--report-to', 'tensorboard', '--train-data', train_csv, '--val-data', val_csv, '--csv-img-key', 'img_path', '--csv-caption-key', 'label', '--warmup', '1000', '--batch-size', '32', '--lr', '5e-6', '--wd', '0.1', '--epochs', '1000', '--workers', '4', '--model','hf-hub:microsoft/BiomedCLIP-PubMedBERT_256-vit_base_patch16_224', '--csv-separator', ',', '--pretrained','../models/open_clip-main/logs/-model_coca_ViT-L-14-lr_5e-06-b_64-j_8-p_amp/checkpoints/epoch_20.pt', '--val-frequency', '10' ] subprocess.run(train_command) Error Message: RuntimeError: Error(s) in loading state_dict for CustomTextCLIP: Missing key(s) in state_dict: "visual.trunk.cls_token", "visual.trunk.pos_embed", "visual.trunk.patch_embed.proj.weight", "visual.trunk.patch_embed.proj.bias", "visual.trunk.blocks.0.norm1.weight", "visual.trunk.blocks.0.norm1.bias", "visual.trunk.blocks.0.attn.qkv.weight", "visual.trunk.blocks.0.attn.qkv.bias", "visual.trunk.blocks.0.attn.proj.weight", "visual.trunk.blocks.0.attn.proj.bias", "visual.trunk.blocks.0.norm2.weight", "visual.trunk.blocks.0.norm2.bias", "visual.trunk.blocks.0.mlp.fc1.weight", "visual.trunk.blocks.0.mlp.fc1.bias", "visual.trunk.blocks.0.mlp.fc2.weight", "visual.trunk.blocks.0.mlp.fc2.bias", ... I don't know how it happened.
closed
2024-07-22T22:41:17Z
2024-07-23T19:36:43Z
https://github.com/mlfoundations/open_clip/issues/918
[]
cubense
2
ansible/ansible
python
84,794
delegate_to localhost listed as running on remote host in play recap
### Summary When I run a task with Ansible on a remote host and a second task locally with `delegate_to`, the summary of all tasks shows both tasks ran on the remote host and none on the local host. If I verify this, it shows that the location where the tasks are supposed to run is correct, but the play recap is partly misleading. The following playbook illustrates the issue ```yaml --- # file: playbook.yml - name: Test playbook hosts: all gather_facts: false tasks: - name: Run on the remote ansible.builtin.command: cmd: hostname register: _output_remote - name: Run locally ansible.builtin.command: cmd: hostname delegate_to: localhost register: _output_local - name: Output hostnames ansible.builtin.debug: msg: "Ran on {{ item }}" loop: - "{{ _output_remote['stdout'] }}" - "{{ _output_local['stdout'] }}" ``` If you run this playbook with `ansible-playbook -i remote_host.example.com, playbook.yml` the output will show three tasks being executed: 1. `Run on the remote` 2. `Run locally` 3. `Output hostnames` The output showing: ``` ``` Tasks 2 and 3 are supposed to run locally. Task 2 runs locally, showing the local host name `local_host`. Task 3 runs locally, since all use of `ansible.builtin.debug` automatically runs locally. ### Issue Type Bug Report ### Component Name default_callback ### Ansible Version ```console ansible --version ansible [core 2.18.2] [...] 
python version = 3.13.2 (main, Feb 4 2025, 00:00:00) [GCC 14.2.1 20250110 (Red Hat 14.2.1-7)] (/usr/bin/python3) jinja version = 3.1.5 libyaml = True ``` ### Configuration ```console # if using a version older than ansible-core 2.12 you should omit the '-t all' $ ansible-config dump --only-changed -t all ``` ### OS / Environment Fedora 40/41 ### Steps to Reproduce ```yaml --- # file: playbook.yml - name: Test playbook hosts: all gather_facts: false tasks: - name: Run on the remote ansible.builtin.command: cmd: hostname register: _output_remote - name: Run locally ansible.builtin.command: cmd: hostname delegate_to: localhost register: _output_local - name: Output hostnames ansible.builtin.debug: msg: "Ran on {{ item }}" loop: - "{{ _output_remote['stdout'] }}" - "{{ _output_local['stdout'] }}" ``` ### Expected Results I was expecting to see `local_host.local.int` being added to the play recap. The output of task 2 indicates that the target host `remote_host.example.com` has been changed to `localhost`, but the recap at the bottom misses it completely. There does not seem to be an attribution of locally delegated tasks to the local host within the recap. Other callback plugins with list by execution, time, etc seem to make this distinction, which then leads to contradicting information between the recap and the rest of the output. ### Actual Results ```console [...] TASK [Run on the remote] [...] changed: [remote_.host.example.com] TASK [Run locally] [...] changed: [remote_host.example.com -> localhost] TASK [Output hostnames] [....] ok: [remote_host.example.com] => (item=remote_host) => { "msg": "Ran on remote_host" } ok: [remote_host.example.com] => (item=local_host) => { "msg": "Ran on local_host.local.int" } PLAY RECAP ******************************************************************************************************************* remote_host.example.com : ok=3 changed=2 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 .... 
``` ### Code of Conduct - [x] I agree to follow the Ansible Code of Conduct
closed
2025-03-08T08:34:43Z
2025-03-18T08:01:22Z
https://github.com/ansible/ansible/issues/84794
[ "bug", "affects_2.18" ]
Spreadcat
3
graphdeco-inria/gaussian-splatting
computer-vision
921
How do I do backpropagation for depth?
We are conducting research based on 3DGS (submodules/diff-gaussian-rasterization/cuda_rasterization). I don't know how to do the backpropagation for depth here.
open
2024-08-03T02:40:11Z
2024-08-03T02:40:11Z
https://github.com/graphdeco-inria/gaussian-splatting/issues/921
[]
Kohwang
0
Farama-Foundation/PettingZoo
api
518
[Proposal] Adding type hints
Per [this](https://sethmlarson.dev/blog/2021-10-18/tests-arent-enough-case-study-after-adding-types-to-urllib3) recent article, we really should work to add type hints to Gym over time. This is something that can be worried about after the major upcoming changes are planned, but nonetheless this is important for a package of Gym's scale. Incremental contributions towards this end are very welcome. I'm also going to be worrying about getting this added to Gym.
closed
2021-10-18T21:56:04Z
2022-05-02T22:44:43Z
https://github.com/Farama-Foundation/PettingZoo/issues/518
[ "help wanted", "in progress" ]
jkterry1
0
miguelgrinberg/Flask-SocketIO
flask
1,512
failed: WebSocket is closed before the connection is established.
**Your question** I am writing a website in Flask, use gunicorn + nginx. To exchange data with the client, I use Flask-SocketIO. I get the error in console: ``` websocket.js:88 WebSocket connection to 'ws://185.151.245.211/socket.io/?EIO=4&transport=websocket&sid=leA-pXn1TBOM1dPMAAAp' failed: websocket.js:88 WebSocket connection to 'ws://185.151.245.211/socket.io/?EIO=4&transport=websocket&sid=IJqPh-L8-GzUCHaFAAAq' failed: websocket.js:205 WebSocket connection to 'ws://185.151.245.211/socket.io/?EIO=4&transport=websocket&sid=hz86wPr5hCgPmG1qAAAr' failed: WebSocket is closed before the connection is established. ``` **Please tell me what the problem is?** _wsgi.py_: ``` from webapp import app, socketio if __name__ == '__main__': socketio.run(app, logger=True, engineio_logger=True, use_reloader=False) ``` _gunicorn.conf.py_: ``` bind = '185.151.245.211:8000' max_requests = 1000 worker_class = "gevent" workers = 1 loglevel = "info" ``` _webapp/__init__.py_: ``` from gevent import monkey monkey.patch_all() import grpc.experimental.gevent grpc.experimental.gevent.init_gevent() from flask import Flask, session, request from config import MqttConfig, MailConfig, ProductionConfig from flask_sqlalchemy import SQLAlchemy from flask_migrate import Migrate from flask_mail import Mail from flask_script import Manager from flask_socketio import SocketIO from flask_mqtt import Mqtt from flask_login import LoginManager from flask_babel import lazy_gettext as _l from apscheduler.schedulers.gevent import GeventScheduler app = Flask(__name__) app.config.from_object(ProductionConfig) app.config.from_object(MqttConfig) app.config.from_object(MailConfig) db = SQLAlchemy(app) migrate = Migrate(app, db, render_as_batch=True) mail = Mail(app) mqtt = Mqtt(app) manager = Manager(app, db) login_manager = LoginManager(app) login_manager.login_view = 'auth' login_manager.login_message = _l("Необходимо авторизоваться для доступа к закрытой странице") login_manager.login_message_category = 
"error" scheduler = GeventScheduler() scheduler.start() socketio = SocketIO(app, async_mode='gevent') # Production Version import webapp.views from webapp import models if __name__ == "__main__": manager.run() ``` _profile.html_: ``` <html> <script src="https://cdn.socket.io/3.1.1/socket.io.min.js" integrity="sha384-gDaozqUvc4HTgo8iZjwth73C6dDDeOJsAgpxBcMpZYztUfjHXpzrpdrHRdVp8ySO" crossorigin="anonymous"></script> <script> document.addEventListener("DOMContentLoaded", function(event) { var socket = io.connect('http://' + document.domain + ':' + location.port); socket.on('connect', function() { console.log(socket.id); }); }); </script> </html> ``` _/etc/nginx/sites-available/projectnew_: ``` server { listen 80; server_name 185.151.245.211 www.185.151.245.211; client_max_body_size 75M; location / { include proxy_params; proxy_pass http://185.151.245.211:8000; } location /static { alias /home/sammy/projectnew/webapp/static; expires 30d; } location /socket.io { include proxy_params; proxy_http_version 1.1; proxy_buffering off; proxy_set_header Upgrade $http_upgrade; proxy_set_header Connection "Upgrade"; proxy_set_header Host $host; proxy_pass http://185.151.245.211:8000/socket.io; } } ``` _/etc/systemd/system/projectnew.service_: ``` [Unit] Description=Gunicorn instance to serve projectnew After=network.target [Service] User=sammy Group=www-data WorkingDirectory=/home/sammy/projectnew Environment="PATH=/home/sammy/projectnew/projectenv/bin" ExecStart=/home/sammy/projectnew/projectenv/bin/gunicorn wsgi:app [Install] WantedBy=multi-user.target ``` **Logs** _nginx error.log_: `2021/04/01 13:36:29 [error] 204428#204428: *7895 connect() failed (111: Connection refused) while connecting to upstream, client: 176.105.207.105, server: 185.151.245.211, request: "GET /profile/2 HTTP/1.1", upstream: "http://185.151.245.211:8000/profile/2", host: "185.151.245.211", referrer: "http://185.151.245.211/contacts"` _projectnew journal_: `Apr 01 13:54:37 toaa gunicorn[205658]: 
http://185.151.245.211 is not an accepted origin. (further occurrences of this error will be logged with level INFO)`
closed
2021-04-01T12:03:11Z
2021-04-01T13:51:46Z
https://github.com/miguelgrinberg/Flask-SocketIO/issues/1512
[ "question" ]
Yasha261998
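The last log line in the record above ("http://185.151.245.211 is not an accepted origin") points at the likely cause: recent Flask-SocketIO versions reject cross-origin connections unless the origin is explicitly allowed via the documented `cors_allowed_origins` parameter. A configuration sketch — the specific origin list is an assumption about this deployment:

```python
# webapp/__init__.py — allow the origin the browser actually connects from
socketio = SocketIO(
    app,
    async_mode='gevent',
    cors_allowed_origins=["http://185.151.245.211"],  # or "*" while debugging
)
```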
2
plotly/dash
jupyter
2,733
[BUG] dcc.Dropdown value does not update when an option is removed (regression from #1868)
**Describe your context** Please provide us your environment, so we can easily reproduce the issue. - replace the result of `pip list | grep dash` below ``` dash 2.13.0 ``` **Describe the bug** Regression #1868 When the options of a dcc.Dropdown are updated to remove a currently selected option, the UI updates to remove that value, but the value parameter does not update. **Expected behavior** Removing an option from options should also remove that option from value, that way it will be in sync with the UI. **Screenshots** Using the same code as #1868 but with version 2.13.0 we have: ![msedge_oxESf4Irhm](https://github.com/plotly/dash/assets/770605/79516863-9dd6-4e9b-86b0-32f9b6cc7138)
closed
2024-01-29T14:44:17Z
2024-04-09T14:12:46Z
https://github.com/plotly/dash/issues/2733
[ "regression", "bug", "sev-2" ]
Thuener
1
AirtestProject/Airtest
automation
467
Script execution fails after the phone screen is locked
@yimelia Run the following command on the command line: py -3.6 airtest run test2.air --device android:///10.0.2.55:7556?ori_method=ADBORI&&touch_method=ADBTOUCH --log When the phone screen is locked, the script runs abnormally and reports that the target cannot be found. Some phones (vivo X23) cannot be set to never lock the screen (30 minutes at most). How should the script be configured so that it unlocks the phone when the screen is locked (or automatically performs an unlock before running)? Error screenshot below: ![6](https://user-images.githubusercontent.com/26775694/61603796-9bff6a00-ac71-11e9-8dd6-26c64c79ad67.jpg)
open
2019-07-22T03:13:17Z
2019-07-22T03:32:25Z
https://github.com/AirtestProject/Airtest/issues/467
[ "bug", "module/yosemite" ]
ymdhtt
4
junyanz/pytorch-CycleGAN-and-pix2pix
deep-learning
1,584
Applying to a general image restoration task
Hello, how are you? Thanks for contributing to this project. I am going to use this method for a general image restoration task (low quality -> high quality). The low-quality images contain several kinds of degradation, such as blur and noise. Is it possible to apply this method to such a general image restoration task without any architecture change? Thanks
open
2023-06-20T11:37:48Z
2023-06-20T11:37:48Z
https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix/issues/1584
[]
rose-jinyang
0
microsoft/nni
deep-learning
5,134
torch.jit._trace.TracingCheckError: Tracing failed sanity checks! ERROR: Graphs differed across invocations!
**Describe the issue**: Hello, I'm using NNI to prune my model(DD3D https://github.com/TRI-ML/dd3d). And an error as shown in the title occurred when initializing a class named ModelSpeedup. After debugging, it is found that the torch.jit.trace API was called when the TorchGraph class was initialized. However, the DD3D model cannot use this interface. Is there any other solution. **Environment**: - NNI version: 2.8 - Training service (local|remote|pai|aml|etc): local - Client OS: - Server OS (for remote mode only): - Python version: 2.8 - PyTorch/TensorFlow version: torch 1.9.0+cu102 - Is conda/virtualenv/venv used?: conda - Is running in Docker?: no **Log message**: File "/home/yaoshw/anaconda3/envs/DD3D/lib/python3.7/site-packages/torch/jit/_trace.py", line 744, in trace _module_class, File "/home/yaoshw/anaconda3/envs/DD3D/lib/python3.7/site-packages/torch/jit/_trace.py", line 985, in trace_module _module_class, File "/home/yaoshw/anaconda3/envs/DD3D/lib/python3.7/site-packages/torch/autograd/grad_mode.py", line 28, in decorate_context return func(*args, **kwargs) File "/home/yaoshw/anaconda3/envs/DD3D/lib/python3.7/site-packages/torch/jit/_trace.py", line 521, in _check_trace raise TracingCheckError(*diag_info) torch.jit._trace.TracingCheckError: Tracing failed sanity checks! ERROR: Graphs differed across invocations! 
Graph diff: graph(%self.1 : __torch__.tridet.modeling.dd3d.core.DD3D, %batched_inputs : Tensor): %2 : __torch__.tridet.modeling.dd3d.fcos3d.FCOS3DHead = prim::GetAttr[name="fcos3d_head"](%self.1) %3 : __torch__.tridet.modeling.dd3d.fcos2d.FCOS2DHead = prim::GetAttr[name="fcos2d_head"](%self.1) %4 : __torch__.detectron2.modeling.backbone.fpn.FPN = prim::GetAttr[name="backbone"](%self.1) %60 : float = prim::Constant[value=2.](), scope: __module.backbone # /home/yaoshw/anaconda3/envs/DD3D/lib/python3.7/site-packages/torch/nn/functional.py:3690:0 - %61 : Tensor = prim::Constant[value={1e-05}](), scope: __module.backbone/__module.backbone.bottom_up/__module.backbone.bottom_up.base_layer/__module.backbone.bottom_up.base_layer.norm # /home/yaoshw/anaconda3/envs/DD3D/lib/python3.7/site-packages/detectron2/layers/batch_norm.py:48:0 ? ^^^^^^ ^ ^^^^^ ^^^^^^^^^ ^^^^^^^^^^^^^^^^^^^^^^^ + %61 : float = prim::Constant[value=0.10000000000000001](), scope: __module.backbone/__module.backbone.bottom_up/__module.backbone.bottom_up.base_layer/__module.backbone.bottom_up.base_layer.norm # /home/yaoshw/anaconda3/envs/DD3D/lib/python3.7/site-packages/torch/nn/functional.py:2282:0 ? ^^^^^ ^^ ^^^^^^^^^^^^^^^^ ^^^^^^^^^^^^^^^^^^^^^^^ ^ + - %62 : int = prim::Constant[value=-1](), scope: __module.backbone/__module.backbone.bottom_up/__module.backbone.bottom_up.base_layer/__module.backbone.bottom_up.base_layer.norm # /home/yaoshw/anaconda3/envs/DD3D/lib/python3.7/site-packages/detectron2/layers/batch_norm.py:50:0 ? ^^^ ^ ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ + %62 : float = prim::Constant[value=1.0000000000000001e-05](), scope: __module.backbone/__module.backbone.bottom_up/__module.backbone.bottom_up.base_layer/__module.backbone.bottom_up.base_layer.norm # /home/yaoshw/anaconda3/envs/DD3D/lib/python3.7/site-packages/torch/nn/functional.py:2282:0 ? 
^^^^^ +++++++++++++++++++ ^^ ^^^^^^^^^^^^^^^^^^^^^^^^^^^ - %63 : int = prim::Constant[value=6](), scope: __module.backbone/__module.backbone.bottom_up/__module.backbone.bottom_up.base_layer/__module.backbone.bottom_up.base_layer.norm # /home/yaoshw/anaconda3/envs/DD3D/lib/python3.7/site-packages/detectron2/layers/batch_norm.py:53:0 ? ^^^ ^ -------------------------------------------- ^^^^^^^^^^^^^^ - + %63 : bool = prim::Constant[value=1](), scope: __module.backbone/__module.backbone.bottom_up/__module.backbone.bottom_up.base_layer # /home/yaoshw/anaconda3/envs/DD3D/lib/python3.7/site-packages/detectron2/layers/wrappers.py:215:0 ? ^^^^ ^ ^^^^^^^^^^^^^^ - %64 : bool = prim::Constant[value=1](), scope: __module.backbone/__module.backbone.bottom_up/__module.backbone.bottom_up.base_layer # /home/yaoshw/anaconda3/envs/DD3D/lib/python3.7/site-packages/detectron2/layers/wrappers.py:215:0 ? ^^^^ ^ + %64 : int = prim::Constant[value=0](), scope: __module.backbone/__module.backbone.bottom_up/__module.backbone.bottom_up.base_layer # /home/yaoshw/anaconda3/envs/DD3D/lib/python3.7/site-packages/detectron2/layers/wrappers.py:215:0 ? ^^^ ^ - %65 : int = prim::Constant[value=0](), scope: __module.backbone/__module.backbone.bottom_up/__module.backbone.bottom_up.base_layer # /home/yaoshw/anaconda3/envs/DD3D/lib/python3.7/site-packages/detectron2/layers/wrappers.py:215:0 ? ^^^ + %65 : bool = prim::Constant[value=0](), scope: __module.backbone/__module.backbone.bottom_up/__module.backbone.bottom_up.base_layer # /home/yaoshw/anaconda3/envs/DD3D/lib/python3.7/site-packages/detectron2/layers/wrappers.py:215:0 ? ^^^^ - %66 : bool = prim::Constant[value=0](), scope: __module.backbone/__module.backbone.bottom_up/__module.backbone.bottom_up.base_layer # /home/yaoshw/anaconda3/envs/DD3D/lib/python3.7/site-packages/detectron2/layers/wrappers.py:215:0 ? 
^^^^ ^ + %66 : int = prim::Constant[value=3](), scope: __module.backbone/__module.backbone.bottom_up/__module.backbone.bottom_up.base_layer # /home/yaoshw/anaconda3/envs/DD3D/lib/python3.7/site-packages/detectron2/layers/wrappers.py:215:0 ? ^^^ ^ - %67 : int = prim::Constant[value=3](), scope: __module.backbone/__module.backbone.bottom_up/__module.backbone.bottom_up.base_layer # /home/yaoshw/anaconda3/envs/DD3D/lib/python3.7/site-packages/detectron2/layers/wrappers.py:215:0 ? ^ + %67 : int = prim::Constant[value=1](), scope: __module.backbone/__module.backbone.bottom_up/__module.backbone.bottom_up.base_layer # /home/yaoshw/anaconda3/envs/DD3D/lib/python3.7/site-packages/detectron2/layers/wrappers.py:215:0 ? ^ - %68 : int = prim::Constant[value=1](), scope: __module.backbone/__module.backbone.bottom_up/__module.backbone.bottom_up.base_layer # /home/yaoshw/anaconda3/envs/DD3D/lib/python3.7/site-packages/detectron2/layers/wrappers.py:215:0 - %69 : NoneType = prim::Constant(), scope: __module.backbone/__module.backbone.bottom_up/__module.backbone.bottom_up.base_layer ? ^ + %68 : NoneType = prim::Constant(), scope: __module.backbone/__module.backbone.bottom_up/__module.backbone.bottom_up.base_layer ? ^ - %70 : int = prim::Constant[value=2](), scope: __module.backbone/__module.backbone.bottom_up/__module.backbone.bottom_up.level1/__module.backbone.bottom_up.level1.0 # /home/yaoshw/anaconda3/envs/DD3D/lib/python3.7/site-packages/detectron2/layers/wrappers.py:215:0 ? ^^^^^^^^ + %69 : int = prim::Constant[value=2](), scope: __module.backbone/__module.backbone.bottom_up/__module.backbone.bottom_up.level1/__module.backbone.bottom_up.level1.0 # /home/yaoshw/anaconda3/envs/DD3D/lib/python3.7/site-packages/detectron2/layers/wrappers.py:215:0 ? ^^^^^^^^ - %71 : __torch__.detectron2.modeling.backbone.fpn.LastLevelP6P7 = prim::GetAttr[name="top_block"](%4) ? ^ + %70 : __torch__.detectron2.modeling.backbone.fpn.LastLevelP6P7 = prim::GetAttr[name="top_block"](%4) ? 
^ - %72 : __torch__.detectron2.layers.wrappers.Conv2dCB = prim::GetAttr[name="fpn_output3"](%4) ? ^ + %71 : __torch__.detectron2.layers.wrappers.Conv2dCB = prim::GetAttr[name="fpn_output3"](%4) ? ^ - %73 : __torch__.detectron2.layers.wrappers.Conv2dCB = prim::GetAttr[name="fpn_lateral3"](%4) ? ^ + %72 : __torch__.detectron2.layers.wrappers.Conv2dCB = prim::GetAttr[name="fpn_lateral3"](%4) ? ^ - %74 : __torch__.detectron2.layers.wrappers.Conv2dCB = prim::GetAttr[name="fpn_output4"](%4) ? ^ + %73 : __torch__.detectron2.layers.wrappers.Conv2dCB = prim::GetAttr[name="fpn_output4"](%4) ? ^ - %75 : __torch__.detectron2.layers.wrappers.Conv2dCB = prim::GetAttr[name="fpn_lateral4"](%4) ? ^ + %74 : __torch__.detectron2.layers.wrappers.Conv2dCB = prim::GetAttr[name="fpn_lateral4"](%4) ? ^ - %76 : __torch__.detectron2.layers.wrappers.Conv2dCB = prim::GetAttr[name="fpn_output5"](%4) ? ^ + %75 : __torch__.detectron2.layers.wrappers.Conv2dCB = prim::GetAttr[name="fpn_output5"](%4) ? ^ - %77 : __torch__.detectron2.layers.wrappers.Conv2dCB = prim::GetAttr[name="fpn_lateral5"](%4) ? ^ + %76 : __torch__.detectron2.layers.wrappers.Conv2dCB = prim::GetAttr[name="fpn_lateral5"](%4) ? ^ - %78 : __torch__.tridet.modeling.feature_extractor.dla.DLA = prim::GetAttr[name="bottom_up"](%4) ? ^ + %77 : __torch__.tridet.modeling.feature_extractor.dla.DLA = prim::GetAttr[name="bottom_up"](%4) ? ^ - %79 : __torch__.tridet.modeling.feature_extractor.dla.Tree = prim::GetAttr[name="level5"](%78) ? ^ ^ + %78 : __torch__.tridet.modeling.feature_extractor.dla.Tree = prim::GetAttr[name="level5"](%77) ? ^ ^ + %79 : __torch__.tridet.modeling.feature_extractor.dla.Tree = prim::GetAttr[name="level4"](%77) ...... 
**How to reproduce it?**: import os import sys sys.path.append(os.path.join(os.path.dirname(__file__), "../")) import torch.multiprocessing torch.multiprocessing.set_sharing_strategy('file_system') import logging from collections import OrderedDict, defaultdict import hydra import torch import wandb from fvcore.common.checkpoint import Checkpointer, PeriodicCheckpointer from torch.cuda import amp from torch.nn import SyncBatchNorm from torch.nn.parallel import DistributedDataParallel from tqdm import tqdm import detectron2.utils.comm as d2_comm from detectron2.data import MetadataCatalog from detectron2.evaluation import DatasetEvaluators, inference_on_dataset from detectron2.modeling import build_model from detectron2.solver import build_lr_scheduler, build_optimizer from detectron2.utils.events import CommonMetricPrinter, get_event_storage import tridet.modeling # pylint: disable=unused-import import tridet.utils.comm as comm from tridet.data import build_test_dataloader, build_train_dataloader from tridet.data.dataset_mappers import get_dataset_mapper from tridet.data.datasets import random_sample_dataset_dicts, register_datasets from tridet.evaluators import get_evaluator from tridet.modeling import build_tta_model from tridet.utils.s3 import sync_output_dir_s3 from tridet.utils.setup import setup from tridet.utils.train import get_inference_output_dir, print_test_results from tridet.utils.visualization import mosaic, save_vis from tridet.utils.wandb import flatten_dict, log_nested_dict from tridet.visualizers import get_dataloader_visualizer, get_predictions_visualizer from tridet.structures.image_list import ImageList import cv2 print('CUDA available: {}'.format(torch.cuda.is_available())) device = 'cuda' @hydra.main(config_path="../configs/", config_name="defaults") def main(cfg): setup(cfg) model = build_model(cfg) checkpoint_file = "../model_final.pth"#cfg.MODEL.CKPT if checkpoint_file: Checkpointer(model).load(checkpoint_file) model_input = 
(torch.zeros((1, 3, 384, 1280), dtype=torch.float32, device='cuda:0')) config_list = [{ 'sparsity_per_layer': 0.3, 'op_types': ['Conv2d'] }, { 'exclude': True, 'op_names': ['fcos2d_head.cls_logits','fcos2d_head.box2d_reg','fcos3d_head.box3d_quat.0', 'fcos3d_head.box3d_ctr.0', 'fcos3d_head.box3d_depth.0', 'fcos3d_head.box3d_size.0', 'fcos3d_head.box3d_conf.0'] }] from nni.compression.pytorch.pruning import L1NormPruner pruner = L1NormPruner(model, config_list) _, masks = pruner.compress() pruner._unwrap_model() from nni.compression.pytorch.speedup import ModelSpeedup spedd_up = ModelSpeedup(model, model_input, masks,map_location=None,batch_dim=0, confidence=1) spedd_up.speedup_model() if __name__ == '__main__': main() # pylint: disable=no-value-for-parameter
closed
2022-09-21T02:29:35Z
2022-10-14T08:43:37Z
https://github.com/microsoft/nni/issues/5134
[ "waiting user confirm", "support" ]
yaoshw
9
sktime/sktime
data-science
7,368
[ENH] interfacing TSC algorithms by Dempster et al
It would be nice to add some of the state-of-the-art algorithms by Dempster et al (maintained primarily by Dempster = @angus924). There are two sub-points here: * end state of the interface/package architecture. The repositories by @angus924 are not full packages, only executable code. They could be turned into full packages, but would then require maintenance as full packages. * the algorithms are/were released under GPL, a copyleft license. Some have already been added to `sktime`, and it looks like much of the code is identical, but `sktime` is permissive. * For the algorithms already in `sktime`, we need to clarify the license status, e.g., were they forked by a non-owner from GPL repositories, in which case there is a license violation that we need to resolve, or copied by an owner, in which case the license was changed with authorization. * For the algorithms already in `sktime`, we still need to clarify the code end state, e.g., merge back in the original package, which have separate bugfixes applied to the code, etc. We could look at primary location in `sktime`, or merge back in the author package, which would then need to be transformed into a proper python package * For the algorithms not in `sktime`, the license issue interacts with the lack of a package that is importable. We can of course attach a GPL license to a subfolder, but then need to warn the user. The algorithms: - [ ] QUANT https://github.com/angus924/quant - [ ] HYDRA https://github.com/angus924/hydra - [ ] the rocket algorithms (already in `sktime`) - https://github.com/angus924/minirocket, https://github.com/angus924/rocket - [ ] the multirocket algorithms (already in `sktime`), maintained by @ChangWeiTan, @ViktorvdValk - https://github.com/ChangWeiTan/MultiRocket
open
2024-11-06T09:08:26Z
2024-11-06T09:13:13Z
https://github.com/sktime/sktime/issues/7368
[ "interfacing algorithms", "module:classification", "module:transformations", "enhancement" ]
fkiraly
2
elliotgao2/toapi
flask
110
Problem when running `toapi run`
![question](https://user-images.githubusercontent.com/26709018/35477832-f0d99f50-0407-11e8-9b6a-f2a1cb9c1364.png)
closed
2018-01-28T00:48:21Z
2018-03-04T12:12:05Z
https://github.com/elliotgao2/toapi/issues/110
[]
opooc
1
fastapi/sqlmodel
fastapi
437
Upsert of 2 table with a one to many relation onto them: fastest implementation (single vs batch vs one commit vs ?)
### First Check - [X] I added a very descriptive title to this issue. - [X] I used the GitHub search to find a similar issue and didn't find it. - [X] I searched the SQLModel documentation, with the integrated search. - [X] I already searched in Google "How to X in SQLModel" and didn't find any information. - [X] I already read and followed all the tutorial in the docs and didn't find an answer. - [X] I already checked if it is not related to SQLModel but to [Pydantic](https://github.com/samuelcolvin/pydantic). - [X] I already checked if it is not related to SQLModel but to [SQLAlchemy](https://github.com/sqlalchemy/sqlalchemy). ### Commit to Help - [X] I commit to help with one of those options 👆 ### Example Code ```python class Lei(LeiBase, table=True): """ lei db. Start from csv taken here https://www.gleif.org/fr/lei-data/lei-mapping/download-isin-to-lei-relationship-files/ Could be completed by direct gleif api calls https://documenter.getpostman.com/view/7679680/SVYrrxuU?version=latest#3e013a79-b5f6-46a7-b9e9-299fde0b3a03 """ id: Optional[int] = Field(default=None, primary_key=True) created_at: datetime = Field(default_factory=datetime.utcnow) isins: List['Isin'] = Relationship(back_populates='lei') class Isin(SQLModel, table=True): """ isin db. Start from csv taken here https://www.gleif.org/fr/lei-data/lei-mapping/download-isin-to-lei-relationship-files/ Could be completed by direct gleif api calls https://documenter.getpostman.com/view/7679680/SVYrrxuU?version=latest#3e013a79-b5f6-46a7-b9e9-299fde0b3a03 """ id: Optional[int] = Field(default=None, primary_key=True) created_at: datetime = Field(default_factory=datetime.utcnow) isin: str = Field(index=True) lei_id: Optional[int] = Field(default=None, foreign_key='lei.id') lei: Optional['Lei'] = Relationship(back_populates='isins') def create_or_update_isins_leis(grouped_isins: Dict[str, List[str]], session: Session): """ Code suggestion, to be improved. 
isin_lei_models is not provided but """ for lei, isins in grouped_isins.items(): isin_lei_models(lei, isins, session) session.commit() def isin_lei_models(lei: str, isins: List[str], session: Session): """ Possible way to upsert lei and isins """ in_db_lei = session.exec(select(Lei).where(col(Lei.lei) == lei)).first() or Lei(lei=lei) isins_to_insert = list(set(isins) - {isin.isin for isin in in_db_lei.isins}) for isin in isins_to_insert: if in_db_isin := session.exec(select(Isin).where(col(Isin.isin) == isin)).first(): in_db_isin.lei = lei session.add(in_db_isin) return session.add(Isin(isin=isin, lei=lei)) ``` ### Description I simplified the code as much as possible to focus on my question. I have a simple one-to-many relation between Lei and Isin. Imagine that I am given new {lei: isins} values and I have to either insert or update the lei. My question is simple: should I commit once at the end, in the for loop just above, or in the lei loop? Should I batch if I have 10M leis? Is the answer the same in Create and Upsert mode? I found this related Stack Overflow thread https://stackoverflow.com/questions/24377193/best-way-to-update-millions-of-records-in-sql-table-with-foreign-keys, which describes well what I am trying to accomplish, but the solution seems far-fetched (but maybe it is not?)
I can experiment with all suggested ideas if need be :) ### Operating System Linux ### Operating System Details python:3.10-slim docker image Requirements.in (then converted to txt via pip-compile) ``` bcrypt==3.2.0 certifi==2021.5.30 cryptography==3.4.8 dash==2.6.0 dash-auth==1.4.1 dash_core_components==2.0.0 dash_bootstrap_components==1.2.0 fastapi==0.80.0 jupyter==1.0.0 jupyter-dash==0.4.2 nltk==3.7 numpy==1.23.1 openpyxl==3.0.10 pandas==1.4.3 passlib==1.7.4 pillow==9.2.0 plotly==5.9.0 psycopg2==2.9.1 pydeps==1.10.22 python-dateutil==2.8.2 python-dotenv==0.19.0 python-editor==1.0.4 python-jose==3.3.0 python-multipart==0.0.5 requests==2.26.0 sqladmin==0.3.0 sqlalchemy==1.4.35 # Needed, otherwise relationship are not working. See https://github.com/tiangolo/sqlmodel/issues/315 sqlmodel==0.0.6 strsimpy==0.2.1 tqdm==4.64.0 uvicorn==0.18.2 ``` ### SQLModel Version 0.0.6 ### Python Version 3.10 ### Additional Context In creation mode, a csv with 8 million LEI/ISIN lines takes less than 30 minutes to insert. In upsert I gave up, it was too long with my naive implementation :)
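The batching question raised above is not SQLModel-specific; it can be sketched with the stdlib `sqlite3` module alone. This is an illustrative sketch, not the asker's schema: the table and column names (`isin`, `lei`) and the batch size are assumptions, and the point is only the trade-off of one `executemany` + one commit per batch versus a per-row SELECT/INSERT/commit loop.

```python
import sqlite3

def batch_upsert_isins(conn, rows, batch_size=10_000):
    """Upsert (isin, lei) pairs in batches, committing once per batch.

    One executemany per batch plus one commit per batch is usually far
    faster than committing inside the inner loop, at the cost of larger
    transactions. ON CONFLICT requires SQLite >= 3.24.
    """
    sql = (
        "INSERT INTO isin (isin, lei) VALUES (?, ?) "
        "ON CONFLICT(isin) DO UPDATE SET lei = excluded.lei"
    )
    for start in range(0, len(rows), batch_size):
        conn.executemany(sql, rows[start:start + batch_size])
        conn.commit()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE isin (isin TEXT PRIMARY KEY, lei TEXT)")
batch_upsert_isins(conn, [("US1", "LEI_A"), ("US2", "LEI_A")])
# second run updates US2 and inserts US3, leaving 3 rows total
batch_upsert_isins(conn, [("US2", "LEI_B"), ("US3", "LEI_B")])
print(conn.execute("SELECT COUNT(*) FROM isin").fetchone()[0])  # 3
```

The same shape carries over to PostgreSQL (`INSERT ... ON CONFLICT`) through SQLAlchemy Core, which is typically the fast path when the ORM-level loop is too slow.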
open
2022-09-05T11:52:41Z
2022-09-06T13:21:54Z
https://github.com/fastapi/sqlmodel/issues/437
[ "question" ]
tepelbaum
2
mage-ai/mage-ai
data-science
5,100
[BUG] Circular reference error when returning pyspark dataframe
### Mage version latest ### Describe the bug When returning a pyspark dataframe in a data loader block, the error below is thrown: [ERROR] LocalStorage.write_json_file: Circular reference detected JSONDecodeError: Expecting value: line 1 column 1 (char 0) The above exception was the direct cause of the following exception: Exception Traceback (most recent call last) Exception: Failed to read json file: /home/src/mage_data/default_repo/pipelines/test_pipe/.variables/load_block/output_0/data.json If the dataframe is converted to pandas before returning, it works as expected. Spark itself works: creating the pyspark dataframe and saving it as a Databricks table works as expected. ### To reproduce This was tested on azure using an azure databricks cluster, openjdk17 and databricks-connect and by setting up the following environment variables on container creation through terraform: DATABRICKS_HOST, DATABRICKS_CLUSTER_ID, DATABRICKS_TOKEN, SPARK_REMOTE. The pipeline type was also changed to databricks. ### Expected behavior _No response_ ### Screenshots _No response_ ### Operating system _No response_ ### Additional context _No response_
open
2024-05-21T20:47:05Z
2024-05-23T17:57:33Z
https://github.com/mage-ai/mage-ai/issues/5100
[ "bug" ]
MRehd
0
donnemartin/system-design-primer
python
1,031
How to ace a systems design interview link is not working
open
2024-12-15T13:17:25Z
2025-01-12T13:45:47Z
https://github.com/donnemartin/system-design-primer/issues/1031
[]
MahmoudNasser01
1
tiangolo/uvicorn-gunicorn-fastapi-docker
pydantic
20
Pull access denied for ttiangolo/uvicorn-gunicorn-fastapi
Hi, I get this error when I try to build an image from this tool: `pull access denied for ttiangolo/uvicorn-gunicorn-fastapi, repository does not exist or may require 'docker login': denied: requested access to the resource is denied` How can I get past this? It doesn't happen with other repos.
closed
2019-10-24T13:59:34Z
2020-04-10T17:15:58Z
https://github.com/tiangolo/uvicorn-gunicorn-fastapi-docker/issues/20
[]
PyDataBlog
3
dynaconf/dynaconf
flask
492
Checking key for being deleted too early
Checking key for being deleted too early prevents it from refreshing if a key was deleted and then set again: ``` if key in self._deleted: return default ``` ``` def get( self, key, default=None, cast=None, fresh=False, dotted_lookup=True, parent=None, ): """ Get a value from settings store, this is the prefered way to access:: >>> from dynaconf import settings >>> settings.get('KEY') :param key: The name of the setting value, will always be upper case :param default: In case of not found it will be returned :param cast: Should cast in to @int, @float, @bool or @json ? :param fresh: Should reload from loaders store before access? :param dotted_lookup: Should perform dotted-path lookup? :param parent: Is there a pre-loaded parent in a nested data? :return: The value if found, default or None """ if "." in key and dotted_lookup: return self._dotted_get( dotted_key=key, default=default, cast=cast, fresh=fresh, parent=parent, ) key = upperfy(key) if key in self._deleted: return default if ( fresh or self._fresh or key in getattr(self, "FRESH_VARS_FOR_DYNACONF", ()) ) and key not in dir(default_settings): self.unset(key) self.execute_loaders(key=key) ```
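One possible fix (a toy sketch, not dynaconf's actual code) is to have `set()` discard the key from `_deleted`, so the early `_deleted` check in `get()` no longer shadows keys that were deleted and then re-set:

```python
class MiniSettings:
    """Minimal illustration of the delete-then-set problem and one fix."""

    def __init__(self):
        self._store = {}
        self._deleted = set()

    def set(self, key, value):
        key = key.upper()
        self._deleted.discard(key)  # the step whose absence causes the bug
        self._store[key] = value

    def unset(self, key):
        key = key.upper()
        self._store.pop(key, None)
        self._deleted.add(key)

    def get(self, key, default=None):
        key = key.upper()
        if key in self._deleted:  # same early check as in the report
            return default
        return self._store.get(key, default)

s = MiniSettings()
s.set("KEY", 1)
s.unset("KEY")
s.set("KEY", 2)      # without the discard, this value would stay invisible
print(s.get("KEY"))  # 2
```

The alternative would be to move the `_deleted` check after the fresh-reload branch, but clearing the flag on `set()` keeps `get()` untouched.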
closed
2020-12-18T13:26:09Z
2021-03-01T18:35:01Z
https://github.com/dynaconf/dynaconf/issues/492
[ "bug" ]
dmugtasimov
2
wkentaro/labelme
deep-learning
389
Propose to exclude imageData from JSON annotation file. And use imagePath only for information purposes.
Currently, if an annotation is created as a JSON file, the image data is also stored in the JSON file. Are there any arguments for this decision? I see only disadvantages: 1) sometimes it is convenient to handle annotation files manually: create one for a new image on the basis of one or maybe several other files and then update it using the LabelMe editor. But now you need to do something with imageData. Yes, I know about "keep previous" mode, but sometimes handcrafting is more convenient. 2) writing a converter to the LabelMe annotation format becomes much more sophisticated. 3) JSON files have huge size (compared with normal json), so it takes more time to copy them, more space to store them, and they are more difficult to open in a text editor. 4) Storing the same information in two places always increases the risk of misalignment errors. You can be sure that you annotate the JPG file you have selected, but you actually annotate only the copy stored in the json. Sometimes it can become not the same. But other tools that use your annotation (NNets etc) use the JPG, and it will be hard to catch the mistake. The last point also refers to the imagePath field. Actually, when you load a JPG image (loadFile function), the JPG and JSON files are paired by filename, but then imagePath is read from the JSON. This ambiguity can also cause problems. For instance, I expect that if I rename both JPG and JSON, it will be OK, but suddenly it appears that I also have to edit every JSON file. I propose 1) exclude imageData from JSON at all 2) either exclude imagePath as well, or keep it just for information purposes (for human reading in JSON), but don't use it in the program. What do you think about it? The modification seems not to be difficult; I'm going to do it.
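For the handcrafting case in point 1, a workaround can be sketched in a few lines of stdlib Python: null out `imageData` so the annotation is paired with the image via `imagePath` only. The field names follow the labelme JSON layout; whether a given labelme version accepts `imageData: null` on load is an assumption to verify against your version.

```python
import json
import tempfile
from pathlib import Path

def strip_image_data(json_path):
    """Drop the embedded base64 imageData from a labelme annotation file,
    keeping shapes, imagePath, and everything else intact."""
    path = Path(json_path)
    data = json.loads(path.read_text())
    data["imageData"] = None  # image is then resolved via imagePath
    path.write_text(json.dumps(data, indent=2))

# demo on a minimal annotation file
tmp = Path(tempfile.mkdtemp()) / "img.json"
tmp.write_text(json.dumps({"imagePath": "img.jpg",
                           "imageData": "aGVsbG8=",
                           "shapes": [{"label": "cat"}]}))
strip_image_data(tmp)
cleaned = json.loads(tmp.read_text())
print(cleaned["imageData"], cleaned["shapes"][0]["label"])  # None cat
```

Running this over a directory of annotations also makes the size problem in point 3 largely disappear.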
closed
2019-05-06T11:28:25Z
2020-01-27T01:27:28Z
https://github.com/wkentaro/labelme/issues/389
[]
IlyaOvodov
9
onnx/onnx
machine-learning
6,289
[Feature request] Implement the reference runtime with the array api
### System information _No response_ ### What is the problem that this feature solves? Enable multi backend support in the reference runtime. ### Alternatives considered _No response_ ### Describe the feature https://github.com/data-apis/array-api-compat ### Will this influence the current api (Y/N)? _No response_ ### Feature Area _No response_ ### Are you willing to contribute it (Y/N) Yes ### Notes _No response_
open
2024-08-08T22:33:44Z
2025-01-01T17:46:43Z
https://github.com/onnx/onnx/issues/6289
[ "topic: enhancement", "module: reference implementation", "contributions welcome" ]
justinchuby
1
marshmallow-code/apispec
rest-api
76
Getting Assert when using many=True
Getting this assert when I use a Marshmallow schema with many=True > Schemas with many=True are only supported for 'json' location (aka 'in: body') apispec/ext/marshmallow/swagger.py ~230. I am using the following decorator: `@use_kwargs(MySchema(many=True), locations=('json',))`
closed
2016-05-20T15:37:38Z
2016-05-20T18:43:44Z
https://github.com/marshmallow-code/apispec/issues/76
[]
incognick
1
ScrapeGraphAI/Scrapegraph-ai
machine-learning
777
use in windows 10 error : UnboundLocalError: local variable 'browser' referenced before assignment
playwright install : ![image](https://github.com/user-attachments/assets/cdb5899f-8171-4b08-803c-b2d7fdd2cb85) use : ![image](https://github.com/user-attachments/assets/637460b9-371d-4e8a-aa6a-85fadd434604) error : **UnboundLocalError: local variable 'browser' referenced before assignment** detail is : ``` UnboundLocalError Traceback (most recent call last) Cell In[4], line 27 18 # ************************************************ 19 # Create the SmartScraperGraph instance and run it 20 # ************************************************ 21 smart_scraper_graph = SmartScraperGraph( 22 prompt="Find some information about what does the company do, the name and a contact email.", 23 source="https://scrapegraphai.com/", 24 config=graph_config 25 ) ---> 27 result = smart_scraper_graph.run() 28 print(result) File D:\softs\anaconda3\envs\flux\lib\site-packages\scrapegraphai\graphs\smart_scraper_graph.py:212, in SmartScraperGraph.run(self) 204 """ 205 Executes the scraping process and returns the answer to the prompt. 206 207 Returns: 208 str: The answer to the prompt. 209 """ 211 inputs = {"user_prompt": self.prompt, self.input_key: self.source} --> 212 self.final_state, self.execution_info = self.graph.execute(inputs) 214 return self.final_state.get("answer", "No answer found.") File D:\softs\anaconda3\envs\flux\lib\site-packages\scrapegraphai\graphs\base_graph.py:284, in BaseGraph.execute(self, initial_state) 282 return (result["_state"], []) 283 else: --> 284 return self._execute_standard(initial_state) File D:\softs\anaconda3\envs\flux\lib\site-packages\scrapegraphai\graphs\base_graph.py:198, in BaseGraph._execute_standard(self, initial_state) 185 graph_execution_time = time.time() - start_time 186 log_graph_execution( 187 graph_name=self.graph_name, 188 source=source, (...) 
196 exception=str(e) 197 ) --> 198 raise e 199 node_exec_time = time.time() - curr_time 200 total_exec_time += node_exec_time File D:\softs\anaconda3\envs\flux\lib\site-packages\scrapegraphai\graphs\base_graph.py:182, in BaseGraph._execute_standard(self, initial_state) 180 with self.callback_manager.exclusive_get_callback(llm_model, llm_model_name) as cb: 181 try: --> 182 result = current_node.execute(state) 183 except Exception as e: 184 error_node = current_node.node_name File D:\softs\anaconda3\envs\flux\lib\site-packages\scrapegraphai\nodes\fetch_node.py:130, in FetchNode.execute(self, state) 128 return self.handle_local_source(state, source) 129 else: --> 130 return self.handle_web_source(state, source) File D:\softs\anaconda3\envs\flux\lib\site-packages\scrapegraphai\nodes\fetch_node.py:305, in FetchNode.handle_web_source(self, state, source) 303 else: 304 loader = ChromiumLoader([source], headless=self.headless, **loader_kwargs) --> 305 document = loader.load() 307 if not document or not document[0].page_content.strip(): 308 raise ValueError("""No HTML body content found in 309 the document fetched by ChromiumLoader.""") File D:\softs\anaconda3\envs\flux\lib\site-packages\langchain_core\document_loaders\base.py:31, in BaseLoader.load(self) 29 def load(self) -> list[Document]: 30 """Load data into Document objects.""" ---> 31 return list(self.lazy_load()) File D:\softs\anaconda3\envs\flux\lib\site-packages\scrapegraphai\docloaders\chromium.py:192, in ChromiumLoader.lazy_load(self) 189 scraping_fn = getattr(self, f"ascrape_{self.backend}") 191 for url in self.urls: --> 192 html_content = asyncio.run(scraping_fn(url)) 193 metadata = {"source": url} 194 yield Document(page_content=html_content, metadata=metadata) File D:\softs\anaconda3\envs\flux\lib\site-packages\nest_asyncio.py:30, in _patch_asyncio.<locals>.run(main, debug) 28 task = asyncio.ensure_future(main) 29 try: ---> 30 return loop.run_until_complete(task) 31 finally: 32 if not task.done(): File 
D:\softs\anaconda3\envs\flux\lib\site-packages\nest_asyncio.py:98, in _patch_loop.<locals>.run_until_complete(self, future) 95 if not f.done(): 96 raise RuntimeError( 97 'Event loop stopped before Future completed.') ---> 98 return f.result() File D:\softs\anaconda3\envs\flux\lib\asyncio\futures.py:201, in Future.result(self) 199 self.__log_traceback = False 200 if self._exception is not None: --> 201 raise self._exception.with_traceback(self._exception_tb) 202 return self._result File D:\softs\anaconda3\envs\flux\lib\asyncio\tasks.py:232, in Task.__step(***failed resolving arguments***) 228 try: 229 if exc is None: 230 # We use the `send` method directly, because coroutines 231 # don't have `__iter__` and `__next__` methods. --> 232 result = coro.send(None) 233 else: 234 result = coro.throw(exc) File D:\softs\anaconda3\envs\flux\lib\site-packages\scrapegraphai\docloaders\chromium.py:136, in ChromiumLoader.ascrape_playwright(self, url) 134 results = f"Error: Network error after {self.RETRY_LIMIT} attempts - {e}" 135 finally: --> 136 await browser.close() 138 return results ``` Please help me.
closed
2024-10-29T14:05:21Z
2025-01-06T04:19:58Z
https://github.com/ScrapeGraphAI/Scrapegraph-ai/issues/777
[]
1272870698
11
ultralytics/ultralytics
machine-learning
19,055
OBB does not return boxes, despite producing an image and bounding box visualizations
### Search before asking - [x] I have searched the Ultralytics YOLO [issues](https://github.com/ultralytics/ultralytics/issues) and [discussions](https://github.com/orgs/ultralytics/discussions) and found no similar questions. ### Question Hi, I'm encountering an issue where the YOLO models (`yolo11n-obb.pt`, `yolo11m-obb.pt`) don't return the bounding box values, even though the output includes the image with bounding boxes visually displayed. I have searched in the documentation and I found the same issue in this discussion: [https://github.com/orgs/ultralytics/discussions/8462#discussioncomment-10200090](https://github.com/orgs/ultralytics/discussions/8462#discussioncomment-10200090) The proposed solution was to decrease the confidence score, but I don't think that’s the issue because when I run the script, the output image correctly shows the bounding boxes and labels. However, the resulting output does not contain any values for the bounding box coordinates in the "boxes" field. - **My** code: ``` bash from ultralytics import YOLO model = YOLO("yolo11n-obb.pt") frame = 'yt_videos/image.png' results = model(frame, save=True) print(results) ``` ![Image](https://github.com/user-attachments/assets/19e3a082-c44e-4e21-ba1a-9f609c1e248f) ![Image](https://github.com/user-attachments/assets/e490b2f3-3e48-4e8e-9a76-834576c3150f) **- Result Output** `Speed: 3.3ms preprocess, 81.1ms inference, 1.9ms postprocess per image at shape (1, 3, 576, 1024) Results saved to runs/obb/track4 [ultralytics.engine.results.Results object with attributes: boxes: None keypoints: None masks: None names: {0: 'plane', 1: 'ship', 2: 'storage tank', 3: 'baseball diamond', 4: 'tennis court', 5: 'basketball court', 6: 'ground track field', 7: 'harbor', 8: 'bridge', 9: 'large vehicle', 10: 'small vehicle', 11: 'helicopter', 12: 'roundabout', 13: 'soccer ball field', 14: 'swimming pool'} obb: ultralytics.engine.results.OBB object orig_img: array([[[137, 136, 135], [137, 136, 135], [137, 136, 135], 
..., [177, 176, 175], [177, 176, 175], [177, 176, 175]], [[137, 136, 135], [137, 136, 135], [137, 136, 135], ..., [177, 176, 175], [177, 176, 175], [177, 176, 175]], [[137, 136, 135], [137, 136, 135], [137, 136, 135], ..., [177, 176, 175], [177, 176, 175], [177, 176, 175]], ..., [[ 87, 92, 95], [ 87, 92, 95], [ 87, 92, 95], ..., [ 45, 75, 73], [ 46, 76, 74], [ 44, 75, 73]], [[ 87, 92, 95], [ 87, 92, 95], [ 87, 92, 95], ..., [ 49, 77, 75], [ 47, 76, 74], [ 44, 74, 72]], [[ 91, 96, 99], [ 91, 96, 99], [ 93, 98, 101], ..., [ 50, 77, 75], [ 46, 75, 73], [ 42, 71, 70]]], dtype=uint8) orig_shape: (1080, 1920)` Am I missing something? I would appreciate any help, Thank you ### Additional _No response_
closed
2025-02-04T05:51:50Z
2025-02-17T05:28:55Z
https://github.com/ultralytics/ultralytics/issues/19055
[ "question", "OBB" ]
FabianEP11
8
PokeAPI/pokeapi
graphql
217
App Showcase section
We get a tonne of new apps being developed with PokéAPI. We should build a section that helps to promote them, especially ones that are new and from first-time programmers. This doesn't need to be anything fancy, but a simple form for people to submit their projects with a URL link and an image, with a simple way of approving apps, is all it takes. Who would like to work on something like this?
closed
2016-06-24T15:55:09Z
2017-06-12T12:51:54Z
https://github.com/PokeAPI/pokeapi/issues/217
[ "enhancement" ]
phalt
3
ploomber/ploomber
jupyter
758
Do not override user settings when running cloud set-key tests
Some of the cloud tests call the set-key command to verify the functionality. However, this causes the user's existing key to be overridden after running the tests. We should add a `pytest.fixture` that backs up the user settings and restores them after the test runs. Essentially, we need to back up the file that stores the key. https://github.com/ploomber/ploomber/blob/9e11fcbcdf763d6f3111e8254dc327a56306971e/tests/cli/test_cloud.py#L125
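The backup/restore idea can be sketched as a stdlib context manager (file names here are illustrative, not ploomber's real settings path); wrapped in `@pytest.fixture` with the same `yield`, it becomes the fixture the issue asks for:

```python
import tempfile
from contextlib import contextmanager
from pathlib import Path

@contextmanager
def preserve_file(path):
    """Back up a file (e.g. the one storing the cloud API key) and restore
    its exact previous state afterwards, including the case where it did
    not exist before the test ran."""
    path = Path(path)
    existed = path.exists()
    original = path.read_bytes() if existed else None
    try:
        yield path
    finally:
        if existed:
            path.write_bytes(original)
        elif path.exists():
            path.unlink()  # test created it; remove to restore "absent" state

# demo: a test overwrites the key, the guard restores it afterwards
cfg = Path(tempfile.mkdtemp()) / "cloud.yaml"
cfg.write_text("key: real-user-key")
with preserve_file(cfg):
    cfg.write_text("key: test-key")  # what the set-key test would do
print(cfg.read_text())  # key: real-user-key
```

Restoring bytes rather than parsed content keeps the fixture agnostic to the settings file format.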
closed
2022-05-12T22:11:22Z
2022-05-19T02:42:57Z
https://github.com/ploomber/ploomber/issues/758
[ "good first issue" ]
edublancas
2
ets-labs/python-dependency-injector
asyncio
816
Cannot build using GCC v13 & v14
Hello, I cannot install dependency-injector with GCC versions 13 or 14 on my system. Could you provide any information on which GCC versions are supported?
open
2024-09-04T21:01:40Z
2024-12-08T10:39:14Z
https://github.com/ets-labs/python-dependency-injector/issues/816
[]
fedya-eremin
1
gradio-app/gradio
data-visualization
10,211
Gallery preview row unable to display large number of images (overflow images are hidden and cannot be selected)
### Describe the bug <img width="1439" alt="image" src="https://github.com/user-attachments/assets/7e8e1de0-1bbf-477c-afb6-af5a62fe269f" /> <img width="1439" alt="image" src="https://github.com/user-attachments/assets/216ad393-772c-4875-9385-cf7ba57e2efe" /> ### Have you searched existing issues? 🔎 - [X] I have searched and found no existing issues ### Fix the bug ```python css = """ .thumbnails { justify-content: space-between !important; } """ with gr.Blocks(css=css) as demo: ``` ### System Info ```shell gradio@5.9.0 ``` ### Severity I can work around it
open
2024-12-16T15:14:14Z
2024-12-16T16:50:42Z
https://github.com/gradio-app/gradio/issues/10211
[ "bug" ]
xiaomofa
0
LibreTranslate/LibreTranslate
api
375
Bug after latest update: cannot start
On a `linux/amd64` machine docker-compose.yaml: ```yaml services: libretranslate: image: libretranslate/libretranslate container_name: libretranslate hostname: libretranslate environment: # https://github.com/LibreTranslate/LibreTranslate#arguments # https://github.com/LibreTranslate/LibreTranslate#manage-api-keys LT_HOST: 0.0.0.0 # 127.0.0.1 LT_PORT: 5000 LT_FRONTEND_LANGUAGE_SOURCE: "en" LT_FRONTEND_LANGUAGE_TARGET: en # LT_FRONTEND_TIMEOUT: 500 LT_API_KEYS: "true" LT_API_KEYS_DB_PATH: /libretranslate_db.db LT_THREADS: 2 # 4 LT_REQUIRE_API_KEY_ORIGIN: "true" LT_REQ_LIMIT: 20 # no limit LT_BATCH_LIMIT: 2 # no limit volumes: - ${DOCKER}/libretranslate_db/libretranslate_db.db:/libretranslate_db.db - ${DOCKER}/libretranslate/.local:/home/libretranslate/.local ports: 5000:5000 ``` Logs: ```sh $ docker logs libretranslate (URLError(TimeoutError(110, 'Connection timed out')),) (URLError(TimeoutError(110, 'Connection timed out')),) (URLError(TimeoutError(110, 'Connection timed out')),) (URLError(TimeoutError(110, 'Connection timed out')),) Updating language models Found 58 models Downloading Arabic → English (1.0) ... Downloading Azerbaijani → English (1.5) ... Downloading Catalan → English (1.7) ... Downloading Chinese → English (1.7) ... Downloading Czech → English (1.5) ... Downloading Danish → English (1.3) ... Downloading Dutch → English (1.4) ... Downloading English → Arabic (1.0) ... Downloading English → Azerbaijani (1.5) ... Downloading English → Catalan (1.7) ... Downloading English → Chinese (1.7) ... Downloading English → Czech (1.5) ... Downloading English → Danish (1.3) ... Downloading English → Dutch (1.4) ... Downloading English → Esperanto (1.5) ... Downloading English → Finnish (1.5) ... Downloading English → French (1.0) ... Downloading English → German (1.0) ... Downloading English → Greek (1.5) ... Downloading English → Hebrew (1.5) ... Downloading English → Hindi (1.1) ... Downloading English → Hungarian (1.5) ... 
Downloading English → Indonesian (1.2) ... Downloading English → Irish (1.1) ... Downloading English → Italian (1.0) ... Downloading English → Japanese (1.1) ... Downloading English → Korean (1.1) ... Downloading English → Persian (1.5) ... Downloading English → Polish (1.1) ... Downloading English → Portuguese (1.0) ... Downloading English → Russian (1.7) ... Downloading English → Slovak (1.5) ... Downloading English → Spanish (1.0) ... Downloading English → Swedish (1.5) ... Downloading English → Turkish (1.5) ... Downloading English → Ukranian (1.4) ... Downloading Esperanto → English (1.5) ... Downloading Finnish → English (1.5) ... Downloading French → English (1.0) ... Downloading German → English (1.0) ... Downloading Greek → English (1.5) ... Downloading Hebrew → English (1.5) ... Downloading Hindi → English (1.1) ... Downloading Hungarian → English (1.5) ... Downloading Indonesian → English (1.2) ... Downloading Irish → English (1.1) ... Downloading Italian → English (1.0) ... Downloading Japanese → English (1.1) ... Downloading Korean → English (1.1) ... Downloading Persian → English (1.5) ... Downloading Polish → English (1.1) ... Downloading Portuguese → English (1.0) ... Downloading Russian → English (1.0) ... Downloading Slovak → English (1.5) ... Downloading Spanish → English (1.0) ... Downloading Swedish → English (1.5) ... Downloading Turkish → English (1.5) ... Downloading Ukranian → English (1.4) ... 
Cannot update models (normal if you're offline): name 'app' is not defined Traceback (most recent call last): File "./venv/bin/libretranslate", line 33, in <module> sys.exit(load_entry_point('libretranslate==1.3.8', 'console_scripts', 'libretranslate')()) File "/app/venv/lib/python3.8/site-packages/libretranslate/main.py", line 176, in main app = create_app(args) File "/app/venv/lib/python3.8/site-packages/libretranslate/app.py", line 163, in create_app limiter = Limiter( File "/app/venv/lib/python3.8/site-packages/flask_limiter/extension.py", line 272, in __init__ self.init_app(app) File "/app/venv/lib/python3.8/site-packages/flask_limiter/extension.py", line 278, in init_app config = app.config AttributeError: 'Blueprint' object has no attribute 'config' ```
closed
2023-01-01T17:01:58Z
2023-01-01T18:41:25Z
https://github.com/LibreTranslate/LibreTranslate/issues/375
[ "bug" ]
schklom
2
deeppavlov/DeepPavlov
nlp
1,385
Create Architecture Diagram of DeepPavlov Library
closed
2021-01-27T09:16:05Z
2023-07-06T12:07:44Z
https://github.com/deeppavlov/DeepPavlov/issues/1385
[]
danielkornev
4
ccxt/ccxt
api
24,813
fetchOHLCV issue with 'since' ?
### Operating System Windows ### Programming Languages JavaScript ### CCXT Version last ### Description Hi, I'm contacting you because I think I've found a bug with the Alpaca integration. I try to fetch the candles with the following code: ``` const symbol = 'BTC/USDT'; const timeframe = '1d'; const since = 1701867696765; // 6 December 2023 13:01:36.765 const limit = 200; return await globalThis.exchange.fetchOHLCV(symbol, timeframe, since, limit); ``` But I got strange candle timestamps: **The first:** [1724648400000, 18.135275524, 159.35, 18.135275524, 158.687, 0.000510664] where 1724648400000 is Monday, 26 August 2024 05:00:00 **The last:** [1736402400000, 194.2875, 194.825, 189.022, 190.35, 33.749763368] where 1736402400000 is Thursday, 9 January 2025 06:00:00 So the candles are not from "6 December 2023 13:01:36.765" to that date + 200 days. Right? ### Code ``` const symbol = 'BTC/USDT'; const timeframe = '1d'; const since = 1701867696765; // 6 December 2023 13:01:36.765 const limit = 200; return await globalThis.exchange.fetchOHLCV(symbol, timeframe, since, limit); ``` BTW, I have tested with both Alpaca and Binance, and I got the same problem.
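For anyone triaging this, the timestamps can be double-checked with stdlib Python alone. The check confirms the first returned candle sits months after `since`; one plausible reading (my guess, not confirmed against the exchange docs) is that the exchange returns the most recent `limit` candles instead of paginating forward from `since`.

```python
from datetime import datetime, timezone

def ms_to_utc(ms):
    """Convert a ccxt-style millisecond timestamp to an aware UTC datetime."""
    return datetime.fromtimestamp(ms / 1000, tz=timezone.utc)

print(ms_to_utc(1701867696765))  # 2023-12-06 13:01:36.765000+00:00  (the `since` value)
print(ms_to_utc(1724648400000))  # 2024-08-26 05:00:00+00:00  (first candle actually returned)
```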
closed
2025-01-09T13:23:50Z
2025-01-11T17:56:40Z
https://github.com/ccxt/ccxt/issues/24813
[]
vd3d
4
dpgaspar/Flask-AppBuilder
flask
1,601
Unable to Expose POST method in BaseView
If you'd like to report a bug in Flask-Appbuilder, fill out the template below. Provide any extra information that may be useful ### Environment Flask-Appbuilder version: 3.2.1 pip freeze output: ### Describe the expected results Tell us what should happen. ```python Paste a minimal example that causes the problem. ``` #Bulk Email to Lead class CustomView(BaseView): default_view = "bulkemail" @expose("/bulkemail/",methods="POST") @has_access def bulkemail(self): return self.render_template("bulkemail.html") appbuilder.add_view(CustomView(), "Bulk Email",category="Utilities") ### Describe the actual results Tell us what happens instead. I want to send the html template form data to the Baseview. ```pytb Paste the full traceback if there was an exception. ``` **The method is not allowed for the requested URL.** ### Steps to reproduce
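A plausible cause of the 405 (an inference from the snippet, not confirmed by the reporter): Flask expects `methods` to be a *list* of verbs, and a bare string is iterated character by character, so `methods="POST"` registers `P`, `O`, `S`, `T` rather than `POST`. A two-line sanity check:

```python
# What Flask would iterate over with methods="POST" vs methods=["POST"]
print(list("POST"))    # ['P', 'O', 'S', 'T']  -> a real POST gets 405
print(list(["POST"]))  # ['POST']              -> @expose("/bulkemail/", methods=["POST"])
```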
closed
2021-04-06T16:14:31Z
2021-07-21T03:21:44Z
https://github.com/dpgaspar/Flask-AppBuilder/issues/1601
[ "stale" ]
naresnayak
2
dgtlmoon/changedetection.io
web-scraping
1,920
[feature] Wachete import / export as Wachete alternative
It's possible to export your wachets from Wachete, so it should be possible to import their XLSX file directly into changedetection.io. The format looks fairly clean, I think there should be no problems. We can even try to keep their UUIDs [wachete-export-example.xlsx](https://github.com/dgtlmoon/changedetection.io/files/13217931/wachete-export-example.xlsx) And then changedetection.io becomes another fantastic alternative :) Actually, the UI should offer to import any kind of XLSX file with configurable columns
closed
2023-10-31T14:57:54Z
2023-11-01T18:23:37Z
https://github.com/dgtlmoon/changedetection.io/issues/1920
[ "enhancement" ]
dgtlmoon
2
graphql-python/graphene-mongo
graphql
40
unable to install via pip - Some packages may not be found!
Error: Download error on https://pypi.python.org/simple/pytest-runner/: EOF occurred in violation of protocol (_ssl.c:590) -- Some packages may not be found! Couldn't find index page for 'pytest-runner' (maybe misspelled?) I resolved the problem by: 1. I have to go to https://pypi.org/simple/pytest-runner/ 2. download pytest_runner-4.2-py2.py3-none-any.whl 3. run pip install ./pytest_runner-4.2-py2.py3-none-any.whl 4. pip install graphene-mongo
closed
2018-07-13T09:07:33Z
2018-08-30T07:47:13Z
https://github.com/graphql-python/graphene-mongo/issues/40
[ "work in progress" ]
ahmad88me
2
google/seq2seq
tensorflow
361
Error while executing
I am executing the following command as a batch file on Windows 10: `python -m bin.train --config_paths="C:\Users\seq2seqten\data\Configs\basicconfig" --model_params "vocab_source: C:\Users\seq2seqten\data\knowdata\" --input_pipeline_train "class: ParallelTextInputPipeline params: source_files: - C:\Users\seq2seqten\data\knowdata\knoworg.bin.txt target_files: - C:\Users\seq2seqten\data\knowdata\knowans.bin.txt --batch_size 32 --train_steps 1000000 --output_dir C:\Users\seq2seqten\data\knowdata\nmt_tutorial.txt`

I am getting the error:

```
self.get_mark())
yaml.scanner.ScannerError: mapping values are not allowed here
  in "<unicode string>", line 1, column 106:
    ... ta" --input_pipeline_train class:
                                        ^
```

Can you please let me know what I am missing.
open
2019-09-14T16:16:44Z
2019-09-14T16:17:07Z
https://github.com/google/seq2seq/issues/361
[]
newmluser
0
deepfakes/faceswap
deep-learning
904
Could not create cudnn handle: CUDNN_STATUS_INTERNAL_ERROR when Extracting
Error on extrace: ``` 2019-10-14 00:35:36.958535: E tensorflow/stream_executor/cuda/cuda_dnn.cc:329] Could not create cudnn handle: CUDNN_STATUS_INTERNAL_ERROR 2019-10-14 00:35:36.964129: E tensorflow/stream_executor/cuda/cuda_dnn.cc:329] Could not create cudnn handle: CUDNN_STATUS_INTERNAL_ERROR 10/14/2019 00:35:39 ERROR Got Exception on main handler: Traceback (most recent call last): File "/home/tianze/face/lib/cli.py", line 128, in execute_script process.process() File "/home/tianze/face/scripts/extract.py", line 61, in process self.run_extraction() File "/home/tianze/face/scripts/extract.py", line 187, in run_extraction self.extractor.launch() File "/home/tianze/face/plugins/extract/pipeline.py", line 180, in launch self._launch_aligner() File "/home/tianze/face/plugins/extract/pipeline.py", line 326, in _launch_aligner self._aligner.initialize(**kwargs) File "/home/tianze/face/plugins/extract/_base.py", line 330, in initialize self.init_model() File "/home/tianze/face/plugins/extract/align/fan.py", line 41, in init_model self.model.predict(placeholder) File "/home/tianze/face/lib/model/session.py", line 66, in predict return self._model.predict(feed, batch_size=batch_size) File "/home/tianze/anaconda3/envs/ftz/lib/python3.5/site-packages/keras/engine/training.py", line 1169, in predict steps=steps) File "/home/tianze/anaconda3/envs/ftz/lib/python3.5/site-packages/keras/engine/training_arrays.py", line 294, in predict_loop batch_outs = f(ins_batch) File "/home/tianze/anaconda3/envs/ftz/lib/python3.5/site-packages/keras/backend/tensorflow_backend.py", line 2715, in __call__ return self._call(inputs) File "/home/tianze/anaconda3/envs/ftz/lib/python3.5/site-packages/keras/backend/tensorflow_backend.py", line 2675, in _call fetched = self._callable_fn(*array_vals) File "/home/tianze/anaconda3/envs/ftz/lib/python3.5/site-packages/tensorflow/python/client/session.py", line 1458, in __call__ run_metadata_ptr) tensorflow.python.framework.errors_impl.UnknownError: 2 
root error(s) found. (0) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above. [[{{node conv2d_1/convolution}}]] [[conv2d_56/BiasAdd/_6051]] (1) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above. To Reproduce Extract from mp4 Expected behavior Extraction Log: 10/14/2019 00:35:18 MainProcess MainThread gpu_stats get_card_most_free DEBUG Active GPU Card with most free VRAM: {'device': 'Tesla V100-PCIE-32GB', 'free': 32061.625, 'total': 32480.5, 'card_id': 1} 10/14/2019 00:35:18 MainProcess MainThread pipeline _set_parallel_processing VERBOSE Tesla V100-PCIE-32GB - 32061MB free of 32480MB 10/14/2019 00:35:18 MainProcess MainThread gpu_stats __init__ DEBUG Initializing GPUStats 10/14/2019 00:35:18 MainProcess MainThread gpu_stats initialize DEBUG OS is not macOS. Using pynvml 10/14/2019 00:35:18 MainProcess MainThread gpu_stats get_device_count DEBUG GPU Device count: 3 10/14/2019 00:35:18 MainProcess MainThread gpu_stats get_active_devices DEBUG Active GPU Devices: [0, 1, 2] 10/14/2019 00:35:18 MainProcess MainThread gpu_stats get_handles DEBUG GPU Handles found: 3 10/14/2019 00:35:18 MainProcess MainThread gpu_stats get_driver DEBUG GPU Driver: 410.104 10/14/2019 00:35:18 MainProcess MainThread gpu_stats get_devices DEBUG GPU Devices: ['Tesla V100-PCIE-32GB', 'Tesla V100-PCIE-32GB', 'Tesla V100-PCIE-32GB'] 10/14/2019 00:35:18 MainProcess MainThread gpu_stats get_vram DEBUG GPU VRAM: [32480.5, 32480.5, 32480.5] 10/14/2019 00:35:18 MainProcess MainThread gpu_stats __init__ DEBUG Initialized GPUStats 10/14/2019 00:35:18 MainProcess MainThread gpu_stats initialize DEBUG OS is not macOS. 
Using pynvml 10/14/2019 00:35:18 MainProcess MainThread gpu_stats get_device_count DEBUG GPU Device count: 3 10/14/2019 00:35:18 MainProcess MainThread gpu_stats get_active_devices DEBUG Active GPU Devices: [0, 1, 2] 10/14/2019 00:35:18 MainProcess MainThread gpu_stats get_handles DEBUG GPU Handles found: 3 10/14/2019 00:35:18 MainProcess MainThread gpu_stats get_free DEBUG GPU VRAM free: [1143.625, 32061.625, 32061.625] 10/14/2019 00:35:18 MainProcess MainThread gpu_stats initialize DEBUG OS is not macOS. Using pynvml 10/14/2019 00:35:18 MainProcess MainThread gpu_stats get_device_count DEBUG GPU Device count: 3 10/14/2019 00:35:18 MainProcess MainThread gpu_stats get_active_devices DEBUG Active GPU Devices: [0, 1, 2] 10/14/2019 00:35:18 MainProcess MainThread gpu_stats get_handles DEBUG GPU Handles found: 3 10/14/2019 00:35:18 MainProcess MainThread gpu_stats get_free DEBUG GPU VRAM free: [1143.625, 32061.625, 32061.625] 10/14/2019 00:35:18 MainProcess MainThread gpu_stats initialize DEBUG OS is not macOS. 
Using pynvml 10/14/2019 00:35:18 MainProcess MainThread gpu_stats get_device_count DEBUG GPU Device count: 3 10/14/2019 00:35:18 MainProcess MainThread gpu_stats get_active_devices DEBUG Active GPU Devices: [0, 1, 2] 10/14/2019 00:35:18 MainProcess MainThread gpu_stats get_handles DEBUG GPU Handles found: 3 10/14/2019 00:35:18 MainProcess MainThread gpu_stats get_free DEBUG GPU VRAM free: [1143.625, 32061.625, 32061.625] 10/14/2019 00:35:18 MainProcess MainThread gpu_stats get_card_most_free DEBUG Active GPU Card with most free VRAM: {'device': 'Tesla V100-PCIE-32GB', 'free': 32061.625, 'total': 32480.5, 'card_id': 1} 10/14/2019 00:35:18 MainProcess MainThread pipeline _set_extractor_batchsize DEBUG Plugin requirements within threshold: (plugin_required: 8192MB, vram_free: 32061MB) 10/14/2019 00:35:18 MainProcess MainThread queue_manager add_queue DEBUG QueueManager adding: (name: 'extract_detect_in', maxsize: 64) 10/14/2019 00:35:18 MainProcess MainThread queue_manager add_queue DEBUG QueueManager added: (name: 'extract_detect_in') 10/14/2019 00:35:18 MainProcess MainThread queue_manager get_queue DEBUG QueueManager getting: 'extract_detect_in' 10/14/2019 00:35:18 MainProcess MainThread queue_manager get_queue DEBUG QueueManager got: 'extract_detect_in' 10/14/2019 00:35:18 MainProcess MainThread queue_manager add_queue DEBUG QueueManager adding: (name: 'extract_align_in', maxsize: 32) 10/14/2019 00:35:18 MainProcess MainThread queue_manager add_queue DEBUG QueueManager added: (name: 'extract_align_in') 10/14/2019 00:35:18 MainProcess MainThread queue_manager get_queue DEBUG QueueManager getting: 'extract_align_in' 10/14/2019 00:35:18 MainProcess MainThread queue_manager get_queue DEBUG QueueManager got: 'extract_align_in' 10/14/2019 00:35:18 MainProcess MainThread queue_manager add_queue DEBUG QueueManager adding: (name: 'extract_align_out', maxsize: 32) 10/14/2019 00:35:18 MainProcess MainThread queue_manager add_queue DEBUG QueueManager added: (name: 
'extract_align_out') 10/14/2019 00:35:18 MainProcess MainThread queue_manager get_queue DEBUG QueueManager getting: 'extract_align_out' 10/14/2019 00:35:18 MainProcess MainThread queue_manager get_queue DEBUG QueueManager got: 'extract_align_out' 10/14/2019 00:35:18 MainProcess MainThread pipeline _add_queues DEBUG Queues: {'extract_align_out': <queue.Queue object at 0x7f9b05530048>, 'extract_align_in': <queue.Queue object at 0x7f9b05525f28>, 'extract_detect_in': <queue.Queue object at 0x7f9b05525e10>} 10/14/2019 00:35:18 MainProcess MainThread pipeline __init__ DEBUG Initialized Extractor 10/14/2019 00:35:18 MainProcess MainThread queue_manager get_queue DEBUG QueueManager getting: 'extract_save' 10/14/2019 00:35:18 MainProcess MainThread queue_manager add_queue DEBUG QueueManager adding: (name: 'extract_save', maxsize: 0) 10/14/2019 00:35:18 MainProcess MainThread queue_manager add_queue DEBUG QueueManager added: (name: 'extract_save') 10/14/2019 00:35:18 MainProcess MainThread queue_manager get_queue DEBUG QueueManager got: 'extract_save' 10/14/2019 00:35:18 MainProcess MainThread extract __init__ DEBUG Initialized Extract 10/14/2019 00:35:18 MainProcess MainThread extract process INFO Starting, this may take a while... 10/14/2019 00:35:18 MainProcess MainThread extract threaded_io DEBUG Threading task: (Task: 'load') 10/14/2019 00:35:18 MainProcess MainThread multithreading __init__ DEBUG Initializing MultiThread: (target: 'load_images', thread_count: 1) 10/14/2019 00:35:18 MainProcess MainThread multithreading __init__ DEBUG Initialized MultiThread: 'load_images' 10/14/2019 00:35:18 MainProcess MainThread multithreading start DEBUG Starting thread(s): 'load_images' 10/14/2019 00:35:18 MainProcess MainThread multithreading start DEBUG Starting thread 1 of 1: 'load_images_0' 10/14/2019 00:35:18 MainProcess load_images_0 extract load_images DEBUG Load Images: Start 10/14/2019 00:35:18 MainProcess load_images_0 fsmedia load_video_frames DEBUG Input is video. 
Capturing frames 10/14/2019 00:35:18 MainProcess MainThread multithreading start DEBUG Started all threads 'load_images': 1 10/14/2019 00:35:18 MainProcess MainThread extract threaded_io DEBUG Threading task: (Task: 'save') 10/14/2019 00:35:18 MainProcess MainThread multithreading __init__ DEBUG Initializing MultiThread: (target: 'save_faces', thread_count: 1) 10/14/2019 00:35:18 MainProcess MainThread multithreading __init__ DEBUG Initialized MultiThread: 'save_faces' 10/14/2019 00:35:18 MainProcess MainThread multithreading start DEBUG Starting thread(s): 'save_faces' 10/14/2019 00:35:18 MainProcess MainThread multithreading start DEBUG Starting thread 1 of 1: 'save_faces_0' 10/14/2019 00:35:18 MainProcess save_faces_0 extract save_faces DEBUG Save Faces: Start 10/14/2019 00:35:18 MainProcess MainThread multithreading start DEBUG Started all threads 'save_faces': 1 10/14/2019 00:35:18 MainProcess MainThread extract process_item_count DEBUG Items already processed: 0 10/14/2019 00:35:18 MainProcess MainThread extract process_item_count DEBUG Items to be Processed: 335 10/14/2019 00:35:18 MainProcess MainThread pipeline _launch_aligner DEBUG Launching Aligner 10/14/2019 00:35:18 MainProcess MainThread _base initialize DEBUG initialize Align: (args: (), kwargs: {'out_queue': <queue.Queue object at 0x7f9b05530048>, 'in_queue': <queue.Queue object at 0x7f9b05525f28>}) 10/14/2019 00:35:18 MainProcess MainThread _base initialize INFO Initializing FAN Aligner... 
10/14/2019 00:35:18 MainProcess MainThread queue_manager get_queue DEBUG QueueManager getting: 'align_predict' 10/14/2019 00:35:18 MainProcess MainThread queue_manager add_queue DEBUG QueueManager adding: (name: 'align_predict', maxsize: 1) 10/14/2019 00:35:18 MainProcess MainThread queue_manager add_queue DEBUG QueueManager added: (name: 'align_predict') 10/14/2019 00:35:18 MainProcess MainThread queue_manager get_queue DEBUG QueueManager got: 'align_predict' 10/14/2019 00:35:18 MainProcess MainThread queue_manager get_queue DEBUG QueueManager getting: 'align_post' 10/14/2019 00:35:18 MainProcess MainThread queue_manager add_queue DEBUG QueueManager adding: (name: 'align_post', maxsize: 1) 10/14/2019 00:35:18 MainProcess MainThread queue_manager add_queue DEBUG QueueManager added: (name: 'align_post') 10/14/2019 00:35:18 MainProcess MainThread queue_manager get_queue DEBUG QueueManager got: 'align_post' 10/14/2019 00:35:18 MainProcess MainThread _base _compile_threads DEBUG Compiling align threads 10/14/2019 00:35:18 MainProcess MainThread _base _add_thread DEBUG Adding thread: (name: align_input, function: <bound method Align.process_input of <plugins.extract.align.fan.Align object at 0x7f9b055136a0>>, in_queue: <queue.Queue object at 0x7f9b05525f28>, out_queue: <queue.Queue object at 0x7f9b05525d68>) 10/14/2019 00:35:18 MainProcess MainThread multithreading __init__ DEBUG Initializing MultiThread: (target: 'align_input', thread_count: 1) 10/14/2019 00:35:18 MainProcess MainThread multithreading __init__ DEBUG Initialized MultiThread: 'align_input' 10/14/2019 00:35:18 MainProcess MainThread _base _add_thread DEBUG Added thread: align_input 10/14/2019 00:35:18 MainProcess MainThread _base _add_thread DEBUG Adding thread: (name: align_predict, function: <bound method Aligner._predict of <plugins.extract.align.fan.Align object at 0x7f9b055136a0>>, in_queue: <queue.Queue object at 0x7f9b05525d68>, out_queue: <queue.Queue object at 0x7f9b05525b38>) 10/14/2019 00:35:18 
MainProcess MainThread multithreading __init__ DEBUG Initializing MultiThread: (target: 'align_predict', thread_count: 1) 10/14/2019 00:35:18 MainProcess MainThread multithreading __init__ DEBUG Initialized MultiThread: 'align_predict' 10/14/2019 00:35:18 MainProcess MainThread _base _add_thread DEBUG Added thread: align_predict 10/14/2019 00:35:18 MainProcess MainThread _base _add_thread DEBUG Adding thread: (name: align_output, function: <bound method Align.process_output of <plugins.extract.align.fan.Align object at 0x7f9b055136a0>>, in_queue: <queue.Queue object at 0x7f9b05525b38>, out_queue: <queue.Queue object at 0x7f9b05530048>) 10/14/2019 00:35:18 MainProcess MainThread multithreading __init__ DEBUG Initializing MultiThread: (target: 'align_output', thread_count: 1) 10/14/2019 00:35:18 MainProcess MainThread multithreading __init__ DEBUG Initialized MultiThread: 'align_output' 10/14/2019 00:35:18 MainProcess MainThread _base _add_thread DEBUG Added thread: align_output 10/14/2019 00:35:18 MainProcess MainThread _base _compile_threads DEBUG Compiled align threads: [<lib.multithreading.MultiThread object at 0x7f9b05525978>, <lib.multithreading.MultiThread object at 0x7f9b05530240>, <lib.multithreading.MultiThread object at 0x7f9b05530278>] 10/14/2019 00:35:18 MainProcess MainThread deprecation_wrapper __getattr__ WARNING From /home/tianze/face/lib/model/session.py:110: The name tf.ConfigProto is deprecated. Please use tf.compat.v1.ConfigProto instead.\n 10/14/2019 00:35:18 MainProcess MainThread deprecation_wrapper __getattr__ WARNING From /home/tianze/face/lib/model/session.py:113: The name tf.Session is deprecated. 
Please use tf.compat.v1.Session instead.\n 10/14/2019 00:35:20 MainProcess MainThread session _set_session DEBUG Creating tf.session: (graph: <tensorflow.python.framework.ops.Graph object at 0x7f9b05530588>, session: <tensorflow.python.client.session.Session object at 0x7f9b05530748>, config: ) 10/14/2019 00:35:20 MainProcess MainThread session load_model VERBOSE Initializing plugin model: FAN 10/14/2019 00:35:20 MainProcess MainThread deprecation_wrapper __getattr__ WARNING From /home/tianze/anaconda3/envs/ftz/lib/python3.5/site-packages/keras/backend/tensorflow_backend.py:517: The name tf.placeholder is deprecated. Please use tf.compat.v1.placeholder instead.\n 10/14/2019 00:35:20 MainProcess MainThread deprecation_wrapper __getattr__ WARNING From /home/tianze/anaconda3/envs/ftz/lib/python3.5/site-packages/keras/backend/tensorflow_backend.py:4138: The name tf.random_uniform is deprecated. Please use tf.random.uniform instead.\n 10/14/2019 00:35:20 MainProcess MainThread deprecation_wrapper __getattr__ WARNING From /home/tianze/anaconda3/envs/ftz/lib/python3.5/site-packages/keras/backend/tensorflow_backend.py:245: The name tf.get_default_graph is deprecated. Please use tf.compat.v1.get_default_graph instead.\n 10/14/2019 00:35:21 MainProcess MainThread deprecation_wrapper __getattr__ WARNING From /home/tianze/anaconda3/envs/ftz/lib/python3.5/site-packages/keras/backend/tensorflow_backend.py:3980: The name tf.nn.avg_pool is deprecated. Please use tf.nn.avg_pool2d instead.\n 10/14/2019 00:35:21 MainProcess MainThread deprecation_wrapper __getattr__ WARNING From /home/tianze/anaconda3/envs/ftz/lib/python3.5/site-packages/keras/backend/tensorflow_backend.py:2018: The name tf.image.resize_nearest_neighbor is deprecated. 
Please use tf.compat.v1.image.resize_nearest_neighbor instead.\n 10/14/2019 00:35:33 MainProcess MainThread deprecation_wrapper __getattr__ WARNING From /home/tianze/anaconda3/envs/ftz/lib/python3.5/site-packages/keras/optimizers.py:790: The name tf.train.Optimizer is deprecated. Please use tf.compat.v1.train.Optimizer instead.\n Traceback (most recent call last): File "/home/tianze/face/lib/cli.py", line 128, in execute_script process.process() File "/home/tianze/face/scripts/extract.py", line 61, in process self.run_extraction() File "/home/tianze/face/scripts/extract.py", line 187, in run_extraction self.extractor.launch() File "/home/tianze/face/plugins/extract/pipeline.py", line 180, in launch self._launch_aligner() File "/home/tianze/face/plugins/extract/pipeline.py", line 326, in _launch_aligner self._aligner.initialize(**kwargs) File "/home/tianze/face/plugins/extract/_base.py", line 330, in initialize self.init_model() File "/home/tianze/face/plugins/extract/align/fan.py", line 41, in init_model self.model.predict(placeholder) File "/home/tianze/face/lib/model/session.py", line 66, in predict return self._model.predict(feed, batch_size=batch_size) File "/home/tianze/anaconda3/envs/ftz/lib/python3.5/site-packages/keras/engine/training.py", line 1169, in predict steps=steps) File "/home/tianze/anaconda3/envs/ftz/lib/python3.5/site-packages/keras/engine/training_arrays.py", line 294, in predict_loop batch_outs = f(ins_batch) File "/home/tianze/anaconda3/envs/ftz/lib/python3.5/site-packages/keras/backend/tensorflow_backend.py", line 2715, in __call__ return self._call(inputs) File "/home/tianze/anaconda3/envs/ftz/lib/python3.5/site-packages/keras/backend/tensorflow_backend.py", line 2675, in _call fetched = self._callable_fn(*array_vals) File "/home/tianze/anaconda3/envs/ftz/lib/python3.5/site-packages/tensorflow/python/client/session.py", line 1458, in __call__ run_metadata_ptr) tensorflow.python.framework.errors_impl.UnknownError: 2 root error(s) found. 
(0) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above. [[{{node conv2d_1/convolution}}]] [[conv2d_56/BiasAdd/_6051]] (1) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above. [[{{node conv2d_1/convolution}}]] 0 successful operations. 0 derived errors ignored. ============ System Information ============ encoding: UTF-8 git_branch: Not Found git_commits: Not Found gpu_cuda: 9.1 gpu_cudnn: 7.6.3 gpu_devices: GPU_0: Tesla V100-PCIE-32GB, GPU_1: Tesla V100-PCIE-32GB, GPU_2: Tesla V100-PCIE-32GB gpu_devices_active: GPU_0, GPU_1, GPU_2 gpu_driver: 410.104 gpu_vram: GPU_0: 32480MB, GPU_1: 32480MB, GPU_2: 32480MB os_machine: x86_64 os_platform: Linux-4.15.0-64-generic-x86_64-with-debian-buster-sid os_release: 4.15.0-64-generic py_command: /home/tianze/face/faceswap.py extract -i /home/tianze/face/src/nomask.mp4 -o /home/tianze/face/faces/nomask --serializer json -D s3fd -A fan -nm none -min 0 -l 0.4 -bt 0.0 -een 1 -sz 256 -si 0 -L INFO -gui py_conda_version: conda 4.7.12 py_implementation: CPython py_version: 3.5.6 py_virtual_env: True sys_cores: 48 sys_processor: x86_64 sys_ram: Total: 514627MB, Available: 469354MB, Used: 40669MB, Free: 314539MB =============== Pip Packages =============== absl-py==0.8.1 astor==0.8.0 certifi==2016.9.26 cloudpickle==1.2.2 cycler==0.10.0 dask==2.5.2 decorator==4.4.0 fastcluster==1.1.25 ffmpy==0.2.2 gast==0.3.2 google-pasta==0.1.7 grpcio==1.24.1 h5py==2.9.0 imageio==2.4.1 imageio-ffmpeg==0.3.0 Keras==2.2.4 Keras-Applications==1.0.8 Keras-Preprocessing==1.1.0 kiwisolver==1.0.1 Markdown==3.1.1 matplotlib==2.2.2 networkx==2.3 numpy==1.16.2 nvidia-ml-py3==7.352.1 olefile==0.46 opencv-python==4.1.1.26 pathlib==1.0.1 Pillow==6.1.0 protobuf==3.10.0 psutil==5.4.7 pyparsing==2.4.2 python-dateutil==2.7.3 pytz==2019.3 
PyWavelets==1.0.0 PyYAML==5.1.2 scikit-image==0.14.0 scikit-learn==0.20.0 scipy==1.1.0 six==1.11.0 tensorboard==1.14.0 tensorflow-estimator==1.14.0 tensorflow-gpu==1.14.0 termcolor==1.1.0 toolz==0.10.0 toposort==1.5 tornado==5.1.1 tqdm==4.36.1 Werkzeug==0.16.0 wrapt==1.11.2 ============== Conda Packages ============== # packages in environment at /home/tianze/anaconda3/envs/ftz: # # Name Version Build Channel _libgcc_mutex 0.1 main absl-py 0.8.1 pypi_0 pypi astor 0.8.0 pypi_0 pypi blas 1.0 mkl bzip2 1.0.8 h516909a_1 conda-forge ca-certificates 2019.9.11 hecc5488_0 conda-forge certifi 2016.9.26 py35_0 conda-forge cloudpickle 1.2.2 py_0 cycler 0.10.0 py35hc4d5149_0 dask-core 2.5.2 py_0 dbus 1.13.6 h746ee38_0 decorator 4.4.0 py_0 expat 2.2.6 he6710b0_0 fastcluster 1.1.25 py35hf8a1672_0 conda-forge ffmpeg 4.0 h04d0a96_0 ffmpy 0.2.2 pypi_0 pypi fontconfig 2.12.6 h49f89f6_0 freetype 2.8 hab7d2ae_1 gast 0.3.2 pypi_0 pypi glib 2.56.2 hd408876_0 google-pasta 0.1.7 pypi_0 pypi grpcio 1.24.1 pypi_0 pypi gst-plugins-base 1.14.0 hbbd80ab_1 gstreamer 1.14.0 hb453b48_1 h5py 2.9.0 pypi_0 pypi icu 58.2 h9c2bf20_1 imageio 2.4.1 py35_0 imageio-ffmpeg 0.3.0 py_0 conda-forge intel-openmp 2019.4 243 jpeg 9b h024ee3a_2 keras 2.2.4 pypi_0 pypi keras-applications 1.0.8 pypi_0 pypi keras-preprocessing 1.1.0 pypi_0 pypi kiwisolver 1.0.1 py35hf484d3e_0 libedit 3.1.20181209 hc058e9b_0 libffi 3.2.1 hd88cf55_4 libgcc-ng 9.1.0 hdf63c60_0 libgfortran-ng 7.3.0 hdf63c60_0 libopus 1.3 h7b6447c_0 libpng 1.6.37 hbc83047_0 libstdcxx-ng 9.1.0 hdf63c60_0 libtiff 4.0.10 h2733197_2 libuuid 1.0.3 h1bed415_2 libvpx 1.7.0 h439df22_0 libxcb 1.13 h1bed415_1 libxml2 2.9.9 hea5a465_1 markdown 3.1.1 pypi_0 pypi matplotlib 2.2.2 py35h0e671d2_1 mkl 2018.0.3 1 ncurses 6.1 he6710b0_1 networkx 2.3 py_0 numpy 1.16.2 pypi_0 pypi nvidia-ml-py3 7.352.1 pypi_0 pypi olefile 0.46 py35_0 opencv-python 4.1.1.26 pypi_0 pypi openssl 1.0.2r h14c3975_0 conda-forge pathlib 1.0.1 pypi_0 pypi pcre 8.43 he6710b0_0 pillow 6.1.0 pypi_0 
pypi pip 10.0.1 py35_0 protobuf 3.10.0 pypi_0 pypi psutil 5.4.7 py35h14c3975_0 pyparsing 2.4.2 py_0 pyqt 5.9.2 py35h751905a_0 python 3.5.6 hc3d631a_0 python-dateutil 2.7.3 py35_0 pytz 2019.3 py_0 pywavelets 1.0.0 py35hdd07704_0 pyyaml 5.1.2 pypi_0 pypi qt 5.9.5 h7e424d6_0 readline 7.0 h7b6447c_5 scikit-image 0.14.0 py35hf484d3e_1 scikit-learn 0.20.0 py35h4989274_1 scipy 1.1.0 py35hd20e5f9_0 setuptools 41.4.0 pypi_0 pypi sip 4.19.8 py35hf484d3e_0 six 1.11.0 py35_1 sqlite 3.30.0 h7b6447c_0 tensorboard 1.14.0 pypi_0 pypi tensorflow-estimator 1.14.0 pypi_0 pypi tensorflow-gpu 1.14.0 pypi_0 pypi termcolor 1.1.0 pypi_0 pypi tk 8.6.8 hbc83047_0 toolz 0.10.0 py_0 toposort 1.5 py_3 conda-forge tornado 5.1.1 py35h7b6447c_0 tqdm 4.36.1 py_0 werkzeug 0.16.0 pypi_0 pypi wheel 0.31.1 py35_0 wrapt 1.11.2 pypi_0 pypi xz 5.2.4 h14c3975_4 zlib 1.2.11 h7b6447c_3 zstd 1.3.7 h0b5b093_0 ================= Configs ================== --------- gui.ini --------- [global] fullscreen: False tab: extract options_panel_width: 30 console_panel_height: 20 font: default font_size: 9 --------- train.ini --------- [global] coverage: 68.75 mask_type: none mask_blur: False icnr_init: False conv_aware_init: False subpixel_upscaling: False reflect_padding: False penalized_mask_loss: True loss_function: mae learning_rate: 5e-05 [model.dfl_sae] input_size: 128 autoencoder_dims: 0 architecture: df clipnorm: True multiscale_decoder: False decoder_dims: 21 encoder_dims: 42 [model.realface] complexity_decoder: 512 input_size: 64 dense_nodes: 1536 output_size: 128 complexity_encoder: 128 [model.original] lowmem: False [model.dfl_h128] lowmem: False [model.villain] lowmem: False [model.unbalanced] input_size: 128 complexity_decoder_b: 512 complexity_encoder: 128 lowmem: False complexity_decoder_a: 384 clipnorm: True nodes: 1024 [trainer.original] rotation_range: 10 color_clahe_max_size: 4 flip_chance: 50 color_lightness: 30 color_ab: 8 color_clahe_chance: 50 preview_images: 14 zoom_amount: 5 shift_range: 5 
--------- convert.ini --------- [mask.box_blend] radius: 5.0 distance: 11.0 type: gaussian passes: 1 [mask.mask_blend] erosion: 0.0 radius: 3.0 type: normalized passes: 4 [color.color_transfer] clip: True preserve_paper: True [color.manual_balance] balance_3: 0.0 balance_1: 0.0 contrast: 0.0 colorspace: HSV brightness: 0.0 balance_2: 0.0 [color.match_hist] threshold: 99.0 [scaling.sharpen] threshold: 5.0 radius: 0.3 method: unsharp_mask amount: 150 [writer.ffmpeg] profile: auto container: mp4 level: auto tune: none preset: medium crf: 23 codec: libx264 [writer.pillow] optimize: False png_compress_level: 3 tif_compression: tiff_deflate jpg_quality: 75 format: png draw_transparent: False gif_interlace: True [writer.gif] subrectangles: False loop: 0 fps: 25 palettesize: 256 [writer.opencv] png_compress_level: 3 draw_transparent: False jpg_quality: 75 format: png --------- .faceswap --------- backend: nvidia --------- extract.ini --------- [global] allow_growth: False [align.fan] batch-size: 8 [detect.cv2_dnn] confidence: 50 [detect.mtcnn] batch-size: 8 threshold_1: 0.6 threshold_2: 0.7 minsize: 20 scalefactor: 0.709 threshold_3: 0.7 [detect.s3fd] batch-size: 8 confidence: 50 ```
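This "Failed to get convolution algorithm" / `CUDNN_STATUS_INTERNAL_ERROR` class of failure is frequently triggered by TensorFlow pre-allocating all GPU memory in one block; notably, the config dump above shows `allow_growth: False` under `[global]` in `extract.ini`, and GPU 0 only has ~1143MB free while the other two cards are idle. A hedged first thing to try (an assumption based on the reported symptoms, not a confirmed fix for this report) is enabling incremental allocation:

```ini
; extract.ini — let TensorFlow allocate GPU VRAM incrementally
; instead of grabbing it all up front
[global]
allow_growth: True
```

If the nearly full GPU 0 is the culprit, restricting the process to a free card (e.g. `CUDA_VISIBLE_DEVICES=1`) before launching the extract may also be worth testing.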
closed
2019-10-13T13:56:34Z
2020-02-03T10:26:09Z
https://github.com/deepfakes/faceswap/issues/904
[]
Futtttz
1
vitalik/django-ninja
pydantic
1,384
How to set CSRF_HEADER_NAME with CORS
Hi, I'm having trouble understanding how to make CSRF protection work with CORS. Using ```py api = NinjaAPI(auth=SessionAuth(csrf=True)) ``` and looking at the implementation, the `ninja.utils.check_csrf` method will be called for all requests. In particular, the method `CsrfViewMiddleware.process_view` will be called. For POST requests with no form data, this method requires that the header `settings.CSRF_HEADER_NAME` is set to the correct value of the CSRF token/secret. In a CORS context, with the CSRF token being in the cookie (or the session), we cannot access it to set the header. How do you make this work? Thanks
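The check being asked about can be illustrated with a small stand-alone sketch. This is a hand-rolled illustration of the double-submit comparison, not Django's actual internals (names like `csrf_header_ok` are mine); Django's real check additionally unmasks the token before comparing. Django's default `CSRF_HEADER_NAME` is `HTTP_X_CSRFTOKEN`, i.e. the `X-CSRFToken` header on the wire.

```python
# Illustrative sketch only — function and parameter names are assumptions,
# not Django internals. For non-form POSTs, CsrfViewMiddleware's check
# boils down to: the value sent in the CSRF_HEADER_NAME request header
# must match the CSRF secret held in the cookie/session.
import hmac
from typing import Optional


def csrf_header_ok(cookie_token: str, header_token: Optional[str]) -> bool:
    """Return True when the client echoed the cookie token in the header."""
    if not header_token:
        return False
    # Constant-time comparison, as Django does internally
    return hmac.compare_digest(cookie_token, header_token)
```

Since a cross-origin page cannot read another origin's cookies, a common workaround is exposing a small unauthenticated endpoint that returns `django.middleware.csrf.get_token(request)`; the client fetches it with credentials and echoes the value back in the `X-CSRFToken` header on subsequent POSTs.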
closed
2025-01-08T14:12:39Z
2025-01-08T17:13:30Z
https://github.com/vitalik/django-ninja/issues/1384
[]
Quetute
1
dunossauro/fastapi-do-zero
sqlalchemy
210
Review the last error from the CI lesson
closed
2024-07-19T18:50:05Z
2024-07-19T23:28:39Z
https://github.com/dunossauro/fastapi-do-zero/issues/210
[]
dunossauro
0