| repo_name | topic | issue_number | title | body | state | created_at | updated_at | url | labels | user_login | comments_count |
|---|---|---|---|---|---|---|---|---|---|---|---|
ranaroussi/yfinance | pandas | 2,057 | python nvidia_stock_tool.py | ### Describe bug
C:\Users\basse>python nvidia_stock_tool.py
Current Stock Information for NVDA:
Company Name: NVIDIA Corporation
Traceback (most recent call last):
File "C:\Users\basse\nvidia_stock_tool.py", line 34, in <module>
display_stock_data(hist, info)
File "C:\Users\basse\nvidia_stock_tool.py", line 20, in display_stock_data
print(f"Current Price: ${info['regularMarketPrice']:.2f}")
~~~~^^^^^^^^^^^^^^^^^^^^^^
KeyError: 'regularMarketPrice'
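The traceback above comes from indexing `info['regularMarketPrice']` directly. A minimal, hedged sketch of the defensive pattern used later in this report (the dictionary contents here are illustrative, not real yfinance output):

```python
# Illustrative only: a stripped-down info dict in which the price key is
# absent, as can happen with some yfinance responses.
info = {"symbol": "NVDA", "longName": "NVIDIA Corporation"}

# dict.get avoids the KeyError by returning a fallback value instead.
price = info.get("regularMarketPrice", "Data not available")
print(price)
```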
### Simple code that reproduces your problem
Python
### Debug log
import yfinance as yf
import logging

# Configure logging
logging.basicConfig(filename='nvidia_stock_tool.log', level=logging.DEBUG,
                    format='%(asctime)s - %(levelname)s - %(message)s')

def get_stock_data(ticker):
    logging.debug(f"Fetching stock data for ticker: {ticker}")
    try:
        # Create a Ticker object
        stock = yf.Ticker(ticker)
        # Fetch historical market data for the last 5 days
        hist = stock.history(period='5d')  # You can adjust the period as needed
        # Fetch current stock info
        info = stock.info
        logging.debug(f"Fetched stock info: {info}")
        return hist, info
    except Exception as e:
        logging.error(f"Error fetching stock data: {e}")
        raise

def display_stock_data(hist, info):
    logging.debug("Displaying stock data")
    # Print the entire info dictionary to inspect available keys
    logging.debug("Full info dictionary:")
    for key, value in info.items():
        logging.debug(f"{key}: {value}")

    # Safely retrieve information with defaults if key is missing
    def get_info(key):
        return info.get(key, 'Data not available')

    # Print current stock information
    print(f"Current Stock Information for {get_info('symbol')}:")
    print(f"Company Name: {get_info('longName')}")
    print(f"Current Price: ${get_info('regularMarketPrice')}")
    print(f"Market Cap: ${get_info('marketCap')}")
    print(f"PE Ratio: {get_info('trailingPE')}")
    print(f"52 Week High: ${get_info('fiftyTwoWeekHigh')}")
    print(f"52 Week Low: ${get_info('fiftyTwoWeekLow')}")
    print(f"Dividend Yield: {get_info('dividendYield')}")
    # Print historical data
    print("\nHistorical Data (Last 5 Days):")
    print(hist)

if __name__ == "__main__":
    ticker = 'NVDA'
    try:
        hist, info = get_stock_data(ticker)
        display_stock_data(hist, info)
    except Exception as e:
        logging.error(f"Unhandled exception: {e}")
### Bad data proof
_No response_
### `yfinance` version
Location: C:\Users\basse\AppData\Roaming\Python\Python312\site-packages Requires: beautifulsoup4, frozendict, html5lib, lxml, multitasking, numpy, pandas, peewee, platformdirs, pytz, requests Required-by:
### Python version
Python 3.12.6
### Operating system
windows 11 | closed | 2024-09-17T10:01:17Z | 2024-09-28T10:32:08Z | https://github.com/ranaroussi/yfinance/issues/2057 | [] | Kessi69 | 1 |
python-visualization/folium | data-visualization | 1,320 | Unicode issue in tooltips on Jupyter notebook | 
MVP:
```
import folium
map_osm = folium.Map()
folium.GeoJson('{ "type": "Feature", "properties": { "name": "5/7, Линейная улица, Berdsk, Berdsk municipality, Novosibirsk Oblast, Siberian Federal District, 633011, Russia" }, "geometry": { "type": "Point", "coordinates": [ -75.849253579389796, 47.6434349837781 ] }}', name="5/7, Линейная улица, Berdsk, Berdsk municipality, Novosibirsk Oblast, Siberian Federal District, 633011, Russia", tooltip="5/7, Линейная улица, Berdsk, Berdsk municipality, Novosibirsk Oblast, Siberian Federal District, 633011, Russia").add_to(map_osm)
display(map_osm)
```
I'm running in Jupyter Lab with Python3.7 and latest Folium version (0.10.1+28.ga8ec61d which is with my PR)
Is there a workaround for this? | closed | 2020-05-05T14:07:43Z | 2021-02-03T20:08:20Z | https://github.com/python-visualization/folium/issues/1320 | [
"bug",
"jupyter"
] | galewis2 | 11 |
lepture/authlib | flask | 567 | The expires_in function needs to have a timedelta to avoid tokenExpiry errors for milliseconds | **Describe the bug**
I am using the OAuth2session object
```
client = OAuth2Session(client_id=client_id, client_secret=client_secret, token_endpoint=token_url, grant_type='client_credentials')
client.fetch_token(token_url)
client.get(<MY_PROTECTED_URL>)
```
Here, the library behavior is that the token gets automatically refreshed if that has expired. Refer https://github.com/lepture/authlib/blob/master/authlib/oauth2/client.py#L257
However, the function that checks token expiry, https://github.com/lepture/authlib/blob/master/authlib/oauth2/rfc6749/wrappers.py#L13, simply compares the expiry time with the current time. Because of this we miss some corner cases: when the token is about to expire in a few milliseconds/seconds and the API call to the protected URL is made, authentication fails.
`JWT expired at 2023-06-20T13:16:42Z. Current time: 2023-06-20T13:16:42Z, a difference of 105 milliseconds. Allowed clock skew: 0 milliseconds."
`
**Error Stacks**
`JWT expired at 2023-06-20T13:16:42Z. Current time: 2023-06-20T13:16:42Z, a difference of 105 milliseconds. Allowed clock skew: 0 milliseconds."
`
**To Reproduce**
A minimal example to reproduce the behavior:
While the exact replication is not possible here as the request is failing by few milliseconds.
```
client = OAuth2Session(client_id=<client_id>, client_secret=<client_secret>, token_endpoint=<token_url>, grant_type='client_credentials')
client.fetch_token(<token_ur>l)
client.get(<MY_PROTECTED_URL>)
```
**A clear and concise description of what you expected to happen.**
Even if the token expired only a few milliseconds ago, the library should handle such cases by obtaining a new token.
Instead of the bare comparison at https://github.com/lepture/authlib/blob/master/authlib/oauth2/rfc6749/wrappers.py#L17, we should add a small timedelta. For example, even if the token is only going to expire in the next 60 seconds, refresh it anyway.
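A hedged sketch of what such a check could look like (this is not authlib's actual implementation; the function name and the 60-second default are assumptions for illustration):

```python
import time

# Hypothetical expiry check with leeway: treat the token as expired if it
# expires within `leeway` seconds, so a request is never sent with a token
# that is only milliseconds away from expiry.
def is_token_expired(expires_at, leeway=60, now=None):
    if expires_at is None:
        return None  # expiry unknown
    if now is None:
        now = time.time()
    return expires_at - leeway <= now
```

With `leeway=60`, a token expiring at t=1000 is already considered expired at t=950, which would trigger a refresh before the protected call is made.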
**Environment:**
- OS: Linux
- Python Version: 3.6
- Authlib Version: 1.1.0
**Additional context**
A timedelta should be introduced in the function so that we avoid API requests failing by a few milliseconds. For instance, the token could be treated as expired 30-60 seconds before its actual expiry.
| closed | 2023-07-24T08:47:18Z | 2024-04-08T16:58:45Z | https://github.com/lepture/authlib/issues/567 | [
"bug",
"good first issue"
] | pghole | 2 |
LibrePhotos/librephotos | django | 670 | Change date format | How can I change the date format librephotos uses?
When I set the website's profile language to English I get dates formatted like **MMMM DD YYYY, DDDD** which is pretty acceptable as far as I know.
When I set the language to Dutch however that same format is used (with day and month names in Dutch). In Dutch this looks completely ridiculous. No one uses that format in Dutch. We would use **DDDD DD MMM YYYY**.
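For illustration, the two orderings can be expressed with `strftime` (a hedged sketch; the day/month names come out in English here because no locale is set, the point is only the field order):

```python
from datetime import date

d = date(2022, 10, 18)

# English-style ordering: month day year, weekday
english_style = d.strftime("%B %d %Y, %A")
# Dutch-style ordering: weekday day abbreviated-month year
dutch_style = d.strftime("%A %d %b %Y")

print(english_style)  # October 18 2022, Tuesday
print(dutch_style)    # Tuesday 18 Oct 2022
```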
I haven't found a way to change the date format however, but that doesn't mean there isn't one... I hope...
If this can't be changed yet, please consider this a feature request. Personally, I'd prefer to set the date format right next to the language dropdown in the profile settings.
Thanks! | closed | 2022-10-18T14:55:06Z | 2022-11-15T14:28:01Z | https://github.com/LibrePhotos/librephotos/issues/670 | [
"enhancement",
"frontend"
] | Roebie | 3 |
serengil/deepface | deep-learning | 1,430 | [FEATURE]: Facenet model improvement | ### Description
**Extremely unstable metrics for a production-level face verification system**
Briefly, my system verifies a person by comparing a photo against a video: I extract frames and take the average cosine similarity across frames as the resulting similarity. The issue is that the flagship model pair, Facenet512 + RetinaFace according to the benchmark, performs quite well on European faces but is highly unstable for Asian faces, in my case Central Asian. Similarity drops by 0.2-0.25, while barely discriminating between different faces. Is this an unsolvable issue? If it can be solved, is the only way to fine-tune the recognition model?
P.S. I already tested all possible model pairs for my domain; Facenet512 + RetinaFace is still the best choice, but I cannot get close to good accuracy with the current state of Facenet512.
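The frame-averaging step described above can be sketched as follows (a minimal illustration with plain Python lists; real embeddings would come from the face recognition model's output):

```python
import math

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def average_similarity(reference, frame_embeddings):
    # Average the cosine similarity between the reference photo embedding
    # and each video frame's embedding.
    sims = [cosine_similarity(reference, e) for e in frame_embeddings]
    return sum(sims) / len(sims)
```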
### Additional Info
_No response_ | closed | 2025-01-30T14:13:54Z | 2025-01-30T14:40:45Z | https://github.com/serengil/deepface/issues/1430 | [
"enhancement",
"dependencies"
] | YernarBekbolat | 1 |
suitenumerique/docs | django | 320 | ⚡️Compress img before saving | ## Feature Request
Sometimes images saved in Docs are very heavy; I noticed some over 6 MB.
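For illustration, a recompression step could look roughly like this (a hedged sketch: Pillow, the 1920 px bound, and the JPEG quality value are assumptions, not what Docs actually uses):

```python
from io import BytesIO

from PIL import Image  # assumption: Pillow would be available server-side

def compress_image(data: bytes, max_side: int = 1920, quality: int = 80) -> bytes:
    """Downscale and re-encode an uploaded image before storing it in MinIO."""
    img = Image.open(BytesIO(data))
    img.thumbnail((max_side, max_side))  # in-place, keeps aspect ratio
    out = BytesIO()
    img.convert("RGB").save(out, format="JPEG", quality=quality)
    return out.getvalue()
```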
We could compress them without losing too much quality before saving them to MinIO; it would speed up page loading a lot. | open | 2024-10-10T15:37:21Z | 2024-10-17T15:09:28Z | https://github.com/suitenumerique/docs/issues/320 | [
"enhancement",
"backend"
] | AntoLC | 1 |
babysor/MockingBird | pytorch | 883 | Heavy noise when playing back cloned audio | **How should I deal with heavy noise in the synthesized audio?**
**Env & To Reproduce**
encoder: pretrained_bak_5805000
ppg_extractor: 24epoch.pt
ppg2mel: ppg2mel.yaml ppg2melbest_loss_step_322000.pth
vocoder_hifigan_24k: hifigan_24k.pt config.json
The models used are listed above. I ran run.py to get the synthesized audio, and on playback the noise is very loud, although the synthesized timbre is fairly close to the target voice. How can I get rid of the noise?
Also, I downloaded the community-provided pretrained synthesizer from section 2.3 of the README, but I don't know where this model is supposed to be used. It is only specified when running the toolbox; does run.py not need it?
Please advise, thanks! | open | 2023-04-20T08:29:22Z | 2023-06-07T09:03:01Z | https://github.com/babysor/MockingBird/issues/883 | [] | DrewdropLife | 1 |
paperless-ngx/paperless-ngx | django | 9,467 | [BUG] SMTP Password not taken from env | ### Description
I am trying to send an email with a workflow. The SMTP settings are set, but I changed the password on the server side a while ago and also changed it in Paperless by updating the env variable PAPERLESS_EMAIL_HOST_PASSWORD to the new password.
However, I can't send emails, because I get an error that authentication failed.
I can look at the server logs and see which password is used: it is the old one entered when I first installed Paperless.
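For context, settings like this are usually read from the environment once when the process starts (a hedged sketch of that pattern, not Paperless's actual code); a value changed in the compose file only takes effect once the container is recreated with the new environment:

```python
import os

# Pattern sketch: the password is captured when settings are loaded at
# process start, not re-read on every send.
EMAIL_HOST_PASSWORD = os.environ.get("PAPERLESS_EMAIL_HOST_PASSWORD", "")

def smtp_password() -> str:
    # Returns the value read at startup, not the current container config.
    return EMAIL_HOST_PASSWORD
```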
### Steps to reproduce
see above
### Webserver logs
```bash
paperless-webserver-1 | [2025-03-23 15:07:09,342] [INFO] [paperless.matching] Document did not match Workflow: Steuerbox Buhl Steuererklärung Belegsammlung
paperless-webserver-1 | [2025-03-23 15:07:30,143] [INFO] [paperless.matching] Document matched WorkflowTrigger 13 from Workflow: Steuerbox Buhl Steuererklärung Belegsammlung
paperless-webserver-1 | [2025-03-23 15:07:30,144] [INFO] [paperless.handlers] Applying WorkflowAction 12 from Workflow: Steuerbox Buhl Steuererklärung Belegsammlung
paperless-webserver-1 | [2025-03-23 15:07:38,374] [ERROR] [paperless.handlers] Error occurred sending notification email: (535, b'5.7.8 Error: authentication failed: (reason unavailable)')
paperless-webserver-1 | Traceback (most recent call last):
paperless-webserver-1 | File "/usr/src/paperless/src/documents/signals/handlers.py", line 990, in email_action
paperless-webserver-1 | n_messages = send_email(
paperless-webserver-1 | ^^^^^^^^^^^
paperless-webserver-1 | File "/usr/src/paperless/src/documents/mail.py", line 38, in send_email
paperless-webserver-1 | File "/usr/local/lib/python3.12/site-packages/django/core/mail/message.py", line 301, in send
paperless-webserver-1 | return self.get_connection(fail_silently).send_messages([self])
paperless-webserver-1 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
paperless-webserver-1 | File "/usr/local/lib/python3.12/site-packages/django/core/mail/backends/smtp.py", line 128, in send_messages
paperless-webserver-1 | new_conn_created = self.open()
paperless-webserver-1 | ^^^^^^^^^^^
paperless-webserver-1 | File "/usr/local/lib/python3.12/smtplib.py", line 750, in login
paperless-webserver-1 | raise last_exception
paperless-webserver-1 | File "/usr/local/lib/python3.12/smtplib.py", line 739, in login
paperless-webserver-1 | (code, resp) = self.auth(
paperless-webserver-1 | ^^^^^^^^^^
paperless-webserver-1 | raise SMTPAuthenticationError(code, resp)
paperless-webserver-1 | smtplib.SMTPAuthenticationError: (535, b'5.7.8 Error: authentication failed: (reason unavailable)')
```
### Browser logs
```bash
```
### Paperless-ngx version
2.14.7
### Host OS
Synology
### Installation method
Docker - official image
### System status
```json
```
### Browser
Safari
### Configuration changes
_No response_
### Please confirm the following
- [x] I believe this issue is a bug that affects all users of Paperless-ngx, not something specific to my installation.
- [x] This issue is not about the OCR or archive creation of a specific file(s). Otherwise, please see above regarding OCR tools.
- [x] I have already searched for relevant existing issues and discussions before opening this report.
- [x] I have updated the title field above with a concise description. | closed | 2025-03-23T14:16:37Z | 2025-03-23T14:54:57Z | https://github.com/paperless-ngx/paperless-ngx/issues/9467 | [
"not a bug"
] | puzich | 3 |
521xueweihan/HelloGitHub | python | 2,748 | [Open-source self-recommendation] Batch-generate subtitle files for videos and translate them into other languages | ## Recommended project
<!-- This is the entry point for recommending projects to the HelloGitHub monthly. Self-recommendations and recommendations of open-source projects are welcome; the only requirement is to introduce the project following the prompts below. -->
<!-- Click "Preview" above to view your submission immediately -->
<!-- Only open-source projects hosted on GitHub are accepted; please provide the GitHub project URL -->
- Project URL: https://github.com/buxuku/video-subtitle-master
<!-- Please choose one of: C, C#, C++, CSS, Go, Java, JS, Kotlin, Objective-C, PHP, Python, Ruby, Rust, Swift, Other, Books, Machine Learning -->
- Category: JS
<!-- Please describe what it does in about 20 words, like an article title, so it is clear at a glance -->
- Project title: Batch-generate subtitle files for local videos and translate them into other languages
<!-- What is this project, what can it do, what makes it special or what pain point does it solve, what scenarios does it suit, and what can beginners learn from it? Length 32-256 characters -->
- Project description: I have a large batch of foreign-language videos without subtitles. I wanted to add subtitle files and also translate them into Chinese, and to reduce the workload through batch processing.
So I built this small tool, which can batch-generate subtitle files for local videos and call third-party translation APIs to translate them into other subtitle languages.
<!-- What makes it stand out? What distinguishes it from similar projects? -->
- Highlights:
- Source-language and target-language subtitle files are placed in the same directory as the video, so either subtitle file can be loaded at playback time
- Batch-processes all video files under a directory
- Can generate subtitles only, without translating, which is convenient for batch subtitle generation
- Supports Volcengine translation
- Supports Baidu translation
- Supports deeplx translation (batch translation is prone to rate limiting)
- Customizable subtitle file names, for compatibility with subtitle detection in different players
- Customizable content of the translated subtitle file: translation only, or original subtitles plus translation
- The project integrates `whisper.cpp`, which is optimized for Apple Silicon and generates subtitles quickly
- The project integrates `fluent-ffmpeg`, so there is no need to install `ffmpeg`
- Screenshot:

| open | 2024-05-13T10:01:07Z | 2024-05-24T06:41:09Z | https://github.com/521xueweihan/HelloGitHub/issues/2748 | [
"JavaScript 项目"
] | buxuku | 0 |
pyjanitor-devs/pyjanitor | pandas | 1,044 | [ENH] let `fill_empty` function support to fill NaN value with mean, median or mode | # Brief Description
As title.
For some data, such as GDP, filling its NaN value with 0 isn't a good idea.
Most GDP values run into the millions, so we should fill NaN values with the mean rather than `0`.
# API
```python
def fill_empty(
df: pd.DataFrame,
column_names: list[str | int],
value: Any = None,
method: str = None,
) -> pd.DataFrame:
...
```
1. At least one of `value` and `method` must not be `None`.
2. `method` should be `'mean'`, `'median'`, or `'mode'`.
# Example
```python
import pandas as pd
import janitor # noqa
# create a DataFrame
df = pd.Series([2, 2, None, 0, 4], name="nan-col").to_frame()
# nan-col
# 0 2.0
# 1 2.0
# 2 NaN
# 3 0.0
# 4 4.0
# fill NaN with mean value
df.fill_empty(["nan-col"], method="mean")
# nan-col
# 0 2.0
# 1 2.0
# 2 2.0
# 3 0.0
# 4 4.0
```
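The proposed behaviour can be sketched with plain pandas (a hedged illustration, not pyjanitor's implementation; `fill_empty_with` is a made-up helper name):

```python
import pandas as pd

def fill_empty_with(df: pd.DataFrame, column: str, method: str) -> pd.DataFrame:
    # Compute the fill value from the column itself, then fill NaNs with it.
    if method == "mean":
        value = df[column].mean()
    elif method == "median":
        value = df[column].median()
    elif method == "mode":
        value = df[column].mode().iloc[0]
    else:
        raise ValueError(f"unsupported method: {method!r}")
    return df.assign(**{column: df[column].fillna(value)})

df = pd.Series([2, 2, None, 0, 4], name="nan-col").to_frame()
filled = fill_empty_with(df, "nan-col", "mean")
print(filled["nan-col"].tolist())  # [2.0, 2.0, 2.0, 0.0, 4.0]
```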
| open | 2022-03-16T11:43:21Z | 2022-03-19T11:58:54Z | https://github.com/pyjanitor-devs/pyjanitor/issues/1044 | [] | Zeroto521 | 3 |
tensorpack/tensorpack | tensorflow | 833 | Are there ImageNet pretrained weights for Res101-GroupNorm32-Alignpadding? | I have seen the weights for Res50-GroupNorm32-Alignpadding.
However, Res50 is lower in accuracy.
Are there ImageNet pretrained weights for Res101-GroupNorm32-Alignpadding? | closed | 2018-07-19T02:33:48Z | 2018-12-25T08:41:57Z | https://github.com/tensorpack/tensorpack/issues/833 | [
"examples"
] | engineer1109 | 8 |
microsoft/qlib | machine-learning | 1,350 | get_recorder use data from another computer | ## ❓ Questions and Help
We sincerely suggest you to carefully read the [documentation](http://qlib.readthedocs.io/) of our library as well as the official [paper](https://arxiv.org/abs/2009.11189). After that, if you still feel puzzled, please describe the question clearly under this issue.
Suppose I am using mlruns data copied from another computer; how do I configure R.get_recorder so that it loads from that directory?
Thanks for help.
| closed | 2022-11-13T07:25:08Z | 2023-03-07T15:01:53Z | https://github.com/microsoft/qlib/issues/1350 | [
"question",
"stale"
] | nkchem09 | 2 |
aiogram/aiogram | asyncio | 752 | Callback handler doesn't work | ## Context
Please provide any relevant information about your setup. This is important in case the issue is not reproducible except for under certain conditions.
* Operating System: Ubuntu 20.14
* Python Version: 3.9.5
* aiogram version: 2.16
* aiohttp version: 3.8.1
* uvloop version (if installed): 0.16.0
## Expected Behavior
I have a task (a loop) in my project that notifies me in Telegram when a particular event has happened. When an event happens,
the backend sends me a message with an InlineKeyboardMarkup, and if a button is pressed, the callback handler should do some work.
## Current Behavior
The backend sends the InlineKeyboardMarkup to the bot, but when I press a button nothing happens. (Of course, I have registered
the callback_query_handler, but in another module; I suspect this is the problem.)
### Code:
```py
// src/path/to/file.py
...
if True:
keyboard = await get_inline_event_keyboard()
three_exclamation = ':heavy_exclamation_mark:' * 3
emojized_three_exclamation = emoji.emojize(three_exclamation, use_aliases=True)
await bot.send_message(
MY_TG_ID,
text='LOREM IPSUM DOLOR SER AMET',
reply_markup=keyboard
)
```
```py
// src/path/to/another/file.py
@dp.callback_query_handler(lambda c: c.data == 'text')
async def process_callback_text(call: CallbackQuery):
await bot.answer_callback_query(call.id)
await bot.send_message(call.from_user.id, text='HAM')
@dp.callback_query_handler(lambda c: c.data == 'text_1')
async def process_callback_text_1(call: CallbackQuery):
await bot.answer_callback_query(call.id)
await bot.send_message(call.from_user.id, text='SPAM')
```
```py
// src/path/to/keyboard.py
async def get_inline_event_keyboard():
keyboard = InlineKeyboardMarkup(row_width=2)
buttons = [
InlineKeyboardButton(text=InlineEnumButtons.LOREM, callback_data='text'),
InlineKeyboardButton(text=InlineEnumButtons.IPSUM, callback_data='text_1')
]
keyboard.row(*buttons)
return keyboard
```
What's the problem?
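One common cause of this symptom is that the module defining the handlers is never imported before polling starts, so the decorators never run. A minimal stand-in dispatcher (not aiogram itself) illustrating that `@dp.callback_query_handler` registers at import time:

```python
class MiniDispatcher:
    """Toy stand-in for aiogram's Dispatcher, for illustration only."""

    def __init__(self):
        self.handlers = []

    def callback_query_handler(self, predicate):
        def decorator(fn):
            # Registration happens when the decorated module is imported.
            self.handlers.append((predicate, fn))
            return fn
        return decorator

    def dispatch(self, callback_data):
        for predicate, fn in self.handlers:
            if predicate(callback_data):
                return fn(callback_data)
        return None  # no handler registered/matched: "nothing happens"

dp = MiniDispatcher()

@dp.callback_query_handler(lambda c: c == "text")
def process_text(callback_data):
    return "HAM"
```

If the module containing the decorated functions is never imported, `dp.handlers` stays empty and `dispatch` silently returns `None`, which matches the observed behaviour.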
| closed | 2021-11-19T14:26:27Z | 2022-08-12T07:41:38Z | https://github.com/aiogram/aiogram/issues/752 | [
"stale",
"needs triage"
] | komarp | 3 |
pytest-dev/pytest-randomly | pytest | 209 | reorganize randomly across more things | would it be possible to get pytest-randomly to shuffle based on overlapping fixture scopes rather than just module, class, fn | open | 2019-11-06T20:41:21Z | 2019-11-11T20:46:59Z | https://github.com/pytest-dev/pytest-randomly/issues/209 | [] | graingert | 3 |
shibing624/text2vec | nlp | 54 | Similarity for long texts | ### Describe the Question
Is there a model that supports similarity computation for long texts?
| closed | 2023-01-05T02:18:15Z | 2023-01-13T01:46:07Z | https://github.com/shibing624/text2vec/issues/54 | [
"question"
] | xxllp | 5 |
pandas-dev/pandas | data-science | 60,282 | BUG (string dtype): `replace()` value in string column with non-string should cast to object dtype instead of raising an error | For all other dtypes (I think, just checked with the one below), if the value to replace with in `replace()` doesn't fit into the calling series, then we "upcast" to object dtype and then do the replacement anyway.
Simple example with an integer series:
```python
>>> ser = pd.Series([1, 2])
>>> ser.replace(1, "str")
0 str
1 2
dtype: object
```
However, for the future string dtype, and then trying to replace a value with a non-string, we do _not_ cast to object dtype currently, but raise instead:
```python
>>> pd.options.future.infer_string = True
>>> ser = pd.Series(["a", "b"])
>>> ser.replace("a", 1)
...
File ~/scipy/repos/pandas/pandas/core/internals/blocks.py:713, in Block.replace(self, to_replace, value, inplace, mask)
709 elif self._can_hold_element(value):
710 # TODO(CoW): Maybe split here as well into columns where mask has True
711 # and rest?
712 blk = self._maybe_copy(inplace)
--> 713 putmask_inplace(blk.values, mask, value)
714 return [blk]
716 elif self.ndim == 1 or self.shape[0] == 1:
...
File ~/scipy/repos/pandas/pandas/core/arrays/string_.py:746, in __setitem__(self, key, value)
...
TypeError: Invalid value '1' for dtype 'str'. Value should be a string or missing value, got 'int' instead.
```
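In the meantime, a hedged workaround for users is to cast to object dtype explicitly before replacing:

```python
import pandas as pd

ser = pd.Series(["a", "b"])
# Casting to object dtype first makes the mixed-type replacement valid.
result = ser.astype(object).replace("a", 1)
print(result.tolist())  # [1, 'b']
```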
Making `replace()` strict (preserve dtype) in general is a much bigger topic, so I think for now we should just keep the current behaviour of upcasting to object dtype when needed. | closed | 2024-11-12T10:18:23Z | 2024-11-12T21:41:48Z | https://github.com/pandas-dev/pandas/issues/60282 | [
"Bug",
"Strings",
"replace"
] | jorisvandenbossche | 0 |
ultralytics/ultralytics | machine-learning | 19,788 | Custom Yolo8 to Onnx | ### Search before asking
- [x] I have searched the Ultralytics YOLO [issues](https://github.com/ultralytics/ultralytics/issues) and [discussions](https://github.com/orgs/ultralytics/discussions) and found no similar questions.
### Question
I have created a custom yolo 8 model that takes in (1 4 640 640) and I want to convert this to onnx. I am having trouble tho because I am getting an error that it expected 4 channels but received 3. How can I fix this?
### Additional
_No response_ | open | 2025-03-19T19:19:52Z | 2025-03-20T01:22:15Z | https://github.com/ultralytics/ultralytics/issues/19788 | [
"question",
"exports"
] | Mysterium-sch | 4 |
pyro-ppl/numpyro | numpy | 1,037 | New versions of sphinx and jinja2 break docs linting | This is observed by @tcbegley in #1034 | closed | 2021-05-12T04:59:10Z | 2021-05-27T20:28:51Z | https://github.com/pyro-ppl/numpyro/issues/1037 | [
"low priority",
"documentation"
] | fehiepsi | 1 |
numpy/numpy | numpy | 28,138 | RuntimeError: _ARRAY_API is not PyCapsule object when importing OpenCV on Python 3.13 with no GIL | ### Steps to reproduce:
Build a docker container using the following dockerfile:
```
FROM nvidia/cuda:12.3.2-devel-ubuntu22.04
ENV DEBIAN_FRONTEND=noninteractive
RUN apt-get update
RUN apt-get install -y software-properties-common
RUN add-apt-repository ppa:deadsnakes/ppa
RUN apt-get install -y cmake build-essential zip python3.13-nogil python3.13-dev
ADD https://bootstrap.pypa.io/get-pip.py /
RUN python3.13t /get-pip.py
RUN python3.13t -m pip install numpy
ADD https://github.com/opencv/opencv/archive/refs/tags/4.10.0.zip /opencv.zip
RUN unzip /opencv.zip
ADD https://github.com/opencv/opencv_contrib/archive/refs/tags/4.10.0.zip /opencv_contrib.zip
RUN unzip /opencv_contrib.zip
RUN mkdir build
RUN bash -c "cmake \
-S /opencv-4.10.0 \
-B /build \
-DOPENCV_EXTRA_MODULES_PATH='/opencv_contrib-4.10.0/modules/cudev;/opencv_contrib-4.10.0/modules/cudaarithm;/opencv_contrib-4.10.0/modules/cudaimgproc;/opencv_contrib-4.10.0/modules/cudawarping' \
-DBUILD_TESTS=OFF \
-DBUILD_PERF_TESTS=OFF \
-DBUILD_opencv_apps=OFF \
-DWITH_OPENCL=OFF \
-DWITH_PNG=OFF \
-DWITH_JPEG=OFF \
-DWITH_WEBP=OFF \
-DWITH_OPENJPEG=OFF \
-DWITH_JASPER=OFF \
-DWITH_OPENEXR=OFF \
-DWITH_JPEGXL=OFF \
-DWITH_V4L=OFF \
-DWITH_FFMPEG=OFF \
-DWITH_GSTREAMER=OFF \
-DWITH_ANDROID_MEDIANDK=OFF \
-DVIDEOIO_ENABLE_PLUGINS=OFF \
-DWITH_GTK=OFF \
-DPARALLEL_ENABLE_PLUGINS=OFF \
-DHIGHGUI_ENABLE_PLUGINS=OFF \
-DWITH_PROTOBUF=OFF \
-DBUILD_PROTOBUF=OFF \
-DOPENCV_DNN_OPENCL=OFF \
-DENABLE_CCACHE=OFF \
-DBUILD_JAVA=OFF \
-DBUILD_opencv_python2=OFF \
-DBUILD_opencv_dnn=OFF \
-DBUILD_opencv_gapi=OFF \
-DBUILD_opencv_highgui=OFF \
-DBUILD_opencv_flann=ON \
-DBUILD_opencv_objdetect=OFF \
-DBUILD_opencv_videoio=OFF \
-DBUILD_opencv_video=OFF \
-DBUILD_opencv_photo=OFF \
-DBUILD_opencv_stitching=OFF \
-DBUILD_opencv_world=OFF \
-DBUILD_opencv_ml=OFF \
-DBUILD_opencv_calib3d=OFF \
-DBUILD_opencv_python3=ON \
-DPYTHON_EXECUTABLE=$(which python3.13t) \
-DOPENCV_PYTHON3_INSTALL_PATH=lib/python3.13t/dist-packages \
-DPYTHON3_LIBRARIES=/usr/lib/x86_64-linux-gnu/libpython3.13t.so \
-DWITH_CUDA=OFF 2>&1 \
| tee /cmake.log"
RUN bash -c "make -C /build -j $(nproc) 2>&1 | tee /build.log"
RUN bash -c "make -C /build install 2>&1 | tee /install.log"
```
Then, inside the container run `python3.13t -c "import cv2"`
### Error message:
```shell
RuntimeError: _ARRAY_API is not PyCapsule object
Traceback (most recent call last):
File "<string>", line 1, in <module>
import cv2
File "/usr/local/lib/python3.13t/dist-packages/cv2/__init__.py", line 181, in <module>
bootstrap()
~~~~~~~~~^^
File "/usr/local/lib/python3.13t/dist-packages/cv2/__init__.py", line 153, in bootstrap
native_module = importlib.import_module("cv2")
File "/usr/lib/python3.13/importlib/__init__.py", line 88, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
ImportError: numpy._core.multiarray failed to import
```
### Additional information:
_No response_ | closed | 2025-01-10T08:44:45Z | 2025-01-15T21:22:27Z | https://github.com/numpy/numpy/issues/28138 | [
"57 - Close?",
"32 - Installation"
] | CameronDevine | 13 |
kymatio/kymatio | numpy | 787 | reconstruct signal example fails | I went through plot_real_signal, and generated the scattering coefficients Sx. Then I tried to use reconstruct_signal to reproduce the original sound file. Even after converting x and Sx from the first example to torch-friendly tensors, I hit the following error:
`TypeError Traceback (most recent call last)
<ipython-input-183-245942b8469e> in <module>
----> 1 Sx = scattering(x)
~/miniconda3/lib/python3.9/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
1100 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
1101 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1102 return forward_call(*input, **kwargs)
1103 # Do not call functions when jit is used
1104 full_backward_hooks, non_full_backward_hooks = [], []
~/miniconda3/lib/python3.9/site-packages/kymatio/frontend/torch_frontend.py in forward(self, x)
21 input_checks(x)
22
---> 23 return self.scattering(x)
24
25 _doc_array = 'torch.Tensor'
~/miniconda3/lib/python3.9/site-packages/kymatio/scattering1d/frontend/torch_frontend.py in scattering(self, x)
112
113
--> 114 S = scattering1d(x, self.backend.pad, self.backend.unpad, self.backend, self.J, self.psi1_f, self.psi2_f, self.phi_f,\
115 max_order=self.max_order, average=self.average,
116 pad_left=self.pad_left, pad_right=self.pad_right,
~/miniconda3/lib/python3.9/site-packages/kymatio/scattering1d/core/scattering1d.py in scattering1d(x, pad, unpad, backend, J, psi1, psi2, phi, pad_left, pad_right, ind_start, ind_end, oversampling, max_order, average, size_scattering, vectorize, out_type)
82
83 if average:
---> 84 S_0_c = cdgmm(U_0_hat, phi[0])
85 S_0_hat = subsample_fourier(S_0_c, 2**k0)
86 S_0_r = fft(S_0_hat, 'C2R', inverse=True)
~/miniconda3/lib/python3.9/site-packages/kymatio/backend/torch_backend.py in cdgmm(A, B, inplace)
191
192 if A.dtype is not B.dtype:
--> 193 raise TypeError('Input and filter must be of the same dtype.')
194
195 if B.device.type == 'cuda':
TypeError: Input and filter must be of the same dtype.`
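The check that fails compares the dtype of the input with the dtype of the filters. A hedged numpy illustration of the mismatch (e.g. audio loaded as float64 against float32 filters) and the cast that typically resolves it:

```python
import numpy as np

signal = np.random.randn(1024)             # numpy defaults to float64
filters = np.ones(1024, dtype=np.float32)  # filters stored as float32

assert signal.dtype != filters.dtype       # this is the mismatch cdgmm rejects

signal32 = signal.astype(np.float32)       # cast the input before scattering
assert signal32.dtype == filters.dtype
```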
As far as I can tell, the only input I have specified is x. What is going on? | closed | 2021-11-08T06:24:02Z | 2022-01-01T23:22:06Z | https://github.com/kymatio/kymatio/issues/787 | [] | milchada | 3 |
tox-dev/tox | automation | 3,264 | ImportError with tox but pytest works when using entry_points plugins. | ## Issue
I expect tox to succeed when tests pass with pytest. All tests pass when run with pytest directly, but the plugin project's unit tests fail under tox with ImportErrors.
This might be out of scope for requesting help. I have tried many different approaches to fix my issue when running tox. If you can provide help it would be greatly appreciated. I have not been able to find examples of other projects with our setup: a main Python package in one repository with support for extending functionality via plugins in Python packages in separate repositories.
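For reference, the stdlib-based discovery the plugin packages rely on can be exercised directly inside the tox environment (a hedged sketch; the actual group name would be whatever the project registers, which is an assumption here):

```python
from importlib import metadata

def discover(group: str):
    """List entry points for a group, handling old and new stdlib APIs."""
    eps = metadata.entry_points()
    if hasattr(eps, "select"):            # Python 3.10+
        return list(eps.select(group=group))
    return list(eps.get(group, []))       # Python 3.8/3.9 dict-style API

# Running this inside the .tox environment shows whether the plugin
# distribution's entry points are visible to that environment at all.
print(discover("console_scripts")[:3])
```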
## Environment
Minimal working example Python projects on Ubuntu 22.04 with Python 3.10.12, tox 4.14.2, pytest 8.0.1. I have a full project that runs GitHub actions on Mac, Linux, and Windows where we are seeing the same issue.
## Output of running tox
```console
$ tox -e py310
.pkg: _optional_hooks> python /home/thomas/src/tox-pyproject/tox-pyproject/venv/lib/python3.10/site-packages/pyproject_api/_backend.py True setuptools.build_meta
.pkg: get_requires_for_build_sdist> python /home/thomas/src/tox-pyproject/tox-pyproject/venv/lib/python3.10/site-packages/pyproject_api/_backend.py True setuptools.build_meta
.pkg: build_sdist> python /home/thomas/src/tox-pyproject/tox-pyproject/venv/lib/python3.10/site-packages/pyproject_api/_backend.py True setuptools.build_meta
py310: install_package> python -I -m pip install --force-reinstall --no-deps /home/thomas/src/tox-pyproject/tox-pyproject/tox-pyproject-plugins/.tox/.tmp/package/5/tox-pyproject-plugins-0.0.1.tar.gz
py310: commands[0] /home/thomas/src/tox-pyproject/tox-pyproject/tox-pyproject-plugins/.output-py310> pytest --cov=/home/thomas/src/tox-pyproject/tox-pyproject/tox-pyproject-plugins/src/tox_pyproject --cov-report term-missing --doctest-modules --junit-xml=tox-pyproject-py310-junit.xml --junit-prefix=py310 /home/thomas/src/tox-pyproject/tox-pyproject/tox-pyproject-plugins
========================================================================== test session starts ==========================================================================
platform linux -- Python 3.10.12, pytest-8.1.1, pluggy-1.4.0
cachedir: .tox/py310/.pytest_cache
rootdir: /home/thomas/src/tox-pyproject/tox-pyproject/tox-pyproject-plugins
configfile: tox.ini
plugins: cov-5.0.0
collected 0 items / 3 errors
================================================================================ ERRORS =================================================================================
_______________________________________________________ ERROR collecting src/tox_pyproject/plugins/discovery/b.py _______________________________________________________
ImportError while importing test module '/home/thomas/src/tox-pyproject/tox-pyproject/tox-pyproject-plugins/src/tox_pyproject/plugins/discovery/b.py'.
Hint: make sure your test modules/packages have valid Python names.
Traceback:
/usr/lib/python3.10/importlib/__init__.py:126: in import_module
return _bootstrap._gcd_import(name[level:], package, level)
../src/tox_pyproject/plugins/discovery/b.py:3: in <module>
from tox_pyproject.config import Config
E ModuleNotFoundError: No module named 'tox_pyproject.config'
__________________________________________________________________ ERROR collecting tests/b/test_b.py ___________________________________________________________________
ImportError while importing test module '/home/thomas/src/tox-pyproject/tox-pyproject/tox-pyproject-plugins/tests/b/test_b.py'.
Hint: make sure your test modules/packages have valid Python names.
Traceback:
/usr/lib/python3.10/importlib/__init__.py:126: in import_module
return _bootstrap._gcd_import(name[level:], package, level)
../tests/b/test_b.py:5: in <module>
from tox_pyproject.plugins.discovery.b import BDiscoveryPlugin
../src/tox_pyproject/plugins/discovery/b.py:3: in <module>
from tox_pyproject.config import Config
E ModuleNotFoundError: No module named 'tox_pyproject.config'
```
One other thing I tried was activating the tox virtual environment and then starting a Python session. In that session I am able to import the modules that cause errors when running the tests with tox. The files appear to exist where expected, so something about how tox runs the tests differs from what I expect.
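One classic cause of this symptom — offered as a hypothesis, not a diagnosis of these repos — is that the main project and the plugin project both install a *regular* package named `tox_pyproject` (each with its own `__init__.py`). Python then resolves the whole package from whichever copy wins on `sys.path`, and submodules that exist only in the other copy raise `ModuleNotFoundError`; namespace packages (no `__init__.py`) would merge instead. A self-contained sketch with made-up package names:

```python
import importlib
import sys
import tempfile
from pathlib import Path

base = Path(tempfile.mkdtemp())
main_pkg = base / "main" / "pkg_demo"      # provides pkg_demo.config
plugin_pkg = base / "plugin" / "pkg_demo"  # provides pkg_demo.plugins only
for d in (main_pkg, plugin_pkg):
    d.mkdir(parents=True)
    (d / "__init__.py").write_text("")      # regular packages: no merging
(main_pkg / "config.py").write_text("VALUE = 1")
(plugin_pkg / "plugins.py").write_text("")

sys.path.insert(0, str(base / "plugin"))    # plugin copy shadows the main one
sys.path.insert(1, str(base / "main"))
importlib.invalidate_caches()

try:
    importlib.import_module("pkg_demo.config")  # lives only in the shadowed copy
except ModuleNotFoundError as exc:
    print(type(exc).__name__)  # ModuleNotFoundError
```

Removing the `__init__.py` files (or declaring the package as a namespace package in packaging metadata) makes the two copies merge instead of shadowing each other.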
## Minimal example
I have a repository at https://github.com/tdenewiler/tox-pyproject with a minimal working example (still fairly large) where the main project is fine, but using plugins in a separate project causes failures with tox testing. The README in the plugin package has details on how to execute, what works, and what doesn't work (https://github.com/tdenewiler/tox-pyproject/tree/main/tox-pyproject-plugins).
The main project we have is at https://github.com/sscpac/statick. I am trying to move away from yapsy for plugins and use a standard-library-based implementation for plugin support. The forks where I have ported the code to use entry_points from the standard library (and where I am seeing tox testing failures in plugin repos) are at:
- https://github.com/tdenewiler/statick/tree/stdlib-plugins (main)
- https://github.com/tdenewiler/statick-tex/tree/stdlib-plugins (plugins)
- https://github.com/tdenewiler/statick-md/tree/stdlib-plugins (plugins)
Example of failures in GitHub workflows:
https://github.com/tdenewiler/statick-md/actions/runs/8585765290 | closed | 2024-04-07T03:09:03Z | 2024-09-10T18:07:21Z | https://github.com/tox-dev/tox/issues/3264 | [] | tdenewiler | 1 |
CorentinJ/Real-Time-Voice-Cloning | tensorflow | 1,293 | encoder training isn't stopping | I am trying to train the encoder on a small dataset of 10 audio files, but training has been running for months without stopping. Can you please help me?
| open | 2024-03-16T07:56:32Z | 2024-03-16T07:56:32Z | https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/1293 | [] | Sarajish | 0 |
lanpa/tensorboardX | numpy | 43 | log batches of embeddings | Is there a way to log batches of embeddings and then interact with all of the batches together in the projector?
I'm currently logging batches with the following code:
```python
writer.add_embedding(embedding_batch, metadata=label_batch,
                     label_img=train_batch, global_step=current_epoch)
```
When I check the embeddings in the projector I have 10 (batch size * dim) tensors and can only look at one batch at a time. They are named default:0000 through default:0010. Is there any way to get these all on the same graph? | closed | 2017-10-20T11:30:32Z | 2017-10-21T03:50:51Z | https://github.com/lanpa/tensorboardX/issues/43 | [] | A-Jacobson | 4 |
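One possible workaround — a hedged sketch, assuming the projector treats every `add_embedding` call as a separate run: accumulate the batches and log a single embedding once per epoch. The `writer.add_embedding` call is left as a comment since it needs a live `SummaryWriter`; the array shapes and names are illustrative.

```python
import numpy as np

# Collect every batch first, then log once so all points share one view.
batches = [np.random.rand(10, 32) for _ in range(5)]   # 5 batches of 10 embeddings
labels = [[f"batch{i}_item{j}" for j in range(10)] for i in range(5)]

all_embeddings = np.concatenate(batches, axis=0)       # shape (50, 32)
all_labels = [lab for batch in labels for lab in batch]

# writer.add_embedding(all_embeddings, metadata=all_labels, global_step=current_epoch)
print(all_embeddings.shape, len(all_labels))  # (50, 32) 50
```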
gradio-app/gradio | data-science | 10,623 | Provide a status indicator to show that a Chatbot has not finished streaming | From an [internal conversation](https://huggingface.slack.com/archives/C08CP7H97EG/p1739208266953479):
> have a loading animation while the agent is still in its thought process (i.e. didn't finish yielding new messages), kind of claude's strange star moving around or ChatGPT's dot animations. Else users could think the agent just stopped responding | closed | 2025-02-18T23:51:15Z | 2025-02-25T00:48:17Z | https://github.com/gradio-app/gradio/issues/10623 | [
"enhancement"
] | abidlabs | 0 |
zihangdai/xlnet | tensorflow | 71 | xlnet for Quora Questions Dataset | Hello,
I saw on the Papers with Code website that XLNet's accuracy on Quora Question Pairs is 90.3%.
Does anyone have code for applying XLNet to the Quora dataset?
https://paperswithcode.com/paper/xlnet-generalized-autoregressive-pretraining
Thanks,
| closed | 2019-06-27T19:52:53Z | 2019-07-02T07:24:44Z | https://github.com/zihangdai/xlnet/issues/71 | [] | aisheh90 | 1 |
globaleaks/globaleaks-whistleblowing-software | sqlalchemy | 3,415 | cannot change password of user: "Set up encryption by providing a PGP public key" | i have set up globaleaks via docker compose. when trying to change the password of an recipient and save this change, i get prompted to set up encryption, even tho it is disabled... and i can only save it AFTER i have setup PGP public key.

Deactivating and re-activating the option in advanced settings has no effect.
| closed | 2023-04-05T08:02:59Z | 2023-04-11T06:48:06Z | https://github.com/globaleaks/globaleaks-whistleblowing-software/issues/3415 | [] | GerryCrooked | 2 |
matplotlib/cheatsheets | matplotlib | 117 | In the handout for beginners, plt.subplots(2, 1) should be used. | I'm using matplotlib-3.6.0.
When I use `plt.subplots((2, 1))`, there is an error: `ValueError: Number of rows must be a positive integer, not (2, 1)` | closed | 2022-10-01T14:27:07Z | 2022-10-11T23:33:20Z | https://github.com/matplotlib/cheatsheets/issues/117 | [] | radarFudan | 4 |
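For reference, the row and column counts must be passed as two separate arguments; `plt.subplots((2, 1))` hands the whole tuple to `nrows`, which triggers the `ValueError`. A quick sketch:

```python
import matplotlib
matplotlib.use("Agg")  # headless backend so the snippet runs anywhere
import matplotlib.pyplot as plt

# nrows and ncols are two separate positional arguments, not one tuple.
fig, (ax1, ax2) = plt.subplots(2, 1)
ax1.plot([0, 1], [0, 1])
ax2.plot([0, 1], [1, 0])
print(len(fig.get_axes()))  # 2
```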
streamlit/streamlit | python | 9,946 | `st.date_input` doesn't show placeholder text when the initial value is a list of size one | ### Checklist
- [X] I have searched the [existing issues](https://github.com/streamlit/streamlit/issues) for similar feature requests.
- [X] I added a descriptive title and summary to this issue.
### Summary
You can pass a list to `st.date_input`'s `value` parameter. An empty list will initialize the widget with an empty interval (showing placeholder text). A pair of values will initialize the widget with the given interval (showing the dates as an initial selection). However, if you pass a list of size one, the widget has only the starting date of the interval initialized with no end date. Instead of showing placeholder text for the end date, it's blank.
```python
import streamlit as st
from datetime import date, datetime
a = st.selectbox("A", ["today", "2024-10-01", date(2024,9,1), datetime(2024,8,1,12,0,0,0), None])
b = st.selectbox("B", ["today", "2024-10-01", date(2024,9,1), datetime(2024,8,1,12,0,0,0), None])
x = st.date_input("Two-list", [a,b])
x
y = st.date_input("One-list", [a])
y
z = st.date_input("Empty-list", [])
z
```
### Why?
Presumably, it would look better to show placeholder text for the end date when only the initial date of a range is selected. This is also how an intermediate state looks if someone selects a new start date and clicks out of the widget before selecting the end date.
### How?
If someone unfocuses an (interval) date widget with only the start date selected, show the placeholder text for the missing end date.
### Additional Context
_No response_ | open | 2024-11-29T08:59:19Z | 2024-11-29T22:26:15Z | https://github.com/streamlit/streamlit/issues/9946 | [
"type:enhancement",
"feature:st.date_input"
] | sfc-gh-dmatthews | 1 |
davidteather/TikTok-Api | api | 1,142 | Legal Implications of Using TikTok Data for Insights as Per Its Terms of Service | Regarding the previous question posed by joseluismoreira and in accordance with TikTok's Terms of Service (ToS), is it legally permissible to use this product?
### Discussed in https://github.com/davidteather/TikTok-Api/discussions/495
<div type='discussions-op-text'>
<sup>Originally posted by **joseluismoreira** February 7, 2021</sup>
I have noticed the following part of TikTok's ToS:

Does anyone know about the legal implications? Should it be OK to use the data to derive insights, as long as it is not sold directly?</div>
streamlit/streamlit | data-visualization | 10,883 | Allow pre-populated / placeholder file text in st.file_uploader | ### Checklist
- [x] I have searched the [existing issues](https://github.com/streamlit/streamlit/issues) for similar feature requests.
- [x] I added a descriptive title and summary to this issue.
### Summary
What I would think is a fairly common need for more professional apps: apps that load as a demo, or let the user select a demo, with pre-populated elements.
I want the file used in the demo to appear in the st.file_uploader element by default.
### Why?
If there were a placeholder parameter, like some other inputs, it would allow the user to understand what file is currently being read (which they can perhaps overwrite)
### How?
placeholder parameter, default = None
If populated, the file appears as attachments do with the placeholder name as the attachment name.
Forget about the grey KB subtitle text; it could just be blank for the placeholder.
<img width="297" alt="Image" src="https://github.com/user-attachments/assets/869fdde5-e6eb-4678-9178-725366021dd8" />
### Additional Context
_No response_ | closed | 2025-03-24T01:24:47Z | 2025-03-24T02:28:26Z | https://github.com/streamlit/streamlit/issues/10883 | [
"type:enhancement"
] | nickgreengithub | 3 |
ets-labs/python-dependency-injector | flask | 599 | Expand kwargs from configuration provider? | I am wondering if there is a way to communicate a config substructure as kwargs to a provider:
```python
from dependency_injector import containers, providers
from lume_services.services.data.models.db.mysql import MySQLModelDBConfig
class Context(containers.DeclarativeContainer):
config = providers.Configuration()
model_db_config = providers.Singleton(
MySQLModelDBConfig,
**config.model_db
)
```
The above raises: `dependency_injector.providers.ConfigurationOption.keys() returned a non-iterable (type NoneType)`. | open | 2022-06-28T22:24:06Z | 2023-03-26T20:27:16Z | https://github.com/ets-labs/python-dependency-injector/issues/599 | [] | jacquelinegarrahan | 1 |
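A workaround sometimes suggested for the unpacking problem above — an assumption on my part, not verified against dependency_injector internals — is to defer the `**` unpacking to call time via a small factory function, presumably because `**config.model_db` is expanded while the container class body executes, before any configuration values exist. Sketched in plain Python so it is self-contained; the class is a stand-in for the real `MySQLModelDBConfig`:

```python
# Plain-Python sketch of the deferred-unpacking workaround; the class and
# settings below are stand-ins, since the real MySQLModelDBConfig isn't
# available here.

class MySQLModelDBConfig:  # stand-in for lume_services' class
    def __init__(self, host, port):
        self.host = host
        self.port = port

def make_model_db_config(settings: dict) -> MySQLModelDBConfig:
    # By the time the provider calls this, `settings` is a plain dict,
    # so ** unpacking happens at runtime rather than at class-body time.
    return MySQLModelDBConfig(**settings)

cfg = make_model_db_config({"host": "localhost", "port": 3306})
print(cfg.host, cfg.port)  # localhost 3306
```

In container terms this would read `model_db_config = providers.Singleton(make_model_db_config, config.model_db)`, relying on the provider resolving `config.model_db` to a dict at call time — an assumption worth checking against the dependency_injector docs.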
NVlabs/neuralangelo | computer-vision | 97 | UserWarning: tinycudann was built for lower compute capability (86) than the system's (89). Performance may be suboptimal. | UserWarning: tinycudann was built for lower compute capability (86) than the system's (89). Performance may be suboptimal.
What does this warning mean? | closed | 2023-08-31T20:03:31Z | 2023-10-08T02:34:17Z | https://github.com/NVlabs/neuralangelo/issues/97 | [] | Islaster | 3 |
kennethreitz/responder | graphql | 205 | SentryMiddleware | I've taken a first pass at a [Sentry integration for ASGI](https://github.com/encode/sentry-asgi).
Things you probably want to do to support it well:
* Document `.add_middleware` for adding ASGI middleware. (Perhaps with a section linking to third party ASGI middleware implementations?)
* Ensure the router updates the ASGI scope with an 'endpoint', which should be the routed class or function. | closed | 2018-11-06T13:56:32Z | 2024-03-31T00:57:42Z | https://github.com/kennethreitz/responder/issues/205 | [
"help wanted",
"good first issue",
"documentation"
] | tomchristie | 0 |
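As background on the ASGI middleware pattern referenced above: a middleware is just a callable wrapping another app, so `.add_middleware` only needs to accept and compose that shape. A minimal, hedged sketch in plain stdlib Python (no responder or sentry-asgi imports; the error capture a real Sentry middleware would do is elided):

```python
import asyncio

class SentryStyleMiddleware:
    # Minimal ASGI middleware shape: wrap an app and intercept its calls.
    def __init__(self, app):
        self.app = app

    async def __call__(self, scope, receive, send):
        try:
            await self.app(scope, receive, send)
        except Exception:
            # A real Sentry middleware would capture and report the event here.
            raise

async def inner_app(scope, receive, send):
    await send({"type": "http.response.start", "status": 200, "headers": []})
    await send({"type": "http.response.body", "body": b"ok"})

async def main():
    sent = []

    async def send(message):
        sent.append(message)

    async def receive():
        return {"type": "http.request"}

    app = SentryStyleMiddleware(inner_app)  # wrapping, as add_middleware would do
    await app({"type": "http"}, receive, send)
    print(sent[0]["status"])  # 200

asyncio.run(main())
```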
PeterL1n/BackgroundMattingV2 | computer-vision | 94 | Were you able to convert the pre-trained model to CoreML? | closed | 2021-05-12T01:36:53Z | 2021-05-13T06:21:02Z | https://github.com/PeterL1n/BackgroundMattingV2/issues/94 | [] | charankilari | 3 | |
shaikhsajid1111/social-media-profile-scrapers | web-scraping | 4 | Error when running the script: 'str' object has no attribute 'get' | As the title suggests, please let me know how to resolve this. Thanks!
| closed | 2020-07-31T12:26:12Z | 2020-08-02T05:54:30Z | https://github.com/shaikhsajid1111/social-media-profile-scrapers/issues/4 | [] | tina1998612 | 1 |
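For anyone hitting the same traceback: `'str' object has no attribute 'get'` generally means the code expected a dict (and called `.get` on it) but received a string instead — with scrapers this is often an error page or changed API response returned as plain text. A minimal illustration (not the scraper's actual code path):

```python
# A dict supports .get; a string does not -- hence the reported error.
response = {"name": "Jane"}
print(response.get("name"))       # Jane

response = "rate limit exceeded"  # e.g. an error page returned as text
try:
    response.get("name")
except AttributeError as exc:
    print(exc)                    # 'str' object has no attribute 'get'
```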
donnemartin/data-science-ipython-notebooks | scikit-learn | 80 | Python | open | 2021-02-12T06:36:25Z | 2023-03-16T10:41:20Z | https://github.com/donnemartin/data-science-ipython-notebooks/issues/80 | [
"needs-review"
] | saketh0000 | 1 | |
taverntesting/tavern | pytest | 937 | Question about list slicing in response validaton using tavern.request_vars | Hello,
Given a request consisting of a top-level object holding a list, how do I refer to a list element in response validation using `"{tavern.request_vars}"`?
tavern version: 2.11.0
example:
```yaml
- name: some test
request:
url: <truncated>
method: POST
json:
requestBodyList:
- complexListElement:
attribute: some_value
response:
status_code: 200
json:
responseBodyList:
- attribute: "{tavern.request_vars.json.requestBodyList[0].attribute}"
```
Output (with the syntax above, e.g. "{tavern.request_vars.json.requestBodyList[0].attribute}" ):
```
Errors:
E tavern._core.exceptions.MissingFormatError: tavern.request_vars.json.requestBodyList[0].attribute
------------------------------ Captured log call -------------------------------
ERROR tavern._core.dict_util:dict_util.py:41 Failed to resolve string '{tavern.request_vars.json.requestBodyList[0].attribute}'
ERROR tavern._core.dict_util:dict_util.py:44 Key(s) not found in format: tavern.request_vars.json.requestBodyList[0].attribute
```
I also tested the other notation mentioned in the docs, `thing.nested.0`, which gives a `string.Formatter` error; as mentioned in the docs, this was most probably deprecated as of version 1.0, e.g.:
```
...
response:
status_code: 200
json:
responseBodyList:
- attribute: "{tavern.request_vars.json.requestBodyList.0.attribute}"
```
```
field_name = 'tavern.request_vars.json.requestBodyList.0.attribute', args = []
kwargs = <truncated>
def get_field(self, field_name, args, kwargs):
first, rest = _string.formatter_field_name_split(field_name)
obj = self.get_value(first, args, kwargs)
# loop through the rest of the field_name, doing
# getattr or getitem as needed
for is_attr, i in rest:
if is_attr:
> obj = getattr(obj, i)
E AttributeError: 'BoxList' object has no attribute '0'
```
I didn't find an example for this in the docs, apart from https://tavern.readthedocs.io/en/latest/basics.html#response -> "thing.nested[0]"; however, that applies to the `save` keyword/section.
Is there a known way to achieve list indexing while using `tavern.request_vars` in the `response.json` section?
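As a point of comparison (this illustrates plain Python `str.format` semantics, not Tavern's own parser): bracketed digit indices like `[0]` are converted to integer indexing, while dotted `.0` is attribute access — which matches the `getattr(obj, '0')` failure shown in the traceback above.

```python
# [0] inside a format field is integer indexing; .0 would be attribute access.
data = {"requestBodyList": [{"attribute": "some_value"}]}

print("{d[requestBodyList][0][attribute]}".format(d=data))  # some_value

try:
    "{d.requestBodyList.0.attribute}".format(d=data)
except AttributeError as exc:
    print(type(exc).__name__)  # AttributeError
```

Whether Tavern's templating accepts the bracketed form inside `response.json` (as it does for `save`) is the open question here.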
Thanks for any hint,
Regards,
Adrian
| closed | 2024-06-13T11:51:57Z | 2024-09-21T17:33:13Z | https://github.com/taverntesting/tavern/issues/937 | [] | adrpp | 3 |
slackapi/python-slack-sdk | asyncio | 841 | Dispatch Action in Block Kit | Block Kit added a new feature:
* `dispatch_action` flag to `input` blocks
* `dispatch_action_config` field to `plain_text_input` block elements.
References:
* https://api.slack.com/reference/block-kit/blocks#input
* https://api.slack.com/reference/block-kit/composition-objects#dispatch_action_config
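Putting the two references together, the new fields would sit on a block like the following — sketched as a plain dict with field names taken from the linked docs; the `block_id`/`action_id` values are made up:

```python
import json

# An `input` block that dispatches a block_actions payload when the user
# presses Enter in the plain-text input.
input_block = {
    "type": "input",
    "dispatch_action": True,
    "block_id": "feedback_block",  # illustrative id
    "label": {"type": "plain_text", "text": "Feedback"},
    "element": {
        "type": "plain_text_input",
        "action_id": "feedback_input",  # illustrative id
        "dispatch_action_config": {
            "trigger_actions_on": ["on_enter_pressed"],
        },
    },
}

print(json.dumps(input_block["element"]["dispatch_action_config"]))
# {"trigger_actions_on": ["on_enter_pressed"]}
```

Builder support in `slack.web.classes` would presumably mirror these field names.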
### Category (place an `x` in each of the `[ ]`)
- [ ] **slack.web.WebClient** (Web API client)
- [ ] **slack.webhook.WebhookClient** (Incoming Webhook, response_url sender)
- [x] **slack.web.classes** (UI component builders)
- [ ] **slack.rtm.RTMClient** (RTM client)
### Requirements
Please read the [Contributing guidelines](https://github.com/slackapi/python-slackclient/blob/main/.github/contributing.md) and [Code of Conduct](https://slackhq.github.io/code-of-conduct) before creating this issue or pull request. By submitting, you are agreeing to those rules.
| closed | 2020-10-09T09:27:29Z | 2020-10-09T22:12:49Z | https://github.com/slackapi/python-slack-sdk/issues/841 | [
"Version: 2x",
"enhancement",
"web-client",
"Version: 3x",
"untriaged"
] | seratch | 0 |
deepset-ai/haystack | pytorch | 8,220 | clean up docstrings: TransformersTextRouter | closed | 2024-08-13T13:56:33Z | 2024-08-16T10:44:40Z | https://github.com/deepset-ai/haystack/issues/8220 | [] | dfokina | 0 | |
matterport/Mask_RCNN | tensorflow | 2,899 | ERROR:tensorflow:unhashable type: 'ListWrapper' in Tensorflow2.10.0 keras 2.10.0 |
Log output:
loading annotations into memory...
Done (t=0.00s)
creating index...
index created!
----------------------loss: Tensor("mrcnn_mask_loss/Mean:0", shape=(), dtype=float32)
------------------------------------output_rois: Tensor("output_rois/mul:0", shape=(1, ?, 4), dtype=float32)
------------------------------------rpn_class_loss: Tensor("rpn_class_loss/cond/Merge:0", shape=(), dtype=float32)
------------------------------------rpn_bbox_loss: Tensor("rpn_bbox_loss/cond/Merge:0", shape=(), dtype=float32)
------------------------------------class_loss: Tensor("mrcnn_class_loss/truediv:0", shape=(), dtype=float32)
------------------------------------bbox_loss: Tensor("mrcnn_bbox_loss/Mean:0", shape=(), dtype=float32)
------------------------------------mask_loss: Tensor("mrcnn_mask_loss/Mean:0", shape=(), dtype=float32)
----------------------------inputs: [<tf.Tensor 'split:0' shape=(?, ?, ?, 3) dtype=float32>, <tf.Tensor 'split_1:0' shape=(?, 16) dtype=float32>, <tf.Tensor 'split_2:0' shape=(?, ?, 1) dtype=int32>, <tf.Tensor 'split_3:0' shape=(?, ?, 4) dtype=float32>, <tf.Tensor 'split_4:0' shape=(?, ?) dtype=int32>, <tf.Tensor 'split_5:0' shape=(?, ?, 4) dtype=float32>, <tf.Tensor 'split_6:0' shape=(?, 56, 56, ?) dtype=bool>]
----------------------loss: Tensor("tower_0/mask_rcnn/mrcnn_mask_loss/Mean:0", shape=(), dtype=float32, device=/device:GPU:0)
first stage epoch num:121,second stage epoch num:365,third stage epoch num:486
per epoch steps:50
Training network heads
([array([[[[-123.7, -116.8, -103.9],
[-123.7, -116.8, -103.9],
[-123.7, -116.8, -103.9],
...,
[-123.7, -116.8, -103.9],
[-123.7, -116.8, -103.9],
[-123.7, -116.8, -103.9]],
[[-123.7, -116.8, -103.9],
[-123.7, -116.8, -103.9],
[-123.7, -116.8, -103.9],
...,
[-123.7, -116.8, -103.9],
[-123.7, -116.8, -103.9],
[-123.7, -116.8, -103.9]],
[[-123.7, -116.8, -103.9],
[-123.7, -116.8, -103.9],
[-123.7, -116.8, -103.9],
...,
[-123.7, -116.8, -103.9],
[-123.7, -116.8, -103.9],
[-123.7, -116.8, -103.9]],
...,
[[-123.7, -116.8, -103.9],
[-123.7, -116.8, -103.9],
[-123.7, -116.8, -103.9],
...,
[-123.7, -116.8, -103.9],
[-123.7, -116.8, -103.9],
[-123.7, -116.8, -103.9]],
[[-123.7, -116.8, -103.9],
[-123.7, -116.8, -103.9],
[-123.7, -116.8, -103.9],
...,
[-123.7, -116.8, -103.9],
[-123.7, -116.8, -103.9],
[-123.7, -116.8, -103.9]],
[[-123.7, -116.8, -103.9],
[-123.7, -116.8, -103.9],
[-123.7, -116.8, -103.9],
...,
[-123.7, -116.8, -103.9],
[-123.7, -116.8, -103.9],
[-123.7, -116.8, -103.9]]]], dtype=float32), array([[8.300e+01, 1.280e+02, 1.280e+02, 3.000e+00, 1.024e+03, 1.024e+03,
3.000e+00, 1.120e+02, 1.120e+02, 9.120e+02, 9.120e+02, 6.250e+00,
1.000e+00, 1.000e+00, 1.000e+00, 1.000e+00]]), array([[[0],
[0],
[0],
...,
[0],
[0],
[0]]], dtype=int32), array([[[ 0.72265625, 0.625 , 0.60624622, 0.62351739],
[ 0.72265625, -0.625 , 0.60624622, 0.62351739],
[-0.52734375, 0.625 , 0.60624622, 0.62351739],
...,
[ 0. , 0. , 0. , 0. ],
[ 0. , 0. , 0. , 0. ],
[ 0. , 0. , 0. , 0. ]]]), array([[3, 2, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]], dtype=int32), array([[[112, 112, 443, 550],
[418, 543, 707, 833],
[ 0, 0, 0, 0],
[ 0, 0, 0, 0],
[ 0, 0, 0, 0],
[ 0, 0, 0, 0],
[ 0, 0, 0, 0],
[ 0, 0, 0, 0],
[ 0, 0, 0, 0],
[ 0, 0, 0, 0],
[ 0, 0, 0, 0],
[ 0, 0, 0, 0],
[ 0, 0, 0, 0],
[ 0, 0, 0, 0],
[ 0, 0, 0, 0],
[ 0, 0, 0, 0],
[ 0, 0, 0, 0],
[ 0, 0, 0, 0],
[ 0, 0, 0, 0],
[ 0, 0, 0, 0],
[ 0, 0, 0, 0],
[ 0, 0, 0, 0],
[ 0, 0, 0, 0],
[ 0, 0, 0, 0],
[ 0, 0, 0, 0],
[ 0, 0, 0, 0],
[ 0, 0, 0, 0],
[ 0, 0, 0, 0],
[ 0, 0, 0, 0],
[ 0, 0, 0, 0],
[ 0, 0, 0, 0],
[ 0, 0, 0, 0],
[ 0, 0, 0, 0],
[ 0, 0, 0, 0],
[ 0, 0, 0, 0],
[ 0, 0, 0, 0],
[ 0, 0, 0, 0],
[ 0, 0, 0, 0],
[ 0, 0, 0, 0],
[ 0, 0, 0, 0],
[ 0, 0, 0, 0],
[ 0, 0, 0, 0],
[ 0, 0, 0, 0],
[ 0, 0, 0, 0],
[ 0, 0, 0, 0],
[ 0, 0, 0, 0],
[ 0, 0, 0, 0],
[ 0, 0, 0, 0],
[ 0, 0, 0, 0],
[ 0, 0, 0, 0],
[ 0, 0, 0, 0],
[ 0, 0, 0, 0],
[ 0, 0, 0, 0],
[ 0, 0, 0, 0],
[ 0, 0, 0, 0],
[ 0, 0, 0, 0],
[ 0, 0, 0, 0],
[ 0, 0, 0, 0],
[ 0, 0, 0, 0],
[ 0, 0, 0, 0],
[ 0, 0, 0, 0],
[ 0, 0, 0, 0],
[ 0, 0, 0, 0],
[ 0, 0, 0, 0],
[ 0, 0, 0, 0],
[ 0, 0, 0, 0],
[ 0, 0, 0, 0],
[ 0, 0, 0, 0],
[ 0, 0, 0, 0],
[ 0, 0, 0, 0],
[ 0, 0, 0, 0],
[ 0, 0, 0, 0],
[ 0, 0, 0, 0],
[ 0, 0, 0, 0],
[ 0, 0, 0, 0],
[ 0, 0, 0, 0],
[ 0, 0, 0, 0],
[ 0, 0, 0, 0],
[ 0, 0, 0, 0],
[ 0, 0, 0, 0],
[ 0, 0, 0, 0],
[ 0, 0, 0, 0],
[ 0, 0, 0, 0],
[ 0, 0, 0, 0],
[ 0, 0, 0, 0],
[ 0, 0, 0, 0],
[ 0, 0, 0, 0],
[ 0, 0, 0, 0],
[ 0, 0, 0, 0],
[ 0, 0, 0, 0],
[ 0, 0, 0, 0],
[ 0, 0, 0, 0],
[ 0, 0, 0, 0],
[ 0, 0, 0, 0],
[ 0, 0, 0, 0],
[ 0, 0, 0, 0],
[ 0, 0, 0, 0],
[ 0, 0, 0, 0],
[ 0, 0, 0, 0],
[ 0, 0, 0, 0]]], dtype=int32), array([[[[False, False, False, ..., False, False, False],
[False, False, False, ..., False, False, False],
[False, False, False, ..., False, False, False],
...,
[False, False, False, ..., False, False, False],
[False, False, False, ..., False, False, False],
[False, False, False, ..., False, False, False]],
[[False, False, False, ..., False, False, False],
[False, False, False, ..., False, False, False],
[False, False, False, ..., False, False, False],
...,
[False, False, False, ..., False, False, False],
[False, False, False, ..., False, False, False],
[False, False, False, ..., False, False, False]],
[[False, False, False, ..., False, False, False],
[False, False, False, ..., False, False, False],
[False, False, False, ..., False, False, False],
...,
[False, False, False, ..., False, False, False],
[False, False, False, ..., False, False, False],
[False, False, False, ..., False, False, False]],
...,
[[ True, False, False, ..., False, False, False],
[ True, False, False, ..., False, False, False],
[ True, False, False, ..., False, False, False],
...,
[ True, False, False, ..., False, False, False],
[ True, False, False, ..., False, False, False],
[False, False, False, ..., False, False, False]],
[[ True, False, False, ..., False, False, False],
[ True, False, False, ..., False, False, False],
[ True, False, False, ..., False, False, False],
...,
[ True, False, False, ..., False, False, False],
[ True, False, False, ..., False, False, False],
[False, False, False, ..., False, False, False]],
[[ True, False, False, ..., False, False, False],
[ True, False, False, ..., False, False, False],
[ True, False, False, ..., False, False, False],
...,
[ True, False, False, ..., False, False, False],
[ True, False, False, ..., False, False, False],
[ True, False, False, ..., False, False, False]]]])], [])
Starting at epoch 0. LR=0.001
Checkpoint Path: /home/hpcadmin/hys/hys_1017_dir/train_dir_instance_segmentation
Selecting layers to train
fpn_c5p5 (Conv2D)
fpn_c4p4 (Conv2D)
fpn_c3p3 (Conv2D)
fpn_c2p2 (Conv2D)
fpn_p5 (Conv2D)
fpn_p2 (Conv2D)
fpn_p3 (Conv2D)
fpn_p4 (Conv2D)
rpn_model (Functional)
mrcnn_mask_conv1 (TimeDistributed)
mrcnn_mask_bn1 (TimeDistributed)
mrcnn_mask_conv2 (TimeDistributed)
mrcnn_mask_bn2 (TimeDistributed)
mrcnn_class_conv1 (TimeDistributed)
mrcnn_class_bn1 (TimeDistributed)
mrcnn_mask_conv3 (TimeDistributed)
mrcnn_mask_bn3 (TimeDistributed)
mrcnn_class_conv2 (TimeDistributed)
mrcnn_class_bn2 (TimeDistributed)
mrcnn_mask_conv4 (TimeDistributed)
mrcnn_mask_bn4 (TimeDistributed)
mrcnn_bbox_fc (TimeDistributed)
mrcnn_mask_deconv (TimeDistributed)
mrcnn_class_logits (TimeDistributed)
mrcnn_mask (TimeDistributed)
-----------------------------------layers: (mrcnn\_.*)|(rpn\_.*)|(fpn\_.*)
-----------------------------------model : {'mode': 'training', 'config': <letrain_keras.parallel_config.CocoConfig object at 0x7f98b8297370>, 'model_dir': '/home/hpcadmin/hys/hys_1017_dir/train_dir_instance_segmentation', 'epoch': 0, 'log_dir': '/home/hpcadmin/hys/hys_1017_dir/train_dir_instance_segmentation/task_020221104T0957', 'checkpoint_path': '/home/hpcadmin/hys/hys_1017_dir/train_dir_instance_segmentation/task_020221104T0957/mask_rcnn_task_0_{epoch:04d}.h5', '_anchor_cache': {(1024, 1024, 3): array([[-0.02211869, -0.01105934, 0.02114117, 0.01008183],
[-0.01564027, -0.01564027, 0.01466276, 0.01466276],
[-0.01105934, -0.02211869, 0.01008183, 0.02114117],
...,
[ 0.5845174 , 0.7614669 , 1.2913378 , 1.1143883 ],
[ 0.68817204, 0.68817204, 1.1876833 , 1.1876833 ],
[ 0.7614669 , 0.5845174 , 1.1143883 , 1.2913378 ]],
dtype=float32)}, 'anchors': array([[ -22.627417 , -11.3137085 , 22.627417 , 11.3137085 ],
[ -16. , -16. , 16. , 16. ],
[ -11.3137085 , -22.627417 , 11.3137085 , 22.627417 ],
...,
[ 597.96132803, 778.98066402, 1322.03867197, 1141.01933598],
[ 704. , 704. , 1216. , 1216. ],
[ 778.98066402, 597.96132803, 1141.01933598, 1322.03867197]]), 'keras_model': <letrain_keras.parallel_model.ParallelModel object at 0x7f9987d060d0>}
----------------------config.LEARNING_MOMENTUM: 0.9
2022-11-04 09:57:57.260568: I tensorflow/compiler/mlir/mlir_graph_optimization_pass.cc:354] MLIR V1 optimization pass is not enabled
======optimizer_device: /device:CPU:0
the learning_rate_decay_type is: fixed
the optimizer is: sgd
----------------------------self.total_loss: Tensor("loss_1/AddN:0", shape=(), dtype=float32)
----------------------------params: [<tf.Variable 'fpn_c5p5/kernel:0' shape=(1, 1, 2048, 256) dtype=float32>, <tf.Variable 'fpn_c5p5/bias:0' shape=(256,) dtype=float32>, <tf.Variable 'fpn_c4p4/kernel:0' shape=(1, 1, 1024, 256) dtype=float32>, <tf.Variable 'fpn_c4p4/bias:0' shape=(256,) dtype=float32>, <tf.Variable 'fpn_c3p3/kernel:0' shape=(1, 1, 512, 256) dtype=float32>, <tf.Variable 'fpn_c3p3/bias:0' shape=(256,) dtype=float32>, <tf.Variable 'fpn_c2p2/kernel:0' shape=(1, 1, 256, 256) dtype=float32>, <tf.Variable 'fpn_c2p2/bias:0' shape=(256,) dtype=float32>, <tf.Variable 'fpn_p5/kernel:0' shape=(3, 3, 256, 256) dtype=float32>, <tf.Variable 'fpn_p5/bias:0' shape=(256,) dtype=float32>, <tf.Variable 'fpn_p2/kernel:0' shape=(3, 3, 256, 256) dtype=float32>, <tf.Variable 'fpn_p2/bias:0' shape=(256,) dtype=float32>, <tf.Variable 'fpn_p3/kernel:0' shape=(3, 3, 256, 256) dtype=float32>, <tf.Variable 'fpn_p3/bias:0' shape=(256,) dtype=float32>, <tf.Variable 'fpn_p4/kernel:0' shape=(3, 3, 256, 256) dtype=float32>, <tf.Variable 'fpn_p4/bias:0' shape=(256,) dtype=float32>, <tf.Variable 'rpn_conv_shared/kernel:0' shape=(3, 3, 256, 512) dtype=float32>, <tf.Variable 'rpn_conv_shared/bias:0' shape=(512,) dtype=float32>, <tf.Variable 'rpn_class_raw/kernel:0' shape=(1, 1, 512, 6) dtype=float32>, <tf.Variable 'rpn_class_raw/bias:0' shape=(6,) dtype=float32>, <tf.Variable 'rpn_bbox_pred/kernel:0' shape=(1, 1, 512, 12) dtype=float32>, <tf.Variable 'rpn_bbox_pred/bias:0' shape=(12,) dtype=float32>, <tf.Variable 'mrcnn_mask_conv1/kernel:0' shape=(3, 3, 256, 256) dtype=float32>, <tf.Variable 'mrcnn_mask_conv1/bias:0' shape=(256,) dtype=float32>, <tf.Variable 'mrcnn_mask_bn1/gamma:0' shape=(256,) dtype=float32>, <tf.Variable 'mrcnn_mask_bn1/beta:0' shape=(256,) dtype=float32>, <tf.Variable 'mrcnn_mask_conv2/kernel:0' shape=(3, 3, 256, 256) dtype=float32>, <tf.Variable 'mrcnn_mask_conv2/bias:0' shape=(256,) dtype=float32>, <tf.Variable 'mrcnn_mask_bn2/gamma:0' shape=(256,) dtype=float32>, 
<tf.Variable 'mrcnn_mask_bn2/beta:0' shape=(256,) dtype=float32>, <tf.Variable 'mrcnn_class_conv1/kernel:0' shape=(7, 7, 256, 1024) dtype=float32>, <tf.Variable 'mrcnn_class_conv1/bias:0' shape=(1024,) dtype=float32>, <tf.Variable 'mrcnn_class_bn1/gamma:0' shape=(1024,) dtype=float32>, <tf.Variable 'mrcnn_class_bn1/beta:0' shape=(1024,) dtype=float32>, <tf.Variable 'mrcnn_mask_conv3/kernel:0' shape=(3, 3, 256, 256) dtype=float32>, <tf.Variable 'mrcnn_mask_conv3/bias:0' shape=(256,) dtype=float32>, <tf.Variable 'mrcnn_mask_bn3/gamma:0' shape=(256,) dtype=float32>, <tf.Variable 'mrcnn_mask_bn3/beta:0' shape=(256,) dtype=float32>, <tf.Variable 'mrcnn_class_conv2/kernel:0' shape=(1, 1, 1024, 1024) dtype=float32>, <tf.Variable 'mrcnn_class_conv2/bias:0' shape=(1024,) dtype=float32>, <tf.Variable 'mrcnn_class_bn2/gamma:0' shape=(1024,) dtype=float32>, <tf.Variable 'mrcnn_class_bn2/beta:0' shape=(1024,) dtype=float32>, <tf.Variable 'mrcnn_mask_conv4/kernel:0' shape=(3, 3, 256, 256) dtype=float32>, <tf.Variable 'mrcnn_mask_conv4/bias:0' shape=(256,) dtype=float32>, <tf.Variable 'mrcnn_mask_bn4/gamma:0' shape=(256,) dtype=float32>, <tf.Variable 'mrcnn_mask_bn4/beta:0' shape=(256,) dtype=float32>, <tf.Variable 'mrcnn_bbox_fc/kernel:0' shape=(1024, 16) dtype=float32>, <tf.Variable 'mrcnn_bbox_fc/bias:0' shape=(16,) dtype=float32>, <tf.Variable 'mrcnn_mask_deconv/kernel:0' shape=(2, 2, 256, 256) dtype=float32>, <tf.Variable 'mrcnn_mask_deconv/bias:0' shape=(256,) dtype=float32>, <tf.Variable 'mrcnn_class_logits/kernel:0' shape=(1024, 4) dtype=float32>, <tf.Variable 'mrcnn_class_logits/bias:0' shape=(4,) dtype=float32>, <tf.Variable 'mrcnn_mask/kernel:0' shape=(1, 1, 256, 4) dtype=float32>, <tf.Variable 'mrcnn_mask/bias:0' shape=(4,) dtype=float32>]
/home/hpcadmin/hys/conda_env_tf_2/lib/python3.8/site-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradients/tower_0/mask_rcnn/roi_align_classifier/concat_grad/sub:0", shape=(?,), dtype=int32), values=Tensor("gradients/tower_0/mask_rcnn/roi_align_classifier/concat_grad/GatherV2_2:0", shape=(?, 7, 7, 256), dtype=float32), dense_shape=Tensor("gradients/tower_0/mask_rcnn/roi_align_classifier/concat_grad/Shape:0", shape=(4,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory.
warnings.warn(
/home/hpcadmin/hys/conda_env_tf_2/lib/python3.8/site-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradients/tower_0/mask_rcnn/roi_align_classifier/concat_grad/sub_1:0", shape=(?,), dtype=int32), values=Tensor("gradients/tower_0/mask_rcnn/roi_align_classifier/concat_grad/GatherV2_5:0", shape=(?, 7, 7, 256), dtype=float32), dense_shape=Tensor("gradients/tower_0/mask_rcnn/roi_align_classifier/concat_grad/Shape_1:0", shape=(4,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory.
warnings.warn(
/home/hpcadmin/hys/conda_env_tf_2/lib/python3.8/site-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradients/tower_0/mask_rcnn/roi_align_classifier/concat_grad/sub_2:0", shape=(?,), dtype=int32), values=Tensor("gradients/tower_0/mask_rcnn/roi_align_classifier/concat_grad/GatherV2_8:0", shape=(?, 7, 7, 256), dtype=float32), dense_shape=Tensor("gradients/tower_0/mask_rcnn/roi_align_classifier/concat_grad/Shape_2:0", shape=(4,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory.
warnings.warn(
/home/hpcadmin/hys/conda_env_tf_2/lib/python3.8/site-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradients/tower_0/mask_rcnn/roi_align_classifier/concat_grad/sub_3:0", shape=(?,), dtype=int32), values=Tensor("gradients/tower_0/mask_rcnn/roi_align_classifier/concat_grad/GatherV2_11:0", shape=(?, 7, 7, 256), dtype=float32), dense_shape=Tensor("gradients/tower_0/mask_rcnn/roi_align_classifier/concat_grad/Shape_3:0", shape=(4,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory.
warnings.warn(
/home/hpcadmin/hys/conda_env_tf_2/lib/python3.8/site-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradients/tower_0/mask_rcnn/roi_align_mask/concat_grad/sub:0", shape=(?,), dtype=int32), values=Tensor("gradients/tower_0/mask_rcnn/roi_align_mask/concat_grad/GatherV2_2:0", shape=(?, 14, 14, 256), dtype=float32), dense_shape=Tensor("gradients/tower_0/mask_rcnn/roi_align_mask/concat_grad/Shape:0", shape=(4,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory.
warnings.warn(
/home/hpcadmin/hys/conda_env_tf_2/lib/python3.8/site-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradients/tower_0/mask_rcnn/roi_align_mask/concat_grad/sub_1:0", shape=(?,), dtype=int32), values=Tensor("gradients/tower_0/mask_rcnn/roi_align_mask/concat_grad/GatherV2_5:0", shape=(?, 14, 14, 256), dtype=float32), dense_shape=Tensor("gradients/tower_0/mask_rcnn/roi_align_mask/concat_grad/Shape_1:0", shape=(4,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory.
warnings.warn(
/home/hpcadmin/hys/conda_env_tf_2/lib/python3.8/site-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradients/tower_0/mask_rcnn/roi_align_mask/concat_grad/sub_2:0", shape=(?,), dtype=int32), values=Tensor("gradients/tower_0/mask_rcnn/roi_align_mask/concat_grad/GatherV2_8:0", shape=(?, 14, 14, 256), dtype=float32), dense_shape=Tensor("gradients/tower_0/mask_rcnn/roi_align_mask/concat_grad/Shape_2:0", shape=(4,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory.
warnings.warn(
/home/hpcadmin/hys/conda_env_tf_2/lib/python3.8/site-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradients/tower_0/mask_rcnn/roi_align_mask/concat_grad/sub_3:0", shape=(?,), dtype=int32), values=Tensor("gradients/tower_0/mask_rcnn/roi_align_mask/concat_grad/GatherV2_11:0", shape=(?, 14, 14, 256), dtype=float32), dense_shape=Tensor("gradients/tower_0/mask_rcnn/roi_align_mask/concat_grad/Shape_3:0", shape=(4,), dtype=int32))) to a dense Tensor of unknown shape. This may consume a large amount of memory.
warnings.warn(
/home/hpcadmin/hys/conda_env_tf_2/lib/python3.8/site-packages/tensorflow/python/framework/indexed_slices.py:444: UserWarning: Converting sparse IndexedSlices(IndexedSlices(indices=Tensor("gradients/tower_0/mask_rcnn/ROI/GatherV2_1_grad/Reshape_1:0", shape=(6000,), dtype=int32), values=Tensor("gradients/tower_0/mask_rcnn/ROI/GatherV2_1_grad/Reshape:0", shape=(6000, 4), dtype=float32), dense_shape=Tensor("gradients/tower_0/mask_rcnn/ROI/GatherV2_1_grad/Cast:0", shape=(2,), dtype=int32, device=/device:GPU:0))) to a dense Tensor of unknown shape. This may consume a large amount of memory.
warnings.warn(
-----------------------------grads: [<tf.Tensor 'gradients/tower_0/mask_rcnn/fpn_c5p5/Conv2D_grad/Conv2DBackpropFilter:0' shape=(1, 1, 2048, 256) dtype=float32>, <tf.Tensor 'gradients/tower_0/mask_rcnn/fpn_c5p5/BiasAdd_grad/BiasAddGrad:0' shape=(256,) dtype=float32>, <tf.Tensor 'gradients/tower_0/mask_rcnn/fpn_c4p4/Conv2D_grad/Conv2DBackpropFilter:0' shape=(1, 1, 1024, 256) dtype=float32>, <tf.Tensor 'gradients/tower_0/mask_rcnn/fpn_c4p4/BiasAdd_grad/BiasAddGrad:0' shape=(256,) dtype=float32>, <tf.Tensor 'gradients/tower_0/mask_rcnn/fpn_c3p3/Conv2D_grad/Conv2DBackpropFilter:0' shape=(1, 1, 512, 256) dtype=float32>, <tf.Tensor 'gradients/tower_0/mask_rcnn/fpn_c3p3/BiasAdd_grad/BiasAddGrad:0' shape=(256,) dtype=float32>, <tf.Tensor 'gradients/tower_0/mask_rcnn/fpn_c2p2/Conv2D_grad/Conv2DBackpropFilter:0' shape=(1, 1, 256, 256) dtype=float32>, <tf.Tensor 'gradients/tower_0/mask_rcnn/fpn_c2p2/BiasAdd_grad/BiasAddGrad:0' shape=(256,) dtype=float32>, <tf.Tensor 'gradients/tower_0/mask_rcnn/fpn_p5/Conv2D_grad/Conv2DBackpropFilter:0' shape=(3, 3, 256, 256) dtype=float32>, <tf.Tensor 'gradients/tower_0/mask_rcnn/fpn_p5/BiasAdd_grad/BiasAddGrad:0' shape=(256,) dtype=float32>, <tf.Tensor 'gradients/tower_0/mask_rcnn/fpn_p2/Conv2D_grad/Conv2DBackpropFilter:0' shape=(3, 3, 256, 256) dtype=float32>, <tf.Tensor 'gradients/tower_0/mask_rcnn/fpn_p2/BiasAdd_grad/BiasAddGrad:0' shape=(256,) dtype=float32>, <tf.Tensor 'gradients/tower_0/mask_rcnn/fpn_p3/Conv2D_grad/Conv2DBackpropFilter:0' shape=(3, 3, 256, 256) dtype=float32>, <tf.Tensor 'gradients/tower_0/mask_rcnn/fpn_p3/BiasAdd_grad/BiasAddGrad:0' shape=(256,) dtype=float32>, <tf.Tensor 'gradients/tower_0/mask_rcnn/fpn_p4/Conv2D_grad/Conv2DBackpropFilter:0' shape=(3, 3, 256, 256) dtype=float32>, <tf.Tensor 'gradients/tower_0/mask_rcnn/fpn_p4/BiasAdd_grad/BiasAddGrad:0' shape=(256,) dtype=float32>, <tf.Tensor 'gradients/AddN_28:0' shape=(3, 3, 256, 512) dtype=float32>, <tf.Tensor 'gradients/AddN_26:0' shape=(512,) dtype=float32>, 
<tf.Tensor 'gradients/AddN_11:0' shape=(1, 1, 512, 6) dtype=float32>, <tf.Tensor 'gradients/AddN_10:0' shape=(6,) dtype=float32>, <tf.Tensor 'gradients/AddN_22:0' shape=(1, 1, 512, 12) dtype=float32>, <tf.Tensor 'gradients/AddN_21:0' shape=(12,) dtype=float32>, <tf.Tensor 'gradients/tower_0/mask_rcnn/mrcnn_mask_conv1/conv2d_2/Conv2D_grad/Conv2DBackpropFilter:0' shape=(3, 3, 256, 256) dtype=float32>, <tf.Tensor 'gradients/tower_0/mask_rcnn/mrcnn_mask_conv1/conv2d_2/BiasAdd_grad/BiasAddGrad:0' shape=(256,) dtype=float32>, <tf.Tensor 'gradients/tower_0/mask_rcnn/mrcnn_mask_bn1/batch_norm_2/FusedBatchNormV3_grad/FusedBatchNormGradV3:1' shape=(256,) dtype=float32>, <tf.Tensor 'gradients/tower_0/mask_rcnn/mrcnn_mask_bn1/batch_norm_2/FusedBatchNormV3_grad/FusedBatchNormGradV3:2' shape=(256,) dtype=float32>, <tf.Tensor 'gradients/tower_0/mask_rcnn/mrcnn_mask_conv2/conv2d_3/Conv2D_grad/Conv2DBackpropFilter:0' shape=(3, 3, 256, 256) dtype=float32>, <tf.Tensor 'gradients/tower_0/mask_rcnn/mrcnn_mask_conv2/conv2d_3/BiasAdd_grad/BiasAddGrad:0' shape=(256,) dtype=float32>, <tf.Tensor 'gradients/tower_0/mask_rcnn/mrcnn_mask_bn2/batch_norm_3/FusedBatchNormV3_grad/FusedBatchNormGradV3:1' shape=(256,) dtype=float32>, <tf.Tensor 'gradients/tower_0/mask_rcnn/mrcnn_mask_bn2/batch_norm_3/FusedBatchNormV3_grad/FusedBatchNormGradV3:2' shape=(256,) dtype=float32>, <tf.Tensor 'gradients/tower_0/mask_rcnn/mrcnn_class_conv1/conv2d/Conv2D_grad/Conv2DBackpropFilter:0' shape=(7, 7, 256, 1024) dtype=float32>, <tf.Tensor 'gradients/tower_0/mask_rcnn/mrcnn_class_conv1/conv2d/BiasAdd_grad/BiasAddGrad:0' shape=(1024,) dtype=float32>, <tf.Tensor 'gradients/tower_0/mask_rcnn/mrcnn_class_bn1/batch_norm/FusedBatchNormV3_grad/FusedBatchNormGradV3:1' shape=(1024,) dtype=float32>, <tf.Tensor 'gradients/tower_0/mask_rcnn/mrcnn_class_bn1/batch_norm/FusedBatchNormV3_grad/FusedBatchNormGradV3:2' shape=(1024,) dtype=float32>, <tf.Tensor 
'gradients/tower_0/mask_rcnn/mrcnn_mask_conv3/conv2d_4/Conv2D_grad/Conv2DBackpropFilter:0' shape=(3, 3, 256, 256) dtype=float32>, <tf.Tensor 'gradients/tower_0/mask_rcnn/mrcnn_mask_conv3/conv2d_4/BiasAdd_grad/BiasAddGrad:0' shape=(256,) dtype=float32>, <tf.Tensor 'gradients/tower_0/mask_rcnn/mrcnn_mask_bn3/batch_norm_4/FusedBatchNormV3_grad/FusedBatchNormGradV3:1' shape=(256,) dtype=float32>, <tf.Tensor 'gradients/tower_0/mask_rcnn/mrcnn_mask_bn3/batch_norm_4/FusedBatchNormV3_grad/FusedBatchNormGradV3:2' shape=(256,) dtype=float32>, <tf.Tensor 'gradients/tower_0/mask_rcnn/mrcnn_class_conv2/conv2d_1/Conv2D_grad/Conv2DBackpropFilter:0' shape=(1, 1, 1024, 1024) dtype=float32>, <tf.Tensor 'gradients/tower_0/mask_rcnn/mrcnn_class_conv2/conv2d_1/BiasAdd_grad/BiasAddGrad:0' shape=(1024,) dtype=float32>, <tf.Tensor 'gradients/tower_0/mask_rcnn/mrcnn_class_bn2/batch_norm_1/FusedBatchNormV3_grad/FusedBatchNormGradV3:1' shape=(1024,) dtype=float32>, <tf.Tensor 'gradients/tower_0/mask_rcnn/mrcnn_class_bn2/batch_norm_1/FusedBatchNormV3_grad/FusedBatchNormGradV3:2' shape=(1024,) dtype=float32>, <tf.Tensor 'gradients/tower_0/mask_rcnn/mrcnn_mask_conv4/conv2d_5/Conv2D_grad/Conv2DBackpropFilter:0' shape=(3, 3, 256, 256) dtype=float32>, <tf.Tensor 'gradients/tower_0/mask_rcnn/mrcnn_mask_conv4/conv2d_5/BiasAdd_grad/BiasAddGrad:0' shape=(256,) dtype=float32>, <tf.Tensor 'gradients/tower_0/mask_rcnn/mrcnn_mask_bn4/batch_norm_5/FusedBatchNormV3_grad/FusedBatchNormGradV3:1' shape=(256,) dtype=float32>, <tf.Tensor 'gradients/tower_0/mask_rcnn/mrcnn_mask_bn4/batch_norm_5/FusedBatchNormV3_grad/FusedBatchNormGradV3:2' shape=(256,) dtype=float32>, <tf.Tensor 'gradients/tower_0/mask_rcnn/mrcnn_bbox_fc/dense_1/MatMul_grad/MatMul_1:0' shape=(1024, 16) dtype=float32>, <tf.Tensor 'gradients/tower_0/mask_rcnn/mrcnn_bbox_fc/dense_1/BiasAdd_grad/BiasAddGrad:0' shape=(16,) dtype=float32>, <tf.Tensor 
'gradients/tower_0/mask_rcnn/mrcnn_mask_deconv/conv2d_transpose/conv2d_transpose_grad/Conv2DBackpropFilter:0' shape=(2, 2, 256, 256) dtype=float32>, <tf.Tensor 'gradients/tower_0/mask_rcnn/mrcnn_mask_deconv/conv2d_transpose/BiasAdd_grad/BiasAddGrad:0' shape=(256,) dtype=float32>, <tf.Tensor 'gradients/tower_0/mask_rcnn/mrcnn_class_logits/dense/MatMul_grad/MatMul_1:0' shape=(1024, 4) dtype=float32>, <tf.Tensor 'gradients/tower_0/mask_rcnn/mrcnn_class_logits/dense/BiasAdd_grad/BiasAddGrad:0' shape=(4,) dtype=float32>, <tf.Tensor 'gradients/tower_0/mask_rcnn/mrcnn_mask/conv2d_6/Conv2D_grad/Conv2DBackpropFilter:0' shape=(1, 1, 256, 4) dtype=float32>, <tf.Tensor 'gradients/tower_0/mask_rcnn/mrcnn_mask/conv2d_6/BiasAdd_grad/BiasAddGrad:0' shape=(4,) dtype=float32>]
--------------------------g: Tensor("gradients/tower_0/mask_rcnn/fpn_c5p5/Conv2D_grad/Conv2DBackpropFilter:0", shape=(1, 1, 2048, 256), dtype=float32)
--------------------------g: Tensor("gradients/tower_0/mask_rcnn/fpn_c5p5/BiasAdd_grad/BiasAddGrad:0", shape=(256,), dtype=float32)
--------------------------g: Tensor("gradients/tower_0/mask_rcnn/fpn_c4p4/Conv2D_grad/Conv2DBackpropFilter:0", shape=(1, 1, 1024, 256), dtype=float32)
--------------------------g: Tensor("gradients/tower_0/mask_rcnn/fpn_c4p4/BiasAdd_grad/BiasAddGrad:0", shape=(256,), dtype=float32)
--------------------------g: Tensor("gradients/tower_0/mask_rcnn/fpn_c3p3/Conv2D_grad/Conv2DBackpropFilter:0", shape=(1, 1, 512, 256), dtype=float32)
--------------------------g: Tensor("gradients/tower_0/mask_rcnn/fpn_c3p3/BiasAdd_grad/BiasAddGrad:0", shape=(256,), dtype=float32)
--------------------------g: Tensor("gradients/tower_0/mask_rcnn/fpn_c2p2/Conv2D_grad/Conv2DBackpropFilter:0", shape=(1, 1, 256, 256), dtype=float32)
--------------------------g: Tensor("gradients/tower_0/mask_rcnn/fpn_c2p2/BiasAdd_grad/BiasAddGrad:0", shape=(256,), dtype=float32)
--------------------------g: Tensor("gradients/tower_0/mask_rcnn/fpn_p5/Conv2D_grad/Conv2DBackpropFilter:0", shape=(3, 3, 256, 256), dtype=float32)
--------------------------g: Tensor("gradients/tower_0/mask_rcnn/fpn_p5/BiasAdd_grad/BiasAddGrad:0", shape=(256,), dtype=float32)
--------------------------g: Tensor("gradients/tower_0/mask_rcnn/fpn_p2/Conv2D_grad/Conv2DBackpropFilter:0", shape=(3, 3, 256, 256), dtype=float32)
--------------------------g: Tensor("gradients/tower_0/mask_rcnn/fpn_p2/BiasAdd_grad/BiasAddGrad:0", shape=(256,), dtype=float32)
--------------------------g: Tensor("gradients/tower_0/mask_rcnn/fpn_p3/Conv2D_grad/Conv2DBackpropFilter:0", shape=(3, 3, 256, 256), dtype=float32)
--------------------------g: Tensor("gradients/tower_0/mask_rcnn/fpn_p3/BiasAdd_grad/BiasAddGrad:0", shape=(256,), dtype=float32)
--------------------------g: Tensor("gradients/tower_0/mask_rcnn/fpn_p4/Conv2D_grad/Conv2DBackpropFilter:0", shape=(3, 3, 256, 256), dtype=float32)
--------------------------g: Tensor("gradients/tower_0/mask_rcnn/fpn_p4/BiasAdd_grad/BiasAddGrad:0", shape=(256,), dtype=float32)
--------------------------g: Tensor("gradients/AddN_28:0", shape=(3, 3, 256, 512), dtype=float32)
--------------------------g: Tensor("gradients/AddN_26:0", shape=(512,), dtype=float32)
--------------------------g: Tensor("gradients/AddN_11:0", shape=(1, 1, 512, 6), dtype=float32)
--------------------------g: Tensor("gradients/AddN_10:0", shape=(6,), dtype=float32)
--------------------------g: Tensor("gradients/AddN_22:0", shape=(1, 1, 512, 12), dtype=float32)
--------------------------g: Tensor("gradients/AddN_21:0", shape=(12,), dtype=float32)
--------------------------g: Tensor("gradients/tower_0/mask_rcnn/mrcnn_mask_conv1/conv2d_2/Conv2D_grad/Conv2DBackpropFilter:0", shape=(3, 3, 256, 256), dtype=float32)
--------------------------g: Tensor("gradients/tower_0/mask_rcnn/mrcnn_mask_conv1/conv2d_2/BiasAdd_grad/BiasAddGrad:0", shape=(256,), dtype=float32)
--------------------------g: Tensor("gradients/tower_0/mask_rcnn/mrcnn_mask_bn1/batch_norm_2/FusedBatchNormV3_grad/FusedBatchNormGradV3:1", shape=(256,), dtype=float32)
--------------------------g: Tensor("gradients/tower_0/mask_rcnn/mrcnn_mask_bn1/batch_norm_2/FusedBatchNormV3_grad/FusedBatchNormGradV3:2", shape=(256,), dtype=float32)
--------------------------g: Tensor("gradients/tower_0/mask_rcnn/mrcnn_mask_conv2/conv2d_3/Conv2D_grad/Conv2DBackpropFilter:0", shape=(3, 3, 256, 256), dtype=float32)
--------------------------g: Tensor("gradients/tower_0/mask_rcnn/mrcnn_mask_conv2/conv2d_3/BiasAdd_grad/BiasAddGrad:0", shape=(256,), dtype=float32)
--------------------------g: Tensor("gradients/tower_0/mask_rcnn/mrcnn_mask_bn2/batch_norm_3/FusedBatchNormV3_grad/FusedBatchNormGradV3:1", shape=(256,), dtype=float32)
--------------------------g: Tensor("gradients/tower_0/mask_rcnn/mrcnn_mask_bn2/batch_norm_3/FusedBatchNormV3_grad/FusedBatchNormGradV3:2", shape=(256,), dtype=float32)
--------------------------g: Tensor("gradients/tower_0/mask_rcnn/mrcnn_class_conv1/conv2d/Conv2D_grad/Conv2DBackpropFilter:0", shape=(7, 7, 256, 1024), dtype=float32)
--------------------------g: Tensor("gradients/tower_0/mask_rcnn/mrcnn_class_conv1/conv2d/BiasAdd_grad/BiasAddGrad:0", shape=(1024,), dtype=float32)
--------------------------g: Tensor("gradients/tower_0/mask_rcnn/mrcnn_class_bn1/batch_norm/FusedBatchNormV3_grad/FusedBatchNormGradV3:1", shape=(1024,), dtype=float32)
--------------------------g: Tensor("gradients/tower_0/mask_rcnn/mrcnn_class_bn1/batch_norm/FusedBatchNormV3_grad/FusedBatchNormGradV3:2", shape=(1024,), dtype=float32)
--------------------------g: Tensor("gradients/tower_0/mask_rcnn/mrcnn_mask_conv3/conv2d_4/Conv2D_grad/Conv2DBackpropFilter:0", shape=(3, 3, 256, 256), dtype=float32)
--------------------------g: Tensor("gradients/tower_0/mask_rcnn/mrcnn_mask_conv3/conv2d_4/BiasAdd_grad/BiasAddGrad:0", shape=(256,), dtype=float32)
--------------------------g: Tensor("gradients/tower_0/mask_rcnn/mrcnn_mask_bn3/batch_norm_4/FusedBatchNormV3_grad/FusedBatchNormGradV3:1", shape=(256,), dtype=float32)
--------------------------g: Tensor("gradients/tower_0/mask_rcnn/mrcnn_mask_bn3/batch_norm_4/FusedBatchNormV3_grad/FusedBatchNormGradV3:2", shape=(256,), dtype=float32)
--------------------------g: Tensor("gradients/tower_0/mask_rcnn/mrcnn_class_conv2/conv2d_1/Conv2D_grad/Conv2DBackpropFilter:0", shape=(1, 1, 1024, 1024), dtype=float32)
--------------------------g: Tensor("gradients/tower_0/mask_rcnn/mrcnn_class_conv2/conv2d_1/BiasAdd_grad/BiasAddGrad:0", shape=(1024,), dtype=float32)
--------------------------g: Tensor("gradients/tower_0/mask_rcnn/mrcnn_class_bn2/batch_norm_1/FusedBatchNormV3_grad/FusedBatchNormGradV3:1", shape=(1024,), dtype=float32)
--------------------------g: Tensor("gradients/tower_0/mask_rcnn/mrcnn_class_bn2/batch_norm_1/FusedBatchNormV3_grad/FusedBatchNormGradV3:2", shape=(1024,), dtype=float32)
--------------------------g: Tensor("gradients/tower_0/mask_rcnn/mrcnn_mask_conv4/conv2d_5/Conv2D_grad/Conv2DBackpropFilter:0", shape=(3, 3, 256, 256), dtype=float32)
--------------------------g: Tensor("gradients/tower_0/mask_rcnn/mrcnn_mask_conv4/conv2d_5/BiasAdd_grad/BiasAddGrad:0", shape=(256,), dtype=float32)
--------------------------g: Tensor("gradients/tower_0/mask_rcnn/mrcnn_mask_bn4/batch_norm_5/FusedBatchNormV3_grad/FusedBatchNormGradV3:1", shape=(256,), dtype=float32)
--------------------------g: Tensor("gradients/tower_0/mask_rcnn/mrcnn_mask_bn4/batch_norm_5/FusedBatchNormV3_grad/FusedBatchNormGradV3:2", shape=(256,), dtype=float32)
--------------------------g: Tensor("gradients/tower_0/mask_rcnn/mrcnn_bbox_fc/dense_1/MatMul_grad/MatMul_1:0", shape=(1024, 16), dtype=float32)
--------------------------g: Tensor("gradients/tower_0/mask_rcnn/mrcnn_bbox_fc/dense_1/BiasAdd_grad/BiasAddGrad:0", shape=(16,), dtype=float32)
--------------------------g: Tensor("gradients/tower_0/mask_rcnn/mrcnn_mask_deconv/conv2d_transpose/conv2d_transpose_grad/Conv2DBackpropFilter:0", shape=(2, 2, 256, 256), dtype=float32)
--------------------------g: Tensor("gradients/tower_0/mask_rcnn/mrcnn_mask_deconv/conv2d_transpose/BiasAdd_grad/BiasAddGrad:0", shape=(256,), dtype=float32)
--------------------------g: Tensor("gradients/tower_0/mask_rcnn/mrcnn_class_logits/dense/MatMul_grad/MatMul_1:0", shape=(1024, 4), dtype=float32)
--------------------------g: Tensor("gradients/tower_0/mask_rcnn/mrcnn_class_logits/dense/BiasAdd_grad/BiasAddGrad:0", shape=(4,), dtype=float32)
--------------------------g: Tensor("gradients/tower_0/mask_rcnn/mrcnn_mask/conv2d_6/Conv2D_grad/Conv2DBackpropFilter:0", shape=(1, 1, 256, 4), dtype=float32)
--------------------------g: Tensor("gradients/tower_0/mask_rcnn/mrcnn_mask/conv2d_6/BiasAdd_grad/BiasAddGrad:0", shape=(4,), dtype=float32)
------------------------------layer output: Tensor("rpn_class_loss_1/rpn_class_loss/Identity:0", shape=(), dtype=float32)
Traceback (most recent call last):
File "./applications/letrain.py", line 639, in <module>
tf.app.run()
File "/home/hpcadmin/hys/conda_env_tf_2/lib/python3.8/site-packages/tensorflow/python/platform/app.py", line 36, in run
_run(main=main, argv=argv, flags_parser=_parse_flags_tolerate_undef)
File "/home/hpcadmin/hys/conda_env_tf_2/lib/python3.8/site-packages/absl/app.py", line 308, in run
_run_main(main, args)
File "/home/hpcadmin/hys/conda_env_tf_2/lib/python3.8/site-packages/absl/app.py", line 254, in _run_main
sys.exit(main(argv))
File "./applications/letrain.py", line 632, in main
LeTrain().keras_model_train(user_mode)
File "/home/hpcadmin/hys/letrain_dir/letrain2/engine/base_train.py", line 1473, in keras_model_train
self.run_train_callback(model, dataset_train, data_val, config=config)
File "./applications/letrain.py", line 361, in run_train_callback
return train_callback(model=model, dataset_train=dataset_train,
File "/home/hpcadmin/hys/letrain_dir/letrain2/applications/maskrcnn/get_maskrcnn_loss.py", line 281, in train_callback
_train(model, config, dataset_train, dataset_val,
File "/home/hpcadmin/hys/letrain_dir/letrain2/applications/maskrcnn/get_maskrcnn_loss.py", line 404, in _train
model.compile(learning_rate, config.LEARNING_MOMENTUM)
File "/home/hpcadmin/hys/letrain_dir/letrain2/applications/maskrcnn/mrcnn/model.py", line 2525, in compile
self.keras_model.add_metric(loss, name=name, aggregation='mean')
File "/home/hpcadmin/hys/conda_env_tf_2/lib/python3.8/site-packages/keras/engine/base_layer_v1.py", line 1242, in add_metric
self._graph_network_add_metric(value, aggregation, name)
File "/home/hpcadmin/hys/conda_env_tf_2/lib/python3.8/site-packages/keras/engine/functional.py", line 1009, in _graph_network_add_metric
self._insert_layers(new_layers, new_nodes)
File "/home/hpcadmin/hys/conda_env_tf_2/lib/python3.8/site-packages/keras/engine/functional.py", line 936, in _insert_layers
layer_set = set(self._self_tracked_trackables)
File "/home/hpcadmin/hys/conda_env_tf_2/lib/python3.8/site-packages/tensorflow/python/trackable/data_structures.py", line 677, in __hash__
raise TypeError("unhashable type: 'ListWrapper'")
TypeError: unhashable type: 'ListWrapper'
ERROR:tensorflow:unhashable type: 'ListWrapper'
E1104 09:58:17.444831 140297968976640 letrain.py:642] unhashable type: 'ListWrapper'
Has anyone solved this problem? I tried modifying it to:
self.keras_model.add_loss(loss)
self.keras_model.add_metric(loss, name=name, aggregation='mean')
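The `TypeError` at the bottom of the traceback is easy to see in isolation: `_insert_layers` builds a `set(...)`, and TF's `ListWrapper` (the list-like wrapper Keras uses to track attributes) is unhashable, just like a plain `list`. A minimal pure-Python sketch of the failure mode (the `ListWrapper` below is a stand-in, not TensorFlow's actual class):

```python
class ListWrapper(list):
    """Stand-in for Keras/TF's tracking list wrapper: list-like and, like list, unhashable."""
    __hash__ = None  # same as the built-in list

tracked = [ListWrapper([1, 2]), ListWrapper([3])]
try:
    layer_set = set(tracked)  # mirrors functional.py's `set(self._self_tracked_trackables)`
except TypeError as exc:
    print(exc)  # unhashable type: 'ListWrapper'
```

Any fix therefore has to avoid putting the tracked list wrappers themselves into a set (for example by hashing their ids instead).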
| open | 2022-11-04T02:07:35Z | 2022-12-20T13:13:13Z | https://github.com/matterport/Mask_RCNN/issues/2899 | [] | houyushan | 1 |
pytorch/vision | computer-vision | 8,903 | Setting more than 2 elements to `scale` argument of `RandomResizedCrop()` works | ### 🐛 Describe the bug
Passing more than 2 elements for the `scale` argument of [RandomResizedCrop()](https://pytorch.org/vision/main/generated/torchvision.transforms.v2.RandomResizedCrop.html) works as shown below, even though `scale` should accept exactly 2 elements:
```python
from torchvision.transforms.v2 import RandomResizedCrop
rrc = RandomResizedCrop(size=100, scale=[0.1, 0.2, 0.3, 0.4, 0.5])
rrc
# RandomResizedCrop(size=(100, 100),
# scale=[0.1, 0.2, 0.3, 0.4, 0.5],
# ratio=(0.75, 1.3333333333333333),
# interpolation=InterpolationMode.BILINEAR,
# antialias=True)
```
In addition, passing 0 or 1 elements for the `scale` argument of `RandomResizedCrop()` fails as shown below:
```python
from torchvision.transforms.v2 import RandomResizedCrop
rrc = RandomResizedCrop(size=100, scale=[])
rrc # Error
rrc = RandomResizedCrop(size=100, scale=[0.1])
rrc # Error
```
> IndexError: list index out of range
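For comparison, here is the kind of length check one might expect `scale` to go through; this is a hypothetical helper, not torchvision's actual code. Extra elements currently appear to be silently ignored because only `scale[0]` and `scale[1]` are ever read, which is consistent with the behavior reported above:

```python
def check_scale(scale):
    """Validate a (min, max) scale sequence: exactly two values, in order."""
    if len(scale) != 2:
        raise ValueError(f"scale should be a sequence of length 2, got {len(scale)}")
    lo, hi = scale
    if lo > hi:
        raise ValueError("scale[0] should be <= scale[1]")
    return float(lo), float(hi)

print(check_scale([0.08, 1.0]))  # (0.08, 1.0)
```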
### Versions
```python
import torchvision
torchvision.__version__ # '0.20.1'
``` | closed | 2025-02-08T14:49:04Z | 2025-02-28T11:02:06Z | https://github.com/pytorch/vision/issues/8903 | [] | hyperkai | 1 |
mljar/mljar-supervised | scikit-learn | 610 | Question: How to convert a model or ensemble from JSON to ONNX | Hi,
Question: Is it possible to convert a model or ensemble from JSON to ONNX? | open | 2023-03-23T13:29:42Z | 2023-03-23T13:34:16Z | https://github.com/mljar/mljar-supervised/issues/610 | [] | VladPerervenko | 1 |
huggingface/datasets | pandas | 6,501 | OverflowError: value too large to convert to int32_t | ### Describe the bug

### Steps to reproduce the bug
It happens when just loading the dataset.
### Expected behavior
how can I fix it
### Environment info
pip install /mnt/cluster/zhangfan/study_info/LLaMA-Factory/peft-0.6.0-py3-none-any.whl
pip install huggingface_hub-0.19.4-py3-none-any.whl tokenizers-0.15.0-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl transformers-4.36.1-py3-none-any.whl pyarrow_hotfix-0.6-py3-none-any.whl datasets-2.15.0-py3-none-any.whl tyro-0.5.18-py3-none-any.whl trl-0.7.4-py3-none-any.whl
done | open | 2023-12-15T10:10:21Z | 2023-12-15T10:10:21Z | https://github.com/huggingface/datasets/issues/6501 | [] | zhangfan-algo | 0 |
ray-project/ray | python | 50,912 | [core] Split grpc common lib into multiple targets | Subissue for https://github.com/ray-project/ray/issues/50586 | closed | 2025-02-26T04:17:24Z | 2025-02-26T23:49:18Z | https://github.com/ray-project/ray/issues/50912 | [
"enhancement",
"core"
] | dentiny | 0 |
junyanz/pytorch-CycleGAN-and-pix2pix | pytorch | 1,409 | About transfer learning | closed | 2022-04-18T10:05:36Z | 2022-04-18T10:05:43Z | https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix/issues/1409 | [] | ZhenyuLiu-SYSU | 0 |
AirtestProject/Airtest | automation | 597 | swipe() behaves in the reverse vertical direction in the MuMu emulator | **Describe the bug**
When the script runs in the IDE, swipe works normally; after packaging, the swipe operations come out reversed. Only vertical swipes were tested; horizontal swipes were not tried.
```
# after packaging, in the emulator
swipe([394, 334], [394, 1000])
# in the IDE
swipe([394, 1000], [394, 334])
```
**Python version:** `Python 3.7.3`
**airtest version:** `1.1.0`
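Until the root cause is found, one defensive pattern is to route every swipe through a small helper so the direction can be flipped with a single flag in the packaged build. This is a hypothetical workaround, not an Airtest API:

```python
def directed_swipe(start, end, swipe_fn, reversed_build=False):
    """Dispatch a swipe, optionally flipping start/end for builds that invert it."""
    if reversed_build:
        start, end = end, start
    return swipe_fn(start, end)

# In the IDE:            directed_swipe([394, 1000], [394, 334], swipe)
# In the packaged build: directed_swipe([394, 1000], [394, 334], swipe, reversed_build=True)
```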
| open | 2019-11-07T10:12:06Z | 2019-11-11T01:45:11Z | https://github.com/AirtestProject/Airtest/issues/597 | [
"bug"
] | liubiantao | 3 |
huggingface/datasets | computer-vision | 6,605 | ELI5 no longer available, but referenced in example code | Example code is given here:
https://huggingface.co/docs/transformers/tasks/language_modeling
This code + article references the ELI5 dataset.
ELI5 is no longer available, as the ELI5 dataset page states: https://huggingface.co/datasets/eli5
"Defunct: Dataset "eli5" is defunct and no longer accessible due to unavailability of the source data.
Reddit recently [changed the terms of access](https://www.reddit.com/r/reddit/comments/12qwagm/an_update_regarding_reddits_api/) to its API, making the source data for this dataset unavailable.
"
Please change the example code to use a different dataset. | closed | 2024-01-19T10:21:52Z | 2024-02-01T17:58:23Z | https://github.com/huggingface/datasets/issues/6605 | [] | drdsgvo | 1 |
frol/flask-restplus-server-example | rest-api | 59 | How do you deal with circular imports? | For example, I have models like this:
```python
# modules/patients/models.py
class Patient(db.Model, OwnerMixin):
__owner_backref_name__ = "patients"
id = db.Column(db.Integer, primary_key=True)
phone = db.Column(db.String)
email = db.Column(db.String)
first_name = db.Column(db.String)
last_name = db.Column(db.String)
birthday = db.Column(db.DateTime)
zip_code = db.Column(db.String)
address = db.Column(db.String)
# I would like to do this, but there's circular import....
@aggregated('claims', db.Column(db.Integer))
def unpaid_amount(self):
return db.func.count('1')
# modules/accounting/models.py
class Claim(db.Model, OwnerMixin):
__owner_backref_name__ = 'claims'
id = db.Column(db.Integer, primary_key=True)
date = db.Column(db.DateTime, nullable=False)
label = db.Column(db.String, nullable=False)
amount = db.Column(db.Float, nullable=False)
ref = db.Column(db.String)
patient_id = db.Column(db.Integer, db.ForeignKey('patient.id'), nullable=False)
patient = db.relationship(
Patient,
backref=db.backref('claims', cascade="delete, delete-orphan")
)
```
Patient is in dedicated module "patient"
and claim model is in 'accounting' modules with other models related to accounting.
The problem is I can't import claim model and I can't access to claim model in patient model in order to, for example, make the sum of claims amount.
I think this is some design issue, but I want to be sure if there's any workaround before rewriting.... | closed | 2017-06-30T12:42:26Z | 2017-12-19T20:21:45Z | https://github.com/frol/flask-restplus-server-example/issues/59 | [
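The usual SQLAlchemy answer is to defer the cross-module reference: pass the related class by string name (e.g. `db.relationship('Claim', ...)`), so the name is only resolved once both modules have been imported. The deferral mechanism can be sketched in plain Python with a class registry; all names below are hypothetical and do not reflect SQLAlchemy's internals:

```python
registry = {}  # maps model names to classes, filled in as modules are imported

def register(cls):
    registry[cls.__name__] = cls
    return cls

class Relationship:
    """Holds only a *name* at class-definition time; resolves it at use time."""
    def __init__(self, target_name):
        self.target_name = target_name

    def resolve(self):
        return registry[self.target_name]  # lazy lookup, so import order no longer matters

@register
class Patient:
    claims = Relationship("Claim")  # "Claim" is not defined (or imported) yet

@register
class Claim:
    pass

print(Patient.claims.resolve() is Claim)  # True
```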
"question"
] | askz | 6 |
ansible/ansible | python | 84,752 | The get_url module may fail with Errno 110 (ETIMEDOUT) before the timeout argument elapses | ### Summary
The `timeout` argument to the `get_url` module works fine, but it does not cover cases where the kernel itself times out while opening the socket. There must be a way to pass the timeout down to the kernel, WDYT?
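For what it's worth, Linux does expose a per-socket knob for exactly this: the `TCP_SYNCNT` socket option caps the number of SYN retransmissions for a single socket, independently of the global `net.ipv4.tcp_syn_retries`. `get_url` does not expose it today; a sketch of what setting it could look like (the helper name is hypothetical, and the option is Linux-only):

```python
import socket

def socket_with_bounded_connect(app_timeout=20.0, syn_retries=2):
    """TCP socket whose connect() is bounded both by the application-level timeout
    and, on Linux, by a per-socket cap on kernel SYN retransmissions."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.settimeout(app_timeout)  # roughly what get_url's `timeout` maps to
    if hasattr(socket, "TCP_SYNCNT"):  # Linux-only socket option
        sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_SYNCNT, syn_retries)
    return sock
```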
### Issue Type
Bug Report
### Component Name
get_url
### Ansible Version
```console
$ ansible --version
ansible [core 2.16.3]
config file = None
configured module search path = ['/home/david/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3/dist-packages/ansible
ansible collection location = /home/david/.ansible/collections:/usr/share/ansible/collections
executable location = /usr/bin/ansible
python version = 3.12.3 (main, Jan 17 2025, 18:03:48) [GCC 13.3.0] (/usr/bin/python3)
jinja version = 3.1.2
libyaml = True
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
CONFIG_FILE() = None
EDITOR(env: EDITOR) = vim
```
### OS / Environment
$ cat /etc/os-release
NAME="Linux Mint"
VERSION="22.1 (Xia)"
ID=linuxmint
ID_LIKE="ubuntu debian"
PRETTY_NAME="Linux Mint 22.1"
VERSION_ID="22.1"
HOME_URL="https://www.linuxmint.com/"
SUPPORT_URL="https://forums.linuxmint.com/"
BUG_REPORT_URL="http://linuxmint-troubleshooting-guide.readthedocs.io/en/latest/"
PRIVACY_POLICY_URL="https://www.linuxmint.com/"
VERSION_CODENAME=xia
UBUNTU_CODENAME=noble
### Steps to Reproduce
1. Run `sudo sysctl net.ipv4.tcp_syn_retries=1 net.ipv4.tcp_retries2=1` to lower the kernel settings for the sake of the reproduction
2. Run `ansible -m "get_url" -a "url=http://8.8.8.8 dest=/tmp/url timeout=20" localhost`
The command above specifies a timeout of 20 s, but it fails after only a few seconds with
```
localhost | FAILED! => {
"changed": false,
"dest": "/tmp/url",
"elapsed": 3,
"gid": 1000,
"group": "david",
"mode": "0664",
"msg": "Request failed: <urlopen error [Errno 110] Connexion terminée par expiration du délai d'attente>",
"owner": "david",
"size": 41,
"state": "file",
"uid": 1000,
"url": "http://8.8.8.8"
}
```
Notice the `elapsed: 3` and the `Errno 110`, which is `ETIMEDOUT` (the French message translates to "Connection timed out")
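The `elapsed: 3` matches the kernel's exponential SYN backoff: with `tcp_syn_retries=1`, the initial SYN waits one retransmission timeout and the single retry waits two, about 3 s in total before `connect()` fails with `ETIMEDOUT`. The arithmetic, assuming the standard 1 s initial retransmission timeout:

```python
def syn_give_up_seconds(syn_retries, initial_rto=1.0):
    """Seconds until connect() fails with ETIMEDOUT: the initial SYN waits one RTO,
    and each retransmission doubles the previous wait (exponential backoff)."""
    return sum(initial_rto * 2 ** i for i in range(syn_retries + 1))

print(syn_give_up_seconds(1))  # 3.0   -> matches the report's "elapsed": 3
print(syn_give_up_seconds(6))  # 127.0 -> the kernel default of 6 retries, about 2 minutes
```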
### Expected Results
The command times out after 20s.
### Actual Results
```console
localhost | FAILED! => {
"changed": false,
"dest": "/tmp/url",
"elapsed": 3,
"gid": 1000,
"group": "david",
"mode": "0664",
"msg": "Request failed: <urlopen error [Errno 110] Connexion terminée par expiration du délai d'attente>",
"owner": "david",
"size": 41,
"state": "file",
"uid": 1000,
"url": "http://8.8.8.8"
}
```
### Code of Conduct
- [x] I agree to follow the Ansible Code of Conduct | closed | 2025-02-25T13:13:58Z | 2025-03-11T13:00:03Z | https://github.com/ansible/ansible/issues/84752 | [
"module",
"bug",
"affects_2.16"
] | ddolcimascolo | 4 |
taverntesting/tavern | pytest | 907 | 3.0 Release | - ~MQTT as an optional extra instead of being in core (`pip install tavern[mqtt]`)~
- ~More recent version of pytest~
- ~Convert all type annotations to py3.11+ versions and make that the minimum version supported~
- Better grpc support / stabilisation of features
  - It might be useful to allow a flag when making a grpc request that serialises the message into a `protobuf.Any` message, since this is sometimes useful
- Clean up docs to indicate gRPC support on the front page, classifiers, etc?
- ~Remove `_deprecated_recurse_access_key`~
- Update to protobuf 5? | open | 2024-01-18T18:27:54Z | 2024-04-27T13:09:30Z | https://github.com/taverntesting/tavern/issues/907 | [
"Type: Maintenance"
] | michaelboulton | 1 |
aleju/imgaug | machine-learning | 276 | ValueError: Could not convert string to float: 'max' | When I run the example code after line: 'Apply heavy augmentations to images (used to create the image at the very top of this readme):', the output raised the error:
ValueError: Could not convert string to float: 'max'
The entire warnings and errors are:
/usr/lib/python3/dist-packages/numpy/core/numeric.py:301: FutureWarning: in the future, full((1,), 255) will return an array of dtype('int64')
format(shape, fill_value, array(fill_value).dtype), FutureWarning)
/usr/lib/python3/dist-packages/numpy/core/numeric.py:301: FutureWarning: in the future, full((1,), 3) will return an array of dtype('int64')
format(shape, fill_value, array(fill_value).dtype), FutureWarning)
/usr/lib/python3/dist-packages/numpy/core/numeric.py:301: FutureWarning: in the future, full((1,), 11) will return an array of dtype('int64')
format(shape, fill_value, array(fill_value).dtype), FutureWarning)
/usr/lib/python3/dist-packages/numpy/core/numeric.py:301: FutureWarning: in the future, full((1,), 'max') will return an array of dtype('<U3')
format(shape, fill_value, array(fill_value).dtype), FutureWarning)
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-94-e664ef3358fb> in <module>
85 )
86
---> 87 images_aug = seq.augment_images(images)
/usr/local/lib/python3.5/dist-packages/imgaug/augmenters/meta.py in augment_images(self, images, parents, hooks)
538 random_state=ia.copy_random_state(self.random_state),
539 parents=parents,
--> 540 hooks=hooks
541 )
542 # move "forward" the random state, so that the next call to
/usr/local/lib/python3.5/dist-packages/imgaug/augmenters/meta.py in _augment_images(self, images, random_state, parents, hooks)
1954 images=images,
1955 parents=parents + [self],
-> 1956 hooks=hooks
1957 )
1958 else:
/usr/local/lib/python3.5/dist-packages/imgaug/augmenters/meta.py in augment_images(self, images, parents, hooks)
450 random_state=ia.copy_random_state(self.random_state),
451 parents=parents,
--> 452 hooks=hooks
453 )
454 # move "forward" the random state, so that the next call to
/usr/local/lib/python3.5/dist-packages/imgaug/augmenters/meta.py in _augment_images(self, images, random_state, parents, hooks)
2237 images=images_to_aug,
2238 parents=parents + [self],
-> 2239 hooks=hooks
2240 )
2241 output_is_array = ia.is_np_array(images_to_aug)
/usr/local/lib/python3.5/dist-packages/imgaug/augmenters/meta.py in augment_images(self, images, parents, hooks)
450 random_state=ia.copy_random_state(self.random_state),
451 parents=parents,
--> 452 hooks=hooks
453 )
454 # move "forward" the random state, so that the next call to
/usr/local/lib/python3.5/dist-packages/imgaug/augmenters/blend.py in _augment_images(self, images, random_state, parents, hooks)
614 alphas = np.float64(alphas).transpose((1, 2, 0))
615 else:
--> 616 alphas = self.factor.draw_samples((h, w), random_state=ia.new_random_state(seeds[i]))
617 ia.do_assert(0.0 <= alphas.item(0) <= 1.0)
618 result[i] = blend_alpha(image_first, image_second, alphas, eps=self.epsilon)
/usr/local/lib/python3.5/dist-packages/imgaug/parameters.py in draw_samples(self, size, random_state)
286 samples = self._draw_samples(
287 size if not ia.is_single_integer(size) else tuple([size]),
--> 288 random_state)
289 ia.forward_random_state(random_state)
290
/usr/local/lib/python3.5/dist-packages/imgaug/parameters.py in _draw_samples(self, size, random_state)
2077 def _draw_samples(self, size, random_state):
2078 seed = random_state.randint(0, 10**6)
-> 2079 aggregation_method = self.aggregation_method.draw_sample(random_state=ia.new_random_state(seed))
2080 iterations = self.iterations.draw_sample(random_state=ia.new_random_state(seed+1))
2081 ia.do_assert(iterations > 0)
/usr/local/lib/python3.5/dist-packages/imgaug/parameters.py in draw_sample(self, random_state)
259
260 """
--> 261 return self.draw_samples(1, random_state=random_state)[0]
262
263 def draw_samples(self, size, random_state=None):
/usr/local/lib/python3.5/dist-packages/imgaug/parameters.py in draw_samples(self, size, random_state)
286 samples = self._draw_samples(
287 size if not ia.is_single_integer(size) else tuple([size]),
--> 288 random_state)
289 ia.forward_random_state(random_state)
290
/usr/local/lib/python3.5/dist-packages/imgaug/parameters.py in _draw_samples(self, size, random_state)
1112
1113 def _draw_samples(self, size, random_state):
-> 1114 return np.full(size, self.value)
1115
1116 def __repr__(self):
/usr/lib/python3/dist-packages/numpy/core/numeric.py in full(shape, fill_value, dtype, order)
300 "in the future, full({0}, {1!r}) will return an array of {2!r}".
301 format(shape, fill_value, array(fill_value).dtype), FutureWarning)
--> 302 multiarray.copyto(a, fill_value, casting='unsafe')
303 return a
304
ValueError: could not convert string to float: 'max'
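The final conversion can be reproduced in isolation (a hypothetical minimal repro, assuming the `'max'` string reached `np.full` as the fill value, which is what the frames above suggest):

```python
import numpy as np

# np.full with a float dtype must cast its fill value; with the string 'max'
# (meant to select an aggregation method, not be a number) the cast fails
# exactly as in the traceback.
def fill_like_imgaug(size, value):
    return np.full(size, value, dtype=np.float64)

try:
    fill_like_imgaug((2, 2), "max")
    error = None
except ValueError as exc:
    error = str(exc)
```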
Can anyone help me fix this bug? I ran the example code in a Jupyter notebook with Python 3.5. | open | 2019-03-02T06:53:24Z | 2019-03-04T07:23:26Z | https://github.com/aleju/imgaug/issues/276 | [] | thuongtrannc | 3
jmcnamara/XlsxWriter | pandas | 386 | Feature request: stop_if_true for conditional formatting | Thank you very much for your outstanding work.
I found xlsxwriter missing two conditional format-related features, so I made a little revision to add these two features:
Icon_set: Displays the specified icon in the data cell according to the specified condition. #387
Stop_if_true: Once the conditional formatting is met, the subsequent conditional formatting in the list is no longer checked.
I am a novice in Python and do not have software engineering experience. The code works, but it is not satisfying, so would you be so kind as to add these two features in a new version?
very grateful.
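For readers landing here later: recent XlsxWriter releases expose both options through `worksheet.conditional_format()`. A minimal sketch (assuming a current XlsxWriter; an in-memory buffer is used so nothing is written to disk):

```python
import io
import xlsxwriter

buf = io.BytesIO()
workbook = xlsxwriter.Workbook(buf, {"in_memory": True})
worksheet = workbook.add_worksheet()
worksheet.write_column("A1", [10, 40, 70, 90])
red = workbook.add_format({"bg_color": "#FFC7CE"})

# stop_if_true: once this rule matches a cell, later rules are skipped for it.
worksheet.conditional_format("A1:A4", {
    "type": "cell", "criteria": ">", "value": 50,
    "format": red, "stop_if_true": True,
})

# icon_set: pick an icon per cell from its value.
worksheet.conditional_format("A1:A4", {
    "type": "icon_set", "icon_style": "3_traffic_lights",
})
workbook.close()
```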
Attachments to
[example.zip](https://github.com/jmcnamara/XlsxWriter/files/538608/example.zip)
worksheet.py revision and demo (Python 3.5 xlsxwriter 0.8.4)
| closed | 2016-10-19T10:18:21Z | 2017-09-11T20:49:14Z | https://github.com/jmcnamara/XlsxWriter/issues/386 | [
"feature request",
"short term"
] | cavlc | 5 |
FactoryBoy/factory_boy | django | 659 | Easier use of `sqlalchemy_session_persistence` together with post-generation hooks | #### The problem
At the moment there seems to be no way to apply the session persistence setting after post-generation hooks have run. I'm trying to make sure everything is `flush()`ed by the end of the creation of my factory, but changes made in post-generation hooks are excluded from that unless whatever you call flushes itself.
#### Proposed solution
Unsure. There are other moving parts in the instantiation strategies that need to be considered. The obvious change, I think, is to honour the `sqlalchemy_session_persistence` setting at the end of `create()` instead of `_create()`.
#### Extra notes
Maybe I am simply using post-generation hooks incorrectly. Here's an example of one of the offending hooks:
```python
@factory.post_generation
def parent(obj, create, extracted, **kwargs):
    if extracted:
        obj.parent = extracted
        obj.value = extracted.value
```
We have some denormalisation and need to propagate this `value` from the parent, which of course is a modification to the underlying model and needs to be `flush()`ed at some point (and, for us, this is best done before the end of creation) | open | 2019-11-11T12:31:03Z | 2019-11-11T12:31:03Z | https://github.com/FactoryBoy/factory_boy/issues/659 | [] | fish-face | 0 |
httpie/cli | python | 1,300 | Unable to See Status Code in Output | ## Checklist
- [x] I've searched for similar issues.
- [x] I'm using the latest version of HTTPie.
---
## Minimal reproduction code and steps
1. Run local web service that returns 401 for endpoint :8080/get-values
2. Run HTTPie against endpoint
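For reproduction without the original service, here is a hypothetical stand-in server that returns the same quirk as the real one: a status line with a code but no reason phrase (`HTTP/1.1 401` rather than `HTTP/1.1 401 Unauthorized`):

```python
import http.client
import socket
import threading
import time

def serve_bare_401_once(port: int) -> None:
    """Serve one request with a reason-phrase-less status line, then exit."""
    with socket.socket() as srv:
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind(("127.0.0.1", port))
        srv.listen(1)
        conn, _ = srv.accept()
        with conn:
            conn.recv(4096)  # read and discard the request
            conn.sendall(b"HTTP/1.1 401\r\n"          # note: no reason phrase
                         b"Content-Type: application/json\r\n"
                         b"Content-Length: 2\r\n\r\n{}")

def fetch_status(port: int = 18401) -> tuple:
    threading.Thread(target=serve_bare_401_once, args=(port,), daemon=True).start()
    time.sleep(0.2)  # give the server a moment to bind
    conn = http.client.HTTPConnection("127.0.0.1", port, timeout=5)
    conn.request("GET", "/get-values")
    resp = conn.getresponse()
    return resp.status, resp.reason
```

Running HTTPie against this port shows the truncated `HTTP/1.1` line, which suggests the missing reason phrase is what trips the status-line rendering.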
## Current result
**Curl output:**
```sh
$ curl -v 'http://localhost:8080/get-values'
* Trying ::1...
* TCP_NODELAY set
* Connected to localhost (::1) port 8080 (#0)
> GET /get-values HTTP/1.1
> Host: localhost:8080
> User-Agent: curl/7.64.1
> Accept: */*
>
< HTTP/1.1 401
< Content-Type: application/json
< Transfer-Encoding: chunked
< Date: Sat, 19 Feb 2022 00:01:22 GMT
<
* Connection #0 to host localhost left intact
* <body redacted>
```
**HTTPie output:**
```sh
$ http --verbose ':8080/get-values'
GET /get-values HTTP/1.1
Accept: */*
Accept-Encoding: gzip, deflate
Connection: keep-alive
Host: localhost:8080
User-Agent: HTTPie/3.0.2
HTTP/1.1
Connection: keep-alive
Content-Type: application/json
Date: Sat, 19 Feb 2022 00:02:54 GMT
Keep-Alive: timeout=60
Transfer-Encoding: chunked
<body redacted>
```
## Expected result
Expecting to see `HTTP/1.1 401` instead of just `HTTP/1.1 `.
## Debug output
```bash
$ http --debug ':8080/get-values'
HTTPie 3.0.2
Requests 2.27.1
Pygments 2.11.2
Python 3.10.2 (main, Feb 2 2022, 08:42:42) [Clang 13.0.0 (clang-1300.0.29.3)]
/usr/local/Cellar/httpie/3.0.2/libexec/bin/python3.10
Darwin 20.6.0
<Environment {'as_silent': <function Environment.as_silent at 0x10d7d0f70>,
'colors': 256,
'config': {'default_options': ['--style=fruity']},
'config_dir': PosixPath('/Users/<redacted>/.config/httpie'),
'devnull': <property object at 0x10d7c4cc0>,
'is_windows': False,
'log_error': <function Environment.log_error at 0x10d7d1000>,
'program_name': 'http',
'stderr': <_io.TextIOWrapper name='<stderr>' mode='w' encoding='utf-8'>,
'stderr_isatty': True,
'stdin': <_io.TextIOWrapper name='<stdin>' mode='r' encoding='utf-8'>,
'stdin_encoding': 'utf-8',
'stdin_isatty': True,
'stdout': <_io.TextIOWrapper name='<stdout>' mode='w' encoding='utf-8'>,
'stdout_encoding': 'utf-8',
'stdout_isatty': True}>
<PluginManager {'adapters': [],
'auth': [<class 'httpie.plugins.builtin.BasicAuthPlugin'>,
<class 'httpie.plugins.builtin.DigestAuthPlugin'>,
<class 'httpie.plugins.builtin.BearerAuthPlugin'>],
'converters': [],
'formatters': [<class 'httpie.output.formatters.headers.HeadersFormatter'>,
<class 'httpie.output.formatters.json.JSONFormatter'>,
<class 'httpie.output.formatters.xml.XMLFormatter'>,
<class 'httpie.output.formatters.colors.ColorFormatter'>]}>
>>> requests.request(**{'auth': None,
'data': RequestJSONDataDict(),
'headers': <HTTPHeadersDict('User-Agent': b'HTTPie/3.0.2')>,
'method': 'get',
'params': <generator object MultiValueOrderedDict.items at 0x10da93450>,
'url': 'http://localhost:8080/get-values'})
HTTP/1.1
Connection: keep-alive
Content-Type: application/json
Date: Sat, 19 Feb 2022 00:05:16 GMT
Keep-Alive: timeout=60
Transfer-Encoding: chunked
<body redacted>
``` | closed | 2022-02-19T00:11:30Z | 2022-03-03T16:28:04Z | https://github.com/httpie/cli/issues/1300 | [
"bug"
] | EarthCitizen | 5 |
junyanz/pytorch-CycleGAN-and-pix2pix | pytorch | 1,644 | Input images are stretched when using test | I trained a model on my own data set of 22,000 256x256 jpg image pairs. But when I use test to generate B images from A images that are also 256x256, the results are only the left 128x256 pixels of the image stretched out to 256x256.
During training, the output in visdom looked correct, with no stretching. I cannot figure out what could cause this. I've not added any extra switch values.
My command for test is
python test.py --dataroot ./datasets/biosphereNormalMaps --name biosphereNormalMaps --model pix2pix --direction AtoB
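For what it's worth, this symptom matches (hedged guess) the aligned data loader's assumption that every test file is a side-by-side A|B pair: it crops the left half as A and then resizes it back to the full width:

```python
import numpy as np

# Stand-in for a lone 256x256 A image fed to the aligned (pair) loader.
AB = np.zeros((256, 256, 3), dtype=np.uint8)
w2 = AB.shape[1] // 2
A = AB[:, :w2]   # what the loader treats as "A": the left half only
```

A half-width crop resized back to 256x256 is exactly a 2x horizontal stretch of the left 128 columns; if the test images are single A images rather than pairs, running with `--model test --dataset_mode single` (which reads images whole) should avoid the split.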
I've tried making the test images jpg and png with the same result. | open | 2024-04-19T07:35:41Z | 2025-03-05T14:21:18Z | https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix/issues/1644 | [] | KreesoAuriga | 2 |
vaexio/vaex | data-science | 2,035 | [BUG-REPORT] Cannot pass dictionaries into registered functions |
**Description**
If you pass a dictionary into a registered function, you get a syntax error, while the same works for a list.
```python
import vaex
import numpy as np

df = vaex.example()
df = df[df["id"] < 10][:100]
labels = {0: "now", 1: "happy", 2: "sad", 3: "arg", 4: "foo", 5: "bar", 6: "something", 7: "is", 8: "happening", 9: "here"}

@vaex.register_function()
def index_to_label(arr, mapping):
    return np.array([mapping[i] for i in arr])

df.id.index_to_label(labels)
```
throws
```
Expression = index_to_label(id, {0: 'now', 1: 'happy', 2: 'sad', 3: 'a...
Length: 100 dtype: string (expression)
--------------------------------------
Error evaluating: SyntaxError('invalid syntax', ('<unknown>', 1, 30, "index_to_label(id, {0: 'now' 1: 'happy' 2: 'sad' 3: 'arg' 4: 'foo' 5: 'bar' 6: 'something' 7: 'is' 8: 'happening' 9: 'here'})\n"))
```
while the same works as a list
```python
import vaex
import numpy as np

df = vaex.example()
df = df[df["id"] < 10][:100]
labels = {0: "now", 1: "happy", 2: "sad", 3: "arg", 4: "foo", 5: "bar", 6: "something", 7: "is", 8: "happening", 9: "here"}
labels_list = [labels[i] for i in labels]

@vaex.register_function()
def index_to_label(arr, mapping):
    return np.array([mapping[i] for i in arr])

df.id.index_to_label(labels_list)
```
I also tried to be explicit like the docs
```python
import vaex
import numpy as np

df = vaex.example()
df = df[df["id"] < 10][:100]
labels = {0: "now", 1: "happy", 2: "sad", 3: "arg", 4: "foo", 5: "bar", 6: "something", 7: "is", 8: "happening", 9: "here"}

@vaex.register_function(on_expression=False)
def index_to_label(mapping, arr):
    return np.array([mapping.get(i) for i in arr])

df.func.index_to_label(labels, df.id)
```
but that also failed
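The `SyntaxError` text hints at the mechanism: the dict argument gets rendered into the expression string with its commas lost, and that string can no longer be parsed back as Python. The failure mode can be illustrated without vaex (the broken string below is copied from the error above):

```python
import ast

broken = "index_to_label(id, {0: 'now' 1: 'happy'})"   # commas missing, as in the error
fixed = "index_to_label(id, {0: 'now', 1: 'happy'})"

def parses(source: str) -> bool:
    try:
        ast.parse(source)
        return True
    except SyntaxError:
        return False
```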
**Software information**
- Vaex version (`import vaex; vaex.__version__`): 4.9.1
- Vaex was installed via: pip
- OS: Mac/Linux
| closed | 2022-04-28T00:44:17Z | 2022-04-29T16:17:01Z | https://github.com/vaexio/vaex/issues/2035 | [] | Ben-Epstein | 1 |
gradio-app/gradio | python | 10,093 | HTTP 307 temporary redirect code not considered OK | ### Describe the bug
HTTP Code `302` is listed in the acceptable HTTP codes for `url_ok`; however, alternative temporary redirect codes `303` and `307` are not included in the list.
https://github.com/gradio-app/gradio/blob/01b919f04b69732fd8adb52f6d156e5683589221/gradio/networking.py#L51-L62
MDN Docs for the mentioned HTTP codes:
- 302: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/302
- 303: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/303
- 307: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/307
Although `302` is commonly used, `303` and `307` may be used to be more specific about how the temporary redirect should be handled.
Currently, `url_ok()` does not consider response codes `303` and `307` to be OK. Could they be added to the permitted response codes?
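The change being requested is small; a sketch of the membership test with the redirect family included (the exact set is up to the maintainers — `308` is listed here only as an assumption for completeness):

```python
# Status codes url_ok could treat as "server is up": success, auth challenge,
# and the redirect family (302/303/307/308).
ACCEPTABLE_CODES = frozenset({200, 401, 302, 303, 307, 308})

def status_ok(status_code: int) -> bool:
    return status_code in ACCEPTABLE_CODES
```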
### Have you searched existing issues? 🔎
- [X] I have searched and found no existing issues
### Reproduction
## Example App for Repro
This app should be run in Positron with the [Run App feature](https://positron.posit.co/run-interactive-apps.html), which creates a proxy for the app and displays the proxied url in the viewer.
```python
import gradio as gr

def image_classifier(inp):
    return {'cat': 0.3, 'dog': 0.7}

demo = gr.Interface(fn=image_classifier, inputs="image", outputs="label")
demo.launch()
```
The app starts up, but crashes immediately afterwards with:
```python
Traceback (most recent call last):
  File "/Users/sashimi/Documents/projects/my-python-project_gradio/gradio_example.py", line 5, in <module>
    demo.launch()
  File "/Users/sashimi/Documents/projects/my-python-project_gradio/.venv/lib/python3.12/site-packages/gradio/blocks.py", line 2590, in launch
    raise ValueError(
ValueError: When localhost is not accessible, a shareable link must be created. Please set share=True or check your proxy settings to allow access to localhost.
```
Adding `307` to the list of acceptable status codes allows the app to run successfully, and the `ValueError` shown above no longer occurs.
### Screenshot
_No response_
### Logs
Modified `networking.py` in my `.venv` to see the response code -- `307`
```python
def url_ok(url: str) -> bool:
    try:
        for _ in range(5):
            with warnings.catch_warnings():
                warnings.filterwarnings("ignore")
                r = httpx.head(url, timeout=3, verify=False)
            if r.status_code in (200, 401, 302):  # 401 or 302 if auth is set
                return True
            # --- start sharon debug ---
            print(r)
            # --- end sharon debug ---
            time.sleep(0.500)
    except (ConnectionError, httpx.ConnectError, httpx.TimeoutException):
        return False
    return False
```
#### Output
```
~/Documents/projects/my-python-project_gradio/.venv/bin/python /Users/sashimi/Documents/projects/my-python-project_gradio/gradio_example.py
* Running on local URL: http://127.0.0.1:7860
<Response [307 Temporary Redirect]>
<Response [307 Temporary Redirect]>
<Response [307 Temporary Redirect]>
<Response [307 Temporary Redirect]>
<Response [307 Temporary Redirect]>
Traceback (most recent call last):
  File "/Users/sashimi/Documents/projects/my-python-project_gradio/gradio_example.py", line 5, in <module>
    demo.launch()
  File "/Users/sashimi/Documents/projects/my-python-project_gradio/.venv/lib/python3.12/site-packages/gradio/blocks.py", line 2590, in launch
    raise ValueError(
ValueError: When localhost is not accessible, a shareable link must be created. Please set share=True or check your proxy settings to allow access to localhost.
```
### System Info
```shell
Gradio Environment Information:
------------------------------
Operating System: Darwin
gradio version: 5.7.1
gradio_client version: 1.5.0
------------------------------------------------
gradio dependencies in your environment:
aiofiles: 23.2.1
anyio: 4.6.2.post1
audioop-lts is not installed.
fastapi: 0.115.5
ffmpy: 0.4.0
gradio-client==1.5.0 is not installed.
httpx: 0.28.0
huggingface-hub: 0.26.3
jinja2: 3.1.4
markupsafe: 2.1.5
numpy: 2.1.3
orjson: 3.10.12
packaging: 24.2
pandas: 2.2.3
pillow: 11.0.0
pydantic: 2.10.2
pydub: 0.25.1
python-multipart==0.0.12 is not installed.
pyyaml: 6.0.2
ruff: 0.8.1
safehttpx: 0.1.1
semantic-version: 2.10.0
starlette: 0.41.3
tomlkit==0.12.0 is not installed.
typer: 0.14.0
typing-extensions: 4.12.2
urllib3: 2.2.3
uvicorn: 0.32.1
authlib; extra == 'oauth' is not installed.
itsdangerous; extra == 'oauth' is not installed.
gradio_client dependencies in your environment:
fsspec: 2024.10.0
httpx: 0.28.0
huggingface-hub: 0.26.3
packaging: 24.2
typing-extensions: 4.12.2
websockets: 12.0
```
### Severity
Blocking usage of gradio | closed | 2024-12-02T21:39:32Z | 2024-12-02T22:24:04Z | https://github.com/gradio-app/gradio/issues/10093 | [
"bug"
] | sharon-wang | 0 |
sinaptik-ai/pandas-ai | pandas | 865 | SmartDatalake persistently fails when asked to plot | ### System Info
pandasai version: 1.5.13
### 🐛 Describe the bug
Asking for different plots, the function keeps returning the matplotlib.pyplot module (`plt`), which is an unexpected return type.
I persistently see this pattern when trying different queries:
```
2024-01-10 16:54:08 [INFO]
Code running:
....
plt.show()
result = {'type': 'plot', 'value': plt}
```
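The step-4 `TypeError` is reproducible without pandasai: the pipeline treats `result['value']` as a file path, and a module is not path-like (a stand-in module is used here instead of `matplotlib.pyplot`):

```python
import os
import types

plt_stand_in = types.ModuleType("plt")            # what the generated code returned
result = {"type": "plot", "value": plt_stand_in}  # should be a path like "chart.png"

try:
    os.fspath(result["value"])
    error = None
except TypeError as exc:
    error = str(exc)
```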
2024-01-10 16:54:08 [ERROR] Pipeline failed on step 4: expected str, bytes or os.PathLike object, not module | closed | 2024-01-10T14:59:41Z | 2024-07-04T16:05:41Z | https://github.com/sinaptik-ai/pandas-ai/issues/865 | [] | RoyKulik | 10 |
junyanz/pytorch-CycleGAN-and-pix2pix | computer-vision | 1,442 | visualizer error for 6 input and 3 output channels | Hello and thanks a lot for this software and taking the time to read my issue.
I'm trying to train pix2pix to go from a combination of 2 RGB images (6 input channels) to 1 RGB image (3 output channels). My dataset looks like this (same portion of the sky in optical, ultraviolet and infrared (false-coloured), respectively):

Setting `--input_nc 6` and modifying `__getitem__` in `aligned_dataset` to be able to input 2 images (6 channels), like this:
```
w, h = AB.size
w3 = int(w / 3)
A = AB.crop((0, 0, w3, h))
B = AB.crop((w3, 0, w3*2, h))
C = AB.crop((w3*2, 0, w, h))
# apply the same transform to both A and B
transform_params = get_params(self.opt, A.size)
A_transform = get_transform(self.opt, transform_params, grayscale=(self.input_nc == 1))
B_transform = get_transform(self.opt, transform_params, grayscale=(self.output_nc == 1))
A = A_transform(A)
B = B_transform(B)
C = B_transform(C)
B = torch.cat((B, C))
return {'A': A, 'B': B, 'A_paths': AB_path, 'B_paths': AB_path}
```
I get the following error after epoch 1:
```
(epoch: 1, iters: 100, time: 1.336, data: 0.211) G_GAN: 1.438 G_L1: 2.366 D_real: 0.543 D_fake: 0.663
(epoch: 1, iters: 200, time: 1.338, data: 0.005) G_GAN: 0.956 G_L1: 1.331 D_real: 0.871 D_fake: 0.448
(epoch: 1, iters: 300, time: 1.338, data: 0.003) G_GAN: 0.834 G_L1: 2.449 D_real: 0.504 D_fake: 0.583
/usr/local/lib/python3.7/dist-packages/visdom/__init__.py:366: VisibleDeprecationWarning: Creating an ndarray from ragged nested sequences (which is a list-or-tuple of lists-or-tuples-or ndarrays with different lengths or shapes) is deprecated. If you meant to do this, you must specify 'dtype=object' when creating the ndarray.
return np.array(a)
Traceback (most recent call last):
File "train.py", line 57, in <module>
visualizer.display_current_results(model.get_current_visuals(), epoch, save_result)
File "/content/pytorch-CycleGAN-and-pix2pix/util/visualizer.py", line 154, in display_current_results
padding=2, opts=dict(title=title + ' images'))
File "/usr/local/lib/python3.7/dist-packages/visdom/__init__.py", line 389, in wrapped_f
return f(*args, **kwargs)
File "/usr/local/lib/python3.7/dist-packages/visdom/__init__.py", line 1292, in images
height = int(tensor.shape[2] + 2 * padding)
IndexError: tuple index out of range
```
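Reading the traceback, one further modification seems needed (hedged guess): the visualizer forwards the tensors to visdom, which only handles 1- or 3-channel images, so a 6-channel tensor has to be split back into its two RGB halves before display, mirroring the `torch.cat((B, C))` above:

```python
import numpy as np

# Stand-in for a 6-channel CHW tensor produced by torch.cat((B, C)).
six_channel = np.zeros((6, 192, 192), dtype=np.float32)
B_rgb, C_rgb = six_channel[:3], six_channel[3:]   # split before visualizing
```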
Are there any other modifications needed beyond the ones in dataset? What would they be? Thanks a lot for any assistance. | open | 2022-07-04T08:58:53Z | 2022-07-11T09:39:45Z | https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix/issues/1442 | [] | Davegdd | 2 |
horovod/horovod | deep-learning | 3,761 | Keras: Multi Task Learning, weighted sampling for multiple target column | **Is your feature request related to a problem? Please describe.**
Dealing with imbalanced classes has always been a challenge, and for multi-task learning it becomes more challenging: in multi-task learning we have multiple target labels, while in simple classification we have one. To resolve this, engineers normally opt for weighted sampling, a technique that puts weights on samples (rows) based on their class frequency.
In a simple Keras script we can perform weighted sampling for multi-task learning with the following code:
```python
n_samples = x_train.shape[0]
w1 = np.random.rand(n_samples,)
w2 = np.random.rand(n_samples,)
model.fit(x_train, (y_train_1, y_train_2), epochs=10,
          batch_size=2048, sample_weight={'target_col_1': w1, 'target_col_2': w2})
```
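The per-target weighting itself is estimator-independent; a stdlib sketch of "one weight column per target", keyed by target name as in the Keras call above (data made up for illustration):

```python
# Inverse-frequency weights computed independently for each target column.
rows = [{"y1": 0, "y2": 1}, {"y1": 1, "y2": 1}, {"y1": 0, "y2": 0}]

def inverse_frequency_weights(rows, target):
    counts = {}
    for row in rows:
        counts[row[target]] = counts.get(row[target], 0) + 1
    return [1.0 / counts[row[target]] for row in rows]

sample_weights = {t: inverse_frequency_weights(rows, t) for t in ("y1", "y2")}
```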
We can use the sample-weighting option `sample_weight_col` in the hvd Keras estimator; however, this parameter takes only one column.
**Describe the solution you'd like**
In the Horovod API documentation, it is specified that **sample_weight_col – Optional column indicating the weight of each sample.**
Here I would like to pass a _**list/dictionary of weight columns**_ that can map to different target columns, like the Keras example above.
| open | 2022-10-31T10:48:36Z | 2022-11-03T09:52:34Z | https://github.com/horovod/horovod/issues/3761 | [
"enhancement"
] | hamzafar | 0 |
sinaptik-ai/pandas-ai | pandas | 676 | Add project dependencies license | Create a folder in the main tree that stores all dependency licenses used in the project, in order to maintain transparency and compliance with licensing requirements.
[Example](https://github.com/pandas-dev/pandas/tree/main/LICENSES) | closed | 2023-10-23T12:16:40Z | 2023-11-02T12:35:54Z | https://github.com/sinaptik-ai/pandas-ai/issues/676 | [] | camilaccb | 8 |
geex-arts/django-jet | django | 315 | Overriding CSS | I have a custom CSS reference in the Media inner class for a ModelAdmin. I notice that the Jet `_changeform.scss` appears to override it. (I don't want `list-style: None` to appear in my markdown preview.)
Is there something I need to change or add in order to get my CSS to appear last (viz., override anything else)?
Without actually changing the code in Jet itself, I mean... | open | 2018-04-04T20:14:53Z | 2018-04-04T20:36:33Z | https://github.com/geex-arts/django-jet/issues/315 | [] | jgoodleaf | 1
jupyterlab/jupyter-ai | jupyter | 1,246 | Deepseek R1 models via OpenRouter not working with magic commands | jupyter lab: 4.3.5
jupyter_ai: 2.29.1

 | closed | 2025-02-17T05:23:26Z | 2025-02-27T08:02:27Z | https://github.com/jupyterlab/jupyter-ai/issues/1246 | [
"bug"
] | LCBProgHome | 5 |
PaddlePaddle/ERNIE | nlp | 796 | Is there a large CV model? | **Is your feature request related to a problem? Please describe.**
A clear and concise description of what the problem is. Ex. I'm always frustrated when [...]
**Describe the solution you'd like**
A clear and concise description of what you want to happen.
**Describe alternatives you've considered**
A clear and concise description of any alternative solutions or features you've considered.
**Additional context**
Add any other context or screenshots about the feature request here.
| closed | 2022-04-09T00:32:42Z | 2022-07-14T07:41:31Z | https://github.com/PaddlePaddle/ERNIE/issues/796 | [
"wontfix"
] | raychiu0202 | 3 |
feature-engine/feature_engine | scikit-learn | 383 | Wrong reference for SelectByShuffling | If I am right, the calculation of the "scorer" between the unshuffled and the shuffled data set is not fair.
The initial model performance is computed on the test set only:
```
# store initial model performance
self.initial_model_performance_ = model["test_score"].mean()
```
while the shuffled performance is computed on the full (shuffled) data set
```python
# determine the performance with the shuffled feature
performance = np.mean(
    [scorer(m, X_shuffled, y) for m in model["estimator"]]
)
```
This leads to negative values for the performance drift
```
performance_drift = self.initial_model_performance_ - performance
```
A mapping of CV model to the fold used for test could be a solution.
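A sketch of the fold-matched comparison (with a made-up dataset; this is not feature_engine's code): each CV estimator is scored on its own held-out fold, before and after shuffling, so both numbers come from the same data:

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import KFold

rng = np.random.default_rng(0)
X = rng.normal(size=(120, 3))
y = X[:, 0] + 0.1 * rng.normal(size=120)   # feature 0 is informative

drifts = []
for train, test in KFold(n_splits=3).split(X):
    est = LinearRegression().fit(X[train], y[train])
    baseline = est.score(X[test], y[test])   # same fold, unshuffled
    X_shuffled = X[test].copy()
    rng.shuffle(X_shuffled[:, 0])            # shuffle one feature within the fold
    drifts.append(baseline - est.score(X_shuffled, y[test]))

mean_drift = float(np.mean(drifts))
```

With a fold-matched baseline, shuffling an informative feature yields a positive drift instead of the spurious negative values described above.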
| closed | 2022-03-09T12:52:30Z | 2022-03-25T10:10:14Z | https://github.com/feature-engine/feature_engine/issues/383 | [] | gverbock | 2 |
robotframework/robotframework | automation | 4,709 | Add `__repr__()` method to NormalizedDict | With the RobotDebug REPL I am updating right now, I see the problem that NormalizedDict objects are just shown like this:
`${SUITE_METADATA} = <robot.utils.normalizing.NormalizedDict object at 0x1048063e0>`
Could we do it like this?
```python
def __repr__(self):
    return '%s(%s)' % (self.__class__.__name__, str(self))
```
or:
```python
def __repr__(self):
    return f'{self.__class__.__name__}({self.__str__()})'
```
Then it would be:
`${SUITE_METADATA} = NormalizedDict({'Hello': 'World', 'Test': '123'})`
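The resulting repr shape can be checked with a stdlib stand-in (NormalizedDict itself lives in `robot.utils`; this toy class only mirrors the proposed `__repr__`):

```python
class ToyNormalizedDict(dict):
    def __repr__(self):
        # class name wrapped around the plain dict representation
        return f'{self.__class__.__name__}({dict.__repr__(self)})'

metadata = ToyNormalizedDict({'Hello': 'World', 'Test': '123'})
```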
We could also write it like this?
`robot.utils.NormalizedDict({'Hello': 'World', 'Test': '123'})`
But I think the former version would be good enough.
What do you think?
| closed | 2023-03-30T01:13:50Z | 2023-05-04T20:18:34Z | https://github.com/robotframework/robotframework/issues/4709 | [
"enhancement",
"priority: low",
"beta 1",
"effort: small"
] | Snooz82 | 1 |
Yorko/mlcourse.ai | matplotlib | 673 | Jupyter kernel error | Hi,
When I open the jupyter notebooks for the course, I am getting the following kernel error:
"RuntimeError: Permissions assignment failed for secure file: '/notebooks/home/.local/share/jupyter/runtime/kernel-c1c99a70-5225-4507-b438-c1b9697b5473.json'.Got '33279' instead of '600'"
I tried to open the notebooks with a preinstalled Anaconda installation, and they work fine.
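Incidentally, the `33279` in the message is the file's raw `st_mode`; decoding it shows the kernel connection file ended up world-writable (`0o777`), which Jupyter's secure-file check rejects in favour of `0o600`:

```python
import stat

raw_mode = 33279                         # the value from the error message
permission_bits = stat.S_IMODE(raw_mode) # strip the file-type bits
```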
Possible reason for failure: https://discourse.jupyter.org/t/jupyter-core-4-6-2-release-with-insure-mode-option/3300 | closed | 2020-09-13T07:35:20Z | 2020-10-31T08:48:29Z | https://github.com/Yorko/mlcourse.ai/issues/673 | [] | matemik | 0 |
pykaldi/pykaldi | numpy | 137 | getting the ASpIRE chain models | How can I make offline ASpIRE chain models to work using pykaldi. When we download the model from the Kaldi repo, it says we need to run the scripts to make it as a model. When I try to make the model, it gives error command tree-info not found. In a forum, I saw it is because of compilation error with Kaldi. Do we need to install Kaldi from the source to create the model? | closed | 2019-06-19T09:13:20Z | 2019-09-06T23:13:43Z | https://github.com/pykaldi/pykaldi/issues/137 | [] | savindi-wijenayaka | 1 |
widgetti/solara | fastapi | 802 | bug documentation: In the components documentation page, breadcrumbs break | In the components documentation page, breadcrumbs take you to a non existing page.
| closed | 2024-09-27T14:21:44Z | 2025-01-29T15:18:50Z | https://github.com/widgetti/solara/issues/802 | [
"bug"
] | alonsosilvaallende | 0 |
pytest-dev/pytest-html | pytest | 547 | ERROR: usage: pytest [options] [file_or_dir] [file_or_dir] [...] | I have installed pytest.
I am in a virtual environment, set up with `pipenv sync` and entered with `pipenv run shell`.
Environment: `macOS`
pytest version: `pytest 7.1.3`
I have this error when running pytest:
```
ERROR: usage: pytest [options] [file_or_dir] [file_or_dir] [...]
pytest: error: unrecognized arguments: --cov --cov-report term-missing
inifile: /Users/mehdirhouzlane/aleph-client/setup.cfg
rootdir: /Users/mehdirhouzlane/aleph-client
``` | closed | 2022-09-02T14:40:23Z | 2022-09-04T11:44:24Z | https://github.com/pytest-dev/pytest-html/issues/547 | [] | mrhouzlane | 2 |
wkentaro/labelme | computer-vision | 853 | How to add feature to change label colors with keyboard | Hi @wkentaro, thanks for the wonderful repository! I want to add a feature that will make my life simpler - I want to be able to change the labels using keyboard shortcuts, so that when I click on a polygon and press a certain key, that polygon's label changes to another label. This will be very helpful for me personally as I'll be editing labels of hundreds of images (I'll be looking at each image and changing the labels). I do not want to edit the shape of the annotation this way, only the label.
Any pointers on how to implement something like this are welcome. If you could let me know which scripts I should look into and modify to accomplish this, it'll be really helpful. Thanks! | closed | 2021-04-05T06:04:35Z | 2022-06-25T04:46:22Z | https://github.com/wkentaro/labelme/issues/853 | [] | kgupta359 | 0 |
joke2k/django-environ | django | 91 | Inline comments do not work | If you put
```
VARIABLE=SomeValue # Comment
```
in your `.env` file, the `# Comment` part is included in the value for `VARIABLE`, so its value becomes `SomeValue # Comment`.
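For reference, the stripping behaviour being asked for can be sketched in a few lines (a hypothetical parser, not django-environ's): drop an unquoted ` #` suffix, but leave `#` alone inside quoted values:

```python
def parse_env_line(line: str):
    key, _, value = line.partition("=")
    value = value.strip()
    if value[:1] not in ("'", '"'):            # quoted values keep their '#'
        value = value.split(" #", 1)[0].rstrip()
    return key.strip(), value
```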
| closed | 2016-07-28T09:46:00Z | 2023-07-06T21:22:06Z | https://github.com/joke2k/django-environ/issues/91 | [
"enhancement"
] | beruic | 4 |
pallets/flask | flask | 4,568 | Move `url_for` to the `Flask` app object | Add a `url_for` method to the `Flask` app object. Similar to functions like `flask.json.dumps`, `flask.url_for` should look for a `current_app` and call its `url_for` method. This will allow applications to override the url building behavior. | closed | 2022-05-02T14:31:29Z | 2022-05-29T00:06:33Z | https://github.com/pallets/flask/issues/4568 | [
"save-for-sprint"
] | davidism | 1 |
healthchecks/healthchecks | django | 894 | [feature] integration groups / tagged integrations | It would be great to be able to tag integrations; then, instead of enabling integrations manually for each check, one could enable all of them by simply enabling a tag. This would also be nice to have for auto-provisioning of checks while still being able to enable a set of integrations.
What do you think? I should be able to provide the code if that is something you'd like to have. | closed | 2023-09-21T12:13:47Z | 2023-10-12T11:43:05Z | https://github.com/healthchecks/healthchecks/issues/894 | [] | apollo13 | 19 |
biolab/orange3 | pandas | 6,036 | "Louvain Clustering" progress indicator desync | The "Louvain Clustering" progress bar (in the widget window) and the progress animation (in the workflow) are not synchronized: the percentages don't increment at the same time.
| closed | 2022-06-19T19:08:11Z | 2022-06-19T19:25:42Z | https://github.com/biolab/orange3/issues/6036 | [
"bug report"
] | hydrastarmaster | 1 |
iperov/DeepFaceLab | machine-learning | 917 | Merger(SAEHD) crashes in interactive mode when switching from help screen to main screen. | ## Expected behavior
Running '_7) merge SAEHD.bat_' launches the merger.
If you select 'Yes' on the interactive merger option the merger launches with a GUI which starts on the help screen.
From the help screen you can switch to the main / preview screen by pressing tab.
## Actual behavior
When attempting to switch from the _help_ screen to the _main_ screen by pressing tab, the merger crashes with a curious error:
```
> Invalid number of channels in input image:
> 'VScn::contains(scn)'
> where
> 'scn' is 5
```
See full dump and trace below.
Only the first merged image is created in data_dst/merged.
If I run the merger as non-interactive, the merge is successful.
The error seems similar to #645, but that issue was closed without a solution.
## Steps to reproduce
Unable to determine triggers, cannot reproduce in other projects.
## Other relevant information
- **Operating system and version:** Windows 10
- **Application version:** DeepFaceLab_NVIDIA_build_08_02_2020.exe
- XSeg trained for 40K iterations
- SAEHD trained for ~70K iterations with different options along the way
## Full dump
```
Running merger.
Choose one of saved models, or enter a name to create a new model.
[r] : rename
[d] : delete
[0] : 192 liae-ud - latest
: 0
0
Loading 192 liae-ud_SAEHD model...
Choose one or several GPU idxs (separated by comma).
[CPU] : CPU
[0] : GeForce RTX 2060 SUPER
[0] Which GPU indexes to choose? : 0
0
Initializing models: 100%|###############################################################| 4/4 [00:01<00:00, 3.74it/s]
================== Model Summary ==================
== ==
== Model name: 192 liae-ud_SAEHD ==
== ==
== Current iteration: 73595 ==
== ==
==---------------- Model Options ----------------==
== ==
== resolution: 192 ==
== face_type: f ==
== models_opt_on_gpu: True ==
== archi: liae-ud ==
== ae_dims: 256 ==
== e_dims: 64 ==
== d_dims: 64 ==
== d_mask_dims: 22 ==
== masked_training: True ==
== eyes_prio: True ==
== uniform_yaw: False ==
== lr_dropout: n ==
== random_warp: False ==
== gan_power: 0.0 ==
== true_face_power: 0.0 ==
== face_style_power: 0.0 ==
== bg_style_power: 0.0 ==
== ct_mode: none ==
== clipgrad: True ==
== pretrain: False ==
== autobackup_hour: 2 ==
== write_preview_history: True ==
== target_iter: 1000000 ==
== random_flip: True ==
== batch_size: 8 ==
== ==
==----------------- Running On ------------------==
== ==
== Device index: 0 ==
== Name: GeForce RTX 2060 SUPER ==
== VRAM: 8.00GB ==
== ==
===================================================
[y] Use interactive merger? ( y/n ) : y
[12] Number of workers? ( 1-12 ?:help ) : 6
6
Collecting alignments: 100%|#########################################################| 942/942 [00:18<00:00, 52.18it/s]
Computing motion vectors: 100%|##################################################| 2031/2031 [00:00<00:00, 3304.05it/s]
Running on CPU4.
Running on CPU0.
Running on CPU1.
Running on CPU2.
Running on CPU3.
Running on CPU5.
Merging: 0%| | 0/2031 [00:00<?, ?it/s]
no faces found for dst_000_000000.png, copying without faces
MergerConfig dst_000_000000.png:
Mode: overlay
mask_mode: learned-prd*learned-dst
erode_mask_modifier: 0
blur_mask_modifier: 0
motion_blur_power: 0
output_face_scale: 0
color_transfer_mode: rct
sharpen_mode : None
blursharpen_amount : 0
super_resolution_power: 0
image_denoise_power: 0
bicubic_degrade_power: 0
color_degrade_power: 0
================
Traceback (most recent call last):
File "D:\projects\DeepFake\Iamthearm\DFL_smith_to_midget\_internal\DeepFaceLab\mainscripts\Merger.py", line 208, in main
subprocess_count = subprocess_count,
File "D:\projects\DeepFake\Iamthearm\DFL_smith_to_midget\_internal\DeepFaceLab\core\joblib\SubprocessorBase.py", line 268, in run
if self.on_tick() and all ([cli.state == 0 for cli in self.clis]):
File "D:\projects\DeepFake\Iamthearm\DFL_smith_to_midget\_internal\DeepFaceLab\merger\InteractiveMergerSubprocessor.py", line 414, in on_tick
self.screen_manager.switch_screens()
File "D:\projects\DeepFake\Iamthearm\DFL_smith_to_midget\_internal\DeepFaceLab\merger\MergerScreen\MergerScreen.py", line 140, in switch_screens
self.screens[self.current_screen_id].show(force=True)
File "D:\projects\DeepFake\Iamthearm\DFL_smith_to_midget\_internal\DeepFaceLab\merger\MergerScreen\MergerScreen.py", line 105, in show
io.show_image(self.scrn_manager.wnd_name, screen)
File "D:\projects\DeepFake\Iamthearm\DFL_smith_to_midget\_internal\DeepFaceLab\core\interact\interact.py", line 131, in show_image
self.on_show_image(wnd_name,img)
File "D:\projects\DeepFake\Iamthearm\DFL_smith_to_midget\_internal\DeepFaceLab\core\interact\interact.py", line 466, in on_show_image
cv2.imshow (wnd_name, img)
cv2.error: OpenCV(4.1.0) c:\projects\opencv-python\opencv\modules\imgproc\src\color.simd_helpers.hpp:92: error: (-2:Unspecified error) in function '__cdecl cv::impl::`anonymous-namespace'::CvtHelper<struct cv::impl::`anonymous namespace'::Set<3,4,-1>,struct cv::impl::A0xe227985e::Set<3,4,-1>,struct cv::impl::A0xe227985e::Set<0,2,5>,2>::CvtHelper(const class cv::_InputArray &,const class cv::_OutputArray &,int)'
> Invalid number of channels in input image:
> 'VScn::contains(scn)'
> where
> 'scn' is 5
Done.
Press any key to continue . . .
``` | open | 2020-10-04T18:32:08Z | 2023-06-08T21:27:07Z | https://github.com/iperov/DeepFaceLab/issues/917 | [] | sandstrand | 7 |
prkumar/uplink | rest-api | 66 | Add documentation for v0.4 features | Here are a some v0.4 features that may be missing documentation:
- [ ] Basic Auth support (#58)
- [ ] Registering response handlers (#62)
- [ ] Registering error handlers (#63)
- [ ] Set request properties from constructor arguments (#65)
- [ ] Added `Consumer._inject` method and `inject` decorator (#67) | closed | 2018-02-05T18:40:02Z | 2018-02-10T07:15:49Z | https://github.com/prkumar/uplink/issues/66 | [
"Documentation"
] | prkumar | 0 |
onnx/onnx | tensorflow | 5,895 | QLinearAdd Op Request | # QLinearAdd Operator Request
### Describe the operator
An Add operator for quantized data. It supports zero_point and scale input tensors for the Add inputs (A and B) and the output (C) tensor:
This op exists in ONNX Runtime but not in the ONNX standard operators:
https://github.com/microsoft/onnxruntime/blob/main/docs/ContribOperators.md#com.microsoft.QLinearAdd
<!-- Why is this operator necessary? What does it accomplish? -->
Several int8 .onnx models in the onnx model zoo's validated directory use this op. Pretty much all of the models that have QLinearMatMul also have QLinearAdd, except one.
Onnx-mlir would like to support these models but we only support official ONNX Operators.
### Can this operator be constructed using existing onnx operators?
Unsure.
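On whether it can be constructed from existing operators: a hedged sketch is that QLinearAdd is commonly decomposed as DequantizeLinear -> Add -> QuantizeLinear. The plain-Python version below illustrates the arithmetic (per-tensor scalar scales and int8 saturation assumed; it is not a claim about how ONNX Runtime implements the fused op):

```python
def qlinear_add(a, a_scale, a_zp, b, b_scale, b_zp, c_scale, c_zp, lo=-128, hi=127):
    # QLinearAdd sketch: dequantize both inputs, add in float,
    # then requantize with the output scale/zero_point, saturating to int8.
    def deq(q, s, z):
        return (q - z) * s

    def quant(x, s, z):
        return max(lo, min(hi, round(x / s + z)))

    return [quant(deq(x, a_scale, a_zp) + deq(y, b_scale, b_zp), c_scale, c_zp)
            for x, y in zip(a, b)]


print(qlinear_add([10, 20], 0.1, 0, [5, 5], 0.2, 0, 0.1, 0))  # [20, 30]
```

If this decomposition is acceptable, a converter could lower QLinearAdd to those three standard ops rather than requiring a new operator.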
### Is this operator used by any model currently? Which one?
Several offhand I found in the onnx model zoo:
[bvlcalexnet-12-int8](https://github.com/onnx/models/blob/main/validated/vision/classification/alexnet/model/bvlcalexnet-12-int8.onnx)
[mnist-12-int8](https://github.com/onnx/models/blob/main/validated/vision/classification/mnist/model/mnist-12-int8.onnx)
[vgg16-12-int8](https://github.com/onnx/models/blob/main/validated/vision/classification/vgg/model/vgg16-12-int8.onnx)
### Are you willing to contribute it? (Y/N)
N
### Notes
ONNX already has QLinearMatMul and QLinearConv from these models but appears to be missing QLinearAdd.
| open | 2024-02-01T17:59:01Z | 2025-02-05T06:44:01Z | https://github.com/onnx/onnx/issues/5895 | [
"topic: operator",
"stale"
] | cjvolzka | 3 |
apify/crawlee-python | automation | 689 | How to scrape TEMU | I'm trying to scrape the information from TEMU https://www.temu.com/.
Processing https://www.temu.com/vn-en/2--car--------universal--sun----pvc---accessories-----g-601099650626830.html ...
Extracted Data:
Title: No title found.

```python
import sys
import asyncio
import time
from PyQt6.QtWidgets import QApplication, QMainWindow, QVBoxLayout, QWidget, QPushButton, QTextEdit, QLineEdit, QLabel
from PyQt6.QtCore import QThread, pyqtSignal
from crawlee.playwright_crawler import PlaywrightCrawler, PlaywrightCrawlingContext


class CrawlerThread(QThread):
    log_signal = pyqtSignal(str)
    data_signal = pyqtSignal(str)
    runtime_signal = pyqtSignal(str)

    def __init__(self, url):
        super().__init__()
        self.url = url
        self.request_count = 0
        self.failed_requests = 0
        self.total_duration = 0

    async def run_crawler(self):
        crawler = PlaywrightCrawler(max_requests_per_crawl=1)

        @crawler.router.default_handler
        async def request_handler(context: PlaywrightCrawlingContext):
            self.log_signal.emit(f"Processing {context.request.url} ...")
            start_time = time.time()
            try:
                # Extract title from the specified div with class '_2rn4tqXP'
                title_element = context.page.locator('._2rn4tqXP')
                title = await title_element.inner_text() if await title_element.count() > 0 else "No title found."
                # Calculate request duration
                request_duration = time.time() - start_time
                self.request_count += 1
                self.total_duration += request_duration
                # Emit data signal
                self.data_signal.emit(f"Title: {title}\n")
            except Exception as e:
                self.failed_requests += 1
                self.log_signal.emit(f"Error: {e}")

        await crawler.run([self.url])

        # Calculate and emit runtime statistics
        average_duration = self.total_duration / self.request_count if self.request_count > 0 else 0
        runtime_stats = (
            f"Requests Finished: {self.request_count}\n"
            f"Requests Failed: {self.failed_requests}\n"
            f"Average Request Duration: {average_duration:.2f} seconds\n"
            f"Total Runtime: {self.total_duration:.2f} seconds"
        )
        self.runtime_signal.emit(runtime_stats)

    def run(self):
        asyncio.run(self.run_crawler())


class MainWindow(QMainWindow):
    def __init__(self):
        super().__init__()
        self.setWindowTitle("Web Data Crawler")
        self.setGeometry(100, 100, 800, 600)

        # Widgets
        self.url_input = QLineEdit()
        self.url_input.setPlaceholderText("Enter URL here")
        self.start_button = QPushButton("Start Crawling")
        self.output_area = QTextEdit()
        self.output_area.setReadOnly(True)
        self.runtime_label = QLabel("Runtime Statistics:")

        # Layout
        layout = QVBoxLayout()
        layout.addWidget(self.url_input)
        layout.addWidget(self.start_button)
        layout.addWidget(self.output_area)
        layout.addWidget(self.runtime_label)
        container = QWidget()
        container.setLayout(layout)
        self.setCentralWidget(container)

        # Connections
        self.start_button.clicked.connect(self.start_crawling)

    def start_crawling(self):
        url = self.url_input.text().strip()
        if not url:
            self.output_area.setText("Please enter a valid URL.")
            return
        self.output_area.clear()
        # Run the crawler in a separate thread
        self.crawler_thread = CrawlerThread(url)
        self.crawler_thread.log_signal.connect(self.update_output)
        self.crawler_thread.data_signal.connect(self.display_data)
        self.crawler_thread.runtime_signal.connect(self.display_runtime)
        self.crawler_thread.start()

    def update_output(self, text):
        self.output_area.append(text)

    def display_data(self, data):
        self.output_area.append("Extracted Data:\n" + data)

    def display_runtime(self, runtime):
        self.runtime_label.setText("Runtime Statistics:\n" + runtime)


app = QApplication(sys.argv)
window = MainWindow()
window.show()
sys.exit(app.exec())
```
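For what it's worth, one hedged adjustment to the extraction step: TEMU renders content dynamically, so the locator can report zero elements before the page finishes rendering (and the `_2rn4tqXP` class name is obfuscated and likely to change between deployments). A sketch that waits for the selector before reading it; the `FakePage` stand-in below exists only so the snippet runs without a browser:

```python
import asyncio


async def extract_title(page, selector: str = "._2rn4tqXP", timeout_ms: int = 10_000):
    # Wait for the element to appear before reading it; on dynamic sites
    # locator.count() is often 0 right after navigation.
    try:
        await page.wait_for_selector(selector, timeout=timeout_ms)
    except Exception:
        return "No title found."
    return await page.locator(selector).inner_text()


# demo with a stand-in page object (a real Playwright Page has these methods)
class FakePage:
    async def wait_for_selector(self, sel, timeout):
        return None

    def locator(self, sel):
        return self

    async def inner_text(self):
        return "Demo title"


print(asyncio.run(extract_title(FakePage())))  # Demo title
```

Even with the wait in place, an anti-bot wall on TEMU may still block headless browsers, so treat this as one thing to try rather than a guaranteed fix.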
I've tried, but they all return nothing found. Can someone help me? | closed | 2024-11-13T01:29:07Z | 2024-11-13T10:01:54Z | https://github.com/apify/crawlee-python/issues/689 | [
"t-tooling"
] | KalvinThien | 0 |
automl/auto-sklearn | scikit-learn | 770 | sample weight vs class weight in balancing | In the implementation of balancing, whether to use class weight in the init param or sample weight in the fit param depends on hard-coded lists of classifiers. Currently only decision tree and SVC use class weight; however, sklearn already supports class weight for extra trees and random forests. I think it would be good to use class weight in the init param as much as possible, because you will lose the sample weight in the fit param after fitting, while class weight stays with the clf object. | open | 2020-01-30T02:33:27Z | 2022-06-10T14:01:54Z | https://github.com/automl/auto-sklearn/issues/770 | [
"question"
] | tsinggggg | 1 |
bmoscon/cryptofeed | asyncio | 333 | Coinbase Level3 orderbook callback broken? | I use `example/demo_config.py' which registers a callback fro L3 orderbook data on Coinbase
```
from cryptofeed import FeedHandler
from cryptofeed.callback import BookCallback, TradeCallback
from cryptofeed.defines import BID, ASK, L3_BOOK, TRADES
from cryptofeed.exchanges import Coinbase
async def trade(feed, pair, order_id, timestamp, side, amount, price):
print("Timestamp: {} Feed: {} Pair: {} ID: {} Side: {} Amount: {} Price: {}".format(timestamp, feed, pair, order_id, side, amount, price))
async def book(feed, pair, book, timestamp):
print('Timestamp: {} Feed: {} Pair: {} Book Bid Size is {} Ask Size is {}'.format(timestamp, feed, pair, len(book[BID]), len(book[ASK])))
def main():
f = FeedHandler()
f.add_feed(Coinbase(config={TRADES: ['BTC-USD'], L3_BOOK: ['ETH-USD']}, callbacks={TRADES: TradeCallback(trade), L3_BOOK: BookCallback(book)}))
f.run()
if __name__ == '__main__':
main()
```
I use cryptofeed 1.6.2 and get the following error:
```
TypeError: book() takes 4 positional arguments but 5 were given
```
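The traceback says `book()` takes 4 positional arguments but 5 were given, which suggests the library now passes one extra argument to book callbacks; in recent cryptofeed releases that extra argument is a receipt timestamp (an assumption worth verifying against the 1.6.2 changelog). A hedged fix is simply to widen the signature; the dummy dict below stands in for a real order book:

```python
import asyncio


# Assumed signature for newer cryptofeed: a trailing receipt_timestamp argument.
# The trade callback would need the same extra trailing parameter.
async def book(feed, pair, book, timestamp, receipt_timestamp):
    print('Timestamp: {} Feed: {} Pair: {} Book Bid Size is {} Ask Size is {}'.format(
        timestamp, feed, pair, len(book['bid']), len(book['ask'])))


# quick smoke test with dummy data
asyncio.run(book('COINBASE', 'ETH-USD', {'bid': {1: 1}, 'ask': {}}, 1.0, 1.1))
```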
Is this callback broken or just the example? | closed | 2020-11-20T18:47:08Z | 2020-11-21T19:30:41Z | https://github.com/bmoscon/cryptofeed/issues/333 | [
"bug"
] | degloff | 3 |
dnouri/nolearn | scikit-learn | 34 | Train on full dataset | I would like to train on a full dataset and was wondering if there is anything I need to do besides
eval_size=0.0?
Exactly when are the final weights saved? Is it for the best validation score, best training score, last epoch, etc.?
Thank you for the help and great code!
| closed | 2015-01-31T18:23:52Z | 2015-02-10T14:30:54Z | https://github.com/dnouri/nolearn/issues/34 | [] | msegala | 5 |
pallets/flask | flask | 4,753 | TypeError: send_file() got an unexpected keyword argument 'attachment_filename' | Old code started throwing this exception:
`TypeError: send_file() got an unexpected keyword argument 'attachment_filename'` | closed | 2022-08-08T17:25:10Z | 2022-08-23T00:06:52Z | https://github.com/pallets/flask/issues/4753 | [] | igushev-truesource | 1 |
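For context: Flask 2.0 deprecated `attachment_filename` in favor of `download_name`, and Flask 2.2 removed it, which produces exactly this `TypeError`. Code that must run under both old and new Flask can pick the keyword at runtime; a sketch (the `fake_send_file` below only stands in for Flask's real function so the snippet is self-contained):

```python
import inspect


def send_file_compat(send_file, path, filename, **kwargs):
    # Pick the right keyword for the installed Flask version:
    # Flask >= 2.0 uses download_name; older versions use attachment_filename.
    params = inspect.signature(send_file).parameters
    key = "download_name" if "download_name" in params else "attachment_filename"
    kwargs[key] = filename
    return send_file(path, as_attachment=True, **kwargs)


# demo with a stand-in for Flask's send_file (new-style signature)
def fake_send_file(path, as_attachment=False, download_name=None):
    return (path, as_attachment, download_name)


print(send_file_compat(fake_send_file, "report.pdf", "report.pdf"))
# -> ('report.pdf', True, 'report.pdf')
```

If only one Flask version needs supporting, renaming the keyword to `download_name` at the call site is the simpler fix.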
WZMIAOMIAO/deep-learning-for-image-processing | deep-learning | 240 | CUDA version | **System information**
* Have I written custom code: No
* OS Platform(e.g., window10 or Linux Ubuntu 16.04): Ubuntu 16.04.6 LTS
* Python version: 3.7.10
* Deep learning framework and version(e.g., Tensorflow2.1 or Pytorch1.3): Pytorch 1.6.0
* Use GPU or not: No
* CUDA/cuDNN version(if you use GPU): CUDA Version 10.1.243
* The network you trained(e.g., Resnet34 network): pytorch_object_detection/faster_rcnn/train_res50_fpn.py
**Describe the current behavior**
May I ask what version of CUDA is needed for this project?
Will CUDA 10.1 not work?
**Error info / logs**
AssertionError:
The NVIDIA driver on your system is too old (found version 10010).
Please update your GPU driver by downloading and installing a new
version from the URL: http://www.nvidia.com/Download/index.aspx
Alternatively, go to: https://pytorch.org to install
a PyTorch version that has been compiled with your version
of the CUDA driver. | closed | 2021-04-27T08:05:21Z | 2021-05-07T00:46:24Z | https://github.com/WZMIAOMIAO/deep-learning-for-image-processing/issues/240 | [] | zmz125 | 9 |
Farama-Foundation/Gymnasium | api | 779 | [Bug Report] Inconsistent deepcopy on mujoco environment | ### Describe the bug
Thank you for maintaining this library; those updates make `gymnasium` better and better.
I've encountered an inconsistency in the behavior of Mujoco environments. Specifically, when I perform a `deepcopy` of an environment and then apply identical actions to both the original and the copied environment, the resulting states differ significantly. This is unexpected, given Mujoco's deterministic nature.
What do you think? Have I missed something?
### Code example
```python
from copy import deepcopy

import numpy as np

from gymnasium.envs.mujoco.ant_v4 import AntEnv
from gymnasium.envs.mujoco.half_cheetah_v4 import HalfCheetahEnv
from gymnasium.envs.mujoco.hopper_v4 import HopperEnv
from gymnasium.envs.mujoco.humanoidstandup_v4 import HumanoidStandupEnv
from gymnasium.envs.mujoco.inverted_pendulum_v4 import InvertedPendulumEnv

env = AntEnv()
done = False
truncated = False
state_original, _ = env.reset(seed=0)
copied_env = deepcopy(env)
step_number = 0
while not done and not truncated:
    action = env.action_space.sample()
    copied_state, _, done_copied, truncated_copied, _ = copied_env.step(action)
    state_original, _, done, truncated, _ = env.step(action)
    assert np.array_equal(
        copied_state, state_original
    ), f" states are different at step_number: {step_number}"
    assert done_copied == done
    assert truncated_copied == truncated
    step_number += 1
```
### System info
- Describe how Gymnasium was installed: `pip`
- Version of gymnasium: `'0.29.1'`
- What OS/version of Linux you're using:
```
cat /etc/*-release
DISTRIB_ID=Ubuntu
DISTRIB_RELEASE=20.04
DISTRIB_CODENAME=focal
DISTRIB_DESCRIPTION="Ubuntu 20.04.6 LTS"
NAME="Ubuntu"
VERSION="20.04.6 LTS (Focal Fossa)"
ID=ubuntu
ID_LIKE=debian
PRETTY_NAME="Ubuntu 20.04.6 LTS"
VERSION_ID="20.04"
HOME_URL="https://www.ubuntu.com/"
SUPPORT_URL="https://help.ubuntu.com/"
BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
VERSION_CODENAME=focal
UBUNTU_CODENAME=focal
```
- Python version: `3.11`
### Additional context
_No response_
### Checklist
- [X] I have checked that there is no similar [issue](https://github.com/Farama-Foundation/Gymnasium/issues) in the repo
| closed | 2023-11-16T13:00:56Z | 2023-11-17T14:41:07Z | https://github.com/Farama-Foundation/Gymnasium/issues/779 | [
"bug"
] | AdilZouitine | 6 |
BeanieODM/beanie | asyncio | 653 | merge_models does not merge list even if it does not include Link | ### Discussed in https://github.com/roman-right/beanie/discussions/652
<div type='discussions-op-text'>
<sup>Originally posted by **tomohirohiratsuka** August 10, 2023</sup>
Hi, I'm wondering whether this is expected behavior or not.
When I update a document with the `update` method, it returns the updated document only when the updated property is not a list.
I found that inside the `update` method it merges via `parsing.merge_models`.
`merge_models` looks like it cares about whether the list includes a `Link` or not, but it does not merge even when no `Link`s are found in the list.
Is this expected behavior?
I'd appreciate it if anyone could answer.
Thank you.
```python:documents.py
@wrap_with_actions(EventTypes.UPDATE)
@save_state_after
async def update(
    self,
    *args,
    ignore_revision: bool = False,
    session: Optional[ClientSession] = None,
    bulk_writer: Optional[BulkWriter] = None,
    skip_actions: Optional[List[Union[ActionDirections, str]]] = None,
    skip_sync: Optional[bool] = None,
    **pymongo_kwargs,
) -> DocType:
    """
    Partially update the document in the database

    :param args: *Union[dict, Mapping] - the modifications to apply.
    :param session: ClientSession - pymongo session.
    :param ignore_revision: bool - force update. Will update even if revision id is not the same, as stored
    :param bulk_writer: "BulkWriter" - Beanie bulk writer
    :param pymongo_kwargs: pymongo native parameters for update operation
    :return: None
    """
    arguments = list(args)
    if skip_sync is not None:
        raise DeprecationWarning(
            "skip_sync parameter is not supported. The document get synced always using atomic operation."
        )
    use_revision_id = self.get_settings().use_revision
    if self.id is not None:
        find_query: Dict[str, Any] = {"_id": self.id}
    else:
        find_query = {"_id": PydanticObjectId()}
    if use_revision_id and not ignore_revision:
        find_query["revision_id"] = self._previous_revision_id
    if use_revision_id:
        arguments.append(SetRevisionId(self.revision_id))
    try:
        result = await self.find_one(find_query).update(
            *arguments,
            session=session,
            response_type=UpdateResponse.NEW_DOCUMENT,
            bulk_writer=bulk_writer,
            **pymongo_kwargs,
        )
    except DuplicateKeyError:
        raise RevisionIdWasChanged
    if bulk_writer is None:
        if use_revision_id and not ignore_revision and result is None:
            raise RevisionIdWasChanged
        merge_models(self, result)  # this updates self
    return self
```
```python:parsing.py
def merge_models(left: BaseModel, right: BaseModel) -> None:
    from beanie.odm.fields import Link

    if hasattr(left, "_previous_revision_id") and hasattr(
        right, "_previous_revision_id"
    ):
        left._previous_revision_id = right._previous_revision_id  # type: ignore
    for k, right_value in right.__iter__():
        left_value = left.__getattribute__(k)
        if isinstance(right_value, BaseModel) and isinstance(
            left_value, BaseModel
        ):
            merge_models(left_value, right_value)
            continue
        if isinstance(right_value, list):
            links_found = False
            for i in right_value:
                if isinstance(i, Link):
                    links_found = True
                    break
            if links_found:  # why is the no-links case not handled?
                continue
        elif not isinstance(right_value, Link):
            left.__setattr__(k, right_value)
```
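The behavior being questioned can be reduced to a toy sketch: in the current logic, a list containing a `Link` is skipped, but a list with no `Link`s also never reaches the `left.__setattr__` branch, because only non-list values fall through to the `elif`. A plain-dict illustration of the merge the report seems to expect (not Beanie's actual implementation):

```python
def merge_lists_sketch(left: dict, right: dict, is_link=lambda v: False):
    # Expected-behavior sketch: copy right's list onto left
    # unless the list holds Link objects.
    for k, rv in right.items():
        if isinstance(rv, list) and not any(is_link(i) for i in rv):
            left[k] = rv
    return left


print(merge_lists_sketch({"tags": [1]}, {"tags": [1, 2, 3]}))  # {'tags': [1, 2, 3]}
```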
</div> | open | 2023-08-10T16:55:43Z | 2024-09-16T14:48:49Z | https://github.com/BeanieODM/beanie/issues/653 | [
"bug",
"not reproduced"
] | roman-right | 7 |
ultralytics/ultralytics | pytorch | 19,499 | Why is the Inference Speed of YOLOv8 ONNX Much Slower Compared to PyTorch (.pt) Model | ### Search before asking
- [x] I have searched the Ultralytics YOLO [issues](https://github.com/ultralytics/ultralytics/issues) and [discussions](https://github.com/orgs/ultralytics/discussions) and found no similar questions.
### Question
When using YOLOv8 for object detection, I noticed that after converting the PyTorch model (.pt) to ONNX format, the inference speed significantly decreased. Specifically:
ONNX:
Speed: 122.5ms preprocess, 27.4ms inference, 15.2ms postprocess per image at shape (1, 3, 640, 640)
PT:
Speed: 2.8ms preprocess, 23.5ms inference, 14.2ms postprocess per image at shape (1, 3, 384, 640)
I use:

```python
from ultralytics import YOLO
onnx_model = YOLO("D:\pythonProject1\my_yolov8\yolov8n.pt")
# Run inference
results = onnx_model(r"D:\useing\Final_Test")
```
On different hardware (e.g., CPU, GPU), the performance of ONNX was consistently worse than the PyTorch model.
I have tried the following optimization measures:
Used ONNX Runtime for inference with CUDA acceleration enabled.
Ensured the ONNX model was exported with optimal settings (e.g., opset version, dynamic axes).
Despite these efforts, the ONNX model still performs slower. Could this be due to differences in model optimization, runtime overhead, or specific limitations of ONNX for YOLOv8? Are there additional steps or configurations I can apply to improve ONNX inference speed?
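One detail visible in the timings above: the PyTorch run used input shape (1, 3, 384, 640) while the ONNX run used (1, 3, 640, 640), roughly 1.7x the pixels, so the two inference numbers are not directly comparable; exporting with a matching `imgsz` (or dynamic input shapes, if your export settings support them) would make the comparison fairer. Also, the first ONNX Runtime call pays session setup and graph-optimization overhead, so benchmarks should warm up before measuring. A small stdlib timing helper for that; the model and image names in the usage comment are placeholders:

```python
import time


def timed(fn, repeat=20):
    # Wall-clock average in ms; warm up first, because the first call to an
    # ONNX Runtime session includes setup cost and skews the comparison.
    fn()
    start = time.perf_counter()
    for _ in range(repeat):
        fn()
    return (time.perf_counter() - start) / repeat * 1000.0


# usage sketch (names are placeholders):
#   timed(lambda: pt_model(img)); timed(lambda: onnx_model(img))
print(timed(lambda: sum(range(10_000))) > 0)  # True
```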
### Additional
_No response_ | open | 2025-03-03T10:01:04Z | 2025-03-03T10:54:27Z | https://github.com/ultralytics/ultralytics/issues/19499 | [
"question",
"detect",
"exports"
] | WindFreeFoliage | 3 |
xinntao/Real-ESRGAN | pytorch | 699 | Real-ESRGAN fails to provide a better result compared to waifu2x | I've tried every model that is included in Colab and tried to upscale normal anime scenes, but it did not fix the blur. And when I compared it with waifu2x on the same clips, waifu2x provided a better result. People are blaming Real-ESRGAN because it mostly increases sharpness and fails to reduce noise; also, the output doesn't look like the original clips. It actually disappointed me. Some people have also stopped using these models. Sometimes the output result is so bad that you won't want to upscale anymore | open | 2023-10-08T20:32:00Z | 2024-04-24T05:22:45Z | https://github.com/xinntao/Real-ESRGAN/issues/699 | [] | Joysikder | 1 |
babysor/MockingBird | deep-learning | 262 | How can I train on a specific person's voice? | I am currently using the pretrained synthesizer model to synthesize speech, but the results are not good
(I chose the third pretrained model - @FawenYo, because I could not download from Baidu Netdisk and have not tried the author's model)
Is it possible to train on just one specific person's voice? Do I need to train the synthesizer or the vocoder?
I tried retraining the synthesizer on the aidatatang_200zh dataset, but the results were not good
Is there any chance of doing transfer learning with pretrained.pt plus my own audio files?
Thank you | open | 2021-12-11T16:20:22Z | 2022-02-09T06:55:46Z | https://github.com/babysor/MockingBird/issues/262 | [] | shihyulee | 6 |
miguelgrinberg/microblog | flask | 31 | bootstrap base.html does not align | In base.html,
`<a class="brand">`
In Safari and Firefox, this makes the Microblog index and other nav links misaligned horizontally.
Changing it to
`<a class="navbar-brand">`
solves it | closed | 2017-01-25T19:30:35Z | 2017-12-10T20:20:23Z | https://github.com/miguelgrinberg/microblog/issues/31 | [
"question"
] | zhangruiskyline | 5 |
apache/airflow | automation | 47,846 | Function decorated by 'asset.multi' is not asset alias | ### Apache Airflow version
3.0.0
### If "Other Airflow 2 version" selected, which one?
_No response_
### What happened?
A function decorated with `asset.multi` is not registered as an asset alias.
The function should be present in the DB table `asset_alias`.
<img width="720" alt="Image" src="https://github.com/user-attachments/assets/51f4d1c5-34f6-4ee0-ace2-0c02f093eabb" />
### What you think should happen instead?
_No response_
### How to reproduce
Deploy this DAG and check the DB table `asset_alias`; `asset_multi` should be present there.
```python
@asset(schedule=None)
def dag1_asset():
    pass


@asset(schedule=None)
def dag2_asset():
    pass


@asset.multi(outlets=[dag1_asset, dag2_asset], schedule=None)
def asset_multi():
    pass
```
### Operating System
Linux
### Versions of Apache Airflow Providers
_No response_
### Deployment
Other
### Deployment details
_No response_
### Anything else?
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [x] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| open | 2025-03-17T07:53:19Z | 2025-03-20T09:08:25Z | https://github.com/apache/airflow/issues/47846 | [
"kind:bug",
"priority:low",
"area:core",
"area:datasets",
"affected_version:3.0.0beta"
] | atul-astronomer | 3 |
andfanilo/streamlit-echarts | streamlit | 42 | GitHub Discussions feature request | Could @andfanilo you please turn on the GH Discussions where questions and other non-issue stuff could be discussed? It may act as a community booster as well :)
Thanks | open | 2022-08-02T18:32:56Z | 2022-08-02T18:32:56Z | https://github.com/andfanilo/streamlit-echarts/issues/42 | [] | AdamJel | 0 |
biolab/orange3 | scikit-learn | 6,719 | ROC Curve widget sets a wrong prior probability | With apologies to self for writing such a bad bug report: I have no time to properly explore it now, but I have to write this down lest I forget.
I encountered a situation (on Pima diabetes data) in which the ROC widget's target was set to 1, but the prior probability was that for class 0. I changed the target to 0 and back to 1, and the prior probability was reset properly. My hunch is that if the widget was loaded in the workflow, the target is retrieved from settings, but the prior probability is set beforehand, disregarding the target. Changing the target back and forth calls the necessary callbacks and updates the prior probability. This is just a hypothesis; I don't have time to actually reproduce the bug and check the code.
"bug",
"snack"
] | janezd | 1 |
xorbitsai/xorbits | numpy | 295 | ENH: construct dataframe from series with different lengths | ### Is your feature request related to a problem? Please describe
Allow constructing a dataframe from series with different lengths:
```
s1 = pd.Series([1, 2, 3])
s2 = pd.Series([1, 2, 3, 4])
pd.DataFrame([s1, s2])
```
Currently in Xorbits, you get:
```
ValueError: all input tensors must have the same shape
```
But in pandas, you get:
```
     0    1    2    3
0  1.0  2.0  3.0  NaN
1  1.0  2.0  3.0  4.0
```
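For reference, the pandas semantics being requested amount to aligning rows and padding missing positions with NaN; a minimal pure-Python sketch of that padding (positional indices assumed, index-label alignment omitted):

```python
import math


def pad_rows(rows):
    # pandas-style alignment sketch: shorter rows are padded with NaN
    # so every row reaches the width of the longest one.
    width = max(len(r) for r in rows)
    return [list(r) + [math.nan] * (width - len(r)) for r in rows]


print(pad_rows([[1, 2, 3], [1, 2, 3, 4]]))  # [[1, 2, 3, nan], [1, 2, 3, 4]]
```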
| open | 2023-03-22T07:48:22Z | 2024-12-16T01:52:29Z | https://github.com/xorbitsai/xorbits/issues/295 | [
"enhancement"
] | UranusSeven | 1 |
QingdaoU/OnlineJudge | django | 387 | Question about deleting submissions & contests from the database | Does the current database maintenance provide any feature to delete submissions & contests periodically or in batches?
Thank you very much for your reply | open | 2021-10-11T14:34:25Z | 2021-12-25T17:07:56Z | https://github.com/QingdaoU/OnlineJudge/issues/387 | [] | u8621011 | 3 |
falconry/falcon | api | 2,049 | Drop 3.5 & 3.6 support | We are going to drop Python < 3.7 support in 4.0.
A non-exhaustive checklist:
- [x] Update Trove classifiers and `python_requires`
- [x] Move/adapt the Python 3.5 exception from `falcon.asgi` to the root module
- [x] Remove obsolete CI gates
- [x] Remove obsolete `tox` environments
- [x] Remove 3.5 and 3.6 from wheel preparation scripts/CI
- [x] Remove any usage of `CoroWrapper`, it was there only to service 3.6, and it was slated for removal in 3.10, and it is finally gone in 3.11. | closed | 2022-03-30T08:23:35Z | 2022-08-25T18:11:13Z | https://github.com/falconry/falcon/issues/2049 | [
"maintenance"
] | vytas7 | 6 |
QingdaoU/OnlineJudge | django | 234 | How to change the Compile Option | Hello,
I want to use this OSS project as an online judge for friends only.
However, I want to limit the compilation options.
We only use C/C++.
For example, changing from `-O2` to `-O0`.
It seems effective to change `judge/languages.py`.
Is there anything else to do?
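For illustration, changing the flags likely means editing the compile command string in the language config; the exact structure and field names below are assumptions, not the project's verified schema:

```python
# Hypothetical sketch of a C language entry after swapping -O2 for -O0.
# Field names (compile / compile_command) and the flag set are assumptions;
# check the real judge/languages.py for the actual structure.
c_lang_config = {
    "compile": {
        "compile_command": (
            "/usr/bin/gcc -DONLINE_JUDGE -O0 -w -fmax-errors=3 "
            "-std=c11 {src_path} -lm -o {exe_path}"
        ),
    }
}

print("-O0" in c_lang_config["compile"]["compile_command"])  # True
```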
| closed | 2019-03-27T14:18:43Z | 2019-03-30T13:55:01Z | https://github.com/QingdaoU/OnlineJudge/issues/234 | [] | MaineK00n | 3 |
microsoft/nni | data-science | 5,146 | {"error":"File not found: C:\\Users\\SiWuXie\\Desktop\\Dll\\experiment\\9yg5pvfz\\trials\\a8qqO\\trial.log"} | **Describe the issue**:
[2022-09-28 10:44:47] INFO (main) Start NNI manager
[2022-09-28 10:44:48] INFO (NNIDataStore) Datastore initialization done
[2022-09-28 10:44:48] INFO (RestServer) Starting REST server at port 8080, URL prefix: "/"
[2022-09-28 10:44:48] INFO (RestServer) REST server started.
[2022-09-28 10:44:48] INFO (NNIManager) Starting experiment: 9yg5pvfz
[2022-09-28 10:44:48] INFO (NNIManager) Setup training service...
[2022-09-28 10:44:48] INFO (LocalTrainingService) Construct local machine training service.
[2022-09-28 10:44:48] INFO (NNIManager) Setup tuner...
[2022-09-28 10:44:48] INFO (NNIManager) Change NNIManager status from: INITIALIZED to: RUNNING
[2022-09-28 10:44:49] INFO (NNIManager) Add event listeners
[2022-09-28 10:44:49] INFO (LocalTrainingService) Run local machine training service.
[2022-09-28 10:44:49] INFO (NNIManager) NNIManager received command from dispatcher: ID,
[2022-09-28 10:44:49] INFO (NNIManager) NNIManager received command from dispatcher: TR, {"parameter_id": 0, "parameter_source": "algorithm", "parameters": {"batch_size": 16, "hidden_size0": 64, "hidden_size1": 16, "hidden_size2": 128, "hidden_size3": 16, "lr": 0.1}, "parameter_index": 0}
[2022-09-28 10:44:49] INFO (NNIManager) NNIManager received command from dispatcher: TR, {"parameter_id": 1, "parameter_source": "algorithm", "parameters": {"batch_size": 32, "hidden_size0": 16, "hidden_size1": 64, "hidden_size2": 128, "hidden_size3": 32, "lr": 0.1}, "parameter_index": 0}
[2022-09-28 10:44:54] INFO (NNIManager) submitTrialJob: form: {
sequenceId: 0,
hyperParameters: {
value: '{"parameter_id": 0, "parameter_source": "algorithm", "parameters": {"batch_size": 16, "hidden_size0": 64, "hidden_size1": 16, "hidden_size2": 128, "hidden_size3": 16, "lr": 0.1}, "parameter_index": 0}',
index: 0
},
placementConstraint: { type: 'None', gpus: [] }
}
[2022-09-28 10:44:54] INFO (NNIManager) submitTrialJob: form: {
sequenceId: 1,
hyperParameters: {
value: '{"parameter_id": 1, "parameter_source": "algorithm", "parameters": {"batch_size": 32, "hidden_size0": 16, "hidden_size1": 64, "hidden_size2": 128, "hidden_size3": 32, "lr": 0.1}, "parameter_index": 0}',
index: 0
},
placementConstraint: { type: 'None', gpus: [] }
}
[2022-09-28 10:45:05] INFO (NNIManager) Trial job a8qqO status changed from WAITING to SUCCEEDED
[2022-09-28 10:45:05] INFO (NNIManager) NNIManager received command from dispatcher: TR, {"parameter_id": 2, "parameter_source": "algorithm", "parameters": {"batch_size": 16, "hidden_size0": 16, "hidden_size1": 128, "hidden_size2": 128, "hidden_size3": 64, "lr": 0.1}, "parameter_index": 0}
[2022-09-28 10:45:06] ERROR (NNIRestHandler) Error: File not found: C:\Users\SiWuXie\Desktop\Dll\experiment\9yg5pvfz\trials\a8qqO\trial.log
at LocalTrainingService.getTrialFile (C:\Users\SiWuXie\AppData\Roaming\Python\Python39\site-packages\nni_node\training_service\local\localTrainingService.js:146:19)
at NNIManager.getTrialFile (C:\Users\SiWuXie\AppData\Roaming\Python\Python39\site-packages\nni_node\core\nnimanager.js:333:37)
at C:\Users\SiWuXie\AppData\Roaming\Python\Python39\site-packages\nni_node\rest_server\restHandler.js:284:29
at Layer.handle [as handle_request] (C:\Users\SiWuXie\AppData\Roaming\Python\Python39\site-packages\nni_node\node_modules\express\lib\router\layer.js:95:5)
at next (C:\Users\SiWuXie\AppData\Roaming\Python\Python39\site-packages\nni_node\node_modules\express\lib\router\route.js:137:13)
at Route.dispatch (C:\Users\SiWuXie\AppData\Roaming\Python\Python39\site-packages\nni_node\node_modules\express\lib\router\route.js:112:3)
at Layer.handle [as handle_request] (C:\Users\SiWuXie\AppData\Roaming\Python\Python39\site-packages\nni_node\node_modules\express\lib\router\layer.js:95:5)
at C:\Users\SiWuXie\AppData\Roaming\Python\Python39\site-packages\nni_node\node_modules\express\lib\router\index.js:281:22
at param (C:\Users\SiWuXie\AppData\Roaming\Python\Python39\site-packages\nni_node\node_modules\express\lib\router\index.js:360:14)
at param (C:\Users\SiWuXie\AppData\Roaming\Python\Python39\site-packages\nni_node\node_modules\express\lib\router\index.js:371:14)
[2022-09-28 10:45:10] INFO (NNIManager) submitTrialJob: form: {
sequenceId: 2,
hyperParameters: {
value: '{"parameter_id": 2, "parameter_source": "algorithm", "parameters": {"batch_size": 16, "hidden_size0": 16, "hidden_size1": 128, "hidden_size2": 128, "hidden_size3": 64, "lr": 0.1}, "parameter_index": 0}',
index: 0
},
placementConstraint: { type: 'None', gpus: [] }
}
[2022-09-28 10:45:16] INFO (NNIManager) Trial job v0t9o status changed from WAITING to SUCCEEDED
[2022-09-28 10:45:16] INFO (NNIManager) NNIManager received command from dispatcher: TR, {"parameter_id": 3, "parameter_source": "algorithm", "parameters": {"batch_size": 64, "hidden_size0": 64, "hidden_size1": 64, "hidden_size2": 128, "hidden_size3": 16, "lr": 0.001}, "parameter_index": 0}
**Environment**:
- NNI version: 2.9
- Training service (local|remote|pai|aml|etc):
- Client OS:
- Server OS (for remote mode only):
- Python version:
- PyTorch/TensorFlow version:
- Is conda/virtualenv/venv used?:
- Is running in Docker?:
**Configuration**:
- Experiment config (remember to remove secrets!):
- Search space:
**Log message**:
- nnimanager.log:
- dispatcher.log:
- nnictl stdout and stderr:
<!--
Where can you find the log files:
LOG: https://github.com/microsoft/nni/blob/master/docs/en_US/Tutorial/HowToDebug.md#experiment-root-director
STDOUT/STDERR: https://nni.readthedocs.io/en/stable/reference/nnictl.html#nnictl-log-stdout
-->
**How to reproduce it?**:

state: closed | created_at: 2022-09-28T02:45:41Z | updated_at: 2022-10-08T08:30:29Z
url: https://github.com/microsoft/nni/issues/5146
labels: ["support"] | user_login: siwuxei | comments_count: 13
repo_name: kennethreitz/responder | topic: flask | issue_number: 286
title: Routing issue

Hi,
While building a framework inspired by responder for a talk, I found an issue related to the `parse` library. Take for instance:
```python
import responder
api = responder.API()
@api.route("/{greeting}")
async def greet_world(req, resp, *, greeting):
resp.text = f"{greeting}, world!"
if __name__ == '__main__':
api.run()
```
`/hello/responder` will match `/{greeting}`; unless we put a `/` at the end of the pattern, any route will match it. I suggest that when adding a new route we check whether it ends with a `/` and, if not, append one under the hood.
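The over-match can be reproduced without responder at all. A `parse`-style `{name}` placeholder behaves roughly like a non-greedy regex group that happily crosses `/` boundaries (the regex below is my approximation, not parse's actual internals), and the proposed trailing-slash check is a one-liner:

```python
import re

# Rough stand-in (assumption) for how parse matches "/{greeting}":
# the placeholder crosses "/" boundaries, so deeper paths still match.
pattern = re.compile(r"^/(?P<greeting>.+?)$")
match = pattern.match("/hello/responder")
print(match.group("greeting"))  # hello/responder -- the whole tail is captured

def normalize_route(route: str) -> str:
    """Proposed fix: append a trailing slash so extra path segments
    cannot be swallowed by the final placeholder."""
    return route if route.endswith("/") else route + "/"

print(normalize_route("/{greeting}"))   # /{greeting}/
print(normalize_route("/{greeting}/"))  # unchanged
```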
If you approve it, I will make a PR.

state: closed | created_at: 2019-02-11T20:26:06Z | updated_at: 2019-04-30T11:28:38Z
url: https://github.com/kennethreitz/responder/issues/286
labels: [] | user_login: oldani | comments_count: 1
repo_name: ultralytics/ultralytics | topic: computer-vision | issue_number: 19108
title: Scores for all classes for each prediction box

### Search before asking
- [x] I have searched the Ultralytics YOLO [issues](https://github.com/ultralytics/ultralytics/issues) and [discussions](https://github.com/orgs/ultralytics/discussions) and found no similar questions.
### Question
I want to be able to get the scores across all classes for a prediction.
For example, if I have a picture of a car, I still want the prediction scores for the other classes I'm considering.
I don't see a way to do this after going through the documentation; I just get an output tensor that gives the top class and score.
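YOLO's post-processed detections keep only the argmax class per box, but the idea being asked for can be sketched generically: given each box's per-class confidence row (the numbers below are made up), keep the whole row alongside the top prediction. This is a plain-Python illustration, not the Ultralytics API:

```python
def summarize_boxes(class_scores):
    """class_scores: one list per detection box, one confidence per class.
    Returns the usual top-class prediction plus the full score vector."""
    out = []
    for row in class_scores:
        top = max(range(len(row)), key=row.__getitem__)  # index of best class
        out.append({"class": top, "score": row[top], "all_scores": row})
    return out

# Hypothetical 3-box, 4-class output (e.g. car, truck, bus, person).
scores = [
    [0.70, 0.10, 0.15, 0.05],
    [0.05, 0.80, 0.10, 0.05],
    [0.25, 0.25, 0.30, 0.20],
]
for det in summarize_boxes(scores):
    print(det["class"], det["score"], det["all_scores"])
```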

### Additional
_No response_

state: open | created_at: 2025-02-06T18:58:41Z | updated_at: 2025-02-07T23:18:02Z
url: https://github.com/ultralytics/ultralytics/issues/19108
labels: ["question", "detect"] | user_login: bharathsivaram10 | comments_count: 6