| repo_name | topic | issue_number | title | body | state | created_at | updated_at | url | labels | user_login | comments_count |
|---|---|---|---|---|---|---|---|---|---|---|---|
biolab/orange3 | pandas | 6,739 | A new Melt widget icon | **What's your use case?**
As of today, the melt widget icon is a shopping cart. This is misleading and not easy to understand.
**What's your proposed solution?**
A new, more self-explanatory icon:

https://github.com/simonaubertbd/misc_icons/blob/main/melt.svg
**Are there any alternative solutions?**
N/A
| open | 2024-02-17T07:01:40Z | 2024-03-11T11:43:22Z | https://github.com/biolab/orange3/issues/6739 | [] | simonaubertbd | 5 |
onnx/onnx | deep-learning | 5,909 | onnx.helper.make_attribute_ref does not set attr.ref_attr_name | # Bug Report
### Describe the bug
onnx.helper.make_attribute_ref does not create a reference attribute; it creates a normal attribute with only a name and a type.
Should it not set attr.ref_attr_name to refer to the parent function's attribute? | open | 2024-02-06T09:12:50Z | 2024-02-08T13:12:57Z | https://github.com/onnx/onnx/issues/5909 | [
"bug"
] | aernoudt | 3 |
vastsa/FileCodeBox | fastapi | 99 | Is the unit "bit" in "系统管理-文件大小" (System Management - File Size) incorrect? | 10485760 Bytes equals 10 MB
10485760 bits equals 1.25 MB
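The two figures above can be checked with plain arithmetic:

```python
size = 10485760

# Interpreted as bytes: 10485760 / 1024 / 1024 = 10 MB
mb_if_bytes = size / (1024 * 1024)

# Interpreted as bits: divide by 8 first, giving 1.25 MB
mb_if_bits = size / 8 / (1024 * 1024)

print(mb_if_bytes, mb_if_bits)  # 10.0 1.25
```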
According to the default config file, the default file size limit is 10 MB. So in the "系统管理-文件大小" (System Management - File Size) description, shouldn't the unit be "Bytes"? | closed | 2023-10-24T05:32:07Z | 2023-10-24T11:34:49Z | https://github.com/vastsa/FileCodeBox/issues/99 | [] | Dicartle | 1 |
simple-login/app | flask | 2,070 | Add TOS notice on account activation | open | 2024-03-16T15:09:43Z | 2024-03-16T15:09:43Z | https://github.com/simple-login/app/issues/2070 | [] | Geri680 | 0 | |
PeterL1n/RobustVideoMatting | computer-vision | 210 | Performance using grayscale images | Hey there,
thanks for this amazing tool!
Does anybody know how the performance for grayscale images is?
I want to use it in the dark and I have an infrared camera.
If retraining is required, how many GPU hours do you think would be necessary?
:) | open | 2022-11-23T14:43:01Z | 2023-08-23T03:20:22Z | https://github.com/PeterL1n/RobustVideoMatting/issues/210 | [] | bytosaur | 2 |
lexiforest/curl_cffi | web-scraping | 477 | Disable cookie storage in AsyncSession | Hello everyone,
I'm looking for a way to not store cookies in an AsyncSession.
I have the following code, but I noticed that cookies are stored between requests. When the site I'm trying to scrape detects I'm a bot, it stores a cookie that is then reused by all subsequent requests in the session. I'd like to disable cookie storage.
Is there an easy way to do it?
Thanks in advance
```python
async with AsyncSession(
    verify=False,
    impersonate='chrome',
    headers={'Connection': 'close'},
) as session:

    async def make_single_request(request_id: str, request: Dict[str, Any]) -> Tuple[str, Optional[Response]]:
        method = request.get('method', 'GET')
        url = request['url']
        for attempt in range(self.max_retries):
            try:
                kwargs = {
                    'params': request.get('params'),
                    'data': request.get('data'),
                    'headers': request.get('headers'),
                    'proxy': self.proxy_url,
                    'referer': request.get('referer'),
                }
                if method == 'GET':
                    response = await session.get(url, **kwargs)
                else:
                    response = await session.post(url, **kwargs)
                if response.status_code in self.retry_status_codes:
                    raise Exception(f"Received status code {response.status_code}")
                # Handle response encoding
                # self._handle_response_encoding(response)
                return request_id, response
``` | open | 2025-01-15T17:25:19Z | 2025-01-16T02:05:32Z | https://github.com/lexiforest/curl_cffi/issues/477 | [
"question"
] | omar6995 | 1 |
dynaconf/dynaconf | fastapi | 1,221 | [RFC] support for specifying specific secret versions via the Vault API | Vault secrets can be accessed by the version. The current implementation brings the latest version of the secret.
Sometimes, we would like to set a new for the secret (this leads to the "new version in Vault"), but we want the "production" system to continue to use the old value/version.
The Vault versions can be different between environments. So, it will be nice to define the desired versions via environment variables:
```
VAULT_VERSIONS_FOR_DYNACONF='{"default": 17, "dev": 11, "production": 10, "staging": 12}'
```
The per-environment secret version can then be used in the Vault call:
```
version = obj.VAULT_VERSIONS_FOR_DYNACONF.get(env)
version_query = f"?version={version}" if version else ""
path = "/".join([obj.VAULT_PATH_FOR_DYNACONF, env]) + version_query
```
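For illustration, a self-contained version of that lookup (the fallback to the "default" key is my assumption and not part of the snippet above):

```python
def build_vault_path(base_path, env, versions):
    # Pick the per-environment version; fall back to "default" (an assumption).
    version = versions.get(env, versions.get("default"))
    version_query = f"?version={version}" if version else ""
    return "/".join([base_path, env]) + version_query

versions = {"default": 17, "dev": 11, "production": 10, "staging": 12}
print(build_vault_path("secret/dynaconf", "production", versions))
# secret/dynaconf/production?version=10
```

An environment not listed in the mapping would then resolve to the "default" version, e.g. version 17 here.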
Thanks in advance
Dmitry | open | 2025-01-09T10:41:55Z | 2025-01-09T10:41:55Z | https://github.com/dynaconf/dynaconf/issues/1221 | [
"Not a Bug",
"RFC"
] | yukels | 0 |
gradio-app/gradio | machine-learning | 9,977 | Queuing related guides contain outdated information about `concurrency_count` | ### Describe the bug
These guides related to queuing still refer to `concurrency_count`:
- [Queuing](https://www.gradio.app/guides/queuing)
- [Setting Up a Demo for Maximum Performance](https://www.gradio.app/guides/setting-up-a-demo-for-maximum-performance)
However, as confirmed in #9463:
> The `concurrency_count` parameter has been removed from `.queue()`. In Gradio 4, this parameter was already deprecated and had no effect. In Gradio 5, this parameter has been removed altogether.
Running the code from the [Queuing](https://www.gradio.app/guides/queuing) guide results in the error below:
```
Exception has occurred: TypeError
EventListener._setup.<locals>.event_trigger() got an unexpected keyword argument 'concurrency_count'
File "./test_gradio.py", line 23, in <module>
greet_btn.click(fn=greet, inputs=[tag, output], outputs=[
TypeError: EventListener._setup.<locals>.event_trigger() got an unexpected keyword argument 'concurrency_count'
```
### Have you searched existing issues? 🔎
- [X] I have searched and found no existing issues
### Reproduction
```python
# Sample code from https://www.gradio.app/guides/queuing
import gradio as gr
with gr.Blocks() as demo:
prompt = gr.Textbox()
image = gr.Image()
generate_btn = gr.Button("Generate Image")
generate_btn.click(image_gen, prompt, image, concurrency_count=5)
```
### Screenshot
_No response_
### Logs
_No response_
### System Info
```shell
Gradio Environment Information:
------------------------------
Operating System: Darwin
gradio version: 5.6.0
gradio_client version: 1.4.3
```
### Severity
I can work around it | closed | 2024-11-17T23:13:20Z | 2024-11-24T05:47:38Z | https://github.com/gradio-app/gradio/issues/9977 | [
"good first issue",
"docs/website"
] | the-eddie | 3 |
jacobgil/pytorch-grad-cam | computer-vision | 383 | Could you give me some guidance on how I can adapt this to image captioning? | Thanks for your great work. In the past I have used your repo for several classical computer vision problems.
I am now trying to use this for video captioning - the model I am using has an image encoder, and the encoding is then passed through a causal self-attention layer and a text decoder to generate a caption. I want to understand which part of the image led to the prediction of a certain word (at index i in the generated caption). Would this be possible, and if so, how would I do it? | closed | 2023-02-03T17:46:04Z | 2024-09-28T04:16:59Z | https://github.com/jacobgil/pytorch-grad-cam/issues/383 | [] | ganyeshprasanna | 5 |
yihong0618/running_page | data-visualization | 332 | Is there a way to export Codoon indoor running data? | Is there a way to export indoor running data from Codoon (咕咚)? The GPX files are all generated from outdoor running records, and I wonder whether indoor running data can be generated and imported into other platforms. | closed | 2022-10-31T04:56:27Z | 2022-11-04T08:39:46Z | https://github.com/yihong0618/running_page/issues/332 | [] | darkmanno6 | 2 |
modelscope/data-juicer | data-visualization | 440 | [Bug]: KeyError: 'resource' | ### Before Reporting
- [X] I have pulled the latest code of the main branch to run again and the bug still existed.
- [X] I have read the [README](https://github.com/alibaba/data-juicer/blob/main/README.md) carefully and no error occurred during the installation process. (Otherwise, we recommend that you ask a question using the Question template.)
### Search before reporting
- [X] I have searched the Data-Juicer [issues](https://github.com/alibaba/data-juicer/issues) and found no similar bugs.
### OS
macOS 15.0
### Installation Method
source
### Data-Juicer Version
0.2.0
### Python Version
3.10
### Describe the bug
With the latest code on the main branch, the following error appears when running operators:
```
Traceback (most recent call last):
File "/Users/zyc/code/data-juicer/data_juicer/core/data.py", line 199, in process
dataset, resource_util_per_op = Monitor.monitor_func(
^^^^^^^^^^^^^^^^^^^^^
File "/Users/zyc/code/data-juicer/data_juicer/core/monitor.py", line 210, in monitor_func
resource_util_dict['resource'] = mdict['resource']
~~~~~^^^^^^^^^^^^
File "<string>", line 2, in __getitem__
File "/Users/zyc/miniconda3/lib/python3.11/multiprocessing/managers.py", line 837, in _callmethod
raise convert_to_error(kind, result)
KeyError: 'resource'
```
This error does not occur with the previous code. I tried several operators and they all behave the same way.
### To Reproduce
python process_data.py --config ../configs/demo/process_demo.yaml
The config file is as follows:
```yaml
# Process config example for dataset

# global parameters
project_name: 'demo-process'
dataset_path: '/Users/zyc/code/data-juicer/demos/data/demo-dataset-chatml.jsonl'
np: 1                           # number of subprocess to process your dataset
text_keys: ["messages"]
export_path: '/Users/zyc/code/data-juicer/outputs/demo-process/demo-processed_chatml.jsonl'
use_cache: false                # whether to use the cache management of Hugging Face datasets. It might take up lots of disk space when using cache
ds_cache_dir: null              # cache dir for Hugging Face datasets. In default, it's the same as the environment variable `HF_DATASETS_CACHE`, whose default value is usually "~/.cache/huggingface/datasets". If this argument is set to a valid path by users, it will override the default cache dir
use_checkpoint: false           # whether to use the checkpoint management to save the latest version of dataset to work dir when processing. Rerun the same config will reload the checkpoint and skip ops before it. Cache will be disabled when using checkpoint. If args of ops before the checkpoint are changed, all ops will be rerun from the beginning.
temp_dir: null                  # the path to the temp directory to store intermediate caches when cache is disabled, these cache files will be removed on-the-fly. In default, it's None, so the temp dir will be specified by system. NOTICE: you should be caution when setting this argument because it might cause unexpected program behaviors when this path is set to an unsafe directory.
open_tracer: true               # whether to open the tracer to trace the changes during process. It might take more time when opening tracer
op_list_to_trace: []            # only ops in this list will be traced by tracer. If it's empty, all ops will be traced. Only available when tracer is opened.
trace_num: 10                   # number of samples to show the differences between datasets before and after each op. Only available when tracer is opened.
op_fusion: false                # whether to fuse operators that share the same intermediate variables automatically. Op fusion might reduce the memory requirements slightly but speed up the whole process.
cache_compress: null            # the compression method of the cache file, which can be specified in ['gzip', 'zstd', 'lz4']. If this parameter is None, the cache file will not be compressed. We recommend you turn on this argument when your input dataset is larger than tens of GB and your disk space is not enough.

# for distributed processing
executor_type: default          # type of executor, support "default" or "ray" for now.
ray_address: auto               # the address of the Ray cluster.

# only for data analysis
save_stats_in_one_file: false   # whether to store all stats result into one file

# process schedule: a list of several process operators with their arguments
process:
  # Mapper ops. Most of these ops need no arguments.
  - generate_instruction_mapper:          # filter text with total token number out of specific range
      hf_model: '/Users/zyc/data/models/qwen/Qwen2-1___5B-Instruct'  # model name on huggingface to generate instruction.
      seed_file: '/Users/zyc/code/data-juicer/demos/data/demo-dataset-chatml.jsonl'  # Seed file as instruction samples to generate new instructions, chatml format.
      instruct_num: 3                     # the number of generated samples.
      similarity_threshold: 0.7           # the similarity score threshold between the generated samples and the seed samples. Range from 0 to 1. Samples with similarity score less than this threshold will be kept.
      prompt_template: null               # Prompt template for generate samples. Please make sure the template contains "{augmented_data}", which corresponds to the augmented samples.
      qa_pair_template: null              # Prompt template for generate question and answer pair description. Please make sure the template contains two "{}" to format question and answer. Default: '【问题】\n{}\n【回答】\n{}\n'.
      example_template: null              # Prompt template for generate examples. Please make sure the template contains "{qa_pairs}", which corresponds to the question and answer pair description generated by param `qa_pair_template`.
      qa_extraction_pattern: null         # Regular expression pattern for parsing question and answer from model response.
      enable_vllm: false                  # Whether to use vllm for inference acceleration.
      tensor_parallel_size: null          # It is only valid when enable_vllm is True. The number of GPUs to use for distributed execution with tensor parallelism.
      max_model_len: null                 # It is only valid when enable_vllm is True. Model context length. If unspecified, will be automatically derived from the model config.
      max_num_seqs: 256                   # It is only valid when enable_vllm is True. Maximum number of sequences to be processed in a single iteration.
      sampling_params: { "max_length": 1024 }
```
### Configs
_No response_
### Logs
_No response_
### Screenshots
_No response_
### Additional
_No response_ | closed | 2024-09-29T09:22:48Z | 2025-02-07T03:36:45Z | https://github.com/modelscope/data-juicer/issues/440 | [
"bug",
"stale-issue"
] | luckystar1992 | 4 |
opengeos/leafmap | streamlit | 170 | Save draw features as GeoJSON | ### Discussed in https://github.com/giswqs/leafmap/discussions/168
<div type='discussions-op-text'>
<sup>Originally posted by **egeres** January 9, 2022</sup>
It's possible to add markers, lines and polygonal areas directly in the notebook's ipywidget. **Is it also possible to save these changes from that UI**?
If not, in which attribute is it possible to access this information in a `leafmap.Map` object?

</div> | closed | 2022-01-10T05:49:17Z | 2022-01-11T02:22:55Z | https://github.com/opengeos/leafmap/issues/170 | [] | giswqs | 1 |
erdewit/ib_insync | asyncio | 435 | Support for new SCHEDULE in reqHistoricalData | IB has introduced a new type of historical bar data, SCHEDULE, https://interactivebrokers.github.io/tws-api/historical_bars.html that seems to be the trading times for each trading day. IB being IB also decided to use a different callback, probably because the format of the data returned is different.
It is possible to send the correct request using ib_insync, but the TWS side then makes the usual compatibility check and returns an error message.
1091 321 Error validating request.-'bS' : cause - Your client version is too low for:Historical Schedule requests NVDA | closed | 2022-01-29T23:12:00Z | 2022-01-30T08:49:58Z | https://github.com/erdewit/ib_insync/issues/435 | [] | mdelvaux | 1 |
sunscrapers/djoser | rest-api | 35 | PyPI version out of date | I have noticed that the djoser version installed from PyPI assumes no trailing slash at the end of the URL, while the current urls.py does assume a trailing slash. I'd assume it's because the PyPI version is out of sync with the GitHub one. Any plans to update it?
| closed | 2015-04-20T10:43:25Z | 2015-04-22T19:46:02Z | https://github.com/sunscrapers/djoser/issues/35 | [] | avimeir | 1 |
ijl/orjson | numpy | 466 | Installing ORJSON 3.10 breaks poetry builds | Here is the error I'm seeing
```
#12 3.668 RuntimeError
#12 3.668
#12 3.668 Unable to find installation candidates for orjson (3.10.0)
#12 3.668
#12 3.668 at ~/.local/share/pypoetry/venv/lib/python3.11/site-packages/poetry/installation/chooser.py:74 in choose_for
#12 3.674 70│
#12 3.674 71│ links.append(link)
#12 3.674 72│
#12 3.674 73│ if not links:
#12 3.674 → 74│ raise RuntimeError(f"Unable to find installation candidates for {package}")
#12 3.674 75│
#12 3.674 76│ # Get the best link
#12 3.674 77│ chosen = max(links, key=lambda link: self._sort_key(package, link))
#12 3.674 78│
``` | closed | 2024-03-28T01:10:57Z | 2024-08-05T14:31:19Z | https://github.com/ijl/orjson/issues/466 | [
"Stale"
] | timothyjlaurent | 10 |
yezyilomo/django-restql | graphql | 280 | Empty string (`""`) should be an invalid value for nested fields | Currently, the empty string (`""`) is a valid value for nested fields; it's equivalent to `None` (`null`). But this behavior does not make sense to users since, in reality, the empty string doesn't correspond to `None` in any way, so we're going to drop it. | closed | 2021-09-14T21:28:25Z | 2021-09-14T22:38:47Z | https://github.com/yezyilomo/django-restql/issues/280 | [
"enhancement"
] | yezyilomo | 0 |
dynaconf/dynaconf | flask | 1,009 | [bug] TypeError for older versions of HVAC in read_secret_version method | **Describe the bug**
A combination of newer versions of Dynaconf with older versions of HVAC result in an incompatible mix of expected vs available arguments. Specifically you can get the following traceback.
```python
109 try:
110 if obj.VAULT_KV_VERSION_FOR_DYNACONF == 2:
--> 111 data = client.secrets.kv.v2.read_secret_version(
112 path,
113 mount_point=obj.VAULT_MOUNT_POINT_FOR_DYNACONF,
114 raise_on_deleted_version=True, # keep default behavior
115 )
116 else:
117 data = client.secrets.kv.read_secret(
118 "data/" + path,
119 mount_point=obj.VAULT_MOUNT_POINT_FOR_DYNACONF,
120 )
TypeError: KvV2.read_secret_version() got an unexpected keyword argument 'raise_on_deleted_version'
```
The PR introducing this feature was included in HVAC 1.1.0: https://github.com/hvac/hvac/pull/907
**To Reproduce**
Steps to reproduce the behavior:
1. Have a version of HVAC older than 1.1.0
2. Trigger a vault version read
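A possible workaround until dependency versions are pinned: only pass the keyword argument when the installed client supports it. This is a sketch with a stand-in function rather than the real hvac client:

```python
import inspect

def call_with_supported_kwargs(func, *args, **kwargs):
    # Drop keyword arguments the callable does not accept, e.g.
    # `raise_on_deleted_version`, which only exists in hvac >= 1.1.0.
    params = inspect.signature(func).parameters
    has_var_kw = any(p.kind is inspect.Parameter.VAR_KEYWORD for p in params.values())
    if not has_var_kw:
        kwargs = {k: v for k, v in kwargs.items() if k in params}
    return func(*args, **kwargs)

def old_read_secret_version(path, mount_point=None):
    # Stand-in for the pre-1.1.0 signature without `raise_on_deleted_version`.
    return {"path": path, "mount_point": mount_point}

result = call_with_supported_kwargs(
    old_read_secret_version, "app/config",
    mount_point="secret", raise_on_deleted_version=True,
)
print(result)  # {'path': 'app/config', 'mount_point': 'secret'}
```

With hvac >= 1.1.0 (or any callable that takes `**kwargs`), the keyword would be passed through unchanged.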
| closed | 2023-09-26T20:50:54Z | 2023-09-28T10:18:41Z | https://github.com/dynaconf/dynaconf/issues/1009 | [
"bug"
] | JacobCallahan | 0 |
pallets/flask | flask | 5,447 | When using Flask to receive multiple files, an extra ‘0D’ appears at the end of some images | When using Flask to receive multiple image files, occasionally an extra "0D" appears at the end of the image. The original data is saved, and the end of this image is normal "FFD90D0A". Packet capture tools also show a normal ending. If this request is sent repeatedly, the same error will continue to occur, and the saved image cannot be resent to reproduce the issue.
Below is the main code for sending and receiving. This code cannot be run directly because it needs to read some data.
```python
import requests

files = {}
files[i] = ("name", data[i], 'image/jpeg')  # `i` and `data` come from the sender's loop
response = requests.post(
    "http://192.168.1.50:8006/test",
    files=files,
)
```
```python
import os

import flask

app = flask.Flask(__name__)


@app.before_request
def before_request():
    path = flask.request.path
    data = flask.request.get_data(cache=True, as_text=False)
    flask.request.data1 = data


@app.route('/test', methods=['POST'])
def testfile():
    data = flask.request.data1
    files = flask.request.files
    files = [(i, files[i]) for i in files]
    for i in range(len(files)):
        with open(f"test/{i}.jpg", "wb") as f:
            files[i][1].seek(0, os.SEEK_END)
            file_size = files[i][1].tell()
            files[i][1].seek(0)
            f.write(files[i][1].read(file_size))
    return flask.jsonify({"code": 0})


if __name__ == '__main__':
    app.run(host="0.0.0.0", port=8006)
```
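To narrow down which saved files actually carry the stray byte, a small pure-Python check of what follows the JPEG end-of-image marker (FF D9) can help; this is a debugging aid I am adding, not part of the original report:

```python
def trailing_bytes_after_eoi(data: bytes) -> bytes:
    """Return whatever follows the last JPEG EOI marker (FF D9)."""
    eoi = data.rfind(b"\xff\xd9")
    return data[eoi + 2:] if eoi != -1 else b""

print(trailing_bytes_after_eoi(b"\x12\x34\xff\xd9"))      # b'' -> clean file
print(trailing_bytes_after_eoi(b"\x12\x34\xff\xd9\x0d"))  # b'\r' -> stray 0D
```

Running this over the saved files would show whether the extra 0x0D always sits directly after FF D9.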
I have analyzed the raw data and the source code, and used packet capture tools to inspect the received data; everything looks normal. I suspect a parsing problem, but the Flask source shows that parsing strips "\r\n", so why would there still be errors?
The probability of this issue occurring is very small, making it difficult to reproduce. However, when it does occur, the same error occurs consistently. | closed | 2024-03-27T08:03:15Z | 2024-04-12T00:06:01Z | https://github.com/pallets/flask/issues/5447 | [] | simpia | 1 |
rio-labs/rio | data-visualization | 199 | [Feature Request] Image Editor / Paint Tool | ### Description
It would be a great addition to have a built-in image editor and/or paint tool for Stable Diffusion inpainting, etc.
### Suggested Solution
Example:
[Gradio Image Editor component](https://www.gradio.app/docs/gradio/imageeditor)
### Alternatives
_No response_
### Additional Context
_No response_
### Related Issues/Pull Requests
_No response_ | open | 2024-12-24T13:13:14Z | 2025-02-17T03:40:37Z | https://github.com/rio-labs/rio/issues/199 | [
"new component",
"needs discussion"
] | ikmalsaid | 9 |
giotto-ai/giotto-tda | scikit-learn | 672 | Boundary of homological features | Is there a simple way to get the information (indices) of each simplex that forms the boundary of a homological feature in the persistence diagram?
I have been able to achieve this with dionysus by tracking each point within the point cloud with an index, and placing some of the persistence information into a dataframe, wherein each feature has a homological dimension, birth, and death (included in gtda), as well as a birth simplex, death simplex, and homology nodes (which make up the boundary).
This was relatively simple to do with the 'pair' attribute in dionysus. However, I was wondering what the easiest way to achieve this in giotto-tda may be? I am interested in making the persistence diagram have information about the homology nodes / boundary show up in the hover tooltip of the plot.
I will continue to work towards this, but if the package itself already has a simple way to achieve this I'm sure many users would be interested to know.
Thank you for your wonderful package and help. | open | 2023-06-27T18:36:24Z | 2023-06-27T18:36:24Z | https://github.com/giotto-ai/giotto-tda/issues/672 | [
"enhancement"
] | chaxor | 0 |
huggingface/datasets | tensorflow | 6,568 | keep_in_memory=True does not seem to work | UPD: [Fixed](https://github.com/huggingface/datasets/issues/6568#issuecomment-1880817794) . But a new issue came up :( | open | 2024-01-08T08:03:58Z | 2024-01-13T04:53:04Z | https://github.com/huggingface/datasets/issues/6568 | [] | kopyl | 6 |
Johnserf-Seed/TikTokDownload | api | 146 | [Feature] Support for the international version of TikTok |
(https://www.tiktok.com/@10382aaaabc/video/7088552388780428546)
At first I thought this was a domestic (China) link that could be accessed, so I deliberately set up an overseas server, only to find it is not supported either. I hope support can be added. | closed | 2022-04-26T12:25:32Z | 2024-02-24T10:28:48Z | https://github.com/Johnserf-Seed/TikTokDownload/issues/146 | [
"需求建议(enhancement)",
"已确认(confirmed)"
] | donaldlee2008 | 1 |
sngyai/Sequoia | pandas | 37 | All stock selections are empty after running the code? | I ran the code at version 6afcdc5c98. (The newer version always hits UTF-8 problems when creating the h5 file; the old version is fine.)
I changed nothing except setting end = '2019-02-01'.
But the stocks mentioned in the comments did not appear. Shouldn't they show up under
"**************"回踩年线"**************" (pullback to the annual moving average)?
Is there anything else that needs to be configured?
----------------------
The comment in backtrace_ma250.py:
Usage example: result = backtrace_ma250.check(code_name, data, end_date=end_date)
For example, when end_date='2019-02-01', the stock selection output is as follows:
[('601616', '广电电气'), ('002243', '通产丽星'), ('000070', '特发信息'), ('300632', '光莆股份'), ('601700', '风范股份'), ('002017', '东信和平'), ('600775', '南京熊猫'), ('300265', '通光线缆'), ('600677', '航天通信'), ('600776', '东方通信')]
Of course, the parameters in this function may be overfitted.
---------------------
PS C:\Sequoia36> & C:/Users/guochan/anaconda3/envs/py36/python.exe c:/Sequoia36/main.py
[Getting data:]##################################################涨停数:70 跌停数:33
涨幅大于5%数:341 跌幅大于5%数:130
年线以上个股数量: 0
......(omitted)......
**************"回踩年线"**************
[]
**************"回踩年线"**************
| closed | 2022-02-03T16:19:09Z | 2025-03-05T09:48:03Z | https://github.com/sngyai/Sequoia/issues/37 | [] | zhanxu1031 | 3 |
tflearn/tflearn | data-science | 639 | ValueError: Only call `softmax_cross_entropy_with_logits` with named arguments (labels=..., logits=..., ...) | Anaconda 3.5
TensorFlow 1.0
TFLearn from latest Git pull
In:
> tflearn\examples\extending_tensorflow> python trainer.py
> Traceback (most recent call last):
> File "trainer.py", line 39, in <module>
> loss = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(net, Y))
> File "C:\Anaconda3\lib\site-packages\tensorflow\python\ops\nn_ops.py", line 1578, in softmax_cross_entropy
> labels, logits)
> File "C:\Anaconda3\lib\site-packages\tensorflow\python\ops\nn_ops.py", line 1533, in _ensure_xent_args
> "named arguments (labels=..., logits=..., ...)" % name)
> ValueError: Only call `softmax_cross_entropy_with_logits` with named arguments (labels=..., logits=..., ...) | open | 2017-02-28T15:36:45Z | 2018-11-08T19:16:09Z | https://github.com/tflearn/tflearn/issues/639 | [] | EricPerbos | 18 |
openapi-generators/openapi-python-client | fastapi | 180 | Properties that are both nullable & required use Optional without importing it | **Describe the bug**
If a property is both nullable and required, it will be defined as Optional, but Optional won't be imported, leading to a warning during code generation:
`test-api-client/test_api_client/models/cat.py:11:9: F821 undefined name 'Optional'`
**To Reproduce**
Steps to reproduce the behavior:
```
+ rm -rf test-api-client
+ openapi-python-client generate --path test.yaml
Generating test-api-client
+ autoflake --in-place --remove-all-unused-imports --remove-unused-variables --expand-star-imports -r test-api-client/
+ isort test-api-client/
+ black test-api-client/
reformatted /Users/daniel/openapi-specs-upstream-repros/test-api-client/test_api_client/models/__init__.py
All done! ✨ 🍰 ✨
1 file reformatted, 7 files left unchanged.
+ flake8 --ignore=E722,E501 test-api-client/
test-api-client/test_api_client/models/cat.py:11:9: F821 undefined name 'Optional'
+ cat test-api-client/test_api_client/models/cat.py
from __future__ import annotations
from dataclasses import dataclass
from typing import Any, Dict
@dataclass
class Cat:
""" """
id: Optional[str]
def to_dict(self) -> Dict[str, Any]:
id = self.id
return {
"id": id,
}
@staticmethod
def from_dict(d: Dict[str, Any]) -> Cat:
id = d["id"]
return Cat(
id=id,
)
```
**Expected behavior**
`Optional` should be imported if it is used in a file.
**OpenAPI Spec File**
https://gist.github.com/dtkav/1e40db511a80ca4a545fa4519a8d9adf
**Desktop (please complete the following information):**
- OS: macOS 10.15.6
- Python Version: 3.8.0
- openapi-python-client version 0.6.0-alpha
**Additional context**
Potentially fixed by #177 | closed | 2020-09-04T02:18:06Z | 2020-09-04T13:58:43Z | https://github.com/openapi-generators/openapi-python-client/issues/180 | [
"🐞bug"
] | dtkav | 1 |
voila-dashboards/voila | jupyter | 581 | LaTeX in plotly is not rendered/invisible | Voila does not show any LaTeX text in plotly figures.
Voila version: 0.1.21
Plotly version: 4.5.2
Example:
```
import plotly.graph_objects as go
go.FigureWidget(layout={'title': r'$LaTeX$'})
```
Renders nicely in Jupyterlab:

But not when I run the notebook with voila:

Interestingly enough, the raw string is visible when I download the plot:

Any idea what is going wrong here?
I tried updating plotly and its extensions to the latest version (4.6.0), but that didn't change anything. | open | 2020-04-21T12:03:23Z | 2020-04-21T12:03:23Z | https://github.com/voila-dashboards/voila/issues/581 | [] | ghost | 0 |
awesto/django-shop | django | 452 | CMSPageRenderer template call potentially flawed | When having an error in any site that uses the CMSPageRenderer as its renderer, the CMSPageRenderer retrieves the exception template, which does not support the `request` keyword argument and then fails trying to call the render function on the exception template. | open | 2016-11-11T09:49:51Z | 2016-11-25T14:39:52Z | https://github.com/awesto/django-shop/issues/452 | [
"bug"
] | perplexa | 3 |
dpgaspar/Flask-AppBuilder | rest-api | 1,519 | Extending the documentation for AUTH_LDAP | Hey Flask-AppBuilder Team,
It would be nice to also have AUTH_LDAP_SEARCH_FILTER documented. We needed this for the Apache Airflow LDAP connection. A look into the source code brought it to light.
As description you could include the following.
_This filter can be used to restrict the LDAP search. An example value is "(|(memberOf=cn=group1*)(memberOf=cn=group2*))"_
Here is the link to the documentation where I expected the parameter:
https://flask-appbuilder.readthedocs.io/en/latest/security.html#authentication-ldap
Many thanks for your good work. Keep up the good work | closed | 2020-11-17T14:51:54Z | 2020-11-27T11:08:05Z | https://github.com/dpgaspar/Flask-AppBuilder/issues/1519 | [
"docs"
] | PhilippB21 | 1 |
yt-dlp/yt-dlp | python | 12,135 | Dynamic Range Compression (DRC), what is used with "bestaudio"; bitrate deviation. | ### DO NOT REMOVE OR SKIP THE ISSUE TEMPLATE
- [x] I understand that I will be **blocked** if I *intentionally* remove or skip any mandatory\* field
### Checklist
- [x] I'm asking a question and **not** reporting a bug or requesting a feature
- [x] I've looked through the [README](https://github.com/yt-dlp/yt-dlp#readme)
- [x] I've verified that I have **updated yt-dlp to nightly or master** ([update instructions](https://github.com/yt-dlp/yt-dlp#update-channels))
- [x] I've searched [known issues](https://github.com/yt-dlp/yt-dlp/issues/3766) and the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar questions **including closed ones**. DO NOT post duplicates
- [x] I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue)
### Please make sure the question is worded well enough to be understood
Hello everyone,
I load my videos with the format selection "bestvideo[height<=1080][fps<=30][ext=mp4]+bestaudio[ext=m4a]/best[ext=mp4]/best".
Here is an example video (Headphone users be warned) which is downloaded with the format IDs 136+140: https://www.youtube.com/watch?v=_dRedt6DzAY
Does "bestaudio" always select a non-DRC format, or is it better to exclude DRC formats explicitly, and how would that look?
A simple drc-yes/no option would be quite handy.
By the way... the ID 140 is displayed with "130k ABR", but MediaInfo says the audio track has a constant 128 kb/s?
Who is fibbing here? :-)
Thank you in advance for any feedback!
Greetings, Martin
### Provide verbose output that clearly demonstrates the problem
- [ ] Run **your** yt-dlp command with **-vU** flag added (`yt-dlp -vU <your command line>`)
- [ ] If using API, add `'verbose': True` to `YoutubeDL` params instead
- [ ] Copy the WHOLE output (starting with `[debug] Command-line config`) and insert it below
### Complete Verbose Output
```shell
``` | closed | 2025-01-19T11:23:00Z | 2025-01-21T13:53:44Z | https://github.com/yt-dlp/yt-dlp/issues/12135 | [
"question"
] | m-fessler | 4 |
home-assistant/core | asyncio | 141,288 | Unavailable Sensors | ### The problem
Hi, the sensors are frequently Unavailable, and the logs are clear.

### What version of Home Assistant Core has the issue?
2025.3.4
### What was the last working version of Home Assistant Core?
_No response_
### What type of installation are you running?
Home Assistant OS
### Integration causing the issue
_No response_
### Link to integration documentation on our website
https://www.home-assistant.io/integrations/ukraine_alarm/
### Diagnostics information
_No response_
### Example YAML snippet
```yaml
```
### Anything in the logs that might be useful for us?
```txt
```
### Additional information
_No response_ | open | 2025-03-24T14:19:39Z | 2025-03-24T14:19:45Z | https://github.com/home-assistant/core/issues/141288 | [
"integration: ukraine_alarm"
] | milandzuris | 1 |
jwkvam/bowtie | jupyter | 26 | webpack 2 breaks things => pin to 1.13.2 for now | closed | 2016-09-21T23:52:19Z | 2016-09-28T18:19:01Z | https://github.com/jwkvam/bowtie/issues/26 | [] | jwkvam | 1 | |
matplotlib/mplfinance | matplotlib | 689 | Segment anchoring problem | Hello,
[FX_NAS100, 5.csv](https://github.com/user-attachments/files/18380187/FX_NAS100.5.csv)
I hope I'm not out of line with the problem I can't solve.
I would like to draw a red line and a green line representing the amplitude of the US trading session on the NASDAQ100.
The history is in candlestick units of 5 minutes.
The two lines (for the historical period) should be drawn from the 15:30 candle to the 21:55 candle.
Tracing is correct at first, but the lines move as soon as you use the program's zoom or advance functions.
Perhaps tracing with “axhline” isn't the right solution; I've tried using the xplot method to exploit the position of the reference candle index, but I can't get anywhere.
Do you have any ideas for solving this problem?
Later, I want to draw other lines with different levels, but until I solve this problem, I'm stuck.
Thanks
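In case it helps frame the question: my current suspicion is that `axhline` with `xmin`/`xmax` works in axes-fraction coordinates, while the candles sit at fixed integer data positions, so drawing with `ax.plot([x0, x1], [y, y], ...)` in data coordinates might be the right direction. A sketch of the index lookup I have in mind (helper name made up):

```python
import pandas as pd

def session_segment(index, session_start, session_end):
    """Return (x0, x1) integer candle positions covering the session window.

    With show_nontrading=False (the mplfinance default) candles sit at
    integer x positions 0..N-1, so ax.plot([x0, x1], [y, y]) drawn in data
    coordinates stays glued to those candles under zoom/pan, unlike
    ax.axhline(xmin=..., xmax=...), which uses axes-fraction coordinates.
    """
    mask = (index.time >= session_start) & (index.time <= session_end)
    pos = mask.nonzero()[0]
    return (int(pos[0]), int(pos[-1])) if len(pos) else None

# 5-minute candles from 15:00 to 22:00; US session window 15:30-21:55
idx = pd.date_range("2024-01-02 15:00", "2024-01-02 22:00", freq="5min")
seg = session_segment(idx, pd.Timestamp("15:30").time(), pd.Timestamp("21:55").time())
# seg == (6, 83): candle 6 is the 15:30 bar, candle 83 is the 21:55 bar
```

The returned positions would then be used as the x endpoints of the red/green segments instead of the `xmin`/`xmax` fractions.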
```
import pandas as pd
import mplfinance as mpf
from matplotlib.backends.backend_tkagg import FigureCanvasTkAgg
import tkinter as tk
from tkinter import ttk
import os
import matplotlib.pyplot as plt
df_bourse = pd.DataFrame([
{'Année': 2022, 'Période': 'Été US', 'Début': '2022-03-14', 'Fin': '2022-03-28', 'Modedate': 1},
{'Année': 2022, 'Période': 'Hiver US', 'Début': '2022-10-31', 'Fin': '2022-11-04', 'Modedate': 1},
{'Année': 2023, 'Période': 'Été US', 'Début': '2023-03-13', 'Fin': '2023-03-24', 'Modedate': 1},
{'Année': 2023, 'Période': 'Hiver US', 'Début': '2023-10-30', 'Fin': '2023-11-03', 'Modedate': 1},
{'Année': 2024, 'Période': 'Été US', 'Début': '2024-03-11', 'Fin': '2024-03-28', 'Modedate': 1},
{'Année': 2024, 'Période': 'Hiver US', 'Début': '2024-10-28', 'Fin': '2024-11-01', 'Modedate': 1}
])
df_bourse['Début'] = pd.to_datetime(df_bourse['Début'])
df_bourse['Fin'] = pd.to_datetime(df_bourse['Fin'])
csv_path = r"C:\Formation Python\FX_NAS100, 5.csv"
df = pd.read_csv(csv_path)
csv_filename = os.path.basename(csv_path)
df["time"] = pd.to_datetime(df["time"], unit='s')
df['date'] = df['time'].dt.date
def determine_modedate(date):
for _, row in df_bourse.iterrows():
if row['Début'].date() <= date <= row['Fin'].date():
return row['Modedate']
return 2
df['Modedate'] = df['date'].apply(determine_modedate)
df['amp_journée'] = 0.0
for day in df['date'].unique():
day_data = df[df['date'] == day]
if not day_data.empty:
modedate = day_data['Modedate'].iloc[0]
if modedate == 1:
start_time = pd.to_datetime("14:30").time()
end_time = pd.to_datetime("21:00").time()
else:
start_time = pd.to_datetime("15:30").time()
end_time = pd.to_datetime("22:00").time()
filtered_data = day_data[(day_data['time'].dt.time >= start_time) & (day_data['time'].dt.time <= end_time)]
if not filtered_data.empty:
high_max = filtered_data['high'].max()
low_min = filtered_data['low'].min()
amplitude = high_max - low_min
df.loc[df['date'] == day, 'amp_journée'] = amplitude
percentages = [25, 50, 75, 100, 125, 150, 175, 200, 225, 250, 275]
for p in percentages:
df[f'{p}%'] = df['amp_journée'] * (p / 100)
df[f'-{p}%'] = df['amp_journée'] * (-p / 100)
cols = ['date', 'time', 'open', 'high', 'low', 'close', 'Modedate', 'amp_journée']
percentage_cols = [f'{p}%' for p in percentages] + [f'-{p}%' for p in percentages]
df = df[cols + percentage_cols]
# Main GUI window
root = tk.Tk()
root.title("Graphique en chandelier japonais")
root.state('zoomed')
frame = ttk.Frame(root)
frame.pack(fill=tk.BOTH, expand=True)
start_index = 0
num_candles = 100
# Input fields for the step sizes
step_frame = ttk.Frame(root)
step_frame.pack(side=tk.TOP, fill=tk.X)
tk.Label(step_frame, text="Pas graphique:").pack(side=tk.LEFT, padx=5)
graph_step_entry = ttk.Entry(step_frame, width=5)
graph_step_entry.insert(0, "30")
graph_step_entry.pack(side=tk.LEFT, padx=5)
tk.Label(step_frame, text="Pas zoom:").pack(side=tk.LEFT, padx=5)
zoom_step_entry = ttk.Entry(step_frame, width=5)
zoom_step_entry.insert(0, "30")
zoom_step_entry.pack(side=tk.LEFT, padx=5)
# Window to display the DataFrame
def show_dataframe():
df_window = tk.Toplevel(root)
df_window.title("Tableau des données")
df_window.state('zoomed')
tree_frame = ttk.Frame(df_window)
tree_frame.pack(fill=tk.BOTH, expand=True)
tree_scroll_y = ttk.Scrollbar(tree_frame, orient=tk.VERTICAL)
tree_scroll_x = ttk.Scrollbar(tree_frame, orient=tk.HORIZONTAL)
tree = ttk.Treeview(tree_frame, yscrollcommand=tree_scroll_y.set, xscrollcommand=tree_scroll_x.set)
tree_scroll_y.pack(side=tk.RIGHT, fill=tk.Y)
tree_scroll_x.pack(side=tk.BOTTOM, fill=tk.X)
tree_scroll_y.config(command=tree.yview)
tree_scroll_x.config(command=tree.xview)
tree.pack(fill=tk.BOTH, expand=True)
df_display = df.copy().reset_index()
numeric_columns = ['amp_journée'] + [col for col in df_display.columns if '%' in col] + ['open', 'high', 'low', 'close']
for column in numeric_columns:
if column in df_display.columns:
df_display[column] = df_display[column].round(2)
tree["columns"] = ["index"] + list(df_display.columns)[1:]
tree["show"] = "headings"
tree.heading("index", text="Index")
tree.column("index", width=50, anchor='center')
for column in df_display.columns[1:]:
tree.heading(column, text=column)
if column == 'time':
tree.column(column, width=200, anchor='center')
else:
tree.column(column, width=100, anchor='center')
for _, row in df_display.iterrows():
tree.insert("", "end", values=[row['index']] + list(row)[1:])
# Entry field for the index
index_frame = ttk.Frame(df_window)
index_frame.pack(side=tk.TOP, fill=tk.X)
tk.Label(index_frame, text="Index:").pack(side=tk.LEFT, padx=5)
index_entry = ttk.Entry(index_frame, width=10)
index_entry.pack(side=tk.LEFT, padx=5)
def select_by_index(event=None):
index = index_entry.get()
if index.isdigit():
index = int(index)
for item in tree.get_children():
if int(tree.item(item)['values'][0]) == index:
tree.selection_set(item)
tree.see(item)
break
index_entry.bind('<Return>', select_by_index)
df_window.mainloop()
btn_show_df = ttk.Button(root, text="Afficher le tableau", command=show_dataframe)
btn_show_df.pack(side=tk.TOP, pady=5)
# Add a red line and a green line
def add_lines(ax, df_slice):
try:
unique_dates = df_slice['date'].unique()
for date in unique_dates:
daily_data = df_slice[df_slice['date'] == date]
if daily_data.empty:
continue
modedate = daily_data['Modedate'].iloc[0]
if modedate == 1:
start_time = pd.to_datetime("14:30").time()
end_time = pd.to_datetime("21:00").time()
else:
start_time = pd.to_datetime("15:30").time()
end_time = pd.to_datetime("21:55").time()
filtered_data = daily_data[
(daily_data['time'].dt.time >= start_time) &
(daily_data['time'].dt.time <= end_time)
]
if filtered_data.empty:
continue
max_high = filtered_data['high'].max()
min_low = filtered_data['low'].min()
time_range = df_slice['time']
xmin = (pd.Timestamp.combine(date, start_time) - time_range.min()).total_seconds() / (time_range.max() - time_range.min()).total_seconds()
xmax = (pd.Timestamp.combine(date, end_time) - time_range.min()).total_seconds() / (time_range.max() - time_range.min()).total_seconds()
ax.axhline(y=max_high, color='red', linestyle='--', xmin=xmin, xmax=xmax)
ax.axhline(y=min_low, color='green', linestyle='--', xmin=xmin, xmax=xmax)
except Exception as e:
pass
# Function to handle mouse movement
def on_mouse_move(event):
if event.inaxes:
x, y = event.xdata, event.ydata
ax = event.inaxes
# Clear the previous crosshair lines
for line in ax.lines:
if line.get_label() in ['crosshair_h', 'crosshair_v']:
line.remove()
# Draw the new crosshair lines
ax.axhline(y=y, color='black', linewidth=0.5, label='crosshair_h')
ax.axvline(x=x, color='black', linewidth=0.5, label='crosshair_v')
# Update the data shown in the info window
if df_slice is not None and len(df_slice) > 0:
index = max(0, min(int(x), len(df_slice) - 1))
data = df_slice.iloc[index]
info_text = f"Index: {start_index + index}\nOpen: {data['open']:.2f}\nHigh: {data['high']:.2f}\nLow: {data['low']:.2f}\nClose: {data['close']:.2f}\nTime: {data['time']}"
info_window.set(info_text)
# Update the X and Y readouts
x_value.set(f"X: {data['time']}")
y_value.set(f"Y: {y:.2f}")
# Redraw the chart
fig.canvas.draw_idle()
# Mouse wheel event
def on_scroll(event):
global num_candles
if event.button == 'up':
num_candles = max(num_candles - int(zoom_step_entry.get()), 10)
elif event.button == 'down':
num_candles += int(zoom_step_entry.get())
update_chart()
# Chart functions
def update_chart():
global start_index, num_candles, df_slice, fig
plt.close('all') # Ferme toutes les figures existantes
df_slice = df.iloc[start_index:start_index + num_candles]
if df_slice.empty:
return
df_ohlc = df_slice[['time', 'open', 'high', 'low', 'close']].copy()
df_ohlc.set_index('time', inplace=True)
fig, axlist = mpf.plot(df_ohlc, type='candle', style='charles', title=csv_filename,
ylabel='Prix', volume=False, returnfig=True)
ax = axlist[0]
add_lines(ax, df_slice)
if hasattr(frame, "canvas"):
frame.canvas.get_tk_widget().destroy()
canvas = FigureCanvasTkAgg(fig, master=frame)
canvas.draw()
canvas.get_tk_widget().pack(side=tk.TOP, fill=tk.BOTH, expand=1)
frame.canvas = canvas
fig.canvas.mpl_connect('motion_notify_event', on_mouse_move)
fig.canvas.mpl_connect('scroll_event', on_scroll)
def increase_candles():
global num_candles
num_candles += int(zoom_step_entry.get())
update_chart()
def decrease_candles():
global num_candles
num_candles = max(num_candles - int(zoom_step_entry.get()), 10) # Minimum de 10 chandeliers
update_chart()
def move_right():
global start_index
start_index += int(graph_step_entry.get())
update_chart()
def move_left():
global start_index
start_index = max(start_index - int(graph_step_entry.get()), 0)
update_chart()
# Buttons
button_frame = ttk.Frame(root)
button_frame.pack(side=tk.BOTTOM, fill=tk.X)
plus_button = ttk.Button(button_frame, text="+", command=decrease_candles)
plus_button.pack(side=tk.LEFT, padx=5, pady=5)
minus_button = ttk.Button(button_frame, text="-", command=increase_candles)
minus_button.pack(side=tk.LEFT, padx=5, pady=5)
left_button = ttk.Button(button_frame, text="←", command=move_left)
left_button.pack(side=tk.LEFT, padx=5, pady=5)
right_button = ttk.Button(button_frame, text="→", command=move_right)
right_button.pack(side=tk.LEFT, padx=5, pady=5)
# Info window
info_frame = ttk.Frame(root)
info_frame.pack(side=tk.BOTTOM, fill=tk.X)
# Sub-frame for the centered information
center_info_frame = ttk.Frame(info_frame)
center_info_frame.pack(side=tk.LEFT, expand=True)
info_window = tk.StringVar()
info_label = ttk.Label(center_info_frame, textvariable=info_window, justify=tk.LEFT)
info_label.pack(pady=10)
# Sub-frame for the X and Y values on the right
xy_info_frame = ttk.Frame(info_frame)
xy_info_frame.pack(side=tk.RIGHT)
x_value = tk.StringVar()
y_value = tk.StringVar()
x_label = ttk.Label(xy_info_frame, textvariable=x_value, justify=tk.LEFT)
y_label = ttk.Label(xy_info_frame, textvariable=y_value, justify=tk.LEFT)
x_label.pack(padx=10, anchor='w')
y_label.pack(padx=10, anchor='w')
update_chart()
root.mainloop()
``` | open | 2025-01-10T17:15:11Z | 2025-01-10T17:15:11Z | https://github.com/matplotlib/mplfinance/issues/689 | [
"question"
] | PYTHON-Deb | 0 |
NullArray/AutoSploit | automation | 1,100 | Unhandled Exception (d4d7593cf) | Autosploit version: `3.1.2`
OS information: `Linux-4.19.0-kali5-amd64-x86_64-with-Kali-kali-rolling-kali-rolling`
Running context: `autosploit.py -a -q *****`
Error mesage: `HTTPSConnectionPool(host='api.zoomeye.org', port=443): Max retries exceeded with url: /user/login (Caused by NewConnectionError('<urllib3.connection.VerifiedHTTPSConnection object at 0x7fd7dba836d0>: Failed to establish a new connection: [Errno -2] Name or service not known',))`
Error traceback:
```
Traceback (most recent call):
File "/home/pentest/Downloads/autosploit/autosploit/main.py", line 109, in main
AutoSploitParser().single_run_args(opts, loaded_tokens, loaded_exploits)
File "/home/pentest/Downloads/autosploit/lib/cmdline/cmd.py", line 209, in single_run_args
save_mode=search_save_mode
File "/home/pentest/Downloads/autosploit/api_calls/zoomeye.py", line 88, in search
raise AutoSploitAPIConnectionError(str(e))
errors: HTTPSConnectionPool(host='api.zoomeye.org', port=443): Max retries exceeded with url: /user/login (Caused by NewConnectionError('<urllib3.connection.VerifiedHTTPSConnection object at 0x7fd7dba836d0>: Failed to establish a new connection: [Errno -2] Name or service not known',))
```
Metasploit launched: `False`
| closed | 2019-06-01T09:18:28Z | 2019-07-25T15:44:39Z | https://github.com/NullArray/AutoSploit/issues/1100 | [] | AutosploitReporter | 0 |
dpgaspar/Flask-AppBuilder | rest-api | 1,937 | Can ModelView support three-level cascading (linked) selects? | closed | 2022-10-13T08:26:37Z | 2022-10-24T07:45:43Z | https://github.com/dpgaspar/Flask-AppBuilder/issues/1937 | [] | novechina | 0 |
plotly/dash-core-components | dash | 673 | Bug: Dropdowns with clearable=False are still clearable | Dropdowns with clearable=False are still clearable by hitting the Backspace or Delete key on the keyboard.
You can replicate the bug here in the Dropdown Clear section:
https://dash.plot.ly/dash-core-components/dropdown | closed | 2019-05-08T16:43:21Z | 2022-03-21T20:25:26Z | https://github.com/plotly/dash-core-components/issues/673 | [
"size: 2"
] | philspbr | 8 |
kymatio/kymatio | numpy | 403 | inverse of complex wavelet transform | I was wondering how I can invert the wavelet transform
```
U_r = pad(input)
U_0_c = fft(U_r, 'C2C')
# low pass filter
U_1_c = subsample_fourier(cdgmm(U_0_c, phi[0]), k=2**J)
U_J_r = fft(U_1_c, 'C2R')
# high pass filter
for n1 in range(len(psi)):
    j1 = psi[n1]['j']
    U_1_c = cdgmm(U_0_c, psi[n1][0])
    if j1 > 0:
        U_1_c = subsample_fourier(U_1_c, k=2 ** j1)
    U_1_c = fft(U_1_c, 'C2C', inverse=True)
```
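To clarify what I am trying to undo: my understanding is that `subsample_fourier` periodizes the spectrum (averaging the k blocks), which in the time domain is plain subsampling. So a true inverse only exists when the preceding filter has band-limited the signal; otherwise the aliasing is not recoverable. A numpy check of that equivalence, assuming the averaging form of the periodization:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(16)
X = np.fft.fft(x)

k = 2
# periodize the spectrum: average the k blocks of length N // k
Y = X.reshape(k, -1).mean(axis=0)
y = np.fft.ifft(Y)

# periodizing the spectrum equals time-domain subsampling
assert np.allclose(y, x[::k])
```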
I am specifically confused about what filters to use and how to invert subsamplings. | closed | 2019-06-25T23:15:29Z | 2019-06-25T23:30:08Z | https://github.com/kymatio/kymatio/issues/403 | [] | nshervt | 3 |
seleniumbase/SeleniumBase | web-scraping | 2,270 | Driver uc=True - You are using an unsupported command-line flag: --disable-setuid-sandbox. Stability and security will suffer. | ## Issue
On Linux, when using the Driver class with `uc=True` the following error is shown in Chrome:
`You are using an unsupported command-line flag: --disable-setuid-sandbox. Stability and security will suffer.`
## Test Code
The code used for testing is this:
```
from seleniumbase import Driver
driver = Driver(headed=True)
# driver = Driver(headed=True, uc=True)
driver.open("about:blank")
# driver.uc_open("about:blank")
driver.quit()
```
The commented-out lines are toggled for each test, as follows.
## Test 01
Lines used:
```
driver = Driver(headed=True)
driver.open("about:blank")
```
Result: ok
## Test 02
Lines used:
```
driver = Driver(headed=True, uc=True)
driver.open("about:blank")
```
Result: Warning is shown
## Test 03
Lines used:
```
driver = Driver(headed=True, uc=True)
driver.uc_open("about:blank")
```
Result: Warning is shown
## Screenshots
Ok. No warning (Test 01):

Warning message (Test 02, Test 03):

## More info
OS: Ubuntu 23.10
Python: 3.10.13
venv: conda
SB: 4.20.9
| closed | 2023-11-13T06:56:13Z | 2024-09-06T17:50:31Z | https://github.com/seleniumbase/SeleniumBase/issues/2270 | [
"question",
"UC Mode / CDP Mode"
] | pablorq | 1 |
clovaai/donut | computer-vision | 263 | Dataset Loader didn't work properly on Kaggle | Good afternoon,
This morning I was trying to run Donut on Kaggle. The structure of the dataset is similar to the one defined in the documentation. However, when I try to train the model, an error occurs saying that the "ground truth" didn't exist. Inspecting a sample shows that load_dataset recognizes the folder name as the label and ignores the metadata.jsonl file inside the folder.

I can read the jsonl file from the command line, though.

I prepare the Donut with this code:
```
!git clone https://github.com/clovaai/donut.git
!cd donut && pip install .
```
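For completeness, each line of my metadata.jsonl follows the layout from the Donut README, pairing a `file_name` with a `ground_truth` string; as far as I understand the HF `imagefolder` loader, the file is only picked up when that `file_name` key is present and points at an image relative to the folder (values below are illustrative):

```
{"file_name": "receipt_0001.jpg", "ground_truth": "{\"gt_parse\": {\"total\": \"9.99\"}}"}
```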
Thank you for your help | closed | 2023-10-25T03:46:36Z | 2023-10-25T12:42:00Z | https://github.com/clovaai/donut/issues/263 | [] | wdprsto | 1 |
clovaai/donut | computer-vision | 320 | High inference latency | I fine-tuned Donut on custom data, but inference takes 15 seconds on CPU (8 cores). I realized the generate() function is the one that takes too long. Are there any improvements that could be made to reduce this latency, please? Thanks | open | 2024-11-11T12:38:26Z | 2024-11-11T12:38:26Z | https://github.com/clovaai/donut/issues/320 | [] | Altimis | 0 |
numba/numba | numpy | 9,151 | 0.58.0rc1 old-style error capturing warning issues |
Issues:
- [x] Warnings about old-style error are raised regardless of whether the error is a `NumbaError`. (ref: https://matrix.to/#/!gFwSEpcuaweltInwDW:gitter.im/$5Mc1guc4peAryXvx3WjSWhD3TDxC_Nzb-mCQsl-s17k?via=gitter.im&via=matrix.org&via=matrix.kraut.space)
- [x] Reason of the warning is unclear (ref: https://numba.discourse.group/t/ann-numba-0-58-0rc1-and-llvmlite-0-41-0rc1/2078/2) | closed | 2023-08-22T15:14:26Z | 2023-09-05T21:57:46Z | https://github.com/numba/numba/issues/9151 | [
"bug",
"Task"
] | sklam | 9 |
amidaware/tacticalrmm | django | 1,932 | [BUG] {{agent.used_ram}} {{agent.plat_release}} {{agent.time_zone}} broken | **Server Info (please complete the following information):**
- OS: Debian 12
- Browser: Edge
- RMM Version (as shown in top left of web UI): 19.2
**Installation Method:**
- [ ] Standard
- [ ] Standard with `--insecure` flag at install
- [ ] Docker
**Agent Info (please complete the following information):**
- Agent version (as shown in the 'Summary' tab of the agent from web UI): Agent v2.7.0
- Agent OS: window server 2019/2022
**Describe the bug**
{{agent.used_ram}} output the hostname
{{agent.plat_release}} output the hostname
{{agent.time_zone}} output null
**To Reproduce**
Steps to reproduce the behavior:
Use the variables
**Expected behavior**
fields should be giving an expected output
**Additional context**
https://discord.com/channels/736478043522072608/744281869499105290/1265683298735099964
| closed | 2024-07-24T15:06:10Z | 2024-07-24T17:04:53Z | https://github.com/amidaware/tacticalrmm/issues/1932 | [] | P6g9YHK6 | 1 |
cvat-ai/cvat | computer-vision | 8,505 | UI does not check removed frames when a job is closed without saving, only annotations | ### Actions before raising this issue
- [X] I searched the existing issues and did not find anything similar.
- [X] I read/searched [the docs](https://docs.cvat.ai/docs/)
### Steps to Reproduce
_No response_
### Expected Behavior
_No response_
### Possible Solution
_No response_
### Context
_No response_
### Environment
_No response_ | open | 2024-10-03T11:26:22Z | 2024-10-03T11:26:22Z | https://github.com/cvat-ai/cvat/issues/8505 | [
"bug"
] | bsekachev | 0 |
autogluon/autogluon | computer-vision | 4,393 | [BUG] `CrostonOptimize` and `CrostonClassic` are not supported | **Bug Report Checklist**
<!-- Please ensure at least one of the following to help the developers troubleshoot the problem: -->
- [x] I provided code that demonstrates a minimal reproducible example. <!-- Ideal, especially via source install -->
- [x] I confirmed bug exists on the latest mainline of AutoGluon via source install. <!-- Preferred -->
- [ ] I confirmed bug exists on the latest stable version of AutoGluon. <!-- Unnecessary if prior items are checked -->
**Describe the bug**
<!-- A clear and concise description of what the bug is. -->
The models `CrostonOptimize` and `CrostonClassic` are documented but return an error:
```
ValueError: Model CrostonClassic is not supported yet
```
**Expected behavior**
<!-- A clear and concise description of what you expected to happen. -->
They should work.
**To Reproduce**
<!-- A minimal script to reproduce the issue. Links to Colab notebooks or similar tools are encouraged.
If the code is too long, feel free to put it in a public gist and link it in the issue: https://gist.github.com.
In short, we are going to copy-paste your code to run it and we expect to get the same result as you. -->
```python
import pandas as pd
from autogluon.timeseries import TimeSeriesDataFrame, TimeSeriesPredictor
df = pd.read_csv("https://autogluon.s3.amazonaws.com/datasets/timeseries/m4_hourly_subset/train.csv")
train_data = TimeSeriesDataFrame.from_data_frame(
df, id_column="item_id", timestamp_column="timestamp")
TimeSeriesPredictor(
prediction_length=48,
path="autogluon-m4-hourly",
target="target",
eval_metric="MASE",
).fit(
train_data,
presets="medium_quality",
hyperparameters={
# "CrostonSBA": {}, # works
# "CrostonOptimize": {}, # does not work
"CrostonClassic": {}, # does not work
},
)
```
**Installed Versions**
<!-- Please run the following code snippet: -->
<details>
```
INSTALLED VERSIONS
------------------
date : 2024-08-15
time : 11:28:40.074670
python : 3.11.9.final.0
OS : Linux
OS-release : 5.10.223-190.872.amzn2int.x86_64
Version : #1 SMP Tue Jul 30 16:40:37 UTC 2024
machine : x86_64
processor : x86_64
num_cores : 96
cpu_ram_mb : 380657.37890625
cuda version : None
num_gpus : 0
gpu_ram_mb : []
avail_disk_size_mb : 2855479
accelerate : 0.21.0
autogluon : 1.1.1
autogluon.common : 1.1.1
autogluon.core : 1.1.1
autogluon.features : 1.1.1
autogluon.multimodal : 1.1.1
autogluon.tabular : 1.1.1
autogluon.timeseries : 1.1.1
boto3 : 1.34.156
catboost : 1.2.5
defusedxml : 0.7.1
evaluate : 0.4.2
fastai : 2.7.16
gluonts : 0.15.1
hyperopt : 0.2.7
imodels : None
jinja2 : 3.1.3
joblib : 1.4.2
jsonschema : 4.21.1
lightgbm : 4.3.0
lightning : 2.3.3
matplotlib : 3.9.1.post1
mlforecast : 0.10.0
networkx : 3.2.1
nlpaug : 1.1.11
nltk : 3.8.1
nptyping : 2.4.1
numpy : 1.26.3
nvidia-ml-py3 : 7.352.0
omegaconf : 2.2.3
onnxruntime-gpu : None
openmim : 0.3.9
optimum : 1.17.1
optimum-intel : None
orjson : 3.10.6
pandas : 2.2.2
pdf2image : 1.17.0
Pillow : 10.2.0
psutil : 5.9.8
pytesseract : 0.3.10
pytorch-lightning : 2.3.3
pytorch-metric-learning: 2.3.0
ray : 2.10.0
requests : 2.32.3
scikit-image : 0.20.0
scikit-learn : 1.4.0
scikit-learn-intelex : None
scipy : 1.12.0
seqeval : 1.2.2
setuptools : 72.1.0
skl2onnx : None
statsforecast : 1.4.0
tabpfn : None
tensorboard : 2.17.0
text-unidecode : 1.3
timm : 0.9.16
torch : 2.3.1+cpu
torchmetrics : 1.2.1
torchvision : 0.18.1+cpu
tqdm : 4.66.5
transformers : 4.40.2
utilsforecast : 0.0.10
vowpalwabbit : None
xgboost : 2.0.3
```
</details>
| closed | 2024-08-15T11:29:23Z | 2024-10-22T12:13:36Z | https://github.com/autogluon/autogluon/issues/4393 | [
"bug",
"API & Doc",
"module: timeseries"
] | nathanaelbosch | 0 |
paperless-ngx/paperless-ngx | machine-learning | 7,383 | [BUG] Allow import of signed PDFs (by default) | ### Description
Preface:
Basically, there is already a (similar, now closed: #4047) issue on this topic. It also mentions a workaround that works. In my opinion, however, this workaround should not only be reserved for special cases, but should be included in the standard behavior of Paperless.
In view of the fact that more and more PDFs will contain a digital signature in the future, the problem described here will become increasingly relevant.
In a normal Paperless installation, the import of a digitally signed PDF is aborted with an error message: `DigitalSignatureError: Input PDF has a digital signature. OCR would alter the document, invalidating the signature.
`
The cause lies in the "OCRmyPDF" module used by Paperless for OCR, which by default prefers not to use OCR in order to preserve the signature.
In the context of Paperless, however, this decision is to be evaluated differently: Since Paperless additionally saves the original PDF (with the digital signature) anyway, the loss of the digital signature in a copy of the document (in favor of OCR in this version) is not a problem.
Paperless should therefore instruct the OCRmyPDF module by default not to take any existing digital signature into account during OCR. This is easily possible with a command line parameter.
@tomfitzhenry recommends in https://github.com/paperless-ngx/paperless-ngx/discussions/4047#discussioncomment-7019544 to set ENV `PAPERLESS_OCR_USER_ARGS` to `{"invalidate_digital_signatures": true}`, but this seems (as the further discussion there shows, due to the required escaping of quotation marks) sometimes very difficult in the different Docker configurations.
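For anyone else stuck on the quoting in the meantime, a docker-compose sketch of that workaround (quoting rules can differ between setups, so treat this as an illustration):

```yaml
environment:
  PAPERLESS_OCR_USER_ARGS: '{"invalidate_digital_signatures": true}'
```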
So my suggestion is to set this value as the default, unless the user explicitly sets it differently.
### Steps to reproduce
1. Spin up a fresh instance of paperless
2. upload a digitally signed PDF
### Webserver logs
```bash
[2024-08-02 18:51:17,841] [ERROR] [paperless.tasks] ConsumeTaskPlugin failed: Antrag.pdf: Error occurred while consuming document Antrag.pdf: DigitalSignatureError: Input PDF has a digital signature. OCR would alter the document,
invalidating the signature.
Traceback (most recent call last):
File "/usr/src/paperless/src/paperless_tesseract/parsers.py", line 368, in parse
ocrmypdf.ocr(**args)
File "/usr/local/lib/python3.11/site-packages/ocrmypdf/api.py", line 375, in ocr
return run_pipeline(options=options, plugin_manager=plugin_manager)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/ocrmypdf/_pipelines/ocr.py", line 225, in run_pipeline
return _run_pipeline(options, plugin_manager)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/ocrmypdf/_pipelines/ocr.py", line 189, in _run_pipeline
validate_pdfinfo_options(context)
File "/usr/local/lib/python3.11/site-packages/ocrmypdf/_pipeline.py", line 202, in validate_pdfinfo_options
raise DigitalSignatureError()
ocrmypdf.exceptions.DigitalSignatureError: Input PDF has a digital signature. OCR would alter the document,
invalidating the signature.
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/src/paperless/src/paperless_tesseract/parsers.py", line 419, in parse
ocrmypdf.ocr(**args)
File "/usr/local/lib/python3.11/site-packages/ocrmypdf/api.py", line 375, in ocr
return run_pipeline(options=options, plugin_manager=plugin_manager)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/ocrmypdf/_pipelines/ocr.py", line 225, in run_pipeline
return _run_pipeline(options, plugin_manager)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/ocrmypdf/_pipelines/ocr.py", line 189, in _run_pipeline
validate_pdfinfo_options(context)
File "/usr/local/lib/python3.11/site-packages/ocrmypdf/_pipeline.py", line 202, in validate_pdfinfo_options
raise DigitalSignatureError()
ocrmypdf.exceptions.DigitalSignatureError: Input PDF has a digital signature. OCR would alter the document,
invalidating the signature.
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/usr/local/lib/python3.11/site-packages/asgiref/sync.py", line 327, in main_wrap
raise exc_info[1]
File "/usr/src/paperless/src/documents/consumer.py", line 582, in run
document_parser.parse(self.working_copy, mime_type, self.filename)
File "/usr/src/paperless/src/paperless_tesseract/parsers.py", line 431, in parse
raise ParseError(f"{e.__class__.__name__}: {e!s}") from e
documents.parsers.ParseError: DigitalSignatureError: Input PDF has a digital signature. OCR would alter the document,
invalidating the signature.
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/usr/src/paperless/src/documents/tasks.py", line 151, in consume_file
msg = plugin.run()
^^^^^^^^^^^^
File "/usr/src/paperless/src/documents/consumer.py", line 613, in run
self._fail(
File "/usr/src/paperless/src/documents/consumer.py", line 302, in _fail
raise ConsumerError(f"{self.filename}: {log_message or message}") from exception
documents.consumer.ConsumerError: Antrag.pdf: Error occurred while consuming document Antrag.pdf: DigitalSignatureError: Input PDF has a digital signature. OCR would alter the document, invalidating the signature.
```
### Browser logs
_No response_
### Paperless-ngx version
2.11.1
### Host OS
Debian 12 Bookworm
### Installation method
Docker - official image
### System status
_No response_
### Browser
Chrome
### Configuration changes
_No response_
### Please confirm the following
- [X] I believe this issue is a bug that affects all users of Paperless-ngx, not something specific to my installation.
- [X] I have already searched for relevant existing issues and discussions before opening this report.
- [X] I have updated the title field above with a concise description. | closed | 2024-08-04T11:19:04Z | 2024-10-02T03:07:30Z | https://github.com/paperless-ngx/paperless-ngx/issues/7383 | [
"not a bug"
] | chrisblech | 7 |
zappa/Zappa | django | 825 | [Migrated] Can we have classes on "events[].app_function"? | Originally from: https://github.com/Miserlou/Zappa/issues/2064 by [jourdanrodrigues](https://github.com/jourdanrodrigues)
Some event handlers I need to write are not simple, and I wish I could spread their logic across class methods. The error I get when I try to do this is the following:
`Function signature is invalid. Expected a function that accepts at most 2 arguments or varargs.`
Considering the example below, the issue is that `self` is taken into account:
```python
class EventMetaclass(type):
def __init__(cls, *args, **kwargs):
super().__init__(*args, **kwargs)
original_handle = cls.handle
@wraps(original_handle)
def handle(self):
if self.should_handle():
original_handle(self)
cls.handle = handle
class Event(metaclass=EventMetaclass):
__payload = None
def __init__(self, event: dict, context):
self.event = event
self.context = context
self.__log_event()
self.__payload = self.extract_payload(self.event)
self.handle()
def __log_event(self) -> None:
event = json.dumps(self.event)
context_vars = vars(self.context)
context = json.dumps({key: value for key, value in context_vars.items() if key != 'identity'})
logger.info(f'Triggered {type(self).__name__};\nPayload: {event};\nContext: {context}')
@staticmethod
def extract_payload(event: dict) -> dict:
"""
Static to allow usage outside of this class' context e.g. in a proxy
"""
return json.loads(event['Records'][0]['Sns']['Message'])
def handle(self) -> None:
raise NotImplementedError()
def should_handle(self) -> bool:
return True
def get_payload(self) -> dict:
return {**self.__payload}
```
With that, I could pack some common event logic, make things more reusable and split (in my perspective), like:
```python
class CreatedEvent(Event):
def should_handle(self):
return self.get_payload().get('created_successfully')
def handle(self):
# Execute stuff
pass
```
My workaround is to create a function that turns that class into a function and using it in the `zappa_settings.json`:
```python
...
class Event(metaclass=EventMetaclass):
...
@classmethod
def as_function(cls):
def handle_event(event: dict, context):
cls(event, context).handle()
return handle_event
class CreatedEvent(Event):
...
created_event = CreatedEvent.as_function()
```
I'll try to find some time in the upcoming days to get a PR to address this, but I tried a bit today and, despite it seems simple, there's some tricky handling that needs to be done. | closed | 2021-02-20T12:52:08Z | 2024-04-13T19:10:07Z | https://github.com/zappa/Zappa/issues/825 | [
"no-activity",
"auto-closed"
] | jneves | 2 |
onnx/onnx | tensorflow | 6,748 | Change PyTorch forward pass parameters at inference time | # Ask a Question
### Question
In the code example below we can change the `deterministic` parameter when we export the model, but then it remains fixed. Is there a way to change that parameter during inference?
### Code
```python
import torch as th
import numpy as np
import onnxruntime as ort
import gymnasium as gym
from typing import Tuple
from stable_baselines3 import PPO
from stable_baselines3.common.policies import BasePolicy
from stable_baselines3.common.utils import obs_as_tensor
class Custom(gym.Env):
def __init__(self):
super().__init__()
self.observation_space = gym.spaces.Dict(
{
"a": gym.spaces.Box(low=-1, high=1, shape=(3,), dtype=np.float32),
"b": gym.spaces.Box(low=-1, high=1, shape=(6,), dtype=np.float32),
"c": gym.spaces.Box(low=-1, high=1, shape=(9,), dtype=np.float32),
}
)
self.action_space = gym.spaces.Box(low=-1, high=1, shape=(10,), dtype=np.float32)
def reset(self, seed=None):
return self.observation_space.sample(), {}
def step(self, action):
return self.observation_space.sample(), 0.0, False, False, {}
class OnnxableSB3Policy(th.nn.Module):
def __init__(self, policy: BasePolicy):
super().__init__()
self.policy = policy
def forward(self, observation: th.Tensor, deterministic=True) -> Tuple[th.Tensor, th.Tensor, th.Tensor]:
return self.policy(observation, deterministic=deterministic)
env = Custom()
obs, _ = env.reset()
obs = {k: v.reshape(1, -1) for k, v in obs.items()}
PPO("MultiInputPolicy", env).save("ppo_model")
model = PPO.load("ppo_model.zip", device="cpu")
# Convert to ONNX
onnx_policy = OnnxableSB3Policy(model.policy)
obs_tensor = obs_as_tensor(obs, model.policy.device)
th.onnx.export(
onnx_policy,
args=(obs_tensor, {"deterministic": True}),
f="agent.onnx",
)
# Start session
ort_session = ort.InferenceSession("agent.onnx")
onnxruntime_input = {k.name: v for k, v in zip(ort_session.get_inputs(), obs.values())}
# Run
onnxruntime_outputs, _, _ = ort_session.run(None, onnxruntime_input)
onnxruntime_outputs
```
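My mental model of why the flag freezes (a plain-Python analogy, not ONNX — please correct me if this is wrong): export-time tracing captures Python-level values the way a closure captures an argument, so only tensor inputs remain variable afterwards. One workaround I suspect would help (untested against this policy) is passing the flag as an additional graph input rather than a Python bool:

```python
def export(deterministic):
    # Tracing freezes Python-level values: the branch taken during export
    # is the only branch the exported "graph" can ever take.
    frozen = deterministic
    def exported_graph(obs):
        return max(obs) if frozen else obs[0]
    return exported_graph

graph = export(deterministic=True)
print(graph([0.2, 0.9, 0.1]))  # → 0.9 (always the deterministic branch)

def graph_with_flag_input(obs, deterministic):
    # If the flag is itself an input, it can vary per inference call.
    return max(obs) if deterministic else obs[0]

print(graph_with_flag_input([0.2, 0.9, 0.1], False))  # → 0.2
```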
| open | 2025-03-02T03:46:29Z | 2025-03-03T23:54:11Z | https://github.com/onnx/onnx/issues/6748 | [
"question"
] | darkopetrovic | 1 |
horovod/horovod | pytorch | 4,005 | [Volcano] Error using horovod with Volcano cluster | **Environment:**
1. Framework: pytorch
2. Framework version: 1.10.0
3. Horovod version: 0.28.1
4. MPI version: 4.0.2
5. CUDA version: 11.0
6. NCCL version: 2.19.3-1
7. Python version: 3.7.11
8. OS and version: ubuntu 18.04 (in docker container)
9. GCC version: 7.5.0
**Bug report:**
I am running distributed training on a GPU cluster with the Volcano system; here's what I got:
```bash
# command
horovodrun --verbose -np 4 -H job-170013380772780309415-yihua-zhou-master-0.job-170013380772780309415-yihua-zhou:1,job-170013380772780309415-yihua-zhou-worker-0.job-170013380772780309415-yihua-zhou:1,job-170013380772780309415-yihua-zhou-worker-1.job-170013380772780309415-yihua-zhou:1,job-170013380772780309415-yihua-zhou-worker-2.job-170013380772780309415-yihua-zhou:1 -p 6666 \
python -c "import horovod.torch as hvd; hvd.init(); print(hvd.rank(), hvd.local_rank(), hvd.size())"
# error_message
Filtering local host names.
Remote host found: job-170013380772780309415-yihua-zhou-worker-0.job-170013380772780309415-yihua-zhou job-170013380772780309415-yihua-zhou-worker-1.job-170013380772780309415-yihua-zhou job-170013380772780309415-yihua-zhou-worker-2.job-170013380772780309415-yihua-zhou
Checking ssh on all remote hosts.
SSH was successful into all the remote hosts.
Testing interfaces on all the hosts.
Interfaces on all the hosts were successfully checked.
Common interface found: eth0
mpirun --allow-run-as-root --tag-output -np 4 -H job-170013380772780309415-yihua-zhou-master-0.job-170013380772780309415-yihua-zhou:1,job-170013380772780309415-yihua-zhou-worker-0.job-170013380772780309415-yihua-zhou:1,job-170013380772780309415-yihua-zhou-worker-1.job-170013380772780309415-yihua-zhou:1,job-170013380772780309415-yihua-zhou-worker-2.job-170013380772780309415-yihua-zhou:1 -bind-to none -map-by slot -mca pml ob1 -mca btl ^openib -mca plm_rsh_args "-p 6666" -mca btl_tcp_if_include eth0 -x NCCL_SOCKET_IFNAME=eth0 -x CUDAToolkit_ROOT -x CUDA_HOME -x DEBIAN_FRONTEND -x HOME -x HOROVOD_CUDA_HOME -x HOROVOD_NCCL_HOME -x HOSTNAME -x JobID -x KUBERNETES_PORT -x KUBERNETES_PORT_443_TCP -x KUBERNETES_PORT_443_TCP_ADDR -x KUBERNETES_PORT_443_TCP_PORT -x KUBERNETES_PORT_443_TCP_PROTO -x KUBERNETES_SERVICE_HOST -x KUBERNETES_SERVICE_PORT -x KUBERNETES_SERVICE_PORT_HTTPS -x LC_CTYPE -x LD_LIBRARY_PATH -x LOG_FILE -x MASTER_ADDR -x MASTER_PORT -x MKL_NUM_THREADS -x NCCL_DEBUG -x NCCL_IB_DISABLE -x NCCL_SOCKET_IFNAME -x NUMEXPR_NUM_THREADS -x NVIDIA_DRIVER_CAPABILITIES -x NVIDIA_VISIBLE_DEVICES -x OMP_NUM_THREADS -x OPENBLAS_NUM_THREADS -x PARENT_ENV -x PATH -x PWD -x PYTORCH_VERSION -x RANK -x SHLVL -x TERM -x TZ -x USER -x VC_MASTER_HOSTS -x VC_MASTER_NUM -x VC_WORKER_HOSTS -x VC_WORKER_NUM -x VECLIB_NUM_THREADS -x WORLD_SIZE -x _ python -c 'import horovod.torch as hvd; hvd.init(); print(hvd.rank(), hvd.local_rank(), hvd.size())'
ssh: Could not resolve hostname job-170013380772780309415-yihua-zhou-worker-0: Name or service not known
--------------------------------------------------------------------------
ORTE was unable to reliably start one or more daemons.
This usually is caused by:
* not finding the required libraries and/or binaries on
one or more nodes. Please check your PATH and LD_LIBRARY_PATH
settings, or configure OMPI with --enable-orterun-prefix-by-default
* lack of authority to execute on one or more specified nodes.
Please verify your allocation and authorities.
* the inability to write startup files into /tmp (--tmpdir/orte_tmpdir_base).
Please check with your sys admin to determine the correct location to use.
* compilation of the orted with dynamic libraries when static are required
(e.g., on Cray). Please check your configure cmd line and consider using
one of the contrib/platform definitions for your system type.
* an inability to create a connection back to mpirun due to a
lack of common network interfaces and/or no route found between
them. Please check network connectivity (including firewalls
and network routing requirements).
--------------------------------------------------------------------------
--------------------------------------------------------------------------
ORTE does not know how to route a message to the specified daemon
located on the indicated node:
my node: job-170013380772780309415-yihua-zhou-master-0
target node: job-170013380772780309415-yihua-zhou-worker-1
This is usually an internal programming error that should be
reported to the developers. In the meantime, a workaround may
be to set the MCA param routed=direct on the command line or
in your environment. We apologize for the problem.
--------------------------------------------------------------------------
[job-170013380772780309415-yihua-zhou-master-0:00851] 1 more process has sent help message help-errmgr-base.txt / no-path
[job-170013380772780309415-yihua-zhou-master-0:00851] Set MCA parameter "orte_base_help_aggregate" to 0 to see all help / error messages
```
| closed | 2023-11-16T12:52:49Z | 2023-11-17T02:19:21Z | https://github.com/horovod/horovod/issues/4005 | [
"bug"
] | SimZhou | 5 |
D4Vinci/Scrapling | web-scraping | 47 | Page With Cloudflare Verification Never Loads | ### What would you like to share?
Not sure if this is a bug or I'm just not using the framework properly, so I'm opening it as an issue instead. I'm using the following code to load a page (I've provided the URL previously over email):
`StealthyFetcher().fetch(url, wait_selector='.place-list', timeout=120000)`
Even with the 2-minute timeout, the page fails to load. My suspicion is it's stalling out at the Cloudflare verification. Could you please help me out?
### Additional information
_No response_ | closed | 2025-03-08T19:13:55Z | 2025-03-09T16:45:05Z | https://github.com/D4Vinci/Scrapling/issues/47 | [
"enhancement",
"invalid",
"cloudflare"
] | mrcolumbia | 4 |
litestar-org/litestar | api | 3,485 | Bug: `return_dto` is silently ignored if return data type does not match DTO definition | ### Description
Below is a minimal example that showcases the issue.
The context in which this came up for me is that I'm trying to port an existing app which uses FastAPI/SQLModel to Litestar. As illustrated in the SQLModel docs (for example [here](https://sqlmodel.tiangolo.com/tutorial/fastapi/multiple-models/#multiple-models-with-inheritance)), I have several pydantic models for read/write/db operations - most of which I would like to ditch in favour of Litestar's DTOs. I accidentally defined my return DTOs using the wrong pydantic model and things weren't working as expected - it would simply ignore the DTO and return the full, unfiltered data.
Since I'm new to the concept of DTOs it took me a while to figure out what the cause of the issue was. For a new user like me it would be very helpful to have Litestar print a prominent warning (if not abort with an error).
### URL to code causing the issue
_No response_
### MCVE
```python
from pydantic import BaseModel
from litestar import Litestar, get
from litestar.dto import DTOConfig
from litestar.contrib.pydantic import PydanticDTO
class MyMessage(BaseModel):
msg: str
priority: int
class MyMessage2(BaseModel):
msg: str
priority: int
class ReadDTO(PydanticDTO[MyMessage]):
config = DTOConfig(exclude={"priority"})
@get("/", return_dto=ReadDTO)
async def hello_world() -> MyMessage:
"""Handler function that returns a greeting dictionary."""
return MyMessage(msg="Hello world", priority=1)
app = Litestar(route_handlers=[hello_world])
```
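(For readers as new to DTOs as I was, my mental model of what `return_dto` should do is just a field filter over the handler's return value — a plain-dict sketch, not Litestar code — which is why silently skipping it on a generic-parameter mismatch is so surprising:)

```python
def apply_return_dto(data: dict, exclude: set) -> dict:
    # Rough mental model of DTOConfig(exclude=...): drop the listed fields.
    return {key: value for key, value in data.items() if key not in exclude}

payload = {"msg": "Hello world", "priority": 1}
print(apply_return_dto(payload, exclude={"priority"}))  # → {'msg': 'Hello world'}
```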
### Steps to reproduce
```bash
1. Save the above MCVE as `app.py` and run it via `litestar run`
2. `curl http://127.0.0.1:8008` returns the expected output: `{"msg":"Hello world"}` (note that the `priority` field is correctly stripped by the `return_dto`).
3. In the definition of `ReadDTO`, change `PydanticDTO[MyMessage]` to `PydanticDTO[MyMessage2]`.
4. `curl http://127.0.0.1:8008` now returns the _full_ dictionary including the `priority` field: `{"msg": "Hello world", "priority": 1}`.
This is of course due to the fact that the `hello_world` handler constructs an instance of `MyMessage` while the return DTO is parameterised on `MyMessage2`.
However, there is no warning or any indication that there is a mismatch. Instead, the return DTO is just silently ignored and the full data is returned.
```
### Screenshots
```bash
""
```
### Logs
_No response_
### Litestar Version
```
$ litestar version
2.8.3final0
```
### Platform
- [ ] Linux
- [X] Mac
- [ ] Windows
- [ ] Other (Please specify in the description above) | open | 2024-05-10T11:00:27Z | 2025-03-20T15:54:42Z | https://github.com/litestar-org/litestar/issues/3485 | [
"Bug :bug:"
] | maxalbert | 1 |
MilesCranmer/PySR | scikit-learn | 630 | [Feature]: How to get the value of a constant in a fixed expression | ### Feature Request
Hi, thanks for developing the helpful tool.
I want to use pysr to get the values of the constants C1 and C2 in the following fixed expression:
y=(1-C1)*x1/(x2+1)+x3/(C2+1)
x1,x2,x3,y are known data sets.
How should I use pysr to get the constant C1 and C2?
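For context, here is what I can already do outside PySR: for this particular form the constants enter linearly after a change of variables (with u = x1/(x2+1), the model is y = a·u + b·x3 where a = 1-C1 and b = 1/(C2+1)), so plain least squares recovers them — shown below with synthetic data. I'm asking how to express the same fixed-expression constraint within PySR itself:

```python
def fit_constants(x1, x2, x3, y):
    # After substituting u = x1/(x2+1), the model is y = a*u + b*x3
    # with a = 1 - C1 and b = 1/(C2 + 1): linear in (a, b).
    u = [x1i / (x2i + 1) for x1i, x2i in zip(x1, x2)]
    v = x3
    suu = sum(ui * ui for ui in u)
    svv = sum(vi * vi for vi in v)
    suv = sum(ui * vi for ui, vi in zip(u, v))
    suy = sum(ui * yi for ui, yi in zip(u, y))
    svy = sum(vi * yi for vi, yi in zip(v, y))
    det = suu * svv - suv * suv            # 2x2 normal equations
    a = (suy * svv - svy * suv) / det
    b = (svy * suu - suy * suv) / det
    return 1 - a, 1 / b - 1                # back-substitute to C1, C2

# Synthetic data generated with C1 = 0.3, C2 = 2.0:
x1 = [1.0, 2.0, 3.0, 4.0]
x2 = [0.5, 1.0, 0.0, 2.0]
x3 = [2.0, 1.0, 4.0, 3.0]
y = [(1 - 0.3) * a / (b + 1) + c / (2.0 + 1) for a, b, c in zip(x1, x2, x3)]
c1, c2 = fit_constants(x1, x2, x3, y)
print(round(c1, 6), round(c2, 6))  # → 0.3 2.0
```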
Thanks! | closed | 2024-05-24T05:06:43Z | 2024-05-24T08:27:33Z | https://github.com/MilesCranmer/PySR/issues/630 | [
"enhancement"
] | ccclalala123 | 0 |
flairNLP/flair | nlp | 2,912 | Spacing injected by Flair into Named Entities skewing reported metrics | When doing sequence tagging for Named Entities, Flair is injecting spaces around punctuation inside the Span itself (which I suspect is due to the tokenization being applied). I previously reported this as a question (https://github.com/flairNLP/flair/issues/2908), but have since learned it is also skewing reported metrics, as it causes a string comparison of the Entity to NOT match, and thus falsely penalizes the metrics reported.
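To make the skew concrete (using the examples that follow): an exact string comparison fails on the injected spaces even when the predicted span is right, while a whitespace-insensitive comparison — one possible evaluation-side workaround, not a flair API — still matches:

```python
def same_entity(predicted: str, gold: str) -> bool:
    # Ignore whitespace so tokenizer-injected spaces don't count as errors.
    return predicted.replace(" ", "") == gold.replace(" ", "")

assert "Peter T . Pan Jr ." != "Peter T. Pan Jr."        # naive comparison: false miss
assert same_entity("Peter T . Pan Jr .", "Peter T. Pan Jr.")
assert same_entity("A + Flying", "A+Flying")
assert same_entity("@ __ peterpan __", "@__peterpan__")
print("all spacing variants matched")
```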
For example,
when tagging people:
`Peter T. Pan Jr.`
Flair alters the string to:
`Peter T . Pan Jr .`
when tagging organizations:
`A+Flying`
Flair alters the string to:
`A + Flying`
when tagging user handles:
`@__peterpan__`
Flair alters the string to:
`@ __ peterpan __` | closed | 2022-08-17T19:50:02Z | 2023-01-07T13:48:26Z | https://github.com/flairNLP/flair/issues/2912 | [
"bug",
"wontfix"
] | None-Such | 1 |
kizniche/Mycodo | automation | 1,036 | Feature Request: Setpoint Tracking for Bang-Bang Hysteretic Controller | _**Is your feature request related to a problem? Please describe.**_
I really like the setpoint tracking option for PID Controllers and want to find more ways to control outputs based on time of day. However, I can't find a way to apply the Method I created for setpoint tracking to anything other than a PID setpoint. In particular, I'd like to be able to use the Method with a Bang-Bang Hysteretic Controller, because it seems like some of the outputs I'm trying to control aren't really good candidates for a PID.
_**Describe the solution you'd like**_
A setpoint tracking option for Bang-Bang Hysteretic Controllers
**Describe alternatives you've considered**
I've tried to find a way to include Methods in conditional controllers to achieve a similar result. The only feasible way I could see was to also create a PID controller, enable setpoint tracking for that controller using the Method I created, and then add the PID setpoint as one of the Measurements for the custom Conditional. This seems like an unnecessarily complicated and inefficient solution. Especially if one wants to create multiple controllers this way, you'd need to use multiple "dummy" PIDs that were blank apart from including information on setpoint tracking, creating a lot of unnecessary clutter.
| open | 2021-06-22T17:31:18Z | 2021-07-06T19:03:41Z | https://github.com/kizniche/Mycodo/issues/1036 | [] | DarthPleurotus | 2 |
slackapi/python-slack-sdk | asyncio | 1,357 | Unable to import module 'main': No module named 'slack_sdk' AWS Lambda Python 3.10 | I am trying to deploy an AWS Lambda function with the following code:
main.py
```python
import os
import slack_sdk
slackBotToken = os.environ["BOT_TOKEN"]
slack_client = slack_sdk.WebClient(token=slackBotToken)
```
requirements.txt:
slack_sdk
### Expected result:
It should initialize the Slack client so that I can then work with the Slack APIs to handle events.
### Actual result:
Errors in Cloud Watch Logs:
```
2023-04-23T21:26:58.505+02:00 [ERROR] Runtime.ImportModuleError: Unable to import module 'main': No module named 'slack_sdk' Traceback (most recent call last):
``` | closed | 2023-04-23T19:35:00Z | 2023-06-12T00:03:57Z | https://github.com/slackapi/python-slack-sdk/issues/1357 | [
"question",
"web-client",
"Version: 3x",
"auto-triage-stale"
] | ankit-ghub | 3 |
Yorko/mlcourse.ai | scikit-learn | 761 | The dataset for lecture 5 practical is not available | I was watching the practical of lecture 5 and I like to code along with the video, but the dataset used in the video is not available anywhere. It's not present in your Kaggle nor in the GitHub repo.
I would humbly request you to upload it. | closed | 2024-06-19T05:17:00Z | 2024-06-25T12:09:46Z | https://github.com/Yorko/mlcourse.ai/issues/761 | [] | Arnax308 | 1 |
chatanywhere/GPT_API_free | api | 207 | The ZoteroGPT plugin suddenly stopped working | **Describe the bug**
After installing the GPT plugin in Zotero, calling the chatanywhere API endpoint produces an error, while Immersive Translate works normally at the same time.
Error message:
Something went wrong; here is the error output:
HTTP POST https://api.chatanywhere.tech/v1/chat/completions failed with status code 400: {"error":{"message":"Invalid value for 'content': expected a string, got null.","type":"invalid_request_error","param":"messages.[2].content","code":null}}
If this is a network issue, improving the network is suggested, e.g. by enabling or switching a proxy. | closed | 2024-04-02T03:14:55Z | 2024-04-06T10:32:33Z | https://github.com/chatanywhere/GPT_API_free/issues/207 | [] | ComradePenguin-1917 | 1 |
redis/redis-om-python | pydantic | 296 | modify ci flows to invoke | open | 2022-07-07T07:35:40Z | 2022-07-07T07:35:40Z | https://github.com/redis/redis-om-python/issues/296 | [
"maintenance"
] | chayim | 0 | |
Kanaries/pygwalker | pandas | 132 | v0.1.9.1 to_html bug | Hi, I've tried to insert pygwalker into a Django div with mark_safe, and the page keeps loading forever. Looking at the JavaScript console, it throws this error twice: `TypeError: t is undefined`
This is my view:
```
from django.shortcuts import render
import pandas as pd
import pygwalker as pyg
from django.utils.safestring import mark_safe
def test(request):
df = pd.read_csv("https://kanaries-app.s3.ap-northeast-1.amazonaws.com/public-datasets/bike_sharing_dc.csv",
parse_dates=['date'])
div = mark_safe(pyg.to_html(df))
return render(request, 'tests.html', {'div': div})
```
And in the template I simply do `{{ div }}`
Could you help me with this? Thanks
_Originally posted by @P3RI9 in https://github.com/Kanaries/pygwalker/issues/94#issuecomment-1597042897_
| closed | 2023-06-19T13:51:08Z | 2023-07-03T03:42:54Z | https://github.com/Kanaries/pygwalker/issues/132 | [
"bug",
"fixed but needs feedback",
"P1"
] | longxiaofei | 4 |
marcomusy/vedo | numpy | 410 | PLY with texture | So I have 2 meshes, one of the entire object and one which is a snip of the object which I have colored based on temperature interpolation. When I try to export to PLY, I'm unable to preserve the color on the section when merging the meshes. Any suggestions on how to do this? Thanks. | closed | 2021-06-09T14:59:39Z | 2021-06-10T12:47:53Z | https://github.com/marcomusy/vedo/issues/410 | [] | dodointhesnow | 2 |
pennersr/django-allauth | django | 3,676 | The email_confirmed signal not sent when email verified via admin | The admin action to make an email address verified should fire the `email_confirmed` signal, [but doesn't](https://github.com/pennersr/django-allauth/blob/main/allauth/account/admin.py#L21). | closed | 2024-03-09T14:38:03Z | 2024-09-10T10:29:07Z | https://github.com/pennersr/django-allauth/issues/3676 | [] | boosh | 1 |
healthchecks/healthchecks | django | 981 | Unable to use different domains for web GUI and pings? | Until recently I was using the single domain _old.domain.tld_ for both the web GUI and for pings, but now I want to split this up:
* _new.domain.tld_ for the web GUI
* _old.domain.tld_ for the pings (so I don't have to update any of my scripts)
This is what I'm trying:
**Caddyfile**:
```
old.domain.tld {
import cloudflare
reverse_proxy /ping/* healthchecks_web:8000
redir https://new.domain.tld{uri}
}
new.domain.tld {
import cloudflare
reverse_proxy healthchecks_web:8000
}
```
**.env.production**:
```
ALLOWED_HOSTS=new.domain.tld,old.domain.tld
SITE_ROOT=https://new.domain.tld
PING_ENDPOINT=https://old.domain.tld/ping/
```
This works:
* webGUI is available on _new.domain.tld_
* when I accidentally surf to _old.domain.tld_ this gets redirected to _new.domain.tld_
This fails:
* all of my pings no longer work with this setup
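One thing I suspect (unverified — an assumption worth testing): Caddy applies directives in a fixed predefined order in which `redir` runs before `reverse_proxy`, so my unconditional redirect may be shadowing the `/ping/*` proxy in the same site block. Perhaps something like mutually exclusive `handle` blocks is needed to make the precedence explicit:

```
old.domain.tld {
    import cloudflare
    handle /ping/* {
        reverse_proxy healthchecks_web:8000
    }
    handle {
        redir https://new.domain.tld{uri}
    }
}
```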
What am I doing wrong? | closed | 2024-03-27T22:09:43Z | 2024-03-28T22:04:34Z | https://github.com/healthchecks/healthchecks/issues/981 | [] | pro-sumer | 2 |
tpvasconcelos/ridgeplot | plotly | 181 | Plot not showing up in Jupyter notebook (VSCode) | Hi,
When I run the code from the example in my Jupyter Notebook using VSCode, I do not see any output:
<img width="1165" alt="image" src="https://github.com/tpvasconcelos/ridgeplot/assets/68286264/73661c19-8ced-457e-a961-fec187907e91">
Do you maybe know what might be causing this? | closed | 2024-03-21T09:34:34Z | 2024-10-08T14:27:34Z | https://github.com/tpvasconcelos/ridgeplot/issues/181 | [
"question"
] | krunolp | 2 |
ivy-llc/ivy | numpy | 28,397 | Fix Ivy Failing Test: tensorflow - statistical.cummin | closed | 2024-02-22T21:05:33Z | 2024-02-26T06:24:21Z | https://github.com/ivy-llc/ivy/issues/28397 | [
"Sub Task"
] | samthakur587 | 0 | |
scikit-multilearn/scikit-multilearn | scikit-learn | 154 | Cannot import 'NetworkXLabelGraphClusterer' | I am just trying to import NetworkXLabelGraphClusterer using scikit-multilearn==0.2.0 but I get this error as it is not in the package: "cannot import name 'NetworkXLabelGraphClusterer'"
I am just running the code you provide in "5.2.2. Estimating hyper-parameter k for embedded classifiers"
Best,
Giammy | closed | 2019-01-23T18:12:14Z | 2023-03-14T16:46:30Z | https://github.com/scikit-multilearn/scikit-multilearn/issues/154 | [] | ghiander | 4 |
kubeflow/katib | scikit-learn | 2,388 | Refactor `/pkg/webhook/v1beta1/pod/inject_webhook_test.go` according to Developer Guide. | ### What you would like to be added?
Refactor `/pkg/webhook/v1beta1/pod/inject_webhook_test.go` to obey rules defined in the [Developer Guide](https://github.com/kubeflow/katib/blob/master/docs/developer-guide.md).
### Why is this needed?
Currently, testcases in `/pkg/webhook/v1beta1/pod/inject_webhook_test.go` are defined in slices and use `reflect.Equal` for comparison. It's not recommended according to the Developer Guide. We should:
1. Use [cmp.Diff](https://pkg.go.dev/github.com/google/go-cmp/cmp#Diff) instead of reflect.Equal, to provide useful comparisons.
2. Define test cases as maps instead of slices to avoid dependencies on the running order. Map key should be equal to the test case name.
### Love this feature?
Give it a 👍 We prioritize the features with most 👍 | closed | 2024-07-16T11:46:50Z | 2024-08-16T18:46:30Z | https://github.com/kubeflow/katib/issues/2388 | [
"area/testing",
"kind/feature"
] | Electronic-Waste | 2 |
allenai/allennlp | nlp | 5,508 | `conda install allennlp==2.8.0 -c conda-forge` hangs forever | When I run `conda install allennlp==2.8.0 -c conda-forge` in an environment with Python 3.8, it hangs forever.
@h-vetinari, do you know anything about that? | closed | 2021-12-11T02:07:18Z | 2022-06-01T22:14:55Z | https://github.com/allenai/allennlp/issues/5508 | [
"bug",
"stale"
] | dirkgr | 8 |
piskvorky/gensim | data-science | 3,411 | Gensim cannot be installed on Python 3.11; if possible, I wish there were a gensim release that supports Python 3.11 too | <!--
**IMPORTANT**:
- Use the [Gensim mailing list](https://groups.google.com/forum/#!forum/gensim) to ask general or usage questions. Github issues are only for bug reports.
- Check [Recipes&FAQ](https://github.com/RaRe-Technologies/gensim/wiki/Recipes-&-FAQ) first for common answers.
Github bug reports that do not include relevant information and context will be closed without an answer. Thanks!
-->
#### Problem description
What are you trying to achieve? What is the expected result? What are you seeing instead?
#### Steps/code/corpus to reproduce
Include full tracebacks, logs and datasets if necessary. Please keep the examples minimal ("minimal reproducible example").
If your problem is with a specific Gensim model (word2vec, lsimodel, doc2vec, fasttext, ldamodel etc), include the following:
```python
print(my_model.lifecycle_events)
```
#### Versions
Please provide the output of:
```python
import platform; print(platform.platform())
import sys; print("Python", sys.version)
import struct; print("Bits", 8 * struct.calcsize("P"))
import numpy; print("NumPy", numpy.__version__)
import scipy; print("SciPy", scipy.__version__)
import gensim; print("gensim", gensim.__version__)
from gensim.models import word2vec;print("FAST_VERSION", word2vec.FAST_VERSION)
```
| closed | 2022-12-09T08:38:34Z | 2022-12-09T09:49:36Z | https://github.com/piskvorky/gensim/issues/3411 | [] | Victorrrrr86 | 2 |
django-cms/django-cms | django | 7,768 | Bounty program | Is https://www.django-cms.org/en/bounty-program/ still active?
| open | 2024-01-16T12:39:58Z | 2024-02-17T11:11:24Z | https://github.com/django-cms/django-cms/issues/7768 | [] | TouchstoneTheDev | 2 |
sczhou/CodeFormer | pytorch | 178 | Getting an error after submitting the image | I am getting the error below:
t.split is not a function | open | 2023-03-13T10:50:18Z | 2023-03-13T10:50:18Z | https://github.com/sczhou/CodeFormer/issues/178 | [] | thisisprateektyagi | 0 |
trevorstephens/gplearn | scikit-learn | 104 | Drop Python 2.7 support in 2019 | `scikit-learn` and `numpy` will both be ending support for Python 2.7 in any release after 2018, further `scikit-learn` will also drop Python 3.4 after this point. `gplearn` will follow suit in order to reduce development and CI complexity, as well as encourage adoption of Python 3. | closed | 2018-10-12T07:41:43Z | 2018-12-04T04:07:19Z | https://github.com/trevorstephens/gplearn/issues/104 | [
"dependencies"
] | trevorstephens | 2 |
MaartenGr/BERTopic | nlp | 1,766 | Zero-shot Topic Modeling and Representation Model Error | I've started to leverage the zero-shot topic modeling from the 0.16 release. However, I'm finding that when I use the zeroshot_topic list and add in a representation model, I end up getting the following error: `KeyError: '-1'`
```python
keybert = KeyBERTInspired(top_n_words=10,
nr_repr_docs=5,
nr_samples = 500,
nr_candidate_words=100,
random_state = 20231226)
pos_nouns = [[{'POS': 'PROPN'}], [{'POS': 'NOUN'}] ]
pos_verbs = [[{'POS': 'VERB'}]]
pos_adject = [[{'POS': 'ADJ'}]]
pos_nouns_ = PartOfSpeech("en_core_web_sm", pos_patterns = pos_nouns)
pos_verbs_ = PartOfSpeech("en_core_web_sm", pos_patterns = pos_verbs)
pos_adject_ = PartOfSpeech("en_core_web_sm", pos_patterns = pos_adject)
representation_model = {
"KeyBERT":keybert,
"PoS_Nouns": pos_nouns_,
'PoS_Verbs': pos_verbs_,
'PoS_Adjectives': pos_adject_,
}
zeroshot_model_sent = BERTopic(
embedding_model=sentence_model,
vectorizer_model = vectorizer_model,
ctfidf_model = ctfidf_model,
umap_model = umap_model,
hdbscan_model=hdbscan_model,
zeroshot_topic_list = zeroshot_topic_list,
zeroshot_min_similarity = .3,
calculate_probabilities=True,
verbose=True
)
zeroshot_topics_sent, zeroshot_prob_sent = zeroshot_model_sent.fit_transform(documents = sentence_docs2023)
```
```
---------------------------------------------------------------------------
KeyError Traceback (most recent call last)
Cell In[141], line 14
1 zeroshot_model_sent = BERTopic(
2 embedding_model=sentence_model,
3 vectorizer_model = vectorizer_model,
(...)
11 verbose=True
12 )
---> 14 zeroshot_topics_sent, zeroshot_prob_sent = zeroshot_model_sent.fit_transform(documents = sentence_docs2023)
File c:\Users\my_name\.conda\envs\BERTopic\lib\site-packages\bertopic\_bertopic.py:448, in BERTopic.fit_transform(self, documents, embeddings, images, y)
446 # Combine Zero-shot with outliers
447 if self._is_zeroshot() and len(documents) != len(doc_ids):
--> 448 predictions = self._combine_zeroshot_topics(documents, assigned_documents, assigned_embeddings)
450 return predictions, self.probabilities_
File c:\Users\my_name\.conda\envs\BERTopic\lib\site-packages\bertopic\_bertopic.py:3540, in BERTopic._combine_zeroshot_topics(self, documents, assigned_documents, embeddings)
3537 return self.topics_, self.probabilities_
3539 # Merge the two topic models
--> 3540 merged_model = BERTopic.merge_models([zeroshot_model, self], min_similarity=1)
3542 # Update topic labels and representative docs of the zero-shot model
3543 for topic in range(len(set(y))):
File c:\Users\my_name\.conda\envs\BERTopic\lib\site-packages\bertopic\_bertopic.py:3150, in BERTopic.merge_models(cls, models, min_similarity, embedding_model)
3147 merged_topics["topic_labels"][str(new_topic_val)] = selected_topics["topic_labels"][str(new_topic)]
3149 if selected_topics["topic_aspects"]:
--> 3150 merged_topics["topic_aspects"][str(new_topic_val)] = selected_topics["topic_aspects"][str(new_topic)]
3152 # Add new embeddings
3153 new_tensors = tensors[new_topic - selected_topics["_outliers"]]
KeyError: '-1'
```
I'm guessing there is something in the documentation that would indicate representation models cannot be used when Zero Shot Topic Modeling, but I haven't found anything. | closed | 2024-01-22T19:42:46Z | 2024-01-24T17:36:41Z | https://github.com/MaartenGr/BERTopic/issues/1766 | [] | jarends15 | 2 |
ets-labs/python-dependency-injector | asyncio | 53 | Review docs: Installation | closed | 2015-05-08T14:38:03Z | 2015-05-12T13:42:13Z | https://github.com/ets-labs/python-dependency-injector/issues/53 | [
"docs"
] | rmk135 | 0 | |
gradio-app/gradio | python | 9,969 | Add a setting that allows users to customize the tab placement (`Left`, `Center`, or `Right`). | - [x] I have searched to see if a similar issue already exists.
I think many would agree that it would be convenient to place some tabs in different locations. For example, the "Settings" tab could be located on the right, and the "INFO" tab could be on the left. The main tabs could be placed in the center or in another convenient location for users.
It would also be nice to have the ability to arrange tabs in a column instead of in a single line.
These changes could significantly improve some interfaces and make them more user-friendly.
| open | 2024-11-16T09:11:40Z | 2025-01-11T12:41:36Z | https://github.com/gradio-app/gradio/issues/9969 | [
"enhancement"
] | Bebra777228 | 1 |
huggingface/transformers | deep-learning | 36,653 | AutoModel from_pretrained does not recursively download relative imports | ## **Background**
I have the following setup:
My directory looks like:
```
|--modeling.py
|--backbone.py
|--modules.py
```
In `modeling.py` I have an import statement to `backbone`:
```
from .backbone import ...
```
and likewise in `backbone.py` I have an import statement to `modules`:
```
from .modules import ...
```
My auto_map references `modeling.py`:
```
config.auto_map = {"AutoModel": "modeling.[some_class_in_this_file]"}
```
Now, I push a model to the hub and I also upload all 3 files to the repo.
## **Issue**
However, when I try to use `AutoModel.from_pretrained` on my repo, I get a `FileNotFoundError` because for some reason `modules.py` was not cloned/pulled to the HF_HOME dir:
```
FileNotFoundError: [Errno 2] No such file or directory: '<HF_HOME>/transformers_modules/<repo>/modules.py'
```
I understand that the necessary relative imports are obtained recursively as [here](https://github.com/huggingface/transformers/blob/81aa9b2e07b359cd3555c118010fd9f26c601e54/src/transformers/dynamic_module_utils.py#L108-L138).
How come the relevant files aren't also pulled/cloned recursively?
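For illustration, here is a simplified stand-in (not the actual transformers helper) for the transitive collection I would expect — i.e. the set of files a recursive fetch would also need to download:

```python
import re
import tempfile
from pathlib import Path

def relative_imports(path: Path) -> set:
    # Module names referenced as `from .name import ...` in a single file.
    return set(re.findall(r"^\s*from\s+\.(\w+)\s+import", path.read_text(), re.MULTILINE))

def transitive_modules(root: Path, entry: str) -> set:
    # Walk relative imports recursively: every module found here must
    # exist locally for the entry point to load at all.
    seen, todo = set(), [entry]
    while todo:
        name = todo.pop()
        if name not in seen:
            seen.add(name)
            todo.extend(relative_imports(root / f"{name}.py"))
    return seen

# Reproduce the layout from this issue:
root = Path(tempfile.mkdtemp())
(root / "modeling.py").write_text("from .backbone import Backbone\n")
(root / "backbone.py").write_text("from .modules import Block\n")
(root / "modules.py").write_text("Block = object\n")
print(sorted(transitive_modules(root, "modeling")))  # → ['backbone', 'modeling', 'modules']
```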
### Expected behavior
Recursive dependencies should be cloned/pulled when `AutoModel.from_pretrained` is called | open | 2025-03-12T00:50:46Z | 2025-03-21T17:54:37Z | https://github.com/huggingface/transformers/issues/36653 | [
"bug"
] | yair-schiff | 2 |
plotly/dash | data-science | 3,230 | document type checking for Dash apps | we want `pyright dash` to produce few (or no) errors - we should add a note to the documentation saying we are working toward this goal, while also explaining that `pyright dash` does currently produce errors and that piecemeal community contributions are very welcome. | closed | 2025-03-20T16:01:18Z | 2025-03-21T17:40:33Z | https://github.com/plotly/dash/issues/3230 | [
"documentation",
"P1"
] | gvwilson | 0 |
agronholm/anyio | asyncio | 57 | AsyncFile difference | You use Trio's "native" async file, which uses ``await s.aclose()`` to async-close a file (Trio convention).
However, on asyncio there is ``await s.close()``. | closed | 2019-05-09T06:08:28Z | 2019-05-09T14:13:05Z | https://github.com/agronholm/anyio/issues/57 | [] | smurfix | 0 |
WZMIAOMIAO/deep-learning-for-image-processing | deep-learning | 783 | Using predict.py in swin-transformer with the official swin_tiny_patch4_window7_224.pth weights gives no prediction result | **System information**
* Have I written custom code:
* OS Platform(e.g., window10 or Linux Ubuntu 16.04):
* Python version:
* Deep learning framework and version(e.g., Tensorflow2.1 or Pytorch1.3):
* Use GPU or not:
* CUDA/cuDNN version(if you use GPU):
* The network you trained(e.g., Resnet34 network):
**Describe the current behavior**
**Error info / logs**
| open | 2023-12-14T10:55:16Z | 2024-01-28T09:13:37Z | https://github.com/WZMIAOMIAO/deep-learning-for-image-processing/issues/783 | [] | LegendSun0 | 2 |
dask/dask | numpy | 11,306 | ⚠️ Upstream CI failed ⚠️ | [Workflow Run URL](https://github.com/dask/dask/actions/runs/10369194454)
<details><summary>Python 3.12 Test Summary</summary>
```
dask/tests/test_tokenize.py::test_tokenize_random_functions[np.random]: AssertionError: assert '4a55f6ba9e8b...621cf04435925' == 'fd1bc1fa3a47...68dddc4dce8d1'
- fd1bc1fa3a4700bccef68dddc4dce8d1
+ 4a55f6ba9e8b379d5e1621cf04435925
```
</details>
| closed | 2024-08-13T12:16:31Z | 2024-08-13T17:48:07Z | https://github.com/dask/dask/issues/11306 | [
"upstream"
] | github-actions[bot] | 1 |
fugue-project/fugue | pandas | 195 | [BUG] Dask print does not print all rows | **Minimal Code To Reproduce**
```
%%fsql dask
CREATE [[0,1],[0,2],[1,1],[1,3]] SCHEMA a:int,b:int
PRINT 3 ROWS
```
**Describe the bug**
The problem is that dask's `head` function requires `npartitions`, and even when setting it to -1, if a df has 5 records and I want to print 10 rows, I still get the warning, which is not nice. So we should revert to the itertuples approach.
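The itertuples-style approach can be sketched framework-agnostically. This is illustrative stdlib code, not actual fugue/dask internals; `take_rows` and the sample partitions are made-up names:

```python
from itertools import chain, islice

def take_rows(partitions, n):
    """Lazily pull the first n rows from an iterable of partitions
    (each partition being any iterable of row tuples), stopping as
    soon as n rows are collected, or earlier if the data runs out."""
    rows = chain.from_iterable(partitions)   # flatten partitions lazily
    return list(islice(rows, n))             # no error if fewer than n rows exist

# Two partitions with 4 rows total; asking for 10 rows just returns all 4,
# with no npartitions argument and no warning.
parts = [[(0, 1), (0, 2)], [(1, 1), (1, 3)]]
print(take_rows(parts, 10))  # [(0, 1), (0, 2), (1, 1), (1, 3)]
print(take_rows(parts, 3))   # [(0, 1), (0, 2), (1, 1)]
```

The key property is that printing fewer rows than requested is handled silently, which is the behavior wanted for `PRINT n ROWS`.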
**Environment (please complete the following information):**
- Backend: pandas/dask/ray? dask 2021.4.1
- Backend version:
- Python version: 3.7
- OS: linux/windows
| closed | 2021-04-29T08:17:40Z | 2021-04-29T17:17:15Z | https://github.com/fugue-project/fugue/issues/195 | [
"bug",
"high priority",
"core feature",
"dask"
] | goodwanghan | 0 |
autokey/autokey | automation | 817 | Kubuntu 22.04 Activate either version fails | ### AutoKey is a Xorg application and will not function in a Wayland session. Do you use Xorg (X11) or Wayland?
Xorg
### Has this issue already been reported?
- [X] I have searched through the existing issues.
### Is this a question rather than an issue?
- [X] This is not a question.
### What type of issue is this?
Bug
### Choose one or more terms that describe this issue:
- [ ] autokey triggers
- [X] autokey-gtk
- [X] autokey-qt
- [ ] beta
- [X] bug
- [X] critical
- [ ] development
- [ ] documentation
- [ ] enhancement
- [ ] installation/configuration
- [ ] phrase expansion
- [ ] scripting
- [ ] technical debt
- [ ] user interface
### Other terms that describe this issue if not provided above:
_No response_
### Which Linux distribution did you use?
Kubuntu 22.04
### Which AutoKey GUI did you use?
Both
### Which AutoKey version did you use?
96.0
### How did you install AutoKey?
```
#!/usr/bin/bash
# 2022-11-18 06:46 - 01_all_install_autokey.sh (dgdrive /media/dgdrive/scripts)
sudo apt update
sudo apt install python3-pip -y
pip3 install cffi
pip3 install --user https://github.com/autokey/autokey/archive/v0.96.0.zip
sudo apt install xautomation
echo
read -p "autokey and xautomation installed"
```
### Can you briefly describe the issue?
```
ineuw@ku2204dt:~$ autokey-qt -c
Traceback (most recent call last):
File "/home/ineuw/.local/bin/autokey-qt", line 5, in <module>
from autokey.qtui.__main__ import Application
File "/home/ineuw/.local/lib/python3.10/site-packages/autokey/qtui/__main__.py", line 23, in <module>
from autokey.qtapp import Application
File "/home/ineuw/.local/lib/python3.10/site-packages/autokey/qtapp.py", line 41, in <module>
from autokey.qtui import common as ui_common
File "/home/ineuw/.local/lib/python3.10/site-packages/autokey/qtui/common.py", line 26, in <module>
from PyQt5.QtSvg import QSvgRenderer
ModuleNotFoundError: No module named 'PyQt5.QtSvg'
ineuw@ku2204dt:~$ autokey-gtk -c
Traceback (most recent call last):
File "/home/ineuw/.local/lib/python3.10/site-packages/autokey/gtkui/notifier.py", line 29, in <module>
gi.require_version('AyatanaAppIndicator3', '0.1')
File "/usr/lib/python3/dist-packages/gi/__init__.py", line 126, in require_version
raise ValueError('Namespace %s not available' % namespace)
ValueError: Namespace AyatanaAppIndicator3 not available
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/ineuw/.local/bin/autokey-gtk", line 5, in <module>
from autokey.gtkui.__main__ import main
File "/home/ineuw/.local/lib/python3.10/site-packages/autokey/gtkui/__main__.py", line 4, in <module>
from autokey.gtkapp import Application
File "/home/ineuw/.local/lib/python3.10/site-packages/autokey/gtkapp.py", line 40, in <module>
from autokey.gtkui.notifier import get_notifier
File "/home/ineuw/.local/lib/python3.10/site-packages/autokey/gtkui/notifier.py", line 31, in <module>
gi.require_version('AppIndicator3', '0.1')
File "/usr/lib/python3/dist-packages/gi/__init__.py", line 126, in require_version
raise ValueError('Namespace %s not available' % namespace)
ValueError: Namespace AppIndicator3 not available
ineuw@ku2204dt:~$
```
### Can the issue be reproduced?
Always
### What are the steps to reproduce the issue?
see the above output in terminal
### What should have happened?
Load
### What actually happened?
nothing
### Do you have screenshots?
no
### Can you provide the output of the AutoKey command?
```bash
it shows in the error message
```
### Anything else?
nothing | closed | 2023-03-19T10:26:06Z | 2023-03-22T17:08:23Z | https://github.com/autokey/autokey/issues/817 | [
"bug",
"autokey-qt",
"autokey-gtk",
"installation/configuration"
] | ineuw | 16 |
facebookresearch/fairseq | pytorch | 4,726 | ModuleNotFoundError: No module named 'fairseq.models.speech_to_text.modules' | ## 🐛 Bug
When importing anything from the main branch of Fairseq, an error occurs.
### To Reproduce
Install fresh Fairseq
```bash
pip install 'git+https://github.com/facebookresearch/fairseq.git'
```
then try importing from it
```Python
from fairseq.dataclass.configs import FairseqConfig
```
Result:
```
---------------------------------------------------------------------------
ModuleNotFoundError Traceback (most recent call last)
[<ipython-input-2-d142f6ae2745>](https://localhost:8080/#) in <module>
----> 1 from fairseq.dataclass.configs import FairseqConfig
13 frames
[/usr/local/lib/python3.7/dist-packages/fairseq/__init__.py](https://localhost:8080/#) in <module>
31 hydra_init()
32
---> 33 import fairseq.criterions # noqa
34 import fairseq.distributed # noqa
35 import fairseq.models # noqa
[/usr/local/lib/python3.7/dist-packages/fairseq/criterions/__init__.py](https://localhost:8080/#) in <module>
34 if file.endswith(".py") and not file.startswith("_"):
35 file_name = file[: file.find(".py")]
---> 36 importlib.import_module("fairseq.criterions." + file_name)
[/usr/lib/python3.7/importlib/__init__.py](https://localhost:8080/#) in import_module(name, package)
125 break
126 level += 1
--> 127 return _bootstrap._gcd_import(name[level:], package, level)
128
129
[/usr/local/lib/python3.7/dist-packages/fairseq/criterions/ctc.py](https://localhost:8080/#) in <module>
19 from fairseq.dataclass import FairseqDataclass
20 from fairseq.logging.meters import safe_round
---> 21 from fairseq.tasks import FairseqTask
22
23
[/usr/local/lib/python3.7/dist-packages/fairseq/tasks/__init__.py](https://localhost:8080/#) in <module>
134 # automatically import any Python files in the tasks/ directory
135 tasks_dir = os.path.dirname(__file__)
--> 136 import_tasks(tasks_dir, "fairseq.tasks")
[/usr/local/lib/python3.7/dist-packages/fairseq/tasks/__init__.py](https://localhost:8080/#) in import_tasks(tasks_dir, namespace)
115 ):
116 task_name = file[: file.find(".py")] if file.endswith(".py") else file
--> 117 importlib.import_module(namespace + "." + task_name)
118
119 # expose `task_parser` for sphinx
[/usr/lib/python3.7/importlib/__init__.py](https://localhost:8080/#) in import_module(name, package)
125 break
126 level += 1
--> 127 return _bootstrap._gcd_import(name[level:], package, level)
128
129
[/usr/local/lib/python3.7/dist-packages/fairseq/tasks/online_backtranslation.py](https://localhost:8080/#) in <module>
32 encoders,
33 )
---> 34 from fairseq.sequence_generator import SequenceGenerator
35 from fairseq.tasks import register_task
36 from fairseq.tasks.translation import TranslationTask, load_langpair_dataset
[/usr/local/lib/python3.7/dist-packages/fairseq/sequence_generator.py](https://localhost:8080/#) in <module>
14 from fairseq import search, utils
15 from fairseq.data import data_utils
---> 16 from fairseq.models import FairseqIncrementalDecoder
17 from fairseq.ngram_repeat_block import NGramRepeatBlock
18
[/usr/local/lib/python3.7/dist-packages/fairseq/models/__init__.py](https://localhost:8080/#) in <module>
233 # automatically import any Python files in the models/ directory
234 models_dir = os.path.dirname(__file__)
--> 235 import_models(models_dir, "fairseq.models")
[/usr/local/lib/python3.7/dist-packages/fairseq/models/__init__.py](https://localhost:8080/#) in import_models(models_dir, namespace)
215 ):
216 model_name = file[: file.find(".py")] if file.endswith(".py") else file
--> 217 importlib.import_module(namespace + "." + model_name)
218
219 # extra `model_parser` for sphinx
[/usr/lib/python3.7/importlib/__init__.py](https://localhost:8080/#) in import_module(name, package)
125 break
126 level += 1
--> 127 return _bootstrap._gcd_import(name[level:], package, level)
128
129
[/usr/local/lib/python3.7/dist-packages/fairseq/models/speech_to_text/__init__.py](https://localhost:8080/#) in <module>
5
6 from .berard import * # noqa
----> 7 from .convtransformer import * # noqa
8 from .multi_modality_model import * # noqa
9 from .s2t_conformer import * # noqa
[/usr/local/lib/python3.7/dist-packages/fairseq/models/speech_to_text/convtransformer.py](https://localhost:8080/#) in <module>
21 register_model_architecture,
22 )
---> 23 from fairseq.models.speech_to_text.modules.convolution import infer_conv_output_dim
24 from fairseq.models.transformer import Embedding, TransformerDecoder
25 from fairseq.modules import LayerNorm, PositionalEmbedding, TransformerEncoderLayer
ModuleNotFoundError: No module named 'fairseq.models.speech_to_text.modules'
---------------------------------------------------------------------------
NOTE: If your import is failing due to a missing package, you can
manually install dependencies using either !pip or !apt.
To view examples of installing some common dependencies, click the
"Open Examples" button below.
---------------------------------------------------------------------------
```
### Expected behavior
The import succeeds.
### Environment
- fairseq Version (e.g., 1.0 or main): `main`
- PyTorch Version (e.g., 1.0): does not matter
- OS (e.g., Linux): does not matter
- How you installed fairseq (`pip`, source): `pip install 'git+https://github.com/facebookresearch/fairseq.git'`
- Build command you used (if compiling from source): NA
- Python version: does not matter
- CUDA/cuDNN version: NA
- GPU models and configuration: NA
### Additional context
NA
| open | 2022-09-15T13:39:59Z | 2022-09-19T18:46:31Z | https://github.com/facebookresearch/fairseq/issues/4726 | [
"bug",
"needs triage"
] | avidale | 5 |
huggingface/datasets | pandas | 7,073 | CI is broken for convert_to_parquet: Invalid rev id: refs/pr/1 404 error causes RevisionNotFoundError | See: https://github.com/huggingface/datasets/actions/runs/10095313567/job/27915185756
```
FAILED tests/test_hub.py::test_convert_to_parquet - huggingface_hub.utils._errors.RevisionNotFoundError: 404 Client Error. (Request ID: Root=1-66a25839-31ce7b475e70e7db1e4d44c2;b0c8870f-d5ef-4bf2-a6ff-0191f3df0f64)
Revision Not Found for url: https://hub-ci.huggingface.co/api/datasets/__DUMMY_TRANSFORMERS_USER__/test-dataset-5188a8-17219154347516/preupload/refs%2Fpr%2F1.
Invalid rev id: refs/pr/1
```
```
/opt/hostedtoolcache/Python/3.8.18/x64/lib/python3.8/site-packages/datasets/hub.py:86: in convert_to_parquet
dataset.push_to_hub(
/opt/hostedtoolcache/Python/3.8.18/x64/lib/python3.8/site-packages/datasets/dataset_dict.py:1722: in push_to_hub
split_additions, uploaded_size, dataset_nbytes = self[split]._push_parquet_shards_to_hub(
/opt/hostedtoolcache/Python/3.8.18/x64/lib/python3.8/site-packages/datasets/arrow_dataset.py:5511: in _push_parquet_shards_to_hub
api.preupload_lfs_files(
/opt/hostedtoolcache/Python/3.8.18/x64/lib/python3.8/site-packages/huggingface_hub/hf_api.py:4231: in preupload_lfs_files
_fetch_upload_modes(
/opt/hostedtoolcache/Python/3.8.18/x64/lib/python3.8/site-packages/huggingface_hub/utils/_validators.py:118: in _inner_fn
return fn(*args, **kwargs)
/opt/hostedtoolcache/Python/3.8.18/x64/lib/python3.8/site-packages/huggingface_hub/_commit_api.py:507: in _fetch_upload_modes
hf_raise_for_status(resp)
``` | closed | 2024-07-26T08:27:41Z | 2024-07-27T05:48:02Z | https://github.com/huggingface/datasets/issues/7073 | [] | albertvillanova | 9 |
google-deepmind/sonnet | tensorflow | 70 | BatchReshape and BatchFlatten don't work in the straightforward way | Dear engineers, I just want to convert two 4x4 gray images (now the shape is (2, 4, 4, 1)) into some layout suitable for layer normalization (the intended shape is (2, 16); let's assume here it is sane to do so) and then convert it back. The straightforward way is to utilize BatchFlatten and BatchReshape as a pair to do the job. But such a natural idea does not seem to work here:
```
import tensorflow as tf
import sonnet as snt
class MyLN(snt.AbstractModule):
def __init__(self, name = "my_ln"):
super(MyLN, self).__init__(name = name)
with self._enter_variable_scope():
self._ln = snt.LayerNorm()
def _build(self, inputs):
return snt.BatchReshape(inputs.get_shape().as_list())(self._ln(snt.BatchFlatten()(inputs)))
t = tf.truncated_normal([2, 4, 4, 1])
n = MyLN()(t)
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
t_val, n_val = sess.run([t, n])
print(t_val)
print(n_val)
```
I just wonder whether I have misunderstood something about these two modules? Thanks a lot for pointing that out.
| closed | 2017-10-29T01:02:00Z | 2017-10-29T23:16:25Z | https://github.com/google-deepmind/sonnet/issues/70 | [] | mingyr | 2 |
browser-use/browser-use | python | 203 | Integrations for orchestration (E.g. crewai) | Has anyone tested a pipeline where browser-use is integrated with frameworks like crewai, maybe with some sort of feedback loop | open | 2025-01-10T15:20:21Z | 2025-01-22T01:36:08Z | https://github.com/browser-use/browser-use/issues/203 | [] | Edu126 | 7 |
freqtrade/freqtrade | python | 10,785 | freqtrade - ERROR - Fatal exception! joblib.externals.loky.process_executor._RemoteTraceback | ```
2024-10-13 16:32:58,163 - freqtrade.optimize.hyperopt - INFO - Effective number of parallel workers used: 2
Hyperopt results
┏━━━━━━┳━━━━━━━┳━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━┳━━━━━━━━┳━━━━━━━━━━━━━━┳━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━┓
┃ Best ┃ Epoch ┃ Trades ┃ Win Draw Loss Win% ┃ Avg profit ┃ Profit ┃ Avg duration ┃ Objective ┃ Max Drawdown (Acct) ┃
┡━━━━━━╇━━━━━━━╇━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━╇━━━━━━━━╇━━━━━━━━━━━━━━╇━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━┩
└──────┴───────┴────────┴───────────────────────┴────────────┴────────┴──────────────┴───────────┴─────────────────────┘
Epochs ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 0/200 0% • 0:00:43 • -:--:--
2024-10-13 16:33:41,355 - freqtrade - ERROR - Fatal exception!
joblib.externals.loky.process_executor._RemoteTraceback:
"""
Traceback (most recent call last):
File "/home/ftuser/.local/lib/python3.12/site-packages/joblib/externals/loky/process_executor.py", line 463, in _process_worker
r = call_item()
^^^^^^^^^^^
File "/home/ftuser/.local/lib/python3.12/site-packages/joblib/externals/loky/process_executor.py", line 291, in __call__
return self.fn(*self.args, **self.kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/ftuser/.local/lib/python3.12/site-packages/joblib/parallel.py", line 598, in __call__
return [func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "/home/ftuser/.local/lib/python3.12/site-packages/joblib/externals/loky/cloudpickle_wrapper.py", line 32, in __call__
return self._obj(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/freqtrade/freqtrade/optimize/hyperopt.py", line 392, in generate_optimizer
bt_results = self.backtesting.backtest(
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/freqtrade/freqtrade/optimize/backtesting.py", line 1495, in backtest
self.backtest_loop(row, pair, current_time, end_date, trade_dir)
File "/freqtrade/freqtrade/optimize/backtesting.py", line 1380, in backtest_loop
self._check_trade_exit(trade, row, current_time) # Place exit order if necessary
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/freqtrade/freqtrade/optimize/backtesting.py", line 885, in _check_trade_exit
t = self._get_exit_for_signal(trade, row, exit_, current_time)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/freqtrade/freqtrade/optimize/backtesting.py", line 772, in _get_exit_for_signal
and len(row[EXIT_TAG_IDX]) > 0
^^^^^^^^^^^^^^^^^^^^^^
TypeError: object of type 'int' has no len()
"""
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/freqtrade/freqtrade/main.py", line 45, in main
return_code = args["func"](args)
^^^^^^^^^^^^^^^^^^
File "/freqtrade/freqtrade/commands/optimize_commands.py", line 109, in start_hyperopt
hyperopt.start()
File "/freqtrade/freqtrade/optimize/hyperopt.py", line 661, in start
f_val = self.run_optimizer_parallel(parallel, asked)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/freqtrade/freqtrade/optimize/hyperopt.py", line 484, in run_optimizer_parallel
return parallel(
^^^^^^^^^
File "/home/ftuser/.local/lib/python3.12/site-packages/joblib/parallel.py", line 2007, in __call__
return output if self.return_generator else list(output)
^^^^^^^^^^^^
File "/home/ftuser/.local/lib/python3.12/site-packages/joblib/parallel.py", line 1650, in _get_outputs
yield from self._retrieve()
File "/home/ftuser/.local/lib/python3.12/site-packages/joblib/parallel.py", line 1754, in _retrieve
self._raise_error_fast()
File "/home/ftuser/.local/lib/python3.12/site-packages/joblib/parallel.py", line 1789, in _raise_error_fast
error_job.get_result(self.timeout)
File "/home/ftuser/.local/lib/python3.12/site-packages/joblib/parallel.py", line 745, in get_result
return self._return_or_raise()
^^^^^^^^^^^^^^^^^^^^^^^
File "/home/ftuser/.local/lib/python3.12/site-packages/joblib/parallel.py", line 763, in _return_or_raise
raise self._result
TypeError: object of type 'int' has no len()
``` | closed | 2024-10-13T16:36:05Z | 2024-10-17T05:00:46Z | https://github.com/freqtrade/freqtrade/issues/10785 | [
"Question - Outdated Version"
] | nbwcbz | 3 |
aws/aws-sdk-pandas | pandas | 2,511 | Add ResultReuseConfiguration parameter to Athena query | **Is your idea related to a problem? Please describe.**
I would like to enable the query-result cache feature in Athena; however, there is no option to pass "ResultReuseConfiguration".
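Until awswrangler exposes the parameter, one workaround is to call boto3 directly. The helper below sketches the request shape; the function name `athena_query_kwargs` is made up for illustration, while the parameter structure follows boto3's `start_query_execution` documentation:

```python
def athena_query_kwargs(sql, database, output_s3, max_age_minutes=60):
    """Build kwargs for boto3's athena start_query_execution call,
    including the ResultReuseConfiguration block that awswrangler
    does not currently expose. Sketch only."""
    return {
        "QueryString": sql,
        "QueryExecutionContext": {"Database": database},
        "ResultConfiguration": {"OutputLocation": output_s3},
        "ResultReuseConfiguration": {
            "ResultReuseByAgeConfiguration": {
                "Enabled": True,
                "MaxAgeInMinutes": max_age_minutes,
            }
        },
    }

# Usage (requires AWS credentials, so not executed here):
# import boto3
# athena = boto3.client("athena")
# athena.start_query_execution(**athena_query_kwargs(
#     "SELECT 1", "default", "s3://my-bucket/athena-results/"))
```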
https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/athena/client/start_query_execution.html | closed | 2023-11-03T09:51:07Z | 2024-05-23T14:27:54Z | https://github.com/aws/aws-sdk-pandas/issues/2511 | [
"enhancement",
"closing-soon"
] | sivankumar86 | 3 |
mljar/mercury | data-visualization | 19 | Precise URL of watched notebook app | Please display to the user (in the terminal, after running the command) the precise URL of the watched notebook. | closed | 2022-01-24T09:12:54Z | 2022-02-07T13:18:43Z | https://github.com/mljar/mercury/issues/19 | [
"enhancement",
"good first issue",
"help wanted"
] | pplonski | 17 |
hack4impact/flask-base | sqlalchemy | 132 | Redis on Bash on Ubuntu on Windows | I ran into some issues when trying to set this up on my Windows computer. I installed Flask-base using Bash on Ubuntu on Windows (because Redis does not currently support Windows).
On configuring flask-base, running `honcho start -f Local` would result in `redis.exceptions.ConnectionError: Error 111 connecting to localhost:6379. Connection refused.` so I could not launch the app.
I fixed this by following [this guide](http://jessicadeen.com/tech/how-to-configure-redis-server-on-bash-on-ubuntu-on-windows-10-wsl/). For some reason I also had to edit `/etc/redis/redis.conf` and replace `port 6379` (default) with something different like `port 6378`, and then set `REDISTOGO_URL=http://localhost:6378` in the Flask-base config.env | closed | 2017-03-27T00:06:33Z | 2019-09-14T21:18:45Z | https://github.com/hack4impact/flask-base/issues/132 | [] | sbue | 2 |
jupyter/nbgrader | jupyter | 1,336 | "manual grading" screen unable to format any math containing a prime symbol `'` |
### Operating system
MacOS 10.14.4 (18E226)
### `nbgrader --version`
0.6.1
### `jupyter notebook --version`
6.0.3
### Expected behavior
In one of the courses I am teaching, we wanted to use nbgrader to automate some of the grading. Until now it's working beautifully -- thanks for this.
For this week, we have a notebook containing function derivatives, like so:
$$
y'(x) = f(x, y(x))
$$
which sets correctly in my `source` and `release` versions, looking a little like this:

### Actual behaviour
If I then open a student submission, at first I am able to see the math, like so,

but when it is about to replace the text with a MathJax formula, it looks like so:

When this happens, I get these errors in my terminal:
```
[W 17:49:18.142 NotebookApp] 404 GET /static/components/MathJax/jax/output/HTML-CSS/fonts/STIX/fontdata.js?V=2.7.7 (::1) 2.53ms referer=http://localhost:8889/formgrader/submissions/ccc77ef030ea4ae5bda1d7d0e787089c/?index=0
[W 17:49:20.909 NotebookApp] 404 GET /static/components/MathJax/fonts/HTML-CSS/TeX/otf/MathJax_Script-Regular.otf (::1) 1.49ms referer=http://localhost:8889/formgrader/submissions/ccc77ef030ea4ae5bda1d7d0e787089c/?index=0
```
This only happens in the formulas containing a prime/derivative sign, weirdly enough.
### Steps to reproduce the behavior
I get the same results using a different browser and also using a different machine (MacOS 10.14.6, jupyter 6.0.2, nbgrader 0.6.1)
I am not sure how I can add a MWE but if it seems useful later I can try making a test assignment and a submission, or I could share an anonymized submission from the real assignment. Please let me know which one is preferable.
| open | 2020-05-14T15:54:16Z | 2022-09-06T07:44:26Z | https://github.com/jupyter/nbgrader/issues/1336 | [] | Jannertje | 11 |
minimaxir/textgenrnn | tensorflow | 208 | System memory not freeing after train | I'm playing around with this module living in a web API where a `/train` route will train a `textgenrnn` object on some training data and then save the model weights to a cloud bucket for later downloading and getting output from a `/generate` route. I'm using Flask.
For `/train`, there will understandably be a spike in memory usage while the model trains, but after this completes I want to free that memory to keep the server's footprint as small as possible. However, it seems like memory used in previous trains of `textgenrnn` (~120 MB) is sticking around and stacking on top of later memory usage.
Here is a profile of the server's memory usage while making the exact same call to `/train` twice in a row. Notice how the memory usage strictly increases on the second call, instead of resetting after the first call.

I use the model roughly as such:
```
import gc
import tensorflow as tf
from textgenrnn import textgenrnn

model = textgenrnn()
model.train_on_texts(training_strings)
# do API stuff with the model
# now throw the kitchen sink at it, trying to free up memory
del model
gc.collect()
tf.keras.backend.clear_session()
tf.compat.v1.reset_default_graph()
```
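A common workaround when TensorFlow-backed training refuses to return memory to the parent process is to run each `/train` call in a short-lived subprocess, so the OS reclaims everything when the child exits. Below is a minimal stdlib sketch of the pattern; the dummy `_train_job` stands in for the real textgenrnn training-and-upload work, and this is a pattern suggestion, not textgenrnn API:

```python
import multiprocessing as mp

# fork keeps the sketch simple on Linux; on macOS/Windows you would use
# the spawn start method and keep everything under a __main__ guard.
_ctx = mp.get_context("fork")

def _train_job(queue, training_strings):
    """Runs in a child process; all memory allocated here (TF graphs
    included) is returned to the OS when the process exits. In the real
    API, train the model and upload the weights inside this function."""
    result = f"trained on {len(training_strings)} texts"  # placeholder work
    queue.put(result)

def train_isolated(training_strings):
    queue = _ctx.Queue()
    proc = _ctx.Process(target=_train_job, args=(queue, training_strings))
    proc.start()
    result = queue.get()   # blocks until the child reports back
    proc.join()            # child exit frees its entire address space
    return result

print(train_isolated(["a", "b", "c"]))  # trained on 3 texts
```

The trade-off is the process start-up cost per request, which is usually negligible next to the training time itself.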
Am I missing something? | open | 2020-07-24T21:57:17Z | 2020-11-24T00:48:32Z | https://github.com/minimaxir/textgenrnn/issues/208 | [] | jkatofsky | 1 |
Farama-Foundation/Gymnasium | api | 509 | [Proposal] FrameStack wrapper but for vector observations | ### Proposal
Maybe I missed it, but there is nothing like that, right?
Basically the idea would be to concatenate consecutive vector observations (the default ones) by stacking them. E.g., Pendulum-v0 returns obs of shape (3,): VectorStack(env, 4) would return obs of shape (12,), where obs are stacked like in FrameStack.
[This paper](https://arxiv.org/abs/2101.01857) does something similar, but instead of just stacking obs it stacks the difference of consecutive observations. Something like this could be achieved by having an extra argument that defines how obs are stacked.
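A minimal sketch of what such a wrapper could look like, in plain Python with no gymnasium dependency; `VectorStack`, `DummyEnv`, and the simplified `reset`/`step` signatures are illustrative, not the real gymnasium API:

```python
from collections import deque

class VectorStack:
    """Keep the last `num_stack` flat observations and return their
    concatenation, so a (3,)-obs env yields a (3*num_stack,)-obs env.
    With diff=True, stack differences of consecutive obs instead,
    as in the cited paper."""

    def __init__(self, env, num_stack, diff=False):
        self.env = env
        self.num_stack = num_stack
        self.diff = diff

    def _stacked(self):
        frames = list(self.frames)
        if self.diff:
            frames = [tuple(a - b for a, b in zip(frames[i + 1], frames[i]))
                      for i in range(len(frames) - 1)] + [frames[-1]]
        return tuple(x for frame in frames for x in frame)

    def reset(self):
        obs = self.env.reset()
        self.frames = deque([obs] * self.num_stack, maxlen=self.num_stack)
        return self._stacked()

    def step(self, action):
        obs, reward, done = self.env.step(action)
        self.frames.append(obs)
        return self._stacked(), reward, done

class DummyEnv:          # stands in for Pendulum's (3,)-shaped observations
    def reset(self):
        return (0.0, 0.0, 0.0)
    def step(self, action):
        return (1.0, 2.0, 3.0), 0.0, False

env = VectorStack(DummyEnv(), 4)
print(len(env.reset()))  # 12
obs, _, _ = env.step(None)
print(len(obs))          # 12
```

A real implementation would also need to rescale the observation space bounds and use a rolling numpy buffer, as gymnasium's FrameStack does for images.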
### Motivation
Using a wrapper like that may be an alternative to recurrent policies like LSTM.
### Checklist
- [X] I have checked that there is no similar [issue](https://github.com/Farama-Foundation/Gymnasium/issues) in the repo
| closed | 2023-05-18T05:22:08Z | 2023-05-20T19:13:17Z | https://github.com/Farama-Foundation/Gymnasium/issues/509 | [
"enhancement"
] | sparisi | 5 |
jazzband/django-oauth-toolkit | django | 756 | Ceiling pin Django2 incompatible versions | Add a Django<2 ceiling pin to versions<1.2. This would allow `pip-compile` to properly reconcile to the correct `django-oauth-toolkit` version. We use `django-oauth-toolkit` in a package, let's say `my-django-auth`. This is then used in multiple APIs. Our packages are generally unpinned unless necessary, like when using backwards-incompatible features. This allows the application to decide when to upgrade. Without the ceiling, unless we pin this package at the application level, we will bring in a Django 1-incompatible version. | closed | 2019-10-31T22:47:19Z | 2021-10-23T01:18:44Z | https://github.com/jazzband/django-oauth-toolkit/issues/756 | [] | Amertz08 | 0 |
desec-io/desec-stack | rest-api | 425 | deSEC settings for pfSense | I'm using pfSense and would like to set up deSEC. I tried several setting but didn't manage to get it working. I appreciate any help. The last think I tried are the following settings:
As username I used the domain
As password I used the token

I get the following error in the logs:
Jul 28 01:01:02 | php-cgi | | rc.dyndns.update: DynDns (): Dynamic Dns: cacheIP != wan_ip. Updating. Cached IP: 0.0.0.0 WAN IP: 77.59.185.127
-- | -- | -- | --
Jul 28 01:01:02 | php-cgi | | rc.dyndns.update: Dynamic DNS custom (): _update() starting.
Jul 28 01:01:02 | php-cgi | | rc.dyndns.update: Sending request to: update.dedyn.io
Jul 28 01:01:02 | php-cgi | | rc.dyndns.update: Dynamic DNS custom (): _checkStatus() starting.
Jul 28 01:01:02 | php-cgi | | rc.dyndns.update: phpDynDNS (): (Error) Result did not match. [<html>^M <head><title>301 Moved Permanently</title></head>^M <body>^M <center><h1\>301 Moved Permanently</h1\></center>^M <hr><center>nginx</center>^M </body>^M </html>^M ]
| closed | 2020-07-27T15:19:15Z | 2020-08-19T05:24:33Z | https://github.com/desec-io/desec-stack/issues/425 | [] | AndreasAZiegler | 8 |
Gozargah/Marzban | api | 1,055 | Update | Greetings, and thanks for your hard work. The Xray core has been upgraded to version 1.8.15, and plenty of features have come and gone, but Marzban has still not been updated. Please add the multi custom config capability. Thank you. | closed | 2024-06-19T14:56:42Z | 2024-07-15T20:53:38Z | https://github.com/Gozargah/Marzban/issues/1055 | [
"Feature"
] | w0l4i | 0 |
Nemo2011/bilibili-api | api | 238 | [Feature request] Bilibili hot-search (trending search) endpoint | https://www.bilibili.com/blackboard/activity-trending-topic.html
GET https://app.bilibili.com/x/v2/search/trending/ranking?limit=30
```
{
"code": 0,
"message": "0",
"ttl": 1,
"data": {
"trackid": "17570037212562506523",
"list": [
{
"position": 1,
"keyword": "性能堪比游戏本的轻薄本",
"show_name": "性能堪比游戏本的轻薄本",
"word_type": 5,
"icon": "http://i0.hdslb.com/bfs/activity-plat/static/20221213/eaf2dd702d7cc14d8d9511190245d057/lrx9rnKo24.png",
"hot_id": 71404,
"is_commercial": "0"
},
{
"position": 2,
"keyword": "JKL一级被线杀",
"show_name": "JKL一级被线杀",
"word_type": 5,
"icon": "http://i0.hdslb.com/bfs/activity-plat/static/20221213/eaf2dd702d7cc14d8d9511190245d057/lrx9rnKo24.png",
"hot_id": 71456,
"is_commercial": "0"
},
{
"position": 3,
"keyword": "舔狗",
"show_name": "反舔狗部例行检查",
"word_type": 9,
"icon": "http://i0.hdslb.com/bfs/activity-plat/static/20221117/eaf2dd702d7cc14d8d9511190245d057/EeuqbMwao9.png",
"hot_id": 71432,
"is_commercial": "0"
},
{
"position": 4,
"keyword": "被造黄谣女生决定不采取自诉",
"show_name": "被造黄谣女生决定不采取自诉",
"word_type": 5,
"icon": "http://i0.hdslb.com/bfs/activity-plat/static/20221213/eaf2dd702d7cc14d8d9511190245d057/lrx9rnKo24.png",
"hot_id": 71391,
"is_commercial": "0"
},
{
"position": 5,
"keyword": "研究生的电脑都装些什么",
"show_name": "研究生的电脑都装些什么",
"word_type": 4,
"icon": "http://i0.hdslb.com/bfs/activity-plat/static/20221118/eaf2dd702d7cc14d8d9511190245d057/UF7B1wVKT2.png",
"hot_id": 71558,
"is_commercial": "0"
},
{
"position": 6,
"keyword": "王者赛季末最强10英雄",
"show_name": "王者赛季末最强10英雄",
"word_type": 8,
"hot_id": 71542,
"is_commercial": "0"
},
{
"position": 7,
"keyword": "苏大恶意P图者被开除学籍",
"show_name": "苏大恶意P图者被开除学籍",
"word_type": 8,
"hot_id": 71483,
"is_commercial": "0"
},
{
"position": 8,
"keyword": "从订书钉中取出立方体",
"show_name": "从订书钉中取出立方体",
"word_type": 4,
"icon": "http://i0.hdslb.com/bfs/activity-plat/static/20221118/eaf2dd702d7cc14d8d9511190245d057/UF7B1wVKT2.png",
"hot_id": 71551,
"is_commercial": "0"
},
{
"position": 9,
"keyword": "新海诚回应粉丝叫诚哥",
"show_name": "新海诚回应粉丝叫诚哥",
"word_type": 8,
"hot_id": 71296,
"is_commercial": "0"
},
{
"position": 10,
"keyword": "高考语文选择前3题技巧",
"show_name": "高考语文选择前3题技巧",
"word_type": 4,
"icon": "http://i0.hdslb.com/bfs/activity-plat/static/20221118/eaf2dd702d7cc14d8d9511190245d057/UF7B1wVKT2.png",
"hot_id": 71562,
"is_commercial": "0"
},
{
"position": 11,
"keyword": "24岁女大学生意外当选村长",
"show_name": "24岁女大学生意外当选村长",
"word_type": 8,
"hot_id": 71369,
"is_commercial": "0"
},
{
"position": 12,
"keyword": "当你走进北京人的大脑",
"show_name": "当你走进北京人的大脑",
"word_type": 4,
"icon": "http://i0.hdslb.com/bfs/activity-plat/static/20221118/eaf2dd702d7cc14d8d9511190245d057/UF7B1wVKT2.png",
"hot_id": 71545,
"is_commercial": "0"
},
{
"position": 13,
"keyword": "steam春促史低推荐",
"show_name": "steam春促史低推荐",
"word_type": 8,
"hot_id": 71519,
"is_commercial": "0"
},
{
"position": 14,
"keyword": "UP原创绫华浴衣皮肤",
"show_name": "UP原创绫华浴衣皮肤",
"word_type": 8,
"hot_id": 71393,
"is_commercial": "0"
},
{
"position": 15,
"keyword": "T1如何横扫DK",
"show_name": "T1如何横扫DK",
"word_type": 8,
"hot_id": 71502,
"is_commercial": "0"
},
{
"position": 16,
"keyword": "LOL蝎子VGU重做讲解",
"show_name": "LOL蝎子VGU重做讲解",
"word_type": 4,
"icon": "http://i0.hdslb.com/bfs/activity-plat/static/20221118/eaf2dd702d7cc14d8d9511190245d057/UF7B1wVKT2.png",
"hot_id": 71531,
"is_commercial": "0"
},
{
"position": 17,
"keyword": "文心一言是ChatPPT吗",
"show_name": "文心一言是ChatPPT吗",
"word_type": 8,
"hot_id": 71354,
"is_commercial": "0"
},
{
"position": 18,
"keyword": "左右三个朝代历史的人",
"show_name": "左右三个朝代历史的人",
"word_type": 8,
"hot_id": 71484,
"is_commercial": "0"
},
{
"position": 19,
"keyword": "奥拉星宙域无极",
"show_name": "奥拉星宙域无极",
"word_type": 8,
"hot_id": 71336,
"is_commercial": "0"
},
{
"position": 20,
"keyword": "得此舍友吾欲何求",
"show_name": "得此舍友吾欲何求",
"word_type": 9,
"icon": "http://i0.hdslb.com/bfs/activity-plat/static/20221117/eaf2dd702d7cc14d8d9511190245d057/EeuqbMwao9.png",
"hot_id": 71468,
"is_commercial": "0"
},
{
"position": 21,
"keyword": "机械男毕业9年后现状",
"show_name": "机械男毕业9年后现状",
"word_type": 8,
"hot_id": 71503,
"is_commercial": "0"
},
{
"position": 22,
"keyword": "澳门Mezza9自助餐值吗",
"show_name": "澳门Mezza9自助餐值吗",
"word_type": 8,
"hot_id": 71325,
"is_commercial": "0"
},
{
"position": 23,
"keyword": "平板支撑的10个常见错",
"show_name": "平板支撑的10个常见错",
"word_type": 8,
"hot_id": 71473,
"is_commercial": "0"
},
{
"position": 24,
"keyword": "突查银杏酒店管理学院",
"show_name": "突查银杏酒店管理学院",
"word_type": 8,
"hot_id": 71327,
"is_commercial": "0"
},
{
"position": 25,
"keyword": "年轻化阿尔茨海默如何触发",
"show_name": "年轻化阿尔茨海默如何触发",
"word_type": 8,
"hot_id": 71293,
"is_commercial": "0"
},
{
"position": 26,
"keyword": "球2给世界来点中国震撼",
"show_name": "球2给世界来点中国震撼",
"word_type": 8,
"hot_id": 71300,
"is_commercial": "0"
},
{
"position": 27,
"keyword": "豆瓣9.4分重启人生",
"show_name": "豆瓣9.4分重启人生",
"word_type": 8,
"hot_id": 71318,
"is_commercial": "0"
},
{
"position": 28,
"keyword": "泰国水果有多便宜",
"show_name": "泰国水果有多便宜",
"word_type": 8,
"hot_id": 71405,
"is_commercial": "0"
},
{
"position": 29,
"keyword": "这才是汉堡该有的样子",
"show_name": "这才是汉堡该有的样子",
"word_type": 8,
"hot_id": 71406,
"is_commercial": "0"
},
{
"position": 30,
"keyword": "美女野兽原型有多惨",
"show_name": "美女野兽原型有多惨",
"word_type": 8,
"hot_id": 71340,
"is_commercial": "0"
}
],
"exp_str": "8000#5510#6609#7709"
}
}
``` | closed | 2023-03-19T07:12:25Z | 2023-04-11T12:49:24Z | https://github.com/Nemo2011/bilibili-api/issues/238 | [
"need",
"feature"
] | z0z0r4 | 2 |
PokeAPI/pokeapi | graphql | 413 | 400 Code Error | Hi,
I used docker-compose to run PokeAPI. I changed gunicorn from 8000 to 7001 in the Docker files as well as in the nginx vhost and the gunicorn ini file. I also modified the ports as:
```
ports:
- "85:80"
- "445:443"
```
When I do `docker-compose up --build` and try to open the webpage, I get: Bad Request (400).
My Apache vhost looks like this:
```
<VirtualHost *:80>
ServerName pokeapi.expij.h-y.eu
ServerAlias pokeapi.expij.h-y.eu
ServerSignature Off
ProxyPreserveHost On
AllowEncodedSlashes Off
ProxyVia On
ProxyRequests Off
ProxyPass / http://127.0.0.1:85/
ProxyPassReverse / http://127.0.0.1:85/
ProxyPreserveHost on
#Set up apache error documents, if back end goes down (i.e. 503 error) then a maintenance/deploy page is thrown up.
ErrorDocument 404 /404.html
ErrorDocument 422 /422.html
ErrorDocument 500 /500.html
ErrorDocument 503 /deploy.html
LogFormat "%{X-Forwarded-For}i %l %u %t \"%r\" %>s %b" common_forwarded
ErrorLog /var/www/sites/*.expij.h-y.eu/logs/pokeapi.expij.h-y.eu/error.log
CustomLog /var/www/sites/*.expij.h-y.eu/logs/pokeapi.expij.h-y.eu/forwarded.log common_forwarded
CustomLog /var/www/sites/*.expij.h-y.eu/logs/pokeapi.expij.h-y.eu/access.log combined env=!dontlog
CustomLog /var/www/sites/*.expij.h-y.eu/logs/pokeapi.expij.h-y.eu/com.log combined
</VirtualHost>
```
| closed | 2019-01-31T23:41:26Z | 2020-08-19T10:15:38Z | https://github.com/PokeAPI/pokeapi/issues/413 | [] | boberski666 | 6 |
FactoryBoy/factory_boy | sqlalchemy | 316 | save() method called twice when using post_generation | I just realised that my model factory was calling the model `save` method twice when defining a `post_generation` function. Here is the many-to-many example:
```python
class UserFactory(factory.django.DjangoModelFactory):
    class Meta:
        model = models.User

    name = "John Doe"

    @factory.post_generation
    def groups(self, create, extracted, **kwargs):
        if not create:
            # Simple build, do nothing.
            return

        if extracted:
            # A list of groups were passed in, use them
            for group in extracted:
                self.groups.add(group)
```
Calling `UserFactory()` triggers `save()` on the `User` model twice.
This happens because:
1/ The `DjangoModelFactory` redefines `_after_postgeneration`:
```python
@classmethod
def _after_postgeneration(cls, obj, create, results=None):
    """Save again the instance if creating and at least one hook ran."""
    if create and results:
        # Some post-generation hooks ran, and may have modified us.
        obj.save()
```
2/ This `results` dict comes from the call to our `post_generation` function, which with the previous `User` example returns `{'groups': None}`.
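To make this concrete, here is a self-contained sketch of the mechanism (`FakeModel` and the copied hook logic are stand-ins I wrote, not factory_boy classes): even a hook that returns `None` still leaves a truthy `{'groups': None}` in `results`, so the post-generation save fires.

```python
class FakeModel:
    """Stand-in for a Django model; counts save() calls."""

    def __init__(self):
        self.save_calls = 0

    def save(self):
        self.save_calls += 1


def after_postgeneration(obj, create, results=None):
    # Mirrors the DjangoModelFactory._after_postgeneration logic above:
    # a non-empty results dict triggers an extra save.
    if create and results:
        obj.save()


user = FakeModel()
user.save()                 # first save: the factory's create()
results = {"groups": None}  # the hook ran and returned None, but is still recorded
after_postgeneration(user, True, results)
print(user.save_calls)      # -> 2
```

Passing an empty `results` dict (no hooks ran) would leave the count at 1, which is why factories without `post_generation` hooks only save once.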
What use case led to this post-generation saving? If some data had to be saved in a custom `post_generation` function, nothing prevents calling `save` on `obj` explicitly.
Have I missed something?
Should I override this method if I don't want this double `save()` on my model, or is there a better way?
| closed | 2016-07-29T09:34:27Z | 2023-10-02T10:55:14Z | https://github.com/FactoryBoy/factory_boy/issues/316 | [
"Django"
] | romgar | 8 |
babysor/MockingBird | pytorch | 368 | It reports an error and fails to run | I switched to another computer and ran the following command: `python C:\MockingBird-main\demo_toolbox.py -d .\samples`, which reported:
```
Traceback (most recent call last):
  File "C:\MockingBird-main\demo_toolbox.py", line 2, in <module>
    from toolbox import Toolbox
  File "C:\MockingBird-main\toolbox\__init__.py", line 1, in <module>
    from toolbox.ui import UI
  File "C:\MockingBird-main\toolbox\ui.py", line 16, in <module>
    import umap
ModuleNotFoundError: No module named 'umap'
```
Then I tried a different invocation: `python C:\MockingBird-main\demo_toolbox.py`, which reported the same thing:
```
Traceback (most recent call last):
  File "C:\MockingBird-main\demo_toolbox.py", line 2, in <module>
    from toolbox import Toolbox
  File "C:\MockingBird-main\toolbox\__init__.py", line 1, in <module>
    from toolbox.ui import UI
  File "C:\MockingBird-main\toolbox\ui.py", line 16, in <module>
    import umap
ModuleNotFoundError: No module named 'umap'
```
This new machine has dual graphics: 1. Intel Iris Xe integrated with an 11th-gen Intel Core CPU, and 2. an NVIDIA GeForce MX350, with CUDA version 11.6.99.
As a complete beginner, how do I solve this???
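(Note for others who hit this: the `umap` module is distributed on PyPI as `umap-learn`, so the fix is likely `pip install umap-learn` rather than `pip install umap`. A small helper to check for the missing module, where the name table is my own mapping, not part of MockingBird:)

```python
import importlib.util

# Modules whose import name differs from their PyPI distribution name.
PYPI_NAME = {"umap": "umap-learn"}


def install_hint(module_name):
    """Return a pip command if the module is not importable, else None."""
    if importlib.util.find_spec(module_name) is None:
        return "pip install " + PYPI_NAME.get(module_name, module_name)
    return None


# Prints the suggested command when umap is missing from the environment.
print(install_hint("umap") or "umap already installed")
```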
| open | 2022-02-03T13:30:31Z | 2022-02-04T09:52:38Z | https://github.com/babysor/MockingBird/issues/368 | [] | zhang065 | 3 |
pytorch/vision | computer-vision | 8278 | Tests fail: ValueError: Could not find the operator torchvision::nms. Please make sure you have already registered the operator and (if registered from C++) loaded it via torch.ops.load_library. | ### 🐛 Describe the bug
```
===> py39-torchvision-0.17.1 depends on file: /usr/local/bin/python3.9 - found
cd /usr/ports/misc/py-torchvision/work-py39/vision-0.17.1 && /usr/bin/env XDG_DATA_HOME=/usr/ports/misc/py-torchvision/work-py39 XDG_CONFIG_HOME=/usr/ports/misc/py-torchvision/work-py39 XDG_CACHE_HOME=/usr/ports/misc/py-torchvision/work-py39/.cache HOME=/usr/ports/misc/py-torchvision/work-py39 PATH=/usr/local/libexec/ccache:/usr/ports/misc/py-torchvision/work-py39/.bin:/home/yuri/bin:/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/sbin:/usr/local/bin PKG_CONFIG_LIBDIR=/usr/ports/misc/py-torchvision/work-py39/.pkgconfig:/usr/local/libdata/pkgconfig:/usr/local/share/pkgconfig:/usr/libdata/pkgconfig MK_DEBUG_FILES=no MK_KERNEL_SYMBOLS=no SHELL=/bin/sh NO_LINT=YES LDSHARED="cc -shared" PYTHONDONTWRITEBYTECODE= PYTHONOPTIMIZE= PREFIX=/usr/local LOCALBASE=/usr/local CC="cc" CFLAGS="-O2 -pipe -fstack-protector-strong -fno-strict-aliasing " CPP="cpp" CPPFLAGS="" LDFLAGS=" -fstack-protector-strong " LIBS="" CXX="c++" CXXFLAGS="-O2 -pipe -fstack-protector-strong -fno-strict-aliasing " CCACHE_DIR="/tmp/.ccache" BSD_INSTALL_PROGRAM="install -s -m 555" BSD_INSTALL_LIB="install -s -m 0644" BSD_INSTALL_SCRIPT="install -m 555" BSD_INSTALL_DATA="install -m 0644" BSD_INSTALL_MAN="install -m 444" /usr/local/bin/python3.9 -m pytest -k '' -rs -v -o addopts=
ImportError while loading conftest '/usr/ports/misc/py-torchvision/work-py39/vision-0.17.1/test/conftest.py'.
test/conftest.py:7: in <module>
from common_utils import (
test/common_utils.py:22: in <module>
from torchvision import io, tv_tensors
torchvision/__init__.py:6: in <module>
from torchvision import _meta_registrations, datasets, io, models, ops, transforms, utils
torchvision/_meta_registrations.py:164: in <module>
def meta_nms(dets, scores, iou_threshold):
/usr/local/lib/python3.9/site-packages/torch/_custom_ops.py:253: in inner
custom_op = _find_custom_op(qualname, also_check_torch_library=True)
/usr/local/lib/python3.9/site-packages/torch/_custom_op/impl.py:1076: in _find_custom_op
overload = get_op(qualname)
/usr/local/lib/python3.9/site-packages/torch/_custom_op/impl.py:1062: in get_op
error_not_found()
/usr/local/lib/python3.9/site-packages/torch/_custom_op/impl.py:1052: in error_not_found
raise ValueError(
E ValueError: Could not find the operator torchvision::nms. Please make sure you have already registered the operator and (if registered from C++) loaded it via torch.ops.load_library.
*** Error code 4
```
### Versions
```
$ python3.9 collect_env.py
Collecting environment information...
PyTorch version: 2.1.0a0
Is debug build: False
CUDA used to build PyTorch: Could not collect
ROCM used to build PyTorch: N/A
OS: freebsd14
GCC version: Could not collect
Clang version: 16.0.6 (https://github.com/llvm/llvm-project.git llvmorg-16.0.6-0-g7cbf1a259152)
CMake version: version 3.28.3
Libc version: N/A
Python version: 3.9.18 (main, Dec 10 2023, 01:22:40) [Clang 16.0.6 (https://github.com/llvm/llvm-project.git llvmorg-16.0.6-0-g7cbf1 (64-bit runtime)
Python platform: FreeBSD-14.0-STABLE-amd64-64bit-ELF
Is CUDA available: False
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 2060
Nvidia driver version: 535.104.05
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: False
CPU:
Versions of relevant libraries:
[pip3] flake8==6.1.0
[pip3] mypy==1.8.0
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.25.0
[pip3] pytest-flake8==1.1.1
[pip3] pytest-mypy==0.10.3
[pip3] torch==2.1.0a0+gitunknown
[pip3] torchvision==0.17.1+06c0f5e
[conda] Could not collect
``` | closed | 2024-02-23T17:00:14Z | 2024-02-23T17:25:10Z | https://github.com/pytorch/vision/issues/8278 | [] | yurivict | 1 |
onnx/onnx | pytorch | 6356 | Why does SplitToSequence not allow zeros in `split` input? | ### Question
The ONNX 1.18.0 documentation says that for operator SplitToSequence:
```
‘split’ must contain only positive numbers.
```
Why are zeros not allowed? The semantics for zeros seem unambiguous, and allowing them would follow the principle of least surprise.
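To make the question concrete, here is a plain-Python sketch of the splitting semantics with an explicit `split` input (an illustration of the behavior being asked about, not the ONNX reference implementation): a zero entry simply produces an empty chunk.

```python
def split_to_sequence(values, split):
    """Split `values` into consecutive chunks of the given sizes."""
    assert all(s >= 0 for s in split), "negative sizes are meaningless"
    assert sum(split) == len(values), "sizes must cover the input exactly"
    out, start = [], 0
    for size in split:
        out.append(values[start:start + size])
        start += size
    return out


print(split_to_sequence([1, 2, 3, 4, 5, 6], [2, 0, 4]))
# -> [[1, 2], [], [3, 4, 5, 6]]
```

The zero-sized chunk falls out of ordinary slicing with no special cases, which is why the restriction to strictly positive sizes seems surprising.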
| open | 2024-09-09T16:28:04Z | 2024-09-09T21:32:29Z | https://github.com/onnx/onnx/issues/6356 | [
"question",
"module: spec"
] | ArchRobison | 1 |