Dataset schema (15 columns):

column        dtype          values / range
Unnamed: 0    int64          0 - 832k
id            float64        2.49B - 32.1B
type          stringclasses  1 value
created_at    stringlengths  19 - 19
repo          stringlengths  7 - 112
repo_url      stringlengths  36 - 141
action        stringclasses  3 values
title         stringlengths  1 - 744
labels        stringlengths  4 - 574
body          stringlengths  9 - 211k
index         stringclasses  10 values
text_combine  stringlengths  96 - 211k
label         stringclasses  2 values
text          stringlengths  96 - 188k
binary_label  int64          0 - 1
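The schema above can be read as a set of row-level constraints. A minimal stdlib sketch of a validator follows; the fixed-width timestamp, the class counts, and the 0/1 range come from the column summary, while the concrete set of `action` values is an assumption (only "closed" appears in the sample rows below):

```python
import re

def validate(row: dict) -> None:
    """Check one row against the schema constraints listed above."""
    assert row["type"] == "IssuesEvent"           # type: stringclasses, 1 value
    assert len(row["created_at"]) == 19           # created_at: fixed length 19
    assert re.fullmatch(r"\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}", row["created_at"])
    # action has 3 classes; this set is an assumption, only "closed" is shown
    assert row["action"] in {"closed", "opened", "reopened"}
    assert row["label"] in {"process", "non_process"}  # label: 2 values
    assert row["binary_label"] in (0, 1)               # int64, range 0-1

# Field values taken from the first example row below
validate({
    "type": "IssuesEvent",
    "created_at": "2022-12-01 19:22:38",
    "action": "closed",
    "label": "process",
    "binary_label": 1,
})
print("row passes schema checks")
```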
Example row 1:
Unnamed: 0: 19,430
id: 25,600,089,553
type: IssuesEvent
created_at: 2022-12-01 19:22:38
repo: winter-telescope/winterdrp
repo_url: https://api.github.com/repos/winter-telescope/winterdrp
action: closed
title: MulitProcess Processor
labels: enhancement wishlist processors
body:
I would love to have a Multiprocess Processor, which would split batches into different python processes. Basically, you could take 8 batches, make the flats first, then run each batch on N different CPU for factor N speedup.
index: 1.0
text_combine:
MulitProcess Processor - I would love to have a Multiprocess Processor, which would split batches into different python processes. Basically, you could take 8 batches, make the flats first, then run each batch on N different CPU for factor N speedup.
label: process
text:
mulitprocess processor i would love to have a multiprocess processor which would split batches into different python processes basically you could take batches make the flats first then run each batch on n different cpu for factor n speedup
binary_label: 1
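The derived fields in this row are reconstructible: `text_combine` appears to be the title and body joined with " - ", and `text` appears to be `text_combine` lowercased with every run of non-letter characters collapsed to a single space. A minimal sketch, verified against this row only; the real pipeline evidently does more (in the sgkit row further down, markdown links and URLs are dropped entirely, which this sketch does not do):

```python
import re

def combine(title: str, body: str) -> str:
    # text_combine appears to be "<title> - <body>"
    return f"{title} - {body}"

def clean(s: str) -> str:
    # text appears to be text_combine lowercased, with each run of
    # non-letter characters (digits, punctuation, whitespace) collapsed
    # to one space; note "8 batches" becomes "batches"
    return re.sub(r"[^a-z]+", " ", s.lower()).strip()

title = "MulitProcess Processor"
body = ("I would love to have a Multiprocess Processor, which would split "
        "batches into different python processes. Basically, you could take "
        "8 batches, make the flats first, then run each batch on N different "
        "CPU for factor N speedup.")

print(clean(combine(title, body)))  # matches the `text` field of the row above
```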
Example row 2:
Unnamed: 0: 1,312
id: 3,864,523,023
type: IssuesEvent
created_at: 2016-04-08 14:12:38
repo: ParsePlatform/parse-server
repo_url: https://api.github.com/repos/ParsePlatform/parse-server
action: closed
title: Feature Request: Dashboard-Modifiable Parse Server Config
labels: in-process
body:
I'm interested in exposing some of the Parse Server configuration to the user through the dashboard. For example, being able to set 'allowClientClassCreation', 'verbose', client keys, etc. from the dashboard would be a nice management experience. I think that this could leverage the _GlobalConfig code or something similar. If the settings exist in _GlobalConfig collection they would be loaded into the app configuration cache, overriding the configuration specified in code. Any thoughts?
index: 1.0
text_combine:
Feature Request: Dashboard-Modifiable Parse Server Config - I'm interested in exposing some of the Parse Server configuration to the user through the dashboard. For example, being able to set 'allowClientClassCreation', 'verbose', client keys, etc. from the dashboard would be a nice management experience. I think that this could leverage the _GlobalConfig code or something similar. If the settings exist in _GlobalConfig collection they would be loaded into the app configuration cache, overriding the configuration specified in code. Any thoughts?
label: process
text:
feature request dashboard modifiable parse server config i m interested in exposing some of the parse server configuration to the user through the dashboard for example being able to set allowclientclasscreation verbose client keys etc from the dashboard would be a nice management experience i think that this could leverage the globalconfig code or something similar if the settings exist in globalconfig collection they would be loaded into the app configuration cache overriding the configuration specified in code any thoughts
binary_label: 1
Example row 3:
Unnamed: 0: 13,206
id: 15,651,188,339
type: IssuesEvent
created_at: 2021-03-23 09:54:19
repo: pystatgen/sgkit
repo_url: https://api.github.com/repos/pystatgen/sgkit
action: closed
title: Release 0.2.0a1
labels: process + tools
body:
We are now [85 commits ahead of `master`](https://github.com/pystatgen/sgkit/compare/0.1.0a1...master) and have some nice improvements in VCF parsing in particular that it would be good for new users to pick up. Once we've gotten consensus on https://github.com/pystatgen/sgkit/issues/494 it would probably be good to cut another release.
index: 1.0
text_combine:
Release 0.2.0a1 - We are now [85 commits ahead of `master`](https://github.com/pystatgen/sgkit/compare/0.1.0a1...master) and have some nice improvements in VCF parsing in particular that it would be good for new users to pick up. Once we've gotten consensus on https://github.com/pystatgen/sgkit/issues/494 it would probably be good to cut another release.
label: process
text:
release we are now and have some nice improvements in vcf parsing in particular that it would be good for new users to pick up once we ve gotten consensus on it would probably be good to cut another release
binary_label: 1
Example row 4:
Unnamed: 0: 21,299
id: 28,495,775,392
type: IssuesEvent
created_at: 2023-04-18 14:06:03
repo: dDevTech/tapas-top-frontend
repo_url: https://api.github.com/repos/dDevTech/tapas-top-frontend
action: closed
title: Url imagen validacion 19/04/2023
labels: pending in process
body:
- Crear nueva condicion de validacion url imagen para que comience por http: o https: (sea una URL) - Debe hacerse en todos los lugares donde se modifique una url (en registro, ajustes de perfil, newDish...) - Tambien comporbar si termina su extensión en un formato de imagen (jpg, png , gif etc...)
index: 1.0
text_combine:
Url imagen validacion 19/04/2023 - - Crear nueva condicion de validacion url imagen para que comience por http: o https: (sea una URL) - Debe hacerse en todos los lugares donde se modifique una url (en registro, ajustes de perfil, newDish...) - Tambien comporbar si termina su extensión en un formato de imagen (jpg, png , gif etc...)
label: process
text:
url imagen validacion crear nueva condicion de validacion url imagen para que comience por http o https sea una url debe hacerse en todos los lugares donde se modifique una url en registro ajustes de perfil newdish tambien comporbar si termina su extensión en un formato de imagen jpg png gif etc
binary_label: 1
Example row 5:
Unnamed: 0: 211,693
id: 23,837,346,455
type: IssuesEvent
created_at: 2022-09-06 07:22:01
repo: kxxt/kxxt-website
repo_url: https://api.github.com/repos/kxxt/kxxt-website
action: closed
title: CVE-2022-0624 (High) detected in parse-path-4.0.4.tgz - autoclosed
labels: security vulnerability
body:
## CVE-2022-0624 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>parse-path-4.0.4.tgz</b></p></summary> <p>Parse paths (local paths, urls: ssh/git/etc)</p> <p>Library home page: <a href="https://registry.npmjs.org/parse-path/-/parse-path-4.0.4.tgz">https://registry.npmjs.org/parse-path/-/parse-path-4.0.4.tgz</a></p> <p>Path to dependency file: /package.json</p> <p>Path to vulnerable library: /node_modules/parse-path/package.json</p> <p> Dependency Hierarchy: - gatsby-plugin-gatsby-cloud-4.18.0.tgz (Root Library) - gatsby-telemetry-3.18.0.tgz - git-up-4.0.5.tgz - parse-url-6.0.2.tgz - :x: **parse-path-4.0.4.tgz** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/kxxt/kxxt-website/commit/37f8543da5164a1a7ef318756aa0eac1c5e89a09">37f8543da5164a1a7ef318756aa0eac1c5e89a09</a></p> <p>Found in base branch: <b>master</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> Authorization Bypass Through User-Controlled Key in GitHub repository ionicabizau/parse-path prior to 5.0.0. <p>Publish Date: 2022-06-28 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-0624>CVE-2022-0624</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.3</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: Low - Integrity Impact: Low - Availability Impact: Low </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. 
</p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-0624">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-0624</a></p> <p>Release Date: 2022-06-28</p> <p>Fix Resolution: parse-path - 5.0.0</p> </p> </details> <p></p> *** Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
index: True
text_combine:
CVE-2022-0624 (High) detected in parse-path-4.0.4.tgz - autoclosed - ## CVE-2022-0624 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>parse-path-4.0.4.tgz</b></p></summary> <p>Parse paths (local paths, urls: ssh/git/etc)</p> <p>Library home page: <a href="https://registry.npmjs.org/parse-path/-/parse-path-4.0.4.tgz">https://registry.npmjs.org/parse-path/-/parse-path-4.0.4.tgz</a></p> <p>Path to dependency file: /package.json</p> <p>Path to vulnerable library: /node_modules/parse-path/package.json</p> <p> Dependency Hierarchy: - gatsby-plugin-gatsby-cloud-4.18.0.tgz (Root Library) - gatsby-telemetry-3.18.0.tgz - git-up-4.0.5.tgz - parse-url-6.0.2.tgz - :x: **parse-path-4.0.4.tgz** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/kxxt/kxxt-website/commit/37f8543da5164a1a7ef318756aa0eac1c5e89a09">37f8543da5164a1a7ef318756aa0eac1c5e89a09</a></p> <p>Found in base branch: <b>master</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> Authorization Bypass Through User-Controlled Key in GitHub repository ionicabizau/parse-path prior to 5.0.0. 
<p>Publish Date: 2022-06-28 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-0624>CVE-2022-0624</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.3</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: Low - Integrity Impact: Low - Availability Impact: Low </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-0624">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-0624</a></p> <p>Release Date: 2022-06-28</p> <p>Fix Resolution: parse-path - 5.0.0</p> </p> </details> <p></p> *** Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
label: non_process
text:
cve high detected in parse path tgz autoclosed cve high severity vulnerability vulnerable library parse path tgz parse paths local paths urls ssh git etc library home page a href path to dependency file package json path to vulnerable library node modules parse path package json dependency hierarchy gatsby plugin gatsby cloud tgz root library gatsby telemetry tgz git up tgz parse url tgz x parse path tgz vulnerable library found in head commit a href found in base branch master vulnerability details authorization bypass through user controlled key in github repository ionicabizau parse path prior to publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact low integrity impact low availability impact low for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution parse path step up your open source security game with mend
binary_label: 0
Example row 6:
Unnamed: 0: 317,249
id: 27,222,064,372
type: IssuesEvent
created_at: 2023-02-21 06:36:18
repo: codestates-seb/seb41_main_006
repo_url: https://api.github.com/repos/codestates-seb/seb41_main_006
action: closed
title: [TEST] pet testcode 작성
labels: test
body:
### Task list --- - [x] controller - [x] service - [x] repository ### ETC --- 기타 사항 작성
index: 1.0
text_combine:
[TEST] pet testcode 작성 - ### Task list --- - [x] controller - [x] service - [x] repository ### ETC --- 기타 사항 작성
label: non_process
text:
pet testcode 작성 task list controller service repository etc 기타 사항 작성
binary_label: 0
Example row 7:
Unnamed: 0: 128,740
id: 10,550,333,843
type: IssuesEvent
created_at: 2019-10-03 10:48:26
repo: WordPress/gutenberg
repo_url: https://api.github.com/repos/WordPress/gutenberg
action: closed
title: MSEdge: Block toolbar overlay block contents
labels: Browser Issues Needs Testing [Type] Bug
body:
**Describe the bug** Block toolbar appears _on top_ of the block contents when editing a post in MSEdge **To Reproduce** Steps to reproduce the behavior: 1. Start new post `wp-admin/post-new.php` 2. add a `Heading` block 3. force toolbar to show-up 4. See it appears over the block **Expected behavior** A toolbar appears above the block frame **Screenshots** <img width="509" alt="header block" src="https://user-images.githubusercontent.com/5654161/46828554-27bf9e00-cda4-11e8-8b03-bb095ce504ad.png"> **Desktop (please complete the following information):** - OS: Win 10 - Browser MSEdge - Version 41.16299.492.0 **Additional context** Gutenberg v3.9
index: 1.0
text_combine:
MSEdge: Block toolbar overlay block contents - **Describe the bug** Block toolbar appears _on top_ of the block contents when editing a post in MSEdge **To Reproduce** Steps to reproduce the behavior: 1. Start new post `wp-admin/post-new.php` 2. add a `Heading` block 3. force toolbar to show-up 4. See it appears over the block **Expected behavior** A toolbar appears above the block frame **Screenshots** <img width="509" alt="header block" src="https://user-images.githubusercontent.com/5654161/46828554-27bf9e00-cda4-11e8-8b03-bb095ce504ad.png"> **Desktop (please complete the following information):** - OS: Win 10 - Browser MSEdge - Version 41.16299.492.0 **Additional context** Gutenberg v3.9
label: non_process
text:
msedge block toolbar overlay block contents describe the bug block toolbar appears on top of the block contents when editing a post in msedge to reproduce steps to reproduce the behavior start new post wp admin post new php add a heading block force toolbar to show up see it appears over the block expected behavior a toolbar appears above the block frame screenshots img width alt header block src desktop please complete the following information os win browser msedge version additional context gutenberg
binary_label: 0
Example row 8:
Unnamed: 0: 61,921
id: 25,784,234,628
type: IssuesEvent
created_at: 2022-12-09 18:44:13
repo: Azure/azure-cli
repo_url: https://api.github.com/repos/Azure/azure-cli
action: closed
title: Unable to deploy web app
labels: Web Apps Service Attention
body:
### **This is autogenerated. Please review and update as needed.** ## Describe the bug **Command Name** `az webapp deployment source config-zip` **Errors:** ``` The command failed with an unexpected error. Here is the traceback: HTTPSConnectionPool(host='mfecontainer.scm.azurewebsites.net', port=443): Max retries exceeded with url: /api/zipdeploy?isAsync=true (Caused by NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x04EAAEE0>: Failed to establish a new connection: [WinError 10060] A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond')) Traceback (most recent call last): File "D:\a\1\s\build_scripts\windows\artifacts\cli\Lib\site-packages\urllib3/connection.py", line 159, in _new_conn File "D:\a\1\s\build_scripts\windows\artifacts\cli\Lib\site-packages\urllib3/util/connection.py", line 84, in create_connection File "D:\a\1\s\build_scripts\windows\artifacts\cli\Lib\site-packages\urllib3/util/connection.py", line 74, in create_connection TimeoutError: [WinError 10060] A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond During handling of the above exception, another exception occurred: Traceback (most recent call last): File "D:\a\1\s\build_scripts\windows\artifacts\cli\Lib\site-packages\urllib3/connectionpool.py", line 670, in urlopen File "D:\a\1\s\build_scripts\windows\artifacts\cli\Lib\site-packages\urllib3/connectionpool.py", line 381, in _make_request File "D:\a\1\s\build_scripts\windows\artifacts\cli\Lib\site-packages\urllib3/connectionpool.py", line 976, in _validate_conn File "D:\a\1\s\build_scripts\windows\artifacts\cli\Lib\site-packages\urllib3/connection.py", line 308, in connect File "D:\a\1\s\build_scripts\windows\artifacts\cli\Lib\site-packages\urllib3/connection.py", line 
171, in _new_conn urllib3.exceptions.NewConnectionError: <urllib3.connection.HTTPSConnection object at 0x04EAAEE0>: Failed to establish a new connection: [WinError 10060] A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond During handling of the above exception, another exception occurred: Traceback (most recent call last): File "D:\a\1\s\build_scripts\windows\artifacts\cli\Lib\site-packages\requests/adapters.py", line 439, in send File "D:\a\1\s\build_scripts\windows\artifacts\cli\Lib\site-packages\urllib3/connectionpool.py", line 724, in urlopen File "D:\a\1\s\build_scripts\windows\artifacts\cli\Lib\site-packages\urllib3/util/retry.py", line 439, in increment urllib3.exceptions.MaxRetryError: HTTPSConnectionPool(host='mfecontainer.scm.azurewebsites.net', port=443): Max retries exceeded with url: /api/zipdeploy?isAsync=true (Caused by NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x04EAAEE0>: Failed to establish a new connection: [WinError 10060] A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond')) During handling of the above exception, another exception occurred: Traceback (most recent call last): File "D:\a\1\s\build_scripts\windows\artifacts\cli\Lib\site-packages\knack/cli.py", line 231, in invoke File "D:\a\1\s\build_scripts\windows\artifacts\cli\Lib\site-packages\azure/cli/core/commands/__init__.py", line 657, in execute File "D:\a\1\s\build_scripts\windows\artifacts\cli\Lib\site-packages\azure/cli/core/commands/__init__.py", line 720, in _run_jobs_serially File "D:\a\1\s\build_scripts\windows\artifacts\cli\Lib\site-packages\azure/cli/core/commands/__init__.py", line 691, in _run_job File 
"D:\a\1\s\build_scripts\windows\artifacts\cli\Lib\site-packages\azure/cli/core/commands/__init__.py", line 328, in __call__ File "D:\a\1\s\build_scripts\windows\artifacts\cli\Lib\site-packages\azure/cli/core/commands/command_operation.py", line 121, in handler File "D:\a\1\s\build_scripts\windows\artifacts\cli\Lib\site-packages\azure/cli/command_modules/appservice/custom.py", line 401, in enable_zip_deploy_webapp File "D:\a\1\s\build_scripts\windows\artifacts\cli\Lib\site-packages\azure/cli/command_modules/appservice/custom.py", line 430, in enable_zip_deploy File "D:\a\1\s\build_scripts\windows\artifacts\cli\Lib\site-packages\requests/api.py", line 116, in post File "D:\a\1\s\build_scripts\windows\artifacts\cli\Lib\site-packages\requests/api.py", line 60, in request File "D:\a\1\s\build_scripts\windows\artifacts\cli\Lib\site-packages\requests/sessions.py", line 533, in request File "D:\a\1\s\build_scripts\windows\artifacts\cli\Lib\site-packages\requests/sessions.py", line 646, in send File "D:\a\1\s\build_scripts\windows\artifacts\cli\Lib\site-packages\requests/adapters.py", line 516, in send requests.exceptions.ConnectionError: HTTPSConnectionPool(host='mfecontainer.scm.azurewebsites.net', port=443): Max retries exceeded with url: /api/zipdeploy?isAsync=true (Caused by NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x04EAAEE0>: Failed to establish a new connection: [WinError 10060] A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond')) ``` ## To Reproduce: Steps to reproduce the behavior. Note that argument values have been redacted, as they may contain sensitive information. 
- _Put any pre-requisite steps here..._ - `az webapp deployment source config-zip -g {} -n {} --src {}` ## Expected Behavior ## Environment Summary ``` Windows-10-10.0.18362-SP0 Python 3.8.9 Installer: MSI azure-cli 2.24.0 ``` ## Additional Context <!--Please don't remove this:--> <!--auto-generated-->
index: 1.0
text_combine:
Unable to deploy web app - ### **This is autogenerated. Please review and update as needed.** ## Describe the bug **Command Name** `az webapp deployment source config-zip` **Errors:** ``` The command failed with an unexpected error. Here is the traceback: HTTPSConnectionPool(host='mfecontainer.scm.azurewebsites.net', port=443): Max retries exceeded with url: /api/zipdeploy?isAsync=true (Caused by NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x04EAAEE0>: Failed to establish a new connection: [WinError 10060] A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond')) Traceback (most recent call last): File "D:\a\1\s\build_scripts\windows\artifacts\cli\Lib\site-packages\urllib3/connection.py", line 159, in _new_conn File "D:\a\1\s\build_scripts\windows\artifacts\cli\Lib\site-packages\urllib3/util/connection.py", line 84, in create_connection File "D:\a\1\s\build_scripts\windows\artifacts\cli\Lib\site-packages\urllib3/util/connection.py", line 74, in create_connection TimeoutError: [WinError 10060] A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond During handling of the above exception, another exception occurred: Traceback (most recent call last): File "D:\a\1\s\build_scripts\windows\artifacts\cli\Lib\site-packages\urllib3/connectionpool.py", line 670, in urlopen File "D:\a\1\s\build_scripts\windows\artifacts\cli\Lib\site-packages\urllib3/connectionpool.py", line 381, in _make_request File "D:\a\1\s\build_scripts\windows\artifacts\cli\Lib\site-packages\urllib3/connectionpool.py", line 976, in _validate_conn File "D:\a\1\s\build_scripts\windows\artifacts\cli\Lib\site-packages\urllib3/connection.py", line 308, in connect File 
"D:\a\1\s\build_scripts\windows\artifacts\cli\Lib\site-packages\urllib3/connection.py", line 171, in _new_conn urllib3.exceptions.NewConnectionError: <urllib3.connection.HTTPSConnection object at 0x04EAAEE0>: Failed to establish a new connection: [WinError 10060] A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond During handling of the above exception, another exception occurred: Traceback (most recent call last): File "D:\a\1\s\build_scripts\windows\artifacts\cli\Lib\site-packages\requests/adapters.py", line 439, in send File "D:\a\1\s\build_scripts\windows\artifacts\cli\Lib\site-packages\urllib3/connectionpool.py", line 724, in urlopen File "D:\a\1\s\build_scripts\windows\artifacts\cli\Lib\site-packages\urllib3/util/retry.py", line 439, in increment urllib3.exceptions.MaxRetryError: HTTPSConnectionPool(host='mfecontainer.scm.azurewebsites.net', port=443): Max retries exceeded with url: /api/zipdeploy?isAsync=true (Caused by NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x04EAAEE0>: Failed to establish a new connection: [WinError 10060] A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond')) During handling of the above exception, another exception occurred: Traceback (most recent call last): File "D:\a\1\s\build_scripts\windows\artifacts\cli\Lib\site-packages\knack/cli.py", line 231, in invoke File "D:\a\1\s\build_scripts\windows\artifacts\cli\Lib\site-packages\azure/cli/core/commands/__init__.py", line 657, in execute File "D:\a\1\s\build_scripts\windows\artifacts\cli\Lib\site-packages\azure/cli/core/commands/__init__.py", line 720, in _run_jobs_serially File "D:\a\1\s\build_scripts\windows\artifacts\cli\Lib\site-packages\azure/cli/core/commands/__init__.py", line 691, in _run_job File 
"D:\a\1\s\build_scripts\windows\artifacts\cli\Lib\site-packages\azure/cli/core/commands/__init__.py", line 328, in __call__ File "D:\a\1\s\build_scripts\windows\artifacts\cli\Lib\site-packages\azure/cli/core/commands/command_operation.py", line 121, in handler File "D:\a\1\s\build_scripts\windows\artifacts\cli\Lib\site-packages\azure/cli/command_modules/appservice/custom.py", line 401, in enable_zip_deploy_webapp File "D:\a\1\s\build_scripts\windows\artifacts\cli\Lib\site-packages\azure/cli/command_modules/appservice/custom.py", line 430, in enable_zip_deploy File "D:\a\1\s\build_scripts\windows\artifacts\cli\Lib\site-packages\requests/api.py", line 116, in post File "D:\a\1\s\build_scripts\windows\artifacts\cli\Lib\site-packages\requests/api.py", line 60, in request File "D:\a\1\s\build_scripts\windows\artifacts\cli\Lib\site-packages\requests/sessions.py", line 533, in request File "D:\a\1\s\build_scripts\windows\artifacts\cli\Lib\site-packages\requests/sessions.py", line 646, in send File "D:\a\1\s\build_scripts\windows\artifacts\cli\Lib\site-packages\requests/adapters.py", line 516, in send requests.exceptions.ConnectionError: HTTPSConnectionPool(host='mfecontainer.scm.azurewebsites.net', port=443): Max retries exceeded with url: /api/zipdeploy?isAsync=true (Caused by NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x04EAAEE0>: Failed to establish a new connection: [WinError 10060] A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond')) ``` ## To Reproduce: Steps to reproduce the behavior. Note that argument values have been redacted, as they may contain sensitive information. 
- _Put any pre-requisite steps here..._ - `az webapp deployment source config-zip -g {} -n {} --src {}` ## Expected Behavior ## Environment Summary ``` Windows-10-10.0.18362-SP0 Python 3.8.9 Installer: MSI azure-cli 2.24.0 ``` ## Additional Context <!--Please don't remove this:--> <!--auto-generated-->
label: non_process
text:
unable to deploy web app this is autogenerated please review and update as needed describe the bug command name az webapp deployment source config zip errors the command failed with an unexpected error here is the traceback httpsconnectionpool host mfecontainer scm azurewebsites net port max retries exceeded with url api zipdeploy isasync true caused by newconnectionerror failed to establish a new connection a connection attempt failed because the connected party did not properly respond after a period of time or established connection failed because connected host has failed to respond traceback most recent call last file d a s build scripts windows artifacts cli lib site packages connection py line in new conn file d a s build scripts windows artifacts cli lib site packages util connection py line in create connection file d a s build scripts windows artifacts cli lib site packages util connection py line in create connection timeouterror a connection attempt failed because the connected party did not properly respond after a period of time or established connection failed because connected host has failed to respond during handling of the above exception another exception occurred traceback most recent call last file d a s build scripts windows artifacts cli lib site packages connectionpool py line in urlopen file d a s build scripts windows artifacts cli lib site packages connectionpool py line in make request file d a s build scripts windows artifacts cli lib site packages connectionpool py line in validate conn file d a s build scripts windows artifacts cli lib site packages connection py line in connect file d a s build scripts windows artifacts cli lib site packages connection py line in new conn exceptions newconnectionerror failed to establish a new connection a connection attempt failed because the connected party did not properly respond after a period of time or established connection failed because connected host has failed to respond during handling 
of the above exception another exception occurred traceback most recent call last file d a s build scripts windows artifacts cli lib site packages requests adapters py line in send file d a s build scripts windows artifacts cli lib site packages connectionpool py line in urlopen file d a s build scripts windows artifacts cli lib site packages util retry py line in increment exceptions maxretryerror httpsconnectionpool host mfecontainer scm azurewebsites net port max retries exceeded with url api zipdeploy isasync true caused by newconnectionerror failed to establish a new connection a connection attempt failed because the connected party did not properly respond after a period of time or established connection failed because connected host has failed to respond during handling of the above exception another exception occurred traceback most recent call last file d a s build scripts windows artifacts cli lib site packages knack cli py line in invoke file d a s build scripts windows artifacts cli lib site packages azure cli core commands init py line in execute file d a s build scripts windows artifacts cli lib site packages azure cli core commands init py line in run jobs serially file d a s build scripts windows artifacts cli lib site packages azure cli core commands init py line in run job file d a s build scripts windows artifacts cli lib site packages azure cli core commands init py line in call file d a s build scripts windows artifacts cli lib site packages azure cli core commands command operation py line in handler file d a s build scripts windows artifacts cli lib site packages azure cli command modules appservice custom py line in enable zip deploy webapp file d a s build scripts windows artifacts cli lib site packages azure cli command modules appservice custom py line in enable zip deploy file d a s build scripts windows artifacts cli lib site packages requests api py line in post file d a s build scripts windows artifacts cli lib site packages requests 
api py line in request file d a s build scripts windows artifacts cli lib site packages requests sessions py line in request file d a s build scripts windows artifacts cli lib site packages requests sessions py line in send file d a s build scripts windows artifacts cli lib site packages requests adapters py line in send requests exceptions connectionerror httpsconnectionpool host mfecontainer scm azurewebsites net port max retries exceeded with url api zipdeploy isasync true caused by newconnectionerror failed to establish a new connection a connection attempt failed because the connected party did not properly respond after a period of time or established connection failed because connected host has failed to respond to reproduce steps to reproduce the behavior note that argument values have been redacted as they may contain sensitive information put any pre requisite steps here az webapp deployment source config zip g n src expected behavior environment summary windows python installer msi azure cli additional context
binary_label: 0
Example row 9:
Unnamed: 0: 242,071
id: 18,511,816,555
type: IssuesEvent
created_at: 2021-10-20 04:49:39
repo: UWA-CITS3200-18-2021/ReSQ
repo_url: https://api.github.com/repos/UWA-CITS3200-18-2021/ReSQ
action: closed
title: Put internal retrospective into formal document
labels: documentation priority
body:
## Basic Information Take internal retrospective from sticky notes to formal document to be submitted for sprint 3.
index: 1.0
text_combine:
Put internal retrospective into formal document - ## Basic Information Take internal retrospective from sticky notes to formal document to be submitted for sprint 3.
non_process
put internal retrospective into formal document basic information take internal retrospective from sticky notes to formal document to be submitted for sprint
0
71,412
7,244,300,873
IssuesEvent
2018-02-14 14:44:41
gradle/gradle
https://api.github.com/repos/gradle/gradle
closed
Fail task of type Test immediately upon encountering first failure
a:feature from:contributor in:testing
Original issue: https://issues.gradle.org/browse/GRADLE-1518 **Highly voted issue: 43!** ### Expected Behavior Immediate feedback for long-running test suites. Expose a configuration option for the `Test` task type for failing upon encountering first failure. The test report indicates the failed test. Tests that have not been run are indicated explicitly. ### Current Behavior Task does not fail immediately after a failure is detected. A user has to wait until whole test suite is exercised before task fails and report is generated. ### Context We use the `Test` task for longer running integration and acceptance testing. In these cases I would like to fail the build fast (right after the first test failure is detected).
1.0
Fail task of type Test immediately upon encountering first failure - Original issue: https://issues.gradle.org/browse/GRADLE-1518 **Highly voted issue: 43!** ### Expected Behavior Immediate feedback for long-running test suites. Expose a configuration option for the `Test` task type for failing upon encountering first failure. The test report indicates the failed test. Tests that have not been run are indicated explicitly. ### Current Behavior Task does not fail immediately after a failure is detected. A user has to wait until whole test suite is exercised before task fails and report is generated. ### Context We use the `Test` task for longer running integration and acceptance testing. In these cases I would like to fail the build fast (right after the first test failure is detected).
non_process
fail task of type test immediately upon encountering first failure original issue highly voted issue expected behavior immediate feedback for long running test suites expose a configuration option for the test task type for failing upon encountering first failure the test report indicates the failed test tests that have not been run are indicated explicitly current behavior task does not fail immediately after a failure is detected a user has to wait until whole test suite is exercised before task fails and report is generated context we use the test task for longer running integration and acceptance testing in these cases i would like to fail the build fast right after the first test failure is detected
0
803,801
29,189,814,007
IssuesEvent
2023-05-19 18:48:38
sczerwinski/wavefront-obj-intellij-plugin
https://api.github.com/repos/sczerwinski/wavefront-obj-intellij-plugin
closed
Fix override-only methods usage violations in 2023.2 EAP
type:bug resolution:done priority:high component:editor
Fix override-only verification errors: Override-only methods usage violations (4) - `AsyncFileEditorProvider.Builder.build()` (3) - Override-only method `AsyncFileEditorProvider.Builder.build()` is invoked in `BaseSplitEditorProvider.createEditor(...)`. This method is marked with `@ApiStatus.OverrideOnly` annotation, which indicates that the method must be only overridden but not invoked by client code. See documentation of the `@ApiStatus.OverrideOnly` for more info. - Override-only method `AsyncFileEditorProvider.Builder.build()` is invoked in `BaseSplitEditorProvider.Builder.buildPreviewEditor()`. This method is marked with `@ApiStatus.OverrideOnly` annotation, which indicates that the method must be only overridden but not invoked by client code. See documentation of the `@ApiStatus.OverrideOnly` for more info. - Override-only method `AsyncFileEditorProvider.Builder.build()` is invoked in `BaseSplitEditorProvider.Builder.buildTextEditor()`. This method is marked with `@ApiStatus.OverrideOnly` annotation, which indicates that the method must be only overridden but not invoked by client code. See documentation of the `@ApiStatus.OverrideOnly` for more info. - `AsyncFileEditorProvider.createEditorAsync(...)` (1) - Override-only method `AsyncFileEditorProvider.createEditorAsync(...)` is invoked in `BaseSplitEditorProvider.Builder.getAsyncFileEditorBuilder(...)`. This method is marked with `@ApiStatus.OverrideOnly` annotation, which indicates that the method must be only overridden but not invoked by client code. See documentation of the `@ApiStatus.OverrideOnly` for more info.
1.0
Fix override-only methods usage violations in 2023.2 EAP - Fix override-only verification errors: Override-only methods usage violations (4) - `AsyncFileEditorProvider.Builder.build()` (3) - Override-only method `AsyncFileEditorProvider.Builder.build()` is invoked in `BaseSplitEditorProvider.createEditor(...)`. This method is marked with `@ApiStatus.OverrideOnly` annotation, which indicates that the method must be only overridden but not invoked by client code. See documentation of the `@ApiStatus.OverrideOnly` for more info. - Override-only method `AsyncFileEditorProvider.Builder.build()` is invoked in `BaseSplitEditorProvider.Builder.buildPreviewEditor()`. This method is marked with `@ApiStatus.OverrideOnly` annotation, which indicates that the method must be only overridden but not invoked by client code. See documentation of the `@ApiStatus.OverrideOnly` for more info. - Override-only method `AsyncFileEditorProvider.Builder.build()` is invoked in `BaseSplitEditorProvider.Builder.buildTextEditor()`. This method is marked with `@ApiStatus.OverrideOnly` annotation, which indicates that the method must be only overridden but not invoked by client code. See documentation of the `@ApiStatus.OverrideOnly` for more info. - `AsyncFileEditorProvider.createEditorAsync(...)` (1) - Override-only method `AsyncFileEditorProvider.createEditorAsync(...)` is invoked in `BaseSplitEditorProvider.Builder.getAsyncFileEditorBuilder(...)`. This method is marked with `@ApiStatus.OverrideOnly` annotation, which indicates that the method must be only overridden but not invoked by client code. See documentation of the `@ApiStatus.OverrideOnly` for more info.
non_process
fix override only methods usage violations in eap fix override only verification errors override only methods usage violations asyncfileeditorprovider builder build override only method asyncfileeditorprovider builder build is invoked in basespliteditorprovider createeditor this method is marked with apistatus overrideonly annotation which indicates that the method must be only overridden but not invoked by client code see documentation of the apistatus overrideonly for more info override only method asyncfileeditorprovider builder build is invoked in basespliteditorprovider builder buildprevieweditor this method is marked with apistatus overrideonly annotation which indicates that the method must be only overridden but not invoked by client code see documentation of the apistatus overrideonly for more info override only method asyncfileeditorprovider builder build is invoked in basespliteditorprovider builder buildtexteditor this method is marked with apistatus overrideonly annotation which indicates that the method must be only overridden but not invoked by client code see documentation of the apistatus overrideonly for more info asyncfileeditorprovider createeditorasync override only method asyncfileeditorprovider createeditorasync is invoked in basespliteditorprovider builder getasyncfileeditorbuilder this method is marked with apistatus overrideonly annotation which indicates that the method must be only overridden but not invoked by client code see documentation of the apistatus overrideonly for more info
0
581,768
17,330,740,973
IssuesEvent
2021-07-28 01:41:51
justalemon/PlayerCompanion
https://api.github.com/repos/justalemon/PlayerCompanion
closed
Softlock in Three's Company
priority: p1 high status: confirmed type: bug
Reported in the 5mods page by thalilmythos: > This mod causes the mission "three's company" to be impossible to complete, It interrupts the animation of Michael hanging after the switch from Frankling as a sniper happens, I just spent one hour trying to find out which script in my game does this, And it's this.
1.0
Softlock in Three's Company - Reported in the 5mods page by thalilmythos: > This mod causes the mission "three's company" to be impossible to complete, It interrupts the animation of Michael hanging after the switch from Frankling as a sniper happens, I just spent one hour trying to find out which script in my game does this, And it's this.
non_process
softlock in three s company reported in the page by thalilmythos this mod causes the mission three s company to be impossible to complete it interrupts the animation of michael hanging after the switch from frankling as a sniper happens i just spent one hour trying to find out which script in my game does this and it s this
0
18,572
3,697,354,102
IssuesEvent
2016-02-27 16:31:34
PrairieLearn/PrairieLearn
https://api.github.com/repos/PrairieLearn/PrairieLearn
opened
Don't show the current test score in the sidebar during exams
enhancement test formats
Exam/Retry exam currently show the test score in the sidebar while the test is still open and in progress. Some students have reported that this makes them feel stressed when it's a low score, so we should probably not show it. The information would still be available from the test overview page.
1.0
Don't show the current test score in the sidebar during exams - Exam/Retry exam currently show the test score in the sidebar while the test is still open and in progress. Some students have reported that this makes them feel stressed when it's a low score, so we should probably not show it. The information would still be available from the test overview page.
non_process
don t show the current test score in the sidebar during exams exam retry exam currently show the test score in the sidebar while the test is still open and in progress some students have reported that this makes them feel stressed when it s a low score so we should probably not show it the information would still be available from the test overview page
0
11,355
14,173,149,347
IssuesEvent
2020-11-12 17:55:16
MicrosoftDocs/azure-devops-docs
https://api.github.com/repos/MicrosoftDocs/azure-devops-docs
closed
Please explain in detail on how to use output variable
Pri2 devops-cicd-process/tech devops/prod doc-enhancement
Please explain in detail on how to use output variable --- #### Document Details ⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.* * ID: dd7e0bd3-1f7d-d7b6-cc72-5ef63c31b46a * Version Independent ID: dae87abd-b73d-9120-bcdb-6097d4b40f2a * Content: [Define variables - Azure Pipelines](https://docs.microsoft.com/en-us/azure/devops/pipelines/process/variables?view=azure-devops&tabs=classic%2Cbatch#use-output-variables-from-tasks) * Content Source: [docs/pipelines/process/variables.md](https://github.com/MicrosoftDocs/azure-devops-docs/blob/master/docs/pipelines/process/variables.md) * Product: **devops** * Technology: **devops-cicd-process** * GitHub Login: @juliakm * Microsoft Alias: **jukullam**
1.0
Please explain in detail on how to use output variable - Please explain in detail on how to use output variable --- #### Document Details ⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.* * ID: dd7e0bd3-1f7d-d7b6-cc72-5ef63c31b46a * Version Independent ID: dae87abd-b73d-9120-bcdb-6097d4b40f2a * Content: [Define variables - Azure Pipelines](https://docs.microsoft.com/en-us/azure/devops/pipelines/process/variables?view=azure-devops&tabs=classic%2Cbatch#use-output-variables-from-tasks) * Content Source: [docs/pipelines/process/variables.md](https://github.com/MicrosoftDocs/azure-devops-docs/blob/master/docs/pipelines/process/variables.md) * Product: **devops** * Technology: **devops-cicd-process** * GitHub Login: @juliakm * Microsoft Alias: **jukullam**
process
please explain in detail on how to use output variable please explain in detail on how to use output variable document details ⚠ do not edit this section it is required for docs microsoft com ➟ github issue linking id version independent id bcdb content content source product devops technology devops cicd process github login juliakm microsoft alias jukullam
1
387,015
11,454,676,146
IssuesEvent
2020-02-06 17:31:42
Sakuten/backend
https://api.github.com/repos/Sakuten/backend
closed
cards/以下のファイルに対してテストをつくる
low priority refactoring
Step 1: 目的 ============ * テストがない Step 2: 概要 ============ * 書く
1.0
cards/以下のファイルに対してテストをつくる - Step 1: 目的 ============ * テストがない Step 2: 概要 ============ * 書く
non_process
cards 以下のファイルに対してテストをつくる step 目的 テストがない step 概要 書く
0
1,072
3,536,286,525
IssuesEvent
2016-01-17 05:23:33
t3kt/vjzual2
https://api.github.com/repos/t3kt/vjzual2
opened
pivot reverse mode doesn't work properly in linked transform module
bug video processing
see #260. it's negating the values rather than flipping them across the 0.5 axis
1.0
pivot reverse mode doesn't work properly in linked transform module - see #260. it's negating the values rather than flipping them across the 0.5 axis
process
pivot reverse mode doesn t work properly in linked transform module see it s negating the values rather than flipping them across the axis
1
6,624
9,725,425,682
IssuesEvent
2019-05-30 08:40:28
qgis/QGIS
https://api.github.com/repos/qgis/QGIS
closed
Reference layer is not optional in raster calculator
Bug Processing
Author Name: **Mario Reyes** (@ernesto561) Original Redmine Issue: [19229](https://issues.qgis.org/issues/19229) Affected QGIS version: 3.0.3 Redmine category:processing/qgis --- When the raster calculator from the processing toolbox is used for example to sum two rasters, the following error message is displayed> ``` Traceback (most recent call last): File "C:/PROGRA~1/QGIS3~1.0/apps/qgis/./python/plugins\processing\algs\qgis\RasterCalculator.py", line 136, in processAlgorithm raise QgsProcessingException(self.tr("No reference layer selected nor extent box provided")) _core.QgsProcessingException: No reference layer selected nor extent box provided ``` If an optional layer is provided, which specifies the output extent and CRS there's no problem.
1.0
Reference layer is not optional in raster calculator - Author Name: **Mario Reyes** (@ernesto561) Original Redmine Issue: [19229](https://issues.qgis.org/issues/19229) Affected QGIS version: 3.0.3 Redmine category:processing/qgis --- When the raster calculator from the processing toolbox is used for example to sum two rasters, the following error message is displayed> ``` Traceback (most recent call last): File "C:/PROGRA~1/QGIS3~1.0/apps/qgis/./python/plugins\processing\algs\qgis\RasterCalculator.py", line 136, in processAlgorithm raise QgsProcessingException(self.tr("No reference layer selected nor extent box provided")) _core.QgsProcessingException: No reference layer selected nor extent box provided ``` If an optional layer is provided, which specifies the output extent and CRS there's no problem.
process
reference layer is not optional in raster calculator author name mario reyes original redmine issue affected qgis version redmine category processing qgis when the raster calculator from the processing toolbox is used for example to sum two rasters the following error message is displayed traceback most recent call last file c progra apps qgis python plugins processing algs qgis rastercalculator py line in processalgorithm raise qgsprocessingexception self tr no reference layer selected nor extent box provided core qgsprocessingexception no reference layer selected nor extent box provided if an optional layer is provided which specifies the output extent and crs there s no problem
1
130,083
10,596,209,792
IssuesEvent
2019-10-09 20:43:11
MicrosoftDocs/vsts-docs
https://api.github.com/repos/MicrosoftDocs/vsts-docs
closed
Use drag and drop to nest test suites is no longer working
Pri1 cba devops-test/tech devops/prod product-feedback
Test suites are not able to be included in an already existing test suite. User has to create a new test suite and is not able to move a test suite in another branch. --- #### Document Details ⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.* * ID: 3c520dee-1218-777e-7405-551623817c03 * Version Independent ID: 82b5d172-ae1a-2b4c-82ac-595cd7609d3c * Content: [New test plans page - Azure Test Plans](https://docs.microsoft.com/en-us/azure/devops/test/new-test-plans-page?view=azure-devops#feedback) * Content Source: [docs/test/new-test-plans-page.md](https://github.com/MicrosoftDocs/vsts-docs/blob/master/docs/test/new-test-plans-page.md) * Product: **devops** * Technology: **devops-test** * GitHub Login: @harishkragarwal * Microsoft Alias: **harishkragarwal**
1.0
Use drag and drop to nest test suites is no longer working - Test suites are not able to be included in an already existing test suite. User has to create a new test suite and is not able to move a test suite in another branch. --- #### Document Details ⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.* * ID: 3c520dee-1218-777e-7405-551623817c03 * Version Independent ID: 82b5d172-ae1a-2b4c-82ac-595cd7609d3c * Content: [New test plans page - Azure Test Plans](https://docs.microsoft.com/en-us/azure/devops/test/new-test-plans-page?view=azure-devops#feedback) * Content Source: [docs/test/new-test-plans-page.md](https://github.com/MicrosoftDocs/vsts-docs/blob/master/docs/test/new-test-plans-page.md) * Product: **devops** * Technology: **devops-test** * GitHub Login: @harishkragarwal * Microsoft Alias: **harishkragarwal**
non_process
use drag and drop to nest test suites is no longer working test suites are not able to be included in an already existing test suite user has to create a new test suite and is not able to move a test suite in another branch document details ⚠ do not edit this section it is required for docs microsoft com ➟ github issue linking id version independent id content content source product devops technology devops test github login harishkragarwal microsoft alias harishkragarwal
0
7,239
10,409,267,453
IssuesEvent
2019-09-13 08:19:13
Open-EO/openeo-api
https://api.github.com/repos/Open-EO/openeo-api
opened
Add logging information to job (and services?)
accepted job management processes service management
We discussed on the 3rd year planning to add a field that can include logs from the batch (and web service?) information in the corresponding information endpoints (e.g. GET /jobs/:id). Back-ends can add information there, e.g. when the users uses the debug or output process.
1.0
Add logging information to job (and services?) - We discussed on the 3rd year planning to add a field that can include logs from the batch (and web service?) information in the corresponding information endpoints (e.g. GET /jobs/:id). Back-ends can add information there, e.g. when the users uses the debug or output process.
process
add logging information to job and services we discussed on the year planning to add a field that can include logs from the batch and web service information in the corresponding information endpoints e g get jobs id back ends can add information there e g when the users uses the debug or output process
1
139,200
31,279,904,735
IssuesEvent
2023-08-22 08:56:04
joomla/joomla-cms
https://api.github.com/repos/joomla/joomla-cms
closed
Component Development - error with FormController Save method
No Code Attached Yet
Custom component development- Frontend add/edit save error. Form displays, Controller saves data, upon displaying form a second time, row 1 of my db table is updated, despite starting with a blank form. Always row 1. Version 4.2.9. Setting $loadData = false in Model does not solve problem. I found this in FormController.save: $recordId = $this->input->getInt($urlVar); which returns 1. Then: $data[$key] = $recordId; sets the id = 1, which is incorrect... row 1 will always be edited (first insert will work as it is row 1). Workaround: Because I want only an add, this works: Override save method, then, $recordId = "0"; // or set value here, get id value from app->input for edits alternatively: $data[$key] = $app->input->get('id'); // and check for 0 in my situation Issue seems to be site only; admin side seems to work, perhaps the way record id are passed from the list view adminForm, or perhaps using a different controller on admin side for save. Unsure. Discussion appreciated... -Christo
1.0
Component Development - error with FormController Save method - Custom component development- Frontend add/edit save error. Form displays, Controller saves data, upon displaying form a second time, row 1 of my db table is updated, despite starting with a blank form. Always row 1. Version 4.2.9. Setting $loadData = false in Model does not solve problem. I found this in FormController.save: $recordId = $this->input->getInt($urlVar); which returns 1. Then: $data[$key] = $recordId; sets the id = 1, which is incorrect... row 1 will always be edited (first insert will work as it is row 1). Workaround: Because I want only an add, this works: Override save method, then, $recordId = "0"; // or set value here, get id value from app->input for edits alternatively: $data[$key] = $app->input->get('id'); // and check for 0 in my situation Issue seems to be site only; admin side seems to work, perhaps the way record id are passed from the list view adminForm, or perhaps using a different controller on admin side for save. Unsure. Discussion appreciated... -Christo
non_process
component development error with formcontroller save method custom component development frontend add edit save error form displays controller saves data upon displaying form a second time row of my db table is updated despite starting with a blank form always row version setting loaddata false in model does not solve problem i found this in formcontroller save recordid this input getint urlvar which returns then data recordid sets the id which is incorrect row will always be edited first insert will work as it is row workaround because i want only an add this works override save method then recordid or set value here get id value from app input for edits alternatively data app input get id and check for in my situation issue seems to be site only admin side seems to work perhaps the way record id are passed from the list view adminform or perhaps using a different controller on admin side for save unsure discussion appreciated christo
0
249,946
27,012,125,450
IssuesEvent
2023-02-10 16:11:27
opentok/learning-opentok-node
https://api.github.com/repos/opentok/learning-opentok-node
closed
eslint-7.25.0.tgz: 2 vulnerabilities (highest severity is: 7.5) - autoclosed
security vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>eslint-7.25.0.tgz</b></p></summary> <p></p> <p>Path to dependency file: /package.json</p> <p>Path to vulnerable library: /node_modules/minimatch/package.json</p> <p> </details> ## Vulnerabilities | CVE | Severity | <img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS | Dependency | Type | Fixed in (eslint version) | Remediation Available | | ------------- | ------------- | ----- | ----- | ----- | ------------- | --- | | [CVE-2021-3807](https://www.mend.io/vulnerability-database/CVE-2021-3807) | <img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> High | 7.5 | ansi-regex-5.0.0.tgz | Transitive | 7.26.0 | &#9989; | | [CVE-2022-3517](https://www.mend.io/vulnerability-database/CVE-2022-3517) | <img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> High | 7.5 | minimatch-3.0.4.tgz | Transitive | N/A* | &#10060; | <p>*For some transitive vulnerabilities, there is no version of direct dependency with a fix. 
Check the section "Details" below to see if there is a version of transitive dependency where vulnerability is fixed.</p> ## Details <details> <summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> CVE-2021-3807</summary> ### Vulnerable Library - <b>ansi-regex-5.0.0.tgz</b></p> <p>Regular expression for matching ANSI escape codes</p> <p>Library home page: <a href="https://registry.npmjs.org/ansi-regex/-/ansi-regex-5.0.0.tgz">https://registry.npmjs.org/ansi-regex/-/ansi-regex-5.0.0.tgz</a></p> <p>Path to dependency file: /package.json</p> <p>Path to vulnerable library: /node_modules/ansi-regex/package.json</p> <p> Dependency Hierarchy: - eslint-7.25.0.tgz (Root Library) - strip-ansi-6.0.0.tgz - :x: **ansi-regex-5.0.0.tgz** (Vulnerable Library) <p>Found in base branch: <b>main</b></p> </p> <p></p> ### Vulnerability Details <p> ansi-regex is vulnerable to Inefficient Regular Expression Complexity <p>Publish Date: 2021-09-17 <p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2021-3807>CVE-2021-3807</a></p> </p> <p></p> ### CVSS 3 Score Details (<b>7.5</b>) <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. 
</p> <p></p> ### Suggested Fix <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://huntr.dev/bounties/5b3cf33b-ede0-4398-9974-800876dfd994/">https://huntr.dev/bounties/5b3cf33b-ede0-4398-9974-800876dfd994/</a></p> <p>Release Date: 2021-09-17</p> <p>Fix Resolution (ansi-regex): 5.0.1</p> <p>Direct dependency fix Resolution (eslint): 7.26.0</p> </p> <p></p> :rescue_worker_helmet: Automatic Remediation is available for this issue </details><details> <summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> CVE-2022-3517</summary> ### Vulnerable Library - <b>minimatch-3.0.4.tgz</b></p> <p>a glob matcher in javascript</p> <p>Library home page: <a href="https://registry.npmjs.org/minimatch/-/minimatch-3.0.4.tgz">https://registry.npmjs.org/minimatch/-/minimatch-3.0.4.tgz</a></p> <p>Path to dependency file: /package.json</p> <p>Path to vulnerable library: /node_modules/minimatch/package.json</p> <p> Dependency Hierarchy: - eslint-7.25.0.tgz (Root Library) - :x: **minimatch-3.0.4.tgz** (Vulnerable Library) <p>Found in base branch: <b>main</b></p> </p> <p></p> ### Vulnerability Details <p> A vulnerability was found in the minimatch package. This flaw allows a Regular Expression Denial of Service (ReDoS) when calling the braceExpand function with specific arguments, resulting in a Denial of Service. <p>Publish Date: 2022-10-17 <p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2022-3517>CVE-2022-3517</a></p> </p> <p></p> ### CVSS 3 Score Details (<b>7.5</b>) <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. 
</p> <p></p> ### Suggested Fix <p> <p>Type: Upgrade version</p> <p>Release Date: 2022-10-17</p> <p>Fix Resolution: minimatch - 3.0.5</p> </p> <p></p> </details> *** <p>:rescue_worker_helmet: Automatic Remediation is available for this issue.</p>
True
eslint-7.25.0.tgz: 2 vulnerabilities (highest severity is: 7.5) - autoclosed - <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>eslint-7.25.0.tgz</b></p></summary> <p></p> <p>Path to dependency file: /package.json</p> <p>Path to vulnerable library: /node_modules/minimatch/package.json</p> <p> </details> ## Vulnerabilities | CVE | Severity | <img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS | Dependency | Type | Fixed in (eslint version) | Remediation Available | | ------------- | ------------- | ----- | ----- | ----- | ------------- | --- | | [CVE-2021-3807](https://www.mend.io/vulnerability-database/CVE-2021-3807) | <img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> High | 7.5 | ansi-regex-5.0.0.tgz | Transitive | 7.26.0 | &#9989; | | [CVE-2022-3517](https://www.mend.io/vulnerability-database/CVE-2022-3517) | <img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> High | 7.5 | minimatch-3.0.4.tgz | Transitive | N/A* | &#10060; | <p>*For some transitive vulnerabilities, there is no version of direct dependency with a fix. 
Check the section "Details" below to see if there is a version of transitive dependency where vulnerability is fixed.</p> ## Details <details> <summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> CVE-2021-3807</summary> ### Vulnerable Library - <b>ansi-regex-5.0.0.tgz</b></p> <p>Regular expression for matching ANSI escape codes</p> <p>Library home page: <a href="https://registry.npmjs.org/ansi-regex/-/ansi-regex-5.0.0.tgz">https://registry.npmjs.org/ansi-regex/-/ansi-regex-5.0.0.tgz</a></p> <p>Path to dependency file: /package.json</p> <p>Path to vulnerable library: /node_modules/ansi-regex/package.json</p> <p> Dependency Hierarchy: - eslint-7.25.0.tgz (Root Library) - strip-ansi-6.0.0.tgz - :x: **ansi-regex-5.0.0.tgz** (Vulnerable Library) <p>Found in base branch: <b>main</b></p> </p> <p></p> ### Vulnerability Details <p> ansi-regex is vulnerable to Inefficient Regular Expression Complexity <p>Publish Date: 2021-09-17 <p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2021-3807>CVE-2021-3807</a></p> </p> <p></p> ### CVSS 3 Score Details (<b>7.5</b>) <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. 
</p> <p></p> ### Suggested Fix <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://huntr.dev/bounties/5b3cf33b-ede0-4398-9974-800876dfd994/">https://huntr.dev/bounties/5b3cf33b-ede0-4398-9974-800876dfd994/</a></p> <p>Release Date: 2021-09-17</p> <p>Fix Resolution (ansi-regex): 5.0.1</p> <p>Direct dependency fix Resolution (eslint): 7.26.0</p> </p> <p></p> :rescue_worker_helmet: Automatic Remediation is available for this issue </details><details> <summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> CVE-2022-3517</summary> ### Vulnerable Library - <b>minimatch-3.0.4.tgz</b></p> <p>a glob matcher in javascript</p> <p>Library home page: <a href="https://registry.npmjs.org/minimatch/-/minimatch-3.0.4.tgz">https://registry.npmjs.org/minimatch/-/minimatch-3.0.4.tgz</a></p> <p>Path to dependency file: /package.json</p> <p>Path to vulnerable library: /node_modules/minimatch/package.json</p> <p> Dependency Hierarchy: - eslint-7.25.0.tgz (Root Library) - :x: **minimatch-3.0.4.tgz** (Vulnerable Library) <p>Found in base branch: <b>main</b></p> </p> <p></p> ### Vulnerability Details <p> A vulnerability was found in the minimatch package. This flaw allows a Regular Expression Denial of Service (ReDoS) when calling the braceExpand function with specific arguments, resulting in a Denial of Service. <p>Publish Date: 2022-10-17 <p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2022-3517>CVE-2022-3517</a></p> </p> <p></p> ### CVSS 3 Score Details (<b>7.5</b>) <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. 
</p> <p></p> ### Suggested Fix <p> <p>Type: Upgrade version</p> <p>Release Date: 2022-10-17</p> <p>Fix Resolution: minimatch - 3.0.5</p> </p> <p></p> </details> *** <p>:rescue_worker_helmet: Automatic Remediation is available for this issue.</p>
non_process
eslint tgz vulnerabilities highest severity is autoclosed vulnerable library eslint tgz path to dependency file package json path to vulnerable library node modules minimatch package json vulnerabilities cve severity cvss dependency type fixed in eslint version remediation available high ansi regex tgz transitive high minimatch tgz transitive n a for some transitive vulnerabilities there is no version of direct dependency with a fix check the section details below to see if there is a version of transitive dependency where vulnerability is fixed details cve vulnerable library ansi regex tgz regular expression for matching ansi escape codes library home page a href path to dependency file package json path to vulnerable library node modules ansi regex package json dependency hierarchy eslint tgz root library strip ansi tgz x ansi regex tgz vulnerable library found in base branch main vulnerability details ansi regex is vulnerable to inefficient regular expression complexity publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution ansi regex direct dependency fix resolution eslint rescue worker helmet automatic remediation is available for this issue cve vulnerable library minimatch tgz a glob matcher in javascript library home page a href path to dependency file package json path to vulnerable library node modules minimatch package json dependency hierarchy eslint tgz root library x minimatch tgz vulnerable library found in base branch main vulnerability details a vulnerability was found in the minimatch package this flaw allows a regular expression denial of service redos when calling the braceexpand function with specific 
arguments resulting in a denial of service publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version release date fix resolution minimatch rescue worker helmet automatic remediation is available for this issue
0
218,971
24,424,770,300
IssuesEvent
2022-10-06 01:02:19
kxxt/kxxt-website
https://api.github.com/repos/kxxt/kxxt-website
opened
WS-2022-0322 (Medium) detected in d3-color-1.4.1.tgz, d3-color-3.0.1.tgz
security vulnerability
## WS-2022-0322 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>d3-color-1.4.1.tgz</b>, <b>d3-color-3.0.1.tgz</b></p></summary> <p> <details><summary><b>d3-color-1.4.1.tgz</b></p></summary> <p>Color spaces! RGB, HSL, Cubehelix, Lab and HCL (Lch).</p> <p>Library home page: <a href="https://registry.npmjs.org/d3-color/-/d3-color-1.4.1.tgz">https://registry.npmjs.org/d3-color/-/d3-color-1.4.1.tgz</a></p> <p>Path to dependency file: /package.json</p> <p>Path to vulnerable library: /node_modules/d3-brush/node_modules/d3-color/package.json</p> <p> Dependency Hierarchy: - gatsby-remark-mermaid-2.1.0.tgz (Root Library) - mermaid-8.13.8.tgz - dagre-d3-0.6.4.tgz - d3-5.16.0.tgz - :x: **d3-color-1.4.1.tgz** (Vulnerable Library) </details> <details><summary><b>d3-color-3.0.1.tgz</b></p></summary> <p>Color spaces! RGB, HSL, Cubehelix, Lab and HCL (Lch).</p> <p>Library home page: <a href="https://registry.npmjs.org/d3-color/-/d3-color-3.0.1.tgz">https://registry.npmjs.org/d3-color/-/d3-color-3.0.1.tgz</a></p> <p>Path to dependency file: /package.json</p> <p>Path to vulnerable library: /node_modules/d3-color/package.json,/node_modules/d3-brush/node_modules/d3-color/package.json</p> <p> Dependency Hierarchy: - gatsby-remark-mermaid-2.1.0.tgz (Root Library) - mermaid-8.13.8.tgz - d3-7.2.1.tgz - :x: **d3-color-3.0.1.tgz** (Vulnerable Library) </details> <p>Found in HEAD commit: <a href="https://github.com/kxxt/kxxt-website/commit/37f8543da5164a1a7ef318756aa0eac1c5e89a09">37f8543da5164a1a7ef318756aa0eac1c5e89a09</a></p> <p>Found in base branch: <b>master</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary> <p> The d3-color module provides representations for various color spaces in the browser. 
Versions prior to 3.1.0 are vulnerable to a Regular expression Denial of Service. This issue has been patched in version 3.1.0. There are no known workarounds. <p>Publish Date: 2022-09-29 <p>URL: <a href=https://github.com/d3/d3-color/commit/994d8fd95181484a5a27c5edc919aa625781432d>WS-2022-0322</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.5</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Local - Attack Complexity: Low - Privileges Required: None - User Interaction: Required - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://github.com/advisories/GHSA-36jr-mh4h-2g58">https://github.com/advisories/GHSA-36jr-mh4h-2g58</a></p> <p>Release Date: 2022-09-29</p> <p>Fix Resolution: d3-color - 3.1.0</p> </p> </details> <p></p> *** Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
True
WS-2022-0322 (Medium) detected in d3-color-1.4.1.tgz, d3-color-3.0.1.tgz - ## WS-2022-0322 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>d3-color-1.4.1.tgz</b>, <b>d3-color-3.0.1.tgz</b></p></summary> <p> <details><summary><b>d3-color-1.4.1.tgz</b></p></summary> <p>Color spaces! RGB, HSL, Cubehelix, Lab and HCL (Lch).</p> <p>Library home page: <a href="https://registry.npmjs.org/d3-color/-/d3-color-1.4.1.tgz">https://registry.npmjs.org/d3-color/-/d3-color-1.4.1.tgz</a></p> <p>Path to dependency file: /package.json</p> <p>Path to vulnerable library: /node_modules/d3-brush/node_modules/d3-color/package.json</p> <p> Dependency Hierarchy: - gatsby-remark-mermaid-2.1.0.tgz (Root Library) - mermaid-8.13.8.tgz - dagre-d3-0.6.4.tgz - d3-5.16.0.tgz - :x: **d3-color-1.4.1.tgz** (Vulnerable Library) </details> <details><summary><b>d3-color-3.0.1.tgz</b></p></summary> <p>Color spaces! 
RGB, HSL, Cubehelix, Lab and HCL (Lch).</p> <p>Library home page: <a href="https://registry.npmjs.org/d3-color/-/d3-color-3.0.1.tgz">https://registry.npmjs.org/d3-color/-/d3-color-3.0.1.tgz</a></p> <p>Path to dependency file: /package.json</p> <p>Path to vulnerable library: /node_modules/d3-color/package.json,/node_modules/d3-brush/node_modules/d3-color/package.json</p> <p> Dependency Hierarchy: - gatsby-remark-mermaid-2.1.0.tgz (Root Library) - mermaid-8.13.8.tgz - d3-7.2.1.tgz - :x: **d3-color-3.0.1.tgz** (Vulnerable Library) </details> <p>Found in HEAD commit: <a href="https://github.com/kxxt/kxxt-website/commit/37f8543da5164a1a7ef318756aa0eac1c5e89a09">37f8543da5164a1a7ef318756aa0eac1c5e89a09</a></p> <p>Found in base branch: <b>master</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary> <p> The d3-color module provides representations for various color spaces in the browser. Versions prior to 3.1.0 are vulnerable to a Regular expression Denial of Service. This issue has been patched in version 3.1.0. There are no known workarounds. <p>Publish Date: 2022-09-29 <p>URL: <a href=https://github.com/d3/d3-color/commit/994d8fd95181484a5a27c5edc919aa625781432d>WS-2022-0322</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.5</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Local - Attack Complexity: Low - Privileges Required: None - User Interaction: Required - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. 
</p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://github.com/advisories/GHSA-36jr-mh4h-2g58">https://github.com/advisories/GHSA-36jr-mh4h-2g58</a></p> <p>Release Date: 2022-09-29</p> <p>Fix Resolution: d3-color - 3.1.0</p> </p> </details> <p></p> *** Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
non_process
ws medium detected in color tgz color tgz ws medium severity vulnerability vulnerable libraries color tgz color tgz color tgz color spaces rgb hsl cubehelix lab and hcl lch library home page a href path to dependency file package json path to vulnerable library node modules brush node modules color package json dependency hierarchy gatsby remark mermaid tgz root library mermaid tgz dagre tgz tgz x color tgz vulnerable library color tgz color spaces rgb hsl cubehelix lab and hcl lch library home page a href path to dependency file package json path to vulnerable library node modules color package json node modules brush node modules color package json dependency hierarchy gatsby remark mermaid tgz root library mermaid tgz tgz x color tgz vulnerable library found in head commit a href found in base branch master vulnerability details the color module provides representations for various color spaces in the browser versions prior to are vulnerable to a regular expression denial of service this issue has been patched in version there are no known workarounds publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity low privileges required none user interaction required scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution color step up your open source security game with mend
0
10,497
4,074,266,335
IssuesEvent
2016-05-28 09:54:03
SleepyTrousers/EnderIO
https://api.github.com/repos/SleepyTrousers/EnderIO
closed
Block in travel anchor
bug Code Complete
When you put block to travel anchor in 2.3.0.417, it will do this: http://jpeg.cz/images/2015/10/09/ABdCt.png http://jpeg.cz/images/2015/10/09/VzUME.png Water will glitch and items in NEI will burn.
1.0
Block in travel anchor - When you put block to travel anchor in 2.3.0.417, it will do this: http://jpeg.cz/images/2015/10/09/ABdCt.png http://jpeg.cz/images/2015/10/09/VzUME.png Water will glitch and items in NEI will burn.
non_process
block in travel anchor when you put block to travel anchor in it will do this water will glitch and items in nei will burn
0
5,387
8,211,502,757
IssuesEvent
2018-09-04 13:59:22
openvstorage/framework
https://api.github.com/repos/openvstorage/framework
closed
[DEVELOP] update succeeds but logs errors
process_cantreproduce type_bug
Traceback (most recent call last): File "/opt/OpenvStorage/ovs/lib/albamigration.py", line 224, in migrate_sdm alba_node.client.update_execute_migration_code() File "/opt/OpenvStorage/ovs/extensions/plugins/asdmanager.py", line 234, in update_execute_migration_code return self._call(requests.post, 'update/execute_migration_code') File "/opt/OpenvStorage/ovs/extensions/plugins/asdmanager.py", line 89, in _call raise RuntimeError(error_message) ------------------- Traceback (most recent call last): File "/opt/OpenvStorage/ovs/lib/albamigration.py", line 224, in migrate_sdm alba_node.client.update_execute_migration_code() File "/opt/OpenvStorage/ovs/extensions/plugins/asdmanager.py", line 234, in update_execute_migration_code return self._call(requests.post, 'update/execute_migration_code') File "/opt/OpenvStorage/ovs/extensions/plugins/asdmanager.py", line 89, in _call raise RuntimeError(error_message) RuntimeError: UNIQUE constraint failed: setting.code ---------------- Traceback (most recent call last): File "/opt/OpenvStorage/ovs/lib/albamigration.py", line 224, in migrate_sdm alba_node.client.update_execute_migration_code() File "/opt/OpenvStorage/ovs/extensions/plugins/asdmanager.py", line 234, in update_execute_migration_code return self._call(requests.post, 'update/execute_migration_code') File "/opt/OpenvStorage/ovs/extensions/plugins/asdmanager.py", line 89, in _call raise RuntimeError(error_message) RuntimeError: UNIQUE constraint failed: setting.code --------------- Traceback (most recent call last): File "/opt/OpenvStorage/ovs/lib/albamigration.py", line 224, in migrate_sdm alba_node.client.update_execute_migration_code() File "/opt/OpenvStorage/ovs/extensions/plugins/asdmanager.py", line 234, in update_execute_migration_code return self._call(requests.post, 'update/execute_migration_code') File "/opt/OpenvStorage/ovs/extensions/plugins/asdmanager.py", line 89, in _call raise RuntimeError(error_message) RuntimeError: UNIQUE constraint failed: setting.code
1.0
[DEVELOP] update succeeds but logs errors - Traceback (most recent call last): File "/opt/OpenvStorage/ovs/lib/albamigration.py", line 224, in migrate_sdm alba_node.client.update_execute_migration_code() File "/opt/OpenvStorage/ovs/extensions/plugins/asdmanager.py", line 234, in update_execute_migration_code return self._call(requests.post, 'update/execute_migration_code') File "/opt/OpenvStorage/ovs/extensions/plugins/asdmanager.py", line 89, in _call raise RuntimeError(error_message) ------------------- Traceback (most recent call last): File "/opt/OpenvStorage/ovs/lib/albamigration.py", line 224, in migrate_sdm alba_node.client.update_execute_migration_code() File "/opt/OpenvStorage/ovs/extensions/plugins/asdmanager.py", line 234, in update_execute_migration_code return self._call(requests.post, 'update/execute_migration_code') File "/opt/OpenvStorage/ovs/extensions/plugins/asdmanager.py", line 89, in _call raise RuntimeError(error_message) RuntimeError: UNIQUE constraint failed: setting.code ---------------- Traceback (most recent call last): File "/opt/OpenvStorage/ovs/lib/albamigration.py", line 224, in migrate_sdm alba_node.client.update_execute_migration_code() File "/opt/OpenvStorage/ovs/extensions/plugins/asdmanager.py", line 234, in update_execute_migration_code return self._call(requests.post, 'update/execute_migration_code') File "/opt/OpenvStorage/ovs/extensions/plugins/asdmanager.py", line 89, in _call raise RuntimeError(error_message) RuntimeError: UNIQUE constraint failed: setting.code --------------- Traceback (most recent call last): File "/opt/OpenvStorage/ovs/lib/albamigration.py", line 224, in migrate_sdm alba_node.client.update_execute_migration_code() File "/opt/OpenvStorage/ovs/extensions/plugins/asdmanager.py", line 234, in update_execute_migration_code return self._call(requests.post, 'update/execute_migration_code') File "/opt/OpenvStorage/ovs/extensions/plugins/asdmanager.py", line 89, in _call raise RuntimeError(error_message) 
RuntimeError: UNIQUE constraint failed: setting.code
process
update succeeds but logs errors traceback most recent call last file opt openvstorage ovs lib albamigration py line in migrate sdm alba node client update execute migration code file opt openvstorage ovs extensions plugins asdmanager py line in update execute migration code return self call requests post update execute migration code file opt openvstorage ovs extensions plugins asdmanager py line in call raise runtimeerror error message traceback most recent call last file opt openvstorage ovs lib albamigration py line in migrate sdm alba node client update execute migration code file opt openvstorage ovs extensions plugins asdmanager py line in update execute migration code return self call requests post update execute migration code file opt openvstorage ovs extensions plugins asdmanager py line in call raise runtimeerror error message runtimeerror unique constraint failed setting code traceback most recent call last file opt openvstorage ovs lib albamigration py line in migrate sdm alba node client update execute migration code file opt openvstorage ovs extensions plugins asdmanager py line in update execute migration code return self call requests post update execute migration code file opt openvstorage ovs extensions plugins asdmanager py line in call raise runtimeerror error message runtimeerror unique constraint failed setting code traceback most recent call last file opt openvstorage ovs lib albamigration py line in migrate sdm alba node client update execute migration code file opt openvstorage ovs extensions plugins asdmanager py line in update execute migration code return self call requests post update execute migration code file opt openvstorage ovs extensions plugins asdmanager py line in call raise runtimeerror error message runtimeerror unique constraint failed setting code
1
2,008
4,828,499,100
IssuesEvent
2016-11-07 16:24:47
nodejs/node
https://api.github.com/repos/nodejs/node
closed
properties created using Symbols in process.env are not intercepted
process
<!-- Thank you for reporting an issue. This issue tracker is for bugs and issues found within Node.js core. If you require more general support please file an issue on our help repo. https://github.com/nodejs/help Please fill in as much of the template below as you're able. Version: output of `node -v` Platform: output of `uname -a` (UNIX), or version and 32 or 64-bit (Windows) Subsystem: if known, please specify affected core module name If possible, please provide code that demonstrates the problem, keeping it as simple and free of external dependencies as you are able. --> * **Version**: v6.2.0 * **Platform**: x86_64 * **Subsystem**: <!-- Enter your issue details below this comment. --> switching off the flag PropertyHandlerFlags::kOnlyInterceptStrings in node.cc causes Type Error when intercepting via Symbols on process.env collection, which is strongly typed for Strings (error in EnvSetter in node.cc). to reproduce: switch off the flag and run /test/parralel/test-v8-interceptStrings-not-Symbols.js
1.0
properties created using Symbols in process.env are not intercepted - <!-- Thank you for reporting an issue. This issue tracker is for bugs and issues found within Node.js core. If you require more general support please file an issue on our help repo. https://github.com/nodejs/help Please fill in as much of the template below as you're able. Version: output of `node -v` Platform: output of `uname -a` (UNIX), or version and 32 or 64-bit (Windows) Subsystem: if known, please specify affected core module name If possible, please provide code that demonstrates the problem, keeping it as simple and free of external dependencies as you are able. --> * **Version**: v6.2.0 * **Platform**: x86_64 * **Subsystem**: <!-- Enter your issue details below this comment. --> switching off the flag PropertyHandlerFlags::kOnlyInterceptStrings in node.cc causes Type Error when intercepting via Symbols on process.env collection, which is strongly typed for Strings (error in EnvSetter in node.cc). to reproduce: switch off the flag and run /test/parralel/test-v8-interceptStrings-not-Symbols.js
process
properties created using symbols in process env are not intercepted thank you for reporting an issue this issue tracker is for bugs and issues found within node js core if you require more general support please file an issue on our help repo please fill in as much of the template below as you re able version output of node v platform output of uname a unix or version and or bit windows subsystem if known please specify affected core module name if possible please provide code that demonstrates the problem keeping it as simple and free of external dependencies as you are able version platform subsystem switching off the flag propertyhandlerflags konlyinterceptstrings in node cc causes type error when intercepting via symbols on process env collection which is strongly typed for strings error in envsetter in node cc to reproduce switch off the flag and run test parralel test interceptstrings not symbols js
1
64,701
7,836,410,684
IssuesEvent
2018-06-17 19:08:49
mui-org/material-ui
https://api.github.com/repos/mui-org/material-ui
closed
[CircularProgress] End of line shape: use butt
component: CircularProgress good first issue material design spec
## Expected Behavior Should be correctly spaced at 95-99%. ## Current Behavior ![Current Behavior](https://cdn.pbrd.co/images/HqkDBM8.png) ## Steps to Reproduce (for bugs) ``` <CircularProgress variant="static" value={100} size={150}/> <CircularProgress variant="static" value={99} size={150}/> <CircularProgress variant="static" value={98} size={150}/> <CircularProgress variant="static" value={97} size={150}/> <CircularProgress variant="static" value={96} size={150}/> <CircularProgress variant="static" value={95} size={150}/> <CircularProgress variant="static" value={94} size={150}/> ``` ## Your Environment | Tech | Version | |--------------|---------| | Material-UI | 1.2.1 | | React | 16.4.1 | | browser | Chrome |
1.0
[CircularProgress] End of line shape: use butt - ## Expected Behavior Should be correctly spaced at 95-99%. ## Current Behavior ![Current Behavior](https://cdn.pbrd.co/images/HqkDBM8.png) ## Steps to Reproduce (for bugs) ``` <CircularProgress variant="static" value={100} size={150}/> <CircularProgress variant="static" value={99} size={150}/> <CircularProgress variant="static" value={98} size={150}/> <CircularProgress variant="static" value={97} size={150}/> <CircularProgress variant="static" value={96} size={150}/> <CircularProgress variant="static" value={95} size={150}/> <CircularProgress variant="static" value={94} size={150}/> ``` ## Your Environment | Tech | Version | |--------------|---------| | Material-UI | 1.2.1 | | React | 16.4.1 | | browser | Chrome |
non_process
end of line shape use butt expected behavior should be correctly spaced at current behavior steps to reproduce for bugs your environment tech version material ui react browser chrome
0
288,555
31,861,459,733
IssuesEvent
2023-09-15 11:14:15
nidhi7598/linux-v4.19.72_CVE-2022-3564
https://api.github.com/repos/nidhi7598/linux-v4.19.72_CVE-2022-3564
opened
WS-2021-0522 (Medium) detected in linuxlinux-4.19.294
Mend: dependency security vulnerability
## WS-2021-0522 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linuxlinux-4.19.294</b></p></summary> <p> <p>The Linux Kernel</p> <p>Library home page: <a href=https://mirrors.edge.kernel.org/pub/linux/kernel/v4.x/?wsslib=linux>https://mirrors.edge.kernel.org/pub/linux/kernel/v4.x/?wsslib=linux</a></p> <p>Found in HEAD commit: <a href="https://github.com/nidhi7598/linux-v4.19.72_CVE-2022-3564/commit/9ffee08efa44c7887e2babb8f304df0fa1094efb">9ffee08efa44c7887e2babb8f304df0fa1094efb</a></p> <p>Found in base branch: <b>main</b></p></p> </details> </p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (1)</summary> <p></p> <p> <img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/fs/aio.c</b> </p> </details> <p></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png?' width=19 height=20> Vulnerability Details</summary> <p> In Linux/Kernel is vulnerable to use-after-free due to missing POLLFREE handling in fs/aio.c <p>Publish Date: 2021-12-01 <p>URL: <a href=https://github.com/gregkh/linux/commit/60d311f9e6381d779d7d53371f87285698ecee24>WS-2021-0522</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.2</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Local - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. 
</p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://osv.dev/vulnerability/GSD-2021-1002601">https://osv.dev/vulnerability/GSD-2021-1002601</a></p> <p>Release Date: 2021-12-01</p> <p>Fix Resolution: Linux/Kernel - v4.19.221, v5.4.165, v5.10.85, v5.15.8, v5.16-rc5 </p> </p> </details> <p></p> *** Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
True
WS-2021-0522 (Medium) detected in linuxlinux-4.19.294 - ## WS-2021-0522 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linuxlinux-4.19.294</b></p></summary> <p> <p>The Linux Kernel</p> <p>Library home page: <a href=https://mirrors.edge.kernel.org/pub/linux/kernel/v4.x/?wsslib=linux>https://mirrors.edge.kernel.org/pub/linux/kernel/v4.x/?wsslib=linux</a></p> <p>Found in HEAD commit: <a href="https://github.com/nidhi7598/linux-v4.19.72_CVE-2022-3564/commit/9ffee08efa44c7887e2babb8f304df0fa1094efb">9ffee08efa44c7887e2babb8f304df0fa1094efb</a></p> <p>Found in base branch: <b>main</b></p></p> </details> </p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (1)</summary> <p></p> <p> <img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/fs/aio.c</b> </p> </details> <p></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png?' 
width=19 height=20> Vulnerability Details</summary> <p> In Linux/Kernel is vulnerable to use-after-free due to missing POLLFREE handling in fs/aio.c <p>Publish Date: 2021-12-01 <p>URL: <a href=https://github.com/gregkh/linux/commit/60d311f9e6381d779d7d53371f87285698ecee24>WS-2021-0522</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.2</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Local - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://osv.dev/vulnerability/GSD-2021-1002601">https://osv.dev/vulnerability/GSD-2021-1002601</a></p> <p>Release Date: 2021-12-01</p> <p>Fix Resolution: Linux/Kernel - v4.19.221, v5.4.165, v5.10.85, v5.15.8, v5.16-rc5 </p> </p> </details> <p></p> *** Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
non_process
ws medium detected in linuxlinux ws medium severity vulnerability vulnerable library linuxlinux the linux kernel library home page a href found in head commit a href found in base branch main vulnerable source files fs aio c vulnerability details in linux kernel is vulnerable to use after free due to missing pollfree handling in fs aio c publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution linux kernel step up your open source security game with mend
0
390,659
26,867,530,250
IssuesEvent
2023-02-04 03:20:17
ZeStream/zestream-server
https://api.github.com/repos/ZeStream/zestream-server
closed
Document setting up Cloud Storage
documentation
**Is your feature request related to a problem? Please describe.** Documentation **Describe the solution you'd like** As now we have function to upload the folder directly, there should be a doc when people can read and easily understand, how to setup GCP/Azure/AWS.
1.0
Document setting up Cloud Storage - **Is your feature request related to a problem? Please describe.** Documentation **Describe the solution you'd like** As now we have function to upload the folder directly, there should be a doc when people can read and easily understand, how to setup GCP/Azure/AWS.
non_process
document setting up cloud storage is your feature request related to a problem please describe documentation describe the solution you d like as now we have function to upload the folder directly there should be a doc when people can read and easily understand how to setup gcp azure aws
0
4,479
7,343,499,363
IssuesEvent
2018-03-07 11:34:59
MicrosoftDocs/azure-docs
https://api.github.com/repos/MicrosoftDocs/azure-docs
closed
Is there easy way to create cluster with reverse-proxy enabled using powershell or azcli without template?
assigned-to-author in-process product-question service-fabric triaged
az sf cluster create --resource-group $ResourceGroupName --location $Location \ --cluster-name $ClusterName --cluster-size $ClusterSize --os WindowsServer2016DatacenterwithContainers \ --vault-name $VaultName --vault-resource-group $ResourceGroupName \ --certificate-subject-name $Subject --certificate-password $Password --certificate-output-folder . \ --vm-sku $VmSku --vm-user-name $VmUserName --vm-password $VmPassword \ **--reverse-proxy-endpoint-port 19081 # like this** and.. option for load balancer endpoint port like **--lb-endpoint-port "[80, 83, 443, 8080, 19081]" --- #### Document Details ⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.* * ID: 2e251c01-0830-5b77-46f9-7929c0a4d556 * Version Independent ID: 85c404d3-319d-66a8-49eb-bbb923563ab6 * [Content](https://docs.microsoft.com/en-us/azure/service-fabric/service-fabric-tutorial-create-vnet-and-windows-cluster) * [Content Source](https://github.com/Microsoft/azure-docs/blob/master/articles/service-fabric/service-fabric-tutorial-create-vnet-and-windows-cluster.md) * Service: service-fabric
1.0
Is there easy way to create cluster with reverse-proxy enabled using powershell or azcli without template? - az sf cluster create --resource-group $ResourceGroupName --location $Location \ --cluster-name $ClusterName --cluster-size $ClusterSize --os WindowsServer2016DatacenterwithContainers \ --vault-name $VaultName --vault-resource-group $ResourceGroupName \ --certificate-subject-name $Subject --certificate-password $Password --certificate-output-folder . \ --vm-sku $VmSku --vm-user-name $VmUserName --vm-password $VmPassword \ **--reverse-proxy-endpoint-port 19081 # like this** and.. option for load balancer endpoint port like **--lb-endpoint-port "[80, 83, 443, 8080, 19081]" --- #### Document Details ⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.* * ID: 2e251c01-0830-5b77-46f9-7929c0a4d556 * Version Independent ID: 85c404d3-319d-66a8-49eb-bbb923563ab6 * [Content](https://docs.microsoft.com/en-us/azure/service-fabric/service-fabric-tutorial-create-vnet-and-windows-cluster) * [Content Source](https://github.com/Microsoft/azure-docs/blob/master/articles/service-fabric/service-fabric-tutorial-create-vnet-and-windows-cluster.md) * Service: service-fabric
process
is there easy way to create cluster with reverse proxy enabled using powershell or azcli without template az sf cluster create resource group resourcegroupname location location cluster name clustername cluster size clustersize os vault name vaultname vault resource group resourcegroupname certificate subject name subject certificate password password certificate output folder vm sku vmsku vm user name vmusername vm password vmpassword reverse proxy endpoint port like this and option for load balancer endpoint port like lb endpoint port document details ⚠ do not edit this section it is required for docs microsoft com ➟ github issue linking id version independent id service service fabric
1
120,430
15,765,499,988
IssuesEvent
2021-03-31 14:12:37
Qiskit/qiskit-nature
https://api.github.com/repos/Qiskit/qiskit-nature
closed
Implement Transformers
type: design type: discussion
Migrated from Enterprise Github. Creator: @mrossinek The state of the `SecondQuantizedSumOp`-Transformations is the following: - [ ] Implementation - [ ] SeniorityZeroTransformation - [x] FreezeCoreTransformation - [x] ActiveSpaceTransformation - [ ] ParticleHoleTransformation - ~TranscorrelatedTransformation~ - [ ] Unittests - [ ] SeniorityZeroTransformation - [x] FreezeCoreTransformation - [x] ActiveSpaceTransformation - [ ] ParticleHoleTransformation - ~TranscorrelatedTransformation~ Note: this issue may be broken down into several issues as work starts to progress.
1.0
Implement Transformers - Migrated from Enterprise Github. Creator: @mrossinek The state of the `SecondQuantizedSumOp`-Transformations is the following: - [ ] Implementation - [ ] SeniorityZeroTransformation - [x] FreezeCoreTransformation - [x] ActiveSpaceTransformation - [ ] ParticleHoleTransformation - ~TranscorrelatedTransformation~ - [ ] Unittests - [ ] SeniorityZeroTransformation - [x] FreezeCoreTransformation - [x] ActiveSpaceTransformation - [ ] ParticleHoleTransformation - ~TranscorrelatedTransformation~ Note: this issue may be broken down into several issues as work starts to progress.
non_process
implement transformers migrated from enterprise github creator mrossinek the state of the secondquantizedsumop transformations is the following implementation seniorityzerotransformation freezecoretransformation activespacetransformation particleholetransformation transcorrelatedtransformation unittests seniorityzerotransformation freezecoretransformation activespacetransformation particleholetransformation transcorrelatedtransformation note this issue may be broken down into several issues as work starts to progress
0
65,713
8,837,095,746
IssuesEvent
2019-01-05 00:51:55
zulip/zulip
https://api.github.com/repos/zulip/zulip
closed
How to implement a custom authentication backend?
area: authentication area: documentation (developer)
The documentation is not really helping on this subject: > To write such an integration, look in zproject/backends.py at the implementation of GitHubAuthBackend, which is a small wrapper around the popular python-social-auth library. You can write a similar class, and add a few settings to control it. To test your backend (which we’d require for a pull request to the main Zulip codebase,) see the framework in test_auth_backends.py. See also our developer documentation on testing auth backends. I am trying to create an authentication backend that only takes a username (not an email) and a password and authenticates the user on a remote http server using requests. I've first tried to create a new `CustomAuthBackend(ZulipAuthMixin)` class and add a authenticate function, and also replace all parameters everywhere to use this class be it doesn't find it. And I cannot specify to use username instead of emails for the login. I've then tried to subclass `ZulipLDAPAuthBackend` with my own new class `CustomAuthBackend(ZulipLDAPAuthBackend)`, and I replaced everywhere `ZulipLDAPAuthBackend` by `CustomAuthBackend`, and I also specified a value for LDAP_APPEND_DOMAIN. 
And it works now!, I can loggin, but after the successfull connection, I get the following error: ``` 2018-11-30 15:47:24.994 ERR [django.request] Internal Server Error: /accounts/login/ Traceback (most recent call last): File "/home/zulip/deployments/2018-11-07-22-43-54/zulip-py3-venv/lib/python3.5/site-packages/django/core/handlers/exception.py", line 41, in inner response = get_response(request) File "/home/zulip/deployments/2018-11-07-22-43-54/zulip-py3-venv/lib/python3.5/site-packages/django/core/handlers/base.py", line 187, in _get_response response = self.process_exception_by_middleware(e, request) File "/home/zulip/deployments/2018-11-07-22-43-54/zulip-py3-venv/lib/python3.5/site-packages/django/core/handlers/base.py", line 185, in _get_response response = wrapped_callback(request, *callback_args, **callback_kwargs) File "./zerver/views/auth.py", line 640, in login_page extra_context=extra_context, **kwargs) File "/home/zulip/deployments/2018-11-07-22-43-54/zulip-py3-venv/lib/python3.5/site-packages/django/contrib/auth/views.py", line 54, in inner return func(*args, **kwargs) File "/home/zulip/deployments/2018-11-07-22-43-54/zulip-py3-venv/lib/python3.5/site-packages/django/contrib/auth/views.py", line 150, in login )(request) File "/home/zulip/deployments/2018-11-07-22-43-54/zulip-py3-venv/lib/python3.5/site-packages/django/views/generic/base.py", line 68, in view return self.dispatch(request, *args, **kwargs) File "/home/zulip/deployments/2018-11-07-22-43-54/zulip-py3-venv/lib/python3.5/site-packages/django/utils/decorators.py", line 67, in _wrapper return bound_func(*args, **kwargs) File "/home/zulip/deployments/2018-11-07-22-43-54/zulip-py3-venv/lib/python3.5/site-packages/django/views/decorators/debug.py", line 76, in sensitive_post_parameters_wrapper return view(request, *args, **kwargs) File "/home/zulip/deployments/2018-11-07-22-43-54/zulip-py3-venv/lib/python3.5/site-packages/django/utils/decorators.py", line 63, in bound_func return 
func.__get__(self, type(self))(*args2, **kwargs2) File "/home/zulip/deployments/2018-11-07-22-43-54/zulip-py3-venv/lib/python3.5/site-packages/django/utils/decorators.py", line 67, in _wrapper return bound_func(*args, **kwargs) File "/home/zulip/deployments/2018-11-07-22-43-54/zulip-py3-venv/lib/python3.5/site-packages/django/utils/decorators.py", line 149, in _wrapped_view response = view_func(request, *args, **kwargs) File "/home/zulip/deployments/2018-11-07-22-43-54/zulip-py3-venv/lib/python3.5/site-packages/django/utils/decorators.py", line 63, in bound_func return func.__get__(self, type(self))(*args2, **kwargs2) File "/home/zulip/deployments/2018-11-07-22-43-54/zulip-py3-venv/lib/python3.5/site-packages/django/utils/decorators.py", line 67, in _wrapper return bound_func(*args, **kwargs) File "/home/zulip/deployments/2018-11-07-22-43-54/zulip-py3-venv/lib/python3.5/site-packages/django/views/decorators/cache.py", line 57, in _wrapped_view_func response = view_func(request, *args, **kwargs) File "/home/zulip/deployments/2018-11-07-22-43-54/zulip-py3-venv/lib/python3.5/site-packages/django/utils/decorators.py", line 63, in bound_func return func.__get__(self, type(self))(*args2, **kwargs2) File "/home/zulip/deployments/2018-11-07-22-43-54/zulip-py3-venv/lib/python3.5/site-packages/django/contrib/auth/views.py", line 90, in dispatch return super(LoginView, self).dispatch(request, *args, **kwargs) File "/home/zulip/deployments/2018-11-07-22-43-54/zulip-py3-venv/lib/python3.5/site-packages/django/views/generic/base.py", line 88, in dispatch return handler(request, *args, **kwargs) File "/home/zulip/deployments/2018-11-07-22-43-54/zulip-py3-venv/lib/python3.5/site-packages/django/views/generic/edit.py", line 182, in post if form.is_valid(): File "/home/zulip/deployments/2018-11-07-22-43-54/zulip-py3-venv/lib/python3.5/site-packages/django/forms/forms.py", line 183, in is_valid return self.is_bound and not self.errors File 
"/home/zulip/deployments/2018-11-07-22-43-54/zulip-py3-venv/lib/python3.5/site-packages/django/forms/forms.py", line 175, in errors self.full_clean() File "/home/zulip/deployments/2018-11-07-22-43-54/zulip-py3-venv/lib/python3.5/site-packages/django/forms/forms.py", line 385, in full_clean self._clean_form() File "/home/zulip/deployments/2018-11-07-22-43-54/zulip-py3-venv/lib/python3.5/site-packages/django/forms/forms.py", line 412, in _clean_form cleaned_data = self.clean() File "./zerver/forms.py", line 281, in clean realm=realm, return_data=return_data) File "/home/zulip/deployments/2018-11-07-22-43-54/zulip-py3-venv/lib/python3.5/site-packages/django/contrib/auth/__init__.py", line 77, in authenticate user.backend = backend_path AttributeError: 'dict' object has no attribute 'backend' ``` The problem is I do not return a _LDAPUser in my authenticate function, but I cannot really create one. It would be awesome to really be able to create a simple CustomAuthBackend without working around with non-working code. Thanks for your help!
1.0
How to implement a custom authentication backend? - The documentation is not really helping on this subject: > To write such an integration, look in zproject/backends.py at the implementation of GitHubAuthBackend, which is a small wrapper around the popular python-social-auth library. You can write a similar class, and add a few settings to control it. To test your backend (which we’d require for a pull request to the main Zulip codebase,) see the framework in test_auth_backends.py. See also our developer documentation on testing auth backends. I am trying to create an authentication backend that only takes a username (not an email) and a password and authenticates the user on a remote http server using requests. I've first tried to create a new `CustomAuthBackend(ZulipAuthMixin)` class and add a authenticate function, and also replace all parameters everywhere to use this class be it doesn't find it. And I cannot specify to use username instead of emails for the login. I've then tried to subclass `ZulipLDAPAuthBackend` with my own new class `CustomAuthBackend(ZulipLDAPAuthBackend)`, and I replaced everywhere `ZulipLDAPAuthBackend` by `CustomAuthBackend`, and I also specified a value for LDAP_APPEND_DOMAIN. 
And it works now!, I can loggin, but after the successfull connection, I get the following error: ``` 2018-11-30 15:47:24.994 ERR [django.request] Internal Server Error: /accounts/login/ Traceback (most recent call last): File "/home/zulip/deployments/2018-11-07-22-43-54/zulip-py3-venv/lib/python3.5/site-packages/django/core/handlers/exception.py", line 41, in inner response = get_response(request) File "/home/zulip/deployments/2018-11-07-22-43-54/zulip-py3-venv/lib/python3.5/site-packages/django/core/handlers/base.py", line 187, in _get_response response = self.process_exception_by_middleware(e, request) File "/home/zulip/deployments/2018-11-07-22-43-54/zulip-py3-venv/lib/python3.5/site-packages/django/core/handlers/base.py", line 185, in _get_response response = wrapped_callback(request, *callback_args, **callback_kwargs) File "./zerver/views/auth.py", line 640, in login_page extra_context=extra_context, **kwargs) File "/home/zulip/deployments/2018-11-07-22-43-54/zulip-py3-venv/lib/python3.5/site-packages/django/contrib/auth/views.py", line 54, in inner return func(*args, **kwargs) File "/home/zulip/deployments/2018-11-07-22-43-54/zulip-py3-venv/lib/python3.5/site-packages/django/contrib/auth/views.py", line 150, in login )(request) File "/home/zulip/deployments/2018-11-07-22-43-54/zulip-py3-venv/lib/python3.5/site-packages/django/views/generic/base.py", line 68, in view return self.dispatch(request, *args, **kwargs) File "/home/zulip/deployments/2018-11-07-22-43-54/zulip-py3-venv/lib/python3.5/site-packages/django/utils/decorators.py", line 67, in _wrapper return bound_func(*args, **kwargs) File "/home/zulip/deployments/2018-11-07-22-43-54/zulip-py3-venv/lib/python3.5/site-packages/django/views/decorators/debug.py", line 76, in sensitive_post_parameters_wrapper return view(request, *args, **kwargs) File "/home/zulip/deployments/2018-11-07-22-43-54/zulip-py3-venv/lib/python3.5/site-packages/django/utils/decorators.py", line 63, in bound_func return 
func.__get__(self, type(self))(*args2, **kwargs2) File "/home/zulip/deployments/2018-11-07-22-43-54/zulip-py3-venv/lib/python3.5/site-packages/django/utils/decorators.py", line 67, in _wrapper return bound_func(*args, **kwargs) File "/home/zulip/deployments/2018-11-07-22-43-54/zulip-py3-venv/lib/python3.5/site-packages/django/utils/decorators.py", line 149, in _wrapped_view response = view_func(request, *args, **kwargs) File "/home/zulip/deployments/2018-11-07-22-43-54/zulip-py3-venv/lib/python3.5/site-packages/django/utils/decorators.py", line 63, in bound_func return func.__get__(self, type(self))(*args2, **kwargs2) File "/home/zulip/deployments/2018-11-07-22-43-54/zulip-py3-venv/lib/python3.5/site-packages/django/utils/decorators.py", line 67, in _wrapper return bound_func(*args, **kwargs) File "/home/zulip/deployments/2018-11-07-22-43-54/zulip-py3-venv/lib/python3.5/site-packages/django/views/decorators/cache.py", line 57, in _wrapped_view_func response = view_func(request, *args, **kwargs) File "/home/zulip/deployments/2018-11-07-22-43-54/zulip-py3-venv/lib/python3.5/site-packages/django/utils/decorators.py", line 63, in bound_func return func.__get__(self, type(self))(*args2, **kwargs2) File "/home/zulip/deployments/2018-11-07-22-43-54/zulip-py3-venv/lib/python3.5/site-packages/django/contrib/auth/views.py", line 90, in dispatch return super(LoginView, self).dispatch(request, *args, **kwargs) File "/home/zulip/deployments/2018-11-07-22-43-54/zulip-py3-venv/lib/python3.5/site-packages/django/views/generic/base.py", line 88, in dispatch return handler(request, *args, **kwargs) File "/home/zulip/deployments/2018-11-07-22-43-54/zulip-py3-venv/lib/python3.5/site-packages/django/views/generic/edit.py", line 182, in post if form.is_valid(): File "/home/zulip/deployments/2018-11-07-22-43-54/zulip-py3-venv/lib/python3.5/site-packages/django/forms/forms.py", line 183, in is_valid return self.is_bound and not self.errors File 
"/home/zulip/deployments/2018-11-07-22-43-54/zulip-py3-venv/lib/python3.5/site-packages/django/forms/forms.py", line 175, in errors self.full_clean() File "/home/zulip/deployments/2018-11-07-22-43-54/zulip-py3-venv/lib/python3.5/site-packages/django/forms/forms.py", line 385, in full_clean self._clean_form() File "/home/zulip/deployments/2018-11-07-22-43-54/zulip-py3-venv/lib/python3.5/site-packages/django/forms/forms.py", line 412, in _clean_form cleaned_data = self.clean() File "./zerver/forms.py", line 281, in clean realm=realm, return_data=return_data) File "/home/zulip/deployments/2018-11-07-22-43-54/zulip-py3-venv/lib/python3.5/site-packages/django/contrib/auth/__init__.py", line 77, in authenticate user.backend = backend_path AttributeError: 'dict' object has no attribute 'backend' ``` The problem is I do not return a _LDAPUser in my authenticate function, but I cannot really create one. It would be awesome to really be able to create a simple CustomAuthBackend without working around with non-working code. Thanks for your help!
non_process
how to implement a custom authentication backend the documentation is not really helping on this subject to write such an integration look in zproject backends py at the implementation of githubauthbackend which is a small wrapper around the popular python social auth library you can write a similar class and add a few settings to control it to test your backend which we’d require for a pull request to the main zulip codebase see the framework in test auth backends py see also our developer documentation on testing auth backends i am trying to create an authentication backend that only takes a username not an email and a password and authenticates the user on a remote http server using requests i ve first tried to create a new customauthbackend zulipauthmixin class and add a authenticate function and also replace all parameters everywhere to use this class be it doesn t find it and i cannot specify to use username instead of emails for the login i ve then tried to subclass zulipldapauthbackend with my own new class customauthbackend zulipldapauthbackend and i replaced everywhere zulipldapauthbackend by customauthbackend and i also specified a value for ldap append domain and it works now i can loggin but after the successfull connection i get the following error err internal server error accounts login traceback most recent call last file home zulip deployments zulip venv lib site packages django core handlers exception py line in inner response get response request file home zulip deployments zulip venv lib site packages django core handlers base py line in get response response self process exception by middleware e request file home zulip deployments zulip venv lib site packages django core handlers base py line in get response response wrapped callback request callback args callback kwargs file zerver views auth py line in login page extra context extra context kwargs file home zulip deployments zulip venv lib site packages django contrib auth views py line in 
inner return func args kwargs file home zulip deployments zulip venv lib site packages django contrib auth views py line in login request file home zulip deployments zulip venv lib site packages django views generic base py line in view return self dispatch request args kwargs file home zulip deployments zulip venv lib site packages django utils decorators py line in wrapper return bound func args kwargs file home zulip deployments zulip venv lib site packages django views decorators debug py line in sensitive post parameters wrapper return view request args kwargs file home zulip deployments zulip venv lib site packages django utils decorators py line in bound func return func get self type self file home zulip deployments zulip venv lib site packages django utils decorators py line in wrapper return bound func args kwargs file home zulip deployments zulip venv lib site packages django utils decorators py line in wrapped view response view func request args kwargs file home zulip deployments zulip venv lib site packages django utils decorators py line in bound func return func get self type self file home zulip deployments zulip venv lib site packages django utils decorators py line in wrapper return bound func args kwargs file home zulip deployments zulip venv lib site packages django views decorators cache py line in wrapped view func response view func request args kwargs file home zulip deployments zulip venv lib site packages django utils decorators py line in bound func return func get self type self file home zulip deployments zulip venv lib site packages django contrib auth views py line in dispatch return super loginview self dispatch request args kwargs file home zulip deployments zulip venv lib site packages django views generic base py line in dispatch return handler request args kwargs file home zulip deployments zulip venv lib site packages django views generic edit py line in post if form is valid file home zulip deployments zulip venv lib site 
packages django forms forms py line in is valid return self is bound and not self errors file home zulip deployments zulip venv lib site packages django forms forms py line in errors self full clean file home zulip deployments zulip venv lib site packages django forms forms py line in full clean self clean form file home zulip deployments zulip venv lib site packages django forms forms py line in clean form cleaned data self clean file zerver forms py line in clean realm realm return data return data file home zulip deployments zulip venv lib site packages django contrib auth init py line in authenticate user backend backend path attributeerror dict object has no attribute backend the problem is i do not return a ldapuser in my authenticate function but i cannot really create one it would be awesome to really be able to create a simple customauthbackend without working around with non working code thanks for your help
0
4,118
2,715,516,705
IssuesEvent
2015-04-10 13:40:54
BrightFlair/www.brightflair.com
https://api.github.com/repos/BrightFlair/www.brightflair.com
opened
Homepage banner backgrounds
design
A thin banner showing the following captions, one at once (randomly displayed): * We transform great ideas into amazing experiences (showcasing app development). * We're here to effectively apply software within your business (showcasing Google Apps). * We craft software that solves problems (showcasing bespoke software). * We put fun into our work (showcasing game development) * We build tools that makes the magic happen (showcasing PHP.Gt) Each banner should be a letterbox size, rather than the trend of a full-page background image... this trend of design is certainly not what should be chased for brightflair.com: "Stunning full image background" http://themetrust.com/demos/baylie Follows on to homepage video issue.
1.0
Homepage banner backgrounds - A thin banner showing the following captions, one at once (randomly displayed): * We transform great ideas into amazing experiences (showcasing app development). * We're here to effectively apply software within your business (showcasing Google Apps). * We craft software that solves problems (showcasing bespoke software). * We put fun into our work (showcasing game development) * We build tools that makes the magic happen (showcasing PHP.Gt) Each banner should be a letterbox size, rather than the trend of a full-page background image... this trend of design is certainly not what should be chased for brightflair.com: "Stunning full image background" http://themetrust.com/demos/baylie Follows on to homepage video issue.
non_process
homepage banner backgrounds a thin banner showing the following captions one at once randomly displayed we transform great ideas into amazing experiences showcasing app development we re here to effectively apply software within your business showcasing google apps we craft software that solves problems showcasing bespoke software we put fun into our work showcasing game development we build tools that makes the magic happen showcasing php gt each banner should be a letterbox size rather than the trend of a full page background image this trend of design is certainly not what should be chased for brightflair com stunning full image background follows on to homepage video issue
0
5,351
8,180,189,930
IssuesEvent
2018-08-28 18:40:47
ArctosDB/new-collections
https://api.github.com/repos/ArctosDB/new-collections
closed
UMZM Data Migration
In migration Process Priority-High
Begin data migration; forward to VertNet migrators or pre-bulkloader as determined during initial evaluation; collection contact should remain in the loop during migration process, monthly check in by collection mentor.
1.0
UMZM Data Migration - Begin data migration; forward to VertNet migrators or pre-bulkloader as determined during initial evaluation; collection contact should remain in the loop during migration process, monthly check in by collection mentor.
process
umzm data migration begin data migration forward to vertnet migrators or pre bulkloader as determined during initial evaluation collection contact should remain in the loop during migration process monthly check in by collection mentor
1
27,936
2,697,796,937
IssuesEvent
2015-04-02 22:15:41
neuropoly/spinalcordtoolbox
https://api.github.com/repos/neuropoly/spinalcordtoolbox
closed
sct_propseg wraper lack of a slash
bug priority: high sct_propseg
sct.printv("fslview "+input_filename+" "+folder_output+output_filename+" -l Red -b 0,1 -t 0.7 &")
1.0
sct_propseg wraper lack of a slash - sct.printv("fslview "+input_filename+" "+folder_output+output_filename+" -l Red -b 0,1 -t 0.7 &")
non_process
sct propseg wraper lack of a slash sct printv fslview input filename folder output output filename l red b t
0
585
3,060,127,974
IssuesEvent
2015-08-14 18:50:41
Microsoft/poshtools
https://api.github.com/repos/Microsoft/poshtools
closed
Unexpected Microsoft.PythonTools.Attacher.exe in the running processes
bug BugBash Process Attaching
MS build 3.0.1487 VS 2015 RTW build 1. In VS2015, create a powershell script project. You can use the one pasted below. (the problem is not script specific) 2. Set a breakpoint somewhere in the script. 3. F5 to stop at the breakpoint 4. Select "Debug->Attach to Process" in the Attach to Process dialog. Leave all settings with default values. Then chose "PowerShell host service" to attach to. Close the dialog 5. Set the focus back in the editor then hit F10 and F5. 6. Click on stop debugging in the toolbar 7. Repeat steps 3 and 4. Notice, in the Attach To Process, you will see an extra process named "Microsoft.PythonTools.Attacher.exe", which probably was launched by the PoshTools on the second attach. I understand this isn't a typical "Local Attach" scenario, but there was nothing preventing me from doing so. Script attached: # # Script.ps1 # Get-Service Get-Process $var = 'hello' $number = 1 $numbers = 1,2,3,4,5,6,7,8,9 $debug = "`$computer contains $computer" $head = "Column`tColumn`tColumn" $filter1 = "name='BITS'" $computer = 'BITS' $filter2 = "name=$computer'" $var = 'hello' $var | Get-Member $svc = Get-Service $svc[0].name $name = $svc[1].name $name.Length $name.ToUpper() $service = 'bits' $name = "Service is $service.ToUpper()" $upper = $name.ToUpper() $name = "Service is $upper" Write-Host $name $name = (Get-Service)[0].name Write-Host $name
1.0
Unexpected Microsoft.PythonTools.Attacher.exe in the running processes - MS build 3.0.1487 VS 2015 RTW build 1. In VS2015, create a powershell script project. You can use the one pasted below. (the problem is not script specific) 2. Set a breakpoint somewhere in the script. 3. F5 to stop at the breakpoint 4. Select "Debug->Attach to Process" in the Attach to Process dialog. Leave all settings with default values. Then chose "PowerShell host service" to attach to. Close the dialog 5. Set the focus back in the editor then hit F10 and F5. 6. Click on stop debugging in the toolbar 7. Repeat steps 3 and 4. Notice, in the Attach To Process, you will see an extra process named "Microsoft.PythonTools.Attacher.exe", which probably was launched by the PoshTools on the second attach. I understand this isn't a typical "Local Attach" scenario, but there was nothing preventing me from doing so. Script attached: # # Script.ps1 # Get-Service Get-Process $var = 'hello' $number = 1 $numbers = 1,2,3,4,5,6,7,8,9 $debug = "`$computer contains $computer" $head = "Column`tColumn`tColumn" $filter1 = "name='BITS'" $computer = 'BITS' $filter2 = "name=$computer'" $var = 'hello' $var | Get-Member $svc = Get-Service $svc[0].name $name = $svc[1].name $name.Length $name.ToUpper() $service = 'bits' $name = "Service is $service.ToUpper()" $upper = $name.ToUpper() $name = "Service is $upper" Write-Host $name $name = (Get-Service)[0].name Write-Host $name
process
unexpected microsoft pythontools attacher exe in the running processes ms build vs rtw build in create a powershell script project you can use the one pasted below the problem is not script specific set a breakpoint somewhere in the script to stop at the breakpoint select debug attach to process in the attach to process dialog leave all settings with default values then chose powershell host service to attach to close the dialog set the focus back in the editor then hit and click on stop debugging in the toolbar repeat steps and notice in the attach to process you will see an extra process named microsoft pythontools attacher exe which probably was launched by the poshtools on the second attach i understand this isn t a typical local attach scenario but there was nothing preventing me from doing so script attached script get service get process var hello number numbers debug computer contains computer head column tcolumn tcolumn name bits computer bits name computer var hello var get member svc get service svc name name svc name name length name toupper service bits name service is service toupper upper name toupper name service is upper write host name name get service name write host name
1
11,082
27,975,046,959
IssuesEvent
2023-03-25 13:28:23
PandaHugMonster/php-simputils
https://api.github.com/repos/PandaHugMonster/php-simputils
closed
New models "Email" and "Phone"
idea architecture model
Could be useful to create a few new models like "Email" and "Phone" and corresponding Normalization/Validation - [ ] `Email` - [ ] `Phone`
1.0
New models "Email" and "Phone" - Could be useful to create a few new models like "Email" and "Phone" and corresponding Normalization/Validation - [ ] `Email` - [ ] `Phone`
non_process
new models email and phone could be useful to create a few new models like email and phone and corresponding normalization validation email phone
0
38,150
2,839,626,206
IssuesEvent
2015-05-27 14:37:57
Kunstmaan/KunstmaanBundlesCMS
https://api.github.com/repos/Kunstmaan/KunstmaanBundlesCMS
closed
Sort sub entities in pageparts is not working
Priority: Normal Profile: Frontend Type: Bugfix
If you make sub entities sortable, you have JS errors in console and sort does not work.
1.0
Sort sub entities in pageparts is not working - If you make sub entities sortable, you have JS errors in console and sort does not work.
non_process
sort sub entities in pageparts is not working if you make sub entities sortable you have js errors in console and sort does not work
0
228,641
17,468,084,046
IssuesEvent
2021-08-06 20:10:34
woocommerce/woocommerce-admin
https://api.github.com/repos/woocommerce/woocommerce-admin
closed
[GlobalStep - 2.6.0 Beta 2] "GET THE BASICS" and "GROW YOUR STORE" headers are not displayed on free features screen under business details step.
type: documentation
## Bug Description: "GET THE BASICS" and "GROW YOUR STORE" headers are not displayed and also "Google listings Ads" and "Mailchimp for WooCommerce "are not listed on free features screen under business details step. ## Environment: WooCommerce Admin: v2.6.0 Beta-2 Woocommerce Version : WooCommerce 5.5.2 ## PC: Windows 10, Mac 10.14.6 Chrome(Version 91.0.4472.77) Firefox(Version 89.0.2) Safari: v14.1.2 ## Steps To Reproduce: 1. Create any test site using JN site. 1. Install and activate all the required plugins. 1. Install and activate WooCommerce Admin 2.6.0 Beta-2. 1. Go to Free features screen during onboarding. 1. Observe that "GET THE BASICS" and "GROW YOUR STORE" headers are not displayed and also "Google listings Ads" and "Mailchimp for WooCommerce "are not listed ## Actual Result: "GET THE BASICS" and "GROW YOUR STORE" headers are not displayed and also "Google listings Ads" and "Mailchimp for WooCommerce "are not listed on free features screen under business details step. ## Expected Result: "GET THE BASICS" and "GROW YOUR STORE" headers should be display and also "Google listings Ads" and "Mailchimp for WooCommerce " should listed on free features screen under business details step. ## Screenshot: ![#7472](https://user-images.githubusercontent.com/41110392/128512268-606f48bf-8a22-482a-a274-1ffd7446794c.png) *Isolating the problem (mark completed items with an [x]):* - [ ] I have deactivated other plugins and confirmed this bug occurs when only WooCommerce plugin is active. - [ ] This bug happens with a default WordPress theme active, or [Storefront](https://woocommerce.com/storefront/). - [x] I can reproduce this bug consistently using the steps above. 
<details> ` ### WordPress Environment ### WC Version: 5.5.2 REST API Version: ✔ 5.5.2 WC Blocks Version: ✔ 5.3.3 Action Scheduler Version: ✔ 3.2.1 WC Admin Version: ✔ 2.6.0-beta.2 Log Directory Writable: ✔ WP Version: 5.8 WP Multisite: – WP Memory Limit: 256 MB WP Debug Mode: ✔ WP Cron: ✔ Language: en_US External object cache: – ### Server Environment ### Server Info: Apache/2.4.48 (Unix) OpenSSL/1.0.2g PHP Version: 7.4.21 PHP Post Max Size: 1 GB PHP Time Limit: 30 PHP Max Input Vars: 5000 cURL Version: 7.47.0 OpenSSL/1.0.2g SUHOSIN Installed: – MySQL Version: 5.7.33-0ubuntu0.16.04.1-log Max Upload Size: 512 MB Default Timezone is UTC: ✔ fsockopen/cURL: ✔ SoapClient: ✔ DOMDocument: ✔ GZip: ✔ Multibyte String: ✔ Remote Post: ✔ Remote Get: ✔ ### Database ### WC Database Version: 5.5.2 WC Database Prefix: wp_ Total Database Size: 10.80MB Database Data Size: 8.04MB Database Index Size: 2.76MB wp_woocommerce_sessions: Data: 0.02MB + Index: 0.02MB + Engine InnoDB wp_woocommerce_api_keys: Data: 0.02MB + Index: 0.03MB + Engine InnoDB wp_woocommerce_attribute_taxonomies: Data: 0.02MB + Index: 0.02MB + Engine InnoDB wp_woocommerce_downloadable_product_permissions: Data: 0.02MB + Index: 0.06MB + Engine InnoDB wp_woocommerce_order_items: Data: 0.02MB + Index: 0.02MB + Engine InnoDB wp_woocommerce_order_itemmeta: Data: 0.06MB + Index: 0.03MB + Engine InnoDB wp_woocommerce_tax_rates: Data: 0.02MB + Index: 0.06MB + Engine InnoDB wp_woocommerce_tax_rate_locations: Data: 0.02MB + Index: 0.03MB + Engine InnoDB wp_woocommerce_shipping_zones: Data: 0.02MB + Index: 0.00MB + Engine InnoDB wp_woocommerce_shipping_zone_locations: Data: 0.02MB + Index: 0.03MB + Engine InnoDB wp_woocommerce_shipping_zone_methods: Data: 0.02MB + Index: 0.00MB + Engine InnoDB wp_woocommerce_payment_tokens: Data: 0.02MB + Index: 0.02MB + Engine InnoDB wp_woocommerce_payment_tokenmeta: Data: 0.02MB + Index: 0.03MB + Engine InnoDB wp_woocommerce_log: Data: 0.02MB + Index: 0.02MB + Engine InnoDB 
wp_actionscheduler_actions: Data: 0.05MB + Index: 0.13MB + Engine InnoDB wp_actionscheduler_claims: Data: 0.02MB + Index: 0.02MB + Engine InnoDB wp_actionscheduler_groups: Data: 0.02MB + Index: 0.02MB + Engine InnoDB wp_actionscheduler_logs: Data: 0.05MB + Index: 0.03MB + Engine InnoDB wp_commentmeta: Data: 0.02MB + Index: 0.03MB + Engine InnoDB wp_comments: Data: 0.02MB + Index: 0.09MB + Engine InnoDB wp_gla_budget_recommendations: Data: 0.22MB + Index: 0.14MB + Engine InnoDB wp_gla_merchant_issues: Data: 0.02MB + Index: 0.02MB + Engine InnoDB wp_gla_shipping_rates: Data: 0.02MB + Index: 0.03MB + Engine InnoDB wp_gla_shipping_times: Data: 0.02MB + Index: 0.02MB + Engine InnoDB wp_links: Data: 0.02MB + Index: 0.02MB + Engine InnoDB wp_mailchimp_carts: Data: 0.02MB + Index: 0.00MB + Engine InnoDB wp_mailchimp_jobs: Data: 0.02MB + Index: 0.00MB + Engine InnoDB wp_mailpoet_custom_fields: Data: 0.02MB + Index: 0.02MB + Engine InnoDB wp_mailpoet_dynamic_segment_filters: Data: 0.02MB + Index: 0.02MB + Engine InnoDB wp_mailpoet_feature_flags: Data: 0.02MB + Index: 0.02MB + Engine InnoDB wp_mailpoet_forms: Data: 0.02MB + Index: 0.00MB + Engine InnoDB wp_mailpoet_log: Data: 0.02MB + Index: 0.00MB + Engine InnoDB wp_mailpoet_mapping_to_external_entities: Data: 0.02MB + Index: 0.02MB + Engine InnoDB wp_mailpoet_newsletters: Data: 0.02MB + Index: 0.03MB + Engine InnoDB wp_mailpoet_newsletter_links: Data: 0.02MB + Index: 0.05MB + Engine InnoDB wp_mailpoet_newsletter_option: Data: 0.02MB + Index: 0.02MB + Engine InnoDB wp_mailpoet_newsletter_option_fields: Data: 0.02MB + Index: 0.02MB + Engine InnoDB wp_mailpoet_newsletter_posts: Data: 0.02MB + Index: 0.02MB + Engine InnoDB wp_mailpoet_newsletter_segment: Data: 0.02MB + Index: 0.02MB + Engine InnoDB wp_mailpoet_newsletter_templates: Data: 2.52MB + Index: 0.00MB + Engine InnoDB wp_mailpoet_scheduled_tasks: Data: 0.02MB + Index: 0.03MB + Engine InnoDB wp_mailpoet_scheduled_task_subscribers: Data: 0.02MB + Index: 0.02MB + Engine 
InnoDB wp_mailpoet_segments: Data: 0.02MB + Index: 0.03MB + Engine InnoDB wp_mailpoet_sending_queues: Data: 0.02MB + Index: 0.03MB + Engine InnoDB wp_mailpoet_settings: Data: 0.02MB + Index: 0.02MB + Engine InnoDB wp_mailpoet_statistics_clicks: Data: 0.02MB + Index: 0.05MB + Engine InnoDB wp_mailpoet_statistics_forms: Data: 0.02MB + Index: 0.02MB + Engine InnoDB wp_mailpoet_statistics_newsletters: Data: 0.02MB + Index: 0.03MB + Engine InnoDB wp_mailpoet_statistics_opens: Data: 0.02MB + Index: 0.08MB + Engine InnoDB wp_mailpoet_statistics_unsubscribes: Data: 0.02MB + Index: 0.05MB + Engine InnoDB wp_mailpoet_statistics_woocommerce_purchases: Data: 0.02MB + Index: 0.06MB + Engine InnoDB wp_mailpoet_stats_notifications: Data: 0.02MB + Index: 0.03MB + Engine InnoDB wp_mailpoet_subscribers: Data: 0.02MB + Index: 0.13MB + Engine InnoDB wp_mailpoet_subscriber_custom_field: Data: 0.02MB + Index: 0.02MB + Engine InnoDB wp_mailpoet_subscriber_ips: Data: 0.02MB + Index: 0.02MB + Engine InnoDB wp_mailpoet_subscriber_segment: Data: 0.02MB + Index: 0.03MB + Engine InnoDB wp_mailpoet_user_flags: Data: 0.02MB + Index: 0.02MB + Engine InnoDB wp_options: Data: 3.47MB + Index: 0.19MB + Engine InnoDB wp_postmeta: Data: 0.16MB + Index: 0.11MB + Engine InnoDB wp_posts: Data: 0.06MB + Index: 0.06MB + Engine InnoDB wp_termmeta: Data: 0.02MB + Index: 0.03MB + Engine InnoDB wp_terms: Data: 0.02MB + Index: 0.03MB + Engine InnoDB wp_term_relationships: Data: 0.02MB + Index: 0.02MB + Engine InnoDB wp_term_taxonomy: Data: 0.02MB + Index: 0.03MB + Engine InnoDB wp_usermeta: Data: 0.02MB + Index: 0.03MB + Engine InnoDB wp_users: Data: 0.02MB + Index: 0.05MB + Engine InnoDB wp_wc_admin_notes: Data: 0.05MB + Index: 0.00MB + Engine InnoDB wp_wc_admin_note_actions: Data: 0.02MB + Index: 0.02MB + Engine InnoDB wp_wc_category_lookup: Data: 0.02MB + Index: 0.00MB + Engine InnoDB wp_wc_customer_lookup: Data: 0.02MB + Index: 0.03MB + Engine InnoDB wp_wc_download_log: Data: 0.02MB + Index: 0.03MB + Engine 
InnoDB wp_wc_order_coupon_lookup: Data: 0.02MB + Index: 0.03MB + Engine InnoDB wp_wc_order_product_lookup: Data: 0.02MB + Index: 0.06MB + Engine InnoDB wp_wc_order_stats: Data: 0.02MB + Index: 0.05MB + Engine InnoDB wp_wc_order_tax_lookup: Data: 0.02MB + Index: 0.03MB + Engine InnoDB wp_wc_product_meta_lookup: Data: 0.02MB + Index: 0.09MB + Engine InnoDB wp_wc_reserved_stock: Data: 0.02MB + Index: 0.00MB + Engine InnoDB wp_wc_tax_rate_classes: Data: 0.02MB + Index: 0.02MB + Engine InnoDB wp_wc_webhooks: Data: 0.02MB + Index: 0.02MB + Engine InnoDB ### Post Type Counts ### attachment: 28 mailpoet_page: 1 page: 6 post: 2 product: 20 product_variation: 10 shop_coupon: 2 shop_order: 9 ### Security ### Secure connection (HTTPS): ✔ Hide errors from visitors: ✔ ### Active Plugins (10) ### Query Monitor: by John Blackbourn – 3.7.1 Companion Plugin: by Osk – 1.18 Google Listings and Ads: by WooCommerce – 1.3.0 Gutenberg: by Gutenberg Team – 11.2.1 Jetpack: by Automattic – 10.0 Mailchimp for WooCommerce: by Mailchimp – 2.5.2 MailPoet 3 (New): by MailPoet – 3.65.1 WooCommerce Admin: by WooCommerce – 2.6.0-beta.2 WooCommerce: by Automattic – 5.5.2 WP Crontrol: by John Blackbourn & crontributors – 1.10.0 ### Inactive Plugins (2) ### Akismet Anti-Spam: by Automattic – 4.1.10 Hello Dolly: by Matt Mullenweg – 1.7.2 ### Dropin Plugins (1) ### db.php: Query Monitor Database Class ### Settings ### API Enabled: – Force SSL: – Currency: USD ($) Currency Position: left Thousand Separator: , Decimal Separator: . 
Number of Decimals: 2 Taxonomies: Product Types: external (external) grouped (grouped) simple (simple) variable (variable) Taxonomies: Product Visibility: exclude-from-catalog (exclude-from-catalog) exclude-from-search (exclude-from-search) featured (featured) outofstock (outofstock) rated-1 (rated-1) rated-2 (rated-2) rated-3 (rated-3) rated-4 (rated-4) rated-5 (rated-5) Connected to WooCommerce.com: – ### WC Pages ### Shop base: #7 - /shop/ Cart: #8 - /cart/ Checkout: #9 - /checkout/ My account: #10 - /my-account/ Terms and conditions: ❌ Page not set ### Theme ### Name: Storefront Version: 3.7.0 Author URL: https://woocommerce.com/ Child Theme: ❌ – If you are modifying WooCommerce on a parent theme that you did not build personally we recommend using a child theme. See: How to create a child theme WooCommerce Support: ✔ ### Templates ### Overrides: – ### Action Scheduler ### Complete: 85 Oldest: 2021-08-06 07:36:40 +0000 Newest: 2021-08-06 11:54:51 +0000 ### Status report information ### Generated at: 2021-08-06 12:41:32 +00:00 ` </details>
1.0
[GlobalStep - 2.6.0 Beta 2] "GET THE BASICS" and "GROW YOUR STORE" headers are not displayed on free features screen under business details step. - ## Bug Description: "GET THE BASICS" and "GROW YOUR STORE" headers are not displayed, and "Google listings Ads" and "Mailchimp for WooCommerce" are not listed, on the free features screen under the business details step. ## Environment: WooCommerce Admin: v2.6.0 Beta-2 WooCommerce Version: WooCommerce 5.5.2 ## PC: Windows 10, Mac 10.14.6 Chrome (Version 91.0.4472.77) Firefox (Version 89.0.2) Safari: v14.1.2 ## Steps To Reproduce: 1. Create any test site using a JN site. 2. Install and activate all the required plugins. 3. Install and activate WooCommerce Admin 2.6.0 Beta-2. 4. Go to the Free features screen during onboarding. 5. Observe that the "GET THE BASICS" and "GROW YOUR STORE" headers are not displayed and that "Google listings Ads" and "Mailchimp for WooCommerce" are not listed. ## Actual Result: "GET THE BASICS" and "GROW YOUR STORE" headers are not displayed, and "Google listings Ads" and "Mailchimp for WooCommerce" are not listed, on the free features screen under the business details step. ## Expected Result: "GET THE BASICS" and "GROW YOUR STORE" headers should be displayed, and "Google listings Ads" and "Mailchimp for WooCommerce" should be listed, on the free features screen under the business details step. ## Screenshot: ![#7472](https://user-images.githubusercontent.com/41110392/128512268-606f48bf-8a22-482a-a274-1ffd7446794c.png) *Isolating the problem (mark completed items with an [x]):* - [ ] I have deactivated other plugins and confirmed this bug occurs when only the WooCommerce plugin is active. - [ ] This bug happens with a default WordPress theme active, or [Storefront](https://woocommerce.com/storefront/). - [x] I can reproduce this bug consistently using the steps above. 
<details> ` ### WordPress Environment ### WC Version: 5.5.2 REST API Version: ✔ 5.5.2 WC Blocks Version: ✔ 5.3.3 Action Scheduler Version: ✔ 3.2.1 WC Admin Version: ✔ 2.6.0-beta.2 Log Directory Writable: ✔ WP Version: 5.8 WP Multisite: – WP Memory Limit: 256 MB WP Debug Mode: ✔ WP Cron: ✔ Language: en_US External object cache: – ### Server Environment ### Server Info: Apache/2.4.48 (Unix) OpenSSL/1.0.2g PHP Version: 7.4.21 PHP Post Max Size: 1 GB PHP Time Limit: 30 PHP Max Input Vars: 5000 cURL Version: 7.47.0 OpenSSL/1.0.2g SUHOSIN Installed: – MySQL Version: 5.7.33-0ubuntu0.16.04.1-log Max Upload Size: 512 MB Default Timezone is UTC: ✔ fsockopen/cURL: ✔ SoapClient: ✔ DOMDocument: ✔ GZip: ✔ Multibyte String: ✔ Remote Post: ✔ Remote Get: ✔ ### Database ### WC Database Version: 5.5.2 WC Database Prefix: wp_ Total Database Size: 10.80MB Database Data Size: 8.04MB Database Index Size: 2.76MB wp_woocommerce_sessions: Data: 0.02MB + Index: 0.02MB + Engine InnoDB wp_woocommerce_api_keys: Data: 0.02MB + Index: 0.03MB + Engine InnoDB wp_woocommerce_attribute_taxonomies: Data: 0.02MB + Index: 0.02MB + Engine InnoDB wp_woocommerce_downloadable_product_permissions: Data: 0.02MB + Index: 0.06MB + Engine InnoDB wp_woocommerce_order_items: Data: 0.02MB + Index: 0.02MB + Engine InnoDB wp_woocommerce_order_itemmeta: Data: 0.06MB + Index: 0.03MB + Engine InnoDB wp_woocommerce_tax_rates: Data: 0.02MB + Index: 0.06MB + Engine InnoDB wp_woocommerce_tax_rate_locations: Data: 0.02MB + Index: 0.03MB + Engine InnoDB wp_woocommerce_shipping_zones: Data: 0.02MB + Index: 0.00MB + Engine InnoDB wp_woocommerce_shipping_zone_locations: Data: 0.02MB + Index: 0.03MB + Engine InnoDB wp_woocommerce_shipping_zone_methods: Data: 0.02MB + Index: 0.00MB + Engine InnoDB wp_woocommerce_payment_tokens: Data: 0.02MB + Index: 0.02MB + Engine InnoDB wp_woocommerce_payment_tokenmeta: Data: 0.02MB + Index: 0.03MB + Engine InnoDB wp_woocommerce_log: Data: 0.02MB + Index: 0.02MB + Engine InnoDB 
wp_actionscheduler_actions: Data: 0.05MB + Index: 0.13MB + Engine InnoDB wp_actionscheduler_claims: Data: 0.02MB + Index: 0.02MB + Engine InnoDB wp_actionscheduler_groups: Data: 0.02MB + Index: 0.02MB + Engine InnoDB wp_actionscheduler_logs: Data: 0.05MB + Index: 0.03MB + Engine InnoDB wp_commentmeta: Data: 0.02MB + Index: 0.03MB + Engine InnoDB wp_comments: Data: 0.02MB + Index: 0.09MB + Engine InnoDB wp_gla_budget_recommendations: Data: 0.22MB + Index: 0.14MB + Engine InnoDB wp_gla_merchant_issues: Data: 0.02MB + Index: 0.02MB + Engine InnoDB wp_gla_shipping_rates: Data: 0.02MB + Index: 0.03MB + Engine InnoDB wp_gla_shipping_times: Data: 0.02MB + Index: 0.02MB + Engine InnoDB wp_links: Data: 0.02MB + Index: 0.02MB + Engine InnoDB wp_mailchimp_carts: Data: 0.02MB + Index: 0.00MB + Engine InnoDB wp_mailchimp_jobs: Data: 0.02MB + Index: 0.00MB + Engine InnoDB wp_mailpoet_custom_fields: Data: 0.02MB + Index: 0.02MB + Engine InnoDB wp_mailpoet_dynamic_segment_filters: Data: 0.02MB + Index: 0.02MB + Engine InnoDB wp_mailpoet_feature_flags: Data: 0.02MB + Index: 0.02MB + Engine InnoDB wp_mailpoet_forms: Data: 0.02MB + Index: 0.00MB + Engine InnoDB wp_mailpoet_log: Data: 0.02MB + Index: 0.00MB + Engine InnoDB wp_mailpoet_mapping_to_external_entities: Data: 0.02MB + Index: 0.02MB + Engine InnoDB wp_mailpoet_newsletters: Data: 0.02MB + Index: 0.03MB + Engine InnoDB wp_mailpoet_newsletter_links: Data: 0.02MB + Index: 0.05MB + Engine InnoDB wp_mailpoet_newsletter_option: Data: 0.02MB + Index: 0.02MB + Engine InnoDB wp_mailpoet_newsletter_option_fields: Data: 0.02MB + Index: 0.02MB + Engine InnoDB wp_mailpoet_newsletter_posts: Data: 0.02MB + Index: 0.02MB + Engine InnoDB wp_mailpoet_newsletter_segment: Data: 0.02MB + Index: 0.02MB + Engine InnoDB wp_mailpoet_newsletter_templates: Data: 2.52MB + Index: 0.00MB + Engine InnoDB wp_mailpoet_scheduled_tasks: Data: 0.02MB + Index: 0.03MB + Engine InnoDB wp_mailpoet_scheduled_task_subscribers: Data: 0.02MB + Index: 0.02MB + Engine 
InnoDB wp_mailpoet_segments: Data: 0.02MB + Index: 0.03MB + Engine InnoDB wp_mailpoet_sending_queues: Data: 0.02MB + Index: 0.03MB + Engine InnoDB wp_mailpoet_settings: Data: 0.02MB + Index: 0.02MB + Engine InnoDB wp_mailpoet_statistics_clicks: Data: 0.02MB + Index: 0.05MB + Engine InnoDB wp_mailpoet_statistics_forms: Data: 0.02MB + Index: 0.02MB + Engine InnoDB wp_mailpoet_statistics_newsletters: Data: 0.02MB + Index: 0.03MB + Engine InnoDB wp_mailpoet_statistics_opens: Data: 0.02MB + Index: 0.08MB + Engine InnoDB wp_mailpoet_statistics_unsubscribes: Data: 0.02MB + Index: 0.05MB + Engine InnoDB wp_mailpoet_statistics_woocommerce_purchases: Data: 0.02MB + Index: 0.06MB + Engine InnoDB wp_mailpoet_stats_notifications: Data: 0.02MB + Index: 0.03MB + Engine InnoDB wp_mailpoet_subscribers: Data: 0.02MB + Index: 0.13MB + Engine InnoDB wp_mailpoet_subscriber_custom_field: Data: 0.02MB + Index: 0.02MB + Engine InnoDB wp_mailpoet_subscriber_ips: Data: 0.02MB + Index: 0.02MB + Engine InnoDB wp_mailpoet_subscriber_segment: Data: 0.02MB + Index: 0.03MB + Engine InnoDB wp_mailpoet_user_flags: Data: 0.02MB + Index: 0.02MB + Engine InnoDB wp_options: Data: 3.47MB + Index: 0.19MB + Engine InnoDB wp_postmeta: Data: 0.16MB + Index: 0.11MB + Engine InnoDB wp_posts: Data: 0.06MB + Index: 0.06MB + Engine InnoDB wp_termmeta: Data: 0.02MB + Index: 0.03MB + Engine InnoDB wp_terms: Data: 0.02MB + Index: 0.03MB + Engine InnoDB wp_term_relationships: Data: 0.02MB + Index: 0.02MB + Engine InnoDB wp_term_taxonomy: Data: 0.02MB + Index: 0.03MB + Engine InnoDB wp_usermeta: Data: 0.02MB + Index: 0.03MB + Engine InnoDB wp_users: Data: 0.02MB + Index: 0.05MB + Engine InnoDB wp_wc_admin_notes: Data: 0.05MB + Index: 0.00MB + Engine InnoDB wp_wc_admin_note_actions: Data: 0.02MB + Index: 0.02MB + Engine InnoDB wp_wc_category_lookup: Data: 0.02MB + Index: 0.00MB + Engine InnoDB wp_wc_customer_lookup: Data: 0.02MB + Index: 0.03MB + Engine InnoDB wp_wc_download_log: Data: 0.02MB + Index: 0.03MB + Engine 
InnoDB wp_wc_order_coupon_lookup: Data: 0.02MB + Index: 0.03MB + Engine InnoDB wp_wc_order_product_lookup: Data: 0.02MB + Index: 0.06MB + Engine InnoDB wp_wc_order_stats: Data: 0.02MB + Index: 0.05MB + Engine InnoDB wp_wc_order_tax_lookup: Data: 0.02MB + Index: 0.03MB + Engine InnoDB wp_wc_product_meta_lookup: Data: 0.02MB + Index: 0.09MB + Engine InnoDB wp_wc_reserved_stock: Data: 0.02MB + Index: 0.00MB + Engine InnoDB wp_wc_tax_rate_classes: Data: 0.02MB + Index: 0.02MB + Engine InnoDB wp_wc_webhooks: Data: 0.02MB + Index: 0.02MB + Engine InnoDB ### Post Type Counts ### attachment: 28 mailpoet_page: 1 page: 6 post: 2 product: 20 product_variation: 10 shop_coupon: 2 shop_order: 9 ### Security ### Secure connection (HTTPS): ✔ Hide errors from visitors: ✔ ### Active Plugins (10) ### Query Monitor: by John Blackbourn – 3.7.1 Companion Plugin: by Osk – 1.18 Google Listings and Ads: by WooCommerce – 1.3.0 Gutenberg: by Gutenberg Team – 11.2.1 Jetpack: by Automattic – 10.0 Mailchimp for WooCommerce: by Mailchimp – 2.5.2 MailPoet 3 (New): by MailPoet – 3.65.1 WooCommerce Admin: by WooCommerce – 2.6.0-beta.2 WooCommerce: by Automattic – 5.5.2 WP Crontrol: by John Blackbourn & crontributors – 1.10.0 ### Inactive Plugins (2) ### Akismet Anti-Spam: by Automattic – 4.1.10 Hello Dolly: by Matt Mullenweg – 1.7.2 ### Dropin Plugins (1) ### db.php: Query Monitor Database Class ### Settings ### API Enabled: – Force SSL: – Currency: USD ($) Currency Position: left Thousand Separator: , Decimal Separator: . 
Number of Decimals: 2 Taxonomies: Product Types: external (external) grouped (grouped) simple (simple) variable (variable) Taxonomies: Product Visibility: exclude-from-catalog (exclude-from-catalog) exclude-from-search (exclude-from-search) featured (featured) outofstock (outofstock) rated-1 (rated-1) rated-2 (rated-2) rated-3 (rated-3) rated-4 (rated-4) rated-5 (rated-5) Connected to WooCommerce.com: – ### WC Pages ### Shop base: #7 - /shop/ Cart: #8 - /cart/ Checkout: #9 - /checkout/ My account: #10 - /my-account/ Terms and conditions: ❌ Page not set ### Theme ### Name: Storefront Version: 3.7.0 Author URL: https://woocommerce.com/ Child Theme: ❌ – If you are modifying WooCommerce on a parent theme that you did not build personally we recommend using a child theme. See: How to create a child theme WooCommerce Support: ✔ ### Templates ### Overrides: – ### Action Scheduler ### Complete: 85 Oldest: 2021-08-06 07:36:40 +0000 Newest: 2021-08-06 11:54:51 +0000 ### Status report information ### Generated at: 2021-08-06 12:41:32 +00:00 ` </details>
non_process
get the basics and grow your store headers are not displayed on free features screen under business details step bug description get the basics and grow your store headers are not displayed and also google listings ads and mailchimp for woocommerce are not listed on free features screen under business details step environment woocommerce admin beta woocommerce version woocommerce pc windows mac chrome version firefox version safari steps to reproduce create any test site using jn site install and activate all the required plugins install and activate woocommerce admin beta go to free features screen during onboarding observe that get the basics and grow your store headers are not displayed and also google listings ads and mailchimp for woocommerce are not listed actual result get the basics and grow your store headers are not displayed and also google listings ads and mailchimp for woocommerce are not listed on free features screen under business details step expected result get the basics and grow your store headers should be display and also google listings ads and mailchimp for woocommerce should listed on free features screen under business details step screenshot isolating the problem mark completed items with an i have deactivated other plugins and confirmed this bug occurs when only woocommerce plugin is active this bug happens with a default wordpress theme active or i can reproduce this bug consistently using the steps above wordpress environment wc version rest api version ✔ wc blocks version ✔ action scheduler version ✔ wc admin version ✔ beta log directory writable ✔ wp version wp multisite – wp memory limit mb wp debug mode ✔ wp cron ✔ language en us external object cache – server environment server info apache unix openssl php version php post max size gb php time limit php max input vars curl version openssl suhosin installed – mysql version log max upload size mb default timezone is utc ✔ fsockopen curl ✔ soapclient ✔ domdocument ✔ gzip ✔ multibyte 
string ✔ remote post ✔ remote get ✔ database wc database version wc database prefix wp total database size database data size database index size wp woocommerce sessions data index engine innodb wp woocommerce api keys data index engine innodb wp woocommerce attribute taxonomies data index engine innodb wp woocommerce downloadable product permissions data index engine innodb wp woocommerce order items data index engine innodb wp woocommerce order itemmeta data index engine innodb wp woocommerce tax rates data index engine innodb wp woocommerce tax rate locations data index engine innodb wp woocommerce shipping zones data index engine innodb wp woocommerce shipping zone locations data index engine innodb wp woocommerce shipping zone methods data index engine innodb wp woocommerce payment tokens data index engine innodb wp woocommerce payment tokenmeta data index engine innodb wp woocommerce log data index engine innodb wp actionscheduler actions data index engine innodb wp actionscheduler claims data index engine innodb wp actionscheduler groups data index engine innodb wp actionscheduler logs data index engine innodb wp commentmeta data index engine innodb wp comments data index engine innodb wp gla budget recommendations data index engine innodb wp gla merchant issues data index engine innodb wp gla shipping rates data index engine innodb wp gla shipping times data index engine innodb wp links data index engine innodb wp mailchimp carts data index engine innodb wp mailchimp jobs data index engine innodb wp mailpoet custom fields data index engine innodb wp mailpoet dynamic segment filters data index engine innodb wp mailpoet feature flags data index engine innodb wp mailpoet forms data index engine innodb wp mailpoet log data index engine innodb wp mailpoet mapping to external entities data index engine innodb wp mailpoet newsletters data index engine innodb wp mailpoet newsletter links data index engine innodb wp mailpoet newsletter option data index engine 
innodb wp mailpoet newsletter option fields data index engine innodb wp mailpoet newsletter posts data index engine innodb wp mailpoet newsletter segment data index engine innodb wp mailpoet newsletter templates data index engine innodb wp mailpoet scheduled tasks data index engine innodb wp mailpoet scheduled task subscribers data index engine innodb wp mailpoet segments data index engine innodb wp mailpoet sending queues data index engine innodb wp mailpoet settings data index engine innodb wp mailpoet statistics clicks data index engine innodb wp mailpoet statistics forms data index engine innodb wp mailpoet statistics newsletters data index engine innodb wp mailpoet statistics opens data index engine innodb wp mailpoet statistics unsubscribes data index engine innodb wp mailpoet statistics woocommerce purchases data index engine innodb wp mailpoet stats notifications data index engine innodb wp mailpoet subscribers data index engine innodb wp mailpoet subscriber custom field data index engine innodb wp mailpoet subscriber ips data index engine innodb wp mailpoet subscriber segment data index engine innodb wp mailpoet user flags data index engine innodb wp options data index engine innodb wp postmeta data index engine innodb wp posts data index engine innodb wp termmeta data index engine innodb wp terms data index engine innodb wp term relationships data index engine innodb wp term taxonomy data index engine innodb wp usermeta data index engine innodb wp users data index engine innodb wp wc admin notes data index engine innodb wp wc admin note actions data index engine innodb wp wc category lookup data index engine innodb wp wc customer lookup data index engine innodb wp wc download log data index engine innodb wp wc order coupon lookup data index engine innodb wp wc order product lookup data index engine innodb wp wc order stats data index engine innodb wp wc order tax lookup data index engine innodb wp wc product meta lookup data index engine innodb wp wc 
reserved stock data index engine innodb wp wc tax rate classes data index engine innodb wp wc webhooks data index engine innodb post type counts attachment mailpoet page page post product product variation shop coupon shop order security secure connection https ✔ hide errors from visitors ✔ active plugins query monitor by john blackbourn – companion plugin by osk – google listings and ads by woocommerce – gutenberg by gutenberg team – jetpack by automattic – mailchimp for woocommerce by mailchimp – mailpoet new by mailpoet – woocommerce admin by woocommerce – beta woocommerce by automattic – wp crontrol by john blackbourn crontributors – inactive plugins akismet anti spam by automattic – hello dolly by matt mullenweg – dropin plugins db php query monitor database class settings api enabled – force ssl – currency usd currency position left thousand separator decimal separator number of decimals taxonomies product types external external grouped grouped simple simple variable variable taxonomies product visibility exclude from catalog exclude from catalog exclude from search exclude from search featured featured outofstock outofstock rated rated rated rated rated rated rated rated rated rated connected to woocommerce com – wc pages shop base shop cart cart checkout checkout my account my account terms and conditions ❌ page not set theme name storefront version author url child theme ❌ – if you are modifying woocommerce on a parent theme that you did not build personally we recommend using a child theme see how to create a child theme woocommerce support ✔ templates overrides – action scheduler complete oldest newest status report information generated at
0
11,807
14,628,131,691
IssuesEvent
2020-12-23 13:37:05
GoogleCloudPlatform/fda-mystudies
https://api.github.com/repos/GoogleCloudPlatform/fda-mystudies
closed
[PM] [Android] Participant details > Consent document is not shown for closed study having Eligibility test
Bug P0 Participant manager Process: Dev Process: Fixed Process: Tested dev
**Steps:** 1. Publish a study having token validation and an Eligibility test 2. PM admin invites a user 3. Android mobile participant enrolls into the study using a valid token and passes the eligibility test 4. User enrolls successfully 5. Observe the participant details --> Consent History **A/R:** Consent document is not shown for a closed study having an Eligibility test **E/R:** Consent document should be shown for all types of studies Note: Issue is not observed for a closed study having token validation Issue is not observed for iOS users ![Screenshot_7](https://user-images.githubusercontent.com/60386291/102899937-a854ab80-4491-11eb-9f82-55489b1c8b98.png)
3.0
[PM] [Android] Participant details > Consent document is not shown for closed study having Eligibility test - **Steps:** 1. Publish a study having token validation and an Eligibility test 2. PM admin invites a user 3. Android mobile participant enrolls into the study using a valid token and passes the eligibility test 4. User enrolls successfully 5. Observe the participant details --> Consent History **A/R:** Consent document is not shown for a closed study having an Eligibility test **E/R:** Consent document should be shown for all types of studies Note: Issue is not observed for a closed study having token validation Issue is not observed for iOS users ![Screenshot_7](https://user-images.githubusercontent.com/60386291/102899937-a854ab80-4491-11eb-9f82-55489b1c8b98.png)
process
participant details consent document is not shown for closed study having eligibility test steps publish a study having token validation and eligibility test pm admin invites a user android mobile participant enrolls into the study using valid token and passes the eligibility test user enrolls successfully observe the participant details consent history a r consent document is not shown for closed study having eligibility test e r consent document should be shown for all type of studies note issue is not observed for closed study having token validation issue not observed for ios users
1
364
2,798,259,253
IssuesEvent
2015-05-12 17:42:53
linguisticteam/resource-central
https://api.github.com/repos/linguisticteam/resource-central
closed
US1: Functionality for the Add Resource UI page
Work in Process
When adding a resource with multiple elements, the Add Resource page allows the user to add the first element (e.g. a part) together with the data specific only to the resource. What we have planned is the following: - upon clicking the Submit button for adding a resource with multiple elements, the data from the form would be inserted into the database - the resource ID of the inserted resource would then be fetched - the page would then redirect to itself, with the resource ID added to the URL as a query string (e.g. "add_resource.php?resource_id=23") - based on the presence of that query string, the Add Resource page would bring up the form relevant to adding a resource with multiple elements and would prefill the information specific to the resource, while also making that information uneditable - the user would be able to enter the next element (e.g. a part or lesson) just below the (prefilled) resource-specific information - below the form would be a visualization of how the resource with its elements would look in the frontend
1.0
US1: Functionality for the Add Resource UI page - When adding a resource with multiple elements, the Add Resource page allows the user to add the first element (e.g. a part) together with the data specific only to the resource. What we have planned is the following: - upon clicking the Submit button for adding a resource with multiple elements, the data from the form would be inserted into the database - the resource ID of the inserted resource would then be fetched - the page would then redirect to itself, with the resource ID added to the URL as a query string (e.g. "add_resource.php?resource_id=23") - based on the presence of that query string, the Add Resource page would bring up the form relevant to adding a resource with multiple elements and would prefill the information specific to the resource, while also making that information uneditable - the user would be able to enter the next element (e.g. a part or lesson) just below the (prefilled) resource-specific information - below the form would be a visualization of how the resource with its elements would look in the frontend
process
functionality for the add resource ui page when adding a resource with multiple elements the add resource page allows the user to add the first element e g a part together with the data specific only to the resource what we have planned is the following upon clicking on the submit button for adding a resource with multiple elements the data from the form would be inserted to the database the resource id of the inserted resource would then be fetched then the page would be redirected to itself but with the resource id added to the url as a query string e g add resource php resource id based on the presence of that query string the add resource page would bring up the form relevant to adding a resource with multiple elements and would prefill the information specific to the resource while also making that information uneditable the user would be able to enter the next element e g part lesson just below the prefilled information specific to the resource below the form would be a visualization of how the resource with its elements would look in the frontend
1
170,424
13,186,914,907
IssuesEvent
2020-08-13 01:44:38
microsoft/AzureStorageExplorer
https://api.github.com/repos/microsoft/AzureStorageExplorer
closed
No refresh notification pops up after promoting snapshot for one ADLS Gen2 blob
:gear: adls gen2 :gear: blobs :heavy_check_mark: merged 🧪 testing
**Storage Explorer Version:** 1.14.1 **Build**: 20200710.5 **Branch**: hotfix/1.14.1 **Platform/OS:** Windows 10/ Linux Ubuntu 18.04/ macOS Catalina **Architecture**: ia32/x64 **Regression From:** Not a regression **Steps to reproduce:** 1. Go to 'Settings' -> 'Data Explorers' -> Uncheck the setting 'Auto-refresh on change'. 2. Go to 'Settings' -> 'Services' -> 'Storage Accounts' -> Enable the setting 'Enable ADLS Gen2 snapshots (preview)' -> Restart Storage Explorer. 3. Expand one ADLS Gen2 storage account -> Blob Containers. 4. Create one blob container -> Upload one blob to it -> Create one snapshot for the blob. 5. Switch to snapshots view -> Click 'Promote Snapshot' -> Check the result. **Expected Experience:** A refresh notification pops up. ![image](https://user-images.githubusercontent.com/41351993/87136038-7da0dd00-c2cd-11ea-9daf-157dbaec9575.png) **Actual Experience:** No refresh notification pops up. **More Info:** 1. It also needs a manual refresh under snapshots view after promoting a snapshot with 'Auto-refresh on change' enabled.
1.0
No refresh notification pops up after promoting snapshot for one ADLS Gen2 blob - **Storage Explorer Version:** 1.14.1 **Build**: 20200710.5 **Branch**: hotfix/1.14.1 **Platform/OS:** Windows 10/ Linux Ubuntu 18.04/ macOS Catalina **Architecture**: ia32/x64 **Regression From:** Not a regression **Steps to reproduce:** 1. Go to 'Settings' -> 'Data Explorers' -> Uncheck the setting 'Auto-refresh on change'. 2. Go to 'Settings' -> 'Services' -> 'Storage Accounts' -> Enable the setting 'Enable ADLS Gen2 snapshots (preview)' -> Restart Storage Explorer. 3. Expand one ADLS Gen2 storage account -> Blob Containers. 4. Create one blob container -> Upload one blob to it -> Create one snapshot for the blob. 5. Switch to snapshots view -> Click 'Promote Snapshot' -> Check the result. **Expected Experience:** A refresh notification pops up. ![image](https://user-images.githubusercontent.com/41351993/87136038-7da0dd00-c2cd-11ea-9daf-157dbaec9575.png) **Actual Experience:** No refresh notification pops up. **More Info:** 1. It also needs a manual refresh under snapshots view after promoting a snapshot with 'Auto-refresh on change' enabled.
non_process
no refresh notification pops up after promoting snapshot for one adls blob storage explorer version build branch hotfix platform os windows linux ubuntu macos catalina architecture regression from not a regression steps to reproduce go to settings data explorers uncheck the setting auto refresh on change go to settings services storage accounts enable the setting enable adls snapshots preview restart storage explorer expand one adls storage account blob containers create one blob container upload one blob to it create one snapshot for the blob switch to snapshots view click promote snapshot check the result expect experience pop up a refresh notification actual experience no refresh notification pops up more info it also needs to manually refresh under snapshots view after promoting snapshot with auto refresh on change enabled
0
11,495
14,368,698,655
IssuesEvent
2020-12-01 08:46:36
lutraconsulting/qgis-crayfish-plugin
https://api.github.com/repos/lutraconsulting/qgis-crayfish-plugin
closed
FID of exported mesh faces/vertices
blocked-upstream enhancement processing
When exporting mesh faces and vertices using Crayfish 3.1.0 in QGIS 3.6.0, the fid attribute of the generated vector layer does not correspond to element and node IDs of the mesh data source (2dm-file). Would it be possible to use the IDs of the data source for the fid attribute? This would be very helpful in order to identify certain nodes and /or elements within the 2D domain.
1.0
FID of exported mesh faces/vertices - When exporting mesh faces and vertices using Crayfish 3.1.0 in QGIS 3.6.0, the fid attribute of the generated vector layer does not correspond to element and node IDs of the mesh data source (2dm-file). Would it be possible to use the IDs of the data source for the fid attribute? This would be very helpful in order to identify certain nodes and /or elements within the 2D domain.
process
fid of exported mesh faces vertices when exporting mesh faces and vertices using crayfish in qgis the fid attribute of the generated vector layer does not correspond to element and node ids of the mesh data source file would it be possible to use the ids of the data source for the fid attribute this would be very helpful in order to identify certain nodes and or elements within the domain
1
18,841
24,752,493,694
IssuesEvent
2022-10-21 14:47:17
hashgraph/hedera-mirror-node
https://api.github.com/repos/hashgraph/hedera-mirror-node
closed
Switch to Gradle
enhancement process
### Problem Gradle provides a faster and more succinct build pipeline than Maven. With the Kotlin DSL, it provides a type-safe build system with auto-complete support inside IDEs. ### Solution * Add gradle kotlin build files and wrapper using gradle init * Make corrections as necessary * Keep maven build in parallel until feature parity is reached ### Alternatives _No response_
1.0
Switch to Gradle - ### Problem Gradle provides a faster and more succinct build pipeline than Maven. With the Kotlin DSL, it provides a type-safe build system with auto-complete support inside IDEs. ### Solution * Add gradle kotlin build files and wrapper using gradle init * Make corrections as necessary * Keep maven build in parallel until feature parity is reached ### Alternatives _No response_
process
switch to gradle problem gradle provides a faster and more succinct build pipeline than maven with the kotlin dsl it provides a type safe build system with auto complete support inside ides solution add gradle kotlin build files and wrapper using gradle init make corrections as necessary keep maven build in parallel until feature parity is reached alternatives no response
1
16,688
21,791,068,460
IssuesEvent
2022-05-14 22:53:25
lynnandtonic/nestflix.fun
https://api.github.com/repos/lynnandtonic/nestflix.fun
closed
Add Loose Limbs 5 from Evil Ed
suggested title in process
Please add as much of the following info as you can: Title: Loose Limbs 5: The Anatomy of Fear Type (film/tv show): Film Film or show in which it appears: Evil Ed Is the parent film/show streaming anywhere? https://tubitv.com/movies/475406/evil-ed About when in the parent film/show does it appear? 27:52 Actual footage of the film/show can be seen (yes/no)? Yes
1.0
Add Loose Limbs 5 from Evil Ed - Please add as much of the following info as you can: Title: Loose Limbs 5: The Anatomy of Fear Type (film/tv show): Film Film or show in which it appears: Evil Ed Is the parent film/show streaming anywhere? https://tubitv.com/movies/475406/evil-ed About when in the parent film/show does it appear? 27:52 Actual footage of the film/show can be seen (yes/no)? Yes
process
add loose limbs from evil ed please add as much of the following info as you can title loose limbs the anatomy of fear type film tv show film film or show in which it appears evil ed is the parent film show streaming anywhere about when in the parent film show does it appear actual footage of the film show can be seen yes no yes
1
11,073
9,211,251,947
IssuesEvent
2019-03-09 13:47:24
terraform-providers/terraform-provider-aws
https://api.github.com/repos/terraform-providers/terraform-provider-aws
closed
`aws_s3_bucket_object` metadata key name incorrect
service/s3
### Terraform Version Terraform v0.11.11 + provider.aws v2.0.0 + provider.null v2.1.0 ### Affected Resource(s) * aws_s3_bucket_object ### Terraform Configuration Files <!--- Information about code formatting: https://help.github.com/articles/basic-writing-and-formatting-syntax/#quoting-code ---> ```hcl provider "aws" { region = "ap-southeast-1" } data "aws_s3_bucket_object" "test" { bucket = "test-terraform-s3-bucket-object" key = "test" } resource "null_resource" "test" { provisioner "local-exec" { command = "aws s3api head-object --bucket 'test-terraform-s3-bucket-object' --key 'test' --query 'Metadata'" } } output "metadata" { value = "${data.aws_s3_bucket_object.test.metadata}" } ``` ### Expected Behavior The output of `data.aws_s3_bucket_object.test.metadata` key name should be the same as aws cli and aws S3 console (e.g. `test` ) ``` metadata = { test = Test Bucket Object } ``` ### Actual Behavior The first letter of the key name is automatically capitalized (e.g. `Test` instead of `test` ) ``` metadata = { Test = Test Bucket Object } ``` ### Steps to Reproduce <!--- Please list the steps required to reproduce the issue. ---> 1. `terraform apply`
1.0
`aws_s3_bucket_object` metadata key name incorrect - ### Terraform Version Terraform v0.11.11 + provider.aws v2.0.0 + provider.null v2.1.0 ### Affected Resource(s) * aws_s3_bucket_object ### Terraform Configuration Files <!--- Information about code formatting: https://help.github.com/articles/basic-writing-and-formatting-syntax/#quoting-code ---> ```hcl provider "aws" { region = "ap-southeast-1" } data "aws_s3_bucket_object" "test" { bucket = "test-terraform-s3-bucket-object" key = "test" } resource "null_resource" "test" { provisioner "local-exec" { command = "aws s3api head-object --bucket 'test-terraform-s3-bucket-object' --key 'test' --query 'Metadata'" } } output "metadata" { value = "${data.aws_s3_bucket_object.test.metadata}" } ``` ### Expected Behavior The output of `data.aws_s3_bucket_object.test.metadata` key name should be the same as aws cli and aws S3 console (e.g. `test` ) ``` metadata = { test = Test Bucket Object } ``` ### Actual Behavior The first letter of the key name is automatically capitalized (e.g. `Test` instead of `test` ) ``` metadata = { Test = Test Bucket Object } ``` ### Steps to Reproduce <!--- Please list the steps required to reproduce the issue. ---> 1. `terraform apply`
non_process
aws bucket object metadata key name incorrect terraform version terraform provider aws provider null affected resource s aws bucket object terraform configuration files hcl provider aws region ap southeast data aws bucket object test bucket test terraform bucket object key test resource null resource test provisioner local exec command aws head object bucket test terraform bucket object key test query metadata output metadata value data aws bucket object test metadata expected behavior the output of data aws bucket object test metadata key name should be the same as aws cli and aws console e g test metadata test test bucket object actual behavior the key name automatic capitalize the first letter e g test instead of test metadata test test bucket object steps to reproduce terraform apply
0
11,241
14,015,265,590
IssuesEvent
2020-10-29 13:06:23
tdwg/dwc
https://api.github.com/repos/tdwg/dwc
closed
Change term - https://dwc.tdwg.org/pw/#dwcpw_p012
Class - Occurrence Process - implement Term - change
## Change term * Submitter: John Wieczorek * Justification (why is this change necessary?): consistency * Proponents (who needs this change): Everyone Proposed new attributes of the term: Remove extra space before closing parenthesis in * Label: pet/aquarium/terrarium species (including live food for such species)
1.0
Change term - https://dwc.tdwg.org/pw/#dwcpw_p012 - ## Change term * Submitter: John Wieczorek * Justification (why is this change necessary?): consistency * Proponents (who needs this change): Everyone Proposed new attributes of the term: Remove extra space before closing parenthesis in * Label: pet/aquarium/terrarium species (including live food for such species)
process
change term change term submitter john wieczorek justification why is this change necessary consistency proponents who needs this change everyone proposed new attributes of the term remove extra space before closing parenthesis in label pet aquarium terrarium species including live food for such species
1
338,922
30,329,306,572
IssuesEvent
2023-07-11 04:31:09
unifyai/ivy
https://api.github.com/repos/unifyai/ivy
closed
Fix raw_ops.test_tensorflow_LessEqual
TensorFlow Frontend Sub Task Failing Test
| | | |---|---| |torch|<a href="https://github.com/unifyai/ivy/actions/runs/5515219518/jobs/10055314850"><img src=https://img.shields.io/badge/-success-success></a> |numpy|<a href="https://github.com/unifyai/ivy/actions/runs/5515219518/jobs/10055314850"><img src=https://img.shields.io/badge/-success-success></a> |jax|<a href="https://github.com/unifyai/ivy/actions/runs/5515219518/jobs/10055314850"><img src=https://img.shields.io/badge/-success-success></a> |tensorflow|<a href="https://github.com/unifyai/ivy/actions/runs/5515219518/jobs/10055314850"><img src=https://img.shields.io/badge/-success-success></a> |paddle|<a href="https://github.com/unifyai/ivy/actions/runs/5515219518/jobs/10055314850"><img src=https://img.shields.io/badge/-success-success></a>
1.0
Fix raw_ops.test_tensorflow_LessEqual - | | | |---|---| |torch|<a href="https://github.com/unifyai/ivy/actions/runs/5515219518/jobs/10055314850"><img src=https://img.shields.io/badge/-success-success></a> |numpy|<a href="https://github.com/unifyai/ivy/actions/runs/5515219518/jobs/10055314850"><img src=https://img.shields.io/badge/-success-success></a> |jax|<a href="https://github.com/unifyai/ivy/actions/runs/5515219518/jobs/10055314850"><img src=https://img.shields.io/badge/-success-success></a> |tensorflow|<a href="https://github.com/unifyai/ivy/actions/runs/5515219518/jobs/10055314850"><img src=https://img.shields.io/badge/-success-success></a> |paddle|<a href="https://github.com/unifyai/ivy/actions/runs/5515219518/jobs/10055314850"><img src=https://img.shields.io/badge/-success-success></a>
non_process
fix raw ops test tensorflow lessequal torch a href src numpy a href src jax a href src tensorflow a href src paddle a href src
0
795,140
28,063,348,868
IssuesEvent
2023-03-29 13:57:14
MichalLytek/typegraphql-prisma
https://api.github.com/repos/MichalLytek/typegraphql-prisma
closed
Pushing json array results in nested array
bug priority:high community
When you push a value into a json array you get an array added as the last element To reproduce, consider this prisma model: ``` model JsonArrayTable { id String @id @db.Uuid data Json[] } ``` The following mutation to initialize one row: ``` mutation($data: JsonArrayTableCreateInput!) { createOneJsonArrayTable(data: $data) { id } } { "data": { "id": "5409c213-f29d-449e-a838-84cd8d2c5eeb", "data": { "set": [ { "prop1": "value1" } ] } } } ``` Now push a new json object into the array: ``` mutation($data: JsonArrayTableUpdateInput!, $where: JsonArrayTableWhereUniqueInput!) { updateOneJsonArrayTable(data: $data, where: $where) { id data } } { "data": { "data": { "push": { "prop2": "value2" } } }, "where": { "id": "5409c213-f29d-449e-a838-84cd8d2c5eeb" } } ``` You'll get the following result: ``` { "data": { "updateOneJsonArrayTable": { "id": "5409c213-f29d-449e-a838-84cd8d2c5eeb", "data": [ { "prop1": "value1" }, [ { "prop2": "value2" } ] ] } } } ``` The last element should be pushed as provided, and the result should be: ``` { "data": { "updateOneJsonArrayTable": { "id": "5409c213-f29d-449e-a838-84cd8d2c5eeb", "data": [ { "prop1": "value1" }, { "prop2": "value2" } ] } } } ``` **Environment:** - Node (e.g. 16.17.0) - `typegraphql-prisma` version [0.21.5] - Prisma version [4.3.1] - TypeScript version [4.8.4]
1.0
Pushing json array results in nested array - When you push a value into a json array you get an array added as the last element To reproduce, consider this prisma model: ``` model JsonArrayTable { id String @id @db.Uuid data Json[] } ``` The following mutation to initialize one row: ``` mutation($data: JsonArrayTableCreateInput!) { createOneJsonArrayTable(data: $data) { id } } { "data": { "id": "5409c213-f29d-449e-a838-84cd8d2c5eeb", "data": { "set": [ { "prop1": "value1" } ] } } } ``` Now push a new json object into the array: ``` mutation($data: JsonArrayTableUpdateInput!, $where: JsonArrayTableWhereUniqueInput!) { updateOneJsonArrayTable(data: $data, where: $where) { id data } } { "data": { "data": { "push": { "prop2": "value2" } } }, "where": { "id": "5409c213-f29d-449e-a838-84cd8d2c5eeb" } } ``` You'll get the following result: ``` { "data": { "updateOneJsonArrayTable": { "id": "5409c213-f29d-449e-a838-84cd8d2c5eeb", "data": [ { "prop1": "value1" }, [ { "prop2": "value2" } ] ] } } } ``` The last element should be pushed as provided, and the result should be: ``` { "data": { "updateOneJsonArrayTable": { "id": "5409c213-f29d-449e-a838-84cd8d2c5eeb", "data": [ { "prop1": "value1" }, { "prop2": "value2" } ] } } } ``` **Environment:** - Node (e.g. 16.17.0) - `typegraphql-prisma` version [0.21.5] - Prisma version [4.3.1] - TypeScript version [4.8.4]
non_process
pushing json array results in nested array when you push a value into a json array you get an array added as the last element to reproduce consider this prisma model model jsonarraytable id string id db uuid data json the following mutation to initialize one row mutation data jsonarraytablecreateinput createonejsonarraytable data data id data id data set now push a new json object into the array mutation data jsonarraytableupdateinput where jsonarraytablewhereuniqueinput updateonejsonarraytable data data where where id data data data push where id you ll get the following result data updateonejsonarraytable id data the last element should be pushed as provided and the result should be data updateonejsonarraytable id data environment node e g typegraphql prisma version prisma version typescript version
0
553,327
16,369,960,995
IssuesEvent
2021-05-15 00:00:49
Roadhog360/Et-Futurum
https://api.github.com/repos/Roadhog360/Et-Futurum
closed
Deepslate not generating. Add dimension support for worldgen features.
Enhancement Priority: Low
![image](https://user-images.githubusercontent.com/51277766/113346833-32aa7800-92e9-11eb-80af-8693dbc01d8f.png) ![image](https://user-images.githubusercontent.com/51277766/113346881-448c1b00-92e9-11eb-9dee-7fab3f32ba77.png) No config changes were made to the generation. This is caused by the Lord of The Rings mod, or any others that add dimensions. Et Futurum should have dimension support. ![2021-04-01_13 51 43](https://user-images.githubusercontent.com/51277766/113347104-8917b680-92e9-11eb-9f1b-23c8da3635f6.png)
1.0
Deepslate not generating. Add dimension support for worldgen features. - ![image](https://user-images.githubusercontent.com/51277766/113346833-32aa7800-92e9-11eb-80af-8693dbc01d8f.png) ![image](https://user-images.githubusercontent.com/51277766/113346881-448c1b00-92e9-11eb-9dee-7fab3f32ba77.png) No config changes were made to the generation. This is caused by the Lord of The Rings mod, or any others that add dimensions. Et Futurum should have dimension support. ![2021-04-01_13 51 43](https://user-images.githubusercontent.com/51277766/113347104-8917b680-92e9-11eb-9f1b-23c8da3635f6.png)
non_process
deepslate not generating add dimension support for worldgen features no config changes were made to the generation this is caused by the lord of the rings mod or any others that add dimensions et futurum should have dimension support
0
19,180
25,288,687,921
IssuesEvent
2022-11-16 21:42:20
BabylonJS/Babylon.js
https://api.github.com/repos/BabylonJS/Babylon.js
closed
Allow customization of the depth sampling function in post processes.
enhancement post-process
As discussed in https://forum.babylonjs.com/t/using-a-custom-depth-texture-source-with-post-processes-rather-than-the-depthrenderer/34007/5 This feature request is about providing a mechanism to inject code into a post process shader without having to modify the shader store. It should be available through the default rendering pipeline. The circleOfConfusionPixelShader and imageProcessingFunctions are two good candidates to add extra defines to in order to unlock the forum thread. We will later be able to expand the list of supported entry points.
1.0
Allow customization of the depth sampling function in post processes. - As discussed in https://forum.babylonjs.com/t/using-a-custom-depth-texture-source-with-post-processes-rather-than-the-depthrenderer/34007/5 This feature request is about providing a mechanism to inject code into a post process shader without having to modify the shader store. It should be available through the default rendering pipeline. The circleOfConfusionPixelShader and imageProcessingFunctions are two good candidates to add extra defines to in order to unlock the forum thread. We will later be able to expand the list of supported entry points.
process
allow customization of the depth sampling function in post processes as discussed in this is feature request is about providing a mechanism to inject codes in post process shader without having to modify the shader store it should be available through the default rendering pipeline the circle circleofconfusionpixelshader and imageprocessingfunctions are two good candidates to add extra defines in to unlock the forum thread we will later be able to expand the list of supported entry points
1
14,456
17,533,232,966
IssuesEvent
2021-08-12 01:46:58
qgis/QGIS
https://api.github.com/repos/qgis/QGIS
closed
Improvements to Processing:qgis:network analysis
Feedback stale Processing Feature Request
Author Name: **Paolo Cavallini** (@pcav) Original Redmine Issue: [16011](https://issues.qgis.org/issues/16011) Redmine category:processing/qgis --- I suggest to: * display start and end point for shortest path (point to point) * send a message to the user in case something goes wrong (e.g. tolerance is too low, so no path is generated) * default speed (5) seems too low * if the projection is in degrees (e.g. EPSG:4326), and the user chooses a too high tolerance (e.g. 2 degrees), the path is apparently not computed; better warn the user about this.
1.0
Improvements to Processing:qgis:network analysis - Author Name: **Paolo Cavallini** (@pcav) Original Redmine Issue: [16011](https://issues.qgis.org/issues/16011) Redmine category:processing/qgis --- I suggest to: * display start and end point for shortest path (point to point) * send a message to the user in case something goes wrong (e.g. tolerance is too low, so no path is generated) * default speed (5) seems too low * if the projection is in degrees (e.g. EPSG:4326), and the user chooses a too high tolerance (e.g. 2 degrees), the path is apparently not computed; better warn the user about this.
process
improvements to processing qgis network analysis author name paolo cavallini pcav original redmine issue redmine category processing qgis i suggest to display start and end point for shortest path point to point send a message to the user in case something goes wrong e g tolerance is too low so no path is generated default speed seems to low if the projection is in degrees e g epsg and the user chooses a too high tolerance e g degrees the path is apparently not computed better warn the user about this
1
20,403
27,061,911,417
IssuesEvent
2023-02-13 20:29:17
cse442-at-ub/DoesRepoMakeScrumBoardLookGood
https://api.github.com/repos/cse442-at-ub/DoesRepoMakeScrumBoardLookGood
opened
Create a randomized URL generator
Processing Task
**Task Tests** *Test1* 1) Call function in `utils.php` named `getURL` with arguments...
1.0
Create a randomized URL generator - **Task Tests** *Test1* 1) Call function in `utils.php` named `getURL` with arguments...
process
create a randomized url generator task tests call function in utils php named geturl with arguments
1
10,256
13,109,377,860
IssuesEvent
2020-08-04 18:34:49
googleapis/code-suggester
https://api.github.com/repos/googleapis/code-suggester
opened
Make PR commenting more user-friendly
type: process
Update the PR commenting mechanism to minimize the number of comments generated while also minimizing the amount of unchanged text in each comment. This reduces the number of irrelevant lines shown to the user, which increases readability. In this example, the entire LICENSE file only changes the date, but the code comment block range is the entire file. The GUI displays all the changed and unchanged text given the code comment block range. ![image](https://user-images.githubusercontent.com/65685417/89331099-348b4100-d65f-11ea-9336-fb4ad1337d23.png) Ideally we'd want to reduce the unimportant text, while also minimizing the number of comments.
1.0
Make PR commenting more user-friendly - Update the PR commenting mechanism to minimize the number of comments generated while also minimizing the amount of unchanged text in each comment. This reduces the number of irrelevant lines shown to the user, which increases readability. In this example, the entire LICENSE file only changes the date, but the code comment block range is the entire file. The GUI displays all the changed and unchanged text given the code comment block range. ![image](https://user-images.githubusercontent.com/65685417/89331099-348b4100-d65f-11ea-9336-fb4ad1337d23.png) Ideally we'd want to reduce the unimportant text, while also minimizing the number of comments.
process
make pr commenting more user friendly update the pr commenting mechanism to minimize the number of comments generated while minimizing the number of unchanged text in a comment this reduces the number of irrelevant lines to the user which increases readability in this example the entire license file only changes the date but the code comment block range is the entire file the gui displays all the changed and unchanged text given the code comment block range ideally we d want to reduce the unimportant text while also minimizing the number of comments
1
577,469
17,111,618,101
IssuesEvent
2021-07-10 12:32:00
CryptoAdvised/gunz-chain
https://api.github.com/repos/CryptoAdvised/gunz-chain
opened
Implement proof-of-play block validation consensus
Top priority
The proof-of-play validation is simple. Every peer of a game must validate the others. Every time a peer gets validated, it generates a block for the chain. To validate other peers, the client must send a packet to them and get a confirmation. This mechanism in turn generates a new block for the blockchain.
1.0
Implement proof-of-play block validation consensus - The proof-of-play validation is simple. Every peer of a game must validate the others. Every time a peer gets validated, it generates a block for the chain. To validate other peers, the client must send a packet to them and get a confirmation. This mechanism in turn generates a new block for the blockchain.
non_process
implement proof of play block validation consensus the proof of play validation is simple every peer of a game must validate each others every time a peer get validated he generate a block for the chain to validate other peers the client must send a packet to other peers and get a confirmation this mechanism in term generate a new block for the blockchain
0
42,940
17,373,131,859
IssuesEvent
2021-07-30 16:36:39
MicrosoftDocs/azure-docs
https://api.github.com/repos/MicrosoftDocs/azure-docs
closed
Please update service section and include a topology for AKS with nodepool using public IP.
Pri1 assigned-to-author container-service/svc doc-enhancement triaged
Please update service section and include a topology for AKS with nodepool using public IP. [Enter feedback here] --- #### Document Details ⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.* * ID: 9615c579-305d-2482-7eaf-2d120c75e8d5 * Version Independent ID: 064f4a10-bd74-f554-9550-781dd3e838cb * Content: [Concepts - Networking in Azure Kubernetes Services (AKS) - Azure Kubernetes Service](https://docs.microsoft.com/en-us/azure/aks/concepts-network) * Content Source: [articles/aks/concepts-network.md](https://github.com/MicrosoftDocs/azure-docs/blob/master/articles/aks/concepts-network.md) * Service: **container-service** * GitHub Login: @mlearned * Microsoft Alias: **mlearned**
1.0
Please update service section and include a topology for AKS with nodepool using public IP. - Please update service section and include a topology for AKS with nodepool using public IP. [Enter feedback here] --- #### Document Details ⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.* * ID: 9615c579-305d-2482-7eaf-2d120c75e8d5 * Version Independent ID: 064f4a10-bd74-f554-9550-781dd3e838cb * Content: [Concepts - Networking in Azure Kubernetes Services (AKS) - Azure Kubernetes Service](https://docs.microsoft.com/en-us/azure/aks/concepts-network) * Content Source: [articles/aks/concepts-network.md](https://github.com/MicrosoftDocs/azure-docs/blob/master/articles/aks/concepts-network.md) * Service: **container-service** * GitHub Login: @mlearned * Microsoft Alias: **mlearned**
non_process
please update service section and include a topology for aks with nodepool using public ip please update service section and include a topology for aks with nodepool using public ip document details ⚠ do not edit this section it is required for docs microsoft com ➟ github issue linking id version independent id content content source service container service github login mlearned microsoft alias mlearned
0
14,323
17,355,869,633
IssuesEvent
2021-07-29 14:20:43
bazelbuild/bazel
https://api.github.com/repos/bazelbuild/bazel
closed
Status of Bazel 5.0.0-pre.20210722.2
P1 release team-XProduct type: process
- Expected release date: 2021-07-29 Task list: - [x] Pick release baseline: [0d94a347](https://github.com/bazelbuild/bazel/commit/0d94a3471cd845a2c401be4b3c9059eaeb8dcda0) with cherrypick [0eef1141](https://github.com/bazelbuild/bazel/commit/0eef11411d09732266f79f46bfd91bbbf2db6c44) - [x] Create release candidate: https://releases.bazel.build/5.0.0/rolling/5.0.0-pre.20210722.2rc1/index.html - [x] Post-submit: https://buildkite.com/bazel/bazel-bazel/builds/16949 - [x] Push the release: https://releases.bazel.build/5.0.0/rolling/5.0.0-pre.20210722.2/index.html - [x] Update the [release page](https://github.com/bazelbuild/bazel/releases/)
1.0
Status of Bazel 5.0.0-pre.20210722.2 - - Expected release date: 2021-07-29 Task list: - [x] Pick release baseline: [0d94a347](https://github.com/bazelbuild/bazel/commit/0d94a3471cd845a2c401be4b3c9059eaeb8dcda0) with cherrypick [0eef1141](https://github.com/bazelbuild/bazel/commit/0eef11411d09732266f79f46bfd91bbbf2db6c44) - [x] Create release candidate: https://releases.bazel.build/5.0.0/rolling/5.0.0-pre.20210722.2rc1/index.html - [x] Post-submit: https://buildkite.com/bazel/bazel-bazel/builds/16949 - [x] Push the release: https://releases.bazel.build/5.0.0/rolling/5.0.0-pre.20210722.2/index.html - [x] Update the [release page](https://github.com/bazelbuild/bazel/releases/)
process
status of bazel pre expected release date task list pick release baseline with cherrypick create release candidate post submit push the release update the
1
262,244
19,768,800,754
IssuesEvent
2022-01-17 07:42:39
kubernetes-sigs/descheduler
https://api.github.com/repos/kubernetes-sigs/descheduler
closed
Docs around autohealing are misleading
lifecycle/rotten kind/documentation
The [docs around autohealing](https://github.com/kubernetes-sigs/descheduler/blob/master/docs/user-guide.md#autoheal-node-problems) are a bit misleading in my opinion. They link off to Node Problem Detector, claiming that `Node Problem Detector can detect specific Node problems and taint any Nodes which have those problems.`. In fact, NPD doesn't do any tainting. It's the `TaintNodeByCondition` feature of the node controller that takes _some_ conditions and turns them into taints. However this only works for the default node conditions: `PIDPressure`, `MemoryPressure`, `DiskPressure`, `Ready`, and some cloud provider specific conditions. There is an [open PR](https://github.com/kubernetes/node-problem-detector/pull/565) on NPD that wants to add this tainting behaviour, but the maintainers seem to think it shouldn't be NPD that does the tainting. The effect is that the autoheal cycle described doesn't actually work, at least not for custom conditions. It would be wonderful if it did, because it's quite a compelling outcome, and it would be amazing if it were offered exclusively in terms of Kubernetes first party tooling. At the very least, we should change the wording in the docs to make it clear that NPD doesn't really participate in the autohealing. At most, I'm hoping that by raising this issue, the fact that this cycle doesn't currently work as intended can get a little more visibility. My guess at possible solutions: * Merge the PR linked above, so that NPD creates taints * Extend the node controller to add condition based taints for _all_ conditions, including custom ones * Create a new project to convert conditions into taints * Add a new strategy in this repo that allows for descheduling based on Conditions
1.0
Docs around autohealing are misleading - The [docs around autohealing](https://github.com/kubernetes-sigs/descheduler/blob/master/docs/user-guide.md#autoheal-node-problems) are a bit misleading in my opinion. They link off to Node Problem Detector, claiming that `Node Problem Detector can detect specific Node problems and taint any Nodes which have those problems.`. In fact, NPD doesn't do any tainting. It's the `TaintNodeByCondition` feature of the node controller that takes _some_ conditions and turns them in to taints. However this only works for the default node conditions: `PIDPressure`, `MemoryPressure`, `DiskPressure`, `Ready`, and some cloud provider specific conditions. There is an [open PR](https://github.com/kubernetes/node-problem-detector/pull/565) on NPD that wants to add this tainting behaviour, but the maintainers seem to think it shouldn't be NPD that does the tainting. The effect is that the autoheal cycle describe doesn't actually work, at least not for custom conditions. It would be wonderful if it did, because it's quite a compelling outcome, and it would be amazing if it were offered exclusively in terms of Kubernetes first party tooling. At the very least, we should change the wording in the docs to make it clear that NPD doesn't really participate in the autohealing. At most, I'm hoping that by raising this issue, the fact that this cycle doesn't currently work as intended can get a little more visibility. My guess at possible solutions: * Merge the PR linked above, so that NPD creates taints * Extend the node controller to add condition based taints for _all_ conditions, including custom ones * Create a new project to convert conditions in to taints * Add a new strategy in this repo that allows for descheduling based on Conditions
non_process
docs around autohealing are misleading the are a bit misleading in my opinion they link off to node problem detector claiming that node problem detector can detect specific node problems and taint any nodes which have those problems in fact npd doesn t do any tainting it s the taintnodebycondition feature of the node controller that takes some conditions and turns them in to taints however this only works for the default node conditions pidpressure memorypressure diskpressure ready and some cloud provider specific conditions there is an on npd that wants to add this tainting behaviour but the maintainers seem to think it shouldn t be npd that does the tainting the effect is that the autoheal cycle describe doesn t actually work at least not for custom conditions it would be wonderful if it did because it s quite a compelling outcome and it would be amazing if it were offered exclusively in terms of kubernetes first party tooling at the very least we should change the wording in the docs to make it clear that npd doesn t really participate in the autohealing at most i m hoping that by raising this issue the fact that this cycle doesn t currently work as intended can get a little more visibility my guess at possible solutions merge the pr linked above so that npd creates taints extend the node controller to add condition based taints for all conditions including custom ones create a new project to convert conditions in to taints add a new strategy in this repo that allows for descheduling based on conditions
0
19,878
26,294,672,661
IssuesEvent
2023-01-08 20:47:05
bitfocus/companion-module-requests
https://api.github.com/repos/bitfocus/companion-module-requests
opened
Clearcom Station IC integration
NOT YET PROCESSED
- [ ] **I have researched the list of existing Companion modules and requests and have determined this has not yet been requested** The name of the device, hardware, or software you would like to control: Clear-Com Station IC software What you would like to be able to make it do from Companion: Run compatibility with the Station IC program from Clearcom. Direct links or attachments to the ethernet control protocol or API: https://www.clearcom.com/station-ic/
1.0
Clearcom Station IC integration - - [ ] **I have researched the list of existing Companion modules and requests and have determined this has not yet been requested** The name of the device, hardware, or software you would like to control: Clear-Com Station IC software What you would like to be able to make it do from Companion: Run compatibility with the Station IC program from Clearcom. Direct links or attachments to the ethernet control protocol or API: https://www.clearcom.com/station-ic/
process
clearcom station ic integration i have researched the list of existing companion modules and requests and have determined this has not yet been requested the name of the device hardware or software you would like to control clear com station ic software what you would like to be able to make it do from companion run compatibility with the station ic program from clearcom direct links or attachments to the ethernet control protocol or api
1
19,369
25,498,502,850
IssuesEvent
2022-11-27 23:50:46
pyanodon/pybugreports
https://api.github.com/repos/pyanodon/pybugreports
closed
Arquad empty honeycomb - unobtainable
needs investigation mod:pyalienlife mod:pypostprocessing
### Mod source Factorio Mod Portal ### Which mod are you having an issue with? - [X] pyalienlife - [ ] pyalternativeenergy - [ ] pycoalprocessing - [ ] pyfusionenergy - [ ] pyhightech - [ ] pyindustry - [ ] pypetroleumhandling - [X] pypostprocessing - [ ] pyrawores ### Operating system >=Windows 10 ### What kind of issue is this? - [ ] Compatibility - [ ] Locale (names, descriptions, unknown keys) - [ ] Graphical - [ ] Crash - [X] Progression - [ ] Balance - [X] Pypostprocessing failure - [ ] Other ### What is the problem? Arquad empty honeycomb(AEHC) cannot be made with normal technology progression. There is circular dependency: Arquad empty honey comb-Arquad honey-Py Science 2-Bhoddos 1 Pypostprocessing failed to recognize Bhoddos spores mk1 (needed for first recipe to get AEHC) from ~~Filtration~~ Microfilters is Override recipe, not real Bhoddos spores unlock. Either change first recipe for AEHC from Bhoddos spores OR transfer recipe Used Comb->Wax from Arquad 2 ### Steps to reproduce _No response_ ### Additional context _No response_ ### Log file _No response_
1.0
Arquad empty honeycomb - unobtainable - ### Mod source Factorio Mod Portal ### Which mod are you having an issue with? - [X] pyalienlife - [ ] pyalternativeenergy - [ ] pycoalprocessing - [ ] pyfusionenergy - [ ] pyhightech - [ ] pyindustry - [ ] pypetroleumhandling - [X] pypostprocessing - [ ] pyrawores ### Operating system >=Windows 10 ### What kind of issue is this? - [ ] Compatibility - [ ] Locale (names, descriptions, unknown keys) - [ ] Graphical - [ ] Crash - [X] Progression - [ ] Balance - [X] Pypostprocessing failure - [ ] Other ### What is the problem? Arquad empty honeycomb(AEHC) cannot be made with normal technology progression. There is circular dependency: Arquad empty honey comb-Arquad honey-Py Science 2-Bhoddos 1 Pypostprocessing failed to recognize Bhoddos spores mk1 (needed for first recipe to get AEHC) from ~~Filtration~~ Microfilters is Override recipe, not real Bhoddos spores unlock. Either change first recipe for AEHC from Bhoddos spores OR transfer recipe Used Comb->Wax from Arquad 2 ### Steps to reproduce _No response_ ### Additional context _No response_ ### Log file _No response_
process
arquad empty honeycomb unobtainable mod source factorio mod portal which mod are you having an issue with pyalienlife pyalternativeenergy pycoalprocessing pyfusionenergy pyhightech pyindustry pypetroleumhandling pypostprocessing pyrawores operating system windows what kind of issue is this compatibility locale names descriptions unknown keys graphical crash progression balance pypostprocessing failure other what is the problem arquad empty honeycomb aehc cannot be made with normal technology progression there is circular dependency arquad empty honey comb arquad honey py science bhoddos pypostprocessing failed to recognize bhoddos spores needed for first recipe to get aehc from filtration microfilters is override recipe not real bhoddos spores unlock either change first recipe for aehc from bhoddos spores or transfer recipe used comb wax from arquad steps to reproduce no response additional context no response log file no response
1
73,206
14,009,221,696
IssuesEvent
2020-10-29 01:47:52
GTNewHorizons/GT-New-Horizons-Modpack
https://api.github.com/repos/GTNewHorizons/GT-New-Horizons-Modpack
closed
Nano circuit requieres weird soldering alloy amount
Type: Need Code changes Type: recipe change
#### Which modpack version are you using? GTNH 2.0.9.0QF4 # It requiers 3670L of soldering alloy wich is 26,1111 ingots, that's weird. Make it 26 or 27 or 32 ingots, an integer amount of ingots please. ![image](https://user-images.githubusercontent.com/57050655/89053350-5c289380-d357-11ea-8ce6-060112856fe4.png)
1.0
Nano circuit requieres weird soldering alloy amount - #### Which modpack version are you using? GTNH 2.0.9.0QF4 # It requiers 3670L of soldering alloy wich is 26,1111 ingots, that's weird. Make it 26 or 27 or 32 ingots, an integer amount of ingots please. ![image](https://user-images.githubusercontent.com/57050655/89053350-5c289380-d357-11ea-8ce6-060112856fe4.png)
non_process
nano circuit requieres weird soldering alloy amount which modpack version are you using gtnh it requiers of soldering alloy wich is ingots that s weird make it or or ingots an integer amount of ingots please
0
125,877
10,367,909,161
IssuesEvent
2019-09-07 12:41:08
emscripten-core/emscripten
https://api.github.com/repos/emscripten-core/emscripten
closed
Failed for test/hello_world_sdl.cpp
tests wontfix
MBA-Anton:emscripten asmirnov$ EMCC_FAST_COMPILER=0 ./emcc tests/hello_world_sdl.cpp -o hello_sdl.html clang: warning: argument unused during compilation: '-nostdinc++' Traceback (most recent call last): File "./emcc", line 1514, in <module> extra_files_to_link = system_libs.calculate([f for _, f in sorted(temp_files)], in_temp, stdout, stderr) File "/Users/asmirnov/Documents/dev/src/emscripten/tools/system_libs.py", line 477, in calculate libfile = shared.Cache.get(name, create) File "/Users/asmirnov/Documents/dev/src/emscripten/tools/cache.py", line 36, in get shutil.copyfile(creator(), cachename) File "/Users/asmirnov/Documents/dev/src/emscripten/tools/system_libs.py", line 110, in create_libc return build_libc('libc.bc', libc_files) File "/Users/asmirnov/Documents/dev/src/emscripten/tools/system_libs.py", line 37, in build_libc shared.Building.link(o_s, in_temp(lib_filename)) File "/Users/asmirnov/Documents/dev/src/emscripten/tools/shared.py", line 1253, in link if Building.is_bitcode(f): File "/Users/asmirnov/Documents/dev/src/emscripten/tools/shared.py", line 1616, in is_bitcode b = open(filename, 'r').read(4) IOError: [Errno 2] No such file or directory: '/tmp/tmpw3x94X/fwrite.c.o' MBA-Anton:emscripten asmirnov$
1.0
Failed for test/hello_world_sdl.cpp - MBA-Anton:emscripten asmirnov$ EMCC_FAST_COMPILER=0 ./emcc tests/hello_world_sdl.cpp -o hello_sdl.html clang: warning: argument unused during compilation: '-nostdinc++' Traceback (most recent call last): File "./emcc", line 1514, in <module> extra_files_to_link = system_libs.calculate([f for _, f in sorted(temp_files)], in_temp, stdout, stderr) File "/Users/asmirnov/Documents/dev/src/emscripten/tools/system_libs.py", line 477, in calculate libfile = shared.Cache.get(name, create) File "/Users/asmirnov/Documents/dev/src/emscripten/tools/cache.py", line 36, in get shutil.copyfile(creator(), cachename) File "/Users/asmirnov/Documents/dev/src/emscripten/tools/system_libs.py", line 110, in create_libc return build_libc('libc.bc', libc_files) File "/Users/asmirnov/Documents/dev/src/emscripten/tools/system_libs.py", line 37, in build_libc shared.Building.link(o_s, in_temp(lib_filename)) File "/Users/asmirnov/Documents/dev/src/emscripten/tools/shared.py", line 1253, in link if Building.is_bitcode(f): File "/Users/asmirnov/Documents/dev/src/emscripten/tools/shared.py", line 1616, in is_bitcode b = open(filename, 'r').read(4) IOError: [Errno 2] No such file or directory: '/tmp/tmpw3x94X/fwrite.c.o' MBA-Anton:emscripten asmirnov$
non_process
failed for test hello world sdl cpp mba anton emscripten asmirnov emcc fast compiler emcc tests hello world sdl cpp o hello sdl html clang warning argument unused during compilation nostdinc traceback most recent call last file emcc line in extra files to link system libs calculate in temp stdout stderr file users asmirnov documents dev src emscripten tools system libs py line in calculate libfile shared cache get name create file users asmirnov documents dev src emscripten tools cache py line in get shutil copyfile creator cachename file users asmirnov documents dev src emscripten tools system libs py line in create libc return build libc libc bc libc files file users asmirnov documents dev src emscripten tools system libs py line in build libc shared building link o s in temp lib filename file users asmirnov documents dev src emscripten tools shared py line in link if building is bitcode f file users asmirnov documents dev src emscripten tools shared py line in is bitcode b open filename r read ioerror no such file or directory tmp fwrite c o mba anton emscripten asmirnov
0
4,328
4,996,215,937
IssuesEvent
2016-12-09 13:05:48
TechnionYP5777/SmartCity-Market
https://api.github.com/repos/TechnionYP5777/SmartCity-Market
closed
Create the first structure modules for the project
FirstPriority Infrastructure
#45 Should contain the following modules: Server Worker/Manager Cart Common As mentioned in the WIKI.
1.0
Create the first structure modules for the project - #45 Should contain the following modules: Server Worker/Manager Cart Common As mentioned in the WIKI.
non_process
create the first structure modules for the project should contain the following modules server worker manager cart common as mentioned in the wiki
0
70,047
15,049,646,189
IssuesEvent
2021-02-03 11:47:12
Snowfork/polkadot-ethereum
https://api.github.com/repos/Snowfork/polkadot-ethereum
opened
Triage all TODOs in codebase
Security
Triage all TODOs in codebase and either resolve or move to github issues backlog
True
Triage all TODOs in codebase - Triage all TODOs in codebase and either resolve or move to github issues backlog
non_process
triage all todos in codebase triage all todos in codebase and either resolve or move to github issues backlog
0
67,079
14,851,125,937
IssuesEvent
2021-01-18 06:15:48
turnkeylinux/tracker
https://api.github.com/repos/turnkeylinux/tracker
closed
Etherpad - Plain Text passwords
bug etherpad security upstream
[update by @jedmeister] Due to #1132 - Etherpad will be deprecated and not released as part of v15.0 :cry: As such, this issue is being closed and marked "won't fix". [updated update] **_Actually..._** I just posted an [update](https://github.com/turnkeylinux/tracker/issues/1132#issuecomment-409771622) ---- somewhat related to #813 As of https://github.com/turnkeylinux-apps/etherpad/pull/10 part of this issue has been resolved (don't reveal settings file to admins). However, I think that we should consider adding the "no clear text passwords" plugin for v15.0 release of etherpad. @jedmeister ---- Our etherpad appliance runs off nodejs app etherpad-lite (https://github.com/ether/etherpad-lite) which stores the administrator password in **plaintext**. Moreover the administrative users have read/write access to the `settings.json` file from within the admin interface, the same file which stores the plaintext passwords, allowing any authenticated administrator to read any user's password from any location. It also appears that there is or seems to be no way to logout of the admin user. Upstream seems to consider this concerning security risk as non-core functionality and has closed the related issue in preferance of a third party plugin for hashing password, see https://github.com/ether/etherpad-lite/issues/1650 To ensure our users don't enable admin without understanding the security implications, we've decided to disable easy setting of admin password. And document the risks of enabling it.
True
Etherpad - Plain Text passwords - [update by @jedmeister] Due to #1132 - Etherpad will be deprecated and not released as part of v15.0 :cry: As such, this issue is being closed and marked "won't fix". [updated update] **_Actually..._** I just posted an [update](https://github.com/turnkeylinux/tracker/issues/1132#issuecomment-409771622) ---- somewhat related to #813 As of https://github.com/turnkeylinux-apps/etherpad/pull/10 part of this issue has been resolved (don't reveal settings file to admins). However, I think that we should consider adding the "no clear text passwords" plugin for v15.0 release of etherpad. @jedmeister ---- Our etherpad appliance runs off nodejs app etherpad-lite (https://github.com/ether/etherpad-lite) which stores the administrator password in **plaintext**. Moreover the administrative users have read/write access to the `settings.json` file from within the admin interface, the same file which stores the plaintext passwords, allowing any authenticated administrator to read any user's password from any location. It also appears that there is or seems to be no way to logout of the admin user. Upstream seems to consider this concerning security risk as non-core functionality and has closed the related issue in preferance of a third party plugin for hashing password, see https://github.com/ether/etherpad-lite/issues/1650 To ensure our users don't enable admin without understanding the security implications, we've decided to disable easy setting of admin password. And document the risks of enabling it.
non_process
etherpad plain text passwords due to etherpad will be deprecated and not released as part of cry as such this issue is being closed and marked won t fix actually i just posted an somewhat related to as of part of this issue has been resolved don t reveal settings file to admins however i think that we should consider adding the no clear text passwords plugin for release of etherpad jedmeister our etherpad appliance runs off nodejs app etherpad lite which stores the administrator password in plaintext moreover the administrative users have read write access to the settings json file from within the admin interface the same file which stores the plaintext passwords allowing any authenticated administrator to read any user s password from any location it also appears that there is or seems to be no way to logout of the admin user upstream seems to consider this concerning security risk as non core functionality and has closed the related issue in preferance of a third party plugin for hashing password see to ensure our users don t enable admin without understanding the security implications we ve decided to disable easy setting of admin password and document the risks of enabling it
0
164,335
6,224,449,009
IssuesEvent
2017-07-10 14:17:00
foundersandcoders/master-reference
https://api.github.com/repos/foundersandcoders/master-reference
closed
Week 3 - Update learning outcomes
priority-2 week-API
+ How to implement a parallel functions. + Introduction to the concept of and importance of software architecture: file structure planning, white-boarding an app, IIFEs and modular architecture
1.0
Week 3 - Update learning outcomes - + How to implement a parallel functions. + Introduction to the concept of and importance of software architecture: file structure planning, white-boarding an app, IIFEs and modular architecture
non_process
week update learning outcomes how to implement a parallel functions introduction to the concept of and importance of software architecture file structure planning white boarding an app iifes and modular architecture
0
88,535
11,101,606,949
IssuesEvent
2019-12-16 21:48:59
archesproject/arches
https://api.github.com/repos/archesproject/arches
opened
Grouping card generates errors when model node is removed
Audience: Administrator Subject: Card Designer Subject: Graph Manager Type: Bug
**Describe the bug** If a the node that generates a card that participates in a grouping card is removed, or somehow else the card ends up as undefined by its id, then the card manager / editor javascript fails as card is set to `undefined`. This, per se, is not a bug (don't do that then) but the inability to then correct it as the card tab of the graph manager fails, is. **To Reproduce** * Create a model with a grouping card * Add other cards to it * remove one of the nodes that participate in the grouping card
1.0
Grouping card generates errors when model node is removed - **Describe the bug** If a the node that generates a card that participates in a grouping card is removed, or somehow else the card ends up as undefined by its id, then the card manager / editor javascript fails as card is set to `undefined`. This, per se, is not a bug (don't do that then) but the inability to then correct it as the card tab of the graph manager fails, is. **To Reproduce** * Create a model with a grouping card * Add other cards to it * remove one of the nodes that participate in the grouping card
non_process
grouping card generates errors when model node is removed describe the bug if a the node that generates a card that participates in a grouping card is removed or somehow else the card ends up as undefined by its id then the card manager editor javascript fails as card is set to undefined this per se is not a bug don t do that then but the inability to then correct it as the card tab of the graph manager fails is to reproduce create a model with a grouping card add other cards to it remove one of the nodes that participate in the grouping card
0
471,423
13,566,443,500
IssuesEvent
2020-09-18 13:18:07
magento/magento2
https://api.github.com/repos/magento/magento2
opened
[Issue] [MFTF] deprecated GoToAttributeGridPageActionGroup
Component: Catalog Component: CatalogSearch Priority: P3 Severity: S3
This issue is automatically created based on existing pull request: magento/magento2#30081: [MFTF] deprecated GoToAttributeGridPageActionGroup --------- This PR deprecated `GoToAttributeGridPageActionGroup`
1.0
[Issue] [MFTF] deprecated GoToAttributeGridPageActionGroup - This issue is automatically created based on existing pull request: magento/magento2#30081: [MFTF] deprecated GoToAttributeGridPageActionGroup --------- This PR deprecated `GoToAttributeGridPageActionGroup`
non_process
deprecated gotoattributegridpageactiongroup this issue is automatically created based on existing pull request magento deprecated gotoattributegridpageactiongroup this pr deprecated gotoattributegridpageactiongroup
0
1,408
2,998,394,777
IssuesEvent
2015-07-23 13:55:30
dotCMS/core
https://api.github.com/repos/dotCMS/core
closed
Improve System Host's lookup in DB
Cat : Performance in progress Type : enhancement
Current methods of searching the System Host are hardcoding boolean values instead of calling DbConnectionFactory.getDBTrue/False methods. We can also replace those SQLQueryFactory calls to DotContent in HostAPIImpl class when it's searching for the System Host in DB.
True
Improve System Host's lookup in DB - Current methods of searching the System Host are hardcoding boolean values instead of calling DbConnectionFactory.getDBTrue/False methods. We can also replace those SQLQueryFactory calls to DotContent in HostAPIImpl class when it's searching for the System Host in DB.
non_process
improve system host s lookup in db current methods of searching the system host are hardcoding boolean values instead of calling dbconnectionfactory getdbtrue false methods we can also replace those sqlqueryfactory calls to dotcontent in hostapiimpl class when it s searching for the system host in db
0
289,369
31,932,917,550
IssuesEvent
2023-09-19 08:38:15
Trinadh465/linux-4.1.15_CVE-2023-4128
https://api.github.com/repos/Trinadh465/linux-4.1.15_CVE-2023-4128
opened
CVE-2021-42008 (High) detected in linux-stable-rtv4.1.33
Mend: dependency security vulnerability
## CVE-2021-42008 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linux-stable-rtv4.1.33</b></p></summary> <p> <p>Julia Cartwright's fork of linux-stable-rt.git</p> <p>Library home page: <a href=https://git.kernel.org/pub/scm/linux/kernel/git/julia/linux-stable-rt.git>https://git.kernel.org/pub/scm/linux/kernel/git/julia/linux-stable-rt.git</a></p> <p>Found in HEAD commit: <a href="https://github.com/Trinadh465/linux-4.1.15_CVE-2023-4128/commit/0c6c8d8c809f697cd5fc581c6c08e9ad646c55a8">0c6c8d8c809f697cd5fc581c6c08e9ad646c55a8</a></p> <p>Found in base branch: <b>master</b></p></p> </details> </p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (2)</summary> <p></p> <p> <img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/drivers/net/hamradio/6pack.c</b> <img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/drivers/net/hamradio/6pack.c</b> </p> </details> <p></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png?' width=19 height=20> Vulnerability Details</summary> <p> The decode_data function in drivers/net/hamradio/6pack.c in the Linux kernel before 5.13.13 has a slab out-of-bounds write. Input from a process that has the CAP_NET_ADMIN capability can lead to root access. 
<p>Publish Date: 2021-10-05 <p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2021-42008>CVE-2021-42008</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.8</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Local - Attack Complexity: Low - Privileges Required: Low - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: High - Integrity Impact: High - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://www.linuxkernelcves.com/cves/CVE-2021-42008">https://www.linuxkernelcves.com/cves/CVE-2021-42008</a></p> <p>Release Date: 2021-10-05</p> <p>Fix Resolution: v4.4.282,v4.9.281,v4.14.245,v4.19.205,v5.4.143,v5.10.61,v5.13.13,v5.14-rc7</p> </p> </details> <p></p> *** Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
True
CVE-2021-42008 (High) detected in linux-stable-rtv4.1.33 - ## CVE-2021-42008 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linux-stable-rtv4.1.33</b></p></summary> <p> <p>Julia Cartwright's fork of linux-stable-rt.git</p> <p>Library home page: <a href=https://git.kernel.org/pub/scm/linux/kernel/git/julia/linux-stable-rt.git>https://git.kernel.org/pub/scm/linux/kernel/git/julia/linux-stable-rt.git</a></p> <p>Found in HEAD commit: <a href="https://github.com/Trinadh465/linux-4.1.15_CVE-2023-4128/commit/0c6c8d8c809f697cd5fc581c6c08e9ad646c55a8">0c6c8d8c809f697cd5fc581c6c08e9ad646c55a8</a></p> <p>Found in base branch: <b>master</b></p></p> </details> </p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (2)</summary> <p></p> <p> <img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/drivers/net/hamradio/6pack.c</b> <img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/drivers/net/hamradio/6pack.c</b> </p> </details> <p></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png?' width=19 height=20> Vulnerability Details</summary> <p> The decode_data function in drivers/net/hamradio/6pack.c in the Linux kernel before 5.13.13 has a slab out-of-bounds write. Input from a process that has the CAP_NET_ADMIN capability can lead to root access. 
<p>Publish Date: 2021-10-05 <p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2021-42008>CVE-2021-42008</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.8</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Local - Attack Complexity: Low - Privileges Required: Low - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: High - Integrity Impact: High - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://www.linuxkernelcves.com/cves/CVE-2021-42008">https://www.linuxkernelcves.com/cves/CVE-2021-42008</a></p> <p>Release Date: 2021-10-05</p> <p>Fix Resolution: v4.4.282,v4.9.281,v4.14.245,v4.19.205,v5.4.143,v5.10.61,v5.13.13,v5.14-rc7</p> </p> </details> <p></p> *** Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
non_process
cve high detected in linux stable cve high severity vulnerability vulnerable library linux stable julia cartwright s fork of linux stable rt git library home page a href found in head commit a href found in base branch master vulnerable source files drivers net hamradio c drivers net hamradio c vulnerability details the decode data function in drivers net hamradio c in the linux kernel before has a slab out of bounds write input from a process that has the cap net admin capability can lead to root access publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity low privileges required low user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with mend
0
10,944
13,754,705,497
IssuesEvent
2020-10-06 17:20:08
dotnet/runtime
https://api.github.com/repos/dotnet/runtime
closed
System.Diagnostics.Tests.ProcessWaitingTests.SingleProcess_WaitAfterExited hangs in CI
area-System.Diagnostics.Process blocking-clean-ci runtime-mono
Configuration: `net5.0-OSX-Release-x64-Mono_release-OSX.1014.Amd64.Open` https://helix.dot.net/api/2019-06-17/jobs/6fafb787-3aa8-4715-a1f4-2573e59b0da3/workitems/System.Diagnostics.Process.Tests/files/console.af56a1ee.log https://dnceng.visualstudio.com/public/_build/results?buildId=720820&view=ms.vss-test-web.build-test-results-tab&runId=22326810&paneView=debug ``` Discovering: System.Diagnostics.Process.Tests (method display = ClassAndMethod, method display options = None) Discovered: System.Diagnostics.Process.Tests (found 220 of 290 test cases) Starting: System.Diagnostics.Process.Tests (parallel test collections = on, max threads = 12) System.Diagnostics.Tests.ProcessStartInfoTests.ShellExecute_Nano_Fails_Start [SKIP] Condition(s) not met: "IsWindowsNanoServer" Darwin System.Diagnostics.Tests.ProcessTests.TestProcessRecycledPid [SKIP] Condition(s) not met: "IsStressModeEnabledAndRemoteExecutorSupported" System.Diagnostics.Tests.ProcessTests.CanBeFinalized [SKIP] Condition(s) not met: "IsPreciseGcSupported" System.Diagnostics.Process.Tests: [Long Running Test] 'System.Diagnostics.Tests.ProcessWaitingTests.SingleProcess_WaitAfterExited', Elapsed: 00:02:10 System.Diagnostics.Process.Tests: [Long Running Test] 'System.Diagnostics.Tests.ProcessWaitingTests.SingleProcess_WaitAfterExited', Elapsed: 00:04:11 System.Diagnostics.Process.Tests: [Long Running Test] 'System.Diagnostics.Tests.ProcessWaitingTests.SingleProcess_WaitAfterExited', Elapsed: 00:06:12 System.Diagnostics.Process.Tests: [Long Running Test] 'System.Diagnostics.Tests.ProcessWaitingTests.SingleProcess_WaitAfterExited', Elapsed: 00:08:13 System.Diagnostics.Process.Tests: [Long Running Test] 'System.Diagnostics.Tests.ProcessWaitingTests.SingleProcess_WaitAfterExited', Elapsed: 00:10:14 System.Diagnostics.Process.Tests: [Long Running Test] 'System.Diagnostics.Tests.ProcessWaitingTests.SingleProcess_WaitAfterExited', Elapsed: 00:12:15 System.Diagnostics.Process.Tests: [Long Running Test] 'System.Diagnostics.Tests.ProcessWaitingTests.SingleProcess_WaitAfterExited', Elapsed: 00:14:15 ```
1.0
System.Diagnostics.Tests.ProcessWaitingTests.SingleProcess_WaitAfterExited hangs in CI - Configuration: `net5.0-OSX-Release-x64-Mono_release-OSX.1014.Amd64.Open` https://helix.dot.net/api/2019-06-17/jobs/6fafb787-3aa8-4715-a1f4-2573e59b0da3/workitems/System.Diagnostics.Process.Tests/files/console.af56a1ee.log https://dnceng.visualstudio.com/public/_build/results?buildId=720820&view=ms.vss-test-web.build-test-results-tab&runId=22326810&paneView=debug ``` Discovering: System.Diagnostics.Process.Tests (method display = ClassAndMethod, method display options = None) Discovered: System.Diagnostics.Process.Tests (found 220 of 290 test cases) Starting: System.Diagnostics.Process.Tests (parallel test collections = on, max threads = 12) System.Diagnostics.Tests.ProcessStartInfoTests.ShellExecute_Nano_Fails_Start [SKIP] Condition(s) not met: "IsWindowsNanoServer" Darwin System.Diagnostics.Tests.ProcessTests.TestProcessRecycledPid [SKIP] Condition(s) not met: "IsStressModeEnabledAndRemoteExecutorSupported" System.Diagnostics.Tests.ProcessTests.CanBeFinalized [SKIP] Condition(s) not met: "IsPreciseGcSupported" System.Diagnostics.Process.Tests: [Long Running Test] 'System.Diagnostics.Tests.ProcessWaitingTests.SingleProcess_WaitAfterExited', Elapsed: 00:02:10 System.Diagnostics.Process.Tests: [Long Running Test] 'System.Diagnostics.Tests.ProcessWaitingTests.SingleProcess_WaitAfterExited', Elapsed: 00:04:11 System.Diagnostics.Process.Tests: [Long Running Test] 'System.Diagnostics.Tests.ProcessWaitingTests.SingleProcess_WaitAfterExited', Elapsed: 00:06:12 System.Diagnostics.Process.Tests: [Long Running Test] 'System.Diagnostics.Tests.ProcessWaitingTests.SingleProcess_WaitAfterExited', Elapsed: 00:08:13 System.Diagnostics.Process.Tests: [Long Running Test] 'System.Diagnostics.Tests.ProcessWaitingTests.SingleProcess_WaitAfterExited', Elapsed: 00:10:14 System.Diagnostics.Process.Tests: [Long Running Test] 'System.Diagnostics.Tests.ProcessWaitingTests.SingleProcess_WaitAfterExited', Elapsed: 00:12:15 System.Diagnostics.Process.Tests: [Long Running Test] 'System.Diagnostics.Tests.ProcessWaitingTests.SingleProcess_WaitAfterExited', Elapsed: 00:14:15 ```
process
system diagnostics tests processwaitingtests singleprocess waitafterexited hangs in ci configuration osx release mono release osx open discovering system diagnostics process tests method display classandmethod method display options none discovered system diagnostics process tests found of test cases starting system diagnostics process tests parallel test collections on max threads system diagnostics tests processstartinfotests shellexecute nano fails start condition s not met iswindowsnanoserver darwin system diagnostics tests processtests testprocessrecycledpid condition s not met isstressmodeenabledandremoteexecutorsupported system diagnostics tests processtests canbefinalized condition s not met isprecisegcsupported system diagnostics process tests system diagnostics tests processwaitingtests singleprocess waitafterexited elapsed system diagnostics process tests system diagnostics tests processwaitingtests singleprocess waitafterexited elapsed system diagnostics process tests system diagnostics tests processwaitingtests singleprocess waitafterexited elapsed system diagnostics process tests system diagnostics tests processwaitingtests singleprocess waitafterexited elapsed system diagnostics process tests system diagnostics tests processwaitingtests singleprocess waitafterexited elapsed system diagnostics process tests system diagnostics tests processwaitingtests singleprocess waitafterexited elapsed system diagnostics process tests system diagnostics tests processwaitingtests singleprocess waitafterexited elapsed
1
279,540
8,670,466,758
IssuesEvent
2018-11-29 16:34:10
immidb/idb
https://api.github.com/repos/immidb/idb
closed
Create USCIS App Page (non-modal) for editing
0Feature FA:USCIS PriorityReview SZ:Days User:BAS
Move the USCIS App modal to a page (non-modal). This will prepare to allow for expanding Form usage iin an app.
1.0
Create USCIS App Page (non-modal) for editing - Move the USCIS App modal to a page (non-modal). This will prepare to allow for expanding Form usage iin an app.
non_process
create uscis app page non modal for editing move the uscis app modal to a page non modal this will prepare to allow for expanding form usage iin an app
0
98,828
8,685,475,113
IssuesEvent
2018-12-03 07:53:06
humera987/FXLabs-Test-Automation
https://api.github.com/repos/humera987/FXLabs-Test-Automation
reopened
FX Testing 3 : ApiV1ProjectsIdAutoSuggestionsStatusGetPathParamIdNullValue
FX Testing 3
Project : FX Testing 3 Job : UAT Env : UAT Region : US_WEST Result : fail Status Code : 500 Headers : {} Endpoint : http://13.56.210.25/api/v1/api/v1/projects/null/auto-suggestions/{status} Request : Response : Not enough variable values available to expand 'status' Logs : Assertion [@StatusCode != 401] resolved-to [500 != 401] result [Passed]Assertion [@StatusCode != 500] resolved-to [500 != 500] result [Failed]Assertion [@StatusCode != 200] resolved-to [500 != 200] result [Passed]Assertion [@StatusCode != 404] resolved-to [500 != 404] result [Passed] --- FX Bot ---
1.0
FX Testing 3 : ApiV1ProjectsIdAutoSuggestionsStatusGetPathParamIdNullValue - Project : FX Testing 3 Job : UAT Env : UAT Region : US_WEST Result : fail Status Code : 500 Headers : {} Endpoint : http://13.56.210.25/api/v1/api/v1/projects/null/auto-suggestions/{status} Request : Response : Not enough variable values available to expand 'status' Logs : Assertion [@StatusCode != 401] resolved-to [500 != 401] result [Passed]Assertion [@StatusCode != 500] resolved-to [500 != 500] result [Failed]Assertion [@StatusCode != 200] resolved-to [500 != 200] result [Passed]Assertion [@StatusCode != 404] resolved-to [500 != 404] result [Passed] --- FX Bot ---
non_process
fx testing project fx testing job uat env uat region us west result fail status code headers endpoint request response not enough variable values available to expand status logs assertion resolved to result assertion resolved to result assertion resolved to result assertion resolved to result fx bot
0
302,616
9,284,578,410
IssuesEvent
2019-03-21 02:22:43
OpenBazaar/openbazaar-go
https://api.github.com/repos/OpenBazaar/openbazaar-go
closed
Meta File Gets Corrupted
❗low priority
Sometimes when updating or installing a node, the meta file becomes corrupted. Once this happens the node is unusable. "leveldb/storage: corrupted or incomplete meta file" I've experienced this a number of times when testing, we have a user reporting it on Reddit now: https://www.reddit.com/r/OpenBazaar/comments/8g2ao4/openbazaar_20_megathread_post_bugs/e4m67io/
1.0
Meta File Gets Corrupted - Sometimes when updating or installing a node, the meta file becomes corrupted. Once this happens the node is unusable. "leveldb/storage: corrupted or incomplete meta file" I've experienced this a number of times when testing, we have a user reporting it on Reddit now: https://www.reddit.com/r/OpenBazaar/comments/8g2ao4/openbazaar_20_megathread_post_bugs/e4m67io/
non_process
meta file gets corrupted sometimes when updating or installing a node the meta file becomes corrupted once this happens the node is unusable leveldb storage corrupted or incomplete meta file i ve experienced this a number of times when testing we have a user reporting it on reddit now
0
16,619
21,678,126,798
IssuesEvent
2022-05-09 01:23:13
lynnandtonic/nestflix.fun
https://api.github.com/repos/lynnandtonic/nestflix.fun
closed
Add The Wondrous World of Whimsical Willy
suggested title in process
Please add as much of the following info as you can: Title: The Wondrous World of Whimsical Willy Type (film/tv show): tv show Film or show in which it appears: The Powerpuff Girls Is the parent film/show streaming anywhere? yes About when in the parent film/show does it appear? Neighbor Hood Actual footage of the film/show can be seen (yes/no)? yes ![The_Wonderful_Whimsical_World_of_Whimsical_Willy_Theme_Song](https://user-images.githubusercontent.com/88994668/129814684-4e3e9f83-3697-4d80-8541-e233ec528088.png) ![Whimsical_Willy](https://user-images.githubusercontent.com/88994668/129814727-c6804a2d-7e82-4077-adfc-008cb0c93673.png) https://powerpuffgirls.fandom.com/wiki/The_Wondrous_World_of_Whimsical_Willy
1.0
Add The Wondrous World of Whimsical Willy - Please add as much of the following info as you can: Title: The Wondrous World of Whimsical Willy Type (film/tv show): tv show Film or show in which it appears: The Powerpuff Girls Is the parent film/show streaming anywhere? yes About when in the parent film/show does it appear? Neighbor Hood Actual footage of the film/show can be seen (yes/no)? yes ![The_Wonderful_Whimsical_World_of_Whimsical_Willy_Theme_Song](https://user-images.githubusercontent.com/88994668/129814684-4e3e9f83-3697-4d80-8541-e233ec528088.png) ![Whimsical_Willy](https://user-images.githubusercontent.com/88994668/129814727-c6804a2d-7e82-4077-adfc-008cb0c93673.png) https://powerpuffgirls.fandom.com/wiki/The_Wondrous_World_of_Whimsical_Willy
process
add the wondrous world of whimsical willy please add as much of the following info as you can title the wondrous world of whimsical willy type film tv show tv show film or show in which it appears the powerpuff girls is the parent film show streaming anywhere yes about when in the parent film show does it appear neighbor hood actual footage of the film show can be seen yes no yes
1
4,696
3,876,463,294
IssuesEvent
2016-04-12 08:02:31
lionheart/openradar-mirror
https://api.github.com/repos/lionheart/openradar-mirror
opened
21793454: No API to access personal hotspot section in WiFi dropdown
classification:ui/usability reproducible:always status:open
#### Description Summary: The personal hotspot section in the WiFI dropdown shows a network label, signal strength and battery status for nearby personal hotspots. This appears to be private / undocumented API. At yourkarma.com we would like to offer a consistent user experience by showing the same information for our hotspots in the WiFi dropdown. Using the NEHotspotHelper APIs we can make the experience just as seamless as it is now with tethering. Steps to Reproduce: n.a. Expected Results: n.a. Actual Results: n.a. - Product Version: 10.10.4 Created: 2015-07-13T15:20:35.594120 Originated: 2015-07-13T11:20:00 Open Radar Link: http://www.openradar.me/21793454
True
21793454: No API to access personal hotspot section in WiFi dropdown - #### Description Summary: The personal hotspot section in the WiFI dropdown shows a network label, signal strength and battery status for nearby personal hotspots. This appears to be private / undocumented API. At yourkarma.com we would like to offer a consistent user experience by showing the same information for our hotspots in the WiFi dropdown. Using the NEHotspotHelper APIs we can make the experience just as seamless as it is now with tethering. Steps to Reproduce: n.a. Expected Results: n.a. Actual Results: n.a. - Product Version: 10.10.4 Created: 2015-07-13T15:20:35.594120 Originated: 2015-07-13T11:20:00 Open Radar Link: http://www.openradar.me/21793454
non_process
no api to access personal hotspot section in wifi dropdown description summary the personal hotspot section in the wifi dropdown shows a network label signal strength and battery status for nearby personal hotspots this appears to be private undocumented api at yourkarma com we would like to offer a consistent user experience by showing the same information for our hotspots in the wifi dropdown using the nehotspothelper apis we can make the experience just as seamless as it is now with tethering steps to reproduce n a expected results n a actual results n a product version created originated open radar link
0
241,503
18,460,212,680
IssuesEvent
2021-10-15 23:24:15
NCAR/DART
https://api.github.com/repos/NCAR/DART
closed
readme.md, quickstart section needs update
Documentation
the very first part of the 'quickstart for the impatient' section needs to be updated, i think. for 99% of our downloaders we should tell them to simply clone dart, not fork it. their fork will quickly get out of date and it's more repositories to complicate their workflow. for collaborators who are going to develop code with us, they can choose to work from a fork if they want. but these shouldn't be the default instructions. also, we shouldn't have the "large files" listed there for anything older than lanai, and maybe not even lanai. there could be a link to 'older files' if someone wants a record of how to get "historical documents". this section is supposed to be the quickest way to get started with dart. it should have the minimal info and be the simplest way to get something running.
1.0
readme.md, quickstart section needs update - the very first part of the 'quickstart for the impatient' section needs to be updated, i think. for 99% of our downloaders we should tell them to simply clone dart, not fork it. their fork will quickly get out of date and it's more repositories to complicate their workflow. for collaborators who are going to develop code with us, they can choose to work from a fork if they want. but these shouldn't be the default instructions. also, we shouldn't have the "large files" listed there for anything older than lanai, and maybe not even lanai. there could be a link to 'older files' if someone wants a record of how to get "historical documents". this section is supposed to be the quickest way to get started with dart. it should have the minimal info and be the simplest way to get something running.
non_process
readme md quickstart section needs update the very first part of the quickstart for the impatient section needs to be updated i think for of our downloaders we should tell them to simply clone dart not fork it their fork will quickly get out of date and it s more repositories to complicate their workflow for collaborators who are going to develop code with us they can choose to work from a fork if they want but these shouldn t be the default instructions also we shouldn t have the large files listed there for anything older than lanai and maybe not even lanai there could be a link to older files if someone wants a record of how to get historical documents this section is supposed to be the quickest way to get started with dart it should have the minimal info and be the simplest way to get something running
0
14,585
17,703,507,053
IssuesEvent
2021-08-25 03:10:15
tdwg/dwc
https://api.github.com/repos/tdwg/dwc
closed
Change Term - lifeStage
Term - change Class - Occurrence non-normative Process - complete
## Change term * Submitter: Paula Zermoglio @pzermoglio * Justification (why is this change necessary?): Consistency and clarity * Proponents (who needs this change): Everyone Current Term definition: https://dwc.tdwg.org/terms/#dwc:lifeStage Proposed new attributes of the term: * Term name (in lowerCamelCase): lifeStage * Organized in Class (e.g. Location, Taxon): Occurrence * Definition of the term: The age class or life stage of the **Organism(s)** at the time the Occurrence was recorded. * Usage comments (recommendations regarding content, etc.): Recommended best practice is to use a controlled vocabulary. * Examples: **`zygote`, `larva`**, `juvenile`, `adult`, **`seedling`, `flowering`, `fruiting`** * Refines (identifier of the broader term this term refines, if applicable): None * Replaces (identifier of the existing term that would be deprecated and replaced by this term, if applicable): http://rs.tdwg.org/dwc/terms/version/lifeStage-2017-10-06 * ABCD 2.06 (XPATH of the equivalent term in ABCD or EFG, if applicable): DataSets/DataSet/Units/Unit/MycologicalUnit/MycologicalSexualStage or DataSets/DataSet/Units/Unit/MycologicalUnit/MycologicalLiveStages/MycologicalLiveStage **or DataSets/DataSet/Units/Unit/ZoologicalUnit/PhasesOrStages/PhaseOrStage** Current [dwc:lifeStage](http://rs.tdwg.org/dwc/terms/lifeStage) definition: >The age class or life stage of the biological individual(s) at the time the Occurrence was recorded. Why "biological individual(s)" and not "organism(s)" as defined in the [Organism class](http://rs.tdwg.org/dwc/terms/Organism)? (which by the way has such a circular definition!).
1.0
Change Term - lifeStage - ## Change term * Submitter: Paula Zermoglio @pzermoglio * Justification (why is this change necessary?): Consistency and clarity * Proponents (who needs this change): Everyone Current Term definition: https://dwc.tdwg.org/terms/#dwc:lifeStage Proposed new attributes of the term: * Term name (in lowerCamelCase): lifeStage * Organized in Class (e.g. Location, Taxon): Occurrence * Definition of the term: The age class or life stage of the **Organism(s)** at the time the Occurrence was recorded. * Usage comments (recommendations regarding content, etc.): Recommended best practice is to use a controlled vocabulary. * Examples: **`zygote`, `larva`**, `juvenile`, `adult`, **`seedling`, `flowering`, `fruiting`** * Refines (identifier of the broader term this term refines, if applicable): None * Replaces (identifier of the existing term that would be deprecated and replaced by this term, if applicable): http://rs.tdwg.org/dwc/terms/version/lifeStage-2017-10-06 * ABCD 2.06 (XPATH of the equivalent term in ABCD or EFG, if applicable): DataSets/DataSet/Units/Unit/MycologicalUnit/MycologicalSexualStage or DataSets/DataSet/Units/Unit/MycologicalUnit/MycologicalLiveStages/MycologicalLiveStage **or DataSets/DataSet/Units/Unit/ZoologicalUnit/PhasesOrStages/PhaseOrStage** Current [dwc:lifeStage](http://rs.tdwg.org/dwc/terms/lifeStage) definition: >The age class or life stage of the biological individual(s) at the time the Occurrence was recorded. Why "biological individual(s)" and not "organism(s)" as defined in the [Organism class](http://rs.tdwg.org/dwc/terms/Organism)? (which by the way has such a circular definition!).
process
change term lifestage change term submitter paula zermoglio pzermoglio justification why is this change necessary consistency and clarity proponents who needs this change everyone current term definition proposed new attributes of the term term name in lowercamelcase lifestage organized in class e g location taxon occurrence definition of the term the age class or life stage of the organism s at the time the occurrence was recorded usage comments recommendations regarding content etc recommended best practice is to use a controlled vocabulary examples zygote larva juvenile adult seedling flowering fruiting refines identifier of the broader term this term refines if applicable none replaces identifier of the existing term that would be deprecated and replaced by this term if applicable abcd xpath of the equivalent term in abcd or efg if applicable datasets dataset units unit mycologicalunit mycologicalsexualstage or datasets dataset units unit mycologicalunit mycologicallivestages mycologicallivestage or datasets dataset units unit zoologicalunit phasesorstages phaseorstage current definition the age class or life stage of the biological individual s at the time the occurrence was recorded why biological individual s and not organism s as defined in the which by the way has such a circular definition
1
19,859
26,269,414,059
IssuesEvent
2023-01-06 15:34:57
gobuffalo/pop
https://api.github.com/repos/gobuffalo/pop
reopened
project clean up
s: blocked s: in progress process
There are more than 100 open issues and a few open PRs here. It needs to be cleaned before we go forward. * [x] categorize, tag, and add them in a good fit milestone. (I've created three milestones) * [ ] working on each milestone * [x] #770 as a part of https://github.com/gobuffalo/buffalo/issues/2306 * [x] [milestone v6.1.0](https://github.com/gobuffalo/pop/milestone/24) (released and closed) * [ ] #771 to improve association related things * [ ] #772 to improve columns, keys, and datatypes support along with https://github.com/gobuffalo/fizz/issues/138 * [x] need to set workflow for staled issues (all other modules too) * [x] standardizing issue labels are also ongoing. * [ ] there are more issues in * [ ] https://github.com/gobuffalo/pop/milestone/15 Proposal * [ ] https://github.com/gobuffalo/pop/milestone/13 Backlog
1.0
project clean up - There are more than 100 open issues and a few open PRs here. It needs to be cleaned before we go forward. * [x] categorize, tag, and add them in a good fit milestone. (I've created three milestones) * [ ] working on each milestone * [x] #770 as a part of https://github.com/gobuffalo/buffalo/issues/2306 * [x] [milestone v6.1.0](https://github.com/gobuffalo/pop/milestone/24) (released and closed) * [ ] #771 to improve association related things * [ ] #772 to improve columns, keys, and datatypes support along with https://github.com/gobuffalo/fizz/issues/138 * [x] need to set workflow for staled issues (all other modules too) * [x] standardizing issue labels are also ongoing. * [ ] there are more issues in * [ ] https://github.com/gobuffalo/pop/milestone/15 Proposal * [ ] https://github.com/gobuffalo/pop/milestone/13 Backlog
process
project clean up there are more than open issues and a few open prs here it needs to be cleaned before we go forward categorize tag and add them in a good fit milestone i ve created three milestones working on each milestone as a part of released and closed to improve association related things to improve columns keys and datatypes support along with need to set workflow for staled issues all other modules too standardizing issue labels are also ongoing there are more issues in proposal backlog
1
13,180
15,609,758,823
IssuesEvent
2021-03-19 12:20:08
hashicorp/packer-plugin-docker
https://api.github.com/repos/hashicorp/packer-plugin-docker
opened
Post-Processor: Delete docker image in local build machine after pushing to remote registry
enhancement post-processor/docker
_This issue was originally opened by @karthik101 as hashicorp/packer#5361. It was migrated here as a result of the [Packer plugin split](###blog-post-url###). The original body of the issue is below._ <hr> I do not want docker images piling up in my jenkins-slave. Is there any way to not commit images to local build machine and able to push docker image to remote registry? I tried with "export" in builder and "docker-import" in post-processor but its still keeps a copy. Thanks, Karthik
1.0
Post-Processor: Delete docker image in local build machine after pushing to remote registry - _This issue was originally opened by @karthik101 as hashicorp/packer#5361. It was migrated here as a result of the [Packer plugin split](###blog-post-url###). The original body of the issue is below._ <hr> I do not want docker images piling up in my jenkins-slave. Is there any way to not commit images to local build machine and able to push docker image to remote registry? I tried with "export" in builder and "docker-import" in post-processor but its still keeps a copy. Thanks, Karthik
process
post processor delete docker image in local build machine after pushing to remote registry this issue was originally opened by as hashicorp packer it was migrated here as a result of the blog post url the original body of the issue is below i do not want docker images piling up in my jenkins slave is there any way to not commit images to local build machine and able to push docker image to remote registry i tried with export in builder and docker import in post processor but its still keeps a copy thanks karthik
1
14,596
17,703,565,182
IssuesEvent
2021-08-25 03:17:30
tdwg/dwc
https://api.github.com/repos/tdwg/dwc
closed
New term - verticalDatum
Term - add Class - Location normative Process - complete
* Submitter: John Wieczorek * Proponents (at least two independent parties who need this term): Chapman AD & Wieczorek JR (2020) Georeferencing Best Practices. Copenhagen: GBIF Secretariat. https://doi.org/10.15468/doc-gg7h-s853 * Justification (why is this term necessary?): During the revision of the Best Practices for Georeferencing (Chapman & Wieczorek 2006), it became clear that Darwin Core has no way to provide the reference upon which the values of the elevation terms (minimumElevationInMeters, maximumElevationInMeters, and verbatimElevation) are based. Without this reference, the elevation is ambiguous, as it could refer to the ellipsoid of the coordinate reference system or to the geoid. Interpretation is difficult without the verticalDatum, and could introduce errors of hundreds of meters if assumptions are made in error. * Proposed definition of the new term: "The vertical datum used as the reference upon which the values in the elevation terms are based." * Term name (in lowerCamelCase): verticalDatum * Class (e.g. Location, Taxon): Location * Comment (recommendations regarding content, etc.): Recommended best practice is to use a controlled vocabulary, such as the epsg code for the reference ellipsoid, or "EGM96" for the geoid. * Examples: `EGM84`, `EGM96`, `EGM2008`, `PGM2000A`, `PGM2004`, `PGM2006`, `PGM2007`, `epsg:7030`, `unknown` * Refines (identifier of the broader term this term refines, if applicable): None * Replaces (identifier of the existing term that would be deprecated and replaced by this term, if applicable): None * ABCD 2.06 (XPATH of the equivalent term in ABCD, if applicable): not in ABCD
1.0
New term - verticalDatum - * Submitter: John Wieczorek * Proponents (at least two independent parties who need this term): Chapman AD & Wieczorek JR (2020) Georeferencing Best Practices. Copenhagen: GBIF Secretariat. https://doi.org/10.15468/doc-gg7h-s853 * Justification (why is this term necessary?): During the revision of the Best Practices for Georeferencing (Chapman & Wieczorek 2006), it became clear that Darwin Core has no way to provide the reference upon which the values of the elevation terms (minimumElevationInMeters, maximumElevationInMeters, and verbatimElevation) are based. Without this reference, the elevation is ambiguous, as it could refer to the ellipsoid of the coordinate reference system or to the geoid. Interpretation is difficult without the verticalDatum, and could introduce errors of hundreds of meters if assumptions are made in error. * Proposed definition of the new term: "The vertical datum used as the reference upon which the values in the elevation terms are based." * Term name (in lowerCamelCase): verticalDatum * Class (e.g. Location, Taxon): Location * Comment (recommendations regarding content, etc.): Recommended best practice is to use a controlled vocabulary, such as the epsg code for the reference ellipsoid, or "EGM96" for the geoid. * Examples: `EGM84`, `EGM96`, `EGM2008`, `PGM2000A`, `PGM2004`, `PGM2006`, `PGM2007`, `epsg:7030`, `unknown` * Refines (identifier of the broader term this term refines, if applicable): None * Replaces (identifier of the existing term that would be deprecated and replaced by this term, if applicable): None * ABCD 2.06 (XPATH of the equivalent term in ABCD, if applicable): not in ABCD
process
new term verticaldatum submitter john wieczorek proponents at least two independent parties who need this term chapman ad wieczorek jr georeferencing best practices copenhagen gbif secretariat justification why is this term necessary during the revision of the best practices for georeferencing chapman wieczorek it became clear that darwin core has no way to provide the reference upon which the values of the elevation terms minimumelevationinmeters maximumelevationinmeters and verbatimelevation are based without this reference the elevation is ambiguous as it could refer to the ellipsoid of the coordinate reference system or to the geoid interpretation is difficult without the verticaldatum and could introduce errors of hundreds of meters if assumptions are made in error proposed definition of the new term the vertical datum used as the reference upon which the values in the elevation terms are based term name in lowercamelcase verticaldatum class e g location taxon location comment recommendations regarding content etc recommended best practice is to use a controlled vocabulary such as the epsg code for the reference ellipsoid or for the geoid examples epsg unknown refines identifier of the broader term this term refines if applicable none replaces identifier of the existing term that would be deprecated and replaced by this term if applicable none abcd xpath of the equivalent term in abcd if applicable not in abcd
1
14,360
17,380,776,523
IssuesEvent
2021-07-31 17:07:42
parcel-bundler/parcel
https://api.github.com/repos/parcel-bundler/parcel
closed
@parcel/transformer-sass: Expected "to" or "from"
:bug: Bug :clock1: Waiting CSS Preprocessing ✨ Parcel 2
Build failed. @parcel/transformer-sass: Expected "to" or "from" # 🐛 bug report <!--- Provide a general summary of the issue here --> ## 🎛 Configuration (.babelrc, package.json, cli command) <!--- If describing a bug, tell us what your babel configuration looks like --> .babelrc ```js { "presets": ["@parcel/babel-preset-env"], "plugins": ["@parcel/babel-plugin-transform-runtime"] } ``` .parcelrc ```js { "extends": "@parcel/config-default", "transformers": { } } ``` .scssrc ```js { "includePaths": ["node_modules"] } ``` .package.json ```js { "devDependencies": { "@babel/core": "^7.14.6", "@babel/preset-env": "^7.14.7", "@parcel/babel-plugin-transform-runtime": "^2.0.0-beta.3.1", "@parcel/babel-preset-env": "^2.0.0-beta.3.1", "@parcel/config-default": "^2.0.0-beta.3.1", "@parcel/transformer-postcss": "^2.0.0-beta.3.1", "@parcel/transformer-sass": "^2.0.0-beta.3.1", "autoprefixer": "^10.3.1", "node-sass": "^6.0.1", "parcel": "^2.0.0-beta.3.1", "postcss": "^8.0.9", "postcss-inline-svg": "^5.0.0", "posthtml-include": "^1.7.1", "sass": "^1.35.2" } } ``` ## 🤔 Expected Behavior 调用第三方模块时,编译错误,提示: Build failed. ```js @parcel/transformer-sass: Expected "to" or "from". ╷ 307 │ html[data-isapp="1"]:not(:global(.disable-dark)) &{ │ ^ ╵ node_modules/dark-mode/es/dark-mode.scss 307:7 if-dark() node_modules/dark-mode/es/dark-mode.scss 257:3 background-color() node_modules/account-ui/css/form-state.scss 7:5 @import css/form.scss 3:9 @import css/index.scss 3:9 root stylesheet ``` ## 😯 Current Behavior <!--- Tell us what happens instead of the expected behavior --> <!--- If you are seeing an error, please include the full error message and stack trace --> ## 💁 Possible Solution <!--- Not obligatory, but suggest a fix/reason for the bug --> ## 🔦 Context <!--- How has this issue affected you? What are you trying to accomplish? --> <!--- Providing context helps us come up with a solution that is most useful in the real world --> ## 💻 Code Sample <!-- Please provide a code repository, gist, code snippet or sample files to reproduce the issue --> ## 🌍 Your Environment <!--- Include as many relevant details about the environment you experienced the bug in --> | Software | Version(s) | | ---------------- | ---------- | | Parcel | 2.0.0-beta.3.1 | Node | 16.0.0 | npm/Yarn | yarn | Operating System | macOS Catalina 10.15.5 <!-- Love parcel? Please consider supporting our collective: 👉 https://opencollective.com/parcel/donate -->
1.0
@parcel/transformer-sass: Expected "to" or "from" - Build failed. @parcel/transformer-sass: Expected "to" or "from" # 🐛 bug report <!--- Provide a general summary of the issue here --> ## 🎛 Configuration (.babelrc, package.json, cli command) <!--- If describing a bug, tell us what your babel configuration looks like --> .babelrc ```js { "presets": ["@parcel/babel-preset-env"], "plugins": ["@parcel/babel-plugin-transform-runtime"] } ``` .parcelrc ```js { "extends": "@parcel/config-default", "transformers": { } } ``` .scssrc ```js { "includePaths": ["node_modules"] } ``` .package.json ```js { "devDependencies": { "@babel/core": "^7.14.6", "@babel/preset-env": "^7.14.7", "@parcel/babel-plugin-transform-runtime": "^2.0.0-beta.3.1", "@parcel/babel-preset-env": "^2.0.0-beta.3.1", "@parcel/config-default": "^2.0.0-beta.3.1", "@parcel/transformer-postcss": "^2.0.0-beta.3.1", "@parcel/transformer-sass": "^2.0.0-beta.3.1", "autoprefixer": "^10.3.1", "node-sass": "^6.0.1", "parcel": "^2.0.0-beta.3.1", "postcss": "^8.0.9", "postcss-inline-svg": "^5.0.0", "posthtml-include": "^1.7.1", "sass": "^1.35.2" } } ``` ## 🤔 Expected Behavior 调用第三方模块时,编译错误,提示: Build failed. ```js @parcel/transformer-sass: Expected "to" or "from". ╷ 307 │ html[data-isapp="1"]:not(:global(.disable-dark)) &{ │ ^ ╵ node_modules/dark-mode/es/dark-mode.scss 307:7 if-dark() node_modules/dark-mode/es/dark-mode.scss 257:3 background-color() node_modules/account-ui/css/form-state.scss 7:5 @import css/form.scss 3:9 @import css/index.scss 3:9 root stylesheet ``` ## 😯 Current Behavior <!--- Tell us what happens instead of the expected behavior --> <!--- If you are seeing an error, please include the full error message and stack trace --> ## 💁 Possible Solution <!--- Not obligatory, but suggest a fix/reason for the bug --> ## 🔦 Context <!--- How has this issue affected you? What are you trying to accomplish? --> <!--- Providing context helps us come up with a solution that is most useful in the real world --> ## 💻 Code Sample <!-- Please provide a code repository, gist, code snippet or sample files to reproduce the issue --> ## 🌍 Your Environment <!--- Include as many relevant details about the environment you experienced the bug in --> | Software | Version(s) | | ---------------- | ---------- | | Parcel | 2.0.0-beta.3.1 | Node | 16.0.0 | npm/Yarn | yarn | Operating System | macOS Catalina 10.15.5 <!-- Love parcel? Please consider supporting our collective: 👉 https://opencollective.com/parcel/donate -->
process
parcel transformer sass expected to or from build failed parcel transformer sass expected to or from 🐛 bug report 🎛 configuration babelrc package json cli command babelrc js presets plugins parcelrc js extends parcel config default transformers scssrc js includepaths package json js devdependencies babel core babel preset env parcel babel plugin transform runtime beta parcel babel preset env beta parcel config default beta parcel transformer postcss beta parcel transformer sass beta autoprefixer node sass parcel beta postcss postcss inline svg posthtml include sass 🤔 expected behavior when calling a third party module the build fails with the message build failed js parcel transformer sass expected to or from ╷ │ html not global disable dark │ ╵ node modules dark mode es dark mode scss if dark node modules dark mode es dark mode scss background color node modules account ui css form state scss import css form scss import css index scss root stylesheet 😯 current behavior 💁 possible solution 🔦 context 💻 code sample 🌍 your environment software version s parcel beta node npm yarn yarn operating system macos catalina love parcel please consider supporting our collective 👉
1
15,650
27,633,683,940
IssuesEvent
2023-03-10 12:54:09
renovatebot/renovate
https://api.github.com/repos/renovatebot/renovate
opened
Renovate only patches package-lock.json, not package.json
type:bug status:requirements priority-5-triage
### How are you running Renovate? Mend Renovate hosted app on github.com ### If you're self-hosting Renovate, tell us what version of Renovate you run. 35.0.0 ### If you're self-hosting Renovate, select which platform you are using. None ### If you're self-hosting Renovate, tell us what version of the platform you run. Github.com ### Was this something which used to work for you, and then stopped? It used to work, and then stopped ### Describe the bug see: https://github.com/microsoft/azure-devops-extension-tasks/pull/381 https://github.com/microsoft/azure-devops-extension-tasks/actions/runs/4384446516/jobs/7675931768 As of 35.0.0 renovate only patches the `package-lock.json`, but not the `package.json`. Last successful run that created pull requests did patch both the package and lock files: https://github.com/microsoft/azure-devops-extension-tasks/actions/runs/4256868933/jobs/7406283477 ### Relevant debug logs <details><summary>Logs</summary> ``` ``` </details> [log.txt](https://github.com/renovatebot/renovate/files/10942198/log.txt) ### Have you created a minimal reproduction repository? I have read the minimal reproductions documentation and linked to such a repository in the bug description
1.0
Renovate only patches package-lock.json, not package.json - ### How are you running Renovate? Mend Renovate hosted app on github.com ### If you're self-hosting Renovate, tell us what version of Renovate you run. 35.0.0 ### If you're self-hosting Renovate, select which platform you are using. None ### If you're self-hosting Renovate, tell us what version of the platform you run. Github.com ### Was this something which used to work for you, and then stopped? It used to work, and then stopped ### Describe the bug see: https://github.com/microsoft/azure-devops-extension-tasks/pull/381 https://github.com/microsoft/azure-devops-extension-tasks/actions/runs/4384446516/jobs/7675931768 As of 35.0.0 renovate only patches the `package-lock.json`, but not the `package.json`. Last successful run that created pull requests did patch both the package and lock files: https://github.com/microsoft/azure-devops-extension-tasks/actions/runs/4256868933/jobs/7406283477 ### Relevant debug logs <details><summary>Logs</summary> ``` ``` </details> [log.txt](https://github.com/renovatebot/renovate/files/10942198/log.txt) ### Have you created a minimal reproduction repository? I have read the minimal reproductions documentation and linked to such a repository in the bug description
non_process
renovate only patches package lock json not package json how are you running renovate mend renovate hosted app on github com if you re self hosting renovate tell us what version of renovate you run if you re self hosting renovate select which platform you are using none if you re self hosting renovate tell us what version of the platform you run github com was this something which used to work for you and then stopped it used to work and then stopped describe the bug see as of renovate only patches the package lock json but not the package json last successful run that created pull requests did patch both the package and lock files relevant debug logs logs have you created a minimal reproduction repository i have read the minimal reproductions documentation and linked to such a repository in the bug description
0
284,606
21,447,323,759
IssuesEvent
2022-04-25 07:53:48
pravega/pravega
https://api.github.com/repos/pravega/pravega
closed
Release Candidate documentation improvement
area/documentation area/usability
**Problem description** 1. Building source code which is uploaded with RC is failing if we just follow the steps given in document 2. Complete path for "bin" folder can be added in documentation for better clarity on how to create, write and read stream. **Problem location** https://github.com/pravega/pravega/wiki/Pravega-Release-Validation-Checklist **Suggestions for an improvement** Update the https://github.com/pravega/pravega/wiki/Pravega-Release-Validation-Checklist documentation. 1. Running pravega -- > Standalone -- > from source code 2. Create a stream, write and read --- > Complete path for /bin/helloWorldWriter can be given in documentation for better clarity.
1.0
Release Candidate documentation improvement - **Problem description** 1. Building source code which is uploaded with RC is failing if we just follow the steps given in document 2. Complete path for "bin" folder can be added in documentation for better clarity on how to create, write and read stream. **Problem location** https://github.com/pravega/pravega/wiki/Pravega-Release-Validation-Checklist **Suggestions for an improvement** Update the https://github.com/pravega/pravega/wiki/Pravega-Release-Validation-Checklist documentation. 1. Running pravega -- > Standalone -- > from source code 2. Create a stream, write and read --- > Complete path for /bin/helloWorldWriter can be given in documentation for better clarity.
non_process
release candidate documentation improvement problem description building source code which is uploaded with rc is failing if we just follow the steps given in document complete path for bin folder can be added in documentation for better clarity on how to create write and read stream problem location suggestions for an improvement update the documentation running pravega standalone from source code create a stream write and read complete path for bin helloworldwriter can be given in documentation for better clarity
0
359,238
25,227,783,050
IssuesEvent
2022-11-14 17:13:02
JanssenProject/jans
https://api.github.com/repos/JanssenProject/jans
closed
docs: Consent Gathering script
area-documentation
Add consent gathering script documentation ------------------- ### Prepare - [x] Read contribution guidelines - [x] Read license information ------------------ ### Identified code changes NA ------------------- ### Test cases and code coverage NA ------------------- ### Document the changes - [x] Add consent gathering documentation - [x] Test consent gathering scripts (Java and Jython)
1.0
docs: Consent Gathering script - Add consent gathering script documentation ------------------- ### Prepare - [x] Read contribution guidelines - [x] Read license information ------------------ ### Identified code changes NA ------------------- ### Test cases and code coverage NA ------------------- ### Document the changes - [x] Add consent gathering documentation - [x] Test consent gathering scripts (Java and Jython)
non_process
docs consent gathering script add consent gathering script documentation prepare read contribution guidelines read license information identified code changes na test cases and code coverage na document the changes add consent gathering documentation test consent gathering scripts java and jython
0
278,294
24,143,429,983
IssuesEvent
2022-09-21 16:33:31
arturo-lang/arturo
https://api.github.com/repos/arturo-lang/arturo
closed
[VM/exec] Verify it's working correctly
unit-test todo
[VM/exec] Verify it's working correctly apparently, `del` won't do anything if the key did not exist https://github.com/arturo-lang/arturo/blob/d944ac79150549bcff0267c44b561544ca383380/src/vm/exec.nim#L139 ```text Arities[arg.s] = stack.peek(i).params.a.len else: # TODO(VM/exec) Verify it's working correctly # apparently, `del` won't do anything if the key did not exist # labels: unit-test Arities.del(arg.s) # if Arities.hasKey(arg.s): # Arities.del(arg.s) if imports!=VNULL: savedSyms = Syms ``` a1ebbdb5afc6dc715144cdf54e722686c1581a4f
1.0
[VM/exec] Verify it's working correctly - [VM/exec] Verify it's working correctly apparently, `del` won't do anything if the key did not exist https://github.com/arturo-lang/arturo/blob/d944ac79150549bcff0267c44b561544ca383380/src/vm/exec.nim#L139 ```text Arities[arg.s] = stack.peek(i).params.a.len else: # TODO(VM/exec) Verify it's working correctly # apparently, `del` won't do anything if the key did not exist # labels: unit-test Arities.del(arg.s) # if Arities.hasKey(arg.s): # Arities.del(arg.s) if imports!=VNULL: savedSyms = Syms ``` a1ebbdb5afc6dc715144cdf54e722686c1581a4f
non_process
verify it s working correctly verify it s working correctly apparently del won t do anything if the key did not exist text arities stack peek i params a len else todo vm exec verify it s working correctly apparently del won t do anything if the key did not exist labels unit test arities del arg s if arities haskey arg s arities del arg s if imports vnull savedsyms syms
0
22,114
30,643,994,559
IssuesEvent
2023-07-25 02:00:10
lizhihao6/get-daily-arxiv-noti
https://api.github.com/repos/lizhihao6/get-daily-arxiv-noti
opened
New submissions for Tue, 25 Jul 23
event camera white balance isp compression image signal processing image signal process raw raw image events camera color contrast events AWB
## Keyword: events ### Discovering Spatio-Temporal Rationales for Video Question Answering - **Authors:** Yicong Li, Junbin Xiao, Chun Feng, Xiang Wang, Tat-Seng Chua - **Subjects:** Computer Vision and Pattern Recognition (cs.CV) - **Arxiv link:** https://arxiv.org/abs/2307.12058 - **Pdf link:** https://arxiv.org/pdf/2307.12058 - **Abstract** This paper strives to solve complex video question answering (VideoQA) which features long video containing multiple objects and events at different time. To tackle the challenge, we highlight the importance of identifying question-critical temporal moments and spatial objects from the vast amount of video content. Towards this, we propose a Spatio-Temporal Rationalization (STR), a differentiable selection module that adaptively collects question-critical moments and objects using cross-modal interaction. The discovered video moments and objects are then served as grounded rationales to support answer reasoning. Based on STR, we further propose TranSTR, a Transformer-style neural network architecture that takes STR as the core and additionally underscores a novel answer interaction mechanism to coordinate STR for answer decoding. Experiments on four datasets show that TranSTR achieves new state-of-the-art (SoTA). Especially, on NExT-QA and Causal-VidQA which feature complex VideoQA, it significantly surpasses the previous SoTA by 5.8\% and 6.8\%, respectively. We then conduct extensive studies to verify the importance of STR as well as the proposed answer interaction mechanism. With the success of TranSTR and our comprehensive analysis, we hope this work can spark more future efforts in complex VideoQA. Code will be released at https://github.com/yl3800/TranSTR. 
### Towards Video Anomaly Retrieval from Video Anomaly Detection: New Benchmarks and Model - **Authors:** Peng Wu, Jing Liu, Xiangteng He, Yuxin Peng, Peng Wang, Yanning Zhang - **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Artificial Intelligence (cs.AI) - **Arxiv link:** https://arxiv.org/abs/2307.12545 - **Pdf link:** https://arxiv.org/pdf/2307.12545 - **Abstract** Video anomaly detection (VAD) has been paid increasing attention due to its potential applications, its current dominant tasks focus on online detecting anomalies% at the frame level, which can be roughly interpreted as the binary or multiple event classification. However, such a setup that builds relationships between complicated anomalous events and single labels, e.g., ``vandalism'', is superficial, since single labels are deficient to characterize anomalous events. In reality, users tend to search a specific video rather than a series of approximate videos. Therefore, retrieving anomalous events using detailed descriptions is practical and positive but few researches focus on this. In this context, we propose a novel task called Video Anomaly Retrieval (VAR), which aims to pragmatically retrieve relevant anomalous videos by cross-modalities, e.g., language descriptions and synchronous audios. Unlike the current video retrieval where videos are assumed to be temporally well-trimmed with short duration, VAR is devised to retrieve long untrimmed videos which may be partially relevant to the given query. To achieve this, we present two large-scale VAR benchmarks, UCFCrime-AR and XDViolence-AR, constructed on top of prevalent anomaly datasets. Meanwhile, we design a model called Anomaly-Led Alignment Network (ALAN) for VAR. In ALAN, we propose an anomaly-led sampling to focus on key segments in long untrimmed videos. Then, we introduce an efficient pretext task to enhance semantic associations between video-text fine-grained representations. 
Besides, we leverage two complementary alignments to further match cross-modal contents. Experimental results on two benchmarks reveal the challenges of VAR task and also demonstrate the advantages of our tailored method. ### Revisiting Event-based Video Frame Interpolation - **Authors:** Jiaben Chen, Yichen Zhu, Dongze Lian, Jiaqi Yang, Yifu Wang, Renrui Zhang, Xinhang Liu, Shenhan Qian, Laurent Kneip, Shenghua Gao - **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Robotics (cs.RO) - **Arxiv link:** https://arxiv.org/abs/2307.12558 - **Pdf link:** https://arxiv.org/pdf/2307.12558 - **Abstract** Dynamic vision sensors or event cameras provide rich complementary information for video frame interpolation. Existing state-of-the-art methods follow the paradigm of combining both synthesis-based and warping networks. However, few of those methods fully respect the intrinsic characteristics of events streams. Given that event cameras only encode intensity changes and polarity rather than color intensities, estimating optical flow from events is arguably more difficult than from RGB information. We therefore propose to incorporate RGB information in an event-guided optical flow refinement strategy. Moreover, in light of the quasi-continuous nature of the time signals provided by event cameras, we propose a divide-and-conquer strategy in which event-based intermediate frame synthesis happens incrementally in multiple simplified stages rather than in a single, long stage. Extensive experiments on both synthetic and real-world datasets show that these modifications lead to more reliable and realistic intermediate frame results than previous video frame interpolation methods. Our findings underline that a careful consideration of event characteristics such as high temporal density and elevated noise benefits interpolation accuracy. 
### Damage Vision Mining Opportunity for Imbalanced Anomaly Detection - **Authors:** Takato Yasuno - **Subjects:** Computer Vision and Pattern Recognition (cs.CV) - **Arxiv link:** https://arxiv.org/abs/2307.12676 - **Pdf link:** https://arxiv.org/pdf/2307.12676 - **Abstract** In past decade, previous balanced datasets have been used to advance algorithms for classification, object detection, semantic segmentation, and anomaly detection in industrial applications. Specifically, for condition-based maintenance, automating visual inspection is crucial to ensure high quality. Deterioration prognostic attempts to optimize the fine decision process for predictive maintenance and proactive repair. In civil infrastructure and living environment, damage data mining cannot avoid the imbalanced data issue because of rare unseen events and high quality status by improved operations. For visual inspection, deteriorated class acquired from the surface of concrete and steel components are occasionally imbalanced. From numerous related surveys, we summarize that imbalanced data problems can be categorized into four types; 1) missing range of target and label valuables, 2) majority-minority class imbalance, 3) foreground-background of spatial imbalance, 4) long-tailed class of pixel-wise imbalance. Since 2015, there has been many imbalanced studies using deep learning approaches that includes regression, image classification, object detection, semantic segmentation. However, anomaly detection for imbalanced data is not yet well known. In the study, we highlight one-class anomaly detection application whether anomalous class or not, and demonstrate clear examples on imbalanced vision datasets: wooden, concrete deterioration, and disaster damage. We provide key results on damage vision mining advantage, hypothesizing that the more effective range of positive ratio, the higher accuracy gain of anomaly detection application. 
Finally, the applicability of the damage learning methods, limitations, and future works are mentioned. ### Automotive Object Detection via Learning Sparse Events by Temporal Dynamics of Spiking Neurons - **Authors:** Hu Zhang, Luziwei Leng, Kaiwei Che, Qian Liu, Jie Cheng, Qinghai Guo, Jiangxing Liao, Ran Cheng - **Subjects:** Computer Vision and Pattern Recognition (cs.CV) - **Arxiv link:** https://arxiv.org/abs/2307.12900 - **Pdf link:** https://arxiv.org/pdf/2307.12900 - **Abstract** Event-based sensors, with their high temporal resolution (1us) and dynamical range (120dB), have the potential to be deployed in high-speed platforms such as vehicles and drones. However, the highly sparse and fluctuating nature of events poses challenges for conventional object detection techniques based on Artificial Neural Networks (ANNs). In contrast, Spiking Neural Networks (SNNs) are well-suited for representing event-based data due to their inherent temporal dynamics. In particular, we demonstrate that the membrane potential dynamics can modulate network activity upon fluctuating events and strengthen features of sparse input. In addition, the spike-triggered adaptive threshold can stabilize training which further improves network performance. Based on this, we develop an efficient spiking feature pyramid network for event-based object detection. Our proposed SNN outperforms previous SNNs and sophisticated ANNs with attention mechanisms, achieving a mean average precision (map50) of 47.7% on the Gen1 benchmark dataset. This result significantly surpasses the previous best SNN by 9.7% and demonstrates the potential of SNNs for event-based vision. Our model has a concise architecture while maintaining high accuracy and much lower computation cost as a result of sparse computation. Our code will be publicly available. 
## Keyword: event camera ### Revisiting Event-based Video Frame Interpolation - **Authors:** Jiaben Chen, Yichen Zhu, Dongze Lian, Jiaqi Yang, Yifu Wang, Renrui Zhang, Xinhang Liu, Shenhan Qian, Laurent Kneip, Shenghua Gao - **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Robotics (cs.RO) - **Arxiv link:** https://arxiv.org/abs/2307.12558 - **Pdf link:** https://arxiv.org/pdf/2307.12558 - **Abstract** Dynamic vision sensors or event cameras provide rich complementary information for video frame interpolation. Existing state-of-the-art methods follow the paradigm of combining both synthesis-based and warping networks. However, few of those methods fully respect the intrinsic characteristics of events streams. Given that event cameras only encode intensity changes and polarity rather than color intensities, estimating optical flow from events is arguably more difficult than from RGB information. We therefore propose to incorporate RGB information in an event-guided optical flow refinement strategy. Moreover, in light of the quasi-continuous nature of the time signals provided by event cameras, we propose a divide-and-conquer strategy in which event-based intermediate frame synthesis happens incrementally in multiple simplified stages rather than in a single, long stage. Extensive experiments on both synthetic and real-world datasets show that these modifications lead to more reliable and realistic intermediate frame results than previous video frame interpolation methods. Our findings underline that a careful consideration of event characteristics such as high temporal density and elevated noise benefits interpolation accuracy. 
## Keyword: events camera There is no result ## Keyword: white balance There is no result ## Keyword: color contrast There is no result ## Keyword: AWB ### Pick the Best Pre-trained Model: Towards Transferability Estimation for Medical Image Segmentation - **Authors:** Yuncheng Yang, Meng Wei, Junjun He, Jie Yang, Jin Ye, Yun Gu - **Subjects:** Computer Vision and Pattern Recognition (cs.CV) - **Arxiv link:** https://arxiv.org/abs/2307.11958 - **Pdf link:** https://arxiv.org/pdf/2307.11958 - **Abstract** Transfer learning is a critical technique in training deep neural networks for the challenging medical image segmentation task that requires enormous resources. With the abundance of medical image data, many research institutions release models trained on various datasets that can form a huge pool of candidate source models to choose from. Hence, it's vital to estimate the source models' transferability (i.e., the ability to generalize across different downstream tasks) for proper and efficient model reuse. To make up for its deficiency when applying transfer learning to medical image segmentation, in this paper, we therefore propose a new Transferability Estimation (TE) method. We first analyze the drawbacks of using the existing TE algorithms for medical image segmentation and then design a source-free TE framework that considers both class consistency and feature variety for better estimation. Extensive experiments show that our method surpasses all current algorithms for transferability estimation in medical image segmentation. 
Code is available at https://github.com/EndoluminalSurgicalVision-IMR/CCFV ### A Stronger Stitching Algorithm for Fisheye Images based on Deblurring and Registration - **Authors:** Jing Hao, Jingming Xie, Jinyuan Zhang, Moyun Liu - **Subjects:** Computer Vision and Pattern Recognition (cs.CV) - **Arxiv link:** https://arxiv.org/abs/2307.11997 - **Pdf link:** https://arxiv.org/pdf/2307.11997 - **Abstract** Fisheye lens, which is suitable for panoramic imaging, has the prominent advantage of a large field of view and low cost. However, the fisheye image has a severe geometric distortion which may interfere with the stage of image registration and stitching. Aiming to resolve this drawback, we devise a stronger stitching algorithm for fisheye images by combining the traditional image processing method with deep learning. In the stage of fisheye image correction, we propose the Attention-based Nonlinear Activation Free Network (ANAFNet) to deblur fisheye images corrected by Zhang calibration method. Specifically, ANAFNet adopts the classical single-stage U-shaped architecture based on convolutional neural networks with soft-attention technique and it can restore a sharp image from a blurred image effectively. In the part of image registration, we propose the ORB-FREAK-GMS (OFG), a comprehensive image matching algorithm, to improve the accuracy of image registration. Experimental results demonstrate that panoramic images of superior quality stitching by fisheye images can be obtained through our method. 
## Keyword: ISP ### LoLep: Single-View View Synthesis with Locally-Learned Planes and Self-Attention Occlusion Inference - **Authors:** Cong Wang, Yu-Ping Wang, Dinesh Manocha - **Subjects:** Computer Vision and Pattern Recognition (cs.CV) - **Arxiv link:** https://arxiv.org/abs/2307.12217 - **Pdf link:** https://arxiv.org/pdf/2307.12217 - **Abstract** We propose a novel method, LoLep, which regresses Locally-Learned planes from a single RGB image to represent scenes accurately, thus generating better novel views. Without the depth information, regressing appropriate plane locations is a challenging problem. To solve this issue, we pre-partition the disparity space into bins and design a disparity sampler to regress local offsets for multiple planes in each bin. However, only using such a sampler makes the network not convergent; we further propose two optimizing strategies that combine with different disparity distributions of datasets and propose an occlusion-aware reprojection loss as a simple yet effective geometric supervision technique. We also introduce a self-attention mechanism to improve occlusion inference and present a Block-Sampling Self-Attention (BS-SA) module to address the problem of applying self-attention to large feature maps. We demonstrate the effectiveness of our approach and generate state-of-the-art results on different datasets. Compared to MINE, our approach has an LPIPS reduction of 4.8%-9.0% and an RV reduction of 83.1%-84.7%. We also evaluate the performance on real-world images and demonstrate the benefits. 
### PG-RCNN: Semantic Surface Point Generation for 3D Object Detection - **Authors:** Inyong Koo, Inyoung Lee, Se-Ho Kim, Hee-Seon Kim, Woo-jin Jeon, Changick Kim - **Subjects:** Computer Vision and Pattern Recognition (cs.CV) - **Arxiv link:** https://arxiv.org/abs/2307.12637 - **Pdf link:** https://arxiv.org/pdf/2307.12637 - **Abstract** One of the main challenges in LiDAR-based 3D object detection is that the sensors often fail to capture the complete spatial information about the objects due to long distance and occlusion. Two-stage detectors with point cloud completion approaches tackle this problem by adding more points to the regions of interest (RoIs) with a pre-trained network. However, these methods generate dense point clouds of objects for all region proposals, assuming that objects always exist in the RoIs. This leads to the indiscriminate point generation for incorrect proposals as well. Motivated by this, we propose Point Generation R-CNN (PG-RCNN), a novel end-to-end detector that generates semantic surface points of foreground objects for accurate detection. Our method uses a jointly trained RoI point generation module to process the contextual information of RoIs and estimate the complete shape and displacement of foreground objects. For every generated point, PG-RCNN assigns a semantic feature that indicates the estimated foreground probability. Extensive experiments show that the point clouds generated by our method provide geometrically and semantically rich information for refining false positive and misaligned proposals. PG-RCNN achieves competitive performance on the KITTI benchmark, with significantly fewer parameters than state-of-the-art models. The code is available at https://github.com/quotation2520/PG-RCNN. 
### Volcanic ash delimitation using Artificial Intelligence based on Pix2Pix - **Authors:** Christian Carrillo, Gissela Torres, Christian Mejia-Escobar - **Subjects:** Computer Vision and Pattern Recognition (cs.CV) - **Arxiv link:** https://arxiv.org/abs/2307.12970 - **Pdf link:** https://arxiv.org/pdf/2307.12970 - **Abstract** Volcanic eruptions emit ash that can be harmful to human health and cause damage to infrastructure, economic activities and the environment. The delimitation of ash clouds allows to know their behavior and dispersion, which helps in the prevention and mitigation of this phenomenon. Traditional methods take advantage of specialized software programs to process the bands or channels that compose the satellite images. However, their use is limited to experts and demands a lot of time and significant computational resources. In recent years, Artificial Intelligence has been a milestone in the computational treatment of complex problems in different areas. In particular, Deep Learning techniques allow automatic, fast and accurate processing of digital images. The present work proposes the use of the Pix2Pix model, a type of generative adversarial network that, once trained, learns the mapping of input images to output images. The architecture of such a network consisting of a generator and a discriminator provides the versatility needed to produce black and white ash cloud images from multispectral satellite images. The evaluation of the model, based on loss and accuracy plots, a confusion matrix, and visual inspection, indicates a satisfactory solution for accurate ash cloud delineation, applicable in any area of the world and becomes a useful tool in risk management. ## Keyword: image signal processing There is no result ## Keyword: image signal process There is no result ## Keyword: compression ### Digital Modeling on Large Kernel Metamaterial Neural Network - **Authors:** Quan Liu, Hanyu Zheng, Brandon T. 
Swartz, Ho hin Lee, Zuhayr Asad, Ivan Kravchenko, Jason G. Valentine, Yuankai Huo - **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Image and Video Processing (eess.IV) - **Arxiv link:** https://arxiv.org/abs/2307.11862 - **Pdf link:** https://arxiv.org/pdf/2307.11862 - **Abstract** Deep neural networks (DNNs) utilized recently are physically deployed with computational units (e.g., CPUs and GPUs). Such a design might lead to a heavy computational burden, significant latency, and intensive power consumption, which are critical limitations in applications such as the Internet of Things (IoT), edge computing, and the usage of drones. Recent advances in optical computational units (e.g., metamaterial) have shed light on energy-free and light-speed neural networks. However, the digital design of the metamaterial neural network (MNN) is fundamentally limited by its physical limitations, such as precision, noise, and bandwidth during fabrication. Moreover, the unique advantages of MNN's (e.g., light-speed computation) are not fully explored via standard 3x3 convolution kernels. In this paper, we propose a novel large kernel metamaterial neural network (LMNN) that maximizes the digital capacity of the state-of-the-art (SOTA) MNN with model re-parametrization and network compression, while also considering the optical limitation explicitly. The new digital learning scheme can maximize the learning capacity of MNN while modeling the physical restrictions of meta-optic. With the proposed LMNN, the computation cost of the convolutional front-end can be offloaded into fabricated optical hardware. The experimental results on two publicly available datasets demonstrate that the optimized hybrid design improved classification accuracy while reducing computational latency. The development of the proposed LMNN is a promising step towards the ultimate goal of energy-free and light-speed AI. 
### Model Compression Methods for YOLOv5: A Review

- **Authors:** Mohammad Jani, Jamil Fayyad, Younes Al-Younes, Homayoun Najjaran
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Machine Learning (cs.LG); Neural and Evolutionary Computing (cs.NE)
- **Arxiv link:** https://arxiv.org/abs/2307.11904
- **Pdf link:** https://arxiv.org/pdf/2307.11904
- **Abstract**
  Over the past few years, extensive research has been devoted to enhancing YOLO object detectors. Since its introduction, eight major versions of YOLO have been introduced with the purpose of improving its accuracy and efficiency. While the evident merits of YOLO have yielded to its extensive use in many areas, deploying it on resource-limited devices poses challenges. To address this issue, various neural network compression methods have been developed, which fall under three main categories, namely network pruning, quantization, and knowledge distillation. The fruitful outcomes of utilizing model compression methods, such as lowering memory usage and inference time, make them favorable, if not necessary, for deploying large neural networks on hardware-constrained edge devices. In this review paper, our focus is on pruning and quantization due to their comparative modularity. We categorize them and analyze the practical results of applying those methods to YOLOv5. By doing so, we identify gaps in adapting pruning and quantization for compressing YOLOv5, and provide future directions in this area for further exploration. Among several versions of YOLO, we specifically choose YOLOv5 for its excellent trade-off between recency and popularity in literature. This is the first specific review paper that surveys pruning and quantization methods from an implementation point of view on YOLOv5. Our study is also extendable to newer versions of YOLO as implementing them on resource-limited devices poses the same challenges that persist even today. This paper targets those interested in the practical deployment of model compression methods on YOLOv5, and in exploring different compression techniques that can be used for subsequent versions of YOLO.

## Keyword: RAW

### Pick the Best Pre-trained Model: Towards Transferability Estimation for Medical Image Segmentation

- **Authors:** Yuncheng Yang, Meng Wei, Junjun He, Jie Yang, Jin Ye, Yun Gu
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2307.11958
- **Pdf link:** https://arxiv.org/pdf/2307.11958
- **Abstract**
  Transfer learning is a critical technique in training deep neural networks for the challenging medical image segmentation task that requires enormous resources. With the abundance of medical image data, many research institutions release models trained on various datasets that can form a huge pool of candidate source models to choose from. Hence, it's vital to estimate the source models' transferability (i.e., the ability to generalize across different downstream tasks) for proper and efficient model reuse. To make up for its deficiency when applying transfer learning to medical image segmentation, in this paper, we therefore propose a new Transferability Estimation (TE) method. We first analyze the drawbacks of using the existing TE algorithms for medical image segmentation and then design a source-free TE framework that considers both class consistency and feature variety for better estimation. Extensive experiments show that our method surpasses all current algorithms for transferability estimation in medical image segmentation. Code is available at https://github.com/EndoluminalSurgicalVision-IMR/CCFV

### Learning Vision-and-Language Navigation from YouTube Videos

- **Authors:** Kunyang Lin, Peihao Chen, Diwei Huang, Thomas H. Li, Mingkui Tan, Chuang Gan
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Computation and Language (cs.CL)
- **Arxiv link:** https://arxiv.org/abs/2307.11984
- **Pdf link:** https://arxiv.org/pdf/2307.11984
- **Abstract**
  Vision-and-language navigation (VLN) requires an embodied agent to navigate in realistic 3D environments using natural language instructions. Existing VLN methods suffer from training on small-scale environments or unreasonable path-instruction datasets, limiting the generalization to unseen environments. There are massive house tour videos on YouTube, providing abundant real navigation experiences and layout information. However, these videos have not been explored for VLN before. In this paper, we propose to learn an agent from these videos by creating a large-scale dataset which comprises reasonable path-instruction pairs from house tour videos and pre-training the agent on it. To achieve this, we have to tackle the challenges of automatically constructing path-instruction pairs and exploiting real layout knowledge from raw and unlabeled videos. To address these, we first leverage an entropy-based method to construct the nodes of a path trajectory. Then, we propose an action-aware generator for generating instructions from unlabeled trajectories. Last, we devise a trajectory judgment pretext task to encourage the agent to mine the layout knowledge. Experimental results show that our method achieves state-of-the-art performance on two popular benchmarks (R2R and REVERIE). Code is available at https://github.com/JeremyLinky/YouTube-VLN

### A Stronger Stitching Algorithm for Fisheye Images based on Deblurring and Registration

- **Authors:** Jing Hao, Jingming Xie, Jinyuan Zhang, Moyun Liu
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2307.11997
- **Pdf link:** https://arxiv.org/pdf/2307.11997
- **Abstract**
  Fisheye lens, which is suitable for panoramic imaging, has the prominent advantage of a large field of view and low cost. However, the fisheye image has a severe geometric distortion which may interfere with the stage of image registration and stitching. Aiming to resolve this drawback, we devise a stronger stitching algorithm for fisheye images by combining the traditional image processing method with deep learning. In the stage of fisheye image correction, we propose the Attention-based Nonlinear Activation Free Network (ANAFNet) to deblur fisheye images corrected by Zhang calibration method. Specifically, ANAFNet adopts the classical single-stage U-shaped architecture based on convolutional neural networks with soft-attention technique and it can restore a sharp image from a blurred image effectively. In the part of image registration, we propose the ORB-FREAK-GMS (OFG), a comprehensive image matching algorithm, to improve the accuracy of image registration. Experimental results demonstrate that panoramic images of superior quality stitching by fisheye images can be obtained through our method.

### On the Connection between Pre-training Data Diversity and Fine-tuning Robustness

- **Authors:** Vivek Ramanujan, Thao Nguyen, Sewoong Oh, Ludwig Schmidt, Ali Farhadi
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Machine Learning (cs.LG)
- **Arxiv link:** https://arxiv.org/abs/2307.12532
- **Pdf link:** https://arxiv.org/pdf/2307.12532
- **Abstract**
  Pre-training has been widely adopted in deep learning to improve model performance, especially when the training data for a target task is limited. In our work, we seek to understand the implications of this training strategy on the generalization properties of downstream models. More specifically, we ask the following question: how do properties of the pre-training distribution affect the robustness of a fine-tuned model? The properties we explore include the label space, label semantics, image diversity, data domains, and data quantity of the pre-training distribution. We find that the primary factor influencing downstream effective robustness (Taori et al., 2020) is data quantity, while other factors have limited significance. For example, reducing the number of ImageNet pre-training classes by 4x while increasing the number of images per class by 4x (that is, keeping total data quantity fixed) does not impact the robustness of fine-tuned models. We demonstrate our findings on pre-training distributions drawn from various natural and synthetic data sources, primarily using the iWildCam-WILDS distribution shift as a test for downstream robustness.

## Keyword: raw image

There is no result
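The "effective robustness" metric cited in the last entry above measures how far a model's out-of-distribution accuracy sits above the trend predicted from its in-distribution accuracy. A toy sketch of that idea (Taori et al. fit the baseline on logit-transformed accuracies; this simplified version uses a plain linear fit, and all numbers here are made up for illustration):

```python
import numpy as np

# hypothetical baseline models: (in-distribution, out-of-distribution) accuracy
id_acc = np.array([0.60, 0.70, 0.80, 0.90])
ood_acc = np.array([0.40, 0.48, 0.56, 0.64])  # lies on the line 0.8*id - 0.08

# fit the ID -> OOD baseline trend
slope, intercept = np.polyfit(id_acc, ood_acc, deg=1)

def effective_robustness(model_id_acc, model_ood_acc):
    # OOD accuracy above what the baseline trend predicts from ID accuracy
    return model_ood_acc - (slope * model_id_acc + intercept)
```

A model sitting exactly on the trend has effective robustness near zero; a model five points above it scores about 0.05.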
# New submissions for Tue, 25 Jul 23

## Keyword: events

### Discovering Spatio-Temporal Rationales for Video Question Answering

- **Authors:** Yicong Li, Junbin Xiao, Chun Feng, Xiang Wang, Tat-Seng Chua
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2307.12058
- **Pdf link:** https://arxiv.org/pdf/2307.12058
- **Abstract**
  This paper strives to solve complex video question answering (VideoQA) which features long video containing multiple objects and events at different time. To tackle the challenge, we highlight the importance of identifying question-critical temporal moments and spatial objects from the vast amount of video content. Towards this, we propose a Spatio-Temporal Rationalization (STR), a differentiable selection module that adaptively collects question-critical moments and objects using cross-modal interaction. The discovered video moments and objects are then served as grounded rationales to support answer reasoning. Based on STR, we further propose TranSTR, a Transformer-style neural network architecture that takes STR as the core and additionally underscores a novel answer interaction mechanism to coordinate STR for answer decoding. Experiments on four datasets show that TranSTR achieves new state-of-the-art (SoTA). Especially, on NExT-QA and Causal-VidQA which feature complex VideoQA, it significantly surpasses the previous SoTA by 5.8% and 6.8%, respectively. We then conduct extensive studies to verify the importance of STR as well as the proposed answer interaction mechanism. With the success of TranSTR and our comprehensive analysis, we hope this work can spark more future efforts in complex VideoQA. Code will be released at https://github.com/yl3800/TranSTR.
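The "differentiable selection" in the STR entry above can be pictured as a soft, softmax-weighted pick of the frames most similar to the question feature, instead of a hard top-k. This is only an illustrative stand-in, not the paper's module; all arrays and names are invented:

```python
import numpy as np

def soft_select(frame_feats, query_feat, temperature=0.5):
    # score each frame against the question feature, then pool frames by a
    # softmax over the scores -- differentiable, unlike hard top-k selection
    scores = frame_feats @ query_feat / temperature
    scores = scores - scores.max()                  # numerical stability
    weights = np.exp(scores) / np.exp(scores).sum()
    return weights, weights @ frame_feats

# toy features: frame 3 matches the query best
frames = np.array([
    [1.0, 0.0, 0.0],
    [0.0, 1.0, 0.0],
    [0.0, 0.0, 1.0],
    [0.9, 0.9, 0.0],
    [0.1, 0.1, 0.1],
])
query = np.array([1.0, 1.0, 0.0])

weights, pooled = soft_select(frames, query)
```

Lowering the temperature sharpens the weights toward a hard selection of the question-critical frame.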
### Towards Video Anomaly Retrieval from Video Anomaly Detection: New Benchmarks and Model

- **Authors:** Peng Wu, Jing Liu, Xiangteng He, Yuxin Peng, Peng Wang, Yanning Zhang
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Artificial Intelligence (cs.AI)
- **Arxiv link:** https://arxiv.org/abs/2307.12545
- **Pdf link:** https://arxiv.org/pdf/2307.12545
- **Abstract**
  Video anomaly detection (VAD) has been paid increasing attention due to its potential applications, its current dominant tasks focus on online detecting anomalies at the frame level, which can be roughly interpreted as the binary or multiple event classification. However, such a setup that builds relationships between complicated anomalous events and single labels, e.g., "vandalism", is superficial, since single labels are deficient to characterize anomalous events. In reality, users tend to search a specific video rather than a series of approximate videos. Therefore, retrieving anomalous events using detailed descriptions is practical and positive but few researches focus on this. In this context, we propose a novel task called Video Anomaly Retrieval (VAR), which aims to pragmatically retrieve relevant anomalous videos by cross-modalities, e.g., language descriptions and synchronous audios. Unlike the current video retrieval where videos are assumed to be temporally well-trimmed with short duration, VAR is devised to retrieve long untrimmed videos which may be partially relevant to the given query. To achieve this, we present two large-scale VAR benchmarks, UCFCrime-AR and XDViolence-AR, constructed on top of prevalent anomaly datasets. Meanwhile, we design a model called Anomaly-Led Alignment Network (ALAN) for VAR. In ALAN, we propose an anomaly-led sampling to focus on key segments in long untrimmed videos. Then, we introduce an efficient pretext task to enhance semantic associations between video-text fine-grained representations. Besides, we leverage two complementary alignments to further match cross-modal contents. Experimental results on two benchmarks reveal the challenges of VAR task and also demonstrate the advantages of our tailored method.

### Revisiting Event-based Video Frame Interpolation

- **Authors:** Jiaben Chen, Yichen Zhu, Dongze Lian, Jiaqi Yang, Yifu Wang, Renrui Zhang, Xinhang Liu, Shenhan Qian, Laurent Kneip, Shenghua Gao
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Robotics (cs.RO)
- **Arxiv link:** https://arxiv.org/abs/2307.12558
- **Pdf link:** https://arxiv.org/pdf/2307.12558
- **Abstract**
  Dynamic vision sensors or event cameras provide rich complementary information for video frame interpolation. Existing state-of-the-art methods follow the paradigm of combining both synthesis-based and warping networks. However, few of those methods fully respect the intrinsic characteristics of events streams. Given that event cameras only encode intensity changes and polarity rather than color intensities, estimating optical flow from events is arguably more difficult than from RGB information. We therefore propose to incorporate RGB information in an event-guided optical flow refinement strategy. Moreover, in light of the quasi-continuous nature of the time signals provided by event cameras, we propose a divide-and-conquer strategy in which event-based intermediate frame synthesis happens incrementally in multiple simplified stages rather than in a single, long stage. Extensive experiments on both synthetic and real-world datasets show that these modifications lead to more reliable and realistic intermediate frame results than previous video frame interpolation methods. Our findings underline that a careful consideration of event characteristics such as high temporal density and elevated noise benefits interpolation accuracy.
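As the interpolation entry above notes, event cameras encode only intensity changes and their polarity. A common minimal representation sums signed polarities per pixel over a time window to form a "change image". The sketch below illustrates that idea; the event tuples and function name are invented for the example:

```python
import numpy as np

def accumulate_events(events, H, W):
    # events: iterable of (x, y, polarity) with polarity in {+1, -1};
    # summing signed polarities per pixel gives a crude change image
    img = np.zeros((H, W))
    for x, y, p in events:
        img[y, x] += p
    return img

# two positive events at pixel (x=1, y=0), one negative at (x=2, y=2)
events = [(1, 0, +1), (1, 0, +1), (2, 2, -1)]
img = accumulate_events(events, 3, 3)
```

Real pipelines usually bin events into several temporal slices (voxel grids) rather than a single frame, to preserve the high temporal density the entry mentions.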
### Damage Vision Mining Opportunity for Imbalanced Anomaly Detection

- **Authors:** Takato Yasuno
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2307.12676
- **Pdf link:** https://arxiv.org/pdf/2307.12676
- **Abstract**
  In past decade, previous balanced datasets have been used to advance algorithms for classification, object detection, semantic segmentation, and anomaly detection in industrial applications. Specifically, for condition-based maintenance, automating visual inspection is crucial to ensure high quality. Deterioration prognostic attempts to optimize the fine decision process for predictive maintenance and proactive repair. In civil infrastructure and living environment, damage data mining cannot avoid the imbalanced data issue because of rare unseen events and high quality status by improved operations. For visual inspection, deteriorated class acquired from the surface of concrete and steel components are occasionally imbalanced. From numerous related surveys, we summarize that imbalanced data problems can be categorized into four types; 1) missing range of target and label valuables, 2) majority-minority class imbalance, 3) foreground-background of spatial imbalance, 4) long-tailed class of pixel-wise imbalance. Since 2015, there has been many imbalanced studies using deep learning approaches that includes regression, image classification, object detection, semantic segmentation. However, anomaly detection for imbalanced data is not yet well known. In the study, we highlight one-class anomaly detection application whether anomalous class or not, and demonstrate clear examples on imbalanced vision datasets: wooden, concrete deterioration, and disaster damage. We provide key results on damage vision mining advantage, hypothesizing that the more effective range of positive ratio, the higher accuracy gain of anomaly detection application. Finally, the applicability of the damage learning methods, limitations, and future works are mentioned.

### Automotive Object Detection via Learning Sparse Events by Temporal Dynamics of Spiking Neurons

- **Authors:** Hu Zhang, Luziwei Leng, Kaiwei Che, Qian Liu, Jie Cheng, Qinghai Guo, Jiangxing Liao, Ran Cheng
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2307.12900
- **Pdf link:** https://arxiv.org/pdf/2307.12900
- **Abstract**
  Event-based sensors, with their high temporal resolution (1us) and dynamical range (120dB), have the potential to be deployed in high-speed platforms such as vehicles and drones. However, the highly sparse and fluctuating nature of events poses challenges for conventional object detection techniques based on Artificial Neural Networks (ANNs). In contrast, Spiking Neural Networks (SNNs) are well-suited for representing event-based data due to their inherent temporal dynamics. In particular, we demonstrate that the membrane potential dynamics can modulate network activity upon fluctuating events and strengthen features of sparse input. In addition, the spike-triggered adaptive threshold can stabilize training which further improves network performance. Based on this, we develop an efficient spiking feature pyramid network for event-based object detection. Our proposed SNN outperforms previous SNNs and sophisticated ANNs with attention mechanisms, achieving a mean average precision (map50) of 47.7% on the Gen1 benchmark dataset. This result significantly surpasses the previous best SNN by 9.7% and demonstrates the potential of SNNs for event-based vision. Our model has a concise architecture while maintaining high accuracy and much lower computation cost as a result of sparse computation. Our code will be publicly available.
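The membrane-potential dynamics and spike-triggered adaptive threshold described in the SNN entry above can be sketched with a single leaky integrate-and-fire neuron. The constants and function name here are illustrative, not the paper's configuration:

```python
def lif_adaptive(inputs, tau=0.9, v_th=1.0, th_boost=0.5, th_decay=0.8):
    # leaky integrate-and-fire with a spike-triggered adaptive threshold:
    # each spike raises the threshold, which then decays back toward v_th
    v, th, spikes = 0.0, v_th, []
    for x in inputs:
        v = tau * v + x                        # leaky membrane integration
        if v >= th:
            spikes.append(1)
            v = 0.0                            # hard reset after a spike
            th += th_boost                     # spike-triggered adaptation
        else:
            spikes.append(0)
        th = v_th + th_decay * (th - v_th)     # threshold relaxes to baseline
    return spikes

# a steady drive spikes periodically; the raised threshold delays re-firing
spikes = lif_adaptive([0.6, 0.6, 0.6, 0.6, 0.6, 0.0, 0.0, 0.6])
```

The raised threshold after each spike is what damps runaway firing on bursty event input, which is the stabilizing effect the abstract refers to.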
## Keyword: event camera

### Revisiting Event-based Video Frame Interpolation

- **Authors:** Jiaben Chen, Yichen Zhu, Dongze Lian, Jiaqi Yang, Yifu Wang, Renrui Zhang, Xinhang Liu, Shenhan Qian, Laurent Kneip, Shenghua Gao
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Robotics (cs.RO)
- **Arxiv link:** https://arxiv.org/abs/2307.12558
- **Pdf link:** https://arxiv.org/pdf/2307.12558
- **Abstract**
  Dynamic vision sensors or event cameras provide rich complementary information for video frame interpolation. Existing state-of-the-art methods follow the paradigm of combining both synthesis-based and warping networks. However, few of those methods fully respect the intrinsic characteristics of events streams. Given that event cameras only encode intensity changes and polarity rather than color intensities, estimating optical flow from events is arguably more difficult than from RGB information. We therefore propose to incorporate RGB information in an event-guided optical flow refinement strategy. Moreover, in light of the quasi-continuous nature of the time signals provided by event cameras, we propose a divide-and-conquer strategy in which event-based intermediate frame synthesis happens incrementally in multiple simplified stages rather than in a single, long stage. Extensive experiments on both synthetic and real-world datasets show that these modifications lead to more reliable and realistic intermediate frame results than previous video frame interpolation methods. Our findings underline that a careful consideration of event characteristics such as high temporal density and elevated noise benefits interpolation accuracy.
## Keyword: events camera

There is no result

## Keyword: white balance

There is no result

## Keyword: color contrast

There is no result

## Keyword: AWB

### Pick the Best Pre-trained Model: Towards Transferability Estimation for Medical Image Segmentation

- **Authors:** Yuncheng Yang, Meng Wei, Junjun He, Jie Yang, Jin Ye, Yun Gu
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2307.11958
- **Pdf link:** https://arxiv.org/pdf/2307.11958
- **Abstract**
  Transfer learning is a critical technique in training deep neural networks for the challenging medical image segmentation task that requires enormous resources. With the abundance of medical image data, many research institutions release models trained on various datasets that can form a huge pool of candidate source models to choose from. Hence, it's vital to estimate the source models' transferability (i.e., the ability to generalize across different downstream tasks) for proper and efficient model reuse. To make up for its deficiency when applying transfer learning to medical image segmentation, in this paper, we therefore propose a new Transferability Estimation (TE) method. We first analyze the drawbacks of using the existing TE algorithms for medical image segmentation and then design a source-free TE framework that considers both class consistency and feature variety for better estimation. Extensive experiments show that our method surpasses all current algorithms for transferability estimation in medical image segmentation. Code is available at https://github.com/EndoluminalSurgicalVision-IMR/CCFV

### A Stronger Stitching Algorithm for Fisheye Images based on Deblurring and Registration

- **Authors:** Jing Hao, Jingming Xie, Jinyuan Zhang, Moyun Liu
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2307.11997
- **Pdf link:** https://arxiv.org/pdf/2307.11997
- **Abstract**
  Fisheye lens, which is suitable for panoramic imaging, has the prominent advantage of a large field of view and low cost. However, the fisheye image has a severe geometric distortion which may interfere with the stage of image registration and stitching. Aiming to resolve this drawback, we devise a stronger stitching algorithm for fisheye images by combining the traditional image processing method with deep learning. In the stage of fisheye image correction, we propose the Attention-based Nonlinear Activation Free Network (ANAFNet) to deblur fisheye images corrected by Zhang calibration method. Specifically, ANAFNet adopts the classical single-stage U-shaped architecture based on convolutional neural networks with soft-attention technique and it can restore a sharp image from a blurred image effectively. In the part of image registration, we propose the ORB-FREAK-GMS (OFG), a comprehensive image matching algorithm, to improve the accuracy of image registration. Experimental results demonstrate that panoramic images of superior quality stitching by fisheye images can be obtained through our method.
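The transferability-estimation entry above scores source models by "class consistency" and "feature variety". A toy version of those two ingredients, with invented data and scoring (not the paper's CCFV estimator): reward features that are spread out overall but tightly clustered within each class.

```python
import numpy as np

def toy_transfer_score(feats, labels):
    # class consistency: mean within-class feature variance (lower is better);
    # feature variety: overall feature variance (higher is better)
    classes = np.unique(labels)
    within = np.mean([feats[labels == c].var(axis=0).mean() for c in classes])
    variety = feats.var(axis=0).mean()
    return variety / (within + 1e-8)

# tight, well-separated class clusters -> high score
feats = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [5.1, 5.0]])
labels = np.array([0, 0, 1, 1])
tight = toy_transfer_score(feats, labels)

# same points but with overlapping class assignments -> low score
feats2 = np.array([[0.0, 0.0], [5.0, 5.0], [0.1, 0.0], [5.1, 5.0]])
labels2 = np.array([0, 0, 1, 1])
loose = toy_transfer_score(feats2, labels2)
```

A higher score on the tight clustering reflects the intuition that features which already separate the target classes should transfer well.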
## Keyword: ISP

### LoLep: Single-View View Synthesis with Locally-Learned Planes and Self-Attention Occlusion Inference

- **Authors:** Cong Wang, Yu-Ping Wang, Dinesh Manocha
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2307.12217
- **Pdf link:** https://arxiv.org/pdf/2307.12217
- **Abstract**
  We propose a novel method, LoLep, which regresses Locally-Learned planes from a single RGB image to represent scenes accurately, thus generating better novel views. Without the depth information, regressing appropriate plane locations is a challenging problem. To solve this issue, we pre-partition the disparity space into bins and design a disparity sampler to regress local offsets for multiple planes in each bin. However, only using such a sampler makes the network not convergent; we further propose two optimizing strategies that combine with different disparity distributions of datasets and propose an occlusion-aware reprojection loss as a simple yet effective geometric supervision technique. We also introduce a self-attention mechanism to improve occlusion inference and present a Block-Sampling Self-Attention (BS-SA) module to address the problem of applying self-attention to large feature maps. We demonstrate the effectiveness of our approach and generate state-of-the-art results on different datasets. Compared to MINE, our approach has an LPIPS reduction of 4.8%-9.0% and an RV reduction of 83.1%-84.7%. We also evaluate the performance on real-world images and demonstrate the benefits.
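The "pre-partition the disparity space into bins, then regress a local offset per bin" scheme in the LoLep entry above can be sketched in a few lines. The bin count, offset range, and function name are illustrative assumptions, not the paper's exact parameterization:

```python
import numpy as np

def plane_disparities(d_min, d_max, n_bins, offsets):
    # split [d_min, d_max] into equal bins and place one plane per bin at
    # (bin center + local offset), offsets given in fractions of a bin width
    edges = np.linspace(d_min, d_max, n_bins + 1)
    centers = 0.5 * (edges[:-1] + edges[1:])
    width = (d_max - d_min) / n_bins
    return centers + np.asarray(offsets) * width

# 4 bins over disparity [0, 1]; offsets in [-0.5, 0.5] keep planes in-bin
disp = plane_disparities(0.0, 1.0, 4, [0.0, 0.25, -0.25, 0.0])
```

Bounding each offset to half a bin width keeps the planes ordered and prevents two planes from collapsing into the same bin.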
### PG-RCNN: Semantic Surface Point Generation for 3D Object Detection

- **Authors:** Inyong Koo, Inyoung Lee, Se-Ho Kim, Hee-Seon Kim, Woo-jin Jeon, Changick Kim
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2307.12637
- **Pdf link:** https://arxiv.org/pdf/2307.12637
- **Abstract**
  One of the main challenges in LiDAR-based 3D object detection is that the sensors often fail to capture the complete spatial information about the objects due to long distance and occlusion. Two-stage detectors with point cloud completion approaches tackle this problem by adding more points to the regions of interest (RoIs) with a pre-trained network. However, these methods generate dense point clouds of objects for all region proposals, assuming that objects always exist in the RoIs. This leads to the indiscriminate point generation for incorrect proposals as well. Motivated by this, we propose Point Generation R-CNN (PG-RCNN), a novel end-to-end detector that generates semantic surface points of foreground objects for accurate detection. Our method uses a jointly trained RoI point generation module to process the contextual information of RoIs and estimate the complete shape and displacement of foreground objects. For every generated point, PG-RCNN assigns a semantic feature that indicates the estimated foreground probability. Extensive experiments show that the point clouds generated by our method provide geometrically and semantically rich information for refining false positive and misaligned proposals. PG-RCNN achieves competitive performance on the KITTI benchmark, with significantly fewer parameters than state-of-the-art models. The code is available at https://github.com/quotation2520/PG-RCNN.
### Volcanic ash delimitation using Artificial Intelligence based on Pix2Pix

- **Authors:** Christian Carrillo, Gissela Torres, Christian Mejia-Escobar
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2307.12970
- **Pdf link:** https://arxiv.org/pdf/2307.12970
- **Abstract**
  Volcanic eruptions emit ash that can be harmful to human health and cause damage to infrastructure, economic activities and the environment. The delimitation of ash clouds allows to know their behavior and dispersion, which helps in the prevention and mitigation of this phenomenon. Traditional methods take advantage of specialized software programs to process the bands or channels that compose the satellite images. However, their use is limited to experts and demands a lot of time and significant computational resources. In recent years, Artificial Intelligence has been a milestone in the computational treatment of complex problems in different areas. In particular, Deep Learning techniques allow automatic, fast and accurate processing of digital images. The present work proposes the use of the Pix2Pix model, a type of generative adversarial network that, once trained, learns the mapping of input images to output images. The architecture of such a network consisting of a generator and a discriminator provides the versatility needed to produce black and white ash cloud images from multispectral satellite images. The evaluation of the model, based on loss and accuracy plots, a confusion matrix, and visual inspection, indicates a satisfactory solution for accurate ash cloud delineation, applicable in any area of the world and becomes a useful tool in risk management.

## Keyword: image signal processing

There is no result

## Keyword: image signal process

There is no result

## Keyword: compression

### Digital Modeling on Large Kernel Metamaterial Neural Network

- **Authors:** Quan Liu, Hanyu Zheng, Brandon T. Swartz, Ho hin Lee, Zuhayr Asad, Ivan Kravchenko, Jason G. Valentine, Yuankai Huo
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Image and Video Processing (eess.IV)
- **Arxiv link:** https://arxiv.org/abs/2307.11862
- **Pdf link:** https://arxiv.org/pdf/2307.11862
- **Abstract**
  Deep neural networks (DNNs) utilized recently are physically deployed with computational units (e.g., CPUs and GPUs). Such a design might lead to a heavy computational burden, significant latency, and intensive power consumption, which are critical limitations in applications such as the Internet of Things (IoT), edge computing, and the usage of drones. Recent advances in optical computational units (e.g., metamaterial) have shed light on energy-free and light-speed neural networks. However, the digital design of the metamaterial neural network (MNN) is fundamentally limited by its physical limitations, such as precision, noise, and bandwidth during fabrication. Moreover, the unique advantages of MNN's (e.g., light-speed computation) are not fully explored via standard 3x3 convolution kernels. In this paper, we propose a novel large kernel metamaterial neural network (LMNN) that maximizes the digital capacity of the state-of-the-art (SOTA) MNN with model re-parametrization and network compression, while also considering the optical limitation explicitly. The new digital learning scheme can maximize the learning capacity of MNN while modeling the physical restrictions of meta-optic. With the proposed LMNN, the computation cost of the convolutional front-end can be offloaded into fabricated optical hardware. The experimental results on two publicly available datasets demonstrate that the optimized hybrid design improved classification accuracy while reducing computational latency. The development of the proposed LMNN is a promising step towards the ultimate goal of energy-free and light-speed AI.
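Network compression, recurring in this keyword section, is often realized in practice through post-training quantization. A minimal sketch of symmetric per-tensor int8 quantization (the weight values and function names are invented for illustration, and real toolchains add calibration and per-channel scales on top of this):

```python
import numpy as np

def quantize_int8(w):
    # symmetric per-tensor quantization: scale so the largest |w| maps to 127
    scale = np.abs(w).max() / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    # recover approximate float weights to inspect the quantization error
    return q.astype(np.float32) * scale

w = np.array([-1.27, 0.0, 0.5, 1.27], dtype=np.float32)
q, scale = quantize_int8(w)
w_hat = dequantize(q, scale)
```

Storing `q` instead of `w` cuts weight memory by 4x, which is the kind of saving the compression surveys in this section quantify.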
### Model Compression Methods for YOLOv5: A Review - **Authors:** Mohammad Jani, Jamil Fayyad, Younes Al-Younes, Homayoun Najjaran - **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Machine Learning (cs.LG); Neural and Evolutionary Computing (cs.NE) - **Arxiv link:** https://arxiv.org/abs/2307.11904 - **Pdf link:** https://arxiv.org/pdf/2307.11904 - **Abstract** Over the past few years, extensive research has been devoted to enhancing YOLO object detectors. Since its introduction, eight major versions of YOLO have been introduced with the purpose of improving its accuracy and efficiency. While the evident merits of YOLO have yielded to its extensive use in many areas, deploying it on resource-limited devices poses challenges. To address this issue, various neural network compression methods have been developed, which fall under three main categories, namely network pruning, quantization, and knowledge distillation. The fruitful outcomes of utilizing model compression methods, such as lowering memory usage and inference time, make them favorable, if not necessary, for deploying large neural networks on hardware-constrained edge devices. In this review paper, our focus is on pruning and quantization due to their comparative modularity. We categorize them and analyze the practical results of applying those methods to YOLOv5. By doing so, we identify gaps in adapting pruning and quantization for compressing YOLOv5, and provide future directions in this area for further exploration. Among several versions of YOLO, we specifically choose YOLOv5 for its excellent trade-off between recency and popularity in literature. This is the first specific review paper that surveys pruning and quantization methods from an implementation point of view on YOLOv5. Our study is also extendable to newer versions of YOLO as implementing them on resource-limited devices poses the same challenges that persist even today. 
This paper targets those interested in the practical deployment of model compression methods on YOLOv5, and in exploring different compression techniques that can be used for subsequent versions of YOLO. ## Keyword: RAW ### Pick the Best Pre-trained Model: Towards Transferability Estimation for Medical Image Segmentation - **Authors:** Yuncheng Yang, Meng Wei, Junjun He, Jie Yang, Jin Ye, Yun Gu - **Subjects:** Computer Vision and Pattern Recognition (cs.CV) - **Arxiv link:** https://arxiv.org/abs/2307.11958 - **Pdf link:** https://arxiv.org/pdf/2307.11958 - **Abstract** Transfer learning is a critical technique in training deep neural networks for the challenging medical image segmentation task that requires enormous resources. With the abundance of medical image data, many research institutions release models trained on various datasets that can form a huge pool of candidate source models to choose from. Hence, it's vital to estimate the source models' transferability (i.e., the ability to generalize across different downstream tasks) for proper and efficient model reuse. To make up for its deficiency when applying transfer learning to medical image segmentation, in this paper, we therefore propose a new Transferability Estimation (TE) method. We first analyze the drawbacks of using the existing TE algorithms for medical image segmentation and then design a source-free TE framework that considers both class consistency and feature variety for better estimation. Extensive experiments show that our method surpasses all current algorithms for transferability estimation in medical image segmentation. Code is available at https://github.com/EndoluminalSurgicalVision-IMR/CCFV ### Learning Vision-and-Language Navigation from YouTube Videos - **Authors:** Kunyang Lin, Peihao Chen, Diwei Huang, Thomas H. 
Li, Mingkui Tan, Chuang Gan - **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Computation and Language (cs.CL) - **Arxiv link:** https://arxiv.org/abs/2307.11984 - **Pdf link:** https://arxiv.org/pdf/2307.11984 - **Abstract** Vision-and-language navigation (VLN) requires an embodied agent to navigate in realistic 3D environments using natural language instructions. Existing VLN methods suffer from training on small-scale environments or unreasonable path-instruction datasets, limiting the generalization to unseen environments. There are massive house tour videos on YouTube, providing abundant real navigation experiences and layout information. However, these videos have not been explored for VLN before. In this paper, we propose to learn an agent from these videos by creating a large-scale dataset which comprises reasonable path-instruction pairs from house tour videos and pre-training the agent on it. To achieve this, we have to tackle the challenges of automatically constructing path-instruction pairs and exploiting real layout knowledge from raw and unlabeled videos. To address these, we first leverage an entropy-based method to construct the nodes of a path trajectory. Then, we propose an action-aware generator for generating instructions from unlabeled trajectories. Last, we devise a trajectory judgment pretext task to encourage the agent to mine the layout knowledge. Experimental results show that our method achieves state-of-the-art performance on two popular benchmarks (R2R and REVERIE). 
Code is available at https://github.com/JeremyLinky/YouTube-VLN ### A Stronger Stitching Algorithm for Fisheye Images based on Deblurring and Registration - **Authors:** Jing Hao, Jingming Xie, Jinyuan Zhang, Moyun Liu - **Subjects:** Computer Vision and Pattern Recognition (cs.CV) - **Arxiv link:** https://arxiv.org/abs/2307.11997 - **Pdf link:** https://arxiv.org/pdf/2307.11997 - **Abstract** Fisheye lens, which is suitable for panoramic imaging, has the prominent advantage of a large field of view and low cost. However, the fisheye image has a severe geometric distortion which may interfere with the stage of image registration and stitching. Aiming to resolve this drawback, we devise a stronger stitching algorithm for fisheye images by combining the traditional image processing method with deep learning. In the stage of fisheye image correction, we propose the Attention-based Nonlinear Activation Free Network (ANAFNet) to deblur fisheye images corrected by Zhang calibration method. Specifically, ANAFNet adopts the classical single-stage U-shaped architecture based on convolutional neural networks with soft-attention technique and it can restore a sharp image from a blurred image effectively. In the part of image registration, we propose the ORB-FREAK-GMS (OFG), a comprehensive image matching algorithm, to improve the accuracy of image registration. Experimental results demonstrate that panoramic images of superior quality stitching by fisheye images can be obtained through our method. 
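The registration stage described in the fisheye-stitching abstract above ultimately comes down to estimating a geometric transform between overlapping views from matched keypoints. As a generic, hedged sketch only — plain NumPy, not the paper's ORB-FREAK-GMS pipeline — the classical direct linear transform (DLT) step that recovers a homography from point correspondences can be written as:

```python
import numpy as np

def estimate_homography(src, dst):
    """Estimate a 3x3 homography mapping src -> dst via the DLT.

    src, dst: (N, 2) arrays of matched keypoint coordinates, N >= 4.
    Assumes exact (or pre-filtered) correspondences; a real stitching
    pipeline would wrap this in RANSAC to reject outlier matches.
    """
    src = np.asarray(src, dtype=float)
    dst = np.asarray(dst, dtype=float)
    assert src.shape == dst.shape and src.shape[0] >= 4
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        # Each correspondence contributes two linear constraints on H.
        rows.append([x, y, 1, 0, 0, 0, -u * x, -u * y, -u])
        rows.append([0, 0, 0, x, y, 1, -v * x, -v * y, -v])
    A = np.asarray(rows)
    # H (flattened) is the null vector of A: the right singular vector
    # associated with the smallest singular value.
    _, _, vt = np.linalg.svd(A)
    H = vt[-1].reshape(3, 3)
    return H / H[2, 2]  # normalise so H[2, 2] == 1
```

With four or more inlier matches this recovers the transform used to warp one fisheye view onto the other before blending; the real algorithm in the paper differs in how matches are produced and filtered.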
### On the Connection between Pre-training Data Diversity and Fine-tuning Robustness - **Authors:** Vivek Ramanujan, Thao Nguyen, Sewoong Oh, Ludwig Schmidt, Ali Farhadi - **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Machine Learning (cs.LG) - **Arxiv link:** https://arxiv.org/abs/2307.12532 - **Pdf link:** https://arxiv.org/pdf/2307.12532 - **Abstract** Pre-training has been widely adopted in deep learning to improve model performance, especially when the training data for a target task is limited. In our work, we seek to understand the implications of this training strategy on the generalization properties of downstream models. More specifically, we ask the following question: how do properties of the pre-training distribution affect the robustness of a fine-tuned model? The properties we explore include the label space, label semantics, image diversity, data domains, and data quantity of the pre-training distribution. We find that the primary factor influencing downstream effective robustness (Taori et al., 2020) is data quantity, while other factors have limited significance. For example, reducing the number of ImageNet pre-training classes by 4x while increasing the number of images per class by 4x (that is, keeping total data quantity fixed) does not impact the robustness of fine-tuned models. We demonstrate our findings on pre-training distributions drawn from various natural and synthetic data sources, primarily using the iWildCam-WILDS distribution shift as a test for downstream robustness. ## Keyword: raw image There is no result
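Two of the abstracts above (transferability estimation for segmentation, pre-training diversity vs. fine-tuning robustness) revolve around scoring how well pre-trained features will serve a downstream task. Purely as an illustration — this is a classical Fisher-style scatter ratio, not the CC-FV method from the transferability paper — a crude feature-separability score over extracted features might look like:

```python
import numpy as np

def separability_score(features, labels):
    """Crude transferability proxy: ratio of between-class to
    within-class scatter of pre-trained features on the target task.
    Higher means the classes are more linearly separable in feature space.
    """
    features = np.asarray(features, dtype=float)
    labels = np.asarray(labels)
    mu = features.mean(axis=0)        # global mean of all features
    sb = 0.0                          # between-class scatter
    sw = 0.0                          # within-class scatter
    for c in np.unique(labels):
        fc = features[labels == c]
        mc = fc.mean(axis=0)          # class mean
        sb += len(fc) * np.sum((mc - mu) ** 2)
        sw += np.sum((fc - mc) ** 2)
    return sb / (sw + 1e-12)
```

One could rank candidate source models by this score computed on their frozen features; the published TE methods are considerably more sophisticated, but the underlying intuition (separable features transfer better) is the same.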
process
new submissions for tue jul keyword events discovering spatio temporal rationales for video question answering authors yicong li junbin xiao chun feng xiang wang tat seng chua subjects computer vision and pattern recognition cs cv arxiv link pdf link abstract this paper strives to solve complex video question answering videoqa which features long video containing multiple objects and events at different time to tackle the challenge we highlight the importance of identifying question critical temporal moments and spatial objects from the vast amount of video content towards this we propose a spatio temporal rationalization str a differentiable selection module that adaptively collects question critical moments and objects using cross modal interaction the discovered video moments and objects are then served as grounded rationales to support answer reasoning based on str we further propose transtr a transformer style neural network architecture that takes str as the core and additionally underscores a novel answer interaction mechanism to coordinate str for answer decoding experiments on four datasets show that transtr achieves new state of the art sota especially on next qa and causal vidqa which feature complex videoqa it significantly surpasses the previous sota by and respectively we then conduct extensive studies to verify the importance of str as well as the proposed answer interaction mechanism with the success of transtr and our comprehensive analysis we hope this work can spark more future efforts in complex videoqa code will be released at towards video anomaly retrieval from video anomaly detection new benchmarks and model authors peng wu jing liu xiangteng he yuxin peng peng wang yanning zhang subjects computer vision and pattern recognition cs cv artificial intelligence cs ai arxiv link pdf link abstract video anomaly detection vad has been paid increasing attention due to its potential applications its current dominant tasks focus on online detecting 
anomalies at the frame level which can be roughly interpreted as the binary or multiple event classification however such a setup that builds relationships between complicated anomalous events and single labels e g vandalism is superficial since single labels are deficient to characterize anomalous events in reality users tend to search a specific video rather than a series of approximate videos therefore retrieving anomalous events using detailed descriptions is practical and positive but few researches focus on this in this context we propose a novel task called video anomaly retrieval var which aims to pragmatically retrieve relevant anomalous videos by cross modalities e g language descriptions and synchronous audios unlike the current video retrieval where videos are assumed to be temporally well trimmed with short duration var is devised to retrieve long untrimmed videos which may be partially relevant to the given query to achieve this we present two large scale var benchmarks ucfcrime ar and xdviolence ar constructed on top of prevalent anomaly datasets meanwhile we design a model called anomaly led alignment network alan for var in alan we propose an anomaly led sampling to focus on key segments in long untrimmed videos then we introduce an efficient pretext task to enhance semantic associations between video text fine grained representations besides we leverage two complementary alignments to further match cross modal contents experimental results on two benchmarks reveal the challenges of var task and also demonstrate the advantages of our tailored method revisiting event based video frame interpolation authors jiaben chen yichen zhu dongze lian jiaqi yang yifu wang renrui zhang xinhang liu shenhan qian laurent kneip shenghua gao subjects computer vision and pattern recognition cs cv robotics cs ro arxiv link pdf link abstract dynamic vision sensors or event cameras provide rich complementary information for video frame interpolation existing state of 
the art methods follow the paradigm of combining both synthesis based and warping networks however few of those methods fully respect the intrinsic characteristics of events streams given that event cameras only encode intensity changes and polarity rather than color intensities estimating optical flow from events is arguably more difficult than from rgb information we therefore propose to incorporate rgb information in an event guided optical flow refinement strategy moreover in light of the quasi continuous nature of the time signals provided by event cameras we propose a divide and conquer strategy in which event based intermediate frame synthesis happens incrementally in multiple simplified stages rather than in a single long stage extensive experiments on both synthetic and real world datasets show that these modifications lead to more reliable and realistic intermediate frame results than previous video frame interpolation methods our findings underline that a careful consideration of event characteristics such as high temporal density and elevated noise benefits interpolation accuracy damage vision mining opportunity for imbalanced anomaly detection authors takato yasuno subjects computer vision and pattern recognition cs cv arxiv link pdf link abstract in past decade previous balanced datasets have been used to advance algorithms for classification object detection semantic segmentation and anomaly detection in industrial applications specifically for condition based maintenance automating visual inspection is crucial to ensure high quality deterioration prognostic attempts to optimize the fine decision process for predictive maintenance and proactive repair in civil infrastructure and living environment damage data mining cannot avoid the imbalanced data issue because of rare unseen events and high quality status by improved operations for visual inspection deteriorated class acquired from the surface of concrete and steel components are occasionally 
imbalanced from numerous related surveys we summarize that imbalanced data problems can be categorized into four types missing range of target and label valuables majority minority class imbalance foreground background of spatial imbalance long tailed class of pixel wise imbalance since there has been many imbalanced studies using deep learning approaches that includes regression image classification object detection semantic segmentation however anomaly detection for imbalanced data is not yet well known in the study we highlight one class anomaly detection application whether anomalous class or not and demonstrate clear examples on imbalanced vision datasets wooden concrete deterioration and disaster damage we provide key results on damage vision mining advantage hypothesizing that the more effective range of positive ratio the higher accuracy gain of anomaly detection application finally the applicability of the damage learning methods limitations and future works are mentioned automotive object detection via learning sparse events by temporal dynamics of spiking neurons authors hu zhang luziwei leng kaiwei che qian liu jie cheng qinghai guo jiangxing liao ran cheng subjects computer vision and pattern recognition cs cv arxiv link pdf link abstract event based sensors with their high temporal resolution and dynamical range have the potential to be deployed in high speed platforms such as vehicles and drones however the highly sparse and fluctuating nature of events poses challenges for conventional object detection techniques based on artificial neural networks anns in contrast spiking neural networks snns are well suited for representing event based data due to their inherent temporal dynamics in particular we demonstrate that the membrane potential dynamics can modulate network activity upon fluctuating events and strengthen features of sparse input in addition the spike triggered adaptive threshold can stabilize training which further improves network 
performance based on this we develop an efficient spiking feature pyramid network for event based object detection our proposed snn outperforms previous snns and sophisticated anns with attention mechanisms achieving a mean average precision of on the benchmark dataset this result significantly surpasses the previous best snn by and demonstrates the potential of snns for event based vision our model has a concise architecture while maintaining high accuracy and much lower computation cost as a result of sparse computation our code will be publicly available keyword event camera revisiting event based video frame interpolation authors jiaben chen yichen zhu dongze lian jiaqi yang yifu wang renrui zhang xinhang liu shenhan qian laurent kneip shenghua gao subjects computer vision and pattern recognition cs cv robotics cs ro arxiv link pdf link abstract dynamic vision sensors or event cameras provide rich complementary information for video frame interpolation existing state of the art methods follow the paradigm of combining both synthesis based and warping networks however few of those methods fully respect the intrinsic characteristics of events streams given that event cameras only encode intensity changes and polarity rather than color intensities estimating optical flow from events is arguably more difficult than from rgb information we therefore propose to incorporate rgb information in an event guided optical flow refinement strategy moreover in light of the quasi continuous nature of the time signals provided by event cameras we propose a divide and conquer strategy in which event based intermediate frame synthesis happens incrementally in multiple simplified stages rather than in a single long stage extensive experiments on both synthetic and real world datasets show that these modifications lead to more reliable and realistic intermediate frame results than previous video frame interpolation methods our findings underline that a careful consideration of 
event characteristics such as high temporal density and elevated noise benefits interpolation accuracy keyword events camera there is no result keyword white balance there is no result keyword color contrast there is no result keyword awb pick the best pre trained model towards transferability estimation for medical image segmentation authors yuncheng yang meng wei junjun he jie yang jin ye yun gu subjects computer vision and pattern recognition cs cv arxiv link pdf link abstract transfer learning is a critical technique in training deep neural networks for the challenging medical image segmentation task that requires enormous resources with the abundance of medical image data many research institutions release models trained on various datasets that can form a huge pool of candidate source models to choose from hence it s vital to estimate the source models transferability i e the ability to generalize across different downstream tasks for proper and efficient model reuse to make up for its deficiency when applying transfer learning to medical image segmentation in this paper we therefore propose a new transferability estimation te method we first analyze the drawbacks of using the existing te algorithms for medical image segmentation and then design a source free te framework that considers both class consistency and feature variety for better estimation extensive experiments show that our method surpasses all current algorithms for transferability estimation in medical image segmentation code is available at a stronger stitching algorithm for fisheye images based on deblurring and registration authors jing hao jingming xie jinyuan zhang moyun liu subjects computer vision and pattern recognition cs cv arxiv link pdf link abstract fisheye lens which is suitable for panoramic imaging has the prominent advantage of a large field of view and low cost however the fisheye image has a severe geometric distortion which may interfere with the stage of image registration 
and stitching aiming to resolve this drawback we devise a stronger stitching algorithm for fisheye images by combining the traditional image processing method with deep learning in the stage of fisheye image correction we propose the attention based nonlinear activation free network anafnet to deblur fisheye images corrected by zhang calibration method specifically anafnet adopts the classical single stage u shaped architecture based on convolutional neural networks with soft attention technique and it can restore a sharp image from a blurred image effectively in the part of image registration we propose the orb freak gms ofg a comprehensive image matching algorithm to improve the accuracy of image registration experimental results demonstrate that panoramic images of superior quality stitching by fisheye images can be obtained through our method keyword isp lolep single view view synthesis with locally learned planes and self attention occlusion inference authors cong wang yu ping wang dinesh manocha subjects computer vision and pattern recognition cs cv arxiv link pdf link abstract we propose a novel method lolep which regresses locally learned planes from a single rgb image to represent scenes accurately thus generating better novel views without the depth information regressing appropriate plane locations is a challenging problem to solve this issue we pre partition the disparity space into bins and design a disparity sampler to regress local offsets for multiple planes in each bin however only using such a sampler makes the network not convergent we further propose two optimizing strategies that combine with different disparity distributions of datasets and propose an occlusion aware reprojection loss as a simple yet effective geometric supervision technique we also introduce a self attention mechanism to improve occlusion inference and present a block sampling self attention bs sa module to address the problem of applying self attention to large feature maps 
we demonstrate the effectiveness of our approach and generate state of the art results on different datasets compared to mine our approach has an lpips reduction of and an rv reduction of we also evaluate the performance on real world images and demonstrate the benefits pg rcnn semantic surface point generation for object detection authors inyong koo inyoung lee se ho kim hee seon kim woo jin jeon changick kim subjects computer vision and pattern recognition cs cv arxiv link pdf link abstract one of the main challenges in lidar based object detection is that the sensors often fail to capture the complete spatial information about the objects due to long distance and occlusion two stage detectors with point cloud completion approaches tackle this problem by adding more points to the regions of interest rois with a pre trained network however these methods generate dense point clouds of objects for all region proposals assuming that objects always exist in the rois this leads to the indiscriminate point generation for incorrect proposals as well motivated by this we propose point generation r cnn pg rcnn a novel end to end detector that generates semantic surface points of foreground objects for accurate detection our method uses a jointly trained roi point generation module to process the contextual information of rois and estimate the complete shape and displacement of foreground objects for every generated point pg rcnn assigns a semantic feature that indicates the estimated foreground probability extensive experiments show that the point clouds generated by our method provide geometrically and semantically rich information for refining false positive and misaligned proposals pg rcnn achieves competitive performance on the kitti benchmark with significantly fewer parameters than state of the art models the code is available at volcanic ash delimitation using artificial intelligence based on authors christian carrillo gissela torres christian mejia escobar subjects 
computer vision and pattern recognition cs cv arxiv link pdf link abstract volcanic eruptions emit ash that can be harmful to human health and cause damage to infrastructure economic activities and the environment the delimitation of ash clouds allows to know their behavior and dispersion which helps in the prevention and mitigation of this phenomenon traditional methods take advantage of specialized software programs to process the bands or channels that compose the satellite images however their use is limited to experts and demands a lot of time and significant computational resources in recent years artificial intelligence has been a milestone in the computational treatment of complex problems in different areas in particular deep learning techniques allow automatic fast and accurate processing of digital images the present work proposes the use of the model a type of generative adversarial network that once trained learns the mapping of input images to output images the architecture of such a network consisting of a generator and a discriminator provides the versatility needed to produce black and white ash cloud images from multispectral satellite images the evaluation of the model based on loss and accuracy plots a confusion matrix and visual inspection indicates a satisfactory solution for accurate ash cloud delineation applicable in any area of the world and becomes a useful tool in risk management keyword image signal processing there is no result keyword image signal process there is no result keyword compression digital modeling on large kernel metamaterial neural network authors quan liu hanyu zheng brandon t swartz ho hin lee zuhayr asad ivan kravchenko jason g valentine yuankai huo subjects computer vision and pattern recognition cs cv image and video processing eess iv arxiv link pdf link abstract deep neural networks dnns utilized recently are physically deployed with computational units e g cpus and gpus such a design might lead to a heavy 
computational burden significant latency and intensive power consumption which are critical limitations in applications such as the internet of things iot edge computing and the usage of drones recent advances in optical computational units e g metamaterial have shed light on energy free and light speed neural networks however the digital design of the metamaterial neural network mnn is fundamentally limited by its physical limitations such as precision noise and bandwidth during fabrication moreover the unique advantages of mnn s e g light speed computation are not fully explored via standard convolution kernels in this paper we propose a novel large kernel metamaterial neural network lmnn that maximizes the digital capacity of the state of the art sota mnn with model re parametrization and network compression while also considering the optical limitation explicitly the new digital learning scheme can maximize the learning capacity of mnn while modeling the physical restrictions of meta optic with the proposed lmnn the computation cost of the convolutional front end can be offloaded into fabricated optical hardware the experimental results on two publicly available datasets demonstrate that the optimized hybrid design improved classification accuracy while reducing computational latency the development of the proposed lmnn is a promising step towards the ultimate goal of energy free and light speed ai model compression methods for a review authors mohammad jani jamil fayyad younes al younes homayoun najjaran subjects computer vision and pattern recognition cs cv machine learning cs lg neural and evolutionary computing cs ne arxiv link pdf link abstract over the past few years extensive research has been devoted to enhancing yolo object detectors since its introduction eight major versions of yolo have been introduced with the purpose of improving its accuracy and efficiency while the evident merits of yolo have yielded to its extensive use in many areas deploying 
it on resource limited devices poses challenges to address this issue various neural network compression methods have been developed which fall under three main categories namely network pruning quantization and knowledge distillation the fruitful outcomes of utilizing model compression methods such as lowering memory usage and inference time make them favorable if not necessary for deploying large neural networks on hardware constrained edge devices in this review paper our focus is on pruning and quantization due to their comparative modularity we categorize them and analyze the practical results of applying those methods to by doing so we identify gaps in adapting pruning and quantization for compressing and provide future directions in this area for further exploration among several versions of yolo we specifically choose for its excellent trade off between recency and popularity in literature this is the first specific review paper that surveys pruning and quantization methods from an implementation point of view on our study is also extendable to newer versions of yolo as implementing them on resource limited devices poses the same challenges that persist even today this paper targets those interested in the practical deployment of model compression methods on and in exploring different compression techniques that can be used for subsequent versions of yolo keyword raw pick the best pre trained model towards transferability estimation for medical image segmentation authors yuncheng yang meng wei junjun he jie yang jin ye yun gu subjects computer vision and pattern recognition cs cv arxiv link pdf link abstract transfer learning is a critical technique in training deep neural networks for the challenging medical image segmentation task that requires enormous resources with the abundance of medical image data many research institutions release models trained on various datasets that can form a huge pool of candidate source models to choose from hence it s vital 
to estimate the source models transferability i e the ability to generalize across different downstream tasks for proper and efficient model reuse to make up for its deficiency when applying transfer learning to medical image segmentation in this paper we therefore propose a new transferability estimation te method we first analyze the drawbacks of using the existing te algorithms for medical image segmentation and then design a source free te framework that considers both class consistency and feature variety for better estimation extensive experiments show that our method surpasses all current algorithms for transferability estimation in medical image segmentation code is available at learning vision and language navigation from youtube videos authors kunyang lin peihao chen diwei huang thomas h li mingkui tan chuang gan subjects computer vision and pattern recognition cs cv computation and language cs cl arxiv link pdf link abstract vision and language navigation vln requires an embodied agent to navigate in realistic environments using natural language instructions existing vln methods suffer from training on small scale environments or unreasonable path instruction datasets limiting the generalization to unseen environments there are massive house tour videos on youtube providing abundant real navigation experiences and layout information however these videos have not been explored for vln before in this paper we propose to learn an agent from these videos by creating a large scale dataset which comprises reasonable path instruction pairs from house tour videos and pre training the agent on it to achieve this we have to tackle the challenges of automatically constructing path instruction pairs and exploiting real layout knowledge from raw and unlabeled videos to address these we first leverage an entropy based method to construct the nodes of a path trajectory then we propose an action aware generator for generating instructions from unlabeled trajectories 
last we devise a trajectory judgment pretext task to encourage the agent to mine the layout knowledge experimental results show that our method achieves state of the art performance on two popular benchmarks and reverie code is available at a stronger stitching algorithm for fisheye images based on deblurring and registration authors jing hao jingming xie jinyuan zhang moyun liu subjects computer vision and pattern recognition cs cv arxiv link pdf link abstract fisheye lens which is suitable for panoramic imaging has the prominent advantage of a large field of view and low cost however the fisheye image has a severe geometric distortion which may interfere with the stage of image registration and stitching aiming to resolve this drawback we devise a stronger stitching algorithm for fisheye images by combining the traditional image processing method with deep learning in the stage of fisheye image correction we propose the attention based nonlinear activation free network anafnet to deblur fisheye images corrected by zhang calibration method specifically anafnet adopts the classical single stage u shaped architecture based on convolutional neural networks with soft attention technique and it can restore a sharp image from a blurred image effectively in the part of image registration we propose the orb freak gms ofg a comprehensive image matching algorithm to improve the accuracy of image registration experimental results demonstrate that panoramic images of superior quality stitching by fisheye images can be obtained through our method on the connection between pre training data diversity and fine tuning robustness authors vivek ramanujan thao nguyen sewoong oh ludwig schmidt ali farhadi subjects computer vision and pattern recognition cs cv machine learning cs lg arxiv link pdf link abstract pre training has been widely adopted in deep learning to improve model performance especially when the training data for a target task is limited in our work we seek to 
understand the implications of this training strategy on the generalization properties of downstream models more specifically we ask the following question how do properties of the pre training distribution affect the robustness of a fine tuned model the properties we explore include the label space label semantics image diversity data domains and data quantity of the pre training distribution we find that the primary factor influencing downstream effective robustness taori et al is data quantity while other factors have limited significance for example reducing the number of imagenet pre training classes by while increasing the number of images per class by that is keeping total data quantity fixed does not impact the robustness of fine tuned models we demonstrate our findings on pre training distributions drawn from various natural and synthetic data sources primarily using the iwildcam wilds distribution shift as a test for downstream robustness keyword raw image there is no result
1
86,788
8,050,350,080
IssuesEvent
2018-08-01 13:13:22
mathematicalthinking/encompass
https://api.github.com/repos/mathematicalthinking/encompass
closed
[ENC-386] Image Tagging
Testing improvement
"John Hopkins (jrh327) actually worked on this issue extensively (and I believe there was a related JIRA issue, but I can't find it). So I am creating a new one. From using this feature in-app, it looks like he wasn't able to round it out before he left. You can name and tag portions of images but can't resize the area of the image you wish to tag (even though the cursor changes to indicate the ability). It could also be that he did finish, but changes made thereafter affected the feature, as we currently have no test for resizing image tags. So we should add a test as well I think."
1.0
[ENC-386] Image Tagging - "John Hopkins (jrh327) actually worked on this issue extensively (and I believe there was a related JIRA issue, but I can't find it). So I am creating a new one. From using this feature in-app, it looks like he wasn't able to round it out before he left. You can name and tag portions of images but can't resize the area of the image you wish to tag (even though the cursor changes to indicate the ability). It could also be that he did finish, but changes made thereafter affected the feature, as we currently have no test for resizing image tags. So we should add a test as well I think."
non_process
image tagging john hopkins actually worked on this issue extensively and i believe there was a related jira issue but i can t find it so i am creating a new one from using this feature in app it looks like he wasn t able to round it out before he left you can name and tag portions of images but can t resize the area of the image you wish to tag even though the cursor changes to indicate the ability it could also be that he did finish but changes made thereafter affected the feature as we currently have no test for resizing image tags so we should add a test as well i think
0
12,612
15,013,721,868
IssuesEvent
2021-02-01 05:09:48
threefoldtech/js-sdk
https://api.github.com/repos/threefoldtech/js-sdk
closed
Deployed documentserver does not respond to local onlyoffice desktopapp
priority_major process_duplicate type_bug
VDC: jetserthing Deployed the document server marketplace chatflow and when connected to it it sais everything is okay: ![image](https://user-images.githubusercontent.com/13766992/106119797-06e70680-616f-11eb-9bd1-356b71c82ad2.png) When I try to connect with a local installed onlyoffice desktop it give me a timout ![image](https://user-images.githubusercontent.com/13766992/106119906-29791f80-616f-11eb-9a9b-bbf4556aa154.png)
1.0
Deployed documentserver does not respond to local onlyoffice desktopapp - VDC: jetserthing Deployed the document server marketplace chatflow and when connected to it it sais everything is okay: ![image](https://user-images.githubusercontent.com/13766992/106119797-06e70680-616f-11eb-9bd1-356b71c82ad2.png) When I try to connect with a local installed onlyoffice desktop it give me a timout ![image](https://user-images.githubusercontent.com/13766992/106119906-29791f80-616f-11eb-9a9b-bbf4556aa154.png)
process
deployed documentserver does not respond to local onlyoffice desktopapp vdc jetserthing deployed the document server marketplace chatflow and when connected to it it sais everything is okay when i try to connect with a local installed onlyoffice desktop it give me a timout
1
4,819
3,082,979,144
IssuesEvent
2015-08-24 04:29:49
jeremywrnr/testing
https://api.github.com/repos/jeremywrnr/testing
closed
acdascacdsacsa
bug codepilot
[issue screenshot](http://codepilot.meteor.com/screenshot/TmarXNTkC3ByqDjN4) [live code here](http://codepilot.meteor.com/render/Mv6oDrFYTj7DKWKrf) html: ```html <head> <title>my website</title> </head> <body> <h1>welcome!</h1> <p>you can edit this code - please do!</p> <h2>here is kitty</h2> <img id="kitty" src="http://erinhunter.katecary.co.uk/wp-content/uploads/2014/12/science-cat.jpg"/> </body> ``` css: ```css body { font-family: Arial, sans-serif; padding: 10px; margin: 10px; } h1 { font-size: 3em; } h2 { font-size: 1.5em; } img { max-width: 100%; } ``` js: ```js // rejoice - jquery is enabled! console.log("Hello branched world"); $("#kitty").click(function() { console.log("Thank you for clicking on the science kitty!") }); ```
1.0
acdascacdsacsa - [issue screenshot](http://codepilot.meteor.com/screenshot/TmarXNTkC3ByqDjN4) [live code here](http://codepilot.meteor.com/render/Mv6oDrFYTj7DKWKrf) html: ```html <head> <title>my website</title> </head> <body> <h1>welcome!</h1> <p>you can edit this code - please do!</p> <h2>here is kitty</h2> <img id="kitty" src="http://erinhunter.katecary.co.uk/wp-content/uploads/2014/12/science-cat.jpg"/> </body> ``` css: ```css body { font-family: Arial, sans-serif; padding: 10px; margin: 10px; } h1 { font-size: 3em; } h2 { font-size: 1.5em; } img { max-width: 100%; } ``` js: ```js // rejoice - jquery is enabled! console.log("Hello branched world"); $("#kitty").click(function() { console.log("Thank you for clicking on the science kitty!") }); ```
non_process
acdascacdsacsa html html my website welcome you can edit this code please do here is kitty img id kitty src css css body font family arial sans serif padding margin font size font size img max width js js rejoice jquery is enabled console log hello branched world kitty click function console log thank you for clicking on the science kitty
0
319,896
23,794,410,590
IssuesEvent
2022-09-02 17:50:30
microsoft/CromwellOnAzure
https://api.github.com/repos/microsoft/CromwellOnAzure
closed
TES causes side effect that Files are understood differently between a task's input and command blocks
documentation
Please consider the following wdl ``` version 1.0 task bug { input { File inputFile = "/datasettestinputs/dataset/references/hg38/v0/Homo_sapiens_assembly38.fasta" String inputFileInput = "in input: the file is ~{inputFile}" } command <<< echo ~{inputFileInput} echo "in command: the file is ~{inputFile}" >>> output { } runtime { docker: 'ubuntu:18.04' preemptible: true maxRetries: 3 memory: "14 GB" cpu: "3" disk: "200 GB" } } workflow bug_test { input { } call bug { input: } } ``` In the command section, we echo two things: 1) the inputFile as it is perceived in the input section and 2) the inputFile as it is perceived in the command section. It turns out that these are different values. In the input section, the inputFile is the same string value given in the file definition in the line above. In the command section, the inputFile is the file location after TES has copied it. Here is what I get in stdout ``` in input: the file is /datasettestinputs/dataset/references/hg38/v0/Homo_sapiens_assembly38.fasta in command: the file is /cromwell-executions/bug_test/a1f517bb-1eef-4a86-8853-26e1f44d92a2/call-bug/inputs/datasettestinputs/dataset/references/hg38/v0/Homo_sapiens_assembly38.fasta ``` This is a a subtle difference that I ran into and caused days of searching for the cause and can confuse users. I'm not sure if it can be helped, but minimally users should know about it as a side effect of CoA.
1.0
TES causes side effect that Files are understood differently between a task's input and command blocks - Please consider the following wdl ``` version 1.0 task bug { input { File inputFile = "/datasettestinputs/dataset/references/hg38/v0/Homo_sapiens_assembly38.fasta" String inputFileInput = "in input: the file is ~{inputFile}" } command <<< echo ~{inputFileInput} echo "in command: the file is ~{inputFile}" >>> output { } runtime { docker: 'ubuntu:18.04' preemptible: true maxRetries: 3 memory: "14 GB" cpu: "3" disk: "200 GB" } } workflow bug_test { input { } call bug { input: } } ``` In the command section, we echo two things: 1) the inputFile as it is perceived in the input section and 2) the inputFile as it is perceived in the command section. It turns out that these are different values. In the input section, the inputFile is the same string value given in the file definition in the line above. In the command section, the inputFile is the file location after TES has copied it. Here is what I get in stdout ``` in input: the file is /datasettestinputs/dataset/references/hg38/v0/Homo_sapiens_assembly38.fasta in command: the file is /cromwell-executions/bug_test/a1f517bb-1eef-4a86-8853-26e1f44d92a2/call-bug/inputs/datasettestinputs/dataset/references/hg38/v0/Homo_sapiens_assembly38.fasta ``` This is a a subtle difference that I ran into and caused days of searching for the cause and can confuse users. I'm not sure if it can be helped, but minimally users should know about it as a side effect of CoA.
non_process
tes causes side effect that files are understood differently between a task s input and command blocks please consider the following wdl version task bug input file inputfile datasettestinputs dataset references homo sapiens fasta string inputfileinput in input the file is inputfile command echo inputfileinput echo in command the file is inputfile output runtime docker ubuntu preemptible true maxretries memory gb cpu disk gb workflow bug test input call bug input in the command section we echo two things the inputfile as it is perceived in the input section and the inputfile as it is perceived in the command section it turns out that these are different values in the input section the inputfile is the same string value given in the file definition in the line above in the command section the inputfile is the file location after tes has copied it here is what i get in stdout in input the file is datasettestinputs dataset references homo sapiens fasta in command the file is cromwell executions bug test call bug inputs datasettestinputs dataset references homo sapiens fasta this is a a subtle difference that i ran into and caused days of searching for the cause and can confuse users i m not sure if it can be helped but minimally users should know about it as a side effect of coa
0
17,822
23,748,187,740
IssuesEvent
2022-08-31 17:56:38
googleapis/proto-breaking-change-detector
https://api.github.com/repos/googleapis/proto-breaking-change-detector
opened
remove typed-ast from requirements-dev.txt when dropping python 3.7 build
type: process priority: p2
It seems that typed-ast is only required in python 3.7 runtime.
1.0
remove typed-ast from requirements-dev.txt when dropping python 3.7 build - It seems that typed-ast is only required in python 3.7 runtime.
process
remove typed ast from requirements dev txt when dropping python build it seems that typed ast is only required in python runtime
1
384,992
26,610,955,776
IssuesEvent
2023-01-24 00:08:50
simonw/datasette
https://api.github.com/repos/simonw/datasette
opened
Document how actors are displayed
documentation
https://github.com/simonw/datasette/blob/e4ebef082de90db4e1b8527abc0d582b7ae0bc9d/datasette/utils/__init__.py#L1052-L1056 This logic should be reflected in the documentation on https://docs.datasette.io/en/stable/authentication.html#actors
1.0
Document how actors are displayed - https://github.com/simonw/datasette/blob/e4ebef082de90db4e1b8527abc0d582b7ae0bc9d/datasette/utils/__init__.py#L1052-L1056 This logic should be reflected in the documentation on https://docs.datasette.io/en/stable/authentication.html#actors
non_process
document how actors are displayed this logic should be reflected in the documentation on
0
16,463
21,387,949,202
IssuesEvent
2022-04-21 02:11:07
MicrosoftDocs/windows-uwp
https://api.github.com/repos/MicrosoftDocs/windows-uwp
closed
What about Desktop Bridge apps handling activation?
doc-bug uwp/prod processes-and-threading/tech Pri2
There should be a link or some info on how Desktop Bridge apps should handle activation. --- #### Document Details ⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.* * ID: 6579b616-2bc6-9a98-f30e-efc0c538f303 * Version Independent ID: 1fe1d869-7eec-2f98-5d5b-b4e6cd294dd6 * Content: [Handle app activation - UWP applications](https://docs.microsoft.com/en-us/windows/uwp/launch-resume/activate-an-app) * Content Source: [windows-apps-src/launch-resume/activate-an-app.md](https://github.com/MicrosoftDocs/windows-uwp/blob/docs/windows-apps-src/launch-resume/activate-an-app.md) * Product: **uwp** * Technology: **processes-and-threading** * GitHub Login: @lastnameholiu * Microsoft Alias: **alholiu**
1.0
What about Desktop Bridge apps handling activation? - There should be a link or some info on how Desktop Bridge apps should handle activation. --- #### Document Details ⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.* * ID: 6579b616-2bc6-9a98-f30e-efc0c538f303 * Version Independent ID: 1fe1d869-7eec-2f98-5d5b-b4e6cd294dd6 * Content: [Handle app activation - UWP applications](https://docs.microsoft.com/en-us/windows/uwp/launch-resume/activate-an-app) * Content Source: [windows-apps-src/launch-resume/activate-an-app.md](https://github.com/MicrosoftDocs/windows-uwp/blob/docs/windows-apps-src/launch-resume/activate-an-app.md) * Product: **uwp** * Technology: **processes-and-threading** * GitHub Login: @lastnameholiu * Microsoft Alias: **alholiu**
process
what about desktop bridge apps handling activation there should be a link or some info on how desktop bridge apps should handle activation document details ⚠ do not edit this section it is required for docs microsoft com ➟ github issue linking id version independent id content content source product uwp technology processes and threading github login lastnameholiu microsoft alias alholiu
1
37,594
4,827,831,419
IssuesEvent
2016-11-07 14:46:24
RestComm/Restcomm-Connect
https://api.github.com/repos/RestComm/Restcomm-Connect
closed
RVD should retrieve project listings from Restcomm
in progress Visual App Designer
Currently RVD goes through all of the projects in the workspace when a project listing for an owner needs to be generated. This will cause delays and be a waste of resources as the workspace grows. We need to let RVD retrieve the listing from Restcomm using the Applications API.
1.0
RVD should retrieve project listings from Restcomm - Currently RVD goes through all of the projects in the workspace when a project listing for an owner needs to be generated. This will cause delays and be a waste of resources as the workspace grows. We need to let RVD retrieve the listing from Restcomm using the Applications API.
non_process
rvd should retrieve project listings from restcomm currently rvd goes through all of the projects in the workspace when a project listing for an owner needs to be generated this will cause delays and be a waste of resources as the workspace grows we need to let rvd retrieve the listing from restcomm using the applications api
0
215,614
16,682,514,622
IssuesEvent
2021-06-08 02:49:41
snext1220/stext
https://api.github.com/repos/snext1220/stext
closed
シナリオ分類の整理
Testing enhancement
Dでも話題になった件です。 ミドル100(旧Pg100)、ロング200overを新設(リネーム)し、現在不適当な分類となっているシナを整理できればと思っております。ついては、以下の点についてご意見頂ければ幸いです。 + ミドル、ロング~の名称 + 再整理対象のシナ(以下以外にあれば) + アベンジャーズ、カードバトル、日の光、ドラスレ(開幕シナなのでママでも?) ### 追記(覚書) シナリオレベルについても1~5の基準を明示
1.0
シナリオ分類の整理 - Dでも話題になった件です。 ミドル100(旧Pg100)、ロング200overを新設(リネーム)し、現在不適当な分類となっているシナを整理できればと思っております。ついては、以下の点についてご意見頂ければ幸いです。 + ミドル、ロング~の名称 + 再整理対象のシナ(以下以外にあれば) + アベンジャーズ、カードバトル、日の光、ドラスレ(開幕シナなのでママでも?) ### 追記(覚書) シナリオレベルについても1~5の基準を明示
non_process
シナリオ分類の整理 dでも話題になった件です。 ( )、 (リネーム)し、現在不適当な分類となっているシナを整理できればと思っております。ついては、以下の点についてご意見頂ければ幸いです。 ミドル、ロング~の名称 再整理対象のシナ(以下以外にあれば) アベンジャーズ、カードバトル、日の光、ドラスレ(開幕シナなのでママでも?) 追記(覚書) ~
0
4,906
7,784,289,635
IssuesEvent
2018-06-06 12:51:45
nlbdev/pipeline
https://api.github.com/repos/nlbdev/pipeline
closed
EPUB/HTML as input - move colophone
Priority:3 - High enhancement pre-processing
Colophon should be moved to end of book. Works with DTBook as input.
1.0
EPUB/HTML as input - move colophone - Colophon should be moved to end of book. Works with DTBook as input.
process
epub html as input move colophone colophon should be moved to end of book works with dtbook as input
1
87,392
10,544,961,996
IssuesEvent
2019-10-02 18:05:11
sashavmorozov/vlocity-epc-on-steroids
https://api.github.com/repos/sashavmorozov/vlocity-epc-on-steroids
opened
[Doc] Craft a documentation script/formula in the spreadsheet
documentation tools
1. A separate tab that describes structure of each entity-tab 2. A formula that yields markup-like description for the entity-tab 3. A formula that yields markup-like description for the SFDC-object
1.0
[Doc] Craft a documentation script/formula in the spreadsheet - 1. A separate tab that describes structure of each entity-tab 2. A formula that yields markup-like description for the entity-tab 3. A formula that yields markup-like description for the SFDC-object
non_process
craft a documentation script formula in the spreadsheet a separate tab that describes structure of each entity tab a formula that yields markup like description for the entity tab a formula that yields markup like description for the sfdc object
0
8,982
2,615,116,251
IssuesEvent
2015-03-01 05:41:13
chrsmith/google-api-java-client
https://api.github.com/repos/chrsmith/google-api-java-client
closed
AccessControlException thrown when running on Google App Engine
auto-migrated Component-Google-AppEngine Milestone-Version1.1.1 Priority-Critical Type-Defect
``` Version of google-api-java-client (e.g. 1.1.0-alpha)? 1.1.0-alpha Java environment (e.g. Android 2.2, App Engine 1.3.7, Java 6 on Windows)? App Engine 1.3.7 What steps will reproduce the problem? 1. Start program What is the expected output? No exception. What do you see instead? Caused by: java.security.AccessControlException: access denied (java.lang.RuntimePermission setFactory) at java.security.AccessControlContext.checkPermission(AccessControlContext.java:342) at java.security.AccessController.checkPermission(AccessController.java:553) at java.lang.SecurityManager.checkPermission(SecurityManager.java:549) at com.google.appengine.tools.development.DevAppServerFactory$CustomSecurityManager.checkPermission(DevAppServerFactory.java:166) at java.lang.SecurityManager.checkSetFactory(SecurityManager.java:1629) at java.net.HttpURLConnection.setFollowRedirects(HttpURLConnection.java:267) at com.google.api.client.javanet.NetHttpTransport.<clinit>(NetHttpTransport.java:36) Please provide any additional information below. Was working in version 1.1.0-alpha. ``` Original issue reported on code.google.com by `yan...@google.com` on 13 Sep 2010 at 8:22
1.0
AccessControlException thrown when running on Google App Engine - ``` Version of google-api-java-client (e.g. 1.1.0-alpha)? 1.1.0-alpha Java environment (e.g. Android 2.2, App Engine 1.3.7, Java 6 on Windows)? App Engine 1.3.7 What steps will reproduce the problem? 1. Start program What is the expected output? No exception. What do you see instead? Caused by: java.security.AccessControlException: access denied (java.lang.RuntimePermission setFactory) at java.security.AccessControlContext.checkPermission(AccessControlContext.java:342) at java.security.AccessController.checkPermission(AccessController.java:553) at java.lang.SecurityManager.checkPermission(SecurityManager.java:549) at com.google.appengine.tools.development.DevAppServerFactory$CustomSecurityManager.checkPermission(DevAppServerFactory.java:166) at java.lang.SecurityManager.checkSetFactory(SecurityManager.java:1629) at java.net.HttpURLConnection.setFollowRedirects(HttpURLConnection.java:267) at com.google.api.client.javanet.NetHttpTransport.<clinit>(NetHttpTransport.java:36) Please provide any additional information below. Was working in version 1.1.0-alpha. ``` Original issue reported on code.google.com by `yan...@google.com` on 13 Sep 2010 at 8:22
non_process
accesscontrolexception thrown when running on google app engine version of google api java client e g alpha alpha java environment e g android app engine java on windows app engine what steps will reproduce the problem start program what is the expected output no exception what do you see instead caused by java security accesscontrolexception access denied java lang runtimepermission setfactory at java security accesscontrolcontext checkpermission accesscontrolcontext java at java security accesscontroller checkpermission accesscontroller java at java lang securitymanager checkpermission securitymanager java at com google appengine tools development devappserverfactory customsecuritymanager checkpermission devappserverfactory java at java lang securitymanager checksetfactory securitymanager java at java net httpurlconnection setfollowredirects httpurlconnection java at com google api client javanet nethttptransport nethttptransport java please provide any additional information below was working in version alpha original issue reported on code google com by yan google com on sep at
0
13,479
16,010,964,104
IssuesEvent
2021-04-20 10:29:35
pystatgen/sgkit
https://api.github.com/repos/pystatgen/sgkit
closed
Rechunker 0.4 is causing build failures
bug process + tools
It looks like prefect is now a required dependency. I suggest we pin to the previous version of rechunker (0.3.3) while this is investigated and fixed.
1.0
Rechunker 0.4 is causing build failures - It looks like prefect is now a required dependency. I suggest we pin to the previous version of rechunker (0.3.3) while this is investigated and fixed.
process
rechunker is causing build failures it looks like prefect is now a required dependency i suggest we pin to the previous version of rechunker while this is investigated and fixed
1
16,787
22,033,434,399
IssuesEvent
2022-05-28 07:30:34
haveno-dex/haveno
https://api.github.com/repos/haveno-dex/haveno
opened
Implement XMR <-> ETH atomic swaps
is:feature a:trade process a:atomic swaps
XMR <-> ETH atomic swaps are being developed at: https://github.com/noot/atomic-swap There is an open CCS proposal at: https://ccs.getmonero.org/proposals/noot-eth-xmr-atomic-swap.html Swaps are currently being tested. Before considering to implement this protocol into Haveno, better wait until it's thoroughly tested and deemed safe enough for production.
1.0
Implement XMR <-> ETH atomic swaps - XMR <-> ETH atomic swaps are being developed at: https://github.com/noot/atomic-swap There is an open CCS proposal at: https://ccs.getmonero.org/proposals/noot-eth-xmr-atomic-swap.html Swaps are currently being tested. Before considering to implement this protocol into Haveno, better wait until it's thoroughly tested and deemed safe enough for production.
process
implement xmr eth atomic swaps xmr eth atomic swaps are being developed at there is an open ccs proposal at swaps are currently being tested before considering to implement this protocol into haveno better wait until it s thoroughly tested and deemed safe enough for production
1