| Column | Dtype | Range / classes |
|---|---|---|
| Unnamed: 0 | int64 | 0 to 832k |
| id | float64 | 2.49B to 32.1B |
| type | string | 1 class (IssuesEvent) |
| created_at | string | length 19 |
| repo | string | length 7 to 112 |
| repo_url | string | length 36 to 141 |
| action | string | 3 classes |
| title | string | length 1 to 744 |
| labels | string | length 4 to 574 |
| body | string | length 9 to 211k |
| index | string | 10 classes |
| text_combine | string | length 96 to 211k |
| label | string | 2 classes (process / non_process) |
| text | string | length 96 to 188k |
| binary_label | int64 | 0 to 1 |
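
The sketch below shows one way to load a dump with this schema and sanity-check it; the pandas-based approach and the file name `issues_sample.csv` are assumptions for illustration, not part of the source.

```python
# Minimal sketch: load a dump with the schema above and sanity-check it.
# Assumptions (hypothetical, not from the source): the data sits in a CSV
# named "issues_sample.csv" and pandas is installed.
import pandas as pd

df = pd.read_csv("issues_sample.csv")

expected = ["Unnamed: 0", "id", "type", "created_at", "repo", "repo_url",
            "action", "title", "labels", "body", "index", "text_combine",
            "label", "text", "binary_label"]
assert list(df.columns) == expected

print(df.dtypes)                    # int64 / float64 / object, per the table
print(df["label"].value_counts())   # two classes: process / non_process
print(df["binary_label"].unique())  # 0 and 1
```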

---

**Row 830,277** (id 31,999,322,674) · IssuesEvent · 2023-09-21 11:14:07
**Repo:** lmareksla/DPE_Issues (https://api.github.com/repos/lmareksla/DPE_Issues)
**Action:** opened · **Labels:** bug question low priority
**Title:** It is possible that `.temp` remained after processing with Windows
**Body:**
### feature/issue description
It is possible that `.temp` remained after processing with Windows
### program and data specification
**DPE version:** 1.1.0 230919 33b18fc9
**data type:** t3pa
**used settings of DPE:** standard
**pc configuration:** ubuntu22
### issue originator
LM
### how to reproduce
*list the steps to reproduce the problem
if any files for reproducing are somewhere accessible, add here its path*
### solution description
*describe the solution fixing the issue, put the link to the commit*
### tests
*describe the test procedure, put the link to the protocol if any*
### issue process
- [ ] reproduced
- [ ] solution / root cause discovered
- [ ] solution /fix implemented
- [ ] tested

**index:** 1.0
**label:** non_process
**text:**
it is possible that temp remained after processing with windows feature issue description it is possible that temp remained after processing with windows program and data specification dpe version data type used settings of dpe standard pc configuration issue originator lm how to reproduce list the steps to reproduce the problem if any files for reproducing are somewhere accessible add here its path solution description describe the solution fixing the issue put the link to the commit tests describe the test procedure put the link to the protocol if any issue process reproduced solution root cause discovered solution fix implemented tested

**binary_label:** 0
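
The derived columns follow a pattern that is visible in the rows themselves: `text_combine` is the verbatim concatenation `title + " - " + body`, `text` is a lowercased copy of that concatenation with URLs, markup, digits, and punctuation stripped, and `binary_label` maps `process` to 1 and `non_process` to 0. The sketch below reproduces those derivations; the exact cleaning rules are inferred from the samples, not documented in this dump.

```python
import re

def combine(title: str, body: str) -> str:
    # In every row shown, text_combine is exactly title + " - " + body.
    return f"{title} - {body}"

def normalize(combined: str) -> str:
    # Inferred from the samples: lowercase, drop URLs, keep letters only,
    # collapse whitespace. The real pipeline may differ (e.g. in how it
    # treats emoji or accented characters); this only mimics the visible
    # pattern.
    t = combined.lower()
    t = re.sub(r"https?://\S+", " ", t)  # URLs do not survive into `text`
    t = re.sub(r"[^a-z\s]", " ", t)      # digits and punctuation are dropped
    return re.sub(r"\s+", " ", t).strip()

def binarize(label: str) -> int:
    # Consistent across every row shown: process -> 1, non_process -> 0.
    return 1 if label == "process" else 0
```

Applied to a loaded frame, this would look like `df["binary_label"] = df["label"].map(binarize)`.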

---

**Row 4,668** (id 7,503,988,067) · IssuesEvent · 2018-04-10 00:59:50
**Repo:** UnbFeelings/unb-feelings-GQA (https://api.github.com/repos/UnbFeelings/unb-feelings-GQA)
**Action:** closed · **Labels:** document process wiki
**Title:** Analyze Process and Artifacts
**Body:**
Analyze the process defined by the [process team][e-processo] to identify which parts of this process and which artifacts will be audited by the GQA team. I recommend that the definition of these artifacts and processes have some grounding, whether based on the organizational objectives, on product quality, or because it is required for the course.
[e-processo]:https://github.com/UnbFeelings/unb-feelings-docs

**index:** 1.0
**label:** process
**text:**
analyze process and artifacts analyze the process defined by the to identify which parts of this process and which artifacts will be audited by the gqa team i recommend that the definition of these artifacts and processes have some grounding whether based on the organizational objectives product quality or because it is required for the course

**binary_label:** 1

---

**Row 305,950** (id 26,423,349,748) · IssuesEvent · 2023-01-13 23:25:01
**Repo:** getodk/central-frontend (https://api.github.com/repos/getodk/central-frontend)
**Action:** closed · **Labels:** needs testing
**Title:** "Latest Submission" tooltip not shown over date/time
**Body:**
When you hover over a cell of a forms table (on the homepage or in the project overview), then a tooltip will appear over most cells describing what the column shows. One of the columns is the date/time of the latest submission. If you hover over that column, you're supposed to see the text "Latest Submission". However, date/times have their own tooltips to show the date/time in a more precise format that is always absolute, not relative. That means that if you hover over a date/time, you won't see the "Latest Submission" tooltip, only the date/time tooltip. It's only if you hover over the clock icon or the text "(none)" (if there are no submissions) that you see the "Latest Submission" tooltip.
To address this, we will combine these two tooltips. If a date/time is shown, then when you hover over the cell, you will see a single tooltip that includes both the text "Latest Submission" and the absolute date/time.

**index:** 1.0
"Latest Submission" tooltip not shown over date/time - When you hover over a cell of a forms table (on the homepage or in the project overview), then a tooltip will appear over most cells describing what the column shows. One of the columns is the date/time of the latest submission. If you hover over that column, you're supposed to see the text "Latest Submission". However, date/times have their own tooltips to show the date/time in a more precise format that is always absolute, not relative. That means that if you hover over a date/time, you won't see the "Latest Submission" tooltip, only the date/time tooltip. It's only if you hover over the clock icon or the text "(none)" (if there are no submissions) that you see the "Latest Submission" tooltip.
To address this, we will combine these two tooltips. If a date/time is shown, then when you hover over the cell, you will see a single tooltip that includes both the text "Latest Submission" and the absolute date/time.
**label:** non_process
**text:**
latest submission tooltip not shown over date time when you hover over a cell of a forms table on the homepage or in the project overview then a tooltip will appear over most cells describing what the column shows one of the columns is the date time of the latest submission if you hover over that column you re supposed to see the text latest submission however date times have their own tooltips to show the date time in a more precise format that is always absolute not relative that means that if you hover over a date time you won t see the latest submission tooltip only the date time tooltip it s only if you hover over the clock icon or the text none if there are no submissions that you see the latest submission tooltip to address this we will combine these two tooltips if a date time is shown then when you hover over the cell you will see a single tooltip that includes both the text latest submission and the absolute date time

**binary_label:** 0

---

**Row 51,613** (id 3,013,316,273) · IssuesEvent · 2015-07-29 08:08:49
**Repo:** N4SJAMK/teamboard-client-react (https://api.github.com/repos/N4SJAMK/teamboard-client-react)
**Action:** closed · **Labels:** bug HIGH PRIORITY Verified
**Title:** Enable profile dialog for guest and disable password change
**Body:**
Guest user should be able to set profile image and not able to change password

**index:** 1.0
**label:** non_process
**text:**
enable profile dialog for guest and disable password change guest user should be able to set profile image and not able to change password

**binary_label:** 0

---

**Row 69,241** (id 14,980,486,848) · IssuesEvent · 2021-01-28 13:42:50
**Repo:** ConnectionMaster/create-probot-app (https://api.github.com/repos/ConnectionMaster/create-probot-app)
**Action:** opened · **Labels:** security vulnerability
**Title:** CVE-2012-6708 (Medium) detected in jquery-1.8.1.min.js
**Body:**
## CVE-2012-6708 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jquery-1.8.1.min.js</b></p></summary>
<p>JavaScript library for DOM operations</p>
<p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/jquery/1.8.1/jquery.min.js">https://cdnjs.cloudflare.com/ajax/libs/jquery/1.8.1/jquery.min.js</a></p>
<p>Path to dependency file: create-probot-app/node_modules/redeyed/examples/browser/index.html</p>
<p>Path to vulnerable library: create-probot-app/node_modules/redeyed/examples/browser/index.html</p>
<p>
Dependency Hierarchy:
- :x: **jquery-1.8.1.min.js** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/ConnectionMaster/create-probot-app/commit/6641b93b270ec2518f9c42a71e68853674ee0768">6641b93b270ec2518f9c42a71e68853674ee0768</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
jQuery before 1.9.0 is vulnerable to Cross-site Scripting (XSS) attacks. The jQuery(strInput) function does not differentiate selectors from HTML in a reliable fashion. In vulnerable versions, jQuery determined whether the input was HTML by looking for the '<' character anywhere in the string, giving attackers more flexibility when attempting to construct a malicious payload. In fixed versions, jQuery only deems the input to be HTML if it explicitly starts with the '<' character, limiting exploitability only to attackers who can control the beginning of a string, which is far less common.
<p>Publish Date: 2018-01-18
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2012-6708>CVE-2012-6708</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.1</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Changed
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://nvd.nist.gov/vuln/detail/CVE-2012-6708">https://nvd.nist.gov/vuln/detail/CVE-2012-6708</a></p>
<p>Release Date: 2018-01-18</p>
<p>Fix Resolution: jQuery - v1.9.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)

**index:** True
**label:** non_process
**text:**
cve medium detected in jquery min js cve medium severity vulnerability vulnerable library jquery min js javascript library for dom operations library home page a href path to dependency file create probot app node modules redeyed examples browser index html path to vulnerable library create probot app node modules redeyed examples browser index html dependency hierarchy x jquery min js vulnerable library found in head commit a href found in base branch master vulnerability details jquery before is vulnerable to cross site scripting xss attacks the jquery strinput function does not differentiate selectors from html in a reliable fashion in vulnerable versions jquery determined whether the input was html by looking for the character anywhere in the string giving attackers more flexibility when attempting to construct a malicious payload in fixed versions jquery only deems the input to be html if it explicitly starts with the character limiting exploitability only to attackers who can control the beginning of a string which is far less common publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction required scope changed impact metrics confidentiality impact low integrity impact low availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution jquery step up your open source security game with whitesource

**binary_label:** 0

---

**Row 335,622** (id 30,055,740,207) · IssuesEvent · 2023-06-28 06:37:21
**Repo:** cockroachdb/cockroach (https://api.github.com/repos/cockroachdb/cockroach)
**Action:** closed · **Labels:** C-test-failure O-robot O-roachtest todo-deprecate.branch-release-23.1.0
**Title:** roachtest: cluster_creation failed
**Body:**
roachtest.cluster_creation [failed](https://teamcity.cockroachdb.com/buildConfiguration/Cockroach_Nightlies_RoachtestFipsNightlyGceBazel/9554176?buildTab=log) with [artifacts](https://teamcity.cockroachdb.com/buildConfiguration/Cockroach_Nightlies_RoachtestFipsNightlyGceBazel/9554176?buildTab=artifacts#/c2c/tpcc/warehouses=1000/duration=60/cutover=30) on release-23.1.0 @ [f1921dbd499fd258a606c4e7180aff7b82b6f900](https://github.com/cockroachdb/cockroach/commits/f1921dbd499fd258a606c4e7180aff7b82b6f900):
```
test c2c/tpcc/warehouses=1000/duration=60/cutover=30 was skipped due to (test_runner.go:678).runWorker: in provider: gce: Command: gcloud [compute instances create --subnet default --scopes cloud-platform --image ubuntu-pro-fips-2004-focal-v20230302 --image-project ubuntu-os-pro-cloud --boot-disk-type pd-ssd --service-account 21965078311-compute@developer.gserviceaccount.com --maintenance-policy MIGRATE --create-disk type=pd-ssd,size=1000GB,auto-delete=yes --machine-type n1-standard-8 --labels usage=roachtest,cluster=teamcity-9554176-1681277509-38-n9cpu8,lifetime=12h0m0s,created=2023-04-12t09_43_47z,roachprod=true, --metadata-from-file startup-script=/tmp/gce-startup-script1566050252 --project cockroach-ephemeral --boot-disk-size=32GB --zone us-east1-b teamcity-9554176-1681277509-38-n9cpu8-0001 teamcity-9554176-1681277509-38-n9cpu8-0002 teamcity-9554176-1681277509-38-n9cpu8-0003 teamcity-9554176-1681277509-38-n9cpu8-0004 teamcity-9554176-1681277509-38-n9cpu8-0005 teamcity-9554176-1681277509-38-n9cpu8-0006 teamcity-9554176-1681277509-38-n9cpu8-0007 teamcity-9554176-1681277509-38-n9cpu8-0008 teamcity-9554176-1681277509-38-n9cpu8-0009]
Output: Created [https://www.googleapis.com/compute/v1/projects/cockroach-ephemeral/zones/us-east1-b/instances/teamcity-9554176-1681277509-38-n9cpu8-0001].
Created [https://www.googleapis.com/compute/v1/projects/cockroach-ephemeral/zones/us-east1-b/instances/teamcity-9554176-1681277509-38-n9cpu8-0006].
Created [https://www.googleapis.com/compute/v1/projects/cockroach-ephemeral/zones/us-east1-b/instances/teamcity-9554176-1681277509-38-n9cpu8-0007].
WARNING: Some requests generated warnings:
- Disk size: '32 GB' is larger than image size: '10 GB'. You might need to resize the root repartition manually if the operating system does not support automatic resizing. See https://cloud.google.com/compute/docs/disks/add-persistent-disk#resize_pd for details.
ERROR: (gcloud.compute.instances.create) Could not fetch resource:
- Quota 'CPUS' exceeded. Limit: 7200.0 in region us-east1.
metric name = compute.googleapis.com/cpus
limit name = CPUS-per-project-region
dimensions = region: us-east1
Try your request in another zone, or view documentation on how to increase quotas: https://cloud.google.com/compute/quotas.: exit status 1
```
<p>Parameters: <code>ROACHTEST_cloud=gce</code>
, <code>ROACHTEST_cpu=8</code>
, <code>ROACHTEST_ssd=0</code>
</p>
<details><summary>Help</summary>
<p>
See: [roachtest README](https://github.com/cockroachdb/cockroach/blob/master/pkg/cmd/roachtest/README.md)
See: [How To Investigate \(internal\)](https://cockroachlabs.atlassian.net/l/c/SSSBr8c7)
</p>
</details>
<details><summary>Same failure on other branches</summary>
<p>
- #101289 roachtest: cluster_creation failed [C-test-failure O-roachtest O-robot branch-release-23.1 release-blocker]
- #101288 roachtest: cluster_creation failed [C-test-failure O-roachtest O-robot branch-release-23.1 release-blocker]
- #89810 roachtest: cluster_creation failed [C-test-failure O-roachtest O-robot T-testeng branch-release-22.2.0]
- #87695 roachtest: cluster_creation failed [C-test-failure O-roachtest O-robot T-testeng branch-release-22.2]
- #78601 roachtest: cluster_creation failed [C-test-failure O-roachtest O-robot T-testeng branch-master sync-me-8]
- #78035 roachtest: cluster_creation failed [C-test-failure O-roachtest O-robot T-testeng branch-release-22.1]
</p>
</details>
/cc @cockroachdb/dev-inf
<sub>
[This test on roachdash](https://roachdash.crdb.dev/?filter=status:open%20t:.*cluster_creation.*&sort=title+created&display=lastcommented+project) | [Improve this report!](https://github.com/cockroachdb/cockroach/tree/master/pkg/cmd/internal/issues)
</sub>
Jira issue: CRDB-27028

**index:** 2.0
**label:** non_process
**text:**
roachtest cluster creation failed roachtest cluster creation with on release test tpcc warehouses duration cutover was skipped due to test runner go runworker in provider gce command gcloud output created created created warning some requests generated warnings disk size gb is larger than image size gb you might need to resize the root repartition manually if the operating system does not support automatic resizing see for details error gcloud compute instances create could not fetch resource quota cpus exceeded limit in region us metric name compute googleapis com cpus limit name cpus per project region dimensions region us try your request in another zone or view documentation on how to increase quotas exit status parameters roachtest cloud gce roachtest cpu roachtest ssd help see see same failure on other branches roachtest cluster creation failed roachtest cluster creation failed roachtest cluster creation failed roachtest cluster creation failed roachtest cluster creation failed roachtest cluster creation failed cc cockroachdb dev inf jira issue crdb

**binary_label:** 0

---

**Row 102,123** (id 4,151,140,703) · IssuesEvent · 2016-06-15 19:37:31
**Repo:** TheNOOFClan/S.C.S.I. (https://api.github.com/repos/TheNOOFClan/S.C.S.I.)
**Action:** opened · **Labels:** Normal Priority TODO
**Title:** Temporary channels
**Body:**
Create commands to add a Temporary channel.
Basic solution:
- Add channel
- Add a poll create command for one hour later that lasts a day (by default)
- Make channel permanent/protected if vote passes, archive and delete if vote fails

**index:** 1.0
**label:** non_process
**text:**
temporary channels create commands to add a temporary channel basic solution add channel add a poll create command for one hour later that lasts a day by default make channel permanent protected if vote passes archive and delete if vote fails

**binary_label:** 0

---

**Row 45,917** (id 9,829,371,135) · IssuesEvent · 2019-06-15 20:07:15
**Repo:** GTNewHorizons/NewHorizons (https://api.github.com/repos/GTNewHorizons/NewHorizons)
**Action:** closed · **Labels:** CodeComplete FixedInDev duplicate
**Title:** Waterlily Texture Broken 2.0.7.3dev
**Body:**
#### Which modpack version are you using?
2.0.7.3dev
#
#### If in multiplayer; On which server does this happen?
Private
#
#### What do you suggest instead/what changes do you propose?
Waterlily Texture is broken. Tried to spade and replant and same thing.


**index:** 1.0
**label:** non_process
**text:**
waterlily texture broken which modpack version are you using if in multiplayer on which server does this happen private what do you suggest instead what changes do you propose waterlily texture is broken tried to spade and replant and same thing

**binary_label:** 0

---

**Row 118,048** (id 9,968,818,098) · IssuesEvent · 2019-07-08 16:26:22
**Repo:** MattWindsor91/act (https://api.github.com/repos/MattWindsor91/act)
**Action:** closed · **Labels:** Area:Tester Type:Enhancement
**Title:** splitmus: accept object file stubs
**Body:**
As well as being able to generate assembly stubs through `act asm gen-stubs` for insertion into Litmus tests, `splitmus` (or a variant thereof!) should be able to accept object files and generate the changes necessary to insert them into a Litmus harness.

**index:** 1.0
**label:** non_process
**text:**
splitmus accept object file stubs as well as being able to generate assembly stubs through act asm gen stubs for insertion into litmus tests splitmus or a variant thereof should be able to accept object files and generate the changes necessary to insert them into a litmus harness

**binary_label:** 0

---

**Row 6,647** (id 9,764,042,466) · IssuesEvent · 2019-06-05 14:59:39
**Repo:** ESMValGroup/ESMValTool (https://api.github.com/repos/ESMValGroup/ESMValTool)
**Action:** closed · **Labels:** preprocessor
**Title:** Finish preprocessor masking module
**Body:**
The preprocessor masking module `esmvaltool/preprocessor/_mask.py` contains many functions, but at the moment only `mask_fillvalues` is actually available in the preprocessor. More masking options are probably required and (partly) implemented (e.g. mask land/ocean?). These functions should be finished and made available to the preprocessor.
Finally, unit tests should be added and the module should be cleaned up: remove unused code, fix prospector warnings.

**index:** 1.0
**label:** process
**text:**
finish preprocessor masking module the preprocessor masking module esmvaltool preprocessor mask py contains many functions but at the moment only mask fillvalues is actually available in the preprocessor more masking options are probably required and partly implemented e g mask land ocean these functions should be finished and made available to the preprocessor finally unit tests should be added and the module should be cleaned up remove unused code fix prospector warnings

**binary_label:** 1

---

**Row 13,752** (id 16,503,777,139) · IssuesEvent · 2021-05-25 16:47:04
**Repo:** GoogleCloudPlatform/cloud-code-samples (https://api.github.com/repos/GoogleCloudPlatform/cloud-code-samples)
**Action:** closed · **Labels:** priority: p3 type: process
**Title:** IntelliJ M1 Mac Audit
**Body:**
This issue is for keeping track of which samples do/don't work on a M1 Mac machine.
All testing is done on an M1 machine (Mac OS 11.2.2) using the latest version of the language-appropriate JetBrains IDE. As of now (3/2/21), there is no `Rider for Apple Silicon` available from IntelliJ ([youtrack issue](https://youtrack.jetbrains.com/issue/RIDER-54092)), so C# tests were done using the standard Rider version.
### Cloud Run
- [x] Java (https://github.com/GoogleCloudPlatform/cloud-code-samples/pull/605)
- [x] Go
- [x] Node.JS
- [x] dotnet (#607)
- [x] Python: Flask
- [x] Python: Django
### Kubernetes
#### Java (IDEA)
- [x] Guestbook (https://github.com/GoogleCloudPlatform/cloud-code-samples/pull/605)
- [x] Hello World (https://github.com/GoogleCloudPlatform/cloud-code-samples/pull/605)
#### Go (GoLand)
- [x] Guestbook
- [x] Hello World
#### Node.JS (WebStorm)
- [x] Guestbook
- [x] Hello World
#### Python (PyCharm)
- [x] Guestbook: Flask
- [x] Hello World: Flask
- [x] Guestbook: Django
- [x] Hello World: Django
#### dotnet (Rider)
- [x] Guestbook (#607)
- [x] Hello World (#607)

**index:** 1.0
**label:** process
**text:**
intellij mac audit this issue is for keeping track of which samples do don t work on a mac machine all testing is done on an machine mac os using the latest version of the language appropriate jetbrains ide as of now there is no rider for apple silicon available from intellij so c tests were done using the standard rider version cloud run java go node js dotnet python flask python django kubernetes java idea guestbook hello world go goland guestbook hello world node js webstorm guestbook hello world python pycharm guestbook flask hello world flask guestbook django hello world django dotnet rider guestbook hello world

**binary_label:** 1

---

**Row 13,311** (id 15,781,881,462) · IssuesEvent · 2021-04-01 12:02:16
**Repo:** GoogleCloudPlatform/dotnet-docs-samples (https://api.github.com/repos/GoogleCloudPlatform/dotnet-docs-samples)
**Action:** closed · **Labels:** api: language priority: p1 samples type: process
**Title:** [Language] Skip tests until fixed.
**Body:**
They are failing with a permission denied on a bucket being used for resources apparently.
Failures [here](https://source.cloud.google.com/results/invocations/53c7b406-c6e1-488b-9009-94611a20b0a8/targets/github%2Fdotnet-docs-samples%2Flanguage%2Fapi%2FAnalyzeTest%2FTestResults/tests).
I've deactivated the tests in #1061 .

**index:** 1.0
**label:** process
**text:**
skip tests until fixed they are failing with a permission denied on a bucket being used for resources apparently failures i ve deactivated the tests in

**binary_label:** 1

---

**Row 14,489** (id 17,603,493,288) · IssuesEvent · 2021-08-17 14:27:24
**Repo:** qgis/QGIS-Documentation (https://api.github.com/repos/qgis/QGIS-Documentation)
**Action:** closed · **Labels:** Processing Alg 3.14
**Title:** [processing] use hours as cost units for service area algorithms (fix #30464) (Request in QGIS)
**Body:**
### Request for documentation
From pull request QGIS/qgis#36032
Author: @alexbruy
QGIS version: 3.14
**[processing] use hours as cost units for service area algorithms (fix #30464)**
### PR Description:
## Description
Use hours as units for "travel cost" parameter in the service area algorithms (when "fastest" strategy is used). This makes them consistent with the output of shortest path algorithms. Fixes #30464.
### Commits tagged with [need-docs] or [FEATURE]

**index:** 1.0
**label:** process
**text:**
use hours as cost units for service area algorithms fix request in qgis request for documentation from pull request qgis qgis author alexbruy qgis version use hours as cost units for service area algorithms fix pr description description use hours as units for travel cost parameter in the service area algorithms when fastest strategy is used this makes them consistent with the output of shortest path algorithms fixes commits tagged with or

**binary_label:** 1

---

**Row 15,730** (id 19,903,066,216) · IssuesEvent · 2022-01-25 09:58:44
**Repo:** qgis/QGIS (https://api.github.com/repos/qgis/QGIS)
**Action:** closed · **Labels:** Feedback Processing Bug Mesh Modeller
**Title:** Graphical Modeler: "rasterize mesh dataset" does not allow to choose inputs
**Body:**
### What is the bug or the crash?
While building a .model3 in the graphical modeler I found out that the German "Netzdatensatz rastern" tool (i.e. Tin-Dataset-Rasterize; might be wrong, just my translation) does not work. I used the TIN and TIN-Dataset input, but it won't let me pick a Dataset. There are no displayed Datasets. If I start the tool from the Toolbox it works, but if I put the tool into an empty model it stops working again. My take is that there might be some sort of bug in relation to the Model-Builder with this explicit tool. First time experiencing this sort of Problem in the builder.
### Steps to reproduce the issue
1. Go to 'Graphical Modeler'
2. Then click onto the 'Rasterize Tin Dataset' algo
3. Try using it through the modeler
4. Compare it to using it through the toolbox
### Versions
3.22.0
### Supported QGIS version
- [X] I'm running a supported QGIS version according to the roadmap.
### New profile
- [X] I tried with a new QGIS profile
### Additional context
_No response_

**index:** 1.0
Graphical Modeler: "rasterize mesh dataset" does not allow to choose inputs - ### What is the bug or the crash?
While building a .model3 in the graphical modeler I found out, that the in german, Netzdatensatz rastern Tool, (i.e. Tin-Dataset-Rasterize, might be wrong just my translation) does not work. I used the TIN and TIN-Dataset input, but it won´t let me pick a Dataset. There are no displayed Datasets. If I start the tool from the Toolbox it works, but if i put the tool into an empty model it stops working again. My take is, that there might be some sort of bug in relation to the Model-Builder with this explict tool. First time experiencing this sort of Problem in the builder.
### Steps to reproduce the issue
1. Go to 'Graphical Modeler'
2. Then click onto the 'Rasterize Tin Dataset' algo
3. Try using it through the modeler
4. Compare it to using it through the toolbox
### Versions
3.22.0
### Supported QGIS version
- [X] I'm running a supported QGIS version according to the roadmap.
### New profile
- [X] I tried with a new QGIS profile
### Additional context
_No response_
**label:** process
**text:**
graphical modeler rasterize mesh dataset does not allow to choose inputs what is the bug or the crash while building a in the graphical modeler i found out that the in german netzdatensatz rastern tool i e tin dataset rasterize might be wrong just my translation does not work i used the tin and tin dataset input but it won t let me pick a dataset there are no displayed datasets if i start the tool from the toolbox it works but if i put the tool into an empty model it stops working again my take is that there might be some sort of bug in relation to the model builder with this explicit tool first time experiencing this sort of problem in the builder steps to reproduce the issue go to graphical modeler then click onto the rasterize tin dataset algo try using it through the modeler compare it to using it through the toolbox versions supported qgis version i m running a supported qgis version according to the roadmap new profile i tried with a new qgis profile additional context no response

**binary_label:** 1

---

**Row 198,042** (id 6,968,991,513) · IssuesEvent · 2017-12-11 01:59:51
**Repo:** Aviuz/PrisonLabor (https://api.github.com/repos/Aviuz/PrisonLabor)
**Action:** closed · **Labels:** .high priority bug
**Title:** Bill "Details" Screen is Blank
**Body:**
When opening the "Details" screen of any production table/construct, the screen is blank.

The problem began occurring for me, and seemingly others, when you updated the experimental v.0.8.9.1 and "fixed work tabs" after someone reported that prisoners "Forced to Work" would not appear. It worked previously, so I believe it has to do with something you changed...
Beta 0.8.9.1 unstable

**index:** 1.0
Bill "Details" Screen is Blank - When opening the "Details" screen of any production table/construct, the screen is blank.

The problem began occuring for me, and seemingly others. When you updated the experimental v.0.8.9.1 and "fixed work tabs" after someone reported that prisoners "Forced to Work" would not appear. It worked previously, so i believe it has to do with something you changed...
Beta 0.8.9.1 unstable
**label:** non_process
**text:**
bill details screen is blank when opening the details screen of any production table construct the screen is blank the problem began occurring for me and seemingly others when you updated the experimental v and fixed work tabs after someone reported that prisoners forced to work would not appear it worked previously so i believe it has to do with something you changed beta unstable

**binary_label:** 0

---

**Row 20,077** (id 26,573,081,683) · IssuesEvent · 2023-01-21 12:40:37
**Repo:** NationalSecurityAgency/ghidra (https://api.github.com/repos/NationalSecurityAgency/ghidra)
**Action:** closed · **Labels:** Type: Bug Feature: Processor/MC6800 Status: Internal
**Title:** 6809 (6x09.sinc) : Inaccurate JSR and JMP implementations
**Body:**
**Describe the bug**


is incorrect, according to https://www.maddes.net/m6809pm/appendix_a.htm

This results in inaccurate decompilation/disassembly e.g.



**index:** 1.0
**label:** process
**text:**
sinc inaccurate jsr and jmp implementations describe the bug is incorrect according to this results in inaccurate decompilation disassembly e g

**binary_label:** 1

---

**Row 16,176** (id 20,622,553,643) · IssuesEvent · 2022-03-07 18:54:33
**Repo:** googleapis/java-grafeas (https://api.github.com/repos/googleapis/java-grafeas)
**Action:** closed · **Labels:** type: process api: containeranalysis repo-metadata: lint
**Title:** Your .repo-metadata.json file has a problem 🤒
**Body:**
You have a problem with your .repo-metadata.json file:
Result of scan 📈:
* api_shortname 'grafeas' invalid in .repo-metadata.json
☝️ Once you address these problems, you can close this issue.
### Need help?
* [Schema definition](https://github.com/googleapis/repo-automation-bots/blob/main/packages/repo-metadata-lint/src/repo-metadata-schema.json): lists valid options for each field.
* [API index](https://github.com/googleapis/googleapis/blob/master/api-index-v1.json): for gRPC libraries **api_shortname** should match the subdomain of an API's **hostName**.
* Reach out to **go/github-automation** if you have any questions.

**index:** 1.0
**label:** process
**text:**
your repo metadata json file has a problem 🤒 you have a problem with your repo metadata json file result of scan 📈 api shortname grafeas invalid in repo metadata json ☝️ once you address these problems you can close this issue need help lists valid options for each field for grpc libraries api shortname should match the subdomain of an api s hostname reach out to go github automation if you have any questions

**binary_label:** 1

---

**Row 273,207** (id 20,776,202,578) · IssuesEvent · 2022-03-16 10:42:46
**Repo:** nrwl/nx-set-shas (https://api.github.com/repos/nrwl/nx-set-shas)
**Action:** closed · **Labels:** documentation good first issue
**Title:** Document which app permissions are required in v2
**Body:**
First off, thanks for this library - it's very useful.
From what I understand after quickly reading through the README, v1 of this library used git tags to infer the base and head values, whereas v2 uses the GitHub API.
Github allows you to override the permissions granted to the `GITHUB_TOKEN` within your workflow at both the top level and within a specific job. If you do override this value, it implicitly sets every value you don't specify to `none`. As per their own documentation, when overriding you want to provide the minimum amount of access required for your workflow to run.
It would be good to know which API/app scopes were required (or which API endpoints you were hitting) to know how to correctly configure the permissions. I dropped in a few I thought might be needed, only for the job to fail. I then thought to look through the code to identify which API endpoints you were hitting, but you only seem to publish the bundled `dist` directory.
v1 solves my use case for now so it's easier for me to just drop the version back and not worry about permissions, but it would be good to know going forward.

**index:** 1.0
**label:** non_process
**text:**
document which app permissions are required in first off thanks for this library it s very useful from what i understand after quickly reading through the readme of this library used git tags to infer the base and head values whereas uses the github api github allows you to override the permissions granted to the github token within your workflow at both the top level and within a specific job if you do override this value it implicitly sets every value you don t specify to none as per their own documentation when overriding you want to provide the minimum amount of access required for your workflow to run it would be good to know which api app scopes were required or which api endpoints you were hitting to know how to correctly configure the permissions i dropped in a few i thought might be needed only for the job to fail i then thought to look through the code to identify which api endpoints you were hitting but you only seem to publish the bundled dist directory solves my use case for now so it s easier for me to just drop the version back and not worry about permissions but it would be good to know going forward

**binary_label:** 0

---

**Row 22,278** (id 30,828,749,683) · IssuesEvent · 2023-08-01 22:42:55
**Repo:** sandsquaretech/AdvancedLayoutCalculator.jl (https://api.github.com/repos/sandsquaretech/AdvancedLayoutCalculator.jl)
**Action:** opened · **Labels:** text processing
**Title:** Merge NgramFrequencyHolders
**Body:**
Read multiple documents (in parallel?) to obtain ngrams, merge raw counts, then apply any scaling or cutoffs

**index:** 1.0
**label:** process
**text:**
merge ngramfrequencyholders read multiple documents in parallel to obtain ngrams merge raw counts then apply any scaling or cutoffs

**binary_label:** 1

---

**Row 16,632** (id 21,704,599,483) · IssuesEvent · 2022-05-10 08:29:28
**Repo:** qgis/QGIS (https://api.github.com/repos/qgis/QGIS)
**Action:** closed · **Labels:** Processing Bug
**Title:** Raster calculator produces empty results layer and no error message if input layer is one that has been renamed in QGIS layers panel
**Body:**
Author Name: **Alister Hood** (@AlisterH)
Original Redmine Issue: [20601](https://issues.qgis.org/issues/20601)
Affected QGIS version: 3.4.1
Redmine category:processing/qgis
Assignee: Alessandro Pasotti
---
Right click on a layer in the QGIS "Layers" panel, and rename it.
Try to use it in the raster calculator (i.e. the one listed under QGIS raster analysis in processing). It will not work, but there will be no indication there has been an error (you will just get an empty result layer).
This is bad, because it appears to run successfully, but gives the wrong results.
Maybe this is related to the limitation in the implementation that prevents the raster calculator from listing and using more than one layer with the same name.

**index:** 1.0
**label:** process
**text:**
raster calculator produces empty results layer and no error message if input layer is one that has been renamed in qgis layers panel author name alister hood alisterh original redmine issue affected qgis version redmine category processing qgis assignee alessandro pasotti right click on a layer in the qgis layers panel and rename it try to use it in the raster calculator i e the one listed under qgis raster analysis in processing it will not work but there will be no indication there has been an error you will just get an empty result layer this is bad because it appears to run successfully but gives the wrong results maybe this is related to the limitation in the implementation that prevents the raster calculator from listing and using more than one layer with the same name

**binary_label:** 1

---

**Row 64,345** (id 7,787,252,062) · IssuesEvent · 2018-06-06 21:43:32
**Repo:** syndesisio/syndesis (https://api.github.com/repos/syndesisio/syndesis)
**Action:** closed · **Labels:** cat/design cat/feature
**Title:** Connection configuration workflow visual updates
**Body:**
The cards used when creating and configuring a connection (/connections/create/configure-fields) should utilize the `.card-pf-heading` class to wrap the `.card-pf-title`, ~~and the "Validate" button as well as the progress indicator should be moved into `.card-pf-title`, aligned to the right.~~
* will address validate button placement and output after @sjcox-rh finalizes the new connection template design. there is also another issue that addresses the validation button placement - https://github.com/syndesisio/syndesis/issues/2538
<img width="710" alt="screen shot 2018-02-09 at 12 28 47 pm" src="https://user-images.githubusercontent.com/35148959/36051941-7345faa8-0db1-11e8-96f8-8c82ddb6c298.png">

**index:** 1.0
**label:** non_process
**text:**
connection configuration workflow visual updates the cards used when creating and configuring a connection connections create configure fields should utilize the card pf heading class to wrap the card pf title and the validate button as well as the progress indicator should be moved into card pf title aligned to the right will address validate button placement and output after sjcox rh finalizes the new connection template design there is also another issue that addresses the validation button placement img width alt screen shot at pm src

**binary_label:** 0

---

**Row 21,208** (id 28,262,807,112) · IssuesEvent · 2023-04-07 01:59:34
**Repo:** metabase/metabase (https://api.github.com/repos/metabase/metabase)
**Action:** closed · **Labels:** Type:Bug .metabase-lib .Team/QueryProcessor :hammer_and_wrench:
**Title:** [MLv2] [Bug] JS MetadataProvider not working correctly for questions using Saved Questions/Models as source
**Body:**
Apparently not working for native questions either. See https://metaboat.slack.com/archives/C04DN5VRQM6/p1680268395463109

**index:** 1.0
**label:** process
**text:**
js metadataprovider not working correctly for questions using saved questions models as source apparently not working for native questions either see

**binary_label:** 1

---

**Row 12,719** (id 15,093,579,174) · IssuesEvent · 2021-02-07 01:21:29
**Repo:** Maximus5/ConEmu (https://api.github.com/repos/Maximus5/ConEmu)
**Action:** closed · **Labels:** processes
**Title:** FAR doesn't show output of commands in builtin command line when used under ConEmu
**Body:**
### Versions
ConEmu build: 210202 x64
OS version: Windows 8.1 Pro x64
Far Manager version: 3.0.5400 x64
### Problem description
When I run a command in FAR's command line it doesn't show the output in the window. After pressing CTRL+O there's only information about the command being executed (but it's executed externally it seems).
For example:
```
Command to be executed:
""C:\Windows\system32\ipconfig.exe"
```
The output was shown normally in the previous version (210128). The problem is also absent when running FAR in cmd.
### Steps to reproduce
1. Open FAR in ConEmu.
2. Run any command that produces output.
### Actual results
Only the information about the command being executed is shown.
### Expected results
The actual output of the command should be visible.
### Additional files

**index:** 1.0
**label:** process
**text:**
far doesn t show output of commands in builtin command line when used under conemu versions conemu build os version windows pro far manager version problem description when i run a command in far s command line it doesn t show the output in the window after pressing ctrl o there s only information about the command being executed but it s executed externally it seems for example command to be executed c windows ipconfig exe the output was shown normally in the previous version the problem is also absent when running far in cmd steps to reproduce open far in conemu run any command that produces output actual results only the information about the command being executed is shown expected results the actual output of the command should be visible additional files
| 1
|
180,902
| 30,591,021,475
|
IssuesEvent
|
2023-07-21 17:03:37
|
department-of-veterans-affairs/va.gov-cms
|
https://api.github.com/repos/department-of-veterans-affairs/va.gov-cms
|
closed
|
Link Creation
|
VAMC Drupal engineering Facilities CMS design
|
How might we make it easier for content creators to enter similar links?
## Current situation
When creating two (2) similar links (i.e. phone numbers), the editor cannot copy a link, paste it in, and add an extension. Instead, they have to wipe out the link and start from scratch.
Drupal Node: Detail page
Drupal Area: Facility page
Field or node instance (in node): Location Services

---
## Tasks
- [ ] _What work is necessary for this story to be completed?_
## Acceptance Criteria
- [ ] _What will be created or happen as a result of this story?_
---
## How to configure this issue
- [ ] **Attached to a Milestone** (when will this be completed?)
- [ ] **Attached to an Epic** (what body of work is this a part of?)
- [ ] **Labeled with Team** (`product support`, `analytics-insights`, `operations`, `service-design`, `tools-be`, `tools-fe`)
- [ ] **Labeled with Practice Area** (`backend`, `frontend`, `devops`, `design`, `research`, `product`, `ia`, `qa`, `analytics`, `contact center`, `research`, `accessibility`, `content`)
- [ ] **Labeled with Type** (`bug`, `request`, `discovery`, `documentation`, etc.)
|
1.0
|
Link Creation - How might we make it easier for content creators to enter similar links?
## Current situation
When creating two (2) similar links (i.e. phone numbers), the editor cannot copy a link, paste it in, and add an extension. Instead, they have to wipe out the link and start from scratch.
Drupal Node: Detail page
Drupal Area: Facility page
Field or node instance (in node): Location Services

---
## Tasks
- [ ] _What work is necessary for this story to be completed?_
## Acceptance Criteria
- [ ] _What will be created or happen as a result of this story?_
---
## How to configure this issue
- [ ] **Attached to a Milestone** (when will this be completed?)
- [ ] **Attached to an Epic** (what body of work is this a part of?)
- [ ] **Labeled with Team** (`product support`, `analytics-insights`, `operations`, `service-design`, `tools-be`, `tools-fe`)
- [ ] **Labeled with Practice Area** (`backend`, `frontend`, `devops`, `design`, `research`, `product`, `ia`, `qa`, `analytics`, `contact center`, `research`, `accessibility`, `content`)
- [ ] **Labeled with Type** (`bug`, `request`, `discovery`, `documentation`, etc.)
|
non_process
|
link creation how might we make it easier for content creators to enter similar links current situation when creating two similar links i e phone numbers editor cannot copy link and put it in and add extension instead they have to wipe out link and start from scratch drupal node detail page drupal area facility page field or node instance in node location services tasks what work is necessary for this story to be completed acceptance criteria what will be created or happen as a result of this story how to configure this issue attached to a milestone when will this be completed attached to an epic what body of work is this a part of labeled with team product support analytics insights operations service design tools be tools fe labeled with practice area backend frontend devops design research product ia qa analytics contact center research accessibility content labeled with type bug request discovery documentation etc
| 0
|
1,444
| 2,598,116,881
|
IssuesEvent
|
2015-02-22 05:13:34
|
okTurtles/dnschain
|
https://api.github.com/repos/okTurtles/dnschain
|
opened
|
Update documentation as necessary for 0.5
|
documentation high priority
|
- Explain HTTPS fingerprint autogen (and how to find out what the fingerprint is). Mention the openname-resolver API is coming and link to it.
- Have a section somewhere documenting all supported blockchain TLDs
- Explain `icann.dns`
- Discuss new configuration options for specifying blockchain config file path, throttling, etc.
- Document how devs can easily add support for their blockchain of choice.
|
1.0
|
Update documentation as necessary for 0.5 - - Explain HTTPS fingerprint autogen (and how to find out what the fingerprint is). Mention the openname-resolver API is coming and link to it.
- Have a section somewhere documenting all supported blockchain TLDs
- Explain `icann.dns`
- Discuss new configuration options for specifying blockchain config file path, throttling, etc.
- Document how devs can easily add support for their blockchain of choice.
|
non_process
|
update documentation as necessary for explain https fingerprint autogen and how to find out what the fingerprint is mention the openname resolver api is coming and link to it have a section somewhere documenting all supported blockchain tlds explain icann dns discuss new configuration options for specifying blockchain config file path throttling etc document how devs can easily add support for their blockchain of choice
| 0
|
7,575
| 10,685,982,155
|
IssuesEvent
|
2019-10-22 13:40:28
|
prisma/prisma2
|
https://api.github.com/repos/prisma/prisma2
|
opened
|
Back button does not work on init flow with SQLite selected
|
process/candidate
|
Steps to reproduce:
1. prisma2 init
2. Select Blank Project > SQLite
3. Try to go back using the Back button
<img width="675" alt="Screenshot 2019-10-22 at 15 39 59" src="https://user-images.githubusercontent.com/7689783/67291703-39bd3c00-f4e2-11e9-9cf7-5262aedf9d9f.png">
|
1.0
|
Back button does not work on init flow with SQLite selected - Steps to reproduce:
1. prisma2 init
2. Select Blank Project > SQLite
3. Try to go back using the Back button
<img width="675" alt="Screenshot 2019-10-22 at 15 39 59" src="https://user-images.githubusercontent.com/7689783/67291703-39bd3c00-f4e2-11e9-9cf7-5262aedf9d9f.png">
|
process
|
back button does not work on init flow with sqlite selected steps to reproduce init select blank project sqlite try to go back using the back button img width alt screenshot at src
| 1
|
11,606
| 14,478,922,000
|
IssuesEvent
|
2020-12-10 09:06:54
|
decidim/decidim
|
https://api.github.com/repos/decidim/decidim
|
closed
|
See next meetings of a Process Group
|
contract: process-groups
|
Ref.: PG04
**Is your feature request related to a problem? Please describe.**
As a visitor, I want to see the future Meetings that take place in a Process Group on a map.
**Describe the solution you'd like**
To have the "Next meetings" content block implemented.
For keeping this short, we will not implement "View all"
**Describe alternatives you've considered**
To extend the General Meetings with filters and selectors for Process Groups ( https://www.decidim.barcelona/meetings). This will probably be extended in the future to other participatory spaces (like Processes, Assemblies, Initiatives, Consultations, etc)
To have a page like /processes_groups/X/meetings
To use the general search for making this filtering
**Additional context**

One thing that we've lost with PG is the ability to see all the contents inside of the processes.
This would probably be implemented in the future but at the moment is out of the scope.
**Could this issue impact users' private data?**
No
**Acceptance criteria**
- [x] As a visitor I can see the next 6 meetings of all the processes inside of a PG with a map
|
1.0
|
See next meetings of a Process Group - Ref.: PG04
**Is your feature request related to a problem? Please describe.**
As a visitor, I want to see the future Meetings that take place in a Process Group on a map.
**Describe the solution you'd like**
To have the "Next meetings" content block implemented.
For keeping this short, we will not implement "View all"
**Describe alternatives you've considered**
To extend the General Meetings with filters and selectors for Process Groups ( https://www.decidim.barcelona/meetings). This will probably be extended in the future to other participatory spaces (like Processes, Assemblies, Initiatives, Consultations, etc)
To have a page like /processes_groups/X/meetings
To use the general search for making this filtering
**Additional context**

One thing that we've lost with PG is the ability to see all the contents inside of the processes.
This would probably be implemented in the future but at the moment is out of the scope.
**Could this issue impact users' private data?**
No
**Acceptance criteria**
- [x] As a visitor I can see the next 6 meetings of all the processes inside of a PG with a map
|
process
|
see next meetings of a process group ref is your feature request related to a problem please describe as a visitor i want to see the future meetings that take place in a process group in a map describe the solution you d like to have the next meetings content block implemented for keeping this short we will not implement view all describe alternatives you ve considered to extend the general meetings with filters and selectors for process groups this will probably be extended in the future to other participatory spaces like processes assemblies initiatives consultations etc to have a page like processes groups x meetings to use the general search for making this filtering additional context one thing that we ve lost with pg is the ability to see all the contents inside of the processes this would probably be implemented in the future but at the moment is out of the scope does this issue could impact on users private data no acceptance criteria as a visitor i can see the next meetings of all the processes inside of a pg with a map
| 1
|
530,633
| 15,435,208,253
|
IssuesEvent
|
2021-03-07 07:44:34
|
worldanvil/worldanvil-bug-tracker
|
https://api.github.com/repos/worldanvil/worldanvil-bug-tracker
|
closed
|
Corrupted Marker prevents all layers & pins from loading.
|
Feature: Maps Priority: Optional Severity: Minor Type: UI / UX
|
**World Anvil Username**: SoulLink
**Feature**: Maps
**Describe the Issue**
When there is an issue in loading a marker it breaks the entire map. Tracking this error down is almost impossible without intimate knowledge of the code and developer codes. The issue on the screenshot below is that mapMarker51ec88 is not defined within the map script at all. This throws a Reference Error.
**Expected behavior**
Catch the reference error (or resolve the underlying issue) and prevent a single marker from disabling the entire layer, group & marker display. This would aid in tracking down localized issues.
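A minimal sketch of that suggestion (not World Anvil's actual code; `markerDefs`, `createMarker`, and `layer` are hypothetical names, and it assumes markers are initialized one at a time):
```js
// Sketch only: initialize each marker defensively so one bad definition
// cannot take down the whole layer/group/marker display.
for (const def of markerDefs) {
  try {
    layer.addLayer(createMarker(def));
  } catch (err) {
    // e.g. the ReferenceError thrown for the undefined mapMarker51ec88
    console.warn(`Skipping marker ${def.id}:`, err);
  }
}
```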
**Screenshots**

|
1.0
|
Corrupted Marker prevents all layers & pins from loading. - **World Anvil Username**: SoulLink
**Feature**: Maps
**Describe the Issue**
When there is an issue in loading a marker it breaks the entire map. Tracking this error down is almost impossible without intimate knowledge of the code and developer codes. The issue on the screenshot below is that mapMarker51ec88 is not defined within the map script at all. This throws a Reference Error.
**Expected behavior**
Catch the reference error (or resolve the underlying issue) and prevent a single marker from disabling the entire layer, group & marker display. This would aid in tracking down localized issues.
**Screenshots**

|
non_process
|
corrupted marker prevents all layers pins from loading world anvil username soullink feature maps describe the issue when there is an issue in loading a marker it breaks the entire map tracking this error down is almost impossible without intimate knowledge of the code and developer codes the issue on the screenshot below is that is not defined within the map script at all this throws a reference error expected behavior catch the reference error or resolve the underlying issue and prevent a single marker from disabling the entire layer group marker display this would aid in tracking down localized issues screenshots
| 0
|
20,763
| 27,494,695,047
|
IssuesEvent
|
2023-03-05 02:00:10
|
lizhihao6/get-daily-arxiv-noti
|
https://api.github.com/repos/lizhihao6/get-daily-arxiv-noti
|
opened
|
New submissions for Fri, 3 Mar 23
|
event camera white balance isp compression image signal processing image signal process raw raw image events camera color contrast events AWB
|
## Keyword: events
### Delivering Arbitrary-Modal Semantic Segmentation
- **Authors:** Jiaming Zhang, Ruiping Liu, Hao Shi, Kailun Yang, Simon Reiß, Kunyu Peng, Haodong Fu, Kaiwei Wang, Rainer Stiefelhagen
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2303.01480
- **Pdf link:** https://arxiv.org/pdf/2303.01480
- **Abstract**
Multimodal fusion can make semantic segmentation more robust. However, fusing an arbitrary number of modalities remains underexplored. To delve into this problem, we create the DeLiVER arbitrary-modal segmentation benchmark, covering Depth, LiDAR, multiple Views, Events, and RGB. Aside from this, we provide this dataset in four severe weather conditions as well as five sensor failure cases to exploit modal complementarity and resolve partial outages. To make this possible, we present the arbitrary cross-modal segmentation model CMNeXt. It encompasses a Self-Query Hub (SQ-Hub) designed to extract effective information from any modality for subsequent fusion with the RGB representation and adds only negligible amounts of parameters (~0.01M) per additional modality. On top, to efficiently and flexibly harvest discriminative cues from the auxiliary modalities, we introduce the simple Parallel Pooling Mixer (PPX). With extensive experiments on a total of six benchmarks, our CMNeXt achieves state-of-the-art performance on the DeLiVER, KITTI-360, MFNet, NYU Depth V2, UrbanLF, and MCubeS datasets, allowing to scale from 1 to 81 modalities. On the freshly collected DeLiVER, the quad-modal CMNeXt reaches up to 66.30% in mIoU with a +9.10% gain as compared to the mono-modal baseline. The DeLiVER dataset and our code are at: https://jamycheung.github.io/DELIVER.html.
## Keyword: event camera
There is no result
## Keyword: events camera
There is no result
## Keyword: white balance
There is no result
## Keyword: color contrast
There is no result
## Keyword: AWB
There is no result
## Keyword: ISP
### Domain-aware Triplet loss in Domain Generalization
- **Authors:** Kaiyu Guo, Brian Lovell
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2303.01233
- **Pdf link:** https://arxiv.org/pdf/2303.01233
- **Abstract**
Despite much progress being made in the field of object recognition with the advances of deep learning, there are still several factors negatively affecting the performance of deep learning models. Domain shift is one of these factors and is caused by discrepancies in the distributions of the testing and training data. In this paper, we focus on the problem of compact feature clustering in domain generalization to help optimize the embedding space from multi-domain data. We design a domainaware triplet loss for domain generalization to help the model to not only cluster similar semantic features, but also to disperse features arising from the domain. Unlike previous methods focusing on distribution alignment, our algorithm is designed to disperse domain information in the embedding space. The basic idea is motivated based on the assumption that embedding features can be clustered based on domain information, which is mathematically and empirically supported in this paper. In addition, during our exploration of feature clustering in domain generalization, we note that factors affecting the convergence of metric learning loss in domain generalization are more important than the pre-defined domains. To solve this issue, we utilize two methods to normalize the embedding space, reducing the internal covariate shift of the embedding features. The ablation study demonstrates the effectiveness of our algorithm. Moreover, the experiments on the benchmark datasets, including PACS, VLCS and Office-Home, show that our method outperforms related methods focusing on domain discrepancy. In particular, our results on RegnetY-16 are significantly better than state-of-the-art methods on the benchmark datasets. Our code will be released at https://github.com/workerbcd/DCT
## Keyword: image signal processing
There is no result
## Keyword: image signal process
There is no result
## Keyword: compression
### Disentangling Orthogonal Planes for Indoor Panoramic Room Layout Estimation with Cross-Scale Distortion Awareness
- **Authors:** Zhijie Shen, Zishuo Zheng, Chunyu Lin, Lang Nie, Kang Liao, Yao Zhao
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2303.00971
- **Pdf link:** https://arxiv.org/pdf/2303.00971
- **Abstract**
Based on the Manhattan World assumption, most existing indoor layout estimation schemes focus on recovering layouts from vertically compressed 1D sequences. However, the compression procedure confuses the semantics of different planes, yielding inferior performance with ambiguous interpretability. To address this issue, we propose to disentangle this 1D representation by pre-segmenting orthogonal (vertical and horizontal) planes from a complex scene, explicitly capturing the geometric cues for indoor layout estimation. Considering the symmetry between the floor boundary and ceiling boundary, we also design a soft-flipping fusion strategy to assist the pre-segmentation. Besides, we present a feature assembling mechanism to effectively integrate shallow and deep features with distortion distribution awareness. To compensate for the potential errors in pre-segmentation, we further leverage triple attention to reconstruct the disentangled sequences for better performance. Experiments on four popular benchmarks demonstrate our superiority over existing SoTA solutions, especially on the 3DIoU metric. The code is available at \url{https://github.com/zhijieshen-bjtu/DOPNet}.
### Practical Network Acceleration with Tiny Sets: Hypothesis, Theory, and Algorithm
- **Authors:** Guo-Hua Wang, Jianxin Wu
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Artificial Intelligence (cs.AI); Machine Learning (cs.LG); Machine Learning (stat.ML)
- **Arxiv link:** https://arxiv.org/abs/2303.00972
- **Pdf link:** https://arxiv.org/pdf/2303.00972
- **Abstract**
Due to data privacy issues, accelerating networks with tiny training sets has become a critical need in practice. Previous methods achieved promising results empirically by filter-level pruning. In this paper, we both study this problem theoretically and propose an effective algorithm aligning well with our theoretical results. First, we propose the finetune convexity hypothesis to explain why recent few-shot compression algorithms do not suffer from overfitting problems. Based on it, a theory is further established to explain these methods for the first time. Compared to naively finetuning a pruned network, feature mimicking is proved to achieve a lower variance of parameters and hence enjoys easier optimization. With our theoretical conclusions, we claim dropping blocks is a fundamentally superior few-shot compression scheme in terms of more convex optimization and a higher acceleration ratio. To choose which blocks to drop, we propose a new metric, recoverability, to effectively measure the difficulty of recovering the compressed network. Finally, we propose an algorithm named PRACTISE to accelerate networks using only tiny training sets. PRACTISE outperforms previous methods by a significant margin. For 22% latency reduction, it surpasses previous methods by on average 7 percentage points on ImageNet-1k. It also works well under data-free or out-of-domain data settings. Our code is at https://github.com/DoctorKey/Practise
## Keyword: RAW
### MLANet: Multi-Level Attention Network with Sub-instruction for Continuous Vision-and-Language Navigation
- **Authors:** Zongtao He, Liuyi Wang, Shu Li, Qingqing Yan, Chengju Liu, Qijun Chen
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Computation and Language (cs.CL); Multimedia (cs.MM)
- **Arxiv link:** https://arxiv.org/abs/2303.01396
- **Pdf link:** https://arxiv.org/pdf/2303.01396
- **Abstract**
Vision-and-Language Navigation (VLN) aims to develop intelligent agents to navigate in unseen environments only through language and vision supervision. In the recently proposed continuous settings (continuous VLN), the agent must act in a free 3D space and faces tougher challenges like real-time execution, complex instruction understanding, and long action sequence prediction. For a better performance in continuous VLN, we design a multi-level instruction understanding procedure and propose a novel model, Multi-Level Attention Network (MLANet). The first step of MLANet is to generate sub-instructions efficiently. We design a Fast Sub-instruction Algorithm (FSA) to segment the raw instruction into sub-instructions and generate a new sub-instruction dataset named ``FSASub". FSA is annotation-free and faster than the current method by 70 times, thus fitting the real-time requirement in continuous VLN. To solve the complex instruction understanding problem, MLANet needs a global perception of the instruction and observations. We propose a Multi-Level Attention (MLA) module to fuse vision, low-level semantics, and high-level semantics, which produce features containing a dynamic and global comprehension of the task. MLA also mitigates the adverse effects of noise words, thus ensuring a robust understanding of the instruction. To correctly predict actions in long trajectories, MLANet needs to focus on what sub-instruction is being executed every step. We propose a Peak Attention Loss (PAL) to improve the flexible and adaptive selection of the current sub-instruction. PAL benefits the navigation agent by concentrating its attention on the local information, thus helping the agent predict the most appropriate actions. We train and test MLANet in the standard benchmark. Experiment results show MLANet outperforms baselines by a significant margin.
### Image as Set of Points
- **Authors:** Xu Ma, Yuqian Zhou, Huan Wang, Can Qin, Bin Sun, Chang Liu, Yun Fu
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2303.01494
- **Pdf link:** https://arxiv.org/pdf/2303.01494
- **Abstract**
What is an image and how to extract latent features? Convolutional Networks (ConvNets) consider an image as organized pixels in a rectangular shape and extract features via convolutional operation in local region; Vision Transformers (ViTs) treat an image as a sequence of patches and extract features via attention mechanism in a global range. In this work, we introduce a straightforward and promising paradigm for visual representation, which is called Context Clusters. Context clusters (CoCs) view an image as a set of unorganized points and extract features via simplified clustering algorithm. In detail, each point includes the raw feature (e.g., color) and positional information (e.g., coordinates), and a simplified clustering algorithm is employed to group and extract deep features hierarchically. Our CoCs are convolution- and attention-free, and only rely on clustering algorithm for spatial interaction. Owing to the simple design, we show CoCs endow gratifying interpretability via the visualization of clustering process. Our CoCs aim at providing a new perspective on image and visual representation, which may enjoy broad applications in different domains and exhibit profound insights. Even though we are not targeting SOTA performance, COCs still achieve comparable or even better results than ConvNets or ViTs on several benchmarks. Codes are available at: https://github.com/ma-xu/Context-Cluster.
## Keyword: raw image
There is no result
|
2.0
|
New submissions for Fri, 3 Mar 23 - ## Keyword: events
### Delivering Arbitrary-Modal Semantic Segmentation
- **Authors:** Jiaming Zhang, Ruiping Liu, Hao Shi, Kailun Yang, Simon Reiß, Kunyu Peng, Haodong Fu, Kaiwei Wang, Rainer Stiefelhagen
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2303.01480
- **Pdf link:** https://arxiv.org/pdf/2303.01480
- **Abstract**
Multimodal fusion can make semantic segmentation more robust. However, fusing an arbitrary number of modalities remains underexplored. To delve into this problem, we create the DeLiVER arbitrary-modal segmentation benchmark, covering Depth, LiDAR, multiple Views, Events, and RGB. Aside from this, we provide this dataset in four severe weather conditions as well as five sensor failure cases to exploit modal complementarity and resolve partial outages. To make this possible, we present the arbitrary cross-modal segmentation model CMNeXt. It encompasses a Self-Query Hub (SQ-Hub) designed to extract effective information from any modality for subsequent fusion with the RGB representation and adds only negligible amounts of parameters (~0.01M) per additional modality. On top, to efficiently and flexibly harvest discriminative cues from the auxiliary modalities, we introduce the simple Parallel Pooling Mixer (PPX). With extensive experiments on a total of six benchmarks, our CMNeXt achieves state-of-the-art performance on the DeLiVER, KITTI-360, MFNet, NYU Depth V2, UrbanLF, and MCubeS datasets, allowing to scale from 1 to 81 modalities. On the freshly collected DeLiVER, the quad-modal CMNeXt reaches up to 66.30% in mIoU with a +9.10% gain as compared to the mono-modal baseline. The DeLiVER dataset and our code are at: https://jamycheung.github.io/DELIVER.html.
## Keyword: event camera
There is no result
## Keyword: events camera
There is no result
## Keyword: white balance
There is no result
## Keyword: color contrast
There is no result
## Keyword: AWB
There is no result
## Keyword: ISP
### Domain-aware Triplet loss in Domain Generalization
- **Authors:** Kaiyu Guo, Brian Lovell
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2303.01233
- **Pdf link:** https://arxiv.org/pdf/2303.01233
- **Abstract**
Despite much progress being made in the field of object recognition with the advances of deep learning, there are still several factors negatively affecting the performance of deep learning models. Domain shift is one of these factors and is caused by discrepancies in the distributions of the testing and training data. In this paper, we focus on the problem of compact feature clustering in domain generalization to help optimize the embedding space from multi-domain data. We design a domainaware triplet loss for domain generalization to help the model to not only cluster similar semantic features, but also to disperse features arising from the domain. Unlike previous methods focusing on distribution alignment, our algorithm is designed to disperse domain information in the embedding space. The basic idea is motivated based on the assumption that embedding features can be clustered based on domain information, which is mathematically and empirically supported in this paper. In addition, during our exploration of feature clustering in domain generalization, we note that factors affecting the convergence of metric learning loss in domain generalization are more important than the pre-defined domains. To solve this issue, we utilize two methods to normalize the embedding space, reducing the internal covariate shift of the embedding features. The ablation study demonstrates the effectiveness of our algorithm. Moreover, the experiments on the benchmark datasets, including PACS, VLCS and Office-Home, show that our method outperforms related methods focusing on domain discrepancy. In particular, our results on RegnetY-16 are significantly better than state-of-the-art methods on the benchmark datasets. Our code will be released at https://github.com/workerbcd/DCT
## Keyword: image signal processing
There is no result
## Keyword: image signal process
There is no result
## Keyword: compression
### Disentangling Orthogonal Planes for Indoor Panoramic Room Layout Estimation with Cross-Scale Distortion Awareness
- **Authors:** Zhijie Shen, Zishuo Zheng, Chunyu Lin, Lang Nie, Kang Liao, Yao Zhao
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2303.00971
- **Pdf link:** https://arxiv.org/pdf/2303.00971
- **Abstract**
Based on the Manhattan World assumption, most existing indoor layout estimation schemes focus on recovering layouts from vertically compressed 1D sequences. However, the compression procedure confuses the semantics of different planes, yielding inferior performance with ambiguous interpretability. To address this issue, we propose to disentangle this 1D representation by pre-segmenting orthogonal (vertical and horizontal) planes from a complex scene, explicitly capturing the geometric cues for indoor layout estimation. Considering the symmetry between the floor boundary and ceiling boundary, we also design a soft-flipping fusion strategy to assist the pre-segmentation. Besides, we present a feature assembling mechanism to effectively integrate shallow and deep features with distortion distribution awareness. To compensate for the potential errors in pre-segmentation, we further leverage triple attention to reconstruct the disentangled sequences for better performance. Experiments on four popular benchmarks demonstrate our superiority over existing SoTA solutions, especially on the 3DIoU metric. The code is available at \url{https://github.com/zhijieshen-bjtu/DOPNet}.
### Practical Network Acceleration with Tiny Sets: Hypothesis, Theory, and Algorithm
- **Authors:** Guo-Hua Wang, Jianxin Wu
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Artificial Intelligence (cs.AI); Machine Learning (cs.LG); Machine Learning (stat.ML)
- **Arxiv link:** https://arxiv.org/abs/2303.00972
- **Pdf link:** https://arxiv.org/pdf/2303.00972
- **Abstract**
Due to data privacy issues, accelerating networks with tiny training sets has become a critical need in practice. Previous methods achieved promising results empirically by filter-level pruning. In this paper, we both study this problem theoretically and propose an effective algorithm aligning well with our theoretical results. First, we propose the finetune convexity hypothesis to explain why recent few-shot compression algorithms do not suffer from overfitting problems. Based on it, a theory is further established to explain these methods for the first time. Compared to naively finetuning a pruned network, feature mimicking is proved to achieve a lower variance of parameters and hence enjoys easier optimization. With our theoretical conclusions, we claim dropping blocks is a fundamentally superior few-shot compression scheme in terms of more convex optimization and a higher acceleration ratio. To choose which blocks to drop, we propose a new metric, recoverability, to effectively measure the difficulty of recovering the compressed network. Finally, we propose an algorithm named PRACTISE to accelerate networks using only tiny training sets. PRACTISE outperforms previous methods by a significant margin. For 22% latency reduction, it surpasses previous methods by on average 7 percentage points on ImageNet-1k. It also works well under data-free or out-of-domain data settings. Our code is at https://github.com/DoctorKey/Practise
## Keyword: RAW
### MLANet: Multi-Level Attention Network with Sub-instruction for Continuous Vision-and-Language Navigation
- **Authors:** Zongtao He, Liuyi Wang, Shu Li, Qingqing Yan, Chengju Liu, Qijun Chen
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Computation and Language (cs.CL); Multimedia (cs.MM)
- **Arxiv link:** https://arxiv.org/abs/2303.01396
- **Pdf link:** https://arxiv.org/pdf/2303.01396
- **Abstract**
Vision-and-Language Navigation (VLN) aims to develop intelligent agents to navigate in unseen environments only through language and vision supervision. In the recently proposed continuous settings (continuous VLN), the agent must act in a free 3D space and faces tougher challenges like real-time execution, complex instruction understanding, and long action sequence prediction. For a better performance in continuous VLN, we design a multi-level instruction understanding procedure and propose a novel model, Multi-Level Attention Network (MLANet). The first step of MLANet is to generate sub-instructions efficiently. We design a Fast Sub-instruction Algorithm (FSA) to segment the raw instruction into sub-instructions and generate a new sub-instruction dataset named ``FSASub". FSA is annotation-free and faster than the current method by 70 times, thus fitting the real-time requirement in continuous VLN. To solve the complex instruction understanding problem, MLANet needs a global perception of the instruction and observations. We propose a Multi-Level Attention (MLA) module to fuse vision, low-level semantics, and high-level semantics, which produce features containing a dynamic and global comprehension of the task. MLA also mitigates the adverse effects of noise words, thus ensuring a robust understanding of the instruction. To correctly predict actions in long trajectories, MLANet needs to focus on what sub-instruction is being executed every step. We propose a Peak Attention Loss (PAL) to improve the flexible and adaptive selection of the current sub-instruction. PAL benefits the navigation agent by concentrating its attention on the local information, thus helping the agent predict the most appropriate actions. We train and test MLANet in the standard benchmark. Experiment results show MLANet outperforms baselines by a significant margin.
### Image as Set of Points
- **Authors:** Xu Ma, Yuqian Zhou, Huan Wang, Can Qin, Bin Sun, Chang Liu, Yun Fu
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2303.01494
- **Pdf link:** https://arxiv.org/pdf/2303.01494
- **Abstract**
What is an image and how to extract latent features? Convolutional Networks (ConvNets) consider an image as organized pixels in a rectangular shape and extract features via convolutional operation in local region; Vision Transformers (ViTs) treat an image as a sequence of patches and extract features via attention mechanism in a global range. In this work, we introduce a straightforward and promising paradigm for visual representation, which is called Context Clusters. Context clusters (CoCs) view an image as a set of unorganized points and extract features via simplified clustering algorithm. In detail, each point includes the raw feature (e.g., color) and positional information (e.g., coordinates), and a simplified clustering algorithm is employed to group and extract deep features hierarchically. Our CoCs are convolution- and attention-free, and only rely on clustering algorithm for spatial interaction. Owing to the simple design, we show CoCs endow gratifying interpretability via the visualization of clustering process. Our CoCs aim at providing a new perspective on image and visual representation, which may enjoy broad applications in different domains and exhibit profound insights. Even though we are not targeting SOTA performance, COCs still achieve comparable or even better results than ConvNets or ViTs on several benchmarks. Codes are available at: https://github.com/ma-xu/Context-Cluster.
## Keyword: raw image
There is no result
|
process
|
new submissions for fri mar keyword events delivering arbitrary modal semantic segmentation authors jiaming zhang ruiping liu hao shi kailun yang simon reiß kunyu peng haodong fu kaiwei wang rainer stiefelhagen subjects computer vision and pattern recognition cs cv arxiv link pdf link abstract multimodal fusion can make semantic segmentation more robust however fusing an arbitrary number of modalities remains underexplored to delve into this problem we create the deliver arbitrary modal segmentation benchmark covering depth lidar multiple views events and rgb aside from this we provide this dataset in four severe weather conditions as well as five sensor failure cases to exploit modal complementarity and resolve partial outages to make this possible we present the arbitrary cross modal segmentation model cmnext it encompasses a self query hub sq hub designed to extract effective information from any modality for subsequent fusion with the rgb representation and adds only negligible amounts of parameters per additional modality on top to efficiently and flexibly harvest discriminative cues from the auxiliary modalities we introduce the simple parallel pooling mixer ppx with extensive experiments on a total of six benchmarks our cmnext achieves state of the art performance on the deliver kitti mfnet nyu depth urbanlf and mcubes datasets allowing to scale from to modalities on the freshly collected deliver the quad modal cmnext reaches up to in miou with a gain as compared to the mono modal baseline the deliver dataset and our code are at keyword event camera there is no result keyword events camera there is no result keyword white balance there is no result keyword color contrast there is no result keyword awb there is no result keyword isp domain aware triplet loss in domain generalization authors kaiyu guo brian lovell subjects computer vision and pattern recognition cs cv arxiv link pdf link abstract despite much progress being made in the field of object recognition with the advances of deep learning there are still several factors negatively affecting the performance of deep learning models domain shift is one of these factors and is caused by discrepancies in the distributions of the testing and training data in this paper we focus on the problem of compact feature clustering in domain generalization to help optimize the embedding space from multi domain data we design a domainaware triplet loss for domain generalization to help the model to not only cluster similar semantic features but also to disperse features arising from the domain unlike previous methods focusing on distribution alignment our algorithm is designed to disperse domain information in the embedding space the basic idea is motivated based on the assumption that embedding features can be clustered based on domain information which is mathematically and empirically supported in this paper in addition during our exploration of feature clustering in domain generalization we note that factors affecting the convergence of metric learning loss in domain generalization are more important than the pre defined domains to solve this issue we utilize two methods to normalize the embedding space reducing the internal covariate shift of the embedding features the ablation study demonstrates the effectiveness of our algorithm moreover the experiments on the benchmark datasets including pacs vlcs and office home show that our method outperforms related methods focusing on domain discrepancy in particular our results on regnety are 
significantly better than state of the art methods on the benchmark datasets our code will be released at keyword image signal processing there is no result keyword image signal process there is no result keyword compression disentangling orthogonal planes for indoor panoramic room layout estimation with cross scale distortion awareness authors zhijie shen zishuo zheng chunyu lin lang nie kang liao yao zhao subjects computer vision and pattern recognition cs cv arxiv link pdf link abstract based on the manhattan world assumption most existing indoor layout estimation schemes focus on recovering layouts from vertically compressed sequences however the compression procedure confuses the semantics of different planes yielding inferior performance with ambiguous interpretability to address this issue we propose to disentangle this representation by pre segmenting orthogonal vertical and horizontal planes from a complex scene explicitly capturing the geometric cues for indoor layout estimation considering the symmetry between the floor boundary and ceiling boundary we also design a soft flipping fusion strategy to assist the pre segmentation besides we present a feature assembling mechanism to effectively integrate shallow and deep features with distortion distribution awareness to compensate for the potential errors in pre segmentation we further leverage triple attention to reconstruct the disentangled sequences for better performance experiments on four popular benchmarks demonstrate our superiority over existing sota solutions especially on the metric the code is available at url practical network acceleration with tiny sets hypothesis theory and algorithm authors guo hua wang jianxin wu subjects computer vision and pattern recognition cs cv artificial intelligence cs ai machine learning cs lg machine learning stat ml arxiv link pdf link abstract due to data privacy issues accelerating networks with tiny training sets has become a critical need in practice previous methods achieved promising results empirically by filter level pruning in this paper we both study this problem theoretically and propose an effective algorithm aligning well with our theoretical results first we propose the finetune convexity hypothesis to explain why recent few shot compression algorithms do not suffer from overfitting problems based on it a theory is further established to explain these methods for the first time compared to naively finetuning a pruned network feature mimicking is proved to achieve a lower variance of parameters and hence enjoys easier optimization with our theoretical conclusions we claim dropping blocks is a fundamentally superior few shot compression scheme in terms of more convex optimization and a higher acceleration ratio to choose which blocks to drop we propose a new metric recoverability to effectively measure the difficulty of recovering the compressed network finally we propose an algorithm named practise to accelerate networks using only tiny training sets practise outperforms previous methods by a significant margin for latency reduction it surpasses previous methods by on average percentage points on imagenet it also works well under data free or out of domain data settings our code is at keyword raw mlanet multi level attention network with sub instruction for continuous vision and language navigation authors zongtao he liuyi wang shu li qingqing yan chengju liu qijun chen subjects computer vision and pattern recognition cs cv computation and language cs cl multimedia cs mm 
arxiv link pdf link abstract vision and language navigation vln aims to develop intelligent agents to navigate in unseen environments only through language and vision supervision in the recently proposed continuous settings continuous vln the agent must act in a free space and faces tougher challenges like real time execution complex instruction understanding and long action sequence prediction for a better performance in continuous vln we design a multi level instruction understanding procedure and propose a novel model multi level attention network mlanet the first step of mlanet is to generate sub instructions efficiently we design a fast sub instruction algorithm fsa to segment the raw instruction into sub instructions and generate a new sub instruction dataset named fsasub fsa is annotation free and faster than the current method by times thus fitting the real time requirement in continuous vln to solve the complex instruction understanding problem mlanet needs a global perception of the instruction and observations we propose a multi level attention mla module to fuse vision low level semantics and high level semantics which produce features containing a dynamic and global comprehension of the task mla also mitigates the adverse effects of noise words thus ensuring a robust understanding of the instruction to correctly predict actions in long trajectories mlanet needs to focus on what sub instruction is being executed every step we propose a peak attention loss pal to improve the flexible and adaptive selection of the current sub instruction pal benefits the navigation agent by concentrating its attention on the local information thus helping the agent predict the most appropriate actions we train and test mlanet in the standard benchmark experiment results show mlanet outperforms baselines by a significant margin image as set of points authors xu ma yuqian zhou huan wang can qin bin sun chang liu yun fu subjects computer vision and pattern recognition cs cv arxiv link pdf link abstract what is an image and how to extract latent features convolutional networks convnets consider an image as organized pixels in a rectangular shape and extract features via convolutional operation in local region vision transformers vits treat an image as a sequence of patches and extract features via attention mechanism in a global range in this work we introduce a straightforward and promising paradigm for visual representation which is called context clusters context clusters cocs view an image as a set of unorganized points and extract features via simplified clustering algorithm in detail each point includes the raw feature e g color and positional information e g coordinates and a simplified clustering algorithm is employed to group and extract deep features hierarchically our cocs are convolution and attention free and only rely on clustering algorithm for spatial interaction owing to the simple design we show cocs endow gratifying interpretability via the visualization of clustering process our cocs aim at providing a new perspective on image and visual representation which may enjoy broad applications in different domains and exhibit profound insights even though we are not targeting sota performance cocs still achieve comparable or even better results than convnets or vits on several benchmarks codes are available at keyword raw image there is no result
| 1
|
4,766
| 7,633,364,281
|
IssuesEvent
|
2018-05-06 03:52:48
|
pump-io/pump.io
|
https://api.github.com/repos/pump-io/pump.io
|
closed
|
Publish the Docker image to Docker Hub
|
docker packaging release process
|
We need a security support plan for this. @JanKoppe are you okay with me publishing the official image?
|
1.0
|
Publish the Docker image to Docker Hub - We need a security support plan for this. @JanKoppe are you okay with me publishing the official image?
|
process
|
publish the docker image to docker hub we need a security support plan for this jankoppe are you okay with me publishing the official image
| 1
|
326,930
| 24,108,868,327
|
IssuesEvent
|
2022-09-20 09:40:53
|
koaning/scikit-lego
|
https://api.github.com/repos/koaning/scikit-lego
|
closed
|
[DOCS] Requesting more information about RepeatingBasisFunction
|
documentation
|
I am unable to find any API doc for RepeatingBasisFunction (especially the input parameters), or any background information about how the Repeating Basis Function works.
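In the meantime, a usage sketch pieced together from the scikit-lego source at the time of writing; the parameter names (`column`, `n_periods`, `input_range`, `remainder`) should be double-checked against the code:
```python
import pandas as pd
from sklego.preprocessing import RepeatingBasisFunction

df = pd.DataFrame({"day_of_year": range(1, 366)})

# n_periods: how many repeating (cyclic) radial basis curves to generate
# input_range: min/max of the periodic input column
# remainder: what to do with the other columns ("drop" or "passthrough")
rbf = RepeatingBasisFunction(column="day_of_year",
                             n_periods=12,
                             input_range=(1, 365),
                             remainder="drop")
X = rbf.fit_transform(df)  # array of shape (365, 12), one column per basis curve
```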
|
1.0
|
[DOCS] Requesting more information about RepeatingBasisFunction - I am unable to find any API doc for RepeatingBasisFunction (especially the input parameters), or any background information about how the Repeating Basis Function works.
|
non_process
|
requesting more information about repeatingbasisfunction i am unable to find any api doc of repeatingbasisfunction specially the input parameters and any background information about how repeating basis function works
| 0
|
4,503
| 7,349,077,493
|
IssuesEvent
|
2018-03-08 09:24:57
|
MicrosoftDocs/azure-docs
|
https://api.github.com/repos/MicrosoftDocs/azure-docs
|
closed
|
UserError: The provided properties are insufficient to retrieve data from data store
|
assigned-to-author data-factory in-process support-request triaged
|
Able to get to the Dataset properties step. "Test Connection" says "Connection successful", however, when trying to "Import Schema" it gives an error:
"UserError: The provided properties are insufficient to retrieve data from data store., activityId: 0df9863d-e740-4758-8c87-6b50dccfe13e"
Have populated the "Advanced" area of the Dataset with what is outlined in the article, but that does not help.
{
"name": "AmazonMWSDataset",
"properties": {
"type": "AmazonMWSObject",
"linkedServiceName": {
"referenceName": "AmazonMWSLinkedService",
"type": "LinkedServiceReference"
}
}
}
Have tested Amazon MWS keys being used, and they are all valid and working.
Am I missing something?
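Not part of the original report, but a guess worth checking (unverified against the Amazon MWS connector docs): on similar ADF connectors this error appears when the dataset names no table, i.e. `typeProperties` with a `tableName` is missing:
```json
{
    "name": "AmazonMWSDataset",
    "properties": {
        "type": "AmazonMWSObject",
        "typeProperties": {
            "tableName": "PLACEHOLDER_MWS_TABLE"
        },
        "linkedServiceName": {
            "referenceName": "AmazonMWSLinkedService",
            "type": "LinkedServiceReference"
        }
    }
}
```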
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: dcde235c-7268-5226-d2ad-7c5f781812d9
* Version Independent ID: 3fb1530d-39c9-ab07-8832-d1c762340711
* [Content](https://docs.microsoft.com/en-us/azure/data-factory/connector-amazon-marketplace-web-service#feedback)
* [Content Source](https://github.com/Microsoft/azure-docs/blob/master/articles/data-factory/connector-amazon-marketplace-web-service.md)
* Service: data-factory
|
1.0
|
UserError: The provided properties are insufficient to retrieve data from data store - Able to get to the Dataset properties step. "Test Connection" says "Connection successful", however, when trying to "Import Schema" it gives an error:
"UserError: The provided properties are insufficient to retrieve data from data store., activityId: 0df9863d-e740-4758-8c87-6b50dccfe13e"
Have populated the "Advanced" area of the Dataset with what is outlined in the article, but that does not help.
{
"name": "AmazonMWSDataset",
"properties": {
"type": "AmazonMWSObject",
"linkedServiceName": {
"referenceName": "AmazonMWSLinkedService",
"type": "LinkedServiceReference"
}
}
}
Have tested Amazon MWS keys being used, and they are all valid and working.
Am I missing something?
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: dcde235c-7268-5226-d2ad-7c5f781812d9
* Version Independent ID: 3fb1530d-39c9-ab07-8832-d1c762340711
* [Content](https://docs.microsoft.com/en-us/azure/data-factory/connector-amazon-marketplace-web-service#feedback)
* [Content Source](https://github.com/Microsoft/azure-docs/blob/master/articles/data-factory/connector-amazon-marketplace-web-service.md)
* Service: data-factory
|
process
|
usererror the provided properties are insufficient to retrieve data from data store able to get to the dataset properties step test connection says connection successful however when trying to import schema it gives an error usererror the provided properties are insufficient to retrieve data from data store activityid have populated the advanced area of the dataset with what is outlined in the article but that does not help name amazonmwsdataset properties type amazonmwsobject linkedservicename referencename amazonmwslinkedservice type linkedservicereference have tested amazon mws keys being used and they are all valid and working something that i am missing document details ⚠ do not edit this section it is required for docs microsoft com ➟ github issue linking id version independent id service data factory
| 1
|
169,092
| 13,114,105,379
|
IssuesEvent
|
2020-08-05 07:06:39
|
eshwarnadh/flairtech-vs
|
https://api.github.com/repos/eshwarnadh/flairtech-vs
|
closed
|
Test-ProjectSettings-EditProjectDetails
|
MUI Testing
|
Robot script should be stored
/TaskManagement/ProjectSettings/EditProjectDetails
|
1.0
|
Test-ProjectSettings-EditProjectDetails - Robot script should be stored
/TaskManagement/ProjectSettings/EditProjectDetails
|
non_process
|
test projectsettings editprojectdetails robot script should be stored taskmanagement projectsettings editprojectdetails
| 0
|
697,794
| 23,952,959,822
|
IssuesEvent
|
2022-09-12 13:01:40
|
benicamera/SupplyManager
|
https://api.github.com/repos/benicamera/SupplyManager
|
closed
|
Implement ItemAmount
|
good first issue Priority: High models
|
# Tasks
- [x] Implement class
## Implement class
The class must have the following members:
- unit {unit}
- amount {double}
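A minimal sketch of such a class (assuming a Java-style model; the repo's actual language and the real `Unit` type are not shown in the issue):
```java
// Hypothetical placeholder; replace with the project's real unit type.
enum Unit { PIECE, KILOGRAM, LITER }

public class ItemAmount {
    private final Unit unit;     // member: unit {unit}
    private final double amount; // member: amount {double}

    public ItemAmount(Unit unit, double amount) {
        this.unit = unit;
        this.amount = amount;
    }

    public Unit getUnit() { return unit; }
    public double getAmount() { return amount; }
}
```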
|
1.0
|
Implement ItemAmount - # Tasks
- [x] Implement class
## Implement class
The class must have the following members:
- unit {unit}
- amount {double}
|
non_process
|
implement itemamount tasks implement class implement class the class must have following members unit unit amount double
| 0
|
676
| 3,146,911,299
|
IssuesEvent
|
2015-09-15 03:18:07
|
dita-ot/dita-ot
|
https://api.github.com/repos/dita-ot/dita-ot
|
closed
|
Flagging preprocess grabs too much with check for defaults
|
bug P2 preprocess/filtering
|
The flagging preprocess code copies active revisions into topics so that later steps do not need to evaluate any flagging logic. Active flags are gathered in the `getrules` template. The template ends with the following code that is intended to copy default flag rules (for example, any default rules properties / rev in general, or default rules for `@audience` when an element uses `@audience`). Current code is grabbing too many rules:
```xml
<!-- default flags -->
<xsl:if test="$current/@audience | $current/@platform | $current/@product | $current/@otherprops">
<xsl:copy-of select="$FILTERDOC/val/prop[empty(@att) and @action = 'flag']"/>
</xsl:if>
<xsl:if test="$current/@rev">
<xsl:copy-of select="$FILTERDOC/val/revprop[empty(@att) and @action = 'flag']"/>
</xsl:if>
```
Problems:
* It copies any default flag rule (no attribute specified) but not defaults for a given attribute (if `@audience` is specified we should get the default rule for `@audience`)
* It would copy a (IMHO invalid) flag rule that specifies no attribute but still has a value, even if that value doesn't match anything on the current element
* For revisions, it copies *all* revision elements, because none specify `@att` (that check should be for `@val`) - I originally noticed this issue because every revision setting in my DITAVAL conditions ended up added to elements that specified `@rev`
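Putting the three problems above together, one possible (untested) shape for the fix:
```xml
<!-- Untested sketch: copy per-attribute defaults, restrict no-attribute rules
     to those without @val, and key the revprop default check on @val -->
<xsl:if test="$current/@audience">
  <xsl:copy-of select="$FILTERDOC/val/prop[@att = 'audience' and empty(@val) and @action = 'flag']"/>
</xsl:if>
<!-- ...repeat the pattern above for @platform, @product, @otherprops... -->
<xsl:if test="$current/@audience | $current/@platform | $current/@product | $current/@otherprops">
  <xsl:copy-of select="$FILTERDOC/val/prop[empty(@att) and empty(@val) and @action = 'flag']"/>
</xsl:if>
<xsl:if test="$current/@rev">
  <xsl:copy-of select="$FILTERDOC/val/revprop[empty(@val) and @action = 'flag']"/>
</xsl:if>
```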
|
1.0
|
Flagging preprocess grabs too much with check for defaults - The flagging preprocess code copies active revisions into topics so that later steps do not need to evaluate any flagging logic. Active flags are gathered in the `getrules` template. The template ends with the following code that is intended to copy default flag rules (for example, any default rules properties / rev in general, or default rules for `@audience` when an element uses `@audience`). Current code is grabbing too many rules:
```xml
<!-- default flags -->
<xsl:if test="$current/@audience | $current/@platform | $current/@product | $current/@otherprops">
<xsl:copy-of select="$FILTERDOC/val/prop[empty(@att) and @action = 'flag']"/>
</xsl:if>
<xsl:if test="$current/@rev">
<xsl:copy-of select="$FILTERDOC/val/revprop[empty(@att) and @action = 'flag']"/>
</xsl:if>
```
Problems:
* It copies any default flag rule (no attribute specified) but not defaults for a given attribute (if `@audience` is specified we should get the default rule for `@audience`)
* It would copy a (IMHO invalid) flag rule that specifies no attribute but still has a value, even if that value doesn't match anything on the current element
* For revisions, it copies *all* revision elements, because none specify `@att` (that check should be for `@val`) - I originally noticed this issue because every revision setting in my DITAVAL conditions ended up added to elements that specified `@rev`
|
process
|
flagging preprocess grabs too much with check for defaults the flagging preprocess code copies active revisions into topics so that later steps do not need to evaluate any flagging logic active flags are gathered in the getrules template the template ends with the following code that is intended to copy default flag rules for example any default rules properties rev in general or default rules for audience when an element uses audience current code is grabbing too many rules xml problems it copies any default flag rule no attribute specified but not defaults for a given attribute if audience is specified we should get the default rule for audience it would copy a imho invalid flag rule that specifies no attribute but still has a value even if that value doesn t match anything on the current element for revisions it copies all revision elements because none specify att that check should be for val i originally noticed this issue because every revision setting in my ditaval conditions ended up added to elements that specified rev
| 1
|
201,636
| 7,034,537,588
|
IssuesEvent
|
2017-12-27 17:22:17
|
DASSL/ClassDB
|
https://api.github.com/repos/DASSL/ClassDB
|
opened
|
Redundant query in log mgmt (E)
|
extra priority medium
|
The script [`addLogMgmt.sql`](https://github.com/DASSL/ClassDB/blob/05872e166a85db91d76e3d620d0c1b0ba20229ec/src/addLogMgmt.sql#L106-L127) unnecessarily contains essentially the same `SELECT` query twice.
It seems the two `SELECT` queries can be replaced with a derived table or a `WITH` query.
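As an illustration of that suggested shape (the query below is made up for illustration, not the actual `addLogMgmt.sql` query):
```sql
-- Run the shared SELECT once in a CTE, then reference it from both places
-- that previously repeated it verbatim.
WITH logDest AS (
    SELECT setting FROM pg_settings WHERE name = 'log_destination'
)
SELECT 'first use' AS via, setting FROM logDest
UNION ALL
SELECT 'second use' AS via, setting FROM logDest;
```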
|
1.0
|
Redundant query in log mgmt (E) - The script [`addLogMgmt.sql`](https://github.com/DASSL/ClassDB/blob/05872e166a85db91d76e3d620d0c1b0ba20229ec/src/addLogMgmt.sql#L106-L127) unnecessarily contains essentially the same `SELECT` query twice.
It seems the two `SELECT` queries can be replaced with a derived table or a `WITH` query.
|
non_process
|
redundant query in log mgmt e the script unnecessarily contains essentially the same select query twice it seems the two select queries can be replaced with a derived table or a with query
| 0
|
57,456
| 7,057,881,485
|
IssuesEvent
|
2018-01-04 18:05:13
|
CartoDB/cartodb
|
https://api.github.com/repos/CartoDB/cartodb
|
closed
|
The add analysis button is wrongly aligned
|
Design
|
### Context
On the zero case and when there is an analysis, the add new analysis button is not aligned on the same position.
### Steps to Reproduce

### Current Result
There are a few pixels of difference; it isn't in the same place.
### Expected result
The button should be in the same position
### Browser and version
<img width="396" alt="captura de pantalla 2017-12-28 a las 15 16 29" src="https://user-images.githubusercontent.com/14546701/34413196-1bb6ed88-ebe2-11e7-8bd1-cdd0bd546a2c.png">
|
1.0
|
The add analysis button is wrongly aligned - ### Context
On the zero case and when there is an analysis, the add new analysis button is not aligned on the same position.
### Steps to Reproduce

### Current Result
There are a few pixels of difference; it isn't in the same place.
### Expected result
The button should be in the same position
### Browser and version
<img width="396" alt="captura de pantalla 2017-12-28 a las 15 16 29" src="https://user-images.githubusercontent.com/14546701/34413196-1bb6ed88-ebe2-11e7-8bd1-cdd0bd546a2c.png">
|
non_process
|
the add analysis button is wrong aligned context on the zero case and when there is an analysis the add new analysis button is not aligned on the same position steps to reproduce current result there are some pixels of difference isn t in the same place expected result the button should be in the same position browser and version img width alt captura de pantalla a las src
| 0
|
365,074
| 10,775,294,076
|
IssuesEvent
|
2019-11-03 13:25:57
|
vladgh/docker_base_images
|
https://api.github.com/repos/vladgh/docker_base_images
|
closed
|
[minidlna] bridge mode / host mode
|
Priority: Low Type: Enhancement
|
Can't get network DLNA server discovery to work in bridge mode.
Also (need to add UPD port to readme?), https://help.ubuntu.com/community/MiniDLNA :
```
OPEN_TCP="8200"
OPEN_UDP="1900"
```
It seems the issue is related to the different networks, 172.17.0.0 (container) and 192.1.0.0 (local). Can you please add a working bridge mode configuration to the readme?
ps. host mode works fine.
|
1.0
|
[minidlna] bridge mode / host mode - Can't get network DLNA server discovery to work in bridge mode.
Also (need to add UPD port to readme?), https://help.ubuntu.com/community/MiniDLNA :
```
OPEN_TCP="8200"
OPEN_UDP="1900"
```
It seems the issue is related to the different networks, 172.17.0.0 (container) and 192.1.0.0 (local). Can you please add a working bridge mode configuration to the readme?
ps. host mode works fine.
|
non_process
|
bridge mode host mode can t make work network dlna server discovering with bridge mode also need to add upd port to readme open tcp open udp it seems issue related to different networks container and local can you please add to the readme working bridge mode configuration ps host mode works fine
| 0
|
433,564
| 12,506,815,802
|
IssuesEvent
|
2020-06-02 13:12:15
|
haxwell/eog-mobile2
|
https://api.github.com/repos/haxwell/eog-mobile2
|
closed
|
First Time Connection tutorial
|
priority!
|
There should be a tutorial that appears the first time you accept a request from somebody.
Hey this is the first time XXXX and you have connected!
While we believe that people are generally good, there are some bad ones out there. Be safe!
Consider this first connection as if you'd just met XXXX from craigslist.
(perhaps under the header Super Safe / Super Safety or something to that effect..)
(if there are no recommendations required on this offer) Remember, too, you can require that a person get a recommendation from someone you trust before sending you a request.
"Click here" to take a look at the top rules to abide by when meeting someone online for the first time.
|
1.0
|
First Time Connection tutorial - There should be a tutorial that appears the first time you accept a request from somebody.
Hey this is the first time XXXX and you have connected!
While we believe that people are generally good, there are some bad ones out there. Be safe!
Consider this first connection as if you'd just met XXXX from craigslist.
(perhaps under the header Super Safe / Super Safety or something to that effect..)
(if there are no recommendations required on this offer) Remember, too, you can require that a person get a recommendation from someone you trust before sending you a request.
"Click here" to take a look at the top rules to abide by when meeting someone online for the first time.
|
non_process
|
first time connection tutorial there should be a tutorial that appears the first time you accept a request from somebody hey this is the first time xxxx and you have connected while we believe that people are generally good there are some bad ones out there be safe consider this first connection as if you d just met xxxx from craigslist perhaps under the header super safe super safety or something to that effect if there are no recommendations required on this offer remember too you can require that a person get a recommendation from someone you trust before sending you a request click here to take a look at the top rules to abide by when meeting someone online for the first time
| 0
|
563
| 3,023,921,516
|
IssuesEvent
|
2015-08-02 01:49:45
|
HazyResearch/dd-genomics
|
https://api.github.com/repos/HazyResearch/dd-genomics
|
opened
|
[bazaar] Modify parser code so that arbitrary json fields of input are passed -> sentences
|
Preprocessing
|
E.g. so we can pass in a doc_id in addition to section_id, etc...
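A minimal sketch of the requested behavior (field names are illustrative): every remaining key on the input document is copied onto each emitted sentence.
```python
# Sketch only: forward all input JSON fields onto the output sentences.
import json

def split_sentences(text):
    return [s.strip() for s in text.split(".") if s.strip()]

def parse_document(line):
    doc = json.loads(line)
    text = doc.pop("text")
    # Forward *all* remaining fields (doc_id, section_id, ...) unchanged.
    return [dict(doc, sentence=s) for s in split_sentences(text)]

line = '{"doc_id": "d1", "section_id": "s2", "text": "First. Second."}'
for sent in parse_document(line):
    print(sent)
# {'doc_id': 'd1', 'section_id': 's2', 'sentence': 'First'}
# {'doc_id': 'd1', 'section_id': 's2', 'sentence': 'Second'}
```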
|
1.0
|
[bazaar] Modify parser code so that arbitrary json fields of input are passed -> sentences - E.g. so we can pass in a doc_id in addition to section_id, etc...
|
process
|
modify parser code so that arbitrary json fields of input are passed sentences e g so we can pass in a doc id in addition to section id etc
| 1
|
16,522
| 21,530,519,744
|
IssuesEvent
|
2022-04-28 23:55:14
|
allinurl/goaccess
|
https://api.github.com/repos/allinurl/goaccess
|
closed
|
crashed by Sig 11
|
bug log-processing
|
on macos 11.5.2
```
[PARSING forum.log] {68857} @ {34428/s}/s}
==14331== GoAccess 1.5.1 crashed by Sig 11
==14331==
==14331== VALUES AT CRASH POINT
==14331==
==14331== FILE: forum.log
==14331== Line number: 70669
==14331== Invalid data: 111
==14331== Piping: 0
```
this is line 70669:
```
78.56.32.39 - - [04/Mar/2021:21:09:52 +0100] "GET /++resource++ripe.plonetheme.javascripts/template.js?_=1614888592332 HTTP/1.1" 200 21630 "https://www.ripe.net/participate/mail/forum/anti-abuse-wg/PDEwMDE4MzA4YzIyNWQ2MTMxNjA2NjI4NDQ4ZDZlNjM0QG4wLmx0Pg==?fbclid=IwAR0rVfrBD_kcM0qkVN29sWvz0ony-YFAIC8v_vBTtcSqwOZY0KdoqT5VnE8" "Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101 Firefox/78.0" TLSv1.2
```
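Since the crash report names the exact file and line number, a small Python sketch (the path and line number are the ones from the report) can pull that line plus some context into a minimal failing input for replay:
```python
# Helper sketch, not part of goaccess: extract the reported line with context.
def extract_context(path, lineno, around=2):
    with open(path, encoding="utf-8", errors="replace") as fh:
        for i, line in enumerate(fh, start=1):
            if abs(i - lineno) <= around:
                yield i, line.rstrip("\n")

for i, line in extract_context("forum.log", 70669):
    print(i, line)
```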
|
1.0
|
crashed by Sig 11 - on macos 11.5.2
```
[PARSING forum.log] {68857} @ {34428/s}/s}
==14331== GoAccess 1.5.1 crashed by Sig 11
==14331==
==14331== VALUES AT CRASH POINT
==14331==
==14331== FILE: forum.log
==14331== Line number: 70669
==14331== Invalid data: 111
==14331== Piping: 0
```
this is line 70669:
```
78.56.32.39 - - [04/Mar/2021:21:09:52 +0100] "GET /++resource++ripe.plonetheme.javascripts/template.js?_=1614888592332 HTTP/1.1" 200 21630 "https://www.ripe.net/participate/mail/forum/anti-abuse-wg/PDEwMDE4MzA4YzIyNWQ2MTMxNjA2NjI4NDQ4ZDZlNjM0QG4wLmx0Pg==?fbclid=IwAR0rVfrBD_kcM0qkVN29sWvz0ony-YFAIC8v_vBTtcSqwOZY0KdoqT5VnE8" "Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101 Firefox/78.0" TLSv1.2
```
|
process
|
crashed by sig on macos s s goaccess crashed by sig values at crash point file forum log line number invalid data piping this is line get resource ripe plonetheme javascripts template js http mozilla linux rv gecko firefox
| 1
|
22,743
| 32,060,035,670
|
IssuesEvent
|
2023-09-24 14:52:39
|
h4sh5/npm-auto-scanner
|
https://api.github.com/repos/h4sh5/npm-auto-scanner
|
opened
|
winglang 0.33.3 has 1 guarddog issues
|
npm-silent-process-execution
|
```{"npm-silent-process-execution":[{"code":" const child = (0, child_process_1.spawn)(process.execPath, [require.resolve('./scripts/detached-export'), awaitedFilePath], {\n detached: true,\n stdio: 'ignore',\n windowsHide: true,\n env: {\n ...proc... }\n });","location":"package/dist/analytics/export.js:10","message":"This package is silently executing another executable"}]}```
|
1.0
|
winglang 0.33.3 has 1 guarddog issues - ```{"npm-silent-process-execution":[{"code":" const child = (0, child_process_1.spawn)(process.execPath, [require.resolve('./scripts/detached-export'), awaitedFilePath], {\n detached: true,\n stdio: 'ignore',\n windowsHide: true,\n env: {\n ...proc... }\n });","location":"package/dist/analytics/export.js:10","message":"This package is silently executing another executable"}]}```
|
process
|
winglang has guarddog issues npm silent process execution n detached true n stdio ignore n windowshide true n env n proc n location package dist analytics export js message this package is silently executing another executable
| 1
|
14,354
| 17,375,437,133
|
IssuesEvent
|
2021-07-30 20:17:15
|
MicrosoftDocs/azure-devops-docs
|
https://api.github.com/repos/MicrosoftDocs/azure-devops-docs
|
closed
|
System.Debug conditional example
|
devops-cicd-process/tech devops/prod doc-enhancement
|
I've found that the following works for a step conditional, but the same thing with no single quotes around the 'true' does not work. It would be helpful to have this as an example:
condition: eq(variables['System.debug'], 'true')
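A minimal illustrative pipeline fragment (the step itself is invented) showing the quoted comparison in context; the single quotes matter because the condition compares strings:
```yaml
# Hypothetical Azure Pipelines fragment, for illustration only.
steps:
  - script: echo "debug-only diagnostics"
    condition: eq(variables['System.Debug'], 'true')
```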
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: 3f151218-9a11-0078-e038-f96198a76143
* Version Independent ID: 09c4d032-62f3-d97c-79d7-6fbfd89910e9
* Content: [Conditions - Azure Pipelines](https://docs.microsoft.com/en-us/azure/devops/pipelines/process/conditions?tabs=yaml&view=azure-devops)
* Content Source: [docs/pipelines/process/conditions.md](https://github.com/MicrosoftDocs/azure-devops-docs/blob/master/docs/pipelines/process/conditions.md)
* Product: **devops**
* Technology: **devops-cicd-process**
* GitHub Login: @juliakm
* Microsoft Alias: **jukullam**
|
1.0
|
System.Debug conditional example - I've found that the following works for a step conditional, but the same thing with no single quotes around the 'true' does not work. It would be helpful to have this as an example:
condition: eq(variables['System.debug'], 'true')
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: 3f151218-9a11-0078-e038-f96198a76143
* Version Independent ID: 09c4d032-62f3-d97c-79d7-6fbfd89910e9
* Content: [Conditions - Azure Pipelines](https://docs.microsoft.com/en-us/azure/devops/pipelines/process/conditions?tabs=yaml&view=azure-devops)
* Content Source: [docs/pipelines/process/conditions.md](https://github.com/MicrosoftDocs/azure-devops-docs/blob/master/docs/pipelines/process/conditions.md)
* Product: **devops**
* Technology: **devops-cicd-process**
* GitHub Login: @juliakm
* Microsoft Alias: **jukullam**
|
process
|
system debug conditional example i ve found that the following works for a step conditional but the same thing with no single quotes around the true does not work it would be helpful to have this as an example condition eq variables true document details ⚠ do not edit this section it is required for docs microsoft com ➟ github issue linking id version independent id content content source product devops technology devops cicd process github login juliakm microsoft alias jukullam
| 1
|
22,751
| 32,068,240,179
|
IssuesEvent
|
2023-09-25 05:54:54
|
TensorWarp/Bitfusion
|
https://api.github.com/repos/TensorWarp/Bitfusion
|
opened
|
Multi-GPU support
|
enhancement CUDA-aware MPI Multi-Process Service (MPS) CUDA GPU
|
We are working on the implementation of CUDA Multi-Process Service (MPS) and CUDA-aware MPI for all CUDA kernels, regardless of their origin or complexity.
|
1.0
|
Multi-GPU support - We are working on the implementation of CUDA Multi-Process Service (MPS) and CUDA-aware MPI for all CUDA kernels, regardless of their origin or complexity.
|
process
|
multi gpu support we are working on the implementation of cuda multi process service mps and cuda aware mpi for all cuda kernels regardless of their origin or complexity
| 1
|
718
| 3,206,570,193
|
IssuesEvent
|
2015-10-05 02:31:12
|
nodejs/node
|
https://api.github.com/repos/nodejs/node
|
closed
|
doc: request to expand on process "exit" event
|
doc process
|
Reference: https://github.com/nodejs/node/pull/2918#discussion_r39706498
The process "exit" event has a lot of reasons why it will or won't be emitted. Unfortunately, these reasons are not exactly clear from the documentation. That makes it a rather useless event, unless each user goes through a painful, time-consuming trial-and-error phase to figure out how things work (in the worst case, people will find out in production). It also doesn't set a guideline for developers as to how things are supposed to work (sure, we should have unit tests to confirm things work the way they do, but still).
cc @Trott
|
1.0
|
doc: request to expand on process "exit" event - Reference: https://github.com/nodejs/node/pull/2918#discussion_r39706498
The process "exit" event has a lot of reasons why it will or won't be emitted. Unfortunately, these reasons are not exactly clear from the documentation. That makes it a rather useless event, unless each user goes through a painful, time-consuming trial-and-error phase to figure out how things work (in the worst case, people will find out in production). It also doesn't set a guideline for developers as to how things are supposed to work (sure, we should have unit tests to confirm things work the way they do, but still).
cc @Trott
|
process
|
doc request to expand on process exit event reference the process exit event has a lot of reasons why it will or won t be emitted unfortunately these reasons are not exactly clear from the documentation that makes it a rather useless event unless each user goes through a painful time consuming trial and error phase to figure out how things work in the worst case people will find out in production it also doesn t set a guideline for developers as to how things are supposed to work sure we should have unit tests to confirm things work the way they do but still cc trott
| 1
|
18,558
| 24,555,553,663
|
IssuesEvent
|
2022-10-12 15:36:05
|
GoogleCloudPlatform/fda-mystudies
|
https://api.github.com/repos/GoogleCloudPlatform/fda-mystudies
|
closed
|
[Android] [Offline indicator] Share button should be disabled in the below mentioned screens when participant is offline
|
Bug P1 Android Process: Fixed Process: Tested QA Process: Tested dev
|
Share button should be disabled in all the below mentioned screens when the participant is offline
1. App glossary
2. Dashboard
3. Consent pdf ( both resources screen and study overview screen)

|
3.0
|
[Android] [Offline indicator] Share button should be disabled in the below mentioned screens when participant is offline - Share button should be disabled in all the below mentioned screens when the participant is offline
1. App glossary
2. Dashboard
3. Consent pdf ( both resources screen and study overview screen)

|
process
|
share button should be disabled in the below mentioned screens when participant is offline share button should be disabled in all the below mentioned screens when the participant is offline app glossary dashboard consent pdf both resources screen and study overview screen
| 1
|
807,819
| 30,020,307,899
|
IssuesEvent
|
2023-06-26 22:30:53
|
calcom/cal.com
|
https://api.github.com/repos/calcom/cal.com
|
closed
|
[CAL-1398] "let user decide how long" causes events to potentially overbook
|
🐛 bug Low priority Stale
|
i had a screenshare with a user who had this active:

and everything beyond 60 (i.e. 120 minutes) would not block correctly.
<sub>[CAL-1398](https://linear.app/calcom/issue/CAL-1398/let-user-decide-how-long-causes-events-to-potentially-overbook)</sub>
|
1.0
|
[CAL-1398] "let user decide how long" causes events to potentially overbook - i had a screenshare with a user who had this active:

and everything beyond 60 (i.e. 120 minutes) would not block correctly.
<sub>[CAL-1398](https://linear.app/calcom/issue/CAL-1398/let-user-decide-how-long-causes-events-to-potentially-overbook)</sub>
|
non_process
|
let user decide how long causes events to potentially overbook i had a screenshare with a user who had this active and everything beyond i e minutes would not block correctly
| 0
|
247,330
| 26,694,101,875
|
IssuesEvent
|
2023-01-27 08:49:30
|
Taraxa-project/taraxa-node
|
https://api.github.com/repos/Taraxa-project/taraxa-node
|
closed
|
Limit packets queue size
|
feature security
|
<!-- Do not forget to add specific label (bug / feature / refactor / ...) and select Project "Ledger" -->
## Task Description
Implement generic protection against ddos by limiting max allowed packets queue size:
- received too many packets (all types) -> more than XX k per YY s time period
- received packets with too big overall size of packets(all types) -> more than XX MB per YY s time period
**TODO:** adjust constants based on measurements on testnet !
## Epic Parent
<!-- The link below should link to its Epic Parent. -->
[Feature: DDoS Protection](https://github.com/Taraxa-project/taraxa-node/issues/1428).
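A rough sketch of such a limiter (all constants invented, pending the testnet measurements): reject further packets when either the packet count or the total byte size within the sliding window is exceeded.
```python
# Illustrative sliding-window limiter; not the taraxa-node implementation.
import collections, time

class PacketWindowLimiter:
    def __init__(self, max_packets, max_bytes, window_s):
        self.max_packets, self.max_bytes = max_packets, max_bytes
        self.window_s = window_s
        self.events = collections.deque()   # (timestamp, size)
        self.total_bytes = 0

    def allow(self, size, now=None):
        now = time.monotonic() if now is None else now
        # Drop events that have fallen out of the window.
        while self.events and now - self.events[0][0] > self.window_s:
            _, old = self.events.popleft()
            self.total_bytes -= old
        if (len(self.events) + 1 > self.max_packets
                or self.total_bytes + size > self.max_bytes):
            return False
        self.events.append((now, size))
        self.total_bytes += size
        return True

limiter = PacketWindowLimiter(max_packets=3, max_bytes=1000, window_s=1.0)
print([limiter.allow(200, now=t) for t in (0.0, 0.1, 0.2, 0.3)])
# [True, True, True, False] -- the fourth packet exceeds the count limit
```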
|
True
|
Limit packets queue size - <!-- Do not forget to add specific label (bug / feature / refactor / ...) and select Project "Ledger" -->
## Task Description
Implement generic protection against ddos by limiting max allowed packets queue size:
- received too many packets (all types) -> more than XX k per YY s time period
- received packets with too big overall size of packets(all types) -> more than XX MB per YY s time period
**TODO:** adjust constants based on measurements on testnet !
## Epic Parent
<!-- The link below should link to its Epic Parent. -->
[Feature: DDoS Protection](https://github.com/Taraxa-project/taraxa-node/issues/1428).
|
non_process
|
limit packets queue size task description implement generic protection against ddos by limiting max allowed packets queue size received too many packets all types more than xx k per yy s time period received packets with too big overall size of packets all types more than xx mb per yy s time period todo adjust constants based on measurements on testnet epic parent
| 0
|
18,805
| 24,704,320,278
|
IssuesEvent
|
2022-10-19 17:42:59
|
bazelbuild/bazel
|
https://api.github.com/repos/bazelbuild/bazel
|
opened
|
[Mirror] zlib-1.2.13
|
P2 type: process team-OSS mirror request
|
### Please list the URLs of the archives you'd like to mirror:
Please mirror https://zlib.net/zlib-1.2.13.tar.gz
should be available under "https://mirror.bazel.build/zlib.net/zlib-1.2.13.tar.gz"
|
1.0
|
[Mirror] zlib-1.2.13 - ### Please list the URLs of the archives you'd like to mirror:
Please mirror https://zlib.net/zlib-1.2.13.tar.gz
should be available under "https://mirror.bazel.build/zlib.net/zlib-1.2.13.tar.gz"
|
process
|
zlib please list the urls of the archives you d like to mirror please mirror should be available under
| 1
|
17,947
| 23,939,933,841
|
IssuesEvent
|
2022-09-11 19:10:28
|
bazelbuild/bazel
|
https://api.github.com/repos/bazelbuild/bazel
|
opened
|
[Mirror] URLs for rules_go v0.35.0
|
P2 type: process team-OSS mirror request
|
### Please list the URLs of the archives you'd like to mirror:
https://github.com/bazelbuild/rules_go/releases/download/v0.35.0/rules_go-v0.35.0.zip
https://github.com/bazelbuild/bazel-skylib/releases/download/1.3.0/bazel-skylib-1.3.0.tar.gz
https://github.com/golang/tools/archive/refs/tags/v0.1.12.zip
https://github.com/golang/sys/archive/aba9fc2a8ff2c9439446386f616b860442f0cf9a.zip
https://github.com/golang/xerrors/archive/04be3eba64a22a838cdb17b8dca15a52871c08b4.zip
https://github.com/protocolbuffers/protobuf-go/archive/refs/tags/v1.28.1.zip
https://github.com/googleapis/go-genproto/archive/69f6226f97e558dbaa68715071622af0d86b3a17.zip
https://github.com/googleapis/googleapis/archive/8167badf3ce86086c69db2942a8995bb2de56c51.zip
https://github.com/golang/mock/archive/refs/tags/v1.7.0-rc.1.zip
|
1.0
|
[Mirror] URLs for rules_go v0.35.0 - ### Please list the URLs of the archives you'd like to mirror:
https://github.com/bazelbuild/rules_go/releases/download/v0.35.0/rules_go-v0.35.0.zip
https://github.com/bazelbuild/bazel-skylib/releases/download/1.3.0/bazel-skylib-1.3.0.tar.gz
https://github.com/golang/tools/archive/refs/tags/v0.1.12.zip
https://github.com/golang/sys/archive/aba9fc2a8ff2c9439446386f616b860442f0cf9a.zip
https://github.com/golang/xerrors/archive/04be3eba64a22a838cdb17b8dca15a52871c08b4.zip
https://github.com/protocolbuffers/protobuf-go/archive/refs/tags/v1.28.1.zip
https://github.com/googleapis/go-genproto/archive/69f6226f97e558dbaa68715071622af0d86b3a17.zip
https://github.com/googleapis/googleapis/archive/8167badf3ce86086c69db2942a8995bb2de56c51.zip
https://github.com/golang/mock/archive/refs/tags/v1.7.0-rc.1.zip
|
process
|
urls for rules go please list the urls of the archives you d like to mirror
| 1
|
16,041
| 20,189,602,284
|
IssuesEvent
|
2022-02-11 03:20:34
|
maticnetwork/miden
|
https://api.github.com/repos/maticnetwork/miden
|
closed
|
Debug operation
|
good first issue assembly processor
|
We need to implement the `debug` operation according to the specs described [here](https://hackmd.io/YDbjUVHTRn64F4LPelC-NA#Debugging). Implementing this operation would require:
1. Implementing an options struct for the `Debug` operation enum [here](https://github.com/maticnetwork/miden/blob/next/core/src/operations/mod.rs#L353).
2. Implement parsing of the assembly instruction (will need to be added [here](https://github.com/maticnetwork/miden/blob/next/assembly/src/parsers/mod.rs#L18)).
3. Implement processing the `Debug` operation [here](https://github.com/maticnetwork/miden/blob/next/processor/src/operations/decorators.rs#L11).
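The specifics are Rust, but the shape of the three steps can be sketched language-agnostically; a minimal Python illustration with invented names:
```python
# Not miden code: a toy illustration of options, parsing, and processing.
from dataclasses import dataclass

@dataclass
class DebugOptions:          # step 1: options carried by the Debug op
    target: str              # e.g. "stack" or "mem"

def parse_instruction(token):        # step 2: assembly parsing
    if token.startswith("debug."):
        return ("Debug", DebugOptions(target=token.split(".", 1)[1]))
    raise ValueError(f"unknown instruction: {token}")

def process(op, stack):              # step 3: the processor's decorator hook
    name, options = op
    if name == "Debug" and options.target == "stack":
        print("stack:", stack)       # side effect only; state is unchanged

process(parse_instruction("debug.stack"), stack=[1, 2, 3])
```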
|
1.0
|
Debug operation - We need to implement the `debug` operation according to the specs described [here](https://hackmd.io/YDbjUVHTRn64F4LPelC-NA#Debugging). Implementing this operation would require:
1. Implementing an options struct for the `Debug` operation enum [here](https://github.com/maticnetwork/miden/blob/next/core/src/operations/mod.rs#L353).
2. Implement parsing of the assembly instruction (will need to be added [here](https://github.com/maticnetwork/miden/blob/next/assembly/src/parsers/mod.rs#L18)).
3. Implement processing the `Debug` operation [here](https://github.com/maticnetwork/miden/blob/next/processor/src/operations/decorators.rs#L11).
|
process
|
debug operation we need to implement the debug operation according to the specs described implementing this operation would require implementing an options struct for the debug operation enum implement parsing of the assembly instruction will need to be added implement processing the debug operation
| 1
|
111,519
| 9,533,756,101
|
IssuesEvent
|
2019-04-29 22:19:56
|
dotnet/corefx
|
https://api.github.com/repos/dotnet/corefx
|
closed
|
Test failures: System.Net.Http.Functional.Tests.DiagnosticsTest / *
|
area-System.Net.Http test bug test-run-core
|
## Types of failures
Affected tests:
* System.Net.Http.Functional.Tests.DiagnosticsTest:
* SendAsync_ExpectedDiagnosticSourceLogging
* SendAsync_ExpectedDiagnosticSourceNewAndDeprecatedEventsLogging
* SendAsync_ExpectedDiagnosticSourceUrlFilteredActivityLogging
* SendAsync_ExpectedDiagnosticStopOnlyActivityLogging
* System.Net.Http.Functional.Tests.HttpClientHandler_DefaultProxyCredentials_Test:
* ProxySetViaEnvironmentVariable_DefaultProxyCredentialsUsed
Test `SendAsync_ExpectedDiagnosticStopOnlyActivityLogging`
```
Exit code was 139 but it should have been 42
Expected: True
Actual: False
at System.Diagnostics.RemoteExecutorTestBase.RemoteInvokeHandle.Dispose() in /root/corefx-1192569/src/CoreFx.Private.TestUtilities/src/System/Diagnostics/RemoteExecutorTestBase.cs:line 203
at System.Net.Http.Functional.Tests.DiagnosticsTest.SendAsync_ExpectedDiagnosticStopOnlyActivityLogging() in /root/corefx-1192569/src/System.Net.Http/tests/FunctionalTests/DiagnosticsTests.cs:line 583
```
## History of failures
Day | Build | OS | Test
-- | -- | -- | --
5/31 | 20170531.01 | Ubuntu16.04 | SendAsync_ExpectedDiagnosticSourceLogging
7/8 | 20170708.04 | Suse42.2 | [ManagedHandler] SendAsync_ExpectedDiagnosticSourceNewAndDeprecatedEventsLogging
8/1 | 20170801.01 | Ubuntu17.04 | SendAsync_ExpectedDiagnosticSourceUrlFilteredActivityLogging
8/19 | 20170819.02 | Ubuntu17.04 | [ManagedHandler] SendAsync_ExpectedDiagnosticSourceNewAndDeprecatedEventsLogging
8/22 | 20170822.01 | Centos73 | [ManagedHandler] SendAsync_ExpectedDiagnosticSourceUrlFilteredActivityLogging
8/23 | 20170823.07 | Ubuntu17.04 | [ManagedHandler] SendAsync_ExpectedDiagnosticSourceNewAndDeprecatedEventsLogging
8/27 | 20170827.01 | Centos73 | [ManagedHandler] SendAsync_ExpectedDiagnosticSourceNewAndDeprecatedEventsLogging
9/25 | 20170925.03 | Ubuntu17.04 | [ManagedHandler] SendAsync_ExpectedDiagnosticSourceUrlFilteredActivityLogging
9/26 | 20170926.06 | RedHat72 | [ManagedHandler] SendAsync_ExpectedDiagnosticSourceLogging
9/26 | 20170926.06 | RedHat72 | [ManagedHandler] SendAsync_ExpectedDiagnosticSourceNewAndDeprecatedEventsLogging
10/1 | 20171001.01 | RedHat72 | [ManagedHandler] ProxySetViaEnvironmentVariable_DefaultProxyCredentialsUsed
10/26 | 20171026.01 | Suse42.2 | ProxySetViaEnvironmentVariable_DefaultProxyCredentialsUsed
11/18 | 20171118.03 | Ubuntu17.10 | SendAsync_ExpectedDiagnosticSourceLogging
11/27 | 20171127.01 | Ubuntu16.04 | SendAsync_ExpectedDiagnosticSourceNewAndDeprecatedEventsLogging
12/4 | 20171204.02 | Ubuntu17.04 | SendAsync_ExpectedDiagnosticStopOnlyActivityLogging
12/23 | 20171223.03 | Ubuntu17.04 | SendAsync_ExpectedDiagnosticSourceNewAndDeprecatedEventsLogging
1/12 | 20180112.01| Ubuntu14.04 | SendAsync_ExpectedDiagnosticStopOnlyActivityLogging - [link](https://mc.dot.net/#/product/netcore/master/source/official~2Fcorefx~2Fmaster~2F/type/test~2Ffunctional~2Fcli~2F/build/20180112.01/workItem/System.Net.Http.Functional.Tests/analysis/xunit/System.Net.Http.Functional.Tests.DiagnosticsTest~2FSendAsync_ExpectedDiagnosticStopOnlyActivityLogging)
1/19 | 20180119.01 | Suse42.4 | ProxySetViaEnvironmentVariable_DefaultProxyCredentialsUsed - [link](https://mc.dot.net/#/product/netcore/master/source/official~2Fcorefx~2Fmaster~2F/type/test~2Ffunctional~2Fcli~2F/build/20180119.01/workItem/System.Net.Http.Functional.Tests/analysis/xunit/System.Net.Http.Functional.Tests.HttpClientHandler_DefaultProxyCredentials_Test~2FProxySetViaEnvironmentVariable_DefaultProxyCredentialsUsed(useProxy:%20False))
3/31 | 20180331.02 | Ubuntu16.04 | SendAsync_ExpectedDiagnosticSourceLogging
3/31 | 20180331.05 | OpenSuse42.3 | SendAsync_ExpectedDiagnosticSourceLogging
Note: Related to:
* Some failures in #23209
* ManagedHandler failures in #23771
|
2.0
|
Test failures: System.Net.Http.Functional.Tests.DiagnosticsTest / * - ## Types of failures
Affected tests:
* System.Net.Http.Functional.Tests.DiagnosticsTest:
* SendAsync_ExpectedDiagnosticSourceLogging
* SendAsync_ExpectedDiagnosticSourceNewAndDeprecatedEventsLogging
* SendAsync_ExpectedDiagnosticSourceUrlFilteredActivityLogging
* SendAsync_ExpectedDiagnosticStopOnlyActivityLogging
* System.Net.Http.Functional.Tests.HttpClientHandler_DefaultProxyCredentials_Test:
* ProxySetViaEnvironmentVariable_DefaultProxyCredentialsUsed
Test `SendAsync_ExpectedDiagnosticStopOnlyActivityLogging`
```
Exit code was 139 but it should have been 42
Expected: True
Actual: False
at System.Diagnostics.RemoteExecutorTestBase.RemoteInvokeHandle.Dispose() in /root/corefx-1192569/src/CoreFx.Private.TestUtilities/src/System/Diagnostics/RemoteExecutorTestBase.cs:line 203
at System.Net.Http.Functional.Tests.DiagnosticsTest.SendAsync_ExpectedDiagnosticStopOnlyActivityLogging() in /root/corefx-1192569/src/System.Net.Http/tests/FunctionalTests/DiagnosticsTests.cs:line 583
```
## History of failures
Day | Build | OS | Test
-- | -- | -- | --
5/31 | 20170531.01 | Ubuntu16.04 | SendAsync_ExpectedDiagnosticSourceLogging
7/8 | 20170708.04 | Suse42.2 | [ManagedHandler] SendAsync_ExpectedDiagnosticSourceNewAndDeprecatedEventsLogging
8/1 | 20170801.01 | Ubuntu17.04 | SendAsync_ExpectedDiagnosticSourceUrlFilteredActivityLogging
8/19 | 20170819.02 | Ubuntu17.04 | [ManagedHandler] SendAsync_ExpectedDiagnosticSourceNewAndDeprecatedEventsLogging
8/22 | 20170822.01 | Centos73 | [ManagedHandler] SendAsync_ExpectedDiagnosticSourceUrlFilteredActivityLogging
8/23 | 20170823.07 | Ubuntu17.04 | [ManagedHandler] SendAsync_ExpectedDiagnosticSourceNewAndDeprecatedEventsLogging
8/27 | 20170827.01 | Centos73 | [ManagedHandler] SendAsync_ExpectedDiagnosticSourceNewAndDeprecatedEventsLogging
9/25 | 20170925.03 | Ubuntu17.04 | [ManagedHandler] SendAsync_ExpectedDiagnosticSourceUrlFilteredActivityLogging
9/26 | 20170926.06 | RedHat72 | [ManagedHandler] SendAsync_ExpectedDiagnosticSourceLogging
9/26 | 20170926.06 | RedHat72 | [ManagedHandler] SendAsync_ExpectedDiagnosticSourceNewAndDeprecatedEventsLogging
10/1 | 20171001.01 | RedHat72 | [ManagedHandler] ProxySetViaEnvironmentVariable_DefaultProxyCredentialsUsed
10/26 | 20171026.01 | Suse42.2 | ProxySetViaEnvironmentVariable_DefaultProxyCredentialsUsed
11/18 | 20171118.03 | Ubuntu17.10 | SendAsync_ExpectedDiagnosticSourceLogging
11/27 | 20171127.01 | Ubuntu16.04 | SendAsync_ExpectedDiagnosticSourceNewAndDeprecatedEventsLogging
12/4 | 20171204.02 | Ubuntu17.04 | SendAsync_ExpectedDiagnosticStopOnlyActivityLogging
12/23 | 20171223.03 | Ubuntu17.04 | SendAsync_ExpectedDiagnosticSourceNewAndDeprecatedEventsLogging
1/12 | 20180112.01| Ubuntu14.04 | SendAsync_ExpectedDiagnosticStopOnlyActivityLogging - [link](https://mc.dot.net/#/product/netcore/master/source/official~2Fcorefx~2Fmaster~2F/type/test~2Ffunctional~2Fcli~2F/build/20180112.01/workItem/System.Net.Http.Functional.Tests/analysis/xunit/System.Net.Http.Functional.Tests.DiagnosticsTest~2FSendAsync_ExpectedDiagnosticStopOnlyActivityLogging)
1/19 | 20180119.01 | Suse42.4 | ProxySetViaEnvironmentVariable_DefaultProxyCredentialsUsed - [link](https://mc.dot.net/#/product/netcore/master/source/official~2Fcorefx~2Fmaster~2F/type/test~2Ffunctional~2Fcli~2F/build/20180119.01/workItem/System.Net.Http.Functional.Tests/analysis/xunit/System.Net.Http.Functional.Tests.HttpClientHandler_DefaultProxyCredentials_Test~2FProxySetViaEnvironmentVariable_DefaultProxyCredentialsUsed(useProxy:%20False))
3/31 | 20180331.02 | Ubuntu16.04 | SendAsync_ExpectedDiagnosticSourceLogging
3/31 | 20180331.05 | OpenSuse42.3 | SendAsync_ExpectedDiagnosticSourceLogging
Note: Related to:
* Some failures in #23209
* ManagedHandler failures in #23771
|
non_process
|
test failures system net http functional tests diagnosticstest types of failures affected tests system net http functional tests diagnosticstest sendasync expecteddiagnosticsourcelogging sendasync expecteddiagnosticsourcenewanddeprecatedeventslogging sendasync expecteddiagnosticsourceurlfilteredactivitylogging sendasync expecteddiagnosticstoponlyactivitylogging system net http functional tests httpclienthandler defaultproxycredentials test proxysetviaenvironmentvariable defaultproxycredentialsused test sendasync expecteddiagnosticstoponlyactivitylogging exit code was but it should have been expected true actual false at system diagnostics remoteexecutortestbase remoteinvokehandle dispose in root corefx src corefx private testutilities src system diagnostics remoteexecutortestbase cs line at system net http functional tests diagnosticstest sendasync expecteddiagnosticstoponlyactivitylogging in root corefx src system net http tests functionaltests diagnosticstests cs line history of failures day build os test sendasync expecteddiagnosticsourcelogging sendasync expecteddiagnosticsourcenewanddeprecatedeventslogging sendasync expecteddiagnosticsourceurlfilteredactivitylogging sendasync expecteddiagnosticsourcenewanddeprecatedeventslogging sendasync expecteddiagnosticsourceurlfilteredactivitylogging sendasync expecteddiagnosticsourcenewanddeprecatedeventslogging sendasync expecteddiagnosticsourcenewanddeprecatedeventslogging sendasync expecteddiagnosticsourceurlfilteredactivitylogging sendasync expecteddiagnosticsourcelogging sendasync expecteddiagnosticsourcenewanddeprecatedeventslogging proxysetviaenvironmentvariable defaultproxycredentialsused proxysetviaenvironmentvariable defaultproxycredentialsused sendasync expecteddiagnosticsourcelogging sendasync expecteddiagnosticsourcenewanddeprecatedeventslogging sendasync expecteddiagnosticstoponlyactivitylogging sendasync expecteddiagnosticsourcenewanddeprecatedeventslogging sendasync expecteddiagnosticstoponlyactivitylogging proxysetviaenvironmentvariable defaultproxycredentialsused sendasync expecteddiagnosticsourcelogging sendasync expecteddiagnosticsourcelogging note related to some failures in managedhandler failures in
| 0
|
12,787
| 15,053,407,744
|
IssuesEvent
|
2021-02-03 16:16:36
|
boxbilling/boxbilling
|
https://api.github.com/repos/boxbilling/boxbilling
|
closed
|
Support for Interworx and DirectAdmin
|
compatibility feature request hosting stale
|
Due to the cPanel price hike, a lot of people are migrating to Interworx and DirectAdmin; is there a plan to make it compatible?
|
True
|
Support for Interworx and DirectAdmin - Due to the cPanel price hike, a lot of people are migrating to Interworx and DirectAdmin; is there a plan to make it compatible?
|
non_process
|
support for interworx and directadmin due to cpanel hike price there are lot of people migrating to interworx and directadmin is there a plan to make it compatible
| 0
|
17,941
| 23,937,444,758
|
IssuesEvent
|
2022-09-11 12:40:07
|
OpenDataScotland/the_od_bods
|
https://api.github.com/repos/OpenDataScotland/the_od_bods
|
opened
|
Fix dataset owners in multi-org portals - ARCGIS
|
bug data processing back end
|
Some data portals are aggregated portals themselves, meaning there are actually multiple owners, but we have been operating on the assumption of a single portal owner.
This issue is for the ARCGIS sources only.
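A small sketch of the per-dataset approach (record shapes are illustrative, not the real ArcGIS payload): take the owner from each dataset's own metadata and fall back to the portal owner only when it is missing.
```python
# Illustration only: resolve owners per dataset instead of per portal.
PORTAL_OWNER = "Aggregated Portal"   # the old single-owner assumption

records = [
    {"title": "Paths", "owner": "Council A"},
    {"title": "Parks", "owner": "Council B"},
    {"title": "Stops"},              # no owner recorded on the dataset
]

for rec in records:
    owner = rec.get("owner") or PORTAL_OWNER   # fall back when absent
    print(rec["title"], "->", owner)
```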
|
1.0
|
Fix dataset owners in multi-org portals - ARCGIS - Some data portals are aggregated portals themselves, meaning there are actually multiple owners, but we have been operating on the assumption of a single portal owner.
This issue is for the ARCGIS sources only.
|
process
|
fix dataset owners in multi org portals arcgis some data portals are aggregated portals themselves meaning there are actually multiple owners but we have been operating on the assumption of a single portal owner this issue is for the arcgis sources only
| 1
|
112,925
| 9,606,070,734
|
IssuesEvent
|
2019-05-11 06:47:51
|
elgalu/docker-selenium
|
https://api.github.com/repos/elgalu/docker-selenium
|
closed
|
{{CONTAINER_IP}} has to be replaced by __CONTAINER_IP__ in yml files
|
waiting-retest
|
Hello,
in these files:
docker-compose.yml
docker-compose-tests.yml
docker-compose-scales.yml
{{CONTAINER_IP}} has to be replaced by __CONTAINER_IP__.
Otherwise, the node does not use the right network interface to communicate with the hub, and the hub is very slow.
I tested on a swarm cluster.
Sorry, I'm not familiar enough with GitHub to do it myself...
thanks.
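For anyone wanting to apply the change locally, a one-off Python sketch (paths assume the repository root) that performs the literal replacement:
```python
# One-off helper: swap the placeholder in the three compose files.
from pathlib import Path

FILES = ["docker-compose.yml", "docker-compose-tests.yml",
         "docker-compose-scales.yml"]

for name in FILES:
    path = Path(name)
    text = path.read_text(encoding="utf-8")
    path.write_text(text.replace("{{CONTAINER_IP}}", "__CONTAINER_IP__"),
                    encoding="utf-8")
    print("updated", name)
```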
|
1.0
|
{{CONTAINER_IP}} has to be replaced by __CONTAINER_IP__ in yml files - Hello,
in these files:
docker-compose.yml
docker-compose-tests.yml
docker-compose-scales.yml
{{CONTAINER_IP}} has to be replaced by __CONTAINER_IP__.
Otherwise, the node does not use the right network interface to communicate with the hub, and the hub is very slow.
I tested on a swarm cluster.
Sorry, I'm not familiar enough with GitHub to do it myself...
thanks.
|
non_process
|
container ip have to be replaced by container ip in yml files hello in this files docker compose yml docker compose tests yml docker compose scales yml container ip have to be replaced by container ip if not the node is not using the good network interface to communicate with the hub and the hub is very slow i tested on a swarm cluster sorry i m not familiar with github to do it by myself thanks
| 0
|
16,960
| 22,321,386,523
|
IssuesEvent
|
2022-06-14 06:48:05
|
streamnative/flink
|
https://api.github.com/repos/streamnative/flink
|
closed
|
[SQL Connector] Configuration Tests
|
compute/data-processing type/feature
|
Need to write a test against the configuration validation rules, and shows map/enum type configuration works.
|
1.0
|
[SQL Connector] Configuration Tests - Need to write a test against the configuration validation rules, and shows map/enum type configuration works.
|
process
|
configuration tests need to write a test against the configuration validation rules and shows map enum type configuration works
| 1
|
16,079
| 20,249,968,815
|
IssuesEvent
|
2022-02-14 16:56:56
|
ossf/tac
|
https://api.github.com/repos/ossf/tac
|
closed
|
TAC Election: Should we increase the size of the TAC?
|
ElectionProcess
|
Currently the TAC comprises 7 seats. It has been suggested that increasing the size of the TAC could improve effectiveness by increasing diversity and varying viewpoints. It has also been mentioned that too large a TAC could hamper progress, but also that no one had directly observed this.
Suggestion: Increase to 7 or 9 seats for the next year and revisit before the next election.
|
1.0
|
TAC Election: Should we increase the size of the TAC? - Currently the TAC comprises 7 seats. It has been suggested that increasing the size of the TAC could improve effectiveness by increasing diversity and varying viewpoints. It has also been mentioned that too large a TAC could hamper progress, but also that no one had directly observed this.
Suggestion: Increase to 7 or 9 seats for the next year and revisit before the next election.
|
process
|
tac election should we increase the size of the tac currently the tac is comprised of seats it has been suggested that increasing the size of the tac could improve effectiveness by increasing diversity and varying view points it has also been mention that too large of a tac could hamper progress but also that no one had directly observed this suggestion increase to or seats for the next year and revisit before the next election
| 1
|
33,643
| 4,847,730,771
|
IssuesEvent
|
2016-11-10 15:42:29
|
researchstudio-sat/webofneeds
|
https://api.github.com/repos/researchstudio-sat/webofneeds
|
closed
|
Remote need data not loading
|
bug testing
|
Not quite sure about the circumstances, but I get those black '?' and no title/descr/...
|
1.0
|
Remote need data not loading - Not quite sure about the circumstances, but I get those black '?' and no title/descr/...
|
non_process
|
remote need data not loading not quite sure about the circumstances but i get those black and no title descr
| 0
|
19,537
| 25,850,778,233
|
IssuesEvent
|
2022-12-13 10:14:51
|
prisma/prisma
|
https://api.github.com/repos/prisma/prisma
|
opened
|
Internal: invalid `json-rpc` calls in `prisma` CLI aren't caught by `handlePanic`
|
process/candidate kind/improvement topic: internal tech/typescript topic: error reporting team/schema
|
When using `json-rpc` calls in `prisma`, the `handlePanic` error handling prompt is only called when a valid request is made, but a runtime error occurred. However, other kinds of development errors are simply printed to `stderr` and result in `prisma` terminating with a status code `1`, without triggering `handlePanic`.
In particular, the following development errors aren't properly handled:
- an existing `json-rpc` method is called with an argument of mismatching type (e.g., a negative number is passed to a Rust method that expects `u32`)
- a non-existing `json-rpc` method is called (which may happen when Prisma devs tamper with e.g. [`IntrospectionEngine.ts`](https://github.com/prisma/prisma/blob/4.7.x/packages/internals/src/IntrospectionEngine.ts) and [`MigrateEngine.ts`](https://github.com/prisma/prisma/blob/4.7.x/packages/migrate/src/MigrateEngine.ts)).
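A minimal sketch (invented names, not the Prisma code) of routing all three failure kinds, unknown method, bad argument, and runtime error, through one handler instead of letting some fall through to `stderr`:
```python
# Toy dispatcher: every failure kind reaches the same panic handler.
def handle_panic(kind, detail):
    print(f"[handlePanic] {kind}: {detail}")

METHODS = {"getDatabaseVersion": lambda timeout: f"ok (timeout={timeout})"}

def dispatch(method, **params):
    fn = METHODS.get(method)
    if fn is None:
        return handle_panic("method-not-found", method)
    try:
        if params.get("timeout", 0) < 0:      # mimic a u32 type mismatch
            raise TypeError("timeout must be a non-negative integer")
        return fn(**params)
    except (TypeError, RuntimeError) as exc:
        return handle_panic("invalid-call", exc)

print(dispatch("getDatabaseVersion", timeout=5))
dispatch("getDatabaseVersion", timeout=-1)    # bad argument  -> handler
dispatch("noSuchMethod")                      # unknown method -> handler
```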
|
1.0
|
Internal: invalid `json-rpc` calls in `prisma` CLI aren't caught by `handlePanic` - When using `json-rpc` calls in `prisma`, the `handlePanic` error handling prompt is only called when a valid request is made, but a runtime error occurred. However, other kinds of development errors are simply printed to `stderr` and result in `prisma` terminating with a status code `1`, without triggering `handlePanic`.
In particular, the following development errors aren't properly handled:
- an existing `json-rpc` method is called with an argument of mismatching type (e.g., a negative number is passed to a Rust method that expects `u32`)
- a non-existing `json-rpc` method is called (which may happen when Prisma devs tamper with e.g. [`IntrospectionEngine.ts`](https://github.com/prisma/prisma/blob/4.7.x/packages/internals/src/IntrospectionEngine.ts) and [`MigrateEngine.ts`](https://github.com/prisma/prisma/blob/4.7.x/packages/migrate/src/MigrateEngine.ts)).
|
process
|
internal invalid json rpc calls in prisma cli aren t caught by handlepanic when using json rpc calls in prisma the handlepanic error handling prompt is only called when a valid request is made but a runtime error occurred however other kinds of development errors are simply printed to stderr and result in prisma terminating with a status code without triggering handlepanic in particular the following development errors aren t properly handled an existing json rpc method is called with an argument of mismatching type e g a negative number is passed to a rust method that expects a non existing json rpc method is called which may happen when prisma devs tamper with e g and
| 1
|
718,968
| 24,739,455,566
|
IssuesEvent
|
2022-10-21 02:54:41
|
TencentBlueKing/bk-nodeman
|
https://api.github.com/repos/TencentBlueKing/bk-nodeman
|
closed
|
[FEATURE] Agent 2.0 Linux Agent installation script
|
version/V2.2.X priority/high module/script
|
**What feature do you want**
Agent 2.0 Linux Agent installation script
**Checklist**
- [x] Remove the redundant 1.0 script logic
- [x] When uninstalling the Agent, drop redundant steps such as downloading the Agent package
- [x] report_healthz reporting: verify that the healthz result_data can be received correctly by report_log and written to Redis
|
1.0
|
[FEATURE] Agent 2.0 Linux Agent installation script - **What feature do you want**
Agent 2.0 Linux Agent installation script
**Checklist**
- [x] Remove the redundant 1.0 script logic
- [x] When uninstalling the Agent, drop redundant steps such as downloading the Agent package
- [x] report_healthz reporting: verify that the healthz result_data can be received correctly by report_log and written to Redis
|
non_process
|
agent linux agent installation script what feature do you want agent linux agent installation script checklist remove the redundant script logic when uninstalling the agent drop redundant steps such as downloading the agent package report healthz reporting verify that the healthz result data can be received correctly by report log and written to redis
| 0
|
91,604
| 8,310,115,668
|
IssuesEvent
|
2018-09-24 09:33:02
|
pints-team/pints
|
https://api.github.com/repos/pints-team/pints
|
opened
|
banana problem failing across many samplers
|
functional-testing
|
- [x] AdaptiveCovarianceMCMC
- [ ] DifferentialEvolutionMCMC
- [ ] DreamMCMC
- [x] MetropolisRandomWalkMCMC
- [ ] PopulationMCMC
|
1.0
|
banana problem failing across many samplers - - [x] AdaptiveCovarianceMCMC
- [ ] DifferentialEvolutionMCMC
- [ ] DreamMCMC
- [x] MetropolisRandomWalkMCMC
- [ ] PopulationMCMC
|
non_process
|
banana problem failing across many samplers adaptivecovariancemcmc differentialevolutionmcmc dreammcmc metropolisrandomwalkmcmc populationmcmc
| 0
|
15,718
| 19,861,823,112
|
IssuesEvent
|
2022-01-22 01:11:09
|
googleapis/python-iot
|
https://api.github.com/repos/googleapis/python-iot
|
closed
|
samples.api-client.accesstoken_example.accesstoken_test: test_send_iot_command_to_device failed
|
api: cloudiot type: process samples flakybot: issue flakybot: flaky
|
This test failed!
To configure my behavior, see [the Flaky Bot documentation](https://github.com/googleapis/repo-automation-bots/tree/main/packages/flakybot).
If I'm commenting on this issue too often, add the `flakybot: quiet` label and
I will stop commenting.
---
commit: 0ca174a21bc0a8e2891511582f5c592062dd50a8
buildURL: [Build Status](https://source.cloud.google.com/results/invocations/8b739a51-603c-4458-96be-b0dee3a6cffc), [Sponge](http://sponge2/8b739a51-603c-4458-96be-b0dee3a6cffc)
status: failed
<details><summary>Test output</summary><br><pre>def test_send_iot_command_to_device():
device_id = device_id_template.format(uuid.uuid4())
service_account_email = (
"cloud-iot-test@python-docs-samples-tests.iam.gserviceaccount.com"
)
command_to_be_sent_to_device = "OPEN_DOOR"
manager.open_registry(
service_account_json, project_id, cloud_region, device_pubsub_topic, registry_id
)
manager.create_rs256_device(
service_account_json,
project_id,
cloud_region,
registry_id,
device_id,
rsa_cert_path,
)
# Create device MQTT client and connect to cloud iot mqtt bridge.
mqtt_bridge_hostname = "mqtt.googleapis.com"
mqtt_bridge_port = 8883
mqtt_tls_cert = "resources/roots.pem"
client = cloudiot_mqtt_example.get_client(
project_id,
cloud_region,
registry_id,
device_id,
rsa_private_path,
"RS256",
mqtt_tls_cert,
mqtt_bridge_hostname,
mqtt_bridge_port,
)
accesstoken.send_iot_command_to_device(
cloud_region,
project_id,
registry_id,
device_id,
"RS256",
rsa_private_path,
service_account_email,
> command_to_be_sent_to_device,
)
accesstoken_test.py:158:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
accesstoken.py:353: in send_iot_command_to_device
assert command_resp.ok, command_resp.raise_for_status()
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <Response [404]>
def raise_for_status(self):
"""Raises :class:`HTTPError`, if one occurred."""
http_error_msg = ''
if isinstance(self.reason, bytes):
# We attempt to decode utf-8 first because some servers
# choose to localize their reason strings. If the string
# isn't utf-8, we fall back to iso-8859-1 for all other
# encodings. (See PR #3538)
try:
reason = self.reason.decode('utf-8')
except UnicodeDecodeError:
reason = self.reason.decode('iso-8859-1')
else:
reason = self.reason
if 400 <= self.status_code < 500:
http_error_msg = u'%s Client Error: %s for url: %s' % (self.status_code, reason, self.url)
elif 500 <= self.status_code < 600:
http_error_msg = u'%s Server Error: %s for url: %s' % (self.status_code, reason, self.url)
if http_error_msg:
> raise HTTPError(http_error_msg, response=self)
E requests.exceptions.HTTPError: 404 Client Error: Not Found for url: https://cloudiot.googleapis.com/v1/projects/python-docs-samples-tests/locations/us-central1/registries/test-registry-c332037d9aee4e6a92d3b831ff3b26a3-1633079588/devices/test-device-256-0bcc9592-bc28-47b2-bc0f-bbcb267149b3:sendCommandToDevice
.nox/py-3-6/lib/python3.6/site-packages/requests/models.py:953: HTTPError</pre></details>
|
1.0
|
samples.api-client.accesstoken_example.accesstoken_test: test_send_iot_command_to_device failed - This test failed!
To configure my behavior, see [the Flaky Bot documentation](https://github.com/googleapis/repo-automation-bots/tree/main/packages/flakybot).
If I'm commenting on this issue too often, add the `flakybot: quiet` label and
I will stop commenting.
---
commit: 0ca174a21bc0a8e2891511582f5c592062dd50a8
buildURL: [Build Status](https://source.cloud.google.com/results/invocations/8b739a51-603c-4458-96be-b0dee3a6cffc), [Sponge](http://sponge2/8b739a51-603c-4458-96be-b0dee3a6cffc)
status: failed
<details><summary>Test output</summary><br><pre>def test_send_iot_command_to_device():
device_id = device_id_template.format(uuid.uuid4())
service_account_email = (
"cloud-iot-test@python-docs-samples-tests.iam.gserviceaccount.com"
)
command_to_be_sent_to_device = "OPEN_DOOR"
manager.open_registry(
service_account_json, project_id, cloud_region, device_pubsub_topic, registry_id
)
manager.create_rs256_device(
service_account_json,
project_id,
cloud_region,
registry_id,
device_id,
rsa_cert_path,
)
# Create device MQTT client and connect to cloud iot mqtt bridge.
mqtt_bridge_hostname = "mqtt.googleapis.com"
mqtt_bridge_port = 8883
mqtt_tls_cert = "resources/roots.pem"
client = cloudiot_mqtt_example.get_client(
project_id,
cloud_region,
registry_id,
device_id,
rsa_private_path,
"RS256",
mqtt_tls_cert,
mqtt_bridge_hostname,
mqtt_bridge_port,
)
accesstoken.send_iot_command_to_device(
cloud_region,
project_id,
registry_id,
device_id,
"RS256",
rsa_private_path,
service_account_email,
> command_to_be_sent_to_device,
)
accesstoken_test.py:158:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
accesstoken.py:353: in send_iot_command_to_device
assert command_resp.ok, command_resp.raise_for_status()
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <Response [404]>
def raise_for_status(self):
"""Raises :class:`HTTPError`, if one occurred."""
http_error_msg = ''
if isinstance(self.reason, bytes):
# We attempt to decode utf-8 first because some servers
# choose to localize their reason strings. If the string
# isn't utf-8, we fall back to iso-8859-1 for all other
# encodings. (See PR #3538)
try:
reason = self.reason.decode('utf-8')
except UnicodeDecodeError:
reason = self.reason.decode('iso-8859-1')
else:
reason = self.reason
if 400 <= self.status_code < 500:
http_error_msg = u'%s Client Error: %s for url: %s' % (self.status_code, reason, self.url)
elif 500 <= self.status_code < 600:
http_error_msg = u'%s Server Error: %s for url: %s' % (self.status_code, reason, self.url)
if http_error_msg:
> raise HTTPError(http_error_msg, response=self)
E requests.exceptions.HTTPError: 404 Client Error: Not Found for url: https://cloudiot.googleapis.com/v1/projects/python-docs-samples-tests/locations/us-central1/registries/test-registry-c332037d9aee4e6a92d3b831ff3b26a3-1633079588/devices/test-device-256-0bcc9592-bc28-47b2-bc0f-bbcb267149b3:sendCommandToDevice
.nox/py-3-6/lib/python3.6/site-packages/requests/models.py:953: HTTPError</pre></details>
|
process
|
samples api client accesstoken example accesstoken test test send iot command to device failed this test failed to configure my behavior see if i m commenting on this issue too often add the flakybot quiet label and i will stop commenting commit buildurl status failed test output def test send iot command to device device id device id template format uuid service account email cloud iot test python docs samples tests iam gserviceaccount com command to be sent to device open door manager open registry service account json project id cloud region device pubsub topic registry id manager create device service account json project id cloud region registry id device id rsa cert path create device mqtt client and connect to cloud iot mqtt bridge mqtt bridge hostname mqtt googleapis com mqtt bridge port mqtt tls cert resources roots pem client cloudiot mqtt example get client project id cloud region registry id device id rsa private path mqtt tls cert mqtt bridge hostname mqtt bridge port accesstoken send iot command to device cloud region project id registry id device id rsa private path service account email command to be sent to device accesstoken test py accesstoken py in send iot command to device assert command resp ok command resp raise for status self def raise for status self raises class httperror if one occurred http error msg if isinstance self reason bytes we attempt to decode utf first because some servers choose to localize their reason strings if the string isn t utf we fall back to iso for all other encodings see pr try reason self reason decode utf except unicodedecodeerror reason self reason decode iso else reason self reason if self status code http error msg u s client error s for url s self status code reason self url elif self status code http error msg u s server error s for url s self status code reason self url if http error msg raise httperror http error msg response self e requests exceptions httperror client error not found for url nox py lib site packages requests models py httperror
| 1
|
191,121
| 6,825,995,535
|
IssuesEvent
|
2017-11-08 12:37:55
|
thinkh/provenance_retrieval
|
https://api.github.com/repos/thinkh/provenance_retrieval
|
closed
|
Go back to last state before searching
|
priority: high type: feature
|
When searching and jumping to a search result, the user needs a back button to continue where she left. This button is especially important when provenance panel is closed.
|
1.0
|
Go back to last state before searching - When searching and jumping to a search result, the user needs a back button to continue where she left. This button is especially important when provenance panel is closed.
|
non_process
|
go back to last state before searching when searching and jumping to a search result the user needs a back button to continue where she left this button is especially important when provenance panel is closed
| 0
|
9,958
| 2,616,014,854
|
IssuesEvent
|
2015-03-02 00:57:34
|
jasonhall/bwapi
|
https://api.github.com/repos/jasonhall/bwapi
|
closed
|
Problem with iterating over Broodwar->getEvents()
|
auto-migrated Component-Logic Priority-High Type-Defect Usability
|
```
std::list<Event> events = Broodwar->getEvents();
for(std::list<Event>::iterator e = events.begin(); e != events.end(); ++e)
{
EventType::Enum et = e->type;
What steps will reproduce the problem?
1. Use official BWAPI 3.3 (debug)
2. Compile a ClientAI (e.g. AIModuleLoader) in Debug and debug it
What is the expected output? What do you see instead?
An assertion error from vc++ saying the list iterator is not incrementable.
Please provide any additional information below.
The problem is fixed by first copying the list returned by getEvents() and
iterating over it.
Is this a BWAPI problem or is VC++ wrong here?
```
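A Python analog of the copy-first workaround described above (BWAPI itself is C++, so this only illustrates the pattern): iterate over a snapshot of the returned container so the loop survives mutation of the original.
```python
# Python analog of "copy the list returned by getEvents() before iterating".
events = ["UnitCreate", "UnitDestroy", "MatchEnd"]

for event in list(events):        # snapshot; safe even if handlers mutate
    if event == "UnitDestroy":
        events.remove(event)      # mutating the original is now harmless
    print("handled", event)
print(events)  # ['UnitCreate', 'MatchEnd']
```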
Original issue reported on code.google.com by `tren...@gmail.com` on 25 Nov 2010 at 3:36
|
1.0
|
Problem with iterating over Broodwar->getEvents() - ```
std::list<Event> events = Broodwar->getEvents();
for(std::list<Event>::iterator e = events.begin(); e != events.end(); ++e)
{
EventType::Enum et = e->type;
What steps will reproduce the problem?
1. Use official BWAPI 3.3 (debug)
2. Compile a ClientAI (e.g. AIModuleLoader) in Debug and debug it
What is the expected output? What do you see instead?
An assertion error from vc++ saying the list iterator is not incrementable.
Please provide any additional information below.
The problem is fixed by first copying the list returned by getEvents() and
iterating over it.
Is this a BWAPI problem or is VC++ wrong here?
```
Original issue reported on code.google.com by `tren...@gmail.com` on 25 Nov 2010 at 3:36
|
non_process
|
problem with iterating over broodwar getevents std list events broodwar getevents for std list iterator e events begin e events end e eventtype enum et e type what steps will reproduce the problem use official bwapi debug compile a clientai e g aimoduleloader in debug and debug it what is the expected output what do you see instead an assertion error from vc sayint the list iterator is not incrementable please provide any additional information below the problem is fixed by first copying the list returned by getevents and iterating over it is this a bwapi problem or is vc wrong here original issue reported on code google com by tren gmail com on nov at
| 0
|
263,407
| 19,909,903,639
|
IssuesEvent
|
2022-01-25 16:11:46
|
iza-institute-of-labor-economics/gettsim
|
https://api.github.com/repos/iza-institute-of-labor-economics/gettsim
|
closed
|
Polish function table in documentation
|
documentation priority-high
|
### Current and desired situation
improve documentation
### Proposed implementation
For functions without docstring text a description of the first parameter is shown. The goal should be to have a docstring for each function.
Also: the second column is hard to read because one needs to scroll to the left. Can we do one of the following:
- have line breaks?
- increase the width of the documentation? It only spans 2/3 of my screen.
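As an illustration of the docstring goal (the function name, parameter, and value are invented, not GETTSIM's real ones), each function should carry its own summary rather than inheriting its first parameter's description:
```python
# Illustrative only: a function that documents itself.
def kindergeld_m(anz_kinder_tu: int) -> float:
    """Monthly child benefit for a tax unit.

    Parameters
    ----------
    anz_kinder_tu
        Number of eligible children in the tax unit.
    """
    return 219.0 * anz_kinder_tu
```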
|
1.0
|
Polish function table in documentation - ### Current and desired situation
improve documentation
### Proposed implementation
For functions without docstring text a description of the first parameter is shown. The goal should be to have a docstring for each function.
Also: the second column is hard to read because one needs to scroll to the left. Can we do one of the following:
- have line breaks?
- increase the width of the documentation? It only spans 2/3 of my screen.
|
non_process
|
polish function table in documentation current and desired situation improve documentation proposed implementation for functions without docstring text a description of the first parameter is shown the goal should be to have a docstring for each function also the second column is hard to read because one needs to scroll to the left can we do one of the following have line breaks increase the width of the documentation it only spans of my screen
| 0
|
320,115
| 23,802,241,100
|
IssuesEvent
|
2022-09-03 13:31:11
|
typesense/typesense
|
https://api.github.com/repos/typesense/typesense
|
closed
|
Pls add some documentation around backup restore
|
documentation
|
While there is information about snapshotting in the docs, there does not seem to be any info about restoring, or about best practices and gotchas while doing so.
As an example, I did the following, and I guess it is this straightforward, but it would be good to have it documented (a code sketch of steps 3-4 follows the list).
1. Snapshot created on 0.22.1 on running TS
2. New server setup with 0.23.1. TS not running.
3. Copied the snapshot to the new server
4. Replaced the `/var/lib/typesense/state` folder with the corresponding `state` folder found in the snapshot
5. Started TS on new server
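For what it's worth, a minimal sketch of steps 3-4 as code, assuming the snapshot layout described above and that Typesense is stopped while it runs (both paths are hypothetical; this is plain directory surgery, not a Typesense API):
```cpp
// Restore sketch: swap the live state directory for the one from a snapshot.
// Run this only while the Typesense server is stopped.
#include <filesystem>
#include <iostream>

namespace fs = std::filesystem;

int main() {
    const fs::path snapshot_state = "/backups/ts-snapshot/state"; // hypothetical snapshot copy
    const fs::path live_state     = "/var/lib/typesense/state";

    std::error_code ec;
    fs::remove_all(live_state, ec);                                        // drop the old state
    fs::copy(snapshot_state, live_state, fs::copy_options::recursive, ec); // copy snapshot in
    if (ec) {
        std::cerr << "restore failed: " << ec.message() << "\n";
        return 1;
    }
    std::cout << "state restored; start Typesense now\n";
    return 0;
}
```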
|
1.0
|
Pls add some documentation around backup restore - While there is information about snapshotting in the docs, there does not seem to be any info about restoring, or about best practices and gotchas while doing so.
As an example, I did the following, and I guess it is this straightforward, but it would be good to have it documented.
1. Snapshot created on 0.22.1 on running TS
2. New server setup with 0.23.1. TS not running.
3. Copied the snapshot to the new server
4. Replaced the `/var/lib/typesense/state` folder with the corresponding `state` folder found in the snapshot
5. Started TS on new server
|
non_process
|
pls add some documentation around backup restore while there is information about snapshotting in the doc there does not seem to be any info around restore and best practices or gotchas while restoring as an example i did this and i guess its this straight forward but would be good to have this documented snapshot created on on running ts new server setup with ts not running copied the snapshot to the new server replaced the var lib typesense state folder with the corresponding state folder found in the snapshot started ts on new server
| 0
|
20,093
| 26,623,961,420
|
IssuesEvent
|
2023-01-24 13:19:33
|
UnitTestBot/UTBotJava
|
https://api.github.com/repos/UnitTestBot/UTBotJava
|
opened
|
Timeouts for concrete execution in UtBotSymbolicEngine are not calculated
|
ctg-bug comp-symbolic-engine comp-instrumented-process
|
**Description**
Currently, the `UtBotSymbolicEngine` class relies on coroutine scope cancellation and always passes 1 second as the timeout for the concrete executor. When cancellation happens, it is not propagated to the Concrete Executor, because the invariant for the Instrumentation Process is that it cancels itself and always works for the provided timeout.
We should either:
1. Propagate cancellation inside the instrumentation process and change the invariants - discuss it with @sergeypospelov
2. Calculate the desired timeout in `UtBotSymbolicEngine` (a generic sketch of this option follows).
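Option 2 amounts to a generic deadline-to-timeout computation. The engine itself is Kotlin, so this is only a language-neutral illustration in C++, with all names hypothetical:
```cpp
// Sketch of option 2: derive the timeout handed to each concrete execution
// from an overall engine deadline instead of hard-coding 1 second.
#include <algorithm>
#include <chrono>
#include <iostream>

using Clock = std::chrono::steady_clock;

Clock::duration remainingBudget(Clock::time_point deadline) {
    // Clamp at zero so a passed deadline yields "no time left", never a negative timeout.
    return std::max(deadline - Clock::now(), Clock::duration::zero());
}

int main() {
    const auto deadline = Clock::now() + std::chrono::seconds(10); // engine-wide budget
    const auto timeout  = remainingBudget(deadline);               // per-execution timeout
    std::cout << std::chrono::duration_cast<std::chrono::milliseconds>(timeout).count()
              << " ms left for the next concrete execution\n";
}
```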
|
1.0
|
Timeouts for concrete execution in UtBotSymbolicEngine are not calculated - **Description**
Currently, the `UtBotSymbolicEngine` class relies on coroutine scope cancellation and always passes 1 second as the timeout for the concrete executor. When cancellation happens, it is not propagated to the Concrete Executor, because the invariant for the Instrumentation Process is that it cancels itself and always works for the provided timeout.
We should either:
1. Propagate cancellation inside the instrumentation process and change the invariants - discuss it with @sergeypospelov
2. Calculate the desired timeout in `UtBotSymbolicEngine`.
|
process
|
timeouts for concrete execution in utbotsymbolicengine are not calculated description currently class utbotsymbolicengine relies on coroutine scope cancellation and always passes as timeout for concrete executor sec when cancellation happens it is not propagated in concrete executor because invariant for instrumentation process is that it cancels itself and always work provided timeout we should either propagate cancellation inside instrumentation process and change invariants discuss it with sergeypospelov calculated desired timeout in utbotsymbolicengine
| 1
|
11,659
| 14,523,881,821
|
IssuesEvent
|
2020-12-14 10:41:16
|
prisma/prisma
|
https://api.github.com/repos/prisma/prisma
|
closed
|
Reintrospection Bug on Relations
|
bug/2-confirmed kind/bug process/candidate team/migrations topic: re-introspection
|
From a report in Slack:
> I’m having an issue where my prisma schema gets updated every time I introspect; here’s the workflow:
> - I make changes to my db with plain sql
> - I introspect my db
> - Prisma schema gets generated
> - I update some values for easier understanding to the prisma client
> - I introspect again
> - Prisma schema gets updated again (overwriting my custom field changes)
>
> Here’s an example:
> Generated by prisma after introspect:
> ```prisma
> model Tag {
> id Int @id @default(autoincrement())
> name String @unique
> post Post[] @relation("post_to_tag")
> @@map("tag")
> }
> ```
> Modified by me afterwards:
> ```prisma
> model Tag {
> id Int @id @default(autoincrement())
> name String @unique
> posts Post[] @relation("post_to_tag") <-------- posts instead of post
> @@map("tag")
> }
> ```
>
> When I introspect again, my changes are overwritten with the 1st code block again.
https://prisma.slack.com/archives/CA491RJH0/p1607487789070000
|
1.0
|
Reintrospection Bug on Relations - From a report in Slack:
> I’m having an issue where my prisma schema gets updated every time I introspect; here’s the workflow:
> - I make changes to my db with plain sql
> - I introspect my db
> - Prisma schema gets generated
> - I update some values for easier understanding to the prisma client
> - I introspect again
> - Prisma schema gets updated again (overwriting my custom field changes)
>
> Here’s an example:
> Generated by prisma after introspect:
> ```prisma
> model Tag {
> id Int @id @default(autoincrement())
> name String @unique
> post Post[] @relation("post_to_tag")
> @@map("tag")
> }
> ```
> Modified by me afterwards:
> ```prisma
> model Tag {
> id Int @id @default(autoincrement())
> name String @unique
> posts Post[] @relation("post_to_tag") <-------- posts instead of post
> @@map("tag")
> }
> ```
>
> When I introspect again, my changes are overwritten with the 1st code block again.
https://prisma.slack.com/archives/CA491RJH0/p1607487789070000
|
process
|
reintrospection bug on relations from a report in slack i’m having an issue where my prisma schema gets updated every time i introspect here’s the workflow i make changes to my db with plain sql i introspect my db prisma schema gets generated i update some values for easier understanding to the prisma client i introspect again prisma schema gets updated again overwriting my custom field changes here’s an example generated by prisma after introspect prisma model tag id int id default autoincrement name string unique post post relation post to tag map tag modified my by afterwards prisma model tag id int id default autoincrement name string unique posts post relation post to tag posts instead of post map tag introspects again and overwrites my changes with the code block again
| 1
|
399,388
| 11,747,882,875
|
IssuesEvent
|
2020-03-12 14:21:27
|
containrrr/watchtower
|
https://api.github.com/repos/containrrr/watchtower
|
opened
|
Portainer agent and watchtower remote
|
Priority: Medium Status: Available Type: Question
|
Hi!
Guys, I have a question about the possibility of using, for a remote host, the Portainer agent container running on that remote Docker host: is it possible to connect with a remote key to the remote host at tcp://ip:9001?
Because when I try to do this, I get the following logs:
```
time="2020-03-12T14:16:09Z" level=debug msg="Sleeping for a second to ensure the docker api client has been properly initialized.",
time="2020-03-12T14:16:10Z" level=fatal msg="Error response from daemon: Client sent an HTTP request to an HTTPS server.",
time="2020-03-12T14:16:10Z" level=debug msg="Retrieving running containers",
time="2020-03-12T14:16:14Z" level=debug msg="Sleeping for a second to ensure the docker api client has been properly initialized.",
time="2020-03-12T14:16:15Z" level=debug msg="Retrieving running containers",
time="2020-03-12T14:16:15Z" level=fatal msg="Error response from daemon: Client sent an HTTP request to an HTTPS server.",
```
I'm not sure, but my Portainer is working with this instance via this remote agent container at port 9001 right now; maybe I need to run another container on a different port on my remote host only for this Watchtower?
I would be glad of any comments on this matter; I would like to keep running Watchtower remotely. Thanks in advance!
|
1.0
|
Portainer agent and watchtower remote - Hi!
Guys, I have a question about the possibility of using, for a remote host, the Portainer agent container running on that remote Docker host: is it possible to connect with a remote key to the remote host at tcp://ip:9001?
Because when I try to do this, I get the following logs:
```
time="2020-03-12T14:16:09Z" level=debug msg="Sleeping for a second to ensure the docker api client has been properly initialized.",
time="2020-03-12T14:16:10Z" level=fatal msg="Error response from daemon: Client sent an HTTP request to an HTTPS server.",
time="2020-03-12T14:16:10Z" level=debug msg="Retrieving running containers",
time="2020-03-12T14:16:14Z" level=debug msg="Sleeping for a second to ensure the docker api client has been properly initialized.",
time="2020-03-12T14:16:15Z" level=debug msg="Retrieving running containers",
time="2020-03-12T14:16:15Z" level=fatal msg="Error response from daemon: Client sent an HTTP request to an HTTPS server.",
```
I'm not sure, but my Portainer is working with this instance via this remote agent container at port 9001 right now; maybe I need to run another container on a different port on my remote host only for this Watchtower?
I would be glad of any comments on this matter; I would like to keep running Watchtower remotely. Thanks in advance!
|
non_process
|
portainer agent and watchtower remote hi guys i have an question bout pissibility to use for remote host running portainer agent container at remote docker host it s possible to connect with remote key to remote host at tcp ip coz i m just trying to do this and i have an the next logs time level debug msg sleeping for a second to ensure the docker api client has been properly initialized time level fatal msg error response from daemon client sent an http request to an https server time level debug msg retrieving running containers time level debug msg sleeping for a second to ensure the docker api client has been properly initialized time level debug msg retrieving running containers time level fatal msg error response from daemon client sent an http request to an https server i m not sure but my portainer is working with this instance via this remote agent container at port now maybe i need to running another container with different port on my remote host only for this watchtower i would be glad to any comments on this matter i would like to keep the watchtower remotely thanks in advance
| 0
|
15,600
| 8,972,153,038
|
IssuesEvent
|
2019-01-29 17:32:23
|
tesseract-ocr/tesseract
|
https://api.github.com/repos/tesseract-ocr/tesseract
|
closed
|
Significant slow down of Windows Tesseract built with cppan
|
performance question
|
I have noticed that Tesseract processing takes longer with new changes since the release.
Below, I am comparing two similar builds (12-14-2018 and 1-27-2019). The older version performance is comparable to the released code but the new version is slow.
The reason for not using the released version was to reduce the time gap and hence the number of changes that caused the slowdown.
Older Version (12-24-18):
-------------------------
D:\Dev\Attic\tesseract>git show
commit b2ab772016f2caf22b65e87112e3ba3738e9e68a (HEAD -> feature/321-Tesseract-4, origin/feature/321-Tesseract-4)
Merge: 660766f6 231992a9
Author: Charles Weld <charles.weld@gmail.com>
Date: Thu Jul 5 16:25:55 2018 +1000
Merge pull request #414 from nguyenq/feature/321-Tesseract-4
Feature/321 tesseract 4
Newer Version (1-27-19):
------------------------
D:\Dev\tesseract>git show
commit 44038cb5e86a2a89d937c3bc86fa86ff00e58275 (HEAD -> master, origin/master, origin/HEAD)
Merge: 8f87ebb4 4d9bc11f
Author: zdenop <zdenop@gmail.com>
Date: Sun Jan 27 08:24:41 2019 +0100
Merge pull request #2200 from Shreeshrii/master
fix and enable more unittests
Test run using a 7-page TIFF file:
----------------------------------
D:\TessTest>Run
D:\TessTest>cd Tess_12_14
D:\TessTest\Tess_12_14>echo 9:53:06.35
9:53:06.35
D:\TessTest\Tess_12_14>tesseract ..\Page7.tif .
Tesseract Open Source OCR Engine v4.0.0 with Leptonica
Page 1
Page 2
Page 3
Page 4
Page 5
Page 6
Page 7
D:\TessTest\Tess_12_14>echo 9:54:46.30
9:54:46.30
D:\TessTest\Tess_12_14>cd ..\Tess_1_27
D:\TessTest\Tess_1_27>tesseract ..\Page7.tif .
Tesseract Open Source OCR Engine v4.0.0 with Leptonica
Page 1
Page 2
Page 3
Page 4
Page 5
Page 6
Page 7
D:\TessTest\Tess_1_27>echo 9:57:32.28
9:57:32.28
Summary of the test run:
------------------------
12-24-18 version time: ~100 seconds
1-27-19 version time: ~162 seconds
Source files that changed:
--------------------------
.\api\altorenderer.cpp
.\api\baseapi.cpp
.\api\capi.cpp
.\api\makefile.am
.\api\renderer.cpp
.\api\renderer.h
.\api\tesseractmain.cpp
.\arch\dotproductavx.cpp
.\arch\dotproductsse.cpp
.\arch\dotproductsse.h
.\arch\intsimdmatrix.cpp
.\arch\intsimdmatrix.h
.\arch\intsimdmatrixavx2.cpp
.\arch\intsimdmatrixsse.cpp
.\arch\makefile.am
.\arch\simddetect.cpp
.\ccmain\control.cpp
.\ccmain\tesseractclass.cpp
.\ccmain\tesseractclass.h
.\ccmain\thresholder.cpp
.\ccstruct\otsuthr.cpp
.\classify\cluster.cpp
.\classify\protos.cpp
.\cutil\oldlist.cpp
.\cutil\oldlist.h
.\lstm\networkio.cpp
.\lstm\networkio.h
.\lstm\tfnetwork.cpp
.\lstm\tfnetwork.h
.\lstm\weightmatrix.cpp
.\lstm\weightmatrix.h
.\opencl\makefile.am
.\opencl\oclkernels.h
.\opencl\openclwrapper.cpp
.\opencl\openclwrapper.h
.\textord\linefind.cpp
.\training\commontraining.h
.\training\pango_font_info.cpp
.\training\tesstrain.sh
.\training\tesstrain_utils.sh
.\training\text2image.cpp
.\training\validate_grapheme.cpp
.\training\validate_indic.cpp
.\training\validate_javanese.cpp
.\training\validate_khmer.cpp
.\training\validate_myanmar.cpp
.\training\validator.h
|
True
|
Significant slow down of Windows Tesseract built with cppan - I have noticed that Tesseract processing takes longer with new changes since the release.
Below, I am comparing two similar builds (12-14-2018 and 1-27-2019). The older version performance is comparable to the released code but the new version is slow.
The reason for not using the released version was to reduce the time gap and hence the number of changes that caused the slowdown.
Older Version (12-24-18):
-------------------------
D:\Dev\Attic\tesseract>git show
commit b2ab772016f2caf22b65e87112e3ba3738e9e68a (HEAD -> feature/321-Tesseract-4, origin/feature/321-Tesseract-4)
Merge: 660766f6 231992a9
Author: Charles Weld <charles.weld@gmail.com>
Date: Thu Jul 5 16:25:55 2018 +1000
Merge pull request #414 from nguyenq/feature/321-Tesseract-4
Feature/321 tesseract 4
Newer Version (1-27-19):
------------------------
D:\Dev\tesseract>git show
commit 44038cb5e86a2a89d937c3bc86fa86ff00e58275 (HEAD -> master, origin/master, origin/HEAD)
Merge: 8f87ebb4 4d9bc11f
Author: zdenop <zdenop@gmail.com>
Date: Sun Jan 27 08:24:41 2019 +0100
Merge pull request #2200 from Shreeshrii/master
fix and enable more unittests
Test run using a 7-page TIFF file:
----------------------------------
D:\TessTest>Run
D:\TessTest>cd Tess_12_14
D:\TessTest\Tess_12_14>echo 9:53:06.35
9:53:06.35
D:\TessTest\Tess_12_14>tesseract ..\Page7.tif .
Tesseract Open Source OCR Engine v4.0.0 with Leptonica
Page 1
Page 2
Page 3
Page 4
Page 5
Page 6
Page 7
D:\TessTest\Tess_12_14>echo 9:54:46.30
9:54:46.30
D:\TessTest\Tess_12_14>cd ..\Tess_1_27
D:\TessTest\Tess_1_27>tesseract ..\Page7.tif .
Tesseract Open Source OCR Engine v4.0.0 with Leptonica
Page 1
Page 2
Page 3
Page 4
Page 5
Page 6
Page 7
D:\TessTest\Tess_1_27>echo 9:57:32.28
9:57:32.28
Summary of the test run:
------------------------
12-24-18 version time: ~100 seconds
1-27-19 version time: ~162 seconds
Source files that changed:
--------------------------
.\api\altorenderer.cpp
.\api\baseapi.cpp
.\api\capi.cpp
.\api\makefile.am
.\api\renderer.cpp
.\api\renderer.h
.\api\tesseractmain.cpp
.\arch\dotproductavx.cpp
.\arch\dotproductsse.cpp
.\arch\dotproductsse.h
.\arch\intsimdmatrix.cpp
.\arch\intsimdmatrix.h
.\arch\intsimdmatrixavx2.cpp
.\arch\intsimdmatrixsse.cpp
.\arch\makefile.am
.\arch\simddetect.cpp
.\ccmain\control.cpp
.\ccmain\tesseractclass.cpp
.\ccmain\tesseractclass.h
.\ccmain\thresholder.cpp
.\ccstruct\otsuthr.cpp
.\classify\cluster.cpp
.\classify\protos.cpp
.\cutil\oldlist.cpp
.\cutil\oldlist.h
.\lstm\networkio.cpp
.\lstm\networkio.h
.\lstm\tfnetwork.cpp
.\lstm\tfnetwork.h
.\lstm\weightmatrix.cpp
.\lstm\weightmatrix.h
.\opencl\makefile.am
.\opencl\oclkernels.h
.\opencl\openclwrapper.cpp
.\opencl\openclwrapper.h
.\textord\linefind.cpp
.\training\commontraining.h
.\training\pango_font_info.cpp
.\training\tesstrain.sh
.\training\tesstrain_utils.sh
.\training\text2image.cpp
.\training\validate_grapheme.cpp
.\training\validate_indic.cpp
.\training\validate_javanese.cpp
.\training\validate_khmer.cpp
.\training\validate_myanmar.cpp
.\training\validator.h
|
non_process
|
significant slow down of windows tesseract built with cppan i have noticed that tesseract processing takes longer with new changes since the release below i am comparing two similar builds and the older version performance is comparable to the released code but the new version is slow the reason for not using the released version was to reduce the time gap and hence the number of changes that caused the slowdown older version d dev attic tesseract git show commit head feature tesseract origin feature tesseract merge author charles weld date thu jul merge pull request from nguyenq feature tesseract feature tesseract newer version d dev tesseract git show commit head master origin master origin head merge author zdenop date sun jan merge pull request from shreeshrii master fix and enable more unittests test run using a page tiff file d tesstest run d tesstest cd tess d tesstest tess echo d tesstest tess tesseract tif tesseract open source ocr engine with leptonica page page page page page page page d tesstest tess echo d tesstest tess cd tess d tesstest tess tesseract tif tesseract open source ocr engine with leptonica page page page page page page page d tesstest tess echo summary of the test run version time seconds version time seconds source files that changed api altorenderer cpp api baseapi cpp api capi cpp api makefile am api renderer cpp api renderer h api tesseractmain cpp arch dotproductavx cpp arch dotproductsse cpp arch dotproductsse h arch intsimdmatrix cpp arch intsimdmatrix h arch cpp arch intsimdmatrixsse cpp arch makefile am arch simddetect cpp ccmain control cpp ccmain tesseractclass cpp ccmain tesseractclass h ccmain thresholder cpp ccstruct otsuthr cpp classify cluster cpp classify protos cpp cutil oldlist cpp cutil oldlist h lstm networkio cpp lstm networkio h lstm tfnetwork cpp lstm tfnetwork h lstm weightmatrix cpp lstm weightmatrix h opencl makefile am opencl oclkernels h opencl openclwrapper cpp opencl openclwrapper h textord linefind cpp training commontraining h training pango font info cpp training tesstrain sh training tesstrain utils sh training cpp training validate grapheme cpp training validate indic cpp training validate javanese cpp training validate khmer cpp training validate myanmar cpp training validator h
| 0
|
39,370
| 12,663,416,431
|
IssuesEvent
|
2020-06-18 01:17:13
|
TIBCOSoftware/PDToolRelease
|
https://api.github.com/repos/TIBCOSoftware/PDToolRelease
|
opened
|
CVE-2020-2934 (Medium) detected in mysql-connector-java-5.1.14.jar
|
security vulnerability
|
## CVE-2020-2934 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>mysql-connector-java-5.1.14.jar</b></p></summary>
<p>MySQL JDBC Type 4 driver</p>
<p>Library home page: <a href="http://dev.mysql.com/doc/connector-j/en/">http://dev.mysql.com/doc/connector-j/en/</a></p>
<p>Path to vulnerable library: _depth_1/PDToolRelease/7.0.0/Release-7.0.0-2019-01-30/PDTool-7.0.0-2019-01-30.r1/PDTool7.0.0_installer/installer_source/PDTool/resources/carfiles/TEST10/files/conf/adapters/system/mysql_5_0/mysql-connector-java-5.1.14-bin.jar,_depth_1/PDToolRelease/8.0.0/Release-8.0.0-2019-01-30/PDTool-8.0.0-2019-01-30.r1/PDTool8.0.0_installer/installer_source/PDTool/resources/carfiles/TEST08/files/conf/adapters/system/mysql_5_0/mysql-connector-java-5.1.14-bin.jar,_depth_1/PDToolRelease/7.0.0/Release-7.0.0-2018-10-01/PDTool-7.0.0-2018-10-01.r1/PDTool7.0.0_installer/installer_source/PDTool/resources/carfiles/TEST02/files/conf/adapters/system/mysql_5_0/mysql-connector-java-5.1.14-bin.jar,_depth_1/PDToolRelease/7.0.0/Release-7.0.0-2018-06-12/PDTool-7.0.0-2018-06-12.r1/PDTool7.0.0_installer/installer_source/PDTool/resources/carfiles/TEST06/files/conf/adapters/system/mysql_5_0/mysql-connector-java-5.1.14-bin.jar,_depth_1/PDToolRelease/8.0.0/Release-8.0.0-2019-01-30/PDTool-8.0.0-2019-01-30.r1/PDTool8.0.0_installer/installer_source/PDTool/resources/carfiles/TEST00_6.2http/files/conf/adapters/system/mysql_5_0/mysql-connector-java-5.1.14-bin.jar,_depth_1/PDToolRelease/7.0.0/Release-7.0.0-2019-01-30/PDTool-7.0.0-2019-01-30.r1/PDTool7.0.0_installer/installer_source/PDTool/resources/carfiles/TEST09/files/conf/adapters/system/mysql_5_0/mysql-connector-java-5.1.14-bin.jar,_depth_1/PDToolRelease/8.0.0/Release-8.0.0-2019-01-30/PDTool-8.0.0-2019-01-30.r1/PDTool8.0.0_installer/installer_source/PDTool/resources/carfiles/TEST03/files/conf/adapters/system/mysql_5_0/mysql-connector-java-5.1.14-bin.jar,_depth_1/PDToolRelease/7.0.0/Release-7.0.0-2018-06-12/PDTool-7.0.0-2018-06-12.r1/PDTool7.0.0_installer/installer_source/PDTool/resources/carfiles/TEST01/files/conf/adapters/system/mysql_5_0/mysql-connector-java-5.1.14-bin.jar,_depth_1/PDToolRelease/8.0.0/Release-8.0.0-2019-01-30/PDTool-8.0.0-2019-01-30.r1/PDTool8.0.0_installer/installer_source/PDTool/resources/carfiles/TEST10/files/conf/adapters/system/mysql_5_0/mysql-connector-java-5.1.14-bin.jar,_depth_1/PDToolRelease/7.0.0/Release-7.0.0-2018-06-12/PDTool-7.0.0-2018-06-12.r1/PDTool7.0.0_installer/installer_source/PDTool/resources/carfiles/TEST12/files/conf/adapters/system/mysql_5_0/mysql-connector-java-5.1.14-bin.jar,_depth_1/PDToolRelease/8.0.0/Release-8.0.0-2019-01-30/PDTool-8.0.0-2019-01-30.r1/PDTool8.0.0_installer/installer_source/PDTool/resources/carfiles/TEST09/files/conf/adapters/system/mysql_5_0/mysql-connector-java-5.1.14-bin.jar,_depth_1/PDToolRelease/7.0.0/Release-7.0.0-2019-01-30/PDTool-7.0.0-2019-01-30.r1/PDTool7.0.0_installer/installer_source/PDTool/resources/carfiles/TEST08/files/conf/adapters/system/mysql_5_0/mysql-connector-java-5.1.14-bin.jar,_depth_1/PDToolRelease/7.0.0/Release-7.0.0-2018-06-12/PDTool-7.0.0-2018-06-12.r1/PDTool7.0.0_installer/installer_source/PDTool/resources/carfiles/TEST07/files/conf/adapters/system/mysql_5_0/mysql-connector-java-5.1.14-bin.jar,_depth_1/PDToolRelease/7.0.0/Release-7.0.0-2018-10-01/PDTool-7.0.0-2018-10-01.r1/PDTool7.0.0_installer/installer_source/PDTool/resources/carfiles/TEST08/files/conf/adapters/system/mysql_5_0/mysql-connector-java-5.1.14-bin.jar,_depth_1/PDToolRelease/8.0.0/Release-8.0.0-2019-01-30/PDTool-8.0.0-2019-01-30.r1/PDTool8.0.0_installer/installer_source/PDTool/resources/carfiles/TEST02/files/conf/adapters/system/mysql_5_0/mysql-connector-java-5.1.14-bin.jar,_depth_1/PDToolRelease/8.0.0/Release-8.0.0-2019-01-30/PDTool-8.0.0-2019-01-30.r1/PDTool8.0.0_installer/installer_source/PDTool
/resources/carfiles/TEST07/files/conf/adapters/system/mysql_5_0/mysql-connector-java-5.1.14-bin.jar,_depth_1/PDToolRelease/7.0.0/Release-7.0.0-2018-10-01/PDTool-7.0.0-2018-10-01.r1/PDTool7.0.0_installer/installer_source/PDTool/resources/carfiles/TEST00_6.2http/files/conf/adapters/system/mysql_5_0/mysql-connector-java-5.1.14-bin.jar,_depth_1/PDToolRelease/7.0.0/Release-7.0.0-2018-06-12/PDTool-7.0.0-2018-06-12.r1/PDTool7.0.0_installer/installer_source/PDTool/resources/carfiles/TEST05/files/conf/adapters/system/mysql_5_0/mysql-connector-java-5.1.14-bin.jar,_depth_1/PDToolRelease/7.0.0/Release-7.0.0-2019-01-30/PDTool-7.0.0-2019-01-30.r1/PDTool7.0.0_installer/installer_source/PDTool/resources/carfiles/TEST11/files/conf/adapters/system/mysql_5_0/mysql-connector-java-5.1.14-bin.jar,_depth_1/PDToolRelease/7.0.0/Release-7.0.0-2019-01-30/PDTool-7.0.0-2019-01-30.r1/PDTool7.0.0_installer/installer_source/PDTool/resources/carfiles/TEST02/files/conf/adapters/system/mysql_5_0/mysql-connector-java-5.1.14-bin.jar,_depth_1/PDToolRelease/8.0.0/Release-8.0.0-2019-01-30/PDTool-8.0.0-2019-01-30.r1/PDTool8.0.0_installer/installer_source/PDTool/resources/carfiles/TEST01/files/conf/adapters/system/mysql_5_0/mysql-connector-java-5.1.14-bin.jar,_depth_1/PDToolRelease/7.0.0/Release-7.0.0-2019-01-30/PDTool-7.0.0-2019-01-30.r1/PDTool7.0.0_installer/installer_source/PDTool/resources/carfiles/TEST04/files/conf/adapters/system/mysql_5_0/mysql-connector-java-5.1.14-bin.jar,_depth_1/PDToolRelease/7.0.0/Release-7.0.0-2019-01-30/PDTool-7.0.0-2019-01-30.r1/PDTool7.0.0_installer/installer_source/PDTool/resources/carfiles/TEST12/files/conf/adapters/system/mysql_5_0/mysql-connector-java-5.1.14-bin.jar,_depth_1/PDToolRelease/7.0.0/Release-7.0.0-2018-10-01/PDTool-7.0.0-2018-10-01.r1/PDTool7.0.0_installer/installer_source/PDTool/resources/carfiles/TEST07/files/conf/adapters/system/mysql_5_0/mysql-connector-java-5.1.14-bin.jar,_depth_1/PDToolRelease/7.0.0/Release-7.0.0-2018-10-01/PDTool-7.0.0-2018-10-01.r1/PDTool7.0.0_installer/installer_source/PDTool/resources/carfiles/TEST06/files/conf/adapters/system/mysql_5_0/mysql-connector-java-5.1.14-bin.jar,_depth_1/PDToolRelease/7.0.0/Release-7.0.0-2019-01-30/PDTool-7.0.0-2019-01-30.r1/PDTool7.0.0_installer/installer_source/PDTool/resources/carfiles/TEST03/files/conf/adapters/system/mysql_5_0/mysql-connector-java-5.1.14-bin.jar,_depth_1/PDToolRelease/7.0.0/Release-7.0.0-2019-01-30/PDTool-7.0.0-2019-01-30.r1/PDTool7.0.0_installer/installer_source/PDTool/resources/carfiles/TEST00_6.2http/files/conf/adapters/system/mysql_5_0/mysql-connector-java-5.1.14-bin.jar,_depth_1/PDToolRelease/7.0.0/Release-7.0.0-2019-01-30/PDTool-7.0.0-2019-01-30.r1/PDTool7.0.0_installer/installer_source/PDTool/resources/carfiles/TEST06/files/conf/adapters/system/mysql_5_0/mysql-connector-java-5.1.14-bin.jar,_depth_1/PDToolRelease/7.0.0/Release-7.0.0-2018-10-01/PDTool-7.0.0-2018-10-01.r1/PDTool7.0.0_installer/installer_source/PDTool/resources/carfiles/TEST05/files/conf/adapters/system/mysql_5_0/mysql-connector-java-5.1.14-bin.jar,_depth_1/PDToolRelease/8.0.0/Release-8.0.0-2019-01-30/PDTool-8.0.0-2019-01-30.r1/PDTool8.0.0_installer/installer_source/PDTool/resources/carfiles/TEST12/files/conf/adapters/system/mysql_5_0/mysql-connector-java-5.1.14-bin.jar,_depth_1/PDToolRelease/7.0.0/Release-7.0.0-2018-06-12/PDTool-7.0.0-2018-06-12.r1/PDTool7.0.0_installer/installer_source/PDTool/resources/carfiles/TEST09/files/conf/adapters/system/mysql_5_0/mysql-connector-java-5.1.14-bin.jar,_depth_1/PDToolRelease/7.0.0/Release-7.0.0-2018-0
6-12/PDTool-7.0.0-2018-06-12.r1/PDTool7.0.0_installer/installer_source/PDTool/resources/carfiles/TEST10/files/conf/adapters/system/mysql_5_0/mysql-connector-java-5.1.14-bin.jar,_depth_1/PDToolRelease/7.0.0/Release-7.0.0-2018-06-12/PDTool-7.0.0-2018-06-12.r1/PDTool7.0.0_installer/installer_source/PDTool/resources/carfiles/TEST00_6.2http/files/conf/adapters/system/mysql_5_0/mysql-connector-java-5.1.14-bin.jar,_depth_1/PDToolRelease/8.0.0/Release-8.0.0-2019-01-30/PDTool-8.0.0-2019-01-30.r1/PDTool8.0.0_installer/installer_source/PDTool/resources/carfiles/TEST05/files/conf/adapters/system/mysql_5_0/mysql-connector-java-5.1.14-bin.jar,_depth_1/PDToolRelease/7.0.0/Release-7.0.0-2018-06-12/PDTool-7.0.0-2018-06-12.r1/PDTool7.0.0_installer/installer_source/PDTool/resources/carfiles/TEST03/files/conf/adapters/system/mysql_5_0/mysql-connector-java-5.1.14-bin.jar,_depth_1/PDToolRelease/8.0.0/Release-8.0.0-2019-01-30/PDTool-8.0.0-2019-01-30.r1/PDTool8.0.0_installer/installer_source/PDTool/resources/carfiles/TEST00_6.2https/files/conf/adapters/system/mysql_5_0/mysql-connector-java-5.1.14-bin.jar,_depth_1/PDToolRelease/7.0.0/Release-7.0.0-2018-10-01/PDTool-7.0.0-2018-10-01.r1/PDTool7.0.0_installer/installer_source/PDTool/resources/carfiles/TEST12/files/conf/adapters/system/mysql_5_0/mysql-connector-java-5.1.14-bin.jar,_depth_1/PDToolRelease/7.0.0/Release-7.0.0-2018-06-12/PDTool-7.0.0-2018-06-12.r1/PDTool7.0.0_installer/installer_source/PDTool/resources/carfiles/TEST04/files/conf/adapters/system/mysql_5_0/mysql-connector-java-5.1.14-bin.jar,_depth_1/PDToolRelease/7.0.0/Release-7.0.0-2018-06-12/PDTool-7.0.0-2018-06-12.r1/PDTool7.0.0_installer/installer_source/PDTool/resources/carfiles/TEST08/files/conf/adapters/system/mysql_5_0/mysql-connector-java-5.1.14-bin.jar,_depth_1/PDToolRelease/7.0.0/Release-7.0.0-2018-06-12/PDTool-7.0.0-2018-06-12.r1/PDTool7.0.0_installer/installer_source/PDTool/resources/carfiles/TEST00_6.2https/files/conf/adapters/system/mysql_5_0/mysql-connector-java-5.1.14-bin.jar,_depth_1/PDToolRelease/8.0.0/Release-8.0.0-2019-01-30/PDTool-8.0.0-2019-01-30.r1/PDTool8.0.0_installer/installer_source/PDTool/resources/carfiles/TEST06/files/conf/adapters/system/mysql_5_0/mysql-connector-java-5.1.14-bin.jar,_depth_1/PDToolRelease/7.0.0/Release-7.0.0-2018-10-01/PDTool-7.0.0-2018-10-01.r1/PDTool7.0.0_installer/installer_source/PDTool/resources/carfiles/TEST04/files/conf/adapters/system/mysql_5_0/mysql-connector-java-5.1.14-bin.jar,_depth_1/PDToolRelease/7.0.0/Release-7.0.0-2018-10-01/PDTool-7.0.0-2018-10-01.r1/PDTool7.0.0_installer/installer_source/PDTool/resources/carfiles/TEST00_6.2https/files/conf/adapters/system/mysql_5_0/mysql-connector-java-5.1.14-bin.jar,_depth_1/PDToolRelease/7.0.0/Release-7.0.0-2019-01-30/PDTool-7.0.0-2019-01-30.r1/PDTool7.0.0_installer/installer_source/PDTool/resources/carfiles/TEST05/files/conf/adapters/system/mysql_5_0/mysql-connector-java-5.1.14-bin.jar,_depth_1/PDToolRelease/7.0.0/Release-7.0.0-2019-01-30/PDTool-7.0.0-2019-01-30.r1/PDTool7.0.0_installer/installer_source/PDTool/resources/carfiles/TEST00_6.2https/files/conf/adapters/system/mysql_5_0/mysql-connector-java-5.1.14-bin.jar,_depth_1/PDToolRelease/7.0.0/Release-7.0.0-2019-01-30/PDTool-7.0.0-2019-01-30.r1/PDTool7.0.0_installer/installer_source/PDTool/resources/carfiles/TEST01/files/conf/adapters/system/mysql_5_0/mysql-connector-java-5.1.14-bin.jar,_depth_1/PDToolRelease/8.0.0/Release-8.0.0-2019-01-30/PDTool-8.0.0-2019-01-30.r1/PDTool8.0.0_installer/installer_source/PDTool/resources/carfiles/TEST04/files/conf/adapte
rs/system/mysql_5_0/mysql-connector-java-5.1.14-bin.jar,_depth_1/PDToolRelease/7.0.0/Release-7.0.0-2018-06-12/PDTool-7.0.0-2018-06-12.r1/PDTool7.0.0_installer/installer_source/PDTool/resources/carfiles/TEST02/files/conf/adapters/system/mysql_5_0/mysql-connector-java-5.1.14-bin.jar,_depth_1/PDToolRelease/7.0.0/Release-7.0.0-2018-10-01/PDTool-7.0.0-2018-10-01.r1/PDTool7.0.0_installer/installer_source/PDTool/resources/carfiles/TEST01/files/conf/adapters/system/mysql_5_0/mysql-connector-java-5.1.14-bin.jar,_depth_1/PDToolRelease/8.0.0/Release-8.0.0-2019-01-30/PDTool-8.0.0-2019-01-30.r1/PDTool8.0.0_installer/installer_source/PDTool/resources/carfiles/TEST11/files/conf/adapters/system/mysql_5_0/mysql-connector-java-5.1.14-bin.jar,_depth_1/PDToolRelease/7.0.0/Release-7.0.0-2018-10-01/PDTool-7.0.0-2018-10-01.r1/PDTool7.0.0_installer/installer_source/PDTool/resources/carfiles/TEST10/files/conf/adapters/system/mysql_5_0/mysql-connector-java-5.1.14-bin.jar,_depth_1/PDToolRelease/7.0.0/Release-7.0.0-2018-10-01/PDTool-7.0.0-2018-10-01.r1/PDTool7.0.0_installer/installer_source/PDTool/resources/carfiles/TEST09/files/conf/adapters/system/mysql_5_0/mysql-connector-java-5.1.14-bin.jar,_depth_1/PDToolRelease/7.0.0/Release-7.0.0-2018-10-01/PDTool-7.0.0-2018-10-01.r1/PDTool7.0.0_installer/installer_source/PDTool/resources/carfiles/TEST11/files/conf/adapters/system/mysql_5_0/mysql-connector-java-5.1.14-bin.jar,_depth_1/PDToolRelease/7.0.0/Release-7.0.0-2018-06-12/PDTool-7.0.0-2018-06-12.r1/PDTool7.0.0_installer/installer_source/PDTool/resources/carfiles/TEST11/files/conf/adapters/system/mysql_5_0/mysql-connector-java-5.1.14-bin.jar,_depth_1/PDToolRelease/7.0.0/Release-7.0.0-2018-10-01/PDTool-7.0.0-2018-10-01.r1/PDTool7.0.0_installer/installer_source/PDTool/resources/carfiles/TEST03/files/conf/adapters/system/mysql_5_0/mysql-connector-java-5.1.14-bin.jar,_depth_1/PDToolRelease/7.0.0/Release-7.0.0-2019-01-30/PDTool-7.0.0-2019-01-30.r1/PDTool7.0.0_installer/installer_source/PDTool/resources/carfiles/TEST07/files/conf/adapters/system/mysql_5_0/mysql-connector-java-5.1.14-bin.jar</p>
<p>
Dependency Hierarchy:
- :x: **mysql-connector-java-5.1.14.jar** (Vulnerable Library)
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Vulnerability in the MySQL Connectors product of Oracle MySQL (component: Connector/J). Supported versions that are affected are 8.0.19 and prior and 5.1.48 and prior. Difficult to exploit vulnerability allows unauthenticated attacker with network access via multiple protocols to compromise MySQL Connectors. Successful attacks require human interaction from a person other than the attacker. Successful attacks of this vulnerability can result in unauthorized update, insert or delete access to some of MySQL Connectors accessible data as well as unauthorized read access to a subset of MySQL Connectors accessible data and unauthorized ability to cause a partial denial of service (partial DOS) of MySQL Connectors. CVSS 3.0 Base Score 5.0 (Confidentiality, Integrity and Availability impacts). CVSS Vector: (CVSS:3.0/AV:N/AC:H/PR:N/UI:R/S:U/C:L/I:L/A:L).
<p>Publish Date: 2020-04-15
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-2934>CVE-2020-2934</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.0</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: None
- User Interaction: Required
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: Low
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://www.oracle.com/security-alerts/cpuapr2020.html">https://www.oracle.com/security-alerts/cpuapr2020.html</a></p>
<p>Release Date: 2020-04-15</p>
<p>Fix Resolution: mysql:mysql-connector-java:5.1.49,8.0.20</p>
</p>
</details>
<p></p>
***
<!-- REMEDIATE-OPEN-PR-START -->
- [ ] Check this box to open an automated fix PR
<!-- REMEDIATE-OPEN-PR-END -->
<!-- <REMEDIATE>{"isOpenPROnVulnerability":false,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"Java","groupId":"mysql","packageName":"mysql-connector-java","packageVersion":"5.1.14","isTransitiveDependency":false,"dependencyTree":"mysql:mysql-connector-java:5.1.14","isMinimumFixVersionAvailable":true,"minimumFixVersion":"mysql:mysql-connector-java:5.1.49,8.0.20"}],"vulnerabilityIdentifier":"CVE-2020-2934","vulnerabilityDetails":"Vulnerability in the MySQL Connectors product of Oracle MySQL (component: Connector/J). Supported versions that are affected are 8.0.19 and prior and 5.1.48 and prior. Difficult to exploit vulnerability allows unauthenticated attacker with network access via multiple protocols to compromise MySQL Connectors. Successful attacks require human interaction from a person other than the attacker. Successful attacks of this vulnerability can result in unauthorized update, insert or delete access to some of MySQL Connectors accessible data as well as unauthorized read access to a subset of MySQL Connectors accessible data and unauthorized ability to cause a partial denial of service (partial DOS) of MySQL Connectors. CVSS 3.0 Base Score 5.0 (Confidentiality, Integrity and Availability impacts). CVSS Vector: (CVSS:3.0/AV:N/AC:H/PR:N/UI:R/S:U/C:L/I:L/A:L).","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-2934","cvss3Severity":"medium","cvss3Score":"5.0","cvss3Metrics":{"A":"Low","AC":"High","PR":"None","S":"Unchanged","C":"Low","UI":"Required","AV":"Network","I":"Low"},"extraData":{}}</REMEDIATE> -->
|
True
|
CVE-2020-2934 (Medium) detected in mysql-connector-java-5.1.14.jar - ## CVE-2020-2934 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>mysql-connector-java-5.1.14.jar</b></p></summary>
<p>MySQL JDBC Type 4 driver</p>
<p>Library home page: <a href="http://dev.mysql.com/doc/connector-j/en/">http://dev.mysql.com/doc/connector-j/en/</a></p>
<p>Path to vulnerable library: _depth_1/PDToolRelease/7.0.0/Release-7.0.0-2019-01-30/PDTool-7.0.0-2019-01-30.r1/PDTool7.0.0_installer/installer_source/PDTool/resources/carfiles/TEST10/files/conf/adapters/system/mysql_5_0/mysql-connector-java-5.1.14-bin.jar,_depth_1/PDToolRelease/8.0.0/Release-8.0.0-2019-01-30/PDTool-8.0.0-2019-01-30.r1/PDTool8.0.0_installer/installer_source/PDTool/resources/carfiles/TEST08/files/conf/adapters/system/mysql_5_0/mysql-connector-java-5.1.14-bin.jar,_depth_1/PDToolRelease/7.0.0/Release-7.0.0-2018-10-01/PDTool-7.0.0-2018-10-01.r1/PDTool7.0.0_installer/installer_source/PDTool/resources/carfiles/TEST02/files/conf/adapters/system/mysql_5_0/mysql-connector-java-5.1.14-bin.jar,_depth_1/PDToolRelease/7.0.0/Release-7.0.0-2018-06-12/PDTool-7.0.0-2018-06-12.r1/PDTool7.0.0_installer/installer_source/PDTool/resources/carfiles/TEST06/files/conf/adapters/system/mysql_5_0/mysql-connector-java-5.1.14-bin.jar,_depth_1/PDToolRelease/8.0.0/Release-8.0.0-2019-01-30/PDTool-8.0.0-2019-01-30.r1/PDTool8.0.0_installer/installer_source/PDTool/resources/carfiles/TEST00_6.2http/files/conf/adapters/system/mysql_5_0/mysql-connector-java-5.1.14-bin.jar,_depth_1/PDToolRelease/7.0.0/Release-7.0.0-2019-01-30/PDTool-7.0.0-2019-01-30.r1/PDTool7.0.0_installer/installer_source/PDTool/resources/carfiles/TEST09/files/conf/adapters/system/mysql_5_0/mysql-connector-java-5.1.14-bin.jar,_depth_1/PDToolRelease/8.0.0/Release-8.0.0-2019-01-30/PDTool-8.0.0-2019-01-30.r1/PDTool8.0.0_installer/installer_source/PDTool/resources/carfiles/TEST03/files/conf/adapters/system/mysql_5_0/mysql-connector-java-5.1.14-bin.jar,_depth_1/PDToolRelease/7.0.0/Release-7.0.0-2018-06-12/PDTool-7.0.0-2018-06-12.r1/PDTool7.0.0_installer/installer_source/PDTool/resources/carfiles/TEST01/files/conf/adapters/system/mysql_5_0/mysql-connector-java-5.1.14-bin.jar,_depth_1/PDToolRelease/8.0.0/Release-8.0.0-2019-01-30/PDTool-8.0.0-2019-01-30.r1/PDTool8.0.0_installer/installer_source/PDTool/resources/carfiles/TEST10/files/conf/adapters/system/mysql_5_0/mysql-connector-java-5.1.14-bin.jar,_depth_1/PDToolRelease/7.0.0/Release-7.0.0-2018-06-12/PDTool-7.0.0-2018-06-12.r1/PDTool7.0.0_installer/installer_source/PDTool/resources/carfiles/TEST12/files/conf/adapters/system/mysql_5_0/mysql-connector-java-5.1.14-bin.jar,_depth_1/PDToolRelease/8.0.0/Release-8.0.0-2019-01-30/PDTool-8.0.0-2019-01-30.r1/PDTool8.0.0_installer/installer_source/PDTool/resources/carfiles/TEST09/files/conf/adapters/system/mysql_5_0/mysql-connector-java-5.1.14-bin.jar,_depth_1/PDToolRelease/7.0.0/Release-7.0.0-2019-01-30/PDTool-7.0.0-2019-01-30.r1/PDTool7.0.0_installer/installer_source/PDTool/resources/carfiles/TEST08/files/conf/adapters/system/mysql_5_0/mysql-connector-java-5.1.14-bin.jar,_depth_1/PDToolRelease/7.0.0/Release-7.0.0-2018-06-12/PDTool-7.0.0-2018-06-12.r1/PDTool7.0.0_installer/installer_source/PDTool/resources/carfiles/TEST07/files/conf/adapters/system/mysql_5_0/mysql-connector-java-5.1.14-bin.jar,_depth_1/PDToolRelease/7.0.0/Release-7.0.0-2018-10-01/PDTool-7.0.0-2018-10-01.r1/PDTool7.0.0_installer/installer_source/PDTool/resources/carfiles/TEST08/files/conf/adapters/system/mysql_5_0/mysql-connector-java-5.1.14-bin.jar,_depth_1/PDToolRelease/8.0.0/Release-8.0.0-2019-01-30/PDTool-8.0.0-2019-01-30.r1/PDTool8.0.0_installer/installer_source/PDTool/resources/carfiles/TEST02/files/conf/adapters/system/mysql_5_0/mysql-connector-java-5.1.14-bin.jar,_depth_1/PDToolRelease/8.0.0/Release-8.0.0-2019-01-30/PDTool-8.0.0-2019-01-30.r1/PDTool8.0.0_installer/installer_source/PDTool
/resources/carfiles/TEST07/files/conf/adapters/system/mysql_5_0/mysql-connector-java-5.1.14-bin.jar,_depth_1/PDToolRelease/7.0.0/Release-7.0.0-2018-10-01/PDTool-7.0.0-2018-10-01.r1/PDTool7.0.0_installer/installer_source/PDTool/resources/carfiles/TEST00_6.2http/files/conf/adapters/system/mysql_5_0/mysql-connector-java-5.1.14-bin.jar,_depth_1/PDToolRelease/7.0.0/Release-7.0.0-2018-06-12/PDTool-7.0.0-2018-06-12.r1/PDTool7.0.0_installer/installer_source/PDTool/resources/carfiles/TEST05/files/conf/adapters/system/mysql_5_0/mysql-connector-java-5.1.14-bin.jar,_depth_1/PDToolRelease/7.0.0/Release-7.0.0-2019-01-30/PDTool-7.0.0-2019-01-30.r1/PDTool7.0.0_installer/installer_source/PDTool/resources/carfiles/TEST11/files/conf/adapters/system/mysql_5_0/mysql-connector-java-5.1.14-bin.jar,_depth_1/PDToolRelease/7.0.0/Release-7.0.0-2019-01-30/PDTool-7.0.0-2019-01-30.r1/PDTool7.0.0_installer/installer_source/PDTool/resources/carfiles/TEST02/files/conf/adapters/system/mysql_5_0/mysql-connector-java-5.1.14-bin.jar,_depth_1/PDToolRelease/8.0.0/Release-8.0.0-2019-01-30/PDTool-8.0.0-2019-01-30.r1/PDTool8.0.0_installer/installer_source/PDTool/resources/carfiles/TEST01/files/conf/adapters/system/mysql_5_0/mysql-connector-java-5.1.14-bin.jar,_depth_1/PDToolRelease/7.0.0/Release-7.0.0-2019-01-30/PDTool-7.0.0-2019-01-30.r1/PDTool7.0.0_installer/installer_source/PDTool/resources/carfiles/TEST04/files/conf/adapters/system/mysql_5_0/mysql-connector-java-5.1.14-bin.jar,_depth_1/PDToolRelease/7.0.0/Release-7.0.0-2019-01-30/PDTool-7.0.0-2019-01-30.r1/PDTool7.0.0_installer/installer_source/PDTool/resources/carfiles/TEST12/files/conf/adapters/system/mysql_5_0/mysql-connector-java-5.1.14-bin.jar,_depth_1/PDToolRelease/7.0.0/Release-7.0.0-2018-10-01/PDTool-7.0.0-2018-10-01.r1/PDTool7.0.0_installer/installer_source/PDTool/resources/carfiles/TEST07/files/conf/adapters/system/mysql_5_0/mysql-connector-java-5.1.14-bin.jar,_depth_1/PDToolRelease/7.0.0/Release-7.0.0-2018-10-01/PDTool-7.0.0-2018-10-01.r1/PDTool7.0.0_installer/installer_source/PDTool/resources/carfiles/TEST06/files/conf/adapters/system/mysql_5_0/mysql-connector-java-5.1.14-bin.jar,_depth_1/PDToolRelease/7.0.0/Release-7.0.0-2019-01-30/PDTool-7.0.0-2019-01-30.r1/PDTool7.0.0_installer/installer_source/PDTool/resources/carfiles/TEST03/files/conf/adapters/system/mysql_5_0/mysql-connector-java-5.1.14-bin.jar,_depth_1/PDToolRelease/7.0.0/Release-7.0.0-2019-01-30/PDTool-7.0.0-2019-01-30.r1/PDTool7.0.0_installer/installer_source/PDTool/resources/carfiles/TEST00_6.2http/files/conf/adapters/system/mysql_5_0/mysql-connector-java-5.1.14-bin.jar,_depth_1/PDToolRelease/7.0.0/Release-7.0.0-2019-01-30/PDTool-7.0.0-2019-01-30.r1/PDTool7.0.0_installer/installer_source/PDTool/resources/carfiles/TEST06/files/conf/adapters/system/mysql_5_0/mysql-connector-java-5.1.14-bin.jar,_depth_1/PDToolRelease/7.0.0/Release-7.0.0-2018-10-01/PDTool-7.0.0-2018-10-01.r1/PDTool7.0.0_installer/installer_source/PDTool/resources/carfiles/TEST05/files/conf/adapters/system/mysql_5_0/mysql-connector-java-5.1.14-bin.jar,_depth_1/PDToolRelease/8.0.0/Release-8.0.0-2019-01-30/PDTool-8.0.0-2019-01-30.r1/PDTool8.0.0_installer/installer_source/PDTool/resources/carfiles/TEST12/files/conf/adapters/system/mysql_5_0/mysql-connector-java-5.1.14-bin.jar,_depth_1/PDToolRelease/7.0.0/Release-7.0.0-2018-06-12/PDTool-7.0.0-2018-06-12.r1/PDTool7.0.0_installer/installer_source/PDTool/resources/carfiles/TEST09/files/conf/adapters/system/mysql_5_0/mysql-connector-java-5.1.14-bin.jar,_depth_1/PDToolRelease/7.0.0/Release-7.0.0-2018-0
6-12/PDTool-7.0.0-2018-06-12.r1/PDTool7.0.0_installer/installer_source/PDTool/resources/carfiles/TEST10/files/conf/adapters/system/mysql_5_0/mysql-connector-java-5.1.14-bin.jar,_depth_1/PDToolRelease/7.0.0/Release-7.0.0-2018-06-12/PDTool-7.0.0-2018-06-12.r1/PDTool7.0.0_installer/installer_source/PDTool/resources/carfiles/TEST00_6.2http/files/conf/adapters/system/mysql_5_0/mysql-connector-java-5.1.14-bin.jar,_depth_1/PDToolRelease/8.0.0/Release-8.0.0-2019-01-30/PDTool-8.0.0-2019-01-30.r1/PDTool8.0.0_installer/installer_source/PDTool/resources/carfiles/TEST05/files/conf/adapters/system/mysql_5_0/mysql-connector-java-5.1.14-bin.jar,_depth_1/PDToolRelease/7.0.0/Release-7.0.0-2018-06-12/PDTool-7.0.0-2018-06-12.r1/PDTool7.0.0_installer/installer_source/PDTool/resources/carfiles/TEST03/files/conf/adapters/system/mysql_5_0/mysql-connector-java-5.1.14-bin.jar,_depth_1/PDToolRelease/8.0.0/Release-8.0.0-2019-01-30/PDTool-8.0.0-2019-01-30.r1/PDTool8.0.0_installer/installer_source/PDTool/resources/carfiles/TEST00_6.2https/files/conf/adapters/system/mysql_5_0/mysql-connector-java-5.1.14-bin.jar,_depth_1/PDToolRelease/7.0.0/Release-7.0.0-2018-10-01/PDTool-7.0.0-2018-10-01.r1/PDTool7.0.0_installer/installer_source/PDTool/resources/carfiles/TEST12/files/conf/adapters/system/mysql_5_0/mysql-connector-java-5.1.14-bin.jar,_depth_1/PDToolRelease/7.0.0/Release-7.0.0-2018-06-12/PDTool-7.0.0-2018-06-12.r1/PDTool7.0.0_installer/installer_source/PDTool/resources/carfiles/TEST04/files/conf/adapters/system/mysql_5_0/mysql-connector-java-5.1.14-bin.jar,_depth_1/PDToolRelease/7.0.0/Release-7.0.0-2018-06-12/PDTool-7.0.0-2018-06-12.r1/PDTool7.0.0_installer/installer_source/PDTool/resources/carfiles/TEST08/files/conf/adapters/system/mysql_5_0/mysql-connector-java-5.1.14-bin.jar,_depth_1/PDToolRelease/7.0.0/Release-7.0.0-2018-06-12/PDTool-7.0.0-2018-06-12.r1/PDTool7.0.0_installer/installer_source/PDTool/resources/carfiles/TEST00_6.2https/files/conf/adapters/system/mysql_5_0/mysql-connector-java-5.1.14-bin.jar,_depth_1/PDToolRelease/8.0.0/Release-8.0.0-2019-01-30/PDTool-8.0.0-2019-01-30.r1/PDTool8.0.0_installer/installer_source/PDTool/resources/carfiles/TEST06/files/conf/adapters/system/mysql_5_0/mysql-connector-java-5.1.14-bin.jar,_depth_1/PDToolRelease/7.0.0/Release-7.0.0-2018-10-01/PDTool-7.0.0-2018-10-01.r1/PDTool7.0.0_installer/installer_source/PDTool/resources/carfiles/TEST04/files/conf/adapters/system/mysql_5_0/mysql-connector-java-5.1.14-bin.jar,_depth_1/PDToolRelease/7.0.0/Release-7.0.0-2018-10-01/PDTool-7.0.0-2018-10-01.r1/PDTool7.0.0_installer/installer_source/PDTool/resources/carfiles/TEST00_6.2https/files/conf/adapters/system/mysql_5_0/mysql-connector-java-5.1.14-bin.jar,_depth_1/PDToolRelease/7.0.0/Release-7.0.0-2019-01-30/PDTool-7.0.0-2019-01-30.r1/PDTool7.0.0_installer/installer_source/PDTool/resources/carfiles/TEST05/files/conf/adapters/system/mysql_5_0/mysql-connector-java-5.1.14-bin.jar,_depth_1/PDToolRelease/7.0.0/Release-7.0.0-2019-01-30/PDTool-7.0.0-2019-01-30.r1/PDTool7.0.0_installer/installer_source/PDTool/resources/carfiles/TEST00_6.2https/files/conf/adapters/system/mysql_5_0/mysql-connector-java-5.1.14-bin.jar,_depth_1/PDToolRelease/7.0.0/Release-7.0.0-2019-01-30/PDTool-7.0.0-2019-01-30.r1/PDTool7.0.0_installer/installer_source/PDTool/resources/carfiles/TEST01/files/conf/adapters/system/mysql_5_0/mysql-connector-java-5.1.14-bin.jar,_depth_1/PDToolRelease/8.0.0/Release-8.0.0-2019-01-30/PDTool-8.0.0-2019-01-30.r1/PDTool8.0.0_installer/installer_source/PDTool/resources/carfiles/TEST04/files/conf/adapte
rs/system/mysql_5_0/mysql-connector-java-5.1.14-bin.jar,_depth_1/PDToolRelease/7.0.0/Release-7.0.0-2018-06-12/PDTool-7.0.0-2018-06-12.r1/PDTool7.0.0_installer/installer_source/PDTool/resources/carfiles/TEST02/files/conf/adapters/system/mysql_5_0/mysql-connector-java-5.1.14-bin.jar,_depth_1/PDToolRelease/7.0.0/Release-7.0.0-2018-10-01/PDTool-7.0.0-2018-10-01.r1/PDTool7.0.0_installer/installer_source/PDTool/resources/carfiles/TEST01/files/conf/adapters/system/mysql_5_0/mysql-connector-java-5.1.14-bin.jar,_depth_1/PDToolRelease/8.0.0/Release-8.0.0-2019-01-30/PDTool-8.0.0-2019-01-30.r1/PDTool8.0.0_installer/installer_source/PDTool/resources/carfiles/TEST11/files/conf/adapters/system/mysql_5_0/mysql-connector-java-5.1.14-bin.jar,_depth_1/PDToolRelease/7.0.0/Release-7.0.0-2018-10-01/PDTool-7.0.0-2018-10-01.r1/PDTool7.0.0_installer/installer_source/PDTool/resources/carfiles/TEST10/files/conf/adapters/system/mysql_5_0/mysql-connector-java-5.1.14-bin.jar,_depth_1/PDToolRelease/7.0.0/Release-7.0.0-2018-10-01/PDTool-7.0.0-2018-10-01.r1/PDTool7.0.0_installer/installer_source/PDTool/resources/carfiles/TEST09/files/conf/adapters/system/mysql_5_0/mysql-connector-java-5.1.14-bin.jar,_depth_1/PDToolRelease/7.0.0/Release-7.0.0-2018-10-01/PDTool-7.0.0-2018-10-01.r1/PDTool7.0.0_installer/installer_source/PDTool/resources/carfiles/TEST11/files/conf/adapters/system/mysql_5_0/mysql-connector-java-5.1.14-bin.jar,_depth_1/PDToolRelease/7.0.0/Release-7.0.0-2018-06-12/PDTool-7.0.0-2018-06-12.r1/PDTool7.0.0_installer/installer_source/PDTool/resources/carfiles/TEST11/files/conf/adapters/system/mysql_5_0/mysql-connector-java-5.1.14-bin.jar,_depth_1/PDToolRelease/7.0.0/Release-7.0.0-2018-10-01/PDTool-7.0.0-2018-10-01.r1/PDTool7.0.0_installer/installer_source/PDTool/resources/carfiles/TEST03/files/conf/adapters/system/mysql_5_0/mysql-connector-java-5.1.14-bin.jar,_depth_1/PDToolRelease/7.0.0/Release-7.0.0-2019-01-30/PDTool-7.0.0-2019-01-30.r1/PDTool7.0.0_installer/installer_source/PDTool/resources/carfiles/TEST07/files/conf/adapters/system/mysql_5_0/mysql-connector-java-5.1.14-bin.jar</p>
<p>
Dependency Hierarchy:
- :x: **mysql-connector-java-5.1.14.jar** (Vulnerable Library)
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Vulnerability in the MySQL Connectors product of Oracle MySQL (component: Connector/J). Supported versions that are affected are 8.0.19 and prior and 5.1.48 and prior. Difficult to exploit vulnerability allows unauthenticated attacker with network access via multiple protocols to compromise MySQL Connectors. Successful attacks require human interaction from a person other than the attacker. Successful attacks of this vulnerability can result in unauthorized update, insert or delete access to some of MySQL Connectors accessible data as well as unauthorized read access to a subset of MySQL Connectors accessible data and unauthorized ability to cause a partial denial of service (partial DOS) of MySQL Connectors. CVSS 3.0 Base Score 5.0 (Confidentiality, Integrity and Availability impacts). CVSS Vector: (CVSS:3.0/AV:N/AC:H/PR:N/UI:R/S:U/C:L/I:L/A:L).
<p>Publish Date: 2020-04-15
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-2934>CVE-2020-2934</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.0</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: None
- User Interaction: Required
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: Low
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://www.oracle.com/security-alerts/cpuapr2020.html">https://www.oracle.com/security-alerts/cpuapr2020.html</a></p>
<p>Release Date: 2020-04-15</p>
<p>Fix Resolution: mysql:mysql-connector-java:5.1.49,8.0.20</p>
</p>
</details>
<p></p>
***
<!-- REMEDIATE-OPEN-PR-START -->
- [ ] Check this box to open an automated fix PR
<!-- REMEDIATE-OPEN-PR-END -->
<!-- <REMEDIATE>{"isOpenPROnVulnerability":false,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"Java","groupId":"mysql","packageName":"mysql-connector-java","packageVersion":"5.1.14","isTransitiveDependency":false,"dependencyTree":"mysql:mysql-connector-java:5.1.14","isMinimumFixVersionAvailable":true,"minimumFixVersion":"mysql:mysql-connector-java:5.1.49,8.0.20"}],"vulnerabilityIdentifier":"CVE-2020-2934","vulnerabilityDetails":"Vulnerability in the MySQL Connectors product of Oracle MySQL (component: Connector/J). Supported versions that are affected are 8.0.19 and prior and 5.1.48 and prior. Difficult to exploit vulnerability allows unauthenticated attacker with network access via multiple protocols to compromise MySQL Connectors. Successful attacks require human interaction from a person other than the attacker. Successful attacks of this vulnerability can result in unauthorized update, insert or delete access to some of MySQL Connectors accessible data as well as unauthorized read access to a subset of MySQL Connectors accessible data and unauthorized ability to cause a partial denial of service (partial DOS) of MySQL Connectors. CVSS 3.0 Base Score 5.0 (Confidentiality, Integrity and Availability impacts). CVSS Vector: (CVSS:3.0/AV:N/AC:H/PR:N/UI:R/S:U/C:L/I:L/A:L).","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-2934","cvss3Severity":"medium","cvss3Score":"5.0","cvss3Metrics":{"A":"Low","AC":"High","PR":"None","S":"Unchanged","C":"Low","UI":"Required","AV":"Network","I":"Low"},"extraData":{}}</REMEDIATE> -->
|
non_process
|
cve medium detected in mysql connector java jar cve medium severity vulnerability vulnerable library mysql connector java jar mysql jdbc type driver library home page a href path to vulnerable library depth pdtoolrelease release pdtool installer installer source pdtool resources carfiles files conf adapters system mysql mysql connector java bin jar dependency hierarchy x mysql connector java jar vulnerable library vulnerability details vulnerability in the mysql connectors product of oracle mysql component connector j supported versions that are affected are and prior and and prior difficult to exploit vulnerability allows unauthenticated attacker with network access via multiple protocols to compromise mysql connectors successful attacks require human interaction from a person other than the attacker successful attacks of this vulnerability can result in unauthorized update insert or delete access to some of mysql connectors accessible data as well as unauthorized read access to a subset of mysql connectors accessible data and unauthorized ability to cause a partial denial of service partial dos of mysql connectors cvss base score confidentiality integrity and availability impacts cvss vector cvss av n ac h pr n ui r s u c l i l a l publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity high privileges required none user interaction required scope unchanged impact metrics confidentiality impact low integrity impact low availability impact low for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution mysql mysql connector java check this box to open an automated fix pr isopenpronvulnerability false ispackagebased true isdefaultbranch true packages vulnerabilityidentifier cve vulnerabilitydetails vulnerability in the mysql connectors product of oracle mysql component connector j supported versions that are affected are and prior and and prior difficult to exploit vulnerability allows unauthenticated attacker with network access via multiple protocols to compromise mysql connectors successful attacks require human interaction from a person other than the attacker successful attacks of this vulnerability can result in unauthorized update insert or delete access to some of mysql connectors accessible data as well as unauthorized read access to a subset of mysql connectors accessible data and unauthorized ability to cause a partial denial of service partial dos of mysql connectors cvss base score confidentiality integrity and availability impacts cvss vector cvss av n ac h pr n ui r s u c l i l a l vulnerabilityurl
| 0
|
4,116
| 7,059,049,415
|
IssuesEvent
|
2018-01-04 23:07:26
|
Southclaws/pawn
|
https://api.github.com/repos/Southclaws/pawn
|
closed
|
#include directives with long names are ignored
|
state: stale type: pre-processor
|
When I try:
`#include "../include/functions/func_movePlayerGently.pwn"`
it works, but when I do:
`#include "../include/functions/saarp_func_movePlayerGently.pwn"`
It doesn't.
Thank you
|
1.0
|
#include directives with long names are ignored - When I try:
`#include "../include/functions/func_movePlayerGently.pwn"`
it works, but when I do:
`#include "../include/functions/saarp_func_movePlayerGently.pwn"`
It doesn't.
Thank you
|
process
|
include with long name are ignored when i try include include functions func moveplayergently pwn it works but when i do include include functions saarp func moveplayergently pwn it doesn t thank you
| 1
|
11,380
| 14,222,433,110
|
IssuesEvent
|
2020-11-17 16:52:50
|
ItsJonQ/g2
|
https://api.github.com/repos/ItsJonQ/g2
|
opened
|
Gutenberg Integration + Typography Tools (Roadmap)
|
process
|

I've been waiting for this moment for a long time. After several months of researching, collaborating, building, testing, and iterating, the G2 Components project is nearing a stage where we can confidently map out the initial integration with Gutenberg!
Working with [Global Styles](https://github.com/WordPress/gutenberg/issues/19611) and Design Tools had helped me understand many of the complexities and challenges that come from systemizing a UI layer that could accommodate the expansive and open nature of WordPress and the Gutenberg editor. As such, I believe in the importance of introducing G2 Components to Gutenberg in a way that demonstrates user-facing value. I wanted to provide improvements to the UI experience that end-users could see and feel, rather than a revamped architecture that was superficially invisible.
For the past weeks, I've been collaborating with various designers and developers to refine the project. More importantly, I wanted to make sure that it was going in the right direction. (A big thanks to all of these folks for being supportive and patient with my questions). Based on feedback, we felt like a potential candidate to deliver G2 Components with would be to improve the [Typography Tools](https://g2-components.xyz/iframe.html?id=designtools-presentation-typographypanel--default) for Full Site Editing (FSE) and Global Styles.
Today, I've created a rough (publicly available) roadmap using Miro. I've found this medium to be easier to visualize a large scale project (at a high level). For finer-grain task tracking, we could use familiar workflows like Github projects. With that said, I'm planning on using this Miro board as the primary task coordination/planning format. As such, items on this board may change over time (e.g. moved around or relabeled).

For this Integration/Typography Tools project, I've split up the work into 4 key phases:
* Prep
* Integration
* Build
* Post Build
### Prep

The "Prep" phase contains all of the tasks that prepare G2 for Gutenberg. These tasks involve either cleanup/refactors or resolving critical features (like automatic RTL CSS rendering).
Between the 3 phases, "Prep" is probably the easiest and the shortest. The most crucial part would be to ensure that G2 works correctly with Gutenberg through some initial integration testing.
#### Initial Integration Testing
All of the G2 Components packages are available publicly on npm. They are updated several times a week, with the latest features and refinements from the project.
To test within Gutenberg, we can add the latest G2 Component packages as dependencies within the Gutenberg repo. Once they're in, we can attempt to use the components in various places in the editor.
All we're doing is making sure the Components and its systems (e.g. `context`, `styles`, `substate`, etc...) are working as expected within the WordPress/Gutenberg environment.
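As a rough illustration, a smoke test could be as small as rendering a couple of components somewhere in the editor chrome. `Card`, `CardBody`, and `Text` are existing `@wp-g2/components` exports; where this gets mounted is left open:
```tsx
import { Card, CardBody, Text } from '@wp-g2/components';

// Temporary component, rendered anywhere in the editor, to verify that
// the G2 styles/context systems boot correctly inside WordPress.
export function G2SmokeTest() {
  return (
    <Card>
      <CardBody>
        <Text>G2 renders inside Gutenberg</Text>
      </CardBody>
    </Card>
  );
}
```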
No pull request will be needed, as we won't be submitting any code into the repository (not yet).
#### Prep Completion
Once all the tasks are done, and the initial Gutenberg integration tests feel solid, we can move onto the "Integration" phase.
### Integration

The "Integration" phase involves migrating all of the packages from the G2 Components repo into the Gutenberg repo. The initial packages would be `utils`, `substate`, `create-styles`, `styles`, and `context` (preferably in that order). These systems should not impact the existing Gutenberg code or UI.
These system packages are smaller and self-contained. The migration will help identify and address any issues early (e.g. issues regarding dependencies, build, or types), which will smoothen the move for the components code.
#### Package Locations
The G2 system packages would live as dedicated packages within Gutenberg, with a slight adjustment to their names.
* `@wp-g2/create-styles` -> `@wordpress/ui-create-styles`
* `@wp-g2/context` -> `@wordpress/ui-context`
* `@wp-g2/styles` -> `@wordpress/ui-styles`
* `@wp-g2/substate` -> `@wordpress/ui-substate`
* `@wp-g2/utils` -> `@wordpress/ui-utils`
Note: There are other system packages in G2 Components right now. The plan is to drop those in favour of using their underlying dependencies directly:
* `@wp-g2/a11y` -> `reakit`
* `@wp-g2/animations` -> `framer-motion`
* `@wp-g2/gestures` -> `react-use-gesture`
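In day-to-day code, these renames should mostly surface as one-line import changes. A hypothetical before/after (the exact export names are just for illustration):
```ts
// Before (packages published from the G2 repo):
//   import { useSubState } from '@wp-g2/substate';
//   import { ui } from '@wp-g2/styles';

// After (the same systems, owned by the Gutenberg monorepo):
import { useSubState } from '@wordpress/ui-substate';
import { ui } from '@wordpress/ui-styles';
```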
The only G2 package that differs from the above-mentioned migration would be `@wp-g2/components`.
#### Component Integration
The idea is to move all of the code from `@wp-g2/components` directly into `@wordpress/components`.
The G2 code will mostly be separated into clearly marked directories (most likely `__next/`).
Below is a rough example of what it may look like:
```
src/
├── button/
│ ├── __next/
│ └── index.js
├── card/
│ ├── __next/
│ ├── (other files)
│ └── index.js
├── hstack/
│ └── index.js
└── etc...
```
The G2 code is locally scoped to a singular component directory while encapsulating and signifying that it's the "next"/future version of the component.
In the example above, I've included [HStack](https://github.com/ItsJonQ/g2/tree/master/packages/components/src/HStack) (one of the new layout-based primitive components). It currently doesn't exist within `@wordpress/components`. For cases like this, we would use the G2 code directly rather than containing it within a `__next` directory. Any component that is included in this way should be exported with the `__experimental` prefix, just like any other new UI introduced to the Gutenberg component ecosystem.
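As a sketch, the package entry point could then surface both versions side by side (the file layout and export names here are illustrative, not final):
```ts
// @wordpress/components entry point (sketch)
export { default as Button } from './button'; // existing component, untouched
export { default as __experimentalHStack } from './hstack'; // new G2 primitive
```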
The G2 component code can be split into 3 groups:
* UI that is brand new (don't exist within `@wordpress/components`)
* UI that can directly replace existing components (current `__experimental` components)
* UI that is to live alongside existing components, but in a "**dormant state**", activated with the ["Context/Adapter" strategy](https://g2components.wordpress.com/2020/11/02/the-path-to-integration/)
#### Context/Adapter Strategy
To ensure integration is done incrementally in a controlled manner, we'll be attempting a strategy that involves G2's Context system and "adapters" for the existing `@wordpress/components` code.
##### The Adapter
The "adapter" is a tiny layer that can smoothly translate Component props (API) from the current WordPress components to the new G2 components. This ensures that the systems (core and 3rd party, especially 3rd party) can continue to operate with zero modification required in their UI codebases.
##### The Context System
The "Context System" works like a "network". It's something that allows components to communicate with each other as well as special "areas" created by the system. By "connecting" our newly adapted components to this network, we can send a signal to specific areas to render the new G2 UI - just like flipping a switch!
#### First Batch
The first batch of components to hit `@wordpress/components` would be the ones required to build the new Typography Tools feature. This batch includes the lowest level and most fundamental components for the G2 UI layer (such as `View` and `Text`).
Splitting the migration of the components into "batches" will allow us to deliver the new Typography Tools faster.
After the necessary code is moved, and the Typography Tools is built, we can continue with the second "batch" to move the remaining G2 component code.
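To give a feel for these primitives, here is a tiny composition using real `@wp-g2/components` exports (the control itself is made up):
```tsx
import { HStack, Text, View } from '@wp-g2/components';

// Layout and text primitives like these are what the higher-level
// typography controls get assembled from.
export const FontSizeRow = () => (
  <HStack>
    <View>
      <Text>Font size</Text>
    </View>
    <Text>16px</Text>
  </HStack>
);
```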
#### Integration Completion
At this stage, Gutenberg should **not** look any different. This is because most of the adapted components are still "dormant". They haven't been signalled to switch to their newer G2 UIs yet. However, all of the foundations, plumbing, and materials have been set for us to construct the new Typography Tools in the "build" phase.
### Build

The "build" phase involves constructing the new Typography Tools experience for Global Styles / FSE.
One task we have to accomplish is to streamline the mechanics of how the Block Editor hooks bind and render controls. During the design and development process of the new Typography experience, we established a new unified reset/remove interaction that we feel confident about. As such, we'll need to create a systematic way for these interactions to be handled to benefit both the Typography Tools and future Design Tools.
Next, we'll designate this new Typography Tools area as "safe for G2 UI" using the Context System.
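A sketch of what that designation could look like, using the provider from `@wp-g2/context` (the shape of the `value` here is an assumption for illustration):
```tsx
import type { ReactNode } from 'react';
import { ContextSystemProvider } from '@wp-g2/context';

// Everything rendered inside this provider is "safe for G2 UI":
// adapted components in this subtree get the signal to render
// their next versions.
export function TypographyToolsArea({ children }: { children: ReactNode }) {
  return (
    <ContextSystemProvider value={{ FontSizeControl: { version: 'next' } }}>
      {children}
    </ContextSystemProvider>
  );
}
```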
With those pieces in place, we can start construction on the new Typography tools. (Finally... Finally! We can build something, haha!).
#### Build Completion
Once the new Typography experience is created, and we're confident it's working correctly with Global Styles and the post editor (with zero regressions caused by G2), we can move onto the final phase, "post-build".
### Post Build

The "post build" phase involves us moving the second (and final) patch of components over from G2 to Gutenberg.
That's it!
### Next
With the 4 phases completed, the G2 components project would be fully integrated with `@wordpress/components` and we'll have a new and improved Typography Tools experience.
This is only the beginning! Integration and Typography tools are the first of many steps in systemizing and improving the UI experience for Gutenberg and WordPress. There's still a lot of work ahead of us. From improving the documentation experience to providing a fully-featured Figma Design System experience.
### Contributing
I'm 100% open to critical feedback on this outline. If you have any ideas of how we can improve things or spot any gaps that I may be missing, please let me know!
I'm going to start by creating tasks, projects, and milestones for the 4 phases in the respective repositories (first in the current G2 repo followed by Gutenberg). From there, I will be (very) actively coordinating, developing, and reviewing updates.
I recognize that the G2 Component project is inherently complex. Its bespoke and innovative systems are sculpted by the unique requirements of Gutenberg and WordPress. As such, please feel free to reach out to me anytime if you have questions on how things work and how things fit together. For the past weeks, I've been streaming (almost) [daily on Twitch](https://www.twitch.tv/itsjonq). I'm happy to answer questions during the live stream as well!
Thank you all for your time!
|
1.0
|
Gutenberg Integration + Typography Tools (Roadmap) - 
I've been waiting for this moment for a long time. After several months of researching, collaborating, building, testing, and iterating, the G2 Components project is nearing a stage where we can confidently map out the initial integration with Gutenberg!
Working with [Global Styles](https://github.com/WordPress/gutenberg/issues/19611) and Design Tools had helped me understand many of the complexities and challenges that come from systemizing a UI layer that could accommodate the expansive and open nature of WordPress and the Gutenberg editor. As such, I believe in the importance of introducing G2 Components to Gutenberg in a way that demonstrates user-facing value. I wanted to provide improvements to the UI experience that end-users could see and feel, rather than a revamped architecture that was superficially invisible.
For the past weeks, I've been collaborating with various designers and developers to refine the project. More importantly, I wanted to make sure that it was going in the right direction. (A big thanks to all of these folks for being supportive and patient with my questions). Based on feedback, we felt like a potential candidate to deliver G2 Components with would be to improve the [Typography Tools](https://g2-components.xyz/iframe.html?id=designtools-presentation-typographypanel--default) for Full Site Editing (FSE) and Global Styles.
Today, I've created a rough (publicly available) roadmap using Miro. I've found this medium to be easier to visualize a large scale project (at a high level). For finer-grain task tracking, we could use familiar workflows like Github projects. With that said, I'm planning on using this Miro board as the primary task coordination/planning format. As such, items on this board may change over time (e.g. moved around or relabeled).

For this Integration/Typography Tools project, I've split up the work into 4 key phases:
* Prep
* Integration
* Build
* Post Build
### Prep

The "Prep" phase contains all of the tasks that prepare G2 for Gutenberg. These tasks involve either cleanup/refactors or resolving critical features (like automatic RTL CSS rendering).
Between the 3 phases, "Prep" is probably the easiest and the shortest. The most crucial part would be to ensure that G2 works correctly with Gutenberg through some initial integration testing.
#### Initial Integration Testing
All of the G2 Components packages are available publicly on npm. They are updated several times a week, with the latest features and refinements from the project.
To test within Gutenberg, we can add the latest G2 Component packages as dependencies within the Gutenberg repo. Once they're in, we can attempt to use the components in various places in the editor.
All we're doing is making sure the Components and its systems (e.g. `context`, `styles`, `substate`, etc...) are working as expected within the WordPress/Gutenberg environment.
No pull request will be needed, as we won't be submitting any code into the repository (not yet).
#### Prep Completion
Once all the tasks are done, and the initial Gutenberg integration tests feel solid, we can move onto the "Integration" phase.
### Integration

The "Integration" phase involves migrating all of the packages from the G2 Components repo into the Gutenberg repo. The initial packages would be `utils`, `substate`, `create-styles`, `styles`, and `context` (preferably in that order). These systems should not impact the existing Gutenberg code or UI.
These system packages are smaller and self-contained. The migration will help identify and address any issues early (e.g. issues regarding dependencies, build, or types), which will smoothen the move for the components code.
#### Package Locations
The G2 system packages would live as dedicated packages within Gutenberg, with a slight adjustment to their names.
* `@wp-g2/create-styles` -> `@wordpress/ui-create-styles`
* `@wp-g2/context` -> `@wordpress/ui-context`
* `@wp-g2/styles` -> `@wordpress/ui-styles`
* `@wp-g2/substate` -> `@wordpress/ui-substate`
* `@wp-g2/utils` -> `@wordpress/ui-utils`
Note: There are other system packages in G2 Components right now. The plan is to drop those in favour of using their underlying dependencies directly:
* `@wp-g2/a11y` -> `reakit`
* `@wp-g2/animations` -> `framer-motion`
* `@wp-g2/gestures` -> `react-use-gesture`
The only G2 package that differs from the above-mentioned migration would be `@wp-g2/components`.
#### Component Integration
The idea is to move all of the code from `@wp-g2/components` directly into `@wordpress/components`.
The G2 code will mostly be separated into clearly marked directories (most likely `__next/`).
Below is a rough example of what it may look like:
```
src/
├── button/
│ ├── __next/
│ └── index.js
├── card/
│ ├── __next/
│ ├── (other files)
│ └── index.js
├── hstack/
│ └── index.js
└── etc...
```
The G2 code is locally scoped to a singular component directory while encapsulating and signifying that it's the "next"/future version of the component.
In the example above, I've included [HStack](https://github.com/ItsJonQ/g2/tree/master/packages/components/src/HStack) (one of the new layout-based primitive components). It currently doesn't exist within `@wordpress/components`. For cases like this, we would use the G2 code directly rather than containing it within a `__next` directory. Any component that is included in this way should be exported with the `__experimental` prefix, just like any other new UI introduced to the Gutenberg component ecosystem.
The G2 component code can be split into 3 groups:
* UI that is brand new (don't exist within `@wordpress/components`)
* UI that can directly replace existing components (current `__experimental` components)
* UI that is to live alongside existing components, but in a "**dormant state**", activated with the ["Context/Adapter" strategy](https://g2components.wordpress.com/2020/11/02/the-path-to-integration/)
#### Context/Adapter Strategy
To ensure integration is done incrementally in a controlled manner, we'll be attempting a strategy that involves G2's Context system and "adapters" for the existing `@wordpress/components` code.
##### The Adapter
The "adapter" is a tiny layer that can smoothly translate Component props (API) from the current WordPress components to the new G2 components. This ensures that the systems (core and 3rd party, especially 3rd party) can continue to operate with zero modification required in their UI codebases.
##### The Context System
The "Context System" works like a "network". It's something that allows components to communicate with each other as well as special "areas" created by the system. By "connecting" our newly adapted components to this network, we can send a signal to specific areas to render the new G2 UI - just like flipping a switch!
#### First Batch
The first batch of components to hit `@wordpress/components` would be the ones required to build the new Typography Tools feature. This batch includes the lowest level and most fundamental components for the G2 UI layer (such as `View` and `Text`).
Splitting the migration of the components into "batches" will allow us to deliver the new Typography Tools faster.
After the necessary code is moved, and the Typography Tools is built, we can continue with the second "batch" to move the remaining G2 component code.
#### Integration Completion
At this stage, Gutenberg should **not** look any different. This is because most of the adapted components are still "dormant". They haven't been signalled to switch to their newer G2 UIs yet. However, all of the foundations, plumbing, and materials have been set for us to construct the new Typography Tools in the "build" phase.
### Build

The "build" phase involves constructing the new Typography Tools experience for Global Styles / FSE.
One task we have to accomplish is to streamline the mechanics of how the Block Editor hooks bind and render controls. During the design and development process of the new Typography experience, we established a new unified reset/remove interaction that we feel confident about. As such, we'll need to create a systematic way for these interactions to be handled to benefit both the Typography Tools and future Design Tools.
Next, we'll designate this new Typography Tools area as "safe for G2 UI" using the Context System.
With those pieces in place, we can start construction on the new Typography tools. (Finally... Finally! We can build something, haha!).
#### Build Completion
Once the new Typography experience is created, and we're confident it's working correctly with Global Styles and the post editor (with zero regressions caused by G2), we can move onto the final phase, "post-build".
### Post Build

The "post build" phase involves us moving the second (and final) patch of components over from G2 to Gutenberg.
That's it!
### Next
With the 4 phases completed, the G2 components project would be fully integrated with `@wordpress/components` and we'll have a new and improved Typography Tools experience.
This is only the beginning! Integration and Typography tools are the first of many steps in systemizing and improving the UI experience for Gutenberg and WordPress. There's still a lot of work ahead of us. From improving the documentation experience to providing a fully-featured Figma Design System experience.
### Contributing
I'm 100% open to critical feedback on this outline. If you have any ideas of how we can improve things or spot any gaps that I may be missing, please let me know!
I'm going to start by creating tasks, projects, and milestones for the 4 phases in the respective repositories (first in the current G2 repo followed by Gutenberg). From there, I will be (very) actively coordinating, developing, and reviewing updates.
I recognize that the G2 Component project is inherently complex. Its bespoke and innovative systems are sculpted by the unique requirements of Gutenberg and WordPress. As such, please feel free to reach out to me anytime if you have questions on how things work and how things fit together. For the past weeks, I've been streaming (almost) [daily on Twitch](https://www.twitch.tv/itsjonq). I'm happy to answer questions during the live stream as well!
Thank you all for your time!
|
process
|
gutenberg integration typography tools roadmap i ve been waiting for this moment for a long time after several months of researching collaborating building testing and iterating the components project is nearing a stage where we can confidently map out the initial integration with gutenberg working with and design tools had helped me understand many of the complexities and challenges that come from systemizing a ui layer that could accommodate the expansive and open nature of wordpress and the gutenberg editor as such i believe in the importance of introducing components to gutenberg in a way that demonstrates user facing value i wanted to provide improvements to the ui experience that end users could see and feel rather than a revamped architecture that was superficially invisible for the past weeks i ve been collaborating with various designers and developers to refine the project more importantly i wanted to make sure that it was going in the right direction a big thanks to all of these folks for being supportive and patient with my questions based on feedback we felt like a potential candidate to deliver components with would be to improve the for full site editing fse and global styles today i ve created a rough publicly available roadmap using miro i ve found this medium to be easier to visualize a large scale project at a high level for finer grain task tracking we could use familiar workflows like github projects with that said i m planning on using this miro board as the primary task coordination planning format as such items on this board may change over time e g moved around or relabeled for this integration typography tools project i ve split up the work into key phases prep integration build post build prep the prep phase contains all of the tasks that prepare for gutenberg these tasks involve either cleanup refactors or resolving critical features like automatic rtl css rendering between the phases prep is probably the easiest and the shortest the most crucial part would be to ensure that works correctly with gutenberg through some initial integration testing initial integration testing all of component s packages are available publicly on npm they are updated several times a week with the latest features and refinements from the project to test within gutenberg we can add the latest component packages as dependencies within the gutenberg repo once they re in we can attempt to use the components in various places in the editor all we re doing is making sure the components and it s systems e g context styles substate etc are working as expected within the wordpress gutenberg environment no pull request will be needed as we won t be submitting any code into the repository not yet prep completion once all the tasks are done and the initial gutenberg integration tests feel solid we can move onto the integration phase integration the integration phase involves migrating all of the packages from the components repo into the gutenberg repo the initial packages would be utils substate create styles styles and context preferably in that order these systems should not impact the existing gutenberg code or ui these system packages are smaller and self contained the migration will help identify and address any issues early e g issues regarding dependencies build or types which will smoothen the move for the components code package locations the system packages would live as dedicated packages within gutenberg with a slight adjustment to their names wp create styles wordpress ui create 
styles wp context wordpress ui context wp styles wordpress ui styles wp substate wordpress ui substate wp utils wordpress ui utils note there are other system packages in components right now the plan is to drop those in favour of using their underlying dependencies directly wp reakit wp animations framer motion wp gestures react use gesture the only package that differs from the above mentioned migration would be wp components component integration the idea is to move all of the code from wp components directly into wordpress components the code will mostly be separated into clearly marked directories most likely next below is a rough example of what it may look like src ├── button │ ├── next │ └── index js ├── card │ ├── next │ ├── other files │ └── index js ├── hstack │ └── index js └── etc the code is locally scoped to a singular component directory while encapsulating and signifying that it s the next future version of the component in this above example i ve included one of the new layout based primitive components it currently doesn t exist within wordpress components for cases like this we would use the code directly rather than containing it within a next directory any component that is included in this way should be exported with the experimental prefix just like any other new ui introduced to the gutenberg component ecosystem the component code can be split into groups ui that is brand new don t exist within wordpress components ui that can directly replace existing components current experimental components ui that to live alongside existing components but in a dormant state activated with the context adapter strategy to ensure integration is done incrementally in a controlled manner we ll be attempting a strategy that involves s context system and adapters for the existing wordpress components code the adapter the adapter is a tiny layer that can smoothly translate component props api from the current wordpress components to the new components this ensures that the systems core and party especially party can continue to operate with zero modification required in their ui codebases the context system the context system works like a network it s something that allows components to communicate with each other as well as special areas created by the system by connecting our newly adapted components to this network we can send a signal to specific areas to render the new ui just like flipping a switch first batch the first batch of components to hit wordpress components would be the ones required to build the new typography tools feature this batch includes the lowest level and most fundamental components for the ui layer such as view and text splitting the migration of the components into batches will allow us to deliver the new typography tools faster after the necessary code is moved and the typography tools is built we can continue with the second batch to move the remaining component code integration completion at this stage gutenberg should not look any different this is because most of the adapted components are still dormant they haven t been signalled to switch to their newer uis yet however all of the foundations plumbing and materials have been set for us to construct the new typography tools in the build phase build the build phase involves constructing the new typography tools experience for global styles fse one task we have to accomplish is to streamline the mechanics of how the block editor hooks bind and render controls during the design and development process of 
the new typography experience we established a new unified reset remove interaction that we feel confident as such we ll need to create a systematic way for these interactions to be handled to benefit both the typography tools as well as future design tools next we ll designate this new typography tools area as safe for ui using the context system with those pieces in place we can start construction on the new typography tools finally finally we can build something haha build completion once the new typography experience is created and we re confident it s working correctly with global styles and the post editor with zero regressions caused by we can move onto the final phase post build post build the post build phase involves us moving the second and final patch of components over from to gutenberg that s it next with the phases completed the components project would be fully integrated with wordpress components and we ll have a new and improved typography tools experience this is only the beginning integration and typography tools are the first of many steps in systemizing and improving the ui experience for gutenberg and wordpress there s still a lot of work ahead of us from improving the documentation experience to providing a fully featured figma design system experience contributing i m open to critical feedback on this outline if you have any ideas of how we can improve things or spot any gaps that i may be missing please let me know i m going to start by creating tasks projects and milestones for the phases in the respective repositories first in the current repo followed by gutenberg from there i will be very actively coordinating developing and reviewing updates i recognize that the component project is inherently complex it s bespoke and innovative systems are sculpted by the unique requirements of gutenberg and wordpress as such please feel free to reach out to me anytime if you have questions on how things work and how things fit together for the past weeks i ve been streaming almost i m happy to answer questions during the live stream as well thank you all for your time
| 1
|
75,910
| 9,347,313,709
|
IssuesEvent
|
2019-03-31 00:06:22
|
rsms/inter
|
https://api.github.com/repos/rsms/inter
|
closed
|
Accents below lowercase letters can be more compact
|
design enhancement
|
**Describe the bug**
While looking into vertical metrics standards for Google Fonts, I realized that my recent PR to Inter was probably not optimal: its default line height is significantly higher than most comparable fonts.
The root cause of this is the script I set the vertical metrics with. It sets the `winDescent` to the lowest y-coordinate in the font. Meanwhile, `winAscent` is the highest y-coordinate in the font (Å). ~~This helps avoid clashes between lines of text, because `winDescent` and `winAscent` are used for the total default line height.~~
Actually, the MS OpenType spec says:
> Some legacy applications use the usWinAscent and usWinDescent values to determine default line spacing. This is strongly discouraged. The sTypo* fields should be used for this purpose.
...but these still seem to be the values used in Sketch & TextEdit. I need to do a bit more research to compare vertical metric values in fonts similar to Inter to know for sure.
However, I believe the design suggestions in this issue are valid, either way.
In Inter, the `/ydotbelow` glyph has a dot below its descender. This is logical, but seemingly not typical – many other fonts put it to the right of the `y`. In the case of Inter, it means that the `TypoDescender` is quite low, and as a result, the default line height is abnormally large.
Here are several fonts set at their default line heights, in Sketch:

And one with Inter's line height reduced:

**Expected behavior**
I propose two changes:
1. The `ydotbelow` dot is moved up and to the right, as in comparable fonts
2. The `commaaccentbelow` is made slightly more compact (less critical, but still useful). It is the second-lowest object in the font, and significantly bigger than commaaccents from related designs.
**Environment**
- OS: macOS 10.14, Core Text
- App that renders the font: Sketch, TextEdit, etc
- Version of font: `Version 3.004;git-8321f7c65`
**Additional context**
I wanted to note down my research in an issue, and (unless something big comes up) I'll change these things tomorrow, adjust the vertical metrics again, and submit another PR. Vertical metrics are some of the more important things to really get right before we publish to Google Fonts, because it will obviously make a very big difference to people's layouts if these change after the font is implemented in websites, etc.
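For anyone who wants to reproduce the comparison, the relevant tables can be read with opentype.js in Node; the property names below follow its OS/2 and hhea table parsing, and the font path is a placeholder:
```ts
import * as opentype from 'opentype.js';

const font = opentype.loadSync('Inter-Regular.otf');
const { os2, hhea } = font.tables;

// The legacy Win metrics, which Sketch/TextEdit appear to use for line height:
console.log('win line height:', os2.usWinAscent + os2.usWinDescent);

// The sTypo* metrics the OpenType spec recommends instead:
console.log(
  'typo line height:',
  os2.sTypoAscender - os2.sTypoDescender + os2.sTypoLineGap
);

// hhea metrics, for completeness:
console.log('hhea:', hhea.ascender, hhea.descender, hhea.lineGap);
```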
|
1.0
|
Accents below lowercase letters can be more compact - **Describe the bug**
While looking into vertical metrics standards for Google Fonts, I realized that my recent PR to Inter was probably not optimal: its default line height is significantly higher than most comparable fonts.
The root cause of this is the script I set the vertical metrics with. It sets the `winDescent` to the lowest y-coordinate in the font. Meanwhile, `winAscent` is the highest y-coordinate in the font (Å). ~~This helps avoid clashes between lines of text, because `winDescent` and `winAscent` are used for the total default line height.~~
Actually, the MS OpenType spec says:
> Some legacy applications use the usWinAscent and usWinDescent values to determine default line spacing. This is strongly discouraged. The sTypo* fields should be used for this purpose.
...but these still seem to be the values used in Sketch & TextEdit. I need to do a bit more research to compare vertical metric values in fonts similar to Inter to know for sure.
However, I believe the design suggestions in this issue are valid, either way.
In Inter, the `/ydotbelow` glyph has a dot below its descender. This is logical, but seemingly not typical – many other fonts put it to the right of the `y`. In the case of Inter, it means that the `TypoDescender` is quite low, and as a result, the default line height is abnormally large.
Here are several fonts set at their default line heights, in Sketch:

And one with Inter's line height reduced:

**Expected behavior**
I propose two changes:
1. The `ydotbelow` dot is moved up and to the right, as in comparable fonts
2. The `commaaccentbelow` is made slightly more compact (less critical, but still useful). It is the second-lowest object in the font, and significantly bigger than commaaccents from related designs.
**Environment**
- OS: macOS 10.14, Core Text
- App that renders the font: Sketch, TextEdit, etc
- Version of font: `Version 3.004;git-8321f7c65`
**Additional context**
I wanted to note down my research in an issue, and (unless something big comes up) I'll change these things tomorrow, adjust the vertical metrics again, and submit another PR. Vertical metrics are some of the more important things to really get right before we publish to Google Fonts, because it will obviously make a very big difference to people's layouts if these change after the font is implemented in websites, etc.
|
non_process
|
accents below lowercase letters can be more compact describe the bug while looking into vertical metrics standards for google fonts i realized that my recent pr to inter was probably not optimal it s default line height is significantly higher than most comparable fonts the root cause of this is that the script i set the vertical metrics with it sets the windescent to the lowest y coordinate in the font meanwhile winascent is the highest y coordinate in the font å this helps avoid clashes between lines of text because windescent and winascent are used for the total default line height actually the ms opentype spec says some legacy applications use the uswinascent and uswindescent values to determine default line spacing this is strongly discouraged the stypo fields should be used for this purpose but these still seem to be the values used in sketch textedit i need to do a bit more research to compare vertical metric values in fonts similar to inter to know for sure however i believe the design suggestions in this issue are valid either way in inter the ydotbelow glyph has a dot below its descender this is logical but seemingly not typical – many other fonts put it to the right of the y in the case of inter it means that the typodescender is quite low and as a result the default line height is abnormally large here are several fonts set at their default line heights in sketch and one with inter s line height reduced expected behavior i propose two changes the ydotbelow dot is moved up and to the right as in comparable fonts the commaaccentbelow is made slightly more compact less critical but still useful it is the second lowest object in the font and significantly bigger than commaaccents from related designs environment os macos core text app that renders the font sketch textedit etc version of font version git additional context i wanted to note down my research in an issue and unless something big comes up i ll change these things tomorrow adjust the vertical metrics again and submit another pr vertical metrics are some of the more important things to really get right before we publish to google fonts because it will obviously make a very big difference to people s layouts if these change after the font is implemented in websites etc
| 0
|
19,295
| 25,466,412,800
|
IssuesEvent
|
2022-11-25 05:09:46
|
GoogleCloudPlatform/fda-mystudies
|
https://api.github.com/repos/GoogleCloudPlatform/fda-mystudies
|
closed
|
[IDP] [PM] Getting an error message "Sign in is not available. Please try again later" when trying to sign in with the below-mentioned admin credentials
|
Bug P1 Participant manager Process: Fixed Process: Tested QA Process: Tested dev
|
For the below-mentioned admin,
bhoomikav+idpt1@boston-technology.com
an error message "Sign in is not available. Please try again later" appears when trying to sign in

|
3.0
|
[IDP] [PM] Getting an error message "Sign in is not available. Please try again later" when trying to sign in with the below-mentioned admin credentials - For the below-mentioned admin,
bhoomikav+idpt1@boston-technology.com
an error message "Sign in is not available. Please try again later" appears when trying to sign in

|
process
|
getting an error message as sign in is not available please try again later when tried to sign in with below mentioned admin credentials for the below mentioned admin bhoomikav boston technology com getting an error message as sign in is not available please try again later when tried to sign in
| 1
|
3,782
| 6,760,944,654
|
IssuesEvent
|
2017-10-24 22:42:03
|
aspnet/IISIntegration
|
https://api.github.com/repos/aspnet/IISIntegration
|
closed
|
Finalizing In-process ANCM.
|
in-process
|
Checklist:
### P0:
- [ ] React to ANCM changes for pInvoke layer https://github.com/aspnet/IISIntegration/issues/430
- [ ] Incorrect response body on large responses. https://github.com/aspnet/IISIntegration/issues/442
### P1:
- [x] Merging UseNativeIIS and UseIISIntegration https://github.com/aspnet/IISIntegration/issues/429 - Fixed with #443
- [ ] Have IIS Integration consume ANCM from nuget package #424
- [ ] Fix Path vs Path Base behavior #427
- [ ] Check content length #433
- [ ] Implement Drain #436
- [ ] Implement Abort #438
### P2:
- [ ] Performance pass https://github.com/aspnet/IISIntegration/issues/432
- [ ] Logging https://github.com/aspnet/IISIntegration/issues/439
- [ ] Mocking ANCM for testing purposes #431
- [ ] Api/Naming review (Further in the future)
### P3:
- [ ] Removing duplication for opaque streams https://github.com/aspnet/IISIntegration/issues/426
- [ ] Auth and Windows Auth https://github.com/aspnet/IISIntegration/issues/428
- [ ] Change design of Request and Response Body #435
### P4:
- [ ] Support running multiple in-process applications (very minor changes here, mostly in ANCM) https://github.com/aspnet/AspNetCoreModule/issues/160
|
1.0
|
Finalizing In-process ANCM. - Checklist:
### P0:
- [ ] React to ANCM changes for pInvoke layer https://github.com/aspnet/IISIntegration/issues/430
- [ ] Incorrect response body on large responses. https://github.com/aspnet/IISIntegration/issues/442
### P1:
- [x] Merging UseNativeIIS and UseIISIntegration https://github.com/aspnet/IISIntegration/issues/429 - Fixed with #443
- [ ] Have IIS Integration consume ANCM from nuget package #424
- [ ] Fix Path vs Path Base behavior #427
- [ ] Check content length #433
- [ ] Implement Drain #436
- [ ] Implement Abort #438
### P2:
- [ ] Performance pass https://github.com/aspnet/IISIntegration/issues/432
- [ ] Logging https://github.com/aspnet/IISIntegration/issues/439
- [ ] Mocking ANCM for testing purposes #431
- [ ] Api/Naming review (Further in the future)
### P3:
- [ ] Removing duplication for opaque streams https://github.com/aspnet/IISIntegration/issues/426
- [ ] Auth and Windows Auth https://github.com/aspnet/IISIntegration/issues/428
- [ ] Change design of Request and Response Body #435
### P4:
- [ ] Support running multiple in-process applications (very minor changes here, mostly in ANCM) https://github.com/aspnet/AspNetCoreModule/issues/160
|
process
|
finalizing in process ancm checklist react to ancm changes for pinvoke layer incorrect response body on large responses merging usenativeiis and useiisintegration fixed with have iis integration consume ancm from nuget package fix path vs path base behavior check content length implement drain implement abort performance pass logging mocking ancm for testing purposes api naming review further in the future removing duplication for opaque streams auth and windows auth change design of request and respnonse body support running multiple in process applications very minor changes here mostly in ancm
| 1
|
172,752
| 21,054,810,299
|
IssuesEvent
|
2022-04-01 01:18:22
|
ziednov007/JavaSpring
|
https://api.github.com/repos/ziednov007/JavaSpring
|
opened
|
CVE-2022-22965 (High) detected in spring-beans-5.0.9.RELEASE.jar
|
security vulnerability
|
## CVE-2022-22965 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>spring-beans-5.0.9.RELEASE.jar</b></p></summary>
<p>Spring Beans</p>
<p>Library home page: <a href="https://github.com/spring-projects/spring-framework">https://github.com/spring-projects/spring-framework</a></p>
<p>Path to dependency file: /app/build.gradle</p>
<p>Path to vulnerable library: /root/.gradle/caches/modules-2/files-2.1/org.springframework/spring-beans/5.0.9.RELEASE/65f56fdab1bb90ad059e314d2f2f4cf76f9bdbde/spring-beans-5.0.9.RELEASE.jar,/root/.gradle/caches/modules-2/files-2.1/org.springframework/spring-beans/5.0.9.RELEASE/65f56fdab1bb90ad059e314d2f2f4cf76f9bdbde/spring-beans-5.0.9.RELEASE.jar</p>
<p>
Dependency Hierarchy:
- spring-boot-starter-web-2.0.5.RELEASE.jar (Root Library)
- spring-webmvc-5.0.9.RELEASE.jar
- :x: **spring-beans-5.0.9.RELEASE.jar** (Vulnerable Library)
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Spring Framework before 5.2.20 and 5.3.x before 5.3.18 are vulnerable due to a vulnerability in Spring-beans which allows attackers under certain circumstances to achieve remote code execution; this vulnerability is also known as ״Spring4Shell״ or ״SpringShell״. The current POC related to the attack is done by creating a specially crafted request which manipulates ClassLoader to successfully achieve RCE (Remote Code Execution). Please note that the ease of exploitation may diverge by the code implementation. Currently, the exploit requires JDK 9 or higher, Apache Tomcat as the Servlet container, the application packaged as WAR, and a dependency on spring-webmvc or spring-webflux. Spring Framework 5.3.18 and 5.2.20 have already been released. WhiteSource's research team is carefully observing developments and researching the case. We will keep updating this page and our WhiteSource resources with updates.
<p>Publish Date: 2022-01-11
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-22965>CVE-2022-22965</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://spring.io/blog/2022/03/31/spring-framework-rce-early-announcement">https://spring.io/blog/2022/03/31/spring-framework-rce-early-announcement</a></p>
<p>Release Date: 2022-01-11</p>
<p>Fix Resolution: org.springframework:spring-beans:5.2.20.RELEASE,5.3.18</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
CVE-2022-22965 (High) detected in spring-beans-5.0.9.RELEASE.jar - ## CVE-2022-22965 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>spring-beans-5.0.9.RELEASE.jar</b></p></summary>
<p>Spring Beans</p>
<p>Library home page: <a href="https://github.com/spring-projects/spring-framework">https://github.com/spring-projects/spring-framework</a></p>
<p>Path to dependency file: /app/build.gradle</p>
<p>Path to vulnerable library: /root/.gradle/caches/modules-2/files-2.1/org.springframework/spring-beans/5.0.9.RELEASE/65f56fdab1bb90ad059e314d2f2f4cf76f9bdbde/spring-beans-5.0.9.RELEASE.jar,/root/.gradle/caches/modules-2/files-2.1/org.springframework/spring-beans/5.0.9.RELEASE/65f56fdab1bb90ad059e314d2f2f4cf76f9bdbde/spring-beans-5.0.9.RELEASE.jar</p>
<p>
Dependency Hierarchy:
- spring-boot-starter-web-2.0.5.RELEASE.jar (Root Library)
- spring-webmvc-5.0.9.RELEASE.jar
- :x: **spring-beans-5.0.9.RELEASE.jar** (Vulnerable Library)
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Spring Framework before 5.2.20 and 5.3.x before 5.3.18 are vulnerable due to a vulnerability in Spring-beans which allows attackers under certain circumstances to achieve remote code execution; this vulnerability is also known as ״Spring4Shell״ or ״SpringShell״. The current POC related to the attack is done by creating a specially crafted request which manipulates ClassLoader to successfully achieve RCE (Remote Code Execution). Please note that the ease of exploitation may diverge by the code implementation. Currently, the exploit requires JDK 9 or higher, Apache Tomcat as the Servlet container, the application packaged as WAR, and a dependency on spring-webmvc or spring-webflux. Spring Framework 5.3.18 and 5.2.20 have already been released. WhiteSource's research team is carefully observing developments and researching the case. We will keep updating this page and our WhiteSource resources with updates.
<p>Publish Date: 2022-01-11
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-22965>CVE-2022-22965</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://spring.io/blog/2022/03/31/spring-framework-rce-early-announcement">https://spring.io/blog/2022/03/31/spring-framework-rce-early-announcement</a></p>
<p>Release Date: 2022-01-11</p>
<p>Fix Resolution: org.springframework:spring-beans:5.2.20.RELEASE,5.3.18</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_process
|
cve high detected in spring beans release jar cve high severity vulnerability vulnerable library spring beans release jar spring beans library home page a href path to dependency file app build gradle path to vulnerable library root gradle caches modules files org springframework spring beans release spring beans release jar root gradle caches modules files org springframework spring beans release spring beans release jar dependency hierarchy spring boot starter web release jar root library spring webmvc release jar x spring beans release jar vulnerable library vulnerability details spring framework before and x before are vulnerable due to a vulnerability in spring beans which allows attackers under certain circumstances to achieve remote code execution this vulnerability is also known as ״ ״ or ״springshell״ the current poc related to the attack is done by creating a specially crafted request which manipulates classloader to successfully achieve rce remote code execution please note that the ease of exploitation may diverge by the code implementation currently the exploit requires jdk or higher apache tomcat as the servlet container the application packaged as war and dependency on spring webmvc or spring webflux spring framework and have already been released whitesource s research team is carefully observing developments and researching the case we will keep updating this page and our whitesource resources with updates publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution org springframework spring beans release step up your open source security game with whitesource
| 0
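The suggested fix above is a version upgrade; for builds that cannot move the whole framework train at once, a dependency constraint is one way to force the patched artifact through a transitive graph like the one in this record. A minimal sketch, assuming a Gradle build similar to the `/app/build.gradle` referenced above:

```groovy
// Hypothetical build.gradle fragment: force the patched spring-beans
// wherever it appears in the dependency graph.
dependencies {
    constraints {
        implementation('org.springframework:spring-beans:5.3.18') {
            because 'CVE-2022-22965 (Spring4Shell) is fixed in 5.2.20 / 5.3.18'
        }
    }
}
```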
|
9,055
| 12,130,306,729
|
IssuesEvent
|
2020-04-23 01:08:46
|
GoogleCloudPlatform/python-docs-samples
|
https://api.github.com/repos/GoogleCloudPlatform/python-docs-samples
|
closed
|
remove gcp-devrel-py-tools from iot/api-client/mqtt_example/requirements.txt
|
priority: p2 remove-gcp-devrel-py-tools type: process
|
remove gcp-devrel-py-tools from iot/api-client/mqtt_example/requirements.txt
|
1.0
|
remove gcp-devrel-py-tools from iot/api-client/mqtt_example/requirements.txt - remove gcp-devrel-py-tools from iot/api-client/mqtt_example/requirements.txt
|
process
|
remove gcp devrel py tools from iot api client mqtt example requirements txt remove gcp devrel py tools from iot api client mqtt example requirements txt
| 1
|
1,105
| 3,587,404,181
|
IssuesEvent
|
2016-01-30 08:45:33
|
mkdocs/mkdocs
|
https://api.github.com/repos/mkdocs/mkdocs
|
opened
|
Update mkdocs.org automatically on a release
|
Process
|
Travis now [releases to PyPI](70adae9ad884631598e6718c115a38b9f6f18915) for us; it would be great if it could then also run mkdocs gh-deploy for us too.
Somebody recently figured out how to do this; we can maybe learn from it: http://dragplus.com/post/id/33416269 (it looks somewhat complicated, it would be great if we could make this easier for people).
|
1.0
|
Update mkdocs.org automatically on a release - Travis now [releases to PyPI](70adae9ad884631598e6718c115a38b9f6f18915) for us; it would be great if it could then also run mkdocs gh-deploy for us too.
Somebody recently figured out how to do this; we can maybe learn from it: http://dragplus.com/post/id/33416269 (it looks somewhat complicated, it would be great if we could make this easier for people).
|
process
|
update mkdocs org automatically on a release travis now for us it would be great if it could then also run mkdocs gh deploy for us too somebody recently figured out how to do this we can maybe learn from it it looks somewhat complicated it would be great if we could make this easier for people
| 1
|
64,850
| 7,844,947,126
|
IssuesEvent
|
2018-06-19 11:21:33
|
rtfd/readthedocs.org
|
https://api.github.com/repos/rtfd/readthedocs.org
|
closed
|
Accumulate some test cases that we want tested around search
|
Needed: design decision
|
Related to the work that @safwanrahman is taking on, it would be helpful for us to outline some of the test cases that we want to verify before making changes to tests. The most helpful tests for us are probably integration tests, however. I'm not certain how we want to test this. Things that would be good to test are:
* Search term hit rate, for currently matching queries
* Search term miss rate, for currently unmatched queries or problematic queries we want to resolve
* Result return -- test subproject/superproject relationship return
|
1.0
|
Accumulate some test cases that we want tested around search - Related to the work that @safwanrahman is taking on, it would be helpful for us to outline some of the test cases that we want to verify before making changes to tests. The most helpful tests for us are probably integration tests, however. I'm not certain how we want to test this. Things that would be good to test are:
* Search term hit rate, for currently matching queries
* Search term miss rate, for currently unmatched queries or problematic queries we want to resolve
* Result return -- test subproject/superproject relationship return
|
non_process
|
accumulate some test cases that we want tested around search related to the work that safwanrahman is taking on it would be helpful for us to outline some of the test cases that we want to verify before making changes to tests the most helpful tests for us are probably integration tests however i m not certain how we want to test this things that would be good to test are search term hit rate for currently matching queries search term miss rate for currently unmatched queries or problematic queries we want to resolve result return test subproject superproject relationship return
| 0
|
26,444
| 7,837,965,534
|
IssuesEvent
|
2018-06-18 08:35:41
|
JabRef/jabref
|
https://api.github.com/repos/JabRef/jabref
|
closed
|
Adjust eclipse style to intellij (line wrapping)
|
build-system
|
Does IntelliJ have one or two default indentation lines? @tobiasdiez
I now changed it to align on column
[JabRefModdedStyleSettings.xml.txt](https://github.com/JabRef/jabref/files/1791055/JabRefModdedStyleSettings.xml.txt)




|
1.0
|
Adjust eclipse style to intellij (line wrapping) - Does IntelliJ have one or two default indentation lines? @tobiasdiez
I now changed it to align on column
[JabRefModdedStyleSettings.xml.txt](https://github.com/JabRef/jabref/files/1791055/JabRefModdedStyleSettings.xml.txt)




|
non_process
|
adjust eclipse style to intellij line wrapping does intellij have one or two default indentation lines tobiasdiez i now changed it to align on column
| 0
|
74,926
| 7,452,253,499
|
IssuesEvent
|
2018-03-29 07:41:19
|
Kademi/kademi-dev
|
https://api.github.com/repos/Kademi/kademi-dev
|
closed
|
Add an option in OrgLocator component to show only selected orgtypes
|
High priority Ready to Test QA enhancement
|
We need to have an option added to OrgLocator component to only show the selected orgtypes
|
1.0
|
Add an option in OrgLocator component to show only selected orgtypes - We need to have an option added to OrgLocator component to only show the selected orgtypes
|
non_process
|
add an option in orglocator component to show only selected orgtypes we need to have an option added to orglocator component to only show the selected orgtypes
| 0
|
71,653
| 30,914,098,078
|
IssuesEvent
|
2023-08-05 03:58:39
|
Zahlungsmittel/Zahlungsmittel
|
https://api.github.com/repos/Zahlungsmittel/Zahlungsmittel
|
opened
|
[CLOSED] fix_admin_token_renewal
|
bug service: admin frontend imported
|
<a href="https://github.com/ulfgebhardt"><img src="https://avatars.githubusercontent.com/u/1238238?v=4" align="left" width="96" height="96" hspace="10"></img></a> **Issue by [ulfgebhardt](https://github.com/ulfgebhardt)**
_Friday Nov 26, 2021 at 10:48 GMT_
_Originally opened as https://github.com/gradido/gradido/pull/1139_
----
<!-- You can find the latest issue templates here https://github.com/ulfgebhardt/issue-templates -->
## 🍰 Pullrequest
<!-- Describe the Pullrequest. Use Screenshots if possible. -->
renew token
### Issues
<!-- Which Issues does this fix, which are related?
- fixes #XXX
- relates #XXX
-->
- None
### Todo
<!-- In case some parts are still missing, list them here. -->
- [X] None
----
_**[ulfgebhardt](https://github.com/ulfgebhardt)** included the following code: https://github.com/gradido/gradido/pull/1139/commits_
|
1.0
|
[CLOSED] fix_admin_token_renewal - <a href="https://github.com/ulfgebhardt"><img src="https://avatars.githubusercontent.com/u/1238238?v=4" align="left" width="96" height="96" hspace="10"></img></a> **Issue by [ulfgebhardt](https://github.com/ulfgebhardt)**
_Friday Nov 26, 2021 at 10:48 GMT_
_Originally opened as https://github.com/gradido/gradido/pull/1139_
----
<!-- You can find the latest issue templates here https://github.com/ulfgebhardt/issue-templates -->
## 🍰 Pullrequest
<!-- Describe the Pullrequest. Use Screenshots if possible. -->
renew token
### Issues
<!-- Which Issues does this fix, which are related?
- fixes #XXX
- relates #XXX
-->
- None
### Todo
<!-- In case some parts are still missing, list them here. -->
- [X] None
----
_**[ulfgebhardt](https://github.com/ulfgebhardt)** included the following code: https://github.com/gradido/gradido/pull/1139/commits_
|
non_process
|
fix admin token renewal issue by friday nov at gmt originally opened as 🍰 pullrequest renew token issues which issues does this fix which are related fixes xxx relates xxx none todo none included the following code
| 0
|
18,161
| 24,195,670,900
|
IssuesEvent
|
2022-09-23 23:03:54
|
B2o5T/graphql-eslint
|
https://api.github.com/repos/B2o5T/graphql-eslint
|
closed
|
Apply graphql-tag-pluck config options
|
kind/enhancement process/candidate
|
**Is your feature request related to a problem? Please describe.**
I'm facing a problem that I can't lint my graphql queries because I'm using custom graphql-tag-pluck tag names that cannot be configured in graphql-eslint, so I think graphql-eslint should allow users to configure its graphql-tag-pluck options. I'm working on a project with multiple schemas with corresponding custom gql tags, and I'd like to use graphql-eslint to lint my queries in these custom tags.
For example, a query to the schema "foo" looks like ``const fooQuery = foo`query FooItems {...}` ``, and a query to schema "bar" looks like ``const barQuery = bar`query BarItems {...}` ``, and graphql-eslint cannot lint these queries because they don't use the tags `gql` or `graphql` or the magic comment `/* GraphQL */` as defined in the source code [here](https://github.com/B2o5T/graphql-eslint/blob/master/packages/plugin/src/processor.ts#L9) and [here](https://github.com/B2o5T/graphql-eslint/blob/master/packages/plugin/src/processor.ts#L22) .
**Describe the solution you'd like**
Since graphql-config is already being used, it can be extended to include options for graphql-tag-pluck, then those options applied to the existing config as follows in `processor.ts`:
```
import graphqlConfig from 'graphql-config';
...
const graphqlTagPluckOptions = graphqlConfig.loadConfigSync({}).getDefault().extensions.graphqlTagPluck
const graphqlTags = graphqlTagPluckOptions.modules.map(({identifier}) => identifier)
const RELEVANT_KEYWORDS = ['gql', 'graphql', '/* GraphQL */', ...graphqlTags] as const;
const blocksMap = new Map<string, Block[]>();
export const processor: Linter.Processor<Block | string> = {
supportsAutofix: true,
preprocess(code, filePath) {
if (RELEVANT_KEYWORDS.every(keyword => !code.includes(keyword))) {
return [code];
}
const extractedDocuments = parseCode({
code,
filePath,
options: {
...graphqlTagPluckOptions,
skipIndent: true,
},
});...
```
Configured through graphql-config in the following file `graphql.config.js`:
```
module.exports = {
projects: {
default: {
documents: './src/**/*.{tsx, ts, jsx, js}',
operations: './src/**/*.{tsx, ts, jsx, js}',
extensions: {
graphqlTagPluck: {
modules: [
{
name: 'customTags',
identifier: 'foo'
},
{
name: 'customTags',
identifier: 'bar'
}
],
globalGqlIdentifierName: ['foo', 'bar'],
gqlMagicComment: '__nomagiccomment__',
},
},
},
},
}
```
**Describe alternatives you've considered**
It seems very clear in the code that custom tags are not supported, so I'm not sure if there are alternatives given that I am using custom tags.
**Additional context**
<!-- Add any other context or screenshots about the feature request here. -->
|
1.0
|
Apply graphql-tag-pluck config options - **Is your feature request related to a problem? Please describe.**
I'm facing a problem that I can't lint my graphql queries because I'm using custom graphql-tag-pluck tag names that cannot be configured in graphql-eslint, so I think graphql-eslint should allow users to configure its graphql-tag-pluck options. I'm working on a project with multiple schemas with corresponding custom gql tags, and I'd like to use graphql-eslint to lint my queries in these custom tags.
For example, a query to the schema "foo" looks like ``const fooQuery = foo`query FooItems {...}` ``, and a query to schema "bar" looks like ``const barQuery = bar`query BarItems {...}` ``, and graphql-eslint cannot lint these queries because they don't use the tags `gql` or `graphql` or the magic comment `/* GraphQL */` as defined in the source code [here](https://github.com/B2o5T/graphql-eslint/blob/master/packages/plugin/src/processor.ts#L9) and [here](https://github.com/B2o5T/graphql-eslint/blob/master/packages/plugin/src/processor.ts#L22) .
**Describe the solution you'd like**
Since graphql-config is already being used, it can be extended to include options for graphql-tag-pluck, then those options applied to the existing config as follows in `processor.ts`:
```
import graphqlConfig from 'graphql-config';
...
const graphqlTagPluckOptions = graphqlConfig.loadConfigSync({}).getDefault().extensions.graphqlTagPluck
const graphqlTags = graphqlTagPluckOptions.modules.map(({identifier}) => identifier)
const RELEVANT_KEYWORDS = ['gql', 'graphql', '/* GraphQL */', ...graphqlTags] as const;
const blocksMap = new Map<string, Block[]>();
export const processor: Linter.Processor<Block | string> = {
supportsAutofix: true,
preprocess(code, filePath) {
if (RELEVANT_KEYWORDS.every(keyword => !code.includes(keyword))) {
return [code];
}
const extractedDocuments = parseCode({
code,
filePath,
options: {
...graphqlTagPluckOptions,
skipIndent: true,
},
});...
```
Configured through graphql-config in the following file `graphql.config.js`:
```
module.exports = {
projects: {
default: {
documents: './src/**/*.{tsx, ts, jsx, js}',
operations: './src/**/*.{tsx, ts, jsx, js}',
extensions: {
graphqlTagPluck: {
modules: [
{
name: 'customTags',
identifier: 'foo'
},
{
name: 'customTags',
identifier: 'bar'
}
],
globalGqlIdentifierName: ['foo', 'bar'],
gqlMagicComment: '__nomagiccomment__',
},
},
},
},
}
```
**Describe alternatives you've considered**
It seems very clear in the code that custom tags are not supported, so I'm not sure if there are alternatives given that I am using custom tags.
**Additional context**
<!-- Add any other context or screenshots about the feature request here. -->
|
process
|
apply graphql tag pluck config options is your feature request related to a problem please describe i m facing a problem that i can t lint my graphql queries because i m using custom graphql tag pluck tag names that cannot be configured in graphql eslint so i think graphql eslint should allow users to configure its graphql tag pluck options i m working on a project with multiple schemas with corresponding custom gql tags and i d like to use graphql eslint to lint my queries in these custom tags for example a query to the schema foo looks like const fooquery foo query fooitems and a query to schema bar looks like const barquery bar query baritems and graphql eslint cannot lint these queries because they don t use the tags gql or graphql or the magic comment graphql as defined in the source code and describe the solution you d like since graphql config is already being used it can be extended to include options for graphql tag pluck then those options applied to the existing config as follows in processor ts import graphqlconfig from graphql config const graphqltagpluckoptions graphqlconfig loadconfigsync getdefault extensions graphqltagpluck const graphqltags graphqltagpluckoptions modules map identifier identifier const relevant keywords as const const blocksmap new map export const processor linter processor supportsautofix true preprocess code filepath if relevant keywords every keyword code includes keyword return const extracteddocuments parsecode code filepath options graphqltagpluckoptions skipindent true configured through graphql config in the following file graphql config js module exports projects default documents src tsx ts jsx js operations src tsx ts jsx js extensions graphqltagpluck modules name customtags identifier foo name customtags identifier bar globalgqlidentifiername gqlmagiccomment nomagiccomment describe alternatives you ve considered it seems very clear in the code that custom tags are not supported so i m not sure if there are alternatives given that i am using custom tags additional context
| 1
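For context on how the plucked blocks are consumed, here is a minimal sketch of wiring the graphql-eslint processor into an ESLint config, based on the plugin's documented setup; the file globs and the chosen rule are illustrative:

```js
// .eslintrc.js — minimal sketch (globs and rule choice are illustrative)
module.exports = {
  overrides: [
    {
      // Run the processor over JS/TS sources so tagged template literals
      // are extracted before any GraphQL rules are applied.
      files: ['src/**/*.{js,jsx,ts,tsx}'],
      processor: '@graphql-eslint/graphql',
    },
    {
      // Lint the extracted GraphQL documents themselves.
      files: ['*.graphql'],
      parser: '@graphql-eslint/eslint-plugin',
      plugins: ['@graphql-eslint'],
      rules: { '@graphql-eslint/known-type-names': 'error' },
    },
  ],
};
```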
|
9,711
| 12,706,533,050
|
IssuesEvent
|
2020-06-23 07:22:04
|
prisma/prisma-engines
|
https://api.github.com/repos/prisma/prisma-engines
|
opened
|
Mask Datasource URLs in all artifacts generated by the Migration Engine
|
component: migration engine kind/bug process/candidate
|
Datasource URLs are stored in clear text in the `steps.json`. That means a hardcoded URL will be part of users source code repositories.
We should make sure that other places like generated Readmes etc. are fixed by this as well. (I think simply fixing the diffing process should be fine.)
Relates to: https://github.com/prisma/migrate/issues/310
|
1.0
|
Mask Datasource URLs in all artifacts generated by the Migration Engine - Datasource URLs are stored in clear text in the `steps.json`. That means a hardcoded URL will be part of users source code repositories.
We should make sure that other places like generated Readmes etc. are fixed by this as well. (I think simply fixing the diffing process should be fine.)
Relates to: https://github.com/prisma/migrate/issues/310
|
process
|
mask datasource urls in all artifacts generated by the migration engine datasource urls are stored in clear text in the steps json that means a hardcoded url will be part of users source code repositories we should make sure that other places like generated readmes etc are fixed by this as well i think simply fixing the diffing process should be fine relates to
| 1
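As a rough illustration of the masking itself (the migration engine is written in Rust, so this JavaScript is only a sketch of the idea, not the actual implementation; the function name is hypothetical):

```js
// Sketch: redact userinfo in a datasource URL before it lands in an
// artifact such as steps.json or a generated README.
function maskDatasourceUrl(raw) {
  const url = new URL(raw);
  if (url.username) url.username = '***';
  if (url.password) url.password = '***';
  return url.toString();
}

console.log(maskDatasourceUrl('postgresql://user:s3cret@localhost:5432/db'));
// -> postgresql://***:***@localhost:5432/db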
|
7,831
| 11,009,644,361
|
IssuesEvent
|
2019-12-04 13:06:38
|
prisma/prisma2
|
https://api.github.com/repos/prisma/prisma2
|
closed
|
PRISMA_QUERY_ENGINE_BINARY being ignored
|
bug/2-confirmed kind/bug process/candidate
|
I'm compiling my own binaries via [Dockerfile](https://github.com/LongLiveCHIEF/prisma2/blob/574dc3de29cbe2f7dff8aa81e47f429909ed0e77/docker/Dockerfile), and setting the binaries on a custom path of `/prisma-engine/binaries/<binary_name>`.
However, when I run `prisma lift` or `prisma generate` commands, I get a node `ENOENT` error because it's trying to spawn the binary from `/usr/local/lib/node_modules` instead of the path I've provided to the explicit binary.
Just to be sure it wasn't a naming problem (see comment https://github.com/prisma/prisma2/issues/938#issuecomment-554780269), I named the binary with the same resolved name for my native platform.
```
$ echo $PRISMA_QUERY_ENGINE_BINARY
/prisma-engine/binaries/query-engine-linux-glibc-libssl1.1.0
$ prisma lift up --preview
Error: Get config Error: Command failed with exit code 2 (ENOENT): /usr/local/lib/node_modules/prisma2/query-engine-linux-glibc-libssl1.1.0 clie --get_config /tmp/d02f6653-7b90-478f-9cf4-98ce4d8b3ba8
spawn /usr/local/lib/node_modules/prisma2/query-engine-linux-glibc-libssl1.1.0 ENOENT
```
|
1.0
|
PRISMA_QUERY_ENGINE_BINARY being ignored - I'm compiling my own binaries via [Dockerfile](https://github.com/LongLiveCHIEF/prisma2/blob/574dc3de29cbe2f7dff8aa81e47f429909ed0e77/docker/Dockerfile), and setting the binaries on a custom path of `/prisma-engine/binaries/<binary_name>`.
However, when I run `prisma lift` or `prisma generate` commands, I get a node `ENOENT` error because it's trying to spawn the binary from `/usr/local/lib/node_modules` instead of the path I've provided to the explicit binary.
Just to be sure it wasn't a naming problem (see comment https://github.com/prisma/prisma2/issues/938#issuecomment-554780269), I named the binary with the same resolved name for my native platform.
```
$ echo $PRISMA_QUERY_ENGINE_BINARY
/prisma-engine/binaries/query-engine-linux-glibc-libssl1.1.0
$ prisma lift up --preview
Error: Get config Error: Command failed with exit code 2 (ENOENT): /usr/local/lib/node_modules/prisma2/query-engine-linux-glibc-libssl1.1.0 clie --get_config /tmp/d02f6653-7b90-478f-9cf4-98ce4d8b3ba8
spawn /usr/local/lib/node_modules/prisma2/query-engine-linux-glibc-libssl1.1.0 ENOENT
```
|
process
|
prisma query engine binary being ignored i m compiling my own binaries via and setting the binaries on a custom path of prisma engine binaries however when i run prisma lift or prisma generate commands i get a node enoent error because it s trying to spawn the binary from usr local lib node modules instead of the path i ve provided to the explicit binary just to be sure it wasn t a naming problem see comment i named the binary with the same resolved name for my native platform echo prisma query engine binary prisma engine binaries query engine linux glibc prisma lift up preview error get config error command failed with exit code enoent usr local lib node modules query engine linux glibc clie get config tmp spawn usr local lib node modules query engine linux glibc enoent
| 1
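A hedged workaround sketch while the variable is ignored — the paths come straight from the report, and the symlink target inside node_modules is the location the error message shows the CLI resolving:

```sh
# Sketch: expose the custom binary at the path the CLI actually spawns,
# until PRISMA_QUERY_ENGINE_BINARY is honored.
export PRISMA_QUERY_ENGINE_BINARY=/prisma-engine/binaries/query-engine-linux-glibc-libssl1.1.0
ln -sf "$PRISMA_QUERY_ENGINE_BINARY" \
  /usr/local/lib/node_modules/prisma2/query-engine-linux-glibc-libssl1.1.0
prisma lift up --preview
```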
|
396,678
| 27,130,753,380
|
IssuesEvent
|
2023-02-16 09:36:56
|
mobility-team/mobility
|
https://api.github.com/repos/mobility-team/mobility
|
closed
|
Set up the list of contributors and users
|
documentation communication
|
Contributor: a person or organization that contributes to the Mobility project in the form of development work, ideas, or tests.
User: a person or organization that uses the Python package.
- [ ] Create a simple table in markdown format (https://docs.github.com/en/get-started/writing-on-github/working-with-advanced-formatting/organizing-information-with-tables).
- [ ] Add AREP and Elioth.
|
1.0
|
Set up the list of contributors and users - Contributor: a person or organization that contributes to the Mobility project in the form of development work, ideas, or tests.
User: a person or organization that uses the Python package.
- [ ] Create a simple table in markdown format (https://docs.github.com/en/get-started/writing-on-github/working-with-advanced-formatting/organizing-information-with-tables).
- [ ] Add AREP and Elioth.
|
non_process
|
set up the list of contributors and users contributor a person or organization that contributes to the mobility project in the form of development work ideas or tests user a person or organization that uses the python package create a simple table in markdown format add arep and elioth
| 0
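A minimal sketch of the requested markdown table, seeded with the two organizations named in the issue (column names are illustrative):

```markdown
| Contributor / User | Role        |
|--------------------|-------------|
| AREP               | Contributor |
| Elioth             | Contributor |
```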
|
77,475
| 7,575,160,096
|
IssuesEvent
|
2018-04-24 00:05:50
|
rancher/rancher
|
https://api.github.com/repos/rancher/rancher
|
closed
|
Changing node Template of a deployed pool doesn't reconcile
|
area/ui kind/bug status/resolved status/to-test
|
**Rancher versions:**
rancher/server:v2.0.0-beta3
You have the option to change the node template of an already deployed pool, but saving it doesn't actually change anything so it no longer reflects the deployed pool.
We should either disable the changing of the node template once deployed, or we should reconcile by destroying existing nodes and redeploying using the new template
|
1.0
|
Changing node Template of a deployed pool doesn't reconcile - **Rancher versions:**
rancher/server:v2.0.0-beta3
You have the option to change the node template of an already deployed pool, but saving it doesn't actually change anything so it no longer reflects the deployed pool.
We should either disable the changing of the node template once deployed, or we should reconcile by destroying existing nodes and redeploying using the new template
|
non_process
|
changing node template of a deployed pool doesn t reconcile rancher versions rancher server you have the option to change the node template of an already deployed pool but saving it doesn t actually change anything so it no longer reflects the deployed pool we should either disable the changing of the node template once deployed or we should reconcile by destroying existing nodes and redeploying using the new template
| 0
|
131,717
| 12,489,129,851
|
IssuesEvent
|
2020-05-31 17:21:46
|
corona-warn-app/cwa-documentation
|
https://api.github.com/repos/corona-warn-app/cwa-documentation
|
closed
|
Deployment code and documentation (Kubernetes)
|
documentation enhancement
|
<!--
Thanks for pointing us to missing information 🙌 ❤️
Before opening a new issue, please make sure that we do not have any duplicates already open. You can ensure this by searching the issue list for this repository. If there is a duplicate, please close your issue and add a comment to the existing issue instead.
-->
## What is missing
<!-- Outline the information that you would like to see added. Please be rather specific (e.g., not only 'more information about', but what exactly is missing). -->
Im just wondering where the documentation and source code about the general server side deployment resides. Is the source code already available somewhere?
## Why should it be included
<!-- Which aspects of the corona warn app project cannot be properly understood without this information? -->
I think it’s an important part of the application, especially from the security point of view.
## Where should it be included
<!-- If you think the information should be part of a specific existing document, please let us know. -->
Maybe here or in a dedicated repository.
|
1.0
|
Deployment code and documentation (Kubernetes) - <!--
Thanks for pointing us to missing information 🙌 ❤️
Before opening a new issue, please make sure that we do not have any duplicates already open. You can ensure this by searching the issue list for this repository. If there is a duplicate, please close your issue and add a comment to the existing issue instead.
-->
## What is missing
<!-- Outline the information that you would like to see added. Please be rather specific (e.g., not only 'more information about', but what exactly is missing). -->
Im just wondering where the documentation and source code about the general server side deployment resides. Is the source code already available somewhere?
## Why should it be included
<!-- Which aspects of the corona warn app project cannot be properly understood without this information? -->
I think it’s an important part of the application, especially from the security point of view.
## Where should it be included
<!-- If you think the information should be part of a specific existing document, please let us know. -->
Maybe here or in a dedicated repository.
|
non_process
|
deployment code and documentation kubernetes thanks for pointing us to missing information 🙌 ❤️ before opening a new issue please make sure that we do not have any duplicates already open you can ensure this by searching the issue list for this repository if there is a duplicate please close your issue and add a comment to the existing issue instead what is missing im just wondering where the documentation and source code about the general server side deployment resides is the source code already available somewhere why should it be included i think it’s an important part of the application especially from the security point of view where should it be included maybe here or in a dedicated repository
| 0
|
13,743
| 16,496,327,583
|
IssuesEvent
|
2021-05-25 10:45:44
|
keep-network/keep-core
|
https://api.github.com/repos/keep-network/keep-core
|
closed
|
Operator contracts for authorization not showing in applications page (Ropsten)
|
:old_key: token dashboard process & client team
|
### Background
Last week we deployed an updated token-dashboard against Ropsten, from master. Last fully functional version of the token dashboard deployed was https://github.com/keep-network/keep-core/releases/tag/v1.2.4-rc which did not include the overview page.
Doing a fresh delegation with authorizer set to the operator address results in beacon and tBTC operator contracts not appearing for authorization in the applications page.
Reviewing an older delegation, that had already authorized beacon and tBTC operator contracts for the current release shows no history of authorization.
By reverting the token-dashboard to the `v1.2.4-rc` client version https://github.com/keep-network/keep-core/releases/tag/v1.2.4-rc the applications page was restored to normal behavior. I could both authorize operator contracts for new delegations, and see history for older delegations.
tl;dr I think we're looking for some changes between https://github.com/keep-network/keep-core/releases/tag/v1.2.4-rc and https://github.com/keep-network/keep-core/tree/sthompson22/token-dash/use-tbtc-v1.0.3-rc
### How the issue was surfaced
- Deploy updated token-dashboard to keep-test (from master)
- [Branch](https://github.com/keep-network/keep-core/tree/sthompson22/token-dash/use-tbtc-v1.0.3-rc) for deployment where issue was found.
#### Fresh delegation
1. Get a fresh token grant for account.
2. Login via Metamask with account to token-dashboard.
2. Setup a new delegation. Beneficiary, Authorizer, Operator all set to the same account. 100k KEEP.
3. Go to applications page, empty.
#### Check older delegation
1. Login via metamask with account to token-dashboard.
2. Double check delegation history (should be a delegation that was done before the token-dashboard from master was deployed
3. Go to applications page, empty.
### How to reproduce
1. Get a grant
2. Do a delegation setting operator and authorizer to the same account
3. Check the applications page
|
1.0
|
Operator contracts for authorization not showing in applications page (Ropsten) - ### Background
Last week we deployed an updated token-dashboard against Ropsten, from master. Last fully functional version of the token dashboard deployed was https://github.com/keep-network/keep-core/releases/tag/v1.2.4-rc which did not include the overview page.
Doing a fresh delegation with authorizer set to the operator address results in beacon and tBTC operator contracts not appearing for authorization in the applications page.
Reviewing an older delegation, that had already authorized beacon and tBTC operator contracts for the current release shows no history of authorization.
By reverting the token-dashboard to the `v1.2.4-rc` client version https://github.com/keep-network/keep-core/releases/tag/v1.2.4-rc the applications page was restored to normal behavior. I could both authorize operator contracts for new delegations, and see history for older delegations.
tl;dr I think we're looking for some changes between https://github.com/keep-network/keep-core/releases/tag/v1.2.4-rc and https://github.com/keep-network/keep-core/tree/sthompson22/token-dash/use-tbtc-v1.0.3-rc
### How the issue was surfaced
- Deploy updated token-dashboard to keep-test (from master)
- [Branch](https://github.com/keep-network/keep-core/tree/sthompson22/token-dash/use-tbtc-v1.0.3-rc) for deployment where issue was found.
#### Fresh delegation
1. Get a fresh token grant for account.
2. Login via Metamask with account to token-dashboard.
2. Setup a new delegation. Beneficiary, Authorizer, Operator all set to the same account. 100k KEEP.
3. Go to applications page, empty.
#### Check older delegation
1. Login via metamask with account to token-dashboard.
2. Double check delegation history (should be a delegation that was done before the token-dashboard from master was deployed
3. Go to applications page, empty.
### How to reproduce
1. Get a grant
2. Do a delegation setting operator and authorizer to the same account
3. Check the applications page
|
process
|
operator contracts for authorization not showing in applications page ropsten background last week we deployed an updated token dashboard against ropsten from master last fully functional version of the token dashboard deployed was which did not include the overview page doing a fresh delegation with authorizer set to the operator address results in beacon and tbtc operator contracts not appearing for authorization in the applications page reviewing an older delegation that had already authorized beacon and tbtc operator contracts for the current release shows no history of authorization by reverting the token dashboard to the rc client version the applications page was restored to normal behavior i could both authorize operator contracts for new delegations and see history for older delegations tl dr i think we re looking for some changes between and how the issue was surfaced deploy updated token dashboard to keep test from master for deployment where issue was found fresh delegation get a fresh token grant for account login via metamask with account to token dashboard setup a new delegation beneficiary authorizer operator all set to the same account keep go to applications page empty check older delegation login via metamask with account to token dashboard double check delegation history should be a delegation that was done before the token dashboard from master was deployed go to applications page empty how to reproduce get a grant do a delegation setting operator and authorizer to the same account check the applications page
| 1
|
512,390
| 14,895,554,033
|
IssuesEvent
|
2021-01-21 09:15:38
|
webcompat/web-bugs
|
https://api.github.com/repos/webcompat/web-bugs
|
closed
|
chevron-gsc.secure.force.com - site is not usable
|
browser-firefox engine-gecko ml-needsdiagnosis-false os-linux priority-critical
|
<!-- @browser: Firefox 85.0 -->
<!-- @ua_header: Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:85.0) Gecko/20100101 Firefox/85.0 -->
<!-- @reported_with: desktop-reporter -->
<!-- @public_url: https://github.com/webcompat/web-bugs/issues/65948 -->
**URL**: https://chevron-gsc.secure.force.com/lubeteksupport
**Browser / Version**: Firefox 85.0
**Operating System**: Ubuntu
**Tested Another Browser**: Yes Chrome
**Problem type**: Site is not usable
**Description**: Problems with Captcha
**Steps to Reproduce**:
If I fill out the form for technical questions, it will not submit it. It may be due to: "ERROR for site owner:
Invalid domain for site key" in red highlights next to the Captcha block.
<details>
<summary>View the screenshot</summary>
<img alt="Screenshot" src="https://webcompat.com/uploads/2021/1/246c0136-4203-42a2-8604-26a9d150006e.jpeg">
</details>
<details>
<summary>Browser Configuration</summary>
<ul>
<li>gfx.webrender.all: false</li><li>gfx.webrender.blob-images: true</li><li>gfx.webrender.enabled: false</li><li>image.mem.shared: true</li><li>buildID: 20210114193053</li><li>channel: beta</li><li>hasTouchScreen: false</li><li>mixed active content blocked: false</li><li>mixed passive content blocked: false</li><li>tracking content blocked: false</li>
</ul>
</details>
[View console log messages](https://webcompat.com/console_logs/2021/1/5abc6892-267c-462e-9e59-ac7b22adbf51)
_From [webcompat.com](https://webcompat.com/) with ❤️_
|
1.0
|
chevron-gsc.secure.force.com - site is not usable - <!-- @browser: Firefox 85.0 -->
<!-- @ua_header: Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:85.0) Gecko/20100101 Firefox/85.0 -->
<!-- @reported_with: desktop-reporter -->
<!-- @public_url: https://github.com/webcompat/web-bugs/issues/65948 -->
**URL**: https://chevron-gsc.secure.force.com/lubeteksupport
**Browser / Version**: Firefox 85.0
**Operating System**: Ubuntu
**Tested Another Browser**: Yes Chrome
**Problem type**: Site is not usable
**Description**: Problems with Captcha
**Steps to Reproduce**:
If I fill out the form for technical questions, it will not submit it. It may be due to: "ERROR for site owner:
Invalid domain for site key" in red highlights next to the Captcha block.
<details>
<summary>View the screenshot</summary>
<img alt="Screenshot" src="https://webcompat.com/uploads/2021/1/246c0136-4203-42a2-8604-26a9d150006e.jpeg">
</details>
<details>
<summary>Browser Configuration</summary>
<ul>
<li>gfx.webrender.all: false</li><li>gfx.webrender.blob-images: true</li><li>gfx.webrender.enabled: false</li><li>image.mem.shared: true</li><li>buildID: 20210114193053</li><li>channel: beta</li><li>hasTouchScreen: false</li><li>mixed active content blocked: false</li><li>mixed passive content blocked: false</li><li>tracking content blocked: false</li>
</ul>
</details>
[View console log messages](https://webcompat.com/console_logs/2021/1/5abc6892-267c-462e-9e59-ac7b22adbf51)
_From [webcompat.com](https://webcompat.com/) with ❤️_
|
non_process
|
chevron gsc secure force com site is not usable url browser version firefox operating system ubuntu tested another browser yes chrome problem type site is not usable description problems with captcha steps to reproduce if i fill out the form for technical questions it will not submit it it may be due to error for site owner invalid domain for site key in red highlights next to the captcha block view the screenshot img alt screenshot src browser configuration gfx webrender all false gfx webrender blob images true gfx webrender enabled false image mem shared true buildid channel beta hastouchscreen false mixed active content blocked false mixed passive content blocked false tracking content blocked false from with ❤️
| 0
|
15,395
| 19,580,009,537
|
IssuesEvent
|
2022-01-04 19:57:38
|
microsoft/vscode
|
https://api.github.com/repos/microsoft/vscode
|
reopened
|
Spawning new terminal with workspace folder is broken since v1.62.0-insider
|
bug insiders-released terminal-process
|
Issue Type: <b>Bug</b>
1. Create workspace file
```
{
"folders": [
{
"name": "scripts",
"path": "."
},
{
"name": "other",
"path": ".//other"
},
],
"settings": {
"terminal.integrated.cwd": "${workspaceFolder}",
}
}
```
Create the **other** folder.
2. Open it in Visual Studio Code on Windows
3. Terminal \ New Terminal, select 'scripts'
4. Observe error:
The terminal process failed to launch: Starting directory (cwd) "D:\sources\app\scripts\D:\sources\app\scripts" does not exist.
VS Code version: Code - Insiders 1.62.0-insider (ff1e16eebb93af79fd6d7af1356c4003a120c563, 2021-10-29T05:16:23.014Z)
OS version: Windows_NT x64 10.0.19043
Restricted Mode: No
<details>
<summary>System Info</summary>
|Item|Value|
|---|---|
|CPUs|11th Gen Intel(R) Core(TM) i7-1165G7 @ 2.80GHz (8 x 2803)|
|GPU Status|2d_canvas: enabled<br>gpu_compositing: enabled<br>multiple_raster_threads: enabled_on<br>oop_rasterization: enabled<br>opengl: enabled_on<br>rasterization: enabled<br>skia_renderer: enabled_on<br>video_decode: enabled<br>vulkan: disabled_off<br>webgl: enabled<br>webgl2: enabled|
|Load (avg)|undefined|
|Memory (System)|31.66GB (21.15GB free)|
|Process Argv|--crash-reporter-id 084d9322-068c-436b-a87b-95494a395812|
|Screen Reader|no|
|VM|0%|
</details>
<!-- generated by issue reporter -->
|
1.0
|
Spawning new terminal with workspace folder is broken since v1.62.0-insider - Issue Type: <b>Bug</b>
1. Create workspace file
```
{
"folders": [
{
"name": "scripts",
"path": "."
},
{
"name": "other",
"path": ".//other"
},
],
"settings": {
"terminal.integrated.cwd": "${workspaceFolder}",
}
}
```
Create the **other** folder.
2. Open it in Visual Studio Code on Windows
3. Terminal \ New Terminal, select 'scripts'
4. Observe error:
The terminal process failed to launch: Starting directory (cwd) "D:\sources\app\scripts\D:\sources\app\scripts" does not exist.
VS Code version: Code - Insiders 1.62.0-insider (ff1e16eebb93af79fd6d7af1356c4003a120c563, 2021-10-29T05:16:23.014Z)
OS version: Windows_NT x64 10.0.19043
Restricted Mode: No
<details>
<summary>System Info</summary>
|Item|Value|
|---|---|
|CPUs|11th Gen Intel(R) Core(TM) i7-1165G7 @ 2.80GHz (8 x 2803)|
|GPU Status|2d_canvas: enabled<br>gpu_compositing: enabled<br>multiple_raster_threads: enabled_on<br>oop_rasterization: enabled<br>opengl: enabled_on<br>rasterization: enabled<br>skia_renderer: enabled_on<br>video_decode: enabled<br>vulkan: disabled_off<br>webgl: enabled<br>webgl2: enabled|
|Load (avg)|undefined|
|Memory (System)|31.66GB (21.15GB free)|
|Process Argv|--crash-reporter-id 084d9322-068c-436b-a87b-95494a395812|
|Screen Reader|no|
|VM|0%|
</details>
<!-- generated by issue reporter -->
|
process
|
spawning new terminal with workspace folder is broken since insider issue type bug create workspace file folders name scripts path name other path other settings terminal integrated cwd workspacefolder create the other folder open it in visual studio code on windows terminal new terminal select scripts observe error the terminal process failed to launch starting directory cwd d sources app scripts d sources app scripts does not exist vs code version code insiders insider os version windows nt restricted mode no system info item value cpus gen intel r core tm x gpu status canvas enabled gpu compositing enabled multiple raster threads enabled on oop rasterization enabled opengl enabled on rasterization enabled skia renderer enabled on video decode enabled vulkan disabled off webgl enabled enabled load avg undefined memory system free process argv crash reporter id screen reader no vm
| 1
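The doubled directory in the error message suggests an already-expanded `${workspaceFolder}` value being joined onto the workspace root a second time. A minimal sketch of that suspicion (this is an assumption drawn from the error text, not the actual VS Code source):

```js
const path = require('path');

// Both values below come straight from the report.
const workspaceRoot = 'D:\\sources\\app\\scripts';
const resolvedCwd = 'D:\\sources\\app\\scripts'; // ${workspaceFolder}, already expanded

// Joining an absolute path as if it were relative reproduces the error's cwd.
console.log(path.win32.join(workspaceRoot, resolvedCwd));
// -> D:\sources\app\scripts\D:\sources\app\scripts
```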
|
3,513
| 4,473,906,654
|
IssuesEvent
|
2016-08-26 07:10:02
|
meanjs/mean
|
https://api.github.com/repos/meanjs/mean
|
closed
|
Unlink old(previous) profile images
|
issue:bug issue:security need:pr_required platform:node
|
Currently old profile images are not unlinked upon change and remain on disk. If a user changes his/her profile image frequently that can lead to disk space leakage.
|
True
|
Unlink old(previous) profile images - Currently old profile images are not unlinked upon change and remain on disk. If a user changes his/her profile image frequently that can lead to disk space leakage.
|
non_process
|
unlink old previous profile images currently old profile images are not unlinked upon change and remain on disk if a user changes his her profile image frequently that can lead to disk space leakage
| 0
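A minimal sketch of the fix being requested, with hypothetical names (the real MEAN.JS change would live in the user profile controller):

```js
const fs = require('fs');

// Remove the previous profile image once a new one has been saved.
// Ignores the case where the old file is already gone.
function unlinkOldProfileImage(oldImagePath) {
  if (!oldImagePath) return;
  fs.unlink(oldImagePath, (err) => {
    if (err && err.code !== 'ENOENT') {
      console.error('Failed to unlink old profile image:', err);
    }
  });
}
```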
|
19,854
| 26,255,419,491
|
IssuesEvent
|
2023-01-05 23:59:03
|
googleapis/python-ndb
|
https://api.github.com/repos/googleapis/python-ndb
|
closed
|
Document that this project is in maintenance mode?
|
type: process api: datastore
|
From https://github.com/googleapis/google-cloud-python/issues/10566#issuecomment-1101110046 :
> The current owners of the Datastore libraries team let me know that NDB is not currently being developed, but things may change in the future.
OK! Fair enough. Maybe consider documenting that in the README and on https://googleapis.dev/python/python-ndb/latest/ ? And also whether it's maintained, and just not actively adding features, or totally unmaintained?
(Thank you for all of your work up until this point, btw!)
|
1.0
|
Document that this project is in maintenance mode? - From https://github.com/googleapis/google-cloud-python/issues/10566#issuecomment-1101110046 :
> The current owners of the Datastore libraries team let me know that NDB is not currently being developed, but things may change in the future.
OK! Fair enough. Maybe consider documenting that in the README and on https://googleapis.dev/python/python-ndb/latest/ ? And also whether it's maintained, and just not actively adding features, or totally unmaintained?
(Thank you for all of your work up until this point, btw!)
|
process
|
document that this project is in maintenance mode from the current owners of the datastore libraries team let me know that ndb is not currently being developed but things may change in the future ok fair enough maybe consider documenting that in the readme and on and also whether it s maintained and just not actively adding features or totally unmaintained thank you for all of your work up until this point btw
| 1
|
21,440
| 29,478,635,698
|
IssuesEvent
|
2023-06-02 02:07:49
|
cypress-io/cypress
|
https://api.github.com/repos/cypress-io/cypress
|
closed
|
webpack-preprocessor disables sourcemap
|
stage: backlog npm: @cypress/webpack-preprocessor stale
|
I am currently using webpack-preprocessor in my plugin.js
```js
const useBabelRcWp = (function () {
const webpackOptions = findWebpack.getWebpackOptions()
console.log('=========================>webpackoptions', webpackOptions)
webpackOptions.devtool = 'eval-source-map'
const cleanOptions = {
reactScripts: true
}
findWebpack.cleanForCypress(cleanOptions, webpackOptions)
return webpackPreprocessor({
webpackOptions,
watchOptions: {}
})
})()
module.exports = (on, config) => {
require('@cypress/code-coverage/task')(on, config)
on('file:preprocessor', useBabelRcWp)
addMatchImageSnapshotPlugin(on, config)
if (config.testingType === 'component') {
require('@cypress/react/plugins/babel')(on, config)
}
return config
}
```
Even when i explicitly set the devtool option i do not see my test files sourcemap in browser.
|
1.0
|
webpack-preprocessor disables sourcemap - I am currently using webpack-preprocessor in my plugin.js
```js
const useBabelRcWp = (function () {
const webpackOptions = findWebpack.getWebpackOptions()
console.log('=========================>webpackoptions', webpackOptions)
webpackOptions.devtool = 'eval-source-map'
const cleanOptions = {
reactScripts: true
}
findWebpack.cleanForCypress(cleanOptions, webpackOptions)
return webpackPreprocessor({
webpackOptions,
watchOptions: {}
})
})()
module.exports = (on, config) => {
require('@cypress/code-coverage/task')(on, config)
on('file:preprocessor', useBabelRcWp)
addMatchImageSnapshotPlugin(on, config)
if (config.testingType === 'component') {
require('@cypress/react/plugins/babel')(on, config)
}
return config
}
```
Even when i explicitly set the devtool option i do not see my test files sourcemap in browser.
|
process
|
webpack preprocessor disables sourcemap i am currently using webpack preprocessor in my plugin js js const usebabelrcwp function const webpackoptions findwebpack getwebpackoptions console log webpackoptions webpackoptions webpackoptions devtool eval source map const cleanoptions reactscripts true findwebpack cleanforcypress cleanoptions webpackoptions return webpackpreprocessor webpackoptions watchoptions module exports on config require cypress code coverage task on config on file preprocessor usebabelrcwp addmatchimagesnapshotplugin on config if config testingtype component require cypress react plugins babel on config return config even when i explicitly set the devtool option i do not see my test files sourcemap in browser
| 1
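One thing worth ruling out is the preprocessor or `cleanForCypress` resetting `devtool` after it is assigned. A minimal sketch, under the assumption that setting it after cleaning and using inline maps survives the pipeline (the API names are taken from the snippet above):

```js
const findWebpack = require('find-webpack');
const webpackPreprocessor = require('@cypress/webpack-preprocessor');

const webpackOptions = findWebpack.getWebpackOptions();
findWebpack.cleanForCypress({ reactScripts: true }, webpackOptions);

// Set devtool *after* cleaning so nothing downstream clears it; inline maps
// are embedded in the bundle and tend to survive dev-server plumbing.
webpackOptions.devtool = 'inline-source-map';
console.log('final devtool:', webpackOptions.devtool);

module.exports = webpackPreprocessor({ webpackOptions, watchOptions: {} });
```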
|
736,162
| 25,460,707,717
|
IssuesEvent
|
2022-11-24 18:35:42
|
jinh0/cloudberry
|
https://api.github.com/repos/jinh0/cloudberry
|
closed
|
Classes missing VSB data
|
bug high priority
|
It works on my local computer, but not on cloudberry.fyi.... what's going on?
|
1.0
|
Classes missing VSB data - It works on my local computer, but not on cloudberry.fyi.... what's going on?
|
non_process
|
classes missing vsb data it works on my local computer but not on cloudberry fyi what s going on
| 0
|
1,930
| 4,761,371,750
|
IssuesEvent
|
2016-10-25 08:01:09
|
paulkornikov/Pragonas
|
https://api.github.com/repos/paulkornikov/Pragonas
|
closed
|
Budget extension service
|
a-new feature budget contrat processus workload II
|
on the backend side:
- extend the budget
- generate the payment schedule
|
1.0
|
Budget extension service - on the backend side:
- extend the budget
- generate the payment schedule
|
process
|
budget extension service on the backend side extend the budget generate the payment schedule
| 1
|
2,303
| 5,117,573,717
|
IssuesEvent
|
2017-01-07 18:19:25
|
jlm2017/jlm-video-subtitles
|
https://api.github.com/repos/jlm2017/jlm-video-subtitles
|
closed
|
[Subtitles] [FR] #RDLS13 : CONDITIONS DE TRAVAIL, AUCHAN, MULLIEZ, IMPÔT, SYRIE, TROPHÉE 100 000 ABONNÉS, ANNONCE FAQ
|
Language: French Process: [6] Approved
|
# Video title
#RDLS13 : CONDITIONS DE TRAVAIL, AUCHAN, MULLIEZ, IMPÔT, SYRIE, TROPHÉE 100 000 ABONNÉS, ANNONCE FAQ
# URL
https://www.youtube.com/watch?v=hIrpyKHXry8
# Youtube subtitles language
Français
# Duration
26:42
# Subtitles URL
https://www.youtube.com/timedtext_editor?ref=player&ui=hd&lang=fr&v=hIrpyKHXry8&tab=captions&bl=vmp&action_mde_edit_form=1
|
1.0
|
[Subtitles] [FR] #RDLS13 : CONDITIONS DE TRAVAIL, AUCHAN, MULLIEZ, IMPÔT, SYRIE, TROPHÉE 100 000 ABONNÉS, ANNONCE FAQ - # Video title
#RDLS13 : CONDITIONS DE TRAVAIL, AUCHAN, MULLIEZ, IMPÔT, SYRIE, TROPHÉE 100 000 ABONNÉS, ANNONCE FAQ
# URL
https://www.youtube.com/watch?v=hIrpyKHXry8
# Youtube subtitles language
Français
# Duration
26:42
# Subtitles URL
https://www.youtube.com/timedtext_editor?ref=player&ui=hd&lang=fr&v=hIrpyKHXry8&tab=captions&bl=vmp&action_mde_edit_form=1
|
process
|
conditions de travail auchan mulliez impôt syrie trophée abonnés annonce faq video title conditions de travail auchan mulliez impôt syrie trophée abonnés annonce faq url youtube subtitles language français duration subtitles url
| 1
|
9,123
| 6,773,628,590
|
IssuesEvent
|
2017-10-27 07:05:07
|
DistributedTeam/MongoDB
|
https://api.github.com/repos/DistributedTeam/MongoDB
|
opened
|
Can I use more replica nodes to scale?
|
performance reading
|
http://www.askasya.com/post/canreplicashelpscaling/
Will more servers help you handle same reads faster?
I think the answer for simple operational reads is obviously no. If a read takes 10μs then it’s not likely to take 1/5th of that just because there are five servers - this is a single unit of work. That’s the actual duration of the read.
Will more servers help you handle more reads?
Intuitively, it feels like the answer should be “yes” - but that would only be the case if the reads somehow interfered with each other on the single node. If they are reading the same “hot” data then they can be working in parallel up to the limit of your CPUs. So in real life, the answer to whether all your replica nodes together can handle more reads than just your primary is maybe yes and maybe no. Usually no. It all depends on why your single primary cannot handle all of the reads by itself.
|
True
|
Can I use more replica nodes to scale? - http://www.askasya.com/post/canreplicashelpscaling/
Will more servers help you handle same reads faster?
I think the answer for simple operational reads is obviously no. If a read takes 10μs then it’s not likely to take 1/5th of that just because there are five servers - this is a single unit of work. That’s the actual duration of the read.
Will more servers help you handle more reads?
Intuitively, it feels like the answer should be “yes” - but that would only be the case if the reads somehow interfered with each other on the single node. If they are reading the same “hot” data then they can be working in parallel up to the limit of your CPUs. So in real life, the answer to whether all your replica nodes together can handle more reads than just your primary is maybe yes and maybe no. Usually no. It all depends on why your single primary cannot handle all of the reads by itself.
|
non_process
|
can i use more replica nodes to scale will more servers help you handle same reads faster i think the answer for simple operational reads is obviously no if a read takes then it’s not likely to take of that just because there are five servers this is a single unit of work that’s the actual duration of the read will more servers help you handle more reads intuitively it feels like the answer should be “yes” but that would only be the case if the reads somehow interfered with each other on the single node if they are reading the same “hot” data then they can be working in parallel up to the limit of your cpus so in real life the answer to whether all your replica nodes together can handle more reads than just your primary is maybe yes and maybe no usually no it all depends on why your single primary cannot handle all of the reads by itself
| 0
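For concreteness, directing reads at secondaries is a per-connection or per-operation choice in most drivers; a minimal Node.js sketch (connection string and names are illustrative), which does nothing to change the point above that each read is still a single unit of work on whichever node serves it:

```js
const { MongoClient } = require('mongodb');

async function main() {
  // Connect to a replica set; secondaryPreferred lets reads land on
  // secondaries when one is available.
  const client = await MongoClient.connect('mongodb://host1,host2,host3/?replicaSet=rs0');
  const docs = await client
    .db('app')
    .collection('items')
    .find({}, { readPreference: 'secondaryPreferred' })
    .toArray();
  console.log(docs.length);
  await client.close();
}

main().catch(console.error);
```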
|
40,250
| 9,935,800,976
|
IssuesEvent
|
2019-07-02 17:28:11
|
vector-im/riot-web
|
https://api.github.com/repos/vector-im/riot-web
|
closed
|
Clicking the reactions button causes flashing
|
bug defect feature:aggregations feature:reactions phase:2
|
The scrollbar on the timeline disappears and comes back, the highlight on the message flashes, and the tooltip itself flashes. fwiw my first instinct is to try and click the button
|
1.0
|
Clicking the reactions button causes flashing - The scrollbar on the timeline disappears and comes back, the highlight on the message flashes, and the tooltip itself flashes. fwiw my first instinct is to try and click the button
|
non_process
|
clicking the reactions button causes flashing the scrollbar on the timeline disappears and comes back the highlight on the message flashes and the tooltip itself flashes fwiw my first instinct is to try and click the button
| 0
|