Unnamed: 0
int64 0
832k
| id
float64 2.49B
32.1B
| type
stringclasses 1
value | created_at
stringlengths 19
19
| repo
stringlengths 7
112
| repo_url
stringlengths 36
141
| action
stringclasses 3
values | title
stringlengths 1
744
| labels
stringlengths 4
574
| body
stringlengths 9
211k
| index
stringclasses 10
values | text_combine
stringlengths 96
211k
| label
stringclasses 2
values | text
stringlengths 96
188k
| binary_label
int64 0
1
|
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
12,212
| 14,742,936,114
|
IssuesEvent
|
2021-01-07 13:08:36
|
kdjstudios/SABillingGitlab
|
https://api.github.com/repos/kdjstudios/SABillingGitlab
|
closed
|
Laser - payment posting issue | Parent: 1483
|
anc-process anp-0.5 ant-bug ant-support
|
In GitLab by @kdjstudios on Jun 25, 2019, 15:54
**Submitted by:** Sharon Carver <scarver@laseranswering.com>
**Helpdesk:** http://www.servicedesk.answernet.com/profiles/ticket/2019-06-25-61384/conversation
**Server:** External
**Client/Site:** Laser
**Account:** 5147
**Issue:**
Posting payments in SA billing issue:
When a payment is posted, SA billing ‘assigns’ that payment to the oldest invoice. If there is an error in that payment so it needs to be deleted and re-entered with corrected info on the same account, that payment now looks like it is ‘assigned’ to the next newer invoice in line when you look at the payment history. In other words, the deleted payment is not fully deleted as it is still causing errors in payment history.
See our customer acct# 5147 Eastend Apts.
|
1.0
|
Laser - payment posting issue | Parent: 1483 - In GitLab by @kdjstudios on Jun 25, 2019, 15:54
**Submitted by:** Sharon Carver <scarver@laseranswering.com>
**Helpdesk:** http://www.servicedesk.answernet.com/profiles/ticket/2019-06-25-61384/conversation
**Server:** External
**Client/Site:** Laser
**Account:** 5147
**Issue:**
Posting payments in SA billing issue:
When a payment is posted, SA billing ‘assigns’ that payment to the oldest invoice. If there is an error in that payment so it needs to be deleted and re-entered with corrected info on the same account, that payment now looks like it is ‘assigned’ to the next newer invoice in line when you look at the payment history. In other words, the deleted payment is not fully deleted as it is still causing errors in payment history.
See our customer acct# 5147 Eastend Apts.
|
process
|
laser payment posting issue parent in gitlab by kdjstudios on jun submitted by sharon carver helpdesk server external client site laser account issue posting payments in sa billing issue when a payment is posted sa billing ‘assigns’ that payment to the oldest invoice if there is an error in that payment so it needs to be deleted and re entered with corrected info on the same account that payment now looks like it is ‘assigned’ to the next newer invoice in line when you look at the payment history in other words the deleted payment is not fully deleted as it is still causing errors in payment history see our customer acct eastend apts
| 1
|
281,362
| 24,387,097,822
|
IssuesEvent
|
2022-10-04 12:40:02
|
HyphaApp/hypha
|
https://api.github.com/repos/HyphaApp/hypha
|
closed
|
Improve visibility of what forms are linked to what funds on the wagtail admin fund page as have been done on the rounds page
|
Type: Enhancement Status: Needs testing
|
Add links to form on the wagtail admin fund page as have been done on the rounds page
**Is your feature request related to a problem? Please describe.**
As a user of Hypha that creates, edits, manages etc. funds for my funding organisation I want to be able to quickly see what forms are linked to a fund display in a similar column format as currently present in the rounds page (See screenshots below) this allows me to better see and understand what forms are linked to a fund so that I might save myself time finding that out (and possibly editing or changing those forms) as part of my workflow.
**Is your feature request related to an existing functionality? Please describe.**
Related to existing Fund and Rounds ages in Wagtail admin.
**Describe the solution you'd like**
Same display table on funds as on rounds.
**Describe alternatives you've considered**
Keeping as is does not 'break' the usage of the software but does make it time consuming
**Additional context**
Rounds page in Wagtail:

Fund page in Wagtail:

**Priority**
- Low priority (annoying, would be nice to not see)
**Affected roles**
- Staff
**Ideal deadline**
Date when you'd like to see this accomplished and a reason, if appropriate.
|
1.0
|
Improve visibility of what forms are linked to what funds on the wagtail admin fund page as have been done on the rounds page - Add links to form on the wagtail admin fund page as have been done on the rounds page
**Is your feature request related to a problem? Please describe.**
As a user of Hypha that creates, edits, manages etc. funds for my funding organisation I want to be able to quickly see what forms are linked to a fund display in a similar column format as currently present in the rounds page (See screenshots below) this allows me to better see and understand what forms are linked to a fund so that I might save myself time finding that out (and possibly editing or changing those forms) as part of my workflow.
**Is your feature request related to an existing functionality? Please describe.**
Related to existing Fund and Rounds ages in Wagtail admin.
**Describe the solution you'd like**
Same display table on funds as on rounds.
**Describe alternatives you've considered**
Keeping as is does not 'break' the usage of the software but does make it time consuming
**Additional context**
Rounds page in Wagtail:

Fund page in Wagtail:

**Priority**
- Low priority (annoying, would be nice to not see)
**Affected roles**
- Staff
**Ideal deadline**
Date when you'd like to see this accomplished and a reason, if appropriate.
|
non_process
|
improve visibility of what forms are linked to what funds on the wagtail admin fund page as have been done on the rounds page add links to form on the wagtail admin fund page as have been done on the rounds page is your feature request related to a problem please describe as a user of hypha that creates edits manages etc funds for my funding organisation i want to be able to quickly see what forms are linked to a fund display in a similar column format as currently present in the rounds page see screenshots below this allows me to better see and understand what forms are linked to a fund so that i might save myself time finding that out and possibly editing or changing those forms as part of my workflow is your feature request related to an existing functionality please describe related to existing fund and rounds ages in wagtail admin describe the solution you d like same display table on funds as on rounds describe alternatives you ve considered keeping as is does not break the usage of the software but does make it time consuming additional context rounds page in wagtail fund page in wagtail priority low priority annoying would be nice to not see affected roles staff ideal deadline date when you d like to see this accomplished and a reason if appropriate
| 0
|
208,817
| 16,163,957,892
|
IssuesEvent
|
2021-05-01 06:01:14
|
GreaterGoodCorp/SuperHelper
|
https://api.github.com/repos/GreaterGoodCorp/SuperHelper
|
closed
|
Missing full stop in command help messages
|
bug misc-documentation priority-low
|
Location:
- `--debug` option of `helper`
- `--list` option of `helper`
|
1.0
|
Missing full stop in command help messages - Location:
- `--debug` option of `helper`
- `--list` option of `helper`
|
non_process
|
missing full stop in command help messages location debug option of helper list option of helper
| 0
|
284,264
| 30,913,612,702
|
IssuesEvent
|
2023-08-05 02:23:51
|
Nivaskumark/kernel_v4.19.72_old
|
https://api.github.com/repos/Nivaskumark/kernel_v4.19.72_old
|
reopened
|
CVE-2020-16119 (High) detected in linux-yoctov5.4.51
|
Mend: dependency security vulnerability
|
## CVE-2020-16119 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linux-yoctov5.4.51</b></p></summary>
<p>
<p>Yocto Linux Embedded kernel</p>
<p>Library home page: <a href=https://git.yoctoproject.org/git/linux-yocto>https://git.yoctoproject.org/git/linux-yocto</a></p>
<p>Found in HEAD commit: <a href="https://github.com/Nivaskumark/kernel_v4.19.72/commit/ce49083a1c14be2d13cb5e878257d293e6c748bc">ce49083a1c14be2d13cb5e878257d293e6c748bc</a></p>
<p>Found in base branch: <b>master</b></p></p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (2)</summary>
<p></p>
<p>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/net/dccp/minisocks.c</b>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/net/dccp/minisocks.c</b>
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png?' width=19 height=20> Vulnerability Details</summary>
<p>
Use-after-free vulnerability in the Linux kernel exploitable by a local attacker due to reuse of a DCCP socket with an attached dccps_hc_tx_ccid object as a listener after being released. Fixed in Ubuntu Linux kernel 5.4.0-51.56, 5.3.0-68.63, 4.15.0-121.123, 4.4.0-193.224, 3.13.0.182.191 and 3.2.0-149.196.
<p>Publish Date: 2021-01-14
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2020-16119>CVE-2020-16119</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://nvd.nist.gov/vuln/detail/CVE-2020-16119">https://nvd.nist.gov/vuln/detail/CVE-2020-16119</a></p>
<p>Release Date: 2021-01-14</p>
<p>Fix Resolution: linux-libc-headers - 5.14;linux-yocto - 5.4.20+gitAUTOINC+c11911d4d1_f4d7dbafb1,4.8.26+gitAUTOINC+1c60e003c7_27efc3ba68</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
CVE-2020-16119 (High) detected in linux-yoctov5.4.51 - ## CVE-2020-16119 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linux-yoctov5.4.51</b></p></summary>
<p>
<p>Yocto Linux Embedded kernel</p>
<p>Library home page: <a href=https://git.yoctoproject.org/git/linux-yocto>https://git.yoctoproject.org/git/linux-yocto</a></p>
<p>Found in HEAD commit: <a href="https://github.com/Nivaskumark/kernel_v4.19.72/commit/ce49083a1c14be2d13cb5e878257d293e6c748bc">ce49083a1c14be2d13cb5e878257d293e6c748bc</a></p>
<p>Found in base branch: <b>master</b></p></p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (2)</summary>
<p></p>
<p>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/net/dccp/minisocks.c</b>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/net/dccp/minisocks.c</b>
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png?' width=19 height=20> Vulnerability Details</summary>
<p>
Use-after-free vulnerability in the Linux kernel exploitable by a local attacker due to reuse of a DCCP socket with an attached dccps_hc_tx_ccid object as a listener after being released. Fixed in Ubuntu Linux kernel 5.4.0-51.56, 5.3.0-68.63, 4.15.0-121.123, 4.4.0-193.224, 3.13.0.182.191 and 3.2.0-149.196.
<p>Publish Date: 2021-01-14
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2020-16119>CVE-2020-16119</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://nvd.nist.gov/vuln/detail/CVE-2020-16119">https://nvd.nist.gov/vuln/detail/CVE-2020-16119</a></p>
<p>Release Date: 2021-01-14</p>
<p>Fix Resolution: linux-libc-headers - 5.14;linux-yocto - 5.4.20+gitAUTOINC+c11911d4d1_f4d7dbafb1,4.8.26+gitAUTOINC+1c60e003c7_27efc3ba68</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_process
|
cve high detected in linux cve high severity vulnerability vulnerable library linux yocto linux embedded kernel library home page a href found in head commit a href found in base branch master vulnerable source files net dccp minisocks c net dccp minisocks c vulnerability details use after free vulnerability in the linux kernel exploitable by a local attacker due to reuse of a dccp socket with an attached dccps hc tx ccid object as a listener after being released fixed in ubuntu linux kernel and publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity low privileges required low user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution linux libc headers linux yocto gitautoinc gitautoinc step up your open source security game with mend
| 0
|
20,595
| 27,263,874,787
|
IssuesEvent
|
2023-02-22 16:36:36
|
zephyrproject-rtos/zephyr
|
https://api.github.com/repos/zephyrproject-rtos/zephyr
|
closed
|
[RFC] Do not allow self-merge
|
RFC Process
|
## Introduction
This proposal is about introducing a new rule to prevent PRs getting merged by their authors.
### Problem description
There's one edge case that could/should be avoided: people with merge rights merging their own PRs. This is a potential path to quickly get code merged without the necessary amount of review. As soon as it gets two approvals (which can be from colleagues), the author can quickly hit "Merge" before someone else comes and blocks. The majority of developers do not have such privilege.
### Proposed change
A PR can't be merged by its author. Exceptions: PRs with `Trivial` or `Hotfix` tags.
## Detailed RFC
See above.
### Proposed change (Detailed)
See above.
### Dependencies
None
### Concerns and Unresolved Questions
N/A
## Alternatives
Do nothing.
|
1.0
|
[RFC] Do not allow self-merge - ## Introduction
This proposal is about introducing a new rule to prevent PRs getting merged by their authors.
### Problem description
There's one edge case that could/should be avoided: people with merge rights merging their own PRs. This is a potential path to quickly get code merged without the necessary amount of review. As soon as it gets two approvals (which can be from colleagues), the author can quickly hit "Merge" before someone else comes and blocks. The majority of developers do not have such privilege.
### Proposed change
A PR can't be merged by its author. Exceptions: PRs with `Trivial` or `Hotfix` tags.
## Detailed RFC
See above.
### Proposed change (Detailed)
See above.
### Dependencies
None
### Concerns and Unresolved Questions
N/A
## Alternatives
Do nothing.
|
process
|
do not allow self merge introduction this proposal is about introducing a new rule to prevent prs getting merged by their authors problem description there s one edge case that could should be avoided people with merge rights merging their own prs this is a potential path to quickly get code merged without the necessary amount of review as soon as it gets two approvals which can be from colleagues the author can quickly hit merge before someone else comes and blocks the majority of developers do not have such privilege proposed change a pr can t be merged by its author exceptions prs with trivial or hotfix tags detailed rfc see above proposed change detailed see above dependencies none concerns and unresolved questions n a alternatives do nothing
| 1
|
15,435
| 19,635,422,211
|
IssuesEvent
|
2022-01-08 07:08:58
|
varabyte/kobweb
|
https://api.github.com/repos/varabyte/kobweb
|
opened
|
Figure out a testing story
|
process
|
We should at least have a handful of relevant unit tests, if not an integration test or two, running on some sort of CI.
Probably some sort of golden image set of tests is good enough? vs. testing against the DOM, which may be fragile to things like class names or attribute order changing around.
|
1.0
|
Figure out a testing story - We should at least have a handful of relevant unit tests, if not an integration test or two, running on some sort of CI.
Probably some sort of golden image set of tests is good enough? vs. testing against the DOM, which may be fragile to things like class names or attribute order changing around.
|
process
|
figure out a testing story we should at least have a handful of relevant unit tests if not an integration test or two running on some sort of ci probably some sort of golden image set of tests is good enough vs testing against the dom which may be fragile to things like class names or attribute order changing around
| 1
|
303,358
| 26,201,777,577
|
IssuesEvent
|
2023-01-03 18:11:05
|
cockroachdb/cockroach
|
https://api.github.com/repos/cockroachdb/cockroach
|
closed
|
roachtest: cdc/crdb-chaos failed
|
C-test-failure O-robot O-roachtest release-blocker T-cdc branch-release-22.1
|
roachtest.cdc/crdb-chaos [failed](https://teamcity.cockroachdb.com/viewLog.html?buildId=8147270&tab=buildLog) with [artifacts](https://teamcity.cockroachdb.com/viewLog.html?buildId=8147270&tab=artifacts#/cdc/crdb-chaos) on release-22.1 @ [000c9624b56b09d5fbd06557c559b2f910142a9c](https://github.com/cockroachdb/cockroach/commits/000c9624b56b09d5fbd06557c559b2f910142a9c):
```
| | github.com/cockroachdb/cockroach/pkg/roachprod/install/cluster_synced.go:2083
| | github.com/cockroachdb/cockroach/pkg/roachprod/install.(*SyncedCluster).Run
| | github.com/cockroachdb/cockroach/pkg/roachprod/install/cluster_synced.go:661
| | github.com/cockroachdb/cockroach/pkg/roachprod.Run
| | github.com/cockroachdb/cockroach/pkg/roachprod/roachprod.go:384
| | main.execCmdEx
| | main/pkg/cmd/roachtest/cluster.go:341
| | main.execCmd
| | main/pkg/cmd/roachtest/cluster.go:229
| | main.(*clusterImpl).RunE
| | main/pkg/cmd/roachtest/cluster.go:1954
| | main.(*clusterImpl).Run
| | main/pkg/cmd/roachtest/cluster.go:1932
| | github.com/cockroachdb/cockroach/pkg/cmd/roachtest/tests.(*tpccWorkload).run
| | github.com/cockroachdb/cockroach/pkg/cmd/roachtest/tests/cdc.go:1578
| | github.com/cockroachdb/cockroach/pkg/cmd/roachtest/tests.cdcBasicTest.func1
| | github.com/cockroachdb/cockroach/pkg/cmd/roachtest/tests/cdc.go:186
| | main.(*monitorImpl).Go.func1
| | main/pkg/cmd/roachtest/monitor.go:105
| | golang.org/x/sync/errgroup.(*Group).Go.func1
| | golang.org/x/sync/errgroup/external/org_golang_x_sync/errgroup/errgroup.go:57
| | runtime.goexit
| | GOROOT/src/runtime/asm_amd64.s:1581
| Wraps: (2) one or more parallel execution failure
| Error types: (1) *withstack.withStack (2) *errutil.leafError
Wraps: (5) context canceled
Error types: (1) *withstack.withStack (2) *errutil.withPrefix (3) *cluster.WithCommandDetails (4) *secondary.withSecondaryError (5) *errors.errorString
monitor.go:127,cdc.go:296,cdc.go:759,test_runner.go:883: monitor failure: monitor task failed: pq: Use of CHANGEFEED requires an enterprise license. Your evaluation license expired on December 30, 2022. If you're interested in getting a new license, please contact subscriptions@cockroachlabs.com and we can help you out.
(1) attached stack trace
-- stack trace:
| main.(*monitorImpl).WaitE
| main/pkg/cmd/roachtest/monitor.go:115
| main.(*monitorImpl).Wait
| main/pkg/cmd/roachtest/monitor.go:123
| github.com/cockroachdb/cockroach/pkg/cmd/roachtest/tests.cdcBasicTest
| github.com/cockroachdb/cockroach/pkg/cmd/roachtest/tests/cdc.go:296
| github.com/cockroachdb/cockroach/pkg/cmd/roachtest/tests.registerCDC.func5
| github.com/cockroachdb/cockroach/pkg/cmd/roachtest/tests/cdc.go:759
| [...repeated from below...]
Wraps: (2) monitor failure
Wraps: (3) attached stack trace
-- stack trace:
| main.(*monitorImpl).wait.func2
| main/pkg/cmd/roachtest/monitor.go:171
| runtime.goexit
| GOROOT/src/runtime/asm_amd64.s:1581
Wraps: (4) monitor task failed
Wraps: (5) pq: Use of CHANGEFEED requires an enterprise license. Your evaluation license expired on December 30, 2022. If you're interested in getting a new license, please contact subscriptions@cockroachlabs.com and we can help you out.
Error types: (1) *withstack.withStack (2) *errutil.withPrefix (3) *withstack.withStack (4) *errutil.withPrefix (5) *pq.Error
```
<details><summary>Help</summary>
<p>
See: [roachtest README](https://github.com/cockroachdb/cockroach/blob/master/pkg/cmd/roachtest/README.md)
See: [How To Investigate \(internal\)](https://cockroachlabs.atlassian.net/l/c/SSSBr8c7)
</p>
</details>
<details><summary>Same failure on other branches</summary>
<p>
- #77815 roachtest: cdc/crdb-chaos failed [C-test-failure O-roachtest O-robot T-cdc branch-master]
- #68047 roachtest: cdc/crdb-chaos failed [C-test-failure O-roachtest O-robot T-cdc branch-release-21.1]
</p>
</details>
/cc @cockroachdb/cdc
<sub>
[This test on roachdash](https://roachdash.crdb.dev/?filter=status:open%20t:.*cdc/crdb-chaos.*&sort=title+created&display=lastcommented+project) | [Improve this report!](https://github.com/cockroachdb/cockroach/tree/master/pkg/cmd/internal/issues)
</sub>
Jira issue: CRDB-22916
|
2.0
|
roachtest: cdc/crdb-chaos failed - roachtest.cdc/crdb-chaos [failed](https://teamcity.cockroachdb.com/viewLog.html?buildId=8147270&tab=buildLog) with [artifacts](https://teamcity.cockroachdb.com/viewLog.html?buildId=8147270&tab=artifacts#/cdc/crdb-chaos) on release-22.1 @ [000c9624b56b09d5fbd06557c559b2f910142a9c](https://github.com/cockroachdb/cockroach/commits/000c9624b56b09d5fbd06557c559b2f910142a9c):
```
| | github.com/cockroachdb/cockroach/pkg/roachprod/install/cluster_synced.go:2083
| | github.com/cockroachdb/cockroach/pkg/roachprod/install.(*SyncedCluster).Run
| | github.com/cockroachdb/cockroach/pkg/roachprod/install/cluster_synced.go:661
| | github.com/cockroachdb/cockroach/pkg/roachprod.Run
| | github.com/cockroachdb/cockroach/pkg/roachprod/roachprod.go:384
| | main.execCmdEx
| | main/pkg/cmd/roachtest/cluster.go:341
| | main.execCmd
| | main/pkg/cmd/roachtest/cluster.go:229
| | main.(*clusterImpl).RunE
| | main/pkg/cmd/roachtest/cluster.go:1954
| | main.(*clusterImpl).Run
| | main/pkg/cmd/roachtest/cluster.go:1932
| | github.com/cockroachdb/cockroach/pkg/cmd/roachtest/tests.(*tpccWorkload).run
| | github.com/cockroachdb/cockroach/pkg/cmd/roachtest/tests/cdc.go:1578
| | github.com/cockroachdb/cockroach/pkg/cmd/roachtest/tests.cdcBasicTest.func1
| | github.com/cockroachdb/cockroach/pkg/cmd/roachtest/tests/cdc.go:186
| | main.(*monitorImpl).Go.func1
| | main/pkg/cmd/roachtest/monitor.go:105
| | golang.org/x/sync/errgroup.(*Group).Go.func1
| | golang.org/x/sync/errgroup/external/org_golang_x_sync/errgroup/errgroup.go:57
| | runtime.goexit
| | GOROOT/src/runtime/asm_amd64.s:1581
| Wraps: (2) one or more parallel execution failure
| Error types: (1) *withstack.withStack (2) *errutil.leafError
Wraps: (5) context canceled
Error types: (1) *withstack.withStack (2) *errutil.withPrefix (3) *cluster.WithCommandDetails (4) *secondary.withSecondaryError (5) *errors.errorString
monitor.go:127,cdc.go:296,cdc.go:759,test_runner.go:883: monitor failure: monitor task failed: pq: Use of CHANGEFEED requires an enterprise license. Your evaluation license expired on December 30, 2022. If you're interested in getting a new license, please contact subscriptions@cockroachlabs.com and we can help you out.
(1) attached stack trace
-- stack trace:
| main.(*monitorImpl).WaitE
| main/pkg/cmd/roachtest/monitor.go:115
| main.(*monitorImpl).Wait
| main/pkg/cmd/roachtest/monitor.go:123
| github.com/cockroachdb/cockroach/pkg/cmd/roachtest/tests.cdcBasicTest
| github.com/cockroachdb/cockroach/pkg/cmd/roachtest/tests/cdc.go:296
| github.com/cockroachdb/cockroach/pkg/cmd/roachtest/tests.registerCDC.func5
| github.com/cockroachdb/cockroach/pkg/cmd/roachtest/tests/cdc.go:759
| [...repeated from below...]
Wraps: (2) monitor failure
Wraps: (3) attached stack trace
-- stack trace:
| main.(*monitorImpl).wait.func2
| main/pkg/cmd/roachtest/monitor.go:171
| runtime.goexit
| GOROOT/src/runtime/asm_amd64.s:1581
Wraps: (4) monitor task failed
Wraps: (5) pq: Use of CHANGEFEED requires an enterprise license. Your evaluation license expired on December 30, 2022. If you're interested in getting a new license, please contact subscriptions@cockroachlabs.com and we can help you out.
Error types: (1) *withstack.withStack (2) *errutil.withPrefix (3) *withstack.withStack (4) *errutil.withPrefix (5) *pq.Error
```
<details><summary>Help</summary>
<p>
See: [roachtest README](https://github.com/cockroachdb/cockroach/blob/master/pkg/cmd/roachtest/README.md)
See: [How To Investigate \(internal\)](https://cockroachlabs.atlassian.net/l/c/SSSBr8c7)
</p>
</details>
<details><summary>Same failure on other branches</summary>
<p>
- #77815 roachtest: cdc/crdb-chaos failed [C-test-failure O-roachtest O-robot T-cdc branch-master]
- #68047 roachtest: cdc/crdb-chaos failed [C-test-failure O-roachtest O-robot T-cdc branch-release-21.1]
</p>
</details>
/cc @cockroachdb/cdc
<sub>
[This test on roachdash](https://roachdash.crdb.dev/?filter=status:open%20t:.*cdc/crdb-chaos.*&sort=title+created&display=lastcommented+project) | [Improve this report!](https://github.com/cockroachdb/cockroach/tree/master/pkg/cmd/internal/issues)
</sub>
Jira issue: CRDB-22916
|
non_process
|
roachtest cdc crdb chaos failed roachtest cdc crdb chaos with on release github com cockroachdb cockroach pkg roachprod install cluster synced go github com cockroachdb cockroach pkg roachprod install syncedcluster run github com cockroachdb cockroach pkg roachprod install cluster synced go github com cockroachdb cockroach pkg roachprod run github com cockroachdb cockroach pkg roachprod roachprod go main execcmdex main pkg cmd roachtest cluster go main execcmd main pkg cmd roachtest cluster go main clusterimpl rune main pkg cmd roachtest cluster go main clusterimpl run main pkg cmd roachtest cluster go github com cockroachdb cockroach pkg cmd roachtest tests tpccworkload run github com cockroachdb cockroach pkg cmd roachtest tests cdc go github com cockroachdb cockroach pkg cmd roachtest tests cdcbasictest github com cockroachdb cockroach pkg cmd roachtest tests cdc go main monitorimpl go main pkg cmd roachtest monitor go golang org x sync errgroup group go golang org x sync errgroup external org golang x sync errgroup errgroup go runtime goexit goroot src runtime asm s wraps one or more parallel execution failure error types withstack withstack errutil leaferror wraps context canceled error types withstack withstack errutil withprefix cluster withcommanddetails secondary withsecondaryerror errors errorstring monitor go cdc go cdc go test runner go monitor failure monitor task failed pq use of changefeed requires an enterprise license your evaluation license expired on december if you re interested in getting a new license please contact subscriptions cockroachlabs com and we can help you out attached stack trace stack trace main monitorimpl waite main pkg cmd roachtest monitor go main monitorimpl wait main pkg cmd roachtest monitor go github com cockroachdb cockroach pkg cmd roachtest tests cdcbasictest github com cockroachdb cockroach pkg cmd roachtest tests cdc go github com cockroachdb cockroach pkg cmd roachtest tests registercdc github com cockroachdb cockroach pkg cmd roachtest tests cdc go wraps monitor failure wraps attached stack trace stack trace main monitorimpl wait main pkg cmd roachtest monitor go runtime goexit goroot src runtime asm s wraps monitor task failed wraps pq use of changefeed requires an enterprise license your evaluation license expired on december if you re interested in getting a new license please contact subscriptions cockroachlabs com and we can help you out error types withstack withstack errutil withprefix withstack withstack errutil withprefix pq error help see see same failure on other branches roachtest cdc crdb chaos failed roachtest cdc crdb chaos failed cc cockroachdb cdc jira issue crdb
| 0
|
28
| 2,490,322,747
|
IssuesEvent
|
2015-01-02 13:02:59
|
andresriancho/w3af
|
https://api.github.com/repos/andresriancho/w3af
|
opened
|
websockets_links plugin reports multiple vulnerabilities
|
bug priority:medium
|
## Problem
If we scan a site which has WS urls on each page, we'll get one vulnerability report for each page. For sites with more than 20 pages, this will be very annoying for the user.
## Solution
Group the vulnerabilities by web socket URL instead of having multiple vulnerabilities. When finding a new WS just store it somewhere and use the `end()` method to group all the findings and report only one vulnerability for each ws URL.
## References
* https://github.com/andresriancho/w3af/issues/1123
* https://github.com/andresriancho/w3af/pull/5763
|
1.0
|
websockets_links plugin reports multiple vulnerabilities - ## Problem
If we scan a site which has WS urls on each page, we'll get one vulnerability report for each page. For sites with more than 20 pages, this will be very annoying for the user.
## Solution
Group the vulnerabilities by web socket URL instead of having multiple vulnerabilities. When finding a new WS just store it somewhere and use the `end()` method to group all the findings and report only one vulnerability for each ws URL.
## References
* https://github.com/andresriancho/w3af/issues/1123
* https://github.com/andresriancho/w3af/pull/5763
|
non_process
|
websockets links plugin reports multiple vulnerabilities problem if we scan a site which has ws urls on each page we ll get one vulnerability report for each page for sites with more than pages this will be very annoying for the user solution group the vulnerabilities by web socket url instead of having multiple vulnerabilities when finding a new ws just store it somewhere and use the end method to group all the findings and report only one vulnerability for each ws url references
| 0
|
1,945
| 4,770,364,665
|
IssuesEvent
|
2016-10-26 15:05:55
|
nolanjian/Cawler
|
https://api.github.com/repos/nolanjian/Cawler
|
closed
|
TP0018 Read Header & Read Body improvement.
|
In Processing urgent
|
1. If header include chunked, no mater http or https, read in chunked way, reading way should never relative to http or https, http or https just decide use socket or ssl stream.
2. Should have a better function to decide use TCP socket or SSL stream, not the if else if statements.
|
1.0
|
TP0018 Read Header & Read Body improvement. - 1. If header include chunked, no mater http or https, read in chunked way, reading way should never relative to http or https, http or https just decide use socket or ssl stream.
2. Should have a better function to decide use TCP socket or SSL stream, not the if else if statements.
|
process
|
read header read body improvement if header include chunked no mater http or https read in chunked way reading way should never relative to http or https http or https just decide use socket or ssl stream should have a better function to decide use tcp socket or ssl stream not the if else if statements
| 1
|
610
| 3,078,508,777
|
IssuesEvent
|
2015-08-21 10:42:58
|
deb-sandeep/PHPWebApps
|
https://api.github.com/repos/deb-sandeep/PHPWebApps
|
closed
|
Selective release of chapters into production
|
enhancement jove_notes_grammar jove_notes_processor jove_notes_server released / closed
|
***Problem***
There are (will be) times, when the digitization of chapters will move in a faster curve as compared to the current study needs. For example:
1. If we have a database of chapters and we want to make only a certain chapters visible or enabled
2. If we have certain chapters which needs to be re-sequenced in terms of visibility
***Solution***
In the larger scheme of things, this would warrant a fundamental change in storage of data and administrative intervention to move chapters across _staging areas_.
As an immediate solution, introduce a new source tag <code>@skip_generation_at_production</code>. JoveNotesProcessor, depending upon the mode of operation <kbd>development</kbd> or <kbd>production</kbd> will honor this tag in conjunction with the existing <code>@skip_generation</code> tag to process of omit source file compilation.
Please note that this is a tactical solution and doesn't address the full scope of the business problem.
|
1.0
|
Selective release of chapters into production - ***Problem***
There are (will be) times, when the digitization of chapters will move in a faster curve as compared to the current study needs. For example:
1. If we have a database of chapters and we want to make only a certain chapters visible or enabled
2. If we have certain chapters which needs to be re-sequenced in terms of visibility
***Solution***
In the larger scheme of things, this would warrant a fundamental change in storage of data and administrative intervention to move chapters across _staging areas_.
As an immediate solution, introduce a new source tag <code>@skip_generation_at_production</code>. JoveNotesProcessor, depending upon the mode of operation <kbd>development</kbd> or <kbd>production</kbd> will honor this tag in conjunction with the existing <code>@skip_generation</code> tag to process of omit source file compilation.
Please note that this is a tactical solution and doesn't address the full scope of the business problem.
|
process
|
selective release of chapters into production problem there are will be times when the digitization of chapters will move in a faster curve as compared to the current study needs for example if we have a database of chapters and we want to make only a certain chapters visible or enabled if we have certain chapters which needs to be re sequenced in terms of visibility solution in the larger scheme of things this would warrant a fundamental change in storage of data and administrative intervention to move chapters across staging areas as an immediate solution introduce a new source tag skip generation at production jovenotesprocessor depending upon the mode of operation development or production will honor this tag in conjunction with the existing skip generation tag to process of omit source file compilation please note that this is a tactical solution and doesn t address the full scope of the business problem
| 1
|
21,433
| 11,219,415,159
|
IssuesEvent
|
2020-01-07 13:51:22
|
yt-project/yt
|
https://api.github.com/repos/yt-project/yt
|
opened
|
Evaluate benefits of pre-allocation of arrays
|
demeshening index: particle performance
|
At present, the frontends do an indexing check to identify the size of arrays to allocate before conducting any IO. For grid-based frontends, this is not terribly onerous (it does cost floating point operations, but a best effort is made to cache those operations) but for particle frontends, it is quite taxing as it requires an IO pass.
The reason this decision was made was to avoid having to do large-scale concatenation of arrays, which results in a memory doubling at the finalization step. However, this finalization step is often not even required, as nearly all of the operations are chunked anyway.
It is my suspicion that we could speed up *considerably* the operations in yt that use particles if we dropped the pre-allocation requirement, and moved instead to concatenating arrays (or reallocing them) and provide only *upper* bounds on the size of the arrays we expect, rather than *exact* bounds. But, this is just intuition -- which is how the decision was made initially to do the preallocation!
This issue is a placeholder for evaluating this. I believe this could be tested in small-scale by changing how `_count_particles` operates, to have it return `None` and in the case of `None`, to have the IO operations (which are mostly consolidated in `yt/utilities/io_handler.py:BaseIOHandler._read_particle_selection`) to grow a list of values rather than filling-as-they-go with a running index.
|
True
|
Evaluate benefits of pre-allocation of arrays - At present, the frontends do an indexing check to identify the size of arrays to allocate before conducting any IO. For grid-based frontends, this is not terribly onerous (it does cost floating point operations, but a best effort is made to cache those operations) but for particle frontends, it is quite taxing as it requires an IO pass.
The reason this decision was made was to avoid having to do large-scale concatenation of arrays, which results in a memory doubling at the finalization step. However, this finalization step is often not even required, as nearly all of the operations are chunked anyway.
It is my suspicion that we could speed up *considerably* the operations in yt that use particles if we dropped the pre-allocation requirement, and moved instead to concatenating arrays (or reallocing them) and provide only *upper* bounds on the size of the arrays we expect, rather than *exact* bounds. But, this is just intuition -- which is how the decision was made initially to do the preallocation!
This issue is a placeholder for evaluating this. I believe this could be tested in small-scale by changing how `_count_particles` operates, to have it return `None` and in the case of `None`, to have the IO operations (which are mostly consolidated in `yt/utilities/io_handler.py:BaseIOHandler._read_particle_selection`) to grow a list of values rather than filling-as-they-go with a running index.
|
non_process
|
evaluate benefits of pre allocation of arrays at present the frontends do an indexing check to identify the size of arrays to allocate before conducting any io for grid based frontends this is not terribly onerous it does cost floating point operations but a best effort is made to cache those operations but for particle frontends it is quite taxing as it requires an io pass the reason this decision was made was to avoid having to do large scale concatenation of arrays which results in a memory doubling at the finalization step however this finalization step is often not even required as nearly all of the operations are chunked anyway it is my suspicion that we could speed up considerably the operations in yt that use particles if we dropped the pre allocation requirement and moved instead to concatenating arrays or reallocing them and provide only upper bounds on the size of the arrays we expect rather than exact bounds but this is just intuition which is how the decision was made initially to do the preallocation this issue is a placeholder for evaluating this i believe this could be tested in small scale by changing how count particles operates to have it return none and in the case of none to have the io operations which are mostly consolidated in yt utilities io handler py baseiohandler read particle selection to grow a list of values rather than filling as they go with a running index
| 0
|
14,530
| 17,630,663,245
|
IssuesEvent
|
2021-08-19 07:31:55
|
lynnandtonic/nestflix.fun
|
https://api.github.com/repos/lynnandtonic/nestflix.fun
|
closed
|
Add Take My Hand
|
suggested title in process
|
Please add as much of the following info as you can:
Title: Take my Hand
Type (film/tv show): Film
Film or show in which it appears:30 Rock
Is the parent film/show streaming anywhere? yes, netflix
About when in the parent film/show does it appear? 8:38, 12:34
Actual footage of the film/show can be seen (yes/no)? yes (the making of it)
As seen easily - https://www.dailymotion.com/video/x6i4mu8
|
1.0
|
Add Take My Hand - Please add as much of the following info as you can:
Title: Take my Hand
Type (film/tv show): Film
Film or show in which it appears:30 Rock
Is the parent film/show streaming anywhere? yes, netflix
About when in the parent film/show does it appear? 8:38, 12:34
Actual footage of the film/show can be seen (yes/no)? yes (the making of it)
As seen easily - https://www.dailymotion.com/video/x6i4mu8
|
process
|
add take my hand please add as much of the following info as you can title take my hand type film tv show film film or show in which it appears rock is the parent film show streaming anywhere yes netflix about when in the parent film show does it appear actual footage of the film show can be seen yes no yes the making of it as seen easily
| 1
|
5,260
| 8,053,792,601
|
IssuesEvent
|
2018-08-02 01:12:07
|
google/google-http-java-client
|
https://api.github.com/repos/google/google-http-java-client
|
closed
|
Cut a new release
|
:rotating_light: priority: p1 type: process
|
Hi,
are there any plans on making a release anytime soon? We need a relatively new version of apache http client and would really like this [1] fix. We are currently using our own patched version :(.
cc: @ejona86 😄
[1] https://github.com/google/google-http-java-client/commit/a31c6d827ae9c4438c1b1997bcd794967c7544f4
|
1.0
|
Cut a new release - Hi,
are there any plans on making a release anytime soon? We need a relatively new version of apache http client and would really like this [1] fix. We are currently using our own patched version :(.
cc: @ejona86 😄
[1] https://github.com/google/google-http-java-client/commit/a31c6d827ae9c4438c1b1997bcd794967c7544f4
|
process
|
cut a new release hi are there any plans on making a release anytime soon we need a relatively new version of apache http client and would really like this fix we are currently using our own patched version cc 😄
| 1
|
50,264
| 6,343,551,418
|
IssuesEvent
|
2017-07-27 17:53:32
|
quicwg/base-drafts
|
https://api.github.com/repos/quicwg/base-drafts
|
closed
|
Use positive phrasing instead of double negative
|
-http design
|
The setting `SETTINGS_DISABLE_PUSH` is set to `false` by default. This double negation seems needlessly confusing. (*"Setting the push setting to true turns off push?"*)
Suggestion: Renaming it to `SETTINGS_ENABLE_PUSH` and default to `true`. This is easier to understand and matches the parameter defined by HTTP/2.
|
1.0
|
Use positive phrasing instead of double negative - The setting `SETTINGS_DISABLE_PUSH` is set to `false` by default. This double negation seems needlessly confusing. (*"Setting the push setting to true turns off push?"*)
Suggestion: Renaming it to `SETTINGS_ENABLE_PUSH` and default to `true`. This is easier to understand and matches the parameter defined by HTTP/2.
|
non_process
|
use positive phrasing instead of double negative the setting settings disable push is set to false by default this double negation seems needlessly confusing setting the push setting to true turns off push suggestion renaming it to settings enable push and default to true this is easier to understand and matches the parameter defined by http
| 0
|
2,957
| 5,955,599,709
|
IssuesEvent
|
2017-05-28 08:17:26
|
orbardugo/Hahot-Hameshulash
|
https://api.github.com/repos/orbardugo/Hahot-Hameshulash
|
opened
|
Export the graphs and the reports to PDF
|
difficulty 2 in process priorty 2 requirement Ruben
|
Ruben need to find how to export the reports to PDF file.
|
1.0
|
Export the graphs and the reports to PDF - Ruben need to find how to export the reports to PDF file.
|
process
|
export the graphs and the reports to pdf ruben need to find how to export the reports to pdf file
| 1
|
22,556
| 31,770,070,624
|
IssuesEvent
|
2023-09-12 11:12:20
|
h4sh5/pypi-auto-scanner
|
https://api.github.com/repos/h4sh5/pypi-auto-scanner
|
opened
|
skypilot-nightly 1.0.0.dev20230912 has 2 GuardDog issues
|
guarddog exec-base64 silent-process-execution
|
https://pypi.org/project/skypilot-nightly
https://inspector.pypi.io/project/skypilot-nightly
```{
"dependency": "skypilot-nightly",
"version": "1.0.0.dev20230912",
"result": {
"issues": 2,
"errors": {},
"results": {
"silent-process-execution": [
{
"location": "skypilot-nightly-1.0.0.dev20230912/sky/skylet/log_lib.py:219",
"code": " subprocess.Popen(\n daemon_cmd,\n start_new_session=True,\n # Suppress output\n stdout=subprocess.DEVNULL,\n stderr=subprocess.DEVNULL,\n # Disa... )",
"message": "This package is silently executing an external binary, redirecting stdout, stderr and stdin to /dev/null"
}
],
"exec-base64": [
{
"location": "skypilot-nightly-1.0.0.dev20230912/sky/cloud_stores.py:116",
"code": " p = subprocess.run(command,\n stdout=subprocess.PIPE,\n shell=True,\n check=True,\n executable='/bin/bash')",
"message": "This package contains a call to the `eval` function with a `base64` encoded string as argument.\nThis is a common method used to hide a malicious payload in a module as static analysis will not decode the\nstring.\n"
}
]
},
"path": "/tmp/tmp1xkio915/skypilot-nightly"
}
}```
|
1.0
|
skypilot-nightly 1.0.0.dev20230912 has 2 GuardDog issues - https://pypi.org/project/skypilot-nightly
https://inspector.pypi.io/project/skypilot-nightly
```{
"dependency": "skypilot-nightly",
"version": "1.0.0.dev20230912",
"result": {
"issues": 2,
"errors": {},
"results": {
"silent-process-execution": [
{
"location": "skypilot-nightly-1.0.0.dev20230912/sky/skylet/log_lib.py:219",
"code": " subprocess.Popen(\n daemon_cmd,\n start_new_session=True,\n # Suppress output\n stdout=subprocess.DEVNULL,\n stderr=subprocess.DEVNULL,\n # Disa... )",
"message": "This package is silently executing an external binary, redirecting stdout, stderr and stdin to /dev/null"
}
],
"exec-base64": [
{
"location": "skypilot-nightly-1.0.0.dev20230912/sky/cloud_stores.py:116",
"code": " p = subprocess.run(command,\n stdout=subprocess.PIPE,\n shell=True,\n check=True,\n executable='/bin/bash')",
"message": "This package contains a call to the `eval` function with a `base64` encoded string as argument.\nThis is a common method used to hide a malicious payload in a module as static analysis will not decode the\nstring.\n"
}
]
},
"path": "/tmp/tmp1xkio915/skypilot-nightly"
}
}```
|
process
|
skypilot nightly has guarddog issues dependency skypilot nightly version result issues errors results silent process execution location skypilot nightly sky skylet log lib py code subprocess popen n daemon cmd n start new session true n suppress output n stdout subprocess devnull n stderr subprocess devnull n disa message this package is silently executing an external binary redirecting stdout stderr and stdin to dev null exec location skypilot nightly sky cloud stores py code p subprocess run command n stdout subprocess pipe n shell true n check true n executable bin bash message this package contains a call to the eval function with a encoded string as argument nthis is a common method used to hide a malicious payload in a module as static analysis will not decode the nstring n path tmp skypilot nightly
| 1
|
10,800
| 13,609,287,276
|
IssuesEvent
|
2020-09-23 04:50:03
|
googleapis/java-recommender
|
https://api.github.com/repos/googleapis/java-recommender
|
closed
|
Dependency Dashboard
|
api: recommender type: process
|
This issue contains a list of Renovate updates and their statuses.
## Open
These updates have all been created already. Click a checkbox below to force a retry/rebase of any.
- [ ] <!-- rebase-branch=renovate/org.apache.maven.plugins-maven-project-info-reports-plugin-3.x -->build(deps): update dependency org.apache.maven.plugins:maven-project-info-reports-plugin to v3.1.1
- [ ] <!-- rebase-branch=renovate/com.google.cloud-google-cloud-recommender-1.x -->chore(deps): update dependency com.google.cloud:google-cloud-recommender to v1.2.1
---
- [ ] <!-- manual job -->Check this box to trigger a request for Renovate to run again on this repository
|
1.0
|
Dependency Dashboard - This issue contains a list of Renovate updates and their statuses.
## Open
These updates have all been created already. Click a checkbox below to force a retry/rebase of any.
- [ ] <!-- rebase-branch=renovate/org.apache.maven.plugins-maven-project-info-reports-plugin-3.x -->build(deps): update dependency org.apache.maven.plugins:maven-project-info-reports-plugin to v3.1.1
- [ ] <!-- rebase-branch=renovate/com.google.cloud-google-cloud-recommender-1.x -->chore(deps): update dependency com.google.cloud:google-cloud-recommender to v1.2.1
---
- [ ] <!-- manual job -->Check this box to trigger a request for Renovate to run again on this repository
|
process
|
dependency dashboard this issue contains a list of renovate updates and their statuses open these updates have all been created already click a checkbox below to force a retry rebase of any build deps update dependency org apache maven plugins maven project info reports plugin to chore deps update dependency com google cloud google cloud recommender to check this box to trigger a request for renovate to run again on this repository
| 1
|
14,856
| 18,255,904,492
|
IssuesEvent
|
2021-10-03 02:38:44
|
qgis/QGIS
|
https://api.github.com/repos/qgis/QGIS
|
closed
|
v.clean rmdupl tool does not remove duplicates.
|
Feedback stale Processing Bug
|
### What is the bug or the crash?
In processing, the v.clean rmdupl tool finds but does not remove the duplicate geometries. Tested with linestrings.
### Steps to reproduce the issue
Simply open processing, select the v.clean, a vector file with a simple two node line and use the rmdupl tool. The duplicate geometry is identified in the "Errors files" but is still present in the "Cleaned" output file.
### Versions
<!--StartFragment--><!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.0//EN" "http://www.w3.org/TR/REC-html40/strict.dtd">
<html><head><meta http-equiv="Content-Type" content="text/html; charset=utf-8" /><style type="text/css">
p, li { white-space: pre-wrap; }
</style></head><body>
QGIS version | 3.20.1-Odense | QGIS code revision | 1c3c5cd6
-- | -- | -- | --
Qt version | 5.15.2
Python version | 3.9.5
GDAL/OGR version | 3.3.1
PROJ version | 8.1.0
EPSG Registry database version | v10.027 (2021-06-17)
GEOS version | 3.9.1-CAPI-1.14.2
SQLite version | 3.35.2
PDAL version | 2.3.0
PostgreSQL client version | 13.0
SpatiaLite version | 5.0.1
QWT version | 6.1.3
QScintilla2 version | 2.11.5
OS version | Windows 10 Version 2009
| | |
Active Python plugins | GroupPointsWithinDistancedb_managerMetaSearchprocessing
</body></html><!--EndFragment-->
Same in LTL 3.16.9
### Additional context
_No response_
|
1.0
|
v.clean rmdupl tool does not remove duplicates. - ### What is the bug or the crash?
In processing, the v.clean rmdupl tool finds but does not remove the duplicate geometries. Tested with linestrings.
### Steps to reproduce the issue
Simply open processing, select the v.clean, a vector file with a simple two node line and use the rmdupl tool. The duplicate geometry is identified in the "Errors files" but is still present in the "Cleaned" output file.
### Versions
<!--StartFragment--><!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.0//EN" "http://www.w3.org/TR/REC-html40/strict.dtd">
<html><head><meta http-equiv="Content-Type" content="text/html; charset=utf-8" /><style type="text/css">
p, li { white-space: pre-wrap; }
</style></head><body>
QGIS version | 3.20.1-Odense | QGIS code revision | 1c3c5cd6
-- | -- | -- | --
Qt version | 5.15.2
Python version | 3.9.5
GDAL/OGR version | 3.3.1
PROJ version | 8.1.0
EPSG Registry database version | v10.027 (2021-06-17)
GEOS version | 3.9.1-CAPI-1.14.2
SQLite version | 3.35.2
PDAL version | 2.3.0
PostgreSQL client version | 13.0
SpatiaLite version | 5.0.1
QWT version | 6.1.3
QScintilla2 version | 2.11.5
OS version | Windows 10 Version 2009
| | |
Active Python plugins | GroupPointsWithinDistancedb_managerMetaSearchprocessing
</body></html><!--EndFragment-->
Same in LTL 3.16.9
### Additional context
_No response_
|
process
|
v clean rmdupl tool does not remove duplicates what is the bug or the crash in processing the v clean rmdupl tool finds but does not remove the duplicate geometries tested with linestrings steps to reproduce the issue simply open processing select the v clean a vector file with a simple two node line and use the rmdupl tool the duplicate geometry is identified in the errors files but is still present in the cleaned output file versions doctype html public dtd html en p li white space pre wrap qgis version odense qgis code revision qt version python version gdal ogr version proj version epsg registry database version geos version capi sqlite version pdal version postgresql client version spatialite version qwt version version os version windows version active python plugins grouppointswithindistancedb managermetasearchprocessing same in ltl additional context no response
| 1
|
11,363
| 14,175,763,705
|
IssuesEvent
|
2020-11-12 22:10:43
|
MicrosoftDocs/azure-devops-docs
|
https://api.github.com/repos/MicrosoftDocs/azure-devops-docs
|
closed
|
packages is not an option resources
|
Pri1 devops-cicd-process/tech devops/prod doc-bug
|
it's confusing to see documentation about 'packages', while it's not a valid option
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: ee4ec9d0-e0d5-4fb4-7c3e-b84abfa290c2
* Version Independent ID: 3e2b80d9-30e5-0c48-49f0-4fcdfedf5eee
* Content: [Resources - Azure Pipelines](https://docs.microsoft.com/en-us/azure/devops/pipelines/process/resources?view=azure-devops&tabs=schema)
* Content Source: [docs/pipelines/process/resources.md](https://github.com/MicrosoftDocs/vsts-docs/blob/master/docs/pipelines/process/resources.md)
* Product: **devops**
* Technology: **devops-cicd-process**
* GitHub Login: @juliakm
* Microsoft Alias: **jukullam**
|
1.0
|
packages is not an option resources - it's confusing to see documentation about 'packages', while it's not a valid option
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: ee4ec9d0-e0d5-4fb4-7c3e-b84abfa290c2
* Version Independent ID: 3e2b80d9-30e5-0c48-49f0-4fcdfedf5eee
* Content: [Resources - Azure Pipelines](https://docs.microsoft.com/en-us/azure/devops/pipelines/process/resources?view=azure-devops&tabs=schema)
* Content Source: [docs/pipelines/process/resources.md](https://github.com/MicrosoftDocs/vsts-docs/blob/master/docs/pipelines/process/resources.md)
* Product: **devops**
* Technology: **devops-cicd-process**
* GitHub Login: @juliakm
* Microsoft Alias: **jukullam**
|
process
|
packages is not an option resources it s confusing to see documentation about packages while it s not a valid option document details ⚠ do not edit this section it is required for docs microsoft com ➟ github issue linking id version independent id content content source product devops technology devops cicd process github login juliakm microsoft alias jukullam
| 1
|
548
| 3,006,049,164
|
IssuesEvent
|
2015-07-27 07:41:05
|
e-government-ua/i
|
https://api.github.com/repos/e-government-ua/i
|
closed
|
Добавить в сущность SubjectMessage поле nID_SubjectMessageType а SubjectMessageType реализовать в виде энума
|
In process of testing test
|
поля sName этума:
0 ServiceNeed "Просьба добавить услугу" (умолчательное)
1 ServiceFeedback "Отзыв о услуге"
при этом поле "nID_SubjectMessageType ", если оно не задается в сетере должно ставиться=0
nID_SubjectMessageType = индекс элемента энума
|
1.0
|
Добавить в сущность SubjectMessage поле nID_SubjectMessageType а SubjectMessageType реализовать в виде энума - поля sName этума:
0 ServiceNeed "Просьба добавить услугу" (умолчательное)
1 ServiceFeedback "Отзыв о услуге"
при этом поле "nID_SubjectMessageType ", если оно не задается в сетере должно ставиться=0
nID_SubjectMessageType = индекс элемента энума
|
process
|
добавить в сущность subjectmessage поле nid subjectmessagetype а subjectmessagetype реализовать в виде энума поля sname этума serviceneed просьба добавить услугу умолчательное servicefeedback отзыв о услуге при этом поле nid subjectmessagetype если оно не задается в сетере должно ставиться nid subjectmessagetype индекс элемента энума
| 1
|
12,724
| 15,094,900,483
|
IssuesEvent
|
2021-02-07 08:44:07
|
threefoldtech/js-sdk
|
https://api.github.com/repos/threefoldtech/js-sdk
|
closed
|
What does 1000m CPU resource mean ?
|
process_duplicate type_question
|
When deploying a solution, it often asks for the flavour. Clear for memory and storage, but what does "1000m", "2000m", ... compute units mean ? 1 core, 2 cores, ... ? why put 1000 ?
<img width="911" alt="Screenshot 2021-02-05 at 10 16 53" src="https://user-images.githubusercontent.com/30384423/107014374-cc4b2280-679b-11eb-9f14-86422a47a295.png">
|
1.0
|
What does 1000m CPU resource mean ? - When deploying a solution, it often asks for the flavour. Clear for memory and storage, but what does "1000m", "2000m", ... compute units mean ? 1 core, 2 cores, ... ? why put 1000 ?
<img width="911" alt="Screenshot 2021-02-05 at 10 16 53" src="https://user-images.githubusercontent.com/30384423/107014374-cc4b2280-679b-11eb-9f14-86422a47a295.png">
|
process
|
what does cpu resource mean when deploying a solution it often asks for the flavour clear for memory and storage but what does compute units mean core cores why put img width alt screenshot at src
| 1
|
10,775
| 13,595,828,530
|
IssuesEvent
|
2020-09-22 04:19:51
|
MicrosoftDocs/azure-devops-docs
|
https://api.github.com/repos/MicrosoftDocs/azure-devops-docs
|
closed
|
Schema for object parameters
|
Pri2 devops-cicd-process/tech devops/prod product-question
|
Is it possible to set the schema/data types for object parameters? It is possible to set a default value, but is there a way to suggest the shape of the object?
Take the example from the documentation:
```yaml
- name: myObject
type: object
default:
foo: FOO
bar: BAR
things:
- one
- two
- three
```
I would envision something like this:
```yaml
- name: myObject
type: object
default:
- name: foo
type: string
default: FOO
- name: bar
type: string
default: BAR
- name: things
type: string
values:
- one
- two
- three
default:
- one
- two
- three
```
I realize this won't work though as the parser wouldn't know if you were definining the schema or if this was the exact object that should be the default of `myObject` (ie that object now includes a sequence of `name`/etc objects).
Is there a way to do this, and if so can the docs be updated to reflect? Many of my parameters are objects with sub settings. I do this for organizational purposes but it also makes it easier to override only specific *groups* of settings (ie pass-thru an object of the same schema from a calling template).
The one thing preventing me from switching to typed parameters is that it's not going to gain me much in most places if I cannot define the schema for an object type.
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: 790318bb-8220-3241-4ca7-73351074492f
* Version Independent ID: db1da9db-3694-779b-17aa-1ed67fcecf86
* Content: [Use runtime and type-safe parameters - Azure Pipelines](https://docs.microsoft.com/en-us/azure/devops/pipelines/process/runtime-parameters?view=azure-devops&tabs=script)
* Content Source: [docs/pipelines/process/runtime-parameters.md](https://github.com/MicrosoftDocs/azure-devops-docs/blob/master/docs/pipelines/process/runtime-parameters.md)
* Product: **devops**
* Technology: **devops-cicd-process**
* GitHub Login: @juliakm
* Microsoft Alias: **jukullam**
|
1.0
|
Schema for object parameters - Is it possible to set the schema/data types for object parameters? It is possible to set a default value, but is there a way to suggest the shape of the object?
Take the example from the documentation:
```yaml
- name: myObject
type: object
default:
foo: FOO
bar: BAR
things:
- one
- two
- three
```
I would envision something like this:
```yaml
- name: myObject
type: object
default:
- name: foo
type: string
default: FOO
- name: bar
type: string
default: BAR
- name: things
type: string
values:
- one
- two
- three
default:
- one
- two
- three
```
I realize this won't work though as the parser wouldn't know if you were definining the schema or if this was the exact object that should be the default of `myObject` (ie that object now includes a sequence of `name`/etc objects).
Is there a way to do this, and if so can the docs be updated to reflect? Many of my parameters are objects with sub settings. I do this for organizational purposes but it also makes it easier to override only specific *groups* of settings (ie pass-thru an object of the same schema from a calling template).
The one thing preventing me from switching to typed parameters is that it's not going to gain me much in most places if I cannot define the schema for an object type.
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: 790318bb-8220-3241-4ca7-73351074492f
* Version Independent ID: db1da9db-3694-779b-17aa-1ed67fcecf86
* Content: [Use runtime and type-safe parameters - Azure Pipelines](https://docs.microsoft.com/en-us/azure/devops/pipelines/process/runtime-parameters?view=azure-devops&tabs=script)
* Content Source: [docs/pipelines/process/runtime-parameters.md](https://github.com/MicrosoftDocs/azure-devops-docs/blob/master/docs/pipelines/process/runtime-parameters.md)
* Product: **devops**
* Technology: **devops-cicd-process**
* GitHub Login: @juliakm
* Microsoft Alias: **jukullam**
|
process
|
schema for object parameters is it possible to set the schema data types for object parameters it is possible to set a default value but is there a way to suggest the shape of the object take the example from the documentation yaml name myobject type object default foo foo bar bar things one two three i would envision something like this yaml name myobject type object default name foo type string default foo name bar type string default bar name things type string values one two three default one two three i realize this won t work though as the parser wouldn t know if you were definining the schema or if this was the exact object that should be the default of myobject ie that object now includes a sequence of name etc objects is there a way to do this and if so can the docs be updated to reflect many of my parameters are objects with sub settings i do this for organizational purposes but it also makes it easier to override only specific groups of settings ie pass thru an object of the same schema from a calling template the one thing preventing me from switching to typed parameters is that it s not going to gain me much in most places if i cannot define the schema for an object type document details ⚠ do not edit this section it is required for docs microsoft com ➟ github issue linking id version independent id content content source product devops technology devops cicd process github login juliakm microsoft alias jukullam
| 1
|
830
| 3,296,831,925
|
IssuesEvent
|
2015-11-02 02:30:00
|
t3kt/vjzual2
|
https://api.github.com/repos/t3kt/vjzual2
|
opened
|
color threshold in feedback modules
|
enhancement video processing
|
it will help deal with things filling in backgrounds over time
|
1.0
|
color threshold in feedback modules - it will help deal with things filling in backgrounds over time
|
process
|
color threshold in feedback modules it will help deal with things filling in backgrounds over time
| 1
|
92,688
| 11,699,744,098
|
IssuesEvent
|
2020-03-06 16:10:36
|
tonerdo/coverlet
|
https://api.github.com/repos/tonerdo/coverlet
|
closed
|
Coverlet console + vstest runs, but generates cobertura.xml is empty
|
as-designed
|
Hi,
I'm working on a project that has a mix of c++ and c# projects, hosted on azure devops.
Both types with their own unit test, the C++ with gmock.
Since azure doesn't support vstest.console coverage result's, I'm looking for any workaround.
I have been trying coverlet to get the cobertura.xml that can be published into my pipelines.
But when I do it, it runs everything right, generates the .coverage as expected (I can open it in vs and navigate my hierarchy), but the cobertura.xml is empty:
`<?xml version="1.0" encoding="utf-8"?>
<coverage line-rate="1" branch-rate="1" version="1.9" timestamp="1583454090" lines-covered="0" lines-valid="0" branches-covered="0" branches-valid="0">
<sources />
<packages />
</coverage>`
...and, after a sucessfull execution with the regular printing from vstest, I'm getting "NaN%" printed in the console:
`
Calculating coverage result...
Generating report 'C:\tpapps\src\prime\main\lib\TOS36\Debug\coverage.cobertura.xml'
+--------+------+--------+--------+
| Module | Line | Branch | Method |
+--------+------+--------+--------+
+---------+------+--------+--------+
| | Line | Branch | Method |
+---------+------+--------+--------+
| Total | 100% | 100% | 100% |
+---------+------+--------+--------+
| Average | NaN% | NaN% | NaN% |
+---------+------+--------+--------+
`
I've searched on the web, but the "closer" thing is a one that it's printing "∞%", not NaN. Either way, my binaries are already in Debug, not on Release.
This is the command line:
`
./coverlet.exe .\MyUnitTest.exe --target "$vstest" --targetargs "MyUnitTest.exe /InIsolation /Platform:x64 /TestAdapterPath:c:\local\packages\GoogleTestAdapter.0.17.1\build\_common /Enablecodecoverage /Settings:C:\local\cpp.runsettings" --verbosity detailed --include "[*DllFilter*]*" --format "cobertura"
`
|
1.0
|
Coverlet console + vstest runs, but generates cobertura.xml is empty - Hi,
I'm working on a project that has a mix of c++ and c# projects, hosted on azure devops.
Both types with their own unit test, the C++ with gmock.
Since azure doesn't support vstest.console coverage result's, I'm looking for any workaround.
I have been trying coverlet to get the cobertura.xml that can be published into my pipelines.
But when I do it, it runs everything right, generates the .coverage as expected (I can open it in vs and navigate my hierarchy), but the cobertura.xml is empty:
`<?xml version="1.0" encoding="utf-8"?>
<coverage line-rate="1" branch-rate="1" version="1.9" timestamp="1583454090" lines-covered="0" lines-valid="0" branches-covered="0" branches-valid="0">
<sources />
<packages />
</coverage>`
...and, after a sucessfull execution with the regular printing from vstest, I'm getting "NaN%" printed in the console:
`
Calculating coverage result...
Generating report 'C:\tpapps\src\prime\main\lib\TOS36\Debug\coverage.cobertura.xml'
+--------+------+--------+--------+
| Module | Line | Branch | Method |
+--------+------+--------+--------+
+---------+------+--------+--------+
| | Line | Branch | Method |
+---------+------+--------+--------+
| Total | 100% | 100% | 100% |
+---------+------+--------+--------+
| Average | NaN% | NaN% | NaN% |
+---------+------+--------+--------+
`
I've searched on the web, but the "closer" thing is a one that it's printing "∞%", not NaN. Either way, my binaries are already in Debug, not on Release.
This is the command line:
`
./coverlet.exe .\MyUnitTest.exe --target "$vstest" --targetargs "MyUnitTest.exe /InIsolation /Platform:x64 /TestAdapterPath:c:\local\packages\GoogleTestAdapter.0.17.1\build\_common /Enablecodecoverage /Settings:C:\local\cpp.runsettings" --verbosity detailed --include "[*DllFilter*]*" --format "cobertura"
`
|
non_process
|
coverlet console vstest runs but generates cobertura xml is empty hi i m working on a project that has a mix of c and c projects hosted on azure devops both types with their own unit test the c with gmock since azure doesn t support vstest console coverage result s i m looking for any workaround i have been trying coverlet to get the cobertura xml that can be published into my pipelines but when i do it it runs everything right generates the coverage as expected i can open it in vs and navigate my hierarchy but the cobertura xml is empty and after a sucessfull execution with the regular printing from vstest i m getting nan printed in the console calculating coverage result generating report c tpapps src prime main lib debug coverage cobertura xml module line branch method line branch method total average nan nan nan i ve searched on the web but the closer thing is a one that it s printing ∞ not nan either way my binaries are already in debug not on release this is the command line coverlet exe myunittest exe target vstest targetargs myunittest exe inisolation platform testadapterpath c local packages googletestadapter build common enablecodecoverage settings c local cpp runsettings verbosity detailed include format cobertura
| 0
|
11,916
| 8,551,850,297
|
IssuesEvent
|
2018-11-07 19:16:03
|
jowein/ridemo
|
https://api.github.com/repos/jowein/ridemo
|
opened
|
CVE-2012-1098 Medium Severity Vulnerability detected by WhiteSource
|
security vulnerability
|
## CVE-2012-1098 - Medium Severity Vulnerability
<details><summary><img src='https://www.whitesourcesoftware.com/wp-content/uploads/2018/10/vulnerability_details.png' width=24 height=25> Vulnerable Library - <b>rails-3.0.9.gem</b></p></summary>
<p>Ruby on Rails is a full-stack web framework optimized for programmer happiness and sustainable productivity. It encourages beautiful code by favoring convention over configuration.</p>
<p>path: /ridemo/Gemfile.lock</p>
<p>
<p>Library home page: <a href=http://rubygems.org/gems/rails-3.0.9.gem>http://rubygems.org/gems/rails-3.0.9.gem</a></p>
Dependency Hierarchy:
- :x: **rails-3.0.9.gem** (Vulnerable Library)
<p>Found in commit: <a href="https://github.com/jowein/ridemo/commit/360e434258e7163e19454242d82eff6339fa41f7">360e434258e7163e19454242d82eff6339fa41f7</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://www.whitesourcesoftware.com/wp-content/uploads/2018/10/medium_vul.png' width=24 height=25> Vulnerability Details</summary>
<p>
Cross-site scripting (XSS) vulnerability in Ruby on Rails 3.0.x before 3.0.12, 3.1.x before 3.1.4, and 3.2.x before 3.2.2 allows remote attackers to inject arbitrary web script or HTML via vectors involving a SafeBuffer object that is manipulated through certain methods.
<p>Publish Date: 2012-03-13
<p>URL: <a href=https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2012-1098>CVE-2012-1098</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://www.whitesourcesoftware.com/wp-content/uploads/2018/10/cvss3.png' width=24 height=25> CVSS 2 Score Details (<b>4.3</b>)</summary>
<p>
Base Score Metrics not available</p>
</p>
</details>
<p></p>
<details><summary><img src='https://www.whitesourcesoftware.com/wp-content/uploads/2018/10/suggested_fix.png' width=24 height=25> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://bugzilla.redhat.com/show_bug.cgi?id=799275">https://bugzilla.redhat.com/show_bug.cgi?id=799275</a></p>
<p>Release Date: 2017-12-31</p>
<p>Fix Resolution: Upgrade to version rubygem-activesupport 3.0.12, rubygem-activesupport 3.1.4, rubygem-activesupport 3.2.2 or greater</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
CVE-2012-1098 Medium Severity Vulnerability detected by WhiteSource - ## CVE-2012-1098 - Medium Severity Vulnerability
<details><summary><img src='https://www.whitesourcesoftware.com/wp-content/uploads/2018/10/vulnerability_details.png' width=24 height=25> Vulnerable Library - <b>rails-3.0.9.gem</b></p></summary>
<p>Ruby on Rails is a full-stack web framework optimized for programmer happiness and sustainable productivity. It encourages beautiful code by favoring convention over configuration.</p>
<p>path: /ridemo/Gemfile.lock</p>
<p>
<p>Library home page: <a href=http://rubygems.org/gems/rails-3.0.9.gem>http://rubygems.org/gems/rails-3.0.9.gem</a></p>
Dependency Hierarchy:
- :x: **rails-3.0.9.gem** (Vulnerable Library)
<p>Found in commit: <a href="https://github.com/jowein/ridemo/commit/360e434258e7163e19454242d82eff6339fa41f7">360e434258e7163e19454242d82eff6339fa41f7</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://www.whitesourcesoftware.com/wp-content/uploads/2018/10/medium_vul.png' width=24 height=25> Vulnerability Details</summary>
<p>
Cross-site scripting (XSS) vulnerability in Ruby on Rails 3.0.x before 3.0.12, 3.1.x before 3.1.4, and 3.2.x before 3.2.2 allows remote attackers to inject arbitrary web script or HTML via vectors involving a SafeBuffer object that is manipulated through certain methods.
<p>Publish Date: 2012-03-13
<p>URL: <a href=https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2012-1098>CVE-2012-1098</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://www.whitesourcesoftware.com/wp-content/uploads/2018/10/cvss3.png' width=24 height=25> CVSS 2 Score Details (<b>4.3</b>)</summary>
<p>
Base Score Metrics not available</p>
</p>
</details>
<p></p>
<details><summary><img src='https://www.whitesourcesoftware.com/wp-content/uploads/2018/10/suggested_fix.png' width=24 height=25> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://bugzilla.redhat.com/show_bug.cgi?id=799275">https://bugzilla.redhat.com/show_bug.cgi?id=799275</a></p>
<p>Release Date: 2017-12-31</p>
<p>Fix Resolution: Upgrade to version rubygem-activesupport 3.0.12, rubygem-activesupport 3.1.4, rubygem-activesupport 3.2.2 or greater</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_process
|
cve medium severity vulnerability detected by whitesource cve medium severity vulnerability vulnerable library rails gem ruby on rails is a full stack web framework optimized for programmer happiness and sustainable productivity it encourages beautiful code by favoring convention over configuration path ridemo gemfile lock library home page a href dependency hierarchy x rails gem vulnerable library found in commit a href vulnerability details cross site scripting xss vulnerability in ruby on rails x before x before and x before allows remote attackers to inject arbitrary web script or html via vectors involving a safebuffer object that is manipulated through certain methods publish date url a href cvss score details base score metrics not available suggested fix type upgrade version origin a href release date fix resolution upgrade to version rubygem activesupport rubygem activesupport rubygem activesupport or greater step up your open source security game with whitesource
| 0
|
623,421
| 19,667,446,445
|
IssuesEvent
|
2022-01-11 00:57:50
|
apcountryman/picolibrary-microchip-megaavr
|
https://api.github.com/repos/apcountryman/picolibrary-microchip-megaavr
|
opened
|
Prepare library for a fundamental redesign
|
priority-normal status-in_development type-refactoring
|
Prepare library for a fundamental redesign by:
- [ ] Removing all files and directories in the `configuration/testing-interactive-atmega2560-arduino-mega-2560/test/` directory
- [ ] Removing all files and directories in the `configuration/testing-interactive-atmega328p-adafruit-metro-mini/test/` directory
- [ ] Removing all files and directories in the `configuration/testing-interactive-atmega328p-arduino-uno/test/` directory
- [ ] Removing all files and directories in the `include/picolibrary/hardware/` directory
- [ ] Removing all files and directories in the `include/picolibrary/microchip/megaavr/` directory with the exception of:
- [ ] `include/picolibrary/microchip/megaavr/version.h`
- [ ] Removing all files and directories in the `source/picolibrary/microchip/megaavr/` directory with the exception of:
- [ ] `source/picolibrary/microchip/megaavr/version.cc.in`
- [ ] Removing all files and directories in the `test/interactive/picolibrary/` directory with the exception of:
- [ ] `test/interactive/picolibrary/CMakeLists.txt`
- [ ] `test/interactive/picolibrary/microchip/CMakeListst.txt`
- [ ] `test/interactive/picolibrary/microchip/megaavr/CMakeLists.txt`
- [ ] Removing uses of the following `picolibrary` CMake options and variables:
- [ ] `PICOLIBRARY_SUPPRESS_HUMAN_READABLE_ERROR_INFORMATION`
- [ ] `PICOLIBRARY_SUPPRESS_HUMAN_READABLE_EVENT_INFORMATION`
- [ ] `PICOLIBRARY_HARDWARE_INCLUDE_DIR`
|
1.0
|
Prepare library for a fundamental redesign - Prepare library for a fundamental redesign by:
- [ ] Removing all files and directories in the `configuration/testing-interactive-atmega2560-arduino-mega-2560/test/` directory
- [ ] Removing all files and directories in the `configuration/testing-interactive-atmega328p-adafruit-metro-mini/test/` directory
- [ ] Removing all files and directories in the `configuration/testing-interactive-atmega328p-arduino-uno/test/` directory
- [ ] Removing all files and directories in the `include/picolibrary/hardware/` directory
- [ ] Removing all files and directories in the `include/picolibrary/microchip/megaavr/` directory with the exception of:
- [ ] `include/picolibrary/microchip/megaavr/version.h`
- [ ] Removing all files and directories in the `source/picolibrary/microchip/megaavr/` directory with the exception of:
- [ ] `source/picolibrary/microchip/megaavr/version.cc.in`
- [ ] Removing all files and directories in the `test/interactive/picolibrary/` directory with the exception of:
- [ ] `test/interactive/picolibrary/CMakeLists.txt`
- [ ] `test/interactive/picolibrary/microchip/CMakeListst.txt`
- [ ] `test/interactive/picolibrary/microchip/megaavr/CMakeLists.txt`
- [ ] Removing uses of the following `picolibrary` CMake options and variables:
- [ ] `PICOLIBRARY_SUPPRESS_HUMAN_READABLE_ERROR_INFORMATION`
- [ ] `PICOLIBRARY_SUPPRESS_HUMAN_READABLE_EVENT_INFORMATION`
- [ ] `PICOLIBRARY_HARDWARE_INCLUDE_DIR`
|
non_process
|
prepare library for a fundamental redesign prepare library for a fundamental redesign by removing all files and directories in the configuration testing interactive arduino mega test directory removing all files and directories in the configuration testing interactive adafruit metro mini test directory removing all files and directories in the configuration testing interactive arduino uno test directory removing all files and directories in the include picolibrary hardware directory removing all files and directories in the include picolibrary microchip megaavr directory with the exception of include picolibrary microchip megaavr version h removing all files and directories in the source picolibrary microchip megaavr directory with the exception of source picolibrary microchip megaavr version cc in removing all files and directories in the test interactive picolibrary directory with the exception of test interactive picolibrary cmakelists txt test interactive picolibrary microchip cmakelistst txt test interactive picolibrary microchip megaavr cmakelists txt removing uses of the following picolibrary cmake options and variables picolibrary suppress human readable error information picolibrary suppress human readable event information picolibrary hardware include dir
| 0
|
98,931
| 8,685,919,451
|
IssuesEvent
|
2018-12-03 09:22:31
|
humera987/FXLabs-Test-Automation
|
https://api.github.com/repos/humera987/FXLabs-Test-Automation
|
reopened
|
FX Testing 3 : ApiV1JobsProjectIdIdGetQueryParamPageEmptyValue
|
FX Testing 3
|
Project : FX Testing 3
Job : UAT
Env : UAT
Region : US_WEST
Result : fail
Status Code : 404
Headers : {X-Content-Type-Options=[nosniff], X-XSS-Protection=[1; mode=block], Cache-Control=[no-cache, no-store, max-age=0, must-revalidate], Pragma=[no-cache], Expires=[0], X-Frame-Options=[DENY], Set-Cookie=[SESSION=ZWFhODBiOTQtOTA1My00YWM5LWI5MDktYjFhMGQ3YzU1MjNh; Path=/; HttpOnly], Content-Type=[application/json;charset=UTF-8], Transfer-Encoding=[chunked], Date=[Mon, 03 Dec 2018 07:45:55 GMT]}
Endpoint : http://13.56.210.25/api/v1/api/v1/jobs/project-id/WiFOgCKY?page=
Request :
Response :
{
"timestamp" : "2018-12-03T07:45:56.513+0000",
"status" : 404,
"error" : "Not Found",
"message" : "No message available",
"path" : "/api/v1/api/v1/jobs/project-id/WiFOgCKY"
}
Logs :
Assertion [@StatusCode != 401] resolved-to [404 != 401] result [Passed]Assertion [@StatusCode != 500] resolved-to [404 != 500] result [Passed]Assertion [@StatusCode != 404] resolved-to [404 != 404] result [Failed]Assertion [@StatusCode != 200] resolved-to [404 != 200] result [Passed]
--- FX Bot ---
|
1.0
|
FX Testing 3 : ApiV1JobsProjectIdIdGetQueryParamPageEmptyValue - Project : FX Testing 3
Job : UAT
Env : UAT
Region : US_WEST
Result : fail
Status Code : 404
Headers : {X-Content-Type-Options=[nosniff], X-XSS-Protection=[1; mode=block], Cache-Control=[no-cache, no-store, max-age=0, must-revalidate], Pragma=[no-cache], Expires=[0], X-Frame-Options=[DENY], Set-Cookie=[SESSION=ZWFhODBiOTQtOTA1My00YWM5LWI5MDktYjFhMGQ3YzU1MjNh; Path=/; HttpOnly], Content-Type=[application/json;charset=UTF-8], Transfer-Encoding=[chunked], Date=[Mon, 03 Dec 2018 07:45:55 GMT]}
Endpoint : http://13.56.210.25/api/v1/api/v1/jobs/project-id/WiFOgCKY?page=
Request :
Response :
{
"timestamp" : "2018-12-03T07:45:56.513+0000",
"status" : 404,
"error" : "Not Found",
"message" : "No message available",
"path" : "/api/v1/api/v1/jobs/project-id/WiFOgCKY"
}
Logs :
Assertion [@StatusCode != 401] resolved-to [404 != 401] result [Passed]Assertion [@StatusCode != 500] resolved-to [404 != 500] result [Passed]Assertion [@StatusCode != 404] resolved-to [404 != 404] result [Failed]Assertion [@StatusCode != 200] resolved-to [404 != 200] result [Passed]
--- FX Bot ---
|
non_process
|
fx testing project fx testing job uat env uat region us west result fail status code headers x content type options x xss protection cache control pragma expires x frame options set cookie content type transfer encoding date endpoint request response timestamp status error not found message no message available path api api jobs project id wifogcky logs assertion resolved to result assertion resolved to result assertion resolved to result assertion resolved to result fx bot
| 0
|
1,982
| 4,809,494,513
|
IssuesEvent
|
2016-11-03 08:45:47
|
paulkornikov/Pragonas
|
https://api.github.com/repos/paulkornikov/Pragonas
|
closed
|
Refactoring write process reports
|
a-enhancement processus workload III
|
pour alléger le texte sauvé en base
passer le traitement en service et non comme méthode du trace log
|
1.0
|
Refactoring write process reports - pour alléger le texte sauvé en base
passer le traitement en service et non comme méthode du trace log
|
process
|
refactoring write process reports pour alléger le texte sauvé en base passer le traitement en service et non comme méthode du trace log
| 1
|
264,072
| 8,304,904,548
|
IssuesEvent
|
2018-09-21 23:41:40
|
python/mypy
|
https://api.github.com/repos/python/mypy
|
opened
|
mypy ignores type errors inside `list` and `dict` calls
|
bug priority-0-high
|
In the following program:
```
from typing import Union, Iterable, Tuple
class A:
def foo(self) -> Iterable[Tuple[int, int]]: pass
def bar(x: int) -> Union[A, int]: ...
list(bar('lol').foo()) # No errors!
dict(bar('lol').foo()) # No errors!
tuple(bar('lol').foo()) # Does error
set(bar('lol').foo()) # Does error
```
two errors ought to be generated for each call (one for `int` not having `.foo`, one for `'lol'` being the wrong type of argument). These errors seem to be suppressed while checking `list` and `dict`, which get filled with `Any`s.
|
1.0
|
mypy ignores type errors inside `list` and `dict` calls - In the following program:
```
from typing import Union, Iterable, Tuple
class A:
def foo(self) -> Iterable[Tuple[int, int]]: pass
def bar(x: int) -> Union[A, int]: ...
list(bar('lol').foo()) # No errors!
dict(bar('lol').foo()) # No errors!
tuple(bar('lol').foo()) # Does error
set(bar('lol').foo()) # Does error
```
two errors ought to be generated for each call (one for `int` not having `.foo`, one for `'lol'` being the wrong type of argument). These errors seem to be suppressed while checking `list` and `dict`, which get filled with `Any`s.
|
non_process
|
mypy ignores type errors inside list and dict calls in the following program from typing import union iterable tuple class a def foo self iterable pass def bar x int union list bar lol foo no errors dict bar lol foo no errors tuple bar lol foo does error set bar lol foo does error two errors ought to be generated for each call one for int not having foo one for lol being the wrong type of argument these errors seem to be suppressed while checking list and dict which get filled with any s
| 0
|
20,848
| 3,422,151,925
|
IssuesEvent
|
2015-12-08 21:47:02
|
dart-lang/sdk
|
https://api.github.com/repos/dart-lang/sdk
|
closed
|
Evaluate if it is possible to not invoke the isolate create callback when Isolate.SpawnFunction is used
|
area-vm Priority-Medium triaged Type-Defect
|
Evaluate if it is possible to not invoke the isolate create callback when
Isolate.SpawnFunction is called. This has some implications when script snapshots are used.
Would be ideal if the spawned isolate is able to clone the script object from the spawning isolate.
I am not sure if there are some issues in dartium with regards to this.
|
1.0
|
Evaluate if it is possible to not invoke the isolate create callback when Isolate.SpawnFunction is used - Evaluate if it is possible to not invoke the isolate create callback when
Isolate.SpawnFunction is called. This has some implications when script snapshots are used.
Would be ideal if the spawned isolate is able to clone the script object from the spawning isolate.
I am not sure if there are some issues in dartium with regards to this.
|
non_process
|
evaluate if it is possible to not invoke the isolate create callback when isolate spawnfunction is used evaluate if it is possible to not invoke the isolate create callback when isolate spawnfunction is called this has some implications when script snapshots are used would be ideal if the spawned isolate is able to clone the script object from the spawning isolate i am not sure if there are some issues in dartium with regards to this
| 0
|
6,852
| 9,992,123,230
|
IssuesEvent
|
2019-07-11 12:48:24
|
googleapis/google-cloud-cpp
|
https://api.github.com/repos/googleapis/google-cloud-cpp
|
closed
|
Ideas to speed up CI builds.
|
status: will not fix testing type: process
|
I have several ideas on how to speed up the build. I am going to record them here so I do not forget, and in case somebody volunteers:
### Cache the build directory
This is what we document in #118, unfortunately all my attempts have failed. Probably the Travis cache is somehow getting stale compared to the code? Or maybe some generated files?
### Cache the Docker images
We use Docker to make build reproducible, and to simulate the experience of our users. Having pre-built images could speed up things, but it is not easy: the image generation takes ~5 minutes. Uploading to the default Travis cache (S3) and downloading from it takes (as I recall, somebody should measure) about 3 minutes. And then there is the storing of the image as a tarball and restoring from the tarball.
Generally, I have found that caching Docker images in Travis is not a good tradeoff.
CircleCI promises better caching for Docker images.
Using Google Container Registry should be better, but requires authentication and I am not comfortable hosting Google credentials in Travis.
### Skip boringssl
When we build gRPC we build all its dependencies, including boringssl, which can be easily replaced by the pre-built openssl installed in the system. Furthermore, building borgingssl requires golang, which adds about 100MiB of packages to the Docker images (out of 250 MiB, sigh).
Unfortunately that is not the experience our users will have, and we want to test for that.
### Use ExternalProject_Add vs. submodules.
When we clone the submodules we clone the full history (for "reasons"). That is a lot of data to download, and accounts for 3 minutes of the builds (of around 20 minutes). Using external projects should be faster (only the last version is downloaded).
|
1.0
|
Ideas to speed up CI builds. - I have several ideas on how to speed up the build. I am going to record them here so I do not forget, and in case somebody volunteers:
### Cache the build directory
This is what we document in #118, unfortunately all my attempts have failed. Probably the Travis cache is somehow getting stale compared to the code? Or maybe some generated files?
### Cache the Docker images
We use Docker to make build reproducible, and to simulate the experience of our users. Having pre-built images could speed up things, but it is not easy: the image generation takes ~5 minutes. Uploading to the default Travis cache (S3) and downloading from it takes (as I recall, somebody should measure) about 3 minutes. And then there is the storing of the image as a tarball and restoring from the tarball.
Generally, I have found that caching Docker images in Travis is not a good tradeoff.
CircleCI promises better caching for Docker images.
Using Google Container Registry should be better, but requires authentication and I am not comfortable hosting Google credentials in Travis.
### Skip boringssl
When we build gRPC we build all its dependencies, including boringssl, which can be easily replaced by the pre-built openssl installed in the system. Furthermore, building borgingssl requires golang, which adds about 100MiB of packages to the Docker images (out of 250 MiB, sigh).
Unfortunately that is not the experience our users will have, and we want to test for that.
### Use ExternalProject_Add vs. submodules.
When we clone the submodules we clone the full history (for "reasons"). That is a lot of data to download, and accounts for 3 minutes of the builds (of around 20 minutes). Using external projects should be faster (only the last version is downloaded).
|
process
|
ideas to speed up ci builds i have several ideas on how to speed up the build i am going to record them here so i do not forget and in case somebody volunteers cache the build directory this is what we document in unfortunately all my attempts have failed probably the travis cache is somehow getting stale compared to the code or maybe some generated files cache the docker images we use docker to make build reproducible and to simulate the experience of our users having pre built images could speed up things but it is not easy the image generation takes minutes uploading to the default travis cache and downloading from it takes as i recall somebody should measure about minutes and then there is the storing of the image as a tarball and restoring from the tarball generally i have found that caching docker images in travis is not a good tradeoff circleci promises better caching for docker images using google container registry should be better but requires authentication and i am not comfortable hosting google credentials in travis skip boringssl when we build grpc we build all its dependencies including boringssl which can be easily replaced by the pre built openssl installed in the system furthermore building borgingssl requires golang which adds about of packages to the docker images out of mib sigh unfortunately that is not the experience our users will have and we want to test for that use externalproject add vs submodules when we clone the submodules we clone the full history for reasons that is a lot of data to download and accounts for minutes of the builds of around minutes using external projects should be faster only the last version is downloaded
| 1
|
192,096
| 22,215,897,972
|
IssuesEvent
|
2022-06-08 01:34:56
|
ShaikUsaf/linux-3.0.35
|
https://api.github.com/repos/ShaikUsaf/linux-3.0.35
|
opened
|
CVE-2017-8890 (High) detected in linuxlinux-3.0.49
|
security vulnerability
|
## CVE-2017-8890 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linuxlinux-3.0.49</b></p></summary>
<p>
<p>The Linux Kernel</p>
<p>Library home page: <a href=https://mirrors.edge.kernel.org/pub/linux/kernel/v3.x/?wsslib=linux>https://mirrors.edge.kernel.org/pub/linux/kernel/v3.x/?wsslib=linux</a></p>
<p>Found in base branch: <b>master</b></p></p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (1)</summary>
<p></p>
<p>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/net/ipv4/inet_connection_sock.c</b>
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
The inet_csk_clone_lock function in net/ipv4/inet_connection_sock.c in the Linux kernel through 4.10.15 allows attackers to cause a denial of service (double free) or possibly have unspecified other impact by leveraging use of the accept system call.
<p>Publish Date: 2017-05-10
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2017-8890>CVE-2017-8890</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="http://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2017-8890">http://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2017-8890</a></p>
<p>Release Date: 2017-05-10</p>
<p>Fix Resolution: v4.12-rc1</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
CVE-2017-8890 (High) detected in linuxlinux-3.0.49 - ## CVE-2017-8890 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linuxlinux-3.0.49</b></p></summary>
<p>
<p>The Linux Kernel</p>
<p>Library home page: <a href=https://mirrors.edge.kernel.org/pub/linux/kernel/v3.x/?wsslib=linux>https://mirrors.edge.kernel.org/pub/linux/kernel/v3.x/?wsslib=linux</a></p>
<p>Found in base branch: <b>master</b></p></p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (1)</summary>
<p></p>
<p>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/net/ipv4/inet_connection_sock.c</b>
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
The inet_csk_clone_lock function in net/ipv4/inet_connection_sock.c in the Linux kernel through 4.10.15 allows attackers to cause a denial of service (double free) or possibly have unspecified other impact by leveraging use of the accept system call.
<p>Publish Date: 2017-05-10
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2017-8890>CVE-2017-8890</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="http://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2017-8890">http://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2017-8890</a></p>
<p>Release Date: 2017-05-10</p>
<p>Fix Resolution: v4.12-rc1</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_process
|
cve high detected in linuxlinux cve high severity vulnerability vulnerable library linuxlinux the linux kernel library home page a href found in base branch master vulnerable source files net inet connection sock c vulnerability details the inet csk clone lock function in net inet connection sock c in the linux kernel through allows attackers to cause a denial of service double free or possibly have unspecified other impact by leveraging use of the accept system call publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity low privileges required low user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with mend
| 0
|
14,586
| 17,703,511,267
|
IssuesEvent
|
2021-08-25 03:10:46
|
tdwg/dwc
|
https://api.github.com/repos/tdwg/dwc
|
closed
|
Change term - locality
|
Term - change Class - Location non-normative Process - complete
|
## Change term
* Submitter: Paula Zermoglio @pzermoglio
* Justification (why is this change necessary?): Clarity
* Proponents (who needs this change): Anyone interested in data from protected areas
Current Term definition: https://dwc.tdwg.org/terms/#dwc:locality
Proposed new attributes of the term:
* Term name (in lowerCamelCase): locality
* Organized in Class (e.g. Location, Taxon): Location
* Definition of the term: (unchanged): The specific description of the place.
* Usage comments (recommendations regarding content, etc.): Less specific geographic information can be provided in other geographic terms (higherGeography, continent, country, stateProvince, county, municipality, waterBody, island, islandGroup). This term may contain information modified from the original to correct perceived errors or standardize the description.
* Examples: `Bariloche, 25 km NNE via Ruta Nacional 40 (=Ruta 237)`, **`Queets Rainforest, Olympic National Park`**
* Refines (identifier of the broader term this term refines, if applicable): None
* Replaces (identifier of the existing term that would be deprecated and replaced by this term, if applicable): http://rs.tdwg.org/dwc/terms/version/locality-2017-10-06
* ABCD 2.06 (XPATH of the equivalent term in ABCD or EFG, if applicable): DataSets/DataSet/Units/Unit/Gathering/NamedAreas/NamedArea/AreaName
Many questions have landed on my desk over the years about how to capture information about protected areas using Location terms. As no specific term exists in DwC for that information, our recommendation has always been "include it in the locality / verbatimLocality field as part of the locality description" (usually append/prepend to whatever is there already).
Currently dwc:locality has only one example: 'Bariloche, 25 km NNE via Ruta Nacional 40 (=Ruta 237)'.
I find it would be useful for users to have an extra example of such strings containing protected areas info, it would probably save them a lot of time.
Possible example:
'Olympic National Park, Queets Rainforest"
|
1.0
|
Change term - locality - ## Change term
* Submitter: Paula Zermoglio @pzermoglio
* Justification (why is this change necessary?): Clarity
* Proponents (who needs this change): Anyone interested in data from protected areas
Current Term definition: https://dwc.tdwg.org/terms/#dwc:locality
Proposed new attributes of the term:
* Term name (in lowerCamelCase): locality
* Organized in Class (e.g. Location, Taxon): Location
* Definition of the term: (unchanged): The specific description of the place.
* Usage comments (recommendations regarding content, etc.): Less specific geographic information can be provided in other geographic terms (higherGeography, continent, country, stateProvince, county, municipality, waterBody, island, islandGroup). This term may contain information modified from the original to correct perceived errors or standardize the description.
* Examples: `Bariloche, 25 km NNE via Ruta Nacional 40 (=Ruta 237)`, **`Queets Rainforest, Olympic National Park`**
* Refines (identifier of the broader term this term refines, if applicable): None
* Replaces (identifier of the existing term that would be deprecated and replaced by this term, if applicable): http://rs.tdwg.org/dwc/terms/version/locality-2017-10-06
* ABCD 2.06 (XPATH of the equivalent term in ABCD or EFG, if applicable): DataSets/DataSet/Units/Unit/Gathering/NamedAreas/NamedArea/AreaName
Many questions have landed on my desk over the years about how to capture information about protected areas using Location terms. As no specific term exists in DwC for that information, our recommendation has always been "include it in the locality / verbatimLocality field as part of the locality description" (usually append/prepend to whatever is there already).
Currently dwc:locality has only one example: 'Bariloche, 25 km NNE via Ruta Nacional 40 (=Ruta 237)'.
I find it would be useful for users to have an extra example of such strings containing protected areas info, it would probably save them a lot of time.
Possible example:
'Olympic National Park, Queets Rainforest"
|
process
|
change term locality change term submitter paula zermoglio pzermoglio justification why is this change necessary clarity proponents who needs this change anyone interested in data from protected areas current term definition proposed new attributes of the term term name in lowercamelcase locality organized in class e g location taxon location definition of the term unchanged the specific description of the place usage comments recommendations regarding content etc less specific geographic information can be provided in other geographic terms highergeography continent country stateprovince county municipality waterbody island islandgroup this term may contain information modified from the original to correct perceived errors or standardize the description examples bariloche km nne via ruta nacional ruta queets rainforest olympic national park refines identifier of the broader term this term refines if applicable none replaces identifier of the existing term that would be deprecated and replaced by this term if applicable abcd xpath of the equivalent term in abcd or efg if applicable datasets dataset units unit gathering namedareas namedarea areaname many questions have landed on my desk over the years about how to capture information about protected areas using location terms as no specific term exists in dwc for that information our recommendation has always been include it in the locality verbatimlocality field as part of the locality description usually append prepend to whatever is there already currently dwc locality has only one example bariloche km nne via ruta nacional ruta i find it would be useful for users to have an extra example of such strings containing protected areas info it would probably save them a lot of time possible example olympic national park queets rainforest
| 1
|
4,915
| 7,788,454,781
|
IssuesEvent
|
2018-06-07 04:48:50
|
nodejs/node
|
https://api.github.com/repos/nodejs/node
|
closed
|
test: improve `parallel/test-setproctitle.js`
|
process test windows
|
<!--
Thank you for reporting an issue.
This issue tracker is for bugs and issues found within Node.js core.
If you require more general support please file an issue on our help
repo. https://github.com/nodejs/help
Please fill in as much of the template below as you're able.
Version: output of `node -v`
Platform: output of `uname -a` (UNIX), or version and 32 or 64-bit (Windows)
Subsystem: if known, please specify affected core module name
If possible, please provide code that demonstrates the problem, keeping it as
simple and free of external dependencies as you are able.
-->
* **Version**: `master`
* **Platform**: `Windows`
* **Subsystem**: test,process
<!-- Enter your issue details below this comment. -->
Current implementation skips a part of the test claiming [`'Windows does not have "ps" utility'`](https://github.com/nodejs/node/blob/master/test/parallel/test-setproctitle.js#L23) which is not strickly true — Windows has `tasklist` and PowerShell has `ps` aliasing `Get-Proccess`.
The rest of the test should be implemented for Windows.
|
1.0
|
test: improve `parallel/test-setproctitle.js` - <!--
Thank you for reporting an issue.
This issue tracker is for bugs and issues found within Node.js core.
If you require more general support please file an issue on our help
repo. https://github.com/nodejs/help
Please fill in as much of the template below as you're able.
Version: output of `node -v`
Platform: output of `uname -a` (UNIX), or version and 32 or 64-bit (Windows)
Subsystem: if known, please specify affected core module name
If possible, please provide code that demonstrates the problem, keeping it as
simple and free of external dependencies as you are able.
-->
* **Version**: `master`
* **Platform**: `Windows`
* **Subsystem**: test,process
<!-- Enter your issue details below this comment. -->
Current implementation skips a part of the test claiming [`'Windows does not have "ps" utility'`](https://github.com/nodejs/node/blob/master/test/parallel/test-setproctitle.js#L23) which is not strickly true — Windows has `tasklist` and PowerShell has `ps` aliasing `Get-Proccess`.
The rest of the test should be implemented for Windows.
|
process
|
test improve parallel test setproctitle js thank you for reporting an issue this issue tracker is for bugs and issues found within node js core if you require more general support please file an issue on our help repo please fill in as much of the template below as you re able version output of node v platform output of uname a unix or version and or bit windows subsystem if known please specify affected core module name if possible please provide code that demonstrates the problem keeping it as simple and free of external dependencies as you are able version master platform windows subsystem test process current implementation skips a part of the test claiming which is not strickly true mdash windows has tasklist and powershell has ps aliasing get proccess the rest of the test should be implemented for windows
| 1
|
3,482
| 6,553,620,167
|
IssuesEvent
|
2017-09-05 23:47:26
|
ncbo/bioportal-project
|
https://api.github.com/repos/ncbo/bioportal-project
|
closed
|
IDOMAL: bad pull location -> many failed submissions
|
in progress ontology processing problem
|
The [IDOMAL ontology](http://bioportal.bioontology.org/ontologies/IDOMAL) has a [pull location](http://purl.obolibrary.org/obo/idomal.obo) that's no longer valid and has resulted in a large number of submissions with status "Error Rdf". We should contact the author to see if there's a new pull location, and perhaps in the mean time disable the pull location by editing the submission. Also, the large number of bad submissions should be deleted.
|
1.0
|
IDOMAL: bad pull location -> many failed submissions - The [IDOMAL ontology](http://bioportal.bioontology.org/ontologies/IDOMAL) has a [pull location](http://purl.obolibrary.org/obo/idomal.obo) that's no longer valid and has resulted in a large number of submissions with status "Error Rdf". We should contact the author to see if there's a new pull location, and perhaps in the mean time disable the pull location by editing the submission. Also, the large number of bad submissions should be deleted.
|
process
|
idomal bad pull location many failed submissions the has a that s no longer valid and has resulted in a large number of submissions with status error rdf we should contact the author to see if there s a new pull location and perhaps in the mean time disable the pull location by editing the submission also the large number of bad submissions should be deleted
| 1
|
20,885
| 27,708,211,644
|
IssuesEvent
|
2023-03-14 12:40:53
|
toggl/track-windows-feedback
|
https://api.github.com/repos/toggl/track-windows-feedback
|
closed
|
Shortcuts (@ and #) don't work in manual mode
|
bug processed
|
**Describe the bug**
If Toggl app is used in manual mode, the shortcuts for project and hashtags don't work.
**Steps to reproduce**
1. Swith the app to Manual Mode
2. Press the "Enter Time Manually" button
3. Press @ or # in the description pop-up and then first letters of project or tag
4. Nothing happens
**Expected behavior**
App should propose the project after pressing @ and the tag after pressing # basic on the letters you are typing.
**Environment (please complete the following information):**
- Version 8.0.9
|
1.0
|
Shortcuts (@ and #) don't work in manual mode - **Describe the bug**
If Toggl app is used in manual mode, the shortcuts for project and hashtags don't work.
**Steps to reproduce**
1. Swith the app to Manual Mode
2. Press the "Enter Time Manually" button
3. Press @ or # in the description pop-up and then first letters of project or tag
4. Nothing happens
**Expected behavior**
App should propose the project after pressing @ and the tag after pressing # basic on the letters you are typing.
**Environment (please complete the following information):**
- Version 8.0.9
|
process
|
shortcuts and don t work in manual mode describe the bug if toggl app is used in manual mode the shortcuts for project and hashtags don t work steps to reproduce swith the app to manual mode press the enter time manually button press or in the description pop up and then first letters of project or tag nothing happens expected behavior app should propose the project after pressing and the tag after pressing basic on the letters you are typing environment please complete the following information version
| 1
|
197,037
| 15,619,071,185
|
IssuesEvent
|
2021-03-20 03:06:23
|
alloploha/HHSwarm
|
https://api.github.com/repos/alloploha/HHSwarm
|
opened
|
Translate README to Korean language. (README를 한국어로 번역)
|
documentation
|
Players from Korea are important auditory, so they might want to contribute to the project.
Need to create [README.ko.md](../blob/main/README.ko.md).
한국 선수들은 청각이 중요하기 때문에 프로젝트에 기여하고 싶을지도 모릅니다.
[README.ko.md](../blob/main/README.ko.md)를 만들어야합니다.
|
1.0
|
Translate README to Korean language. (README를 한국어로 번역) - Players from Korea are important auditory, so they might want to contribute to the project.
Need to create [README.ko.md](../blob/main/README.ko.md).
한국 선수들은 청각이 중요하기 때문에 프로젝트에 기여하고 싶을지도 모릅니다.
[README.ko.md](../blob/main/README.ko.md)를 만들어야합니다.
|
non_process
|
translate readme to korean language readme를 한국어로 번역 players from korea are important auditory so they might want to contribute to the project need to create blob main readme ko md 한국 선수들은 청각이 중요하기 때문에 프로젝트에 기여하고 싶을지도 모릅니다 blob main readme ko md 를 만들어야합니다
| 0
|
18,654
| 24,581,261,453
|
IssuesEvent
|
2022-10-13 15:46:55
|
GoogleCloudPlatform/fda-mystudies
|
https://api.github.com/repos/GoogleCloudPlatform/fda-mystudies
|
closed
|
[Consent API] State should be changed in the data sharing consent type for the below scenarios
|
Bug P1 Process: Fixed Process: Tested QA Process: Tested dev
|
State should be changed in the data sharing consent type for below scenarios,
**1.**
**AR:** The state of the consent record is 'ACTIVE' for data sharing status Not Provided
**ER:** The state of the consent record should be 'REJECTED' for data sharing status Not Provided
**2.**
**AR:** The state of the consent record is 'ACTIVE' for data sharing status Not Applicable
**ER:** The state of the consent record is 'STATE_UNSPECIFIED' for data sharing status Not Applicable
**Note:** Issue needs to be fixed for below scenario also,
If the previous version of data sharing consent is Provided and the latest version is Not Provided then the state of consent record should be REVOKED.
|
3.0
|
[Consent API] State should be changed in the data sharing consent type for the below scenarios - State should be changed in the data sharing consent type for below scenarios,
**1.**
**AR:** The state of the consent record is 'ACTIVE' for data sharing status Not Provided
**ER:** The state of the consent record should be 'REJECTED' for data sharing status Not Provided
**2.**
**AR:** The state of the consent record is 'ACTIVE' for data sharing status Not Applicable
**ER:** The state of the consent record is 'STATE_UNSPECIFIED' for data sharing status Not Applicable
**Note:** Issue needs to be fixed for below scenario also,
If the previous version of data sharing consent is Provided and the latest version is Not Provided then the state of consent record should be REVOKED.
|
process
|
state should be changed in the data sharing consent type for the below scenarios state should be changed in the data sharing consent type for below scenarios ar the state of the consent record is active for data sharing status not provided er the state of the consent record should be rejected for data sharing status not provided ar the state of the consent record is active for data sharing status not applicable er the state of the consent record is state unspecified for data sharing status not applicable note issue needs to be fixed for below scenario also if the previous version of data sharing consent is provided and the latest version is not provided then the state of consent record should be revoked
| 1
|
16,527
| 21,554,433,359
|
IssuesEvent
|
2022-04-30 06:44:11
|
dotnet/runtime
|
https://api.github.com/repos/dotnet/runtime
|
closed
|
Could not load or assemble system.serviceprocess.servicecontroller on windows services.
|
question area-System.ServiceProcess needs-further-triage
|
Hi, I make a windows service with .net core. It happend some exception when the service start. Can you tell me how to solve it?
-------------------------------------------------------------------------------------------------------------
Application: MqttWorkerService.exe
CoreCLR Version: 6.0.422.16404
.NET Version: 6.0.4
Description: The process was terminated due to an unhandled exception.
Exception Info: System.IO.FileNotFoundException: Could not load file or assembly 'System.ServiceProcess.ServiceController, Version=6.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a'. 系统找不到指定的文件。
File name: 'System.ServiceProcess.ServiceController, Version=6.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a'
at Microsoft.Extensions.Hosting.WindowsServiceLifetimeHostBuilderExtensions.<>c__DisplayClass1_0.<UseWindowsService>b__1(HostBuilderContext hostContext, IServiceCollection services)
at Microsoft.Extensions.Hosting.HostBuilder.CreateServiceProvider()
at Microsoft.Extensions.Hosting.HostBuilder.Build()
at Program.<Main>$(String[] args) in E:\source_code\C_Sharp\MQTTSniffServices\MqttWorkerService\Program.cs:line 3
at Program.<Main>(String[] args)
-------------------------------------------------------------------------------------------------------------------
this is my .net platform info:
Host (useful for support):
Version: 6.0.4
Commit: be98e88c76
.NET SDKs installed:
No SDKs were found.
.NET runtimes installed:
Microsoft.AspNetCore.App 3.1.4 [C:\Program Files\dotnet\shared\Microsoft.AspNetCore.App]
Microsoft.NETCore.App 3.1.4 [C:\Program Files\dotnet\shared\Microsoft.NETCore.App]
Microsoft.NETCore.App 6.0.4 [C:\Program Files\dotnet\shared\Microsoft.NETCore.App]
--------------------------------------------------------------------------------------------------------------------------
My PC information:
Microsoft Windows Server 2019 Datacenter
Build 17763
|
1.0
|
Could not load or assemble system.serviceprocess.servicecontroller on windows services. - Hi, I make a windows service with .net core. It happend some exception when the service start. Can you tell me how to solve it?
-------------------------------------------------------------------------------------------------------------
Application: MqttWorkerService.exe
CoreCLR Version: 6.0.422.16404
.NET Version: 6.0.4
Description: The process was terminated due to an unhandled exception.
Exception Info: System.IO.FileNotFoundException: Could not load file or assembly 'System.ServiceProcess.ServiceController, Version=6.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a'. 系统找不到指定的文件。
File name: 'System.ServiceProcess.ServiceController, Version=6.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a'
at Microsoft.Extensions.Hosting.WindowsServiceLifetimeHostBuilderExtensions.<>c__DisplayClass1_0.<UseWindowsService>b__1(HostBuilderContext hostContext, IServiceCollection services)
at Microsoft.Extensions.Hosting.HostBuilder.CreateServiceProvider()
at Microsoft.Extensions.Hosting.HostBuilder.Build()
at Program.<Main>$(String[] args) in E:\source_code\C_Sharp\MQTTSniffServices\MqttWorkerService\Program.cs:line 3
at Program.<Main>(String[] args)
-------------------------------------------------------------------------------------------------------------------
this is my .net platform info:
Host (useful for support):
Version: 6.0.4
Commit: be98e88c76
.NET SDKs installed:
No SDKs were found.
.NET runtimes installed:
Microsoft.AspNetCore.App 3.1.4 [C:\Program Files\dotnet\shared\Microsoft.AspNetCore.App]
Microsoft.NETCore.App 3.1.4 [C:\Program Files\dotnet\shared\Microsoft.NETCore.App]
Microsoft.NETCore.App 6.0.4 [C:\Program Files\dotnet\shared\Microsoft.NETCore.App]
--------------------------------------------------------------------------------------------------------------------------
My PC information:
Microsoft Windows Server 2019 Datacenter
Build 17763
|
process
|
could not load or assemble system serviceprocess servicecontroller on windows services hi i make a windows service with net core it happend some exception when the service start can you tell me how to solve it application mqttworkerservice exe coreclr version net version description the process was terminated due to an unhandled exception exception info system io filenotfoundexception could not load file or assembly system serviceprocess servicecontroller version culture neutral publickeytoken 系统找不到指定的文件。 file name system serviceprocess servicecontroller version culture neutral publickeytoken at microsoft extensions hosting windowsservicelifetimehostbuilderextensions c b hostbuildercontext hostcontext iservicecollection services at microsoft extensions hosting hostbuilder createserviceprovider at microsoft extensions hosting hostbuilder build at program string args in e source code c sharp mqttsniffservices mqttworkerservice program cs line at program string args this is my net platform info host useful for support version commit net sdks installed no sdks were found net runtimes installed microsoft aspnetcore app microsoft netcore app microsoft netcore app my pc information microsoft windows server datacenter build
| 1
|
2,695
| 5,541,139,717
|
IssuesEvent
|
2017-03-22 12:01:36
|
DevExpress/testcafe-hammerhead
|
https://api.github.com/repos/DevExpress/testcafe-hammerhead
|
opened
|
Improve <frameset> support
|
AREA: client AREA: server SYSTEM: resource processing TYPE: proposal
|
The reason of the suggestion is https://github.com/DevExpress/testcafe/issues/1336.
I suggest to do:
- [ ] Load Iframe version of task script into `<frame>`s (at least helps with native dialogs)
|
1.0
|
Improve <frameset> support - The reason of the suggestion is https://github.com/DevExpress/testcafe/issues/1336.
I suggest to do:
- [ ] Load Iframe version of task script into `<frame>`s (at least helps with native dialogs)
|
process
|
improve support the reason of the suggestion is i suggest to do load iframe version of task script into s at least helps with native dialogs
| 1
|
97
| 2,536,912,178
|
IssuesEvent
|
2015-01-26 17:04:54
|
iojs/io.js
|
https://api.github.com/repos/iojs/io.js
|
closed
|
3rd `spawnSync()` with shared `options` argument throws error
|
child_process
|
I have installed io.js, and start testing `spawnSync()`. However, calling `spawnSync()` 3 times with shared `options` argument throws an error.
`test.js`:
```javascript
var spawn = require('child_process').spawnSync;
var ls;
var opts = {
stdio: 'inherit'
};
ls = spawn('ls', [], opts);
ls = spawn('ls', [], opts);
ls = spawn('ls', [], opts);
```
Then run:
```
C:\Users\Kyo\Desktop>iojs test.js
desktop.ini test.js
desktop.ini test.js
child_process.js:905
throw new TypeError('Incorrect value for stdio stream: ' +
^
TypeError: Incorrect value for stdio stream: { type: 'fd', fd: { type: 'fd', fd: 0 } }
at child_process.js:905:13
at Array.reduce (native)
at _validateStdio (child_process.js:829:17)
at spawnSync (child_process.js:1251:19)
at Object.<anonymous> (c:\Users\Kyo\Desktop\test.js:13:10)
at Module._compile (module.js:446:26)
at Object.Module._extensions..js (module.js:464:10)
at Module.load (module.js:341:32)
at Function.Module._load (module.js:296:12)
at Function.Module.runMain (module.js:487:10)
```
Tested on io.js v1.0.3 (Win64) on Windows 7 (64bit).
|
1.0
|
3rd `spawnSync()` with shared `options` argument throws error - I have installed io.js, and start testing `spawnSync()`. However, calling `spawnSync()` 3 times with shared `options` argument throws an error.
`test.js`:
```javascript
var spawn = require('child_process').spawnSync;
var ls;
var opts = {
stdio: 'inherit'
};
ls = spawn('ls', [], opts);
ls = spawn('ls', [], opts);
ls = spawn('ls', [], opts);
```
Then run:
```
C:\Users\Kyo\Desktop>iojs test.js
desktop.ini test.js
desktop.ini test.js
child_process.js:905
throw new TypeError('Incorrect value for stdio stream: ' +
^
TypeError: Incorrect value for stdio stream: { type: 'fd', fd: { type: 'fd', fd: 0 } }
at child_process.js:905:13
at Array.reduce (native)
at _validateStdio (child_process.js:829:17)
at spawnSync (child_process.js:1251:19)
at Object.<anonymous> (c:\Users\Kyo\Desktop\test.js:13:10)
at Module._compile (module.js:446:26)
at Object.Module._extensions..js (module.js:464:10)
at Module.load (module.js:341:32)
at Function.Module._load (module.js:296:12)
at Function.Module.runMain (module.js:487:10)
```
Tested on io.js v1.0.3 (Win64) on Windows 7 (64bit).
|
process
|
spawnsync with shared options argument throws error i have installed io js and start testing spawnsync however calling spawnsync times with shared options argument throws an error test js javascript var spawn require child process spawnsync var ls var opts stdio inherit ls spawn ls opts ls spawn ls opts ls spawn ls opts then run c users kyo desktop iojs test js desktop ini test js desktop ini test js child process js throw new typeerror incorrect value for stdio stream typeerror incorrect value for stdio stream type fd fd type fd fd at child process js at array reduce native at validatestdio child process js at spawnsync child process js at object c users kyo desktop test js at module compile module js at object module extensions js module js at module load module js at function module load module js at function module runmain module js tested on io js on windows
| 1
|
101,824
| 11,259,528,419
|
IssuesEvent
|
2020-01-13 08:33:39
|
c4urself/bump2version
|
https://api.github.com/repos/c4urself/bump2version
|
closed
|
Question: how to use parse to deal only with major version?
|
documentation question
|
Hi, after discovering this more up-to-date fork of the original bumpversion project, I'm in the process of switching over to it and I have a question.
I already use bumpversion to maintain version #'s in a few files, where the version # is the full version # (e.g. 3.2.5). I have a scenario where I want to maintain only the major version # within a go.mod file. Within this file I have a module directive like this:
```
module github.com/IBM/go-sdk-core/v3
```
where "3" represents the current major version of the package. I'd like to be able to configure bumpversion to bump this to "v4" when the next major version is created.
It looks like I could maybe use a combination of the "parse" and "search" directives within a `[bumpversion:file:...]` section in my cfg file. Anyone done this before?
It would be great if the README contained an example of this (after figuring this out, I would be happy to submit a PR for that).
Thanks in advance!
|
1.0
|
Question: how to use parse to deal only with major version? - Hi, after discovering this more up-to-date fork of the original bumpversion project, I'm in the process of switching over to it and I have a question.
I already use bumpversion to maintain version #'s in a few files, where the version # is the full version # (e.g. 3.2.5). I have a scenario where I want to maintain only the major version # within a go.mod file. Within this file I have a module directive like this:
```
module github.com/IBM/go-sdk-core/v3
```
where "3" represents the current major version of the package. I'd like to be able to configure bumpversion to bump this to "v4" when the next major version is created.
It looks like I could maybe use a combination of the "parse" and "search" directives within a `[bumpversion:file:...]` section in my cfg file. Anyone done this before?
It would be great if the README contained an example of this (after figuring this out, I would be happy to submit a PR for that).
Thanks in advance!
|
non_process
|
question how to use parse to deal only with major version hi after discovering this more up to date fork of the original bumpversion project i m in the process of switching over to it and i have a question i already use bumpversion to maintain version s in a few files where the version is the full version e g i have a scenario where i want to maintain only the major version within a go mod file within this file i have a module directive like this module github com ibm go sdk core where represents the current major version of the package i d like to be able to configure bumpversion to bump this to when the next major version is created it looks like i could maybe use a combination of the parse and search directives within a section in my cfg file anyone done this before it would be great if the readme contained an example of this after figuring this out i would be happy to submit a pr for that thanks in advance
| 0
|
15,066
| 18,764,649,385
|
IssuesEvent
|
2021-11-05 21:19:27
|
esmero/strawberryfield
|
https://api.github.com/repos/esmero/strawberryfield
|
closed
|
Use Mediainfo for Video/Audio Tech metadata Extraction
|
enhancement JSON Postprocessors Events and Subscriber Digital Preservation
|
# What?
We need duration/codecs/metadata/etc from Media (Video/Audio) that exif nor pronom can provide. So we go for mediainfo
## Tasks:
- Add mediainfo to the PHP Docker container (DONE! esmero/php-7.4-fpm:1.0.0-RC2-multiarch contains it now)
- Include a Mediainfo PHP library so we do not need to deal with filesystem level parsing of output
- Add a configuration option in this module (same place we use for pdfinfo, etc)
- Write the wrapper code that extracts Media Info and pushes it into JSON
|
1.0
|
Use Mediainfo for Video/Audio Tech metadata Extraction - # What?
We need duration/codecs/metadata/etc from Media (Video/Audio) that exif nor pronom can provide. So we go for mediainfo
## Tasks:
- Add mediainfo to the PHP Docker container (DONE! esmero/php-7.4-fpm:1.0.0-RC2-multiarch contains it now)
- Include a Mediainfo PHP library so we do not need to deal with filesystem level parsing of output
- Add a configuration option in this module (same place we use for pdfinfo, etc)
- Write the wrapper code that extracts Media Info and pushes it into JSON
|
process
|
use mediainfo for video audio tech metadata extraction what we need duration codecs metadata etc from media video audio that exif nor pronom can provide so we go for mediainfo tasks add mediainfo to the php docker container done esmero php fpm multiarch contains it now include a mediainfo php library so we do not need to deal with filesystem level parsing of output add a configuration option in this module same place we use for pdfinfo etc write the wrapper code that extracts media info and pushes it into json
| 1
|
297,581
| 9,178,766,788
|
IssuesEvent
|
2019-03-05 00:18:47
|
xlayers/xlayers
|
https://api.github.com/repos/xlayers/xlayers
|
opened
|
Explore embedding Stackblitz into xLayers
|
Priority: Low Scope: Editor community-help effort2: medium (days) type: discussion / RFC
|
**Is your feature request related to a problem? Please describe.**
We could embed Stackblitz into xLayers editors for seamless integration. See docs: https://stackblitz.com/docs#embedding
|
1.0
|
Explore embedding Stackblitz into xLayers - **Is your feature request related to a problem? Please describe.**
We could embed Stackblitz into xLayers editors for seamless integration. See docs: https://stackblitz.com/docs#embedding
|
non_process
|
explore embedding stackblitz into xlayers is your feature request related to a problem please describe we could embed stackblitz into xlayers editors for seamless integration see docs
| 0
|
483,510
| 13,925,911,416
|
IssuesEvent
|
2020-10-21 17:31:11
|
redhat-developer/vscode-openshift-tools
|
https://api.github.com/repos/redhat-developer/vscode-openshift-tools
|
closed
|
Clicking on icon to open route fails
|
kind/bug priority/major upstream/odo
|
Error running command openshift.url.open: Cannot read property 'status' of undefined. This is likely caused by the extension that contributes openshift.url.open.
<img width="1007" alt="Screen Shot 2020-10-19 at 7 19 10 PM" src="https://user-images.githubusercontent.com/148698/96489769-2ac9b080-1240-11eb-969c-b725f9c96597.png">
see
* https://youtu.be/_sLr0T7jabg?t=305
* odo issue https://github.com/openshift/odo/issues/4125
|
1.0
|
Clicking on icon to open route fails - Error running command openshift.url.open: Cannot read property 'status' of undefined. This is likely caused by the extension that contributes openshift.url.open.
<img width="1007" alt="Screen Shot 2020-10-19 at 7 19 10 PM" src="https://user-images.githubusercontent.com/148698/96489769-2ac9b080-1240-11eb-969c-b725f9c96597.png">
see
* https://youtu.be/_sLr0T7jabg?t=305
* odo issue https://github.com/openshift/odo/issues/4125
|
non_process
|
clicking on icon to open route fails error running command openshift url open cannot read property status of undefined this is likely caused by the extension that contributes openshift url open img width alt screen shot at pm src see odo issue
| 0
|
21,575
| 29,933,092,343
|
IssuesEvent
|
2023-06-22 10:51:35
|
raycast/extensions
|
https://api.github.com/repos/raycast/extensions
|
closed
|
[Kill Process] display process ordered by memory usage
|
feature request extension extension: kill-process
|
### Extension
https://www.raycast.com/rolandleth/kill-process
### Description
Sometimes we not only need to kill process that cost too much cpu, but also too much memory, such as memory leak(and in these cases, process do not often cost too much cpu). I hope we can display memory usage per process and ordered like cpu usage.
### Who will benefit from this feature?
all users using this extension
### Anything else?
Not only list the process in activity monitor, but also provide a mode to aggregate process that is actually from a same app, like open multi tabs in chrome, do not show multi "chrome helper", instead show a "chrome" with the sum up cpu or memory usage.
|
1.0
|
[Kill Process] display process ordered by memory usage - ### Extension
https://www.raycast.com/rolandleth/kill-process
### Description
Sometimes we not only need to kill process that cost too much cpu, but also too much memory, such as memory leak(and in these cases, process do not often cost too much cpu). I hope we can display memory usage per process and ordered like cpu usage.
### Who will benefit from this feature?
all users using this extension
### Anything else?
Not only list the process in activity monitor, but also provide a mode to aggregate process that is actually from a same app, like open multi tabs in chrome, do not show multi "chrome helper", instead show a "chrome" with the sum up cpu or memory usage.
|
process
|
display process ordered by memory usage extension description sometimes we not only need to kill process that cost too much cpu but also too much memory such as memory leak and in these cases process do not often cost too much cpu i hope we can display memory usage per process and ordered like cpu usage who will benefit from this feature all users using this extension anything else not only list the process in activity monitor but also provide a mode to aggregate process that is actually from a same app like open multi tabs in chrome do not show multi chrome helper instead show a chrome with the sum up cpu or memory usage
| 1
|
330,655
| 10,053,748,447
|
IssuesEvent
|
2019-07-21 19:24:14
|
yalla-coop/death
|
https://api.github.com/repos/yalla-coop/death
|
opened
|
Agree on overall file structure and folder/file names
|
discuss priority-3
|
Important we are all clear how we want the file structure to be across back and front. e.g. do we have tests in their own test folder or alongside the file they are testing? do we have a Common folder for common components and Pages for other components? etc
Please show the suggested structure via the comments
|
1.0
|
Agree on overall file structure and folder/file names - Important we are all clear how we want the file structure to be across back and front. e.g. do we have tests in their own test folder or alongside the file they are testing? do we have a Common folder for common components and Pages for other components? etc
Please show the suggested structure via the comments
|
non_process
|
agree on overall file structure and folder file names important we are all clear how we want the file structure to be across back and front e g do we have tests in their own test folder or alongside the file they are testing do we have a common folder for common components and pages for other components etc please show the suggested structure via the comments
| 0
|
17,406
| 23,222,689,557
|
IssuesEvent
|
2022-08-02 19:53:16
|
apache/arrow-rs
|
https://api.github.com/repos/apache/arrow-rs
|
opened
|
Intermittent S3 emulator test failure
|
bug development-process object-store
|
**Describe the bug**
Intermittently, the `object_store` Emulator Tests test fails like this
```
thread 'aws::tests::s3_test' panicked at 'called `Result::unwrap()` on an `Err` value: Custom { kind: Other, error: HttpDispatch(HttpDispatchError { message: "Missing information for upload part 2" }) }', object_store/src/aws.rs:1277:59
```
Here is an example failure: https://github.com/apache/arrow-rs/runs/7639063643?check_suite_focus=true that then passed on the subsequent run: https://github.com/apache/arrow-rs/runs/7639161157?check_suite_focus=true
**To Reproduce**
Not sure
**Expected behavior**
<!--
A clear and concise description of what you expected to happen.
-->
**Additional context**
<!--
Add any other context about the problem here.
-->
|
1.0
|
Intermittent S3 emulator test failure - **Describe the bug**
Intermittently, the `object_store` Emulator Tests test fails like this
```
thread 'aws::tests::s3_test' panicked at 'called `Result::unwrap()` on an `Err` value: Custom { kind: Other, error: HttpDispatch(HttpDispatchError { message: "Missing information for upload part 2" }) }', object_store/src/aws.rs:1277:59
```
Here is an example failure: https://github.com/apache/arrow-rs/runs/7639063643?check_suite_focus=true that then passed on the subsequent run: https://github.com/apache/arrow-rs/runs/7639161157?check_suite_focus=true
**To Reproduce**
Not sure
**Expected behavior**
<!--
A clear and concise description of what you expected to happen.
-->
**Additional context**
<!--
Add any other context about the problem here.
-->
|
process
|
intermittent emulator test failure describe the bug intermittently the object store emulator tests test fails like this thread aws tests test panicked at called result unwrap on an err value custom kind other error httpdispatch httpdispatcherror message missing information for upload part object store src aws rs here is an example failure that then passed on the subsequent run to reproduce not sure expected behavior a clear and concise description of what you expected to happen additional context add any other context about the problem here
| 1
|
12,015
| 14,738,401,889
|
IssuesEvent
|
2021-01-07 04:39:33
|
kdjstudios/SABillingGitlab
|
https://api.github.com/repos/kdjstudios/SABillingGitlab
|
closed
|
Laser - crediting a customer for a duplicate payment
|
anc-external anc-process anp-1.5 ant-support
|
In GitLab by @kdjstudios on May 18, 2018, 09:57
**Submitted by:** Sharon Carver <scarver@laseranswering.com>
**Helpdesk:** http://www.servicedesk.answernet.com/profiles/ticket/2018-05-18-29897/conversation
**Server:** External
**Client/Site:** Laser
**Account:** NA
**Issue:**
We accidentally charged the credit card of a customer twice on two separate dates for a single invoice. Both of these payments were posted to SA billing. We have credited their credit card back the duplicate amount. How do I enter a negative payment or “refund”? I do not want to enter an invoice that will affect revenue in the billing cycle.
|
1.0
|
Laser - crediting a customer for a duplicate payment - In GitLab by @kdjstudios on May 18, 2018, 09:57
**Submitted by:** Sharon Carver <scarver@laseranswering.com>
**Helpdesk:** http://www.servicedesk.answernet.com/profiles/ticket/2018-05-18-29897/conversation
**Server:** External
**Client/Site:** Laser
**Account:** NA
**Issue:**
We accidentally charged the credit card of a customer twice on two separate dates for a single invoice. Both of these payments were posted to SA billing. We have credited their credit card back the duplicate amount. How do I enter a negative payment or “refund”? I do not want to enter an invoice that will affect revenue in the billing cycle.
|
process
|
laser crediting a customer for a duplicate payment in gitlab by kdjstudios on may submitted by sharon carver helpdesk server external client site laser account na issue we accidentally charged the credit card of a customer twice on two separate dates for a single invoice both of these payments were posted to sa billing we have credited their credit card back the duplicate amount how do i enter a negative payment or “refund” i do not want to enter an invoice that will affect revenue in the billing cycle
| 1
|
17,308
| 23,128,259,710
|
IssuesEvent
|
2022-07-28 08:04:58
|
Graylog2/graylog2-server
|
https://api.github.com/repos/Graylog2/graylog2-server
|
closed
|
Pipeline rule with geo lookup and possible empty array
|
processing bug triaged #M
|
I am trying to setup a pipeline rule to get some additional geo information, as described in http://docs.graylog.org/en/3.0/pages/geolocation.html#.
The problem I am facing is that part of the result of the lookup can be empty.
In the below code for the rule setting the country code works fine, but the fields which depend on the subdivisions object can lead to the error
"For rule 'rule geoip lookup': In call to function 'set_field' at 7:2 an exception was thrown: Index: 0, Size: 0".
This happens because the subdivisions object is sometimes empty, for instance:
```
...
"country": {
"confidence": null,
"geoname_id": 1605651,
"is_in_european_union": false,
"iso_code": "TH",
"names": {
"de": "Thailand",
"ru": "Тайланд",
"pt-BR": "Tailândia",
"ja": "タイ王国",
"en": "Thailand",
"fr": "Thaïlande",
"zh-CN": "泰国",
"es": "Tailandia"
}
},
...
"subdivisions": []
```
Code for the rule:
```
rule "rule geoip subdivision lookup"
when
has_field("IP")
then
let geo = lookup("geoip-lookup", to_string($message.IP));
set_field("IP_country_code", geo["country"].iso_code);
set_field("IP_region_code", geo["subdivisions"].[0].iso_code);
set_field("IP_region_name", geo["subdivisions"].[0].names.en);
end
```
Currently it seems not to be possible to only set the fiels IP_region_code and IP_region_name, when the array is not empty - or to use a default value, if the array is empty.
(Also posted in https://community.graylog.org/t/pipeline-rule-with-geo-lookup-and-possible-empty-array/10013)
|
1.0
|
Pipeline rule with geo lookup and possible empty array - I am trying to setup a pipeline rule to get some additional geo information, as described in http://docs.graylog.org/en/3.0/pages/geolocation.html#.
The problem I am facing is that part of the result of the lookup can be empty.
In the below code for the rule setting the country code works fine, but the fields which depend on the subdivisions object can lead to the error
"For rule 'rule geoip lookup': In call to function 'set_field' at 7:2 an exception was thrown: Index: 0, Size: 0".
This happens because the subdivisions object is sometimes empty, for instance:
```
...
"country": {
"confidence": null,
"geoname_id": 1605651,
"is_in_european_union": false,
"iso_code": "TH",
"names": {
"de": "Thailand",
"ru": "Тайланд",
"pt-BR": "Tailândia",
"ja": "タイ王国",
"en": "Thailand",
"fr": "Thaïlande",
"zh-CN": "泰国",
"es": "Tailandia"
}
},
...
"subdivisions": []
```
Code for the rule:
```
rule "rule geoip subdivision lookup"
when
has_field("IP")
then
let geo = lookup("geoip-lookup", to_string($message.IP));
set_field("IP_country_code", geo["country"].iso_code);
set_field("IP_region_code", geo["subdivisions"].[0].iso_code);
set_field("IP_region_name", geo["subdivisions"].[0].names.en);
end
```
Currently it seems not to be possible to only set the fiels IP_region_code and IP_region_name, when the array is not empty - or to use a default value, if the array is empty.
(Also posted in https://community.graylog.org/t/pipeline-rule-with-geo-lookup-and-possible-empty-array/10013)
|
process
|
pipeline rule with geo lookup and possible empty array i am trying to setup a pipeline rule to get some additional geo information as described in the problem i am facing is that part of the result of the lookup can be empty in the below code for the rule setting the country code works fine but the fields which depend on the subdivisions object can lead to the error for rule rule geoip lookup in call to function set field at an exception was thrown index size this happens because the subdivisions object is sometimes empty for instance country confidence null geoname id is in european union false iso code th names de thailand ru тайланд pt br tailândia ja タイ王国 en thailand fr thaïlande zh cn 泰国 es tailandia subdivisions code for the rule rule rule geoip subdivision lookup when has field ip then let geo lookup geoip lookup to string message ip set field ip country code geo iso code set field ip region code geo iso code set field ip region name geo names en end currently it seems not to be possible to only set the fiels ip region code and ip region name when the array is not empty or to use a default value if the array is empty also posted in
| 1
|
292
| 2,731,595,689
|
IssuesEvent
|
2015-04-16 21:15:44
|
hammerlab/pileup.js
|
https://api.github.com/repos/hammerlab/pileup.js
|
closed
|
Create a SAMRead class
|
process
|
Reads are currently parsed from decompressed BAM blocks directly via jBinary. It would be significantly more efficient to create a `SAMRead` class which was instantiated with an `ArrayBuffer` containing its data. It could fetch desired fields on-demand.
For bonus points, generate this class from the `BamAlignment` jBinary type. But that's probably overkill.
|
1.0
|
Create a SAMRead class - Reads are currently parsed from decompressed BAM blocks directly via jBinary. It would be significantly more efficient to create a `SAMRead` class which was instantiated with an `ArrayBuffer` containing its data. It could fetch desired fields on-demand.
For bonus points, generate this class from the `BamAlignment` jBinary type. But that's probably overkill.
|
process
|
create a samread class reads are currently parsed from decompressed bam blocks directly via jbinary it would be significantly more efficient to create a samread class which was instantiated with an arraybuffer containing its data it could fetch desired fields on demand for bonus points generate this class from the bamalignment jbinary type but that s probably overkill
| 1
|
8,395
| 11,565,787,089
|
IssuesEvent
|
2020-02-20 11:10:06
|
Open-EO/openeo-api
|
https://api.github.com/repos/Open-EO/openeo-api
|
opened
|
Make optional fields nullable in process-related endpoints?
|
data discovery processes
|
Originated in #260: Usually in responses like GET /jobs/{job_id} or GET /services/{service_id} most (all?) optional fields can be set to null, which makes implementation a bit easier. For the process graphs at /process_graphs none of the fields can be set to null and so they must actually be missing from the response if no data is available. That's more difficult to implement, but aligned with /processes. Should we introduce nullable for some/all fields in /process_graphs and/or /processes?
More considerations:
There are different schemas for pre-defined processes, user-defined processes etc. Each of them required a different set of properties. Of course, the required properties should not be nullable. This means the OpenAPI schemas would get quite messy when allowing null only for optional fields.
Maybe we need to look at the individual fields:
* id: Not sure, is usually required.
* summary: Not nullable? Could just respond with an empty string.
* description: Not nullable? Could just respond with an empty string.
* categories: Not nullable: it could simply be an empty array.
* parameters: Nullable. There's no default value one could use as an empty array means no parameter, which is different from not providing the data at all (unknown parameters).
* returns: Nullable? If specified, requires a schema. We could allow setting an empty array as "void" data type. An empty object is "any" data type.
* deprecated: Not nullable: it could simply be set to it's default value (false).
* experimental: Not nullable: it could simply be set to it's default value (false).
* exceptions: Not nullable: it could simply be an empty object.
* examples: Not nullable: it could simply be an empty array.
* links: Not nullable: it could simply be an empty array.
* process_graphs: Is usually required, except for pre-defined processes.
|
1.0
|
Make optional fields nullable in process-related endpoints? - Originated in #260: Usually in responses like GET /jobs/{job_id} or GET /services/{service_id} most (all?) optional fields can be set to null, which makes implementation a bit easier. For the process graphs at /process_graphs none of the fields can be set to null and so they must actually be missing from the response if no data is available. That's more difficult to implement, but aligned with /processes. Should we introduce nullable for some/all fields in /process_graphs and/or /processes?
More considerations:
There are different schemas for pre-defined processes, user-defined processes etc. Each of them required a different set of properties. Of course, the required properties should not be nullable. This means the OpenAPI schemas would get quite messy when allowing null only for optional fields.
Maybe we need to look at the individual fields:
* id: Not sure, is usually required.
* summary: Not nullable? Could just respond with an empty string.
* description: Not nullable? Could just respond with an empty string.
* categories: Not nullable: it could simply be an empty array.
* parameters: Nullable. There's no default value one could use as an empty array means no parameter, which is different from not providing the data at all (unknown parameters).
* returns: Nullable? If specified, requires a schema. We could allow setting an empty array as "void" data type. An empty object is "any" data type.
* deprecated: Not nullable: it could simply be set to it's default value (false).
* experimental: Not nullable: it could simply be set to it's default value (false).
* exceptions: Not nullable: it could simply be an empty object.
* examples: Not nullable: it could simply be an empty array.
* links: Not nullable: it could simply be an empty array.
* process_graphs: Is usually required, except for pre-defined processes.
|
process
|
make optional fields nullable in process related endpoints originated in usually in responses like get jobs job id or get services service id most all optional fields can be set to null which makes implementation a bit easier for the process graphs at process graphs none of the fields can be set to null and so they must actually be missing from the response if no data is available that s more difficult to implement but aligned with processes should we introduce nullable for some all fields in process graphs and or processes more considerations there are different schemas for pre defined processes user defined processes etc each of them required a different set of properties of course the required properties should not be nullable this means the openapi schemas would get quite messy when allowing null only for optional fields maybe we need to look at the individual fields id not sure is usually required summary not nullable could just respond with an empty string description not nullable could just respond with an empty string categories not nullable it could simply be an empty array parameters nullable there s no default value one could use as an empty array means no parameter which is different from not providing the data at all unknown parameters returns nullable if specified requires a schema we could allow setting an empty array as void data type an empty object is any data type deprecated not nullable it could simply be set to it s default value false experimental not nullable it could simply be set to it s default value false exceptions not nullable it could simply be an empty object examples not nullable it could simply be an empty array links not nullable it could simply be an empty array process graphs is usually required except for pre defined processes
| 1
|
17,424
| 23,246,317,429
|
IssuesEvent
|
2022-08-03 20:33:25
|
ORNL-AMO/AMO-Tools-Desktop
|
https://api.github.com/repos/ORNL-AMO/AMO-Tools-Desktop
|
closed
|
Custom Materials Floating point issues in display
|
bug Process Heating Application
|
Create solid/liquid and set any of the parameters to 1.8%. Results in 1.8000000000000003% when you preview it. Doesn't happen for any other.
|
1.0
|
Custom Materials Floating point issues in display - Create solid/liquid and set any of the parameters to 1.8%. Results in 1.8000000000000003% when you preview it. Doesn't happen for any other.
|
process
|
custom materials floating point issues in display create solid liquid and set any of the parameters to results in when you preview it doesn t happen for any other
| 1
|
661
| 3,130,891,986
|
IssuesEvent
|
2015-09-09 12:09:28
|
e-government-ua/i
|
https://api.github.com/repos/e-government-ua/i
|
closed
|
На бэке (wf-base) реализовать отсылку уведомления на почту, при создании новой услуги, и в теле отправлять ИД заявки, с добавленной последней цифрой контрольной суммы по алгоритму Луна
|
active bug hi priority In process of testing test
|
- Реализовать это прямо в классе в классе org.activiti.rest.interceptor.RequestProcessingInterceptor
в методе:
saveHistory(HttpServletRequest request, HttpServletResponse response, boolean saveHistory) throws IOException {
по событию boolean setTask
- Отсылать почту по образу и подобию того, как это реализовано в методе:
setDocumentLink(Long nID_Document, String sFIO, String sTarget, String sTelephone, Long nMS, String sMail) throws Exception{
Класса: DocumentAccessDaoImpl
Пэкэджа: org.wf.dp.dniprorada.dao
- Заголовок:
Вы подали заявку №23543253 на услугу, через портал igov.org.ua
- Текст письма:
Вы подали заявку №23543253 на услугу, через портал igov.org.ua
(Вы всегда сможете посмотреть ее текущий статус на портале в разделе "Статусы")
При поступлении Вашей заявки в систему госоргана - Вам будет дополнительно направлено персональное письмо - уведомление.
- Электронный адрес взять из получаемого объекта в реквесте (в логах можно глянуть пример объекта)
Вот пример залогированного пришедшего тела запроса, из которого можно взять электронку({"id":"email","value":"al.dubilet@gmail.com"}):
(пока затачиваемся на такой id)
2015-08-23_21:48:33.424 | INFO | org.activiti.rest.interceptor.RequestProcessingInterceptor- sRequestBody: {"processDefinitionId":"kiev_dms_1:49:2732672","businessKey":"key","properties":[{
"id":"bankIdlastName","value":"ДУБІЛЕТ"},{"id":"bankIdfirstName","value":"ДМИТРО"},{"id":"bankIdmiddleName","value":"ОЛЕКСАНДРОВИЧ"},{"id":"Dateofbirth","value":"1"},{"id":"Areabirth","valu
e":"2"},{"id":"bankIdPassport","value":"АМ765369 ЖОВТНЕВИМ РВ ДМУ УМВС УКРАЇНИ В ДНІПРОПЕТРОВСЬКІЙ ОБЛАСТІ 18.03.2002"},{"id":"bankId_scan_passport","value":null},{"id":"Nationality","value
":"3"},{"id":"kids","value":"no"},{"id":"text1","value":"Будь ласка, вкажіть дані Вашої дитини, якщо її вік перевищую 14 років"},{"id":"ChildName1","value":"456"},{"id":"kidsCitizenship","v
alue":"7"},{"id":"oldAddress","value":"8"},{"id":"newAddressLabel","value":"Заповніть деталі вашої нової адреси"},{"id":"RegistrationAddress","value":"9"},{"id":"newStreet","value":"10"},{"
id":"newHouse","value":"11"},{"id":"newCorp","value":"12"},{"id":"newApartment","value":"13"},{"id":"militaryDoc","value":null},{"id":"bringDoc","value":"other"},{"id":"bringDocOther","valu
e":"14"},{"id":"phone","value":"+380 67 503 8800"},{"id":"email","value":"al.dubilet@gmail.com"},{"id":"visitDay","value":"26/08/2015"},{"id":"visitTime","value":"15"},{"id":"warning","valu
e":"Подаючи звернення, Ви підтверджуєте достовірність усіх зазначених у зверненні даних і надаєте свою згоду на обробку Ваших персональних даних"},{"id":"sBody_1","value":null},{"id":"sBody
_2","value":null},{"id":"sBody_3","value":null},{"id":"sBody_4","value":null},{"id":"sBody_5","value":null},{"id":"sBody_6","value":null},{"id":"sBody_7","value":null}],"nID_Subject":20045}
2015-08-23_21:48:33.424 | INFO | org.activiti.rest.interceptor.RequestProcessingInterceptor- call service HistoryEvent_Service!!!!!!!!!!!
2015-08-23_21:48:33.460 | INFO | org.activiti.rest.interceptor.RequestProcessingInterceptor- https://test.igov.org.ua/wf-central/service/services/addHistoryEvent_Service: {sProcessInstanceN
ame=Київська ДМС - Реєстрація місця проживання/перебування особи!, nID_Proccess=2732706, nID_Subject=20045, sID_Status=Заявка подана}
|
1.0
|
На бэке (wf-base) реализовать отсылку уведомления на почту, при создании новой услуги, и в теле отправлять ИД заявки, с добавленной последней цифрой контрольной суммы по алгоритму Луна - - Реализовать это прямо в классе в классе org.activiti.rest.interceptor.RequestProcessingInterceptor
в методе:
saveHistory(HttpServletRequest request, HttpServletResponse response, boolean saveHistory) throws IOException {
по событию boolean setTask
- Отсылать почту по образу и подобию того, как это реализовано в методе:
setDocumentLink(Long nID_Document, String sFIO, String sTarget, String sTelephone, Long nMS, String sMail) throws Exception{
Класса: DocumentAccessDaoImpl
Пэкэджа: org.wf.dp.dniprorada.dao
- Заголовок:
Вы подали заявку №23543253 на услугу, через портал igov.org.ua
- Текст письма:
Вы подали заявку №23543253 на услугу, через портал igov.org.ua
(Вы всегда сможете посмотреть ее текущий статус на портале в разделе "Статусы")
При поступлении Вашей заявки в систему госоргана - Вам будет дополнительно направлено персональное письмо - уведомление.
- Электронный адрес взять из получаемого объекта в реквесте (в логах можно глянуть пример объекта)
Вот пример залогированного пришедшего тела запроса, из которого можно взять электронку({"id":"email","value":"al.dubilet@gmail.com"}):
(пока затачиваемся на такой id)
2015-08-23_21:48:33.424 | INFO | org.activiti.rest.interceptor.RequestProcessingInterceptor- sRequestBody: {"processDefinitionId":"kiev_dms_1:49:2732672","businessKey":"key","properties":[{
"id":"bankIdlastName","value":"ДУБІЛЕТ"},{"id":"bankIdfirstName","value":"ДМИТРО"},{"id":"bankIdmiddleName","value":"ОЛЕКСАНДРОВИЧ"},{"id":"Dateofbirth","value":"1"},{"id":"Areabirth","valu
e":"2"},{"id":"bankIdPassport","value":"АМ765369 ЖОВТНЕВИМ РВ ДМУ УМВС УКРАЇНИ В ДНІПРОПЕТРОВСЬКІЙ ОБЛАСТІ 18.03.2002"},{"id":"bankId_scan_passport","value":null},{"id":"Nationality","value
":"3"},{"id":"kids","value":"no"},{"id":"text1","value":"Будь ласка, вкажіть дані Вашої дитини, якщо її вік перевищую 14 років"},{"id":"ChildName1","value":"456"},{"id":"kidsCitizenship","v
alue":"7"},{"id":"oldAddress","value":"8"},{"id":"newAddressLabel","value":"Заповніть деталі вашої нової адреси"},{"id":"RegistrationAddress","value":"9"},{"id":"newStreet","value":"10"},{"
id":"newHouse","value":"11"},{"id":"newCorp","value":"12"},{"id":"newApartment","value":"13"},{"id":"militaryDoc","value":null},{"id":"bringDoc","value":"other"},{"id":"bringDocOther","valu
e":"14"},{"id":"phone","value":"+380 67 503 8800"},{"id":"email","value":"al.dubilet@gmail.com"},{"id":"visitDay","value":"26/08/2015"},{"id":"visitTime","value":"15"},{"id":"warning","valu
e":"Подаючи звернення, Ви підтверджуєте достовірність усіх зазначених у зверненні даних і надаєте свою згоду на обробку Ваших персональних даних"},{"id":"sBody_1","value":null},{"id":"sBody
_2","value":null},{"id":"sBody_3","value":null},{"id":"sBody_4","value":null},{"id":"sBody_5","value":null},{"id":"sBody_6","value":null},{"id":"sBody_7","value":null}],"nID_Subject":20045}
2015-08-23_21:48:33.424 | INFO | org.activiti.rest.interceptor.RequestProcessingInterceptor- call service HistoryEvent_Service!!!!!!!!!!!
2015-08-23_21:48:33.460 | INFO | org.activiti.rest.interceptor.RequestProcessingInterceptor- https://test.igov.org.ua/wf-central/service/services/addHistoryEvent_Service: {sProcessInstanceN
ame=Київська ДМС - Реєстрація місця проживання/перебування особи!, nID_Proccess=2732706, nID_Subject=20045, sID_Status=Заявка подана}
|
process
|
на бэке wf base реализовать отсылку уведомления на почту при создании новой услуги и в теле отправлять ид заявки с добавленной последней цифрой контрольной суммы по алгоритму луна реализовать это прямо в классе в классе org activiti rest interceptor requestprocessinginterceptor в методе savehistory httpservletrequest request httpservletresponse response boolean savehistory throws ioexception по событию boolean settask отсылать почту по образу и подобию того как это реализовано в методе setdocumentlink long nid document string sfio string starget string stelephone long nms string smail throws exception класса documentaccessdaoimpl пэкэджа org wf dp dniprorada dao заголовок вы подали заявку № на услугу через портал igov org ua текст письма вы подали заявку № на услугу через портал igov org ua вы всегда сможете посмотреть ее текущий статус на портале в разделе статусы при поступлении вашей заявки в систему госоргана вам будет дополнительно направлено персональное письмо уведомление электронный адрес взять из получаемого объекта в реквесте в логах можно глянуть пример объекта вот пример залогированного пришедшего тела запроса из которого можно взять электронку id email value al dubilet gmail com пока затачиваемся на такой id info org activiti rest interceptor requestprocessinginterceptor srequestbody processdefinitionid kiev dms businesskey key properties id bankidlastname value дубілет id bankidfirstname value дмитро id bankidmiddlename value олександрович id dateofbirth value id areabirth valu e id bankidpassport value жовтневим рв дму умвс україни в дніпропетровській області id bankid scan passport value null id nationality value id kids value no id value будь ласка вкажіть дані вашої дитини якщо її вік перевищую років id value id kidscitizenship v alue id oldaddress value id newaddresslabel value заповніть деталі вашої нової адреси id registrationaddress value id newstreet value id newhouse value id newcorp value id newapartment value id militarydoc value null id bringdoc value other id bringdocother valu e id phone value id email value al dubilet gmail com id visitday value id visittime value id warning valu e подаючи звернення ви підтверджуєте достовірність усіх зазначених у зверненні даних і надаєте свою згоду на обробку ваших персональних даних id sbody value null id sbody value null id sbody value null id sbody value null id sbody value null id sbody value null id sbody value null nid subject info org activiti rest interceptor requestprocessinginterceptor call service historyevent service info org activiti rest interceptor requestprocessinginterceptor sprocessinstancen ame київська дмс реєстрація місця проживання перебування особи nid proccess nid subject sid status заявка подана
| 1
|
15,691
| 19,848,061,269
|
IssuesEvent
|
2022-01-21 09:10:18
|
ooi-data/CE02SHSM-RID27-03-CTDBPC000-telemetered-ctdbp_cdef_dcl_instrument
|
https://api.github.com/repos/ooi-data/CE02SHSM-RID27-03-CTDBPC000-telemetered-ctdbp_cdef_dcl_instrument
|
opened
|
🛑 Processing failed: ValueError
|
process
|
## Overview
`ValueError` found in `processing_task` task during run ended on 2022-01-21T09:10:18.195784.
## Details
Flow name: `CE02SHSM-RID27-03-CTDBPC000-telemetered-ctdbp_cdef_dcl_instrument`
Task name: `processing_task`
Error type: `ValueError`
Error message: not enough values to unpack (expected 3, got 0)
<details>
<summary>Traceback</summary>
```
Traceback (most recent call last):
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/ooi_harvester/processor/pipeline.py", line 165, in processing
final_path = finalize_data_stream(
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/ooi_harvester/processor/__init__.py", line 84, in finalize_data_stream
append_to_zarr(mod_ds, final_store, enc, logger=logger)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/ooi_harvester/processor/__init__.py", line 357, in append_to_zarr
_append_zarr(store, mod_ds)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/ooi_harvester/processor/utils.py", line 187, in _append_zarr
existing_arr.append(var_data.values)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/core/variable.py", line 519, in values
return _as_array_or_item(self._data)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/core/variable.py", line 259, in _as_array_or_item
data = np.asarray(data)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/dask/array/core.py", line 1541, in __array__
x = self.compute()
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/dask/base.py", line 288, in compute
(result,) = compute(self, traverse=False, **kwargs)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/dask/base.py", line 571, in compute
results = schedule(dsk, keys, **kwargs)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/dask/threaded.py", line 79, in get
results = get_async(
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/dask/local.py", line 507, in get_async
raise_exception(exc, tb)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/dask/local.py", line 315, in reraise
raise exc
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/dask/local.py", line 220, in execute_task
result = _execute_task(task, data)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/dask/core.py", line 119, in _execute_task
return func(*(_execute_task(a, cache) for a in args))
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/dask/array/core.py", line 116, in getter
c = np.asarray(c)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/core/indexing.py", line 357, in __array__
return np.asarray(self.array, dtype=dtype)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/core/indexing.py", line 551, in __array__
self._ensure_cached()
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/core/indexing.py", line 548, in _ensure_cached
self.array = NumpyIndexingAdapter(np.asarray(self.array))
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/core/indexing.py", line 521, in __array__
return np.asarray(self.array, dtype=dtype)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/core/indexing.py", line 422, in __array__
return np.asarray(array[self.key], dtype=None)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/coding/variables.py", line 70, in __array__
return self.func(self.array)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/coding/variables.py", line 137, in _apply_mask
data = np.asarray(data, dtype=dtype)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/core/indexing.py", line 422, in __array__
return np.asarray(array[self.key], dtype=None)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/backends/zarr.py", line 73, in __getitem__
return array[key.tuple]
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/zarr/core.py", line 673, in __getitem__
return self.get_basic_selection(selection, fields=fields)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/zarr/core.py", line 798, in get_basic_selection
return self._get_basic_selection_nd(selection=selection, out=out,
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/zarr/core.py", line 841, in _get_basic_selection_nd
return self._get_selection(indexer=indexer, out=out, fields=fields)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/zarr/core.py", line 1135, in _get_selection
lchunk_coords, lchunk_selection, lout_selection = zip(*indexer)
ValueError: not enough values to unpack (expected 3, got 0)
```
</details>
|
1.0
|
🛑 Processing failed: ValueError - ## Overview
`ValueError` found in `processing_task` task during run ended on 2022-01-21T09:10:18.195784.
## Details
Flow name: `CE02SHSM-RID27-03-CTDBPC000-telemetered-ctdbp_cdef_dcl_instrument`
Task name: `processing_task`
Error type: `ValueError`
Error message: not enough values to unpack (expected 3, got 0)
<details>
<summary>Traceback</summary>
```
Traceback (most recent call last):
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/ooi_harvester/processor/pipeline.py", line 165, in processing
final_path = finalize_data_stream(
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/ooi_harvester/processor/__init__.py", line 84, in finalize_data_stream
append_to_zarr(mod_ds, final_store, enc, logger=logger)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/ooi_harvester/processor/__init__.py", line 357, in append_to_zarr
_append_zarr(store, mod_ds)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/ooi_harvester/processor/utils.py", line 187, in _append_zarr
existing_arr.append(var_data.values)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/core/variable.py", line 519, in values
return _as_array_or_item(self._data)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/core/variable.py", line 259, in _as_array_or_item
data = np.asarray(data)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/dask/array/core.py", line 1541, in __array__
x = self.compute()
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/dask/base.py", line 288, in compute
(result,) = compute(self, traverse=False, **kwargs)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/dask/base.py", line 571, in compute
results = schedule(dsk, keys, **kwargs)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/dask/threaded.py", line 79, in get
results = get_async(
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/dask/local.py", line 507, in get_async
raise_exception(exc, tb)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/dask/local.py", line 315, in reraise
raise exc
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/dask/local.py", line 220, in execute_task
result = _execute_task(task, data)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/dask/core.py", line 119, in _execute_task
return func(*(_execute_task(a, cache) for a in args))
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/dask/array/core.py", line 116, in getter
c = np.asarray(c)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/core/indexing.py", line 357, in __array__
return np.asarray(self.array, dtype=dtype)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/core/indexing.py", line 551, in __array__
self._ensure_cached()
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/core/indexing.py", line 548, in _ensure_cached
self.array = NumpyIndexingAdapter(np.asarray(self.array))
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/core/indexing.py", line 521, in __array__
return np.asarray(self.array, dtype=dtype)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/core/indexing.py", line 422, in __array__
return np.asarray(array[self.key], dtype=None)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/coding/variables.py", line 70, in __array__
return self.func(self.array)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/coding/variables.py", line 137, in _apply_mask
data = np.asarray(data, dtype=dtype)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/core/indexing.py", line 422, in __array__
return np.asarray(array[self.key], dtype=None)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/backends/zarr.py", line 73, in __getitem__
return array[key.tuple]
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/zarr/core.py", line 673, in __getitem__
return self.get_basic_selection(selection, fields=fields)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/zarr/core.py", line 798, in get_basic_selection
return self._get_basic_selection_nd(selection=selection, out=out,
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/zarr/core.py", line 841, in _get_basic_selection_nd
return self._get_selection(indexer=indexer, out=out, fields=fields)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/zarr/core.py", line 1135, in _get_selection
lchunk_coords, lchunk_selection, lout_selection = zip(*indexer)
ValueError: not enough values to unpack (expected 3, got 0)
```
</details>
|
process
|
🛑 processing failed valueerror overview valueerror found in processing task task during run ended on details flow name telemetered ctdbp cdef dcl instrument task name processing task error type valueerror error message not enough values to unpack expected got traceback traceback most recent call last file srv conda envs notebook lib site packages ooi harvester processor pipeline py line in processing final path finalize data stream file srv conda envs notebook lib site packages ooi harvester processor init py line in finalize data stream append to zarr mod ds final store enc logger logger file srv conda envs notebook lib site packages ooi harvester processor init py line in append to zarr append zarr store mod ds file srv conda envs notebook lib site packages ooi harvester processor utils py line in append zarr existing arr append var data values file srv conda envs notebook lib site packages xarray core variable py line in values return as array or item self data file srv conda envs notebook lib site packages xarray core variable py line in as array or item data np asarray data file srv conda envs notebook lib site packages dask array core py line in array x self compute file srv conda envs notebook lib site packages dask base py line in compute result compute self traverse false kwargs file srv conda envs notebook lib site packages dask base py line in compute results schedule dsk keys kwargs file srv conda envs notebook lib site packages dask threaded py line in get results get async file srv conda envs notebook lib site packages dask local py line in get async raise exception exc tb file srv conda envs notebook lib site packages dask local py line in reraise raise exc file srv conda envs notebook lib site packages dask local py line in execute task result execute task task data file srv conda envs notebook lib site packages dask core py line in execute task return func execute task a cache for a in args file srv conda envs notebook lib site packages dask array core py line in getter c np asarray c file srv conda envs notebook lib site packages xarray core indexing py line in array return np asarray self array dtype dtype file srv conda envs notebook lib site packages xarray core indexing py line in array self ensure cached file srv conda envs notebook lib site packages xarray core indexing py line in ensure cached self array numpyindexingadapter np asarray self array file srv conda envs notebook lib site packages xarray core indexing py line in array return np asarray self array dtype dtype file srv conda envs notebook lib site packages xarray core indexing py line in array return np asarray array dtype none file srv conda envs notebook lib site packages xarray coding variables py line in array return self func self array file srv conda envs notebook lib site packages xarray coding variables py line in apply mask data np asarray data dtype dtype file srv conda envs notebook lib site packages xarray core indexing py line in array return np asarray array dtype none file srv conda envs notebook lib site packages xarray backends zarr py line in getitem return array file srv conda envs notebook lib site packages zarr core py line in getitem return self get basic selection selection fields fields file srv conda envs notebook lib site packages zarr core py line in get basic selection return self get basic selection nd selection selection out out file srv conda envs notebook lib site packages zarr core py line in get basic selection nd return self get selection indexer indexer out out fields fields file srv conda envs notebook lib site packages zarr core py line in get selection lchunk coords lchunk selection lout selection zip indexer valueerror not enough values to unpack expected got
| 1
|
246,275
| 7,894,149,651
|
IssuesEvent
|
2018-06-28 20:28:34
|
aowen87/BAR
|
https://api.github.com/repos/aowen87/BAR
|
closed
|
OSX Mavericks Build Issues
|
Likelihood: 3 - Occasional OS: All Priority: Normal Severity: 3 - Major Irritation Support Group: Any bug version: trunk
|
*Qt Build Failures:*
Downloading qt-*-src-4.8.6, editing by_qt.sh with the new version, and adding –cc clang —cxx clang++ fixed it
*VTK Build Failures:*
Removing the garbage collection flag (-fobjc-gc) from VTK’s CMakeLists.txt and then re-tarring it did the trick
*CCMIO Build Failures:*
The new version of OSX was not recognized so it reverted to ‘unknown’. Adding an unknown directory to libccmio-2.6.1/config and copying the qmake stuff from libccmio-2.6.1/config /i386-apple-darwin8 and re-tarring fixed it
*NetCDF Build Failures:*
Punted on this one - Removed the switch from build_visit
*Pyside->shiboken-1.1.1 Build Failures: (fatal error: 'tr1/functional' file not found)*
- The tr1 namespace is no longer supported with the new compiler (functionality is in std) so I had to remove it from: pyside-combined-1.1.1/shiboken-1.1.1/ext/sparsehash/google/sparsehash/sparseconfig.h
-----------------------REDMINE MIGRATION-----------------------
This ticket was migrated from Redmine. The following information
could not be accurately captured in the new ticket:
Original author: Kevin Griffin
Original creation: 06/12/2014 12:32 pm
Original update: 12/10/2014 02:25 pm
Ticket number: 1871
|
1.0
|
OSX Mavericks Build Issues - *Qt Build Failures:*
Downloading qt-*-src-4.8.6, editing by_qt.sh with the new version, and adding –cc clang —cxx clang++ fixed it
*VTK Build Failures:*
Removing the garbage collection flag (-fobjc-gc) from VTK’s CMakeLists.txt and then re-tarring it did the trick
*CCMIO Build Failures:*
The new version of OSX was not recognized so it reverted to ‘unknown’. Adding an unknown directory to libccmio-2.6.1/config and copying the qmake stuff from libccmio-2.6.1/config /i386-apple-darwin8 and re-tarring fixed it
*NetCDF Build Failures:*
Punted on this one - Removed the switch from build_visit
*Pyside->shiboken-1.1.1 Build Failures: (fatal error: 'tr1/functional' file not found)*
- The tr1 namespace is no longer supported with the new compiler (functionality is in std) so I had to remove it from: pyside-combined-1.1.1/shiboken-1.1.1/ext/sparsehash/google/sparsehash/sparseconfig.h
-----------------------REDMINE MIGRATION-----------------------
This ticket was migrated from Redmine. The following information
could not be accurately captured in the new ticket:
Original author: Kevin Griffin
Original creation: 06/12/2014 12:32 pm
Original update: 12/10/2014 02:25 pm
Ticket number: 1871
|
non_process
|
osx mavericks build issues qt build failures downloading qt src editing by qt sh with the new version and adding –cc clang —cxx clang fixed it vtk build failures removing the garbage collection flag fobjc gc from vtk’s cmakelists txt and then re tarring it did the trick ccmio build failures the new version of osx was not recognized so it reverted to ‘unknown’ adding an unknown directory to libccmio config and copying the qmake stuff from libccmio config apple and re tarring fixed it netcdf build failures punted on this one removed the switch from build visit pyside shiboken build failures fatal error functional file not found the namespace is no longer supported with the new compiler functionality is in std so i had to remove it from pyside combined shiboken ext sparsehash google sparsehash sparseconfig h redmine migration this ticket was migrated from redmine the following information could not be accurately captured in the new ticket original author kevin griffin original creation pm original update pm ticket number
| 0
|
16,558
| 21,571,678,742
|
IssuesEvent
|
2022-05-02 08:59:46
|
bitPogo/kmock
|
https://api.github.com/repos/bitPogo/kmock
|
opened
|
Relaxation fails for Generics
|
bug kmock-processor
|
## Description
<!--- Provide a detailed introduction to the issue itself, and why you consider it to be a bug -->
Currently Relaxation for Interfaces with generics causes a Compiler Error.
|
1.0
|
Relaxation fails for Generics - ## Description
<!--- Provide a detailed introduction to the issue itself, and why you consider it to be a bug -->
Currently Relaxation for Interfaces with generics causes a Compiler Error.
|
process
|
relaxation fails for generics description currently relaxation for interfaces with generics causes a compiler error
| 1
|
55,257
| 7,968,500,528
|
IssuesEvent
|
2018-07-16 03:31:24
|
fluidtrends/carmel
|
https://api.github.com/repos/fluidtrends/carmel
|
closed
|
First challenge
|
BOUNTY Documentation Done ⭐️10 VP
|
In order to run the following line `npm i -g chunky-cli` on a Windows machine, you should have git installed on your machine. Download the package from https://git-scm.com/download/win and reboot the computer after installing it.
|
1.0
|
First challenge - In order to run the following line `npm i -g chunky-cli` on a Windows machine, you should have git installed on your machine. Download the package from https://git-scm.com/download/win and reboot the computer after installing it.
|
non_process
|
first challenge in order to run the following line npm i g chunky cli on a windows machine you should have git installed on your machine download the package from and reboot the computer after installing it
| 0
|
48,218
| 13,308,896,279
|
IssuesEvent
|
2020-08-26 02:22:44
|
ekirmayer/devopsloft
|
https://api.github.com/repos/ekirmayer/devopsloft
|
opened
|
CVE-2019-10906 (High) detected in Jinja2-2.10-py2.py3-none-any.whl
|
security vulnerability
|
## CVE-2019-10906 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>Jinja2-2.10-py2.py3-none-any.whl</b></p></summary>
<p>A very fast and expressive template engine.</p>
<p>Library home page: <a href="https://files.pythonhosted.org/packages/7f/ff/ae64bacdfc95f27a016a7bed8e8686763ba4d277a78ca76f32659220a731/Jinja2-2.10-py2.py3-none-any.whl">https://files.pythonhosted.org/packages/7f/ff/ae64bacdfc95f27a016a7bed8e8686763ba4d277a78ca76f32659220a731/Jinja2-2.10-py2.py3-none-any.whl</a></p>
<p>Path to dependency file: /tmp/ws-scm/devopsloft/requirements.txt</p>
<p>Path to vulnerable library: /tmp/ws-scm/devopsloft/requirements.txt,/devopsloft/requirements.txt</p>
<p>
Dependency Hierarchy:
- :x: **Jinja2-2.10-py2.py3-none-any.whl** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/ekirmayer/devopsloft/commit/591ecbad4a1be23dfdaf157d87f29be71dc33b60">591ecbad4a1be23dfdaf157d87f29be71dc33b60</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
In Pallets Jinja before 2.10.1, str.format_map allows a sandbox escape.
<p>Publish Date: 2019-04-07
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-10906>CVE-2019-10906</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>8.6</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Changed
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: None
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-10906">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-10906</a></p>
<p>Release Date: 2019-04-07</p>
<p>Fix Resolution: 2.10.1</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
CVE-2019-10906 (High) detected in Jinja2-2.10-py2.py3-none-any.whl - ## CVE-2019-10906 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>Jinja2-2.10-py2.py3-none-any.whl</b></p></summary>
<p>A very fast and expressive template engine.</p>
<p>Library home page: <a href="https://files.pythonhosted.org/packages/7f/ff/ae64bacdfc95f27a016a7bed8e8686763ba4d277a78ca76f32659220a731/Jinja2-2.10-py2.py3-none-any.whl">https://files.pythonhosted.org/packages/7f/ff/ae64bacdfc95f27a016a7bed8e8686763ba4d277a78ca76f32659220a731/Jinja2-2.10-py2.py3-none-any.whl</a></p>
<p>Path to dependency file: /tmp/ws-scm/devopsloft/requirements.txt</p>
<p>Path to vulnerable library: /tmp/ws-scm/devopsloft/requirements.txt,/devopsloft/requirements.txt</p>
<p>
Dependency Hierarchy:
- :x: **Jinja2-2.10-py2.py3-none-any.whl** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/ekirmayer/devopsloft/commit/591ecbad4a1be23dfdaf157d87f29be71dc33b60">591ecbad4a1be23dfdaf157d87f29be71dc33b60</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
In Pallets Jinja before 2.10.1, str.format_map allows a sandbox escape.
<p>Publish Date: 2019-04-07
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-10906>CVE-2019-10906</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>8.6</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Changed
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: None
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-10906">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-10906</a></p>
<p>Release Date: 2019-04-07</p>
<p>Fix Resolution: 2.10.1</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_process
|
cve high detected in none any whl cve high severity vulnerability vulnerable library none any whl a very fast and expressive template engine library home page a href path to dependency file tmp ws scm devopsloft requirements txt path to vulnerable library tmp ws scm devopsloft requirements txt devopsloft requirements txt dependency hierarchy x none any whl vulnerable library found in head commit a href vulnerability details in pallets jinja before str format map allows a sandbox escape publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope changed impact metrics confidentiality impact high integrity impact none availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with whitesource
| 0
|
793
| 3,274,564,345
|
IssuesEvent
|
2015-10-26 11:37:49
|
dita-ot/dita-ot
|
https://api.github.com/repos/dita-ot/dita-ot
|
closed
|
colspec added by default to tables in OT2.1 causes non optimal presentation
|
enhancement P2 preprocess
|
In preprocessing (NormalizeFilter.java ?) colspecs are added for columns in tables that have no colspec specified by the user.
For users of XEP and AntennaHouse the best results are achieved by letting the FO processor decide on the width of the columns based on the content of those columns.
Where an author does require a specific column width ratio they will specify colspecs in the usual way.
Now, after preprocessing I cannot tell if the specified colowidths are deliberately added by the user (and set to be all the same width) or added by the preprocessor.
Please can we have an Ant param that controls if the colwidth is added automatically or not by the pre processor (or at least some additional indication that the colwidths that exist were not specified by the author)
example below
```xml
<table frame="all" id="table_3vm_b3f_lf" outputclass="tablewidthcolumn dochistory">
<tgroup cols="4">
<thead>.....
```
becomes
```xml
<table frame="all" id="table_3vm_b3f_lf" outputclass="tablewidthcolumn dochistory" class="- topic/table ">
<tgroup cols="4" class="- topic/tgroup ">
<colspec class="- topic/colspec " colnum="1" colname="col1" colwidth="1*"/>
<colspec class="- topic/colspec " colnum="2" colname="col2" colwidth="1*"/>
<colspec class="- topic/colspec " colnum="3" colname="col3" colwidth="1*"/>
<colspec class="- topic/colspec " colnum="4" colname="col4" colwidth="1*"/>
```
|
1.0
|
colspec added by default to tables in OT2.1 causes non optimal presentation - In preprocessing (NormalizeFilter.java ?) colspecs are added for columns in tables that have no colspec specified by the user.
For users of XEP and AntennaHouse the best results are achieved by letting the FO processor decide on the width of the columns based on the content of those columns.
Where an author does require a specific column width ratio they will specify colspecs in the usual way.
Now, after preprocessing I cannot tell if the specified colowidths are deliberately added by the user (and set to be all the same width) or added by the preprocessor.
Please can we have an Ant param that controls if the colwidth is added automatically or not by the pre processor (or at least some additional indication that the colwidths that exist were not specified by the author)
example below
```xml
<table frame="all" id="table_3vm_b3f_lf" outputclass="tablewidthcolumn dochistory">
<tgroup cols="4">
<thead>.....
```
becomes
```xml
<table frame="all" id="table_3vm_b3f_lf" outputclass="tablewidthcolumn dochistory" class="- topic/table ">
<tgroup cols="4" class="- topic/tgroup ">
<colspec class="- topic/colspec " colnum="1" colname="col1" colwidth="1*"/>
<colspec class="- topic/colspec " colnum="2" colname="col2" colwidth="1*"/>
<colspec class="- topic/colspec " colnum="3" colname="col3" colwidth="1*"/>
<colspec class="- topic/colspec " colnum="4" colname="col4" colwidth="1*"/>
```
|
process
|
colspec added by default to tables in causes non optimal presentation in preprocessing normalizefilter java colspecs are added for columns in tables that have no colspec specified by the user for users of xep and antennahouse the best results are achieved by letting the fo processor decide on the width of the columns based on the content of those columns where an author does require a specific column width ratio they will specify colspecs in the usual way now after preprocessing i cannot tell if the specified colowidths are deliberately added by the user and set to be all the same width or added by the preprocessor please can we have an ant param that controls if the colwidth is added automatically or not by the pre processor or at least some additional indication that the colwidths that exist were not specified by the author example below xml becomes xml
| 1
|
285,494
| 31,154,698,654
|
IssuesEvent
|
2023-08-16 12:25:33
|
Trinadh465/linux-4.1.15_CVE-2018-5873
|
https://api.github.com/repos/Trinadh465/linux-4.1.15_CVE-2018-5873
|
opened
|
CVE-2018-10940 (Medium) detected in linuxlinux-4.1.52
|
Mend: dependency security vulnerability
|
## CVE-2018-10940 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linuxlinux-4.1.52</b></p></summary>
<p>
<p>The Linux Kernel</p>
<p>Library home page: <a href=https://mirrors.edge.kernel.org/pub/linux/kernel/v4.x/?wsslib=linux>https://mirrors.edge.kernel.org/pub/linux/kernel/v4.x/?wsslib=linux</a></p>
<p>Found in HEAD commit: <a href="https://github.com/Trinadh465/linux-4.1.15_CVE-2018-5873/commit/32145daf0c96b012284199f23418243e0168269f">32145daf0c96b012284199f23418243e0168269f</a></p>
<p>Found in base branch: <b>main</b></p></p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (2)</summary>
<p></p>
<p>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/drivers/cdrom/cdrom.c</b>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/drivers/cdrom/cdrom.c</b>
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png?' width=19 height=20> Vulnerability Details</summary>
<p>
The cdrom_ioctl_media_changed function in drivers/cdrom/cdrom.c in the Linux kernel before 4.16.6 allows local attackers to use a incorrect bounds check in the CDROM driver CDROM_MEDIA_CHANGED ioctl to read out kernel memory.
<p>Publish Date: 2018-05-09
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2018-10940>CVE-2018-10940</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://nvd.nist.gov/vuln/detail/CVE-2018-10940">https://nvd.nist.gov/vuln/detail/CVE-2018-10940</a></p>
<p>Release Date: 2018-05-09</p>
<p>Fix Resolution: 4.16.6</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
CVE-2018-10940 (Medium) detected in linuxlinux-4.1.52 - ## CVE-2018-10940 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linuxlinux-4.1.52</b></p></summary>
<p>
<p>The Linux Kernel</p>
<p>Library home page: <a href=https://mirrors.edge.kernel.org/pub/linux/kernel/v4.x/?wsslib=linux>https://mirrors.edge.kernel.org/pub/linux/kernel/v4.x/?wsslib=linux</a></p>
<p>Found in HEAD commit: <a href="https://github.com/Trinadh465/linux-4.1.15_CVE-2018-5873/commit/32145daf0c96b012284199f23418243e0168269f">32145daf0c96b012284199f23418243e0168269f</a></p>
<p>Found in base branch: <b>main</b></p></p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (2)</summary>
<p></p>
<p>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/drivers/cdrom/cdrom.c</b>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/drivers/cdrom/cdrom.c</b>
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png?' width=19 height=20> Vulnerability Details</summary>
<p>
The cdrom_ioctl_media_changed function in drivers/cdrom/cdrom.c in the Linux kernel before 4.16.6 allows local attackers to use a incorrect bounds check in the CDROM driver CDROM_MEDIA_CHANGED ioctl to read out kernel memory.
<p>Publish Date: 2018-05-09
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2018-10940>CVE-2018-10940</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://nvd.nist.gov/vuln/detail/CVE-2018-10940">https://nvd.nist.gov/vuln/detail/CVE-2018-10940</a></p>
<p>Release Date: 2018-05-09</p>
<p>Fix Resolution: 4.16.6</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_process
|
cve medium detected in linuxlinux cve medium severity vulnerability vulnerable library linuxlinux the linux kernel library home page a href found in head commit a href found in base branch main vulnerable source files drivers cdrom cdrom c drivers cdrom cdrom c vulnerability details the cdrom ioctl media changed function in drivers cdrom cdrom c in the linux kernel before allows local attackers to use a incorrect bounds check in the cdrom driver cdrom media changed ioctl to read out kernel memory publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity low privileges required low user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with mend
| 0
|
15,460
| 19,675,670,631
|
IssuesEvent
|
2022-01-11 12:05:38
|
bazelbuild/bazel
|
https://api.github.com/repos/bazelbuild/bazel
|
closed
|
`json` module has no `decode` member in bzlmod module implementation
|
under investigation type: support / not a bug (process) team-ExternalDeps
|
### Description of the problem / feature request:
It is impossible to parse json in a bzlmod module implementation since the [json](https://docs.bazel.build/versions/main/skylark/lib/json.html) module doesn't contain a `decode` method
### Bugs: what's the simplest, easiest way to reproduce this bug? Please provide a minimal example if possible.
```starlark
def _my_module_impl(mctx):
print("Attributes: %s" % dir(json))
my_module = module_extension(
_my_module_impl,
)
```
The output I see from Bazel 5rc3 is:
`Attributes: ["to_json", "to_proto", "write_artifact_spec", "write_exclusion_spec", "write_override_license_types_spec", "write_repository_credentials_spec", "write_repository_spec"]`
### Have you found anything relevant by searching the web?
No
|
1.0
|
`json` module has no `decode` member in bzlmod module implementation - ### Description of the problem / feature request:
It is impossible to parse json in a bzlmod module implementation since the [json](https://docs.bazel.build/versions/main/skylark/lib/json.html) module doesn't contain a `decode` method
### Bugs: what's the simplest, easiest way to reproduce this bug? Please provide a minimal example if possible.
```starlark
def _my_module_impl(mctx):
print("Attributes: %s" % dir(json))
my_module = module_extension(
_my_module_impl,
)
```
The output I see from Bazel 5rc3 is:
`Attributes: ["to_json", "to_proto", "write_artifact_spec", "write_exclusion_spec", "write_override_license_types_spec", "write_repository_credentials_spec", "write_repository_spec"]`
### Have you found anything relevant by searching the web?
No
|
process
|
json module has no decode member in bzlmod module implementation description of the problem feature request it is impossible to parse json in a bzlmod module implementation since the module doesn t contain a decode method bugs what s the simplest easiest way to reproduce this bug please provide a minimal example if possible starlark def my module impl mctx print attributes s dir json my module module extension my module impl the output i see from bazel is attributes have you found anything relevant by searching the web no
| 1
|
22,542
| 31,717,302,641
|
IssuesEvent
|
2023-09-10 02:00:07
|
lizhihao6/get-daily-arxiv-noti
|
https://api.github.com/repos/lizhihao6/get-daily-arxiv-noti
|
opened
|
New submissions for Fri, 8 Sep 23
|
event camera white balance isp compression image signal processing image signal process raw raw image events camera color contrast events AWB
|
## Keyword: events
There is no result
## Keyword: event camera
There is no result
## Keyword: events camera
There is no result
## Keyword: white balance
There is no result
## Keyword: color contrast
There is no result
## Keyword: AWB
There is no result
## Keyword: ISP
### Distribution-Aware Prompt Tuning for Vision-Language Models
- **Authors:** Eulrang Cho, Jooyeon Kim, Hyunwoo J. Kim
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2309.03406
- **Pdf link:** https://arxiv.org/pdf/2309.03406
- **Abstract**
Pre-trained vision-language models (VLMs) have shown impressive performance on various downstream tasks by utilizing knowledge learned from large data. In general, the performance of VLMs on target tasks can be further improved by prompt tuning, which adds context to the input image or text. By leveraging data from target tasks, various prompt-tuning methods have been studied in the literature. A key to prompt tuning is the feature space alignment between two modalities via learnable vectors with model parameters fixed. We observed that the alignment becomes more effective when embeddings of each modality are `well-arranged' in the latent space. Inspired by this observation, we proposed distribution-aware prompt tuning (DAPT) for vision-language models, which is simple yet effective. Specifically, the prompts are learned by maximizing inter-dispersion, the distance between classes, as well as minimizing the intra-dispersion measured by the distance between embeddings from the same class. Our extensive experiments on 11 benchmark datasets demonstrate that our method significantly improves generalizability. The code is available at https://github.com/mlvlab/DAPT.
### Trash to Treasure: Low-Light Object Detection via Decomposition-and-Aggregation
- **Authors:** Xiaohan Cui, Long Ma, Tengyu Ma, Jinyuan Liu, Xin Fan, Risheng Liu
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2309.03548
- **Pdf link:** https://arxiv.org/pdf/2309.03548
- **Abstract**
Object detection in low-light scenarios has attracted much attention in the past few years. A mainstream and representative scheme introduces enhancers as the pre-processing for regular detectors. However, because of the disparity in task objectives between the enhancer and detector, this paradigm cannot shine at its best ability. In this work, we try to arouse the potential of enhancer + detector. Different from existing works, we extend the illumination-based enhancers (our newly designed or existing) as a scene decomposition module, whose removed illumination is exploited as the auxiliary in the detector for extracting detection-friendly features. A semantic aggregation module is further established for integrating multi-scale scene-related semantic information in the context space. Actually, our built scheme successfully transforms the "trash" (i.e., the ignored illumination in the detector) into the "treasure" for the detector. Plenty of experiments are conducted to reveal our superiority against other state-of-the-art methods. The code will be public if it is accepted.
## Keyword: image signal processing
There is no result
## Keyword: image signal process
There is no result
## Keyword: compression
### Dynamic Frame Interpolation in Wavelet Domain
- **Authors:** Lingtong Kong, Boyuan Jiang, Donghao Luo, Wenqing Chu, Ying Tai, Chengjie Wang, Jie Yang
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2309.03508
- **Pdf link:** https://arxiv.org/pdf/2309.03508
- **Abstract**
Video frame interpolation is an important low-level vision task, which can increase frame rate for more fluent visual experience. Existing methods have achieved great success by employing advanced motion models and synthesis networks. However, the spatial redundancy when synthesizing the target frame has not been fully explored, that can result in lots of inefficient computation. On the other hand, the computation compression degree in frame interpolation is highly dependent on both texture distribution and scene motion, which demands to understand the spatial-temporal information of each input frame pair for a better compression degree selection. In this work, we propose a novel two-stage frame interpolation framework termed WaveletVFI to address above problems. It first estimates intermediate optical flow with a lightweight motion perception network, and then a wavelet synthesis network uses flow aligned context features to predict multi-scale wavelet coefficients with sparse convolution for efficient target frame reconstruction, where the sparse valid masks that control computation in each scale are determined by a crucial threshold ratio. Instead of setting a fixed value like previous methods, we find that embedding a classifier in the motion perception network to learn a dynamic threshold for each sample can achieve more computation reduction with almost no loss of accuracy. On the common high resolution and animation frame interpolation benchmarks, proposed WaveletVFI can reduce computation up to 40% while maintaining similar accuracy, making it perform more efficiently against other state-of-the-arts. Code is available at https://github.com/ltkong218/WaveletVFI.
## Keyword: RAW
### MEGANet: Multi-Scale Edge-Guided Attention Network for Weak Boundary Polyp Segmentation
- **Authors:** Nhat-Tan Bui, Dinh-Hieu Hoang, Quang-Thuc Nguyen, Minh-Triet Tran, Ngan Le
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2309.03329
- **Pdf link:** https://arxiv.org/pdf/2309.03329
- **Abstract**
Efficient polyp segmentation in healthcare plays a critical role in enabling early diagnosis of colorectal cancer. However, the segmentation of polyps presents numerous challenges, including the intricate distribution of backgrounds, variations in polyp sizes and shapes, and indistinct boundaries. Defining the boundary between the foreground (i.e. polyp itself) and the background (surrounding tissue) is difficult. To mitigate these challenges, we propose Multi-Scale Edge-Guided Attention Network (MEGANet) tailored specifically for polyp segmentation within colonoscopy images. This network draws inspiration from the fusion of a classical edge detection technique with an attention mechanism. By combining these techniques, MEGANet effectively preserves high-frequency information, notably edges and boundaries, which tend to erode as neural networks deepen. MEGANet is designed as an end-to-end framework, encompassing three key modules: an encoder, which is responsible for capturing and abstracting the features from the input image, a decoder, which focuses on salient features, and the Edge-Guided Attention module (EGA) that employs the Laplacian Operator to accentuate polyp boundaries. Extensive experiments, both qualitative and quantitative, on five benchmark datasets, demonstrate that our EGANet outperforms other existing SOTA methods under six evaluation metrics. Our code is available at \url{https://github.com/DinhHieuHoang/MEGANet}
### Prompt-based Context- and Domain-aware Pretraining for Vision and Language Navigation
- **Authors:** Ting Liu, Wansen Wu, Yue Hu, Youkai Wang, Kai Xu, Quanjun Yin
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2309.03661
- **Pdf link:** https://arxiv.org/pdf/2309.03661
- **Abstract**
With strong representation capabilities, pretrained vision-language models are widely used in vision and language navigation (VLN). However, most of them are trained on web-crawled general-purpose datasets, which incurs a considerable domain gap when used for VLN tasks. Another challenge for VLN is how the agent understands the contextual relations between actions on a trajectory and performs cross-modal alignment sequentially. In this paper, we propose a novel Prompt-bAsed coNtext- and Domain-Aware (PANDA) pretraining framework to address these problems. It performs prompting in two stages. In the domain-aware stage, we apply a low-cost prompt tuning paradigm to learn soft visual prompts from an in-domain dataset for equipping the pretrained models with object-level and scene-level cross-modal alignment in VLN tasks. Furthermore, in the context-aware stage, we design a set of hard context prompts to capture the sequence-level semantics and instill both out-of-context and contextual knowledge in the instruction into cross-modal representations. They enable further tuning of the pretrained models via contrastive learning. Experimental results on both R2R and REVERIE show the superiority of PANDA compared to previous state-of-the-art methods.
## Keyword: raw image
There is no result
|
2.0
|
New submissions for Fri, 8 Sep 23 - ## Keyword: events
There is no result
## Keyword: event camera
There is no result
## Keyword: events camera
There is no result
## Keyword: white balance
There is no result
## Keyword: color contrast
There is no result
## Keyword: AWB
There is no result
## Keyword: ISP
### Distribution-Aware Prompt Tuning for Vision-Language Models
- **Authors:** Eulrang Cho, Jooyeon Kim, Hyunwoo J. Kim
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2309.03406
- **Pdf link:** https://arxiv.org/pdf/2309.03406
- **Abstract**
Pre-trained vision-language models (VLMs) have shown impressive performance on various downstream tasks by utilizing knowledge learned from large data. In general, the performance of VLMs on target tasks can be further improved by prompt tuning, which adds context to the input image or text. By leveraging data from target tasks, various prompt-tuning methods have been studied in the literature. A key to prompt tuning is the feature space alignment between two modalities via learnable vectors with model parameters fixed. We observed that the alignment becomes more effective when embeddings of each modality are `well-arranged' in the latent space. Inspired by this observation, we proposed distribution-aware prompt tuning (DAPT) for vision-language models, which is simple yet effective. Specifically, the prompts are learned by maximizing inter-dispersion, the distance between classes, as well as minimizing the intra-dispersion measured by the distance between embeddings from the same class. Our extensive experiments on 11 benchmark datasets demonstrate that our method significantly improves generalizability. The code is available at https://github.com/mlvlab/DAPT.
### Trash to Treasure: Low-Light Object Detection via Decomposition-and-Aggregation
- **Authors:** Xiaohan Cui, Long Ma, Tengyu Ma, Jinyuan Liu, Xin Fan, Risheng Liu
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2309.03548
- **Pdf link:** https://arxiv.org/pdf/2309.03548
- **Abstract**
Object detection in low-light scenarios has attracted much attention in the past few years. A mainstream and representative scheme introduces enhancers as the pre-processing for regular detectors. However, because of the disparity in task objectives between the enhancer and detector, this paradigm cannot shine at its best ability. In this work, we try to arouse the potential of enhancer + detector. Different from existing works, we extend the illumination-based enhancers (our newly designed or existing) as a scene decomposition module, whose removed illumination is exploited as the auxiliary in the detector for extracting detection-friendly features. A semantic aggregation module is further established for integrating multi-scale scene-related semantic information in the context space. Actually, our built scheme successfully transforms the "trash" (i.e., the ignored illumination in the detector) into the "treasure" for the detector. Plenty of experiments are conducted to reveal our superiority against other state-of-the-art methods. The code will be public if it is accepted.
## Keyword: image signal processing
There is no result
## Keyword: image signal process
There is no result
## Keyword: compression
### Dynamic Frame Interpolation in Wavelet Domain
- **Authors:** Lingtong Kong, Boyuan Jiang, Donghao Luo, Wenqing Chu, Ying Tai, Chengjie Wang, Jie Yang
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2309.03508
- **Pdf link:** https://arxiv.org/pdf/2309.03508
- **Abstract**
Video frame interpolation is an important low-level vision task, which can increase frame rate for more fluent visual experience. Existing methods have achieved great success by employing advanced motion models and synthesis networks. However, the spatial redundancy when synthesizing the target frame has not been fully explored, that can result in lots of inefficient computation. On the other hand, the computation compression degree in frame interpolation is highly dependent on both texture distribution and scene motion, which demands to understand the spatial-temporal information of each input frame pair for a better compression degree selection. In this work, we propose a novel two-stage frame interpolation framework termed WaveletVFI to address above problems. It first estimates intermediate optical flow with a lightweight motion perception network, and then a wavelet synthesis network uses flow aligned context features to predict multi-scale wavelet coefficients with sparse convolution for efficient target frame reconstruction, where the sparse valid masks that control computation in each scale are determined by a crucial threshold ratio. Instead of setting a fixed value like previous methods, we find that embedding a classifier in the motion perception network to learn a dynamic threshold for each sample can achieve more computation reduction with almost no loss of accuracy. On the common high resolution and animation frame interpolation benchmarks, proposed WaveletVFI can reduce computation up to 40% while maintaining similar accuracy, making it perform more efficiently against other state-of-the-arts. Code is available at https://github.com/ltkong218/WaveletVFI.
## Keyword: RAW
### MEGANet: Multi-Scale Edge-Guided Attention Network for Weak Boundary Polyp Segmentation
- **Authors:** Nhat-Tan Bui, Dinh-Hieu Hoang, Quang-Thuc Nguyen, Minh-Triet Tran, Ngan Le
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2309.03329
- **Pdf link:** https://arxiv.org/pdf/2309.03329
- **Abstract**
Efficient polyp segmentation in healthcare plays a critical role in enabling early diagnosis of colorectal cancer. However, the segmentation of polyps presents numerous challenges, including the intricate distribution of backgrounds, variations in polyp sizes and shapes, and indistinct boundaries. Defining the boundary between the foreground (i.e. polyp itself) and the background (surrounding tissue) is difficult. To mitigate these challenges, we propose Multi-Scale Edge-Guided Attention Network (MEGANet) tailored specifically for polyp segmentation within colonoscopy images. This network draws inspiration from the fusion of a classical edge detection technique with an attention mechanism. By combining these techniques, MEGANet effectively preserves high-frequency information, notably edges and boundaries, which tend to erode as neural networks deepen. MEGANet is designed as an end-to-end framework, encompassing three key modules: an encoder, which is responsible for capturing and abstracting the features from the input image, a decoder, which focuses on salient features, and the Edge-Guided Attention module (EGA) that employs the Laplacian Operator to accentuate polyp boundaries. Extensive experiments, both qualitative and quantitative, on five benchmark datasets, demonstrate that our EGANet outperforms other existing SOTA methods under six evaluation metrics. Our code is available at \url{https://github.com/DinhHieuHoang/MEGANet}
### Prompt-based Context- and Domain-aware Pretraining for Vision and Language Navigation
- **Authors:** Ting Liu, Wansen Wu, Yue Hu, Youkai Wang, Kai Xu, Quanjun Yin
- **Subjects:** Computer Vision and Pattern Recognition (cs.CV)
- **Arxiv link:** https://arxiv.org/abs/2309.03661
- **Pdf link:** https://arxiv.org/pdf/2309.03661
- **Abstract**
With strong representation capabilities, pretrained vision-language models are widely used in vision and language navigation (VLN). However, most of them are trained on web-crawled general-purpose datasets, which incurs a considerable domain gap when used for VLN tasks. Another challenge for VLN is how the agent understands the contextual relations between actions on a trajectory and performs cross-modal alignment sequentially. In this paper, we propose a novel Prompt-bAsed coNtext- and Domain-Aware (PANDA) pretraining framework to address these problems. It performs prompting in two stages. In the domain-aware stage, we apply a low-cost prompt tuning paradigm to learn soft visual prompts from an in-domain dataset for equipping the pretrained models with object-level and scene-level cross-modal alignment in VLN tasks. Furthermore, in the context-aware stage, we design a set of hard context prompts to capture the sequence-level semantics and instill both out-of-context and contextual knowledge in the instruction into cross-modal representations. They enable further tuning of the pretrained models via contrastive learning. Experimental results on both R2R and REVERIE show the superiority of PANDA compared to previous state-of-the-art methods.
## Keyword: raw image
There is no result
|
process
|
new submissions for fri sep keyword events there is no result keyword event camera there is no result keyword events camera there is no result keyword white balance there is no result keyword color contrast there is no result keyword awb there is no result keyword isp distribution aware prompt tuning for vision language models authors eulrang cho jooyeon kim hyunwoo j kim subjects computer vision and pattern recognition cs cv arxiv link pdf link abstract pre trained vision language models vlms have shown impressive performance on various downstream tasks by utilizing knowledge learned from large data in general the performance of vlms on target tasks can be further improved by prompt tuning which adds context to the input image or text by leveraging data from target tasks various prompt tuning methods have been studied in the literature a key to prompt tuning is the feature space alignment between two modalities via learnable vectors with model parameters fixed we observed that the alignment becomes more effective when embeddings of each modality are well arranged in the latent space inspired by this observation we proposed distribution aware prompt tuning dapt for vision language models which is simple yet effective specifically the prompts are learned by maximizing inter dispersion the distance between classes as well as minimizing the intra dispersion measured by the distance between embeddings from the same class our extensive experiments on benchmark datasets demonstrate that our method significantly improves generalizability the code is available at trash to treasure low light object detection via decomposition and aggregation authors xiaohan cui long ma tengyu ma jinyuan liu xin fan risheng liu subjects computer vision and pattern recognition cs cv arxiv link pdf link abstract object detection in low light scenarios has attracted much attention in the past few years a mainstream and representative scheme introduces enhancers as the pre processing for regular detectors however because of the disparity in task objectives between the enhancer and detector this paradigm cannot shine at its best ability in this work we try to arouse the potential of enhancer detector different from existing works we extend the illumination based enhancers our newly designed or existing as a scene decomposition module whose removed illumination is exploited as the auxiliary in the detector for extracting detection friendly features a semantic aggregation module is further established for integrating multi scale scene related semantic information in the context space actually our built scheme successfully transforms the trash i e the ignored illumination in the detector into the treasure for the detector plenty of experiments are conducted to reveal our superiority against other state of the art methods the code will be public if it is accepted keyword image signal processing there is no result keyword image signal process there is no result keyword compression dynamic frame interpolation in wavelet domain authors lingtong kong boyuan jiang donghao luo wenqing chu ying tai chengjie wang jie yang subjects computer vision and pattern recognition cs cv arxiv link pdf link abstract video frame interpolation is an important low level vision task which can increase frame rate for more fluent visual experience existing methods have achieved great success by employing advanced motion models and synthesis networks however the spatial redundancy when synthesizing the target frame has not been fully explored that can result in lots of inefficient computation on the other hand the computation compression degree in frame interpolation is highly dependent on both texture distribution and scene motion which demands to understand the spatial temporal information of each input frame pair for a better compression degree selection in this work we propose a novel two stage frame interpolation framework termed waveletvfi to address above problems it first estimates intermediate optical flow with a lightweight motion perception network and then a wavelet synthesis network uses flow aligned context features to predict multi scale wavelet coefficients with sparse convolution for efficient target frame reconstruction where the sparse valid masks that control computation in each scale are determined by a crucial threshold ratio instead of setting a fixed value like previous methods we find that embedding a classifier in the motion perception network to learn a dynamic threshold for each sample can achieve more computation reduction with almost no loss of accuracy on the common high resolution and animation frame interpolation benchmarks proposed waveletvfi can reduce computation up to while maintaining similar accuracy making it perform more efficiently against other state of the arts code is available at keyword raw meganet multi scale edge guided attention network for weak boundary polyp segmentation authors nhat tan bui dinh hieu hoang quang thuc nguyen minh triet tran ngan le subjects computer vision and pattern recognition cs cv arxiv link pdf link abstract efficient polyp segmentation in healthcare plays a critical role in enabling early diagnosis of colorectal cancer however the segmentation of polyps presents numerous challenges including the intricate distribution of backgrounds variations in polyp sizes and shapes and indistinct boundaries defining the boundary between the foreground i e polyp itself and the background surrounding tissue is difficult to mitigate these challenges we propose multi scale edge guided attention network meganet tailored specifically for polyp segmentation within colonoscopy images this network draws inspiration from the fusion of a classical edge detection technique with an attention mechanism by combining these techniques meganet effectively preserves high frequency information notably edges and boundaries which tend to erode as neural networks deepen meganet is designed as an end to end framework encompassing three key modules an encoder which is responsible for capturing and abstracting the features from the input image a decoder which focuses on salient features and the edge guided attention module ega that employs the laplacian operator to accentuate polyp boundaries extensive experiments both qualitative and quantitative on five benchmark datasets demonstrate that our eganet outperforms other existing sota methods under six evaluation metrics our code is available at url prompt based context and domain aware pretraining for vision and language navigation authors ting liu wansen wu yue hu youkai wang kai xu quanjun yin subjects computer vision and pattern recognition cs cv arxiv link pdf link abstract with strong representation capabilities pretrained vision language models are widely used in vision and language navigation vln however most of them are trained on web crawled general purpose datasets which incurs a considerable domain gap when used for vln tasks another challenge for vln is how the agent understands the contextual relations between actions on a trajectory and performs cross modal alignment sequentially in this paper we propose a novel prompt based context and domain aware panda pretraining framework to address these problems it performs prompting in two stages in the domain aware stage we apply a low cost prompt tuning paradigm to learn soft visual prompts from an in domain dataset for equipping the pretrained models with object level and scene level cross modal alignment in vln tasks furthermore in the context aware stage we design a set of hard context prompts to capture the sequence level semantics and instill both out of context and contextual knowledge in the instruction into cross modal representations they enable further tuning of the pretrained models via contrastive learning experimental results on both and reverie show the superiority of panda compared to previous state of the art methods keyword raw image there is no result
| 1
|
4,105
| 7,055,072,341
|
IssuesEvent
|
2018-01-04 05:51:00
|
log2timeline/plaso
|
https://api.github.com/repos/log2timeline/plaso
|
closed
|
User account detection fails on studentpc1 tests image
|
bug preprocessing
|
```
2017-08-21 21:12:24,576 [WARNING] (MainProcess) PID:23523 <manager> Unable to find any user accounts on the system.
2017-08-21 21:12:25,450 [WARNING] (MainProcess) PID:23523 <manager> Unable to find any user accounts on the system.
2017-08-21 21:12:25,459 [INFO] (MainProcess) PID:23523 <engine> Preprocessing detected platforms: Windows, Windows
2017-08-21 21:12:25,459 [INFO] (MainProcess) PID:23523 <log2timeline_tool> Parser filter expression changed to: win7
```
|
1.0
|
User account detection fails on studentpc1 tests image - ```
2017-08-21 21:12:24,576 [WARNING] (MainProcess) PID:23523 <manager> Unable to find any user accounts on the system.
2017-08-21 21:12:25,450 [WARNING] (MainProcess) PID:23523 <manager> Unable to find any user accounts on the system.
2017-08-21 21:12:25,459 [INFO] (MainProcess) PID:23523 <engine> Preprocessing detected platforms: Windows, Windows
2017-08-21 21:12:25,459 [INFO] (MainProcess) PID:23523 <log2timeline_tool> Parser filter expression changed to: win7
```
|
process
|
user account detection fails on tests image mainprocess pid unable to find any user accounts on the system mainprocess pid unable to find any user accounts on the system mainprocess pid preprocessing detected platforms windows windows mainprocess pid parser filter expression changed to
| 1
|
9,907
| 12,948,469,975
|
IssuesEvent
|
2020-07-19 04:41:44
|
OI-wiki/OI-wiki
|
https://api.github.com/repos/OI-wiki/OI-wiki
|
closed
|
规范 MathJax 公式
|
Discussion / 需要讨论 Format fix needed / 格式需修正 Need Processing / 需要处理 Work in Progress / 施工中 help wanted / 需要帮助
|
重构 TeX 导出工具期间发现了不少问题。这些问题导致导出 TeX PDF 时出现版面错误、符号错误等现象。包括但不限于以下:
页面 string/bm.md:
> 首先考虑 $delta_1$ 不起作用的情况,也就是发现失配字符在 $pat$ 上重现的位置在已经匹配完的 $m$ 个字符中,这种情况的概率 $\textit{probdelta_1_worthless}$ 为
* 存在滥用 MathJax 公式的现象,甚至单个变量名也要用 $ 套起来;
* ~~命名不规范(TeX 不允许同一字符后连续接两个下标);~~ 这一点属于 MathJax 和 LaTeX 的差异,不需要处理
页面 geometry/distance.md
> $(y_1 - y_2 \lt 0)\rightarrow |x_1-x_2|+|y_1-y_2|=x_1 - y_1 - (x_2 - y_2)$
* \lt 命令与 TeX 不兼容(需要 \newcommand 之类的 dirty hack);
页面 math/permutation_group.md
> 对于两个置换 $f=\pmatrix{a_1,a_2,\dots,a_n\\a_{p_1},a_{p_2},\dots,a_{p_n}}$ 和 $g=\pmatrix{a_{p_1},a_{p_2},\dots,a_{p_n}\\a_{q_1},a_{q_2},\dots,a_{q_n}}$
* \pmatrix 是过时写法 (old form),应使用 \begin{pmatrix} ... \end{pmatrix} 代替。
math 套汉字的问题
$公式中出现中文的情况$
## 可能需要做的修复
检查目前 OI Wiki 的所有页面,规范公式书写格式,尽可能保证公式同时被 MathJax 和 TeX 接受。
(接受的标准为 TeX 编译器不报 error 和 warning)
|
1.0
|
规范 MathJax 公式 - 重构 TeX 导出工具期间发现了不少问题。这些问题导致导出 TeX PDF 时出现版面错误、符号错误等现象。包括但不限于以下:
页面 string/bm.md:
> 首先考虑 $delta_1$ 不起作用的情况,也就是发现失配字符在 $pat$ 上重现的位置在已经匹配完的 $m$ 个字符中,这种情况的概率 $\textit{probdelta_1_worthless}$ 为
* 存在滥用 MathJax 公式的现象,甚至单个变量名也要用 $ 套起来;
* ~~命名不规范(TeX 不允许同一字符后连续接两个下标);~~ 这一点属于 MathJax 和 LaTeX 的差异,不需要处理
页面 geometry/distance.md
> $(y_1 - y_2 \lt 0)\rightarrow |x_1-x_2|+|y_1-y_2|=x_1 - y_1 - (x_2 - y_2)$
* \lt 命令与 TeX 不兼容(需要 \newcommand 之类的 dirty hack);
页面 math/permutation_group.md
> 对于两个置换 $f=\pmatrix{a_1,a_2,\dots,a_n\\a_{p_1},a_{p_2},\dots,a_{p_n}}$ 和 $g=\pmatrix{a_{p_1},a_{p_2},\dots,a_{p_n}\\a_{q_1},a_{q_2},\dots,a_{q_n}}$
* \pmatrix 是过时写法 (old form),应使用 \begin{pmatrix} ... \end{pmatrix} 代替。
math 套汉字的问题
$公式中出现中文的情况$
## 可能需要做的修复
检查目前 OI Wiki 的所有页面,规范公式书写格式,尽可能保证公式同时被 MathJax 和 TeX 接受。
(接受的标准为 TeX 编译器不报 error 和 warning)
|
process
|
规范 mathjax 公式 重构 tex 导出工具期间发现了不少问题。这些问题导致导出 tex pdf 时出现版面错误、符号错误等现象。包括但不限于以下: 页面 string bm md: 首先考虑 delta 不起作用的情况,也就是发现失配字符在 pat 上重现的位置在已经匹配完的 m 个字符中,这种情况的概率 textit probdelta worthless 为 存在滥用 mathjax 公式的现象,甚至单个变量名也要用 套起来; 命名不规范(tex 不允许同一字符后连续接两个下标); 这一点属于 mathjax 和 latex 的差异,不需要处理 页面 geometry distance md y y lt rightarrow x x y y x y x y lt 命令与 tex 不兼容(需要 newcommand 之类的 dirty hack); 页面 math permutation group md 对于两个置换 f pmatrix a a dots a n a p a p dots a p n 和 g pmatrix a p a p dots a p n a q a q dots a q n pmatrix 是过时写法 old form ,应使用 begin pmatrix end pmatrix 代替。 math 套汉字的问题 公式中出现中文的情况 可能需要做的修复 检查目前 oi wiki 的所有页面,规范公式书写格式,尽可能保证公式同时被 mathjax 和 tex 接受。 (接受的标准为 tex 编译器不报 error 和 warning)
| 1
|
152,702
| 13,465,029,875
|
IssuesEvent
|
2020-09-09 20:10:29
|
egobillot/calltop
|
https://api.github.com/repos/egobillot/calltop
|
closed
|
[doc] add a page in the wiki to help building python3 with dtrace support
|
documentation
|
With ubuntu 20.04 python3 package comes with the dtrace support that allows the tracing with usdt (ebpf). That not the case for the previous ubuntu version. Other linux distributions may have the same problem. It's important the describe how to build python with dtrace enable (--with-dtrace flag)
A wiki pages would do the job.
|
1.0
|
[doc] add a page in the wiki to help building python3 with dtrace support - With ubuntu 20.04 python3 package comes with the dtrace support that allows the tracing with usdt (ebpf). That not the case for the previous ubuntu version. Other linux distributions may have the same problem. It's important the describe how to build python with dtrace enable (--with-dtrace flag)
A wiki pages would do the job.
|
non_process
|
add a page in the wiki to help building with dtrace support with ubuntu package comes with the dtrace support that allows the tracing with usdt ebpf that not the case for the previous ubuntu version other linux distributions may have the same problem it s important the describe how to build python with dtrace enable with dtrace flag a wiki pages would do the job
| 0
|
687,825
| 23,540,228,876
|
IssuesEvent
|
2022-08-20 09:12:45
|
pyg-team/pytorch_geometric
|
https://api.github.com/repos/pyg-team/pytorch_geometric
|
opened
|
`dblp` dataset download link is broken
|
bug 0 - Priority P0
|
### 🐛 Describe the bug
The `dblp` dataset is currently broken as the dropbox [link](https://www.dropbox.com/s/yh4grpeks87ugr2/DBLP_processed.zip?dl=1) is no longer live. This was showing up in failing CI tests for all PRs.
Possible fixes:
- Find another source of the same data (or upload the same if we have it anywhere).
- Recreate the data from the dataset which is available [here](https://dblp.org/faq/How+can+I+download+the+whole+dblp+dataset.html).
Note that we have added a skipped test [here](https://github.com/pyg-team/pytorch_geometric/pull/5250/files#r950671979) which should be re-enabled once this dataset is fixed.
### Environment
* PyG version: 2.1.0
* PyTorch version: 1.12
* OS: CI
|
1.0
|
`dblp` dataset download link is broken - ### 🐛 Describe the bug
The `dblp` dataset is currently broken as the dropbox [link](https://www.dropbox.com/s/yh4grpeks87ugr2/DBLP_processed.zip?dl=1) is no longer live. This was showing up in failing CI tests for all PRs.
Possible fixes:
- Find another source of the same data (or upload the same if we have it anywhere).
- Recreate the data from the dataset which is available [here](https://dblp.org/faq/How+can+I+download+the+whole+dblp+dataset.html).
Note that we have added a skipped test [here](https://github.com/pyg-team/pytorch_geometric/pull/5250/files#r950671979) which should be re-enabled once this dataset is fixed.
### Environment
* PyG version: 2.1.0
* PyTorch version: 1.12
* OS: CI
|
non_process
|
dblp dataset download link is broken 🐛 describe the bug the dblp dataset is currently broken as the dropbox is no longer live this was showing up in failing ci tests for all prs possible fixes find another source of the same data or upload the same if we have it anywhere recreate the data from the dataset which is available note that we have added a skipped test which should be re enabled once this dataset is fixed environment pyg version pytorch version os ci
| 0
|
7,736
| 2,925,271,422
|
IssuesEvent
|
2015-06-26 03:33:15
|
jubatus/jubatus
|
https://api.github.com/repos/jubatus/jubatus
|
closed
|
C++ client tests are not working
|
test
|
Currently, all tests are being run with `jubaregression` and `jubaregression_proxy`, even for tests for other services (like classifier, recommender, ...etc.)
`test_gtest.py` holds the service name as a class field, but it should be an instance field, as every tests use different service to each other.
https://github.com/jubatus/jubatus/blob/0.7.0/client_test/test_gtest.py#L47
Moreover, the test failures seems not reported.
|
1.0
|
C++ client tests are not working - Currently, all tests are being run with `jubaregression` and `jubaregression_proxy`, even for tests for other services (like classifier, recommender, ...etc.)
`test_gtest.py` holds the service name as a class field, but it should be an instance field, as every tests use different service to each other.
https://github.com/jubatus/jubatus/blob/0.7.0/client_test/test_gtest.py#L47
Moreover, the test failures seems not reported.
|
non_process
|
c client tests are not working currently all tests are being run with jubaregression and jubaregression proxy even for tests for other services like classifier recommender etc test gtest py holds the service name as a class field but it should be an instance field as every tests use different service to each other moreover the test failures seems not reported
| 0
|
268,727
| 8,410,915,127
|
IssuesEvent
|
2018-10-12 12:19:00
|
nlbdev/pipeline
|
https://api.github.com/repos/nlbdev/pipeline
|
opened
|
Fallback mechanism for matrix tables that can not be rendered as matrix tables
|
Priority:2 - Medium dotify enhancement
|
There shouldn't be an exception if matrix tables can not be rendered as matrix tables. It should rather fall back to render as a list, and print a warning.
|
1.0
|
Fallback mechanism for matrix tables that can not be rendered as matrix tables - There shouldn't be an exception if matrix tables can not be rendered as matrix tables. It should rather fall back to render as a list, and print a warning.
|
non_process
|
fallback mechanism for matrix tables that can not be rendered as matrix tables there shouldn t be an exception if matrix tables can not be rendered as matrix tables it should rather fall back to render as a list and print a warning
| 0
|
11,927
| 14,704,278,503
|
IssuesEvent
|
2021-01-04 16:15:41
|
MicrosoftDocs/azure-devops-docs
|
https://api.github.com/repos/MicrosoftDocs/azure-devops-docs
|
closed
|
Service Connections: Required Template across Organizations
|
Pri2 devops-cicd-process/tech devops/prod doc-enhancement
|
When it talks about using Required Template to require pipelines to extend from a template when they use a service connection, it does not show how to reference an Azure DevOps git repository which lives in a different organization, or even a different tenant (I need answers for both). It is unclear whether this is possible at all. Please clarify.
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: b067a175-f640-7503-9c1e-f0130c6dbeda
* Version Independent ID: ff743c7b-a103-eae6-4478-62ba995a4b36
* Content: [Pipeline deployment approvals - Azure Pipelines](https://docs.microsoft.com/en-us/azure/devops/pipelines/process/approvals?view=azure-devops&tabs=check-pass)
* Content Source: [docs/pipelines/process/approvals.md](https://github.com/MicrosoftDocs/azure-devops-docs/blob/master/docs/pipelines/process/approvals.md)
* Product: **devops**
* Technology: **devops-cicd-process**
* GitHub Login: @shashban
* Microsoft Alias: **shashban**
|
1.0
|
Service Connections: Required Template across Organizations -
When it talks about using Required Template to require pipelines to extend from a template when they use a service connection, it does not show how to reference an Azure DevOps git repository which lives in a different organization, or even a different tenant (I need answers for both). It is unclear whether this is possible at all. Please clarify.
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: b067a175-f640-7503-9c1e-f0130c6dbeda
* Version Independent ID: ff743c7b-a103-eae6-4478-62ba995a4b36
* Content: [Pipeline deployment approvals - Azure Pipelines](https://docs.microsoft.com/en-us/azure/devops/pipelines/process/approvals?view=azure-devops&tabs=check-pass)
* Content Source: [docs/pipelines/process/approvals.md](https://github.com/MicrosoftDocs/azure-devops-docs/blob/master/docs/pipelines/process/approvals.md)
* Product: **devops**
* Technology: **devops-cicd-process**
* GitHub Login: @shashban
* Microsoft Alias: **shashban**
|
process
|
service connections required template across organizations when it talks about using required template to require pipelines to extend from a template when they use a service connection it does not show how to reference an azure devops git repository which lives in a different organization or even a different tenant i need answers for both it is unclear whether this is possible at all please clarify document details ⚠ do not edit this section it is required for docs microsoft com ➟ github issue linking id version independent id content content source product devops technology devops cicd process github login shashban microsoft alias shashban
| 1
|
248,240
| 26,784,984,657
|
IssuesEvent
|
2023-02-01 01:30:18
|
tongni1975/containers-may19-2020-MyWork
|
https://api.github.com/repos/tongni1975/containers-may19-2020-MyWork
|
opened
|
CVE-2022-25881 (Medium) detected in http-cache-semantics-3.8.1.tgz
|
security vulnerability
|
## CVE-2022-25881 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>http-cache-semantics-3.8.1.tgz</b></p></summary>
<p>Parses Cache-Control and other headers. Helps building correct HTTP caches and proxies</p>
<p>Library home page: <a href="https://registry.npmjs.org/http-cache-semantics/-/http-cache-semantics-3.8.1.tgz">https://registry.npmjs.org/http-cache-semantics/-/http-cache-semantics-3.8.1.tgz</a></p>
<p>Path to dependency file: /northwind/client/package.json</p>
<p>Path to vulnerable library: /northwind/client/node_modules/http-cache-semantics/package.json</p>
<p>
Dependency Hierarchy:
- cli-7.2.3.tgz (Root Library)
- update-0.12.3.tgz
- pacote-9.1.1.tgz
- make-fetch-happen-4.0.1.tgz
- :x: **http-cache-semantics-3.8.1.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/tongni1975/containers-may19-2020-MyWork/commit/7799e7271c1aa78ce2352bc88c61608623e6f15c">7799e7271c1aa78ce2352bc88c61608623e6f15c</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
This affects versions of the package http-cache-semantics before 4.1.1. The issue can be exploited via malicious request header values sent to a server, when that server reads the cache policy from the request using this library.
<p>Publish Date: 2023-01-31
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2022-25881>CVE-2022-25881</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.3</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: Low
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://www.cve.org/CVERecord?id=CVE-2022-25881">https://www.cve.org/CVERecord?id=CVE-2022-25881</a></p>
<p>Release Date: 2023-01-31</p>
<p>Fix Resolution: http-cache-semantics - 4.1.1</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
CVE-2022-25881 (Medium) detected in http-cache-semantics-3.8.1.tgz - ## CVE-2022-25881 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>http-cache-semantics-3.8.1.tgz</b></p></summary>
<p>Parses Cache-Control and other headers. Helps building correct HTTP caches and proxies</p>
<p>Library home page: <a href="https://registry.npmjs.org/http-cache-semantics/-/http-cache-semantics-3.8.1.tgz">https://registry.npmjs.org/http-cache-semantics/-/http-cache-semantics-3.8.1.tgz</a></p>
<p>Path to dependency file: /northwind/client/package.json</p>
<p>Path to vulnerable library: /northwind/client/node_modules/http-cache-semantics/package.json</p>
<p>
Dependency Hierarchy:
- cli-7.2.3.tgz (Root Library)
- update-0.12.3.tgz
- pacote-9.1.1.tgz
- make-fetch-happen-4.0.1.tgz
- :x: **http-cache-semantics-3.8.1.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/tongni1975/containers-may19-2020-MyWork/commit/7799e7271c1aa78ce2352bc88c61608623e6f15c">7799e7271c1aa78ce2352bc88c61608623e6f15c</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
This affects versions of the package http-cache-semantics before 4.1.1. The issue can be exploited via malicious request header values sent to a server, when that server reads the cache policy from the request using this library.
<p>Publish Date: 2023-01-31
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2022-25881>CVE-2022-25881</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.3</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: Low
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://www.cve.org/CVERecord?id=CVE-2022-25881">https://www.cve.org/CVERecord?id=CVE-2022-25881</a></p>
<p>Release Date: 2023-01-31</p>
<p>Fix Resolution: http-cache-semantics - 4.1.1</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_process
|
cve medium detected in http cache semantics tgz cve medium severity vulnerability vulnerable library http cache semantics tgz parses cache control and other headers helps building correct http caches and proxies library home page a href path to dependency file northwind client package json path to vulnerable library northwind client node modules http cache semantics package json dependency hierarchy cli tgz root library update tgz pacote tgz make fetch happen tgz x http cache semantics tgz vulnerable library found in head commit a href vulnerability details this affects versions of the package http cache semantics before the issue can be exploited via malicious request header values sent to a server when that server reads the cache policy from the request using this library publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact low for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution http cache semantics step up your open source security game with mend
| 0
|
18,309
| 24,420,587,155
|
IssuesEvent
|
2022-10-05 19:55:28
|
PetterVargas/statuspage
|
https://api.github.com/repos/PetterVargas/statuspage
|
closed
|
🛑 Contifico contiprocess is down
|
status contifico-contiprocess
|
In [`4911f66`](https://github.com/PetterVargas/statuspage/commit/4911f66f2be210d398e7b82eefae1d7e422d56a8
), Contifico contiprocess (https://contiprocess.contifico.com) was **down**:
- HTTP code: 521
- Response time: 110 ms
|
1.0
|
🛑 Contifico contiprocess is down - In [`4911f66`](https://github.com/PetterVargas/statuspage/commit/4911f66f2be210d398e7b82eefae1d7e422d56a8
), Contifico contiprocess (https://contiprocess.contifico.com) was **down**:
- HTTP code: 521
- Response time: 110 ms
|
process
|
🛑 contifico contiprocess is down in contifico contiprocess was down http code response time ms
| 1
|
342,426
| 24,742,033,468
|
IssuesEvent
|
2022-10-21 06:25:57
|
crescentpartha/CheatSheets-for-Developers
|
https://api.github.com/repos/crescentpartha/CheatSheets-for-Developers
|
closed
|
Add more html tags and attributes in html-cheatsheet.md file
|
documentation help wanted good first issue hacktoberfest hacktoberfest-2022
|
**Descriptions**
> add more `html tags and attributes`
**Expected behavior**
> Enrich the [html-cheatsheet.md](https://github.com/crescentpartha/CheatSheets-for-Developers/blob/main/CheatSheets/html-cheatsheet.md) file by adding more frequently used html tags and attributes.
**Screenshots or Links**
> If applicable, add **screenshots** or a modified **UI link** to help explain your contribution.
---
Please, fill free to contribute to this project. Anyone including `newcomers` can make a contribution to this repo by adding `basic Commands` or `Keyboard Shortcuts` or `tags and attributes`. Please, give a `star` to support this project.
|
1.0
|
Add more html tags and attributes in html-cheatsheet.md file - **Descriptions**
> add more `html tags and attributes`
**Expected behavior**
> Enrich the [html-cheatsheet.md](https://github.com/crescentpartha/CheatSheets-for-Developers/blob/main/CheatSheets/html-cheatsheet.md) file by adding more frequently used html tags and attributes.
**Screenshots or Links**
> If applicable, add **screenshots** or a modified **UI link** to help explain your contribution.
---
Please, fill free to contribute to this project. Anyone including `newcomers` can make a contribution to this repo by adding `basic Commands` or `Keyboard Shortcuts` or `tags and attributes`. Please, give a `star` to support this project.
|
non_process
|
add more html tags and attributes in html cheatsheet md file descriptions add more html tags and attributes expected behavior enrich the file by adding more frequently used html tags and attributes screenshots or links if applicable add screenshots or a modified ui link to help explain your contribution please fill free to contribute to this project anyone including newcomers can make a contribution to this repo by adding basic commands or keyboard shortcuts or tags and attributes please give a star to support this project
| 0
|
438,251
| 12,624,860,940
|
IssuesEvent
|
2020-06-14 08:46:00
|
lorenzwalthert/precommit
|
https://api.github.com/repos/lorenzwalthert/precommit
|
closed
|
Don't run autoupdate() if .pre-commit-config.yaml is already present?
|
Complexity: Low Priority: Medium Status: Unassigned Type: Enhancement
|
Advantage
A way to increase the probability that up-to-date hooks are used. Otherwise, people might forget to run `autoupdate()` and don't benefit from improvement.
Disadvantage
Bound to convolute contributions, as the hook versions should be managed by the repo maintainer.
|
1.0
|
Don't run autoupdate() if .pre-commit-config.yaml is already present? - Advantage
A way to increase the probability that up-to-date hooks are used. Otherwise, people might forget to run `autoupdate()` and don't benefit from improvement.
Disadvantage
Bound to convolute contributions, as the hook versions should be managed by the repo maintainer.
|
non_process
|
don t run autoupdate if pre commit config yaml is already present advantage a way to increase the probability that up to date hooks are used otherwise people might forget to run autoupdate and don t benefit from improvement disadvantage bound to convolute contributions as the hook versions should be managed by the repo maintainer
| 0
|
43,671
| 9,478,758,719
|
IssuesEvent
|
2019-04-20 01:12:18
|
coderedcorp/coderedcms
|
https://api.github.com/repos/coderedcorp/coderedcms
|
closed
|
Generic Categories/Taxonomies
|
enhancement skill: coderedcms
|
A classic CMS function is the ability to group pages together using categories. Blogs especially need this, but any other parent-child relationship can benefit from this as well (think: product categories, portfolios, etc.)
Drawing inspiration from WordPress here, it actually has a pretty good way of handling these via taxonomies. We could implement something similar, albeit less cryptic (see https://codex.wordpress.org/Taxonomies)
Concrete models/snippets in coderedcms, for example:
* `Category` (Taxonomy)
* `CategoryItem` (Term) - with foreign key to `Category`
Then, in CoderedPage, a M2M for `CategoryItem`. Ideally, it would be rendered in the wagtail admin grouped by Category somehow for visual representation. We could even go a step further and under the display child page settings of a page, have the ability to specify an M2M of `Category` to render filtering options for those categories on the front-end.
Tags could potentially be used for this functionality, but they would do not benefit from the 2-level relationship. It would be easy to offer Category dropdowns on a blog landing or product landing page for example, and let the user filter by category. Whereas tags only offer a single level (i.e. select a tag) many of which would not be relevant for that particular page since tags are global.
Currently, we handle hard categorization by using sub pages and more sub pages. But the proposed category/taxonomy relationship would enable more graph-like relationships. For hard category classifications the `/parent/category/child/` tree structure would still work fine. But with this proposal a page could be queried by multiple categories which is useful for filtering scenarios. The taggit structure seems a bit too flaky to support a clean implementation of this at the moment.
One issue for discussion is that it would require a concrete model in coderedcms to function. There's really no way this could be implemented with abstract models. If you wanted to extend the category, for example to add some extra fields, you would have to either create a through model of your own or just not use the built in categories at all.
CC @corysutyak and @FlipperPA for thoughts on this.
|
1.0
|
Generic Categories/Taxonomies - A classic CMS function is the ability to group pages together using categories. Blogs especially need this, but any other parent-child relationship can benefit from this as well (think: product categories, portfolios, etc.)
Drawing inspiration from WordPress here, it actually has a pretty good way of handling these via taxonomies. We could implement something similar, albeit less cryptic (see https://codex.wordpress.org/Taxonomies)
Concrete models/snippets in coderedcms, for example:
* `Category` (Taxonomy)
* `CategoryItem` (Term) - with foreign key to `Category`
Then, in CoderedPage, a M2M for `CategoryItem`. Ideally, it would be rendered in the wagtail admin grouped by Category somehow for visual representation. We could even go a step further and under the display child page settings of a page, have the ability to specify an M2M of `Category` to render filtering options for those categories on the front-end.
Tags could potentially be used for this functionality, but they would do not benefit from the 2-level relationship. It would be easy to offer Category dropdowns on a blog landing or product landing page for example, and let the user filter by category. Whereas tags only offer a single level (i.e. select a tag) many of which would not be relevant for that particular page since tags are global.
Currently, we handle hard categorization by using sub pages and more sub pages. But the proposed category/taxonomy relationship would enable more graph-like relationships. For hard category classifications the `/parent/category/child/` tree structure would still work fine. But with this proposal a page could be queried by multiple categories which is useful for filtering scenarios. The taggit structure seems a bit too flaky to support a clean implementation of this at the moment.
One issue for discussion is that it would require a concrete model in coderedcms to function. There's really no way this could be implemented with abstract models. If you wanted to extend the category, for example to add some extra fields, you would have to either create a through model of your own or just not use the built in categories at all.
CC @corysutyak and @FlipperPA for thoughts on this.
|
non_process
|
generic categories taxonomies a classic cms function is the ability to group pages together using categories blogs especially need this but any other parent child relationship can benefit from this as well think product categories portfolios etc drawing inspiration from wordpress here it actually has a pretty good way of handling these via taxonomies we could implement something similar albeit less cryptic see concrete models snippets in coderedcms for example category taxonomy categoryitem term with foreign key to category then in coderedpage a for categoryitem ideally it would be rendered in the wagtail admin grouped by category somehow for visual representation we could even go a step further and under the display child page settings of a page have the ability to specify an of category to render filtering options for those categories on the front end tags could potentially be used for this functionality but they would do not benefit from the level relationship it would be easy to offer category dropdowns on a blog landing or product landing page for example and let the user filter by category whereas tags only offer a single level i e select a tag many of which would not be relevant for that particular page since tags are global currently we handle hard categorization by using sub pages and more sub pages but the proposed category taxonomy relationship would enable more graph like relationships for hard category classifications the parent category child tree structure would still work fine but with this proposal a page could be queried by multiple categories which is useful for filtering scenarios the taggit structure seems a bit too flaky to support a clean implementation of this at the moment one issue for discussion is that it would require a concrete model in coderedcms to function there s really no way this could be implemented with abstract models if you wanted to extend the category for example to add some extra fields you would have to either create a through model of your own or just not use the built in categories at all cc corysutyak and flipperpa for thoughts on this
| 0
|
10,650
| 13,448,899,637
|
IssuesEvent
|
2020-09-08 16:05:18
|
GetTerminus/terminus-oss
|
https://api.github.com/repos/GetTerminus/terminus-oss
|
closed
|
Storybook: Set up demo stories for all packages
|
Epic Focus: community Goal: Process Improvement Needs: planning Type: chore
|
Demo stories should be small, focused and meant to be the primary point of exploration.
TODO: create subtasks
|
1.0
|
Storybook: Set up demo stories for all packages - Demo stories should be small, focused and meant to be the primary point of exploration.
TODO: create subtasks
|
process
|
storybook set up demo stories for all packages demo stories should be small focused and meant to be the primary point of exploration todo create subtasks
| 1
|
3,994
| 6,922,672,254
|
IssuesEvent
|
2017-11-30 04:47:55
|
ncbo/bioportal-project
|
https://api.github.com/repos/ncbo/bioportal-project
|
opened
|
problem parsing SWEET
|
ontology processing problem
|
The SWEET ontology in BioPortal has about 12 classes with the name "error#', where # is a number from 1 to 12 or so. The log reports errors but these do not give a clue what this is or where it is happening (see below).
It's possible this is related to the fact the root ontology (http://sweetontology.net/sweetAll) #includes all the many individual ontologies. But I don't think Protege sees an error.
Any tips?
I, [2017-11-28T21:01:07.008500 #6803] INFO -- : ["2017-11-28T20:58:33 [main] INFO o.s.n.o.OntologyParserCommand - Parsing invocation with values: ParserInvocation [inputRepositoryFolder=null, outputRepositoryFolder=/srv/ncbo/repository/SWEET/2, masterFileName=/srv/ncbo/repository/SWEET/2/sweetAll, invocationId=0, parserLog=, userReasoner= true]\n\n2017-11-28T20:58:33 [main] INFO o.s.ncbo.oapiwrapper.OntologyParser - executor ...\n\n2017-11-28T20:58:34 [main] INFO o.s.ncbo.oapiwrapper.OntologyParser - Input repository folder is null. Unique file being parsed.\n\n2017-11-28T20:58:58 [main] ERROR o.s.o.r.rdfxml.parser.OWLRDFConsumer - Entity not properly recognized, missing triples in input? http://org.semanticweb.owlapi/error#Error1 for type Class\n\n2017-11-28T20:58:58 [main] ERROR o.s.o.r.rdfxml.parser.OWLRDFConsumer - Entity not properly recognized, missing triples in input? http://org.semanticweb.owlapi/error#Error2 for type Class\n\n2017-11-28T20:58:58 [main] ERROR o.s.o.r.rdfxml.parser.OWLRDFConsumer - Entity not properly recognized, missing triples in input? http://org.semanticweb.owlapi/error#Error3 for type Class\n\n2017-11-28T20:58:58 [main] ERROR o.s.o.r.rdfxml.parser.OWLRDFConsumer - Entity not properly recognized, missing triples in input? http://org.semanticweb.owlapi/error#Error4 for type Class\n\n2017-11-28T20:58:58 [main] ERROR o.s.o.r.rdfxml.parser.OWLRDFConsumer - Entity not properly recognized, missing triples in input? http://org.semanticweb.owlapi/error#Error5 for type Class\n\n2017-11-28T20:59:12 [main] ERROR o.s.o.r.rdfxml.parser.OWLRDFConsumer - Entity not properly recognized, missing triples in input? http://org.semanticweb.owlapi/error#Error6 for type Class\n\n2017-11-28T20:59:12 [main] ERROR o.s.o.r.rdfxml.parser.OWLRDFConsumer - Entity not properly recognized, missing triples in input? http://org.semanticweb.owlapi/error#Error7 for type Class\n\n2017-11-28T20:59:12 [main] ERROR o.s.o.r.rdfxml.parser.OWLRDFConsumer - Entity not properly recognized, missing triples in input? http://org.semanticweb.owlapi/error#Error8 for type Class\n\n2017-11-28T20:59:12 [main] ERROR o.s.o.r.rdfxml.parser.OWLRDFConsumer - Entity not properly recognized, missing triples in input? http://org.semanticweb.owlapi/error#Error9 for type Class\n\n2017-11-28T20:59:12 [main] ERROR o.s.o.r.rdfxml.parser.OWLRDFConsumer - Entity not properly recognized, missing triples in input? http://org.semanticweb.owlapi/error#Error10 for type Class\n\n2017-11-28T20:59:12 [main] ERROR o.s.o.r.rdfxml.parser.OWLRDFConsumer - Entity not properly recognized, missing triples in input? http://org.semanticweb.owlapi/error#Error11 for type Class\n\n2017-11-28T20:59:12 [main] ERROR o.s.o.r.rdfxml.parser.OWLRDFConsumer - Entity not properly recognized, missing triples in input? http://org.semanticweb.owlapi/error#Error12 for type Class\n\n2017-11-28T20:59:13 [main] ERROR o.s.o.r.rdfxml.parser.OWLRDFConsumer - Entity not properly recognized, missing triples in input? http://org.semanticweb.owlapi/error#Error13 for type Class\n\n2017-11-28T20:59:13 [main] ERROR o.s.o.r.rdfxml.parser.OWLRDFConsumer - Entity not properly recognized, missing triples in input? http://org.semanticweb.owlapi/error#Error14 for type Class\n\n2017-11-28T20:59:13 [main] ERROR o.s.o.r.rdfxml.parser.OWLRDFConsumer - Entity not properly recognized, missing triples in input? http://org.semanticweb.owlapi/error#Error15 for type Class\n\n2017-11-28T20:59:13 [main] ERROR o.s.o.r.rdfxml.parser.OWLRDFConsumer - Entity not properly recognized, missing triples in input? http://org.semanticweb.owlapi/error#Error16 for type Class\n\n2017-11-28T21:01:05 [main] INFO o.s.ncbo.oapiwrapper.OntologyMetrics - Calculating metrics for /srv/ncbo/repository/SWEET/2/sweetAll\n\n2017-11-28T21:01:05 [main] INFO o.s.ncbo.oapiwrapper.OntologyMetrics - Finished metrics calculation for /srv/ncbo/repository/SWEET/2/sweetAll in 4 milliseconds\n\n2017-11-28T21:01:05 [main] INFO o.s.ncbo.oapiwrapper.OntologyMetrics - Generated metrics CSV file for /srv/ncbo/repository/SWEET/2/sweetAll\n\n2017-11-28T21:01:05 [main] INFO o.s.ncbo.oapiwrapper.OntologyParser - Ontology document format: org.semanticweb.owlapi.formats.TurtleDocumentFormat\n\n2017-11-28T21:01:05 [main] INFO o.s.ncbo.oapiwrapper.OntologyParser - isPrefixOWLOntologyFormat: true\n\n2017-11-28T21:01:05 [main] INFO o.s.ncbo.oapiwrapper.OntologyParser - isPrefixOWLOntologyFormat: true\n\n2017-11-28T21:01:05 [main] INFO o.s.ncbo.oapiwrapper.OntologyParser - isPrefixOWLOntologyFormat: true\n\n2017-11-28T21:01:05 [main] INFO o.s.ncbo.oapiwrapper.OntologyParser - isPrefixOWLOntologyFormat: true\n\n2017-11-28T21:01:05 [main] INFO
|
1.0
|
problem parsing SWEET - The SWEET ontology in BioPortal has about 12 classes with the name "error#', where # is a number from 1 to 12 or so. The log reports errors but these do not give a clue what this is or where it is happening (see below).
It's possible this is related to the fact the root ontology (http://sweetontology.net/sweetAll) #includes all the many individual ontologies. But I don't think Protege sees an error.
Any tips?
I, [2017-11-28T21:01:07.008500 #6803] INFO -- : ["2017-11-28T20:58:33 [main] INFO o.s.n.o.OntologyParserCommand - Parsing invocation with values: ParserInvocation [inputRepositoryFolder=null, outputRepositoryFolder=/srv/ncbo/repository/SWEET/2, masterFileName=/srv/ncbo/repository/SWEET/2/sweetAll, invocationId=0, parserLog=, userReasoner= true]\n\n2017-11-28T20:58:33 [main] INFO o.s.ncbo.oapiwrapper.OntologyParser - executor ...\n\n2017-11-28T20:58:34 [main] INFO o.s.ncbo.oapiwrapper.OntologyParser - Input repository folder is null. Unique file being parsed.\n\n2017-11-28T20:58:58 [main] ERROR o.s.o.r.rdfxml.parser.OWLRDFConsumer - Entity not properly recognized, missing triples in input? http://org.semanticweb.owlapi/error#Error1 for type Class\n\n2017-11-28T20:58:58 [main] ERROR o.s.o.r.rdfxml.parser.OWLRDFConsumer - Entity not properly recognized, missing triples in input? http://org.semanticweb.owlapi/error#Error2 for type Class\n\n2017-11-28T20:58:58 [main] ERROR o.s.o.r.rdfxml.parser.OWLRDFConsumer - Entity not properly recognized, missing triples in input? http://org.semanticweb.owlapi/error#Error3 for type Class\n\n2017-11-28T20:58:58 [main] ERROR o.s.o.r.rdfxml.parser.OWLRDFConsumer - Entity not properly recognized, missing triples in input? http://org.semanticweb.owlapi/error#Error4 for type Class\n\n2017-11-28T20:58:58 [main] ERROR o.s.o.r.rdfxml.parser.OWLRDFConsumer - Entity not properly recognized, missing triples in input? http://org.semanticweb.owlapi/error#Error5 for type Class\n\n2017-11-28T20:59:12 [main] ERROR o.s.o.r.rdfxml.parser.OWLRDFConsumer - Entity not properly recognized, missing triples in input? http://org.semanticweb.owlapi/error#Error6 for type Class\n\n2017-11-28T20:59:12 [main] ERROR o.s.o.r.rdfxml.parser.OWLRDFConsumer - Entity not properly recognized, missing triples in input? http://org.semanticweb.owlapi/error#Error7 for type Class\n\n2017-11-28T20:59:12 [main] ERROR o.s.o.r.rdfxml.parser.OWLRDFConsumer - Entity not properly recognized, missing triples in input? http://org.semanticweb.owlapi/error#Error8 for type Class\n\n2017-11-28T20:59:12 [main] ERROR o.s.o.r.rdfxml.parser.OWLRDFConsumer - Entity not properly recognized, missing triples in input? http://org.semanticweb.owlapi/error#Error9 for type Class\n\n2017-11-28T20:59:12 [main] ERROR o.s.o.r.rdfxml.parser.OWLRDFConsumer - Entity not properly recognized, missing triples in input? http://org.semanticweb.owlapi/error#Error10 for type Class\n\n2017-11-28T20:59:12 [main] ERROR o.s.o.r.rdfxml.parser.OWLRDFConsumer - Entity not properly recognized, missing triples in input? http://org.semanticweb.owlapi/error#Error11 for type Class\n\n2017-11-28T20:59:12 [main] ERROR o.s.o.r.rdfxml.parser.OWLRDFConsumer - Entity not properly recognized, missing triples in input? http://org.semanticweb.owlapi/error#Error12 for type Class\n\n2017-11-28T20:59:13 [main] ERROR o.s.o.r.rdfxml.parser.OWLRDFConsumer - Entity not properly recognized, missing triples in input? http://org.semanticweb.owlapi/error#Error13 for type Class\n\n2017-11-28T20:59:13 [main] ERROR o.s.o.r.rdfxml.parser.OWLRDFConsumer - Entity not properly recognized, missing triples in input? http://org.semanticweb.owlapi/error#Error14 for type Class\n\n2017-11-28T20:59:13 [main] ERROR o.s.o.r.rdfxml.parser.OWLRDFConsumer - Entity not properly recognized, missing triples in input? http://org.semanticweb.owlapi/error#Error15 for type Class\n\n2017-11-28T20:59:13 [main] ERROR o.s.o.r.rdfxml.parser.OWLRDFConsumer - Entity not properly recognized, missing triples in input? http://org.semanticweb.owlapi/error#Error16 for type Class\n\n2017-11-28T21:01:05 [main] INFO o.s.ncbo.oapiwrapper.OntologyMetrics - Calculating metrics for /srv/ncbo/repository/SWEET/2/sweetAll\n\n2017-11-28T21:01:05 [main] INFO o.s.ncbo.oapiwrapper.OntologyMetrics - Finished metrics calculation for /srv/ncbo/repository/SWEET/2/sweetAll in 4 milliseconds\n\n2017-11-28T21:01:05 [main] INFO o.s.ncbo.oapiwrapper.OntologyMetrics - Generated metrics CSV file for /srv/ncbo/repository/SWEET/2/sweetAll\n\n2017-11-28T21:01:05 [main] INFO o.s.ncbo.oapiwrapper.OntologyParser - Ontology document format: org.semanticweb.owlapi.formats.TurtleDocumentFormat\n\n2017-11-28T21:01:05 [main] INFO o.s.ncbo.oapiwrapper.OntologyParser - isPrefixOWLOntologyFormat: true\n\n2017-11-28T21:01:05 [main] INFO o.s.ncbo.oapiwrapper.OntologyParser - isPrefixOWLOntologyFormat: true\n\n2017-11-28T21:01:05 [main] INFO o.s.ncbo.oapiwrapper.OntologyParser - isPrefixOWLOntologyFormat: true\n\n2017-11-28T21:01:05 [main] INFO o.s.ncbo.oapiwrapper.OntologyParser - isPrefixOWLOntologyFormat: true\n\n2017-11-28T21:01:05 [main] INFO
|
process
|
problem parsing sweet the sweet ontology in bioportal has about classes with the name error where is a number from to or so the log reports errors but these do not give a clue what this is or where it is happening see below it s possible this is related to the fact the root ontology includes all the many individual ontologies but i don t think protege sees an error any tips i info info o s n o ontologyparsercommand parsing invocation with values parserinvocation n info o s ncbo oapiwrapper ontologyparser executor n info o s ncbo oapiwrapper ontologyparser input repository folder is null unique file being parsed n error o s o r rdfxml parser owlrdfconsumer entity not properly recognized missing triples in input for type class n error o s o r rdfxml parser owlrdfconsumer entity not properly recognized missing triples in input for type class n error o s o r rdfxml parser owlrdfconsumer entity not properly recognized missing triples in input for type class n error o s o r rdfxml parser owlrdfconsumer entity not properly recognized missing triples in input for type class n error o s o r rdfxml parser owlrdfconsumer entity not properly recognized missing triples in input for type class n error o s o r rdfxml parser owlrdfconsumer entity not properly recognized missing triples in input for type class n error o s o r rdfxml parser owlrdfconsumer entity not properly recognized missing triples in input for type class n error o s o r rdfxml parser owlrdfconsumer entity not properly recognized missing triples in input for type class n error o s o r rdfxml parser owlrdfconsumer entity not properly recognized missing triples in input for type class n error o s o r rdfxml parser owlrdfconsumer entity not properly recognized missing triples in input for type class n error o s o r rdfxml parser owlrdfconsumer entity not properly recognized missing triples in input for type class n error o s o r rdfxml parser owlrdfconsumer entity not properly recognized missing triples in input for type class n error o s o r rdfxml parser owlrdfconsumer entity not properly recognized missing triples in input for type class n error o s o r rdfxml parser owlrdfconsumer entity not properly recognized missing triples in input for type class n error o s o r rdfxml parser owlrdfconsumer entity not properly recognized missing triples in input for type class n error o s o r rdfxml parser owlrdfconsumer entity not properly recognized missing triples in input for type class n info o s ncbo oapiwrapper ontologymetrics calculating metrics for srv ncbo repository sweet sweetall n info o s ncbo oapiwrapper ontologymetrics finished metrics calculation for srv ncbo repository sweet sweetall in milliseconds n info o s ncbo oapiwrapper ontologymetrics generated metrics csv file for srv ncbo repository sweet sweetall n info o s ncbo oapiwrapper ontologyparser ontology document format org semanticweb owlapi formats turtledocumentformat n info o s ncbo oapiwrapper ontologyparser isprefixowlontologyformat true n info o s ncbo oapiwrapper ontologyparser isprefixowlontologyformat true n info o s ncbo oapiwrapper ontologyparser isprefixowlontologyformat true n info o s ncbo oapiwrapper ontologyparser isprefixowlontologyformat true n info
| 1
|
2,719
| 5,584,198,047
|
IssuesEvent
|
2017-03-29 03:48:15
|
AllenFang/react-bootstrap-table
|
https://api.github.com/repos/AllenFang/react-bootstrap-table
|
closed
|
Use own className on pagination (onPageChange button group)
|
enhancement inprocess
|
how can i give assign className to pagination, onPageChange(i do want to give custom style to this button group.)
|
1.0
|
Use own className on pagination (onPageChange button group) - how can i give assign className to pagination, onPageChange(i do want to give custom style to this button group.)
|
process
|
use own classname on pagination onpagechange button group how can i give assign classname to pagination onpagechange i do want to give custom style to this button group
| 1
|
11,836
| 14,655,540,374
|
IssuesEvent
|
2020-12-28 11:15:11
|
Jeffail/benthos
|
https://api.github.com/repos/Jeffail/benthos
|
closed
|
Convert redis client to UniversalClient to support simple, cluster, and sentinel deployments
|
caches enhancement inputs outputs processors
|
I was thinking about adding support for https://pkg.go.dev/github.com/go-redis/redis/v7#UniversalClient and converting the existing redis clients to use that.
Without digging too deeply, I believe this would be backward-compatible. With some optional parameters added to the config, could enable cluster or sentinel support. The single address -> multiple could be handled a few different ways: single string, comma-separated urls, separate list of urls - whatever is most aligned with existing components.
Their code looks pretty simple (https://github.com/go-redis/redis/blob/master/universal.go#L199-L206) and looks to be a wrapper around the same client type that benthos is currently using.
```go
func NewUniversalClient(opts *UniversalOptions) UniversalClient {
if opts.MasterName != "" {
return NewFailoverClient(opts.Failover())
} else if len(opts.Addrs) > 1 {
return NewClusterClient(opts.Cluster())
}
return NewClient(opts.Simple())
}
```
Any thoughts/opinions/experience along these lines would be great to have!
|
1.0
|
Convert redis client to UniversalClient to support simple, cluster, and sentinel deployments - I was thinking about adding support for https://pkg.go.dev/github.com/go-redis/redis/v7#UniversalClient and converting the existing redis clients to use that.
Without digging too deeply, I believe this would be backward-compatible. With some optional parameters added to the config, could enable cluster or sentinel support. The single address -> multiple could be handled a few different ways: single string, comma-separated urls, separate list of urls - whatever is most aligned with existing components.
Their code looks pretty simple (https://github.com/go-redis/redis/blob/master/universal.go#L199-L206) and looks to be a wrapper around the same client type that benthos is currently using.
```go
func NewUniversalClient(opts *UniversalOptions) UniversalClient {
if opts.MasterName != "" {
return NewFailoverClient(opts.Failover())
} else if len(opts.Addrs) > 1 {
return NewClusterClient(opts.Cluster())
}
return NewClient(opts.Simple())
}
```
Any thoughts/opinions/experience along these lines would be great to have!
|
process
|
convert redis client to universalclient to support simple cluster and sentinel deployments i was thinking about adding support for and converting the existing redis clients to use that without digging too deeply i believe this would be backward compatible with some optional parameters added to the config could enable cluster or sentinel support the single address multiple could be handled a few different ways single string comma separated urls separate list of urls whatever is most aligned with existing components their code looks pretty simple and looks to be a wrapper around the same client type that benthos is currently using go func newuniversalclient opts universaloptions universalclient if opts mastername return newfailoverclient opts failover else if len opts addrs return newclusterclient opts cluster return newclient opts simple any thoughts opinions experience along these lines would be great to have
| 1
|
106,729
| 4,283,045,129
|
IssuesEvent
|
2016-07-15 11:48:46
|
ubuntudesign/snapcraft.io
|
https://api.github.com/repos/ubuntudesign/snapcraft.io
|
closed
|
change heading on hello world section
|
Priority: critical
|
please change “hello” world tour to “hello world" tour as per copy doc.
|
1.0
|
change heading on hello world section - please change “hello” world tour to “hello world" tour as per copy doc.
|
non_process
|
change heading on hello world section please change “hello” world tour to “hello world tour as per copy doc
| 0
|
9,080
| 12,150,601,447
|
IssuesEvent
|
2020-04-24 18:17:02
|
googleapis/google-cloud-cpp-common
|
https://api.github.com/repos/googleapis/google-cloud-cpp-common
|
closed
|
all: use shfmt to format shell scripts
|
type: process
|
Carlos suggested setting up [shfmt](https://github.com/mvdan/sh) to automatically format our .sh files
From their doc page:
> to get the formatting appropriate for Google's Style guide, use `shfmt -i 2 -ci`
|
1.0
|
all: use shfmt to format shell scripts - Carlos suggested setting up [shfmt](https://github.com/mvdan/sh) to automatically format our .sh files
From their doc page:
> to get the formatting appropriate for Google's Style guide, use `shfmt -i 2 -ci`
|
process
|
all use shfmt to format shell scripts carlos suggested setting up to automatically format our sh files from their doc page to get the formatting appropriate for google s style guide use shfmt i ci
| 1
|
19,020
| 25,026,055,686
|
IssuesEvent
|
2022-11-04 08:06:08
|
NEARWEEK/CORE
|
https://api.github.com/repos/NEARWEEK/CORE
|
closed
|
Content Creation & Marketing Master Plan
|
content Process
|
NEARWEEK is transitioning towards a new Content strategy. This Content Strategy is outlined in the Marketing Master Plan. In this document, all the types of content are specified in roles, responsibilities and processes.
## 🎉 Subtasks
- [ ] Content Plan
- [ ] Channel Planning
- [ ] Process Plan
- [ ] Distribution Plan
- [ ] Create a new content schedule to reflect new objectives
- [ ] Figure out new criteria for the selection or projects
- [ ] Adjust current workflow
- [ ] Adjust Social Media strategy and timing for publication
- [ ] Ongoing OKR: Dragon's Den Launched
- [ ] Ongoing OKR: Ship 2 content pieces about NW DAO/Bounty Platform
- [ ] Ongoing OKR: Open Source Guide/landing page for submitting to DAO
- [ ] Ongoing OKR: Publishing Guide
## 🤼♂️ Reviewer
@P3ter-NEARWEEK
## 🔗 Work doc(s) / inspirational links
https://docs.google.com/document/d/1Pjrk_1tjQVeHOfz15N8QXh24YDg0As65K3vkBadsAfc/edit?usp=sharing
Notes on new direction:
-Maximum 2 high quality blog posts produced by NW on the most interesting and high level projects;
-Implementation of the NW Content DAO for delegating other requests coming from projects on overview pieces;
-Focus on NW original content that features the main news from across the ecosystem and reports on the biggest developments;
-Biweekly open DAO call
-Monthly product updates
-NEAR yearly
Questions to address:
-Do we still aim on publishing 4 articles per week?
-What is the idea for the Biweekly DAO calls? @Kisgus
-Are we going to publish on our website the pieces of content that are published through the NW DAO?
-When do we want to initiate this transition?
|
1.0
|
Content Creation & Marketing Master Plan - NEARWEEK is transitioning towards a new Content strategy. This Content Strategy is outlined in the Marketing Master Plan. In this document, all the types of content are specified in roles, responsibilities and processes.
## 🎉 Subtasks
- [ ] Content Plan
- [ ] Channel Planning
- [ ] Process Plan
- [ ] Distribution Plan
- [ ] Create a new content schedule to reflect new objectives
- [ ] Figure out new criteria for the selection or projects
- [ ] Adjust current workflow
- [ ] Adjust Social Media strategy and timing for publication
- [ ] Ongoing OKR: Dragon's Den Launched
- [ ] Ongoing OKR: Ship 2 content pieces about NW DAO/Bounty Platform
- [ ] Ongoing OKR: Open Source Guide/landing page for submitting to DAO
- [ ] Ongoing OKR: Publishing Guide
## 🤼♂️ Reviewer
@P3ter-NEARWEEK
## 🔗 Work doc(s) / inspirational links
https://docs.google.com/document/d/1Pjrk_1tjQVeHOfz15N8QXh24YDg0As65K3vkBadsAfc/edit?usp=sharing
Notes on new direction:
-Maximum 2 high quality blog posts produced by NW on the most interesting and high level projects;
-Implementation of the NW Content DAO for delegating other requests coming from projects on overview pieces;
-Focus on NW original content that features the main news from across the ecosystem and reports on the biggest developments;
-Biweekly open DAO call
-Monthly product updates
-NEAR yearly
Questions to address:
-Do we still aim on publishing 4 articles per week?
-What is the idea for the Biweekly DAO calls? @Kisgus
-Are we going to publish on our website the pieces of content that are published through the NW DAO?
-When do we want to initiate this transition?
|
process
|
content creation marketing master plan nearweek is transitioning towards a new content strategy this content strategy is outlined in the marketing master plan in this document all the types of content are specified in roles responsibilities and processes 🎉 subtasks content plan channel planning process plan distribution plan create a new content schedule to reflect new objectives figure out new criteria for the selection or projects adjust current workflow adjust social media strategy and timing for publication ongoing okr dragon s den launched ongoing okr ship content pieces about nw dao bounty platform ongoing okr open source guide landing page for submitting to dao ongoing okr publishing guide 🤼♂️ reviewer nearweek 🔗 work doc s inspirational links notes on new direction maximum high quality blog posts produced by nw on the most interesting and high level projects implementation of the nw content dao for delegating other requests coming from projects on overview pieces focus on nw original content that features the main news from across the ecosystem and reports on the biggest developments biweekly open dao call monthly product updates near yearly questions to address do we still aim on publishing articles per week what is the idea for the biweekly dao calls kisgus are we going to publish on our website the pieces of content that are published through the nw dao when do we want to initiate this transition
| 1
|
5,123
| 7,891,628,868
|
IssuesEvent
|
2018-06-28 12:48:59
|
gvwilson/teachtogether.tech
|
https://api.github.com/repos/gvwilson/teachtogether.tech
|
closed
|
Ch06 Juha Sorva
|
Ch06 Process
|
@gvwilson commented on [Thu May 17 2018](https://github.com/gvwilson/h2tp/issues/62)
- Deciding what to teach (use authentic tasks): This point here is the problematic one: how to provide authenticity when the learners are novices? How to follow the "phonicsy" advice from the previous chapter and still be authentic and motivating? Parsons problems and MCQ (and worked examples, even) aren’t the most authentic things. Perhaps you could discuss this tension a bit more somewhere in the chapter? I expect it’s something that many teachers (novice and expert) struggle with. (Cf. what I wrote in the previous chapter about recent work in CLT and the principles we used in "Research-Based Design of the First Weeks of CS1".)
|
1.0
|
Ch06 Juha Sorva - @gvwilson commented on [Thu May 17 2018](https://github.com/gvwilson/h2tp/issues/62)
- Deciding what to teach (use authentic tasks): This point here is the problematic one: how to provide authenticity when the learners are novices? How to follow the "phonicsy" advice from the previous chapter and still be authentic and motivating? Parsons problems and MCQ (and worked examples, even) aren’t the most authentic things. Perhaps you could discuss this tension a bit more somewhere in the chapter? I expect it’s something that many teachers (novice and expert) struggle with. (Cf. what I wrote in the previous chapter about recent work in CLT and the principles we used in "Research-Based Design of the First Weeks of CS1".)
|
process
|
juha sorva gvwilson commented on deciding what to teach use authentic tasks this point here is the problematic one how to provide authenticity when the learners are novices how to follow the phonicsy advice from the previous chapter and still be authentic and motivating parsons problems and mcq and worked examples even aren’t the most authentic things perhaps you could discuss this tension a bit more somewhere in the chapter i expect it’s something that many teachers novice and expert struggle with cf what i wrote in the previous chapter about recent work in clt and the principles we used in research based design of the first weeks of
| 1
|
142,075
| 11,454,455,886
|
IssuesEvent
|
2020-02-06 17:06:05
|
LLK/scratch-gui
|
https://api.github.com/repos/LLK/scratch-gui
|
opened
|
Flaky integration test: project-state-test.js
|
testing
|
### Expected Behavior
Test should pass or fail consistently
### Actual Behavior
The "File->New resets project title" test passed fine here:
https://travis-ci.org/LLK/scratch-gui/builds/646943990?utm_source=github_status&utm_medium=notification
...but, when this PR was merged to develop, the same test failed.
(tests were rerun here https://travis-ci.org/LLK/scratch-gui/builds/646946661?utm_source=github_status&utm_medium=notification so you can't see the failure anymore)
|
1.0
|
Flaky integration test: project-state-test.js - ### Expected Behavior
Test should pass or fail consistently
### Actual Behavior
The "File->New resets project title" test passed fine here:
https://travis-ci.org/LLK/scratch-gui/builds/646943990?utm_source=github_status&utm_medium=notification
...but, when this PR was merged to develop, the same test failed.
(tests were rerun here https://travis-ci.org/LLK/scratch-gui/builds/646946661?utm_source=github_status&utm_medium=notification so you can't see the failure anymore)
|
non_process
|
flaky integration test project state test js expected behavior test should pass or fail consistently actual behavior the file new resets project title test passed fine here but when this pr was merged to develop the same test failed tests were rerun here so you can t see the failure anymore
| 0
|
106,819
| 11,499,181,788
|
IssuesEvent
|
2020-02-12 13:30:06
|
EricLacey/BadWeather
|
https://api.github.com/repos/EricLacey/BadWeather
|
closed
|
Review Proposal Document and Comment changes
|
Deliverable documentation help wanted
|
For those who have been working on the alpha, just check in and read over the document and **COMMENT** changes that need to be made. That way the proposal group can make changes and be aware of them.
|
1.0
|
Review Proposal Document and Comment changes - For those who have been working on the alpha, just check in and read over the document and **COMMENT** changes that need to be made. That way the proposal group can make changes and be aware of them.
|
non_process
|
review proposal document and comment changes for those who have been working on the alpha just check in and read over the document and comment changes that need to be made that way the proposal group can make changes and be aware of them
| 0
|
2,591
| 5,349,072,366
|
IssuesEvent
|
2017-02-18 12:18:36
|
jlm2017/jlm-video-subtitles
|
https://api.github.com/repos/jlm2017/jlm-video-subtitles
|
closed
|
[subtitles] [fr] Hollande, Valls et les ministres : les 40 personnes qui bloquent le pays
|
Language: French Process: [6] Approved
|
# Video title
Hollande, Valls et les ministres : les 40 personnes qui bloquent le pays
# URL
https://www.youtube.com/watch?v=LE6g4G-vGG4
# Youtube subtitles language
French
# Duration
19:43
# Subtitles URL
https://www.youtube.com/timedtext_editor?lang=fr&bl=vmp&tab=captions&ref=player&action_mde_edit_form=1&ui=hd&v=LE6g4G-vGG4
|
1.0
|
[subtitles] [fr] Hollande, Valls et les ministres : les 40 personnes qui bloquent le pays - # Video title
Hollande, Valls et les ministres : les 40 personnes qui bloquent le pays
# URL
https://www.youtube.com/watch?v=LE6g4G-vGG4
# Youtube subtitles language
French
# Duration
19:43
# Subtitles URL
https://www.youtube.com/timedtext_editor?lang=fr&bl=vmp&tab=captions&ref=player&action_mde_edit_form=1&ui=hd&v=LE6g4G-vGG4
|
process
|
hollande valls et les ministres les personnes qui bloquent le pays video title hollande valls et les ministres les personnes qui bloquent le pays url youtube subtitles language french duration subtitles url
| 1
|
10,840
| 4,103,143,554
|
IssuesEvent
|
2016-06-04 13:32:59
|
sgmap/api-communes
|
https://api.github.com/repos/sgmap/api-communes
|
closed
|
Docker
|
code
|
- [x] Ajout d'un `Dockerfile`
- [x] <del>Publier sur [Docker Hub](https://hub.docker.com)</del> Superseeded by #21
- [x] Documenter l'utilisation
|
1.0
|
Docker - - [x] Ajout d'un `Dockerfile`
- [x] <del>Publier sur [Docker Hub](https://hub.docker.com)</del> Superseeded by #21
- [x] Documenter l'utilisation
|
non_process
|
docker ajout d un dockerfile publier sur superseeded by documenter l utilisation
| 0
|
335,371
| 24,466,190,974
|
IssuesEvent
|
2022-10-07 15:12:39
|
scylladb/scylla-monitoring
|
https://api.github.com/repos/scylladb/scylla-monitoring
|
closed
|
Fix the warning in the documentation for version 4.1
|
documentation
|
When you select the unstable (unreleased) version of Monitoring Stack, the message tells you that you're viewing the documentation for a **previous** Monitoring:

For such versions, the following message should be displayed:

Fix:
Add version 4.1 to unstable versions in https://github.com/scylladb/scylla-monitoring/blob/master/docs/source/conf.py.
`UNSTABLE_VERSIONS = ['master', 'branch-4.1']`
|
1.0
|
Fix the warning in the documentation for version 4.1 - When you select the unstable (unreleased) version of Monitoring Stack, the message tells you that you're viewing the documentation for a **previous** Monitoring:

For such versions, the following message should be displayed:

Fix:
Add version 4.1 to unstable versions in https://github.com/scylladb/scylla-monitoring/blob/master/docs/source/conf.py.
`UNSTABLE_VERSIONS = ['master', 'branch-4.1']`
|
non_process
|
fix the warning in the documentation for version when you select the unstable unreleased version of monitoring stack the message tells you that you re viewing the documentation for a previous monitoring for such versions the following message should be displayed fix add version to unstable versions in unstable versions
| 0
|
38,031
| 5,164,342,161
|
IssuesEvent
|
2017-01-17 10:14:45
|
cockroachdb/cockroach
|
https://api.github.com/repos/cockroachdb/cockroach
|
opened
|
github.com/cockroachdb/cockroach/pkg/storage: TestReplicateQueueDownReplicate failed under stress
|
Robot test-failure
|
SHA: https://github.com/cockroachdb/cockroach/commits/ffc0c336351e06b68e7982b5ac6008ba75aa0a66
Parameters:
```
COCKROACH_PROPOSER_EVALUATED_KV=true
TAGS=deadlock
GOFLAGS=
```
Stress build found a failed test: https://teamcity.cockroachdb.com/viewLog.html?buildId=120190&tab=buildLog
```
W170117 10:14:31.381651 1943902 server/status/runtime.go:116 Could not parse build timestamp: parsing time "" as "2006/01/02 15:04:05": cannot parse "" as "2006"
I170117 10:14:31.382794 1943902 server/config.go:456 1 storage engine initialized
I170117 10:14:31.385037 1943902 server/node.go:426 [n?] store [n0,s0] not bootstrapped
I170117 10:14:31.393132 1943902 server/node.go:355 [n?] **** cluster 1518aab1-42fd-403c-b8ff-6323ba8a4269 has been created
I170117 10:14:31.393204 1943902 server/node.go:356 [n?] **** add additional nodes by specifying --join=127.0.0.1:52162
I170117 10:14:31.395828 1943902 storage/store.go:1250 [n1] [n1,s1]: failed initial metrics computation: [n1,s1]: system config not yet available
I170117 10:14:31.396050 1943902 server/node.go:439 [n1] initialized store [n1,s1]: {Capacity:536870912 Available:536870912 RangeCount:1 LeaseCount:0}
I170117 10:14:31.396216 1943902 server/node.go:324 [n1] node ID 1 initialized
I170117 10:14:31.396415 1943902 gossip/gossip.go:292 [n1] NodeDescriptor set to node_id:1 address:<network_field:"tcp" address_field:"127.0.0.1:52162" > attrs:<> locality:<>
I170117 10:14:31.396878 1943902 storage/stores.go:296 [n1] read 0 node addresses from persistent storage
I170117 10:14:31.397035 1943902 server/node.go:571 [n1] connecting to gossip network to verify cluster ID...
I170117 10:14:31.397131 1943902 server/node.go:595 [n1] node connected via gossip and verified as part of cluster "1518aab1-42fd-403c-b8ff-6323ba8a4269"
I170117 10:14:31.397997 1943902 server/node.go:374 [n1] node=1: started with [[]=] engine(s) and attributes []
I170117 10:14:31.398092 1943902 sql/executor.go:322 [n1] creating distSQLPlanner with address {tcp 127.0.0.1:52162}
I170117 10:14:31.401739 1943902 server/server.go:629 [n1] starting https server at 127.0.0.1:55608
I170117 10:14:31.401820 1943902 server/server.go:630 [n1] starting grpc/postgres server at 127.0.0.1:52162
I170117 10:14:31.402024 1943902 server/server.go:631 [n1] advertising CockroachDB node at 127.0.0.1:52162
I170117 10:14:31.403518 1945109 storage/split_queue.go:99 [split,n1,s1,r1/1:/M{in-ax},@c431e0ef00] splitting at keys [/Table/11/0 /Table/12/0 /Table/13/0 /Table/14/0]
I170117 10:14:31.410204 1945109 storage/replica_command.go:2354 [split,n1,s1,r1/1:/M{in-ax},@c431e0ef00] initiating a split of this range at key /Table/11 [r2]
E170117 10:14:31.453385 1945110 storage/queue.go:610 [replicate,n1,s1,r1/1:/{Min-Table/11},@c431e0ef00] purgatory: 0 of 1 store with an attribute matching []; likely not enough nodes in cluster
E170117 10:14:31.454252 1945109 storage/queue.go:599 [split,n1,s1,r1/1:/{Min-Table/11},@c431e0ef00] unable to split [n1,s1,r1/1:/{Min-Table/11}] at key "/Table/12/0": key range /Table/12/0-/Table/12/0 outside of bounds of range /Min-/Max
I170117 10:14:31.454927 1945109 storage/split_queue.go:99 [split,n1,s1,r2/1:/{Table/11-Max},@c437e4c300] splitting at keys [/Table/12/0 /Table/13/0 /Table/14/0]
I170117 10:14:31.455289 1945109 storage/replica_command.go:2354 [split,n1,s1,r2/1:/{Table/11-Max},@c437e4c300] initiating a split of this range at key /Table/12 [r3]
E170117 10:14:31.455349 1948280 storage/queue.go:610 [replicate,n1,s1,r1/1:/{Min-Table/11},@c431e0ef00] purgatory: 0 of 1 store with an attribute matching []; likely not enough nodes in cluster
I170117 10:14:31.464742 1945088 sql/event_log.go:95 [n1] Event: "node_join", target: 1, info: {Descriptor:{NodeID:1 Address:{NetworkField:tcp AddressField:127.0.0.1:52162} Attrs: Locality:} ClusterID:1518aab1-42fd-403c-b8ff-6323ba8a4269 StartedAt:1484648071397961127}
I170117 10:14:31.469704 1943902 sql/event_log.go:95 [n1] Event: "alter_table", target: 12, info: {TableName:eventlog Statement:ALTER TABLE system.eventlog ALTER COLUMN uniqueID SET DEFAULT uuid_v4() User:node MutationID:0 CascadeDroppedViews:[]}
E170117 10:14:31.475972 1945110 storage/queue.go:610 [replicate,n1,s1,r2/1:/Table/1{1-2},@c437e4c300] purgatory: 0 of 1 store with an attribute matching []; likely not enough nodes in cluster
E170117 10:14:31.477707 1945109 storage/queue.go:599 [split,n1,s1,r2/1:/Table/1{1-2},@c437e4c300] unable to split [n1,s1,r2/1:/Table/1{1-2}] at key "/Table/13/0": key range /Table/13/0-/Table/13/0 outside of bounds of range /Table/11-/Max
I170117 10:14:31.478714 1945109 storage/split_queue.go:99 [split,n1,s1,r3/1:/{Table/12-Max},@c436c5b200] splitting at keys [/Table/13/0 /Table/14/0]
E170117 10:14:31.479995 1948280 storage/queue.go:610 [replicate,n1,s1,r1/1:/{Min-Table/11},@c431e0ef00] purgatory: 0 of 1 store with an attribute matching []; likely not enough nodes in cluster
I170117 10:14:31.481357 1945109 storage/replica_command.go:2354 [split,n1,s1,r3/1:/{Table/12-Max},@c436c5b200] initiating a split of this range at key /Table/13 [r4]
E170117 10:14:31.488557 1948280 storage/queue.go:610 [replicate,n1,s1,r2/1:/Table/1{1-2},@c437e4c300] purgatory: 0 of 1 store with an attribute matching []; likely not enough nodes in cluster
E170117 10:14:31.514367 1948280 storage/queue.go:610 [replicate,n1,s1,r1/1:/{Min-Table/11},@c431e0ef00] purgatory: 0 of 1 store with an attribute matching []; likely not enough nodes in cluster
E170117 10:14:31.514825 1948280 storage/queue.go:610 [replicate,n1,s1,r2/1:/Table/1{1-2},@c437e4c300] purgatory: 0 of 1 store with an attribute matching []; likely not enough nodes in cluster
E170117 10:14:31.517206 1945110 storage/queue.go:610 [replicate,n1,s1,r3/1:/Table/1{2-3},@c436c5b200] purgatory: 0 of 1 store with an attribute matching []; likely not enough nodes in cluster
E170117 10:14:31.518126 1945109 storage/queue.go:599 [split,n1,s1,r3/1:/Table/1{2-3},@c436c5b200] unable to split [n1,s1,r3/1:/Table/1{2-3}] at key "/Table/14/0": key range /Table/14/0-/Table/14/0 outside of bounds of range /Table/12-/Max
I170117 10:14:31.519826 1945109 storage/split_queue.go:99 [split,n1,s1,r4/1:/{Table/13-Max},@c4361d9800] splitting at keys [/Table/14/0]
I170117 10:14:31.520296 1945109 storage/replica_command.go:2354 [split,n1,s1,r4/1:/{Table/13-Max},@c4361d9800] initiating a split of this range at key /Table/14 [r5]
I170117 10:14:31.534934 1943902 server/server.go:686 [n1] done ensuring all necessary migrations have run
I170117 10:14:31.534999 1943902 server/server.go:688 [n1] serving sql connections
E170117 10:14:31.548308 1945110 storage/queue.go:610 [replicate,n1,s1,r4/1:/Table/1{3-4},@c4361d9800] purgatory: 0 of 1 store with an attribute matching []; likely not enough nodes in cluster
E170117 10:14:31.548490 1948280 storage/queue.go:610 [replicate,n1,s1,r2/1:/Table/1{1-2},@c437e4c300] purgatory: 0 of 1 store with an attribute matching []; likely not enough nodes in cluster
E170117 10:14:31.550140 1948280 storage/queue.go:610 [replicate,n1,s1,r3/1:/Table/1{2-3},@c436c5b200] purgatory: 0 of 1 store with an attribute matching []; likely not enough nodes in cluster
E170117 10:14:31.550862 1945110 storage/queue.go:610 [replicate,n1,s1,r5/1:/{Table/14-Max},@c438a2a000] purgatory: 0 of 1 store with an attribute matching []; likely not enough nodes in cluster
E170117 10:14:31.551148 1948280 storage/queue.go:610 [replicate,n1,s1,r1/1:/{Min-Table/11},@c431e0ef00] purgatory: 0 of 1 store with an attribute matching []; likely not enough nodes in cluster
W170117 10:14:31.553600 1943902 gossip/gossip.go:1138 [n?] no incoming or outgoing connections
W170117 10:14:31.555011 1943902 server/status/runtime.go:116 Could not parse build timestamp: parsing time "" as "2006/01/02 15:04:05": cannot parse "" as "2006"
I170117 10:14:31.556117 1943902 server/config.go:456 1 storage engine initialized
I170117 10:14:31.557727 1943902 server/node.go:426 [n?] store [n0,s0] not bootstrapped
I170117 10:14:31.558030 1943902 storage/stores.go:296 [n?] read 0 node addresses from persistent storage
I170117 10:14:31.558145 1943902 server/node.go:571 [n?] connecting to gossip network to verify cluster ID...
I170117 10:14:31.568433 1956629 gossip/server.go:285 [n1] received initial cluster-verification connection from {tcp 127.0.0.1:45964}
I170117 10:14:31.568589 1956596 gossip/client.go:125 [n?] started gossip client to 127.0.0.1:52162
I170117 10:14:31.570828 1956645 storage/stores.go:312 [n?] wrote 1 node addresses to persistent storage
I170117 10:14:31.571358 1943902 server/node.go:595 [n?] node connected via gossip and verified as part of cluster "1518aab1-42fd-403c-b8ff-6323ba8a4269"
I170117 10:14:31.574028 1943902 kv/dist_sender.go:367 [n?] unable to determine this node's attributes for replica selection; node is most likely bootstrapping
I170117 10:14:31.578916 1943902 server/node.go:317 [n?] new node allocated ID 2
I170117 10:14:31.579063 1943902 gossip/gossip.go:292 [n2] NodeDescriptor set to node_id:2 address:<network_field:"tcp" address_field:"127.0.0.1:45964" > attrs:<> locality:<>
I170117 10:14:31.579308 1943902 server/node.go:374 [n2] node=2: started with [[]=] engine(s) and attributes []
I170117 10:14:31.579380 1943902 sql/executor.go:322 [n2] creating distSQLPlanner with address {tcp 127.0.0.1:45964}
I170117 10:14:31.580294 1956607 storage/stores.go:312 [n1] wrote 1 node addresses to persistent storage
I170117 10:14:31.581188 1943902 server/server.go:629 [n2] starting https server at 127.0.0.1:41515
I170117 10:14:31.581262 1943902 server/server.go:630 [n2] starting grpc/postgres server at 127.0.0.1:45964
I170117 10:14:31.581319 1943902 server/server.go:631 [n2] advertising CockroachDB node at 127.0.0.1:45964
I170117 10:14:31.584500 1943902 server/server.go:686 [n2] done ensuring all necessary migrations have run
I170117 10:14:31.584784 1943902 server/server.go:688 [n2] serving sql connections
I170117 10:14:31.585605 1956917 server/node.go:552 [n2] bootstrapped store [n2,s2]
E170117 10:14:31.586646 1948280 storage/queue.go:610 [replicate,n1,s1,r5/1:/{Table/14-Max},@c438a2a000] purgatory: 0 of 1 store with an attribute matching []; likely not enough nodes in cluster
E170117 10:14:31.587464 1948280 storage/queue.go:610 [replicate,n1,s1,r1/1:/{Min-Table/11},@c431e0ef00] purgatory: 0 of 1 store with an attribute matching []; likely not enough nodes in cluster
E170117 10:14:31.587931 1948280 storage/queue.go:610 [replicate,n1,s1,r4/1:/Table/1{3-4},@c4361d9800] purgatory: 0 of 1 store with an attribute matching []; likely not enough nodes in cluster
E170117 10:14:31.588437 1948280 storage/queue.go:610 [replicate,n1,s1,r2/1:/Table/1{1-2},@c437e4c300] purgatory: 0 of 1 store with an attribute matching []; likely not enough nodes in cluster
E170117 10:14:31.589675 1948280 storage/queue.go:610 [replicate,n1,s1,r3/1:/Table/1{2-3},@c436c5b200] purgatory: 0 of 1 store with an attribute matching []; likely not enough nodes in cluster
W170117 10:14:31.600446 1943902 gossip/gossip.go:1138 [n?] no incoming or outgoing connections
I170117 10:14:31.602016 1948280 storage/replica_raftstorage.go:410 [replicate,n1,s1,r4/1:/Table/1{3-4},@c4361d9800] generated preemptive snapshot 398bcb69 at index 17
W170117 10:14:31.602401 1943902 server/status/runtime.go:116 Could not parse build timestamp: parsing time "" as "2006/01/02 15:04:05": cannot parse "" as "2006"
I170117 10:14:31.605033 1943902 server/config.go:456 1 storage engine initialized
I170117 10:14:31.609101 1943902 server/node.go:426 [n?] store [n0,s0] not bootstrapped
I170117 10:14:31.609213 1943902 storage/stores.go:296 [n?] read 0 node addresses from persistent storage
I170117 10:14:31.609308 1943902 server/node.go:571 [n?] connecting to gossip network to verify cluster ID...
I170117 10:14:31.615143 1948280 storage/store.go:3275 [replicate,n1,s1,r4/1:/Table/1{3-4},@c4361d9800] streamed snapshot: kv pairs: 34, log entries: 7, 1ms
I170117 10:14:31.616541 1959722 storage/replica_raftstorage.go:575 [n2,s2,r4/?:{-},@c43bb58300] applying preemptive snapshot at index 17 (id=398bcb69, encoded size=10570, 1 rocksdb batches, 7 log entries)
I170117 10:14:31.617581 1959722 storage/replica_raftstorage.go:583 [n2,s2,r4/?:/Table/1{3-4},@c43bb58300] applied preemptive snapshot in 1ms [clear=0ms batch=0ms entries=0ms commit=0ms]
I170117 10:14:31.620048 1948280 storage/replica_command.go:3210 [replicate,n1,s1,r4/1:/Table/1{3-4},@c4361d9800] change replicas (remove {2 2 2}): read existing descriptor range_id:4 start_key:"\225" end_key:"\226" replicas:<node_id:1 store_id:1 replica_id:1 > next_replica_id:2
I170117 10:14:31.624260 1958860 gossip/client.go:125 [n?] started gossip client to 127.0.0.1:52162
I170117 10:14:31.625275 1960705 gossip/server.go:285 [n1] received initial cluster-verification connection from {tcp 127.0.0.1:53298}
I170117 10:14:31.627729 1956921 sql/event_log.go:95 [n2] Event: "node_join", target: 2, info: {Descriptor:{NodeID:2 Address:{NetworkField:tcp AddressField:127.0.0.1:45964} Attrs: Locality:} ClusterID:1518aab1-42fd-403c-b8ff-6323ba8a4269 StartedAt:1484648071579278340}
I170117 10:14:31.627825 1960808 storage/stores.go:312 [n?] wrote 1 node addresses to persistent storage
I170117 10:14:31.627912 1943902 server/node.go:595 [n?] node connected via gossip and verified as part of cluster "1518aab1-42fd-403c-b8ff-6323ba8a4269"
I170117 10:14:31.628571 1960815 storage/stores.go:312 [n?] wrote 2 node addresses to persistent storage
I170117 10:14:31.631014 1943902 kv/dist_sender.go:367 [n?] unable to determine this node's attributes for replica selection; node is most likely bootstrapping
I170117 10:14:31.634275 1961438 storage/replica.go:2385 [n1,s1,r4/1:/Table/1{3-4},@c4361d9800] proposing ADD_REPLICA {NodeID:2 StoreID:2 ReplicaID:2}: [{NodeID:1 StoreID:1 ReplicaID:1} {NodeID:2 StoreID:2 ReplicaID:2}]
I170117 10:14:31.634564 1943902 server/node.go:317 [n?] new node allocated ID 3
I170117 10:14:31.634716 1943902 gossip/gossip.go:292 [n3] NodeDescriptor set to node_id:3 address:<network_field:"tcp" address_field:"127.0.0.1:53298" > attrs:<> locality:<>
I170117 10:14:31.635102 1943902 server/node.go:374 [n3] node=3: started with [[]=] engine(s) and attributes []
I170117 10:14:31.635206 1943902 sql/executor.go:322 [n3] creating distSQLPlanner with address {tcp 127.0.0.1:53298}
I170117 10:14:31.637337 1961776 storage/stores.go:312 [n1] wrote 2 node addresses to persistent storage
I170117 10:14:31.639471 1961841 storage/stores.go:312 [n2] wrote 2 node addresses to persistent storage
I170117 10:14:31.640105 1943902 server/server.go:629 [n3] starting https server at 127.0.0.1:37479
I170117 10:14:31.640441 1943902 server/server.go:630 [n3] starting grpc/postgres server at 127.0.0.1:53298
I170117 10:14:31.640510 1943902 server/server.go:631 [n3] advertising CockroachDB node at 127.0.0.1:53298
I170117 10:14:31.641251 1945110 storage/replica_raftstorage.go:410 [replicate,n1,s1,r1/1:/{Min-Table/11},@c431e0ef00] generated preemptive snapshot 5b72a57b at index 54
I170117 10:14:31.645105 1961525 server/node.go:552 [n3] bootstrapped store [n3,s3]
I170117 10:14:31.645940 1945110 storage/store.go:3275 [replicate,n1,s1,r1/1:/{Min-Table/11},@c431e0ef00] streamed snapshot: kv pairs: 637, log entries: 44, 4ms
I170117 10:14:31.648832 1962274 storage/replica_raftstorage.go:575 [n2,s2,r1/?:{-},@c43bb58c00] applying preemptive snapshot at index 54 (id=5b72a57b, encoded size=319266, 1 rocksdb batches, 44 log entries)
I170117 10:14:31.649795 1963131 storage/raft_transport.go:437 [n2] raft transport stream to node 1 established
I170117 10:14:31.654134 1943902 server/server.go:686 [n3] done ensuring all necessary migrations have run
I170117 10:14:31.654201 1943902 server/server.go:688 [n3] serving sql connections
I170117 10:14:31.656581 1962274 storage/replica_raftstorage.go:583 [n2,s2,r1/?:/{Min-Table/11},@c43bb58c00] applied preemptive snapshot in 7ms [clear=0ms batch=0ms entries=3ms commit=2ms]
I170117 10:14:31.659479 1945110 storage/replica_command.go:3210 [replicate,n1,s1,r1/1:/{Min-Table/11},@c431e0ef00] change replicas (remove {2 2 2}): read existing descriptor range_id:1 start_key:"" end_key:"\223" replicas:<node_id:1 store_id:1 replica_id:1 > next_replica_id:2
W170117 10:14:31.660104 1943902 gossip/gossip.go:1138 [n?] no incoming or outgoing connections
W170117 10:14:31.663013 1943902 server/status/runtime.go:116 Could not parse build timestamp: parsing time "" as "2006/01/02 15:04:05": cannot parse "" as "2006"
I170117 10:14:31.664422 1943902 server/config.go:456 1 storage engine initialized
I170117 10:14:31.667444 1961529 sql/event_log.go:95 [n3] Event: "node_join", target: 3, info: {Descriptor:{NodeID:3 Address:{NetworkField:tcp AddressField:127.0.0.1:53298} Attrs: Locality:} ClusterID:1518aab1-42fd-403c-b8ff-6323ba8a4269 StartedAt:1484648071635071268}
I170117 10:14:31.667623 1943902 server/node.go:426 [n?] store [n0,s0] not bootstrapped
I170117 10:14:31.667771 1943902 storage/stores.go:296 [n?] read 0 node addresses from persistent storage
I170117 10:14:31.667910 1943902 server/node.go:571 [n?] connecting to gossip network to verify cluster ID...
I170117 10:14:31.671449 1965351 storage/replica.go:2385 [n1,s1,r1/1:/{Min-Table/11},@c431e0ef00] proposing ADD_REPLICA {NodeID:2 StoreID:2 ReplicaID:2}: [{NodeID:1 StoreID:1 ReplicaID:1} {NodeID:2 StoreID:2 ReplicaID:2}]
I170117 10:14:31.674097 1964633 gossip/client.go:125 [n?] started gossip client to 127.0.0.1:52162
I170117 10:14:31.674221 1948280 storage/replica_raftstorage.go:410 [replicate,n1,s1,r2/1:/Table/1{1-2},@c437e4c300] generated preemptive snapshot 72d7c1c6 at index 19
I170117 10:14:31.674350 1965707 gossip/server.go:285 [n1] received initial cluster-verification connection from {tcp 127.0.0.1:49743}
I170117 10:14:31.677802 1943902 server/node.go:595 [n?] node connected via gossip and verified as part of cluster "1518aab1-42fd-403c-b8ff-6323ba8a4269"
I170117 10:14:31.679407 1965881 storage/stores.go:312 [n?] wrote 1 node addresses to persistent storage
I170117 10:14:31.681206 1965881 storage/stores.go:312 [n?] wrote 2 node addresses to persistent storage
I170117 10:14:31.682242 1965881 storage/stores.go:312 [n?] wrote 3 node addresses to persistent storage
I170117 10:14:31.682385 1943902 kv/dist_sender.go:367 [n?] unable to determine this node's attributes for replica selection; node is most likely bootstrapping
I170117 10:14:31.691483 1948280 storage/store.go:3275 [replicate,n1,s1,r2/1:/Table/1{1-2},@c437e4c300] streamed snapshot: kv pairs: 10, log entries: 9, 0ms
I170117 10:14:31.692651 1966713 storage/replica_raftstorage.go:575 [n3,s3,r2/?:{-},@c4304f2000] applying preemptive snapshot at index 19 (id=72d7c1c6, encoded size=13578, 1 rocksdb batches, 9 log entries)
I170117 10:14:31.693512 1966713 storage/replica_raftstorage.go:583 [n3,s3,r2/?:/Table/1{1-2},@c4304f2000] applied preemptive snapshot in 1ms [clear=0ms batch=0ms entries=0ms commit=0ms]
I170117 10:14:31.695863 1948280 storage/replica_command.go:3210 [replicate,n1,s1,r2/1:/Table/1{1-2},@c437e4c300] change replicas (remove {3 3 2}): read existing descriptor range_id:2 start_key:"\223" end_key:"\224" replicas:<node_id:1 store_id:1 replica_id:1 > next_replica_id:2
I170117 10:14:32.794482 1943902 server/node.go:317 [n?] new node allocated ID 4
I170117 10:14:32.794595 1943902 gossip/gossip.go:292 [n4] NodeDescriptor set to node_id:4 address:<network_field:"tcp" address_field:"127.0.0.1:49743" > attrs:<> locality:<>
I170117 10:14:32.794805 1943902 server/node.go:374 [n4] node=4: started with [[]=] engine(s) and attributes []
I170117 10:14:32.794877 1943902 sql/executor.go:322 [n4] creating distSQLPlanner with address {tcp 127.0.0.1:49743}
I170117 10:14:32.796593 1943902 server/server.go:629 [n4] starting https server at 127.0.0.1:49214
I170117 10:14:32.796648 1943902 server/server.go:630 [n4] starting grpc/postgres server at 127.0.0.1:49743
I170117 10:14:32.796702 1943902 server/server.go:631 [n4] advertising CockroachDB node at 127.0.0.1:49743
I170117 10:14:32.797643 1966460 storage/stores.go:312 [n1] wrote 3 node addresses to persistent storage
I170117 10:14:32.798175 1972096 storage/stores.go:312 [n3] wrote 3 node addresses to persistent storage
I170117 10:14:32.798458 1971824 storage/stores.go:312 [n2] wrote 3 node addresses to persistent storage
I170117 10:14:32.798577 1943902 server/server.go:686 [n4] done ensuring all necessary migrations have run
I170117 10:14:32.798638 1943902 server/server.go:688 [n4] serving sql connections
I170117 10:14:32.820172 1971847 server/node.go:552 [n4] bootstrapped store [n4,s4]
W170117 10:14:32.833470 1943902 gossip/gossip.go:1138 [n?] no incoming or outgoing connections
W170117 10:14:32.835696 1943902 server/status/runtime.go:116 Could not parse build timestamp: parsing time "" as "2006/01/02 15:04:05": cannot parse "" as "2006"
I170117 10:14:32.836736 1943902 server/config.go:456 1 storage engine initialized
I170117 10:14:32.838363 1943902 server/node.go:426 [n?] store [n0,s0] not bootstrapped
I170117 10:14:32.838471 1943902 storage/stores.go:296 [n?] read 0 node addresses from persistent storage
I170117 10:14:32.838562 1943902 server/node.go:571 [n?] connecting to gossip network to verify cluster ID...
I170117 10:14:32.847789 1971851 sql/event_log.go:95 [n4] Event: "node_join", target: 4, info: {Descriptor:{NodeID:4 Address:{NetworkField:tcp AddressField:127.0.0.1:49743} Attrs: Locality:} ClusterID:1518aab1-42fd-403c-b8ff-6323ba8a4269 StartedAt:1484648072794774912}
I170117 10:14:32.857722 1974856 gossip/client.go:125 [n?] started gossip client to 127.0.0.1:52162
I170117 10:14:32.857877 1948280 storage/replica_command.go:3210 [replicate,n1,s1,r2/1:/Table/1{1-2},@c437e4c300] change replicas (remove {3 3 2}): read existing descriptor range_id:2 start_key:"\223" end_key:"\224" replicas:<node_id:1 store_id:1 replica_id:1 > next_replica_id:2
I170117 10:14:32.858068 1976237 gossip/server.go:285 [n1] received initial cluster-verification connection from {tcp 127.0.0.1:47683}
I170117 10:14:32.858920 1943902 server/node.go:595 [n?] node connected via gossip and verified as part of cluster "1518aab1-42fd-403c-b8ff-6323ba8a4269"
I170117 10:14:32.859887 1976286 storage/stores.go:312 [n?] wrote 1 node addresses to persistent storage
I170117 10:14:32.860377 1976290 storage/stores.go:312 [n?] wrote 2 node addresses to persistent storage
I170117 10:14:32.860525 1976290 storage/stores.go:312 [n?] wrote 3 node addresses to persistent storage
I170117 10:14:32.860672 1976290 storage/stores.go:312 [n?] wrote 4 node addresses to persistent storage
I170117 10:14:32.862378 1943902 kv/dist_sender.go:367 [n?] unable to determine this node's attributes for replica selection; node is most likely bootstrapping
W170117 10:14:32.862959 1976757 storage/intent_resolver.go:338 [n1,s1,r1/1:/{Min-Table/11}]: failed to push during intent resolution: failed to push "change-replica" id=4e89ac21 key=/Local/Range/"\x93"/RangeDescriptor rw=true pri=0.00818092 iso=SERIALIZABLE stat=PENDING epo=0 ts=1484648072.804897335,1 orig=1484648071.693983802,0 max=1484648071.693983802,0 wto=false rop=false
I170117 10:14:32.866261 1943902 server/node.go:317 [n?] new node allocated ID 5
I170117 10:14:32.866380 1943902 gossip/gossip.go:292 [n5] NodeDescriptor set to node_id:5 address:<network_field:"tcp" address_field:"127.0.0.1:47683" > attrs:<> locality:<>
I170117 10:14:32.866882 1943902 server/node.go:374 [n5] node=5: started with [[]=] engine(s) and attributes []
I170117 10:14:32.866965 1943902 sql/executor.go:322 [n5] creating distSQLPlanner with address {tcp 127.0.0.1:47683}
I170117 10:14:32.867699 1976237 gossip/server.go:263 [n1] refusing gossip from node 5 (max 3 conns); forwarding to 2 ({tcp 127.0.0.1:45964})
I170117 10:14:32.868113 1977274 storage/stores.go:312 [n1] wrote 4 node addresses to persistent storage
I170117 10:14:32.868605 1974856 gossip/client.go:130 [n5] closing client to node 1 (127.0.0.1:52162): received forward from node 1 to 2 (127.0.0.1:45964)
I170117 10:14:32.868899 1977341 storage/stores.go:312 [n4] wrote 4 node addresses to persistent storage
I170117 10:14:32.869324 1977364 storage/stores.go:312 [n2] wrote 4 node addresses to persistent storage
I170117 10:14:32.869865 1977325 storage/stores.go:312 [n3] wrote 4 node addresses to persistent storage
I170117 10:14:32.871036 1943902 server/server.go:629 [n5] starting https server at 127.0.0.1:39444
I170117 10:14:32.871105 1943902 server/server.go:630 [n5] starting grpc/postgres server at 127.0.0.1:47683
I170117 10:14:32.871283 1943902 server/server.go:631 [n5] advertising CockroachDB node at 127.0.0.1:47683
I170117 10:14:32.871958 1977429 gossip/client.go:125 [n5] started gossip client to 127.0.0.1:45964
I170117 10:14:32.879061 1977143 server/node.go:552 [n5] bootstrapped store [n5,s5]
I170117 10:14:32.885614 1943902 server/server.go:686 [n5] done ensuring all necessary migrations have run
I170117 10:14:32.885710 1943902 server/server.go:688 [n5] serving sql connections
I170117 10:14:32.900328 1980287 storage/replica.go:2385 [n1,s1,r2/1:/Table/1{1-2},@c437e4c300] proposing ADD_REPLICA {NodeID:3 StoreID:3 ReplicaID:2}: [{NodeID:1 StoreID:1 ReplicaID:1} {NodeID:3 StoreID:3 ReplicaID:2}]
I170117 10:14:32.902945 1948280 storage/replica_raftstorage.go:410 [replicate,n1,s1,r3/1:/Table/1{2-3},@c436c5b200] generated preemptive snapshot c19f0d7e at index 25
I170117 10:14:32.904054 1948280 storage/store.go:3275 [replicate,n1,s1,r3/1:/Table/1{2-3},@c436c5b200] streamed snapshot: kv pairs: 36, log entries: 15, 1ms
I170117 10:14:32.906263 1980738 storage/replica_raftstorage.go:575 [n3,s3,r3/?:{-},@c4444f2300] applying preemptive snapshot at index 25 (id=c19f0d7e, encoded size=24048, 1 rocksdb batches, 15 log entries)
I170117 10:14:32.908685 1980960 storage/raft_transport.go:437 [n3] raft transport stream to node 1 established
I170117 10:14:32.911404 1980738 storage/replica_raftstorage.go:583 [n3,s3,r3/?:/Table/1{2-3},@c4444f2300] applied preemptive snapshot in 5ms [clear=0ms batch=0ms entries=3ms commit=0ms]
I170117 10:14:32.923045 1948280 storage/replica_command.go:3210 [replicate,n1,s1,r3/1:/Table/1{2-3},@c436c5b200] change replicas (remove {3 3 2}): read existing descriptor range_id:3 start_key:"\224" end_key:"\225" replicas:<node_id:1 store_id:1 replica_id:1 > next_replica_id:2
I170117 10:14:32.929520 1977147 sql/event_log.go:95 [n5] Event: "node_join", target: 5, info: {Descriptor:{NodeID:5 Address:{NetworkField:tcp AddressField:127.0.0.1:47683} Attrs: Locality:} ClusterID:1518aab1-42fd-403c-b8ff-6323ba8a4269 StartedAt:1484648072866847476}
I170117 10:14:32.944446 1983958 storage/replica.go:2385 [n1,s1,r3/1:/Table/1{2-3},@c436c5b200] proposing ADD_REPLICA {NodeID:3 StoreID:3 ReplicaID:2}: [{NodeID:1 StoreID:1 ReplicaID:1} {NodeID:3 StoreID:3 ReplicaID:2}]
I170117 10:14:32.948091 1948280 storage/replica_raftstorage.go:410 [replicate,n1,s1,r5/1:/{Table/14-Max},@c438a2a000] generated preemptive snapshot ffaca17f at index 13
I170117 10:14:32.959184 1948280 storage/store.go:3275 [replicate,n1,s1,r5/1:/{Table/14-Max},@c438a2a000] streamed snapshot: kv pairs: 10, log entries: 3, 1ms
I170117 10:14:32.959759 1985123 storage/replica_raftstorage.go:575 [n5,s5,r5/?:{-},@c43ff45800] applying preemptive snapshot at index 13 (id=ffaca17f, encoded size=1712, 1 rocksdb batches, 3 log entries)
I170117 10:14:32.960309 1985123 storage/replica_raftstorage.go:583 [n5,s5,r5/?:/{Table/14-Max},@c43ff45800] applied preemptive snapshot in 0ms [clear=0ms batch=0ms entries=0ms commit=0ms]
I170117 10:14:32.963330 1948280 storage/replica_command.go:3210 [replicate,n1,s1,r5/1:/{Table/14-Max},@c438a2a000] change replicas (remove {5 5 2}): read existing descriptor range_id:5 start_key:"\226" end_key:"\377\377" replicas:<node_id:1 store_id:1 replica_id:1 > next_replica_id:2
I170117 10:14:32.977497 1986403 storage/replica.go:2385 [n1,s1,r5/1:/{Table/14-Max},@c438a2a000] proposing ADD_REPLICA {NodeID:5 StoreID:5 ReplicaID:2}: [{NodeID:1 StoreID:1 ReplicaID:1} {NodeID:5 StoreID:5 ReplicaID:2}]
I170117 10:14:32.979718 1948280 storage/replica_raftstorage.go:410 [replicate,n1,s1,r1/1:/{Min-Table/11},@c431e0ef00] generated preemptive snapshot fefd4a0f at index 82
I170117 10:14:32.990018 1987423 storage/raft_transport.go:437 [n5] raft transport stream to node 1 established
I170117 10:14:33.004933 1948280 storage/store.go:3275 [replicate,n1,s1,r1/1:/{Min-Table/11},@c431e0ef00] streamed snapshot: kv pairs: 1251, log entries: 72, 8ms
I170117 10:14:33.005672 1987892 storage/replica_raftstorage.go:575 [n4,s4,r1/?:{-},@c435adcf00] applying preemptive snapshot at index 82 (id=fefd4a0f, encoded size=689172, 1 rocksdb batches, 72 log entries)
I170117 10:14:33.011666 1987892 storage/replica_raftstorage.go:583 [n4,s4,r1/?:/{Min-Table/11},@c435adcf00] applied preemptive snapshot in 6ms [clear=0ms batch=0ms entries=3ms commit=2ms]
I170117 10:14:33.013703 1948280 storage/replica_command.go:3210 [replicate,n1,s1,r1/1:/{Min-Table/11},@c431e0ef00] change replicas (remove {4 4 3}): read existing descriptor range_id:1 start_key:"" end_key:"\223" replicas:<node_id:1 store_id:1 replica_id:1 > replicas:<node_id:2 store_id:2 replica_id:2 > next_replica_id:3
I170117 10:14:33.040141 1989217 storage/replica.go:2385 [n1,s1,r1/1:/{Min-Table/11},@c431e0ef00] proposing ADD_REPLICA {NodeID:4 StoreID:4 ReplicaID:3}: [{NodeID:1 StoreID:1 ReplicaID:1} {NodeID:2 StoreID:2 ReplicaID:2} {NodeID:4 StoreID:4 ReplicaID:3}]
I170117 10:14:33.045109 1948280 storage/queue.go:662 [n1,replicate] purgatory is now empty
I170117 10:14:33.046300 1945110 storage/replica_raftstorage.go:410 [replicate,n1,s1,r4/1:/Table/1{3-4},@c4361d9800] generated preemptive snapshot a8c74731 at index 33
I170117 10:14:33.047952 1945110 storage/store.go:3275 [replicate,n1,s1,r4/1:/Table/1{3-4},@c4361d9800] streamed snapshot: kv pairs: 71, log entries: 23, 1ms
I170117 10:14:33.048528 1989758 storage/replica_raftstorage.go:575 [n5,s5,r4/?:{-},@c437b50900] applying preemptive snapshot at index 33 (id=a8c74731, encoded size=37627, 1 rocksdb batches, 23 log entries)
I170117 10:14:33.051376 1989758 storage/replica_raftstorage.go:583 [n5,s5,r4/?:/Table/1{3-4},@c437b50900] applied preemptive snapshot in 3ms [clear=0ms batch=0ms entries=2ms commit=0ms]
I170117 10:14:33.054249 1945110 storage/replica_command.go:3210 [replicate,n1,s1,r4/1:/Table/1{3-4},@c4361d9800] change replicas (remove {5 5 3}): read existing descriptor range_id:4 start_key:"\225" end_key:"\226" replicas:<node_id:1 store_id:1 replica_id:1 > replicas:<node_id:2 store_id:2 replica_id:2 > next_replica_id:3
I170117 10:14:33.058134 1990851 storage/raft_transport.go:437 [n4] raft transport stream to node 1 established
I170117 10:14:33.076887 1992278 storage/replica.go:2385 [n1,s1,r4/1:/Table/1{3-4},@c4361d9800] proposing ADD_REPLICA {NodeID:5 StoreID:5 ReplicaID:3}: [{NodeID:1 StoreID:1 ReplicaID:1} {NodeID:2 StoreID:2 ReplicaID:2} {NodeID:5 StoreID:5 ReplicaID:3}]
I170117 10:14:33.096275 1945110 storage/replica_raftstorage.go:410 [replicate,n1,s1,r5/1:/{Table/14-Max},@c438a2a000] generated preemptive snapshot 92c3bb82 at index 16
I170117 10:14:33.097886 1945110 storage/store.go:3275 [replicate,n1,s1,r5/1:/{Table/14-Max},@c438a2a000] streamed snapshot: kv pairs: 11, log entries: 6, 0ms
I170117 10:14:33.099214 1993185 storage/replica_raftstorage.go:575 [n4,s4,r5/?:{-},@c42d02a600] applying preemptive snapshot at index 16 (id=92c3bb82, encoded size=4673, 1 rocksdb batches, 6 log entries)
I170117 10:14:33.099815 1993185 storage/replica_raftstorage.go:583 [n4,s4,r5/?:/{Table/14-Max},@c42d02a600] applied preemptive snapshot in 1ms [clear=0ms batch=0ms entries=0ms commit=0ms]
I170117 10:14:33.101931 1945110 storage/replica_command.go:3210 [replicate,n1,s1,r5/1:/{Table/14-Max},@c438a2a000] change replicas (remove {4 4 3}): read existing descriptor range_id:5 start_key:"\226" end_key:"\377\377" replicas:<node_id:1 store_id:1 replica_id:1 > replicas:<node_id:5 store_id:5 replica_id:2 > next_replica_id:3
W170117 10:14:33.116419 357864 gossip/gossip.go:1143 [n2] first range unavailable; trying remaining resolvers
I170117 10:14:33.116737 1994780 gossip/client.go:125 [n2] started gossip client to 127.0.0.1:57345
I170117 10:14:33.125943 1995537 storage/replica.go:2385 [n1,s1,r5/1:/{Table/14-Max},@c438a2a000] proposing ADD_REPLICA {NodeID:4 StoreID:4 ReplicaID:3}: [{NodeID:1 StoreID:1 ReplicaID:1} {NodeID:5 StoreID:5 ReplicaID:2} {NodeID:4 StoreID:4 ReplicaID:3}]
I170117 10:14:33.138819 1945110 storage/replica_raftstorage.go:410 [replicate,n1,s1,r2/1:/Table/1{1-2},@c437e4c300] generated preemptive snapshot 7c6d8b00 at index 27
I170117 10:14:33.140535 1945110 storage/store.go:3275 [replicate,n1,s1,r2/1:/Table/1{1-2},@c437e4c300] streamed snapshot: kv pairs: 12, log entries: 17, 1ms
I170117 10:14:33.141171 1996976 storage/replica_raftstorage.go:575 [n5,s5,r2/?:{-},@c42d02a900] applying preemptive snapshot at index 27 (id=7c6d8b00, encoded size=20333, 1 rocksdb batches, 17 log entries)
I170117 10:14:33.142085 1996976 storage/replica_raftstorage.go:583 [n5,s5,r2/?:/Table/1{1-2},@c42d02a900] applied preemptive snapshot in 1ms [clear=0ms batch=0ms entries=0ms commit=0ms]
I170117 10:14:33.143900 1945110 storage/replica_command.go:3210 [replicate,n1,s1,r2/1:/Table/1{1-2},@c437e4c300] change replicas (remove {5 5 3}): read existing descriptor range_id:2 start_key:"\223" end_key:"\224" replicas:<node_id:1 store_id:1 replica_id:1 > replicas:<node_id:3 store_id:3 replica_id:2 > next_replica_id:3
I170117 10:14:33.175911 1999007 storage/replica.go:2385 [n1,s1,r2/1:/Table/1{1-2},@c437e4c300] proposing ADD_REPLICA {NodeID:5 StoreID:5 ReplicaID:3}: [{NodeID:1 StoreID:1 ReplicaID:1} {NodeID:3 StoreID:3 ReplicaID:2} {NodeID:5 StoreID:5 ReplicaID:3}]
I170117 10:14:33.192385 1945110 storage/replica_raftstorage.go:410 [replicate,n1,s1,r3/1:/Table/1{2-3},@c436c5b200] generated preemptive snapshot c022e25b at index 30
I170117 10:14:33.195462 1945110 storage/store.go:3275 [replicate,n1,s1,r3/1:/Table/1{2-3},@c436c5b200] streamed snapshot: kv pairs: 42, log entries: 20, 3ms
I170117 10:14:33.196753 2000488 storage/replica_raftstorage.go:575 [n4,s4,r3/?:{-},@c42d02af00] applying preemptive snapshot at index 30 (id=c022e25b, encoded size=31362, 1 rocksdb batches, 20 log entries)
I170117 10:14:33.198380 2000488 storage/replica_raftstorage.go:583 [n4,s4,r3/?:/Table/1{2-3},@c42d02af00] applied preemptive snapshot in 1ms [clear=0ms batch=0ms entries=1ms commit=0ms]
I170117 10:14:33.203206 1945110 storage/replica_command.go:3210 [replicate,n1,s1,r3/1:/Table/1{2-3},@c436c5b200] change replicas (remove {4 4 3}): read existing descriptor range_id:3 start_key:"\224" end_key:"\225" replicas:<node_id:1 store_id:1 replica_id:1 > replicas:<node_id:3 store_id:3 replica_id:2 > next_replica_id:3
I170117 10:14:33.248672 2003182 storage/replica.go:2385 [n1,s1,r3/1:/Table/1{2-3},@c436c5b200] proposing ADD_REPLICA {NodeID:4 StoreID:4 ReplicaID:3}: [{NodeID:1 StoreID:1 ReplicaID:1} {NodeID:3 StoreID:3 ReplicaID:2} {NodeID:4 StoreID:4 ReplicaID:3}]
I170117 10:14:33.422856 2006556 storage/replica_command.go:2354 [n1,s1,r1/1:/{Min-Table/11},@c431e0ef00] initiating a split of this range at key "m" [r6]
I170117 10:14:33.466334 1943902 storage/replica_raftstorage.go:410 [n1,s1,r6/1:{"m"-/Table/11},@c4354f6600] generated preemptive snapshot 4b7bca36 at index 10
I170117 10:14:33.466997 1943902 storage/store.go:3275 [n1,s1,r6/1:{"m"-/Table/11},@c4354f6600] streamed snapshot: kv pairs: 32, log entries: 0, 0ms
I170117 10:14:33.467701 2010658 storage/replica_raftstorage.go:575 [n3,s3,r6/?:{-},@c42d02b800] applying preemptive snapshot at index 10 (id=4b7bca36, encoded size=5486, 1 rocksdb batches, 0 log entries)
I170117 10:14:33.468143 2010658 storage/replica_raftstorage.go:583 [n3,s3,r6/?:{"m"-/Table/11},@c42d02b800] applied preemptive snapshot in 0ms [clear=0ms batch=0ms entries=0ms commit=0ms]
I170117 10:14:33.470143 1943902 storage/replica_command.go:3210 [n1,s1,r6/1:{"m"-/Table/11},@c4354f6600] change replicas (remove {3 3 4}): read existing descriptor range_id:6 start_key:"m" end_key:"\223" replicas:<node_id:1 store_id:1 replica_id:1 > replicas:<node_id:2 store_id:2 replica_id:2 > replicas:<node_id:4 store_id:4 replica_id:3 > next_replica_id:4
I170117 10:14:33.497104 2012956 storage/replica.go:2385 [n1,s1,r6/1:{"m"-/Table/11},@c4354f6600] proposing ADD_REPLICA {NodeID:3 StoreID:3 ReplicaID:4}: [{NodeID:1 StoreID:1 ReplicaID:1} {NodeID:2 StoreID:2 ReplicaID:2} {NodeID:4 StoreID:4 ReplicaID:3} {NodeID:3 StoreID:3 ReplicaID:4}]
I170117 10:14:33.510599 1943902 storage/replica_raftstorage.go:410 [n1,s1,r6/1:{"m"-/Table/11},@c4354f6600] generated preemptive snapshot d056e62f at index 14
I170117 10:14:33.512959 1943902 storage/store.go:3275 [n1,s1,r6/1:{"m"-/Table/11},@c4354f6600] streamed snapshot: kv pairs: 34, log entries: 4, 2ms
I170117 10:14:33.515650 2014571 storage/replica_raftstorage.go:575 [n5,s5,r6/?:{-},@c435360000] applying preemptive snapshot at index 14 (id=d056e62f, encoded size=9349, 1 rocksdb batches, 4 log entries)
I170117 10:14:33.516970 2014571 storage/replica_raftstorage.go:583 [n5,s5,r6/?:{"m"-/Table/11},@c435360000] applied preemptive snapshot in 1ms [clear=0ms batch=0ms entries=0ms commit=0ms]
I170117 10:14:33.520052 1943902 storage/replica_command.go:3210 [n1,s1,r6/1:{"m"-/Table/11},@c4354f6600] change replicas (remove {5 5 5}): read existing descriptor range_id:6 start_key:"m" end_key:"\223" replicas:<node_id:1 store_id:1 replica_id:1 > replicas:<node_id:2 store_id:2 replica_id:2 > replicas:<node_id:4 store_id:4 replica_id:3 > replicas:<node_id:3 store_id:3 replica_id:4 > next_replica_id:5
I170117 10:14:33.541909 2016813 storage/replica.go:2385 [n1,s1,r6/1:{"m"-/Table/11},@c4354f6600] proposing ADD_REPLICA {NodeID:5 StoreID:5 ReplicaID:5}: [{NodeID:1 StoreID:1 ReplicaID:1} {NodeID:2 StoreID:2 ReplicaID:2} {NodeID:4 StoreID:4 ReplicaID:3} {NodeID:3 StoreID:3 ReplicaID:4} {NodeID:5 StoreID:5 ReplicaID:5}]
I170117 10:14:34.872680 1945110 storage/replica_command.go:3210 [replicate,n1,s1,r6/1:{"m"-/Table/11},@c4354f6600] change replicas (remove {3 3 4}): read existing descriptor range_id:6 start_key:"m" end_key:"\223" replicas:<node_id:1 store_id:1 replica_id:1 > replicas:<node_id:2 store_id:2 replica_id:2 > replicas:<node_id:4 store_id:4 replica_id:3 > replicas:<node_id:3 store_id:3 replica_id:4 > replicas:<node_id:5 store_id:5 replica_id:5 > next_replica_id:6
I170117 10:14:34.908086 2030172 storage/replica.go:2385 [n1,s1,r6/1:{"m"-/Table/11},@c4354f6600] proposing REMOVE_REPLICA {NodeID:3 StoreID:3 ReplicaID:4}: [{NodeID:1 StoreID:1 ReplicaID:1} {NodeID:2 StoreID:2 ReplicaID:2} {NodeID:4 StoreID:4 ReplicaID:3} {NodeID:5 StoreID:5 ReplicaID:5}]
I170117 10:14:34.914566 1981018 storage/store.go:3131 [n3,s3,r6/4:{"m"-/Table/11},@c42d02b800] added to replica GC queue (peer suggestion)
I170117 10:14:34.925424 1945110 storage/replica_command.go:3210 [replicate,n1,s1,r6/1:{"m"-/Table/11},@c4354f6600] change replicas (remove {2 2 2}): read existing descriptor range_id:6 start_key:"m" end_key:"\223" replicas:<node_id:1 store_id:1 replica_id:1 > replicas:<node_id:2 store_id:2 replica_id:2 > replicas:<node_id:4 store_id:4 replica_id:3 > replicas:<node_id:5 store_id:5 replica_id:5 > next_replica_id:6
I170117 10:14:34.940261 1962523 storage/store.go:2106 [replicaGC,n3,s3,r6/4:{"m"-/Table/11},@c42d02b800] removing replica
I170117 10:14:34.946970 1962523 storage/replica.go:731 [replicaGC,n3,s3,r6/4:{"m"-/Table/11},@c42d02b800] removed 30 (19+11) keys in 0ms [clear=0ms commit=0ms]
I170117 10:14:34.969108 2034573 storage/replica.go:2385 [n1,s1,r6/1:{"m"-/Table/11},@c4354f6600] proposing REMOVE_REPLICA {NodeID:2 StoreID:2 ReplicaID:2}: [{NodeID:1 StoreID:1 ReplicaID:1} {NodeID:5 StoreID:5 ReplicaID:5} {NodeID:4 StoreID:4 ReplicaID:3}]
I170117 10:14:34.974417 1963163 storage/store.go:3131 [n2,s2,r6/2:{"m"-/Table/11},@c437b51800] added to replica GC queue (peer suggestion)
I170117 10:14:34.997322 1957576 storage/store.go:2106 [replicaGC,n2,s2,r6/2:{"m"-/Table/11},@c437b51800] removing replica
I170117 10:14:34.998497 1957576 storage/replica.go:731 [replicaGC,n2,s2,r6/2:{"m"-/Table/11},@c437b51800] removed 30 (19+11) keys in 0ms [clear=0ms commit=0ms]
W170117 10:14:35.636993 1990851 storage/raft_transport.go:443 [n4] raft transport stream to node 1 failed: rpc error: code = 13 desc = transport is closing
W170117 10:14:35.637526 1962769 storage/raft_transport.go:443 [n1] raft transport stream to node 2 failed: EOF
W170117 10:14:35.637678 1980724 storage/raft_transport.go:443 [n1] raft transport stream to node 3 failed: EOF
W170117 10:14:35.637810 1980960 storage/raft_transport.go:443 [n3] raft transport stream to node 1 failed: rpc error: code = 13 desc = transport is closing
I170117 10:14:35.637846 1946120 vendor/google.golang.org/grpc/transport/http2_client.go:1123 transport: http2Client.notifyError got notified that the client transport was broken EOF.
W170117 10:14:35.637984 1987360 storage/raft_transport.go:443 [n1] raft transport stream to node 5 failed: EOF
I170117 10:14:35.638036 1945608 vendor/google.golang.org/grpc/clientconn.go:766 grpc: addrConn.resetTransport failed to create client transport: connection error: desc = "transport: dial tcp 127.0.0.1:52162: operation was canceled"; Reconnecting to {127.0.0.1:52162 <nil>}
W170117 10:14:35.638081 1963131 storage/raft_transport.go:443 [n2] raft transport stream to node 1 failed: rpc error: code = 13 desc = transport is closing
I170117 10:14:35.638147 1946163 vendor/google.golang.org/grpc/transport/http2_server.go:320 transport: http2Server.HandleStreams failed to read frame: read tcp 127.0.0.1:52162->127.0.0.1:49922: use of closed network connection
I170117 10:14:35.638193 1945608 vendor/google.golang.org/grpc/clientconn.go:866 grpc: addrConn.transportMonitor exits due to: grpc: the connection is closing
W170117 10:14:35.638255 1987423 storage/raft_transport.go:443 [n5] raft transport stream to node 1 failed: rpc error: code = 13 desc = transport is closing
leaktest.go:93: Leaked goroutine: goroutine 1994780 [select]:
github.com/cockroachdb/cockroach/pkg/gossip.(*client).gossip(0xc427e031e0, 0x2b20b54b0230, 0xc42e692180, 0xc426813b00, 0x261ad80, 0xc42b9a0020, 0xc428c4e870, 0xc42ba8f7c0, 0x0, 0x0)
/go/src/github.com/cockroachdb/cockroach/pkg/gossip/client.go:309 +0x3ec
github.com/cockroachdb/cockroach/pkg/gossip.(*client).start.func1()
/go/src/github.com/cockroachdb/cockroach/pkg/gossip/client.go:126 +0x4f7
github.com/cockroachdb/cockroach/pkg/util/stop.(*Stopper).RunWorker.func1(0xc428c4e870, 0xc428272340)
/go/src/github.com/cockroachdb/cockroach/pkg/util/stop/stopper.go:196 +0x7d
created by github.com/cockroachdb/cockroach/pkg/util/stop.(*Stopper).RunWorker
/go/src/github.com/cockroachdb/cockroach/pkg/util/stop/stopper.go:197 +0x66
leaktest.go:93: Leaked goroutine: goroutine 1994786 [select]:
github.com/cockroachdb/cockroach/vendor/google.golang.org/grpc.newClientStream.func3(0x261a8a0, 0xc426a0fa40, 0xc43e79b500, 0xc435b6f7c0)
/go/src/github.com/cockroachdb/cockroach/vendor/google.golang.org/grpc/stream.go:234 +0x426
created by github.com/cockroachdb/cockroach/vendor/google.golang.org/grpc.newClientStream
/go/src/github.com/cockroachdb/cockroach/vendor/google.golang.org/grpc/stream.go:254 +0xcba
leaktest.go:93: Leaked goroutine: goroutine 1994796 [select]:
github.com/cockroachdb/cockroach/pkg/gossip.(*server).Gossip(0xc424c3b800, 0x261ade0, 0xc42b9a01f0, 0x0, 0x0)
/go/src/github.com/cockroachdb/cockroach/pkg/gossip/server.go:190 +0x57b
github.com/cockroachdb/cockroach/pkg/gossip._Gossip_Gossip_Handler(0x19d6280, 0xc424c3b800, 0x2619280, 0xc42d94c360, 0xc41f7f15a4, 0xc4208e8d70)
/go/src/github.com/cockroachdb/cockroach/pkg/gossip/gossip.pb.go:209 +0xbb
github.com/cockroachdb/cockroach/vendor/google.golang.org/grpc.(*Server).processStreamingRPC(0xc424c3b740, 0x261a900, 0xc42aeced00, 0xc43e79b600, 0xc4258697d0, 0x25e32a0, 0x0, 0x0, 0x0)
/go/src/github.com/cockroachdb/cockroach/vendor/google.golang.org/grpc/server.go:807 +0x7d6
github.com/cockroachdb/cockroach/vendor/google.golang.org/grpc.(*Server).handleStream(0xc424c3b740, 0x261a900, 0xc42aeced00, 0xc43e79b600, 0x0)
/go/src/github.com/cockroachdb/cockroach/vendor/google.golang.org/grpc/server.go:897 +0xc36
github.com/cockroachdb/cockroach/vendor/google.golang.org/grpc.(*Server).serveStreams.func1.1(0xc42af60a10, 0xc424c3b740, 0x261a900, 0xc42aeced00, 0xc43e79b600)
/go/src/github.com/cockroachdb/cockroach/vendor/google.golang.org/grpc/server.go:469 +0xab
created by github.com/cockroachdb/cockroach/vendor/google.golang.org/grpc.(*Server).serveStreams.func1
/go/src/github.com/cockroachdb/cockroach/vendor/google.golang.org/grpc/server.go:470 +0xa3
leaktest.go:93: Leaked goroutine: goroutine 1994846 [select]:
github.com/cockroachdb/cockroach/vendor/google.golang.org/grpc/transport.(*recvBufferReader).Read(0xc42e692800, 0xc42e143f30, 0x5, 0x5, 0x0, 0x0, 0x0)
/go/src/github.com/cockroachdb/cockroach/vendor/google.golang.org/grpc/transport/transport.go:140 +0x69a
github.com/cockroachdb/cockroach/vendor/google.golang.org/grpc/transport.(*Stream).Read(0xc43e79b600, 0xc42e143f30, 0x5, 0x5, 0xc444530b00, 0xc4426ed8e8, 0x640f40)
/go/src/github.com/cockroachdb/cockroach/vendor/google.golang.org/grpc/transport/transport.go:325 +0x5c
io.ReadAtLeast(0x2601460, 0xc43e79b600, 0xc42e143f30, 0x5, 0x5, 0x5, 0x32, 0x32, 0x0)
/usr/local/go/src/io/io.go:307 +0xa4
io.ReadFull(0x2601460, 0xc43e79b600, 0xc42e143f30, 0x5, 0x5, 0xc427588940, 0x37, 0x37)
/usr/local/go/src/io/io.go:325 +0x58
github.com/cockroachdb/cockroach/vendor/google.golang.org/grpc.(*parser).recvMsg(0xc42e143f20, 0x7fffffff, 0x1967ee0, 0x4, 0x0, 0x7, 0x0, 0x0)
/go/src/github.com/cockroachdb/cockroach/vendor/google.golang.org/grpc/rpc_util.go:233 +0x6f
github.com/cockroachdb/cockroach/vendor/google.golang.org/grpc.recv(0xc42e143f20, 0x2612860, 0x2b21650, 0xc43e79b600, 0x0, 0x0, 0x197b060, 0xc42e8af1c0, 0x7fffffff, 0x0, ...)
/go/src/github.com/cockroachdb/cockroach/vendor/google.golang.org/grpc/rpc_util.go:329 +0x4d
github.com/cockroachdb/cockroach/vendor/google.golang.org/grpc.(*serverStream).RecvMsg(0xc42d94c360, 0x197b060, 0xc42e8af1c0, 0x0, 0x0)
/go/src/github.com/cockroachdb/cockroach/vendor/google.golang.org/grpc/stream.go:607 +0x11e
github.com/cockroachdb/cockroach/pkg/gossip.(*gossipGossipServer).Recv(0xc42b9a01f0, 0xdec3fc, 0x18a5e20, 0xc424c3b830)
/go/src/github.com/cockroachdb/cockroach/pkg/gossip/gossip.pb.go:228 +0x62
github.com/cockroachdb/cockroach/pkg/gossip.(Gossip_GossipServer).Recv-fm(0xc424c3b830, 0x0, 0x0)
/go/src/github.com/cockroachdb/cockroach/pkg/gossip/server.go:153 +0x2f
github.com/cockroachdb/cockroach/pkg/gossip.(*server).gossipReceiver(0xc424c3b800, 0x2b20b54b0230, 0xc42e692880, 0xc4331b95f8, 0xc42e6928c0, 0xc4426edf40, 0x0, 0x0)
/go/src/github.com/cockroachdb/cockroach/pkg/gossip/server.go:321 +0x3fe
github.com/cockroachdb/cockroach/pkg/gossip.(*server).Gossip.func3.1()
/go/src/github.com/cockroachdb/cockroach/pkg/gossip/server.go:153 +0x99
github.com/cockroachdb/cockroach/pkg/util/stop.(*Stopper).RunWorker.func1(0xc428c4e870, 0xc43e4252c0)
/go/src/github.com/cockroachdb/cockroach/pkg/util/stop/stopper.go:196 +0x7d
created by github.com/cockroachdb/cockroach/pkg/util/stop.(*Stopper).RunWorker
/go/src/github.com/cockroachdb/cockroach/pkg/util/stop/stopper.go:197 +0x66
leaktest.go:93: Leaked goroutine: goroutine 1994921 [select]:
github.com/cockroachdb/cockroach/vendor/google.golang.org/grpc/transport.(*recvBufferReader).Read(0xc42e692280, 0xc42e143cd0, 0x5, 0x5, 0x0, 0x0, 0x0)
/go/src/github.com/cockroachdb/cockroach/vendor/google.golang.org/grpc/transport/transport.go:140 +0x69a
github.com/cockroachdb/cockroach/vendor/google.golang.org/grpc/transport.(*Stream).Read(0xc43e79b500, 0xc42e143cd0, 0x5, 0x5, 0xc443afcbc8, 0x5fd401, 0xc42a331fa8)
/go/src/github.com/cockroachdb/cockroach/vendor/google.golang.org/grpc/transport/transport.go:325 +0x5c
io.ReadAtLeast(0x2601460, 0xc43e79b500, 0xc42e143cd0, 0x5, 0x5, 0x5, 0x1, 0xc443afcc38, 0x5f58bf)
/usr/local/go/src/io/io.go:307 +0xa4
io.ReadFull(0x2601460, 0xc43e79b500, 0xc42e143cd0, 0x5, 0x5, 0xc42ba1ed08, 0xc42ba1ed00, 0x25ee810)
/usr/local/go/src/io/io.go:325 +0x58
github.com/cockroachdb/cockroach/vendor/google.golang.org/grpc.(*parser).recvMsg(0xc42e143cc0, 0x7fffffff, 0xc4267ecf30, 0xc443afce88, 0xb0b82e, 0xc4267ecf30, 0xc429a80de0, 0xc400000003)
/go/src/github.com/cockroachdb/cockroach/vendor/google.golang.org/grpc/rpc_util.go:233 +0x6f
github.com/cockroachdb/cockroach/vendor/google.golang.org/grpc.recv(0xc42e143cc0, 0x2612860, 0x2b21650, 0xc43e79b500, 0x0, 0x0, 0x1967ee0, 0xc43136fcc0, 0x7fffffff, 0x0, ...)
/go/src/github.com/cockroachdb/cockroach/vendor/google.golang.org/grpc/rpc_util.go:329 +0x4d
github.com/cockroachdb/cockroach/vendor/google.golang.org/grpc.(*clientStream).RecvMsg(0xc435b6f7c0, 0x1967ee0, 0xc43136fcc0, 0x0, 0x0)
/go/src/github.com/cockroachdb/cockroach/vendor/google.golang.org/grpc/stream.go:382 +0x11b
github.com/cockroachdb/cockroach/pkg/gossip.(*gossipGossipClient).Recv(0xc42b9a0020, 0x2b20b54b0230, 0xc42e692180, 0xc426813b00)
/go/src/github.com/cockroachdb/cockroach/pkg/gossip/gossip.pb.go:192 +0x62
github.com/cockroachdb/cockroach/pkg/gossip.(*client).gossip.func2.1(0x261ad80, 0xc42b9a0020, 0xc427e031e0, 0x2b20b54b0230, 0xc42e692180, 0xc426813b00, 0x616443, 0xc42f0e3f48)
/go/src/github.com/cockroachdb/cockroach/pkg/gossip/client.go:297 +0x35
github.com/cockroachdb/cockroach/pkg/gossip.(*client).gossip.func2()
/go/src/github.com/cockroachdb/cockroach/pkg/gossip/client.go:305 +0xd7
github.com/cockroachdb/cockroach/pkg/util/stop.(*Stopper).RunWorker.func1(0xc428c4e870, 0xc435cbe870)
/go/src/github.com/cockroachdb/cockroach/pkg/util/stop/stopper.go:196 +0x7d
created by github.com/cockroachdb/cockroach/pkg/util/stop.(*Stopper).RunWorker
/go/src/github.com/cockroachdb/cockroach/pkg/util/stop/stopper.go:197 +0x66
```
|
1.0
|
github.com/cockroachdb/cockroach/pkg/storage: TestReplicateQueueDownReplicate failed under stress - SHA: https://github.com/cockroachdb/cockroach/commits/ffc0c336351e06b68e7982b5ac6008ba75aa0a66
Parameters:
```
COCKROACH_PROPOSER_EVALUATED_KV=true
TAGS=deadlock
GOFLAGS=
```
Stress build found a failed test: https://teamcity.cockroachdb.com/viewLog.html?buildId=120190&tab=buildLog
```
W170117 10:14:31.381651 1943902 server/status/runtime.go:116 Could not parse build timestamp: parsing time "" as "2006/01/02 15:04:05": cannot parse "" as "2006"
I170117 10:14:31.382794 1943902 server/config.go:456 1 storage engine initialized
I170117 10:14:31.385037 1943902 server/node.go:426 [n?] store [n0,s0] not bootstrapped
I170117 10:14:31.393132 1943902 server/node.go:355 [n?] **** cluster 1518aab1-42fd-403c-b8ff-6323ba8a4269 has been created
I170117 10:14:31.393204 1943902 server/node.go:356 [n?] **** add additional nodes by specifying --join=127.0.0.1:52162
I170117 10:14:31.395828 1943902 storage/store.go:1250 [n1] [n1,s1]: failed initial metrics computation: [n1,s1]: system config not yet available
I170117 10:14:31.396050 1943902 server/node.go:439 [n1] initialized store [n1,s1]: {Capacity:536870912 Available:536870912 RangeCount:1 LeaseCount:0}
I170117 10:14:31.396216 1943902 server/node.go:324 [n1] node ID 1 initialized
I170117 10:14:31.396415 1943902 gossip/gossip.go:292 [n1] NodeDescriptor set to node_id:1 address:<network_field:"tcp" address_field:"127.0.0.1:52162" > attrs:<> locality:<>
I170117 10:14:31.396878 1943902 storage/stores.go:296 [n1] read 0 node addresses from persistent storage
I170117 10:14:31.397035 1943902 server/node.go:571 [n1] connecting to gossip network to verify cluster ID...
I170117 10:14:31.397131 1943902 server/node.go:595 [n1] node connected via gossip and verified as part of cluster "1518aab1-42fd-403c-b8ff-6323ba8a4269"
I170117 10:14:31.397997 1943902 server/node.go:374 [n1] node=1: started with [[]=] engine(s) and attributes []
I170117 10:14:31.398092 1943902 sql/executor.go:322 [n1] creating distSQLPlanner with address {tcp 127.0.0.1:52162}
I170117 10:14:31.401739 1943902 server/server.go:629 [n1] starting https server at 127.0.0.1:55608
I170117 10:14:31.401820 1943902 server/server.go:630 [n1] starting grpc/postgres server at 127.0.0.1:52162
I170117 10:14:31.402024 1943902 server/server.go:631 [n1] advertising CockroachDB node at 127.0.0.1:52162
I170117 10:14:31.403518 1945109 storage/split_queue.go:99 [split,n1,s1,r1/1:/M{in-ax},@c431e0ef00] splitting at keys [/Table/11/0 /Table/12/0 /Table/13/0 /Table/14/0]
I170117 10:14:31.410204 1945109 storage/replica_command.go:2354 [split,n1,s1,r1/1:/M{in-ax},@c431e0ef00] initiating a split of this range at key /Table/11 [r2]
E170117 10:14:31.453385 1945110 storage/queue.go:610 [replicate,n1,s1,r1/1:/{Min-Table/11},@c431e0ef00] purgatory: 0 of 1 store with an attribute matching []; likely not enough nodes in cluster
E170117 10:14:31.454252 1945109 storage/queue.go:599 [split,n1,s1,r1/1:/{Min-Table/11},@c431e0ef00] unable to split [n1,s1,r1/1:/{Min-Table/11}] at key "/Table/12/0": key range /Table/12/0-/Table/12/0 outside of bounds of range /Min-/Max
I170117 10:14:31.454927 1945109 storage/split_queue.go:99 [split,n1,s1,r2/1:/{Table/11-Max},@c437e4c300] splitting at keys [/Table/12/0 /Table/13/0 /Table/14/0]
I170117 10:14:31.455289 1945109 storage/replica_command.go:2354 [split,n1,s1,r2/1:/{Table/11-Max},@c437e4c300] initiating a split of this range at key /Table/12 [r3]
E170117 10:14:31.455349 1948280 storage/queue.go:610 [replicate,n1,s1,r1/1:/{Min-Table/11},@c431e0ef00] purgatory: 0 of 1 store with an attribute matching []; likely not enough nodes in cluster
I170117 10:14:31.464742 1945088 sql/event_log.go:95 [n1] Event: "node_join", target: 1, info: {Descriptor:{NodeID:1 Address:{NetworkField:tcp AddressField:127.0.0.1:52162} Attrs: Locality:} ClusterID:1518aab1-42fd-403c-b8ff-6323ba8a4269 StartedAt:1484648071397961127}
I170117 10:14:31.469704 1943902 sql/event_log.go:95 [n1] Event: "alter_table", target: 12, info: {TableName:eventlog Statement:ALTER TABLE system.eventlog ALTER COLUMN uniqueID SET DEFAULT uuid_v4() User:node MutationID:0 CascadeDroppedViews:[]}
E170117 10:14:31.475972 1945110 storage/queue.go:610 [replicate,n1,s1,r2/1:/Table/1{1-2},@c437e4c300] purgatory: 0 of 1 store with an attribute matching []; likely not enough nodes in cluster
E170117 10:14:31.477707 1945109 storage/queue.go:599 [split,n1,s1,r2/1:/Table/1{1-2},@c437e4c300] unable to split [n1,s1,r2/1:/Table/1{1-2}] at key "/Table/13/0": key range /Table/13/0-/Table/13/0 outside of bounds of range /Table/11-/Max
I170117 10:14:31.478714 1945109 storage/split_queue.go:99 [split,n1,s1,r3/1:/{Table/12-Max},@c436c5b200] splitting at keys [/Table/13/0 /Table/14/0]
E170117 10:14:31.479995 1948280 storage/queue.go:610 [replicate,n1,s1,r1/1:/{Min-Table/11},@c431e0ef00] purgatory: 0 of 1 store with an attribute matching []; likely not enough nodes in cluster
I170117 10:14:31.481357 1945109 storage/replica_command.go:2354 [split,n1,s1,r3/1:/{Table/12-Max},@c436c5b200] initiating a split of this range at key /Table/13 [r4]
E170117 10:14:31.488557 1948280 storage/queue.go:610 [replicate,n1,s1,r2/1:/Table/1{1-2},@c437e4c300] purgatory: 0 of 1 store with an attribute matching []; likely not enough nodes in cluster
E170117 10:14:31.514367 1948280 storage/queue.go:610 [replicate,n1,s1,r1/1:/{Min-Table/11},@c431e0ef00] purgatory: 0 of 1 store with an attribute matching []; likely not enough nodes in cluster
E170117 10:14:31.514825 1948280 storage/queue.go:610 [replicate,n1,s1,r2/1:/Table/1{1-2},@c437e4c300] purgatory: 0 of 1 store with an attribute matching []; likely not enough nodes in cluster
E170117 10:14:31.517206 1945110 storage/queue.go:610 [replicate,n1,s1,r3/1:/Table/1{2-3},@c436c5b200] purgatory: 0 of 1 store with an attribute matching []; likely not enough nodes in cluster
E170117 10:14:31.518126 1945109 storage/queue.go:599 [split,n1,s1,r3/1:/Table/1{2-3},@c436c5b200] unable to split [n1,s1,r3/1:/Table/1{2-3}] at key "/Table/14/0": key range /Table/14/0-/Table/14/0 outside of bounds of range /Table/12-/Max
I170117 10:14:31.519826 1945109 storage/split_queue.go:99 [split,n1,s1,r4/1:/{Table/13-Max},@c4361d9800] splitting at keys [/Table/14/0]
I170117 10:14:31.520296 1945109 storage/replica_command.go:2354 [split,n1,s1,r4/1:/{Table/13-Max},@c4361d9800] initiating a split of this range at key /Table/14 [r5]
I170117 10:14:31.534934 1943902 server/server.go:686 [n1] done ensuring all necessary migrations have run
I170117 10:14:31.534999 1943902 server/server.go:688 [n1] serving sql connections
E170117 10:14:31.548308 1945110 storage/queue.go:610 [replicate,n1,s1,r4/1:/Table/1{3-4},@c4361d9800] purgatory: 0 of 1 store with an attribute matching []; likely not enough nodes in cluster
E170117 10:14:31.548490 1948280 storage/queue.go:610 [replicate,n1,s1,r2/1:/Table/1{1-2},@c437e4c300] purgatory: 0 of 1 store with an attribute matching []; likely not enough nodes in cluster
E170117 10:14:31.550140 1948280 storage/queue.go:610 [replicate,n1,s1,r3/1:/Table/1{2-3},@c436c5b200] purgatory: 0 of 1 store with an attribute matching []; likely not enough nodes in cluster
E170117 10:14:31.550862 1945110 storage/queue.go:610 [replicate,n1,s1,r5/1:/{Table/14-Max},@c438a2a000] purgatory: 0 of 1 store with an attribute matching []; likely not enough nodes in cluster
E170117 10:14:31.551148 1948280 storage/queue.go:610 [replicate,n1,s1,r1/1:/{Min-Table/11},@c431e0ef00] purgatory: 0 of 1 store with an attribute matching []; likely not enough nodes in cluster
W170117 10:14:31.553600 1943902 gossip/gossip.go:1138 [n?] no incoming or outgoing connections
W170117 10:14:31.555011 1943902 server/status/runtime.go:116 Could not parse build timestamp: parsing time "" as "2006/01/02 15:04:05": cannot parse "" as "2006"
I170117 10:14:31.556117 1943902 server/config.go:456 1 storage engine initialized
I170117 10:14:31.557727 1943902 server/node.go:426 [n?] store [n0,s0] not bootstrapped
I170117 10:14:31.558030 1943902 storage/stores.go:296 [n?] read 0 node addresses from persistent storage
I170117 10:14:31.558145 1943902 server/node.go:571 [n?] connecting to gossip network to verify cluster ID...
I170117 10:14:31.568433 1956629 gossip/server.go:285 [n1] received initial cluster-verification connection from {tcp 127.0.0.1:45964}
I170117 10:14:31.568589 1956596 gossip/client.go:125 [n?] started gossip client to 127.0.0.1:52162
I170117 10:14:31.570828 1956645 storage/stores.go:312 [n?] wrote 1 node addresses to persistent storage
I170117 10:14:31.571358 1943902 server/node.go:595 [n?] node connected via gossip and verified as part of cluster "1518aab1-42fd-403c-b8ff-6323ba8a4269"
I170117 10:14:31.574028 1943902 kv/dist_sender.go:367 [n?] unable to determine this node's attributes for replica selection; node is most likely bootstrapping
I170117 10:14:31.578916 1943902 server/node.go:317 [n?] new node allocated ID 2
I170117 10:14:31.579063 1943902 gossip/gossip.go:292 [n2] NodeDescriptor set to node_id:2 address:<network_field:"tcp" address_field:"127.0.0.1:45964" > attrs:<> locality:<>
I170117 10:14:31.579308 1943902 server/node.go:374 [n2] node=2: started with [[]=] engine(s) and attributes []
I170117 10:14:31.579380 1943902 sql/executor.go:322 [n2] creating distSQLPlanner with address {tcp 127.0.0.1:45964}
I170117 10:14:31.580294 1956607 storage/stores.go:312 [n1] wrote 1 node addresses to persistent storage
I170117 10:14:31.581188 1943902 server/server.go:629 [n2] starting https server at 127.0.0.1:41515
I170117 10:14:31.581262 1943902 server/server.go:630 [n2] starting grpc/postgres server at 127.0.0.1:45964
I170117 10:14:31.581319 1943902 server/server.go:631 [n2] advertising CockroachDB node at 127.0.0.1:45964
I170117 10:14:31.584500 1943902 server/server.go:686 [n2] done ensuring all necessary migrations have run
I170117 10:14:31.584784 1943902 server/server.go:688 [n2] serving sql connections
I170117 10:14:31.585605 1956917 server/node.go:552 [n2] bootstrapped store [n2,s2]
E170117 10:14:31.586646 1948280 storage/queue.go:610 [replicate,n1,s1,r5/1:/{Table/14-Max},@c438a2a000] purgatory: 0 of 1 store with an attribute matching []; likely not enough nodes in cluster
E170117 10:14:31.587464 1948280 storage/queue.go:610 [replicate,n1,s1,r1/1:/{Min-Table/11},@c431e0ef00] purgatory: 0 of 1 store with an attribute matching []; likely not enough nodes in cluster
E170117 10:14:31.587931 1948280 storage/queue.go:610 [replicate,n1,s1,r4/1:/Table/1{3-4},@c4361d9800] purgatory: 0 of 1 store with an attribute matching []; likely not enough nodes in cluster
E170117 10:14:31.588437 1948280 storage/queue.go:610 [replicate,n1,s1,r2/1:/Table/1{1-2},@c437e4c300] purgatory: 0 of 1 store with an attribute matching []; likely not enough nodes in cluster
E170117 10:14:31.589675 1948280 storage/queue.go:610 [replicate,n1,s1,r3/1:/Table/1{2-3},@c436c5b200] purgatory: 0 of 1 store with an attribute matching []; likely not enough nodes in cluster
W170117 10:14:31.600446 1943902 gossip/gossip.go:1138 [n?] no incoming or outgoing connections
I170117 10:14:31.602016 1948280 storage/replica_raftstorage.go:410 [replicate,n1,s1,r4/1:/Table/1{3-4},@c4361d9800] generated preemptive snapshot 398bcb69 at index 17
W170117 10:14:31.602401 1943902 server/status/runtime.go:116 Could not parse build timestamp: parsing time "" as "2006/01/02 15:04:05": cannot parse "" as "2006"
I170117 10:14:31.605033 1943902 server/config.go:456 1 storage engine initialized
I170117 10:14:31.609101 1943902 server/node.go:426 [n?] store [n0,s0] not bootstrapped
I170117 10:14:31.609213 1943902 storage/stores.go:296 [n?] read 0 node addresses from persistent storage
I170117 10:14:31.609308 1943902 server/node.go:571 [n?] connecting to gossip network to verify cluster ID...
I170117 10:14:31.615143 1948280 storage/store.go:3275 [replicate,n1,s1,r4/1:/Table/1{3-4},@c4361d9800] streamed snapshot: kv pairs: 34, log entries: 7, 1ms
I170117 10:14:31.616541 1959722 storage/replica_raftstorage.go:575 [n2,s2,r4/?:{-},@c43bb58300] applying preemptive snapshot at index 17 (id=398bcb69, encoded size=10570, 1 rocksdb batches, 7 log entries)
I170117 10:14:31.617581 1959722 storage/replica_raftstorage.go:583 [n2,s2,r4/?:/Table/1{3-4},@c43bb58300] applied preemptive snapshot in 1ms [clear=0ms batch=0ms entries=0ms commit=0ms]
I170117 10:14:31.620048 1948280 storage/replica_command.go:3210 [replicate,n1,s1,r4/1:/Table/1{3-4},@c4361d9800] change replicas (remove {2 2 2}): read existing descriptor range_id:4 start_key:"\225" end_key:"\226" replicas:<node_id:1 store_id:1 replica_id:1 > next_replica_id:2
I170117 10:14:31.624260 1958860 gossip/client.go:125 [n?] started gossip client to 127.0.0.1:52162
I170117 10:14:31.625275 1960705 gossip/server.go:285 [n1] received initial cluster-verification connection from {tcp 127.0.0.1:53298}
I170117 10:14:31.627729 1956921 sql/event_log.go:95 [n2] Event: "node_join", target: 2, info: {Descriptor:{NodeID:2 Address:{NetworkField:tcp AddressField:127.0.0.1:45964} Attrs: Locality:} ClusterID:1518aab1-42fd-403c-b8ff-6323ba8a4269 StartedAt:1484648071579278340}
I170117 10:14:31.627825 1960808 storage/stores.go:312 [n?] wrote 1 node addresses to persistent storage
I170117 10:14:31.627912 1943902 server/node.go:595 [n?] node connected via gossip and verified as part of cluster "1518aab1-42fd-403c-b8ff-6323ba8a4269"
I170117 10:14:31.628571 1960815 storage/stores.go:312 [n?] wrote 2 node addresses to persistent storage
I170117 10:14:31.631014 1943902 kv/dist_sender.go:367 [n?] unable to determine this node's attributes for replica selection; node is most likely bootstrapping
I170117 10:14:31.634275 1961438 storage/replica.go:2385 [n1,s1,r4/1:/Table/1{3-4},@c4361d9800] proposing ADD_REPLICA {NodeID:2 StoreID:2 ReplicaID:2}: [{NodeID:1 StoreID:1 ReplicaID:1} {NodeID:2 StoreID:2 ReplicaID:2}]
I170117 10:14:31.634564 1943902 server/node.go:317 [n?] new node allocated ID 3
I170117 10:14:31.634716 1943902 gossip/gossip.go:292 [n3] NodeDescriptor set to node_id:3 address:<network_field:"tcp" address_field:"127.0.0.1:53298" > attrs:<> locality:<>
I170117 10:14:31.635102 1943902 server/node.go:374 [n3] node=3: started with [[]=] engine(s) and attributes []
I170117 10:14:31.635206 1943902 sql/executor.go:322 [n3] creating distSQLPlanner with address {tcp 127.0.0.1:53298}
I170117 10:14:31.637337 1961776 storage/stores.go:312 [n1] wrote 2 node addresses to persistent storage
I170117 10:14:31.639471 1961841 storage/stores.go:312 [n2] wrote 2 node addresses to persistent storage
I170117 10:14:31.640105 1943902 server/server.go:629 [n3] starting https server at 127.0.0.1:37479
I170117 10:14:31.640441 1943902 server/server.go:630 [n3] starting grpc/postgres server at 127.0.0.1:53298
I170117 10:14:31.640510 1943902 server/server.go:631 [n3] advertising CockroachDB node at 127.0.0.1:53298
I170117 10:14:31.641251 1945110 storage/replica_raftstorage.go:410 [replicate,n1,s1,r1/1:/{Min-Table/11},@c431e0ef00] generated preemptive snapshot 5b72a57b at index 54
I170117 10:14:31.645105 1961525 server/node.go:552 [n3] bootstrapped store [n3,s3]
I170117 10:14:31.645940 1945110 storage/store.go:3275 [replicate,n1,s1,r1/1:/{Min-Table/11},@c431e0ef00] streamed snapshot: kv pairs: 637, log entries: 44, 4ms
I170117 10:14:31.648832 1962274 storage/replica_raftstorage.go:575 [n2,s2,r1/?:{-},@c43bb58c00] applying preemptive snapshot at index 54 (id=5b72a57b, encoded size=319266, 1 rocksdb batches, 44 log entries)
I170117 10:14:31.649795 1963131 storage/raft_transport.go:437 [n2] raft transport stream to node 1 established
I170117 10:14:31.654134 1943902 server/server.go:686 [n3] done ensuring all necessary migrations have run
I170117 10:14:31.654201 1943902 server/server.go:688 [n3] serving sql connections
I170117 10:14:31.656581 1962274 storage/replica_raftstorage.go:583 [n2,s2,r1/?:/{Min-Table/11},@c43bb58c00] applied preemptive snapshot in 7ms [clear=0ms batch=0ms entries=3ms commit=2ms]
I170117 10:14:31.659479 1945110 storage/replica_command.go:3210 [replicate,n1,s1,r1/1:/{Min-Table/11},@c431e0ef00] change replicas (remove {2 2 2}): read existing descriptor range_id:1 start_key:"" end_key:"\223" replicas:<node_id:1 store_id:1 replica_id:1 > next_replica_id:2
W170117 10:14:31.660104 1943902 gossip/gossip.go:1138 [n?] no incoming or outgoing connections
W170117 10:14:31.663013 1943902 server/status/runtime.go:116 Could not parse build timestamp: parsing time "" as "2006/01/02 15:04:05": cannot parse "" as "2006"
I170117 10:14:31.664422 1943902 server/config.go:456 1 storage engine initialized
I170117 10:14:31.667444 1961529 sql/event_log.go:95 [n3] Event: "node_join", target: 3, info: {Descriptor:{NodeID:3 Address:{NetworkField:tcp AddressField:127.0.0.1:53298} Attrs: Locality:} ClusterID:1518aab1-42fd-403c-b8ff-6323ba8a4269 StartedAt:1484648071635071268}
I170117 10:14:31.667623 1943902 server/node.go:426 [n?] store [n0,s0] not bootstrapped
I170117 10:14:31.667771 1943902 storage/stores.go:296 [n?] read 0 node addresses from persistent storage
I170117 10:14:31.667910 1943902 server/node.go:571 [n?] connecting to gossip network to verify cluster ID...
I170117 10:14:31.671449 1965351 storage/replica.go:2385 [n1,s1,r1/1:/{Min-Table/11},@c431e0ef00] proposing ADD_REPLICA {NodeID:2 StoreID:2 ReplicaID:2}: [{NodeID:1 StoreID:1 ReplicaID:1} {NodeID:2 StoreID:2 ReplicaID:2}]
I170117 10:14:31.674097 1964633 gossip/client.go:125 [n?] started gossip client to 127.0.0.1:52162
I170117 10:14:31.674221 1948280 storage/replica_raftstorage.go:410 [replicate,n1,s1,r2/1:/Table/1{1-2},@c437e4c300] generated preemptive snapshot 72d7c1c6 at index 19
I170117 10:14:31.674350 1965707 gossip/server.go:285 [n1] received initial cluster-verification connection from {tcp 127.0.0.1:49743}
I170117 10:14:31.677802 1943902 server/node.go:595 [n?] node connected via gossip and verified as part of cluster "1518aab1-42fd-403c-b8ff-6323ba8a4269"
I170117 10:14:31.679407 1965881 storage/stores.go:312 [n?] wrote 1 node addresses to persistent storage
I170117 10:14:31.681206 1965881 storage/stores.go:312 [n?] wrote 2 node addresses to persistent storage
I170117 10:14:31.682242 1965881 storage/stores.go:312 [n?] wrote 3 node addresses to persistent storage
I170117 10:14:31.682385 1943902 kv/dist_sender.go:367 [n?] unable to determine this node's attributes for replica selection; node is most likely bootstrapping
I170117 10:14:31.691483 1948280 storage/store.go:3275 [replicate,n1,s1,r2/1:/Table/1{1-2},@c437e4c300] streamed snapshot: kv pairs: 10, log entries: 9, 0ms
I170117 10:14:31.692651 1966713 storage/replica_raftstorage.go:575 [n3,s3,r2/?:{-},@c4304f2000] applying preemptive snapshot at index 19 (id=72d7c1c6, encoded size=13578, 1 rocksdb batches, 9 log entries)
I170117 10:14:31.693512 1966713 storage/replica_raftstorage.go:583 [n3,s3,r2/?:/Table/1{1-2},@c4304f2000] applied preemptive snapshot in 1ms [clear=0ms batch=0ms entries=0ms commit=0ms]
I170117 10:14:31.695863 1948280 storage/replica_command.go:3210 [replicate,n1,s1,r2/1:/Table/1{1-2},@c437e4c300] change replicas (remove {3 3 2}): read existing descriptor range_id:2 start_key:"\223" end_key:"\224" replicas:<node_id:1 store_id:1 replica_id:1 > next_replica_id:2
I170117 10:14:32.794482 1943902 server/node.go:317 [n?] new node allocated ID 4
I170117 10:14:32.794595 1943902 gossip/gossip.go:292 [n4] NodeDescriptor set to node_id:4 address:<network_field:"tcp" address_field:"127.0.0.1:49743" > attrs:<> locality:<>
I170117 10:14:32.794805 1943902 server/node.go:374 [n4] node=4: started with [[]=] engine(s) and attributes []
I170117 10:14:32.794877 1943902 sql/executor.go:322 [n4] creating distSQLPlanner with address {tcp 127.0.0.1:49743}
I170117 10:14:32.796593 1943902 server/server.go:629 [n4] starting https server at 127.0.0.1:49214
I170117 10:14:32.796648 1943902 server/server.go:630 [n4] starting grpc/postgres server at 127.0.0.1:49743
I170117 10:14:32.796702 1943902 server/server.go:631 [n4] advertising CockroachDB node at 127.0.0.1:49743
I170117 10:14:32.797643 1966460 storage/stores.go:312 [n1] wrote 3 node addresses to persistent storage
I170117 10:14:32.798175 1972096 storage/stores.go:312 [n3] wrote 3 node addresses to persistent storage
I170117 10:14:32.798458 1971824 storage/stores.go:312 [n2] wrote 3 node addresses to persistent storage
I170117 10:14:32.798577 1943902 server/server.go:686 [n4] done ensuring all necessary migrations have run
I170117 10:14:32.798638 1943902 server/server.go:688 [n4] serving sql connections
I170117 10:14:32.820172 1971847 server/node.go:552 [n4] bootstrapped store [n4,s4]
W170117 10:14:32.833470 1943902 gossip/gossip.go:1138 [n?] no incoming or outgoing connections
W170117 10:14:32.835696 1943902 server/status/runtime.go:116 Could not parse build timestamp: parsing time "" as "2006/01/02 15:04:05": cannot parse "" as "2006"
I170117 10:14:32.836736 1943902 server/config.go:456 1 storage engine initialized
I170117 10:14:32.838363 1943902 server/node.go:426 [n?] store [n0,s0] not bootstrapped
I170117 10:14:32.838471 1943902 storage/stores.go:296 [n?] read 0 node addresses from persistent storage
I170117 10:14:32.838562 1943902 server/node.go:571 [n?] connecting to gossip network to verify cluster ID...
I170117 10:14:32.847789 1971851 sql/event_log.go:95 [n4] Event: "node_join", target: 4, info: {Descriptor:{NodeID:4 Address:{NetworkField:tcp AddressField:127.0.0.1:49743} Attrs: Locality:} ClusterID:1518aab1-42fd-403c-b8ff-6323ba8a4269 StartedAt:1484648072794774912}
I170117 10:14:32.857722 1974856 gossip/client.go:125 [n?] started gossip client to 127.0.0.1:52162
I170117 10:14:32.857877 1948280 storage/replica_command.go:3210 [replicate,n1,s1,r2/1:/Table/1{1-2},@c437e4c300] change replicas (remove {3 3 2}): read existing descriptor range_id:2 start_key:"\223" end_key:"\224" replicas:<node_id:1 store_id:1 replica_id:1 > next_replica_id:2
I170117 10:14:32.858068 1976237 gossip/server.go:285 [n1] received initial cluster-verification connection from {tcp 127.0.0.1:47683}
I170117 10:14:32.858920 1943902 server/node.go:595 [n?] node connected via gossip and verified as part of cluster "1518aab1-42fd-403c-b8ff-6323ba8a4269"
I170117 10:14:32.859887 1976286 storage/stores.go:312 [n?] wrote 1 node addresses to persistent storage
I170117 10:14:32.860377 1976290 storage/stores.go:312 [n?] wrote 2 node addresses to persistent storage
I170117 10:14:32.860525 1976290 storage/stores.go:312 [n?] wrote 3 node addresses to persistent storage
I170117 10:14:32.860672 1976290 storage/stores.go:312 [n?] wrote 4 node addresses to persistent storage
I170117 10:14:32.862378 1943902 kv/dist_sender.go:367 [n?] unable to determine this node's attributes for replica selection; node is most likely bootstrapping
W170117 10:14:32.862959 1976757 storage/intent_resolver.go:338 [n1,s1,r1/1:/{Min-Table/11}]: failed to push during intent resolution: failed to push "change-replica" id=4e89ac21 key=/Local/Range/"\x93"/RangeDescriptor rw=true pri=0.00818092 iso=SERIALIZABLE stat=PENDING epo=0 ts=1484648072.804897335,1 orig=1484648071.693983802,0 max=1484648071.693983802,0 wto=false rop=false
I170117 10:14:32.866261 1943902 server/node.go:317 [n?] new node allocated ID 5
I170117 10:14:32.866380 1943902 gossip/gossip.go:292 [n5] NodeDescriptor set to node_id:5 address:<network_field:"tcp" address_field:"127.0.0.1:47683" > attrs:<> locality:<>
I170117 10:14:32.866882 1943902 server/node.go:374 [n5] node=5: started with [[]=] engine(s) and attributes []
I170117 10:14:32.866965 1943902 sql/executor.go:322 [n5] creating distSQLPlanner with address {tcp 127.0.0.1:47683}
I170117 10:14:32.867699 1976237 gossip/server.go:263 [n1] refusing gossip from node 5 (max 3 conns); forwarding to 2 ({tcp 127.0.0.1:45964})
I170117 10:14:32.868113 1977274 storage/stores.go:312 [n1] wrote 4 node addresses to persistent storage
I170117 10:14:32.868605 1974856 gossip/client.go:130 [n5] closing client to node 1 (127.0.0.1:52162): received forward from node 1 to 2 (127.0.0.1:45964)
I170117 10:14:32.868899 1977341 storage/stores.go:312 [n4] wrote 4 node addresses to persistent storage
I170117 10:14:32.869324 1977364 storage/stores.go:312 [n2] wrote 4 node addresses to persistent storage
I170117 10:14:32.869865 1977325 storage/stores.go:312 [n3] wrote 4 node addresses to persistent storage
I170117 10:14:32.871036 1943902 server/server.go:629 [n5] starting https server at 127.0.0.1:39444
I170117 10:14:32.871105 1943902 server/server.go:630 [n5] starting grpc/postgres server at 127.0.0.1:47683
I170117 10:14:32.871283 1943902 server/server.go:631 [n5] advertising CockroachDB node at 127.0.0.1:47683
I170117 10:14:32.871958 1977429 gossip/client.go:125 [n5] started gossip client to 127.0.0.1:45964
I170117 10:14:32.879061 1977143 server/node.go:552 [n5] bootstrapped store [n5,s5]
I170117 10:14:32.885614 1943902 server/server.go:686 [n5] done ensuring all necessary migrations have run
I170117 10:14:32.885710 1943902 server/server.go:688 [n5] serving sql connections
I170117 10:14:32.900328 1980287 storage/replica.go:2385 [n1,s1,r2/1:/Table/1{1-2},@c437e4c300] proposing ADD_REPLICA {NodeID:3 StoreID:3 ReplicaID:2}: [{NodeID:1 StoreID:1 ReplicaID:1} {NodeID:3 StoreID:3 ReplicaID:2}]
I170117 10:14:32.902945 1948280 storage/replica_raftstorage.go:410 [replicate,n1,s1,r3/1:/Table/1{2-3},@c436c5b200] generated preemptive snapshot c19f0d7e at index 25
I170117 10:14:32.904054 1948280 storage/store.go:3275 [replicate,n1,s1,r3/1:/Table/1{2-3},@c436c5b200] streamed snapshot: kv pairs: 36, log entries: 15, 1ms
I170117 10:14:32.906263 1980738 storage/replica_raftstorage.go:575 [n3,s3,r3/?:{-},@c4444f2300] applying preemptive snapshot at index 25 (id=c19f0d7e, encoded size=24048, 1 rocksdb batches, 15 log entries)
I170117 10:14:32.908685 1980960 storage/raft_transport.go:437 [n3] raft transport stream to node 1 established
I170117 10:14:32.911404 1980738 storage/replica_raftstorage.go:583 [n3,s3,r3/?:/Table/1{2-3},@c4444f2300] applied preemptive snapshot in 5ms [clear=0ms batch=0ms entries=3ms commit=0ms]
I170117 10:14:32.923045 1948280 storage/replica_command.go:3210 [replicate,n1,s1,r3/1:/Table/1{2-3},@c436c5b200] change replicas (remove {3 3 2}): read existing descriptor range_id:3 start_key:"\224" end_key:"\225" replicas:<node_id:1 store_id:1 replica_id:1 > next_replica_id:2
I170117 10:14:32.929520 1977147 sql/event_log.go:95 [n5] Event: "node_join", target: 5, info: {Descriptor:{NodeID:5 Address:{NetworkField:tcp AddressField:127.0.0.1:47683} Attrs: Locality:} ClusterID:1518aab1-42fd-403c-b8ff-6323ba8a4269 StartedAt:1484648072866847476}
I170117 10:14:32.944446 1983958 storage/replica.go:2385 [n1,s1,r3/1:/Table/1{2-3},@c436c5b200] proposing ADD_REPLICA {NodeID:3 StoreID:3 ReplicaID:2}: [{NodeID:1 StoreID:1 ReplicaID:1} {NodeID:3 StoreID:3 ReplicaID:2}]
I170117 10:14:32.948091 1948280 storage/replica_raftstorage.go:410 [replicate,n1,s1,r5/1:/{Table/14-Max},@c438a2a000] generated preemptive snapshot ffaca17f at index 13
I170117 10:14:32.959184 1948280 storage/store.go:3275 [replicate,n1,s1,r5/1:/{Table/14-Max},@c438a2a000] streamed snapshot: kv pairs: 10, log entries: 3, 1ms
I170117 10:14:32.959759 1985123 storage/replica_raftstorage.go:575 [n5,s5,r5/?:{-},@c43ff45800] applying preemptive snapshot at index 13 (id=ffaca17f, encoded size=1712, 1 rocksdb batches, 3 log entries)
I170117 10:14:32.960309 1985123 storage/replica_raftstorage.go:583 [n5,s5,r5/?:/{Table/14-Max},@c43ff45800] applied preemptive snapshot in 0ms [clear=0ms batch=0ms entries=0ms commit=0ms]
I170117 10:14:32.963330 1948280 storage/replica_command.go:3210 [replicate,n1,s1,r5/1:/{Table/14-Max},@c438a2a000] change replicas (remove {5 5 2}): read existing descriptor range_id:5 start_key:"\226" end_key:"\377\377" replicas:<node_id:1 store_id:1 replica_id:1 > next_replica_id:2
I170117 10:14:32.977497 1986403 storage/replica.go:2385 [n1,s1,r5/1:/{Table/14-Max},@c438a2a000] proposing ADD_REPLICA {NodeID:5 StoreID:5 ReplicaID:2}: [{NodeID:1 StoreID:1 ReplicaID:1} {NodeID:5 StoreID:5 ReplicaID:2}]
I170117 10:14:32.979718 1948280 storage/replica_raftstorage.go:410 [replicate,n1,s1,r1/1:/{Min-Table/11},@c431e0ef00] generated preemptive snapshot fefd4a0f at index 82
I170117 10:14:32.990018 1987423 storage/raft_transport.go:437 [n5] raft transport stream to node 1 established
I170117 10:14:33.004933 1948280 storage/store.go:3275 [replicate,n1,s1,r1/1:/{Min-Table/11},@c431e0ef00] streamed snapshot: kv pairs: 1251, log entries: 72, 8ms
I170117 10:14:33.005672 1987892 storage/replica_raftstorage.go:575 [n4,s4,r1/?:{-},@c435adcf00] applying preemptive snapshot at index 82 (id=fefd4a0f, encoded size=689172, 1 rocksdb batches, 72 log entries)
I170117 10:14:33.011666 1987892 storage/replica_raftstorage.go:583 [n4,s4,r1/?:/{Min-Table/11},@c435adcf00] applied preemptive snapshot in 6ms [clear=0ms batch=0ms entries=3ms commit=2ms]
I170117 10:14:33.013703 1948280 storage/replica_command.go:3210 [replicate,n1,s1,r1/1:/{Min-Table/11},@c431e0ef00] change replicas (remove {4 4 3}): read existing descriptor range_id:1 start_key:"" end_key:"\223" replicas:<node_id:1 store_id:1 replica_id:1 > replicas:<node_id:2 store_id:2 replica_id:2 > next_replica_id:3
I170117 10:14:33.040141 1989217 storage/replica.go:2385 [n1,s1,r1/1:/{Min-Table/11},@c431e0ef00] proposing ADD_REPLICA {NodeID:4 StoreID:4 ReplicaID:3}: [{NodeID:1 StoreID:1 ReplicaID:1} {NodeID:2 StoreID:2 ReplicaID:2} {NodeID:4 StoreID:4 ReplicaID:3}]
I170117 10:14:33.045109 1948280 storage/queue.go:662 [n1,replicate] purgatory is now empty
I170117 10:14:33.046300 1945110 storage/replica_raftstorage.go:410 [replicate,n1,s1,r4/1:/Table/1{3-4},@c4361d9800] generated preemptive snapshot a8c74731 at index 33
I170117 10:14:33.047952 1945110 storage/store.go:3275 [replicate,n1,s1,r4/1:/Table/1{3-4},@c4361d9800] streamed snapshot: kv pairs: 71, log entries: 23, 1ms
I170117 10:14:33.048528 1989758 storage/replica_raftstorage.go:575 [n5,s5,r4/?:{-},@c437b50900] applying preemptive snapshot at index 33 (id=a8c74731, encoded size=37627, 1 rocksdb batches, 23 log entries)
I170117 10:14:33.051376 1989758 storage/replica_raftstorage.go:583 [n5,s5,r4/?:/Table/1{3-4},@c437b50900] applied preemptive snapshot in 3ms [clear=0ms batch=0ms entries=2ms commit=0ms]
I170117 10:14:33.054249 1945110 storage/replica_command.go:3210 [replicate,n1,s1,r4/1:/Table/1{3-4},@c4361d9800] change replicas (remove {5 5 3}): read existing descriptor range_id:4 start_key:"\225" end_key:"\226" replicas:<node_id:1 store_id:1 replica_id:1 > replicas:<node_id:2 store_id:2 replica_id:2 > next_replica_id:3
I170117 10:14:33.058134 1990851 storage/raft_transport.go:437 [n4] raft transport stream to node 1 established
I170117 10:14:33.076887 1992278 storage/replica.go:2385 [n1,s1,r4/1:/Table/1{3-4},@c4361d9800] proposing ADD_REPLICA {NodeID:5 StoreID:5 ReplicaID:3}: [{NodeID:1 StoreID:1 ReplicaID:1} {NodeID:2 StoreID:2 ReplicaID:2} {NodeID:5 StoreID:5 ReplicaID:3}]
I170117 10:14:33.096275 1945110 storage/replica_raftstorage.go:410 [replicate,n1,s1,r5/1:/{Table/14-Max},@c438a2a000] generated preemptive snapshot 92c3bb82 at index 16
I170117 10:14:33.097886 1945110 storage/store.go:3275 [replicate,n1,s1,r5/1:/{Table/14-Max},@c438a2a000] streamed snapshot: kv pairs: 11, log entries: 6, 0ms
I170117 10:14:33.099214 1993185 storage/replica_raftstorage.go:575 [n4,s4,r5/?:{-},@c42d02a600] applying preemptive snapshot at index 16 (id=92c3bb82, encoded size=4673, 1 rocksdb batches, 6 log entries)
I170117 10:14:33.099815 1993185 storage/replica_raftstorage.go:583 [n4,s4,r5/?:/{Table/14-Max},@c42d02a600] applied preemptive snapshot in 1ms [clear=0ms batch=0ms entries=0ms commit=0ms]
I170117 10:14:33.101931 1945110 storage/replica_command.go:3210 [replicate,n1,s1,r5/1:/{Table/14-Max},@c438a2a000] change replicas (remove {4 4 3}): read existing descriptor range_id:5 start_key:"\226" end_key:"\377\377" replicas:<node_id:1 store_id:1 replica_id:1 > replicas:<node_id:5 store_id:5 replica_id:2 > next_replica_id:3
W170117 10:14:33.116419 357864 gossip/gossip.go:1143 [n2] first range unavailable; trying remaining resolvers
I170117 10:14:33.116737 1994780 gossip/client.go:125 [n2] started gossip client to 127.0.0.1:57345
I170117 10:14:33.125943 1995537 storage/replica.go:2385 [n1,s1,r5/1:/{Table/14-Max},@c438a2a000] proposing ADD_REPLICA {NodeID:4 StoreID:4 ReplicaID:3}: [{NodeID:1 StoreID:1 ReplicaID:1} {NodeID:5 StoreID:5 ReplicaID:2} {NodeID:4 StoreID:4 ReplicaID:3}]
I170117 10:14:33.138819 1945110 storage/replica_raftstorage.go:410 [replicate,n1,s1,r2/1:/Table/1{1-2},@c437e4c300] generated preemptive snapshot 7c6d8b00 at index 27
I170117 10:14:33.140535 1945110 storage/store.go:3275 [replicate,n1,s1,r2/1:/Table/1{1-2},@c437e4c300] streamed snapshot: kv pairs: 12, log entries: 17, 1ms
I170117 10:14:33.141171 1996976 storage/replica_raftstorage.go:575 [n5,s5,r2/?:{-},@c42d02a900] applying preemptive snapshot at index 27 (id=7c6d8b00, encoded size=20333, 1 rocksdb batches, 17 log entries)
I170117 10:14:33.142085 1996976 storage/replica_raftstorage.go:583 [n5,s5,r2/?:/Table/1{1-2},@c42d02a900] applied preemptive snapshot in 1ms [clear=0ms batch=0ms entries=0ms commit=0ms]
I170117 10:14:33.143900 1945110 storage/replica_command.go:3210 [replicate,n1,s1,r2/1:/Table/1{1-2},@c437e4c300] change replicas (remove {5 5 3}): read existing descriptor range_id:2 start_key:"\223" end_key:"\224" replicas:<node_id:1 store_id:1 replica_id:1 > replicas:<node_id:3 store_id:3 replica_id:2 > next_replica_id:3
I170117 10:14:33.175911 1999007 storage/replica.go:2385 [n1,s1,r2/1:/Table/1{1-2},@c437e4c300] proposing ADD_REPLICA {NodeID:5 StoreID:5 ReplicaID:3}: [{NodeID:1 StoreID:1 ReplicaID:1} {NodeID:3 StoreID:3 ReplicaID:2} {NodeID:5 StoreID:5 ReplicaID:3}]
I170117 10:14:33.192385 1945110 storage/replica_raftstorage.go:410 [replicate,n1,s1,r3/1:/Table/1{2-3},@c436c5b200] generated preemptive snapshot c022e25b at index 30
I170117 10:14:33.195462 1945110 storage/store.go:3275 [replicate,n1,s1,r3/1:/Table/1{2-3},@c436c5b200] streamed snapshot: kv pairs: 42, log entries: 20, 3ms
I170117 10:14:33.196753 2000488 storage/replica_raftstorage.go:575 [n4,s4,r3/?:{-},@c42d02af00] applying preemptive snapshot at index 30 (id=c022e25b, encoded size=31362, 1 rocksdb batches, 20 log entries)
I170117 10:14:33.198380 2000488 storage/replica_raftstorage.go:583 [n4,s4,r3/?:/Table/1{2-3},@c42d02af00] applied preemptive snapshot in 1ms [clear=0ms batch=0ms entries=1ms commit=0ms]
I170117 10:14:33.203206 1945110 storage/replica_command.go:3210 [replicate,n1,s1,r3/1:/Table/1{2-3},@c436c5b200] change replicas (remove {4 4 3}): read existing descriptor range_id:3 start_key:"\224" end_key:"\225" replicas:<node_id:1 store_id:1 replica_id:1 > replicas:<node_id:3 store_id:3 replica_id:2 > next_replica_id:3
I170117 10:14:33.248672 2003182 storage/replica.go:2385 [n1,s1,r3/1:/Table/1{2-3},@c436c5b200] proposing ADD_REPLICA {NodeID:4 StoreID:4 ReplicaID:3}: [{NodeID:1 StoreID:1 ReplicaID:1} {NodeID:3 StoreID:3 ReplicaID:2} {NodeID:4 StoreID:4 ReplicaID:3}]
I170117 10:14:33.422856 2006556 storage/replica_command.go:2354 [n1,s1,r1/1:/{Min-Table/11},@c431e0ef00] initiating a split of this range at key "m" [r6]
I170117 10:14:33.466334 1943902 storage/replica_raftstorage.go:410 [n1,s1,r6/1:{"m"-/Table/11},@c4354f6600] generated preemptive snapshot 4b7bca36 at index 10
I170117 10:14:33.466997 1943902 storage/store.go:3275 [n1,s1,r6/1:{"m"-/Table/11},@c4354f6600] streamed snapshot: kv pairs: 32, log entries: 0, 0ms
I170117 10:14:33.467701 2010658 storage/replica_raftstorage.go:575 [n3,s3,r6/?:{-},@c42d02b800] applying preemptive snapshot at index 10 (id=4b7bca36, encoded size=5486, 1 rocksdb batches, 0 log entries)
I170117 10:14:33.468143 2010658 storage/replica_raftstorage.go:583 [n3,s3,r6/?:{"m"-/Table/11},@c42d02b800] applied preemptive snapshot in 0ms [clear=0ms batch=0ms entries=0ms commit=0ms]
I170117 10:14:33.470143 1943902 storage/replica_command.go:3210 [n1,s1,r6/1:{"m"-/Table/11},@c4354f6600] change replicas (remove {3 3 4}): read existing descriptor range_id:6 start_key:"m" end_key:"\223" replicas:<node_id:1 store_id:1 replica_id:1 > replicas:<node_id:2 store_id:2 replica_id:2 > replicas:<node_id:4 store_id:4 replica_id:3 > next_replica_id:4
I170117 10:14:33.497104 2012956 storage/replica.go:2385 [n1,s1,r6/1:{"m"-/Table/11},@c4354f6600] proposing ADD_REPLICA {NodeID:3 StoreID:3 ReplicaID:4}: [{NodeID:1 StoreID:1 ReplicaID:1} {NodeID:2 StoreID:2 ReplicaID:2} {NodeID:4 StoreID:4 ReplicaID:3} {NodeID:3 StoreID:3 ReplicaID:4}]
I170117 10:14:33.510599 1943902 storage/replica_raftstorage.go:410 [n1,s1,r6/1:{"m"-/Table/11},@c4354f6600] generated preemptive snapshot d056e62f at index 14
I170117 10:14:33.512959 1943902 storage/store.go:3275 [n1,s1,r6/1:{"m"-/Table/11},@c4354f6600] streamed snapshot: kv pairs: 34, log entries: 4, 2ms
I170117 10:14:33.515650 2014571 storage/replica_raftstorage.go:575 [n5,s5,r6/?:{-},@c435360000] applying preemptive snapshot at index 14 (id=d056e62f, encoded size=9349, 1 rocksdb batches, 4 log entries)
I170117 10:14:33.516970 2014571 storage/replica_raftstorage.go:583 [n5,s5,r6/?:{"m"-/Table/11},@c435360000] applied preemptive snapshot in 1ms [clear=0ms batch=0ms entries=0ms commit=0ms]
I170117 10:14:33.520052 1943902 storage/replica_command.go:3210 [n1,s1,r6/1:{"m"-/Table/11},@c4354f6600] change replicas (remove {5 5 5}): read existing descriptor range_id:6 start_key:"m" end_key:"\223" replicas:<node_id:1 store_id:1 replica_id:1 > replicas:<node_id:2 store_id:2 replica_id:2 > replicas:<node_id:4 store_id:4 replica_id:3 > replicas:<node_id:3 store_id:3 replica_id:4 > next_replica_id:5
I170117 10:14:33.541909 2016813 storage/replica.go:2385 [n1,s1,r6/1:{"m"-/Table/11},@c4354f6600] proposing ADD_REPLICA {NodeID:5 StoreID:5 ReplicaID:5}: [{NodeID:1 StoreID:1 ReplicaID:1} {NodeID:2 StoreID:2 ReplicaID:2} {NodeID:4 StoreID:4 ReplicaID:3} {NodeID:3 StoreID:3 ReplicaID:4} {NodeID:5 StoreID:5 ReplicaID:5}]
I170117 10:14:34.872680 1945110 storage/replica_command.go:3210 [replicate,n1,s1,r6/1:{"m"-/Table/11},@c4354f6600] change replicas (remove {3 3 4}): read existing descriptor range_id:6 start_key:"m" end_key:"\223" replicas:<node_id:1 store_id:1 replica_id:1 > replicas:<node_id:2 store_id:2 replica_id:2 > replicas:<node_id:4 store_id:4 replica_id:3 > replicas:<node_id:3 store_id:3 replica_id:4 > replicas:<node_id:5 store_id:5 replica_id:5 > next_replica_id:6
I170117 10:14:34.908086 2030172 storage/replica.go:2385 [n1,s1,r6/1:{"m"-/Table/11},@c4354f6600] proposing REMOVE_REPLICA {NodeID:3 StoreID:3 ReplicaID:4}: [{NodeID:1 StoreID:1 ReplicaID:1} {NodeID:2 StoreID:2 ReplicaID:2} {NodeID:4 StoreID:4 ReplicaID:3} {NodeID:5 StoreID:5 ReplicaID:5}]
I170117 10:14:34.914566 1981018 storage/store.go:3131 [n3,s3,r6/4:{"m"-/Table/11},@c42d02b800] added to replica GC queue (peer suggestion)
I170117 10:14:34.925424 1945110 storage/replica_command.go:3210 [replicate,n1,s1,r6/1:{"m"-/Table/11},@c4354f6600] change replicas (remove {2 2 2}): read existing descriptor range_id:6 start_key:"m" end_key:"\223" replicas:<node_id:1 store_id:1 replica_id:1 > replicas:<node_id:2 store_id:2 replica_id:2 > replicas:<node_id:4 store_id:4 replica_id:3 > replicas:<node_id:5 store_id:5 replica_id:5 > next_replica_id:6
I170117 10:14:34.940261 1962523 storage/store.go:2106 [replicaGC,n3,s3,r6/4:{"m"-/Table/11},@c42d02b800] removing replica
I170117 10:14:34.946970 1962523 storage/replica.go:731 [replicaGC,n3,s3,r6/4:{"m"-/Table/11},@c42d02b800] removed 30 (19+11) keys in 0ms [clear=0ms commit=0ms]
I170117 10:14:34.969108 2034573 storage/replica.go:2385 [n1,s1,r6/1:{"m"-/Table/11},@c4354f6600] proposing REMOVE_REPLICA {NodeID:2 StoreID:2 ReplicaID:2}: [{NodeID:1 StoreID:1 ReplicaID:1} {NodeID:5 StoreID:5 ReplicaID:5} {NodeID:4 StoreID:4 ReplicaID:3}]
I170117 10:14:34.974417 1963163 storage/store.go:3131 [n2,s2,r6/2:{"m"-/Table/11},@c437b51800] added to replica GC queue (peer suggestion)
I170117 10:14:34.997322 1957576 storage/store.go:2106 [replicaGC,n2,s2,r6/2:{"m"-/Table/11},@c437b51800] removing replica
I170117 10:14:34.998497 1957576 storage/replica.go:731 [replicaGC,n2,s2,r6/2:{"m"-/Table/11},@c437b51800] removed 30 (19+11) keys in 0ms [clear=0ms commit=0ms]
W170117 10:14:35.636993 1990851 storage/raft_transport.go:443 [n4] raft transport stream to node 1 failed: rpc error: code = 13 desc = transport is closing
W170117 10:14:35.637526 1962769 storage/raft_transport.go:443 [n1] raft transport stream to node 2 failed: EOF
W170117 10:14:35.637678 1980724 storage/raft_transport.go:443 [n1] raft transport stream to node 3 failed: EOF
W170117 10:14:35.637810 1980960 storage/raft_transport.go:443 [n3] raft transport stream to node 1 failed: rpc error: code = 13 desc = transport is closing
I170117 10:14:35.637846 1946120 vendor/google.golang.org/grpc/transport/http2_client.go:1123 transport: http2Client.notifyError got notified that the client transport was broken EOF.
W170117 10:14:35.637984 1987360 storage/raft_transport.go:443 [n1] raft transport stream to node 5 failed: EOF
I170117 10:14:35.638036 1945608 vendor/google.golang.org/grpc/clientconn.go:766 grpc: addrConn.resetTransport failed to create client transport: connection error: desc = "transport: dial tcp 127.0.0.1:52162: operation was canceled"; Reconnecting to {127.0.0.1:52162 <nil>}
W170117 10:14:35.638081 1963131 storage/raft_transport.go:443 [n2] raft transport stream to node 1 failed: rpc error: code = 13 desc = transport is closing
I170117 10:14:35.638147 1946163 vendor/google.golang.org/grpc/transport/http2_server.go:320 transport: http2Server.HandleStreams failed to read frame: read tcp 127.0.0.1:52162->127.0.0.1:49922: use of closed network connection
I170117 10:14:35.638193 1945608 vendor/google.golang.org/grpc/clientconn.go:866 grpc: addrConn.transportMonitor exits due to: grpc: the connection is closing
W170117 10:14:35.638255 1987423 storage/raft_transport.go:443 [n5] raft transport stream to node 1 failed: rpc error: code = 13 desc = transport is closing
leaktest.go:93: Leaked goroutine: goroutine 1994780 [select]:
github.com/cockroachdb/cockroach/pkg/gossip.(*client).gossip(0xc427e031e0, 0x2b20b54b0230, 0xc42e692180, 0xc426813b00, 0x261ad80, 0xc42b9a0020, 0xc428c4e870, 0xc42ba8f7c0, 0x0, 0x0)
/go/src/github.com/cockroachdb/cockroach/pkg/gossip/client.go:309 +0x3ec
github.com/cockroachdb/cockroach/pkg/gossip.(*client).start.func1()
/go/src/github.com/cockroachdb/cockroach/pkg/gossip/client.go:126 +0x4f7
github.com/cockroachdb/cockroach/pkg/util/stop.(*Stopper).RunWorker.func1(0xc428c4e870, 0xc428272340)
/go/src/github.com/cockroachdb/cockroach/pkg/util/stop/stopper.go:196 +0x7d
created by github.com/cockroachdb/cockroach/pkg/util/stop.(*Stopper).RunWorker
/go/src/github.com/cockroachdb/cockroach/pkg/util/stop/stopper.go:197 +0x66
leaktest.go:93: Leaked goroutine: goroutine 1994786 [select]:
github.com/cockroachdb/cockroach/vendor/google.golang.org/grpc.newClientStream.func3(0x261a8a0, 0xc426a0fa40, 0xc43e79b500, 0xc435b6f7c0)
/go/src/github.com/cockroachdb/cockroach/vendor/google.golang.org/grpc/stream.go:234 +0x426
created by github.com/cockroachdb/cockroach/vendor/google.golang.org/grpc.newClientStream
/go/src/github.com/cockroachdb/cockroach/vendor/google.golang.org/grpc/stream.go:254 +0xcba
leaktest.go:93: Leaked goroutine: goroutine 1994796 [select]:
github.com/cockroachdb/cockroach/pkg/gossip.(*server).Gossip(0xc424c3b800, 0x261ade0, 0xc42b9a01f0, 0x0, 0x0)
/go/src/github.com/cockroachdb/cockroach/pkg/gossip/server.go:190 +0x57b
github.com/cockroachdb/cockroach/pkg/gossip._Gossip_Gossip_Handler(0x19d6280, 0xc424c3b800, 0x2619280, 0xc42d94c360, 0xc41f7f15a4, 0xc4208e8d70)
/go/src/github.com/cockroachdb/cockroach/pkg/gossip/gossip.pb.go:209 +0xbb
github.com/cockroachdb/cockroach/vendor/google.golang.org/grpc.(*Server).processStreamingRPC(0xc424c3b740, 0x261a900, 0xc42aeced00, 0xc43e79b600, 0xc4258697d0, 0x25e32a0, 0x0, 0x0, 0x0)
/go/src/github.com/cockroachdb/cockroach/vendor/google.golang.org/grpc/server.go:807 +0x7d6
github.com/cockroachdb/cockroach/vendor/google.golang.org/grpc.(*Server).handleStream(0xc424c3b740, 0x261a900, 0xc42aeced00, 0xc43e79b600, 0x0)
/go/src/github.com/cockroachdb/cockroach/vendor/google.golang.org/grpc/server.go:897 +0xc36
github.com/cockroachdb/cockroach/vendor/google.golang.org/grpc.(*Server).serveStreams.func1.1(0xc42af60a10, 0xc424c3b740, 0x261a900, 0xc42aeced00, 0xc43e79b600)
/go/src/github.com/cockroachdb/cockroach/vendor/google.golang.org/grpc/server.go:469 +0xab
created by github.com/cockroachdb/cockroach/vendor/google.golang.org/grpc.(*Server).serveStreams.func1
/go/src/github.com/cockroachdb/cockroach/vendor/google.golang.org/grpc/server.go:470 +0xa3
leaktest.go:93: Leaked goroutine: goroutine 1994846 [select]:
github.com/cockroachdb/cockroach/vendor/google.golang.org/grpc/transport.(*recvBufferReader).Read(0xc42e692800, 0xc42e143f30, 0x5, 0x5, 0x0, 0x0, 0x0)
/go/src/github.com/cockroachdb/cockroach/vendor/google.golang.org/grpc/transport/transport.go:140 +0x69a
github.com/cockroachdb/cockroach/vendor/google.golang.org/grpc/transport.(*Stream).Read(0xc43e79b600, 0xc42e143f30, 0x5, 0x5, 0xc444530b00, 0xc4426ed8e8, 0x640f40)
/go/src/github.com/cockroachdb/cockroach/vendor/google.golang.org/grpc/transport/transport.go:325 +0x5c
io.ReadAtLeast(0x2601460, 0xc43e79b600, 0xc42e143f30, 0x5, 0x5, 0x5, 0x32, 0x32, 0x0)
/usr/local/go/src/io/io.go:307 +0xa4
io.ReadFull(0x2601460, 0xc43e79b600, 0xc42e143f30, 0x5, 0x5, 0xc427588940, 0x37, 0x37)
/usr/local/go/src/io/io.go:325 +0x58
github.com/cockroachdb/cockroach/vendor/google.golang.org/grpc.(*parser).recvMsg(0xc42e143f20, 0x7fffffff, 0x1967ee0, 0x4, 0x0, 0x7, 0x0, 0x0)
/go/src/github.com/cockroachdb/cockroach/vendor/google.golang.org/grpc/rpc_util.go:233 +0x6f
github.com/cockroachdb/cockroach/vendor/google.golang.org/grpc.recv(0xc42e143f20, 0x2612860, 0x2b21650, 0xc43e79b600, 0x0, 0x0, 0x197b060, 0xc42e8af1c0, 0x7fffffff, 0x0, ...)
/go/src/github.com/cockroachdb/cockroach/vendor/google.golang.org/grpc/rpc_util.go:329 +0x4d
github.com/cockroachdb/cockroach/vendor/google.golang.org/grpc.(*serverStream).RecvMsg(0xc42d94c360, 0x197b060, 0xc42e8af1c0, 0x0, 0x0)
/go/src/github.com/cockroachdb/cockroach/vendor/google.golang.org/grpc/stream.go:607 +0x11e
github.com/cockroachdb/cockroach/pkg/gossip.(*gossipGossipServer).Recv(0xc42b9a01f0, 0xdec3fc, 0x18a5e20, 0xc424c3b830)
/go/src/github.com/cockroachdb/cockroach/pkg/gossip/gossip.pb.go:228 +0x62
github.com/cockroachdb/cockroach/pkg/gossip.(Gossip_GossipServer).Recv-fm(0xc424c3b830, 0x0, 0x0)
/go/src/github.com/cockroachdb/cockroach/pkg/gossip/server.go:153 +0x2f
github.com/cockroachdb/cockroach/pkg/gossip.(*server).gossipReceiver(0xc424c3b800, 0x2b20b54b0230, 0xc42e692880, 0xc4331b95f8, 0xc42e6928c0, 0xc4426edf40, 0x0, 0x0)
/go/src/github.com/cockroachdb/cockroach/pkg/gossip/server.go:321 +0x3fe
github.com/cockroachdb/cockroach/pkg/gossip.(*server).Gossip.func3.1()
/go/src/github.com/cockroachdb/cockroach/pkg/gossip/server.go:153 +0x99
github.com/cockroachdb/cockroach/pkg/util/stop.(*Stopper).RunWorker.func1(0xc428c4e870, 0xc43e4252c0)
/go/src/github.com/cockroachdb/cockroach/pkg/util/stop/stopper.go:196 +0x7d
created by github.com/cockroachdb/cockroach/pkg/util/stop.(*Stopper).RunWorker
/go/src/github.com/cockroachdb/cockroach/pkg/util/stop/stopper.go:197 +0x66
leaktest.go:93: Leaked goroutine: goroutine 1994921 [select]:
github.com/cockroachdb/cockroach/vendor/google.golang.org/grpc/transport.(*recvBufferReader).Read(0xc42e692280, 0xc42e143cd0, 0x5, 0x5, 0x0, 0x0, 0x0)
/go/src/github.com/cockroachdb/cockroach/vendor/google.golang.org/grpc/transport/transport.go:140 +0x69a
github.com/cockroachdb/cockroach/vendor/google.golang.org/grpc/transport.(*Stream).Read(0xc43e79b500, 0xc42e143cd0, 0x5, 0x5, 0xc443afcbc8, 0x5fd401, 0xc42a331fa8)
/go/src/github.com/cockroachdb/cockroach/vendor/google.golang.org/grpc/transport/transport.go:325 +0x5c
io.ReadAtLeast(0x2601460, 0xc43e79b500, 0xc42e143cd0, 0x5, 0x5, 0x5, 0x1, 0xc443afcc38, 0x5f58bf)
/usr/local/go/src/io/io.go:307 +0xa4
io.ReadFull(0x2601460, 0xc43e79b500, 0xc42e143cd0, 0x5, 0x5, 0xc42ba1ed08, 0xc42ba1ed00, 0x25ee810)
/usr/local/go/src/io/io.go:325 +0x58
github.com/cockroachdb/cockroach/vendor/google.golang.org/grpc.(*parser).recvMsg(0xc42e143cc0, 0x7fffffff, 0xc4267ecf30, 0xc443afce88, 0xb0b82e, 0xc4267ecf30, 0xc429a80de0, 0xc400000003)
/go/src/github.com/cockroachdb/cockroach/vendor/google.golang.org/grpc/rpc_util.go:233 +0x6f
github.com/cockroachdb/cockroach/vendor/google.golang.org/grpc.recv(0xc42e143cc0, 0x2612860, 0x2b21650, 0xc43e79b500, 0x0, 0x0, 0x1967ee0, 0xc43136fcc0, 0x7fffffff, 0x0, ...)
/go/src/github.com/cockroachdb/cockroach/vendor/google.golang.org/grpc/rpc_util.go:329 +0x4d
github.com/cockroachdb/cockroach/vendor/google.golang.org/grpc.(*clientStream).RecvMsg(0xc435b6f7c0, 0x1967ee0, 0xc43136fcc0, 0x0, 0x0)
/go/src/github.com/cockroachdb/cockroach/vendor/google.golang.org/grpc/stream.go:382 +0x11b
github.com/cockroachdb/cockroach/pkg/gossip.(*gossipGossipClient).Recv(0xc42b9a0020, 0x2b20b54b0230, 0xc42e692180, 0xc426813b00)
/go/src/github.com/cockroachdb/cockroach/pkg/gossip/gossip.pb.go:192 +0x62
github.com/cockroachdb/cockroach/pkg/gossip.(*client).gossip.func2.1(0x261ad80, 0xc42b9a0020, 0xc427e031e0, 0x2b20b54b0230, 0xc42e692180, 0xc426813b00, 0x616443, 0xc42f0e3f48)
/go/src/github.com/cockroachdb/cockroach/pkg/gossip/client.go:297 +0x35
github.com/cockroachdb/cockroach/pkg/gossip.(*client).gossip.func2()
/go/src/github.com/cockroachdb/cockroach/pkg/gossip/client.go:305 +0xd7
github.com/cockroachdb/cockroach/pkg/util/stop.(*Stopper).RunWorker.func1(0xc428c4e870, 0xc435cbe870)
/go/src/github.com/cockroachdb/cockroach/pkg/util/stop/stopper.go:196 +0x7d
created by github.com/cockroachdb/cockroach/pkg/util/stop.(*Stopper).RunWorker
/go/src/github.com/cockroachdb/cockroach/pkg/util/stop/stopper.go:197 +0x66
```
|
non_process
|
github com cockroachdb cockroach pkg storage testreplicatequeuedownreplicate failed under stress sha parameters cockroach proposer evaluated kv true tags deadlock goflags stress build found a failed test server status runtime go could not parse build timestamp parsing time as cannot parse as server config go storage engine initialized server node go store not bootstrapped server node go cluster has been created server node go add additional nodes by specifying join storage store go failed initial metrics computation system config not yet available server node go initialized store capacity available rangecount leasecount server node go node id initialized gossip gossip go nodedescriptor set to node id address attrs locality storage stores go read node addresses from persistent storage server node go connecting to gossip network to verify cluster id server node go node connected via gossip and verified as part of cluster server node go node started with engine s and attributes sql executor go creating distsqlplanner with address tcp server server go starting https server at server server go starting grpc postgres server at server server go advertising cockroachdb node at storage split queue go splitting at keys storage replica command go initiating a split of this range at key table storage queue go purgatory of store with an attribute matching likely not enough nodes in cluster storage queue go unable to split at key table key range table table outside of bounds of range min max storage split queue go splitting at keys storage replica command go initiating a split of this range at key table storage queue go purgatory of store with an attribute matching likely not enough nodes in cluster sql event log go event node join target info descriptor nodeid address networkfield tcp addressfield attrs locality clusterid startedat sql event log go event alter table target info tablename eventlog statement alter table system eventlog alter column uniqueid set default uuid user node mutationid cascadedroppedviews storage queue go purgatory of store with an attribute matching likely not enough nodes in cluster storage queue go unable to split at key table key range table table outside of bounds of range table max storage split queue go splitting at keys storage queue go purgatory of store with an attribute matching likely not enough nodes in cluster storage replica command go initiating a split of this range at key table storage queue go purgatory of store with an attribute matching likely not enough nodes in cluster storage queue go purgatory of store with an attribute matching likely not enough nodes in cluster storage queue go purgatory of store with an attribute matching likely not enough nodes in cluster storage queue go purgatory of store with an attribute matching likely not enough nodes in cluster storage queue go unable to split at key table key range table table outside of bounds of range table max storage split queue go splitting at keys storage replica command go initiating a split of this range at key table server server go done ensuring all necessary migrations have run server server go serving sql connections storage queue go purgatory of store with an attribute matching likely not enough nodes in cluster storage queue go purgatory of store with an attribute matching likely not enough nodes in cluster storage queue go purgatory of store with an attribute matching likely not enough nodes in cluster storage queue go purgatory of store with an attribute matching likely not enough nodes in cluster storage queue go purgatory of store with an attribute matching likely not enough nodes in cluster gossip gossip go no incoming or outgoing connections server status runtime go could not parse build timestamp parsing time as cannot parse as server config go storage engine initialized server node go store not bootstrapped storage stores go read node addresses from persistent storage server node go connecting to gossip network to verify cluster id gossip server go received initial cluster verification connection from tcp gossip client go started gossip client to storage stores go wrote node addresses to persistent storage server node go node connected via gossip and verified as part of cluster kv dist sender go unable to determine this node s attributes for replica selection node is most likely bootstrapping server node go new node allocated id gossip gossip go nodedescriptor set to node id address attrs locality server node go node started with engine s and attributes sql executor go creating distsqlplanner with address tcp storage stores go wrote node addresses to persistent storage server server go starting https server at server server go starting grpc postgres server at server server go advertising cockroachdb node at server server go done ensuring all necessary migrations have run server server go serving sql connections server node go bootstrapped store storage queue go purgatory of store with an attribute matching likely not enough nodes in cluster storage queue go purgatory of store with an attribute matching likely not enough nodes in cluster storage queue go purgatory of store with an attribute matching likely not enough nodes in cluster storage queue go purgatory of store with an attribute matching likely not enough nodes in cluster storage queue go purgatory of store with an attribute matching likely not enough nodes in cluster gossip gossip go no incoming or outgoing connections storage replica raftstorage go generated preemptive snapshot at index server status runtime go could not parse build timestamp parsing time as cannot parse as server config go storage engine initialized server node go store not bootstrapped storage stores go read node addresses from persistent storage server node go connecting to gossip network to verify cluster id storage store go streamed snapshot kv pairs log entries storage replica raftstorage go applying preemptive snapshot at index id encoded size rocksdb batches log entries storage replica raftstorage go applied preemptive snapshot in storage replica command go change replicas remove read existing descriptor range id start key end key replicas next replica id gossip client go started gossip client to gossip server go received initial cluster verification connection from tcp sql event log go event node join target info descriptor nodeid address networkfield tcp addressfield attrs locality clusterid startedat storage stores go wrote node addresses to persistent storage server node go node connected via gossip and verified as part of cluster storage stores go wrote node addresses to persistent storage kv dist sender go unable to determine this node s attributes for replica selection node is most likely bootstrapping storage replica go proposing add replica nodeid storeid replicaid server node go new node allocated id gossip gossip go nodedescriptor set to node id address attrs locality server node go node started with engine s and attributes sql executor go creating distsqlplanner with address tcp storage stores go wrote node addresses to persistent storage storage stores go wrote node addresses to persistent storage server server go starting https server at server server go starting grpc postgres server at server server go advertising cockroachdb node at storage replica raftstorage go generated preemptive snapshot at index server node go bootstrapped store storage store go streamed snapshot kv pairs log entries storage replica raftstorage go applying preemptive snapshot at index id encoded size rocksdb batches log entries storage raft transport go raft transport stream to node established server server go done ensuring all necessary migrations have run server server go serving sql connections storage replica raftstorage go applied preemptive snapshot in storage replica command go change replicas remove read existing descriptor range id start key end key replicas next replica id gossip gossip go no incoming or outgoing connections server status runtime go could not parse build timestamp parsing time as cannot parse as server config go storage engine initialized sql event log go event node join target info descriptor nodeid address networkfield tcp addressfield attrs locality clusterid startedat server node go store not bootstrapped storage stores go read node addresses from persistent storage server node go connecting to gossip network to verify cluster id storage replica go proposing add replica nodeid storeid replicaid gossip client go started gossip client to storage replica raftstorage go generated preemptive snapshot at index gossip server go received initial cluster verification connection from tcp server node go node connected via gossip and verified as part of cluster storage stores go wrote node addresses to persistent storage storage stores go wrote node addresses to persistent storage storage stores go wrote node addresses to persistent storage kv dist sender go unable to determine this node s attributes for replica selection node is most likely bootstrapping storage store go streamed snapshot kv pairs log entries storage replica raftstorage go applying preemptive snapshot at index id encoded size rocksdb batches log entries storage replica raftstorage go applied preemptive snapshot in storage replica command go change replicas remove read existing descriptor range id start key end key replicas next replica id server node go new node allocated id gossip gossip go nodedescriptor set to node id address attrs locality server node go node started with engine s and attributes sql executor go creating distsqlplanner with address tcp server server go starting https server at server server go starting grpc postgres server at server server go advertising cockroachdb node at storage stores go wrote node addresses to persistent storage storage stores go wrote node addresses to persistent storage storage stores go wrote node addresses to persistent storage server server go done ensuring all necessary migrations have run server server go serving sql connections server node go bootstrapped store gossip gossip go no incoming or outgoing connections server status runtime go could not parse build timestamp parsing time as cannot parse as server config go storage engine initialized server node go store not bootstrapped storage stores go read node addresses from persistent storage server node go connecting to gossip network to verify cluster id sql event log go event node join target info descriptor nodeid address networkfield tcp addressfield attrs locality clusterid startedat gossip client go started gossip client to storage replica command go change replicas remove read existing descriptor range id start key end key replicas next replica id gossip server go received initial cluster verification connection from tcp server node go node connected via gossip and verified as part of cluster storage stores go wrote node addresses to persistent storage storage stores go wrote node addresses to persistent storage storage stores go wrote node addresses to persistent storage storage stores go wrote node addresses to persistent storage kv dist sender go unable to determine this node s attributes for replica selection node is most likely bootstrapping storage intent resolver go failed to push during intent resolution failed to push change replica id key local range rangedescriptor rw true pri iso serializable stat pending epo ts orig max wto false rop false server node go new node allocated id gossip gossip go nodedescriptor set to node id address attrs locality server node go node started with engine s and attributes sql executor go creating distsqlplanner with address tcp gossip server go refusing gossip from node max conns forwarding to tcp storage stores go wrote node addresses to persistent storage gossip client go closing client to node received forward from node to storage stores go wrote node addresses to persistent storage storage stores go wrote node addresses to persistent storage storage stores go wrote node addresses to persistent storage server server go starting https server at server server go starting grpc postgres server at server server go advertising cockroachdb node at gossip client go started gossip client to server node go bootstrapped store server server go done ensuring all necessary migrations have run server server go serving sql connections storage replica go proposing add replica nodeid storeid replicaid storage replica raftstorage go generated preemptive snapshot at index storage store go streamed snapshot kv pairs log entries storage replica raftstorage go applying preemptive snapshot at index id encoded size rocksdb batches log entries storage raft transport go raft transport stream to node established storage replica raftstorage go applied preemptive snapshot in storage replica command go change replicas remove read existing descriptor range id start key end key replicas next replica id sql event log go event node join target info descriptor nodeid address networkfield tcp addressfield attrs locality clusterid startedat storage replica go proposing add replica nodeid storeid replicaid storage replica raftstorage go generated preemptive snapshot at index storage store go streamed snapshot kv pairs log entries storage replica raftstorage go applying preemptive snapshot at index id encoded size rocksdb batches log entries storage replica raftstorage go applied preemptive snapshot in storage replica command go change replicas remove read existing descriptor range id start key end key replicas next replica id storage replica go proposing add replica nodeid storeid replicaid storage replica raftstorage go generated preemptive snapshot at index storage raft transport go raft transport stream to node established storage store go streamed snapshot kv pairs log entries storage replica raftstorage go applying preemptive snapshot at index id encoded size rocksdb batches log entries storage replica raftstorage go applied preemptive snapshot in storage replica command go change replicas remove read existing descriptor range id start key end key replicas replicas next replica id storage replica go proposing add replica nodeid storeid replicaid storage queue go purgatory is now empty storage replica raftstorage go generated preemptive snapshot at index storage store go streamed snapshot kv pairs log entries storage replica raftstorage go applying preemptive snapshot at index id encoded size rocksdb batches log entries storage replica raftstorage go applied preemptive snapshot in storage replica command go change replicas remove read existing descriptor range id start key end key replicas replicas next replica id storage raft transport go raft transport stream to node established storage replica go proposing add replica nodeid storeid replicaid storage replica raftstorage go generated preemptive snapshot at index storage store go streamed snapshot kv pairs log entries storage replica raftstorage go applying preemptive snapshot at index id encoded size rocksdb batches log entries storage replica raftstorage go applied preemptive snapshot in storage replica command go change replicas remove read existing descriptor range id start key end key replicas replicas next replica id gossip gossip go first range unavailable trying remaining resolvers gossip client go started gossip client to storage replica go proposing add replica nodeid storeid replicaid storage replica raftstorage go generated preemptive snapshot at index storage store go streamed snapshot kv pairs log entries storage replica raftstorage go applying preemptive snapshot at index id encoded size rocksdb batches log entries storage replica raftstorage go applied preemptive snapshot in storage replica command go change replicas remove read existing descriptor range id start key end key replicas replicas next replica id storage replica go proposing add replica nodeid storeid replicaid storage replica raftstorage go generated preemptive snapshot at index storage store go streamed snapshot kv pairs log entries storage replica raftstorage go applying preemptive snapshot at index id encoded size rocksdb batches log entries storage replica raftstorage go applied preemptive snapshot in storage replica command go change replicas remove read existing descriptor range id start key end key replicas replicas next replica id storage replica go proposing add replica nodeid storeid replicaid storage replica command go initiating a split of this range at key m storage replica raftstorage go generated preemptive snapshot at index storage store go streamed snapshot kv pairs log entries storage replica raftstorage go applying preemptive snapshot at index id encoded size rocksdb batches log entries storage replica raftstorage go applied preemptive snapshot in storage replica command go change replicas remove read existing descriptor range id start key m end key replicas replicas replicas next replica id storage replica go proposing add replica nodeid storeid replicaid storage replica raftstorage go generated preemptive snapshot at index storage store go streamed snapshot kv pairs log entries storage replica raftstorage go applying preemptive snapshot at index id encoded size rocksdb batches log entries storage replica raftstorage go applied preemptive snapshot in storage replica command go change replicas remove read existing descriptor range id start key m end key replicas replicas replicas replicas next replica id storage replica go proposing add replica nodeid storeid replicaid storage replica command go change replicas remove read existing descriptor range id start key m end key replicas replicas replicas replicas replicas next replica id storage replica go proposing remove replica nodeid storeid replicaid storage store go added to replica gc queue peer suggestion storage replica command go change replicas remove read existing descriptor range id start key m end key replicas replicas replicas replicas next replica id storage store go removing replica storage replica go removed keys in storage replica go proposing remove replica nodeid storeid replicaid storage store go added to replica gc queue peer suggestion storage store go removing replica storage replica go removed keys in storage raft transport go raft transport stream to node failed rpc error code desc transport is closing storage raft transport go raft transport stream to node failed eof storage raft transport go raft transport stream to node failed eof storage raft transport go raft transport stream to node failed rpc error code desc transport is closing vendor google golang org grpc transport client go transport notifyerror got notified that the client transport was broken eof storage raft transport go raft transport stream to node failed eof vendor google golang org grpc clientconn go grpc addrconn resettransport failed to create client transport connection error desc transport dial tcp operation was canceled reconnecting to storage raft transport go raft transport stream to node failed rpc error code desc transport is closing vendor google golang org grpc transport server go transport handlestreams failed to read frame read tcp use of closed network connection vendor google golang org grpc clientconn go grpc addrconn transportmonitor exits due to grpc the connection is closing storage raft transport go raft transport stream to node failed rpc error code desc transport is closing leaktest go leaked goroutine goroutine github com cockroachdb cockroach pkg gossip client gossip go src github com cockroachdb cockroach pkg gossip client go github com cockroachdb cockroach pkg gossip client start go src github com cockroachdb cockroach pkg gossip client go github com cockroachdb cockroach pkg util stop stopper runworker go src github com cockroachdb cockroach pkg util stop stopper go created by github com cockroachdb cockroach pkg util stop stopper runworker go src github com cockroachdb cockroach pkg util stop stopper go leaktest go leaked goroutine goroutine github com cockroachdb cockroach vendor google golang org grpc newclientstream go src github com cockroachdb cockroach vendor google golang org grpc stream go created by github com cockroachdb cockroach vendor google golang org grpc newclientstream go src github com cockroachdb cockroach vendor google golang org grpc stream go leaktest go leaked goroutine goroutine github com cockroachdb cockroach pkg gossip server gossip go src github com cockroachdb cockroach pkg gossip server go github com cockroachdb cockroach pkg gossip gossip gossip handler go src github com cockroachdb cockroach pkg gossip gossip pb go github com cockroachdb cockroach vendor google golang org grpc server processstreamingrpc go src github com cockroachdb cockroach vendor google golang org grpc server go github com cockroachdb cockroach vendor google golang org grpc server handlestream go src github com cockroachdb cockroach vendor google golang org grpc server go github com cockroachdb cockroach vendor google golang org grpc server servestreams go src github com cockroachdb cockroach vendor google golang org grpc server go created by github com cockroachdb cockroach vendor google golang org grpc server servestreams go src github com cockroachdb cockroach vendor google golang org grpc server go leaktest go leaked goroutine goroutine github com cockroachdb cockroach vendor google golang org grpc transport recvbufferreader read go src github com cockroachdb cockroach vendor google golang org grpc transport transport go github com cockroachdb cockroach vendor google golang org grpc transport stream read go src github com cockroachdb cockroach vendor google golang org grpc transport transport go io readatleast usr local go src io io go io readfull usr local go src io io go github com cockroachdb cockroach vendor google golang org grpc parser recvmsg go src github com cockroachdb cockroach vendor google golang org grpc rpc util go github com cockroachdb cockroach vendor google golang org grpc recv go src github com cockroachdb cockroach vendor google golang org grpc rpc util go github com cockroachdb cockroach vendor google golang org grpc serverstream recvmsg go src github com cockroachdb cockroach vendor google golang org grpc stream go github com cockroachdb cockroach pkg gossip gossipgossipserver recv go src github com cockroachdb cockroach pkg gossip gossip pb go github com cockroachdb cockroach pkg gossip gossip gossipserver recv fm go src github com cockroachdb cockroach pkg gossip server go github com cockroachdb cockroach pkg gossip server gossipreceiver go src github com cockroachdb cockroach pkg gossip server go github com cockroachdb cockroach pkg gossip server gossip go src github com cockroachdb cockroach pkg gossip server go github com cockroachdb cockroach pkg util stop stopper runworker go src github com cockroachdb cockroach pkg util stop stopper go created by github com cockroachdb cockroach pkg util stop stopper runworker go src github com cockroachdb cockroach pkg util stop stopper go leaktest go leaked goroutine goroutine github com cockroachdb cockroach vendor google golang org grpc transport recvbufferreader read go src github com cockroachdb cockroach vendor google golang org grpc transport transport go github com cockroachdb cockroach vendor google golang org grpc transport stream read go src github com cockroachdb cockroach vendor google golang org grpc transport transport go io readatleast usr local go src io io go io readfull usr local go src io io go github com cockroachdb cockroach vendor google golang org grpc parser recvmsg go src github com cockroachdb cockroach vendor google golang org grpc rpc util go github com cockroachdb cockroach vendor google golang org grpc recv go src github com cockroachdb cockroach vendor google golang org grpc rpc util go github com cockroachdb cockroach vendor google golang org grpc clientstream recvmsg go src github com cockroachdb cockroach vendor google golang org grpc stream go github com cockroachdb cockroach pkg gossip gossipgossipclient recv go src github com cockroachdb cockroach pkg gossip gossip pb go github com cockroachdb cockroach pkg gossip client gossip go src github com cockroachdb cockroach pkg gossip client go github com cockroachdb cockroach pkg gossip client gossip go src github com cockroachdb cockroach pkg gossip client go github com cockroachdb cockroach pkg util stop stopper runworker go src github com cockroachdb cockroach pkg util stop stopper go created by github com cockroachdb cockroach pkg util stop stopper runworker go src github com cockroachdb cockroach pkg util stop stopper go
| 0
|
13,354
| 15,817,285,004
|
IssuesEvent
|
2021-04-05 14:23:19
|
threefoldtech/js-sdk
|
https://api.github.com/repos/threefoldtech/js-sdk
|
reopened
|
As a user, I need to restore my backup from one item not two
|
process_wontfix type_feature
|
### Description
While I try the backup and restore, I found that to restore a backup, I have to restore `config_mybackup` and `vdc_mybackup`
It will be better if the user can restore `mybackup` and this restore both of them.
|
1.0
|
As a user, I need to restore my backup from one item not two - ### Description
While I try the backup and restore, I found that to restore a backup, I have to restore `config_mybackup` and `vdc_mybackup`
It will be better if the user can restore `mybackup` and this restore both of them.
|
process
|
as a user i need to restore my backup from one item not two description while i try the backup and restore i found that to restore a backup i have to restore config mybackup and vdc mybackup it will be better if the user can restore mybackup and this restore both of them
| 1
|
290,917
| 21,911,037,829
|
IssuesEvent
|
2022-05-21 03:43:25
|
Hamza-Mandviwala/OCP4.10.3-install-GCP-UPI
|
https://api.github.com/repos/Hamza-Mandviwala/OCP4.10.3-install-GCP-UPI
|
closed
|
bootstrap node access issue
|
documentation
|
I could not access to bootstrap node. I can't see any step for that.
connectrivity is good
I could not access neither from bastion nor from my laptop
Permission denied (publickey,gssapi-keyex,gssapi-with-mic).
|
1.0
|
bootstrap node access issue - I could not access to bootstrap node. I can't see any step for that.
connectrivity is good
I could not access neither from bastion nor from my laptop
Permission denied (publickey,gssapi-keyex,gssapi-with-mic).
|
non_process
|
bootstrap node access issue i could not access to bootstrap node i can t see any step for that connectrivity is good i could not access neither from bastion nor from my laptop permission denied publickey gssapi keyex gssapi with mic
| 0
|
123,433
| 4,862,734,313
|
IssuesEvent
|
2016-11-14 13:27:57
|
OpenSRP/opensrp-client
|
https://api.github.com/repos/OpenSRP/opensrp-client
|
closed
|
Fix mcare forms to prevent partial form Bug
|
BANGLADESH High Priority
|
If a new form_definition has shouldLoadValue : true set up in Id , data of already submitted form loads up.
The goal here is to remove shouldLoadValue : true from registration forms or whenever we are creating new entities.
|
1.0
|
Fix mcare forms to prevent partial form Bug - If a new form_definition has shouldLoadValue : true set up in Id , data of already submitted form loads up.
The goal here is to remove shouldLoadValue : true from registration forms or whenever we are creating new entities.
|
non_process
|
fix mcare forms to prevent partial form bug if a new form definition has shouldloadvalue true set up in id data of already submitted form loads up the goal here is to remove shouldloadvalue true from registration forms or whenever we are creating new entities
| 0
|
263,036
| 19,863,399,557
|
IssuesEvent
|
2022-01-22 06:07:22
|
Seneca-CDOT/telescope
|
https://api.github.com/repos/Seneca-CDOT/telescope
|
closed
|
Organize Wiki pages sidebar
|
type: documentation (docs) developer experience
|
In the current Wiki we have a bunch of pages that are just there without any specific order. Look at what I mean:
<img src="https://cdn.discordapp.com/attachments/353432193730871296/933728678716772352/Screenshot_388.png" />
As you can see we have different categories mixed up. What bothers me is that many pages that are no longer relevant are mixed in with the important ones.
How I would like it to be:
- Main page of the course
- Guides
- Most recent triage
- Older triages
- Something else
...
- No longer relevant triages/meetings/information
I would actually work on this, but I don't even know if I'm able to edit pages. I don't see where, I can only edit the pages themselves, not the sidebar.
|
1.0
|
Organize Wiki pages sidebar - In the current Wiki we have a bunch of pages that are just there without any specific order. Look at what I mean:
<img src="https://cdn.discordapp.com/attachments/353432193730871296/933728678716772352/Screenshot_388.png" />
As you can see we have different categories mixed up. What bothers me is that many pages that are no longer relevant are mixed in with the important ones.
How I would like it to be:
- Main page of the course
- Guides
- Most recent triage
- Older triages
- Something else
...
- No longer relevant triages/meetings/information
I would actually work on this, but I don't even know if I'm able to edit pages. I don't see where, I can only edit the pages themselves, not the sidebar.
|
non_process
|
organize wiki pages sidebar in the current wiki we have a bunch of pages that are just there without any specific order look at what i mean as you can see we have different categories mixed up what bothers me is that many pages that are no longer relevant are mixed in with the important ones how i would like it to be main page of the course guides most recent triage older triages something else no longer relevant triages meetings information i would actually work on this but i don t even know if i m able to edit pages i don t see where i can only edit the pages themselves not the sidebar
| 0
|
42,522
| 12,892,631,909
|
IssuesEvent
|
2020-07-13 20:00:18
|
simandebvu/rails-n-chill
|
https://api.github.com/repos/simandebvu/rails-n-chill
|
opened
|
CVE-2018-19839 (Medium) detected in node-sass-4.14.1.tgz, CSS::Sass-v3.6.0
|
security vulnerability
|
## CVE-2018-19839 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>node-sass-4.14.1.tgz</b></p></summary>
<p>
<details><summary><b>node-sass-4.14.1.tgz</b></p></summary>
<p>Wrapper around libsass</p>
<p>Library home page: <a href="https://registry.npmjs.org/node-sass/-/node-sass-4.14.1.tgz">https://registry.npmjs.org/node-sass/-/node-sass-4.14.1.tgz</a></p>
<p>Path to dependency file: /tmp/ws-scm/rails-n-chill/rails-n-chill/package.json</p>
<p>Path to vulnerable library: /tmp/ws-scm/rails-n-chill/rails-n-chill/node_modules/node-sass/package.json</p>
<p>
Dependency Hierarchy:
- webpacker-4.2.2.tgz (Root Library)
- :x: **node-sass-4.14.1.tgz** (Vulnerable Library)
</details>
<p>Found in HEAD commit: <a href="https://github.com/simandebvu/rails-n-chill/commit/3fd9ead83599bf1f255c0144d7daaae29b27574c">3fd9ead83599bf1f255c0144d7daaae29b27574c</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
In LibSass prior to 3.5.5, the function handle_error in sass_context.cpp allows attackers to cause a denial-of-service resulting from a heap-based buffer over-read via a crafted sass file.
<p>Publish Date: 2018-12-04
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2018-19839>CVE-2018-19839</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2018-19839">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2018-19839</a></p>
<p>Release Date: 2018-12-04</p>
<p>Fix Resolution: Libsass:3.6.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
CVE-2018-19839 (Medium) detected in node-sass-4.14.1.tgz, CSS::Sass-v3.6.0 - ## CVE-2018-19839 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>node-sass-4.14.1.tgz</b></p></summary>
<p>
<details><summary><b>node-sass-4.14.1.tgz</b></p></summary>
<p>Wrapper around libsass</p>
<p>Library home page: <a href="https://registry.npmjs.org/node-sass/-/node-sass-4.14.1.tgz">https://registry.npmjs.org/node-sass/-/node-sass-4.14.1.tgz</a></p>
<p>Path to dependency file: /tmp/ws-scm/rails-n-chill/rails-n-chill/package.json</p>
<p>Path to vulnerable library: /tmp/ws-scm/rails-n-chill/rails-n-chill/node_modules/node-sass/package.json</p>
<p>
Dependency Hierarchy:
- webpacker-4.2.2.tgz (Root Library)
- :x: **node-sass-4.14.1.tgz** (Vulnerable Library)
</details>
<p>Found in HEAD commit: <a href="https://github.com/simandebvu/rails-n-chill/commit/3fd9ead83599bf1f255c0144d7daaae29b27574c">3fd9ead83599bf1f255c0144d7daaae29b27574c</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
In LibSass prior to 3.5.5, the function handle_error in sass_context.cpp allows attackers to cause a denial-of-service resulting from a heap-based buffer over-read via a crafted sass file.
<p>Publish Date: 2018-12-04
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2018-19839>CVE-2018-19839</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2018-19839">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2018-19839</a></p>
<p>Release Date: 2018-12-04</p>
<p>Fix Resolution: Libsass:3.6.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_process
|
cve medium detected in node sass tgz css sass cve medium severity vulnerability vulnerable libraries node sass tgz node sass tgz wrapper around libsass library home page a href path to dependency file tmp ws scm rails n chill rails n chill package json path to vulnerable library tmp ws scm rails n chill rails n chill node modules node sass package json dependency hierarchy webpacker tgz root library x node sass tgz vulnerable library found in head commit a href vulnerability details in libsass prior to the function handle error in sass context cpp allows attackers to cause a denial of service resulting from a heap based buffer over read via a crafted sass file publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction required scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution libsass step up your open source security game with whitesource
| 0
|
108,379
| 13,622,146,748
|
IssuesEvent
|
2020-09-24 02:46:30
|
pixelbakery/hollyshealthyholes
|
https://api.github.com/repos/pixelbakery/hollyshealthyholes
|
closed
|
Product catalog: Buttons turn blue when clicked?
|
CSS & Design
|
client feedback: "button links turn blue --eeeek"
could be Outline?
|
1.0
|
Product catalog: Buttons turn blue when clicked? - client feedback: "button links turn blue --eeeek"
could be Outline?
|
non_process
|
product catalog buttons turn blue when clicked client feedback button links turn blue eeeek could be outline
| 0
|
6,421
| 9,525,905,091
|
IssuesEvent
|
2019-04-28 15:53:18
|
gaoteng17/Blog_Comments
|
https://api.github.com/repos/gaoteng17/Blog_Comments
|
closed
|
OpenCV在图像处理方面的应用 | Blog
|
/ImageProcessing/ Gitalk
|
http://gaoteng17.top/ImageProcessing/#more
1 概述OpenCV是一个基于BSD许可(开源)发行的跨平台计算机视觉库,可以运行在Linux、Windows、Android和Mac OS操作系统上。它轻量级而且高效——由一系列 C 函数和少量 C++ 类构成,同时提供了Python、Ruby、MATLAB等语言的接口,实现了图像处理和计算机视觉方面的很多通用算法。 本文将使用python3.6,通过调用OpenCV库来实现图像处理几个方面的功
|
1.0
|
OpenCV在图像处理方面的应用 | Blog - http://gaoteng17.top/ImageProcessing/#more
1 概述OpenCV是一个基于BSD许可(开源)发行的跨平台计算机视觉库,可以运行在Linux、Windows、Android和Mac OS操作系统上。它轻量级而且高效——由一系列 C 函数和少量 C++ 类构成,同时提供了Python、Ruby、MATLAB等语言的接口,实现了图像处理和计算机视觉方面的很多通用算法。 本文将使用python3.6,通过调用OpenCV库来实现图像处理几个方面的功
|
process
|
opencv在图像处理方面的应用 blog 概述opencv是一个基于bsd许可(开源)发行的跨平台计算机视觉库,可以运行在linux、windows、android和mac os操作系统上。它轻量级而且高效——由一系列 c 函数和少量 c 类构成,同时提供了python、ruby、matlab等语言的接口,实现了图像处理和计算机视觉方面的很多通用算法。 ,通过调用opencv库来实现图像处理几个方面的功
| 1
|
6,478
| 9,552,182,553
|
IssuesEvent
|
2019-05-02 16:01:33
|
aiidateam/aiida_core
|
https://api.github.com/repos/aiidateam/aiida_core
|
closed
|
Check that `store_provenance=True` in `submit`
|
aiida-core 1.x priority/important topic/engine topic/processes type/bug
|
With the current architecture, provenance *has* to be stored when using `submit` because the communication of process state, i.e. the checkpoint, goes through the database. The submit method and functions should check for this and raise early, otherwise it will fail later on in indirect and harder to understand ways.
|
1.0
|
Check that `store_provenance=True` in `submit` - With the current architecture, provenance *has* to be stored when using `submit` because the communication of process state, i.e. the checkpoint, goes through the database. The submit method and functions should check for this and raise early, otherwise it will fail later on in indirect and harder to understand ways.
|
process
|
check that store provenance true in submit with the current architecture provenance has to be stored when using submit because the communication of process state i e the checkpoint goes through the database the submit method and functions should check for this and raise early otherwise it will fail later on in indirect and harder to understand ways
| 1
|
63,720
| 3,197,868,744
|
IssuesEvent
|
2015-10-01 08:41:27
|
cs2103aug2015-t16-4j/main
|
https://api.github.com/repos/cs2103aug2015-t16-4j/main
|
closed
|
As a user, Jim wants to be able to delete all tasks at once
|
priority.high type.story
|
So that he can have a "master reset" button if needed
|
1.0
|
As a user, Jim wants to be able to delete all tasks at once - So that he can have a "master reset" button if needed
|
non_process
|
as a user jim wants to be able to delete all tasks at once so that he can have a master reset button if needed
| 0
|
299,870
| 9,205,931,255
|
IssuesEvent
|
2019-03-08 12:08:43
|
wso2/product-is
|
https://api.github.com/repos/wso2/product-is
|
closed
|
Disabling the feature SP template
|
Affected/5.8.0-M24 Component/Identity Mgt Priority/High Severity/Major
|
Disabling the feature service provider template creation since the feature is not in production ready state.
|
1.0
|
Disabling the feature SP template - Disabling the feature service provider template creation since the feature is not in production ready state.
|
non_process
|
disabling the feature sp template disabling the feature service provider template creation since the feature is not in production ready state
| 0
|
13,467
| 15,951,651,264
|
IssuesEvent
|
2021-04-15 10:05:55
|
unicode-org/icu4x
|
https://api.github.com/repos/unicode-org/icu4x
|
opened
|
Decide on strategy for updating pinned Rust versions
|
C-process discuss
|
We now have pinned Stable and Nightly rust versions (#618, #374, etc). However, we don't have a strategy in place to update them and keep them in sync. For the nightly version in particular, we currently have two versions in tree, `nightly-2021-02-28` (required for WebAssembly) and `nightly-2021-03-15` (a more up-to-date version for the coverage and memory benchmarking tools).
CC @gregtatum @dminor
|
1.0
|
Decide on strategy for updating pinned Rust versions - We now have pinned Stable and Nightly rust versions (#618, #374, etc). However, we don't have a strategy in place to update them and keep them in sync. For the nightly version in particular, we currently have two versions in tree, `nightly-2021-02-28` (required for WebAssembly) and `nightly-2021-03-15` (a more up-to-date version for the coverage and memory benchmarking tools).
CC @gregtatum @dminor
|
process
|
decide on strategy for updating pinned rust versions we now have pinned stable and nightly rust versions etc however we don t have a strategy in place to update them and keep them in sync for the nightly version in particular we currently have two versions in tree nightly required for webassembly and nightly a more up to date version for the coverage and memory benchmarking tools cc gregtatum dminor
| 1
|
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.