Unnamed: 0 int64 0 832k | id float64 2.49B 32.1B | type stringclasses 1 value | created_at stringlengths 19 19 | repo stringlengths 7 112 | repo_url stringlengths 36 141 | action stringclasses 3 values | title stringlengths 2 665 | labels stringlengths 4 554 | body stringlengths 3 235k | index stringclasses 6 values | text_combine stringlengths 96 235k | label stringclasses 2 values | text stringlengths 96 196k | binary_label int64 0 1 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
106,168 | 9,115,897,779 | IssuesEvent | 2019-02-22 07:09:44 | scylladb/scylla | https://api.github.com/repos/scylladb/scylla | closed | cql SELECT limit broken | CQL bug dtest | Scylla version: 84465c23c4d74ad5f2e12d6092427d4c157c964f
Broken dtests:
[consistency_test.TestConsistency.short_read_reversed_test](http://jenkins.cloudius-systems.com:8080/view/master/job/scylla-master/job/dtest-release/36/testReport/consistency_test/TestConsistency/short_read_reversed_test/)
[consistency_test.TestConsistency.short_read_test](http://jenkins.cloudius-systems.com:8080/view/master/job/scylla-master/job/dtest-release/36/testReport/consistency_test/TestConsistency/short_read_test/)
[cql_additional_tests.CQLAdditionalTests.limit_date_value_out_of_range_upper_limit_test](http://jenkins.cloudius-systems.com:8080/view/master/job/scylla-master/job/dtest-release/36/testReport/cql_additional_tests/CQLAdditionalTests/limit_date_value_out_of_range_upper_limit_test/)
[cql_additional_tests.TestCQL.exclusive_slice_test](http://jenkins.cloudius-systems.com:8080/view/master/job/scylla-master/job/dtest-release/36/testReport/cql_additional_tests/TestCQL/exclusive_slice_test/)
[cql_additional_tests.TestCQL.limit_bugs_test](http://jenkins.cloudius-systems.com:8080/view/master/job/scylla-master/job/dtest-release/36/testReport/cql_additional_tests/TestCQL/limit_bugs_test)
[cql_additional_tests.TestCQL.limit_sparse_test](http://jenkins.cloudius-systems.com:8080/view/master/job/scylla-master/job/dtest-release/36/testReport/cql_additional_tests/TestCQL/limit_sparse_test)
[cql_additional_tests.TestCQL.range_with_deletes_test](http://jenkins.cloudius-systems.com:8080/view/master/job/scylla-master/job/dtest-release/36/testReport/cql_additional_tests/TestCQL/range_with_deletes_test)
[cql_additional_tests.TestCQL.static_with_limit_test](http://jenkins.cloudius-systems.com:8080/view/master/job/scylla-master/job/dtest-release/36/testReport/cql_additional_tests/TestCQL/static_with_limit_test)
[cqlsh_tests.cqlsh_tests.CqlshSmokeTest.select_all_cl_quorum_test](http://jenkins.cloudius-systems.com:8080/view/master/job/scylla-master/job/dtest-release/36/testReport/cqlsh_tests/cqlsh_tests.CqlshSmokeTest/select_all_cl_quorum_test)
[paging_test.TestPagingWithModifiers.test_with_limit](http://jenkins.cloudius-systems.com:8080/view/master/job/scylla-master/job/dtest-release/36/testReport/paging_test/TestCQL/TestPagingWithModifiers/test_with_limit)
@psarna wrote:
> in one or two places the `per_partition_limit` is compared against `numeric_limits<>::max()` to decide whether we need post-processing or not, and I see that it's sometimes set to `<int32_t>::max()` instead of `<uint32_t>::max()` | 1.0 | cql SELECT limit broken - Scylla version: 84465c23c4d74ad5f2e12d6092427d4c157c964f
Broken dtests:
[consistency_test.TestConsistency.short_read_reversed_test](http://jenkins.cloudius-systems.com:8080/view/master/job/scylla-master/job/dtest-release/36/testReport/consistency_test/TestConsistency/short_read_reversed_test/)
[consistency_test.TestConsistency.short_read_test](http://jenkins.cloudius-systems.com:8080/view/master/job/scylla-master/job/dtest-release/36/testReport/consistency_test/TestConsistency/short_read_test/)
[cql_additional_tests.CQLAdditionalTests.limit_date_value_out_of_range_upper_limit_test](http://jenkins.cloudius-systems.com:8080/view/master/job/scylla-master/job/dtest-release/36/testReport/cql_additional_tests/CQLAdditionalTests/limit_date_value_out_of_range_upper_limit_test/)
[cql_additional_tests.TestCQL.exclusive_slice_test](http://jenkins.cloudius-systems.com:8080/view/master/job/scylla-master/job/dtest-release/36/testReport/cql_additional_tests/TestCQL/exclusive_slice_test/)
[cql_additional_tests.TestCQL.limit_bugs_test](http://jenkins.cloudius-systems.com:8080/view/master/job/scylla-master/job/dtest-release/36/testReport/cql_additional_tests/TestCQL/limit_bugs_test)
[cql_additional_tests.TestCQL.limit_sparse_test](http://jenkins.cloudius-systems.com:8080/view/master/job/scylla-master/job/dtest-release/36/testReport/cql_additional_tests/TestCQL/limit_sparse_test)
[cql_additional_tests.TestCQL.range_with_deletes_test](http://jenkins.cloudius-systems.com:8080/view/master/job/scylla-master/job/dtest-release/36/testReport/cql_additional_tests/TestCQL/range_with_deletes_test)
[cql_additional_tests.TestCQL.static_with_limit_test](http://jenkins.cloudius-systems.com:8080/view/master/job/scylla-master/job/dtest-release/36/testReport/cql_additional_tests/TestCQL/static_with_limit_test)
[cqlsh_tests.cqlsh_tests.CqlshSmokeTest.select_all_cl_quorum_test](http://jenkins.cloudius-systems.com:8080/view/master/job/scylla-master/job/dtest-release/36/testReport/cqlsh_tests/cqlsh_tests.CqlshSmokeTest/select_all_cl_quorum_test)
[paging_test.TestPagingWithModifiers.test_with_limit](http://jenkins.cloudius-systems.com:8080/view/master/job/scylla-master/job/dtest-release/36/testReport/paging_test/TestCQL/TestPagingWithModifiers/test_with_limit)
@psarna wrote:
> in one or two places the `per_partition_limit` is compared against `numeric_limits<>::max()` to decide whether we need post-processing or not, and I see that it's sometimes set to `<int32_t>::max()` instead of `<uint32_t>::max()` | non_infrastructure | cql select limit broken scylla version broken dtests psarna wrote in one or two places the per partition limit is compared against numeric limits max to decide whether we need post processing or not and i see that it s sometimes set to max instead of max | 0 |
20,452 | 13,927,835,011 | IssuesEvent | 2020-10-21 20:28:09 | cmu-db/noisepage | https://api.github.com/repos/cmu-db/noisepage | closed | jemalloc and ASAN / Valgrind don't get along | deferred infrastructure | ASAN and Valgrind currently do not catch any memory issues in tests.
As the title suggests, linking in jemalloc results in some wonky issues where ASAN and Valgrind stop being able to detect leaks. The Internet suggests that either jemalloc doesn't expose the necessary interface or the dynamic linking interferes with instrumentation of the two tools. A quick search did not turn up any widely accepted explanation or fix.
See #56 for more information. | 1.0 | jemalloc and ASAN / Valgrind don't get along - ASAN and Valgrind currently do not catch any memory issues in tests.
As the title suggests, linking in jemalloc results in some wonky issues where ASAN and Valgrind stop being able to detect leaks. The Internet suggests that either jemalloc doesn't expose the necessary interface or the dynamic linking interferes with instrumentation of the two tools. A quick search did not turn up any widely accepted explanation or fix.
See #56 for more information. | infrastructure | jemalloc and asan valgrind don t get along asan and valgrind currently do not catch any memory issues in tests as the title suggests linking in jemalloc results in some wonky issues where asan and valgrind stops being able to detect leaks the internet suggests that either jemalloc doesn t expose the necessary interface or the dynamic linking interferes with instrumentation of the two tools a quick search did not turn up any widely accepted explanation or fix see for more information | 1 |
83,505 | 10,330,386,776 | IssuesEvent | 2019-09-02 14:33:13 | r-lib/testthat | https://api.github.com/repos/r-lib/testthat | closed | question on stop_on_failure default | documentation | I see the following in `?testthat::test_dir`

Why is the default for `stop_on_failure` in `test_dir()` `FALSE` when it is `TRUE` for `test_package()` and `test_check()`? Is that intentional or an oversight?
In my opinion, the default behavior of a testing framework should be to fail (cause a non-zero exit code) if any tests fail. If you agree with me and this is an oversight, please let me know and I'd be happy to make a PR to change it.
Thanks! | 1.0 | question on stop_on_failure default - I see the following in `?testthat::test_dir`

Why is the default for `stop_on_failure` in `test_dir()` `FALSE` when it is `TRUE` for `test_package()` and `test_check()`? Is that intentional or an oversight?
In my opinion, the default behavior of a testing framework should be to fail (cause a non-zero exit code) if any tests fail. If you agree with me and this is an oversight, please let me know and I'd be happy to make a PR to change it.
Thanks! | non_infrastructure | question on stop on failure default i see the following in testthat test dir why is the default for stop on failure in test dir false when it is true for test package and test check is that intentional or an oversight in my opinion the default behavior of a testing framework should be to fail cause a non zero exit code if any tests fail if you agree with me and this is an oversight please let me know and i d be happy to make a pr to change it thanks | 0 |
260,798 | 19,685,839,718 | IssuesEvent | 2022-01-11 22:02:53 | suaraujo/DeepSpyce | https://api.github.com/repos/suaraujo/DeepSpyce | opened | Documentacion | documentation | 1. Armar la documentación del repo.
2. Subir y configurar la documentación armada a [Read the Docs](https://readthedocs.org/). | 1.0 | Documentacion - 1. Armar la documentación del repo.
2. Subir y configurar la documentación armada a [Read the Docs](https://readthedocs.org/). | non_infrastructure | documentacion armar la documentación del repo subir y configurar la documentación armada a | 0 |
23,353 | 16,088,036,367 | IssuesEvent | 2021-04-26 13:39:20 | gnosis/safe-ios | https://api.github.com/repos/gnosis/safe-ios | closed | Regenerate new distribution certificate before 21 April 2021 | No QA infrastructure | Distribution Certificate will no longer be valid. To generate a new certificate, sign in and visit [Certificates, Identifiers & Profiles](https://developer.apple.com/account/).
| 1.0 | Regenerate new distribution certificate before 21 April 2021 - Distribution Certificate will no longer be valid. To generate a new certificate, sign in and visit [Certificates, Identifiers & Profiles](https://developer.apple.com/account/).
| infrastructure | regenerate new distribution certificate before april distribution certificate will no longer be valid to generate a new certificate sign in and visit | 1 |
235,055 | 19,294,851,640 | IssuesEvent | 2021-12-12 12:18:24 | firebase/firebase-cpp-sdk | https://api.github.com/repos/firebase/firebase-cpp-sdk | reopened | Nightly Integration Testing Report | nightly-testing | ### ✅ [build against repo] Integration test succeeded!
Requested by @DellaBitta on commit 483c74a4047a46a676204dc9f03893302bbcc81d
Last updated: Sun Dec 12 03:33 PST 2021
**[View integration test log & download artifacts](https://github.com/firebase/firebase-cpp-sdk/actions/runs/1569113726)**
<hidden value="integration-test-status-comment"></hidden>
***
### ✅ [build against SDK] Integration test succeeded!
Requested by @firebase-workflow-trigger[bot] on commit 483c74a4047a46a676204dc9f03893302bbcc81d
Last updated: Sat Dec 11 04:11 PST 2021
**[View integration test log & download artifacts](https://github.com/firebase/firebase-cpp-sdk/actions/runs/1566837664)**
| 1.0 | Nightly Integration Testing Report - ### ✅ [build against repo] Integration test succeeded!
Requested by @DellaBitta on commit 483c74a4047a46a676204dc9f03893302bbcc81d
Last updated: Sun Dec 12 03:33 PST 2021
**[View integration test log & download artifacts](https://github.com/firebase/firebase-cpp-sdk/actions/runs/1569113726)**
<hidden value="integration-test-status-comment"></hidden>
***
### ✅ [build against SDK] Integration test succeeded!
Requested by @firebase-workflow-trigger[bot] on commit 483c74a4047a46a676204dc9f03893302bbcc81d
Last updated: Sat Dec 11 04:11 PST 2021
**[View integration test log & download artifacts](https://github.com/firebase/firebase-cpp-sdk/actions/runs/1566837664)**
| non_infrastructure | nightly integration testing report ✅ nbsp integration test succeeded requested by dellabitta on commit last updated sun dec pst ✅ nbsp integration test succeeded requested by firebase workflow trigger on commit last updated sat dec pst | 0 |
28,830 | 23,513,499,391 | IssuesEvent | 2022-08-18 18:57:50 | google/docsy | https://api.github.com/repos/google/docsy | closed | Google search console should be registered as an Open Source project | admin e0-minutes e1-hours infrastructure search p2-medium | Google search console account should be registered as ~nonprofit~ open source project so that we can turn off ads. | 1.0 | Google search console should be registered as an Open Source project - Google search console account should be registered as ~nonprofit~ open source project so that we can turn off ads. | infrastructure | google search console should be registered as an open source project google search console account should be registered as nonprofit open source project so that we can turn off ads | 1 |
19,000 | 13,184,852,858 | IssuesEvent | 2020-08-12 20:13:31 | Kemmey/Kemmey-TeslaWatch-Public | https://api.github.com/repos/Kemmey/Kemmey-TeslaWatch-Public | closed | Download issue | AppStore infrastructure issue | Have purchased app 3 times and will not download. Shows purchased in App Store but doesn’t show up on watch or iPhone. Purchased notation shows up subdued on App Store. Won’t show up purchased on account in App Store on watch. Have latest watch is installed. | 1.0 | Download issue - Have purchased app 3 times and will not download. Shows purchased in App Store but doesn’t show up on watch or iPhone. Purchased notation shows up subdued on App Store. Won’t show up purchased on account in App Store on watch. Have latest watch is installed. | infrastructure | download issue have purchased app times and will not download shows purchased in app store but doesn’t show up on watch or iphone purchased notation shows up subdued on app store won’t show up purchased on account in app store on watch have latest watch is installed | 1 |
170,629 | 20,883,788,729 | IssuesEvent | 2022-03-23 01:12:59 | mattdanielbrown/primed | https://api.github.com/repos/mattdanielbrown/primed | opened | CVE-2021-33502 (High) detected in normalize-url-2.0.1.tgz, normalize-url-3.3.0.tgz | security vulnerability | ## CVE-2021-33502 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>normalize-url-2.0.1.tgz</b>, <b>normalize-url-3.3.0.tgz</b></p></summary>
<p>
<details><summary><b>normalize-url-2.0.1.tgz</b></p></summary>
<p>Normalize a URL</p>
<p>Library home page: <a href="https://registry.npmjs.org/normalize-url/-/normalize-url-2.0.1.tgz">https://registry.npmjs.org/normalize-url/-/normalize-url-2.0.1.tgz</a></p>
<p>Path to dependency file: /package.json</p>
<p>Path to vulnerable library: /node_modules/cacheable-request/node_modules/normalize-url/package.json</p>
<p>
Dependency Hierarchy:
- gulp-imagemin-6.2.0.tgz (Root Library)
- imagemin-gifsicle-6.0.1.tgz
- gifsicle-4.0.1.tgz
- bin-wrapper-4.1.0.tgz
- download-7.1.0.tgz
- got-8.3.2.tgz
- cacheable-request-2.1.4.tgz
- :x: **normalize-url-2.0.1.tgz** (Vulnerable Library)
</details>
<details><summary><b>normalize-url-3.3.0.tgz</b></p></summary>
<p>Normalize a URL</p>
<p>Library home page: <a href="https://registry.npmjs.org/normalize-url/-/normalize-url-3.3.0.tgz">https://registry.npmjs.org/normalize-url/-/normalize-url-3.3.0.tgz</a></p>
<p>Path to dependency file: /package.json</p>
<p>Path to vulnerable library: /node_modules/normalize-url/package.json</p>
<p>
Dependency Hierarchy:
- cssnano-4.1.10.tgz (Root Library)
- cssnano-preset-default-4.0.7.tgz
- postcss-normalize-url-4.0.1.tgz
- :x: **normalize-url-3.3.0.tgz** (Vulnerable Library)
</details>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
The normalize-url package before 4.5.1, 5.x before 5.3.1, and 6.x before 6.0.1 for Node.js has a ReDoS (regular expression denial of service) issue because it has exponential performance for data: URLs.
<p>Publish Date: 2021-05-24
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-33502>CVE-2021-33502</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-33502">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-33502</a></p>
<p>Release Date: 2021-05-24</p>
<p>Fix Resolution (normalize-url): 4.5.1</p>
<p>Direct dependency fix Resolution (cssnano): 5.0.0-rc.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | CVE-2021-33502 (High) detected in normalize-url-2.0.1.tgz, normalize-url-3.3.0.tgz - ## CVE-2021-33502 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>normalize-url-2.0.1.tgz</b>, <b>normalize-url-3.3.0.tgz</b></p></summary>
<p>
<details><summary><b>normalize-url-2.0.1.tgz</b></p></summary>
<p>Normalize a URL</p>
<p>Library home page: <a href="https://registry.npmjs.org/normalize-url/-/normalize-url-2.0.1.tgz">https://registry.npmjs.org/normalize-url/-/normalize-url-2.0.1.tgz</a></p>
<p>Path to dependency file: /package.json</p>
<p>Path to vulnerable library: /node_modules/cacheable-request/node_modules/normalize-url/package.json</p>
<p>
Dependency Hierarchy:
- gulp-imagemin-6.2.0.tgz (Root Library)
- imagemin-gifsicle-6.0.1.tgz
- gifsicle-4.0.1.tgz
- bin-wrapper-4.1.0.tgz
- download-7.1.0.tgz
- got-8.3.2.tgz
- cacheable-request-2.1.4.tgz
- :x: **normalize-url-2.0.1.tgz** (Vulnerable Library)
</details>
<details><summary><b>normalize-url-3.3.0.tgz</b></p></summary>
<p>Normalize a URL</p>
<p>Library home page: <a href="https://registry.npmjs.org/normalize-url/-/normalize-url-3.3.0.tgz">https://registry.npmjs.org/normalize-url/-/normalize-url-3.3.0.tgz</a></p>
<p>Path to dependency file: /package.json</p>
<p>Path to vulnerable library: /node_modules/normalize-url/package.json</p>
<p>
Dependency Hierarchy:
- cssnano-4.1.10.tgz (Root Library)
- cssnano-preset-default-4.0.7.tgz
- postcss-normalize-url-4.0.1.tgz
- :x: **normalize-url-3.3.0.tgz** (Vulnerable Library)
</details>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
The normalize-url package before 4.5.1, 5.x before 5.3.1, and 6.x before 6.0.1 for Node.js has a ReDoS (regular expression denial of service) issue because it has exponential performance for data: URLs.
<p>Publish Date: 2021-05-24
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-33502>CVE-2021-33502</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-33502">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-33502</a></p>
<p>Release Date: 2021-05-24</p>
<p>Fix Resolution (normalize-url): 4.5.1</p>
<p>Direct dependency fix Resolution (cssnano): 5.0.0-rc.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | non_infrastructure | cve high detected in normalize url tgz normalize url tgz cve high severity vulnerability vulnerable libraries normalize url tgz normalize url tgz normalize url tgz normalize a url library home page a href path to dependency file package json path to vulnerable library node modules cacheable request node modules normalize url package json dependency hierarchy gulp imagemin tgz root library imagemin gifsicle tgz gifsicle tgz bin wrapper tgz download tgz got tgz cacheable request tgz x normalize url tgz vulnerable library normalize url tgz normalize a url library home page a href path to dependency file package json path to vulnerable library node modules normalize url package json dependency hierarchy cssnano tgz root library cssnano preset default tgz postcss normalize url tgz x normalize url tgz vulnerable library found in base branch master vulnerability details the normalize url package before x before and x before for node js has a redos regular expression denial of service issue because it has exponential performance for data urls publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution normalize url direct dependency fix resolution cssnano rc step up your open source security game with whitesource | 0 |
226,452 | 17,352,296,056 | IssuesEvent | 2021-07-29 10:13:48 | flutter/flutter | https://api.github.com/repos/flutter/flutter | closed | [Request] Add return type info to doc for FocusOnKeyCallback | d: api docs documentation easy fix framework passed first triage proposal | This is the DartDoc & declaration of `FocusOnKeyCallback`:
```
/// Signature of a callback used by [Focus.onKey] and [FocusScope.onKey]
/// to receive key events.
///
/// The [node] is the node that received the event.
typedef FocusOnKeyCallback = bool Function(FocusNode node, RawKeyEvent event);
```
The callback has a return type of `bool`, but how that value will be used is not clear. (Sorry if this is an obvious one :p) | 1.0 | [Request] Add return type info to doc for FocusOnKeyCallback - This is the DartDoc & declaration of `FocusOnKeyCallback`:
```
/// Signature of a callback used by [Focus.onKey] and [FocusScope.onKey]
/// to receive key events.
///
/// The [node] is the node that received the event.
typedef FocusOnKeyCallback = bool Function(FocusNode node, RawKeyEvent event);
```
The callback has a return type of `bool`, but how that value will be used is not clear. (Sorry if this is an obvious one :p) | non_infrastructure | add return type info to doc for focusonkeycallback this is the dartdoc declaration of focusonkeycallback signature of a callback used by and to receive key events the is the node that received the event typedef focusonkeycallback bool function focusnode node rawkeyevent event the callback has a return type of bool but how that value will be used is not clear sorry if this is an obvious one p | 0 |
16,310 | 11,907,758,472 | IssuesEvent | 2020-03-30 23:04:37 | breadware/registrant | https://api.github.com/repos/breadware/registrant | opened | Estruturar o projeto conforme Gitflow | infrastructure | Estruturar as branches do projeto de acordo com o [modelo Gitflow](https://www.atlassian.com/git/tutorials/comparing-workflows/gitflow-workflow). Isto facilitará o controle de branches e issues. | 1.0 | Estruturar o projeto conforme Gitflow - Estruturar as branches do projeto de acordo com o [modelo Gitflow](https://www.atlassian.com/git/tutorials/comparing-workflows/gitflow-workflow). Isto facilitará o controle de branches e issues. | infrastructure | estruturar o projeto conforme gitflow estruturar as branches do projeto de acordo com o isto facilitará o controle de branches e issues | 1 |
148,693 | 5,694,538,949 | IssuesEvent | 2017-04-15 14:14:19 | Radarr/Radarr | https://api.github.com/repos/Radarr/Radarr | closed | [API] SizeOnDisk is 0 on Request by movieId | bug priority:low | As the Title says. The Field SizeOnDisk is correct if the whole MovieList is requested.
If the Movie is requested by id, the Field SizeOnDisk is 0.
List Request:

Single Request:

[API] SizeOnDisk is 0 on Request by movieId - As the title says, the field SizeOnDisk is correct if the whole MovieList is requested.
If the Movie is requested by id, the Field SizeOnDisk is 0.
List Request:

Single Request:

| non_infrastructure | sizeondisk is on request by movieid as the title says the field sizeondisk is correct if the whole movielist is requested if the movie is requested by id the field sizeondisk is list request single request | 0 |
11,703 | 9,380,699,298 | IssuesEvent | 2019-04-04 17:44:50 | ressec/kakoo-gaming | https://api.github.com/repos/ressec/kakoo-gaming | opened | FEAT - Create the Gaming Server project | Act.: Infrastructure Env.: Development Task Typ.: Feature | ## Description
This project contains the server part of the game. | 1.0 | FEAT - Create the Gaming Server project - ## Description
This project contains the server part of the game. | infrastructure | feat create the gaming server project description this project is containing the server part of the game | 1 |
313,130 | 9,557,521,268 | IssuesEvent | 2019-05-03 11:50:10 | fritzing/fritzing-app | https://api.github.com/repos/fritzing/fritzing-app | closed | real cables? | Priority-Low enhancement imported | _From [irasc...@gmail.com](https://code.google.com/u/104729248032245122687/) on April 17, 2013 04:00:34_
feature idea: off-board components like displays and speakers, connectable with "real" cables to the various jacks
_Original issue: http://code.google.com/p/fritzing/issues/detail?id=2516_
| 1.0 | real cables? - _From [irasc...@gmail.com](https://code.google.com/u/104729248032245122687/) on April 17, 2013 04:00:34_
feature idea: off-board components like displays and speakers, connectable with "real" cables to the various jacks
_Original issue: http://code.google.com/p/fritzing/issues/detail?id=2516_
| non_infrastructure | real cables from on april feature idea off board components like displays and speakers connectable with real cables to the various jacks original issue | 0 |
287,223 | 31,827,556,516 | IssuesEvent | 2023-09-14 08:31:23 | gardener/gardener | https://api.github.com/repos/gardener/gardener | closed | ☂️ Improve Gardener Operator | kind/enhancement area/dev-productivity area/security area/delivery area/open-source area/high-availability area/ipcei | **How to categorize this issue?**
/area dev-productivity delivery high-availability security open-source
/kind enhancement
As of today, Gardener has no notion of managing any components running in the garden cluster. As a consequence, human operators have to manually deploy the "garden system components" (`vertical-pod-autoscaler`, `hvpa-controller`, `etcd-druid`, `nginx-ingress`) and the "virtual garden control plane components" (`etcd`/`backup-restore`, `kube-apiserver`, `kube-controller-manager`) as well as the "Gardener control plane components".
However, logically these processes are quite similar to what we have already implemented in the `gardenlet` for seed or shoot clusters.
The idea of the `gardener-operator` component is re-using existing code and sharing it with `gardenlet` so that the needed components can be made available more easily in all environments.
As part of this, `gardener-resource-manager` is becoming a central "garden system component" as well since it has a lot of features like token invalidation, seccomp defaulting, token requesting, HA config injection, etc.
## Tasks
- Initial skaffolding and introduction of new component
- [x] https://github.com/gardener/gardener/pull/7009
- Manage Garden System Components
- [x] Fine-grained `PriorityClass`es: https://github.com/gardener/gardener/pull/7009
- [x] `gardener-resource-manager`: https://github.com/gardener/gardener/pull/7009
- [x] `vertical-pod-autoscaler`: https://github.com/gardener/gardener/pull/7009
- [x] `hvpa-controller`: https://github.com/gardener/gardener/pull/7048
- [x] `etcd-druid`: https://github.com/gardener/gardener/pull/7048
- [x] `istio`: https://github.com/gardener/gardener/pull/7817
- Manage Virtual Garden Control Plane Components
- [x] `etcd`/`backup-restore` (via `Etcd` custom resource): https://github.com/gardener/gardener/pull/7067
- [x] `kube-apiserver` exposure
- [x] via `LoadBalancer` service: https://github.com/gardener/gardener/pull/7238
- [x] via `Istio`: https://github.com/gardener/gardener/pull/7953
- [x] https://github.com/gardener/gardener/pull/8156
- [x] https://github.com/gardener/gardener/pull/8302
- [x] `kube-apiserver`: https://github.com/gardener/gardener/pull/7730
- Prerequisites / related upfront work:
- [x] https://github.com/gardener/gardener/pull/7242
- [x] https://github.com/gardener/gardener/pull/7243
- [x] https://github.com/gardener/gardener/pull/7710
- [x] https://github.com/gardener/gardener/pull/7258
- [x] https://github.com/gardener/gardener/pull/7498
- [x] https://github.com/gardener/gardener/pull/7518
- [x] https://github.com/gardener/gardener/pull/7558
- [x] https://github.com/gardener/gardener/pull/7567
- [x] https://github.com/gardener/gardener/pull/7573
- [x] https://github.com/gardener/gardener/pull/7687
- [x] https://github.com/gardener/gardener/pull/7682
- [x] https://github.com/gardener/gardener/pull/7693
- Follow-up work:
- [x] https://github.com/gardener/gardener/pull/7734
- [x] https://github.com/gardener/gardener/pull/7735
- [x] https://github.com/gardener/gardener/pull/7877
- [x] `virtual-garden-gardener-resource-manager`: https://github.com/gardener/gardener/pull/7881
- [x] `kube-controller-manager`: https://github.com/gardener/gardener/pull/7931
- Prerequisites / related upfront work:
- [x] https://github.com/gardener/gardener/pull/7858
- [x] https://github.com/gardener/gardener/pull/7887
- Manage Gardener Control Plane Components
- [x] `gardener-{apiserver,controller-manager,...}`: https://github.com/gardener/gardener/pull/8309
- Prerequisites / related upfront work:
- [x] https://github.com/gardener/gardener/pull/7998
- [x] https://github.com/gardener/gardener/pull/8234
- [x] https://github.com/gardener/gardener/pull/8215
- [x] https://github.com/gardener/gardener/pull/8235
- [x] https://github.com/gardener/gardener/pull/8244
- [x] https://github.com/gardener/gardener/pull/8251
- [x] https://github.com/gardener/gardener/pull/8282
- [x] https://github.com/gardener/gardener/pull/8283
- [x] https://github.com/gardener/gardener/pull/8262
- [x] https://github.com/gardener/gardener/pull/8265
- [x] https://github.com/gardener/gardener/pull/8276
- [x] https://github.com/gardener/gardener/pull/8396
- Manage Garden Observability Components
- [x] `nginx-ingress-controller` (~[or ideally `istio` only](https://github.com/gardener/gardener/issues/7232)~): https://github.com/gardener/gardener/pull/7945
- [x] `kube-state-metrics`: https://github.com/gardener/gardener/pull/7836
- [x] `fluent-operator`: https://github.com/gardener/gardener/pull/8240
- [x] `vali`: https://github.com/gardener/gardener/pull/8240
- [x] `plutono`: https://github.com/gardener/gardener/pull/8301
- [x] `gardener-metrics-exporter`: https://github.com/gardener/gardener/pull/8419
- Miscellaneous
- [x] https://github.com/gardener/gardener/pull/7859
- [x] https://github.com/gardener/gardener/pull/8158
- [x] https://github.com/gardener/gardener/pull/8346
- [x] https://github.com/gardener/gardener/pull/8439
- [x] Add support for credentials rotation (similar to how it works for [`Shoot`s](https://github.com/gardener/gardener/blob/master/docs/usage/shoot_credentials_rotation.md)): https://github.com/gardener/gardener/pull/7144
- [x] https://github.com/gardener/gardener/pull/8393
- [x] Extended validation (deletion protection, etc.): https://github.com/gardener/gardener/pull/7144
- [x] https://github.com/gardener/gardener/pull/7225
- [x] https://github.com/gardener/gardener/issues/6896
- [x] https://github.com/gardener/gardener/pull/8238
- [x] https://github.com/gardener/gardener/pull/8279
- [x] https://github.com/gardener/gardener/pull/8413
- [x] https://github.com/gardener/gardener/pull/8433
❗️ Please note ❗️
- It is NOT planned (at least for the foreseeable future until this component graduates) to manage any additional addons deployed to the garden cluster (Gardener dashboard, audit log components, ...) or any extensions.
- Managing the `gardenlet` via `gardener-operator` is NOT planned as well, but could be done. However, in production scenarios, the garden cluster is typically not a seed cluster at a same time, and generally we cannot assume that `gardener-operator` has network connectivity to the clusters where a `gardenlet` should be deployed to. Hence, managing the `gardenlet` has a low priority (if any at all). | True | ☂️ Improve Gardener Operator - **How to categorize this issue?**
<!--
Please select area, kind, and priority for this issue. This helps the community categorizing it.
Replace below TODOs or exchange the existing identifiers with those that fit best in your opinion.
If multiple identifiers make sense you can also state the commands multiple times, e.g.
/area control-plane
/area auto-scaling
...
"/area" identifiers: audit-logging|auto-scaling|backup|certification|control-plane-migration|control-plane|cost|delivery|dev-productivity|disaster-recovery|documentation|high-availability|logging|metering|monitoring|networking|open-source|ops-productivity|os|performance|quality|robustness|scalability|security|storage|testing|usability|user-management
"/kind" identifiers: api-change|bug|cleanup|discussion|enhancement|epic|impediment|poc|post-mortem|question|regression|task|technical-debt|test
-->
/area dev-productivity delivery high-availability security open-source
/kind enhancement
As of today, Gardener has no notion of managing any components running in the garden cluster. As a consequence, human operators have to manually deploy the "garden system components" (`vertical-pod-autoscaler`, `hvpa-controller`, `etcd-druid`, `nginx-ingress`) and the "virtual garden control plane components" (`etcd`/`backup-restore`, `kube-apiserver`, `kube-controller-manager`) as well as the "Gardener control plane components".
However, logically these processes are quite similar to what we have already implemented in the `gardenlet` for seed or shoot clusters.
The idea of the `gardener-operator` component is re-using existing code and sharing it with `gardenlet` so that the needed components can be made available more easily in all environments.
As part of this, `gardener-resource-manager` is becoming a central "garden system component" as well since it has a lot of features like token invalidation, seccomp defaulting, token requesting, HA config injection, etc.
## Tasks
- Initial skaffolding and introduction of new component
- [x] https://github.com/gardener/gardener/pull/7009
- Manage Garden System Components
- [x] Fine-grained `PriorityClass`es: https://github.com/gardener/gardener/pull/7009
- [x] `gardener-resource-manager`: https://github.com/gardener/gardener/pull/7009
- [x] `vertical-pod-autoscaler`: https://github.com/gardener/gardener/pull/7009
- [x] `hvpa-controller`: https://github.com/gardener/gardener/pull/7048
- [x] `etcd-druid`: https://github.com/gardener/gardener/pull/7048
- [x] `istio`: https://github.com/gardener/gardener/pull/7817
- Manage Virtual Garden Control Plane Components
- [x] `etcd`/`backup-restore` (via `Etcd` custom resource): https://github.com/gardener/gardener/pull/7067
- [x] `kube-apiserver` exposure
- [x] via `LoadBalancer` service: https://github.com/gardener/gardener/pull/7238
- [x] via `Istio`: https://github.com/gardener/gardener/pull/7953
- [x] https://github.com/gardener/gardener/pull/8156
- [x] https://github.com/gardener/gardener/pull/8302
- [x] `kube-apiserver`: https://github.com/gardener/gardener/pull/7730
- Prerequisites / related upfront work:
- [x] https://github.com/gardener/gardener/pull/7242
- [x] https://github.com/gardener/gardener/pull/7243
- [x] https://github.com/gardener/gardener/pull/7710
- [x] https://github.com/gardener/gardener/pull/7258
- [x] https://github.com/gardener/gardener/pull/7498
- [x] https://github.com/gardener/gardener/pull/7518
- [x] https://github.com/gardener/gardener/pull/7558
- [x] https://github.com/gardener/gardener/pull/7567
- [x] https://github.com/gardener/gardener/pull/7573
- [x] https://github.com/gardener/gardener/pull/7687
- [x] https://github.com/gardener/gardener/pull/7682
- [x] https://github.com/gardener/gardener/pull/7693
- Follow-up work:
- [x] https://github.com/gardener/gardener/pull/7734
- [x] https://github.com/gardener/gardener/pull/7735
- [x] https://github.com/gardener/gardener/pull/7877
- [x] `virtual-garden-gardener-resource-manager`: https://github.com/gardener/gardener/pull/7881
- [x] `kube-controller-manager`: https://github.com/gardener/gardener/pull/7931
- Prerequisites / related upfront work:
- [x] https://github.com/gardener/gardener/pull/7858
- [x] https://github.com/gardener/gardener/pull/7887
- Manage Gardener Control Plane Components
- [x] `gardener-{apiserver,controller-manager,...}`: https://github.com/gardener/gardener/pull/8309
- Prerequisites / related upfront work:
- [x] https://github.com/gardener/gardener/pull/7998
- [x] https://github.com/gardener/gardener/pull/8234
- [x] https://github.com/gardener/gardener/pull/8215
- [x] https://github.com/gardener/gardener/pull/8235
- [x] https://github.com/gardener/gardener/pull/8244
- [x] https://github.com/gardener/gardener/pull/8251
- [x] https://github.com/gardener/gardener/pull/8282
- [x] https://github.com/gardener/gardener/pull/8283
- [x] https://github.com/gardener/gardener/pull/8262
- [x] https://github.com/gardener/gardener/pull/8265
- [x] https://github.com/gardener/gardener/pull/8276
- [x] https://github.com/gardener/gardener/pull/8396
- Manage Garden Observability Components
- [x] `nginx-ingress-controller` (~[or ideally `istio` only](https://github.com/gardener/gardener/issues/7232)~): https://github.com/gardener/gardener/pull/7945
- [x] `kube-state-metrics`: https://github.com/gardener/gardener/pull/7836
- [x] `fluent-operator`: https://github.com/gardener/gardener/pull/8240
- [x] `vali`: https://github.com/gardener/gardener/pull/8240
- [x] `plutono`: https://github.com/gardener/gardener/pull/8301
- [x] `gardener-metrics-exporter`: https://github.com/gardener/gardener/pull/8419
- Miscellaneous
- [x] https://github.com/gardener/gardener/pull/7859
- [x] https://github.com/gardener/gardener/pull/8158
- [x] https://github.com/gardener/gardener/pull/8346
- [x] https://github.com/gardener/gardener/pull/8439
- [x] Add support for credentials rotation (similar to how it works for [`Shoot`s](https://github.com/gardener/gardener/blob/master/docs/usage/shoot_credentials_rotation.md)): https://github.com/gardener/gardener/pull/7144
- [x] https://github.com/gardener/gardener/pull/8393
- [x] Extended validation (deletion protection, etc.): https://github.com/gardener/gardener/pull/7144
- [x] https://github.com/gardener/gardener/pull/7225
- [x] https://github.com/gardener/gardener/issues/6896
- [x] https://github.com/gardener/gardener/pull/8238
- [x] https://github.com/gardener/gardener/pull/8279
- [x] https://github.com/gardener/gardener/pull/8413
- [x] https://github.com/gardener/gardener/pull/8433
❗️ Please note ❗️
- It is NOT planned (at least for the foreseeable future until this component graduates) to manage any additional addons deployed to the garden cluster (Gardener dashboard, audit log components, ...) or any extensions.
- Managing the `gardenlet` via `gardener-operator` is NOT planned as well, but could be done. However, in production scenarios, the garden cluster is typically not a seed cluster at a same time, and generally we cannot assume that `gardener-operator` has network connectivity to the clusters where a `gardenlet` should be deployed to. Hence, managing the `gardenlet` has a low priority (if any at all). | non_infrastructure | ☂️ improve gardener operator how to categorize this issue please select area kind and priority for this issue this helps the community categorizing it replace below todos or exchange the existing identifiers with those that fit best in your opinion if multiple identifiers make sense you can also state the commands multiple times e g area control plane area auto scaling area identifiers audit logging auto scaling backup certification control plane migration control plane cost delivery dev productivity disaster recovery documentation high availability logging metering monitoring networking open source ops productivity os performance quality robustness scalability security storage testing usability user management kind identifiers api change bug cleanup discussion enhancement epic impediment poc post mortem question regression task technical debt test area dev productivity delivery high availability security open source kind enhancement as of today gardener has no notion of managing any components running in the garden cluster as a consequence human operators have to manually deploy the garden system components vertical pod autoscaler hvpa controller etcd druid nginx ingress and the virtual garden control plane components etcd backup restore kube apiserver kube controller manager as well as the gardener control plane components however logically these processes are quite similar to what we have already implemented in the gardenlet for seed or shoot clusters the idea of the gardener operator component is re using existing code and sharing it with 
gardenlet so that the needed components can be made available more easily in all environments as part of this gardener resource manager is becoming a central garden system component as well since it has a lot of features like token invalidation seccomp defaulting token requesting ha config injection etc tasks initial skaffolding and introduction of new component manage garden system components fine grained priorityclass es gardener resource manager vertical pod autoscaler hvpa controller etcd druid istio manage virtual garden control plane components etcd backup restore via etcd custom resource kube apiserver exposure via loadbalancer service via istio kube apiserver prerequisites related upfront work follow up work virtual garden gardener resource manager kube controller manager prerequisites related upfront work manage gardener control plane components gardener apiserver controller manager prerequisites related upfront work manage garden observability components nginx ingress controller kube state metrics fluent operator vali plutono gardener metrics exporter miscellaneous add support for credentials rotation similar to how it works for extended validation deletion protection etc ❗️ please note ❗️ it is not planned at least for the foreseeable future until this component graduates to manage any additional addons deployed to the garden cluster gardener dashboard audit log components or any extensions managing the gardenlet via gardener operator is not planned as well but could be done however in production scenarios the garden cluster is typically not a seed cluster at a same time and generally we cannot assume that gardener operator has network connectivity to the clusters where a gardenlet should be deployed to hence managing the gardenlet has a low priority if any at all | 0 |
16,323 | 3,518,045,552 | IssuesEvent | 2016-01-12 10:53:29 | I2PC/scipion | https://api.github.com/repos/I2PC/scipion | opened | Check failing tests of Significant and Compare-Reprojections | bug test | scipion test tests.em.protocols.test_protocols_xmipp_2d.TestXmippCompareReprojections
scipion test tests.em.workflows.test_workflow_initialvolume.TestSignificant | 1.0 | Check failing tests of Significant and Compare-Reprojections - scipion test tests.em.protocols.test_protocols_xmipp_2d.TestXmippCompareReprojections
scipion test tests.em.workflows.test_workflow_initialvolume.TestSignificant | non_infrastructure | check failing tests of significant and compare reprojections scipion test tests em protocols test protocols xmipp testxmippcomparereprojections scipion test tests em workflows test workflow initialvolume testsignificant | 0 |
82,114 | 15,646,505,639 | IssuesEvent | 2021-03-23 01:04:51 | jgeraigery/linux | https://api.github.com/repos/jgeraigery/linux | opened | CVE-2019-8980 (High) detected in linuxv5.2 | security vulnerability | ## CVE-2019-8980 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linuxv5.2</b></p></summary>
<p>
<p>Linux kernel source tree</p>
<p>Library home page: <a href=https://github.com/torvalds/linux.git>https://github.com/torvalds/linux.git</a></p>
</p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (2)</summary>
<p></p>
<p>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>linux/fs/exec.c</b>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>linux/fs/exec.c</b>
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
A memory leak in the kernel_read_file function in fs/exec.c in the Linux kernel through 4.20.11 allows attackers to cause a denial of service (memory consumption) by triggering vfs_read failures.
<p>Publish Date: 2019-02-21
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-8980>CVE-2019-8980</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-8980">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-8980</a></p>
<p>Release Date: 2019-02-21</p>
<p>Fix Resolution: v5.1-rc1</p>
</p>
</details>
<p></p>
| True | CVE-2019-8980 (High) detected in linuxv5.2 - ## CVE-2019-8980 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linuxv5.2</b></p></summary>
<p>
<p>Linux kernel source tree</p>
<p>Library home page: <a href=https://github.com/torvalds/linux.git>https://github.com/torvalds/linux.git</a></p>
</p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (2)</summary>
<p></p>
<p>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>linux/fs/exec.c</b>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>linux/fs/exec.c</b>
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
A memory leak in the kernel_read_file function in fs/exec.c in the Linux kernel through 4.20.11 allows attackers to cause a denial of service (memory consumption) by triggering vfs_read failures.
<p>Publish Date: 2019-02-21
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-8980>CVE-2019-8980</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-8980">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-8980</a></p>
<p>Release Date: 2019-02-21</p>
<p>Fix Resolution: v5.1-rc1</p>
</p>
</details>
<p></p>
| non_infrastructure | cve high detected in cve high severity vulnerability vulnerable library linux kernel source tree library home page a href vulnerable source files linux fs exec c linux fs exec c vulnerability details a memory leak in the kernel read file function in fs exec c in the linux kernel through allows attackers to cause a denial of service memory consumption by triggering vfs read failures publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution | 0 |
108,220 | 23,579,862,613 | IssuesEvent | 2022-08-23 06:37:31 | UnitTestBot/UTBotJava | https://api.github.com/repos/UnitTestBot/UTBotJava | closed | Unnecessary reflections in code generated for arrays | codegen | **Description**
In some cases codegen works with arrays using reflection even if there is no need for it.
**To Reproduce**
Run plugin on the following code:
```Java
package rndpkg;
class C {
int x;
public C(int x) { this.x = x; }
}
public class SomeClass {
public int f(C[] c) {
c[0].x -= 1;
return c[0].x;
}
}
```
**Expected behavior**
Produced tests don't use reflection.
**Actual behavior**
Tests look like this:
```Java
@Test
@DisplayName("f: c = C[0] -> throw ArrayIndexOutOfBoundsException")
public void testFThrowsAIOOBEWithEmptyObjectArray() throws Throwable {
SomeClass someClass = new SomeClass();
Object[] c = createArray("rndpkg.C", 0);
/* This test fails because method [rndpkg.SomeClass.f] produces [java.lang.ArrayIndexOutOfBoundsException: 0]
rndpkg.SomeClass.f(SomeClass.java:10) */
Class someClassClazz = Class.forName("rndpkg.SomeClass");
Class cType = Class.forName("[Lrndpkg.C;");
Method fMethod = someClassClazz.getDeclaredMethod("f", cType);
fMethod.setAccessible(true);
Object[] fMethodArguments = new Object[1];
fMethodArguments[0] = c;
try {
fMethod.invoke(someClass, fMethodArguments);
} catch (InvocationTargetException invocationTargetException) {
throw invocationTargetException.getTargetException();
}
}
```
| 1.0 | Unnecessary reflections in code generated for arrays - **Description**
In some cases codegen works with arrays using reflection even if there is no need for it.
**To Reproduce**
Run plugin on the following code:
```Java
package rndpkg;
class C {
int x;
public C(int x) { this.x = x; }
}
public class SomeClass {
public int f(C[] c) {
c[0].x -= 1;
return c[0].x;
}
}
```
**Expected behavior**
Produced tests don't use reflection.
**Actual behavior**
Tests look like this:
```Java
@Test
@DisplayName("f: c = C[0] -> throw ArrayIndexOutOfBoundsException")
public void testFThrowsAIOOBEWithEmptyObjectArray() throws Throwable {
SomeClass someClass = new SomeClass();
Object[] c = createArray("rndpkg.C", 0);
/* This test fails because method [rndpkg.SomeClass.f] produces [java.lang.ArrayIndexOutOfBoundsException: 0]
rndpkg.SomeClass.f(SomeClass.java:10) */
Class someClassClazz = Class.forName("rndpkg.SomeClass");
Class cType = Class.forName("[Lrndpkg.C;");
Method fMethod = someClassClazz.getDeclaredMethod("f", cType);
fMethod.setAccessible(true);
Object[] fMethodArguments = new Object[1];
fMethodArguments[0] = c;
try {
fMethod.invoke(someClass, fMethodArguments);
} catch (InvocationTargetException invocationTargetException) {
throw invocationTargetException.getTargetException();
}
}
```
| non_infrastructure | unnecessary reflections in code generated for arrays description in some cases codegen works with arrays using reflection even if there is no need for it to reproduce run plugin on the following code java package rndpkg class c int x public c int x this x x public class someclass public int f c c c x return c x expected behavior produced tests don t use reflection actual behavior tests look like this java test displayname f c c throw arrayindexoutofboundsexception public void testfthrowsaioobewithemptyobjectarray throws throwable someclass someclass new someclass object c createarray rndpkg c this test fails because method produces rndpkg someclass f someclass java class someclassclazz class forname rndpkg someclass class ctype class forname lrndpkg c method fmethod someclassclazz getdeclaredmethod f ctype fmethod setaccessible true object fmethodarguments new object fmethodarguments c try fmethod invoke someclass fmethodarguments catch invocationtargetexception invocationtargetexception throw invocationtargetexception gettargetexception | 0 |
24,171 | 16,986,655,319 | IssuesEvent | 2021-06-30 15:04:52 | celo-org/celo-blockchain | https://api.github.com/repos/celo-org/celo-blockchain | closed | Kubernetes Service session-affinity not working for multiple ports in the same service | blockchain current-sprint theme: infrastructure | Kubernetes service may not forward rpc calls (port 8545) and ws traffic (port 8546) from the same client when `sessionAffinity: ClientIP`. This causes multiple blockscout errors in mainnet now that the load that the indexer generates is higher and there are multiple service endpoints.
Issue in Kubernetes repository: https://github.com/kubernetes/kubernetes/issues/103000
Tentative solution will be setting up an internal nginx proxy inside each pod that handles the redirection to rpc or websocket port. | 1.0 | Kubernetes Service session-affinity not working for multiple ports in the same service - Kubernetes service may not forward rpc calls (port 8545) and ws traffic (port 8546) from the same client when `sessionAffinity: ClientIP`. This causes multiple blockscout errors in mainnet now that the load that the indexer generates is higher and there are multiple service endpoints.
Issue in Kubernetes repository: https://github.com/kubernetes/kubernetes/issues/103000
Tentative solution will be setting up an internal nginx proxy inside each pod that handles the redirection to rpc or websocket port. | infrastructure | kubernetes service session affinity not working for multiple ports in the same service kubernetes service may not forward rpc calls port and ws traffic port from the same client when sessionaffinity clientip this causes multiple blockscout errors in mainnet now that the load that the indexer generates is higher and there are multiple service endpoints issue in kubernetes repository tentative solution will be setting up an internal nginx proxy inside each pod that handles the redirection to rpc or websocket port | 1 |
76,497 | 26,459,780,175 | IssuesEvent | 2023-01-16 16:33:48 | zed-industries/feedback | https://api.github.com/repos/zed-industries/feedback | closed | Typescript support doesn't work | defect typescript language | ### Check for existing issues
- [X] Completed
### Describe the bug
Hey zed team!
This looks super promising and I'm excited to see where it goes from here.
Now, for the issue at hand - it looks like the typescript language server simply doesn't work.
Here's my config, just in case I misconfigured something on my end:
```json
// Zed settings
//
// For information on how to configure Zed, see the Zed
// documentation: https://zed.dev/docs/configuring-zed
//
// To see all of Zed's default settings without changing your
// custom settings, run the `open default settings` command
// from the command palette or from `Zed` application menu.
{
"buffer_font_size": 15,
"buffer_font_family": "CaskaydiaCove Nerd Font",
"autosave": "on_focus_change",
"lsp": {},
"terminal": {
"font_family": "CaskaydiaCove Nerd Font"
},
"enable_language_server": true,
// "vim_mode": true,
"tab_size": 2,
"language_overrides": {
"JavaScript": {
"format_on_save": {
"external": {
"command": "prettier",
"arguments": [
"--stdin-filepath",
"{buffer_path}"
]
}
}
},
"TypeScript": {
"format_on_save": {
"external": {
"command": "prettier",
"arguments": [
"--stdin-filepath",
"{buffer_path}"
]
}
}
}
},
"languages": {
"TypeScript": {
"format_on_save": "language_server",
"enable_language_server": true
},
"JavaScript": {
"format_on_save": "language_server",
"enable_language_server": true
}
}
}
```
### To reproduce
Open any typescript project
### Expected behavior
autocompletion, go to definition, etc` should work
### Environment
Zed 0.50.0 – /Applications/Zed.app
macOS 12.5
architecture x86_64
### If applicable, add mockups / screenshots to help explain present your vision of the feature
_No response_
### If applicable, attach your `~/Library/Logs/Zed/Zed.log` file to this issue
18:26:14 [INFO] ========== starting zed ==========
18:26:17 [INFO] set environment variables from shell:/bin/zsh, path:/Users/lev/.pyenv/shims:/Users/lev/.nvm/versions/node/v12.16.1/bin:/usr/local/bin:/usr/bin:/bin:/usr/sbin:/sbin:/Library/Apple/usr/bin:/Users/lev/.cargo/bin:/Users/lev/.fig/bin:/Users/lev/.local/bin:/Users/lev/go/bin:/Users/lev/.deno/bin
18:27:09 [ERROR] Unhandled method completionItem/resolve
18:27:42 [ERROR] no worktree found for diagnostics
18:27:57 [ERROR] Os { code: 2, kind: NotFound, message: "No such file or directory" }
18:28:03 [INFO] set status on client 0: Authenticating
18:28:07 [INFO] set status on client 0: Connecting
18:28:07 [INFO] connected to rpc endpoint https://collab.zed.dev/rpc
18:28:08 [INFO] add connection to peer
18:28:08 [INFO] add_connection;
18:28:08 [INFO] set status to connected 0
18:28:08 [INFO] set status on client 0: Connected { connection_id: ConnectionId(0) }
18:28:34 [INFO] open paths ["/Users/lev/Projects/zencity/export-service"]
18:29:04 [ERROR] no worktree found for diagnostics
18:29:30 [INFO] Editor::page_down
18:29:30 [INFO] Editor::page_down
18:29:30 [INFO] Editor::page_down
18:29:32 [INFO] Editor::page_down
18:29:32 [INFO] Editor::page_down
18:29:32 [INFO] Editor::page_down
18:29:32 [INFO] Editor::page_down
18:29:33 [INFO] Editor::page_down
18:29:33 [INFO] Editor::page_down
18:29:33 [INFO] Editor::page_down
18:29:41 [ERROR] Unhandled method completionItem/resolve
18:29:51 [ERROR] Unhandled method completionItem/resolve
18:29:56 [ERROR] Unhandled method completionItem/resolve
18:30:06 [ERROR] Unhandled method completionItem/resolve
18:30:10 [ERROR] Unhandled method completionItem/resolve
18:30:16 [ERROR] Unhandled method completionItem/resolve
18:30:21 [ERROR] Unhandled method completionItem/resolve
18:30:24 [ERROR] Unhandled method completionItem/resolve
Typescript support doesn't work
### Check for existing issues
- [X] Completed
### Describe the bug
Hey zed team!
This looks super promising and I'm excited to see where it goes from here.
Now, for the issue at hand - it looks like the typescript language server simply doesn't work.
Here's my config, just in case I misconfigured something on my end:
```json
// Zed settings
//
// For information on how to configure Zed, see the Zed
// documentation: https://zed.dev/docs/configuring-zed
//
// To see all of Zed's default settings without changing your
// custom settings, run the `open default settings` command
// from the command palette or from `Zed` application menu.
{
"buffer_font_size": 15,
"buffer_font_family": "CaskaydiaCove Nerd Font",
"autosave": "on_focus_change",
"lsp": {},
"terminal": {
"font_family": "CaskaydiaCove Nerd Font"
},
"enable_language_server": true,
// "vim_mode": true,
"tab_size": 2,
"language_overrides": {
"JavaScript": {
"format_on_save": {
"external": {
"command": "prettier",
"arguments": [
"--stdin-filepath",
"{buffer_path}"
]
}
}
},
"TypeScript": {
"format_on_save": {
"external": {
"command": "prettier",
"arguments": [
"--stdin-filepath",
"{buffer_path}"
]
}
}
}
},
"languages": {
"TypeScript": {
"format_on_save": "language_server",
"enable_language_server": true
},
"JavaScript": {
"format_on_save": "language_server",
"enable_language_server": true
}
}
}
```
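As a side note on the config above: the attached log repeatedly reports `duplicate field `languages`` parse errors, which suggests the settings file was at some point invalid (the same top-level field defined twice) — and an unparseable settings file would keep any of these options from applying. A minimal consolidated sketch, assuming `languages` is the supported key in this Zed version (`language_overrides` may be an older/alternate name — unverified here) and that the intent is prettier-on-save with the language server enabled:

```json
{
  // Single `languages` block — defining it twice makes the file unparseable
  "enable_language_server": true,
  "languages": {
    "TypeScript": {
      "format_on_save": {
        "external": {
          "command": "prettier",
          "arguments": ["--stdin-filepath", "{buffer_path}"]
        }
      }
    },
    "JavaScript": {
      "format_on_save": {
        "external": {
          "command": "prettier",
          "arguments": ["--stdin-filepath", "{buffer_path}"]
        }
      }
    }
  }
}
```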
### To reproduce
Open any typescript project
### Expected behavior
autocompletion, go to definition, etc. should work
### Environment
Zed 0.50.0 – /Applications/Zed.app
macOS 12.5
architecture x86_64
### If applicable, add mockups / screenshots to help explain / present your vision of the feature
_No response_
### If applicable, attach your `~/Library/Logs/Zed/Zed.log` file to this issue
18:26:14 [INFO] ========== starting zed ==========
18:26:17 [INFO] set environment variables from shell:/bin/zsh, path:/Users/lev/.pyenv/shims:/Users/lev/.nvm/versions/node/v12.16.1/bin:/usr/local/bin:/usr/bin:/bin:/usr/sbin:/sbin:/Library/Apple/usr/bin:/Users/lev/.cargo/bin:/Users/lev/.fig/bin:/Users/lev/.local/bin:/Users/lev/go/bin:/Users/lev/.deno/bin
18:27:09 [ERROR] Unhandled method completionItem/resolve
18:27:42 [ERROR] no worktree found for diagnostics
18:27:57 [ERROR] Os { code: 2, kind: NotFound, message: "No such file or directory" }
18:28:03 [INFO] set status on client 0: Authenticating
18:28:07 [INFO] set status on client 0: Connecting
18:28:07 [INFO] connected to rpc endpoint https://collab.zed.dev/rpc
18:28:08 [INFO] add connection to peer
18:28:08 [INFO] add_connection;
18:28:08 [INFO] set status to connected 0
18:28:08 [INFO] set status on client 0: Connected { connection_id: ConnectionId(0) }
18:28:34 [INFO] open paths ["/Users/lev/Projects/zencity/export-service"]
18:29:04 [ERROR] no worktree found for diagnostics
18:29:30 [INFO] Editor::page_down
18:29:30 [INFO] Editor::page_down
18:29:30 [INFO] Editor::page_down
18:29:32 [INFO] Editor::page_down
18:29:32 [INFO] Editor::page_down
18:29:32 [INFO] Editor::page_down
18:29:32 [INFO] Editor::page_down
18:29:33 [INFO] Editor::page_down
18:29:33 [INFO] Editor::page_down
18:29:33 [INFO] Editor::page_down
18:29:41 [ERROR] Unhandled method completionItem/resolve
18:29:51 [ERROR] Unhandled method completionItem/resolve
18:29:56 [ERROR] Unhandled method completionItem/resolve
18:30:06 [ERROR] Unhandled method completionItem/resolve
18:30:10 [ERROR] Unhandled method completionItem/resolve
18:30:16 [ERROR] Unhandled method completionItem/resolve
18:30:21 [ERROR] Unhandled method completionItem/resolve
18:30:24 [ERROR] Unhandled method completionItem/resolve
18:30:34 [ERROR] Unhandled method completionItem/resolve
18:30:37 [ERROR] Unhandled method completionItem/resolve
18:30:58 [ERROR] trailing comma at line 15 column 1
18:31:01 [ERROR] trailing comma at line 15 column 1
18:31:20 [ERROR] Unhandled method completionItem/resolve
18:31:25 [ERROR] Unhandled method completionItem/resolve
18:32:13 [ERROR] invalid header
18:32:13 [ERROR] oneshot canceled
18:32:13 [ERROR] Broken pipe (os error 32)
18:32:13 [ERROR] oneshot canceled
18:32:43 [ERROR] Unhandled method workspace/symbol
18:32:45 [ERROR] Unhandled method workspace/symbol
18:33:40 [ERROR] Unhandled method workspace/symbol
18:33:43 [INFO] Editor::page_up
18:33:43 [INFO] Editor::page_up
18:33:43 [INFO] Editor::page_up
18:33:43 [INFO] Editor::page_up
18:33:51 [ERROR] Unhandled method completionItem/resolve
18:33:53 [ERROR] Unhandled method completionItem/resolve
18:33:55 [ERROR] trailing comma at line 19 column 1
18:34:23 [ERROR] no worktree found for diagnostics
18:34:34 [ERROR] invalid header
18:34:34 [ERROR] oneshot canceled
18:34:34 [ERROR] Broken pipe (os error 32)
18:34:34 [ERROR] oneshot canceled
18:34:40 [INFO] Editor::page_down
18:34:40 [INFO] Editor::page_down
18:34:40 [INFO] Editor::page_down
18:34:40 [INFO] Editor::page_down
18:34:40 [INFO] Editor::page_down
18:34:40 [INFO] Editor::page_down
18:34:41 [INFO] Editor::page_down
18:34:41 [INFO] Editor::page_down
18:34:41 [INFO] Editor::page_down
18:34:41 [INFO] Editor::page_up
18:34:42 [INFO] Editor::page_up
18:34:47 [INFO] Editor::page_down
18:34:47 [INFO] Editor::page_up
18:34:47 [INFO] Editor::page_up
18:34:48 [INFO] Editor::page_up
18:34:48 [INFO] Editor::page_up
18:34:48 [INFO] Editor::page_up
18:34:48 [INFO] Editor::page_up
18:34:48 [INFO] Editor::page_up
18:34:48 [INFO] Editor::page_up
18:34:48 [INFO] Editor::page_up
18:34:49 [INFO] Editor::page_up
18:34:49 [INFO] Editor::page_up
18:34:49 [INFO] Editor::page_down
18:38:37 [WARN] incoming response: unknown request connection_id=0 message_id=15 responding_to=835
18:38:41 [ERROR] no such worktree
18:38:52 [ERROR] oneshot canceled
18:39:09 [INFO] ========== starting zed ==========
18:39:09 [INFO] open paths ["/Users/lev/Projects/zencity/export-service"]
18:39:09 [INFO] set status on client 0: Authenticating
18:39:09 [INFO] set status on client 0: Connecting
18:39:09 [INFO] connected to rpc endpoint https://collab.zed.dev/rpc
18:39:10 [INFO] add connection to peer
18:39:10 [INFO] add_connection;
18:39:10 [INFO] set status to connected 0
18:39:10 [INFO] set status on client 0: Connected { connection_id: ConnectionId(0) }
18:39:12 [INFO] set environment variables from shell:/bin/zsh, path:/Users/lev/.pyenv/shims:/Users/lev/.nvm/versions/node/v12.16.1/bin:/Users/lev/.nvm/versions/node/v12.16.1/bin:/usr/local/bin:/usr/bin:/bin:/usr/sbin:/sbin:/Library/Apple/usr/bin:/Users/lev/.nvm/versions/node/v16.15.0/bin:/Users/lev/.cargo/bin:/Applications/kitty.app/Contents/MacOS:/Users/lev/.fig/bin:/Users/lev/.local/bin:/Users/lev/go/bin:/Users/lev/.deno/bin:/Users/lev/go/bin:/Users/lev/.deno/bin
18:41:07 [ERROR] invalid header
18:41:07 [ERROR] oneshot canceled
18:41:07 [ERROR] oneshot canceled
18:41:07 [ERROR] Broken pipe (os error 32)
18:41:24 [ERROR] Unhandled method completionItem/resolve
18:41:44 [ERROR] Unhandled method completionItem/resolve
18:41:55 [ERROR] no worktree found for diagnostics
18:41:59 [INFO] Editor::page_down
18:41:59 [INFO] Editor::page_down
18:41:59 [INFO] Editor::page_down
18:42:07 [INFO] Editor::page_down
18:42:07 [INFO] Editor::page_down
18:42:07 [INFO] Editor::page_down
18:42:09 [INFO] Editor::page_down
18:42:09 [INFO] Editor::page_down
18:42:09 [INFO] Editor::page_down
18:43:37 [INFO] Editor::page_down
18:43:37 [INFO] Editor::page_down
18:43:39 [ERROR] oneshot canceled
18:43:39 [ERROR] oneshot canceled
05:11:48 [INFO] ========== starting zed ==========
05:11:49 [INFO] set status on client 0: Authenticating
05:11:49 [INFO] set status on client 0: Connecting
05:11:50 [INFO] connected to rpc endpoint https://collab.zed.dev/rpc
05:11:50 [INFO] add connection to peer
05:11:50 [INFO] add_connection;
05:11:50 [INFO] set status to connected 0
05:11:50 [INFO] set status on client 0: Connected { connection_id: ConnectionId(0) }
05:11:51 [INFO] set environment variables from shell:/bin/zsh, path:/Users/lev/.pyenv/shims:/Users/lev/.nvm/versions/node/v12.16.1/bin:/usr/local/bin:/usr/bin:/bin:/usr/sbin:/sbin:/Library/Apple/usr/bin:/Users/lev/.cargo/bin:/Users/lev/.fig/bin:/Users/lev/.local/bin:/Users/lev/go/bin:/Users/lev/.deno/bin
05:22:35 [INFO] ========== starting zed ==========
05:22:36 [INFO] open paths ["/Users/lev/Projects/zencity/export-service"]
05:22:36 [INFO] set status on client 0: Authenticating
05:22:36 [INFO] set status on client 0: Connecting
05:22:37 [INFO] connected to rpc endpoint https://collab.zed.dev/rpc
05:22:37 [INFO] add connection to peer
05:22:37 [INFO] add_connection;
05:22:37 [INFO] set status to connected 0
05:22:37 [INFO] set status on client 0: Connected { connection_id: ConnectionId(0) }
05:22:39 [INFO] set environment variables from shell:/bin/zsh, path:/Users/lev/.pyenv/shims:/Users/lev/.nvm/versions/node/v12.16.1/bin:/Users/lev/.nvm/versions/node/v12.16.1/bin:/usr/local/bin:/usr/bin:/bin:/usr/sbin:/sbin:/Library/Apple/usr/bin:/Users/lev/.nvm/versions/node/v16.15.0/bin:/Users/lev/.cargo/bin:/Applications/kitty.app/Contents/MacOS:/Users/lev/.fig/bin:/Users/lev/.local/bin:/Users/lev/go/bin:/Users/lev/.deno/bin:/Users/lev/go/bin:/Users/lev/.deno/bin
05:22:51 [ERROR] invalid header
05:22:51 [ERROR] Broken pipe (os error 32)
05:22:51 [ERROR] oneshot canceled
05:22:51 [ERROR] oneshot canceled
05:23:03 [INFO] Editor::page_down
05:23:03 [INFO] Editor::page_down
05:23:03 [INFO] Editor::page_down
05:23:03 [INFO] Editor::page_down
05:23:04 [INFO] Editor::page_up
05:23:04 [INFO] Editor::page_up
05:23:04 [INFO] Editor::page_up
05:23:04 [INFO] Editor::page_up
05:23:04 [INFO] Editor::page_up
05:23:04 [INFO] Editor::page_up
05:23:04 [INFO] Editor::page_up
05:23:04 [INFO] Editor::page_up
05:23:04 [INFO] Editor::page_up
05:23:04 [INFO] Editor::page_up
05:23:04 [INFO] Editor::page_up
05:23:04 [INFO] Editor::page_up
05:23:04 [INFO] Editor::page_up
05:23:04 [INFO] Editor::page_up
05:23:04 [INFO] Editor::page_up
05:23:04 [INFO] Editor::page_up
05:23:04 [INFO] Editor::page_up
05:23:04 [INFO] Editor::page_up
05:23:04 [INFO] Editor::page_up
05:23:04 [INFO] Editor::page_up
05:23:04 [INFO] Editor::page_up
05:23:19 [INFO] Editor::page_down
05:23:20 [INFO] Editor::page_down
05:23:20 [INFO] Editor::page_down
05:23:20 [INFO] Editor::page_down
05:24:56 [ERROR] Unhandled method completionItem/resolve
05:25:01 [ERROR] Unhandled method completionItem/resolve
05:25:03 [ERROR] Unhandled method completionItem/resolve
05:25:12 [ERROR] duplicate field `languages` at line 33 column 14
05:25:23 [ERROR] oneshot canceled
05:25:27 [INFO] ========== starting zed ==========
05:25:27 [ERROR] duplicate field `languages` at line 33 column 14
05:25:28 [INFO] open paths ["/Users/lev/Projects/zencity/export-service"]
05:25:28 [INFO] set status on client 0: Authenticating
05:25:28 [INFO] set status on client 0: Connecting
05:25:28 [INFO] connected to rpc endpoint https://collab.zed.dev/rpc
05:25:29 [INFO] add connection to peer
05:25:29 [INFO] add_connection;
05:25:29 [INFO] set status to connected 0
05:25:29 [INFO] set status on client 0: Connected { connection_id: ConnectionId(0) }
05:25:30 [INFO] set environment variables from shell:/bin/zsh, path:/Users/lev/.pyenv/shims:/Users/lev/.nvm/versions/node/v12.16.1/bin:/Users/lev/.nvm/versions/node/v12.16.1/bin:/usr/local/bin:/usr/bin:/bin:/usr/sbin:/sbin:/Library/Apple/usr/bin:/Users/lev/.nvm/versions/node/v16.15.0/bin:/Users/lev/.cargo/bin:/Applications/kitty.app/Contents/MacOS:/Users/lev/.fig/bin:/Users/lev/.local/bin:/Users/lev/go/bin:/Users/lev/.deno/bin:/Users/lev/go/bin:/Users/lev/.deno/bin
05:25:33 [ERROR] invalid header
05:25:33 [ERROR] oneshot canceled
05:25:33 [ERROR] Broken pipe (os error 32)
05:25:33 [ERROR] oneshot canceled
05:25:37 [INFO] Editor::page_down
05:25:37 [INFO] Editor::page_down
05:25:38 [INFO] Editor::page_down
05:25:38 [INFO] Editor::page_down
05:25:38 [INFO] Editor::page_down
05:25:38 [INFO] Editor::page_down
05:25:38 [INFO] Editor::page_down
05:25:38 [INFO] Editor::page_down
05:25:41 [ERROR] duplicate field `languages` at line 33 column 16
05:25:44 [ERROR] duplicate field `languages` at line 32 column 16
05:26:13 [ERROR] Unhandled method completionItem/resolve
05:26:20 [ERROR] duplicate field `languages` at line 32 column 16
05:26:33 [ERROR] oneshot canceled
05:26:35 [INFO] ========== starting zed ==========
05:26:35 [ERROR] duplicate field `languages` at line 32 column 16
05:26:35 [INFO] open paths ["/Users/lev/Projects/zencity/export-service"]
05:26:35 [INFO] set status on client 0: Authenticating
05:26:35 [INFO] set status on client 0: Connecting
05:26:36 [INFO] connected to rpc endpoint https://collab.zed.dev/rpc
05:26:36 [INFO] add connection to peer
05:26:36 [INFO] add_connection;
05:26:36 [INFO] set status to connected 0
05:26:36 [INFO] set status on client 0: Connected { connection_id: ConnectionId(0) }
05:26:37 [INFO] set environment variables from shell:/bin/zsh, path:/Users/lev/.pyenv/shims:/Users/lev/.nvm/versions/node/v12.16.1/bin:/Users/lev/.nvm/versions/node/v12.16.1/bin:/usr/local/bin:/usr/bin:/bin:/usr/sbin:/sbin:/Library/Apple/usr/bin:/Users/lev/.nvm/versions/node/v16.15.0/bin:/Users/lev/.cargo/bin:/Applications/kitty.app/Contents/MacOS:/Users/lev/.fig/bin:/Users/lev/.local/bin:/Users/lev/go/bin:/Users/lev/.deno/bin:/Users/lev/go/bin:/Users/lev/.deno/bin
05:27:11 [ERROR] no worktree found for diagnostics
05:27:14 [ERROR] invalid header
05:27:14 [ERROR] oneshot canceled
05:27:14 [ERROR] oneshot canceled
05:27:14 [ERROR] Broken pipe (os error 32)
05:27:25 [INFO] Editor::page_down
05:27:25 [INFO] Editor::page_down
05:27:25 [INFO] Editor::page_down
05:27:36 [ERROR] Unhandled method completionItem/resolve
05:27:40 [ERROR] Unhandled method completionItem/resolve
05:27:41 [ERROR] duplicate field `languages` at line 32 column 16
05:27:44 [ERROR] duplicate field `languages` at line 32 column 16
05:27:59 [ERROR] Unhandled method completionItem/resolve
05:28:01 [ERROR] duplicate field `languages` at line 43 column 16
05:28:55 [ERROR] Unhandled method completionItem/resolve
05:32:58 [ERROR] no worktree found for diagnostics
05:32:58 [ERROR] duplicate field `languages` at line 44 column 16
05:33:41 [INFO] Editor::page_down
05:33:41 [INFO] Editor::page_down
05:33:41 [INFO] Editor::page_down
05:33:42 [INFO] Editor::page_down
05:33:43 [INFO] Editor::page_down
05:33:43 [INFO] Editor::page_down
05:33:44 [INFO] Editor::page_down
05:33:44 [INFO] Editor::page_down
05:35:09 [ERROR] oneshot canceled
05:35:09 [ERROR] oneshot canceled
05:35:09 [ERROR] oneshot canceled
05:35:10 [INFO] ========== starting zed ==========
05:35:10 [ERROR] duplicate field `languages` at line 44 column 16
05:35:11 [INFO] set status on client 0: Authenticating
05:35:11 [INFO] set status on client 0: Connecting
05:35:11 [INFO] connected to rpc endpoint https://collab.zed.dev/rpc
05:35:13 [INFO] add connection to peer
05:35:13 [INFO] add_connection;
05:35:13 [INFO] set status to connected 0
05:35:13 [INFO] set status on client 0: Connected { connection_id: ConnectionId(0) }
05:35:13 [INFO] set environment variables from shell:/bin/zsh, path:/Users/lev/.pyenv/shims:/Users/lev/.nvm/versions/node/v12.16.1/bin:/usr/local/bin:/usr/bin:/bin:/usr/sbin:/sbin:/Library/Apple/usr/bin:/Users/lev/.cargo/bin:/Users/lev/.fig/bin:/Users/lev/.local/bin:/Users/lev/go/bin:/Users/lev/.deno/bin
05:35:16 [INFO] open paths ["/Users/lev/Projects/zencity"]
05:35:21 [INFO] ========== starting zed ==========
05:35:21 [ERROR] duplicate field `languages` at line 44 column 16
05:35:22 [INFO] set status on client 0: Authenticating
05:35:22 [INFO] set status on client 0: Connecting
05:35:22 [INFO] connected to rpc endpoint https://collab.zed.dev/rpc
05:35:24 [INFO] add connection to peer
05:35:24 [INFO] add_connection;
05:35:24 [INFO] set status to connected 0
05:35:24 [INFO] set status on client 0: Connected { connection_id: ConnectionId(0) }
05:35:24 [INFO] set environment variables from shell:/bin/zsh, path:/Users/lev/.pyenv/shims:/Users/lev/.nvm/versions/node/v12.16.1/bin:/usr/local/bin:/usr/bin:/bin:/usr/sbin:/sbin:/Library/Apple/usr/bin:/Users/lev/.cargo/bin:/Users/lev/.fig/bin:/Users/lev/.local/bin:/Users/lev/go/bin:/Users/lev/.deno/bin
05:35:27 [INFO] open paths ["/Users/lev/Projects/zencity/export-service"]
05:35:38 [ERROR] invalid header
05:35:38 [ERROR] oneshot canceled
05:35:38 [ERROR] Broken pipe (os error 32)
05:35:38 [ERROR] oneshot canceled
05:40:54 [INFO] open paths ["/Users/lev/Projects/personal/inertiajs-adonisjs"]
05:41:01 [ERROR] invalid header
05:41:01 [ERROR] oneshot canceled
05:41:01 [ERROR] Broken pipe (os error 32)
05:41:01 [ERROR] oneshot canceled
05:43:06 [INFO] Editor::page_up
05:43:06 [INFO] Editor::page_up
05:43:06 [INFO] Editor::page_up
05:43:06 [INFO] Editor::page_up
05:43:06 [INFO] Editor::page_up
05:43:06 [INFO] Editor::page_up
05:43:06 [INFO] Editor::page_up
05:43:08 [INFO] Editor::page_up
05:43:08 [INFO] Editor::page_up
05:43:08 [INFO] Editor::page_up
set environment variables from shell bin zsh path users lev pyenv shims users lev nvm versions node bin users lev nvm versions node bin usr local bin usr bin bin usr sbin sbin library apple usr bin users lev nvm versions node bin users lev cargo bin applications kitty app contents macos users lev fig bin users lev local bin users lev go bin users lev deno bin users lev go bin users lev deno bin no worktree found for diagnostics invalid header oneshot canceled oneshot canceled broken pipe os error editor page down editor page down editor page down unhandled method completionitem resolve unhandled method completionitem resolve duplicate field languages at line column duplicate field languages at line column unhandled method completionitem resolve duplicate field languages at line column unhandled method completionitem resolve no worktree found for diagnostics duplicate field languages at line column editor page down editor page down editor page down editor page down editor page down editor page down editor page down editor page down oneshot canceled oneshot canceled oneshot canceled starting zed duplicate field languages at line column set status on client authenticating set status on client connecting connected to rpc endpoint add connection to peer add connection set status to connected set status on client connected connection id connectionid set environment variables from shell bin zsh path users lev pyenv shims users lev nvm versions node bin usr local bin usr bin bin usr sbin sbin library apple usr bin users lev cargo bin users lev fig bin users lev local bin users lev go bin users lev deno bin open paths starting zed duplicate field languages at line column set status on client authenticating set status on client connecting connected to rpc endpoint add connection to peer add connection set status to connected set status on client connected connection id connectionid set environment variables from shell bin zsh path users lev pyenv shims users lev nvm versions 
node bin usr local bin usr bin bin usr sbin sbin library apple usr bin users lev cargo bin users lev fig bin users lev local bin users lev go bin users lev deno bin open paths invalid header oneshot canceled broken pipe os error oneshot canceled open paths invalid header oneshot canceled broken pipe os error oneshot canceled editor page up editor page up editor page up editor page up editor page up editor page up editor page up editor page up editor page up editor page up | 0 |
240,994 | 7,807,932,313 | IssuesEvent | 2018-06-11 18:30:07 | JukkaL/mypyc | https://api.github.com/repos/JukkaL/mypyc | closed | Combine Integer, Float, Unicode Literals Dictionaries | priority-0-high | Right now, integer, float, and string literals are represented in multiple dictionaries (i.e. `integer_literals`, `float_literals`, `unicode_literals`). This code can be cleaned up by combining them into a single map. | 1.0 | Combine Integer, Float, Unicode Literals Dictionaries - Right now, integer, float, and string literals are represented in multiple dictionaries (i.e. `integer_literals`, `float_literals`, `unicode_literals`). This code can be cleaned up by combining them into a single map. | non_infrastructure | combine integer float unicode literals dictionaries right now integer float and string literals are represented in multiple dictionaries i e integer literals float literals unicode literals this code can be cleaned up by combining them into a single map | 0 |
70,626 | 18,242,958,336 | IssuesEvent | 2021-10-01 14:53:20 | cockroachdb/cockroach | https://api.github.com/repos/cockroachdb/cockroach | opened | bazel: systematically avoid "cannot open autom4te.cache/requests: Read-only file system" issue | C-enhancement A-build-system T-dev-inf | In the Bazel build, compiling the C dependencies can often fail like this:
```
autom4te: cannot open autom4te.cache/requests: Read-only file system
autom4te: cannot open autom4te.cache/requests: Read-only file system
autoreconf: /usr/bin/autoconf failed with exit status: 1
```
i.e. `autmo4te` is trying to use its own cache but Bazel sandboxing is breaking it. I haven't seen this happen on macOS, but it routinely happens on Linux. Unfortunately there isn't an environment variable we can set to unilaterally turn off the cache, and my experimentation with trying to pass the `--no-cache` argument down to `autom4te` from Bazel was not productive. In CI we have [a wrapper script](https://github.com/cockroachdb/cockroach/blob/master/build/bazelbuilder/autom4te) that passes the `--no-cache` argument down to the real `autom4te`, but it's not realistic to ask everyone to put this shim script into their environment.
If we can't figure out how to address it inside of the actual Bazel build, consider adding something to `dev`.
ref:
* https://www.gnu.org/software/autoconf/manual/autoconf-2.67/html_node/autom4te-Invocation.html
* https://www.gnu.org/software/autoconf/manual/autoconf-2.69/html_node/Autom4te-Cache.html
Epic CRDB-8036 | 1.0 | bazel: systematically avoid "cannot open autom4te.cache/requests: Read-only file system" issue - In the Bazel build, compiling the C dependencies can often fail like this:
```
autom4te: cannot open autom4te.cache/requests: Read-only file system
autom4te: cannot open autom4te.cache/requests: Read-only file system
autoreconf: /usr/bin/autoconf failed with exit status: 1
```
i.e. `autmo4te` is trying to use its own cache but Bazel sandboxing is breaking it. I haven't seen this happen on macOS, but it routinely happens on Linux. Unfortunately there isn't an environment variable we can set to unilaterally turn off the cache, and my experimentation with trying to pass the `--no-cache` argument down to `autom4te` from Bazel was not productive. In CI we have [a wrapper script](https://github.com/cockroachdb/cockroach/blob/master/build/bazelbuilder/autom4te) that passes the `--no-cache` argument down to the real `autom4te`, but it's not realistic to ask everyone to put this shim script into their environment.
If we can't figure out how to address it inside of the actual Bazel build, consider adding something to `dev`.
ref:
* https://www.gnu.org/software/autoconf/manual/autoconf-2.67/html_node/autom4te-Invocation.html
* https://www.gnu.org/software/autoconf/manual/autoconf-2.69/html_node/Autom4te-Cache.html
Epic CRDB-8036 | non_infrastructure | bazel systematically avoid cannot open cache requests read only file system issue in the bazel build compiling the c dependencies can often fail like this cannot open cache requests read only file system cannot open cache requests read only file system autoreconf usr bin autoconf failed with exit status i e is trying to use its own cache but bazel sandboxing is breaking it i haven t seen this happen on macos but it routinely happens on linux unfortunately there isn t an environment variable we can set to unilaterally turn off the cache and my experimentation with trying to pass the no cache argument down to from bazel was not productive in ci we have that passes the no cache argument down to the real but it s not realistic to ask everyone to put this shim script into their environment if we can t figure out how to address it inside of the actual bazel build consider adding something to dev ref epic crdb | 0 |
8,467 | 3,184,171,883 | IssuesEvent | 2015-09-27 04:02:24 | BumblebeeBat/FlyingFox | https://api.github.com/repos/BumblebeeBat/FlyingFox | closed | Oracles | discussion documentation needs more info | We need a new transaction type that allows for the creation of oracles.
The first oracle should be either a single address, or a multisig of several addresses. The participants in the oracle are the people who know the private keys for the addresses that make up the oracle.
If a decision is given to the oracle, then the oracle can profitably answer the decision.
Eventually a mechanism will be added to the oracle to encourage the participants to be honest. | 1.0 | Oracles - We need a new transaction type that allows for the creation of oracles.
The first oracle should be either a single address, or a multisig of several addresses. The participants in the oracle are the people who know the private keys for the addresses that make up the oracle.
If a decision is given to the oracle, then the oracle can profitably answer the decision.
Eventually a mechanism will be added to the oracle to encourage the participants to be honest. | non_infrastructure | oracles we need a new transaction type that allows for the creation of oracles the first oracle should be either a single address or a multisig of several addresses the participants in the oracle are the people who know the private keys for the addresses that make up the oracle if a decision is given to the oracle then the oracle can profitably answer the decision eventually a mechanism will be added to the oracle to encourage the participants to be honest | 0 |
4,784 | 2,610,155,965 | IssuesEvent | 2015-02-26 18:49:34 | chrsmith/republic-at-war | https://api.github.com/repos/chrsmith/republic-at-war | closed | Dreadnaught | auto-migrated Priority-Medium Type-Defect | ```
put dread at tech 2
```
-----
Original issue reported on code.google.com by `z3r0...@gmail.com` on 30 Jan 2011 at 2:25 | 1.0 | Dreadnaught - ```
put dread at tech 2
```
-----
Original issue reported on code.google.com by `z3r0...@gmail.com` on 30 Jan 2011 at 2:25 | non_infrastructure | dreadnaught put dread at tech original issue reported on code google com by gmail com on jan at | 0 |
100,791 | 16,490,416,614 | IssuesEvent | 2021-05-25 02:19:05 | hiucimon/PF2Client | https://api.github.com/repos/hiucimon/PF2Client | opened | CVE-2021-31597 (High) detected in xmlhttprequest-ssl-1.5.5.tgz | security vulnerability | ## CVE-2021-31597 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>xmlhttprequest-ssl-1.5.5.tgz</b></p></summary>
<p>XMLHttpRequest for Node</p>
<p>Library home page: <a href="https://registry.npmjs.org/xmlhttprequest-ssl/-/xmlhttprequest-ssl-1.5.5.tgz">https://registry.npmjs.org/xmlhttprequest-ssl/-/xmlhttprequest-ssl-1.5.5.tgz</a></p>
<p>Path to dependency file: /PF2Client/package.json</p>
<p>Path to vulnerable library: PF2Client/node_modules/xmlhttprequest-ssl/package.json</p>
<p>
Dependency Hierarchy:
- karma-3.0.0.tgz (Root Library)
- socket.io-2.1.1.tgz
- socket.io-client-2.1.1.tgz
- engine.io-client-3.2.1.tgz
- :x: **xmlhttprequest-ssl-1.5.5.tgz** (Vulnerable Library)
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
The xmlhttprequest-ssl package before 1.6.1 for Node.js disables SSL certificate validation by default, because rejectUnauthorized (when the property exists but is undefined) is considered to be false within the https.request function of Node.js. In other words, no certificate is ever rejected.
<p>Publish Date: 2021-04-23
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-31597>CVE-2021-31597</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.4</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: Low
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-31597">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-31597</a></p>
<p>Release Date: 2021-04-23</p>
<p>Fix Resolution: xmlhttprequest-ssl - 1.6.1</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | CVE-2021-31597 (High) detected in xmlhttprequest-ssl-1.5.5.tgz - ## CVE-2021-31597 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>xmlhttprequest-ssl-1.5.5.tgz</b></p></summary>
<p>XMLHttpRequest for Node</p>
<p>Library home page: <a href="https://registry.npmjs.org/xmlhttprequest-ssl/-/xmlhttprequest-ssl-1.5.5.tgz">https://registry.npmjs.org/xmlhttprequest-ssl/-/xmlhttprequest-ssl-1.5.5.tgz</a></p>
<p>Path to dependency file: /PF2Client/package.json</p>
<p>Path to vulnerable library: PF2Client/node_modules/xmlhttprequest-ssl/package.json</p>
<p>
Dependency Hierarchy:
- karma-3.0.0.tgz (Root Library)
- socket.io-2.1.1.tgz
- socket.io-client-2.1.1.tgz
- engine.io-client-3.2.1.tgz
- :x: **xmlhttprequest-ssl-1.5.5.tgz** (Vulnerable Library)
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
The xmlhttprequest-ssl package before 1.6.1 for Node.js disables SSL certificate validation by default, because rejectUnauthorized (when the property exists but is undefined) is considered to be false within the https.request function of Node.js. In other words, no certificate is ever rejected.
<p>Publish Date: 2021-04-23
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-31597>CVE-2021-31597</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.4</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: Low
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-31597">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-31597</a></p>
<p>Release Date: 2021-04-23</p>
<p>Fix Resolution: xmlhttprequest-ssl - 1.6.1</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | non_infrastructure | cve high detected in xmlhttprequest ssl tgz cve high severity vulnerability vulnerable library xmlhttprequest ssl tgz xmlhttprequest for node library home page a href path to dependency file package json path to vulnerable library node modules xmlhttprequest ssl package json dependency hierarchy karma tgz root library socket io tgz socket io client tgz engine io client tgz x xmlhttprequest ssl tgz vulnerable library vulnerability details the xmlhttprequest ssl package before for node js disables ssl certificate validation by default because rejectunauthorized when the property exists but is undefined is considered to be false within the https request function of node js in other words no certificate is ever rejected publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact low for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution xmlhttprequest ssl step up your open source security game with whitesource | 0 |
313,609 | 26,939,750,932 | IssuesEvent | 2023-02-08 00:35:01 | devssa/onde-codar-em-salvador | https://api.github.com/repos/devssa/onde-codar-em-salvador | closed | [REACT NATIVE] [SENIOR] [REMOTO] [TAMBEM PCD] Desenvolvedor(a) React Native Sênior na [AGENDA EDU] | CSS3 HTML SENIOR TYPESCRIPT REACT NATIVE REDUX MOBILE REMOTO TESTE FUNCIONAL FASTLANE TESTES UNITARIOS HELP WANTED VAGA PARA PCD TAMBÉM styled compnents REDUX/SAGA STYLED SYSTEM Stale | <!--
==================================================
POR FAVOR, SÓ POSTE SE A VAGA FOR PARA SALVADOR E CIDADES VIZINHAS!
Use: "Desenvolvedor Front-end" ao invés de
"Front-End Developer" \o/
Exemplo: `[JAVASCRIPT] [MYSQL] [NODE.JS] Desenvolvedor Front-End na [NOME DA EMPRESA]`
==================================================
-->
## Descrição da vaga
- Em 2021, nossa missão de simplificar as jornada educacionais está ainda mais desafiadora! Procuramos pessoas que amam aprender e ensinar, que tenham vontade de transformar a relação entre escolas e famílias e, claro, que amem educação! Estamos crescendo sempre e precisamos de pessoas com propósito, brilho nos olhos e muita mão na massa para transformar a Agenda Edu em uma referência como hub de soluções educacionais da América Latina. Vem ser #AgendaLover e fazer parte do nosso time! 😍
### RESPONSABILIDADES E ATRIBUIÇÕES
- Trabalhar em conjunto de um time diverso de front-ends, back-ends e designers e QAs para entregar uma experiência mobile incrível para os usuários do nosso aplicativo mobile.
- Entregar código limpo, legível, usando boas práticas de desenvolvimento e performance e coberto com testes automatizados, sempre atentando para a qualidade.
- Avaliar, discutir e definir com o time melhorias de arquitetura, processos e desenvolvimento, participar do processo na revisão de pull requests, manter e melhorar o fluxo de entrega
## Local
- Remoto
## Benefícios
- Informações diretamente com o responsável/ recrutador da vaga
## Requisitos
**Obrigatórios:**
- Conhecimento na stack usada na aplicação React Native, Redux, Redux Saga, Typescript, HTML, CSS3, frameworks de testes unitários e funcionais(Jest).
- Bibliotecas de estilização como Styled Components e Styled System
- Experiência com delivery de aplicações mobile com Fastlane
- Gerenciamentos de aplicações usando as lojas da google play e app store.
**Diferenciais:**
- Conhecimento em integração contínua (Github Actions, CircleCI, Jenkins...);
- Conhecimento sobre design systems e como construí-los
## Contratação
- a combinar
## Nossa empresa
- Queremos ajudar as escolas a darem o próximo passo frente ao novo, engajar os alunos e responsáveis na rotina escolar, indo além das funcionalidades de uma agenda digital para transformar e simplificar jornadas educacionais. Para isso, chegou a hora de começarmos um movimento coletivo sério, inteligente, corajoso e sem concessões, que nos livre da herança educacional que nos foi imposta e não nos pertence. Essa revolução não virá de fora. Ela já está surgindo, aos poucos, silenciosa, no chão das nossas escolas. No chão de empresas apaixonadas por educação e na atuação de todas as pessoas que trabalham conectadas e movidas por essa paixão. Bem vindos à jornada educacional!
## Como se candidatar
- [Clique aqui para se candidatar](https://agendaedu.gupy.io/jobs/640691?jobBoardSource=gupy_public_page)
| 2.0 | [REACT NATIVE] [SENIOR] [REMOTO] [TAMBEM PCD] Desenvolvedor(a) React Native Sênior na [AGENDA EDU] - <!--
==================================================
POR FAVOR, SÓ POSTE SE A VAGA FOR PARA SALVADOR E CIDADES VIZINHAS!
Use: "Desenvolvedor Front-end" ao invés de
"Front-End Developer" \o/
Exemplo: `[JAVASCRIPT] [MYSQL] [NODE.JS] Desenvolvedor Front-End na [NOME DA EMPRESA]`
==================================================
-->
## Descrição da vaga
- Em 2021, nossa missão de simplificar as jornada educacionais está ainda mais desafiadora! Procuramos pessoas que amam aprender e ensinar, que tenham vontade de transformar a relação entre escolas e famílias e, claro, que amem educação! Estamos crescendo sempre e precisamos de pessoas com propósito, brilho nos olhos e muita mão na massa para transformar a Agenda Edu em uma referência como hub de soluções educacionais da América Latina. Vem ser #AgendaLover e fazer parte do nosso time! 😍
### RESPONSABILIDADES E ATRIBUIÇÕES
- Trabalhar em conjunto de um time diverso de front-ends, back-ends e designers e QAs para entregar uma experiência mobile incrível para os usuários do nosso aplicativo mobile.
- Entregar código limpo, legível, usando boas práticas de desenvolvimento e performance e coberto com testes automatizados, sempre atentando para a qualidade.
- Avaliar, discutir e definir com o time melhorias de arquitetura, processos e desenvolvimento, participar do processo na revisão de pull requests, manter e melhorar o fluxo de entrega
## Local
- Remoto
## Benefícios
- Informações diretamente com o responsável/ recrutador da vaga
## Requisitos
**Obrigatórios:**
- Conhecimento na stack usada na aplicação React Native, Redux, Redux Saga, Typescript, HTML, CSS3, frameworks de testes unitários e funcionais(Jest).
- Bibliotecas de estilização como Styled Components e Styled System
- Experiência com delivery de aplicações mobile com Fastlane
- Gerenciamentos de aplicações usando as lojas da google play e app store.
**Diferenciais:**
- Conhecimento em integração contínua (Github Actions, CircleCI, Jenkins...);
- Conhecimento sobre design systems e como construí-los
## Contratação
- a combinar
## Nossa empresa
- Queremos ajudar as escolas a darem o próximo passo frente ao novo, engajar os alunos e responsáveis na rotina escolar, indo além das funcionalidades de uma agenda digital para transformar e simplificar jornadas educacionais. Para isso, chegou a hora de começarmos um movimento coletivo sério, inteligente, corajoso e sem concessões, que nos livre da herança educacional que nos foi imposta e não nos pertence. Essa revolução não virá de fora. Ela já está surgindo, aos poucos, silenciosa, no chão das nossas escolas. No chão de empresas apaixonadas por educação e na atuação de todas as pessoas que trabalham conectadas e movidas por essa paixão. Bem vindos à jornada educacional!
## Como se candidatar
- [Clique aqui para se candidatar](https://agendaedu.gupy.io/jobs/640691?jobBoardSource=gupy_public_page)
| non_infrastructure | desenvolvedor a react native sênior na por favor só poste se a vaga for para salvador e cidades vizinhas use desenvolvedor front end ao invés de front end developer o exemplo desenvolvedor front end na descrição da vaga em nossa missão de simplificar as jornada educacionais está ainda mais desafiadora procuramos pessoas que amam aprender e ensinar que tenham vontade de transformar a relação entre escolas e famílias e claro que amem educação estamos crescendo sempre e precisamos de pessoas com propósito brilho nos olhos e muita mão na massa para transformar a agenda edu em uma referência como hub de soluções educacionais da américa latina vem ser agendalover e fazer parte do nosso time 😍 responsabilidades e atribuições trabalhar em conjunto de um time diverso de front ends back ends e designers e qas para entregar uma experiência mobile incrível para os usuários do nosso aplicativo mobile entregar código limpo legível usando boas práticas de desenvolvimento e performance e coberto com testes automatizados sempre atentando para a qualidade avaliar discutir e definir com o time melhorias de arquitetura processos e desenvolvimento participar do processo na revisão de pull requests manter e melhorar o fluxo de entrega local remoto benefícios informações diretamente com o responsável recrutador da vaga requisitos obrigatórios conhecimento na stack usada na aplicação react native redux redux saga typescript html frameworks de testes unitários e funcionais jest bibliotecas de estilização como styled components e styled system experiência com delivery de aplicações mobile com fastlane gerenciamentos de aplicações usando as lojas da google play e app store diferenciais conhecimento em integração contínua github actions circleci jenkins conhecimento sobre design systems e como construí los contratação a combinar nossa empresa queremos ajudar as escolas a darem o próximo passo frente ao novo engajar os alunos e responsáveis na rotina escolar indo além 
das funcionalidades de uma agenda digital para transformar e simplificar jornadas educacionais para isso chegou a hora de começarmos um movimento coletivo sério inteligente corajoso e sem concessões que nos livre da herança educacional que nos foi imposta e não nos pertence essa revolução não virá de fora ela já está surgindo aos poucos silenciosa no chão das nossas escolas no chão de empresas apaixonadas por educação e na atuação de todas as pessoas que trabalham conectadas e movidas por essa paixão bem vindos à jornada educacional como se candidatar | 0 |
1,551 | 3,266,387,789 | IssuesEvent | 2015-10-22 20:27:37 | twosigma/beaker-notebook | https://api.github.com/repos/twosigma/beaker-notebook | opened | enforce JSCS code style in the build | Core Client Infrastructure | We have jscs config with a defined Google code style but we are not using it. Running the code checker currently outputs 225 errors. We should run a code checker on CI and fail the build on error. | 1.0 | enforce JSCS code style in the build - We have jscs config with a defined Google code style but we are not using it. Running the code checker currently outputs 225 errors. We should run a code checker on CI and fail the build on error. | infrastructure | enforce jscs code style in the build we have jscs config with a defined google code style but we are not using it running the code checker currently outputs errors we should run a code checker on ci and fail the build on error | 1 |
3,617 | 4,445,329,455 | IssuesEvent | 2016-08-20 01:05:43 | dmitrinesterenko/blog | https://api.github.com/repos/dmitrinesterenko/blog | closed | Deploy a route that enables load testing | infrastructure | Loader.io will want a route that identifies this app as belonging to me deployed to a public URL.
Place this verification token in a file:
loaderio-98e1ada80f68207cec7d4745237732ea
Or download the file you need.
2
Upload the file to your server so it is accessible at one of the following URLs:
http://dmitri.co/loaderio-98e1ada80f68207cec7d4745237732ea/
http://dmitri.co/loaderio-98e1ada80f68207cec7d4745237732ea.html
http://dmitri.co/loaderio-98e1ada80f68207cec7d4745237732ea.txt
Verify
| 1.0 | Deploy a route that enables load testing - Loader.io will want a route that identifies this app as belonging to me deployed to a public URL.
Place this verification token in a file:
loaderio-98e1ada80f68207cec7d4745237732ea
Or download the file you need.
2
Upload the file to your server so it is accessible at one of the following URLs:
http://dmitri.co/loaderio-98e1ada80f68207cec7d4745237732ea/
http://dmitri.co/loaderio-98e1ada80f68207cec7d4745237732ea.html
http://dmitri.co/loaderio-98e1ada80f68207cec7d4745237732ea.txt
Verify
| infrastructure | deploy a route that enables load testing loader io will want a route that identifies this app as belonging to me deployed to a public url place this verification token in a file loaderio or download the file you need upload the file to your server so it is accessible at one of the following urls verify | 1 |
36,953 | 8,198,670,610 | IssuesEvent | 2018-08-31 17:14:53 | google/googletest | https://api.github.com/repos/google/googletest | closed | Display test fixture and name of diasabled tests | Priority-Medium Type-Defect auto-migrated | ```
When tests are disabled (by prepending 'DISABLED_' to the test name), gtest
displays a warning at the end of its output, i.e.
YOU HAVE 2 DISABLED TESTS
It would be useful if this message displayed the test fixture and test name of
each disabled test such that they can be identified easily, e.g.
YOU HAVE 2 DISABLED TESTS, listed below:
[ DISABLED ] TestFixtureA.someTest
[ DISABLED ] TestFixtureB.someOtherTest
```
Original issue reported on code.google.com by `jdc....@gmail.com` on 24 Jan 2014 at 3:48
| 1.0 | Display test fixture and name of disabled tests - ```
When tests are disabled (by prepending 'DISABLED_' to the test name), gtest
displays a warning at the end of its output, i.e.
YOU HAVE 2 DISABLED TESTS
It would be useful if this message displayed the test fixture and test name of
each disabled test such that they can be identified easily, e.g.
YOU HAVE 2 DISABLED TESTS, listed below:
[ DISABLED ] TestFixtureA.someTest
[ DISABLED ] TestFixtureB.someOtherTest
```
Original issue reported on code.google.com by `jdc....@gmail.com` on 24 Jan 2014 at 3:48
| non_infrastructure | display test fixture and name of disabled tests when tests are disabled by prepending disabled to the test name gtest displays a warning at the end of its output i e you have disabled tests it would be useful if this message displayed the test fixture and test name of each disabled test such that they can be identified easily e g you have disabled tests listed below testfixturea sometest testfixtureb someothertest original issue reported on code google com by jdc gmail com on jan at | 0 |
18,518 | 13,045,946,622 | IssuesEvent | 2020-07-29 08:11:30 | gnosis/safe-ios | https://api.github.com/repos/gnosis/safe-ios | closed | Remove non-inclusive language | infrastructure | - "master" branch -> "main" branch
- "masterCopy" -> "implementation"
Not renaming user-facing strings
# How to test
- https://github.com/gnosis/safe-ios/ shows "main" as a branch name
- no user-facing changes to test | 1.0 | Remove non-inclusive language - - "master" branch -> "main" branch
- "masterCopy" -> "implementation"
Not renaming user-facing strings
# How to test
- https://github.com/gnosis/safe-ios/ shows "main" as a branch name
- no user-facing changes to test | infrastructure | remove non inclusive language master branch main branch mastercopy implementation not renaming user facing strings how to test shows main as a branch name no user facing changes to test | 1 |
26,085 | 19,650,735,435 | IssuesEvent | 2022-01-10 06:38:16 | Altinn/altinn-studio | https://api.github.com/repos/Altinn/altinn-studio | reopened | Limiting external network access for apps | solution/app-backend area/api-use kind/analysis ops/infrastructure solution/apps org/skd/sirius | ## Description
It will be supported to create client code in an app that could trigger calls against external APIs. We need to figure out how this call can go out of the cloud infrastructure and how credentials (secrets) can be assigned to that request.
## Technical considerations
- How to handle credentials safely (handled in #1127)
- How can organizations manage outgoing firewall rules from their own Kubernetes cluster
## Acceptance criteria
- Firewalls opening is controlled
## Tasks
- [ ] Analyze how the network setup can be set up to prevent general access to external API
- [ ] Analyze how the firewall can be controlled safely | 1.0 | Limiting external network access for apps - ## Description
It will be supported to create client code in an app that could trigger calls against external APIs. We need to figure out how this call can go out of the cloud infrastructure and how credentials (secrets) can be assigned to that request.
## Technical considerations
- How to handle credentials safely (handled in #1127)
- How can organizations manage outgoing firewall rules from their own Kubernetes cluster
## Acceptance criteria
- Firewalls opening is controlled
## Tasks
- [ ] Analyze how the network setup can be set up to prevent general access to external API
- [ ] Analyze how the firewall can be controlled safely | infrastructure | limiting external network access for apps description it will be supported to create client code in an app that could trigger calls against external apis we need to figure out how this call can go out of the cloud infrastructure and how credentials secrets can be assigned to that request technical considerations how to handle credentials safely handled in how can organizations manage outgoing firewall rules from their own kubernetes cluster acceptance criteria firewalls opening is controlled tasks analyze how the network setup can be set up to prevent general access to external api analyze how the firewall can be controlled safely | 1 |
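The lowercased field at the end of each record appears to be derived from the title and body by lowercasing, splitting on runs of non-alphanumeric characters, and keeping only purely alphabetic tokens (which is why digits such as issue numbers vanish). A minimal sketch of such a cleaning step, written as an assumed reconstruction of the preprocessing rather than code taken from it:

```python
import re


def clean(text: str) -> str:
    """Lowercase, split on non-alphanumeric runs, keep alphabetic tokens.

    Assumed reconstruction of how the final lowercased field of each
    record could be produced from the raw title and body; the actual
    pipeline that built this dataset is not shown here.
    """
    tokens = re.split(r"[^a-z0-9]+", text.lower())
    # str.isalpha() drops numeric tokens and the empty strings that
    # re.split leaves at the boundaries.
    return " ".join(t for t in tokens if t.isalpha())
```

For example, `clean("handled in #1127")` yields `"handled in"`, matching the pattern visible in the records above.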
4,546 | 3,871,724,942 | IssuesEvent | 2016-04-11 11:02:52 | lionheart/openradar-mirror | https://api.github.com/repos/lionheart/openradar-mirror | opened | 22671818: Apple Music - "Add to a Playlist" has no option to create a new playlist if there are existing playlist | classification:ui/usability reproducible:always status:open | #### Description
Summary:
When using the "Add to a Playlist..." function in the Apple Music app, it presents a modal showing the list of existing playlists for selection. It does not however provide any way to create a new playlist unless there are no existing playlists at all.
This means that if you decide to add some music to a playlist, activate the "Add to a Playlist" function, but then decide that none of the existing playlists are suitable, and you would like to create a new playlist to add the music to, you have to first cancel adding to a playlist, navigate to My Music, then to Playlists, and then choose "New" to create a new playlist. Then to add the music that you wanted to add originally, you either can choose to add it from inside the playlist (requiring that you search for the music again anyway), or navigate back to the other part of Apple Music that you were in before when you originally decided that you wanted to add it to a playlist, and then choose "Add to a Playlist" again, where you could then select your newly-created playlist.
This is an overly-complicated workflow.
Steps to Reproduce:
1. Choose "Add to Playlist..." from a context menu activated by the '...' button available throughout the Apple Music user interface.
Expected Results:
An option to create a new playlist is available from this modal, even when one or more playlists already exist.
Actual Results:
No option to create a new playlist from this modal view is available when one or more playlists already exist (option is only available if no playlists already exist)
Version:
iOS 9.0 [13A341]
Notes:
Configuration:
iPhone 5S 64GB A1530, WiFi or cellular.
-
Product Version: 9.0 (13A341)
Created: 2015-09-12T01:46:17.970620
Originated: 2015-12-09T00:00:00
Open Radar Link: http://www.openradar.me/22671818 | True | 22671818: Apple Music - "Add to a Playlist" has no option to create a new playlist if there are existing playlist - #### Description
Summary:
When using the "Add to a Playlist..." function in the Apple Music app, it presents a modal showing the list of existing playlists for selection. It does not however provide any way to create a new playlist unless there are no existing playlists at all.
This means that if you decide to add some music to a playlist, activate the "Add to a Playlist" function, but then decide that none of the existing playlists are suitable, and you would like to create a new playlist to add the music to, you have to first cancel adding to a playlist, navigate to My Music, then to Playlists, and then choose "New" to create a new playlist. Then to add the music that you wanted to add originally, you either can choose to add it from inside the playlist (requiring that you search for the music again anyway), or navigate back to the other part of Apple Music that you were in before when you originally decided that you wanted to add it to a playlist, and then choose "Add to a Playlist" again, where you could then select your newly-created playlist.
This is an overly-complicated workflow.
Steps to Reproduce:
1. Choose "Add to Playlist..." from a context menu activated by the '...' button available throughout the Apple Music user interface.
Expected Results:
An option to create a new playlist is available from this modal, even when one or more playlists already exist.
Actual Results:
No option to create a new playlist from this modal view is available when one or more playlists already exist (option is only available if no playlists already exist)
Version:
iOS 9.0 [13A341]
Notes:
Configuration:
iPhone 5S 64GB A1530, WiFi or cellular.
-
Product Version: 9.0 (13A341)
Created: 2015-09-12T01:46:17.970620
Originated: 2015-12-09T00:00:00
Open Radar Link: http://www.openradar.me/22671818 | non_infrastructure | apple music add to a playlist has no option to create a new playlist if there are existing playlist description summary when using the add to a playlist function in the apple music app it presents a modal showing the list of existing playlists for selection it does not however provide any way to create a new playlist unless there are no existing playlists at all this means that if you decide to add some music to a playlist activate the add to a playlist function but then decide that none of the existing playlists are suitable and you would like to create a new playlist to add the music to you have to first cancel adding to a playlist navigate to my music then to playlists and then choose new to create a new playlist then to add the music that you wanted to add originally you either can choose to add it from inside the playlist requiring that you search for the music again anyway or navigate back to the other part of apple music that you were in before when you originally decided that you wanted to add it to a playlist and then choose add to a playlist again where you could then select your newly created playlist this is an overly complicated workflow steps to reproduce choose add to playlist from a context menu activated by the button available throughout the apple music user interface expected results an option to create a new playlist is available from this modal even when one or more playlists already exist actual results no option to create a new playlist from this modal view is available when one or more playlists already exist option is only available if no playlists already exist version ios notes configuration iphone wifi or cellular product version created originated open radar link | 0 |
3,890 | 4,699,213,680 | IssuesEvent | 2016-10-12 15:05:34 | dotnet/roslyn | https://api.github.com/repos/dotnet/roslyn | opened | Move bootstrap build back to MSBuild 15.0 | Area-Infrastructure | Our bootstrap build temporarily needs to go back to MSBuild 14.0. There is a bug in MSBuild 15.0 which prevents us from fully loading our bootstrap components and hence invalidates our build.
https://github.com/Microsoft/msbuild/issues/1183
This bug tracks the removal of workarounds on our end. | 1.0 | Move bootstrap build back to MSBuild 15.0 - Our bootstrap build temporarily needs to go back to MSBuild 14.0. There is a bug in MSBuild 15.0 which prevents us from fully loading our bootstrap components and hence invalidates our build.
https://github.com/Microsoft/msbuild/issues/1183
This bug tracks the removal of workarounds on our end. | infrastructure | move bootstrap build back to msbuild our bootstrap build temporarily needs to go back to msbuild there is a bug in msbuild which prevents us from fully loading our bootstrap components and hence invalidates our build this bug tracks the removal of workarounds on our end | 1 |
19,458 | 13,244,888,593 | IssuesEvent | 2020-08-19 13:40:58 | cmu-lib/dhweb_app | https://api.github.com/repos/cmu-lib/dhweb_app | closed | 500 error on downloads page | bug infrastructure | Accidentally deleted the most recent CSV exports when cleaning up the production directory - just need to re-run the export tables task | 1.0 | 500 error on downloads page - Accidentally deleted the most recent CSV exports when cleaning up the production directory - just need to re-run the export tables task | infrastructure | error on downloads page accidentally deleted the most recent csv exports when cleaning up the production directory just need to re run the export tables task | 1 |
525,463 | 15,254,051,583 | IssuesEvent | 2021-02-20 10:20:10 | HackYourFuture-CPH/rate-my-cv | https://api.github.com/repos/HackYourFuture-CPH/rate-my-cv | closed | Login Page | frontend priority | ## User story
**Who:** **As a** user
**What:** **I want to** want to be able to login
**Why:** **so that we can** so that I can use the app
## Acceptance criteria
Check the existing components and implement this the design below using this container
/src/client/containers/SignIn/index.js
<img width="751" alt="Screenshot 2021-02-07 at 14 28 41" src="https://user-images.githubusercontent.com/6642037/107148636-a452ef00-6954-11eb-9004-05c23e65c587.png">
## Implementation details
Use the components below:
Side banner - https://github.com/HackYourFuture-CPH/rate-my-cv/issues/88
And , background color (#6236FF), and add the image below
Login form https://github.com/HackYourFuture-CPH/rate-my-cv/issues/44
 | 1.0 | Login Page - ## User story
**Who:** **As a** user
**What:** **I want to** want to be able to login
**Why:** **so that we can** so that I can use the app
## Acceptance criteria
Check the existing components and implement this the design below using this container
/src/client/containers/SignIn/index.js
<img width="751" alt="Screenshot 2021-02-07 at 14 28 41" src="https://user-images.githubusercontent.com/6642037/107148636-a452ef00-6954-11eb-9004-05c23e65c587.png">
## Implementation details
Use the components below:
Side banner - https://github.com/HackYourFuture-CPH/rate-my-cv/issues/88
And , background color (#6236FF), and add the image below
Login form https://github.com/HackYourFuture-CPH/rate-my-cv/issues/44
 | non_infrastructure | login page user story who as a user what i want to want to be able to login why so that we can so that i can use the app acceptance criteria check the existing components and implement this the design below using this container
src client containers signin index js img width alt screenshot at src implementation details use the components below side banner and background color and add the image below login form | 0 |
71,394 | 13,652,438,133 | IssuesEvent | 2020-09-27 07:31:45 | gupta-shrinath/Notes | https://api.github.com/repos/gupta-shrinath/Notes | closed | Change approach of bottom navigation implementation | code improvement help wanted | * Currently the bottom_navigation.dart has three lists `unSelectedItems` `selectedItems` `items`
* The `items` list has home as selected item and `items` list is passed to items property of BottomNavigationBar
* When onTap of BottomNavigationBar is called three things happen
* `items` list tapped index is changed to have selected item from `selectedItems` list
* the rest of the elements in the `items` list are changed to unselected items from the `unSelectedItems` list
* the current index of BottomNavigationBar is changed
As you can see this is a naive approach and I need help in improving this code. | 1.0 | Change approach of bottom navigation implementation - * Currently the bottom_navigation.dart has three lists `unSelectedItems` `selectedItems` `items`
* The `items` list has home as selected item and `items` list is passed to items property of BottomNavigationBar
* When onTap of BottomNavigationBar is called three things happen
* `items` list tapped index is changed to have selected item from `selectedItems` list
* the rest of the elements in the `items` list are changed to unselected items from the `unSelectedItems` list
* the current index of BottomNavigationBar is changed
As you can see this is a naive approach and I need help in improving this code. | non_infrastructure | change approach of bottom navigation implementation currently the bottom navigation dart has three lists unselecteditems selecteditems items the items list has home as selected item and items list is passed to items property of bottomnavigationbar when ontap of bottomnavigationbar is called three things happen items list tapped index is changed to have selected item from selecteditems list the rest elements in items list is changed to have unselected items from unselecteditems list the current index of bottomnavigationbar is changed as you can see this is a naive approach and i need help in improving this code | 0 |
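The selected/unselected swap described in that record can be sketched outside Flutter (Python here, since Dart is not used elsewhere in this file; the function name and shape are illustrative, not the issue's actual code):

```python
def rebuild_items(selected, unselected, tapped_index):
    """Return a nav-items list where only the tapped entry is selected.

    Mirrors the naive approach the issue describes: the tapped index
    takes its item from the selected list, and every other index takes
    its item from the unselected list.
    """
    return [
        selected[i] if i == tapped_index else unselected[i]
        for i in range(len(selected))
    ]
```

Rebuilding the list from the tap index in one expression, as above, avoids mutating three parallel lists in place, which is one way the "naive approach" in the issue could be simplified.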
325,876 | 24,064,382,728 | IssuesEvent | 2022-09-17 09:04:01 | digitallyinduced/ihp | https://api.github.com/repos/digitallyinduced/ihp | closed | Add hint to start editor in project directory | documentation | I've been debugging for a few hours and getting really annoyed that HLS just didn't work out of the box, and even with tons of fiddling, it would not get the right GHC version.
Turns out, I had to start `code` from within the IHP project directory.
For someone who's never used Nix, this is not obvious, so it might be nice to give a warning/footnote about this in the guide when the beginner starts using their editor? | 1.0 | Add hint to start editor in project directory - I've been debugging for a few hours and getting really annoyed that HLS just didn't work out of the box, and even with tons of fiddling, it would not get the right GHC version.
Turns out, I had to start `code` from within the IHP project directory.
For someone who's never used Nix, this is not obvious, so it might be nice to give a warning/footnote about this in the guide when the beginner starts using their editor? | non_infrastructure | add hint to start editor in project directory i ve been debugging for a few hours and getting really annoyed that hls just didn t work out of the box and even with tons of fiddling it would not get the right ghc version turns out i had to start code from within the ihp project directory for someone who s never used nix this is not obvious so it might be nice to give a warning footnote about this in the guide when the beginner starts using their editor | 0 |
19,040 | 13,187,220,267 | IssuesEvent | 2020-08-13 02:43:46 | icecube-trac/tix3 | https://api.github.com/repos/icecube-trac/tix3 | opened | switch docs to combo (Trac #1525) | Incomplete Migration Migrated from Trac infrastructure task | <details>
<summary><em>Migrated from <a href="https://code.icecube.wisc.edu/ticket/1525">https://code.icecube.wisc.edu/ticket/1525</a>, reported by david.schultz and owned by nega</em></summary>
<p>
```json
{
"status": "closed",
"changetime": "2016-03-30T20:15:27",
"description": "Proposal:\n\nInstead of showing offline-software, simulation, and icerec separately on http://software.icecube.wisc.edu, just show combo - a combined source of all the docs. We could even start directly at the sphinx docs index, instead of having a separate index.\n\nBenefits:\n\n* one single source for software docs\n * finding a project's docs is as simple as scrolling down on a single page (or \"find\")\n * could eliminate the individual metaproject docs and merge them into a central location\n* can add projects like `filterscripts` that aren't in icerec but are important\n\nPotential Issues:\n\n* old projects that have been removed from trunk no longer show up\n * but doesn't that happen already?",
"reporter": "david.schultz",
"cc": "olivas",
"resolution": "fixed",
"_ts": "1459368927867548",
"component": "infrastructure",
"summary": "switch docs to combo",
"priority": "normal",
"keywords": "",
"time": "2016-01-22T18:19:57",
"milestone": "Long-Term Future",
"owner": "nega",
"type": "task"
}
```
</p>
</details>
| 1.0 | switch docs to combo (Trac #1525) - <details>
<summary><em>Migrated from <a href="https://code.icecube.wisc.edu/ticket/1525">https://code.icecube.wisc.edu/ticket/1525</a>, reported by david.schultz and owned by nega</em></summary>
<p>
```json
{
"status": "closed",
"changetime": "2016-03-30T20:15:27",
"description": "Proposal:\n\nInstead of showing offline-software, simulation, and icerec separately on http://software.icecube.wisc.edu, just show combo - a combined source of all the docs. We could even start directly at the sphinx docs index, instead of having a separate index.\n\nBenefits:\n\n* one single source for software docs\n * finding a project's docs is as simple as scrolling down on a single page (or \"find\")\n * could eliminate the individual metaproject docs and merge them into a central location\n* can add projects like `filterscripts` that aren't in icerec but are important\n\nPotential Issues:\n\n* old projects that have been removed from trunk no longer show up\n * but doesn't that happen already?",
"reporter": "david.schultz",
"cc": "olivas",
"resolution": "fixed",
"_ts": "1459368927867548",
"component": "infrastructure",
"summary": "switch docs to combo",
"priority": "normal",
"keywords": "",
"time": "2016-01-22T18:19:57",
"milestone": "Long-Term Future",
"owner": "nega",
"type": "task"
}
```
</p>
</details>
| infrastructure | switch docs to combo trac migrated from json status closed changetime description proposal n ninstead of showing offline software simulation and icerec separately on just show combo a combined source of all the docs we could even start directly at the sphinx docs index instead of having a separate index n nbenefits n n one single source for software docs n finding a project s docs is as simple as scrolling down on a single page or find n could eliminate the individual metaproject docs and merge them into a central location n can add projects like filterscripts that aren t in icerec but are important n npotential issues n n old projects that have been removed from trunk no longer show up n but doesn t that happen already reporter david schultz cc olivas resolution fixed ts component infrastructure summary switch docs to combo priority normal keywords time milestone long term future owner nega type task | 1 |
23,846 | 16,620,161,992 | IssuesEvent | 2021-06-02 22:56:18 | gitcoinco/web | https://api.github.com/repos/gitcoinco/web | closed | happyfox support@gitcoin.co redirect | gpg - infrastructure | ### Circumstance
One thing I noticed from all the support tickets we're getting: is it possible to change the replies to the new_bounty_email to not direct to support@gitcoin.co? otherwise it'll create a new ticket anytime someone has an autoresponder / daily email sent
### Description
- [x] set CONTACT_EMAIL to `team@gitcoin.co` in the envs on the celery servers | 1.0 | happyfox support@gitcoin.co redirect - ### Circumstance
One thing I noticed from all the support tickets we're getting: is it possible to change the replies to the new_bounty_email to not direct to support@gitcoin.co? otherwise it'll create a new ticket anytime someone has an autoresponder / daily email sent
### Description
- [x] set CONTACT_EMAIL to `team@gitcoin.co` in the envs on the celery servers | infrastructure | happyfox support gitcoin co redirect circumstance one thing i noticed from all the support tickets we re getting is it possible to change the replies to the new bounty email to not direct to support gitcoin co otherwise it ll create a new ticket anytime someone has an autoresponder daily email sent description set contact email to team gitcoin co in the envs on the celery servers | 1 |
23,708 | 16,539,969,631 | IssuesEvent | 2021-05-27 15:39:14 | RasaHQ/rasa | https://api.github.com/repos/RasaHQ/rasa | closed | Flakey CI test: HTTPStatus.INTERNAL_SERVER_ERROR: 500 | area:rasa-oss/infrastructure :bullettrain_front: effort:enable-squad/2 feature:speed-up-ci :zap: type:maintenance :wrench: | **Error Information**:
```
> assert response.status == HTTPStatus.OK
E assert <HTTPStatus.INTERNAL_SERVER_ERROR: 500> == <HTTPStatus.OK: 200>
E + where <HTTPStatus.INTERNAL_SERVER_ERROR: 500> = <Response [500 Internal Server Error]>.status
E + and <HTTPStatus.OK: 200> = HTTPStatus.OK
tests\test_server.py:650: AssertionError
```
**Job run URL examples**:
https://github.com/RasaHQ/rasa/runs/2456015076
https://github.com/RasaHQ/rasa/runs/2517988286
https://github.com/RasaHQ/rasa/runs/2517988260
https://github.com/RasaHQ/rasa/runs/2547484544
https://github.com/RasaHQ/rasa/runs/2554528209
**Definition of Done**:
- [ ] apply fix so that tests run deterministic | 1.0 | Flakey CI test: HTTPStatus.INTERNAL_SERVER_ERROR: 500 - **Error Information**:
```
> assert response.status == HTTPStatus.OK
E assert <HTTPStatus.INTERNAL_SERVER_ERROR: 500> == <HTTPStatus.OK: 200>
E + where <HTTPStatus.INTERNAL_SERVER_ERROR: 500> = <Response [500 Internal Server Error]>.status
E + and <HTTPStatus.OK: 200> = HTTPStatus.OK
tests\test_server.py:650: AssertionError
```
**Job run URL examples**:
https://github.com/RasaHQ/rasa/runs/2456015076
https://github.com/RasaHQ/rasa/runs/2517988286
https://github.com/RasaHQ/rasa/runs/2517988260
https://github.com/RasaHQ/rasa/runs/2547484544
https://github.com/RasaHQ/rasa/runs/2554528209
**Definition of Done**:
- [ ] apply fix so that tests run deterministic | infrastructure | flakey ci test httpstatus internal server error error information assert response status httpstatus ok e assert e where status e and httpstatus ok tests test server py assertionerror job run url examples definition of done apply fix so that tests run deterministic | 1 |
99,337 | 8,697,782,908 | IssuesEvent | 2018-12-04 21:18:16 | cockroachdb/cockroach | https://api.github.com/repos/cockroachdb/cockroach | closed | roachtest: rebalance-replicas-by-load failed | C-test-failure O-robot | SHA: https://github.com/cockroachdb/cockroach/commits/109cf8705b773c0d3a1e7ab02ce63f764e101106
Parameters:
To repro, try:
```
# Don't forget to check out a clean suitable branch and experiment with the
# stress invocation until the desired results present themselves. For example,
# using stress instead of stressrace and passing the '-p' stressflag which
# controls concurrency.
./scripts/gceworker.sh start && ./scripts/gceworker.sh mosh
cd ~/go/src/github.com/cockroachdb/cockroach && \
make stressrace TESTS=rebalance-replicas-by-load PKG=roachtest TESTTIMEOUT=5m STRESSFLAGS='-maxtime 20m -timeout 10m' 2>&1 | tee /tmp/stress.log
```
Failed test: https://teamcity.cockroachdb.com/viewLog.html?buildId=991714&tab=buildLog
```
The test failed on master:
test.go:639,rebalance_load.go:127,rebalance_load.go:154: timed out before leases were evenly spread
``` | 1.0 | roachtest: rebalance-replicas-by-load failed - SHA: https://github.com/cockroachdb/cockroach/commits/109cf8705b773c0d3a1e7ab02ce63f764e101106
Parameters:
To repro, try:
```
# Don't forget to check out a clean suitable branch and experiment with the
# stress invocation until the desired results present themselves. For example,
# using stress instead of stressrace and passing the '-p' stressflag which
# controls concurrency.
./scripts/gceworker.sh start && ./scripts/gceworker.sh mosh
cd ~/go/src/github.com/cockroachdb/cockroach && \
make stressrace TESTS=rebalance-replicas-by-load PKG=roachtest TESTTIMEOUT=5m STRESSFLAGS='-maxtime 20m -timeout 10m' 2>&1 | tee /tmp/stress.log
```
Failed test: https://teamcity.cockroachdb.com/viewLog.html?buildId=991714&tab=buildLog
```
The test failed on master:
test.go:639,rebalance_load.go:127,rebalance_load.go:154: timed out before leases were evenly spread
``` | non_infrastructure | roachtest rebalance replicas by load failed sha parameters to repro try don t forget to check out a clean suitable branch and experiment with the stress invocation until the desired results present themselves for example using stress instead of stressrace and passing the p stressflag which controls concurrency scripts gceworker sh start scripts gceworker sh mosh cd go src github com cockroachdb cockroach make stressrace tests rebalance replicas by load pkg roachtest testtimeout stressflags maxtime timeout tee tmp stress log failed test the test failed on master test go rebalance load go rebalance load go timed out before leases were evenly spread | 0 |
11,505 | 9,217,500,072 | IssuesEvent | 2019-03-11 10:54:30 | pharo-spec/Spec | https://api.github.com/repos/pharo-spec/Spec | closed | CI is once again broken | Infrastructure bug in progress | This makes it harder to make PR because each time I need to go and compare failures between master and my PR for Pharo 7 and 8. | 1.0 | CI is once again broken - This makes it harder to make PR because each time I need to go and compare failures between master and my PR for Pharo 7 and 8. | infrastructure | ci is once again broken this makes it harder to make pr because each time i need to go and compare failures between master and my pr for pharo and | 1 |
297,526 | 9,171,611,093 | IssuesEvent | 2019-03-04 02:47:34 | BoiseState/CS471-S19-ProjectWarmup | https://api.github.com/repos/BoiseState/CS471-S19-ProjectWarmup | closed | Remove Medium Priority Bug - ijpq | bug priority-medium | References #2
**Steps to Reproduce:**
1. Open in an IDE the file `src\main\java\com\mucommander\command\AssociationWriter.java`
2. Go to line `52`
**Actual Results:**
Syntax errors are highlighted in the IDE
**Expected Results:**
No syntax errors should be displayed
**Other notes:**
The bug is spread in one other location:
* `src\main\java\com\mucommander\command\CommandWriter.java` at line `19` | 1.0 | Remove Medium Priority Bug - ijpq - References #2
**Steps to Reproduce:**
1. Open in an IDE the file `src\main\java\com\mucommander\command\AssociationWriter.java`
2. Go to line `52`
**Actual Results:**
Syntax errors are highlighted in the IDE
**Expected Results:**
No syntax errors should be displayed
**Other notes:**
The bug is spread in one other location:
* `src\main\java\com\mucommander\command\CommandWriter.java` at line `19` | non_infrastructure | remove medium priority bug ijpq references steps to reproduce open in an ide the file src main java com mucommander command associationwriter java go to line actual results syntax errors are highlighted in the ide expected results no syntax errors should be displayed other notes the bug is spread in one other location src main java com mucommander command commandwriter java at line | 0 |
408,750 | 27,706,453,626 | IssuesEvent | 2023-03-14 11:32:26 | GIScience/openrouteservice | https://api.github.com/repos/GIScience/openrouteservice | closed | remove sourcespy from readme | documentation :book: | Sourcespy seems dead. The README contains the failing status badge that should be removed. | 1.0 | remove sourcespy from readme - Sourcespy seems dead. The README contains the failing status badge that should be removed. | non_infrastructure | remove sourcespy from readme sourcespy seems dead the readme contains the failing status badge that should be removed | 0 |
3,594 | 4,427,725,168 | IssuesEvent | 2016-08-16 22:28:36 | servo/servo | https://api.github.com/repos/servo/servo | closed | RFC: Tidy config file | A-infrastructure L-python | As brought to light by the mess I've been making around https://github.com/servo/servo/issues/10636, including https://github.com/servo/rust-mozjs/pull/255, there are cases in which we need `tidy` to selectively ignore certain types of error in certain directories.
One way to address this problem will be to create a Tidy config file, `.servo-tidy.toml` or similar, to optionally live at the root of a project (or in `.github` or `.travis`?), which Tidy would read to configure the strictness of its various checks.
* Are there any compelling reasons that we should configure exceptions in Tidy itself rather than per repo?
* Where should it live? I think a choice between `.servo-tidy` in the root of the project and `servo-tidy.toml` (or some other name not starting with `.`) in `.travis/` would provide an appropriate level of flexibility
* How should it be structured? I think [toml](https://github.com/toml-lang/toml) would make sense considering its extensive use in Cargo and other Rust tooling configuration. I think keeping the config to a blacklist of tidy errors to ignore would require the least maintenance as Tidy evolves.
| 1.0 | RFC: Tidy config file - As brought to light by the mess I've been making around https://github.com/servo/servo/issues/10636, including https://github.com/servo/rust-mozjs/pull/255, there are cases in which we need `tidy` to selectively ignore certain types of error in certain directories.
One way to address this problem will be to create a Tidy config file, `.servo-tidy.toml` or similar, to optionally live at the root of a project (or in `.github` or `.travis`?), which Tidy would read to configure the strictness of its various checks.
* Are there any compelling reasons that we should configure exceptions in Tidy itself rather than per repo?
* Where should it live? I think a choice between `.servo-tidy` in the root of the project and `servo-tidy.toml` (or some other name not starting with `.`) in `.travis/` would provide an appropriate level of flexibility
* How should it be structured? I think [toml](https://github.com/toml-lang/toml) would make sense considering its extensive use in Cargo and other Rust tooling configuration. I think keeping the config to a blacklist of tidy errors to ignore would require the least maintenance as Tidy evolves.
| infrastructure | rfc tidy config file as brought to light by the mess i ve been making around including there are cases in which we need tidy to selectively ignore certain types of error in certain directories one way to address this problem will be to create a tidy config file servo tidy toml or similar to optionally live at the root of a project or in github or travis which tidy would read to configure the strictness of its various checks are there any compelling reasons that we should configure exceptions in tidy itself rather than per repo where should it live i think a choice between servo tidy in the root of the project and servo tidy toml or some other name not starting with in travis would provide an appropriate level of flexibility how should it be structured i think would make sense considering its extensive use in cargo and other rust tooling configuration i think keeping the config to a blacklist of tidy errors to ignore would require the least maintenance as tidy evolves | 1 |
23,028 | 15,770,595,920 | IssuesEvent | 2021-03-31 19:36:16 | ampproject/amp-github-apps | https://api.github.com/repos/ampproject/amp-github-apps | opened | Rename the default branch of this repo to `main` | Category: Infrastructure Type: Feature Request | This is a tracking FR that can be closed when the default branch of this repo is renamed from `master` to `main`. For more context, see https://github.com/ampproject/amp-github-apps/pull/1256, https://github.com/ampproject/amphtml/issues/32195, and https://github.com/ampproject/amphtml/pull/33571
Tagging @ampproject/wg-infra for visibility, since we'll need to coordinate the renaming with https://github.com/ampproject/amp-github-apps/pull/1256 (after there is agreement on [this plan](https://docs.google.com/document/d/1BN8CkM3b2nydENRI2X1tE6nCxcaeVliUIaC2e-mXZbg).) | 1.0 | Rename the default branch of this repo to `main` - This is a tracking FR that can be closed when the default branch of this repo is renamed from `master` to `main`. For more context, see https://github.com/ampproject/amp-github-apps/pull/1256, https://github.com/ampproject/amphtml/issues/32195, and https://github.com/ampproject/amphtml/pull/33571
Tagging @ampproject/wg-infra for visibility, since we'll need to coordinate the renaming with https://github.com/ampproject/amp-github-apps/pull/1256 (after there is agreement on [this plan](https://docs.google.com/document/d/1BN8CkM3b2nydENRI2X1tE6nCxcaeVliUIaC2e-mXZbg).) | infrastructure | rename the default branch of this repo to main this is a tracking fr that can be closed when the default branch of this repo is renamed from master to main for more context see and tagging ampproject wg infra for visibility since we ll need to coordinate the renaming with after there is agreement on | 1 |
12,825 | 9,967,273,751 | IssuesEvent | 2019-07-08 13:16:36 | dotnet/corefx | https://api.github.com/repos/dotnet/corefx | closed | Q: how to patch CorLib on released sdk | area-Infrastructure | I'm trying to patch the released `3.0.100-preview6-012264` sdk with some corefx changes.
I've downloaded the sdk from https://dotnet.microsoft.com/download.
Then checked out the corefx repo at the `v3.0.0-preview6` tag.
Made some small changes:
```diff
diff --git a/src/Common/src/CoreLib/Microsoft/Win32/SafeHandles/SafeFileHandle.Unix.cs b/src/Common/src/CoreLib/Microsoft/Win32/SafeHandles/SafeFileHandle.Unix.cs
index 7adc7d019c..1e4a1ae8b9 100644
--- a/src/Common/src/CoreLib/Microsoft/Win32/SafeHandles/SafeFileHandle.Unix.cs
+++ b/src/Common/src/CoreLib/Microsoft/Win32/SafeHandles/SafeFileHandle.Unix.cs
@@ -35,6 +35,7 @@ public SafeFileHandle(IntPtr preexistingHandle, bool ownsHandle) : this(ownsHand
/// <returns>A SafeFileHandle for the opened file.</returns>
internal static SafeFileHandle Open(string path, Interop.Sys.OpenFlags flags, int mode)
{
+ System.Console.WriteLine($"Opening file {path} {flags} {mode}");
Debug.Assert(path != null);
SafeFileHandle handle = Interop.Sys.Open(path, flags, mode);
diff --git a/src/System.Net.Sockets/src/System.Net.Sockets.csproj b/src/System.Net.Sockets/src/System.Net.Sockets.csproj
index 681c46246c..210165387d 100644
--- a/src/System.Net.Sockets/src/System.Net.Sockets.csproj
+++ b/src/System.Net.Sockets/src/System.Net.Sockets.csproj
@@ -380,6 +380,7 @@
<ItemGroup>
<Reference Include="Microsoft.Win32.Primitives" />
<Reference Include="System.Buffers" />
+ <Reference Include="System.Console" />
<Reference Include="System.Collections" />
<Reference Include="System.Collections.Concurrent" />
<Reference Include="System.Diagnostics.Debug" />
diff --git a/src/System.Net.Sockets/src/System/Net/Sockets/Socket.cs b/src/System.Net.Sockets/src/System/Net/Sockets/Socket.cs
index 1f12774d62..41ca93fd6c 100644
--- a/src/System.Net.Sockets/src/System/Net/Sockets/Socket.cs
+++ b/src/System.Net.Sockets/src/System/Net/Sockets/Socket.cs
@@ -84,6 +84,7 @@ public Socket(SocketType socketType, ProtocolType protocolType)
// Initializes a new instance of the Sockets.Socket class.
public Socket(AddressFamily addressFamily, SocketType socketType, ProtocolType protocolType)
{
+ System.Console.WriteLine($"Creating socket {addressFamily} {socketType} {protocolType}");
if (NetEventSource.IsEnabled) NetEventSource.Enter(this, addressFamily);
InitializeSockets();
```
Then `./build.sh -c Release` and copy over files:
```
cp /home/tmds/repos/corefx/artifacts/bin/runtime/netcoreapp-Linux-Release-x64/* /tmp/dotnet-p6/shared/Microsoft.NETCore.App/3.0.0-preview6-27804-01
```
Now run a console app:
```cs
using System;
using System.IO;
using System.Net.Sockets;
namespace console
{
class Program
{
static void Main(string[] args)
{
Socket s = new Socket(AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.Tcp);
File.WriteAllText("/tmp/somefile", string.Empty);
}
}
}
```
outputs:
```
$ dotnet run
Creating socket InterNetwork Stream Tcp
Creating socket InterNetworkV6 Stream Tcp
Creating socket InterNetwork Stream Tcp
Creating socket InterNetworkV6 Stream Tcp
Creating socket InterNetwork Stream Tcp
Creating socket InterNetworkV6 Stream Tcp
Creating socket InterNetwork Stream Tcp
```
It seems the changes from `System.Net.Sockets` are used. But the changes to `CoreLib` are not. What do I need to do to include the `CoreLib` changes?
CC @ViktorHofer | 1.0 | Q: how to patch CorLib on released sdk - I'm trying to patch the released `3.0.100-preview6-012264` sdk with some corefx changes.
I've downloaded the sdk from https://dotnet.microsoft.com/download.
Then checked out the corefx repo at the `v3.0.0-preview6` tag.
Made some small changes:
```diff
diff --git a/src/Common/src/CoreLib/Microsoft/Win32/SafeHandles/SafeFileHandle.Unix.cs b/src/Common/src/CoreLib/Microsoft/Win32/SafeHandles/SafeFileHandle.Unix.cs
index 7adc7d019c..1e4a1ae8b9 100644
--- a/src/Common/src/CoreLib/Microsoft/Win32/SafeHandles/SafeFileHandle.Unix.cs
+++ b/src/Common/src/CoreLib/Microsoft/Win32/SafeHandles/SafeFileHandle.Unix.cs
@@ -35,6 +35,7 @@ public SafeFileHandle(IntPtr preexistingHandle, bool ownsHandle) : this(ownsHand
/// <returns>A SafeFileHandle for the opened file.</returns>
internal static SafeFileHandle Open(string path, Interop.Sys.OpenFlags flags, int mode)
{
+ System.Console.WriteLine($"Opening file {path} {flags} {mode}");
Debug.Assert(path != null);
SafeFileHandle handle = Interop.Sys.Open(path, flags, mode);
diff --git a/src/System.Net.Sockets/src/System.Net.Sockets.csproj b/src/System.Net.Sockets/src/System.Net.Sockets.csproj
index 681c46246c..210165387d 100644
--- a/src/System.Net.Sockets/src/System.Net.Sockets.csproj
+++ b/src/System.Net.Sockets/src/System.Net.Sockets.csproj
@@ -380,6 +380,7 @@
<ItemGroup>
<Reference Include="Microsoft.Win32.Primitives" />
<Reference Include="System.Buffers" />
+ <Reference Include="System.Console" />
<Reference Include="System.Collections" />
<Reference Include="System.Collections.Concurrent" />
<Reference Include="System.Diagnostics.Debug" />
diff --git a/src/System.Net.Sockets/src/System/Net/Sockets/Socket.cs b/src/System.Net.Sockets/src/System/Net/Sockets/Socket.cs
index 1f12774d62..41ca93fd6c 100644
--- a/src/System.Net.Sockets/src/System/Net/Sockets/Socket.cs
+++ b/src/System.Net.Sockets/src/System/Net/Sockets/Socket.cs
@@ -84,6 +84,7 @@ public Socket(SocketType socketType, ProtocolType protocolType)
// Initializes a new instance of the Sockets.Socket class.
public Socket(AddressFamily addressFamily, SocketType socketType, ProtocolType protocolType)
{
+ System.Console.WriteLine($"Creating socket {addressFamily} {socketType} {protocolType}");
if (NetEventSource.IsEnabled) NetEventSource.Enter(this, addressFamily);
InitializeSockets();
```
Then `./build.sh -c Release` and copy over files:
```
cp /home/tmds/repos/corefx/artifacts/bin/runtime/netcoreapp-Linux-Release-x64/* /tmp/dotnet-p6/shared/Microsoft.NETCore.App/3.0.0-preview6-27804-01
```
Now run a console app:
```cs
using System;
using System.IO;
using System.Net.Sockets;
namespace console
{
class Program
{
static void Main(string[] args)
{
Socket s = new Socket(AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.Tcp);
File.WriteAllText("/tmp/somefile", string.Empty);
}
}
}
```
outputs:
```
$ dotnet run
Creating socket InterNetwork Stream Tcp
Creating socket InterNetworkV6 Stream Tcp
Creating socket InterNetwork Stream Tcp
Creating socket InterNetworkV6 Stream Tcp
Creating socket InterNetwork Stream Tcp
Creating socket InterNetworkV6 Stream Tcp
Creating socket InterNetwork Stream Tcp
```
It seems the changes from `System.Net.Sockets` are used. But the changes to `CoreLib` are not. What do I need to do to include the `CoreLib` changes?
CC @ViktorHofer | infrastructure | q how to patch corlib on released sdk i m trying to patch the released sdk with some corefx changes i ve downloaded the sdk from then checked out the corefx repo at the tag made some small changes diff diff git a src common src corelib microsoft safehandles safefilehandle unix cs b src common src corelib microsoft safehandles safefilehandle unix cs index a src common src corelib microsoft safehandles safefilehandle unix cs b src common src corelib microsoft safehandles safefilehandle unix cs public safefilehandle intptr preexistinghandle bool ownshandle this ownshand a safefilehandle for the opened file internal static safefilehandle open string path interop sys openflags flags int mode system console writeline opening file path flags mode debug assert path null safefilehandle handle interop sys open path flags mode diff git a src system net sockets src system net sockets csproj b src system net sockets src system net sockets csproj index a src system net sockets src system net sockets csproj b src system net sockets src system net sockets csproj diff git a src system net sockets src system net sockets socket cs b src system net sockets src system net sockets socket cs index a src system net sockets src system net sockets socket cs b src system net sockets src system net sockets socket cs public socket sockettype sockettype protocoltype protocoltype initializes a new instance of the sockets socket class public socket addressfamily addressfamily sockettype sockettype protocoltype protocoltype system console writeline creating socket addressfamily sockettype protocoltype if neteventsource isenabled neteventsource enter this addressfamily initializesockets then build sh c release and copy over files cp home tmds repos corefx artifacts bin runtime netcoreapp linux release tmp dotnet shared microsoft netcore app now run a console app cs using system using system io using system net sockets namespace console class program static void 
main string args socket s new socket addressfamily internetwork sockettype stream protocoltype tcp file writealltext tmp somefile string empty outputs dotnet run creating socket internetwork stream tcp creating socket stream tcp creating socket internetwork stream tcp creating socket stream tcp creating socket internetwork stream tcp creating socket stream tcp creating socket internetwork stream tcp it seems the changes from system net sockets are used but the changes to corelib are not what do i need to do to include the corelib changes cc viktorhofer | 1 |
304,149 | 9,321,918,365 | IssuesEvent | 2019-03-27 06:18:59 | webcompat/web-bugs | https://api.github.com/repos/webcompat/web-bugs | closed | kahoot.it - site is not usable | browser-firefox-mobile priority-normal | <!-- @browser: Firefox Mobile 67.0 -->
<!-- @ua_header: Mozilla/5.0 (Android 9; Mobile; rv:67.0) Gecko/67.0 Firefox/67.0 -->
<!-- @reported_with: mobile-reporter -->
**URL**: https://kahoot.it/
**Browser / Version**: Firefox Mobile 67.0
**Operating System**: Android
**Tested Another Browser**: Yes
**Problem type**: Site is not usable
**Description**: The site doesn't load.
**Steps to Reproduce**:
[](https://webcompat.com/uploads/2019/3/fd363616-8301-4756-9140-93b90e59db3d.jpeg)
<details>
<summary>Browser Configuration</summary>
<ul>
<li>mixed active content blocked: false</li><li>image.mem.shared: true</li><li>buildID: 20190309094812</li><li>tracking content blocked: false</li><li>gfx.webrender.blob-images: true</li><li>hasTouchScreen: true</li><li>mixed passive content blocked: false</li><li>gfx.webrender.enabled: false</li><li>gfx.webrender.all: false</li><li>channel: nightly</li>
</ul>
<p>Console Messages:</p>
<pre>
[u'[JavaScript Error: "TypeError: localStorage is null" {file: "https://tap-nexus.appspot.com/js/sdk/kahunaAPI_min.js" line: 1}]\nKahuna<@https://tap-nexus.appspot.com/js/sdk/kahunaAPI_min.js:1:10948\n@https://tap-nexus.appspot.com/js/sdk/kahunaAPI_min.js:1:19457\n', u'[JavaScript Warning: "An iframe which has both allow-scripts and allow-same-origin for its sandbox attribute can remove its sandboxing." {file: "https://kahoot.it/" line: 0}]', u'[JavaScript Warning: "The resource at https://www.google-analytics.com/analytics.js was blocked because content blocking is enabled." {file: "https://kahoot.it/" line: 0}]', u'[JavaScript Error: "TypeError: window.localStorage is null" {file: "https://kahoot.it/js/controller.min.js?v1.660.0" line: 15}]\n@https://kahoot.it/js/controller.min.js?v1.660.0:15:22072\n@https://kahoot.it/js/controller.min.js?v1.660.0:15:22329\n', u'[JavaScript Error: "TypeError: angular.bootstrap is not a function" {file: "https://kahoot.it/js/bootstrap.js" line: 1}]\n@https://kahoot.it/js/bootstrap.js:1:1545\na@https://kahoot.it/js/bootstrap.js:1:367\ne/e[a]@https://kahoot.it/js/bootstrap.js:1:869\n', u'[JavaScript Warning: "Content Security Policy: Ignoring \'unsafe-inline\' within script-src: strict-dynamic specified"]', u'[JavaScript Warning: "Content Security Policy: Ignoring https: within script-src: strict-dynamic specified"]', u'[JavaScript Warning: "Content Security Policy: Ignoring http: within script-src: strict-dynamic specified"]']
</pre>
</details>
_From [webcompat.com](https://webcompat.com/) with ❤️_ | 1.0 | kahoot.it - site is not usable - <!-- @browser: Firefox Mobile 67.0 -->
<!-- @ua_header: Mozilla/5.0 (Android 9; Mobile; rv:67.0) Gecko/67.0 Firefox/67.0 -->
<!-- @reported_with: mobile-reporter -->
**URL**: https://kahoot.it/
**Browser / Version**: Firefox Mobile 67.0
**Operating System**: Android
**Tested Another Browser**: Yes
**Problem type**: Site is not usable
**Description**: The site doesn't load.
**Steps to Reproduce**:
[](https://webcompat.com/uploads/2019/3/fd363616-8301-4756-9140-93b90e59db3d.jpeg)
<details>
<summary>Browser Configuration</summary>
<ul>
<li>mixed active content blocked: false</li><li>image.mem.shared: true</li><li>buildID: 20190309094812</li><li>tracking content blocked: false</li><li>gfx.webrender.blob-images: true</li><li>hasTouchScreen: true</li><li>mixed passive content blocked: false</li><li>gfx.webrender.enabled: false</li><li>gfx.webrender.all: false</li><li>channel: nightly</li>
</ul>
<p>Console Messages:</p>
<pre>
[u'[JavaScript Error: "TypeError: localStorage is null" {file: "https://tap-nexus.appspot.com/js/sdk/kahunaAPI_min.js" line: 1}]\nKahuna<@https://tap-nexus.appspot.com/js/sdk/kahunaAPI_min.js:1:10948\n@https://tap-nexus.appspot.com/js/sdk/kahunaAPI_min.js:1:19457\n', u'[JavaScript Warning: "An iframe which has both allow-scripts and allow-same-origin for its sandbox attribute can remove its sandboxing." {file: "https://kahoot.it/" line: 0}]', u'[JavaScript Warning: "The resource at https://www.google-analytics.com/analytics.js was blocked because content blocking is enabled." {file: "https://kahoot.it/" line: 0}]', u'[JavaScript Error: "TypeError: window.localStorage is null" {file: "https://kahoot.it/js/controller.min.js?v1.660.0" line: 15}]\n@https://kahoot.it/js/controller.min.js?v1.660.0:15:22072\n@https://kahoot.it/js/controller.min.js?v1.660.0:15:22329\n', u'[JavaScript Error: "TypeError: angular.bootstrap is not a function" {file: "https://kahoot.it/js/bootstrap.js" line: 1}]\n@https://kahoot.it/js/bootstrap.js:1:1545\na@https://kahoot.it/js/bootstrap.js:1:367\ne/e[a]@https://kahoot.it/js/bootstrap.js:1:869\n', u'[JavaScript Warning: "Content Security Policy: Ignoring \'unsafe-inline\' within script-src: strict-dynamic specified"]', u'[JavaScript Warning: "Content Security Policy: Ignoring https: within script-src: strict-dynamic specified"]', u'[JavaScript Warning: "Content Security Policy: Ignoring http: within script-src: strict-dynamic specified"]']
</pre>
</details>
_From [webcompat.com](https://webcompat.com/) with ❤️_ | non_infrastructure | kahoot it site is not usable url browser version firefox mobile operating system android tested another browser yes problem type site is not usable description the site doesn t load steps to reproduce browser configuration mixed active content blocked false image mem shared true buildid tracking content blocked false gfx webrender blob images true hastouchscreen true mixed passive content blocked false gfx webrender enabled false gfx webrender all false channel nightly console messages nkahuna u u u n u n u u u from with ❤️ | 0 |
25,350 | 25,031,723,836 | IssuesEvent | 2022-11-04 12:58:04 | precice/precice | https://api.github.com/repos/precice/precice | opened | Rename attributes in `m2n` tag | usability breaking change | ## Problem
We currently have:
```xml
<m2n:sockets from="Fluid" to="Solid" exchange-directory="../" />
```
The `from` and `to` are kind of hard to understand. I often have to look up again which was which. And sometimes users misunderstand these and think that for a bi-directional coupling one would need two:
```xml
<m2n:sockets from="Fluid" to="Solid" exchange-directory="../" />
<m2n:sockets from="Solid" to="Fluid" exchange-directory="../" />
```
## Suggested solution
Rename to:
```xml
<m2n:sockets acceptor="Fluid" requestor="Solid" exchange-directory="../" />
```
## Alternatives
?
| True | Rename attributes in `m2n` tag - ## Problem
We currently have:
```xml
<m2n:sockets from="Fluid" to="Solid" exchange-directory="../" />
```
The `from` and `to` are kind of hard to understand. I often have to look up again which was which. And sometimes users misunderstand these and think that for a bi-directional coupling one would need two:
```xml
<m2n:sockets from="Fluid" to="Solid" exchange-directory="../" />
<m2n:sockets from="Solid" to="Fluid" exchange-directory="../" />
```
## Suggested solution
Rename to:
```xml
<m2n:sockets acceptor="Fluid" requestor="Solid" exchange-directory="../" />
```
## Alternatives
?
| non_infrastructure | rename attributes in tag problem we currently have xml the from and to are kind of hard to understand i often have to look up again which was which and sometimes users misunderstand these and think that for a bi directional coupling one would need two xml suggested solution rename to xml alternatives | 0 |
13,012 | 10,062,514,577 | IssuesEvent | 2019-07-23 01:30:42 | APSIMInitiative/ApsimX | https://api.github.com/repos/APSIMInitiative/ApsimX | opened | Crash when clicking on report | bug interface/infrastructure | This issue arises when a variable name in a report contains a single apostrophe. After running the simulation containing the report and clicking on the report (or datastore), apsim will crash. | 1.0 | Crash when clicking on report - This issue arises when a variable name in a report contains a single apostrophe. After running the simulation containing the report and clicking on the report (or datastore), apsim will crash. | infrastructure | crash when clicking on report this issue arises when a variable name in a report contains a single apostrophe after running the simulation containing the report and clicking on the report or datastore apsim will crash | 1 |
78,301 | 22,193,320,231 | IssuesEvent | 2022-06-07 02:59:12 | tensorflow/tensorflow | https://api.github.com/repos/tensorflow/tensorflow | closed | tensorflow cpu module's speed lower on windows than linux | stat:awaiting response type:build/install type:support stalled 1.4.0 | System information
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow):yes
- OS Platform and Distribution (e.g., Linux Ubuntu 16.04):windows7 64bit and ubuntu 16.04 64bit
- TensorFlow installed from (source or binary):build tensorflow source to shared lib
- TensorFlow version (use command below):tensorflow v1.3.0
- Python version: 3.5
- Bazel version (if compiling from source):N/A
- GCC/Compiler version (if compiling from source):N/A
- CUDA/cuDNN version:N/A
- GPU model and memory:N/A
- Exact command to reproduce:N/A
Describe the problem
Training tensorflow module and detect faces both on windows7 and ubuntu 16.04, but it costs about twice time on windows7 than ubuntu16.04. So we want to know this issue is normal or not? And if it is normal, what's the reason?
windows7 PC environment:
CPU: Intel Core i3 2120
time: 80~160 ms
ubuntu16.04 PC environment:
CPU: Intel(R) Core(TM) i3-3220 CPU@3.30GHz
time: 40~100 ms
| 1.0 | tensorflow cpu module's speed lower on windows than linux - System information
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow):yes
- OS Platform and Distribution (e.g., Linux Ubuntu 16.04):windows7 64bit and ubuntu 16.04 64bit
- TensorFlow installed from (source or binary):build tensorflow source to shared lib
- TensorFlow version (use command below):tensorflow v1.3.0
- Python version: 3.5
- Bazel version (if compiling from source):N/A
- GCC/Compiler version (if compiling from source):N/A
- CUDA/cuDNN version:N/A
- GPU model and memory:N/A
- Exact command to reproduce:N/A
Describe the problem
Training tensorflow module and detect faces both on windows7 and ubuntu 16.04, but it costs about twice time on windows7 than ubuntu16.04. So we want to know this issue is normal or not? And if it is normal, what's the reason?
windows7 PC environment:
CPU: Intel Core i3 2120
time: 80~160 ms
ubuntu16.04 PC environment:
CPU: Intel(R) Core(TM) i3-3220 CPU@3.30GHz
time: 40~100 ms
| non_infrastructure | tensorflow cpu module s speed lower on windows than linux system information have i written custom code as opposed to using a stock example script provided in tensorflow yes os platform and distribution e g linux ubuntu and ubuntu tensorflow installed from source or binary build tensorflow source to shared lib tensorflow version use command below tensorflow python version bazel version if compiling from source n a gcc compiler version if compiling from source n a cuda cudnn version n a gpu model and memory n a exact command to reproduce n a describe the problem training tensorflow module and detect faces both on and ubuntu but it costs about twice time on than so we want to know this issue is normal or not and if it is normal what s the reason pc environment cpu intel core time ms pc environment cpu intel r core tm cpu time ms | 0 |
13,354 | 10,218,832,308 | IssuesEvent | 2019-08-15 16:57:37 | E3SM-Project/scream | https://api.github.com/repos/E3SM-Project/scream | opened | Introduce cmake logic to create baselines for scream | cmake infrastructure testing | Currently, the homme standalone test is considered 'passed' if it runs. We should add some bfb checks via cprnc (like in homme) to also check correctness.
The way homme creates baselines is a bit involved, and perhaps we can do a bit better and avoid bash scripts, staying 100% in the cmake realm.
Possible steps:
- have a 'SCREAM_BASELINES_DIR` variable. If this is 'master', do not set the variable, otherwise, set it to the build dir of master (or whatever blessed version we have)
- create a cmake script that runs the test and runs cprnc upon completion.
- if baselines are present, run cmake script (run+cprnc), otherwise simply run (and save baselines) | 1.0 | Introduce cmake logic to create baselines for scream - Currently, the homme standalone test is considered 'passed' if it runs. We should add some bfb checks via cprnc (like in homme) to also check correctness.
The way homme creates baselines is a bit involved, and perhaps we can do a bit better and avoid bash scripts, staying 100% in the cmake realm.
Possible steps:
- have a 'SCREAM_BASELINES_DIR` variable. If this is 'master', do not set the variable, otherwise, set it to the build dir of master (or whatever blessed version we have)
- create a cmake script that runs the test and runs cprnc upon completion.
- if baselines are present, run cmake script (run+cprnc), otherwise simply run (and save baselines) | infrastructure | introduce cmake logic to create baselines for scream currently the homme standalone test is considered passed if it runs we should add some bfb checks via cprnc like in homme to also check correctness the way homme creates baselines is a bit involved and perhaps we can do a bit better and avoid bash scripts staying in the cmake realm possible steps have a scream baselines dir variable if this is master do not set the variable otherwise set it to the build dir of master or whatever blessed version we have create a cmake script that runs the test and runs cprnc upon completion if baselines are present run cmake script run cprnc otherwise simply run and save baselines | 1 |
32,300 | 26,609,311,859 | IssuesEvent | 2023-01-23 22:14:36 | zcash/zcash | https://api.github.com/repos/zcash/zcash | closed | Update contrib/devtools/fix-copyright-headers.py to handle Zcash copyright headers | release dev infrastructure licensing | As suggested at https://github.com/zcash/zcash/issues/2887#issuecomment-361712150 .
For bonus points, robustify this script and add running it to the automatic release process. | 1.0 | Update contrib/devtools/fix-copyright-headers.py to handle Zcash copyright headers - As suggested at https://github.com/zcash/zcash/issues/2887#issuecomment-361712150 .
For bonus points, robustify this script and add running it to the automatic release process. | infrastructure | update contrib devtools fix copyright headers py to handle zcash copyright headers as suggested at for bonus points robustify this script and add running it to the automatic release process | 1 |
30,431 | 24,821,337,796 | IssuesEvent | 2022-10-25 16:41:17 | yt-project/yt_idv | https://api.github.com/repos/yt-project/yt_idv | closed | add pypi deployment to `create-release.md` action | infrastructure | The automatic release action only creates a draft release on github. Should add to that (or create a different action) to also automate the pypi deployment. | 1.0 | add pypi deployment to `create-release.md` action - The automatic release action only creates a draft release on github. Should add to that (or create a different action) to also automate the pypi deployment. | infrastructure | add pypi deployment to create release md action the automatic release action only creates a draft release on github should add to that or create a different action to also automate the pypi deployment | 1 |
5,088 | 5,434,139,702 | IssuesEvent | 2017-03-05 03:09:41 | SemanticMediaWiki/SemanticMediaWiki | https://api.github.com/repos/SemanticMediaWiki/SemanticMediaWiki | closed | HHVM test failures | infrastructure | relates to the discussion in https://github.com/SemanticMediaWiki/SemanticMediaWiki/issues/1424#issuecomment-196066449
> Starting test 'SMW\Tests\IntlTimeFormatterTest::testFormat with data set #3 ('2/1300/11/02/12/03/25.888499949', 'Y-m-d H:i:s.u', '1300-11-02 12:03:25.888500 JL')'.
> F
> Starting test 'SMW\Tests\IntlTimeFormatterTest::testFormat with data set #4 ('2/1300/11/02/12/03/25.888499949', 'H:i:s.u', '12:03:25.888500')'.
> F
Was reported with https://github.com/facebook/hhvm/issues/6899.
> Fatal error: Cannot access protected property SMWQuantityValue::$m_unitin
Caused by https://github.com/facebook/hhvm/issues/5128.
> Undefined index: json
Was reported with https://github.com/facebook/hhvm/issues/7402 (https://github.com/SemanticMediaWiki/SemanticMediaWiki/commit/9e5182715f8f83926db1c2cbc993702ca8eb4eaf)
| 1.0 | HHVM test failures - relates to the discussion in https://github.com/SemanticMediaWiki/SemanticMediaWiki/issues/1424#issuecomment-196066449
> Starting test 'SMW\Tests\IntlTimeFormatterTest::testFormat with data set #3 ('2/1300/11/02/12/03/25.888499949', 'Y-m-d H:i:s.u', '1300-11-02 12:03:25.888500 JL')'.
> F
> Starting test 'SMW\Tests\IntlTimeFormatterTest::testFormat with data set #4 ('2/1300/11/02/12/03/25.888499949', 'H:i:s.u', '12:03:25.888500')'.
> F
Was reported with https://github.com/facebook/hhvm/issues/6899.
> Fatal error: Cannot access protected property SMWQuantityValue::$m_unitin
Caused by https://github.com/facebook/hhvm/issues/5128.
> Undefined index: json
Was reported with https://github.com/facebook/hhvm/issues/7402 (https://github.com/SemanticMediaWiki/SemanticMediaWiki/commit/9e5182715f8f83926db1c2cbc993702ca8eb4eaf)
| infrastructure | hhvm test failures relates to the discussion in starting test smw tests intltimeformattertest testformat with data set y m d h i s u jl f starting test smw tests intltimeformattertest testformat with data set h i s u f was reported with fatal error cannot access protected property smwquantityvalue m unitin caused by undefined index json was reported with | 1 |
166,779 | 20,725,523,491 | IssuesEvent | 2022-03-14 01:03:55 | BrianMcDonaldWS/genie | https://api.github.com/repos/BrianMcDonaldWS/genie | opened | CVE-2021-37701 (High) detected in tar-2.2.1.tgz | security vulnerability | ## CVE-2021-37701 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>tar-2.2.1.tgz</b></p></summary>
<p>tar for node</p>
<p>Library home page: <a href="https://registry.npmjs.org/tar/-/tar-2.2.1.tgz">https://registry.npmjs.org/tar/-/tar-2.2.1.tgz</a></p>
<p>Path to dependency file: /genie-ui/package.json</p>
<p>Path to vulnerable library: /genie-ui/.gradle/nodejs/node-v8.11.1-linux-x64/lib/node_modules/npm/node_modules/node-gyp/node_modules/tar/package.json</p>
<p>
Dependency Hierarchy:
- webpack-1.15.0.tgz (Root Library)
- watchpack-0.2.9.tgz
- chokidar-1.7.0.tgz
- fsevents-1.1.3.tgz
- node-pre-gyp-0.6.39.tgz
- :x: **tar-2.2.1.tgz** (Vulnerable Library)
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
The npm package "tar" (aka node-tar) before versions 4.4.16, 5.0.8, and 6.1.7 has an arbitrary file creation/overwrite and arbitrary code execution vulnerability. node-tar aims to guarantee that any file whose location would be modified by a symbolic link is not extracted. This is, in part, achieved by ensuring that extracted directories are not symlinks. Additionally, in order to prevent unnecessary stat calls to determine whether a given path is a directory, paths are cached when directories are created. This logic was insufficient when extracting tar files that contained both a directory and a symlink with the same name as the directory, where the symlink and directory names in the archive entry used backslashes as a path separator on posix systems. The cache checking logic used both `\` and `/` characters as path separators, however `\` is a valid filename character on posix systems. By first creating a directory, and then replacing that directory with a symlink, it was thus possible to bypass node-tar symlink checks on directories, essentially allowing an untrusted tar file to symlink into an arbitrary location and subsequently extracting arbitrary files into that location, thus allowing arbitrary file creation and overwrite. Additionally, a similar confusion could arise on case-insensitive filesystems. If a tar archive contained a directory at `FOO`, followed by a symbolic link named `foo`, then on case-insensitive file systems, the creation of the symbolic link would remove the directory from the filesystem, but _not_ from the internal directory cache, as it would not be treated as a cache hit. A subsequent file entry within the `FOO` directory would then be placed in the target of the symbolic link, thinking that the directory had already been created. These issues were addressed in releases 4.4.16, 5.0.8 and 6.1.7. The v3 branch of node-tar has been deprecated and did not receive patches for these issues. 
If you are still using a v3 release we recommend you update to a more recent version of node-tar. If this is not possible, a workaround is available in the referenced GHSA-9r2w-394v-53qc.
<p>Publish Date: 2021-08-31
<p>URL: <a href="https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-37701">CVE-2021-37701</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>8.6</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Changed
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/npm/node-tar/security/advisories/GHSA-9r2w-394v-53qc">https://github.com/npm/node-tar/security/advisories/GHSA-9r2w-394v-53qc</a></p>
<p>Release Date: 2021-08-31</p>
<p>Fix Resolution (tar): 4.4.16</p>
<p>Direct dependency fix Resolution (webpack): 2.0.0-beta</p>
</p>
</details>
<p></p>
***
<!-- REMEDIATE-OPEN-PR-START -->
- [ ] Check this box to open an automated fix PR
<!-- REMEDIATE-OPEN-PR-END -->
<!-- <REMEDIATE>{"isOpenPROnVulnerability":false,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"javascript/Node.js","packageName":"webpack","packageVersion":"1.15.0","packageFilePaths":["/genie-ui/package.json"],"isTransitiveDependency":false,"dependencyTree":"webpack:1.15.0","isMinimumFixVersionAvailable":true,"minimumFixVersion":"2.0.0-beta","isBinary":false}],"baseBranches":[],"vulnerabilityIdentifier":"CVE-2021-37701","vulnerabilityDetails":"The npm package \"tar\" (aka node-tar) before versions 4.4.16, 5.0.8, and 6.1.7 has an arbitrary file creation/overwrite and arbitrary code execution vulnerability. node-tar aims to guarantee that any file whose location would be modified by a symbolic link is not extracted. This is, in part, achieved by ensuring that extracted directories are not symlinks. Additionally, in order to prevent unnecessary stat calls to determine whether a given path is a directory, paths are cached when directories are created. This logic was insufficient when extracting tar files that contained both a directory and a symlink with the same name as the directory, where the symlink and directory names in the archive entry used backslashes as a path separator on posix systems. The cache checking logic used both `\\` and `/` characters as path separators, however `\\` is a valid filename character on posix systems. By first creating a directory, and then replacing that directory with a symlink, it was thus possible to bypass node-tar symlink checks on directories, essentially allowing an untrusted tar file to symlink into an arbitrary location and subsequently extracting arbitrary files into that location, thus allowing arbitrary file creation and overwrite. Additionally, a similar confusion could arise on case-insensitive filesystems. 
If a tar archive contained a directory at `FOO`, followed by a symbolic link named `foo`, then on case-insensitive file systems, the creation of the symbolic link would remove the directory from the filesystem, but _not_ from the internal directory cache, as it would not be treated as a cache hit. A subsequent file entry within the `FOO` directory would then be placed in the target of the symbolic link, thinking that the directory had already been created. These issues were addressed in releases 4.4.16, 5.0.8 and 6.1.7. The v3 branch of node-tar has been deprecated and did not receive patches for these issues. If you are still using a v3 release we recommend you update to a more recent version of node-tar. If this is not possible, a workaround is available in the referenced GHSA-9r2w-394v-53qc.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-37701","cvss3Severity":"high","cvss3Score":"8.6","cvss3Metrics":{"A":"High","AC":"Low","PR":"None","S":"Changed","C":"High","UI":"Required","AV":"Local","I":"High"},"extraData":{}}</REMEDIATE> --> | True | CVE-2021-37701 (High) detected in tar-2.2.1.tgz - ## CVE-2021-37701 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>tar-2.2.1.tgz</b></summary>
<p>tar for node</p>
<p>Library home page: <a href="https://registry.npmjs.org/tar/-/tar-2.2.1.tgz">https://registry.npmjs.org/tar/-/tar-2.2.1.tgz</a></p>
<p>Path to dependency file: /genie-ui/package.json</p>
<p>Path to vulnerable library: /genie-ui/.gradle/nodejs/node-v8.11.1-linux-x64/lib/node_modules/npm/node_modules/node-gyp/node_modules/tar/package.json</p>
<p>
Dependency Hierarchy:
- webpack-1.15.0.tgz (Root Library)
- watchpack-0.2.9.tgz
- chokidar-1.7.0.tgz
- fsevents-1.1.3.tgz
- node-pre-gyp-0.6.39.tgz
- :x: **tar-2.2.1.tgz** (Vulnerable Library)
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
The npm package "tar" (aka node-tar) before versions 4.4.16, 5.0.8, and 6.1.7 has an arbitrary file creation/overwrite and arbitrary code execution vulnerability. node-tar aims to guarantee that any file whose location would be modified by a symbolic link is not extracted. This is, in part, achieved by ensuring that extracted directories are not symlinks. Additionally, in order to prevent unnecessary stat calls to determine whether a given path is a directory, paths are cached when directories are created. This logic was insufficient when extracting tar files that contained both a directory and a symlink with the same name as the directory, where the symlink and directory names in the archive entry used backslashes as a path separator on posix systems. The cache checking logic used both `\` and `/` characters as path separators, however `\` is a valid filename character on posix systems. By first creating a directory, and then replacing that directory with a symlink, it was thus possible to bypass node-tar symlink checks on directories, essentially allowing an untrusted tar file to symlink into an arbitrary location and subsequently extracting arbitrary files into that location, thus allowing arbitrary file creation and overwrite. Additionally, a similar confusion could arise on case-insensitive filesystems. If a tar archive contained a directory at `FOO`, followed by a symbolic link named `foo`, then on case-insensitive file systems, the creation of the symbolic link would remove the directory from the filesystem, but _not_ from the internal directory cache, as it would not be treated as a cache hit. A subsequent file entry within the `FOO` directory would then be placed in the target of the symbolic link, thinking that the directory had already been created. These issues were addressed in releases 4.4.16, 5.0.8 and 6.1.7. The v3 branch of node-tar has been deprecated and did not receive patches for these issues. 
If you are still using a v3 release we recommend you update to a more recent version of node-tar. If this is not possible, a workaround is available in the referenced GHSA-9r2w-394v-53qc.
<p>Publish Date: 2021-08-31
<p>URL: <a href="https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-37701">CVE-2021-37701</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>8.6</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Changed
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/npm/node-tar/security/advisories/GHSA-9r2w-394v-53qc">https://github.com/npm/node-tar/security/advisories/GHSA-9r2w-394v-53qc</a></p>
<p>Release Date: 2021-08-31</p>
<p>Fix Resolution (tar): 4.4.16</p>
<p>Direct dependency fix Resolution (webpack): 2.0.0-beta</p>
</p>
</details>
<p></p>
***
<!-- REMEDIATE-OPEN-PR-START -->
- [ ] Check this box to open an automated fix PR
<!-- REMEDIATE-OPEN-PR-END -->
<!-- <REMEDIATE>{"isOpenPROnVulnerability":false,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"javascript/Node.js","packageName":"webpack","packageVersion":"1.15.0","packageFilePaths":["/genie-ui/package.json"],"isTransitiveDependency":false,"dependencyTree":"webpack:1.15.0","isMinimumFixVersionAvailable":true,"minimumFixVersion":"2.0.0-beta","isBinary":false}],"baseBranches":[],"vulnerabilityIdentifier":"CVE-2021-37701","vulnerabilityDetails":"The npm package \"tar\" (aka node-tar) before versions 4.4.16, 5.0.8, and 6.1.7 has an arbitrary file creation/overwrite and arbitrary code execution vulnerability. node-tar aims to guarantee that any file whose location would be modified by a symbolic link is not extracted. This is, in part, achieved by ensuring that extracted directories are not symlinks. Additionally, in order to prevent unnecessary stat calls to determine whether a given path is a directory, paths are cached when directories are created. This logic was insufficient when extracting tar files that contained both a directory and a symlink with the same name as the directory, where the symlink and directory names in the archive entry used backslashes as a path separator on posix systems. The cache checking logic used both `\\` and `/` characters as path separators, however `\\` is a valid filename character on posix systems. By first creating a directory, and then replacing that directory with a symlink, it was thus possible to bypass node-tar symlink checks on directories, essentially allowing an untrusted tar file to symlink into an arbitrary location and subsequently extracting arbitrary files into that location, thus allowing arbitrary file creation and overwrite. Additionally, a similar confusion could arise on case-insensitive filesystems. 
If a tar archive contained a directory at `FOO`, followed by a symbolic link named `foo`, then on case-insensitive file systems, the creation of the symbolic link would remove the directory from the filesystem, but _not_ from the internal directory cache, as it would not be treated as a cache hit. A subsequent file entry within the `FOO` directory would then be placed in the target of the symbolic link, thinking that the directory had already been created. These issues were addressed in releases 4.4.16, 5.0.8 and 6.1.7. The v3 branch of node-tar has been deprecated and did not receive patches for these issues. If you are still using a v3 release we recommend you update to a more recent version of node-tar. If this is not possible, a workaround is available in the referenced GHSA-9r2w-394v-53qc.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-37701","cvss3Severity":"high","cvss3Score":"8.6","cvss3Metrics":{"A":"High","AC":"Low","PR":"None","S":"Changed","C":"High","UI":"Required","AV":"Local","I":"High"},"extraData":{}}</REMEDIATE> --> | non_infrastructure | cve high detected in tar tgz cve high severity vulnerability vulnerable library tar tgz tar for node library home page a href path to dependency file genie ui package json path to vulnerable library genie ui gradle nodejs node linux lib node modules npm node modules node gyp node modules tar package json dependency hierarchy webpack tgz root library watchpack tgz chokidar tgz fsevents tgz node pre gyp tgz x tar tgz vulnerable library vulnerability details the npm package tar aka node tar before versions and has an arbitrary file creation overwrite and arbitrary code execution vulnerability node tar aims to guarantee that any file whose location would be modified by a symbolic link is not extracted this is in part achieved by ensuring that extracted directories are not symlinks additionally in order to prevent unnecessary stat calls to determine whether a given path is a directory 
paths are cached when directories are created this logic was insufficient when extracting tar files that contained both a directory and a symlink with the same name as the directory where the symlink and directory names in the archive entry used backslashes as a path separator on posix systems the cache checking logic used both and characters as path separators however is a valid filename character on posix systems by first creating a directory and then replacing that directory with a symlink it was thus possible to bypass node tar symlink checks on directories essentially allowing an untrusted tar file to symlink into an arbitrary location and subsequently extracting arbitrary files into that location thus allowing arbitrary file creation and overwrite additionally a similar confusion could arise on case insensitive filesystems if a tar archive contained a directory at foo followed by a symbolic link named foo then on case insensitive file systems the creation of the symbolic link would remove the directory from the filesystem but not from the internal directory cache as it would not be treated as a cache hit a subsequent file entry within the foo directory would then be placed in the target of the symbolic link thinking that the directory had already been created these issues were addressed in releases and the branch of node tar has been deprecated and did not receive patches for these issues if you are still using a release we recommend you update to a more recent version of node tar if this is not possible a workaround is available in the referenced ghsa publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity low privileges required none user interaction required scope changed impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution tar direct 
dependency fix resolution webpack beta check this box to open an automated fix pr isopenpronvulnerability false ispackagebased true isdefaultbranch true packages istransitivedependency false dependencytree webpack isminimumfixversionavailable true minimumfixversion beta isbinary false basebranches vulnerabilityidentifier cve vulnerabilitydetails the npm package tar aka node tar before versions and has an arbitrary file creation overwrite and arbitrary code execution vulnerability node tar aims to guarantee that any file whose location would be modified by a symbolic link is not extracted this is in part achieved by ensuring that extracted directories are not symlinks additionally in order to prevent unnecessary stat calls to determine whether a given path is a directory paths are cached when directories are created this logic was insufficient when extracting tar files that contained both a directory and a symlink with the same name as the directory where the symlink and directory names in the archive entry used backslashes as a path separator on posix systems the cache checking logic used both and characters as path separators however is a valid filename character on posix systems by first creating a directory and then replacing that directory with a symlink it was thus possible to bypass node tar symlink checks on directories essentially allowing an untrusted tar file to symlink into an arbitrary location and subsequently extracting arbitrary files into that location thus allowing arbitrary file creation and overwrite additionally a similar confusion could arise on case insensitive filesystems if a tar archive contained a directory at foo followed by a symbolic link named foo then on case insensitive file systems the creation of the symbolic link would remove the directory from the filesystem but not from the internal directory cache as it would not be treated as a cache hit a subsequent file entry within the foo directory would then be placed in the target of the 
symbolic link thinking that the directory had already been created these issues were addressed in releases and the branch of node tar has been deprecated and did not receive patches for these issues if you are still using a release we recommend you update to a more recent version of node tar if this is not possible a workaround is available in the referenced ghsa vulnerabilityurl | 0 |
155,869 | 24,533,020,332 | IssuesEvent | 2022-10-11 18:07:08 | department-of-veterans-affairs/abd-vro | https://api.github.com/repos/department-of-veterans-affairs/abd-vro | closed | Write research plan for testing iMVP PDF design | Design research | ## Context
We want to do another round of usability testing and refinement of the iMVP design before engineering starts building it. We also want to start setting up qualitative and quantitative metrics for evaluating the success of new iterations to the PDF designs.
## Related issues
- See other issues in the `Design iMVP` milestone
## Resources
- [Prioritization Workshop (Mural) ](https://app.mural.co/t/nava4113/m/nava4113/1664474258124/97c7b5785713700317d4d213e12d3f60cfc0c542?sender=carolyn1198)
## Resources
| 1.0 | Write research plan for testing iMVP PDF design - ## Context
We want to do another round of usability testing and refinement of the iMVP design before engineering starts building it. We also want to start setting up qualitative and quantitative metrics for evaluating the success of new iterations to the PDF designs.
## Related issues
- See other issues in the `Design iMVP` milestone
## Resources
- [Prioritization Workshop (Mural) ](https://app.mural.co/t/nava4113/m/nava4113/1664474258124/97c7b5785713700317d4d213e12d3f60cfc0c542?sender=carolyn1198)
## Resources
| non_infrastructure | write research plan for testing imvp pdf design context we want to do another round of usability testing and refinement of the imvp design before engineering starts building it we also want to start setting up qualitative and quantitative metrics for evaluating the success of new iterations to the pdf designs related issues see other issues in the design imvp milestone resources resources | 0 |
31,716 | 26,034,794,186 | IssuesEvent | 2022-12-22 03:00:35 | phpmyadmin/phpmyadmin | https://api.github.com/repos/phpmyadmin/phpmyadmin | closed | Uncaught TypeError: can't access property "update", window.Navigation is undefined | bug infrastructure affects/5.3 confirmed/5.3 | ### Describe the bug
Uncaught TypeError: can't access property "update", window.Navigation is undefined
### To Reproduce
Steps to reproduce the behavior:
1. Go to [https://demo.phpmyadmin.net/master/](https://demo.phpmyadmin.net/master/)
2. See error
### Expected behavior
No error should happen.
### Server configuration
- phpMyAdmin version: master | 1.0 | Uncaught TypeError: can't access property "update", window.Navigation is undefined - ### Describe the bug
Uncaught TypeError: can't access property "update", window.Navigation is undefined
### To Reproduce
Steps to reproduce the behavior:
1. Go to [https://demo.phpmyadmin.net/master/](https://demo.phpmyadmin.net/master/)
2. See error
### Expected behavior
No error should happen.
### Server configuration
- phpMyAdmin version: master | infrastructure | uncaught typeerror can t access property update window navigation is undefined describe the bug uncaught typeerror can t access property update window navigation is undefined to reproduce steps to reproduce the behavior go to see error expected behavior no error should happen server configuration phpmyadmin version master | 1 |
60,989 | 25,335,837,017 | IssuesEvent | 2022-11-18 16:45:21 | hashicorp/terraform-provider-google | https://api.github.com/repos/hashicorp/terraform-provider-google | closed | Add retries for operations to account for intermittent network issues | enhancement size/s crosslinked service/bigtable | <!--- Please leave this line, it helps our automation: [issue-type:enhancement] --->
<!--- Please keep this note for the community --->
### Community Note
* Please vote on this issue by adding a 👍 [reaction](https://blog.github.com/2016-03-10-add-reactions-to-pull-requests-issues-and-comments/) to the original issue to help the community and maintainers prioritize this request
* Please do not leave "+1" or "me too" comments, they generate extra noise for issue followers and do not help prioritize the request
* If you are interested in working on this issue or have submitted a pull request, please leave a comment. If the issue is assigned to the "modular-magician" user, it is either in the process of being autogenerated, or is planned to be autogenerated soon. If the issue is assigned to a user, that user is claiming responsibility for the issue. If the issue is assigned to "hashibot", a community member has claimed the issue already.
<!--- Thank you for keeping this note for the community --->
### Description
<!--- Please leave a helpful description of the feature request here. Including use cases and why it would help you is a great way to convince maintainers to spend time on it. --->
Currently resource for instance creation or table creation for Bigtable resources do not seem to rely on retrying requests to account for intermittent network failures.
Consider a Terraform task to set up a Bigtable instance with column family configuration. This results in multiple requests, and if additional requests fail, it results in complete task failure. Moreover, completing the task then involves deleting the tables manually, as the task fails because the table already exists.
- https://github.com/terraform-providers/terraform-provider-google/blob/master/google/resource_bigtable_instance.go
### New or Affected Resource(s)
<!--- Please list the new or affected resources and data sources. --->
* google_bigtable_table
### Potential Terraform Configuration
<!--- Information about code formatting: https://help.github.com/articles/basic-writing-and-formatting-syntax/#quoting-code --->
```tf
resource google_bigtable_table "table" {
name = replace(var.name, "some-instance", "")
project = var.project
instance_name = var.instance_name
split_keys = var.split_keys
dynamic column_family {
for_each = var.column_families
content {
family = column_family.value
}
}
}
```
### References
<!---
Information about referencing Github Issues: https://help.github.com/articles/basic-writing-and-formatting-syntax/#referencing-issues-and-pull-requests
Are there any other GitHub issues (open or closed) or pull requests that should be linked here? Vendor blog posts or documentation?
--->
| 1.0 | Add retries for operations to account for intermittent network issues - <!--- Please leave this line, it helps our automation: [issue-type:enhancement] --->
<!--- Please keep this note for the community --->
### Community Note
* Please vote on this issue by adding a 👍 [reaction](https://blog.github.com/2016-03-10-add-reactions-to-pull-requests-issues-and-comments/) to the original issue to help the community and maintainers prioritize this request
* Please do not leave "+1" or "me too" comments, they generate extra noise for issue followers and do not help prioritize the request
* If you are interested in working on this issue or have submitted a pull request, please leave a comment. If the issue is assigned to the "modular-magician" user, it is either in the process of being autogenerated, or is planned to be autogenerated soon. If the issue is assigned to a user, that user is claiming responsibility for the issue. If the issue is assigned to "hashibot", a community member has claimed the issue already.
<!--- Thank you for keeping this note for the community --->
### Description
<!--- Please leave a helpful description of the feature request here. Including use cases and why it would help you is a great way to convince maintainers to spend time on it. --->
Currently resource for instance creation or table creation for Bigtable resources do not seem to rely on retrying requests to account for intermittent network failures.
Consider a Terraform task to set up a Bigtable instance with column family configuration. This results in multiple requests, and if additional requests fail, it results in complete task failure. Moreover, completing the task then involves deleting the tables manually, as the task fails because the table already exists.
- https://github.com/terraform-providers/terraform-provider-google/blob/master/google/resource_bigtable_instance.go
### New or Affected Resource(s)
<!--- Please list the new or affected resources and data sources. --->
* google_bigtable_table
### Potential Terraform Configuration
<!--- Information about code formatting: https://help.github.com/articles/basic-writing-and-formatting-syntax/#quoting-code --->
```tf
resource google_bigtable_table "table" {
name = replace(var.name, "some-instance", "")
project = var.project
instance_name = var.instance_name
split_keys = var.split_keys
dynamic column_family {
for_each = var.column_families
content {
family = column_family.value
}
}
}
```
### References
<!---
Information about referencing Github Issues: https://help.github.com/articles/basic-writing-and-formatting-syntax/#referencing-issues-and-pull-requests
Are there any other GitHub issues (open or closed) or pull requests that should be linked here? Vendor blog posts or documentation?
--->
| non_infrastructure | add retries for operations to account for intermittent network issues community note please vote on this issue by adding a 👍 to the original issue to help the community and maintainers prioritize this request please do not leave or me too comments they generate extra noise for issue followers and do not help prioritize the request if you are interested in working on this issue or have submitted a pull request please leave a comment if the issue is assigned to the modular magician user it is either in the process of being autogenerated or is planned to be autogenerated soon if the issue is assigned to a user that user is claiming responsibility for the issue if the issue is assigned to hashibot a community member has claimed the issue already description currently resource for instance creation or table creation for bigtable resources do not seem to rely on retrying requests to account for intermittent network failures consider a terraform task to setup a bigtable instance with column family configuration this results in multiple requests and if additional requests fail it results in complete task failure moreover to complete the task involves having to delete the tables manually as task fails due to since the table already exists new or affected resource s google bigtable table potential terraform configuration tf resource google bigtable table table name replace var name some instance project var project instance name var instance name split keys var split keys dynamic column family for each var column families content family column family value references information about referencing github issues are there any other github issues open or closed or pull requests that should be linked here vendor blog posts or documentation | 0 |
59,488 | 24,795,099,857 | IssuesEvent | 2022-10-24 16:33:39 | hashicorp/terraform-provider-awscc | https://api.github.com/repos/hashicorp/terraform-provider-awscc | closed | [awscc v0.34.0] awscc_networkmanager_core_network produces invalid json | upstream-plugin-framework service/networkmanager | ### Community Note
* Please vote on this issue by adding a 👍 [reaction](https://blog.github.com/2016-03-10-add-reactions-to-pull-requests-issues-and-comments/) to the original issue to help the community and maintainers prioritize this request
* Please do not leave "+1" or other comments that do not add relevant new information or questions, they generate extra noise for issue followers and do not help prioritize the request
* If you are interested in working on this issue or have submitted a pull request, please leave a comment
* The resources and data sources in this provider are generated from the CloudFormation schema, so they can only support the actions that the underlying schema supports. For this reason submitted bugs should be limited to defects in the generation and runtime code of the provider. Customizing behavior of the resource, or noting a gap in behavior are not valid bugs and should be submitted as enhancements to AWS via the CloudFormation Open Coverage Roadmap.
### Terraform CLI and Terraform AWS Cloud Control Provider Version
Terraform v1.3.2
on linux_amd64
* provider registry.terraform.io/hashicorp/aws v4.32.0
* provider registry.terraform.io/hashicorp/awscc v0.34.0
### Affected Resource(s)
* awscc_networkmanager_core_network
### Terraform Configuration Files
```hcl
resource "awscc_networkmanager_global_network" "cn" {
description = "Global Network"
}
resource "awscc_networkmanager_core_network" "cn" {
description = "Core Network"
global_network_id = "NetworkID"
policy_document = jsonencode({"hello"="world"})
}
```
### Debug Output
https://gist.github.com/zhujik/e4985d91dd915f1bca3e3db6a5102162
### Panic Output
### Expected Behavior
```
Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following symbols:
+ create
Terraform will perform the following actions:
# awscc_networkmanager_core_network.cn will be created
+ resource "awscc_networkmanager_core_network" "cn" {
+ core_network_arn = (known after apply)
+ core_network_id = (known after apply)
+ created_at = (known after apply)
+ description = "Core Network"
+ edges = [
] -> (known after apply)
+ global_network_id = "NetworkID"
+ id = (known after apply)
+ owner_account = (known after apply)
+ policy_document = jsonencode(
{
+ hello = "world"
}
)
+ segments = [
] -> (known after apply)
+ state = (known after apply)
+ tags = [
] -> (known after apply)
}
# awscc_networkmanager_global_network.cn will be created
+ resource "awscc_networkmanager_global_network" "cn" {
+ arn = (known after apply)
+ description = "Global Network"
+ id = (known after apply)
+ tags = [
] -> (known after apply)
}
Plan: 2 to add, 0 to change, 0 to destroy.
```
### Actual Behavior
```
╷
│ Error: Invalid JSON string
│
│ with awscc_networkmanager_core_network.cn,
│ on main.tf line 5, in resource "awscc_networkmanager_core_network" "cn":
│ 5: resource "awscc_networkmanager_core_network" "cn" {
│
│ unable to unmarshal JSON: unexpected end of JSON input
```
### Steps to Reproduce
using awscc 0.34.0:
1. `terraform plan`
this does *not happen* in awscc 0.33.0!
### Important Factoids
### References
| 1.0 | [awscc v0.34.0] awscc_networkmanager_core_network produces invalid json - ### Community Note
* Please vote on this issue by adding a 👍 [reaction](https://blog.github.com/2016-03-10-add-reactions-to-pull-requests-issues-and-comments/) to the original issue to help the community and maintainers prioritize this request
* Please do not leave "+1" or other comments that do not add relevant new information or questions, they generate extra noise for issue followers and do not help prioritize the request
* If you are interested in working on this issue or have submitted a pull request, please leave a comment
* The resources and data sources in this provider are generated from the CloudFormation schema, so they can only support the actions that the underlying schema supports. For this reason submitted bugs should be limited to defects in the generation and runtime code of the provider. Customizing behavior of the resource, or noting a gap in behavior are not valid bugs and should be submitted as enhancements to AWS via the CloudFormation Open Coverage Roadmap.
### Terraform CLI and Terraform AWS Cloud Control Provider Version
Terraform v1.3.2
on linux_amd64
* provider registry.terraform.io/hashicorp/aws v4.32.0
* provider registry.terraform.io/hashicorp/awscc v0.34.0
### Affected Resource(s)
* awscc_networkmanager_core_network
### Terraform Configuration Files
```hcl
resource "awscc_networkmanager_global_network" "cn" {
description = "Global Network"
}
resource "awscc_networkmanager_core_network" "cn" {
description = "Core Network"
global_network_id = "NetworkID"
policy_document = jsonencode({"hello"="world"})
}
```
### Debug Output
https://gist.github.com/zhujik/e4985d91dd915f1bca3e3db6a5102162
### Panic Output
### Expected Behavior
```
Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following symbols:
+ create
Terraform will perform the following actions:
# awscc_networkmanager_core_network.cn will be created
+ resource "awscc_networkmanager_core_network" "cn" {
+ core_network_arn = (known after apply)
+ core_network_id = (known after apply)
+ created_at = (known after apply)
+ description = "Core Network"
+ edges = [
] -> (known after apply)
+ global_network_id = "NetworkID"
+ id = (known after apply)
+ owner_account = (known after apply)
+ policy_document = jsonencode(
{
+ hello = "world"
}
)
+ segments = [
] -> (known after apply)
+ state = (known after apply)
+ tags = [
] -> (known after apply)
}
# awscc_networkmanager_global_network.cn will be created
+ resource "awscc_networkmanager_global_network" "cn" {
+ arn = (known after apply)
+ description = "Global Network"
+ id = (known after apply)
+ tags = [
] -> (known after apply)
}
Plan: 2 to add, 0 to change, 0 to destroy.
```
### Actual Behavior
```
╷
│ Error: Invalid JSON string
│
│ with awscc_networkmanager_core_network.cn,
│ on main.tf line 5, in resource "awscc_networkmanager_core_network" "cn":
│ 5: resource "awscc_networkmanager_core_network" "cn" {
│
│ unable to unmarshal JSON: unexpected end of JSON input
```
### Steps to Reproduce
using awscc 0.34.0:
1. `terraform plan`
this does *not happen* in awscc 0.33.0!
### Important Factoids
### References
| non_infrastructure | awscc networkmanager core network produces invalid json community note please vote on this issue by adding a 👍 to the original issue to help the community and maintainers prioritize this request please do not leave or other comments that do not add relevant new information or questions they generate extra noise for issue followers and do not help prioritize the request if you are interested in working on this issue or have submitted a pull request please leave a comment the resources and data sources in this provider are generated from the cloudformation schema so they can only support the actions that the underlying schema supports for this reason submitted bugs should be limited to defects in the generation and runtime code of the provider customizing behavior of the resource or noting a gap in behavior are not valid bugs and should be submitted as enhancements to aws via the cloudformation open coverage roadmap terraform cli and terraform aws cloud control provider version terraform on linux provider registry terraform io hashicorp aws provider registry terraform io hashicorp awscc affected resource s awscc networkmanager core network terraform configuration files hcl resource awscc networkmanager global network cn description global network resource awscc networkmanager core network cn description core network global network id networkid policy document jsonencode hello world debug output panic output expected behavior terraform used the selected providers to generate the following execution plan resource actions are indicated with the following symbols create terraform will perform the following actions awscc networkmanager core network cn will be created resource awscc networkmanager core network cn core network arn known after apply core network id known after apply created at known after apply description core network edges known after apply global network id networkid id known after apply owner account known after apply policy 
document jsonencode hello world segments known after apply state known after apply tags known after apply awscc networkmanager global network cn will be created resource awscc networkmanager global network cn arn known after apply description global network id known after apply tags known after apply plan to add to change to destroy actual behavior ╷ │ error invalid json string │ │ with awscc networkmanager core network cn │ on main tf line in resource awscc networkmanager core network cn │ resource awscc networkmanager core network cn │ │ unable to unmarshal json unexpected end of json input steps to reproduce using awscc terraform plan this does not happen in awscc important factoids references | 0 |
619,120 | 19,516,892,137 | IssuesEvent | 2021-12-29 11:48:20 | ClassicLootManager/ClassicLootManager | https://api.github.com/repos/ClassicLootManager/ClassicLootManager | closed | Anonymous English Auction support | enhancement feature Priority::Low | Implement it through anonymous raid announcements but not sending bid info to others. | 1.0 | Anonymous English Auction support - Implement it through anonymous raid announcements but not sending bid info to others. | non_infrastructure | anonymous english auction support implement it through anonymous raid announcements but not sending bid info to others | 0 |
80,254 | 15,374,934,558 | IssuesEvent | 2021-03-02 14:23:50 | AUSoftAndreas/tetris | https://api.github.com/repos/AUSoftAndreas/tetris | closed | [CODE] Create /lib/services/constants.dart | Code DiffEasy ProMid | The class should be abstract because it cannot create a real object
It contains only constants that we need somewhere
At the moment I can think of:
numRows = number of rows = 20
numCols = number of columns = 10
Research + a linter will suffice | 1.0 | [CODE] Create /lib/services/constants.dart - The class should be abstract because it cannot create a real object
It contains only constants that we need somewhere
At the moment I can think of:
numRows = number of rows = 20
numCols = number of columns = 10
Research + a linter will suffice | non_infrastructure | create lib services constants dart the class should be abstract because it cannot create a real object it contains only constants that we need somewhere at the moment i can think of numrows number of rows numcols number of columns research a linter will suffice | 0 |
4,610 | 16,995,343,567 | IssuesEvent | 2021-07-01 05:21:47 | baloise/gitopscli | https://api.github.com/repos/baloise/gitopscli | closed | Fix CI | automation bug | Travis does not seem to work anymore. See for example #156 which did not trigger any build.
Maybe we could switch to GitHub actions. | 1.0 | Fix CI - Travis does not seem to work anymore. See for example #156 which did not trigger any build.
Maybe we could switch to GitHub actions. | non_infrastructure | fix ci travis does not seem to work anymore see for example which did not trigger any build maybe we could switch to github actions | 0 |
246,045 | 18,819,364,421 | IssuesEvent | 2021-11-10 05:49:57 | adobe/spectrum-css | https://api.github.com/repos/adobe/spectrum-css | closed | Migrations tables do not use t-shirt sized class. | documentation | ## Description

## Link to documentation
https://opensource.adobe.com/spectrum-css/button-warning.html#migrationguide
## Additional context
Seems like it just needs `.spectrum-Table--sizeM`...

| 1.0 | Migrations tables do not use t-shirt sized class. - ## Description

## Link to documentation
https://opensource.adobe.com/spectrum-css/button-warning.html#migrationguide
## Additional context
Seems like it just needs `.spectrum-Table--sizeM`...

| non_infrastructure | migrations tables do not use t shirt sized class description link to documentation additional context seems like it just needs spectrum table sizem | 0 |
344,157 | 24,799,916,225 | IssuesEvent | 2022-10-24 20:41:30 | scaleway/docs-content | https://api.github.com/repos/scaleway/docs-content | opened | 👩💻 Documentation Request: Add docs on how to connect to an M1 Mac from a Linux OS | Documentation Request | ### Summary
I tried connecting with Vinagre and Remmina but wasn't able to connect with either of those clients. I finally managed to connect using VNC Viewer by Real VNC. It would be great if you could add docs on how to connect using Vinagre, Remmina or some other FOSS VNC client.
### Why is it needed?
Not all VNC clients seem to work with the documentation provided.
### Want to write this documentation yourself?
No
### Related PR(s)
_No response_
### Scaleway Organization ID
_No response_
### Email address
_No response_ | 1.0 | 👩💻 Documentation Request: Add docs on how to connect to an M1 Mac from a Linux OS - ### Summary
I tried connecting with Vinagre and Remmina but wasn't able to connect with either of those clients. I finally managed to connect using VNC Viewer by Real VNC. It would be great if you could add docs on how to connect using Vinagre, Remmina or some other FOSS VNC client.
### Why is it needed?
Not all VNC clients seem to work with the documentation provided.
### Want to write this documentation yourself?
No
### Related PR(s)
_No response_
### Scaleway Organization ID
_No response_
### Email address
_No response_ | non_infrastructure | 👩💻 documentation request add docs on how to connect to an mac from a linux os summary i tried connecting with vinagre and remmina but wasn t able to connect with either of those clients i finally managed to connect using vnc viewer by real vnc it would be great if you could add docs on how to connect using vinagre remmina or some other foss vnc client why is it needed not all vnc clients seem to work with the documentation provided want to write this documentation yourself no related pr s no response scaleway organization id no response email address no response | 0 |
19,231 | 13,207,782,152 | IssuesEvent | 2020-08-15 00:35:17 | dotnet/aspnetcore | https://api.github.com/repos/dotnet/aspnetcore | closed | Add a .vsconfig file to the repo root | area-infrastructure | See https://devblogs.microsoft.com/setup/configure-visual-studio-across-your-organization-with-vsconfig/
Work required:
- [ ] convert https://github.com/aspnet/AspNetCore/blob/master/eng/scripts/vs.json into the proper format and ensure InstallVisualStudio.ps1 still works.
- [ ] review the list of required workloads and see if we can remove any. Candidates:
* Microsoft.VisualStudio.Workload.VisualStudioExtension,
* Microsoft.VisualStudio.Component.Azure.Storage.Emulator
* all of the old Microsoft.Net.Component.(version).TargetingPack workloads | 1.0 | Add a .vsconfig file to the repo root - See https://devblogs.microsoft.com/setup/configure-visual-studio-across-your-organization-with-vsconfig/
Work required:
- [ ] convert https://github.com/aspnet/AspNetCore/blob/master/eng/scripts/vs.json into the proper format and ensure InstallVisualStudio.ps1 still works.
- [ ] review the list of required workloads and see if we can remove any. Candidates:
* Microsoft.VisualStudio.Workload.VisualStudioExtension,
* Microsoft.VisualStudio.Component.Azure.Storage.Emulator
* all of the old Microsoft.Net.Component.(version).TargetingPack workloads | infrastructure | add a vsconfig file to the repo root see work required convert into the proper format and ensure installvisualstudio still works review the list of required workloads and see if we can remove any candidates microsoft visualstudio workload visualstudioextension microsoft visualstudio component azure storage emulator all of the old microsoft net component version targetingpack workloads | 1 |
320,838 | 9,789,896,130 | IssuesEvent | 2019-06-10 11:07:40 | telerik/kendo-ui-core | https://api.github.com/repos/telerik/kendo-ui-core | opened | Correct Signalr service for Scheduler demos | Bug C: Scheduler Demo Kendo2 Priority 2 SEV: Low | The SignalR service for the Scheduler utilizes GUID instead of int for IDs which is incorrect and causes issues with creating recurrence exceptions. | 1.0 | Correct Signalr service for Scheduler demos - The SignalR service for the Scheduler utilizes GUID instead of int for IDs which is incorrect and causes issues with creating recurrence exceptions. | non_infrastructure | correct signalr service for scheduler demos the signalr service for the scheduler utilizes guid instead of int for ids which is incorrect and causes issues with creating recurrence exceptions | 0 |
7,125 | 6,777,816,156 | IssuesEvent | 2017-10-28 01:25:21 | archco/wise-quotes | https://api.github.com/repos/archco/wise-quotes | closed | Add CLI command "db:refresh" | database infrastructure | Add a CLI so that the DB structure can be changed and modified easily.
```sh
wise-quotes db:refresh --feed=backup.json
``` | 1.0 | Add CLI command "db:refresh" - Add a CLI so that the DB structure can be changed and modified easily.
```sh
wise-quotes db:refresh --feed=backup.json
``` | infrastructure | add cli command db refresh add a cli so that the db structure can be changed and modified easily sh wise quotes db refresh feed backup json | 1 |
168,953 | 20,827,977,099 | IssuesEvent | 2022-03-19 01:10:32 | samq-ghdemo/wixplosives-sample-monorepo | https://api.github.com/repos/samq-ghdemo/wixplosives-sample-monorepo | opened | CVE-2021-44906 (Medium) detected in minimist-1.2.5.tgz | security vulnerability | ## CVE-2021-44906 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>minimist-1.2.5.tgz</b></p></summary>
<p>parse argument options</p>
<p>Library home page: <a href="https://registry.npmjs.org/minimist/-/minimist-1.2.5.tgz">https://registry.npmjs.org/minimist/-/minimist-1.2.5.tgz</a></p>
<p>Path to dependency file: /package.json</p>
<p>Path to vulnerable library: /node_modules/minimist/package.json</p>
<p>
Dependency Hierarchy:
- webpack-dev-server-4.6.0.tgz (Root Library)
- portfinder-1.0.28.tgz
- mkdirp-0.5.5.tgz
- :x: **minimist-1.2.5.tgz** (Vulnerable Library)
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Minimist <=1.2.5 is vulnerable to Prototype Pollution via file index.js, function setKey() (lines 69-95).
<p>Publish Date: 2022-03-17
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-44906>CVE-2021-44906</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://nvd.nist.gov/vuln/detail/CVE-2021-44906">https://nvd.nist.gov/vuln/detail/CVE-2021-44906</a></p>
<p>Release Date: 2022-03-17</p>
<p>Fix Resolution: BumperLane.Public.Service.Contracts - 0.23.35.214-prerelease;cloudscribe.templates - 5.2.0;Virteom.Tenant.Mobile.Bluetooth - 0.21.29.159-prerelease;ShowingVault.DotNet.Sdk - 0.13.41.190-prerelease;Envisia.DotNet.Templates - 3.0.1;Yarnpkg.Yarn - 0.26.1;Virteom.Tenant.Mobile.Framework.UWP - 0.20.41.103-prerelease;Virteom.Tenant.Mobile.Framework.iOS - 0.20.41.103-prerelease;BumperLane.Public.Api.V2.ClientModule - 0.23.35.214-prerelease;VueJS.NetCore - 1.1.1;Dianoga - 4.0.0,3.0.0-RC02;Virteom.Tenant.Mobile.Bluetooth.iOS - 0.20.41.103-prerelease;Virteom.Public.Utilities - 0.23.37.212-prerelease;Indianadavy.VueJsWebAPITemplate.CSharp - 1.0.1;NorDroN.AngularTemplate - 0.1.6;Virteom.Tenant.Mobile.Framework - 0.21.29.159-prerelease;Virteom.Tenant.Mobile.Bluetooth.Android - 0.20.41.103-prerelease;z4a-dotnet-scaffold - 1.0.0.2;Raml.Parser - 1.0.7;CoreVueWebTest - 3.0.101;dotnetng.template - 1.0.0.4;SitecoreMaster.TrueDynamicPlaceholders - 1.0.3;Virteom.Tenant.Mobile.Framework.Android - 0.20.41.103-prerelease;Fable.Template.Elmish.React - 0.1.6;BlazorPolyfill.Build - 6.0.100.2;Fable.Snowpack.Template - 2.1.0;BumperLane.Public.Api.Client - 0.23.35.214-prerelease;Yarn.MSBuild - 0.22.0,0.24.6;Blazor.TailwindCSS.BUnit - 1.0.2;Bridge.AWS - 0.3.30.36;tslint - 5.6.0;SAFE.Template - 3.0.1;GR.PageRender.Razor - 1.8.0;MIDIator.WebClient - 1.0.105</p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":false,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"javascript/Node.js","packageName":"minimist","packageVersion":"1.2.5","packageFilePaths":["/package.json"],"isTransitiveDependency":true,"dependencyTree":"webpack-dev-server:4.6.0;portfinder:1.0.28;mkdirp:0.5.5;minimist:1.2.5","isMinimumFixVersionAvailable":true,"minimumFixVersion":"BumperLane.Public.Service.Contracts - 0.23.35.214-prerelease;cloudscribe.templates - 5.2.0;Virteom.Tenant.Mobile.Bluetooth - 0.21.29.159-prerelease;ShowingVault.DotNet.Sdk - 0.13.41.190-prerelease;Envisia.DotNet.Templates - 3.0.1;Yarnpkg.Yarn - 0.26.1;Virteom.Tenant.Mobile.Framework.UWP - 0.20.41.103-prerelease;Virteom.Tenant.Mobile.Framework.iOS - 0.20.41.103-prerelease;BumperLane.Public.Api.V2.ClientModule - 0.23.35.214-prerelease;VueJS.NetCore - 1.1.1;Dianoga - 4.0.0,3.0.0-RC02;Virteom.Tenant.Mobile.Bluetooth.iOS - 0.20.41.103-prerelease;Virteom.Public.Utilities - 0.23.37.212-prerelease;Indianadavy.VueJsWebAPITemplate.CSharp - 1.0.1;NorDroN.AngularTemplate - 0.1.6;Virteom.Tenant.Mobile.Framework - 0.21.29.159-prerelease;Virteom.Tenant.Mobile.Bluetooth.Android - 0.20.41.103-prerelease;z4a-dotnet-scaffold - 1.0.0.2;Raml.Parser - 1.0.7;CoreVueWebTest - 3.0.101;dotnetng.template - 1.0.0.4;SitecoreMaster.TrueDynamicPlaceholders - 1.0.3;Virteom.Tenant.Mobile.Framework.Android - 0.20.41.103-prerelease;Fable.Template.Elmish.React - 0.1.6;BlazorPolyfill.Build - 6.0.100.2;Fable.Snowpack.Template - 2.1.0;BumperLane.Public.Api.Client - 0.23.35.214-prerelease;Yarn.MSBuild - 0.22.0,0.24.6;Blazor.TailwindCSS.BUnit - 1.0.2;Bridge.AWS - 0.3.30.36;tslint - 5.6.0;SAFE.Template - 3.0.1;GR.PageRender.Razor - 1.8.0;MIDIator.WebClient - 1.0.105","isBinary":false}],"baseBranches":["master"],"vulnerabilityIdentifier":"CVE-2021-44906","vulnerabilityDetails":"Minimist \u003c\u003d1.2.5 is vulnerable to Prototype Pollution via file index.js, function setKey() (lines 
69-95).","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-44906","cvss3Severity":"medium","cvss3Score":"5.5","cvss3Metrics":{"A":"High","AC":"Low","PR":"None","S":"Unchanged","C":"None","UI":"Required","AV":"Local","I":"None"},"extraData":{}}</REMEDIATE> --> | True | CVE-2021-44906 (Medium) detected in minimist-1.2.5.tgz - ## CVE-2021-44906 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>minimist-1.2.5.tgz</b></p></summary>
<p>parse argument options</p>
<p>Library home page: <a href="https://registry.npmjs.org/minimist/-/minimist-1.2.5.tgz">https://registry.npmjs.org/minimist/-/minimist-1.2.5.tgz</a></p>
<p>Path to dependency file: /package.json</p>
<p>Path to vulnerable library: /node_modules/minimist/package.json</p>
<p>
Dependency Hierarchy:
- webpack-dev-server-4.6.0.tgz (Root Library)
- portfinder-1.0.28.tgz
- mkdirp-0.5.5.tgz
- :x: **minimist-1.2.5.tgz** (Vulnerable Library)
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Minimist <=1.2.5 is vulnerable to Prototype Pollution via file index.js, function setKey() (lines 69-95).
<p>Publish Date: 2022-03-17
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-44906>CVE-2021-44906</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://nvd.nist.gov/vuln/detail/CVE-2021-44906">https://nvd.nist.gov/vuln/detail/CVE-2021-44906</a></p>
<p>Release Date: 2022-03-17</p>
<p>Fix Resolution: BumperLane.Public.Service.Contracts - 0.23.35.214-prerelease;cloudscribe.templates - 5.2.0;Virteom.Tenant.Mobile.Bluetooth - 0.21.29.159-prerelease;ShowingVault.DotNet.Sdk - 0.13.41.190-prerelease;Envisia.DotNet.Templates - 3.0.1;Yarnpkg.Yarn - 0.26.1;Virteom.Tenant.Mobile.Framework.UWP - 0.20.41.103-prerelease;Virteom.Tenant.Mobile.Framework.iOS - 0.20.41.103-prerelease;BumperLane.Public.Api.V2.ClientModule - 0.23.35.214-prerelease;VueJS.NetCore - 1.1.1;Dianoga - 4.0.0,3.0.0-RC02;Virteom.Tenant.Mobile.Bluetooth.iOS - 0.20.41.103-prerelease;Virteom.Public.Utilities - 0.23.37.212-prerelease;Indianadavy.VueJsWebAPITemplate.CSharp - 1.0.1;NorDroN.AngularTemplate - 0.1.6;Virteom.Tenant.Mobile.Framework - 0.21.29.159-prerelease;Virteom.Tenant.Mobile.Bluetooth.Android - 0.20.41.103-prerelease;z4a-dotnet-scaffold - 1.0.0.2;Raml.Parser - 1.0.7;CoreVueWebTest - 3.0.101;dotnetng.template - 1.0.0.4;SitecoreMaster.TrueDynamicPlaceholders - 1.0.3;Virteom.Tenant.Mobile.Framework.Android - 0.20.41.103-prerelease;Fable.Template.Elmish.React - 0.1.6;BlazorPolyfill.Build - 6.0.100.2;Fable.Snowpack.Template - 2.1.0;BumperLane.Public.Api.Client - 0.23.35.214-prerelease;Yarn.MSBuild - 0.22.0,0.24.6;Blazor.TailwindCSS.BUnit - 1.0.2;Bridge.AWS - 0.3.30.36;tslint - 5.6.0;SAFE.Template - 3.0.1;GR.PageRender.Razor - 1.8.0;MIDIator.WebClient - 1.0.105</p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":false,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"javascript/Node.js","packageName":"minimist","packageVersion":"1.2.5","packageFilePaths":["/package.json"],"isTransitiveDependency":true,"dependencyTree":"webpack-dev-server:4.6.0;portfinder:1.0.28;mkdirp:0.5.5;minimist:1.2.5","isMinimumFixVersionAvailable":true,"minimumFixVersion":"BumperLane.Public.Service.Contracts - 0.23.35.214-prerelease;cloudscribe.templates - 5.2.0;Virteom.Tenant.Mobile.Bluetooth - 0.21.29.159-prerelease;ShowingVault.DotNet.Sdk - 0.13.41.190-prerelease;Envisia.DotNet.Templates - 3.0.1;Yarnpkg.Yarn - 0.26.1;Virteom.Tenant.Mobile.Framework.UWP - 0.20.41.103-prerelease;Virteom.Tenant.Mobile.Framework.iOS - 0.20.41.103-prerelease;BumperLane.Public.Api.V2.ClientModule - 0.23.35.214-prerelease;VueJS.NetCore - 1.1.1;Dianoga - 4.0.0,3.0.0-RC02;Virteom.Tenant.Mobile.Bluetooth.iOS - 0.20.41.103-prerelease;Virteom.Public.Utilities - 0.23.37.212-prerelease;Indianadavy.VueJsWebAPITemplate.CSharp - 1.0.1;NorDroN.AngularTemplate - 0.1.6;Virteom.Tenant.Mobile.Framework - 0.21.29.159-prerelease;Virteom.Tenant.Mobile.Bluetooth.Android - 0.20.41.103-prerelease;z4a-dotnet-scaffold - 1.0.0.2;Raml.Parser - 1.0.7;CoreVueWebTest - 3.0.101;dotnetng.template - 1.0.0.4;SitecoreMaster.TrueDynamicPlaceholders - 1.0.3;Virteom.Tenant.Mobile.Framework.Android - 0.20.41.103-prerelease;Fable.Template.Elmish.React - 0.1.6;BlazorPolyfill.Build - 6.0.100.2;Fable.Snowpack.Template - 2.1.0;BumperLane.Public.Api.Client - 0.23.35.214-prerelease;Yarn.MSBuild - 0.22.0,0.24.6;Blazor.TailwindCSS.BUnit - 1.0.2;Bridge.AWS - 0.3.30.36;tslint - 5.6.0;SAFE.Template - 3.0.1;GR.PageRender.Razor - 1.8.0;MIDIator.WebClient - 1.0.105","isBinary":false}],"baseBranches":["master"],"vulnerabilityIdentifier":"CVE-2021-44906","vulnerabilityDetails":"Minimist \u003c\u003d1.2.5 is vulnerable to Prototype Pollution via file index.js, function setKey() (lines 
69-95).","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-44906","cvss3Severity":"medium","cvss3Score":"5.5","cvss3Metrics":{"A":"High","AC":"Low","PR":"None","S":"Unchanged","C":"None","UI":"Required","AV":"Local","I":"None"},"extraData":{}}</REMEDIATE> --> | non_infrastructure | cve medium detected in minimist tgz cve medium severity vulnerability vulnerable library minimist tgz parse argument options library home page a href path to dependency file package json path to vulnerable library node modules minimist package json dependency hierarchy webpack dev server tgz root library portfinder tgz mkdirp tgz x minimist tgz vulnerable library found in base branch master vulnerability details minimist is vulnerable to prototype pollution via file index js function setkey lines publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity low privileges required none user interaction required scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution bumperlane public service contracts prerelease cloudscribe templates virteom tenant mobile bluetooth prerelease showingvault dotnet sdk prerelease envisia dotnet templates yarnpkg yarn virteom tenant mobile framework uwp prerelease virteom tenant mobile framework ios prerelease bumperlane public api clientmodule prerelease vuejs netcore dianoga virteom tenant mobile bluetooth ios prerelease virteom public utilities prerelease indianadavy vuejswebapitemplate csharp nordron angulartemplate virteom tenant mobile framework prerelease virteom tenant mobile bluetooth android prerelease dotnet scaffold raml parser corevuewebtest dotnetng template sitecoremaster truedynamicplaceholders virteom tenant mobile framework android prerelease fable template elmish react 
blazorpolyfill build fable snowpack template bumperlane public api client prerelease yarn msbuild blazor tailwindcss bunit bridge aws tslint safe template gr pagerender razor midiator webclient isopenpronvulnerability false ispackagebased true isdefaultbranch true packages istransitivedependency true dependencytree webpack dev server portfinder mkdirp minimist isminimumfixversionavailable true minimumfixversion bumperlane public service contracts prerelease cloudscribe templates virteom tenant mobile bluetooth prerelease showingvault dotnet sdk prerelease envisia dotnet templates yarnpkg yarn virteom tenant mobile framework uwp prerelease virteom tenant mobile framework ios prerelease bumperlane public api clientmodule prerelease vuejs netcore dianoga virteom tenant mobile bluetooth ios prerelease virteom public utilities prerelease indianadavy vuejswebapitemplate csharp nordron angulartemplate virteom tenant mobile framework prerelease virteom tenant mobile bluetooth android prerelease dotnet scaffold raml parser corevuewebtest dotnetng template sitecoremaster truedynamicplaceholders virteom tenant mobile framework android prerelease fable template elmish react blazorpolyfill build fable snowpack template bumperlane public api client prerelease yarn msbuild blazor tailwindcss bunit bridge aws tslint safe template gr pagerender razor midiator webclient isbinary false basebranches vulnerabilityidentifier cve vulnerabilitydetails minimist is vulnerable to prototype pollution via file index js function setkey lines vulnerabilityurl | 0 |
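The bug class behind CVE-2021-44906 in the record above (prototype pollution via a `setKey()`-style helper) can be illustrated with a simplified, hypothetical reimplementation. This is a sketch of the pattern the advisory describes, not minimist's actual source:

```javascript
// A setKey()-style helper that walks a dotted key path without guarding
// special keys. Hypothetical code for illustration -- NOT minimist's code.
function unsafeSetKey(obj, keys, value) {
  let o = obj;
  for (const key of keys.slice(0, -1)) {
    if (o[key] === undefined) o[key] = {}; // "__proto__" slips through here
    o = o[key];
  }
  o[keys[keys.length - 1]] = value;
}

// A flag such as --__proto__.polluted=yes would be split into this key path:
const argv = {};
unsafeSetKey(argv, ["__proto__", "polluted"], "yes");
console.log({}.polluted); // yes -- Object.prototype is now polluted

delete Object.prototype.polluted; // undo the demo

// The usual fix is to reject the dangerous keys before assigning:
function safeSetKey(obj, keys, value) {
  const banned = new Set(["__proto__", "constructor", "prototype"]);
  if (keys.some((k) => banned.has(k))) return;
  unsafeSetKey(obj, keys, value);
}
```

Because `argv.__proto__` already exists, the walk lands on `Object.prototype`, and the final assignment pollutes every object in the process.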
13,741 | 10,440,032,157 | IssuesEvent | 2019-09-18 07:50:16 | elastic/beats | https://api.github.com/repos/elastic/beats | closed | Port over modules collection script to golang | :infrastructure Metricbeat module refactoring | We are trying to [move away](https://github.com/elastic/beats/pull/9118#discussion_r234195444) from Python scripts called from Makefiles to golang scripts called from magefiles. Currently the [modules collection script in Metricbeat](https://github.com/elastic/beats/blob/master/metricbeat/scripts/modules_collector.py) is written in Python. It should be ported over to a golang script. | 1.0 | Port over modules collection script to golang - We are trying to [move away](https://github.com/elastic/beats/pull/9118#discussion_r234195444) from Python scripts called from Makefiles to golang scripts called from magefiles. Currently the [modules collection script in Metricbeat](https://github.com/elastic/beats/blob/master/metricbeat/scripts/modules_collector.py) is written in Python. It should be ported over to a golang script. | infrastructure | port over modules collection script to golang we are trying to from python scripts called from makefiles to golang scripts called from magefiles currently the is written in python it should be ported over to a golang script | 1 |
63,543 | 7,725,093,333 | IssuesEvent | 2018-05-24 16:51:49 | archesproject/arches | https://api.github.com/repos/archesproject/arches | closed | Create method to centralize the normalizing of request Header parameters and querystring parameters | Roadmap: Graph Designer Update - Task 1 (API) | It would be nice to have a method/(base view) that can normalize the parameters passed in via request headers as well as a query strings. this method/view could then be used in requests to the API to allow users some flexibility in how to pass parameters.
Request header parameters should be prefixed with some easily distinguishable string (eg: "X-", or "Arches-", etc...)
Some headers/parameters we should consider:
- api version
- output format
- token
- profile (flavor) ?? I can't remember what this meant
- metadata (admin) | 1.0 | non_infrastructure | 0
25,065 | 18,071,158,788 | IssuesEvent | 2021-09-21 03:12:08 | APSIMInitiative/ApsimX | https://api.github.com/repos/APSIMInitiative/ApsimX | closed | Errors running own-compiled ApsimX | bug interface/infrastructure | Having compiled ApsimX from source in Debian Buster, I get this message when I open a model:
System.TypeLoadException: Could not load type of field 'UserInterface.Views.TWWebBrowserWK:Browser' (0) due to: Could not load file or assembly 'webkit-sharp, Version=1.1.15.0, Culture=neutral, PublicKeyToken=eaa1d335d2e19745' or one of its dependencies.
at UserInterface.Views.HTMLView.PopulateView (System.String contents, System.Boolean isURI) [0x000cb] in <26124421fb0f419cae2f222cd96186ab>:0
at UserInterface.Views.HTMLView.SetContents (System.String contents, System.Boolean allowModification, System.Boolean isURI) [0x00026] in <26124421fb0f419cae2f222cd96186ab>:0
at UserInterface.Presenters.GenericPresenter.Attach (System.Object model, System.Object view, UserInterface.Presenters.ExplorerPresenter explorerPresenter) [0x000ec] in <26124421fb0f419cae2f222cd96186ab>:0
at UserInterface.Presenters.ExplorerPresenter.ShowInRightHandPanel (System.Object model, System.String viewName, System.String presenterName) [0x00071] in <26124421fb0f419cae2f222cd96186ab>:0
The model seems to open and run OK. I get similar messages when running it.
However when I try to add a new manager script to my model I get this error:
System.Reflection.ReflectionTypeLoadException: Exception of type 'System.Reflection.ReflectionTypeLoadException' was thrown.
Could not load type of field 'UserInterface.Views.TWWebBrowserWK:Browser' (0) due to: Could not load file or assembly 'webkit-sharp, Version=1.1.15.0, Culture=neutral, PublicKeyToken=eaa1d335d2e19745' or one of its dependencies.
at (wrapper managed-to-native) System.Reflection.Assembly.GetTypes(System.Reflection.Assembly,bool)
at System.Reflection.Assembly.GetTypes () [0x00000] in <7b90a8780ac4414295b539b19eea7eea>:0
at APSIM.Shared.Utilities.ReflectionUtilities.GetTypesThatHaveInterface (System.Type interfaceType) [0x0001d] in <3e558596337e4479a6739a574b89f693>:0
at Models.Core.Apsim.GetAllowableChildModels (System.Object parent) [0x00008] in <b6c2def756f5416c84e31d8fb99ed34d>:0
at UserInterface.Presenters.AddModelPresenter.Attach (System.Object model, System.Object view, UserInterface.Presenters.ExplorerPresenter explorerPresenter) [0x00026] in <26124421fb0f419cae2f222cd96186ab>:0
at UserInterface.Presenters.ExplorerPresenter.ShowInRightHandPanel (System.Object model, System.String viewName, System.String presenterName) [0x00071] in <26124421fb0f419cae2f222cd96186ab>:0
System.TypeLoadException: Could not load type of field 'UserInterface.Views.TWWebBrowserWK:Browser' (0) due to: Could not load file or assembly 'webkit-sharp, Version=1.1.15.0, Culture=neutral, PublicKeyToken=eaa1d335d2e19745' or one of its dependencies.
... I cannot add any component to the model.
Noting the mentions of webkit-sharp in the above, I checked for matching installed packages:
$ apt list *webkit*sharp*
Listing... Done
libwebkit2-sharp-4.0-cil-dev/stable,stable,now 2.10.9+git20160917-1.1 amd64 [installed]
libwebkit2-sharp-4.0-cil-dev/stable,stable 2.10.9+git20160917-1.1 i386
libwebkit2-sharp-4.0-cil/stable,stable,now 2.10.9+git20160917-1.1 amd64 [installed]
libwebkit2-sharp-4.0-cil/stable,stable 2.10.9+git20160917-1.1 i386
...which seems to indicate I have what is required.
If I am using the .deb package downloaded from apsim.info, everything works OK. Unfortunately I cannot open my models with that version now because it says they have been opened with a newer version!
Any help? | 1.0 | infrastructure | 1
15,116 | 11,356,558,210 | IssuesEvent | 2020-01-24 23:09:40 | nwfsc-fram/boatnet | https://api.github.com/repos/nwfsc-fram/boatnet | closed | Yarn link not working for some boatnet-modules | Prj:infrastructure | ```yarn link``` dev of bn-auth is throwing core-js errors.
| 1.0 | infrastructure | 1
24,809 | 17,791,839,117 | IssuesEvent | 2021-08-31 17:06:09 | qoollo/bob | https://api.github.com/repos/qoollo/bob | opened | Write a description on Docker Hub | infrastructure | Required parts:
1. Bob description (can be taken from Readme)
2. Link to GitHub repository
3. Link to Wiki on GitHub
4. Versions description (8 bytes key vs 16 bytes key, alpine vs ubuntu)
5. How to use the image. Command to run it, exposed ports and directories, ulimit
6. Configuration details. List of configuration files, their purpose, links to examples in this repository and to description in Wiki
Source of inspiration: https://hub.docker.com/r/yandex/clickhouse-server | 1.0 | infrastructure | 1
286,055 | 31,167,570,758 | IssuesEvent | 2023-08-16 21:04:09 | OpenHistoricalMap/issues | https://api.github.com/repos/OpenHistoricalMap/issues | opened | Inspector should only embed images from whitelisted domains | compliance inspector security images | As noted in https://github.com/OpenHistoricalMap/issues/issues/581#issuecomment-1679783531, the inspector automatically embeds any URL in an `image:0`, `image:1`, `image:2`, etc. tag as an image. #583 would at least check whether it’s an image before displaying it, but we don’t have any idea what the URL points to. The inspector should only embed the image if it comes from a domain on some project-wide whitelist.
## Problem
In general, it’s quite risky for us to hotlink an image from an unknown source, especially since we present it as a signature part of the website content, rather than as part of something most laypeople would recognize as a user-contributed post.
### Durability
[This way](https://www.openhistoricalmap.org/way/198636092) links directly to [an image hosted on Google Photos](https://lh3.googleusercontent.com/2QH8W9HluQJILPpfKbFjIRc453VlFORuvrzXpb4g4YCmpG7R3RqmBvWWz3zK7wqN0tHreb6BHsV9hEJU0YVSubEBzCYUIdX3E4Sb55Yk1noUw0Ugo3MRDB4kl_j8HwlSrTVeDWYlX5w=w2400). Except for an earlier version that linked to [the share page](https://photos.google.com/share/AF1QipOj4uozDyQdvlLYDxCOy--xQXK2YRKpuwGHjBfH4N--ybfyedLTAu2lqxYSS6XVlA/photo/AF1QipOJO4EzAxXsuO-umZ-LLxOAOny_N2WiOb7MTD7C?key=clA4WEIwVkFfYzdYV21oVEUzajl0RUN5UFY1RDFn), we wouldn’t know whose Google Photo account it’s on or whether it’ll remain there long-term. For all we’d know, the mapper got the photo from somewhere else on the Internet, and the Google Photo account owner has no way of knowing that deleting their photo would break OHM.
Images on personal photo hosting services are not the only images at risk of breakage, but more public-facing services can be archived by the Wayback Machine or kept in sync with OHM through other means: https://github.com/OpenHistoricalMap/issues/issues/581#issuecomment-1681131541. By explicitly listing the domains that the inspector _does_ support, we have a clearer strategy for keeping track of any widespread changes, such as [imgur culling old images](https://community.openstreetmap.org/t/imgur-is-going-to-delete-old-images/98229).
### Privacy
The page for the [Segedunum](https://www.openhistoricalmap.org/relation/2694442#map=18/54.98786/-1.53233&layers=O&date=335-12-11&daterange=10-01-01,2023-12-31) relation (the example given in https://github.com/OpenHistoricalMap/issues/issues/581#issuecomment-1679886563) embeds an image from [this URL shortened by Bitly](https://bit.ly/pl_89288).[^bitly] If a user views this page in OHM, Bitly automatically sets a tracking cookie in the user’s browser, even if the user never clicks on it. It also sets `referrer-policy` to `unsafe-url`, which [potentially undermines HTTPS security](https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Referrer-Policy#unsafe-url).
<img width="547" alt="set-cookie: _bit=…; Domain=bit.ly; Expires=Mon, 12 Feb 2024 17:55:18 GMT" src="https://github.com/OpenHistoricalMap/issues/assets/1231218/afbac76a-75a5-4d3b-a289-485ff97ea5b4">
The OHM website makes no mention of [Bitly’s cookie policy](https://bitly.com/pages/cookies) or links to their opt-out screen, which I assume would be required in some jurisdictions. Perhaps the feature could be tagged with `image:1:cookie_policy` or somesuch, but I don’t imagine that mappers would be very interested in doing this kind of accounting for an image just to have it show up in the inspector.
URL shorteners aren’t the only websites that set HTTP cookies on images – so does Wikimedia Commons. However, Bitly’s business model is to use these cookies for tracking purposes. I suspect it would be a lot easier for an OHM privacy policy to link out to a fixed set of third-party privacy policies than to have to automatically discover the privacy policy for any site linked from `image:#`.
## Security
Many websites in OSM’s `website` tag have gone away, replaced by squatters who run the gamut from benign to malicious. Tracking these changes has been a challenge and [an unsolved problem for OSM](https://community.openstreetmap.org/t/is-there-a-procedure-to-prevent-link-rot/100886). Most of the occurrences of `image:#` in OHM are recent enough that they haven’t had time to rot yet, but it’s only a matter of time.
Even now, a number of `image:#` occurrences refer to HTTP URLs instead of HTTPS URLs. Google Chrome refuses to load images from HTTP in an HTTPS webpage.
## Proposal
Maintain a JSON file in some repository under the OpenHistoricalMap organization. The JSON file would include an array of whitelisted domains.[^blacklist] [In the inspector](https://github.com/OpenHistoricalMap/ohm-inspector/blob/94e546bc8ad4b05a09ad052cffff0cec1a8c8217/openhistoricalmap-inspector.js#L117), fetch this file (caching if necessary) and match an `image:#` URL against the whitelist before showing it.
Document the process for OHM contributors to get another domain listed in this file, including the criteria for getting whitelisted. If necessary, publish the repository to GitHub Pages on a subdomain of openhistoricalmap.org to keep the file lightweight and independent of the usual deployment process.
## Alternatives
@jeffreyameyer suggested in https://github.com/OpenHistoricalMap/issues/issues/581#issuecomment-1679886563 that the inspector should continue to show the tagged image in general, unless the domain is on a blacklist. This would give mappers instant gratification when including images from lesser-known websites. Unfortunately, I don’t think this approach would address the durability, privacy, and security concerns, and it would create maintenance overhead of a worse nature than the proposal above: if we don’t get around to approving a pull request for a whitelist, the user may be discouraged until we do respond. But if we don’t get around to approving a PR on a blacklist, OHM potentially faces a reputational issue in the meantime.
[^bitly]: This is a kind of rubegoldbergian image reference. Besides the URL shortening service, [the rehydrated URL](https://web.archive.org/web/20210203035412/https://d279tnhy9skgzk.cloudfront.net/d8KIUrQXIdG4wBcxYD3BC0Yw9ZU=/600x0/https://s3-eu-west-1.amazonaws.com/atwam-images-files/production/images/content/segedunumromanfort/2015-06/5687.jpg) also goes through the Internet Archive and Cloudfront on its way to an AWS s3 bucket. At only 231 characters in length, the rehydrated URL would’ve satisfied OHM’s 255-character tag value length limit. Furthermore, the resulting image is a mere 600-pixel-wide thumbnail, whereas [the full image](https://s3-eu-west-1.amazonaws.com/atwam-images-files/production/images/content/segedunumromanfort/2015-06/5687.jpg) at a much shorter URL is 2,000 pixels wide, allowing the user to see many more details than in the thumbnail.
[^blacklist]: A JSON file would allow us to potentially add a complementary blacklist of URL patterns in the future, in case there’s something from a trusted domain that should be tagged but would be problematic to show in the inspector for some reason. | True | non_infrastructure | 0
25,945 | 19,484,619,410 | IssuesEvent | 2021-12-26 04:57:00 | dealii/dealii | https://api.github.com/repos/dealii/dealii | closed | Fix indent script warnings | Infrastructure | On my Ubuntu 21.10 system, I get these warnings from the indent script several times:
```
xargs: warning: options --max-args and --replace/-I/-i are mutually exclusive, ignoring previous --max-args value
```
I believe that the place where this is produced is here (note that `-n` is the same as the `--max-args` warned about):
```
> grep -n -r xargs contrib/utilities/
contrib/utilities/check_doxygen.sh:9:find doc examples include \( -name "*.h" -o -name "*.dox" \) -print0 | xargs -0 -n 1 contrib/utilities/checkdoxygen.py
contrib/utilities/checkdoxygen.py:6:# find doc examples include \( -name "*.h" -o -name "*.dox" \) -print | xargs -n 1 contrib/utilities/checkdoxygen.py
contrib/utilities/indent_common.sh:272:# - For 'xargs', -0 does the opposite: it separates filenames that are
contrib/utilities/indent_common.sh:285: xargs -0 -n 1 -P 10 -I {} bash -c "${3} {}" *************************************************
contrib/utilities/indent_common.sh:289: xargs -0 -n 1 -P 10 -I {} bash -c "${3} {}" *************************************************
contrib/utilities/indent_common.sh:309: XARGS="xargs -E"
contrib/utilities/indent_common.sh:312: XARGS="xargs --no-run-if-empty -d"
contrib/utilities/indent_common.sh:319: xargs -n 1 ls -d 2>/dev/null |
```
I have to admit that I don't understand why `-n` and `-I` are incompatible and what to do without further reading. | 1.0 | Fix indent script warnings - On my Ubuntu 21.10 system, I get these warnings from the indent script several times:
```
xargs: warning: options --max-args and --replace/-I/-i are mutually exclusive, ignoring previous --max-args value
```
I believe that the place where this is produced is here (note that `-n` is the same as the `--max-args` warned about):
```
> grep -n -r xargs contrib/utilities/
contrib/utilities/check_doxygen.sh:9:find doc examples include \( -name "*.h" -o -name "*.dox" \) -print0 | xargs -0 -n 1 contrib/utilities/checkdoxygen.py
contrib/utilities/checkdoxygen.py:6:# find doc examples include \( -name "*.h" -o -name "*.dox" \) -print | xargs -n 1 contrib/utilities/checkdoxygen.py
contrib/utilities/indent_common.sh:272:# - For 'xargs', -0 does the opposite: it separates filenames that are
contrib/utilities/indent_common.sh:285: xargs -0 -n 1 -P 10 -I {} bash -c "${3} {}" *************************************************
contrib/utilities/indent_common.sh:289: xargs -0 -n 1 -P 10 -I {} bash -c "${3} {}" *************************************************
contrib/utilities/indent_common.sh:309: XARGS="xargs -E"
contrib/utilities/indent_common.sh:312: XARGS="xargs --no-run-if-empty -d"
contrib/utilities/indent_common.sh:319: xargs -n 1 ls -d 2>/dev/null |
```
I have to admit that I don't understand why `-n` and `-I` are incompatible and what to do without further reading. | infrastructure | fix indent script warnings on my ubuntu system i get these warnings from the indent script several times xargs warning options max args and replace i i are mutually exclusive ignoring previous max args value i believe that the place where this is produced is here note that n is the same as the max args warned about grep n r xargs contrib utilities contrib utilities check doxygen sh find doc examples include name h o name dox xargs n contrib utilities checkdoxygen py contrib utilities checkdoxygen py find doc examples include name h o name dox print xargs n contrib utilities checkdoxygen py contrib utilities indent common sh for xargs does the opposite it separates filenames that are contrib utilities indent common sh xargs n p i bash c contrib utilities indent common sh xargs n p i bash c contrib utilities indent common sh xargs xargs e contrib utilities indent common sh xargs xargs no run if empty d contrib utilities indent common sh xargs n ls d dev null i have to admit that i don t understand why n and i are incompatible and what to do without further reading | 1 |
437,299 | 30,594,706,399 | IssuesEvent | 2023-07-21 20:35:19 | DCC-EX/dcc-ex.github.io | https://api.github.com/repos/DCC-EX/dcc-ex.github.io | closed | Add Ash++'s ESP32 info and comments to the docs. | Documentation | [Ash++ on Discord](https://discord.com/channels/713189617066836079/907288524158558299/929849585151660114)
I am now careful to see that I am getting the model that says WROOM-32. It is the low-cost version and available.
--- ESP32 pins are more scarce than expected; using WiFi may preclude use of all the ADC2 pins and other pins are potentially reserved. In other words, don't plan on connecting accessories to anything except I2C modules, and use an external transistor/inverter circuit for motor boards.
--- There is a WROOM-32 board in UNO form factor. It may provide the plug-n-play option with standard motor shield; add jumpers to put current sense pins at A2/A3 locations. Also 2A is 3.3V, so a question of whether to add voltage divider or zener. The onboard antenna might be an issue as it is located below the motor shield.
--- Using Arduino IDE for ESP32 is a different experience than with a Mega. Even after you get the environment installed, it is questionable on whether you have the correct board selected. And compiling/uploading takes several minutes longer than it does for a Mega. (Or I need a new laptop?) | 1.0 | Add Ash++'s ESP32 info and comments to the docs. - [Ash++ on Discord](https://discord.com/channels/713189617066836079/907288524158558299/929849585151660114)
I am now careful to see that I am getting the model that says WROOM-32. It is the low-cost version and available.
--- ESP32 pins are more scarce than expected; using WiFi may preclude use of all the ADC2 pins and other pins are potentially reserved. In other words, don't plan on connecting accessories to anything except I2C modules, and use an external transistor/inverter circuit for motor boards.
--- There is a WROOM-32 board in UNO form factor. It may provide the plug-n-play option with standard motor shield; add jumpers to put current sense pins at A2/A3 locations. Also 2A is 3.3V, so a question of whether to add voltage divider or zener. The onboard antenna might be an issue as it is located below the motor shield.
--- Using Arduino IDE for ESP32 is a different experience than with a Mega. Even after you get the environment installed, it is questionable on whether you have the correct board selected. And compiling/uploading takes several minutes longer than it does for a Mega. (Or I need a new laptop?) | non_infrastructure | add ash s info and comments to the docs i am now careful to see that i am getting the model that says wroom it is the low cost version and available pins are more scarce than expected using wifi may preclude use of all the pins and other pins are potentially reserved in other words don t plan on connecting accessories to anything except modules and use an external transistor inverter circuit for motor boards there is a wroom board in uno form factor it may provide the plug n play option with standard motor shield add jumpers to put current sense pins at locations also is so a question of whether to add voltage divider or zener the onboard antenna might be an issue as it is located below the motor shield using arduino ide for is a different experience than with a mega even after you get the environment installed it is questionable on whether you have the correct board selected and compiling uploading takes several minutes longer than it does for a mega or i need a new laptop | 0 |
8,071 | 7,202,040,932 | IssuesEvent | 2018-02-06 01:36:44 | KhronosGroup/glslang | https://api.github.com/repos/KhronosGroup/glslang | closed | Poll: Minimum required MSVC version? | Infrastructure | What should be the minimum version of MSVC required for glslang?
I'd been setting the bar at 2012 to be conservative across a broad base, but many submissions require 2013, which I then fix to work on 2012. (2012 supports a broad set of C++11, but not all those supported by 2013.)
Any input on a required minimum version?
| 1.0 | Poll: Minimum required MSVC version? - What should be the minimum version of MSVC required for glslang?
I'd been setting the bar at 2012 to be conservative across a broad base, but many submissions require 2013, which I then fix to work on 2012. (2012 supports a broad set of C++11, but not all those supported by 2013.)
Any input on a required minimum version?
| infrastructure | poll minimum required msvc version what should be the minimum version of msvc required for glslang i d been setting the bar at to be conservative across a broad base but many submissions require which i then fix to work on supports a broad set of c but not all those supported by any input on a required minimum version | 1 |
648,940 | 21,213,656,246 | IssuesEvent | 2022-04-11 04:03:26 | Youngphil5/Choices-proj3- | https://api.github.com/repos/Youngphil5/Choices-proj3- | opened | End a Poll (a post) | Difficulty 3 Priority 3 | As a user I want to be able to end a poll if I have made a choice based on the result.
| 1.0 | End a Poll (a post) - As a user I want to be able to end a poll if I have made a choice based on the result.
| non_infrastructure | end a poll a post as a user i want to be able to end a poll if i have made a choice based on the result | 0 |
31,783 | 26,114,044,330 | IssuesEvent | 2022-12-28 02:05:33 | OpenHistoricalMap/issues | https://api.github.com/repos/OpenHistoricalMap/issues | closed | OHM Overpass not updating from Database | overpass infrastructure | **Bug description**
new additions to OHM do not appear to be making it into the OHM Overpass database
they should be showing up after a few minutes
i added Greenport Speedway to OHM 2 days ago and it is still not showing up in Overpass searches.
it should appear in this instance of my leaflet widget: http://www.na-motorsports.com/Tracks/NY/Greenport.html#TrackMap | 1.0 | OHM Overpass not updating from Database - **Bug description**
new additions to OHM do not appear to be making it into the OHM Overpass database
they should be showing up after a few minutes
i added Greenport Speedway to OHM 2 days ago and it is still not showing up in Overpass searches.
it should appear in this instance of my leaflet widget: http://www.na-motorsports.com/Tracks/NY/Greenport.html#TrackMap | infrastructure | ohm overpass not updating from database bug description new additions to ohm doe not appear to be making it into the ohm overpass database they should be showing up after a few minutes i added greenport speedway to ohm days ago and it is still not showing up in overpass searches it should appear in this instance of my leaflet widget | 1 |
11,475 | 9,203,340,226 | IssuesEvent | 2019-03-08 02:01:19 | GSA/datagov-deploy | https://api.github.com/repos/GSA/datagov-deploy | closed | Populate: Copy into structure related files | infrastructure java server | Task: Copy in appropriate file and edit/or amend as needed.
Repo: datagov-deploy-tomcat | 1.0 | Populate: Copy into structure related files - Task: Copy in appropriate file and edit/or amend as needed.
Repo: datagov-deploy-tomcat | infrastructure | populate copy into structure related files task copy in appropriate file and edit or amend as needed repo datagov deploy tomcat | 1 |
276,185 | 20,972,384,053 | IssuesEvent | 2022-03-28 12:35:43 | Fengrui-Liu/nad21_ictfi | https://api.github.com/repos/Fengrui-Liu/nad21_ictfi | reopened | Help with datasets | documentation | I recently read your article <AN ACCURACY NETWORK ANOMALY DETECTION METHOD BASED ON ENSEMBLE MODEL>. With regard to the ZYELL-NCTU nettraffic dataset used in the experiment, I tried to find its source for use in my experiment, but failed. Is this dataset freely available as open source? If so, can you share it with me? I would greatly appreciate it. | 1.0 | Help with datasets - I recently read your article <AN ACCURACY NETWORK ANOMALY DETECTION METHOD BASED ON ENSEMBLE MODEL>. With regard to the ZYELL-NCTU nettraffic dataset used in the experiment, I tried to find its source for use in my experiment, but failed. Is this dataset freely available as open source? If so, can you share it with me? I would greatly appreciate it. | non_infrastructure | help with datasets i recently read your article with regard to the zyell nctu nettraffic dataset used in the experiment i tried to find its source for use in my experiment but it failed is this dataset free from open source if so can you share it with me esteem it a favor | 0 |
1,508 | 3,254,912,558 | IssuesEvent | 2015-10-20 04:24:03 | dotnet/roslyn | https://api.github.com/repos/dotnet/roslyn | closed | Within the Roslyn build, treat AD0001 as a warning/error | Area-Analyzers Area-Infrastructure Bug | @jaredpar came across an issue where the public API analyzer was throwing an exception in its compilation start action (see https://github.com/dotnet/roslyn-analyzers/issues/312). This effectively disabled the analyzer for the rest of the project build.
The exception was surfaced as an AD0001 diagnostic, but unfortunately these are surfaced as *info* diagnostics rather than warnings or errors. This means the only way you'll see them at the command line is if you bump up the MSBuild verbosity and then go looking through the build log. Otherwise, you have no idea it happened.
We should update the shared .ruleset file to turn AD0001 into a warning, which will then be promoted to an error during CI builds. | 1.0 | Within the Roslyn build, treat AD0001 as a warning/error - @jaredpar came across an issue where the public API analyzer was throwing an exception in its compilation start action (see https://github.com/dotnet/roslyn-analyzers/issues/312). This effectively disabled the analyzer for the rest of the project build.
The exception was surfaced as an AD0001 diagnostic, but unfortunately these are surfaced as *info* diagnostics rather than warnings or errors. This means the only way you'll see them at the command line is if you bump up the MSBuild verbosity and then go looking through the build log. Otherwise, you have no idea it happened.
We should update the shared .ruleset file to turn AD0001 into a warning, which will then be promoted to an error during CI builds. | infrastructure | within the roslyn build treat as a warning error jaredpar came across an issue where the public api analyzer was throwing an exception in its compilation start action see this effectively disabled the analyzer for the rest of the project build the exception was surfaced as an diagnostic but unfortunately these are surfaced as info diagnostics rather than warnings or errors this means the only way you ll see them at the command line is if you bump up the msbuild verbosity and then go looking through the build log otherwise you have no idea it happened we should update the shared ruleset file to turn into a warning which will then be promoted to an error during ci builds | 1 |
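The proposed ruleset change from the Roslyn issue above can be sketched as follows; this is an illustrative fragment, not a copy of Roslyn's actual shared ruleset, and the `Name`, `AnalyzerId`, and `RuleNamespace` values are assumptions:

```xml
<?xml version="1.0" encoding="utf-8"?>
<!-- Promote AD0001 ("analyzer threw an exception") from its default Info
     severity to Warning, so the CI warnings-as-errors setting turns it into
     a build break instead of a line buried deep in the MSBuild log. -->
<RuleSet Name="Example shared rules" ToolsVersion="14.0">
  <Rules AnalyzerId="Microsoft.CodeAnalysis" RuleNamespace="Microsoft.CodeAnalysis">
    <Rule Id="AD0001" Action="Warning" />
  </Rules>
</RuleSet>
```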
27,006 | 21,001,750,201 | IssuesEvent | 2022-03-29 18:09:39 | KonomiAI/Chronicle | https://api.github.com/repos/KonomiAI/Chronicle | closed | add step `prisma db push` into docker for backend | area:infrastructure | I get this when I tried to build docker:
```
chronicle-api | [2]
chronicle-api | [2] The database is already in sync with the Prisma schema.
chronicle-api | [1]
chronicle-api | [1] Watching... /usr/src/app/prisma/schema.prisma
chronicle-api | [1]
chronicle-api | [1] ✔ Generated Prisma Client (3.10.0 | library) to ./node_modules/@prisma/client in
chronicle-api | [1] 257ms
✔ Generated Prisma Client (3.10.0 | library) to ./node_modules/@prisma/client in
chronicle-api | [2] 196ms
chronicle-api | [2]
mongodb | {"t":{"$date":"2022-03-06T17:36:10.370+00:00"},"s":"I", "c":"STORAGE", "id":22430, "ctx":"Checkpointer","msg":"WiredTiger message","attr":{"message":"[1646588170:370488][1:0x7facfca40700], WT_SESSION.checkpoint: [WT_VERB_CHECKPOINT_PROGRESS] saving checkpoint snapshot min: 10, snapshot max: 10 snapshot count: 0, oldest timestamp: (0, 0) , meta checkpoint timestamp: (0, 0) base write gen: 7"}}
chronicle-api | [2] Done in 5.25s.
chronicle-api | [2] yarn run prisma:db-push exited with code 0
chronicle-api | --> Sending SIGTERM to other processes..
chronicle-api | [0] yarn run start:debug exited with code SIGTERM
chronicle-api | --> Sending SIGTERM to other processes..
chronicle-api | [3] yarn run prisma:studio exited with code SIGTERM
chronicle-api | --> Sending SIGTERM to other processes..
chronicle-api | [1] yarn run prisma:generate-watch exited with code SIGTERM
chronicle-api exited with code 1
```
the `prisma db push` may need to be a part of the [docker step instead](https://github.com/KonomiAI/Chronicle/blob/12cf92f87a08b40847c19f323688a428b5e9386b/server/Dockerfile#L11-L12)
More Info
https://github.com/prisma/prisma/releases/tag/3.4.0
_Originally posted by @andrewpratheepan in https://github.com/KonomiAI/Chronicle/pull/103#discussion_r820265329_ | 1.0 | add step `prisma db push` into docker for backend - I get this when I tried to build docker:
```
chronicle-api | [2]
chronicle-api | [2] The database is already in sync with the Prisma schema.
chronicle-api | [1]
chronicle-api | [1] Watching... /usr/src/app/prisma/schema.prisma
chronicle-api | [1]
chronicle-api | [1] ✔ Generated Prisma Client (3.10.0 | library) to ./node_modules/@prisma/client in
chronicle-api | [1] 257ms
✔ Generated Prisma Client (3.10.0 | library) to ./node_modules/@prisma/client in
chronicle-api | [2] 196ms
chronicle-api | [2]
mongodb | {"t":{"$date":"2022-03-06T17:36:10.370+00:00"},"s":"I", "c":"STORAGE", "id":22430, "ctx":"Checkpointer","msg":"WiredTiger message","attr":{"message":"[1646588170:370488][1:0x7facfca40700], WT_SESSION.checkpoint: [WT_VERB_CHECKPOINT_PROGRESS] saving checkpoint snapshot min: 10, snapshot max: 10 snapshot count: 0, oldest timestamp: (0, 0) , meta checkpoint timestamp: (0, 0) base write gen: 7"}}
chronicle-api | [2] Done in 5.25s.
chronicle-api | [2] yarn run prisma:db-push exited with code 0
chronicle-api | --> Sending SIGTERM to other processes..
chronicle-api | [0] yarn run start:debug exited with code SIGTERM
chronicle-api | --> Sending SIGTERM to other processes..
chronicle-api | [3] yarn run prisma:studio exited with code SIGTERM
chronicle-api | --> Sending SIGTERM to other processes..
chronicle-api | [1] yarn run prisma:generate-watch exited with code SIGTERM
chronicle-api exited with code 1
```
the `prisma db push` may need to be a part of the [docker step instead](https://github.com/KonomiAI/Chronicle/blob/12cf92f87a08b40847c19f323688a428b5e9386b/server/Dockerfile#L11-L12)
More Info
https://github.com/prisma/prisma/releases/tag/3.4.0
_Originally posted by @andrewpratheepan in https://github.com/KonomiAI/Chronicle/pull/103#discussion_r820265329_ | infrastructure | add step prisma db push into docker for backend i get this when i tried to build docker chronicle api chronicle api the database is already in sync with the prisma schema chronicle api chronicle api watching usr src app prisma schema prisma chronicle api chronicle api ✔ generated prisma client library to node modules prisma client in chronicle api ✔ generated prisma client library to node modules prisma client in chronicle api chronicle api mongodb t date s i c storage id ctx checkpointer msg wiredtiger message attr message wt session checkpoint saving checkpoint snapshot min snapshot max snapshot count oldest timestamp meta checkpoint timestamp base write gen chronicle api done in chronicle api yarn run prisma db push exited with code chronicle api sending sigterm to other processes chronicle api yarn run start debug exited with code sigterm chronicle api sending sigterm to other processes chronicle api yarn run prisma studio exited with code sigterm chronicle api sending sigterm to other processes chronicle api yarn run prisma generate watch exited with code sigterm chronicle api exited with code the prisma db push may need to be a part of the more info originally posted by andrewpratheepan in | 1 |
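A likely shape of the fix for the Chronicle log above, assuming the repo's `prisma:db-push` and `start:debug` yarn scripts (names taken from the log itself): run the schema push once at container start, before the server, instead of racing it inside the concurrently-managed process list, where the one-shot push exiting with code 0 tears the other processes down. Note that `prisma db push` needs the database reachable, so it belongs at container runtime (entrypoint/CMD), not in a `docker build` step. A hypothetical entrypoint sketch, not the actual Chronicle setup:

```shell
#!/bin/sh
set -e

# Sync the schema first; this needs mongodb up, so it must run at container
# start (e.g. as the Dockerfile ENTRYPOINT), not at image build time.
yarn prisma:db-push

# Then hand the process over to the API so it keeps running and receives
# signals directly, instead of the whole process group being killed when
# the one-shot db-push exits successfully.
exec yarn start:debug
```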
145 | 2,537,213,453 | IssuesEvent | 2015-01-26 19:04:05 | RobDixonIII/Bloom | https://api.github.com/repos/RobDixonIII/Bloom | closed | Stub Analytics Modules | infrastructure | In the Browser/Analytics solution folder stub the following module projects:
* Bloom.Analytics.Menu
* Bloom.Analytics.Library
* Bloom.Analytics.Album
* Bloom.Analytics.Artist
* Bloom.Analytics.Person
* Bloom.Analytics.Playlist
* Bloom.Analytics.Song
This includes setting up the properties and assembly info, and NuGet references to Unity and Prism.
| 1.0 | Stub Analytics Modules - In the Browser/Analytics solution folder stub the following module projects:
* Bloom.Analytics.Menu
* Bloom.Analytics.Library
* Bloom.Analytics.Album
* Bloom.Analytics.Artist
* Bloom.Analytics.Person
* Bloom.Analytics.Playlist
* Bloom.Analytics.Song
This includes setting up the properties and assembly info, and NuGet references to Unity and Prism.
| infrastructure | stub analytics modules in the browser analytics solution folder stub the following module projects bloom analytics menu bloom analytics library bloom analytics album bloom analytics artist bloom analytics person bloom analytics playlist bloom analytics song this includes setting up the properties and assembly info and nuget references to unity and prism | 1 |
15,707 | 3,972,000,065 | IssuesEvent | 2016-05-04 14:02:15 | MarlinFirmware/Marlin | https://api.github.com/repos/MarlinFirmware/Marlin | closed | Not sure what is being accomplished with #define SDSUPPORT | Status: Inactive Support: Configuration Support: Documentation | I've been fighting with my firmware for the past day or so after upgrading to RC5. I copied my config and pins over like I always do, except this time my SD card was mysteriously not working. I'm running a rambo and reprapdiscount smartlcd controller
I dug through the code, tried out other RCs, etc. No avail. Until just moments ago when I was really really carefully reading the code in desperation. I finally spied under #define DOGLCD and above a huge block of other commented code, the object of my search: #define SDSUPPORT :|
I'm really not certain
a) Why this is in such a weird place. If you want to be THAT granular about enabling features. (Even ones that people obviously want. Seriously, I don't think I've met a printer without SD card support.) Can we at least have it at the top of the category?
b) Why I wouldn't want to use the SD card soldered to the back of my display? I can't think of a reason. So, to me at least, it seems weird to have the edge case enabled by default. I used to enable my controller and get all its features. Now I have to know to go to two different places to enable the same board? Seems like a step back in usability to me. I think for cards that support it, it would make sense to have #define SDSUPPORT in their definition, and have a NOSDSUPPORT option to turn it off in config.h if desired.
Anyway, despite my first world rage, I would like to add that the new firmware works much better than my old one. The code is much cleaner than it was, and I like the streamlined display! So thanks! :) | 1.0 | Not sure what is being accomplished with #define SDSUPPORT - I've been fighting with my firmware for the past day or so after upgrading to RC5. I copied my config and pins over like I always do, except this time my SD card was mysteriously not working. I'm running a rambo and reprapdiscount smartlcd controller
I dug through the code, tried out other RCs, etc. No avail. Until just moments ago when I was really really carefully reading the code in desperation. I finally spied under #define DOGLCD and above a huge block of other commented code, the object of my search: #define SDSUPPORT :|
I'm really not certain
a) Why this is in such a weird place. If you want to be THAT granular about enabling features. (Even ones that people obviously want. Seriously, I don't think I've met a printer without SD card support.) Can we at least have it at the top of the category?
b) Why I wouldn't want to use the SD card soldered to the back of my display? I can't think of a reason. So, to me at least, it seems weird to have the edge case enabled by default. I used to enable my controller and get all its features. Now I have to know to go to two different places to enable the same board? Seems like a step back in usability to me. I think for cards that support it, it would make sense to have #define SDSUPPORT in their definition, and have a NOSDSUPPORT option to turn it off in config.h if desired.
Anyway, despite my first world rage, I would like to add that the new firmware works much better than my old one. The code is much cleaner than it was, and I like the streamlined display! So thanks! :) | non_infrastructure | not sure what is being accomplished with define sdsupport i ve been fighting with my firmware for the past day or so after upgrading to i copied my config and pins over like i always do except this time my sd card was mysteriously not working i m running a rambo and reprapdiscount smartlcd controller i dug through the code tried out other rcs etc no avail until just moments ago when i was really really carefully reading the code in desperation i finally spied under define doglcd and above a huge block of other commented code the object of my search define sdsupport i m really not certain a why this is in such a weird place if you want to be that granular about enabling features even ones that people obviously want seriously i don t think i ve met a printer without sd card support can we at least have it at the top of the category b why i wouldn t want to use the sd card soldered to the back of my display i can t think of a reason so to me at least it seems weird to have the edge case enabled by default i used to enable my controller and get all its features now i have to know to go to two different places to enable the same board seems like a step back in usability to me i think for cards that support it it would make sense to have define sdsupport in their definition and have a nosdsupport option to turn it off in config h if desired anyway despite my first world rage i would like to add that the new firmware works much better than my old one the code is much cleaner than it was and i like the streamlined display so thanks | 0 |
23,864 | 16,636,763,499 | IssuesEvent | 2021-06-04 00:30:44 | google/site-kit-wp | https://api.github.com/repos/google/site-kit-wp | closed | Consolidate JS linting and JS tests workflows into one | P2 QA: Eng Type: Infrastructure | ## Feature Description
In #2969 we overhauled our GitHub actions to be much more selective with when each of them runs to be more efficient with build time. This issue is intended to further enhance that by combining JS linting and tests into a single workflow so that JS tests don't run if there is a linting failure.
---------------
_Do not alter or remove anything below. The following sections will be managed by moderators only._
## Acceptance criteria
* The current `js-tests.yml` and `js-css-lint.yml` workflows should be combined into a single workflow
* The JS tests should run only after JS linting has passed successfully
* CSS linting should continue to run first as it runs the fastest
* `paths` should be updated to combine rules from both as needed
## Implementation Brief
* Check PR https://github.com/google/site-kit-wp/pull/3302
* Check that `js-tests.yml` and `js-css-lint.yml` have been merged together successfully (no paths missing)
* Check that workflow is running
### Test Coverage
* The same number of tests should be run as before.
### Visual Regression Changes
* N/A
## QA Brief
* Check that new job is working correctly on GitHub
## Changelog entry
* N/A
| 1.0 | Consolidate JS linting and JS tests workflows into one - ## Feature Description
In #2969 we overhauled our GitHub actions to be much more selective with when each of them runs to be more efficient with build time. This issue is intended to further enhance that by combining JS linting and tests into a single workflow so that JS tests don't run if there is a linting failure.
---------------
_Do not alter or remove anything below. The following sections will be managed by moderators only._
## Acceptance criteria
* The current `js-tests.yml` and `js-css-lint.yml` workflows should be combined into a single workflow
* The JS tests should run only after JS linting has passed successfully
* CSS linting should continue to run first as it runs the fastest
* `paths` should be updated to combine rules from both as needed
## Implementation Brief
* Check PR https://github.com/google/site-kit-wp/pull/3302
* Check that `js-tests.yml` and `js-css-lint.yml` have been merged together successfully (no paths missing)
* Check that workflow is running
### Test Coverage
* The same number of tests should be run as before.
### Visual Regression Changes
* N/A
## QA Brief
* Check that new job is working correctly on GitHub
## Changelog entry
* N/A
| infrastructure | consolidate js linting and js tests workflows into one feature description in we overhauled our github actions to be much more selective with when each of them runs to be more efficient with build time this issue is intended to further enhance that by combining js linting and tests into a single workflow so that js tests don t run if there is a linting failure do not alter or remove anything below the following sections will be managed by moderators only acceptance criteria the current js tests yml and js css lint yml workflows should be combined into a single workflow the js tests should run only after js linting has passed successfully css linting should continue to run first as it runs the fastest paths should be updated to combine rules from both as needed implementation brief check pr check that js tests yml and js css lint yml have been merged together successfully no paths missing check that workflow is running test coverage the same number of tests should be run as before visual regression changes n a qa brief check that new job is working correctly on github changelog entry n a | 1 |
21,318 | 14,524,770,754 | IssuesEvent | 2020-12-14 11:55:38 | Altinn/altinn-studio | https://api.github.com/repos/Altinn/altinn-studio | closed | TTD resource group in TT02 does not follow the agreed upon naming convention | kind/bug ops/infrastructure team/infra | ## Describe the bug
A resource not following the naming conventions means we always need to make special cases when handling the resources in this resource group.
## To Reproduce
Steps to reproduce the behavior:
1. Go to '...'
2. Click on '....'
3. See error
## Expected behavior
Resource group should use the common convention ttd-
## Screenshots
> If applicable, add screenshots or animated gif to help explain your problem.
## Tasks that needs to be done on our side once this has been corrected
- [x] Setup TTD cluster for tt02 with correct naming convention (handled by infra)
- [x] Update pipelines that have logic for handling incorrect name conventions
- [x] `altinn-studio-deploy-app-image`
- [x] Investigate if other pipelines have custom logic
- [x] kuberneteswrapper
- [x] Verify that app build/deploy works as expected end-to-end for TTD @jeevananthank
## Additional info
> Add any other relevant context info about the problem here.
> For example OS (Windows, MacOS, iOS), browser (Chrome, Firefox, Safari), device (iPhone, laptop), screen-size, etc.
| 1.0 | TTD resource group in TT02 does not follow the agreed upon naming convention - ## Describe the bug
A resource not following the naming conventions means we always need to make special cases when handling the resources in this resource group.
## To Reproduce
Steps to reproduce the behavior:
1. Go to '...'
2. Click on '....'
3. See error
## Expected behavior
Resource group should use the common convention ttd-
## Screenshots
> If applicable, add screenshots or animated gif to help explain your problem.
## Tasks that need to be done on our side once this has been corrected
- [x] Setup TTD cluster for tt02 with correct naming convention (handled by infra)
- [x] Update pipelines that have logic for handling incorrect name conventions
- [x] `altinn-studio-deploy-app-image`
- [x] Investigate if other pipelines have custom logic
- [x] kuberneteswrapper
- [x] Verify that app build/deploy works as expected end-to-end for TTD @jeevananthank
## Additional info
> Add any other relevant context info about the problem here.
> For example OS (Windows, MacOS, iOS), browser (Chrome, Firefox, Safari), device (iPhone, laptop), screen-size, etc.
| infrastructure | ttd resource group in does not follow the agreed upon naming convention describe the bug a resource not following the naming conventions means we alway need to make special cases when handling the resources in this resource group to reproduce steps to reproduce the behavior go to click on see error expected behavior resource group should use the common convention ttd screenshots if applicable add screenshots or animated gif to help explain your problem tasks that needs to be done on our side once this has been corrected setup ttd cluster for with correct naming convention handled by infra update pipelines that have logic for handling incorrect name conventions altinn studio deploy app image investigate if other pipelines have custom logic kuberneteswrapper verify that app build deploy works as expected end to end for ttd jeevananthank additional info add any other relevant context info about the problem here for example os windows macos ios browser chrome firefox safari device iphone laptop screen size etc | 1 |
75,224 | 20,732,500,213 | IssuesEvent | 2022-03-14 10:43:06 | denoland/deno | https://api.github.com/repos/denoland/deno | opened | Disable snapshots when cross compiling | build suggestion | When cross-compiling, snapshots are built for the host architecture instead of the target. There is no easy workaround for this.
Unblocks our CI to produce the following targets:
- `aarch64-android-linux`
- `aarch64-unknown-linux-gnu` | 1.0 | Disable snapshots when cross compiling - When cross-compiling, snapshots are built for the host architecture instead of the target. There is no easy workaround for this.
Unblocks our CI to produce the following targets:
- `aarch64-android-linux`
- `aarch64-unknown-linux-gnu` | non_infrastructure | disable snapshots when cross compiling when cross compiling snapshots are built for the host architecture instead of the target there is no easy workaround for this unblocks our ci to produce the following targets android linux unknown linux gnu | 0 |
23,170 | 15,876,897,253 | IssuesEvent | 2021-04-09 08:57:26 | RasaHQ/rasa | https://api.github.com/repos/RasaHQ/rasa | closed | Update `pytest-timeout` usage | area:rasa-oss :ferris_wheel: area:rasa-oss/infrastructure :bullettrain_front: effort:enable-squad/2 feature:speed-up-ci :zap: type:enhancement :sparkles: | **Description of Problem**:
Our `pytest.mark.timeouts` are
* too big by default (120 seconds) to identify slow tests
* include the fixture setup time which means that if a fixture is very slow and one test fails due to a timeout, the next one is likely to fail as well
**Overview of the Solution**:
* Have a low default timeout so we can identify slow tests outside of the machine learning / integration tests quickly.
* only measure the pure test runtime and don't include the fixture setup time
**Blockers** (if relevant):
* https://github.com/RasaHQ/rasa/issues/8118 needs to be done first
**Definition of Done**:
- [ ] Set pytest.timeout to `10` for all tests except
* Core featurizers / policies
* NLU featurizers / components
* integration tests
- [ ] Make sure that every increased timeout in above exceptions (which are annotated using `@pytest.mark.timeout`) use `func_only=True`. This excludes the fixture time to be included in the timeout
- [ ] optional: Ensure that no tests outside ML + integration can have a customized `pytest.mark.timeout`
| 1.0 | Update `pytest-timeout` usage - **Description of Problem**:
Our `pytest.mark.timeouts` are
* too big by default (120 seconds) to identify slow tests
* include the fixture setup time which means that if a fixture is very slow and one test fails due to a timeout, the next one is likely to fail as well
**Overview of the Solution**:
* Have a low default timeout so we can identify slow tests outside of the machine learning / integration tests quickly.
* only measure the pure test runtime and don't include the fixture setup time
**Blockers** (if relevant):
* https://github.com/RasaHQ/rasa/issues/8118 needs to be done first
**Definition of Done**:
- [ ] Set pytest.timeout to `10` for all tests except
* Core featurizers / policies
* NLU featurizers / components
* integration tests
- [ ] Make sure that every increased timeout in above exceptions (which are annotated using `@pytest.mark.timeout`) use `func_only=True`. This excludes the fixture time to be included in the timeout
- [ ] optional: Ensure that no tests outside ML + integration can have a customized `pytest.mark.timeout`
| infrastructure | update pytest timeout usage description of problem our pytest mark timeouts are too big by default seconds to identify slow tests include the fixture setup time which means that if a fixture is very slow and one test fails due to a timeout the next one is likely to fail as well overview of the solution have a low default timeout so we can identify slow tests outside of the machine learning integration tests quickly only measure the pure test runtime and don t include the fixture setup time blockers if relevant needs to be done first definition of done set pytest timeout to for all tests except core featurizers policies nlu featurizers components integration tests make sure that every increased timeout in above exceptions which are annotated using pytest mark timeout use func only true this excludes the fixture time to be included in the timeout optional ensure that no tests outside ml integration can have a customized pytest mark timeout | 1 |
519,729 | 15,056,439,546 | IssuesEvent | 2021-02-03 20:10:57 | remnoteio/remnote-issues | https://api.github.com/repos/remnoteio/remnote-issues | closed | Same Flashcards Showing Up Twice | fixed-in-next-update fixed-in-remnote-1.2.2 priority=1 | Some flashcards repeat themselves although I had already gone through them. For example, on the Android app, I had the last card repeat itself twice and acting like the spaced repetition period didn't exist. The first time it showed it gave me the options to repeat the same material in a couple days, and then in the repeated card, it gave the choice to postpone the card up to month. I just made this card a couple days ago.
More problems like this happen where I finish my queue in the morning (I did it with my browser today) and when I switch to my phone, it gives me part of that same queue again even though I had finished it earlier that day. Note that it didn't give me the entire queue, just part of it.
Extra details (may be jumbled up and incoherent, but it might help): When entering the app, it was trying to recognize that I did some of it earlier, so it turned the 27 to 0. A little later I see that the queue shows up again and has stars next to it. I press it and it's the same queue I did earlier that day (still referring to the one that only showed part of it).
**To Reproduce**
I'm not exactly sure what triggered this, but try to switch between the Android app and the Web version to see if any inconsistencies happen.
**Expected behavior**
I expected for the material to be shown once and not multiple times in the same day.
I remember when looking at the tutorial videos and hearing something that bothered me: when you do flashcards without spaced repetition, spaced repetition still applies. That's crazy! I imagined there would be a checker in the code to see if the card has been reviewed within a certain time interval (aka the spaced repetition time). What also confuses me is how the cards in that mode can't be put into a state where spaced repetition does not affect them. At the least, you could make it so each option temporarily works like this option
. You could even temporarily remove the options altogether. Sorry for the rant, but I believe that detail has something to do with why the card didn't recognize it was being used twice. Another option would be just keep the cards from changing spaced repetition dates if they were reviewed in the same day, but that wouldn't fix the underlying problem in my opinion. Alright, that's everything to the detail.


I also have a completely different thing happening on my phone. Same thing as the first thing I said.


**Desktop:**
Chrome 84.0.4147 / Chrome OS 13099.110.0
**Smartphone:**
- Device: [Moto e6]
- OS: [Android 9]
- Android App
**Additional context**
TLDR: My phone and computer are showing two different queues at the same time. Also they're glitching so they get counted twice.
| 1.0 | Same Flashcards Showing Up Twice - Some flashcards repeat themselves although I had already gone through them. For example, on the Android app, I had the last card repeat itself twice and acting like the spaced repetition period didn't exist. The first time it showed it gave me the options to repeat the same material in a couple days, and then in the repeated card, it gave the choice to postpone the card up to month. I just made this card a couple days ago.
More problems like this happen where I finish my queue in the morning (I did it with my browser today) and when I switch to my phone, it gives me part of that same queue again even though I had finished it earlier that day. Note that it didn't give me the entire queue, just part of it.
Extra details (may be jumbled up and incoherent, but it might help): When entering the app, it was trying to recognize that I did some of it earlier, so it turned the 27 to 0. A little later I see that the queue shows up again and has stars next to it. I press it and it's the same queue I did earlier that day (still referring to the one that only showed part of it).
**To Reproduce**
I'm not exactly sure what triggered this, but try to switch between the Android app and the Web version to see if any inconsistencies happen.
**Expected behavior**
I expected for the material to be shown once and not multiple times in the same day.
I remember when looking at the tutorial videos and hearing something that bothered me: when you do flashcards without spaced repetition, spaced repetition still applies. That's crazy! I imagined there would be a checker in the code to see if the card has been reviewed within a certain time interval (aka the spaced repetition time). What also confuses me is how the cards in that mode can't be put into a state where spaced repetition does not affect them. At the least, you could make it so each option temporarily works like this option
. You could even temporarily remove the options altogether. Sorry for the rant, but I believe that detail has something to do with why the card didn't recognize it was being used twice. Another option would be just keep the cards from changing spaced repetition dates if they were reviewed in the same day, but that wouldn't fix the underlying problem in my opinion. Alright, that's everything to the detail.


I also have a completely different thing happening on my phone. Same thing as the first thing I said.


**Desktop:**
Chrome 84.0.4147 / Chrome OS 13099.110.0
**Smartphone:**
- Device: [Moto e6]
- OS: [Android 9]
- Android App
**Additional context**
TLDR: My phone and computer are showing two different queues at the same time. Also they're glitching so they get counted twice.
| non_infrastructure | same flashcards showing up twice some flashcards repeat themselves although i had already gone through them for example on the android app i had the last card repeat itself twice and acting like the spaced repetition period didn t exist the first time it showed it gave me the options to repeat the same material in a couple days and then in the repeated card it gave the choice to postpone the card up to month i just made this card a couple days ago more problems like this happen where i finish my queue in the morning i did it with my browser today and when switch to my phone it gives me part that same queue again even though i had finished it earlier that day note that it didn t give me the entire queue just part of it extra details may be jumbled up an incoherent but it might help when entering the app it was trying recognize that did some if earlier so it turned the to a little later i see that queue shows up again and has stars next to it i press it and it the same queue i did earlier that day still referring to the one only showed part of it to reproduce i m not exactly sure what triggered this but try to switch between the android app and the web version to see if any inconsistencies happen expected behavior i expected for the material to be shown once and not multiple times in the same day i remember when looking at the tutorial videos and hearing something that bothered me when you do flashcards without spaced repetition spaced repetition still applies that s crazy i imagined there would be a checker in the code to see if the card has been reviewed within a certain time interval aka the spaced repetition time what also confuses me is how the cards in that mode can t be put into a state where spaced repetition does not affect them at the least you could make it so each option temporarily works like this option you could even temporarily remove the options altogether sorry for the rant but i believe that detail has something to do with 
why the card didn t recognize it was being used twice another option would be just keep the cards from changing spaced repetition dates if they were reviewed in the same day but that wouldn t fix the underlying problem in my opinion alright that s everything to the detail i also have a completely different thing happening on my phone same thing as the first thing i said desktop chrome chrome os smartphone device os android app additional context tldr my phone and computer are showing to different queues at the same time also they re glitching so they get counted twice | 0 |
181,395 | 14,019,126,647 | IssuesEvent | 2020-10-29 17:44:54 | quarkusio/quarkus | https://api.github.com/repos/quarkusio/quarkus | closed | Root cause of exception "Failed to initialize Arc" not getting reported. | area/testing kind/bug |
All types of exceptions thrown during testing using quarkus are not getting
reported, (e.g. ClassNotFound, ServiceConfigurationError, RuntimeExceptions,
java.net.ConnectException).
Almost all errors bubble up through io.quarkus.arc.Arc.initialize(Arc.java:26)
and are caught in org.jboss.arquillian.core.impl.ObserverImpl line 89,
however at no time in the call stack is the exception information being
printed to the console. All the user sees is a test result of
[INFO] Tests run: 0, Failures: 0, Errors: 0, Skipped: 0
In addition the true cause is hidden in layers of wrapping exceptions. See
attached image () of a typical cause stack. This image was generated by
setting a breakpoint at ObserverImpl line 90 and commenting out exclude statement
in pom.xml at line 599 for test org/jboss/resteasy/test/client/InputStreamTest.java.
Test run with cmd
mvn test -Dmaven.surefire.debug
See project to reproduce https://github.com/rsearls/qurakus-test-exception-reporting.git
Follow the directions in the README to run test
Execution env.
jdk 11.0.2
mvn 3.6.0
quarkus 1.3.0.Alpha1
| 1.0 | Root cause of exception "Failed to initialize Arc" not getting reported. -
All types of exceptions thrown during testing using quarkus are not getting
reported, (e.g. ClassNotFound, ServiceConfigurationError, RuntimeExceptions,
java.net.ConnectException).
Almost all errors bubble up through io.quarkus.arc.Arc.initialize(Arc.java:26)
and are caught in org.jboss.arquillian.core.impl.ObserverImpl line 89,
however at no time in the call stack is the exception information being
printed to the console. All the user sees is a test result of
[INFO] Tests run: 0, Failures: 0, Errors: 0, Skipped: 0
In addition the true cause is hidden in layers of wrapping exceptions. See
attached image () of a typical cause stack. This image was generated by
setting a breakpoint at ObserverImpl line 90 and commenting out exclude statement
in pom.xml at line 599 for test org/jboss/resteasy/test/client/InputStreamTest.java.
Test run with cmd
mvn test -Dmaven.surefire.debug
See project to reproduce https://github.com/rsearls/qurakus-test-exception-reporting.git
Follow the directions in the README to run test
Execution env.
jdk 11.0.2
mvn 3.6.0
quarkus 1.3.0.Alpha1
| non_infrastructure | root cause of exception failed to initialize arc not getting reported all types of exceptions thrown during testing using quarkus are not getting reported e g classnotfound serviceconfigurationerror runtimeexceptions java net connectexception almost all errors bubble up through io quarkus arc arc initialize arc java and are caught in org jboss arquillian core impl observerimpl line however at no time in the call stack is the exception information being printed to the console all the user sees is a test result of tests run failures errors skipped in addition the true cause is hidden in layers of wrapping exceptions see attached image of a typical cause stack this image was generated by setting a breakpoint at observerimpl line and commenting out exclude statement in pom xml at line for test org jboss resteasy test client inputstreamtest java test run with cmd mvn test dmaven surefire debug see project to reproduce follow the directions in the readme to run test execution env jdk mvn quarkus | 0 |
32,631 | 26,850,349,128 | IssuesEvent | 2023-02-03 10:31:48 | nilearn/nilearn | https://api.github.com/repos/nilearn/nilearn | closed | SciPy installation failing in Documentation builder Action workflow | Infrastructure | The pip install of scipy started to fail in the documentation builder. I have yet to investigate, but it may be related to:
https://github.com/scipy/scipy/issues/17736
https://scipy.github.io/devdocs/dev/contributor/meson_advanced.html#select-a-different-blas-or-lapack-library
One failure log can be found here: https://github.com/nilearn/nilearn/actions/runs/4065031871/jobs/6999194786 | 1.0 | SciPy installation failing in Documentation builder Action workflow - The pip install of scipy started to fail in the documentation builder. I have yet to investigate, but it may be related to:
https://github.com/scipy/scipy/issues/17736
https://scipy.github.io/devdocs/dev/contributor/meson_advanced.html#select-a-different-blas-or-lapack-library
One failure log can be found here: https://github.com/nilearn/nilearn/actions/runs/4065031871/jobs/6999194786 | infrastructure | scipy installation failing in documentation builder action workflow the pip install of scipy started to fail in the documentation builder i have yet to investigate but may be related to one failure log can be found here | 1 |
488,015 | 14,073,532,764 | IssuesEvent | 2020-11-04 05:08:05 | tikv/tikv | https://api.github.com/repos/tikv/tikv | closed | storage::test_raft_storage::test_auto_gc, panicked at 'called `Result::unwrap()` on an `Err` value: Txn(Engine(Request(message: "stale command")))', src/libcore/result.rs:1165:5 | priority/high severity/Major sig/raft type/bug | The log is here
https://internal.pingcap.net/idc-jenkins/blue/organizations/jenkins/tikv_ghpr_test/detail/tikv_ghpr_test/9145/pipeline/142/
| 1.0 | storage::test_raft_storage::test_auto_gc, panicked at 'called `Result::unwrap()` on an `Err` value: Txn(Engine(Request(message: "stale command")))', src/libcore/result.rs:1165:5 - The log is here
https://internal.pingcap.net/idc-jenkins/blue/organizations/jenkins/tikv_ghpr_test/detail/tikv_ghpr_test/9145/pipeline/142/
| non_infrastructure | storage test raft storage test auto gc panicked at called result unwrap on an err value txn engine request message stale command src libcore result rs the log is here | 0 |
50,677 | 7,622,758,894 | IssuesEvent | 2018-05-03 13:14:29 | fga-gpp-mds/2018.1_Gerencia_mais | https://api.github.com/repos/fga-gpp-mds/2018.1_Gerencia_mais | closed | [TS28] - Automate deployment of the documents page (pages) | [Configuration] [DevOps] [Documentation] [EPS] [Organization] [Technical Stories] | **I, as** devops
**Want** to automate the deployment of the project documentation
**So that** the documentation page is always up to date. | 1.0 | [TS28] - Automate deployment of the documents page (pages) - **I, as** devops
**Want** to automate the deployment of the project documentation
**So that** the documentation page is always up to date. | non_infrastructure | automate deployment of the documents page pages i as devops want to automate the deployment of the project documentation so that the documentation page is always up to date | 0 |
3,307 | 4,212,194,726 | IssuesEvent | 2016-06-29 15:35:28 | uProxy/uproxy | https://api.github.com/repos/uProxy/uproxy | closed | Commits to dev/master that don't have a pull request should generate a warning notification | C:Infrastructure P2 | We don't want commits to master or dev that didn't go through review.
We'd like too commits/merges into master or dev to first have had a pull request.
Ideally we'd like also for the pull request to have been reviews and got an LGTM or thumbs up comment. | 1.0 | Commits to dev/master that don't have a pull request should generate a warning notification - We don't want commits to master or dev that didn't go through review.
We'd like too commits/merges into master or dev to first have had a pull request.
Ideally we'd like also for the pull request to have been reviews and got an LGTM or thumbs up comment. | infrastructure | commits to dev master that don t have a pull request should generate a warning notification we don t want commits to master or dev that didn t go through review we d like too commits merges into master or dev to first have had a pull request ideally we d like also for the pull request to have been reviews and got an lgtm or thumbs up comment | 1 |
12,376 | 9,754,764,370 | IssuesEvent | 2019-06-04 12:28:19 | SciTools/iris | https://api.github.com/repos/SciTools/iris | closed | Installing iris with pip is cumbersome (with fix at the bottom) | Type: Infrastructure | Hey guys, been a while since I spammed you with nasty stuff :grin: I am trying to think carefully how to migrate ESMValTool deps from conda to PyPi since PyPi is so much more awesome, and in the process, as a natural starter, I started with `iris>=2.2`. Since some of our deps are impossible to install from PyPi we'll still require a conda environment; installing `iris` with `pip` is actually harder and more cumbersome than I thought (sorry, am being nasty :grin: ). Here is the conda env file that I used for a **successful** installation:
```
---
name: test_basic
channels:
- conda-forge
dependencies:
#- matplotlib<3
- pip=18 # revert from 19 due to PEP517 error with cartopy
- six # will not install automatically via pip
- proj4 # will not install automatically via pip
- pyke # will not install automatically via pip
- cartopy # pip install: error: command 'gcc' failed with exit status 1
- udunits2 # libudunits2.so.0: cannot open shared object
- pip:
- scitools-iris>=2.2
```
As you can see, there are a number of iris deps that will not be automatically picked up and installed by pip and need to be installed in advance by conda; also there is [this issue](https://github.com/SciTools/cartopy/issues/1270) that forces pip to be downgraded to v18.1; and there is the issue of `udunits2` that manifests itself after the environment has successfully been created and iris installed (creeps up at the `import iris` stage).
Anything that can be done for a much smoother installation of the package? Will buy beer :beer:
NOTE: the PEP517 issue is not only for MacOSX, I used Debian/Ubuntu and ran into it | 1.0 | Installing iris with pip is cumbersome (with fix at the bottom) - Hey guys, been a while since I spammed you with nasty stuff :grin: I am trying to think carefully how to migrate ESMValTool deps from conda to PyPi since PyPi is so much more awesome, and in the process, as a natural starter, I started with `iris>=2.2`. Since some of our deps are impossible to install from PyPi we'll still require a conda environment; installing `iris` with `pip` is actually harder and more cumbersome than I thought (sorry, am being nasty :grin: ). Here is the conda env file that I used for a **successful** installation:
```
---
name: test_basic
channels:
- conda-forge
dependencies:
#- matplotlib<3
- pip=18 # revert from 19 due to PEP517 error with cartopy
- six # will not install automatically via pip
- proj4 # will not install automatically via pip
- pyke # will not install automatically via pip
- cartopy # pip install: error: command 'gcc' failed with exit status 1
- udunits2 # libudunits2.so.0: cannot open shared object
- pip:
- scitools-iris>=2.2
```
As you can see, there are a number of iris deps that will not be automatically picked up and installed by pip and need to be installed in advance by conda; also there is [this issue](https://github.com/SciTools/cartopy/issues/1270) that forces pip to be downgraded to v18.1; and there is the issue of `udunits2` that manifests itself after the environment has successfully been created and iris installed (creeps up at the `import iris` stage).
Anything that can be done for a much smoother installation of the package? Will buy beer :beer:
NOTE: the PEP517 issue is not only for MacOSX, I used Debian/Ubuntu and ran into it | infrastructure | installing iris with pip is cumbersome with fix at the bottom hey guys been a while since i spammed you with nasty stuff grin i am trying to think carefully how to migrate esmvaltool deps from conda to pypi since pypi is so much more awesome and in the process as a natural starter i started with iris since some of our deps are impossible to install from pypi we ll still require a conda environment installing iris with pip is actually harder and more cumbersome than i thought sorry am being nasty grin here is the conda env file that i used for a successful installation name test basic channels conda forge dependencies matplotlib pip revert from due to error with cartopy six will not install automatically via pip will not install automatically via pip pyke will not install automatically via pip cartopy pip install error command gcc failed with exit status so cannot open shared object pip scitools iris as you can see there are a number of iris deps that will not be automatically picked up and installed by pip and need to be installed in advance by conda also there is that forces pip to be retrograded to and there is the issue of that manifests itself after the environment has successfuly been created and iris installed creeps up at import iris stage anything that can be done for a much smoother installation of the package will buy beer beer note the issue is not only for macosx i used debian ubuntu and ran into it | 1 |
481,541 | 13,888,531,154 | IssuesEvent | 2020-10-19 06:28:28 | vanjarosoftware/Vanjaro.Platform | https://api.github.com/repos/vanjarosoftware/Vanjaro.Platform | closed | Improve loading progress bar and icon | Area: Frontend Enhancement Priority: Medium Release: Minor | 1. Implement gradient: linear-gradient(left, #777 0%, #444 100%)
2. Update progress icon with Vanjaro logo icon

 | 1.0 | Improve loading progress bar and icon - 1. Implement gradient: linear-gradient(left, #777 0%, #444 100%)
2. Update progress icon with Vanjaro logo icon

| non_infrastructure | improve loading progress bar and icon implement grodient linear gradient left update progress icon with vanjaro logo icon | 0 |
59,832 | 6,664,230,617 | IssuesEvent | 2017-10-02 19:21:05 | vmware/vic | https://api.github.com/repos/vmware/vic | reopened | Couldn't find official build of vcfvt testware on vsphere60u2 | component/test/nightly priority/high | Build# 9992
vSphere v6.0
```
KEYWORD OperatingSystem . Set Environment Variable GOVC_URL, ${vc-ip}
Documentation:
Sets an environment variable to a specified value.
Start / End / Elapsed: 20170426 08:23:02.229 / 20170426 08:23:02.231 / 00:00:00.002
08:23:02.231 FAIL Resolving variable '${vc-ip}' failed: Variable '${vc}' not found.
``` | 1.0 | Couldn't find official build of vcfvt testware on vsphere60u2 - Build# 9992
vSphere v6.0
```
KEYWORD OperatingSystem . Set Environment Variable GOVC_URL, ${vc-ip}
Documentation:
Sets an environment variable to a specified value.
Start / End / Elapsed: 20170426 08:23:02.229 / 20170426 08:23:02.231 / 00:00:00.002
08:23:02.231 FAIL Resolving variable '${vc-ip}' failed: Variable '${vc}' not found.
``` | non_infrastructure | couldn t find official build of vcfvt testware on build vsphere keyword operatingsystem set environment variable govc url vc ip documentation sets an environment variable to a specified value start end elapsed fail resolving variable vc ip failed variable vc not found | 0 |
821,196 | 30,810,319,817 | IssuesEvent | 2023-08-01 09:56:37 | GSS-Cogs/dd-cms | https://api.github.com/repos/GSS-Cogs/dd-cms | closed | Investigate How to Update Logging Messages in Plone or SPARQL Data Connector | spike high priority | There is a lack of detail in CMS log messages
This issue is to investigate how to update logging messages in Plone or the SPARQL data connector and test the Plone log levels to see if the level should change
Tasks:
- [ ] Understand the [add-on](https://training.plone.org/mastering-plone-5/add-ons.html) architecture in Plone
- [ ] Identify whether the SPARQL Data Connector exists as code in the dd-cms repository we can change, or whether it is a dependency we pull in
- [ ] Prove your local environment outputs the [current logging](https://console.cloud.google.com/logs/query;query=resource.labels.cluster_name%3D%22staging%22%0Aresource.type%3D%22k8s_container%22;cursorTimestamp=2023-07-14T08:45:09.146203144Z;startTime=2023-07-14T08:45:00.000Z;endTime=2023-07-14T08:45:30.000Z?project=optimum-bonbon-257411) when loading charts (by loading a dashboard for example). Specifically, the only existing log line that mentions SPARQL is : `WARNING [eea.restapi:19][waitress-0] func:'_get_data' args:[(<ukstats.sparql_dataconnector.adapter.SPARQLDataProviderForConnectors object at 0x7fc590080250>,), {}] took: 0.7701 sec"`
- [ ] Find a way to log out some detail of the SPARQL query. This could be at the restapi layer, if the data connector code is out of our control, or preferably within the data connector add-on if possible.
- [ ] Ideally, we want to log the Title and SPARQL Endpoint URL from the data connector object

| 1.0 | non_infrastructure | 0 |