Dataset schema (column, dtype, value range or class count):

| Column | Dtype | Values |
|---|---|---|
| Unnamed: 0 | int64 | 0 to 832k |
| id | float64 | 2.49B to 32.1B |
| type | string | 1 class |
| created_at | string | lengths 19 to 19 |
| repo | string | lengths 7 to 112 |
| repo_url | string | lengths 36 to 141 |
| action | string | 3 classes |
| title | string | lengths 1 to 744 |
| labels | string | lengths 4 to 574 |
| body | string | lengths 9 to 211k |
| index | string | 10 classes |
| text_combine | string | lengths 96 to 211k |
| label | string | 2 classes |
| text | string | lengths 96 to 188k |
| binary_label | int64 | 0 to 1 |

In every row, text_combine is the title and body joined with " - ", and text is the same content lower-cased with numbers, URLs, and punctuation stripped. Sample rows follow.
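A minimal sketch for loading and sanity-checking a table with this schema, assuming it is stored as a CSV file named issues.csv (the file name is an assumption, not from the source):

```python
import pandas as pd

# Load the dataset; "issues.csv" is a hypothetical file name.
df = pd.read_csv("issues.csv")

print(df.dtypes)                          # should match the schema above
print(df["label"].value_counts())         # process vs. non_process counts
print(df["binary_label"].value_counts())  # 1 vs. 0 counts
```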
Row 15,584 · id 19,706,219,438 · IssuesEvent · 2022-01-12 22:23:10 · action: closed
Repo: googleapis/google-cloud-ruby (https://api.github.com/repos/googleapis/google-cloud-ruby)
Title: Add firewall samples to google-cloud-compute-v1
Labels: type: process samples
Body:
Thanks for stopping by to let us know something could be better!
**PLEASE READ**: If you have a support contract with Google, please create an issue in the [support console](https://cloud.google.com/support/) instead of filing on GitHub. This will ensure a timely response.
**Is your feature request related to a problem? Please describe.**
Add firewall samples for google-cloud-compute-v1 to show how to use the API.
Index: 1.0 · Label: process
Text:
add firewall samples to google cloud compute thanks for stopping by to let us know something could be better please read if you have a support contract with google please create an issue in the instead of filing on github this will ensure a timely response is your feature request related to a problem please describe add firewall samples for google cloud compute to show how to use the api
Binary label: 1
Row 76,128 · id 9,382,409,633 · IssuesEvent · 2019-04-04 22:20:20 · action: closed
Repo: byucs340ta/Winter2019 (https://api.github.com/repos/byucs340ta/Winter2019)
Title: Team 11: Registering with a short username like "dory" yielded error message "only letters and numbers"
Labels: P4: Aesthetic or Design Flaw Team 11
Body:
The error was not descriptive or precise
Index: 1.0 · Label: non_process
Text:
team registering with a short username like dory yeilded error message only letters and numbers the error was not descriptive or precise
Binary label: 0
Row 586,643 · id 17,594,171,784 · IssuesEvent · 2021-08-17 01:03:09 · action: opened
Repo: shoepro/server (https://api.github.com/repos/shoepro/server)
Title: [Feat]: Create cache for auth using redis
Labels: Priority: High 2.0h Feature Server
Body:
### ISSUE
- Group: `Server`
- Type: `Feat`
- Time: `2.0h`
- Priority: `High`
### TODO
1. [ ] Install redis and bluebird
2. [ ] Create user cache for auth using redis
3. [ ] Apply user cache instead of user repo
Index: 1.0 · Label: non_process
Text:
create cache for auth using redis issue group server type feat time priority high todo install redis and bluebird create user cache for auth using redis apply user cache instead user repo
Binary label: 0
Row 34,062 · id 12,237,743,650 · IssuesEvent · 2020-05-04 18:33:33 · action: opened
Repo: ignatandrei/console_to_saas (https://api.github.com/repos/ignatandrei/console_to_saas)
Title: CVE-2020-7608 (High) detected in yargs-parser-9.0.2.tgz
Labels: security vulnerability
Body:
## CVE-2020-7608 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>yargs-parser-9.0.2.tgz</b></p></summary>
<p>the mighty option parser used by yargs</p>
<p>Library home page: <a href="https://registry.npmjs.org/yargs-parser/-/yargs-parser-9.0.2.tgz">https://registry.npmjs.org/yargs-parser/-/yargs-parser-9.0.2.tgz</a></p>
<p>Path to dependency file: /tmp/ws-scm/console_to_saas/print/package.json</p>
<p>Path to vulnerable library: /tmp/ws-scm/console_to_saas/print/node_modules/yargs-parser/package.json</p>
<p>
Dependency Hierarchy:
- csv2md-1.0.1.tgz (Root Library)
- yargs-11.1.1.tgz
- :x: **yargs-parser-9.0.2.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/ignatandrei/console_to_saas/commit/e88b87616d135a822e33bdc35d9eb66662e231f7">e88b87616d135a822e33bdc35d9eb66662e231f7</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
yargs-parser could be tricked into adding or modifying properties of Object.prototype using a "__proto__" payload.
<p>Publish Date: 2020-03-16
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-7608>CVE-2020-7608</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: High
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-7608">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-7608</a></p>
<p>Release Date: 2020-03-16</p>
<p>Fix Resolution: v18.1.1;13.1.2;15.0.1</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
Index: True · Label: non_process
Text:
cve high detected in yargs parser tgz cve high severity vulnerability vulnerable library yargs parser tgz the mighty option parser used by yargs library home page a href path to dependency file tmp ws scm console to saas print package json path to vulnerable library tmp ws scm console to saas print node modules yargs parser package json dependency hierarchy tgz root library yargs tgz x yargs parser tgz vulnerable library found in head commit a href vulnerability details yargs parser could be tricked into adding or modifying properties of object prototype using a proto payload publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact high availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with whitesource
Binary label: 0
Row 16,444 · id 21,317,543,367 · IssuesEvent · 2022-04-16 14:49:47 · action: closed
Repo: PyCQA/pylint (https://api.github.com/repos/PyCQA/pylint)
Title: 🔍 Improve CPU count detection in cgroup environments
Labels: Enhancement ✨ High Effort 🏋 topic-multiprocessing
Body:
### Current problem
PyLint [currently uses](https://github.com/PyCQA/pylint/blob/8c0062f5ac0cfd80a74568e36d1e68e5a128c7f5/pylint/lint/run.py#L22-L30) `sched_getaffinity` or `multiprocessing.cpu_count` to identify the number of CPUs available to the system, falling back to `1` if neither function is available. This doesn't work properly in cgroup environments, where CPU limits may be imposed as a fraction of total CPU time available to the set of processes.
### Desired solution
It would be great if PyLint were to automatically detect the fraction of CPUs available to it when determining how many subprocesses to launch. Currently in our Kubernetes-based CI system, PyLint is launching 16 processes which all compete for a single CPU's worth of time. This ultimately slows down the checking process, which we want to be fast 🙂
Alternatively, it would be great to be able to set an environment variable to control the number of subprocesses to launch. This would make it possible for us to set the number of subprocesses in the same place where we set the number of CPUs available to the CI process.
As a workaround, we're going to modify our launching script to compute the fraction of CPUs available to the process and pass that using the `-j` argument to PyLint.
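A minimal sketch of that workaround, assuming cgroup v1 quota files (on cgroup v2 the limit lives in cpu.max instead); all names are illustrative:

```python
import math
import os

def available_cpus() -> int:
    """CPUs actually usable under a cgroup v1 CPU quota, else the host count."""
    try:
        with open("/sys/fs/cgroup/cpu/cpu.cfs_quota_us") as f:
            quota = int(f.read())
        with open("/sys/fs/cgroup/cpu/cpu.cfs_period_us") as f:
            period = int(f.read())
        if quota > 0 and period > 0:
            # quota/period is the granted CPU fraction, e.g. 50000/100000 = 0.5 CPUs.
            return max(1, math.floor(quota / period))
    except OSError:
        pass  # quota files absent: not limited by a cgroup v1 quota
    return len(os.sched_getaffinity(0))

print(available_cpus())  # e.g. pass this value to PyLint via -j
```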
### Additional context
Determining the amount of CPU available to a Python program is a somewhat widespread problem, here are some related resources:
* Python issue requesting this ability in the standard library: https://bugs.python.org/issue36054
* Message showing how to manually query cgroup quotas: https://bugs.python.org/msg353690
* Conan PR using similar logic to load cgroup quota information: https://github.com/conan-io/conan/pull/5466/files
* Utility using `LD_PRELOAD` to hijack CPU count requests to return computed CPU availability: https://github.com/agile6v/container_cpu_detection
Index: 1.0 · Label: process
Text:
🔍 improve cpu count detection in cgroup environments current problem pylint sched getaffinity or multiprocessing cpu count to identify the number of cpus available to the system falling back to if neither function is available this doesn t work properly in cgroup environments where cpu limits may be imposed as a fraction of total cpu time available to the set of processes desired solution it would be great if pylint were to automatically detect the fraction of cpus available to it when determining how many subprocesses to launch currently in our kubernetes based ci system pylint is launching processes which all compete for a single cpu s worth of time this ultimately slows down the checking process which we want to be fast 🙂 alternatively it would be great to be able to set an environment variable to control the number of subprocesses to launch this would make it possible for us set the number of subprocesses in the same place where we set the number of cpus available to the ci process as a workaround we re going to modify our launching script to compute the fraction of cpus available to the process and pass that using the j argument to pylint additional context determining the amount of cpu available to a python program is a somewhat widespread problem here are some related resources python issue requesting this ability in the standard library message showing how to manually query cgroup quotas conan pr using similar logic to load cgroup quota information utility using ld preload to hijack cpu count requests to return computed cpu availability
Binary label: 1
Row 80,727 · id 15,559,174,154 · IssuesEvent · 2021-03-16 11:10:15 · action: opened
Repo: mozilla/addons-server (https://api.github.com/repos/mozilla/addons-server)
Title: extensionworkshop.com links without UTM params
Labels: component: code quality contrib: maybe good first bug priority: p5
Body:
From searching for #16744 I realised we have a number of links to extensionworkshop.com that don't have any UTM param
@caitmuenster should they be added to all of these? And if so, can you provide them?
```
olympia • src\olympia\devhub\templates\devhub\agreement.html:
41 </form>
42: <p><a href="https://extensionworkshop.com/documentation/publish/developer-accounts/">{{ _('More information on Developer Accounts') }}</a></p>
olympia • src\olympia\lib\crypto\tasks.py:
44 ---
45: [1] https://extensionworkshop.com/documentation/publish/signing-and-distribution-overview/ # noqa
46 [2] https://wiki.mozilla.org/Release_Management/Calendar
olympia • src\olympia\devhub\templates\devhub\emails\submission.txt:
4
5: For more information about the review process and policies, please visit https://extensionworkshop.com/documentation/publish/add-on-policies/
6
12
13: To learn how to help users discover your extension, stay up-to-date with news from the add-ons community, or contact the add-ons team, please visit https://extensionworkshop.com/documentation/manage/resources-for-publishers/
14
olympia • src\olympia\devhub\templates\devhub\addons\submit\describe_minimal.html:
26 <span class="req">{{ _('Remember') }}</span>:
27: {% trans policy_requirements_open='<a href="https://extensionworkshop.com/documentation/publish/source-code-submission/">'|safe, policy_requirements_close='</a>'|safe %}
28 If you submitted source code, but did not include instructions, you must provide them here.
olympia • src\olympia\devhub\templates\devhub\addons\submit\describe.html:
222 <span class="req">{{ _('Remember') }}</span>:
223: {% trans policy_requirements_open='<a href="https://extensionworkshop.com/documentation/publish/source-code-submission">'|safe, policy_requirements_close='</a>'|safe %}
224 If you submitted source code, but did not include instructions, you must provide them here.
olympia • src\olympia\devhub\templates\devhub\addons\submit\distribute.html:
16 <p>{{ distribution_form.channel.errors }}</p>
17: <p><a href="https://extensionworkshop.com/documentation/publish/signing-and-distribution-overview/"
18 target="_blank" rel="noopener noreferrer">
olympia • src\olympia\devhub\templates\devhub\addons\submit\source.html:
19 <p class="list-header">
20: {% trans a_attrs = 'href="https://extensionworkshop.com/documentation/publish/source-code-submission/" target="_blank" rel="noopener noreferrer"'|safe %}
21 Please review the <a {{ a_attrs }}>source code submission policy</a>.
53 <p class="instruction-emphasis list-header">
54: {% trans a_attrs = 'href="https://extensionworkshop.com/documentation/publish/source-code-submission/" target="_blank" rel="noopener noreferrer"'|safe %}
55 The source code must meet <a {{ a_attrs }}>policy requirements</a>, which includes:
olympia • src\olympia\devhub\templates\devhub\emails\submission.html:
4
5: <p>For more information about the review process and policies, please visit <a href="https://extensionworkshop.com/documentation/publish/add-on-policies/">https://extensionworkshop.com/documentation/publish/add-on-policies/</a>.</p>
6
12
13: <p>To learn how to help users discover your extension, stay up-to-date with news from the add-ons community, or contact the add-ons team, please visit <a href="https://extensionworkshop.com/documentation/manage/resources-for-publishers/">https://extensionworkshop.com/documentation/manage/resources-for-publishers/</a>.</p>>
14
olympia • src\olympia\devhub\templates\devhub\new-landing\components\banner.html:
12 </p>
13: <a href="{{ settings.EXTENSION_WORKSHOP_URL }}">{{ _('Visit the Extension Workshop') }}</a>
14 </div>olympia • src\olympia\devhub\templates\devhub\new-landing\components\overview.html:
10 </p>
11: <a href="{{ settings.EXTENSION_WORKSHOP_URL }}" class="Button Button--primary">{{ _('Learn how to make an extension') }}</a>
12 </div>
olympia • src\olympia\stats\templates\stats\reports\campaigns.html:
5 {% block stats_note_link %}
6: <a href="{{ settings.EXTENSION_WORKSHOP_URL }}/documentation/manage/monitoring-extension-usage-statistics/" target="_blank" rel="noopener noreferrer">
7 {{ _('About tracking external sources...') }}
olympia • src\olympia\stats\templates\stats\reports\contents.html:
5 {% block stats_note_link %}
6: <a href="{{ settings.EXTENSION_WORKSHOP_URL }}/documentation/manage/monitoring-extension-usage-statistics/" target="_blank" rel="noopener noreferrer">
7 {{ _('About tracking external sources...') }}
olympia • src\olympia\stats\templates\stats\reports\mediums.html:
5 {% block stats_note_link %}
6: <a href="{{ settings.EXTENSION_WORKSHOP_URL }}/documentation/manage/monitoring-extension-usage-statistics/" target="_blank" rel="noopener noreferrer">
7 {{ _('About tracking external sources...') }}
olympia • src\olympia\stats\templates\stats\reports\sources.html:
5 {% block stats_note_link %}
6: <a href="{{ settings.EXTENSION_WORKSHOP_URL }}/documentation/manage/monitoring-extension-usage-statistics/" target="_blank" rel="noopener noreferrer">
7 {{ _('About tracking external sources...') }}
```
Note the mix of `settings.EXTENSION_WORKSHOP_URL` and the literal url. IMO we should update all the links to use the setting if we're taking the time to make a change.
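An illustrative helper for appending the missing UTM parameters (the parameter names and default values below are examples, not the real campaign values):

```python
from urllib.parse import parse_qsl, urlencode, urlparse, urlunparse

def with_utm(url: str, source: str = "example-source", medium: str = "referral") -> str:
    # Append utm_source/utm_medium unless the URL already carries them.
    parts = urlparse(url)
    query = dict(parse_qsl(parts.query))
    query.setdefault("utm_source", source)
    query.setdefault("utm_medium", medium)
    return urlunparse(parts._replace(query=urlencode(query)))

print(with_utm("https://extensionworkshop.com/documentation/publish/add-on-policies/"))
```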
Index: 1.0 · Label: non_process
Text:
extensionworkshop com links without utm params from searching for i realised we have a number of links to extensionworkshop com that don t have any utm param caitmuenster should they be added to all of these and if so can you provide them olympia • src olympia devhub templates devhub agreement html olympia • src olympia lib crypto tasks py noqa olympia • src olympia devhub templates devhub emails submission txt for more information about the review process and policies please visit to learn how to help users discover your extension stay up to date with news from the add ons community or contact the add ons team please visit olympia • src olympia devhub templates devhub addons submit describe minimal html remember trans policy requirements open safe if you submitted source code but did not include instructions you must provide them here olympia • src olympia devhub templates devhub addons submit describe html remember trans policy requirements open safe if you submitted source code but did not include instructions you must provide them here olympia • src olympia devhub templates devhub addons submit distribute html distribution form channel errors a href target blank rel noopener noreferrer olympia • src olympia devhub templates devhub addons submit source html trans a attrs href target blank rel noopener noreferrer safe please review the source code submission policy trans a attrs href target blank rel noopener noreferrer safe the source code must meet policy requirements which includes olympia • src olympia devhub templates devhub emails submission html for more information about the review process and policies please visit a href to learn how to help users discover your extension stay up to date with news from the add ons community or contact the add ons team please visit a href olympia • src olympia devhub templates devhub new landing components banner html visit the extension workshop olympia • src olympia devhub templates devhub new landing components overview html learn how to make an extension olympia • src olympia stats templates stats reports campaigns html block stats note link about tracking external sources olympia • src olympia stats templates stats reports contents html block stats note link about tracking external sources olympia • src olympia stats templates stats reports mediums html block stats note link about tracking external sources olympia • src olympia stats templates stats reports sources html block stats note link about tracking external sources note the mix of settings extension workshop url and the literal url imo we should update all the links to use the setting if we re taking the time to make a change
Binary label: 0
Row 319,092 · id 9,739,108,827 · IssuesEvent · 2019-06-01 08:12:39 · action: closed
Repo: WoWManiaUK/Blackwing-Lair (https://api.github.com/repos/WoWManiaUK/Blackwing-Lair)
Title: [Spell] [Inscription: Darkmoon Card of Destruction]
Labels: Fixed Confirmed Fixed in Dev Priority-High Profession
Body:
**Links:**
https://www.wowhead.com/spell=86615/darkmoon-card-of-destruction
https://www.wowhead.com/item=61987/darkmoon-card-of-destruction
**What is happening:**
creates this item darkmoon-card-of-destruction
**What should happen:**
should create a random card from one of these decks:
[Earthquake Deck]
https://www.wowhead.com/item=62046/earthquake-deck
[Hurricane Deck]
https://www.wowhead.com/item=62045/hurricane-deck
[Tsunami Deck]
https://www.wowhead.com/item=62044/tsunami-deck
[Volcanic Deck]
https://www.wowhead.com/item=62021/volcanic-deck
Index: 1.0 · Label: non_process
Text:
links what is happening creates this item darkmoon card of destruction what should happen shoud create a rnd card of a deck
Binary label: 0
Row 39,344 · id 5,231,251,107 · IssuesEvent · 2017-01-30 00:51:36 · action: closed
Repo: CiscoDevNet/ydk-gen (https://api.github.com/repos/CiscoDevNet/ydk-gen)
Title: Generate model exercising test generator
Labels: enhancement in progress testing
Body:
This task will generate a unit test that creates an object graph that is part of an RPC payload, serializes the RPC, then deserializes the RPC and extracts the object graph. The two object graphs are then compared to ensure that they are the same.
Note this tests
1. The encoder/decoder logic
2. Exercises all the generated API, making sure that it conforms to Python syntax
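A minimal sketch of the round-trip pattern this describes, using trivial JSON stand-ins for the real RPC encoder/decoder (all names are illustrative):

```python
import json
import unittest

def encode_rpc(graph: dict) -> str:
    return json.dumps(graph)  # stand-in for the real RPC serializer

def decode_rpc(payload: str) -> dict:
    return json.loads(payload)  # stand-in for the real RPC deserializer

class RoundTripTest(unittest.TestCase):
    def test_object_graph_round_trip(self):
        original = {"interface": {"name": "eth0", "mtu": 1500}}  # stand-in object graph
        restored = decode_rpc(encode_rpc(original))
        self.assertEqual(original, restored)  # the two graphs must compare equal

if __name__ == "__main__":
    unittest.main()
```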
Index: 1.0 · Label: non_process
Text:
generate model exercising test generator this task will generate a unit test that creates an object graph that is part of a payload of an rpc serializes the rpc then deserailizes the rpc and extracts the object graph the object graphs are then compared to ensure that they are the same note this tests the encoder decoder logic exercises all the generated api making sure that it confirms to python syntax
Binary label: 0
Row 1,319 · id 3,870,409,011 · IssuesEvent · 2016-04-11 03:18:27 · action: opened
Repo: agile-alliance-brazil/submissions (https://api.github.com/repos/agile-alliance-brazil/submissions)
Title: Show reviewer terms after reviewers accepted it
Labels: Enhancement Low Review Process
Body:
Currently, reviewers only see the terms of agreement when they accept the invitation to be a reviewer. Once they've accepted, that text is never shown again.
It would be great to keep it available as a publicly available and easily accessible page.
Index: 1.0 · Label: process
Text:
show reviewer terms after reviewers accepted it currently reviewer only see the terms of agreement as a reviewer when they accept the invitation to be a reviewer once they ve accepted that text is never shown again it would be great to keep it available as a publicly available and easily accessible page
Binary label: 1
Row 776,037 · id 27,244,253,378 · IssuesEvent · 2023-02-21 23:49:00 · action: opened
Repo: noisy/portfolio (https://api.github.com/repos/noisy/portfolio)
Title: Wrong hover behavior for blog's link "How I bankrupt" in project "SpisTresci"
Labels: bug Priority: Minor
Body:
There's a link to the blog post "How I bankrupt" on the "SpisTresci" project page. The thumbnail's hover behavior differs from the same thumbnail's behavior on the blog page.
Steps:
1. Open page https://krzysztofszumny.com/project/spistresci
2. Find section `Not understanding what MVP truly means`
3. Post's thumbnail is in this section
Expected effect: Hover effect is the same as on page https://krzysztofszumny.com/blog
Actual effect: Thumbnail doesn't get darker and button `Read more` is always visible.
Video help:
[Krzysztof-s-Portfolio.webm](https://user-images.githubusercontent.com/1151664/220482921-2a66d863-2524-46a9-8ef8-6d4f8ad339c1.webm)
Desktop:
Chrome 110.0.5481.100
Ubuntu 20.04.4 LTS
Index: 1.0 · Label: non_process
Text:
wrong hover behavior for blog s link how i bankrupt in project spistresci there s a link to blog s post how i bankrupt in spistresci project s page thumbnail s hover behavior is different from the same s thumbnail in blog page steps open page find section not understanding what mvp truly means post s thumbnail is in this section expected effect hover effect is the same as on page actual effect thumbnail doesn t get darker and button read more is always visible video help desktop chrome ubuntu lts
Binary label: 0
Row 19,809 · id 26,198,928,564 · IssuesEvent · 2023-01-03 15:46:25 · action: closed
Repo: scikit-learn/scikit-learn (https://api.github.com/repos/scikit-learn/scikit-learn)
Title: FunctionTransformer does not handle object dtype when check_inverse is True
Labels: Bug module:preprocessing
Body:
#### Describe the bug
FunctionTransformer can process a feature made out of strings in a DataFrame. However, one has to disable the inverse_transform checking, otherwise np.allclose is called on the dataframes, causing a crash.
I believe that the behavior should be consistent: either string DataFrames are not supported, and it should crash, or they are. It should not depend on the value of check_inverse.
#### Steps/Code to Reproduce
Example:
```python
from sklearn.preprocessing import FunctionTransformer
import pandas as pd
import numpy as np
mapping = {'uno': 1, 'dos':2, 'quattro':4}
inv_mapping = {1: 'uno', 2:'dos', 4:'quattro'}
data = pd.DataFrame({'value': ['uno', 'dos', 'quattro', 'uno', 'uno']})
transformer = FunctionTransformer(
lambda X: np.array([mapping[x] for x in X['value']]),
lambda X: pd.DataFrame({'value': np.array([inv_mapping[x] for x in X])}),
validate=False, check_inverse=False)
print("It works:")
print(transformer.inverse_transform(transformer.fit_transform(data)))
transformer = FunctionTransformer(
lambda X: np.array([mapping[x] for x in X['value']]),
lambda X: pd.DataFrame({'value': np.array([inv_mapping[x] for x in X])}),
validate=False)
print("It crashes:")
print(transformer.inverse_transform(transformer.fit_transform(data)))
```
#### Expected Results
I expect no error to be raised.
#### Actual Results
An error is raised when check_inverse=True
#### Versions
```
System:
python: 3.6.12 (default, Dec 02 2020, 09:44:23) [GCC]
executable: /home/aabraham/py36/bin/python3
machine: Linux-5.3.18-lp152.69-default-x86_64-with-glibc2.3.4
Python dependencies:
pip: 21.0.1
setuptools: 44.1.1
sklearn: 0.24.1
numpy: 1.19.5
scipy: 1.5.4
Cython: 0.29.21
pandas: 1.1.5
matplotlib: 3.3.3
joblib: 0.17.0
threadpoolctl: 2.1.0
Built with OpenMP: True
```
Index: 1.0 · Label: process
Text:
functiontransformer does not handle object dtype when check inverse is true describe the bug functiontransformer can process a feature made out of strings in a dataframe however one has to disable the inverse transform checking otherwise np allclose is called on the dataframes causing a crash i believe that the behavior should be consistent either string dataframes are not supported and it should crash or they are it should not depend on the value of check inverse steps code to reproduce example python from sklearn preprocessing import functiontransformer import pandas as pd import numpy as np mapping uno dos quattro inv mapping uno dos quattro data pd dataframe value transformer functiontransformer lambda x np array for x in x lambda x pd dataframe value np array for x in x validate false check inverse false print it works print transformer inverse transform transformer fit transform data transformer functiontransformer lambda x np array for x in x lambda x pd dataframe value np array for x in x validate false print it crashes print transformer inverse transform transformer fit transform data expected results i expect no error to be raised actual results en error is raised when check inverse true versions system python default dec executable home aabraham bin machine linux default with python dependencies pip setuptools sklearn numpy scipy cython pandas matplotlib joblib threadpoolctl built with openmp true
Binary label: 1
Row 364,674 · id 25,497,590,519 · IssuesEvent · 2022-11-27 21:30:19 · action: closed
Repo: quickwit-oss/quickwit (https://api.github.com/repos/quickwit-oss/quickwit)
Title: Integrate MZ feedback on Quickwit tutorial
Labels: bug documentation
Body:
We received email feedback on several papercuts in the tutorials.
- #1421
- #1422
- #1423
Index: 1.0 · Label: non_process
Text:
integrate mz feedback on quickwit tutorial we received an email feedback on different papercuts in the tutorials
Binary label: 0
Row 551,145 · id 16,163,592,612 · IssuesEvent · 2021-05-01 04:39:06 · action: closed
Repo: thzidaan/Web-Scraper (https://api.github.com/repos/thzidaan/Web-Scraper)
Title: [Dev Task] Output the scraped results for Amazon in Excel
Labels: Dev Task Priority (High) Risk (Low)
Body:
**User Story**
[Scraping Amazon](https://github.com/thzidaan/Web-Scraper/issues/4)
**Details**
Have the output file be compatible with Excel
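One common way to produce such a file, sketched with pandas (assumes the openpyxl engine is installed; the column names and file name are illustrative):

```python
import pandas as pd

# Hypothetical scraped results; the real fields depend on the scraper.
results = [
    {"title": "Example product", "price": 19.99, "rating": 4.5},
]
pd.DataFrame(results).to_excel("amazon_results.xlsx", index=False)
```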
**Story Points**
Story Point : 1
Index: 1.0 · Label: non_process
Text:
output the scraped results for amazon in excel user story details have the output file that is compatible with excel story points story point
Binary label: 0
Row 8,246 · id 11,421,368,976 · IssuesEvent · 2020-02-03 12:02:25 · action: closed
Repo: parcel-bundler/parcel (https://api.github.com/repos/parcel-bundler/parcel)
Title: HMR for CSS doesn't refresh on subsequent entry pages and can also throw an error
Labels: :bug: Bug :pray: Help Wanted CSS Preprocessing HMR Stale
Body:
# 🐛 bug report
HMR for CSS doesn't refresh on subsequent entry pages and can also throw an error.
## 🎛 Configuration (.babelrc, package.json, cli command)
I created a repository to reproduce the error: https://github.com/gregorybolkenstijn/parcel-issue-hmr
Steps to reproduce the issue:
1. Clone the repository and run `yarn`
2. Run `yarn start`
3. In `main.scss` change the h1 background color to something else like blue.
4. The h1 background color gets refreshed via HMR
5. Navigate to index2.html
6. In `main.scss` change the h1 background color to something else like orange.
7. No visible changes, you have to reload to see the changes
I've noticed changes in HTML and JS do show, only CSS changes don't. I've seen that CSS HMR only works on the first file Parcel encounters. If you run `yarn start-glob`, you'll notice the CSS changes only show on home.html, not index.html or index2.html. Probably because home.html comes first alphabetically.
I have this problem as well in a more complex project with PostHTML and PostCSS transforms. The same setup is throwing an error instead of not refreshing the CSS:
```
app.ef966a9d.js:39 Uncaught Error: Cannot find module '../../../node_modules/parcel-bundler/src/builtins/css-loader.js'
at newRequire (app.ef966a9d.js:39)
at newRequire (app.ef966a9d.js:23)
at localRequire (app.ef966a9d.js:54)
at Object.eval (eval at hmrApply (app.js:1), <anonymous>:4:17)
at newRequire (app.ef966a9d.js:48)
at hmrAccept (app.js:1)
at app.js:1
at Array.forEach (<anonymous>)
at WebSocket.ws.onmessage (app.js:1)
```
## 🤔 Expected Behavior
Hot reloading works on all pages for all HTML, CSS/SCSS and JS changes.
## 😯 Current Behavior
Doesn't refresh on subsequent pages for CSS/SCSS changes.
## 💻 Code Sample
https://github.com/gregorybolkenstijn/parcel-issue-hmr
## 🌍 Your Environment
<!--- Include as many relevant details about the environment you experienced the bug in -->
| Software | Version(s) |
| ---------------- | ---------- |
| Parcel | 1.9.7 |
| Node | 10.8.0 |
| npm/Yarn | 6.2.0 / 1.9.2 |
| Operating System | 6.2.0 / 1.9.2 |
Index: 1.0 · Label: process
Text:
hmr for css doesn t refresh on subsequent entry pages and can also throw an error 🐛 bug report hmr for css doesn t refresh on subsequent entry pages and can also throw an error 🎛 configuration babelrc package json cli command i created a repository to reproduce the error steps to reproduce the issue clone the repository and run yarn run yarn start in main scss change the background color to something else like blue the background color gets refreshed via hmr navigate to html in main scss change the background color to something else like orange no visible changes you have to reload to see the changes i ve noticed changes in html and js do show only css changes don t i ve seen that css hmr only works on the first file parcel encounters if you run yarn start glob you ll notice the css changes only show on home html not index html or html probably because home html comes first alphabetically i have this problem as well in a more complex project with posthtml and postcss transforms the same setup is throwing an error instead of not refreshing the css app js uncaught error cannot find module node modules parcel bundler src builtins css loader js at newrequire app js at newrequire app js at localrequire app js at object eval eval at hmrapply app js at newrequire app js at hmraccept app js at app js at array foreach at websocket ws onmessage app js 🤔 expected behavior hot reloading works on all pages for all html css scss and js changes 😯 current behavior doesn t refresh on subsequent pages for css scss changes 💻 code sample 🌍 your environment software version s parcel node npm yarn operating system
Binary label: 1
Row 10,251 · id 13,105,149,217 · IssuesEvent · 2020-08-04 11:38:43 · action: closed
Repo: Explore-AI/test-repo (https://api.github.com/repos/Explore-AI/test-repo)
Title: e
Labels: bug content-type:pre-processing student-submitted unread
Body:
Content Type: Pre-Processing
Content Name: e
Problem: Broken link/could not access the content
Additional Details
Broken link/could not access the content
Supporting Files:
Reported by: jhc@jhc.jhc
Index: 1.0 · Label: process
Text:
e content type pre processing content name e problem broken link could not access the content additional details broken link could not access the content supporting files reported by jhc jhc jhc
Binary label: 1
Row 1,167 · id 3,655,929,328 · IssuesEvent · 2016-02-17 17:57:56 · action: closed
Repo: nodejs/node (https://api.github.com/repos/nodejs/node)
Title: Uncaught exceptions in Promises inherently dangerous?
Labels: process question
Body:
I have been reading a lot about Node uncaught exception handling. Information in the docs [here](https://nodejs.org/api/process.html#process_event_uncaughtexception) talks about logging uncaught errors and immediately exiting the process. I also found similar information in the [domain docs](https://nodejs.org/api/domain.html#domain_warning_don_t_ignore_errors). Essentially, your Node process is in an unknown state and the only safe thing to do is restart.
I then discovered a related issue discussion about the dangers of `process.on('uncaughtException'`
https://github.com/nodejs/node-v0.x-archive/issues/2582
Since Promises catch errors (known and unknown) and there are no guards on `Promise.prototype.catch` to filter error types, any Promise code, even with a `.catch`, could be leaking references similar to the issues with `domain` and `uncaughtException`
There has been a recent discussion here about what to do on `process.on('unhandledRejection'`
https://github.com/nodejs/node/issues/830
It appears this issue is bigger than just unhandled rejections. All Promise code in Node appears to be dangerous? Or am I missing something?
Index: 1.0 · Label: process
Text:
uncaught exceptions in promises inherently dangerous i have been reading a lot about node uncaught exception handling information in the docs talks about logging uncaught errors and immediately exiting the process i also found similar information in the essentially your node process is in an unknown state and the only safe thing to do is restart i then discovered a related issue discussion about the dangers of process on uncaughtexception since promises catch errors known and unknown and there are no guards on promise prototype catch to filter error types any promise code even with a catch could be leaking references similar to the issues with domain and uncaughtexception there has been a recent discussion here about what to do on process on unhandledrejection it appears this issue is bigger than just unhandled rejections all promise code in node appears to be dangerous or am i missing something
Binary label: 1
Row 326,297 · id 24,077,438,305 · IssuesEvent · 2022-09-19 00:25:05 · action: closed
Repo: mondyjosh/advent-of-code (https://api.github.com/repos/mondyjosh/advent-of-code)
Title: Add GitHub Actions
Labels: documentation
Body:
I want to take the opportunity to learn GitHub workflows and actions. Among other things, I'd really like to generate status badges to include in the README.
Index: 1.0 · Label: non_process
Text:
add github actions i want to take the opportunity to learn github workflows and actions among other things i d really like to generate status badges to include in the readme
Binary label: 0
Row 4,240 · id 7,187,113,415 · IssuesEvent · 2018-02-02 03:01:09 · action: closed
Repo: Great-Hill-Corporation/quickBlocks (https://api.github.com/repos/Great-Hill-Corporation/quickBlocks)
Title: Each data item should have its own formatting string in monitors
Labels: monitors-all status-inprocess type-enhancement
Body:
Currently, we can only format transactions, but I think events, traces, blooms, logs, and all other data should be customizable.
Index: 1.0 · Label: process
Text:
each data item should have its own formatting string in monitors currently we can only format transactions but i think events traces blooms logs and all other data should be customizable
Binary label: 1
Row 7,947 · id 11,137,527,559 · IssuesEvent · 2019-12-20 19:36:20 · action: closed
Repo: openopps/openopps-platform (https://api.github.com/repos/openopps/openopps-platform)
Title: DoS: Update text on next steps page
Labels: Apply Process Approved Requirements Ready State Dept.
Body:
Who: Student applicants
What: Next steps page content
Why: In order to provide more information related to closing time, clicking submit, and pulling data from USAJOBS.
Acceptance Criteria:
Update the text.
Add a 6th step that says:
6. Review and submit application
Double check your application and submit it before 11:59 p.m. EST on the closing date. Don't forget to click Submit.
Update the intro to say:
To save time, we'll import your information in your USAJOBS profile into your application. Review your USAJOBS profile to make sure all of your information is correct before you start the application.
If you update your USAJOBS profile after you start the application, you must come back to Open Opportunities and update your application as well.
If you make edits to your application, the information will not update in your USAJOBS profile.
Related Tickets:
4118 - Move USAJOBS data pull from apply button to next steps continue
4009 - update next steps text
4119 - Modal to say we're pulling from USAJOBS
Index: 1.0 · Label: process
Text:
dos update text on next steps page who student applicants what next steps page content why in order to provide more information related to closing time clicking submit and pulling data from usajobs acceptance criteria update the text add a step that says review and submit application double check your application and submit it before p m est on the closing date don t forget to click submit update the intro to say to save time we ll import your information in your usajobs profile into your application review your usajobs profile to make sure all of your information is correct before you start the application if you update your usajobs profile after you start the application you must come back to open opportunities and update your application as well if you make edits to your application the information will not update in your usajobs profile related tickets move usajobs data pull from apply button to next steps continue update next steps text modal to say we re pulling from usajobs
| 1
|
9,564
| 12,519,431,559
|
IssuesEvent
|
2020-06-03 14:26:09
|
code4romania/expert-consultation-api
|
https://api.github.com/repos/code4romania/expert-consultation-api
|
closed
|
Edit the contents of the breakdown document
|
document processing documents java spring
|
As an admin of the Legal Consultation platform, I want to be able to edit the contents of a breakdown document.
After loading the document in the platform, the automatic process of breaking down the document into units takes place. The process might not work as intended, so the user must be able to edit the contents of the document.
CRUD operations need to be implemented for a `document node`
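The platform itself is Java/Spring, so purely as a hedged illustration of the required surface, here is a minimal Express sketch of the four CRUD endpoints for a document node (the route names and the in-memory store are assumptions):

```js
// In-memory stand-in for the document-node store.
const express = require("express");
const app = express();
app.use(express.json());

const nodes = new Map();
let nextId = 1;

// Create a node from the request body.
app.post("/document-nodes", (req, res) => {
  const id = String(nextId++);
  nodes.set(id, { id, ...req.body });
  res.status(201).json(nodes.get(id));
});

// Read a node by id.
app.get("/document-nodes/:id", (req, res) => {
  const node = nodes.get(req.params.id);
  node ? res.json(node) : res.sendStatus(404);
});

// Update (replace) a node's contents.
app.put("/document-nodes/:id", (req, res) => {
  if (!nodes.has(req.params.id)) return res.sendStatus(404);
  nodes.set(req.params.id, { id: req.params.id, ...req.body });
  res.json(nodes.get(req.params.id));
});

// Delete a node.
app.delete("/document-nodes/:id", (req, res) => {
  res.sendStatus(nodes.delete(req.params.id) ? 204 : 404);
});

app.listen(3000);
```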
|
1.0
|
Edit the contents of the breakdown document - As an admin of the Legal Consultation platform, I want to be able to edit the contents of a breakdown document.
After loading the document in the platform, the automatic process of breaking down the document into units takes place. The process might not work as intended, so the user must be able to edit the contents of the document.
CRUD operations need to be implemented for a `document node`
|
process
|
edit the contents of the breakdown document as an admin of the legal consultation platform i want to be able to edit the contents of a breakdown document after loading the document in the platform the automatic process of breaking down the document into units takes place the process might not work as intended so the user must be able to edit the contents of the document crud operations need to be implemented for a document node
| 1
|
336,824
| 24,514,879,136
|
IssuesEvent
|
2022-10-11 03:26:23
|
AhmadApriliyanto23/Berandaku
|
https://api.github.com/repos/AhmadApriliyanto23/Berandaku
|
reopened
|
Navigator Interface, get longitude and latitude | Javascript
|
documentation
|
To get the device's location as longitude & latitude in JavaScript, use:<br>
`navigator.geolocation.getCurrentPosition(loc => console.log(loc))`
result :
coords : GeolocationCoordinates
accuracy : 1775.8908447980746
altitude : null
altitudeAccuracy : null
heading : null
latitude : -6.1872413
longitude : 106.8060503
speed : null
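For completeness, a slightly fuller sketch using the same standard Geolocation API, with an error callback and options (all three arguments are part of the `getCurrentPosition` spec):

```js
// Request the device position with a timeout and an error handler.
navigator.geolocation.getCurrentPosition(
  (loc) => console.log(loc.coords.latitude, loc.coords.longitude),
  (err) => console.error("geolocation failed:", err.code, err.message),
  { enableHighAccuracy: true, timeout: 10000, maximumAge: 0 }
);
```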
|
1.0
|
Navigator Interface, get longitude and latitude | Javascript - To get the device's location as longitude & latitude in JavaScript, use:<br>
`navigator.geolocation.getCurrentPosition(loc => console.log(loc))`
result :
coords : GeolocationCoordinates
accuracy : 1775.8908447980746
altitude : null
altitudeAccuracy : null
heading : null
latitude : -6.1872413
longitude : 106.8060503
speed : null
|
non_process
|
navigator interface get longitude and latitude javascript to get the device s location as longitude latitude in javascript use navigator geolocation getcurrentposition loc console log loc result coords geolocationcoordinates accuracy altitude null altitudeaccuracy null heading null latitude longitude speed null
| 0
|
20,857
| 27,636,001,990
|
IssuesEvent
|
2023-03-10 14:29:45
|
MicrosoftDocs/windows-dev-docs
|
https://api.github.com/repos/MicrosoftDocs/windows-dev-docs
|
closed
|
Found the code
|
uwp/prod processes-and-threading/tech Pri2
|
I found the code; another thing seems incorrect here, and that's with deploying the service. The article makes you think you deploy the service project directly, but it has no deploy option. I again assume it means deploy the containing UWP project, which will bring the referenced service with it?
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: edde9dbc-6e04-69cf-206e-123792666abf
* Version Independent ID: 9894e78f-3270-9485-4769-11050669b805
* Content: [Create and consume an app service - UWP applications](https://docs.microsoft.com/en-us/windows/uwp/launch-resume/how-to-create-and-consume-an-app-service)
* Content Source: [windows-apps-src/launch-resume/how-to-create-and-consume-an-app-service.md](https://github.com/MicrosoftDocs/windows-uwp/blob/docs/windows-apps-src/launch-resume/how-to-create-and-consume-an-app-service.md)
* Product: **uwp**
* Technology: **processes-and-threading**
* GitHub Login: @alvinashcraft
* Microsoft Alias: **aashcraft**
|
1.0
|
Found the code -
I found the code; another thing seems incorrect here, and that's with deploying the service. The article makes you think you deploy the service project directly, but it has no deploy option. I again assume it means deploy the containing UWP project, which will bring the referenced service with it?
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: edde9dbc-6e04-69cf-206e-123792666abf
* Version Independent ID: 9894e78f-3270-9485-4769-11050669b805
* Content: [Create and consume an app service - UWP applications](https://docs.microsoft.com/en-us/windows/uwp/launch-resume/how-to-create-and-consume-an-app-service)
* Content Source: [windows-apps-src/launch-resume/how-to-create-and-consume-an-app-service.md](https://github.com/MicrosoftDocs/windows-uwp/blob/docs/windows-apps-src/launch-resume/how-to-create-and-consume-an-app-service.md)
* Product: **uwp**
* Technology: **processes-and-threading**
* GitHub Login: @alvinashcraft
* Microsoft Alias: **aashcraft**
|
process
|
found the code i found the code another thing seems incorrect here and that s with deploying the service the article makes you think you deploy the service project directly but it has no deploy option i again assume it means deploy the containing uwp project which will bring the referenced service with it document details ⚠ do not edit this section it is required for docs microsoft com ➟ github issue linking id version independent id content content source product uwp technology processes and threading github login alvinashcraft microsoft alias aashcraft
| 1
|
253,089
| 19,090,992,046
|
IssuesEvent
|
2021-11-29 12:05:21
|
girlscript/winter-of-contributing
|
https://api.github.com/repos/girlscript/winter-of-contributing
|
closed
|
Binary Tree : Sum of nodes at kth level
|
documentation GWOC21 Assigned C/CPP
|
### Description
Documentation on how to find the sum of nodes at kth level in a binary tree
### Domain
C/CPP
### Type of Contribution
Documentation
### Code of Conduct
- [X] I follow [Contributing Guidelines](https://github.com/girlscript/winter-of-contributing/blob/main/.github/CONTRIBUTING.md) & [Code of conduct](https://github.com/girlscript/winter-of-contributing/blob/main/.github/CODE_OF_CONDUCT.md) of this project.
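The issue requests documentation only; as an illustration of the underlying idea, here is a minimal breadth-first sketch (in JavaScript rather than the C/C++ of the target docs; the `{ val, left, right }` node shape is an assumption):

```js
// Sum the values of all nodes at depth k (root is depth 0) via BFS.
function sumAtLevel(root, k) {
  if (root === null) return 0;
  let level = [root];
  for (let depth = 0; depth < k; depth++) {
    // Descend one level: collect the non-null children of the current level.
    level = level.flatMap((n) => [n.left, n.right].filter((c) => c !== null));
  }
  return level.reduce((acc, n) => acc + n.val, 0);
}

const tree = {
  val: 1,
  left: { val: 2, left: null, right: null },
  right: { val: 3, left: null, right: null },
};
console.log(sumAtLevel(tree, 1)); // 5
```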
|
1.0
|
Binary Tree : Sum of nodes at kth level - ### Description
Documentation on how to find the sum of nodes at kth level in a binary tree
### Domain
C/CPP
### Type of Contribution
Documentation
### Code of Conduct
- [X] I follow [Contributing Guidelines](https://github.com/girlscript/winter-of-contributing/blob/main/.github/CONTRIBUTING.md) & [Code of conduct](https://github.com/girlscript/winter-of-contributing/blob/main/.github/CODE_OF_CONDUCT.md) of this project.
|
non_process
|
binary tree sum of nodes at kth level description documentation on how to find the sum of nodes at kth level in a binary tree domain c cpp type of contribution documentation code of conduct i follow of this project
| 0
|
29,629
| 5,773,556,413
|
IssuesEvent
|
2017-04-28 02:40:12
|
CompEvol/beast2
|
https://api.github.com/repos/CompEvol/beast2
|
opened
|
template system in BEAUti is broken
|
defect HIGH priority
|
@rbouckaert
Template in BEAUti seems totally broken. I tested StarBEAST and MultiType Tree.
Two problems:
1. Load alignments => change the template to StarBEAST => click Yes => BEAUti is broken, for example, if you switch between Partitions panel and other panels, then all data are lost or no input error;
2. Change the template to StarBEAST before loading alignments => Load alignments => BEAUti is broken, for example, the template in the top of GUI still shows "Standard", and no data is loaded in Partitions panel.
|
1.0
|
template system in BEAUti is broken - @rbouckaert
Template in BEAUti seems totally broken. I tested StarBEAST and MultiType Tree.
Two problems:
1. Load alignments => change the template to StarBEAST => click Yes => BEAUti is broken, for example, if you switch between Partitions panel and other panels, then all data are lost or no input error;
2. Change the template to StarBEAST before loading alignments => Load alignments => BEAUti is broken, for example, the template in the top of GUI still shows "Standard", and no data is loaded in Partitions panel.
|
non_process
|
template system in beauti is broken rbouckaert template in beauti seems totally broken i tested starbeast and multitype tree two problems load alignments change the template to starbeast click yes beauti is broken for example if you switch between partitions panel and other panels then all data are lost or no input error change the template to starbeast before loading alignments load alignments beauti is broken for example the template in the top of gui still shows standard and no data is loaded in partitions panel
| 0
|
73,052
| 15,252,457,676
|
IssuesEvent
|
2021-02-20 03:00:24
|
AlexRogalskiy/charts
|
https://api.github.com/repos/AlexRogalskiy/charts
|
opened
|
CVE-2020-28500 (Medium) detected in lodash-4.17.20.tgz
|
security vulnerability
|
## CVE-2020-28500 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>lodash-4.17.20.tgz</b></p></summary>
<p>Lodash modular utilities.</p>
<p>Library home page: <a href="https://registry.npmjs.org/lodash/-/lodash-4.17.20.tgz">https://registry.npmjs.org/lodash/-/lodash-4.17.20.tgz</a></p>
<p>Path to dependency file: charts/package.json</p>
<p>Path to vulnerable library: charts/node_modules/lodash/package.json</p>
<p>
Dependency Hierarchy:
- :x: **lodash-4.17.20.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/AlexRogalskiy/charts/commit/0da9fe91f756320ceb1bd42757063d21b1fc5316">0da9fe91f756320ceb1bd42757063d21b1fc5316</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
All versions of package lodash; all versions of package org.fujion.webjars:lodash are vulnerable to Regular Expression Denial of Service (ReDoS) via the toNumber, trim and trimEnd functions. Steps to reproduce (provided by reporter Liyuan Chen): var lo = require('lodash'); function build_blank (n) { var ret = "1" for (var i = 0; i < n; i++) { ret += " " } return ret + "1"; } var s = build_blank(50000) var time0 = Date.now(); lo.trim(s) var time_cost0 = Date.now() - time0; console.log("time_cost0: " + time_cost0) var time1 = Date.now(); lo.toNumber(s) var time_cost1 = Date.now() - time1; console.log("time_cost1: " + time_cost1) var time2 = Date.now(); lo.trimEnd(s) var time_cost2 = Date.now() - time2; console.log("time_cost2: " + time_cost2)
<p>Publish Date: 2021-02-15
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-28500>CVE-2020-28500</a></p>
</p>
</details>
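The reporter's snippet quoted above is squashed onto one line and is missing a statement separator after `var ret = "1"`; a cleaned-up, runnable version of the same timing test (assuming a vulnerable lodash <= 4.17.20 is installed) would look like:

```js
// Build a string of the form "1<n spaces>1" that stresses the vulnerable regex.
const lo = require("lodash");

function buildBlank(n) {
  return "1" + " ".repeat(n) + "1";
}

const s = buildBlank(50000);
for (const fn of ["trim", "toNumber", "trimEnd"]) {
  const t0 = Date.now();
  lo[fn](s); // each of these calls hits the ReDoS-prone pattern
  console.log(`${fn}: ${Date.now() - t0} ms`);
}
```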
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.3</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: Low
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
CVE-2020-28500 (Medium) detected in lodash-4.17.20.tgz - ## CVE-2020-28500 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>lodash-4.17.20.tgz</b></p></summary>
<p>Lodash modular utilities.</p>
<p>Library home page: <a href="https://registry.npmjs.org/lodash/-/lodash-4.17.20.tgz">https://registry.npmjs.org/lodash/-/lodash-4.17.20.tgz</a></p>
<p>Path to dependency file: charts/package.json</p>
<p>Path to vulnerable library: charts/node_modules/lodash/package.json</p>
<p>
Dependency Hierarchy:
- :x: **lodash-4.17.20.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/AlexRogalskiy/charts/commit/0da9fe91f756320ceb1bd42757063d21b1fc5316">0da9fe91f756320ceb1bd42757063d21b1fc5316</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
All versions of package lodash; all versions of package org.fujion.webjars:lodash are vulnerable to Regular Expression Denial of Service (ReDoS) via the toNumber, trim and trimEnd functions. Steps to reproduce (provided by reporter Liyuan Chen): var lo = require('lodash'); function build_blank (n) { var ret = "1" for (var i = 0; i < n; i++) { ret += " " } return ret + "1"; } var s = build_blank(50000) var time0 = Date.now(); lo.trim(s) var time_cost0 = Date.now() - time0; console.log("time_cost0: " + time_cost0) var time1 = Date.now(); lo.toNumber(s) var time_cost1 = Date.now() - time1; console.log("time_cost1: " + time_cost1) var time2 = Date.now(); lo.trimEnd(s) var time_cost2 = Date.now() - time2; console.log("time_cost2: " + time_cost2)
<p>Publish Date: 2021-02-15
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-28500>CVE-2020-28500</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.3</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: Low
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_process
|
cve medium detected in lodash tgz cve medium severity vulnerability vulnerable library lodash tgz lodash modular utilities library home page a href path to dependency file charts package json path to vulnerable library charts node modules lodash package json dependency hierarchy x lodash tgz vulnerable library found in head commit a href found in base branch master vulnerability details all versions of package lodash all versions of package org fujion webjars lodash are vulnerable to regular expression denial of service redos via the tonumber trim and trimend functions steps to reproduce provided by reporter liyuan chen var lo require lodash function build blank n var ret for var i i n i ret return ret var s build blank var date now lo trim s var time date now console log time time var date now lo tonumber s var time date now console log time time var date now lo trimend s var time date now console log time time publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact low for more information on scores click a href step up your open source security game with whitesource
| 0
|
11,991
| 14,737,210,639
|
IssuesEvent
|
2021-01-07 01:11:04
|
kdjstudios/SABillingGitlab
|
https://api.github.com/repos/kdjstudios/SABillingGitlab
|
closed
|
El Paso and Portland Revenue Analysis Worksheet
|
anc-ops anc-process anc-report anp-1 ant-bug ant-support
|
In GitLab by @kdjstudios on Apr 24, 2018, 12:03
**Submitted by:** "Martin Villegas" <martin.villegas@answernet.com>
**Helpdesk:** http://www.servicedesk.answernet.com/profiles/ticket/2018-04-24-57352/conversation
**Server:** Internal
**Client/Site:** Multi
**Account:** NA
**Issue:**
We are trying to download the Revenue Analysis Worksheet report from SAB for El Paso and Portland but after a couple of minutes we get the “we’re sorry message.”
|
1.0
|
El Paso and Portland Revenue Analysis Worksheet - In GitLab by @kdjstudios on Apr 24, 2018, 12:03
**Submitted by:** "Martin Villegas" <martin.villegas@answernet.com>
**Helpdesk:** http://www.servicedesk.answernet.com/profiles/ticket/2018-04-24-57352/conversation
**Server:** Internal
**Client/Site:** Multi
**Account:** NA
**Issue:**
We are trying to download the Revenue Analysis Worksheet report from SAB for El Paso and Portland but after a couple of minutes we get the “we’re sorry message.”
|
process
|
el paso and portland revenue analysis worksheet in gitlab by kdjstudios on apr submitted by martin villegas helpdesk server internal client site multi account na issue we are trying to download the revenue analysis worksheet report from sab for el paso and portland but after a couple of minutes we get the “we’re sorry message ”
| 1
|
22,129
| 30,673,757,250
|
IssuesEvent
|
2023-07-26 02:13:41
|
jointakahe/takahe
|
https://api.github.com/repos/jointakahe/takahe
|
closed
|
SMTP: Catch and Display Errors
|
bug area/processing pri/medium
|
_Takahe 0.9.0 running in Docker._
Takahe should display SMTP errors when they occur. While trying to figure out why my account creation email wasn't coming through, I changed my configuration to something I know will never, ever work:
```
TAKAHE_EMAIL_SERVER=smtp://testuser:testpass@apple.com:587/?tls=true
```
This configuration doesn't produce any log output that I can use to troubleshoot the issue, nor does it present any errors in the web UI.
Contrast this to `nodemailer`, which throws a timeout when using the same config:
```
node:internal/process/promises:279
triggerUncaughtException(err, true /* fromPromise */);
^
Error: connect ETIMEDOUT 17.253.144.10:587
at TCPConnectWrap.afterConnect [as oncomplete] (node:net:1278:16) {
errno: -60,
code: 'ESOCKET',
syscall: 'connect',
address: '17.253.144.10',
port: 587,
command: 'CONN'
}
```
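One way a fix could surface such errors early is nodemailer-style connection verification; a small sketch using nodemailer's real `verify()` call with the same deliberately broken URL from the report:

```js
// verify() opens a connection and fails fast instead of dying silently.
const nodemailer = require("nodemailer");

const transporter = nodemailer.createTransport(
  "smtp://testuser:testpass@apple.com:587/?tls=true"
);

transporter
  .verify()
  .then(() => console.log("SMTP connection OK"))
  .catch((err) => console.error("SMTP connection failed:", err.message));
```

With the config above, this reports a connection timeout instead of producing no output at all.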
|
1.0
|
SMTP: Catch and Display Errors - _Takahe 0.9.0 running in Docker._
Takahe should display SMTP errors when they occur. While trying to figure out why my account creation email wasn't coming through, I changed my configuration to something I know will never, ever work:
```
TAKAHE_EMAIL_SERVER=smtp://testuser:testpass@apple.com:587/?tls=true
```
This configuration doesn't produce any log output that I can use to troubleshoot the issue, nor does it present any errors in the web UI.
Contrast this to `nodemailer`, which throws a timeout when using the same config:
```
node:internal/process/promises:279
triggerUncaughtException(err, true /* fromPromise */);
^
Error: connect ETIMEDOUT 17.253.144.10:587
at TCPConnectWrap.afterConnect [as oncomplete] (node:net:1278:16) {
errno: -60,
code: 'ESOCKET',
syscall: 'connect',
address: '17.253.144.10',
port: 587,
command: 'CONN'
}
```
|
process
|
smtp catch and display errors takahe running in docker takahe should display smtp errors when they occur while trying to figure out why my account creation email wasn t coming through i changed my configuration to something i know will never ever work takahe email server smtp testuser testpass apple com tls true this configuration doesn t produce any log output that i can use to troubleshoot the issue nor does it present any errors in the web ui contrast this to nodemailer which throws a timeout when using the same config node internal process promises triggeruncaughtexception err true frompromise error connect etimedout at tcpconnectwrap afterconnect node net errno code esocket syscall connect address port command conn
| 1
|
300,730
| 25,991,664,003
|
IssuesEvent
|
2022-12-20 08:09:54
|
Tencent/bk-job
|
https://api.github.com/repos/Tencent/bk-job
|
closed
|
bugfix: a script containing a "scan"-level high-risk statement cannot be saved while editing a job
|
kind/bug stage/test
|
**Version / Branch / tag**
V3.5.X
**What Happened?**
The high-risk detection rule is set to the `scan` level:
<img width="1227" alt="image" src="https://user-images.githubusercontent.com/3852595/203005776-29e982a8-7552-495b-b409-801c9d03bb35.png">
But the job still cannot be saved while editing:
<img width="1511" alt="image" src="https://user-images.githubusercontent.com/3852595/203005637-25be9377-0cb4-4fbf-a5a5-7552908e02d2.png">
**How to reproduce?**
**What you expect?**
The script can be saved.
|
1.0
|
bugfix: a script containing a "scan"-level high-risk statement cannot be saved while editing a job - **Version / Branch / tag**
V3.5.X
**What Happened?**
The high-risk detection rule is set to the `scan` level:
<img width="1227" alt="image" src="https://user-images.githubusercontent.com/3852595/203005776-29e982a8-7552-495b-b409-801c9d03bb35.png">
But the job still cannot be saved while editing:
<img width="1511" alt="image" src="https://user-images.githubusercontent.com/3852595/203005637-25be9377-0cb4-4fbf-a5a5-7552908e02d2.png">
**How to reproduce?**
**What you expect?**
The script can be saved.
|
non_process
|
bugfix a script containing a scan level high risk statement cannot be saved while editing a job version branch tag x what happened the high risk detection rule is set to the scan level img width alt image src but the job still cannot be saved while editing img width alt image src how to reproduce what you expect the script can be saved
| 0
|
14,706
| 17,876,321,826
|
IssuesEvent
|
2021-09-07 04:38:37
|
googleapis/sloth
|
https://api.github.com/repos/googleapis/sloth
|
closed
|
Dependency Dashboard
|
type: process
|
This issue provides visibility into Renovate updates and their statuses. [Learn more](https://docs.renovatebot.com/key-concepts/dashboard/)
## Awaiting Schedule
These updates are awaiting their schedule. Click on a checkbox to get an update now.
- [ ] <!-- unschedule-branch=renovate/actions-setup-node-2.x -->chore(deps): update actions/setup-node action to v2
- [ ] <!-- unschedule-branch=renovate/codecov-codecov-action-2.x -->chore(deps): update codecov/codecov-action action to v2
## Ignored or Blocked
These are blocked by an existing closed PR and will not be recreated unless you click a checkbox below.
- [ ] <!-- recreate-branch=renovate/gts-3.x -->[chore(deps): update dependency gts to v3](../pull/815)
- [ ] <!-- recreate-branch=renovate/mocha-9.x -->[chore(deps): update dependency mocha to v9](../pull/946) (`mocha`, `@types/mocha`)
- [ ] <!-- recreate-branch=renovate/meow-10.x -->[fix(deps): update dependency meow to v10](../pull/923)
---
- [ ] <!-- manual job -->Check this box to trigger a request for Renovate to run again on this repository
|
1.0
|
Dependency Dashboard - This issue provides visibility into Renovate updates and their statuses. [Learn more](https://docs.renovatebot.com/key-concepts/dashboard/)
## Awaiting Schedule
These updates are awaiting their schedule. Click on a checkbox to get an update now.
- [ ] <!-- unschedule-branch=renovate/actions-setup-node-2.x -->chore(deps): update actions/setup-node action to v2
- [ ] <!-- unschedule-branch=renovate/codecov-codecov-action-2.x -->chore(deps): update codecov/codecov-action action to v2
## Ignored or Blocked
These are blocked by an existing closed PR and will not be recreated unless you click a checkbox below.
- [ ] <!-- recreate-branch=renovate/gts-3.x -->[chore(deps): update dependency gts to v3](../pull/815)
- [ ] <!-- recreate-branch=renovate/mocha-9.x -->[chore(deps): update dependency mocha to v9](../pull/946) (`mocha`, `@types/mocha`)
- [ ] <!-- recreate-branch=renovate/meow-10.x -->[fix(deps): update dependency meow to v10](../pull/923)
---
- [ ] <!-- manual job -->Check this box to trigger a request for Renovate to run again on this repository
|
process
|
dependency dashboard this issue provides visibility into renovate updates and their statuses awaiting schedule these updates are awaiting their schedule click on a checkbox to get an update now chore deps update actions setup node action to chore deps update codecov codecov action action to ignored or blocked these are blocked by an existing closed pr and will not be recreated unless you click a checkbox below pull pull mocha types mocha pull check this box to trigger a request for renovate to run again on this repository
| 1
|
234,105
| 25,800,748,036
|
IssuesEvent
|
2022-12-11 00:50:47
|
KBVE/kbve.com
|
https://api.github.com/repos/KBVE/kbve.com
|
closed
|
[Update] : [Astrojs] : Update to astro@1.6.14
|
enhancement update security 0
|
**Describe the update**
We are currently on astro v1.6.11, so we plan to upgrade to astro v1.6.14.
The release information / notes will be in the reference.
* * *
**References for update**
Astrojs Update - https://github.com/withastro/astro/releases/tag/astro%401.6.14
* * *
**Security/Performance risks**
Are there any major security and/or performance risks?!
No there are no major concerns.
* * *
|
True
|
[Update] : [Astrojs] : Update to astro@1.6.14 - **Describe the update**
We are currently on astro v1.6.11, so we plan to upgrade to astro v1.6.14.
The release information / notes will be in the reference.
* * *
**References for update**
Astrojs Update - https://github.com/withastro/astro/releases/tag/astro%401.6.14
* * *
**Security/Performance risks**
Are there any major security and/or performance risks?!
No there are no major concerns.
* * *
|
non_process
|
update to astro describe the update we are currently on astro so we plan to upgrade to astro the release information notes will be in the reference references for update astrojs update security performance risks are there any major security and or performance risks no there are no major concerns
| 0
|
13,948
| 16,723,530,662
|
IssuesEvent
|
2021-06-10 10:09:19
|
darktable-org/darktable
|
https://api.github.com/repos/darktable-org/darktable
|
closed
|
Artefacts at bottom of image - Fujifilm X100V cropping issue?
|
bug: pending scope: camera support scope: image processing
|
**Describe the bug/issue**
Photos processed in scene referred mode have an artefact - the bottom of the image (maybe just the bottom row of pixels) is either lightened (washed out white) or bright green, or black line. The issue is most obvious in exported images but also visible in the blocky preview you get when you zoom in in dark room and move the image to trigger "working...". The artefact doesn't show up once the blocky preview is replaced by the higher quality preview. If I re-process the image in display referred mode, I still get the artefact in the blocky preview but the high quality preview (once Working... disappears) and the exported image looks fine.
This is an issue even without me doing any custom processing - just the default modules / processing in scene referred mode is enough for it to show up.
**To Reproduce**
1. Import RAF raw file in scene referred workflow.
2. Zoom in in dark room and move image so you can see bottom row - move around to trigger "Working..." blocky preview - this shows green artefacts at bottom of image
3. Export image (I used jpeg 98%) to see artifact covering bottom row of pixels
**Screenshots**
In darktable:

After export:




**Platform**
HP Spectra x360 (13-4109na) laptop
* darktable version : 3.4.1
* OS : Linux - kernel 5.11.11-1-default
* Linux - Distro : OpenSUSE Tumbleweed
* Memory : 8GB
* Graphics card : Intel integrated
* Graphics driver :
* OpenCL installed :
* OpenCL activated :
* Xorg :
* Desktop : GNOME 3
* GTK+ :
* gcc :
* cflags :
* CMAKE_BUILD_TYPE :
**Additional context**
Camera is Fujifilm X100V, RAW files are RAF (with lossless compression).
- Can you reproduce with another darktable version(s)? **not tested**
- Can you reproduce with a RAW or Jpeg or both? **not tested**
- Are the steps above reproducible with a fresh edit (i.e. after discarding history)? **yes**
- If the issue is with the output image, attach an XMP file if (you'll have to change the extension to `.txt`)
- Is the issue still present using an empty/new config-dir (e.g. start darktable with --configdir "/tmp")? **yes**
|
1.0
|
Artefacts at bottom of image - Fujifilm X100V cropping issue? - **Describe the bug/issue**
Photos processed in scene referred mode have an artefact - the bottom of the image (maybe just the bottom row of pixels) is either lightened (washed out white) or bright green, or black line. The issue is most obvious in exported images but also visible in the blocky preview you get when you zoom in in dark room and move the image to trigger "working...". The artefact doesn't show up once the blocky preview is replaced by the higher quality preview. If I re-process the image in display referred mode, I still get the artefact in the blocky preview but the high quality preview (once Working... disappears) and the exported image looks fine.
This is an issue even without me doing any custom processing - just the default modules / processing in scene referred mode is enough for it to show up.
**To Reproduce**
1. Import RAF raw file in scene referred workflow.
2. Zoom in in dark room and move image so you can see bottom row - move around to trigger "Working..." blocky preview - this shows green artefacts at bottom of image
3. Export image (I used jpeg 98%) to see artifact covering bottom row of pixels
**Screenshots**
In darktable:

After export:




**Platform**
HP Spectra x360 (13-4109na) laptop
* darktable version : 3.4.1
* OS : Linux - kernel 5.11.11-1-default
* Linux - Distro : OpenSUSE Tumbleweed
* Memory : 8GB
* Graphics card : Intel integrated
* Graphics driver :
* OpenCL installed :
* OpenCL activated :
* Xorg :
* Desktop : GNOME 3
* GTK+ :
* gcc :
* cflags :
* CMAKE_BUILD_TYPE :
**Additional context**
Camera is Fujifilm X100V, RAW files are RAF (with lossless compression).
- Can you reproduce with another darktable version(s)? **not tested**
- Can you reproduce with a RAW or Jpeg or both? **not tested**
- Are the steps above reproducible with a fresh edit (i.e. after discarding history)? **yes**
- If the issue is with the output image, attach an XMP file if (you'll have to change the extension to `.txt`)
- Is the issue still present using an empty/new config-dir (e.g. start darktable with --configdir "/tmp")? **yes**
|
process
|
artefacts at bottom of image fujifilm cropping issue describe the bug issue photos processed in scene referred mode have an artefact the bottom of the image maybe just the bottom row of pixels is either lightened washed out white or bright green or black line the issue is most obvious in exported images but also visible in the blocky preview you get when you zoom in in dark room and move the image to trigger working the artefact doesn t show up once the blocky preview is replaced by the higher quality preview if i re process the image in display referred mode i still get the artefact in the blocky preview but the high quality preview once working disappears and the exported image looks fine this is an issue even without me doing any custom processing just the default modules processing in scene referred mode is enough for it to show up to reproduce import raf raw file in scene referred workflow zoom in in dark room and move image so you can see bottom row move around to trigger working blocky preview this shows green artefacts at bottom of image export image i used jpeg to see artifact covering bottom row of pixels screenshots in darktable after export platform hp spectra laptop darktable version os linux kernel default linux distro opensuse tumbleweed memory graphics card intel integrated graphics driver opencl installed opencl activated xorg desktop gnome gtk gcc cflags cmake build type additional context camera is fujifilm raw files are raf with lossless compression can you reproduce with another darktable version s not tested can you reproduce with a raw or jpeg or both not tested are the steps above reproducible with a fresh edit i e after discarding history yes if the issue is with the output image attach an xmp file if you ll have to change the extension to txt is the issue still present using an empty new config dir e g start darktable with configdir tmp yes
| 1
|
210,811
| 16,386,430,127
|
IssuesEvent
|
2021-05-17 11:04:18
|
packit/packit.dev
|
https://api.github.com/repos/packit/packit.dev
|
closed
|
[config] 'create_tarball_command:' vs. 'actions: create-archive:'
|
documentation
|
[create_tarball_command: doc](https://packit.dev/docs/configuration/#top-level-keys) says: "a command which generates upstream tarball in the root of the upstream directory"
[actions: create-archive: doc](https://packit.dev/docs/actions/#creating-srpm) says: "when the archive needs to be created"
What's the difference (between tarball and archive)?
EDIT: Another point is that `actions: create-archive:` is supposed to be used in [propose-update](https://packit.dev/docs/actions/#propose-update-command) and [srpm/copr-build](https://packit.dev/docs/actions/#creating-srpm), but looking at the code I see it only in [API.create_srpm()](https://github.com/packit-service/packit/blob/master/packit/api.py#L481) not in [API.sync_release()](https://github.com/packit-service/packit/blob/master/packit/api.py#L143)
|
1.0
|
[config] 'create_tarball_command:' vs. 'actions: create-archive:' - [create_tarball_command: doc](https://packit.dev/docs/configuration/#top-level-keys) says: "a command which generates upstream tarball in the root of the upstream directory"
[actions: create-archive: doc](https://packit.dev/docs/actions/#creating-srpm) says: "when the archive needs to be created"
What's the difference (between tarball and archive)?
EDIT: Another point is that `actions: create-archive:` is supposed to be used in [propose-update](https://packit.dev/docs/actions/#propose-update-command) and [srpm/copr-build](https://packit.dev/docs/actions/#creating-srpm), but looking at the code I see it only in [API.create_srpm()](https://github.com/packit-service/packit/blob/master/packit/api.py#L481) not in [API.sync_release()](https://github.com/packit-service/packit/blob/master/packit/api.py#L143)
|
non_process
|
create tarball command vs actions create archive says a command which generates upstream tarball in the root of the upstream directory says when the archive needs to be created what s the difference between tarball and archive edit another point is that actions create archive is supposed to be used in and but looking at the code i see it only in not in
| 0
|
217,295
| 16,848,853,906
|
IssuesEvent
|
2021-06-20 04:19:19
|
hakehuang/infoflow
|
https://api.github.com/repos/hakehuang/infoflow
|
opened
|
tests-ci :arch.interrupt.extra_exception_info.arm_user_interrupt : zephyr-v2.6.0-286-g46029914a7ac: lpcxpresso55s28: test Timeout
|
area: Tests
|
**Describe the bug**
arch.interrupt.extra_exception_info.arm_user_interrupt test is Timeout on zephyr-v2.6.0-286-g46029914a7ac on lpcxpresso55s28
see logs for details
**To Reproduce**
1.
```
scripts/twister --device-testing --device-serial /dev/ttyACM0 -p lpcxpresso55s28 --testcase-root tests --sub-test arch.interrupt
```
2. See error
**Expected behavior**
test pass
**Impact**
**Logs and console output**
```
?*** Booting Zephyr OS build zephyr-v2.6.0-286-g46029914a7ac ***
Running test suite arm_interrupt
===================================================================
START - test_arm_null_pointer_exception
Skipped
PASS - test_arm_null_pointer_exception in 0.1 seconds
===================================================================
START - test_arm_interrupt
Available IRQ line: 59
E: >>> ZEPHYR FATAL ERROR 1: Unhandled interrupt on CPU 0
E: Current thread: 0x30000138 (test_arm_interrupt)
Caught system error -- reason 1
E: r0/a1: 0x00000003 r1/a2: 0x300031e0 r2/a3: 0x00000003
E: r3/a4: 0x30002fc0 r12/ip: 0xa0000000 r14/lr: 0x100025f7
E: xpsr: 0x6100004b
E: EXC_RETURN: 0x0
E: Faulting instruction address (r15/pc): 0x1000070a
E: >>> ZEPHYR FATAL ERROR 3: Kernel oops on CPU 0
E: Fault during interrupt handling
E: Current thread: 0x30000138 (test_arm_interrupt)
Caught system error -- reason 3
E: r0/a1: 0x00000004 r1/a2: 0x300031e0 r2/a3: 0x00000004
E: r3/a4: 0x30002fc0 r12/ip: 0x00000000 r14/lr: 0x100025f7
E: xpsr: 0x6100004b
E: EXC_RETURN: 0x0
E: Faulting instruction address (r15/pc): 0x10000728
E: >>> ZEPHYR FATAL ERROR 4: Kernel panic on CPU 0
E: Fault during interrupt handling
E: Current thread: 0x30000138 (test_arm_interrupt)
Caught system error -- reason 4
ASSERTION FAIL [0] @ WEST_TOPDIR/zephyr/tests/arch/arm/arm_interrupt/src/arm_interrupt.c:216
Intentional assert
E: r0/a1: 0x00000004 r1/a2: 0x000000d8 r2/a3: 0x80000000
E: r3/a4: 0x0000004b r12/ip: 0xa0000000 r14/lr: 0x100025f7
E: xpsr: 0x4100004b
E: EXC_RETURN: 0x0
E: Faulting instruction address (r15/pc): 0x1000b9a4
E: >>> ZEPHYR FATAL ERROR 4: Kernel panic on CPU 0
E: Fault during interrupt handling
E: Current thread: 0x30000138 (test_arm_interrupt)
Caught system error -- reason 4
ASSERTION FAIL [esf != ((void *)0)] @ WEST_TOPDIR/zephyr/arch/arm/core/aarch32/cortex_m/fault.c:993
ESF could not be retrieved successfully. Shall never occur.
ASSERTION FAIL [esf != ((void *)0)] @ WEST_TOPDIR/zephyr/arch/arm/core/aarch32/cortex_m/fault.c:993
ESF could not be retrieved successfully. Shall never occur.
```
**Environment (please complete the following information):**
- OS: (e.g. Linux )
- Toolchain (e.g Zephyr SDK)
- Commit SHA or Version used: zephyr-v2.6.0-286-g46029914a7ac
|
1.0
|
tests-ci :arch.interrupt.extra_exception_info.arm_user_interrupt : zephyr-v2.6.0-286-g46029914a7ac: lpcxpresso55s28: test Timeout
-
**Describe the bug**
arch.interrupt.extra_exception_info.arm_user_interrupt test is Timeout on zephyr-v2.6.0-286-g46029914a7ac on lpcxpresso55s28
see logs for details
**To Reproduce**
1.
```
scripts/twister --device-testing --device-serial /dev/ttyACM0 -p lpcxpresso55s28 --testcase-root tests --sub-test arch.interrupt
```
2. See error
**Expected behavior**
test pass
**Impact**
**Logs and console output**
```
?*** Booting Zephyr OS build zephyr-v2.6.0-286-g46029914a7ac ***
Running test suite arm_interrupt
===================================================================
START - test_arm_null_pointer_exception
Skipped
PASS - test_arm_null_pointer_exception in 0.1 seconds
===================================================================
START - test_arm_interrupt
Available IRQ line: 59
E: >>> ZEPHYR FATAL ERROR 1: Unhandled interrupt on CPU 0
E: Current thread: 0x30000138 (test_arm_interrupt)
Caught system error -- reason 1
E: r0/a1: 0x00000003 r1/a2: 0x300031e0 r2/a3: 0x00000003
E: r3/a4: 0x30002fc0 r12/ip: 0xa0000000 r14/lr: 0x100025f7
E: xpsr: 0x6100004b
E: EXC_RETURN: 0x0
E: Faulting instruction address (r15/pc): 0x1000070a
E: >>> ZEPHYR FATAL ERROR 3: Kernel oops on CPU 0
E: Fault during interrupt handling
E: Current thread: 0x30000138 (test_arm_interrupt)
Caught system error -- reason 3
E: r0/a1: 0x00000004 r1/a2: 0x300031e0 r2/a3: 0x00000004
E: r3/a4: 0x30002fc0 r12/ip: 0x00000000 r14/lr: 0x100025f7
E: xpsr: 0x6100004b
E: EXC_RETURN: 0x0
E: Faulting instruction address (r15/pc): 0x10000728
E: >>> ZEPHYR FATAL ERROR 4: Kernel panic on CPU 0
E: Fault during interrupt handling
E: Current thread: 0x30000138 (test_arm_interrupt)
Caught system error -- reason 4
ASSERTION FAIL [0] @ WEST_TOPDIR/zephyr/tests/arch/arm/arm_interrupt/src/arm_interrupt.c:216
Intentional assert
E: r0/a1: 0x00000004 r1/a2: 0x000000d8 r2/a3: 0x80000000
E: r3/a4: 0x0000004b r12/ip: 0xa0000000 r14/lr: 0x100025f7
E: xpsr: 0x4100004b
E: EXC_RETURN: 0x0
E: Faulting instruction address (r15/pc): 0x1000b9a4
E: >>> ZEPHYR FATAL ERROR 4: Kernel panic on CPU 0
E: Fault during interrupt handling
E: Current thread: 0x30000138 (test_arm_interrupt)
Caught system error -- reason 4
ASSERTION FAIL [esf != ((void *)0)] @ WEST_TOPDIR/zephyr/arch/arm/core/aarch32/cortex_m/fault.c:993
ESF could not be retrieved successfully. Shall never occur.
ASSERTION FAIL [esf != ((void *)0)] @ WEST_TOPDIR/zephyr/arch/arm/core/aarch32/cortex_m/fault.c:993
ESF could not be retrieved successfully. Shall never occur.
```
**Environment (please complete the following information):**
- OS: (e.g. Linux )
- Toolchain (e.g Zephyr SDK)
- Commit SHA or Version used: zephyr-v2.6.0-286-g46029914a7ac
|
non_process
|
tests ci arch interrupt extra exception info arm user interrupt zephyr test timeout describe the bug arch interrupt extra exception info arm user interrupt test is timeout on zephyr on see logs for details to reproduce scripts twister device testing device serial dev p testcase root tests sub test arch interrupt see error expected behavior test pass impact logs and console output booting zephyr os build zephyr running test suite arm interrupt start test arm null pointer exception skipped pass test arm null pointer exception in seconds start test arm interrupt available irq line e zephyr fatal error unhandled interrupt on cpu e current thread test arm interrupt caught system error reason e e ip lr e xpsr e exc return e faulting instruction address pc e zephyr fatal error kernel oops on cpu e fault during interrupt handling e current thread test arm interrupt caught system error reason e e ip lr e xpsr e exc return e faulting instruction address pc e zephyr fatal error kernel panic on cpu e fault during interrupt handling e current thread test arm interrupt caught system error reason assertion fail west topdir zephyr tests arch arm arm interrupt src arm interrupt c intentional assert e e ip lr e xpsr e exc return e faulting instruction address pc e zephyr fatal error kernel panic on cpu e fault during interrupt handling e current thread test arm interrupt caught system error reason assertion fail west topdir zephyr arch arm core cortex m fault c esf could not be retrieved successfully shall never occur assertion fail west topdir zephyr arch arm core cortex m fault c esf could not be retrieved successfully shall never occur environment please complete the following information os e g linux toolchain e g zephyr sdk commit sha or version used zephyr
| 0
|
15,070
| 18,766,002,107
|
IssuesEvent
|
2021-11-06 00:28:30
|
googleapis/java-translate
|
https://api.github.com/repos/googleapis/java-translate
|
closed
|
com.example.translate.CreateGlossaryTests: testCreateGlossary failed
|
priority: p2 type: process api: translate flakybot: issue flakybot: flaky
|
This test failed!
To configure my behavior, see [the Flaky Bot documentation](https://github.com/googleapis/repo-automation-bots/tree/main/packages/flakybot).
If I'm commenting on this issue too often, add the `flakybot: quiet` label and
I will stop commenting.
---
commit: 319552c6c29ae1c5033d9b3afeb9b8bfa65c6b54
buildURL: [Build Status](https://source.cloud.google.com/results/invocations/f84dd1dc-8ea0-4c4f-a2cc-c3f0eee75d4b), [Sponge](http://sponge2/f84dd1dc-8ea0-4c4f-a2cc-c3f0eee75d4b)
status: failed
<details><summary>Test output</summary><br><pre>java.util.concurrent.ExecutionException: com.google.api.gax.rpc.NotFoundException: io.grpc.StatusRuntimeException: NOT_FOUND: Glossary not found.
at com.google.common.util.concurrent.AbstractFuture.getDoneValue(AbstractFuture.java:588)
at com.google.common.util.concurrent.AbstractFuture.get(AbstractFuture.java:567)
at com.google.common.util.concurrent.FluentFuture$TrustedFuture.get(FluentFuture.java:92)
at com.google.common.util.concurrent.ForwardingFuture.get(ForwardingFuture.java:66)
at com.google.api.gax.longrunning.OperationFutureImpl.get(OperationFutureImpl.java:127)
at com.example.translate.DeleteGlossary.deleteGlossary(DeleteGlossary.java:58)
at com.example.translate.CreateGlossaryTests.tearDown(CreateGlossaryTests.java:73)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46)
at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33)
at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366)
at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79)
at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329)
at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66)
at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293)
at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
at org.junit.runners.ParentRunner.run(ParentRunner.java:413)
at org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:364)
at org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:272)
at org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:237)
at org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:158)
at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:428)
at org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:162)
at org.apache.maven.surefire.booter.ForkedBooter.run(ForkedBooter.java:562)
at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:548)
Caused by: com.google.api.gax.rpc.NotFoundException: io.grpc.StatusRuntimeException: NOT_FOUND: Glossary not found.
at com.google.api.gax.rpc.ApiExceptionFactory.createException(ApiExceptionFactory.java:45)
at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:72)
at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:60)
at com.google.api.gax.grpc.GrpcExceptionCallable$ExceptionTransformingFuture.onFailure(GrpcExceptionCallable.java:97)
at com.google.api.core.ApiFutures$1.onFailure(ApiFutures.java:68)
at com.google.common.util.concurrent.Futures$CallbackListener.run(Futures.java:1133)
at com.google.common.util.concurrent.DirectExecutor.execute(DirectExecutor.java:31)
at com.google.common.util.concurrent.AbstractFuture.executeListener(AbstractFuture.java:1277)
at com.google.common.util.concurrent.AbstractFuture.complete(AbstractFuture.java:1038)
at com.google.common.util.concurrent.AbstractFuture.setException(AbstractFuture.java:808)
at io.grpc.stub.ClientCalls$GrpcFuture.setException(ClientCalls.java:563)
at io.grpc.stub.ClientCalls$UnaryStreamToFuture.onClose(ClientCalls.java:533)
at io.grpc.internal.DelayedClientCall$DelayedListener$3.run(DelayedClientCall.java:463)
at io.grpc.internal.DelayedClientCall$DelayedListener.delayOrExecute(DelayedClientCall.java:427)
at io.grpc.internal.DelayedClientCall$DelayedListener.onClose(DelayedClientCall.java:460)
at io.grpc.internal.ClientCallImpl.closeObserver(ClientCallImpl.java:557)
at io.grpc.internal.ClientCallImpl.access$300(ClientCallImpl.java:69)
at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInternal(ClientCallImpl.java:738)
at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInContext(ClientCallImpl.java:717)
at io.grpc.internal.ContextRunnable.run(ContextRunnable.java:37)
at io.grpc.internal.SerializingExecutor.run(SerializingExecutor.java:133)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: io.grpc.StatusRuntimeException: NOT_FOUND: Glossary not found.
at io.grpc.Status.asRuntimeException(Status.java:535)
... 13 more
</pre></details>
|
1.0
|
com.example.translate.CreateGlossaryTests: testCreateGlossary failed - This test failed!
To configure my behavior, see [the Flaky Bot documentation](https://github.com/googleapis/repo-automation-bots/tree/main/packages/flakybot).
If I'm commenting on this issue too often, add the `flakybot: quiet` label and
I will stop commenting.
---
commit: 319552c6c29ae1c5033d9b3afeb9b8bfa65c6b54
buildURL: [Build Status](https://source.cloud.google.com/results/invocations/f84dd1dc-8ea0-4c4f-a2cc-c3f0eee75d4b), [Sponge](http://sponge2/f84dd1dc-8ea0-4c4f-a2cc-c3f0eee75d4b)
status: failed
<details><summary>Test output</summary><br><pre>java.util.concurrent.ExecutionException: com.google.api.gax.rpc.NotFoundException: io.grpc.StatusRuntimeException: NOT_FOUND: Glossary not found.
at com.google.common.util.concurrent.AbstractFuture.getDoneValue(AbstractFuture.java:588)
at com.google.common.util.concurrent.AbstractFuture.get(AbstractFuture.java:567)
at com.google.common.util.concurrent.FluentFuture$TrustedFuture.get(FluentFuture.java:92)
at com.google.common.util.concurrent.ForwardingFuture.get(ForwardingFuture.java:66)
at com.google.api.gax.longrunning.OperationFutureImpl.get(OperationFutureImpl.java:127)
at com.example.translate.DeleteGlossary.deleteGlossary(DeleteGlossary.java:58)
at com.example.translate.CreateGlossaryTests.tearDown(CreateGlossaryTests.java:73)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46)
at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33)
at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366)
at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79)
at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329)
at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66)
at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293)
at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
at org.junit.runners.ParentRunner.run(ParentRunner.java:413)
at org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:364)
at org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:272)
at org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:237)
at org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:158)
at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:428)
at org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:162)
at org.apache.maven.surefire.booter.ForkedBooter.run(ForkedBooter.java:562)
at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:548)
Caused by: com.google.api.gax.rpc.NotFoundException: io.grpc.StatusRuntimeException: NOT_FOUND: Glossary not found.
at com.google.api.gax.rpc.ApiExceptionFactory.createException(ApiExceptionFactory.java:45)
at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:72)
at com.google.api.gax.grpc.GrpcApiExceptionFactory.create(GrpcApiExceptionFactory.java:60)
at com.google.api.gax.grpc.GrpcExceptionCallable$ExceptionTransformingFuture.onFailure(GrpcExceptionCallable.java:97)
at com.google.api.core.ApiFutures$1.onFailure(ApiFutures.java:68)
at com.google.common.util.concurrent.Futures$CallbackListener.run(Futures.java:1133)
at com.google.common.util.concurrent.DirectExecutor.execute(DirectExecutor.java:31)
at com.google.common.util.concurrent.AbstractFuture.executeListener(AbstractFuture.java:1277)
at com.google.common.util.concurrent.AbstractFuture.complete(AbstractFuture.java:1038)
at com.google.common.util.concurrent.AbstractFuture.setException(AbstractFuture.java:808)
at io.grpc.stub.ClientCalls$GrpcFuture.setException(ClientCalls.java:563)
at io.grpc.stub.ClientCalls$UnaryStreamToFuture.onClose(ClientCalls.java:533)
at io.grpc.internal.DelayedClientCall$DelayedListener$3.run(DelayedClientCall.java:463)
at io.grpc.internal.DelayedClientCall$DelayedListener.delayOrExecute(DelayedClientCall.java:427)
at io.grpc.internal.DelayedClientCall$DelayedListener.onClose(DelayedClientCall.java:460)
at io.grpc.internal.ClientCallImpl.closeObserver(ClientCallImpl.java:557)
at io.grpc.internal.ClientCallImpl.access$300(ClientCallImpl.java:69)
at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInternal(ClientCallImpl.java:738)
at io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1StreamClosed.runInContext(ClientCallImpl.java:717)
at io.grpc.internal.ContextRunnable.run(ContextRunnable.java:37)
at io.grpc.internal.SerializingExecutor.run(SerializingExecutor.java:133)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: io.grpc.StatusRuntimeException: NOT_FOUND: Glossary not found.
at io.grpc.Status.asRuntimeException(Status.java:535)
... 13 more
</pre></details>
|
process
|
com example translate createglossarytests testcreateglossary failed this test failed to configure my behavior see if i m commenting on this issue too often add the flakybot quiet label and i will stop commenting commit buildurl status failed test output java util concurrent executionexception com google api gax rpc notfoundexception io grpc statusruntimeexception not found glossary not found at com google common util concurrent abstractfuture getdonevalue abstractfuture java at com google common util concurrent abstractfuture get abstractfuture java at com google common util concurrent fluentfuture trustedfuture get fluentfuture java at com google common util concurrent forwardingfuture get forwardingfuture java at com google api gax longrunning operationfutureimpl get operationfutureimpl java at com example translate deleteglossary deleteglossary deleteglossary java at com example translate createglossarytests teardown createglossarytests java at sun reflect nativemethodaccessorimpl native method at sun reflect nativemethodaccessorimpl invoke nativemethodaccessorimpl java at sun reflect delegatingmethodaccessorimpl invoke delegatingmethodaccessorimpl java at java lang reflect method invoke method java at org junit runners model frameworkmethod runreflectivecall frameworkmethod java at org junit internal runners model reflectivecallable run reflectivecallable java at org junit runners model frameworkmethod invokeexplosively frameworkmethod java at org junit internal runners statements runafters invokemethod runafters java at org junit internal runners statements runafters evaluate runafters java at org junit runners parentrunner evaluate parentrunner java at org junit runners evaluate java at org junit runners parentrunner runleaf parentrunner java at org junit runners runchild java at org junit runners runchild java at org junit runners parentrunner run parentrunner java at org junit runners parentrunner schedule parentrunner java at org junit runners parentrunner runchildren parentrunner java at org junit runners parentrunner access parentrunner java at org junit runners parentrunner evaluate parentrunner java at org junit internal runners statements runbefores evaluate runbefores java at org junit runners parentrunner evaluate parentrunner java at org junit runners parentrunner run parentrunner java at org apache maven surefire execute java at org apache maven surefire executewithrerun java at org apache maven surefire executetestset java at org apache maven surefire invoke java at org apache maven surefire booter forkedbooter runsuitesinprocess forkedbooter java at org apache maven surefire booter forkedbooter execute forkedbooter java at org apache maven surefire booter forkedbooter run forkedbooter java at org apache maven surefire booter forkedbooter main forkedbooter java caused by com google api gax rpc notfoundexception io grpc statusruntimeexception not found glossary not found at com google api gax rpc apiexceptionfactory createexception apiexceptionfactory java at com google api gax grpc grpcapiexceptionfactory create grpcapiexceptionfactory java at com google api gax grpc grpcapiexceptionfactory create grpcapiexceptionfactory java at com google api gax grpc grpcexceptioncallable exceptiontransformingfuture onfailure grpcexceptioncallable java at com google api core apifutures onfailure apifutures java at com google common util concurrent futures callbacklistener run futures java at com google common util concurrent directexecutor execute directexecutor java at com google 
common util concurrent abstractfuture executelistener abstractfuture java at com google common util concurrent abstractfuture complete abstractfuture java at com google common util concurrent abstractfuture setexception abstractfuture java at io grpc stub clientcalls grpcfuture setexception clientcalls java at io grpc stub clientcalls unarystreamtofuture onclose clientcalls java at io grpc internal delayedclientcall delayedlistener run delayedclientcall java at io grpc internal delayedclientcall delayedlistener delayorexecute delayedclientcall java at io grpc internal delayedclientcall delayedlistener onclose delayedclientcall java at io grpc internal clientcallimpl closeobserver clientcallimpl java at io grpc internal clientcallimpl access clientcallimpl java at io grpc internal clientcallimpl clientstreamlistenerimpl runinternal clientcallimpl java at io grpc internal clientcallimpl clientstreamlistenerimpl runincontext clientcallimpl java at io grpc internal contextrunnable run contextrunnable java at io grpc internal serializingexecutor run serializingexecutor java at java util concurrent threadpoolexecutor runworker threadpoolexecutor java at java util concurrent threadpoolexecutor worker run threadpoolexecutor java at java lang thread run thread java caused by io grpc statusruntimeexception not found glossary not found at io grpc status asruntimeexception status java more
| 1
|
2,975
| 5,963,445,676
|
IssuesEvent
|
2017-05-30 05:03:30
|
nodejs/node
|
https://api.github.com/repos/nodejs/node
|
closed
|
process, doc: discourage use of process.exit() for control flow in docs
|
doc process
|
Userland library authors have a pattern for CLIs that has led to various issues during the introduction of v6. We had a heated discussion that I would summarize as: the pattern below should never have been used like this, since it never actually guaranteed to deliver what it promised; but core has a historic responsibility of not breaking such widely adopted patterns.
``` js
process.on('exit', () => {
// do some post action here later
})
function doSomething() {
for (var i = 0; i < 1000; i++) {
process.stdout.write('some result' + i + '\n')
}
// decide that the execution of the CLI should end here
process.exit()
}
```
The problem with this is that stdout doesn't get flushed on `process.exit()`, resulting in not all 1000 calls being printed. This became apparent in v6. Eventually this calls exit(3). In good C++ practice exit(3) is discouraged, since function scopes are not guaranteed to unwind properly.
Imo, authors should just return from functions or the top scope, or use proper event emitters.
Someone could open a doc PR.
Ref: https://github.com/nodejs/node/issues/6980, https://github.com/nodejs/node/issues/6456
cc @Fishrock123
|
1.0
|
process, doc: discourage use of process.exit() for control flow in docs - Userland library authors have a pattern for CLIs that has led to various issues during the introduction of v6. We had a heated discussion that I would summarize as: the pattern below should never have been used like this, since it never actually guaranteed to deliver what it promised; but core has a historic responsibility of not breaking such widely adopted patterns.
``` js
process.on('exit', () => {
// do some post action here later
})
function doSomething() {
for (var i = 0; i < 1000; i++) {
process.stdout.write('some result' + i + '\n')
}
// decide that the execution of the CLI should end here
process.exit()
}
```
The problem with this is that stdout doesn't get flushed on `process.exit()`, resulting in not all 1000 calls being printed. This became apparent in v6. Eventually this calls exit(3). In good C++ practice exit(3) is discouraged, since function scopes are not guaranteed to unwind properly.
Imo, authors should just return from functions or the top scope, or use proper event emitters.
Someone could open a doc PR.
Ref: https://github.com/nodejs/node/issues/6980, https://github.com/nodejs/node/issues/6456
cc @Fishrock123
|
process
|
process doc discourage use of process exit for control flow in docs userland library authors have a pattern for clis that has lead to various issues during the introduction of we had a heated discussion that i would summarize as the below pattern should have been used in like this since it actually never ensured to deliver what it promised but core has a historic responsibility of not breaking such widely adopted patterns js process on exit do some post action here later function dosomething for var i i i process stdout write some result i n decide that the execution of the cli should end here process exit the problem with this that stdout doesn t get flushed on process exit resulting in not all all calls being printed this became apparent in eventually this calls exit in any good c practice exit is discouraged since functions scopes are not guaranteed to unwind properly imo authors should at be just return from functions from top scope or use proper event emitters someone could open a doc pr ref cc
| 1
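The flush hazard described in this record is not Node-specific. As a purely illustrative analogue (a sketch, not the pattern from the issue itself), the Python snippet below shows the same failure mode: a hard exit skips flushing buffered stdio, while a normal return lets the runtime flush buffers on shutdown.
```python
# Illustrative analogue of the Node.js pitfall above, sketched in Python:
# os._exit() terminates the process immediately and skips flushing buffered
# stdio (visible when stdout is piped and therefore block-buffered), much
# like process.exit() can drop unflushed stdout writes in Node.
import os
import sys

def do_something(hard_exit: bool) -> None:
    for i in range(1000):
        print(f"some result {i}")
    if hard_exit:
        os._exit(0)   # immediate termination: buffered output may be lost
    sys.exit(0)       # raises SystemExit: buffers are flushed on shutdown

do_something(hard_exit=False)
```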
|
87,071
| 8,058,150,019
|
IssuesEvent
|
2018-08-02 17:32:55
|
brave/brave-browser
|
https://api.github.com/repos/brave/brave-browser
|
opened
|
Add tests for downloading adblock / tracking protection data files
|
tests
|
Carried over from https://github.com/brave/browser-laptop/issues/125
The tests should:
- [ ] Check for proper handling for 404 if adblock data file isn't uploaded correctly
- [ ] Check for proper handling if adblock server is offline
- [ ] Check for proper handling if time elapsed since last check is > 1 day
- [ ] Check if there is proper handling when reading cached storage from disk
- [ ] Check if there is proper handling when a 2nd window is open
- [ ] Check for proper handling when etags match
- [ ] Check for proper handling when etags do not match
|
1.0
|
Add tests for downloading adblock / tracking protection data files - Carried over from https://github.com/brave/browser-laptop/issues/125
The tests should:
- [ ] Check for proper handling for 404 if adblock data file isn't uploaded correctly
- [ ] Check for proper handling if adblock server is offline
- [ ] Check for proper handling if time elapsed since last check is > 1 day
- [ ] Check if there is proper handling when reading cached storage from disk
- [ ] Check if there is proper handling when a 2nd window is open
- [ ] Check for proper handling when etags match
- [ ] Check for proper handling when etags do not match
|
non_process
|
add tests for downloading adblock tracking protection data files carried over from the tests should check for proper handling for if adblock data file isn t uploaded correctly check for proper handling if adblock server is offline check for proper handling if time elapsed since last check is day check if there is proper handling when reading cached storage from disk check if there is proper handling when a window is open check for proper handling when etags match check for proper handling when etags do not match
| 0
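A generic sketch of the etag-aware download check the list above describes; the endpoint URL and caching policy here are hypothetical placeholders, not Brave's actual data-file endpoint or logic.
```python
# Generic sketch of an etag-aware data-file fetch; the endpoint below is a
# hypothetical placeholder, not Brave's actual adblock data-file URL.
import requests

def fetch_if_changed(url: str, cached_etag: str | None):
    headers = {"If-None-Match": cached_etag} if cached_etag else {}
    resp = requests.get(url, headers=headers, timeout=10)
    if resp.status_code == 304:
        return None, cached_etag           # etags match: keep the cached copy
    resp.raise_for_status()                # surfaces 404 / offline-server cases
    return resp.content, resp.headers.get("ETag")

# data, etag = fetch_if_changed("https://example.com/adblock.dat", None)
```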
|
409,655
| 27,747,293,957
|
IssuesEvent
|
2023-03-15 17:54:48
|
ISPP-Grupo5/BugaLink
|
https://api.github.com/repos/ISPP-Grupo5/BugaLink
|
opened
|
Create the application demo in Sprint2
|
documentation
|
WEEK 1
Record the demo video of the application with the newly implemented features.
WEEK 2
Record the demo video of the application with the newly implemented features.
It is important to take into account the feedback received so far on how this task is carried out.
|
1.0
|
Create the application demo in Sprint2 - WEEK 1
Record the demo video of the application with the newly implemented features.
WEEK 2
Record the demo video of the application with the newly implemented features.
It is important to take into account the feedback received so far on how this task is carried out.
|
non_process
|
create the application demo in sprint week record the demo video of the application with the newly implemented features week record the demo video of the application with the newly implemented features it is important to take into account the feedback received so far on how this task is carried out
| 0
|
351,633
| 10,521,602,545
|
IssuesEvent
|
2019-09-30 06:35:19
|
AY1920S1-CS2103T-T09-2/main
|
https://api.github.com/repos/AY1920S1-CS2103T-T09-2/main
|
opened
|
As a amateur at exercising i want to have the app come up with exercises for me based on my user profile
|
priority.High type.Story
|
so that i can better plan future regimes based on my previous attempt
|
1.0
|
As a amateur at exercising i want to have the app come up with exercises for me based on my user profile - so that i can better plan future regimes based on my previous attempt
|
non_process
|
as a amateur at exercising i want to have the app come up with exercises for me based on my user profile so that i can better plan future regimes based on my previous attempt
| 0
|
22,475
| 31,389,248,621
|
IssuesEvent
|
2023-08-26 05:48:18
|
nodejs/node
|
https://api.github.com/repos/nodejs/node
|
closed
|
Add riscv64 to the list of supported architectures
|
doc os process riscv64
|
### Affected URL(s)
https://nodejs.org/dist/latest-v20.x/docs/api/os.html#osarch
### Description of the problem
The `node.git/doc/api/os.md` file currently reads:
```text
Returns the operating system CPU architecture for which the Node.js binary was
compiled. Possible values are `'arm'`, `'arm64'`, `'ia32'`, `'mips'`,
`'mipsel'`, `'ppc'`, `'ppc64'`, `'s390'`, `'s390x'`, and `'x64'`.
```
I suggest to change it to:
```text
Returns the operating system CPU architecture for which the Node.js binary was
compiled. Possible values are `'arm'`, `'arm64'`, `'ia32'`, `'mips'`,
`'mipsel'`, `'ppc'`, `'ppc64'`, `'riscv64'`, `'s390'`, `'s390x'`, and `'x64'`.
```
The same for `node.git/doc/api/process.md`.
|
1.0
|
Add riscv64 to the list of supported architectures - ### Affected URL(s)
https://nodejs.org/dist/latest-v20.x/docs/api/os.html#osarch
### Description of the problem
The `node.git/doc/api/os.md` file currently reads:
```text
Returns the operating system CPU architecture for which the Node.js binary was
compiled. Possible values are `'arm'`, `'arm64'`, `'ia32'`, `'mips'`,
`'mipsel'`, `'ppc'`, `'ppc64'`, `'s390'`, `'s390x'`, and `'x64'`.
```
I suggest to change it to:
```text
Returns the operating system CPU architecture for which the Node.js binary was
compiled. Possible values are `'arm'`, `'arm64'`, `'ia32'`, `'mips'`,
`'mipsel'`, `'ppc'`, `'ppc64'`, `'riscv64'`, `'s390'`, `'s390x'`, and `'x64'`.
```
The same for `node.git/doc/api/process.md`.
|
process
|
add to the list of supported architectures affected url s description of the problem in node git doc api os md file currently reads text returns the operating system cpu architecture for which the node js binary was compiled possible values are arm mips mipsel ppc and i suggest to change it to text returns the operating system cpu architecture for which the node js binary was compiled possible values are arm mips mipsel ppc and the same for node git doc api process md
| 1
|
28,892
| 7,044,057,341
|
IssuesEvent
|
2017-12-31 17:06:48
|
coala/coala
|
https://api.github.com/repos/coala/coala
|
opened
|
Don't re.escape() cmdline path args in tests
|
area/CLI area/tests type/codestyle
|
2nd step after https://github.com/coala/coala/issues/4890 for getting rid of inconveniently used `re.escape()`s
Since https://github.com/coala/coala/pull/4883 coala cmdline args can have single-backslash path separators on Windows, which were formerly doubled in the tests by using `re.escape` to avoid their removal by coala's settings handling (always interpreting backslashes as escape characters)
Occurrences of those obsolete `re.escape()`s can be found via https://github.com/coala/coala/pull/5019#discussion_r159142870 https://github.com/coala/coala/pull/5019#discussion_r159142883 https://github.com/coala/coala/pull/5019#discussion_r159142889
cc @Makman2
|
1.0
|
Don't re.escape() cmdline path args in tests - 2nd step after https://github.com/coala/coala/issues/4890 for getting rid of inconveniently used `re.escape()`s
Since https://github.com/coala/coala/pull/4883 coala cmdline args can have single-backslash path separators on Windows, which were formerly doubled in the tests by using `re.escape` to avoid their removal by coala's settings handling (always interpreting backslashes as escape characters)
Occurrences of those obsolete `re.escape()`s can be found via https://github.com/coala/coala/pull/5019#discussion_r159142870 https://github.com/coala/coala/pull/5019#discussion_r159142883 https://github.com/coala/coala/pull/5019#discussion_r159142889
cc @Makman2
|
non_process
|
don t re escape cmdline path args in tests step after for getting rid of inconveniently used re escape s since coala cmdline args can have single backslash path separators on windows which were formerly doubled in the tests by using re escape to avoid their removal by coala s settings handling always interpreting backslashes as escape characters occurencies of those obsolete re escape s can be found via cc
| 0
|
9,304
| 12,312,590,776
|
IssuesEvent
|
2020-05-12 14:08:52
|
dotnet/runtime
|
https://api.github.com/repos/dotnet/runtime
|
closed
|
System.Diagnostics.Process.ProcessName returning incorrect information on Linux with .NET 5
|
area-System.Diagnostics.Process os-linux untriaged
|
Moving PowerShell to .NET 5 via branch https://github.com/SteveL-MSFT/PowerShell/tree/dotnet-5
We have tests that expect the `pwsh` process to have a ProcessName of `pwsh`, but on Linux, the name is actually `ConsoleHost mai` (looks to be cut off). Works correctly on Windows and macOS.
Note that `[System.AppDomain]::CurrentDomain.FriendlyName` returns `pwsh` as expected.
|
1.0
|
System.Diagnostics.Process.ProcessName returning incorrect information on Linux with .NET 5 - Moving PowerShell to .NET 5 via branch https://github.com/SteveL-MSFT/PowerShell/tree/dotnet-5
We have tests that expect the `pwsh` process to have a ProcessName of `pwsh`, but on Linux, the name is actually `ConsoleHost mai` (looks to be cut off). Works correctly on Windows and macOS.
Note that `[System.AppDomain]::CurrentDomain.FriendlyName` returns `pwsh` as expected.
|
process
|
system diagnostics process processname returning incorrect information on linux with net moving powershell to net via branch we have tests that expect the pwsh process to have a processname of pwsh but on linux the name is actually consolehost mai looks to be cut off works correctly on windows and macos note that currentdomain friendlyname returns pwsh as expected
| 1
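For context on the truncated name: on Linux the kernel stores a process's "comm" name in at most 15 characters, and `ConsoleHost mai` is exactly 15, so the value above is consistent with the name being read from `/proc/<pid>/comm` (or the comm field of `/proc/<pid>/stat`) rather than from the full command line. A minimal sketch for observing the difference, assuming a Linux `/proc` filesystem:
```python
# Minimal sketch contrasting the kernel's 15-character comm name with the
# full command line; assumes a Linux /proc filesystem.
import os

pid = os.getpid()

with open(f"/proc/{pid}/comm") as f:
    comm = f.read().rstrip("\n")   # truncated to 15 characters by the kernel

with open(f"/proc/{pid}/cmdline", "rb") as f:
    cmdline = f.read().split(b"\0")[0].decode()   # full argv[0]

print(f"comm    : {comm!r}")
print(f"cmdline : {cmdline!r}")
```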
|
115,153
| 11,866,517,570
|
IssuesEvent
|
2020-03-26 03:58:58
|
kabanero-io/roadmap
|
https://api.github.com/repos/kabanero-io/roadmap
|
closed
|
Provide support for private image registry
|
Epic documentation-id
|
A number of issues have been uncovered where using a private image registry does not work.
Build, test, and validate pipelines to work with the 3 types of registries.
Define scenarios for SVT.
Scenario:
Customer has GHE and Docker registry to manage his stacks and apps in both of them.
Provide a page Kabanero with private and source register for consideration.
Each component needs a feature
- Appsody
- TA
- Pipelines
- Tekton
- kAppNav
- Codewind
- Codeready workspaces
**Suggested** Private registry configured IBM artifactory as a docker registry for hosting the stack hub containers. Jane configures the laptop registry, Champ builds his stack hub with the registry and todd configures OCP. The Kabanero scenarios including pipelines should pull the containers from the IBM docker registry.
All the above components should work using the stacks.
|
1.0
|
Provide support for private image registry - A number of issues have been uncovered where using a private image registry does not work.
Build, test, and validate pipelines to work with the 3 types of registries.
Define scenarios for SVT.
Scenario:
Customer has GHE and Docker registry to manage his stacks and apps in both of them.
Provide a page Kabanero with private and source register for consideration.
Each component needs a feature
- Appsody
- TA
- Pipelines
- Tekton
- kAppNav
- Codewind
- Codeready workspaces
**Suggested** Private registry configured IBM artifactory as a docker registry for hosting the stack hub containers. Jane configures the laptop registry, Champ builds his stack hub with the registry and todd configures OCP. The Kabanero scenarios including pipelines should pull the containers from the IBM docker registry.
All the above components should work using the stacks.
|
non_process
|
provide support for private image registry a number of issues have been uncovered where using a private image registry does not work build test validate piplelines to work with the types of registries define scenarios for svt scenario customer has ghe and docker registry to manage his stacks and apps in both of them provide a page kabanero with private and source register for consideration each component needs a feature appsody ta pipelines tekton kappnav codewind codeready workspaces suggested private registry configured ibm artifactory as a docker registry for hosting the stack hub containers jane configures the laptop registry champ builds his stack hub with the registry and todd configures ocp the kabanero scenarios including pipelines should pull the containers from the ibm docker registry all the above components should work using the stacks
| 0
|
455,529
| 13,128,560,921
|
IssuesEvent
|
2020-08-06 12:31:10
|
kubernetes/website
|
https://api.github.com/repos/kubernetes/website
|
closed
|
Migrate “Dynamic Admission Control” to concepts section
|
kind/feature lifecycle/rotten priority/backlog
|
**This is a Feature Request**
**What would you like to be changed**
- Review https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/
-Decide what counts as reference documentation, and identify any sections that are better documented inside [Concepts](https://kubernetes.io/docs/concepts/) or [Tasks](https://kubernetes.io/docs/tasks/)
- Make appropriate changes **OR** log new issues for the tasks you identified (and then close this one)
**Why is this needed**
https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/ is a mix of Task- and Concept- shaped documentation. Though aimed at advanced readers it is not reference content.
|
1.0
|
Migrate “Dynamic Admission Control” to concepts section - **This is a Feature Request**
**What would you like to be changed**
- Review https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/
-Decide what counts as reference documentation, and identify any sections that are better documented inside [Concepts](https://kubernetes.io/docs/concepts/) or [Tasks](https://kubernetes.io/docs/tasks/)
- Make appropriate changes **OR** log new issues for the tasks you identified (and then close this one)
**Why is this needed**
https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/ is a mix of Task- and Concept- shaped documentation. Though aimed at advanced readers it is not reference content.
|
non_process
|
migrate “dynamic admission control” to concepts section this is a feature request what would you like to be changed review decide what counts as reference documentation and identify any sections that are better documented inside or make appropriate changes or log new issues for the tasks you identified and then close this one why is this needed is a mix of task and concept shaped documentation though aimed at advanced readers it is not reference content
| 0
|
151,281
| 5,809,265,361
|
IssuesEvent
|
2017-05-04 13:01:29
|
RobotLocomotion/drake
|
https://api.github.com/repos/RobotLocomotion/drake
|
closed
|
Bazel memcheck failing with shell scripts
|
priority: medium team: kitware type: bug
|
Bazel memcheck fails when shell tests are run with it.
In particular when these tests are run with memcheck:
lcm_vector_gen_test
clang_format_includes_test
|
1.0
|
Bazel memcheck failing with shell scripts - Bazel memcheck fails when shell tests are run with it.
In particular when these tests are run with memcheck:
lcm_vector_gen_test
clang_format_includes_test
|
non_process
|
bazel memcheck failing with shell scripts bazel memecheck fails when shell tests are run with it in particular when these tests are run with memcheck lcm vector gen test clang format includes test
| 0
|
177,829
| 29,170,712,408
|
IssuesEvent
|
2023-05-19 01:20:08
|
wpumacay/renderer
|
https://api.github.com/repos/wpumacay/renderer
|
closed
|
Shader Manager Implementation
|
component: python component: shaders component: assets priority: high type: design type: documentation type: feature
|
# Description
This tracks the implementation of the `shader_manager` singleton, used to both `create` and `store` shaders in an easier way. The rationale is that `shaders` will be considered assets, and this manager will be in charge of easily creating them and sharing ownership of these shaders with user code that might request them.
## Tasks
- [ ] Implementation of the **`shader_manager`**
- [ ] Create an **`example`** of its usage
- [ ] Implement some **`unittests`**
- [ ] Implement **`python`** bindings
- [ ] Make **`documentation`** explaining its usage
## Notes
* There's the legacy implementation of the old [**`shader_manager`**][0], which we could use as a starting point.
* There's a legacy [**`example`**][1] of its usage; however, the API should change in order to have a cleaner and easier-to-use API. Notice in this example that the renderer is requesting the shaders that it might use. However, the role will likely be swapped, as the materials are going to be the ones that have a specific shader according to their shading model.
[0]: <https://github.com/wpumacay/loco_renderer/blob/legacy/legacy/include/shaders/CShaderManager.h> (reference-impl-1)
[1]: <https://github.com/wpumacay/loco_renderer/blob/legacy/legacy/src/renderers/CMeshRenderer.cpp#L9> (reference-sample-usage-1)
|
1.0
|
Shader Manager Implementation - # Description
This tracks the implementation of the `shader_manager` singleton, used to both `create` and `store` shaders in an easier way. The rationale is that `shaders` will be considered assets, and this manager will be in charge of easily creating them and sharing ownership of these shaders with user code that might request them.
## Tasks
- [ ] Implementation of the **`shader_manager`**
- [ ] Create an **`example`** of its usage
- [ ] Implement some **`unittests`**
- [ ] Implement **`python`** bindings
- [ ] Make **`documentation`** explaining its usage
## Notes
* There's the legacy implementation of the old [**`shader_manager`**][0], which we could use as a starting point.
* There's a legacy [**`example`**][1] of its usage; however, the API should change in order to have a cleaner and easier-to-use API. Notice in this example that the renderer is requesting the shaders that it might use. However, the role will likely be swapped, as the materials are going to be the ones that have a specific shader according to their shading model.
[0]: <https://github.com/wpumacay/loco_renderer/blob/legacy/legacy/include/shaders/CShaderManager.h> (reference-impl-1)
[1]: <https://github.com/wpumacay/loco_renderer/blob/legacy/legacy/src/renderers/CMeshRenderer.cpp#L9> (reference-sample-usage-1)
|
non_process
|
shader manager implementation description this tracks the implementation of the shader manager singleton used to both create and store shaders in a easier way the rationale is that shaders will be considered as assets and this manager will be in charge of easily create them and share the ownership of this shaders with user code that might request them tasks implementatio of the sbader manager create an example of its usage implement some unittests implement python bindings make documentation explaining its usage notes there s the legacy implementation of the old which we could use as a starting point there s a legacy of its usage however the api should change in order to have a cleaner and easier to use api notice in this example that the renderer is requesting the shaders that it might use however the role will likely be swapped as the materials are going to be the ones that have a specific shader according to its shading model reference impl reference sample usage
| 0
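A minimal sketch of the create-and-cache manager this record describes, written in Python for illustration; the class and helper names below are hypothetical, not the project's actual C++ API.
```python
# Minimal sketch of a create-and-cache shader manager singleton; all names
# here are illustrative, not the project's actual C++ API.
class ShaderManager:
    _instance = None

    @classmethod
    def instance(cls) -> "ShaderManager":
        if cls._instance is None:
            cls._instance = cls()
        return cls._instance

    def __init__(self) -> None:
        self._shaders: dict[str, tuple[str, str]] = {}

    def create(self, name: str, vert_src: str, frag_src: str):
        # Stand-in for real GL program compilation (hypothetical).
        shader = (vert_src, frag_src)
        self._shaders[name] = shader   # manager retains shared ownership
        return shader                  # caller shares the same object

    def get(self, name: str):
        return self._shaders.get(name)

mgr = ShaderManager.instance()
mgr.create("phong", "vertex-source", "fragment-source")
assert mgr.get("phong") is not None
```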
|
15,791
| 19,982,504,916
|
IssuesEvent
|
2022-01-30 05:18:50
|
qgis/QGIS
|
https://api.github.com/repos/qgis/QGIS
|
closed
|
Delete Column in Matrix Parameter in Graphic Modeler not Working
|
Processing Bug Modeller
|
### What is the bug or the crash?
The button to delete a column is not working for the matrix parameter in the graphical modeller.
### Steps to reproduce the issue
Try deleting a column in the matrix parameter in the graphical modeller.
### Versions
Checked on 3.16 and 3.22.
### Supported QGIS version
- [X] I'm running a supported QGIS version according to the roadmap.
### New profile
- [ ] I tried with a new QGIS profile
### Additional context
_No response_
|
1.0
|
Delete Column in Matrix Parameter in Graphic Modeler not Working - ### What is the bug or the crash?
The button to delete a column is not working for the matrix parameter in the graphical modeller.
### Steps to reproduce the issue
Try deleting a column in the matrix parameter in the graphical modeller.
### Versions
Checked on 3.16 and 3.22.
### Supported QGIS version
- [X] I'm running a supported QGIS version according to the roadmap.
### New profile
- [ ] I tried with a new QGIS profile
### Additional context
_No response_
|
process
|
delete column in matrix parameter in graphic modeler not working what is the bug or the crash the button to delete column is not working for matrix parameter in graphic modeller steps to reproduce the issue try deleting a column in the matrix parameter in the graphical modeller versions checked on and supported qgis version i m running a supported qgis version according to the roadmap new profile i tried with a new qgis profile additional context no response
| 1
|
5,773
| 8,614,995,724
|
IssuesEvent
|
2018-11-19 19:11:06
|
material-components/material-components-ios
|
https://api.github.com/repos/material-components/material-components-ios
|
closed
|
[Snackbar] Change Material Snackbar to use the new Material spec as a default
|
[Snackbar] type:Process
|
From the internal issue:
> As part of Material's efforts to provide components that use the latest Material design guidelines, we will be updating the default value of `usesLegacySnackbar` of the `MDCSnackbarMessage` API to be set to `NO`, and therefore allowing the new material Snackbar be the default Snackbar going forward.
>
> The change itself will be a one line change in MDCSnackbarMessage.m changing the line:
> static BOOL _usesLegacySnackbar = YES;
> to:
> static BOOL _usesLegacySnackbar = NO;
---
This is an internal issue. If you are a Googler, please visit [b/116850150](http://b/116850150) for more details.
<!-- Auto-generated content below, do not modify -->
---
#### Internal data
- Associated internal bug: [b/116850150](http://b/116850150)
|
1.0
|
[Snackbar] Change Material Snackbar to use the new Material spec as a default - From the internal issue:
> As part of Material's efforts to provide components that use the latest Material design guidelines, we will be updating the default value of `usesLegacySnackbar` of the `MDCSnackbarMessage` API to be set to `NO`, and therefore allowing the new material Snackbar be the default Snackbar going forward.
>
> The change itself will be a one line change in MDCSnackbarMessage.m changing the line:
> static BOOL _usesLegacySnackbar = YES;
> to:
> static BOOL _usesLegacySnackbar = NO;
---
This is an internal issue. If you are a Googler, please visit [b/116850150](http://b/116850150) for more details.
<!-- Auto-generated content below, do not modify -->
---
#### Internal data
- Associated internal bug: [b/116850150](http://b/116850150)
|
process
|
change material snackbar to use the new material spec as a default from the internal issue as part of material s efforts to provide components that use the latest material design guidelines we will be updating the default value of useslegacysnackbar of the mdcsnackbarmessage api to be set to no and therefore allowing the new material snackbar be the default snackbar going forward the change itself will be a one line change in mdcsnackbarmessage m changing the line static bool useslegacysnackbar yes to static bool useslegacysnackbar no this is an internal issue if you are a googler please visit for more details internal data associated internal bug
| 1
|
310,738
| 23,351,364,388
|
IssuesEvent
|
2022-08-10 00:33:12
|
osksergio/pontomais-challenge
|
https://api.github.com/repos/osksergio/pontomais-challenge
|
closed
|
Advantages and disadvantages: dynamically typed
|
documentation enhancement question
|
- Question 9: What are the advantages and disadvantages of Ruby being a dynamically typed language?
Answer:
- Advantages:
- Reduces verbosity, since there is no need to do conversions;
- Can make the dev's life easier, since there is no concern about conversions;
- Decisions about types can be avoided or postponed, increasing productivity.
- Disadvantages:
- The language can become slower at runtime, due to the fact that the type must be checked on every interaction;
- Can confuse the dev, since they may not know exactly which types they are dealing with.
|
1.0
|
Advantages and disadvantages: dynamically typed - - Question 9: What are the advantages and disadvantages of Ruby being a dynamically typed language?
Answer:
- Advantages:
- Reduces verbosity, since there is no need to do conversions;
- Can make the dev's life easier, since there is no concern about conversions;
- Decisions about types can be avoided or postponed, increasing productivity.
- Disadvantages:
- The language can become slower at runtime, due to the fact that the type must be checked on every interaction;
- Can confuse the dev, since they may not know exactly which types they are dealing with.
|
non_process
|
advantages and disadvantages dynamically typed question what are the advantages and disadvantages of ruby being a dynamically typed language answer advantages reduces verbosity since there is no need to do conversions can make the dev s life easier since there is no concern about conversions decisions about types can be avoided or postponed increasing productivity disadvantages the language can become slower at runtime due to the fact that the type must be checked on every interaction can confuse the dev since they may not know exactly which types they are dealing with
| 0
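A tiny Python illustration of the trade-off this record describes (the record is about Ruby, but the mechanism is the same): types are resolved at run time, so errors surface only when an operation actually executes.
```python
# Tiny illustration of the dynamic-typing trade-off described above:
# the type check happens at run time, on every call.
def double(x):
    return x + x            # works for any type supporting +

print(double(21))           # 42
print(double("ab"))         # 'abab'
# double(None) would raise TypeError only when executed, not before.
```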
|
340,039
| 30,491,779,369
|
IssuesEvent
|
2023-07-18 08:12:26
|
cockroachdb/cockroach
|
https://api.github.com/repos/cockroachdb/cockroach
|
opened
|
roachtest: failover/non-system/disk-stall failed
|
C-test-failure O-robot O-roachtest branch-master release-blocker T-kv
|
roachtest.failover/non-system/disk-stall [failed](https://teamcity.cockroachdb.com/buildConfiguration/Cockroach_Nightlies_RoachtestNightlyGceBazel/10950435?buildTab=log) with [artifacts](https://teamcity.cockroachdb.com/buildConfiguration/Cockroach_Nightlies_RoachtestNightlyGceBazel/10950435?buildTab=artifacts#/failover/non-system/disk-stall) on master @ [7675ca4998134028f0623e04737b5cb69fcc33a9](https://github.com/cockroachdb/cockroach/commits/7675ca4998134028f0623e04737b5cb69fcc33a9):
```
(cluster.go:2282).Run: output in run_081133.164172409_n1-7_echo-0-sudo-blockdev: echo "0 $(sudo blockdev --getsz /dev/sdb) linear /dev/sdb 0" | sudo dmsetup create data1 returned: COMMAND_PROBLEM: exit status 1
test artifacts and logs in: /artifacts/failover/non-system/disk-stall/run_1
```
<p>Parameters: <code>ROACHTEST_arch=amd64</code>
, <code>ROACHTEST_cloud=gce</code>
, <code>ROACHTEST_cpu=2</code>
, <code>ROACHTEST_encrypted=false</code>
, <code>ROACHTEST_fs=ext4</code>
, <code>ROACHTEST_localSSD=false</code>
, <code>ROACHTEST_ssd=0</code>
</p>
<details><summary>Help</summary>
<p>
See: [roachtest README](https://github.com/cockroachdb/cockroach/blob/master/pkg/cmd/roachtest/README.md)
See: [How To Investigate \(internal\)](https://cockroachlabs.atlassian.net/l/c/SSSBr8c7)
</p>
</details>
/cc @cockroachdb/kv-triage
<sub>
[This test on roachdash](https://roachdash.crdb.dev/?filter=status:open%20t:.*failover/non-system/disk-stall.*&sort=title+created&display=lastcommented+project) | [Improve this report!](https://github.com/cockroachdb/cockroach/tree/master/pkg/cmd/internal/issues)
</sub>
|
2.0
|
roachtest: failover/non-system/disk-stall failed - roachtest.failover/non-system/disk-stall [failed](https://teamcity.cockroachdb.com/buildConfiguration/Cockroach_Nightlies_RoachtestNightlyGceBazel/10950435?buildTab=log) with [artifacts](https://teamcity.cockroachdb.com/buildConfiguration/Cockroach_Nightlies_RoachtestNightlyGceBazel/10950435?buildTab=artifacts#/failover/non-system/disk-stall) on master @ [7675ca4998134028f0623e04737b5cb69fcc33a9](https://github.com/cockroachdb/cockroach/commits/7675ca4998134028f0623e04737b5cb69fcc33a9):
```
(cluster.go:2282).Run: output in run_081133.164172409_n1-7_echo-0-sudo-blockdev: echo "0 $(sudo blockdev --getsz /dev/sdb) linear /dev/sdb 0" | sudo dmsetup create data1 returned: COMMAND_PROBLEM: exit status 1
test artifacts and logs in: /artifacts/failover/non-system/disk-stall/run_1
```
<p>Parameters: <code>ROACHTEST_arch=amd64</code>
, <code>ROACHTEST_cloud=gce</code>
, <code>ROACHTEST_cpu=2</code>
, <code>ROACHTEST_encrypted=false</code>
, <code>ROACHTEST_fs=ext4</code>
, <code>ROACHTEST_localSSD=false</code>
, <code>ROACHTEST_ssd=0</code>
</p>
<details><summary>Help</summary>
<p>
See: [roachtest README](https://github.com/cockroachdb/cockroach/blob/master/pkg/cmd/roachtest/README.md)
See: [How To Investigate \(internal\)](https://cockroachlabs.atlassian.net/l/c/SSSBr8c7)
</p>
</details>
/cc @cockroachdb/kv-triage
<sub>
[This test on roachdash](https://roachdash.crdb.dev/?filter=status:open%20t:.*failover/non-system/disk-stall.*&sort=title+created&display=lastcommented+project) | [Improve this report!](https://github.com/cockroachdb/cockroach/tree/master/pkg/cmd/internal/issues)
</sub>
|
non_process
|
roachtest failover non system disk stall failed roachtest failover non system disk stall with on master cluster go run output in run echo sudo blockdev echo sudo blockdev getsz dev sdb linear dev sdb sudo dmsetup create returned command problem exit status test artifacts and logs in artifacts failover non system disk stall run parameters roachtest arch roachtest cloud gce roachtest cpu roachtest encrypted false roachtest fs roachtest localssd false roachtest ssd help see see cc cockroachdb kv triage
| 0
|
15,787
| 19,977,790,332
|
IssuesEvent
|
2022-01-29 11:33:21
|
bdrum/kaggle
|
https://api.github.com/repos/bdrum/kaggle
|
opened
|
Preprocessing data
|
enhancement titanic preprocessing
|
In the Titanic dataset we have heterogeneous data. Let's try to categorize features via sklearn algorithms and discretize features such as age and fare. Also add a few preprocessing algorithms to a pipeline.
|
1.0
|
Preprocessing data - In the Titanic dataset we have heterogeneous data. Let's try to categorize features via sklearn algorithms and discretize features such as age and fare. Also add a few preprocessing algorithms to a pipeline.
|
process
|
preprocessing data in titanic dataset we have heterogeneous data let s try to categorize feature via sklearn algorithms and discretize such features as age and fare also add few preprocessing algorithms to a pipeline
| 1
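A minimal sketch of the preprocessing this record proposes, assuming the standard Kaggle Titanic column names (`Sex`, `Embarked`, `Age`, `Fare`); the estimators, bin count, and classifier are illustrative choices, not the author's final pipeline.
```python
# Minimal sketch, assuming the standard Kaggle Titanic columns; the chosen
# encoders, bin count, and classifier are illustrative, not prescriptive.
from sklearn.compose import ColumnTransformer
from sklearn.impute import SimpleImputer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import KBinsDiscretizer, OneHotEncoder

categorical = ["Sex", "Embarked"]    # categorized via one-hot encoding
continuous = ["Age", "Fare"]         # imputed, then discretized into bins

preprocess = ColumnTransformer([
    ("cat", OneHotEncoder(handle_unknown="ignore"), categorical),
    ("num", Pipeline([
        ("impute", SimpleImputer(strategy="median")),
        ("bins", KBinsDiscretizer(n_bins=5, encode="onehot-dense")),
    ]), continuous),
])

model = Pipeline([("prep", preprocess), ("clf", LogisticRegression(max_iter=1000))])

# Hypothetical usage with the Kaggle train.csv:
# import pandas as pd
# df = pd.read_csv("train.csv")
# model.fit(df[categorical + continuous], df["Survived"])
```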
|
15,209
| 19,042,241,130
|
IssuesEvent
|
2021-11-25 00:10:55
|
km4ack/patmenu2
|
https://api.github.com/repos/km4ack/patmenu2
|
closed
|
Add pat server status to quick stats
|
enhancement in process
|
The status of the pat http server is shown in Conky already but it would be helpful to add this to quick stats as well in case the user isn't running Conky.
|
1.0
|
Add pat server status to quick stats - The status of the pat http server is shown in Conky already but it would be helpful to add this to quick stats as well in case the user isn't running Conky.
|
process
|
add pat server status to quick stats the status of the pat http server is shown in conky already but it would be helpful to add this to quick stats as well in case the user isn t running conky
| 1
|
824,084
| 31,117,545,527
|
IssuesEvent
|
2023-08-15 01:58:59
|
openmsupply/open-msupply
|
https://api.github.com/repos/openmsupply/open-msupply
|
closed
|
Patients created in oms are not syncing well to mSupply Desktop
|
bug Priority: Should have
|
## What went wrong? 😲
Patients created in oms are not syncing well to the mSupply Desktop dispensary despite the store pref `Patients created in other stores not visible in this store` being OFF
## Expected behaviour 🤔
With that `Patients created in other stores not visible in this store` store pref OFF, all created patients should sync well to the cloud?
## How to Reproduce 🔨
Steps to reproduce the behaviour:
1. Have a sync setup with multiple remote sites: both oms and mSupply desktops
2. Create a patient1 from oms first and sync
3. Login to Desktop and see error: patient1 is not showing despite that store pref
4. Also try checking that pref ON/OFF (for desktop store) as [in the recording](https://recordit.co/hTYPju5l8h)
5. Sync well
6. See error: you'll now notice the patient1's homestore is changed to that Desktop store.
## Screenshots
https://recordit.co/hTYPju5l8h
## Your environment 🌱
<!-- e.g. 1.2.3 -->
- Version: v1.1.12 server, apk
- Platform:
- [x] android (tablet)
- [x] browser (extra points if you tell us which one)
- [x] server (windows)
|
1.0
|
Patients created in oms are not syncing well to mSupply Desktop - ## What went wrong? 😲
Patients created in oms are not syncing well to the mSupply Desktop dispensary despite the store pref `Patients created in other stores not visible in this store` being OFF
## Expected behaviour 🤔
With that `Patients created in other stores not visible in this store` store pref OFF, all created patients should sync well to the cloud?
## How to Reproduce 🔨
Steps to reproduce the behaviour:
1. Have a sync setup with multiple remote sites: both oms and mSupply desktops
2. Create a patient1 from oms first and sync
3. Login to Desktop and see error: patient1 is not showing despite that store pref
4. Also try checking that pref ON/OFF (for desktop store) as [in the recording](https://recordit.co/hTYPju5l8h)
5. Sync well
6. See error: you'll now notice the patient1's homestore is changed to that Desktop store.
## Screenshots
https://recordit.co/hTYPju5l8h
## Your environment 🌱
<!-- e.g. 1.2.3 -->
- Version: v1.1.12 server, apk
- Platform:
- [x] android (tablet)
- [x] browser (extra points if you tell us which one)
- [x] server (windows)
|
non_process
|
patient created in oms are not syncing well to msupply desktop what went wrong 😲 patient created in oms are not syncing well to msupply desktop dispensary despite the store pref patients created in other stores not visible in this store off expected behaviour 🤔 with that patients created in other stores not visible in this store store pref off all created patients should sync well to the cloud how to reproduce 🔨 steps to reproduce the behaviour have a sync setup with multiple remote sites both oms and msupply desktops create a from oms first and sync login to desktop and see error is not showing despite that store pref also try checking that pref on off for desktop store as sync well see error you ll now notice the s homestore is changed to that desktop store screenshots your environment 🌱 version server apk platform android tablet browser extra points if you tell us which one server windows
| 0
|
15,470
| 19,682,588,102
|
IssuesEvent
|
2022-01-11 18:16:41
|
USF-IMARS/python-tech-workgroup
|
https://api.github.com/repos/USF-IMARS/python-tech-workgroup
|
opened
|
run binderhub on imars proc server w/ mounted sat data
|
enhancement discussion proj: l2 .nc processing
|
prereq: https://github.com/USF-IMARS/server-status/issues/159
this isn't really called for right now, but it would be something useful to have in the future, especially for WV data (because that can't be public)
|
1.0
|
run binderhub on imars proc server w/ mounted sat data - prereq: https://github.com/USF-IMARS/server-status/issues/159
this isn't really called for right now, but it would be something useful to have in the future, especially for WV data (because that can't be public)
|
process
|
run binderhub on imars proc server w mounted sat data prereq this isn t really called for rn but it would be something useful to have in the future especially for wv data bc that can t be public
| 1
|
424
| 2,855,316,158
|
IssuesEvent
|
2015-06-02 08:46:41
|
genomizer/genomizer-server
|
https://api.github.com/repos/genomizer/genomizer-server
|
opened
|
Automate acceptance testing with Postman
|
enhancement Medium priority Processing
|
Large parts of the acceptance test document look like they could be automated with Postman (at least the server part). This ticket will track progress on the current state of this automation.
### Logging In
- [ ] Correct login
- [ ] Bad username
- [ ] Bad password
- [ ] No username
- [ ] No password
- [ ] Garbage username
- [ ] Garbage password
- [ ] Bad address
### Logging out
- [ ] Standard behaviour
- [ ] Logging in after logging out
### Upload genome release
- [ ] Add genome release with 1 file
- [ ] Add genome release with multiple files
- [ ] Add a genome release with a name already in use
- [ ] Add a genome release without a species
### Delete genome release
- [ ] Delete an unused genome release
- [ ] Delete a genome release which is still in use
### Add annotation
- [ ] Add forced dropdown annotation
- [ ] Add non-forced dropdown annotation
- [ ] Add forced freetext annotation
- [ ] Add non-forced freetext annotation
- [ ] Add dropdown annotation with default values
- [ ] Add dropdown annotation with freetext as only value
- [ ] Add dropdown annotation with garbage values
- [ ] Add freetext annotation with garbage values
### Delete annotation
- [ ] Delete unused annotation
- [ ] Delete an annotation in use
- [ ] Delete a "hardcoded" annotation
### Update annotation
- [ ] Update value in a forced dropdown
- [ ] Update value in a non-forced dropdown
- [ ] Update value in a freetext annotation
- [ ] Update value to a value which already exists
- [ ] Update value to contain a special character
- [ ] Update value to be empty
- [ ] Add an annotation value
- [ ] Add value identical to an already existing value
- [ ] Add value "freetext"
- [ ] Add empty value
- [ ] Remove value from dropdown
- [ ] Remove value currently in use
- [ ] Update name
- [ ] Update name to a name which already exists
- [ ] Update name to be empty
### Create experiment
- [ ] Create experiment without typing in an experiment name
- [ ] Create experiment without filling in forced annotations
- [ ] Create experiment without files
- [ ] Create experiment with one file
- [ ] Create experiment with multiple files
### Add files to experiment
- [ ] Add one file to an experiment
- [ ] Add multiple files to an experiment
- [ ] Add files without filling in forced annotations
### Delete experiment
- [ ] Delete empty experiment
- [ ] Delete non-empty experiment
### Searching
- [ ] Single term
- [ ] Multiple terms
- [ ] Multiple terms using constructor
- [ ] Incorrect syntax
- [ ] Empty string
- [ ] Garbage input
- [ ] Search after add
- [ ] Search after add
Not yet specified: processing, file conversion, user addition/deletion/updates.
|
1.0
|
Automate acceptance testing with Postman - Large parts of the acceptance test document look like they could be automated with Postman (at least the server part). This ticket will track progress on the current state of this automation.
### Logging In
- [ ] Correct login
- [ ] Bad username
- [ ] Bad password
- [ ] No username
- [ ] No password
- [ ] Garbage username
- [ ] Garbage password
- [ ] Bad address
### Logging out
- [ ] Standard behaviour
- [ ] Logging in after logging out
### Upload genome release
- [ ] Add genome release with 1 file
- [ ] Add genome release with multiple files
- [ ] Add a genome release with a name already in use
- [ ] Add a genome release without a species
### Delete genome release
- [ ] Delete an unused genome release
- [ ] Delete a genome release which is still in use
### Add annotation
- [ ] Add forced dropdown annotation
- [ ] Add non-forced dropdown annotation
- [ ] Add forced freetext annotation
- [ ] Add non-forced freetext annotation
- [ ] Add dropdown annotation with default values
- [ ] Add dropdown annotation with freetext as only value
- [ ] Add dropdown annotation with garbage values
- [ ] Add freetext annotation with garbage values
### Delete annotation
- [ ] Delete unused annotation
- [ ] Delete an annotation in use
- [ ] Delete a "hardcoded" annotation
### Update annotation
- [ ] Update value in a forced dropdown
- [ ] Update value in a non-forced dropdown
- [ ] Update value in a freetext annotation
- [ ] Update value to a value which already exists
- [ ] Update value to contain a special character
- [ ] Update value to be empty
- [ ] Add an annotation value
- [ ] Add value identical to an already existing value
- [ ] Add value "freetext"
- [ ] Add empty value
- [ ] Remove value from dropdown
- [ ] Remove value currently in use
- [ ] Update name
- [ ] Update name to a name which already exists
- [ ] Update name to be empty
### Create experiment
- [ ] Create experiment without typing in an experiment name
- [ ] Create experiment without filling in forced annotations
- [ ] Create experiment without files
- [ ] Create experiment with one file
- [ ] Create experiment with multiple files
### Add files to experiment
- [ ] Add one file to an experiment
- [ ] Add multiple files to an experiment
- [ ] Add files without filling in forced annotations
### Delete experiment
- [ ] Delete empty experiment
- [ ] Delete non-empty experiment
### Searching
- [ ] Single term
- [ ] Multiple terms
- [ ] Multiple terms using constructor
- [ ] Incorrect syntax
- [ ] Empty string
- [ ] Garbage input
- [ ] Search after add
- [ ] Search after add
Not yet specified: processing, file conversion, user addition/deletion/updates.
|
process
|
automate acceptance testing with postman large parts of the acceptance test document look like it could be automated with postman at least the server part this ticket will track progress on the current state of this automation logging in correct login bad username bad password no username no password garbage username garbage password bad address logging out standard behaviour logging in after logging out upload genome release add genome release with file add genome release with multiple files add a genome release with a name already in use add a genome release without a species delete genome release delete an unused genome release delete a genome release which is still in use add annotation add forced dropdown annotation add non forced dropdown annotation add forced freetext annotation add non forced freetext annotation add dropdown annotation with default values add dropdown annotation with freetext as only value add dropdown annotation with garbage values add freetext annotation with garbage values delete annotation delete unused annotation delete an annotation in use delete a hardcoded annotation update annotation update value in a forced dropdown update value in a non forced dropdown update value in a freetext annotation update value to a value which already exists update value to contain a special character update value to be empty add an annotation value add value identical to an already existing value add value freetext add empty value remove value from dropdown remove value currently in use update name update name to a name which already exists update name to be empty create experiment create experiment without typing in an experiment name create experiment without filling in forced annotations create experiment without files create experiment with one file create experiment with multiple files add files to experiment add one file to an experiment add multiple files to an experiment add files without filling in forced annotations delete experiment delete empty experiment delete non empty experiment searching single term multiple terms multiple terms using constructor incorrect syntax empty string garbage input search after add search after add not yet specified processing file conversion user addition deletion updates
| 1
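A minimal sketch of what one automated slice of this checklist could look like outside Postman; the base URL, endpoint, and payload keys below are hypothetical placeholders, not the Genomizer API.
```python
# Minimal sketch of two automated login checks; the URL, endpoint, and
# payload keys are hypothetical placeholders, not the Genomizer API.
import requests

BASE = "http://localhost:8080"   # hypothetical server address

def test_correct_login():
    r = requests.post(f"{BASE}/login",
                      json={"username": "user", "password": "pw"})
    assert r.status_code == 200

def test_bad_password():
    r = requests.post(f"{BASE}/login",
                      json={"username": "user", "password": "wrong"})
    assert r.status_code in (401, 403)
```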
|
624,116
| 19,687,144,659
|
IssuesEvent
|
2022-01-12 00:01:14
|
hashicorp/terraform-provider-google
|
https://api.github.com/repos/hashicorp/terraform-provider-google
|
closed
|
Add support for dns_config to google_container_cluster (ga)
|
enhancement size/s priority/2
|
<!--- Please keep this note for the community --->
### Community Note
* Please vote on this issue by adding a 👍 [reaction](https://blog.github.com/2016-03-10-add-reactions-to-pull-requests-issues-and-comments/) to the original issue to help the community and maintainers prioritize this request
* Please do not leave "+1" or "me too" comments, they generate extra noise for issue followers and do not help prioritize the request
* If you are interested in working on this issue or have submitted a pull request, please leave a comment. If the issue is assigned to the "modular-magician" user, it is either in the process of being autogenerated, or is planned to be autogenerated soon. If the issue is assigned to a user, that user is claiming responsibility for the issue. If the issue is assigned to "hashibot", a community member has claimed the issue already.
<!--- Thank you for keeping this note for the community --->
### Description
<!--- Please leave a helpful description of the feature request here. Including use cases and why it would help you is a great way to convince maintainers to spend time on it. --->
Add support for dns_config to google_container_cluster
### New or Affected Resource(s)
<!--- Please list the new or affected resources and data sources. --->
* google_container_cluster
### Potential Terraform Configuration
<!--- Information about code formatting: https://help.github.com/articles/basic-writing-and-formatting-syntax/#quoting-code --->
See [beta issue](https://github.com/hashicorp/terraform-provider-google/issues/9494).
### References
https://cloud.google.com/kubernetes-engine/docs/reference/rest/v1beta1/projects.locations.clusters#dnsconfig
https://pkg.go.dev/google.golang.org/api@v0.48.0/container/v1beta1#DNSConfig
https://cloud.google.com/kubernetes-engine/docs/how-to/cloud-dns
Internal issue: https://b.corp.google.com/issues/201688267
<!---
Information about referencing Github Issues: https://help.github.com/articles/basic-writing-and-formatting-syntax/#referencing-issues-and-pull-requests
Are there any other GitHub issues (open or closed) or pull requests that should be linked here? Vendor blog posts or documentation?
--->
* [#9494](https://github.com/hashicorp/terraform-provider-google/issues/9494)
<!---
Note Google Cloud customers who are working with a dedicated Technical Account Manager / Customer Engineer: to expedite the investigation and resolution of this issue, please refer to these instructions: https://github.com/hashicorp/terraform-provider-google/wiki/Customer-Contact#raising-gcp-internal-issues-with-the-provider-development-team
--->
|
1.0
|
Add support for dns_config to google_container_cluster (ga) - <!--- Please keep this note for the community --->
### Community Note
* Please vote on this issue by adding a 👍 [reaction](https://blog.github.com/2016-03-10-add-reactions-to-pull-requests-issues-and-comments/) to the original issue to help the community and maintainers prioritize this request
* Please do not leave "+1" or "me too" comments, they generate extra noise for issue followers and do not help prioritize the request
* If you are interested in working on this issue or have submitted a pull request, please leave a comment. If the issue is assigned to the "modular-magician" user, it is either in the process of being autogenerated, or is planned to be autogenerated soon. If the issue is assigned to a user, that user is claiming responsibility for the issue. If the issue is assigned to "hashibot", a community member has claimed the issue already.
<!--- Thank you for keeping this note for the community --->
### Description
<!--- Please leave a helpful description of the feature request here. Including use cases and why it would help you is a great way to convince maintainers to spend time on it. --->
Add support for dns_config to google_container_cluster
### New or Affected Resource(s)
<!--- Please list the new or affected resources and data sources. --->
* google_container_cluster
### Potential Terraform Configuration
<!--- Information about code formatting: https://help.github.com/articles/basic-writing-and-formatting-syntax/#quoting-code --->
See [beta issue](https://github.com/hashicorp/terraform-provider-google/issues/9494).
### References
https://cloud.google.com/kubernetes-engine/docs/reference/rest/v1beta1/projects.locations.clusters#dnsconfig
https://pkg.go.dev/google.golang.org/api@v0.48.0/container/v1beta1#DNSConfig
https://cloud.google.com/kubernetes-engine/docs/how-to/cloud-dns
Internal issue: https://b.corp.google.com/issues/201688267
<!---
Information about referencing Github Issues: https://help.github.com/articles/basic-writing-and-formatting-syntax/#referencing-issues-and-pull-requests
Are there any other GitHub issues (open or closed) or pull requests that should be linked here? Vendor blog posts or documentation?
--->
* [#9494](https://github.com/hashicorp/terraform-provider-google/issues/9494)
<!---
Note Google Cloud customers who are working with a dedicated Technical Account Manager / Customer Engineer: to expedite the investigation and resolution of this issue, please refer to these instructions: https://github.com/hashicorp/terraform-provider-google/wiki/Customer-Contact#raising-gcp-internal-issues-with-the-provider-development-team
--->
|
non_process
|
add support for dns config to google container cluster ga community note please vote on this issue by adding a 👍 to the original issue to help the community and maintainers prioritize this request please do not leave or me too comments they generate extra noise for issue followers and do not help prioritize the request if you are interested in working on this issue or have submitted a pull request please leave a comment if the issue is assigned to the modular magician user it is either in the process of being autogenerated or is planned to be autogenerated soon if the issue is assigned to a user that user is claiming responsibility for the issue if the issue is assigned to hashibot a community member has claimed the issue already description add support for dns config to google container cluster new or affected resource s google container cluster potential terraform configuration see references internal issue information about referencing github issues are there any other github issues open or closed or pull requests that should be linked here vendor blog posts or documentation note google cloud customers who are working with a dedicated technical account manager customer engineer to expedite the investigation and resolution of this issue please refer to these instructions
| 0
|
673,332
| 22,958,702,877
|
IssuesEvent
|
2022-07-19 13:45:45
|
owncloud/ocis
|
https://api.github.com/repos/owncloud/ocis
|
closed
|
TUS uploads might lead to data-loss in sharing conditions
|
Type:Bug QA:team Priority:p2-high
|
The following scenario failed in nightly and needs further investigation https://drone.owncloud.com/owncloud/ocis/13277/49/6
```feature
Scenario Outline: Overwrite file to a received share folder # /srv/app/testrunner/tests/acceptance/features/apiWebdavUploadTUS/uploadToShare.feature:75
Given using <dav_version> DAV path # FeatureContext::usingOldOrNewDavPath()
And user "Alice" has created folder "/FOLDER" # FeatureContext::userHasCreatedFolder()
And user "Alice" has uploaded file with content "original content" to "/FOLDER/textfile.txt" # FeatureContext::userHasUploadedAFileWithContentTo()
And user "Alice" has shared folder "/FOLDER" with user "Brian" # FeatureContext::userHasSharedFileWithUserUsingTheSharingApi()
And user "Brian" has accepted share "/FOLDER" offered by user "Alice" # FeatureContext::userHasReactedToShareOfferedBy()
When user "Brian" uploads file with content "overwritten content" to "/Shares/FOLDER/textfile.txt" using the TUS protocol on the WebDAV API # TUSContext::userUploadsAFileWithContentToUsingTus()
Then the HTTP status code should be "200" # FeatureContext::thenTheHTTPStatusCodeShouldBe()
And as "Alice" file "/FOLDER/textfile.txt" should exist # FeatureContext::asFileOrFolderShouldExist()
And the content of file "/FOLDER/textfile.txt" for user "Alice" should be "overwritten content" # FeatureContext::contentOfFileForUserShouldBe()
Examples:
| dav_version |
| old |
| new |
cURL error 52: Empty reply from server (see https://curl.haxx.se/libcurl/c/libcurl-errors.html) for https://ocis-server:9200/remote.php/dav/files/Alice/FOLDER/textfile.txt (GuzzleHttp\Exception\ConnectException)
| spaces |
The downloaded content was expected to be 'overwritten content', but actually is 'original content'. HTTP status was 200
Failed asserting that two strings are equal.
--- Expected
+++ Actual
@@ @@
-'overwritten content'
+'original content'
```
|
1.0
|
TUS uploads might lead to data-loss in sharing conditions - Following scenario failed in nightly and needs further investigation https://drone.owncloud.com/owncloud/ocis/13277/49/6
```feature
Scenario Outline: Overwrite file to a received share folder # /srv/app/testrunner/tests/acceptance/features/apiWebdavUploadTUS/uploadToShare.feature:75
Given using <dav_version> DAV path # FeatureContext::usingOldOrNewDavPath()
And user "Alice" has created folder "/FOLDER" # FeatureContext::userHasCreatedFolder()
And user "Alice" has uploaded file with content "original content" to "/FOLDER/textfile.txt" # FeatureContext::userHasUploadedAFileWithContentTo()
And user "Alice" has shared folder "/FOLDER" with user "Brian" # FeatureContext::userHasSharedFileWithUserUsingTheSharingApi()
And user "Brian" has accepted share "/FOLDER" offered by user "Alice" # FeatureContext::userHasReactedToShareOfferedBy()
When user "Brian" uploads file with content "overwritten content" to "/Shares/FOLDER/textfile.txt" using the TUS protocol on the WebDAV API # TUSContext::userUploadsAFileWithContentToUsingTus()
Then the HTTP status code should be "200" # FeatureContext::thenTheHTTPStatusCodeShouldBe()
And as "Alice" file "/FOLDER/textfile.txt" should exist # FeatureContext::asFileOrFolderShouldExist()
And the content of file "/FOLDER/textfile.txt" for user "Alice" should be "overwritten content" # FeatureContext::contentOfFileForUserShouldBe()
Examples:
| dav_version |
| old |
| new |
cURL error 52: Empty reply from server (see https://curl.haxx.se/libcurl/c/libcurl-errors.html) for https://ocis-server:9200/remote.php/dav/files/Alice/FOLDER/textfile.txt (GuzzleHttp\Exception\ConnectException)
| spaces |
The downloaded content was expected to be 'overwritten content', but actually is 'original content'. HTTP status was 200
Failed asserting that two strings are equal.
--- Expected
+++ Actual
@@ @@
-'overwritten content'
+'original content'
```
|
non_process
|
tus uploads might lead to data loss in sharing conditions following scenario failed in nightly and needs further investigation feature scenario outline overwrite file to a received share folder srv app testrunner tests acceptance features apiwebdavuploadtus uploadtoshare feature given using dav path featurecontext usingoldornewdavpath and user alice has created folder folder featurecontext userhascreatedfolder and user alice has uploaded file with content original content to folder textfile txt featurecontext userhasuploadedafilewithcontentto and user alice has shared folder folder with user brian featurecontext userhassharedfilewithuserusingthesharingapi and user brian has accepted share folder offered by user alice featurecontext userhasreactedtoshareofferedby when user brian uploads file with content overwritten content to shares folder textfile txt using the tus protocol on the webdav api tuscontext useruploadsafilewithcontenttousingtus then the http status code should be featurecontext thenthehttpstatuscodeshouldbe and as alice file folder textfile txt should exist featurecontext asfileorfoldershouldexist and the content of file folder textfile txt for user alice should be overwritten content featurecontext contentoffileforusershouldbe examples dav version old new curl error empty reply from server see for guzzlehttp exception connectexception spaces the downloaded content was expected to be overwritten content but actually is original content http status was failed asserting that two strings are equal expected actual overwritten content original content
| 0
|
119,449
| 17,615,162,365
|
IssuesEvent
|
2021-08-18 08:49:21
|
fluentpos/fluentpos
|
https://api.github.com/repos/fluentpos/fluentpos
|
opened
|
Add Stackhawk
|
enhancement security
|
Should add [Stackhawk](https://www.stackhawk.com/blog/stackhawk-github-code-scanning/) - API and Application Security Testing in GitHub Code Scanning.
https://www.youtube.com/watch?v=_tBugEDPwpo
|
True
|
Add Stackhawk - Should add [Stackhawk](https://www.stackhawk.com/blog/stackhawk-github-code-scanning/) - API and Application Security Testing in GitHub Code Scanning.
https://www.youtube.com/watch?v=_tBugEDPwpo
|
non_process
|
add stackhawk should add api and application security testing in github code scanning
| 0
|
155,880
| 5,962,528,618
|
IssuesEvent
|
2017-05-29 22:54:58
|
statgen/pheweb
|
https://api.github.com/repos/statgen/pheweb
|
closed
|
Use Google OAuth and an email whitelist
|
priority
|
For now, let's not build a terms page. So, we just need:
- [x] decorate every sensitive page with `@check_auth`
- [x] `conf.whitelist = ['a@gmail.com', ...]`
- [x] `conf.google_oauth_token` or whatever Bravo does.
- [x] `check_auth(f)` does `if 'whitelist' in conf and user.email not in conf.whitelist: return redirect(...)`
- [x] show current user and logout button in navbar of `layout.html`
- [x] add `pheweb.sph.umich.edu` and its various variations to the Google OAuth2 console.
- [x] make `/callback/google`
|
1.0
|
Use Google OAuth and an email whitelist - For now, let's not build a terms page. So, we just need:
- [x] decorate every sensitive page with `@check_auth`
- [x] `conf.whitelist = ['a@gmail.com', ...]`
- [x] `conf.google_oauth_token` or whatever Bravo does.
- [x] `check_auth(f)` does `if 'whitelist' in conf and user.email not in conf.whitelist: return redirect(...)`
- [x] show current user and logout button in navbar of `layout.html`
- [x] add `pheweb.sph.umich.edu` and its various variations to the Google OAuth2 console.
- [x] make `/callback/google`
|
non_process
|
use google oauth and an email whitelist for now let s not build a terms page so we just need decorate every sensitive page with check auth conf whitelist conf google oauth token or whatever bravo does check auth f does if whitelist in conf and user email not in conf whitelist return redirect show current user and logout button in navbar of layout html add pheweb sph umich edu and its various variations to the google console make callback google
| 0
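The `check_auth(f)` line in the checklist above translates almost directly into a small Flask decorator. A minimal sketch under stated assumptions — `conf`, `current_user_email()`, and the `login` endpoint are hypothetical stand-ins, not pheweb's actual names:
```python
# Minimal sketch of the whitelist guard from the checklist above.
# `conf`, `current_user_email()` and the `login` endpoint are hypothetical
# stand-ins, not pheweb's actual names.
import functools
from flask import redirect, url_for

class conf:
    whitelist = ['a@gmail.com']  # delete this attribute to disable the check

def current_user_email():
    # Placeholder: a real app would read this from the Google OAuth session.
    return 'a@gmail.com'

def check_auth(f):
    @functools.wraps(f)
    def wrapper(*args, **kwargs):
        whitelist = getattr(conf, 'whitelist', None)
        if whitelist is not None and current_user_email() not in whitelist:
            # Non-whitelisted users are bounced to the login page.
            return redirect(url_for('login'))
        return f(*args, **kwargs)
    return wrapper
```
The `getattr` default mirrors the checklist's `if 'whitelist' in conf` guard, so deployments without a whitelist skip the check entirely.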
|
7,012
| 10,163,984,283
|
IssuesEvent
|
2019-08-07 10:31:18
|
GroceriStar/food-static-files-generator
|
https://api.github.com/repos/GroceriStar/food-static-files-generator
|
reopened
|
eslint errors/warnings
|
bug enhancement good first issue help wanted in-process
|
**Describe the bug**
when I run lint I see this message
let's go step by step.
1) fix fileSystem.js
2) fix objects.js
3) utils.js
4) writeFile.js
**Screenshots**

@NathanielFaber can be your task as well
|
1.0
|
eslint errors/warnings - **Describe the bug**
when I run lint I see this message
let's go step by step.
1) fix fileSystem.js
2) fix objects.js
3) utils.js
4) writeFile.js
**Screenshots**

@NathanielFaber can be your task as well
|
process
|
eslint errors warnings describe the bug when i run lint i see this message let s go step by step fix filesystem js fix objects js utils js writefile js screenshots nathanielfaber can be your task as well
| 1
|
67,223
| 12,889,848,931
|
IssuesEvent
|
2020-07-13 15:07:01
|
atilacamurca/glossario-friends
|
https://api.github.com/repos/atilacamurca/glossario-friends
|
opened
|
Add multiple references on the home page
|
code
|
Make the home page use other references, giving the page a dynamic character.
See if it is possible to create a graphql query to fetch the references, thus avoiding the need to duplicate content.
### References
- <https://ssense.github.io/vue-carousel/> use vue-carousel to place multiple references
|
1.0
|
Add multiple references on the home page - Make the home page use other references, giving the page a dynamic character.
See if it is possible to create a graphql query to fetch the references, thus avoiding the need to duplicate content.
### References
- <https://ssense.github.io/vue-carousel/> use vue-carousel to place multiple references
|
non_process
|
add multiple references on the home page make the home page use other references giving the page a dynamic character see if it is possible to create a graphql query to fetch the references thus avoiding the need to duplicate content references use vue carousel to place multiple references
| 0
|
59,553
| 24,817,260,287
|
IssuesEvent
|
2022-10-25 13:59:54
|
azure-deprecation/dashboard
|
https://api.github.com/repos/azure-deprecation/dashboard
|
opened
|
Azure Application Insights SDK for Java v2.X is retiring on September 30th, 2025
|
impact:migration-required verified area:sdk cloud:public services:app-insights
|
Azure Application Insights SDK for Java v2.X is retiring on September 30th, 2025
**Deprecation ID:** d7be673b-1ec2-4927-a3ab-454a59cc44a0
**Deadline:** Sep 30, 2025
**Impacted Services:**
- Azure Application Insights
**More information:**
- [Announcement](https://azure.microsoft.com/updates/application-insights-java-2x-retirement/)
- [Migration Guide](https://aka.ms/Java3XUpgradeGuidance/)
### Notice
Here's the official report from Microsoft:
> **On 30 September 2025, we’ll be retiring the application insights Java 2.X SDK;** after that date it’ll no longer be supported. Before that date, we recommend you upgrade to OpenTelemetry-based Java 3.X auto-instrumentation, which provides all the functionality of the application insights Java 2.X SDK plus new ones, including:
>
> - Expanded distributed tracing auto-collection including the most common Azure SDKs, MongoDB, Kafka, Cassandra (and more)
> - JMX and micrometer metrics auto-collection
> - Codeless onboarding for easier deployments and upgrades
>
> If you choose to not upgrade, your data will continue to flow to application insights. However, we’ll be unable to support any Azure support cases opened on this SDK, and you won’t receive the latest product features.
### Timeline
| Phase | Date | Description |
|:------|------|-------------|
|Announcement|Sep 30, 2022|Deprecation was announced|
|Deprecation|Sep 30, 2025|SDK will no longer be supported.|
### Impact
Azure Application Insights SDK for Java v2.X is retiring on September 30th, 2025 and migration to Java SDK v3.X auto-instrument is required.
### Required Action
A migration guide is provided [here](https://aka.ms/Java3XUpgradeGuidance/).
Here's the official report from Microsoft:
> To avoid being on an unsupported SDK, [upgrade to application insights Java 3.X auto-instrumentation](https://aka.ms/Java3XUpgradeGuidance/) by **30 September 2025**.
### Contact
You can get in touch through the following options:
- Contact Azure support ([link](https://portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/overview)).
- Get answers from Microsoft Q&A ([link](https://aka.ms/AzMonQandAForum)).
|
1.0
|
Azure Application Insights SDK for Java v2.X is retiring on September 30th, 2025 - Azure Application Insights SDK for Java v2.X is retiring on September 30th, 2025
**Deprecation ID:** d7be673b-1ec2-4927-a3ab-454a59cc44a0
**Deadline:** Sep 30, 2025
**Impacted Services:**
- Azure Application Insights
**More information:**
- [Announcement](https://azure.microsoft.com/updates/application-insights-java-2x-retirement/)
- [Migration Guide](https://aka.ms/Java3XUpgradeGuidance/)
### Notice
Here's the official report from Microsoft:
> **On 30 September 2025, we’ll be retiring the application insights Java 2.X SDK;** after that date it’ll no longer be supported. Before that date, we recommend you upgrade to OpenTelemetry-based Java 3.X auto-instrumentation, which provides all the functionality of the application insights Java 2.X SDK plus new ones, including:
>
> - Expanded distributed tracing auto-collection including the most common Azure SDKs, MongoDB, Kafka, Cassandra (and more)
> - JMX and micrometer metrics auto-collection
> - Codeless onboarding for easier deployments and upgrades
>
> If you choose to not upgrade, your data will continue to flow to application insights. However, we’ll be unable to support any Azure support cases opened on this SDK, and you won’t receive the latest product features.
### Timeline
| Phase | Date | Description |
|:------|------|-------------|
|Announcement|Sep 30, 2022|Deprecation was announced|
|Deprecation|Sep 30, 2025|SDK will no longer be supported.|
### Impact
Azure Application Insights SDK for Java v2.X is retiring on September 30th, 2025 and migration to Java SDK v3.X auto-instrument is required.
### Required Action
A migration guide is provided [here](https://aka.ms/Java3XUpgradeGuidance/).
Here's the official report from Microsoft:
> To avoid being on an unsupported SDK, [upgrade to application insights Java 3.X auto-instrumentation](https://aka.ms/Java3XUpgradeGuidance/) by **30 September 2025**.
### Contact
You can get in touch through the following options:
- Contact Azure support ([link](https://portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/overview)).
- Get answers from Microsoft Q&A ([link](https://aka.ms/AzMonQandAForum)).
|
non_process
|
azure application insights sdk for java x is retiring on september azure application insights sdk for java x is retiring on september deprecation id deadline sep impacted services azure application insights more information notice here s the official report from microsoft on september we’ll be retiring the application insights java x sdk after that date it’ll no longer be supported before that date we recommend you upgrade to opentelemetry based java x auto instrumentation which provides all the functionality of the application insights java x sdk plus new ones including expanded distributed tracing auto collection including the most common azure sdks mongodb kafka cassandra and more jmx and micrometer metrics auto collection codeless onboarding for easier deployments and upgrades if you choose to not upgrade your data will continue to flow to application insights however we’ll be unable to support any azure support cases opened on this sdk and you won’t receive the latest product features timeline phase date description announcement sep deprecation was announced deprecation sep sdk will no longer be supported impact azure application insights sdk for java x is retiring on september and migration to java sdk x auto instrument is required required action a migration guide is provided here s the official report from microsoft to avoid being on an unsupported sdk by september contact you can get in touch through the following options contact azure support get answers from microsoft q a
| 0
|
134,975
| 5,241,118,658
|
IssuesEvent
|
2017-01-31 14:58:27
|
exporl/apex3
|
https://api.github.com/repos/exporl/apex3
|
closed
|
Upgrade SLM interface to new B&K API (using external C# program and protobuf RPC) - needs testing
|
Medium Priority
|
https://trello.com/c/hmpynWJr/22-upgrade-slm-interface-to-new-b-k-api-using-external-c-program-and-protobuf-rpc-needs-testing
|
1.0
|
Upgrade SLM interface to new B&K API (using external C# program and protobuf RPC) - needs testing - https://trello.com/c/hmpynWJr/22-upgrade-slm-interface-to-new-b-k-api-using-external-c-program-and-protobuf-rpc-needs-testing
|
non_process
|
upgrade slm interface to new b k api using external c program and protobuf rpc needs testing
| 0
|
2,711
| 5,579,848,244
|
IssuesEvent
|
2017-03-28 15:24:09
|
AllenFang/react-bootstrap-table
|
https://api.github.com/repos/AllenFang/react-bootstrap-table
|
closed
|
Bug: Toastr Error Does Not Display When Editing Cell
|
bug inprocess
|
If you go to Insert & Cell edit example here http://allenfang.github.io/react-bootstrap-table/example.html#advance
When you edit cell directly (not insert), the cell shakes but no Toastr error message appears. This used to work before 3.0.0.
|
1.0
|
Bug: Toastr Error Does Not Display When Editing Cell - If you go to Insert & Cell edit example here http://allenfang.github.io/react-bootstrap-table/example.html#advance
When you edit cell directly (not insert), the cell shakes but no Toastr error message appears. This used to work before 3.0.0.
|
process
|
bug toastr error does not display when editing cell if you go to insert cell edit example here when you edit cell directly not insert the cell shakes but no toastr error message appears this used to work before
| 1
|
14,388
| 17,403,912,200
|
IssuesEvent
|
2021-08-03 01:14:27
|
googleapis/python-eventarc
|
https://api.github.com/repos/googleapis/python-eventarc
|
closed
|
Release as GA
|
api: eventarc type: process
|
[GA release template](https://github.com/googleapis/google-cloud-common/issues/287)
## Required
- [x] 28 days elapsed since last beta release with new API surface. See [release history](https://github.com/googleapis/python-eventarc/releases).
- [x] Server API is GA. See [API Release Notes](https://cloud.google.com/eventarc/docs/release-notes#January_26_2021).
- [x] Package API is stable, and we can commit to backward compatibility
- [x] All dependencies are GA
|
1.0
|
Release as GA - [GA release template](https://github.com/googleapis/google-cloud-common/issues/287)
## Required
- [x] 28 days elapsed since last beta release with new API surface. See [release history](https://github.com/googleapis/python-eventarc/releases).
- [x] Server API is GA. See [API Release Notes](https://cloud.google.com/eventarc/docs/release-notes#January_26_2021).
- [x] Package API is stable, and we can commit to backward compatibility
- [x] All dependencies are GA
|
process
|
release as ga required days elapsed since last beta release with new api surface see server api is ga see package api is stable and we can commit to backward compatibility all dependencies are ga
| 1
|
74,664
| 9,796,336,080
|
IssuesEvent
|
2019-06-11 07:22:12
|
dotnet/corefx
|
https://api.github.com/repos/dotnet/corefx
|
closed
|
DllImport with specific version
|
area-System.Runtime.InteropServices documentation
|
Hey guys,
I'm trying to run native code by using `DllImport`. You can take a look here if you want: https://github.com/tinchou/corefx/blob/master/src/System.Data.Odbc/src/Common/System/Data/Common/UnsafeNativeMethods.cs
[I was specifying `odbc` as my dll name](https://github.com/tinchou/corefx/blob/master/src/System.Data.Odbc/src/Common/System/Data/Common/ExternDll.Unix.cs), which seemed to work for OSX and also for most Linux distros, but then I found it wasn't working for some that didn't symlink `libodbc.so` to `libodbc.so.2`.
I thought I could change it to `odbc.2` but it didn't work, so I ended up using `odbc.so.2`.
Then:
1. Do we have any document with `DllImport`'s look up conventions? I think it's important to have this if we want anyone to use this feature.
2. Is there any way I can add the version number but also remove the file extension (`.so`) so I can reuse this name on OSX? Over there we have to bind to `libodbc.2.dylib`.
Thanks!
|
1.0
|
DllImport with specific version - Hey guys,
I'm trying to run native code by using `DllImport`. You can take a look here if you want: https://github.com/tinchou/corefx/blob/master/src/System.Data.Odbc/src/Common/System/Data/Common/UnsafeNativeMethods.cs
[I was specifying `odbc` as my dll name](https://github.com/tinchou/corefx/blob/master/src/System.Data.Odbc/src/Common/System/Data/Common/ExternDll.Unix.cs), which seemed to work for OSX and also for most Linux distros, but then I found it wasn't working for some that didn't symlink `libodbc.so` to `libodbc.so.2`.
I thought I could change it to `odbc.2` but it didn't work, so I ended up using `odbc.so.2`.
Then:
1. Do we have any document with `DllImport`'s look up conventions? I think it's important to have this if we want anyone to use this feature.
2. Is there any way I can add the version number but also remove the file extension (`.so`) so I can reuse this name on OSX? Over there we have to bind to `libodbc.2.dylib`.
Thanks!
|
non_process
|
dllimport with specific version hey guys i m trying to run native code by using dllimport you can take a look here if you want which seemed to work for osx and also for most linux distros but then i found it wasn t working for some that didn t symlink libodbc so to libodbc so i thought i could change it to odbc but it didn t work so i ended up using odbc so then do we have any document with dllimport s look up conventions i think it s important to have this if we want anyone to use this feature is there any way i can add the version number but also remove the file extension so so i can reuse this name on osx over there we have to bind to libodbc dylib thanks
| 0
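The DllImport record above concerns C#'s library lookup, but the same versioned-soname problem appears in any `dlopen`-based loader, and the usual workaround has the same shape. A minimal sketch using Python's ctypes; the candidate name list is illustrative, not exhaustive:
```python
# The same versioned-soname lookup, illustrated with Python's ctypes.
# On distros without the `libodbc.so` dev symlink, only `libodbc.so.2`
# resolves; the candidate list below is illustrative, not exhaustive.
import ctypes
import ctypes.util

def load_odbc():
    for name in ('libodbc.so.2', 'libodbc.so', 'libodbc.2.dylib'):
        try:
            return ctypes.CDLL(name)  # dlopen() by exact file name
        except OSError:
            continue
    # Fall back to the platform linker's own search, which strips the
    # lib prefix and file extension for us.
    found = ctypes.util.find_library('odbc')
    if found:
        return ctypes.CDLL(found)
    raise OSError('libodbc not found')
```
Including `libodbc.2.dylib` in the same loop covers the macOS case the record raises about reusing one lookup across platforms.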
|
20,470
| 27,131,324,205
|
IssuesEvent
|
2023-02-16 09:59:50
|
bazelbuild/bazel
|
https://api.github.com/repos/bazelbuild/bazel
|
closed
|
Remove Python uses_shared_libraries provider field
|
P3 type: process team-Rules-Python
|
On the py provider, `uses_shared_libraries` is unused in Bazel and only used for a deprecated runtime within Google. There's likely no need to keep it in the codebase.
|
1.0
|
Remove Python uses_shared_libraries provider field - On the py provider, `uses_shared_libraries` is unused in Bazel and only used for a deprecated runtime within Google. There's likely no need to keep it in the codebase.
|
process
|
remove python uses shared libraries provider field on the py provider uses shared libraries is unused in bazel and only used for a deprecated runtime within google there s likely no need to keep it in the codebase
| 1
|
17,688
| 23,534,610,401
|
IssuesEvent
|
2022-08-19 19:06:10
|
LLNL/axom
|
https://api.github.com/repos/LLNL/axom
|
closed
|
New symlink breaks `git difftool` (for me)
|
bug question Software process usability
|
The new symlink added in #901 breaks `git difftool` for me.
The symlink lives in axom's `scripts` directory and points to a script in `blt`
#### Reproducer:
```
>git difftool -d develop
fatal: could not open '<tmp>/left/scripts/make_local_branch_from_fork_pr.sh' for writing: Not a directory
```
Should we remove the symlink, and rely on knowledge of its existence in `blt`?
Alternatively, we could add the symlink file to .gitignore
```
# in .gitignore
...
scripts/make_local_branch_from_fork_pr.sh
```
**Edit:** In case it helps, I use `meld` as my difftool, and based on web searches, this seems like a problem with `git difftool`
|
1.0
|
New symlink breaks `git difftool` (for me) - The new symlink added in #901 breaks `git difftool` for me.
The symlink lives in axom's `scripts` directory and points to a script in `blt`
#### Reproducer:
```
>git difftool -d develop
fatal: could not open '<tmp>/left/scripts/make_local_branch_from_fork_pr.sh' for writing: Not a directory
```
Should we remove the symlink, and rely on knowledge of its existence in `blt`?
Alternatively, we could add the symlink file to .gitignore
```
# in .gitignore
...
scripts/make_local_branch_from_fork_pr.sh
```
**Edit:** In case it helps, I use `meld` as my difftool, and based on web searches, this seems like a problem with `git difftool`
|
process
|
new symlink breaks git difftool for me the new symlink added in breaks git difftool for me the symlink lives in axom s scripts directory and points to a script in blt reproducer git difftool d develop fatal could not open left scripts make local branch from fork pr sh for writing not a directory should we remove the symlink and rely on knowledge of its existence in blt alternatively we could add the symlink file to gitignore in gitignore scripts make local branch from fork pr sh edit in case it helps i use meld as my difftool and based on web searches this seems like a problem with git difftool
| 1
|
6,219
| 6,305,057,449
|
IssuesEvent
|
2017-07-21 17:25:04
|
tempesta-tech/tempesta
|
https://api.github.com/repos/tempesta-tech/tempesta
|
opened
|
Frang: client connection timeout (sockstress)
|
enhancement security
|
Currently we don't limit how long a client can keep a connection open without doing anything, so the following command:
$ nc tempesta 80
keeps the connection open until TCP keepalive closes it. The current `client_header_timeout` and `client_body_timeout` limits don't affect this behaviour, and neither does `keepalive_timeout`. This issue isn't crucial, since `concurrent_connections` already stops an attacker from launching an efficient sockstress attack, but it's still not desirable to spend resources on idle connections.
|
True
|
Frang: client connection timeout (sockstress) - Currently we don't limit how long a client can keep a connection open without doing anything, so the following command:
$ nc tempesta 80
keeps the connection open until TCP keepalive closes it. The current `client_header_timeout` and `client_body_timeout` limits don't affect this behaviour, and neither does `keepalive_timeout`. This issue isn't crucial, since `concurrent_connections` already stops an attacker from launching an efficient sockstress attack, but it's still not desirable to spend resources on idle connections.
|
non_process
|
frang client connection timeout sockstress currently we don t limit how long a client can keep a connection open without doing anything so the following command nc tempesta keeps the connection open until tcp keepalive closes it the current client header timeout and client body timeout limits don t affect this behaviour and neither does keepalive timeout this issue isn t crucial since concurrent connections already stops an attacker from launching an efficient sockstress attack but it s still not desirable to spend resources on idle connections
| 0
|
9,033
| 12,129,836,738
|
IssuesEvent
|
2020-04-22 23:42:23
|
GoogleCloudPlatform/python-docs-samples
|
https://api.github.com/repos/GoogleCloudPlatform/python-docs-samples
|
closed
|
remove gcp-devrel-py-tools from iam/api-client/requirements-test.txt
|
priority: p2 remove-gcp-devrel-py-tools type: process
|
remove gcp-devrel-py-tools from iam/api-client/requirements-test.txt
|
1.0
|
remove gcp-devrel-py-tools from iam/api-client/requirements-test.txt - remove gcp-devrel-py-tools from iam/api-client/requirements-test.txt
|
process
|
remove gcp devrel py tools from iam api client requirements test txt remove gcp devrel py tools from iam api client requirements test txt
| 1
|
1,242
| 3,779,430,655
|
IssuesEvent
|
2016-03-18 08:15:38
|
sci-visus/visus-issues
|
https://api.github.com/repos/sci-visus/visus-issues
|
closed
|
missing timesteps do not seem to be ignored
|
Bug Processing
|
Here is the visus.config for the dataset:
```xml
<dataset name="Pin2d (MultipleAccess)" url="$(atlantis)pin2d" >
<access name="MultipleAccess" type="MultipleAccess">
<A type='multiplex'>
<access type='disk' chmod='rw' url='file://$(cache)/pin2d/A/visus.idx' />
<access type='network' chmod='r' compression='zip' url="$(atlantis)pin2d/A" />
</A>
<B type='multiplex'>
<access type='disk' chmod='rw' url='file://$(cache)/pin2d/B/visus.idx' />
<access type='network' chmod='r' compression='zip' url="$(atlantis)pin2d/B" />
</B>
<C type='multiplex'>
<access type='disk' chmod='rw' url='file://$(cache)/pin2d/C/visus.idx' />
<access type='network' chmod='r' compression='zip' url="$(atlantis)pin2d/C" />
</C>
<D type='multiplex'>
<access type='disk' chmod='rw' url='file://$(cache)/pin2d/D/visus.idx' />
<access type='network' chmod='r' compression='zip' url="$(atlantis)pin2d/D" />
</D>
<E type='multiplex'>
<access type='disk' chmod='rw' url='file://$(cache)/pin2d/E/visus.idx' />
<access type='network' chmod='r' compression='zip' url="$(atlantis)pin2d/E" />
</E>
</access>
</dataset>
```
And the script:
```javascript
f0=input.A.TS;
f1=input.B.TS;
f2=input.C.TS;
f3=input.D.TS;
f4=input.E.TS;
output=Visus.Array.avg([f0,f1,f2,f3,f4]);
```
Even though the output indicates they are ignored, the computation clearly includes fields of zeros. Note the data range in this image is lower than any of the actual (extant) data:

The average image looks like this:

Output:
17:25:55.1342 INFO DB_MULTIPLEDATASET.820 Missing timestep(215) for input['D']...ignoring it
17:25:55.1686 INFO DB_MULTIPLEDATASET.820 Missing timestep(216) for input['D']...ignoring it
17:25:55.2032 INFO DB_MULTIPLEDATASET.820 Missing timestep(217) for input['D']...ignoring it
17:25:55.2368 INFO DB_MULTIPLEDATASET.820 Missing timestep(218) for input['D']...ignoring it
17:25:57.1756 INFO DB_MULTIPLEDATASET.820 Missing timestep(219) for input['D']...ignoring it
17:25:57.1919 INFO DB_MULTIPLEDATASET.820 Missing timestep(220) for input['B']...ignoring it
17:25:57.2064 INFO DB_MULTIPLEDATASET.820 Missing timestep(220) for input['D']...ignoring it
17:25:58.006 INFO DB_MULTIPLEDATASET.820 Missing timestep(221) for input['B']...ignoring it
17:25:58.7877 INFO DB_MULTIPLEDATASET.820 Missing timestep(221) for input['D']...ignoring it
17:25:59.5764 INFO DB_MULTIPLEDATASET.820 Missing timestep(222) for input['A']...ignoring it
17:25:59.5764 INFO DB_MULTIPLEDATASET.820 Missing timestep(222) for input['B']...ignoring it
17:25:59.5769 INFO DB_MULTIPLEDATASET.820 Missing timestep(222) for input['C']...ignoring it
17:25:59.5770 INFO DB_MULTIPLEDATASET.820 Missing timestep(222) for input['D']...ignoring it
17:25:59.5771 INFO DB_MULTIPLEDATASET.820 Missing timestep(222) for input['E']...ignoring it
17:25:59.5772 INFO DB_MULTIPLEDATASET.820 Missing timestep(223) for input['A']...ignoring it
17:25:59.5773 INFO DB_MULTIPLEDATASET.820 Missing timestep(223) for input['B']...ignoring it
17:25:59.5774 INFO DB_MULTIPLEDATASET.820 Missing timestep(223) for input['C']...ignoring it
17:25:59.5775 INFO DB_MULTIPLEDATASET.820 Missing timestep(223) for input['D']...ignoring it
17:25:59.5776 INFO DB_MULTIPLEDATASET.820 Missing timestep(223) for input['E']...ignoring it
17:25:59.5777 INFO DB_MULTIPLEDATASET.820 Missing timestep(224) for input['A']...ignoring it
17:25:59.5779 INFO DB_MULTIPLEDATASET.820 Missing timestep(224) for input['B']...ignoring it
17:25:59.5779 INFO DB_MULTIPLEDATASET.820 Missing timestep(224) for input['C']...ignoring it
17:25:59.5780 INFO DB_MULTIPLEDATASET.820 Missing timestep(224) for input['D']...ignoring it
17:25:59.5781 INFO DB_MULTIPLEDATASET.820 Missing timestep(224) for input['E']...ignoring it
17:25:59.5782 INFO DB_MULTIPLEDATASET.820 Missing timestep(225) for input['A']...ignoring it
17:25:59.5783 INFO DB_MULTIPLEDATASET.820 Missing timestep(225) for input['B']...ignoring it
17:25:59.5784 INFO DB_MULTIPLEDATASET.820 Missing timestep(225) for input['C']...ignoring it
17:25:59.5785 INFO DB_MULTIPLEDATASET.820 Missing timestep(225) for input['D']...ignoring it
17:25:59.5786 INFO DB_MULTIPLEDATASET.820 Missing timestep(225) for input['E']...ignoring it
17:25:59.5787 INFO DB_MULTIPLEDATASET.820 Missing timestep(226) for input['A']...ignoring it
17:25:59.5788 INFO DB_MULTIPLEDATASET.820 Missing timestep(226) for input['B']...ignoring it
17:25:59.5789 INFO DB_MULTIPLEDATASET.820 Missing timestep(226) for input['C']...ignoring it
17:25:59.5790 INFO DB_MULTIPLEDATASET.820 Missing timestep(226) for input['D']...ignoring it
17:25:59.5791 INFO DB_MULTIPLEDATASET.820 Missing timestep(226) for input['E']...ignoring it
17:25:59.6706 INFO DB_PROGRESSIVEQUERY.478 BoxQuery msec(4557) level(0/1/19/19) dims(1024 512 1 1 1) dtype(float32) filter(nullptr) access(yes) url(http://atlantis.sci.utah.edu/mod_visus?action=readdataset&dataset=pin2d)
17:25:59.7056 INFO RENDERARRAY.230 dtype(float32) dimension(1024 512 1) dims(1024 512 1) transfered on GPU in msec(1)
|
1.0
|
missing timesteps do not seem to be ignored - Here is the visus.config for the dataset:
```xml
<dataset name="Pin2d (MultipleAccess)" url="$(atlantis)pin2d" >
<access name="MultipleAccess" type="MultipleAccess">
<A type='multiplex'>
<access type='disk' chmod='rw' url='file://$(cache)/pin2d/A/visus.idx' />
<access type='network' chmod='r' compression='zip' url="$(atlantis)pin2d/A" />
</A>
<B type='multiplex'>
<access type='disk' chmod='rw' url='file://$(cache)/pin2d/B/visus.idx' />
<access type='network' chmod='r' compression='zip' url="$(atlantis)pin2d/B" />
</B>
<C type='multiplex'>
<access type='disk' chmod='rw' url='file://$(cache)/pin2d/C/visus.idx' />
<access type='network' chmod='r' compression='zip' url="$(atlantis)pin2d/C" />
</C>
<D type='multiplex'>
<access type='disk' chmod='rw' url='file://$(cache)/pin2d/D/visus.idx' />
<access type='network' chmod='r' compression='zip' url="$(atlantis)pin2d/D" />
</D>
<E type='multiplex'>
<access type='disk' chmod='rw' url='file://$(cache)/pin2d/E/visus.idx' />
<access type='network' chmod='r' compression='zip' url="$(atlantis)pin2d/E" />
</E>
</access>
</dataset>
```
And the script:
```javascript
f0=input.A.TS;
f1=input.B.TS;
f2=input.C.TS;
f3=input.D.TS;
f4=input.E.TS;
output=Visus.Array.avg([f0,f1,f2,f3,f4]);
```
Even though the output indicates they are ignored, the computation clearly includes fields of zeros. Note the data range in this image is lower than any of the actual (extant) data:

The average image looks like this:

Output:
17:25:55.1342 INFO DB_MULTIPLEDATASET.820 Missing timestep(215) for input['D']...ignoring it
17:25:55.1686 INFO DB_MULTIPLEDATASET.820 Missing timestep(216) for input['D']...ignoring it
17:25:55.2032 INFO DB_MULTIPLEDATASET.820 Missing timestep(217) for input['D']...ignoring it
17:25:55.2368 INFO DB_MULTIPLEDATASET.820 Missing timestep(218) for input['D']...ignoring it
17:25:57.1756 INFO DB_MULTIPLEDATASET.820 Missing timestep(219) for input['D']...ignoring it
17:25:57.1919 INFO DB_MULTIPLEDATASET.820 Missing timestep(220) for input['B']...ignoring it
17:25:57.2064 INFO DB_MULTIPLEDATASET.820 Missing timestep(220) for input['D']...ignoring it
17:25:58.006 INFO DB_MULTIPLEDATASET.820 Missing timestep(221) for input['B']...ignoring it
17:25:58.7877 INFO DB_MULTIPLEDATASET.820 Missing timestep(221) for input['D']...ignoring it
17:25:59.5764 INFO DB_MULTIPLEDATASET.820 Missing timestep(222) for input['A']...ignoring it
17:25:59.5764 INFO DB_MULTIPLEDATASET.820 Missing timestep(222) for input['B']...ignoring it
17:25:59.5769 INFO DB_MULTIPLEDATASET.820 Missing timestep(222) for input['C']...ignoring it
17:25:59.5770 INFO DB_MULTIPLEDATASET.820 Missing timestep(222) for input['D']...ignoring it
17:25:59.5771 INFO DB_MULTIPLEDATASET.820 Missing timestep(222) for input['E']...ignoring it
17:25:59.5772 INFO DB_MULTIPLEDATASET.820 Missing timestep(223) for input['A']...ignoring it
17:25:59.5773 INFO DB_MULTIPLEDATASET.820 Missing timestep(223) for input['B']...ignoring it
17:25:59.5774 INFO DB_MULTIPLEDATASET.820 Missing timestep(223) for input['C']...ignoring it
17:25:59.5775 INFO DB_MULTIPLEDATASET.820 Missing timestep(223) for input['D']...ignoring it
17:25:59.5776 INFO DB_MULTIPLEDATASET.820 Missing timestep(223) for input['E']...ignoring it
17:25:59.5777 INFO DB_MULTIPLEDATASET.820 Missing timestep(224) for input['A']...ignoring it
17:25:59.5779 INFO DB_MULTIPLEDATASET.820 Missing timestep(224) for input['B']...ignoring it
17:25:59.5779 INFO DB_MULTIPLEDATASET.820 Missing timestep(224) for input['C']...ignoring it
17:25:59.5780 INFO DB_MULTIPLEDATASET.820 Missing timestep(224) for input['D']...ignoring it
17:25:59.5781 INFO DB_MULTIPLEDATASET.820 Missing timestep(224) for input['E']...ignoring it
17:25:59.5782 INFO DB_MULTIPLEDATASET.820 Missing timestep(225) for input['A']...ignoring it
17:25:59.5783 INFO DB_MULTIPLEDATASET.820 Missing timestep(225) for input['B']...ignoring it
17:25:59.5784 INFO DB_MULTIPLEDATASET.820 Missing timestep(225) for input['C']...ignoring it
17:25:59.5785 INFO DB_MULTIPLEDATASET.820 Missing timestep(225) for input['D']...ignoring it
17:25:59.5786 INFO DB_MULTIPLEDATASET.820 Missing timestep(225) for input['E']...ignoring it
17:25:59.5787 INFO DB_MULTIPLEDATASET.820 Missing timestep(226) for input['A']...ignoring it
17:25:59.5788 INFO DB_MULTIPLEDATASET.820 Missing timestep(226) for input['B']...ignoring it
17:25:59.5789 INFO DB_MULTIPLEDATASET.820 Missing timestep(226) for input['C']...ignoring it
17:25:59.5790 INFO DB_MULTIPLEDATASET.820 Missing timestep(226) for input['D']...ignoring it
17:25:59.5791 INFO DB_MULTIPLEDATASET.820 Missing timestep(226) for input['E']...ignoring it
17:25:59.6706 INFO DB_PROGRESSIVEQUERY.478 BoxQuery msec(4557) level(0/1/19/19) dims(1024 512 1 1 1) dtype(float32) filter(nullptr) access(yes) url(http://atlantis.sci.utah.edu/mod_visus?action=readdataset&dataset=pin2d)
17:25:59.7056 INFO RENDERARRAY.230 dtype(float32) dimension(1024 512 1) dims(1024 512 1) transfered on GPU in msec(1)
|
process
|
missing timesteps do not seem to be ignored here is the visus config for the dataset xml and the script javascript input a ts input b ts input c ts input d ts input e ts output visus array avg even though the output indicates they are ignored the computation clearly includes fields of zeros note the data range in this image is lower than any of the actual extant data the average image looks like this output info db multipledataset missing timestep for input ignoring it info db multipledataset missing timestep for input ignoring it info db multipledataset missing timestep for input ignoring it info db multipledataset missing timestep for input ignoring it info db multipledataset missing timestep for input ignoring it info db multipledataset missing timestep for input ignoring it info db multipledataset missing timestep for input ignoring it info db multipledataset missing timestep for input ignoring it info db multipledataset missing timestep for input ignoring it info db multipledataset missing timestep for input ignoring it info db multipledataset missing timestep for input ignoring it info db multipledataset missing timestep for input ignoring it info db multipledataset missing timestep for input ignoring it info db multipledataset missing timestep for input ignoring it info db multipledataset missing timestep for input ignoring it info db multipledataset missing timestep for input ignoring it info db multipledataset missing timestep for input ignoring it info db multipledataset missing timestep for input ignoring it info db multipledataset missing timestep for input ignoring it info db multipledataset missing timestep for input ignoring it info db multipledataset missing timestep for input ignoring it info db multipledataset missing timestep for input ignoring it info db multipledataset missing timestep for input ignoring it info db multipledataset missing timestep for input ignoring it info db multipledataset missing timestep for input ignoring it info db multipledataset missing timestep for input ignoring it info db multipledataset missing timestep for input ignoring it info db multipledataset missing timestep for input ignoring it info db multipledataset missing timestep for input ignoring it info db multipledataset missing timestep for input ignoring it info db multipledataset missing timestep for input ignoring it info db multipledataset missing timestep for input ignoring it info db multipledataset missing timestep for input ignoring it info db multipledataset missing timestep for input ignoring it info db progressivequery boxquery msec level dims dtype filter nullptr access yes url info renderarray dtype dimension dims transfered on gpu in msec
| 1
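The behaviour the missing-timesteps record above expects — inputs flagged as missing should be excluded from the average rather than contributing zeros — is easy to state as a reference computation. A minimal numpy sketch of that semantics; `avg_ignoring_missing` is a hypothetical helper, not part of the Visus scripting API:
```python
# Reference semantics for the averaging above: a missing timestep (None)
# is dropped from the mean instead of contributing zeros.
# `avg_ignoring_missing` is a hypothetical helper, not the Visus API.
import numpy as np

def avg_ignoring_missing(fields):
    present = [f for f in fields if f is not None]
    if not present:
        raise ValueError('no inputs available for this timestep')
    return np.mean(np.stack(present), axis=0)

a = np.full((2, 2), 1.0)
b = np.full((2, 2), 3.0)
print(avg_ignoring_missing([a, b, None]))  # [[2. 2.] [2. 2.]] -- the None is skipped
```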
|
3,354
| 6,487,654,205
|
IssuesEvent
|
2017-08-20 10:02:36
|
Great-Hill-Corporation/quickBlocks
|
https://api.github.com/repos/Great-Hill-Corporation/quickBlocks
|
closed
|
BLOCK_CACHE has to be made arbitrary and not fixed
|
libs-etherlib status-inprocess type-bug
|
It is currently pretty darn hard coded. That cannot possibly work.
From https://github.com/Great-Hill-Corporation/ethslurp/issues/147
|
1.0
|
BLOCK_CACHE has to be made arbitrary and not fixed - It is currently pretty darn hard coded. That cannot possibly work.
From https://github.com/Great-Hill-Corporation/ethslurp/issues/147
|
process
|
block cache has to be made arbitrary and not fixed it is currently pretty darn hard coded that cannot possibly work from
| 1
|
19,689
| 26,040,816,692
|
IssuesEvent
|
2022-12-22 10:14:41
|
bazelbuild/bazel
|
https://api.github.com/repos/bazelbuild/bazel
|
opened
|
[Mirror] Boost 1.81.0
|
P2 type: process team-OSS mirror request
|
### Please list the URLs of the archives you'd like to mirror:
Please mirror
https://boostorg.jfrog.io/artifactory/main/release/1.81.0/source/boost_1_81_0.tar.gz
Expected mirror URL:
https://mirror.bazel.build/boostorg.jfrog.io/artifactory/main/release/1.81.0/source/boost_1_81_0.tar.gz
Needed here:
https://github.com/nelhage/rules_boost/blob/c37cb35c641d9520e46923ebf2d38be7350fa8bd/boost/boost.bzl#L176
|
1.0
|
[Mirror] Boost 1.81.0 - ### Please list the URLs of the archives you'd like to mirror:
Please mirror
https://boostorg.jfrog.io/artifactory/main/release/1.81.0/source/boost_1_81_0.tar.gz
Expected mirror URL:
https://mirror.bazel.build/boostorg.jfrog.io/artifactory/main/release/1.81.0/source/boost_1_81_0.tar.gz
Needed here:
https://github.com/nelhage/rules_boost/blob/c37cb35c641d9520e46923ebf2d38be7350fa8bd/boost/boost.bzl#L176
|
process
|
boost please list the urls of the archives you d like to mirror please mirror expected mirror url needed here
| 1
|
390,771
| 11,563,315,224
|
IssuesEvent
|
2020-02-20 05:38:24
|
ubclaunchpad/rocket2
|
https://api.github.com/repos/ubclaunchpad/rocket2
|
closed
|
Increase coverage for like the project
|
high priority refactoring & cleanup
|
**Please give a one-sentence summary of the cleanup you would like done.**
Increase the coverage for the project by omitting `tests` directory, omitting the things that aren't unit-tested, and adding more tests.
**Please give as many details as possible about the cleanup or refactoring.**
See above.
**Please list any additional context; in particular, list what areas of the code base this would affect.**
Like, most of it that has low coverage percentage.
|
1.0
|
Increase coverage for like the project - **Please give a one-sentence summary of the cleanup you would like done.**
Increase the coverage for the project by omitting `tests` directory, omitting the things that aren't unit-tested, and adding more tests.
**Please give as many details as possible about the cleanup or refactoring.**
See above.
**Please list any additional context; in particular, list what areas of the code base this would affect.**
Like, most of it that has low coverage percentage.
|
non_process
|
increase coverage for like the project please give a one sentence summary of the cleanup you would like done increase the coverage for the project by omitting tests directory omitting the things that aren t unit tested and adding more tests please give as many details as possible about the cleanup or refactoring see above please list any additional context in particular list what areas of the code base this would affect like most of it that has low coverage percentage
| 0
|
16,674
| 21,778,528,717
|
IssuesEvent
|
2022-05-13 16:05:28
|
camunda/zeebe
|
https://api.github.com/repos/camunda/zeebe
|
closed
|
MetricsExporter does not configure a record filter
|
kind/bug scope/broker area/performance area/reliability team/process-automation
|
**Describe the bug**
<!-- A clear and concise description of what the bug is. -->
The [MetricsExporter](https://github.com/camunda/zeebe/blob/8.0.0/broker/src/main/java/io/camunda/zeebe/broker/exporter/metrics/MetricsExporter.java#L27) does not override the [configure](https://github.com/camunda/zeebe/blob/8.0.0/exporter-api/src/main/java/io/camunda/zeebe/exporter/api/Exporter.java#L43) method. This method allows to set a filter for which records are available to export. Since the MetricsExporter only exports records of a few value types (i.e. JOB, JOB_BATCH, PROCESS_INSTANCE), it unnecessarily reads the record values of all other records.
This can completely halt exporting on a partition when combined with #6442.
**To Reproduce**
<!--
Steps to reproduce the behavior
If possible add a minimal reproducer code sample
- when using the Java client: https://github.com/zeebe-io/zeebe-test-template-java
-->
- configure the MetricsExporter for the broker
- run the broker
- deploy a very large process with mistakes that lead to zeebe rejecting the deployment with validation errors
- note that the exporter stops making progress
- note an increase in CPU because it keeps trying to re-read the record value indefinitely
**Expected behavior**
<!-- A clear and concise description of what you expected to happen. -->
The metrics exporter should be configured in such a way that only those records that it wants to export are actually read from the log by the exporter director.
**Log/Stacktrace**
<!-- If possible add the full stacktrace or Zeebe log which contains the issue. -->
<details><summary>Full Stacktrace</summary>
<p>
```
java.lang.IllegalArgumentException: invalid offset: -31808
at org.agrona.concurrent.UnsafeBuffer.boundsCheckWrap(UnsafeBuffer.java:2425) ~[agrona-1.15.0.jar:1.15.0]
at org.agrona.concurrent.UnsafeBuffer.wrap(UnsafeBuffer.java:277) ~[agrona-1.15.0.jar:1.15.0]
at io.camunda.zeebe.msgpack.spec.MsgPackReader.wrap(MsgPackReader.java:49) ~[zeebe-msgpack-core-8.0.0.jar:8.0.0]
at io.camunda.zeebe.msgpack.UnpackedObject.wrap(UnpackedObject.java:30) ~[zeebe-msgpack-value-8.0.0.jar:8.0.0]
at io.camunda.zeebe.logstreams.impl.log.LoggedEventImpl.readValue(LoggedEventImpl.java:135) ~[zeebe-logstreams-8.0.0.jar:8.0.0]
at io.camunda.zeebe.engine.processing.streamprocessor.RecordValues.readRecordValue(RecordValues.java:35) ~[zeebe-workflow-engine-8.0.0.jar:8.0.0]
at io.camunda.zeebe.broker.exporter.stream.ExporterDirector$RecordExporter.wrap(ExporterDirector.java:489) ~[zeebe-broker-8.0.0.jar:8.0.0]
at io.camunda.zeebe.broker.exporter.stream.ExporterDirector.lambda$exportEvent$9(ExporterDirector.java:391) ~[zeebe-broker-8.0.0.jar:8.0.0]
at io.camunda.zeebe.util.retry.ActorRetryMechanism.run(ActorRetryMechanism.java:36) ~[zeebe-util-8.0.0.jar:8.0.0]
at io.camunda.zeebe.util.retry.EndlessRetryStrategy.run(EndlessRetryStrategy.java:50) ~[zeebe-util-8.0.0.jar:8.0.0]
at io.camunda.zeebe.util.sched.ActorJob.invoke(ActorJob.java:79) ~[zeebe-util-8.0.0.jar:8.0.0]
at io.camunda.zeebe.util.sched.ActorJob.execute(ActorJob.java:44) ~[zeebe-util-8.0.0.jar:8.0.0]
at io.camunda.zeebe.util.sched.ActorTask.execute(ActorTask.java:122) ~[zeebe-util-8.0.0.jar:8.0.0]
at io.camunda.zeebe.util.sched.ActorThread.executeCurrentTask(ActorThread.java:97) ~[zeebe-util-8.0.0.jar:8.0.0]
at io.camunda.zeebe.util.sched.ActorThread.doWork(ActorThread.java:80) ~[zeebe-util-8.0.0.jar:8.0.0]
at io.camunda.zeebe.util.sched.ActorThread.run(ActorThread.java:189) ~[zeebe-util-8.0.0.jar:8.0.0]
```
</p>
</details>
**Environment:**
- Zeebe Version: 8.0
- Configuration: MetricsExporter
|
1.0
|
MetricsExporter does not configure a record filter - **Describe the bug**
<!-- A clear and concise description of what the bug is. -->
The [MetricsExporter](https://github.com/camunda/zeebe/blob/8.0.0/broker/src/main/java/io/camunda/zeebe/broker/exporter/metrics/MetricsExporter.java#L27) does not override the [configure](https://github.com/camunda/zeebe/blob/8.0.0/exporter-api/src/main/java/io/camunda/zeebe/exporter/api/Exporter.java#L43) method. This method allows to set a filter for which records are available to export. Since the MetricsExporter only exports records of a few value types (i.e. JOB, JOB_BATCH, PROCESS_INSTANCE), it unnecessarily reads the record values of all other records.
This can completely halt exporting on a partition when combined with #6442.
**To Reproduce**
<!--
Steps to reproduce the behavior
If possible add a minimal reproducer code sample
- when using the Java client: https://github.com/zeebe-io/zeebe-test-template-java
-->
- configure the MetricsExporter for the broker
- run the broker
- deploy a very large process with mistakes that lead to zeebe rejecting the deployment with validation errors
- note that the exporter stops making progress
- note an increase in CPU because it keeps trying to re-read the record value indefinitely
**Expected behavior**
<!-- A clear and concise description of what you expected to happen. -->
The metrics exporter should be configured in such a way that only those records that it wants to export are actually read from the log by the exporter director.
**Log/Stacktrace**
<!-- If possible add the full stacktrace or Zeebe log which contains the issue. -->
<details><summary>Full Stacktrace</summary>
<p>
```
java.lang.IllegalArgumentException: invalid offset: -31808
at org.agrona.concurrent.UnsafeBuffer.boundsCheckWrap(UnsafeBuffer.java:2425) ~[agrona-1.15.0.jar:1.15.0]
at org.agrona.concurrent.UnsafeBuffer.wrap(UnsafeBuffer.java:277) ~[agrona-1.15.0.jar:1.15.0]
at io.camunda.zeebe.msgpack.spec.MsgPackReader.wrap(MsgPackReader.java:49) ~[zeebe-msgpack-core-8.0.0.jar:8.0.0]
at io.camunda.zeebe.msgpack.UnpackedObject.wrap(UnpackedObject.java:30) ~[zeebe-msgpack-value-8.0.0.jar:8.0.0]
at io.camunda.zeebe.logstreams.impl.log.LoggedEventImpl.readValue(LoggedEventImpl.java:135) ~[zeebe-logstreams-8.0.0.jar:8.0.0]
at io.camunda.zeebe.engine.processing.streamprocessor.RecordValues.readRecordValue(RecordValues.java:35) ~[zeebe-workflow-engine-8.0.0.jar:8.0.0]
at io.camunda.zeebe.broker.exporter.stream.ExporterDirector$RecordExporter.wrap(ExporterDirector.java:489) ~[zeebe-broker-8.0.0.jar:8.0.0]
at io.camunda.zeebe.broker.exporter.stream.ExporterDirector.lambda$exportEvent$9(ExporterDirector.java:391) ~[zeebe-broker-8.0.0.jar:8.0.0]
at io.camunda.zeebe.util.retry.ActorRetryMechanism.run(ActorRetryMechanism.java:36) ~[zeebe-util-8.0.0.jar:8.0.0]
at io.camunda.zeebe.util.retry.EndlessRetryStrategy.run(EndlessRetryStrategy.java:50) ~[zeebe-util-8.0.0.jar:8.0.0]
at io.camunda.zeebe.util.sched.ActorJob.invoke(ActorJob.java:79) ~[zeebe-util-8.0.0.jar:8.0.0]
at io.camunda.zeebe.util.sched.ActorJob.execute(ActorJob.java:44) ~[zeebe-util-8.0.0.jar:8.0.0]
at io.camunda.zeebe.util.sched.ActorTask.execute(ActorTask.java:122) ~[zeebe-util-8.0.0.jar:8.0.0]
at io.camunda.zeebe.util.sched.ActorThread.executeCurrentTask(ActorThread.java:97) ~[zeebe-util-8.0.0.jar:8.0.0]
at io.camunda.zeebe.util.sched.ActorThread.doWork(ActorThread.java:80) ~[zeebe-util-8.0.0.jar:8.0.0]
at io.camunda.zeebe.util.sched.ActorThread.run(ActorThread.java:189) ~[zeebe-util-8.0.0.jar:8.0.0]
```
</p>
</details>
**Environment:**
- Zeebe Version: 8.0
- Configuration: MetricsExporter
|
process
|
metricsexporter does not configure a record filter describe the bug the does not override the method this method allows to set a filter for which records are available to export since the metricsexporter only exports records of a few value types i e job job batch process instance it unnecessarily reads the record values of all other records this can completely halt exporting on a partition when combined with to reproduce steps to reproduce the behavior if possible add a minimal reproducer code sample when using the java client configure the metricsexporter for the broker run the broker deploy a very large process with mistakes that lead to zeebe rejecting the deployment with validation errors note that the exporter stops making progress note an increase in cpu because it keeps trying to re read the record value indefinitely expected behavior the metrics exporter should be configured in such a way that only those records that it wants to export are actually read from the log by the exporter director log stacktrace full stacktrace java lang illegalargumentexception invalid offset at org agrona concurrent unsafebuffer boundscheckwrap unsafebuffer java at org agrona concurrent unsafebuffer wrap unsafebuffer java at io camunda zeebe msgpack spec msgpackreader wrap msgpackreader java at io camunda zeebe msgpack unpackedobject wrap unpackedobject java at io camunda zeebe logstreams impl log loggedeventimpl readvalue loggedeventimpl java at io camunda zeebe engine processing streamprocessor recordvalues readrecordvalue recordvalues java at io camunda zeebe broker exporter stream exporterdirector recordexporter wrap exporterdirector java at io camunda zeebe broker exporter stream exporterdirector lambda exportevent exporterdirector java at io camunda zeebe util retry actorretrymechanism run actorretrymechanism java at io camunda zeebe util retry endlessretrystrategy run endlessretrystrategy java at io camunda zeebe util sched actorjob invoke actorjob java at io camunda zeebe util sched actorjob execute actorjob java at io camunda zeebe util sched actortask execute actortask java at io camunda zeebe util sched actorthread executecurrenttask actorthread java at io camunda zeebe util sched actorthread dowork actorthread java at io camunda zeebe util sched actorthread run actorthread java environment zeebe version configuration metricsexporter
| 1
|
20,901
| 27,739,382,115
|
IssuesEvent
|
2023-03-15 13:25:19
|
open-telemetry/opentelemetry-collector-contrib
|
https://api.github.com/repos/open-telemetry/opentelemetry-collector-contrib
|
closed
|
Enable the `spanmetrics` connector component.
|
processor/spanmetrics connector/spanmetrics
|
### Component(s)
connector/spanmetrics
### Describe the issue you're reporting
The `spanmetrics` connector is a port of the `spanmetrics` processor, but since it is going to be a new component, we thought that it can be a good time to fix some issues of the former `spanmetrics` processor in the new connector and bring the connector closer to the OTel spec, drop any related Prometheus specific conventions that the processor heavily uses, etc.
We can deprecate the processor and keep both components for some time to give users time to migrate. It should not be that critical anyways because the `spanmetrics` processor's stability level is `in development`.
**TODO when enabling the component:**
- Enable the component
- Add a change log entry listing all the breaking changes and instructions on how to migrate from the `spanmetrics` processor to the `spanmetrics` connector
- https://github.com/open-telemetry/opentelemetry-collector-contrib/issues/18529
- https://github.com/open-telemetry/opentelemetry-collector-contrib/issues/18678
- https://github.com/open-telemetry/opentelemetry-collector-contrib/issues/18677
- https://github.com/open-telemetry/opentelemetry-collector-contrib/issues/18502
- https://github.com/open-telemetry/opentelemetry-collector-contrib/issues/19214
- https://github.com/open-telemetry/opentelemetry-collector-contrib/issues/18698
- https://github.com/open-telemetry/opentelemetry-collector-contrib/issues/18528
- Update READMEs of the processor and connector
- Deprecate the `spanmetrics` processor
|
1.0
|
Enable the `spanmetrics` connector component. - ### Component(s)
connector/spanmetrics
### Describe the issue you're reporting
The `spanmetrics` connector is a port of the `spanmetrics` processor, but since it is going to be a new component, we thought that it can be a good time to fix some issues of the former `spanmetrics` processor in the new connector and bring the connector closer to the OTel spec, drop any related Prometheus specific conventions that the processor heavily uses, etc.
We can deprecate the processor and keep both components for some time to give users time to migrate. It should not be that critical anyways because the `spanmetrics` processor's stability level is `in development`.
**TODO when enabling the component:**
- Enable the component
- Add a change log entry listing all the breaking changes and instructions on how to migrate from the `spanmetrics` processor to the `spanmetrics` connector
- https://github.com/open-telemetry/opentelemetry-collector-contrib/issues/18529
- https://github.com/open-telemetry/opentelemetry-collector-contrib/issues/18678
- https://github.com/open-telemetry/opentelemetry-collector-contrib/issues/18677
- https://github.com/open-telemetry/opentelemetry-collector-contrib/issues/18502
- https://github.com/open-telemetry/opentelemetry-collector-contrib/issues/19214
- https://github.com/open-telemetry/opentelemetry-collector-contrib/issues/18698
- https://github.com/open-telemetry/opentelemetry-collector-contrib/issues/18528
- Update READMEs of the processor and connector
- Deprecate the `spanmetrics` processor
|
process
|
enable the spanmetrics connector component component s connector spanmetrics describe the issue you re reporting the spanmetrics connector is a port of the spanmetrics processor but since it is going to be a new component we thought that it can be a good time to fix some issues of the former spanmetrics processor in the new connector and bring the connector closer to the otel spec drop any related prometheus specific conventions that the processor heavily uses etc we can deprecate the processor and keep both components for some time to give users time to migrate it should not be that critical anyways because the spanmetrics processor s stability level is in development todo when enabling the component enable the component add a change log entry listing all the breaking changes and instructions on how to migrate from the spanmetrics processor to the spanmetrics connector update readmes of the processor and connector deprecate the spanmetrics processor
| 1
|
422,557
| 12,280,194,177
|
IssuesEvent
|
2020-05-08 13:42:42
|
grpc/grpc
|
https://api.github.com/repos/grpc/grpc
|
closed
|
Expose C# API to set env-driven variables via code
|
disposition/help wanted kind/enhancement lang/C# priority/P2
|
I wish to configure `GRPC_VERBOSITY` from my C# code, as my deployment model does not make it easy to use environment variables. I see a `gpr_set_log_verbosity` function in the C core but this does not appear to be exposed to C# anywhere, as far as I can tell.
I request that such a member be added to the API. I would like to be able to configure all the environment variable driven options from C# code. The `GrpcEnvironment` class would seem like a good host for this functionality.
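For illustration, a minimal sketch of the same idea in Python (hypothetical here, since the request is about the C# surface): the gRPC C core reads these settings from the process environment, so preparing `os.environ` before the runtime loads has the effect the reporter wants a first-class API for.
```python
import os

# Hypothetical workaround sketch: the gRPC C core reads these settings from
# the process environment, so set them before the runtime initializes.
os.environ["GRPC_VERBOSITY"] = "DEBUG"
os.environ["GRPC_TRACE"] = "api"

import grpc  # noqa: E402  -- imported only after the environment is prepared
```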
|
1.0
|
Expose C# API to set env-driven variables via code - I wish to configure `GRPC_VERBOSITY` from my C# code, as my deployment model does not make it easy to use environment variables. I see a `gpr_set_log_verbosity` function in the C core but this does not appear to be exposed to C# anywhere, as far as I can tell.
I request that such a member be added to the API. I would like to be able to configure all the environment variable driven options from C# code. The `GrpcEnvironment` class would seem like a good host for this functionality.
|
non_process
|
expose c api to set env driven variables via code i wish to configure grpc verbosity from my c code as my deployment model does not make it easy to use environment variables i see a gpr set log verbosity function in the c core but this does not appear to be exposed to c anywhere as far as i can tell i request that such a member be added to the api i would like to be able to configure all the environment variable driven options from c code the grpcenvironment class would seem like a good host for this functionality
| 0
|
128,886
| 17,644,020,650
|
IssuesEvent
|
2021-08-20 01:28:52
|
apollographql/apollo-client
|
https://api.github.com/repos/apollographql/apollo-client
|
closed
|
refetchQueries does not update "loading" from query
|
🏓 awaiting-response ✍️ working-as-designed
|
Setting ```refetchQueries``` in ```useMutation``` reloads the given query successfully, but the ```loading``` variable from that query does not get updated to ```true/false```, it stays as ```false```
Given the following:
```ts
const query = useQuery(GQL_QUERY);
const [mutation, { data: mutationData, loading: mutationLoading, error: mutationError }] = useMutation(GQL_MUTATION,
{
refetchQueries: [GQL_QUERY]
});
```
On first load, ```query.loading``` is ```true``` and later changes to ```false``` after the data loads; that works great.
Later in the code, executing the mutation function ```mutation()``` causes GQL_QUERY to be reloaded and its results assigned to ```query.data```, but ```query.loading``` stays ```false``` through the whole process.
That seems inconsistent.
|
1.0
|
refetchQueries does not update "loading" from query - Setting ```refetchQueries``` in ```useMutation``` reloads the given query successfully, but the ```loading``` variable from that query does not get updated to ```true/false```, it stays as ```false```
Given the following:
```ts
const query = useQuery(GQL_QUERY);
const [mutation, { data: mutationData, loading: mutationLoading, error: mutationError }] = useMutation(GQL_MUTATION,
{
refetchQueries: [GQL_QUERY]
});
```
On first load, ```query.loading``` is ```true``` and later changes to ```false``` after the data loads; that works great.
Later in the code, executing the mutation function ```mutation()``` causes GQL_QUERY to be reloaded and its results assigned to ```query.data```, but ```query.loading``` stays ```false``` through the whole process.
That seems inconsistent.
|
non_process
|
refetchqueries does not update loading from query setting refetchqueries in usemutation reloads the given query successfully but the loading variable from that query does not get updated to true false it stays as false given the following ts const query usequery gql query const usemutation gql mutation refetchqueries on first load query loading is true and later changes to false after loading the data that works great later in the code execute the mutation function mutation after execution gql query gets reloaded and its results assigned to query data but query loading stays false thru the whole process that seems inconsistent
| 0
|
20,221
| 26,812,498,915
|
IssuesEvent
|
2023-02-01 23:59:20
|
hashicorp/packer-plugin-vsphere
|
https://api.github.com/repos/hashicorp/packer-plugin-vsphere
|
closed
|
`vsphere-template`: Packer can't find source VM if folder not specified
|
duplicate post-processor/vsphere-template
|
_This issue was originally opened by @razethion in https://github.com/hashicorp/packer/issues/12229 and has been migrated to this repository. The original issue description is below._
---
<!--- Please keep this note for the community --->
#### Community Note
* Please vote on this issue by adding a 👍 [reaction](https://blog.github.com/2016-03-10-add-reactions-to-pull-requests-issues-and-comments/) to the original issue to help the community and maintainers prioritize this request
* Please do not leave "+1" or other comments that do not add relevant new information or questions, they generate extra noise for issue followers and do not help prioritize the request
* If you are interested in working on this issue or have submitted a pull request, please leave a comment
<!--- Thank you for keeping this note for the community --->
#### Overview of the Issue
If you don't specify a folder in the vsphere-iso source, the builder for vsphere-template will fail
#### Reproduction Steps
In this HCL:
https://gist.github.com/razethion/23a4e61c39109b87cb5498406ac5350e
If you don't specify `folder = "Discovered virtual machine"` in the source, the VM will be created in the root of the datacenter. Packer, however, assumes the VM is in the Discovered virtual machine folder. This causes a build error:
```
==> vsphere-iso.server22desktop: Running post-processor: (type vsphere-template)
    vsphere-iso.server22desktop (vsphere-template): Waiting 10s for VMware vSphere to start
    vsphere-iso.server22desktop (vsphere-template): Choosing datacenter...
    vsphere-iso.server22desktop (vsphere-template): Creating or checking destination folders...
==> vsphere-iso.server22desktop (vsphere-template): VM at path /datacenter/vm/Discovered virtual machine/T-WIN2K22STD not found
Build 'vsphere-iso.server22desktop' errored after 9 minutes 25 seconds:
1 error(s) occurred:
* Post-processor failed: VM at path /datacenter/vm/Discovered virtual machine/T-WIN2K22STD not found
```
### Packer version
Packer v1.8.5
### Operating system and Environment details
vmware vcenter 7.0
|
1.0
|
`vsphere-template`: Packer can't find source VM if folder not specified - _This issue was originally opened by @razethion in https://github.com/hashicorp/packer/issues/12229 and has been migrated to this repository. The original issue description is below._
---
<!--- Please keep this note for the community --->
#### Community Note
* Please vote on this issue by adding a 👍 [reaction](https://blog.github.com/2016-03-10-add-reactions-to-pull-requests-issues-and-comments/) to the original issue to help the community and maintainers prioritize this request
* Please do not leave "+1" or other comments that do not add relevant new information or questions, they generate extra noise for issue followers and do not help prioritize the request
* If you are interested in working on this issue or have submitted a pull request, please leave a comment
<!--- Thank you for keeping this note for the community --->
#### Overview of the Issue
If you don't specify a folder in the vsphere-iso source, the builder for vsphere-template will fail
#### Reproduction Steps
In this HCL:
https://gist.github.com/razethion/23a4e61c39109b87cb5498406ac5350e
If you don't specify `folder = "Discovered virtual machine"` in the source, the VM will be created in the root of the datacenter. Packer, however, assumes the VM is in the Discovered virtual machine folder. This causes a build error:
```
==> vsphere-iso.server22desktop: Running post-processor: (type vsphere-template)
    vsphere-iso.server22desktop (vsphere-template): Waiting 10s for VMware vSphere to start
    vsphere-iso.server22desktop (vsphere-template): Choosing datacenter...
    vsphere-iso.server22desktop (vsphere-template): Creating or checking destination folders...
==> vsphere-iso.server22desktop (vsphere-template): VM at path /datacenter/vm/Discovered virtual machine/T-WIN2K22STD not found
Build 'vsphere-iso.server22desktop' errored after 9 minutes 25 seconds:
1 error(s) occurred:
* Post-processor failed: VM at path /datacenter/vm/Discovered virtual machine/T-WIN2K22STD not found
```
### Packer version
Packer v1.8.5
### Operating system and Environment details
vmware vcenter 7.0
|
process
|
vsphere template packer can t find source vm if folder not specified this issue was originally opened by razethion in and has been migrated to this repository the original issue description is below community note please vote on this issue by adding a 👍 to the original issue to help the community and maintainers prioritize this request please do not leave or other comments that do not add relevant new information or questions they generate extra noise for issue followers and do not help prioritize the request if you are interested in working on this issue or have submitted a pull request please leave a comment overview of the issue if you don t specify a folder in the vsphere iso source the builder for vsphere template will fail reproduction steps in this hcl if you don t specify folder discovered virtual machine in the source the vm will be created in the root of the datacenter packer however assumes the vm is in the discovered virtual machine folder this causes a build error vsphere iso running post processor type vsphere template vsphere iso vsphere template waiting for vmwa re vsphere to start vsphere iso vsphere template choosing datacenter vsphere iso vsphere template creating or checking destination folders vsphere iso vsphere template vm at path datacenter vm discovered virtual machine t not found build vsphere iso errored after minutes seconds error s occurred post processor failed vm at path datacenter vm discovered virtual machine t not found packer version packer operating system and environment details vmware vcenter
| 1
|
314,544
| 23,527,059,750
|
IssuesEvent
|
2022-08-19 11:56:53
|
executablebooks/jupyter-book
|
https://api.github.com/repos/executablebooks/jupyter-book
|
closed
|
No syntax highlighting code blocks are still highlighted
|
bug documentation
|
### Describe the bug
The code blocks without language indication are wrongly highlighted.
### Reproduce the bug
See: https://jupyterbook.org/reference/cheatsheet.html#code-and-syntax-highlighting

### List your environment
_No response_
|
1.0
|
No syntax highlighting code blocks are still highlighted - ### Describe the bug
The code blocks without language indication are wrongly highlighted.
### Reproduce the bug
See: https://jupyterbook.org/reference/cheatsheet.html#code-and-syntax-highlighting

### List your environment
_No response_
|
non_process
|
no syntax highlighting code blocks are still highlighted describe the bug the code blocks without language indication are wrongly highlighted reproduce the bug see list your environment no response
| 0
|
21,310
| 28,504,060,115
|
IssuesEvent
|
2023-04-18 19:47:27
|
cse442-at-ub/project_s23-cinco
|
https://api.github.com/repos/cse442-at-ub/project_s23-cinco
|
opened
|
Make like/dislike button clicked only once by user and only one of them can be selected
|
Processing Task Sprint 4
|
**Task Tests**
test 1:
1) go to website url: https://www-student.cse.buffalo.edu/CSE442-542/2023-Spring/cse-442b/build/
2) log in
3) click on an event thumbnail to open the event popup
4) click the like button 2 times to ensure the counter only went up by 1.
test 2:
1) go to website url: https://www-student.cse.buffalo.edu/CSE442-542/2023-Spring/cse-442b/build/
2) log in
3) click on an event thumbnail to open the event popup
4) click the like button, ensure it went up by 1, then click the dislike button and ensure that number didn't change
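A minimal sketch of bookkeeping that would satisfy these tests, under one plausible reading of the requirement (a second identical click is a no-op and switching replaces the earlier vote); `EventVotes` and its methods are hypothetical names, not code from the project:
```python
# Hypothetical per-event vote store: one vote per user, like/dislike exclusive.
class EventVotes:
    def __init__(self):
        self.votes = {}  # user_id -> "like" or "dislike"

    def vote(self, user_id, kind):
        assert kind in ("like", "dislike")
        if self.votes.get(user_id) == kind:
            return  # test 1: clicking the same button twice changes nothing
        self.votes[user_id] = kind  # switching replaces the earlier vote

    def counts(self):
        likes = sum(1 for v in self.votes.values() if v == "like")
        return likes, len(self.votes) - likes

ev = EventVotes()
ev.vote("user1", "like")
ev.vote("user1", "like")
assert ev.counts() == (1, 0)  # like counter went up by exactly 1
```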
|
1.0
|
Make like/dislike button clicked only once by user and only one of them can be selected - **Task Tests**
test 1:
1) go to website url: https://www-student.cse.buffalo.edu/CSE442-542/2023-Spring/cse-442b/build/
2) log in
3) click on an event thumbnail to open the event popup
4) click the like button 2 times to ensure the counter only went up by 1.
test 2:
1) go to website url: https://www-student.cse.buffalo.edu/CSE442-542/2023-Spring/cse-442b/build/
2) log in
3) click on an event thumbnail to open the event popup
4) click the like button, ensure it went up by 1, then click the dislike button and ensure that number didn't change
|
process
|
make like dislike button clicked only once by user and only one of them can be selected task tests test go to website url log in click on an event thumbnail to open the event popup click the like button times to ensure the counter only went up by test go to website url log in click on an event thumbnail to open the event popup click the like button ensure it went up by then click the dislike button and ensure that number didn t change
| 1
|
7,463
| 10,562,911,932
|
IssuesEvent
|
2019-10-04 19:33:02
|
googleapis/google-cloud-python
|
https://api.github.com/repos/googleapis/google-cloud-python
|
closed
|
BigQuery: flaky 'test_create_table' snippet.
|
api: bigquery flaky testing type: process
|
From: [this test failure](https://source.cloud.google.com/results/invocations/234ff781-271e-4ce0-b5aa-8e39ee129ca5/targets/cloud-devrel%2Fclient-libraries%2Fgoogle-cloud-python%2Fpresubmit%2Fbigquery/log):
```python
py.test samples
============================= test session starts ==============================
platform linux -- Python 3.6.0, pytest-4.4.1, py-1.8.0, pluggy-0.9.0
rootdir: /tmpfs/src/github/google-cloud-python/bigquery
collected 12 items
samples/tests/test_create_dataset.py . [ 8%]
samples/tests/test_create_table.py E [ 16%]
samples/tests/test_delete_dataset.py . [ 25%]
samples/tests/test_delete_table.py . [ 33%]
samples/tests/test_get_dataset.py . [ 41%]
samples/tests/test_get_table.py . [ 50%]
samples/tests/test_list_datasets.py . [ 58%]
samples/tests/test_list_tables.py . [ 66%]
samples/tests/test_model_samples.py . [ 75%]
samples/tests/test_update_dataset_access.py . [ 83%]
samples/tests/test_update_dataset_default_table_expiration.py . [ 91%]
samples/tests/test_update_dataset_description.py . [100%]
==================================== ERRORS ====================================
_____________________ ERROR at setup of test_create_table ______________________
client = <google.cloud.bigquery.client.Client object at 0x7f9d84a7be10>
@pytest.fixture
def dataset_id(client):
now = datetime.datetime.now()
dataset_id = "python_samples_{}_{}".format(
now.strftime("%Y%m%d%H%M%S"), uuid.uuid4().hex[:8]
)
> dataset = client.create_dataset(dataset_id)
samples/tests/conftest.py:53:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
google/cloud/bigquery/client.py:364: in create_dataset
api_response = self._call_api(retry, method="POST", path=path, data=data)
google/cloud/bigquery/client.py:413: in _call_api
return call()
../api_core/google/api_core/retry.py:270: in retry_wrapped_func
on_error=on_error,
../api_core/google/api_core/retry.py:179: in retry_target
return target()
../core/google/cloud/_http.py:315: in api_request
target_object=_target_object,
../core/google/cloud/_http.py:192: in _make_request
return self._do_request(method, url, headers, data, target_object)
../core/google/cloud/_http.py:221: in _do_request
return self.http.request(url=url, method=method, headers=headers, data=data)
.nox/snippets-3-6/lib/python3.6/site-packages/google/auth/transport/requests.py:205: in request
self._auth_request, method, url, request_headers)
.nox/snippets-3-6/lib/python3.6/site-packages/google/auth/credentials.py:122: in before_request
self.refresh(request)
.nox/snippets-3-6/lib/python3.6/site-packages/google/oauth2/service_account.py:322: in refresh
request, self._token_uri, assertion)
.nox/snippets-3-6/lib/python3.6/site-packages/google/oauth2/_client.py:145: in jwt_grant
response_data = _token_endpoint_request(request, token_uri, body)
.nox/snippets-3-6/lib/python3.6/site-packages/google/oauth2/_client.py:111: in _token_endpoint_request
_handle_error_response(response_body)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
response_body = '{\n "error": "internal_failure"\n}'
def _handle_error_response(response_body):
""""Translates an error response into an exception.
Args:
response_body (str): The decoded response data.
Raises:
google.auth.exceptions.RefreshError
"""
try:
error_data = json.loads(response_body)
error_details = '{}: {}'.format(
error_data['error'],
error_data.get('error_description'))
# If no details could be extracted, use the response data.
except (KeyError, ValueError):
error_details = response_body
raise exceptions.RefreshError(
> error_details, response_body)
E google.auth.exceptions.RefreshError: ('internal_failure: None', '{\n "error": "internal_failure"\n}')
.nox/snippets-3-6/lib/python3.6/site-packages/google/oauth2/_client.py:61: RefreshError
===================== 11 passed, 1 error in 32.84 seconds ======================
```
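The failure is a transient `internal_failure` from the OAuth token endpoint during credential refresh, not a BigQuery problem. One way to deflake the fixture, sketched under the assumption that retrying the whole call also retries the refresh (the teardown shown is an addition, not part of the original snippet):
```python
# Hypothetical flake guard for the fixture: retry dataset creation when the
# token endpoint fails transiently during credential refresh.
import datetime
import uuid

import pytest
from google.api_core import retry
from google.auth.exceptions import RefreshError


@pytest.fixture
def dataset_id(client):
    name = "python_samples_{}_{}".format(
        datetime.datetime.now().strftime("%Y%m%d%H%M%S"), uuid.uuid4().hex[:8]
    )
    create = retry.Retry(predicate=retry.if_exception_type(RefreshError))(
        client.create_dataset
    )
    dataset = create(name)
    yield dataset.dataset_id
    client.delete_dataset(dataset, delete_contents=True)
```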
|
1.0
|
BigQuery: flaky 'test_create_table' snippet. - From: [this test failure](https://source.cloud.google.com/results/invocations/234ff781-271e-4ce0-b5aa-8e39ee129ca5/targets/cloud-devrel%2Fclient-libraries%2Fgoogle-cloud-python%2Fpresubmit%2Fbigquery/log):
```python
py.test samples
============================= test session starts ==============================
platform linux -- Python 3.6.0, pytest-4.4.1, py-1.8.0, pluggy-0.9.0
rootdir: /tmpfs/src/github/google-cloud-python/bigquery
collected 12 items
samples/tests/test_create_dataset.py . [ 8%]
samples/tests/test_create_table.py E [ 16%]
samples/tests/test_delete_dataset.py . [ 25%]
samples/tests/test_delete_table.py . [ 33%]
samples/tests/test_get_dataset.py . [ 41%]
samples/tests/test_get_table.py . [ 50%]
samples/tests/test_list_datasets.py . [ 58%]
samples/tests/test_list_tables.py . [ 66%]
samples/tests/test_model_samples.py . [ 75%]
samples/tests/test_update_dataset_access.py . [ 83%]
samples/tests/test_update_dataset_default_table_expiration.py . [ 91%]
samples/tests/test_update_dataset_description.py . [100%]
==================================== ERRORS ====================================
_____________________ ERROR at setup of test_create_table ______________________
client = <google.cloud.bigquery.client.Client object at 0x7f9d84a7be10>
@pytest.fixture
def dataset_id(client):
now = datetime.datetime.now()
dataset_id = "python_samples_{}_{}".format(
now.strftime("%Y%m%d%H%M%S"), uuid.uuid4().hex[:8]
)
> dataset = client.create_dataset(dataset_id)
samples/tests/conftest.py:53:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
google/cloud/bigquery/client.py:364: in create_dataset
api_response = self._call_api(retry, method="POST", path=path, data=data)
google/cloud/bigquery/client.py:413: in _call_api
return call()
../api_core/google/api_core/retry.py:270: in retry_wrapped_func
on_error=on_error,
../api_core/google/api_core/retry.py:179: in retry_target
return target()
../core/google/cloud/_http.py:315: in api_request
target_object=_target_object,
../core/google/cloud/_http.py:192: in _make_request
return self._do_request(method, url, headers, data, target_object)
../core/google/cloud/_http.py:221: in _do_request
return self.http.request(url=url, method=method, headers=headers, data=data)
.nox/snippets-3-6/lib/python3.6/site-packages/google/auth/transport/requests.py:205: in request
self._auth_request, method, url, request_headers)
.nox/snippets-3-6/lib/python3.6/site-packages/google/auth/credentials.py:122: in before_request
self.refresh(request)
.nox/snippets-3-6/lib/python3.6/site-packages/google/oauth2/service_account.py:322: in refresh
request, self._token_uri, assertion)
.nox/snippets-3-6/lib/python3.6/site-packages/google/oauth2/_client.py:145: in jwt_grant
response_data = _token_endpoint_request(request, token_uri, body)
.nox/snippets-3-6/lib/python3.6/site-packages/google/oauth2/_client.py:111: in _token_endpoint_request
_handle_error_response(response_body)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
response_body = '{\n "error": "internal_failure"\n}'
def _handle_error_response(response_body):
""""Translates an error response into an exception.
Args:
response_body (str): The decoded response data.
Raises:
google.auth.exceptions.RefreshError
"""
try:
error_data = json.loads(response_body)
error_details = '{}: {}'.format(
error_data['error'],
error_data.get('error_description'))
# If no details could be extracted, use the response data.
except (KeyError, ValueError):
error_details = response_body
raise exceptions.RefreshError(
> error_details, response_body)
E google.auth.exceptions.RefreshError: ('internal_failure: None', '{\n "error": "internal_failure"\n}')
.nox/snippets-3-6/lib/python3.6/site-packages/google/oauth2/_client.py:61: RefreshError
===================== 11 passed, 1 error in 32.84 seconds ======================
```
|
process
|
bigquery flaky test create table snippet from python py test samples test session starts platform linux python pytest py pluggy rootdir tmpfs src github google cloud python bigquery collected items samples tests test create dataset py samples tests test create table py e samples tests test delete dataset py samples tests test delete table py samples tests test get dataset py samples tests test get table py samples tests test list datasets py samples tests test list tables py samples tests test model samples py samples tests test update dataset access py samples tests test update dataset default table expiration py samples tests test update dataset description py errors error at setup of test create table client pytest fixture def dataset id client now datetime datetime now dataset id python samples format now strftime y m d h m s uuid hex dataset client create dataset dataset id samples tests conftest py google cloud bigquery client py in create dataset api response self call api retry method post path path data data google cloud bigquery client py in call api return call api core google api core retry py in retry wrapped func on error on error api core google api core retry py in retry target return target core google cloud http py in api request target object target object core google cloud http py in make request return self do request method url headers data target object core google cloud http py in do request return self http request url url method method headers headers data data nox snippets lib site packages google auth transport requests py in request self auth request method url request headers nox snippets lib site packages google auth credentials py in before request self refresh request nox snippets lib site packages google service account py in refresh request self token uri assertion nox snippets lib site packages google client py in jwt grant response data token endpoint request request token uri body nox snippets lib site packages google client py in token endpoint request handle error response response body response body n error internal failure n def handle error response response body translates an error response into an exception args response body str the decoded response data raises google auth exceptions refresherror try error data json loads response body error details format error data error data get error description if no details could be extracted use the response data except keyerror valueerror error details response body raise exceptions refresherror error details response body e google auth exceptions refresherror internal failure none n error internal failure n nox snippets lib site packages google client py refresherror passed error in seconds
| 1
|
227,167
| 7,527,641,291
|
IssuesEvent
|
2018-04-13 17:48:03
|
AKQuaternion/L-Systems-Expander
|
https://api.github.com/repos/AKQuaternion/L-Systems-Expander
|
opened
|
Have an automatic level that draws just enough detail
|
graphics help wanted priority-med
|
Not at all sure how to go about this, it's only possible if the generated fractal passes through exactly the same points at each level.
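A sketch of the idea under that assumption: stop expanding once a segment would shrink below one pixel, so the drawn curve is indistinguishable from deeper levels. The function name and the 1/3 shrink factor are illustrative, not from the project:
```python
# Hypothetical sketch: pick the expansion depth from the on-screen segment
# size, valid only when deeper levels pass through the same points.
def depth_for_detail(segment_len, shrink, pixel=1.0):
    depth = 0
    while segment_len * shrink > pixel:
        segment_len *= shrink
        depth += 1
    return depth

# e.g. a Koch-style rule scales segments by 1/3 per level:
print(depth_for_detail(segment_len=512, shrink=1 / 3))  # -> 5
```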
|
1.0
|
Have an automatic level that draws just enough detail - Not at all sure how to go about this, it's only possible if the generated fractal passes through exactly the same points at each level.
|
non_process
|
have an automatic level that draws just enough detail not at all sure how to go about this it s only possible if the generated fractal passes through exactly the same points at each level
| 0
|
4,988
| 7,822,173,506
|
IssuesEvent
|
2018-06-14 00:48:38
|
StrikeNP/trac_test
|
https://api.github.com/repos/StrikeNP/trac_test
|
closed
|
Create a library structure for post_processing (Trac #2)
|
Migrated from Trac enhancement fasching@uwm.edu post_processing
|
Several files are used repeatedly by post_processing scripts.
These files include:
header_read.m
read_grads_hoc_endian.m
convert.m
It would be useful if these each had only one version in one place (e.g. ../post_processing/library/ )
Attachments:
Migrated from http://carson.math.uwm.edu/trac/clubb/ticket/2
```json
{
"status": "closed",
"changetime": "2009-05-16T10:11:18",
"description": "Several files are used repeatedly by post_processing scripts.\n\nThese files include:\n\nheader_read.m\nread_grads_hoc_endian.m\nconvert.m\n\nIt would be useful if these each had only one version in one place (e.g. ../post_processing/library/ )",
"reporter": "fasching@uwm.edu",
"cc": "",
"resolution": "Verified by V. Larson",
"_ts": "1242468678000000",
"component": "post_processing",
"summary": "Create a library structure for post_processing",
"priority": "minor",
"keywords": "matlab, conversion",
"time": "2009-05-01T21:14:16",
"milestone": "",
"owner": "fasching@uwm.edu",
"type": "enhancement"
}
```
|
1.0
|
Create a library structure for post_processing (Trac #2) - Several files are used repeatedly by post_processing scripts.
These files include:
header_read.m
read_grads_hoc_endian.m
convert.m
It would be useful if these each had only one version in one place (e.g. ../post_processing/library/ )
Attachments:
Migrated from http://carson.math.uwm.edu/trac/clubb/ticket/2
```json
{
"status": "closed",
"changetime": "2009-05-16T10:11:18",
"description": "Several files are used repeatedly by post_processing scripts.\n\nThese files include:\n\nheader_read.m\nread_grads_hoc_endian.m\nconvert.m\n\nIt would be useful if these each had only one version in one place (e.g. ../post_processing/library/ )",
"reporter": "fasching@uwm.edu",
"cc": "",
"resolution": "Verified by V. Larson",
"_ts": "1242468678000000",
"component": "post_processing",
"summary": "Create a library structure for post_processing",
"priority": "minor",
"keywords": "matlab, conversion",
"time": "2009-05-01T21:14:16",
"milestone": "",
"owner": "fasching@uwm.edu",
"type": "enhancement"
}
```
|
process
|
create a library structure for post processing trac several files are used repeatedly by post processing scripts these files include header read m read grads hoc endian m convert m it would be useful if these each had only one version in one place e g post processing library attachments migrated from json status closed changetime description several files are used repeatedly by post processing scripts n nthese files include n nheader read m nread grads hoc endian m nconvert m n nit would be useful if these each had only one version in one place e g post processing library reporter fasching uwm edu cc resolution verified by v larson ts component post processing summary create a library structure for post processing priority minor keywords matlab conversion time milestone owner fasching uwm edu type enhancement
| 1
|
13,716
| 16,480,495,127
|
IssuesEvent
|
2021-05-24 10:55:31
|
New-Time-Development/OmeCord
|
https://api.github.com/repos/New-Time-Development/OmeCord
|
closed
|
Queue stuff
|
Big issue Known bug work in process
|
Add to queue when a match has failed.
Test if a user is in the queue when typing in !start
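A minimal sketch of both rules, assuming a simple in-memory queue (all names are illustrative, not the bot's actual code):
```python
# Hypothetical in-memory queue logic for the two rules above.
queue = []

def on_match_failed(user_a, user_b):
    for user in (user_a, user_b):  # re-queue both sides of a failed match
        if user not in queue:
            queue.append(user)

def on_start_command(user):
    if user in queue:  # !start while already queued
        return "You are already in the queue."
    queue.append(user)
    return "Added to the queue."
```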
|
1.0
|
Queue stuff - Add to queue when a match has failed.
Test if a user is in the queue when typing in !start
|
process
|
queue stuff add to queue when a match has failed test if a user is in the queue when typing in start
| 1
|
2,194
| 5,038,422,763
|
IssuesEvent
|
2016-12-18 08:03:40
|
AllenFang/react-bootstrap-table
|
https://api.github.com/repos/AllenFang/react-bootstrap-table
|
closed
|
Enhancement request for a way to indicate columns that are editable.
|
help wanted inprocess
|
I tried to add an image to the column header to indicate that it's editable. This renders as needed but throws the error below.
It only accepts strings as headers. Is there any other way to indicate that only particular columns are editable in the table column headers?
**Error Message:**
warning.js:36 Warning: Failed prop type: Invalid prop `columnName` of type `array` supplied to `NumberFilter`, expected `string`.
in NumberFilter (created by TableHeaderColumn)
in TableHeaderColumn (created by ManagedTable)
**Code:**
[code.txt](https://github.com/AllenFang/react-bootstrap-table/files/649656/code.txt)
|
1.0
|
Enhancement request for a way to indicate columns that are editable. - I tried to add an image to the column header to indicate that it's editable. This renders as needed but throws the error below.
It only accepts strings as headers. Is there any other way to indicate that only particular columns are editable in the table column headers?
**Error Message:**
warning.js:36 Warning: Failed prop type: Invalid prop `columnName` of type `array` supplied to `NumberFilter`, expected `string`.
in NumberFilter (created by TableHeaderColumn)
in TableHeaderColumn (created by ManagedTable)
**Code:**
[code.txt](https://github.com/AllenFang/react-bootstrap-table/files/649656/code.txt)
|
process
|
enhancement request for a way to indicate columns that are editable i tried to add an image to the column header to indicate that its editable this renders as needed but does throw error below it only accepts strings as headers is there any other way to indicate that only particular columns are editable in the table column headers error message warning js warning failed prop type invalid prop columnname of type array supplied to numberfilter expected string in numberfilter created by tableheadercolumn in tableheadercolumn created by managedtable code
| 1
|
11,534
| 14,408,557,225
|
IssuesEvent
|
2020-12-04 00:04:48
|
GoogleCloudPlatform/fourkeys
|
https://api.github.com/repos/GoogleCloudPlatform/fourkeys
|
closed
|
Create end-to-end tests for data generator
|
type: process
|
Deployment definition change in recent PR (https://github.com/GoogleCloudPlatform/fourkeys/pull/34) was not reflected in the data generator, causing new dashboard setups to fail: https://github.com/GoogleCloudPlatform/fourkeys/issues/37
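A sketch of what such a test could look like: post one mock event to the deployed event handler and fail the build if it is rejected. The URL, payload shape, and omitted signature header are placeholders:
```python
# Hypothetical end-to-end smoke test for the data generator path.
import json

import requests

EVENT_HANDLER_URL = "https://event-handler.example.run.app"  # placeholder


def test_mock_event_is_accepted():
    payload = {"head_commit": {"id": "abc123"}, "repository": {"name": "demo"}}
    resp = requests.post(
        EVENT_HANDLER_URL,
        data=json.dumps(payload),
        headers={"Content-Type": "application/json", "X-Github-Event": "push"},
        timeout=10,
    )
    assert resp.status_code < 300  # deployment drift would surface here
```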
|
1.0
|
Create end-to-end tests for data generator - Deployment definition change in recent PR (https://github.com/GoogleCloudPlatform/fourkeys/pull/34) was not reflected in the data generator, causing new dashboard setups to fail: https://github.com/GoogleCloudPlatform/fourkeys/issues/37
|
process
|
create end to end tests for data generator deployment definition change in recent pr was not reflected in the data generator causing new dashboard setups to fail
| 1
|
29,873
| 11,782,209,040
|
IssuesEvent
|
2020-03-17 01:04:55
|
andrasta/lgsvl_simulator
|
https://api.github.com/repos/andrasta/lgsvl_simulator
|
opened
|
CVE-2012-6708 (Medium) detected in jquery-1.7.1.min.js
|
security vulnerability
|
## CVE-2012-6708 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jquery-1.7.1.min.js</b></p></summary>
<p>JavaScript library for DOM operations</p>
<p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/jquery/1.7.1/jquery.min.js">https://cdnjs.cloudflare.com/ajax/libs/jquery/1.7.1/jquery.min.js</a></p>
<p>Path to dependency file: /tmp/ws-scm/lgsvl_simulator/WebUI/node_modules/@enact/cli/node_modules/sockjs/examples/multiplex/index.html</p>
<p>Path to vulnerable library: /lgsvl_simulator/WebUI/node_modules/@enact/cli/node_modules/sockjs/examples/multiplex/index.html,/lgsvl_simulator/WebUI/node_modules/@enact/cli/node_modules/sockjs/examples/echo/index.html,/lgsvl_simulator/WebUI/node_modules/@enact/cli/node_modules/sockjs/examples/hapi/html/index.html,/lgsvl_simulator/WebUI/node_modules/@enact/cli/node_modules/sockjs/examples/express-3.x/index.html,/lgsvl_simulator/WebUI/node_modules/@enact/cli/node_modules/vm-browserify/example/run/index.html</p>
<p>
Dependency Hierarchy:
- :x: **jquery-1.7.1.min.js** (Vulnerable Library)
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
jQuery before 1.9.0 is vulnerable to Cross-site Scripting (XSS) attacks. The jQuery(strInput) function does not differentiate selectors from HTML in a reliable fashion. In vulnerable versions, jQuery determined whether the input was HTML by looking for the '<' character anywhere in the string, giving attackers more flexibility when attempting to construct a malicious payload. In fixed versions, jQuery only deems the input to be HTML if it explicitly starts with the '<' character, limiting exploitability only to attackers who can control the beginning of a string, which is far less common.
<p>Publish Date: 2018-01-18
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2012-6708>CVE-2012-6708</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.1</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Changed
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://nvd.nist.gov/vuln/detail/CVE-2012-6708">https://nvd.nist.gov/vuln/detail/CVE-2012-6708</a></p>
<p>Release Date: 2018-01-18</p>
<p>Fix Resolution: jQuery - v1.9.0</p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":false,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"JavaScript","packageName":"jquery","packageVersion":"1.7.1","isTransitiveDependency":false,"dependencyTree":"jquery:1.7.1","isMinimumFixVersionAvailable":true,"minimumFixVersion":"jQuery - v1.9.0"}],"vulnerabilityIdentifier":"CVE-2012-6708","vulnerabilityDetails":"jQuery before 1.9.0 is vulnerable to Cross-site Scripting (XSS) attacks. The jQuery(strInput) function does not differentiate selectors from HTML in a reliable fashion. In vulnerable versions, jQuery determined whether the input was HTML by looking for the \u0027\u003c\u0027 character anywhere in the string, giving attackers more flexibility when attempting to construct a malicious payload. In fixed versions, jQuery only deems the input to be HTML if it explicitly starts with the \u0027\u003c\u0027 character, limiting exploitability only to attackers who can control the beginning of a string, which is far less common.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2012-6708","cvss3Severity":"medium","cvss3Score":"6.1","cvss3Metrics":{"A":"None","AC":"Low","PR":"None","S":"Changed","C":"Low","UI":"Required","AV":"Network","I":"Low"},"extraData":{}}</REMEDIATE> -->
|
True
|
CVE-2012-6708 (Medium) detected in jquery-1.7.1.min.js - ## CVE-2012-6708 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jquery-1.7.1.min.js</b></p></summary>
<p>JavaScript library for DOM operations</p>
<p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/jquery/1.7.1/jquery.min.js">https://cdnjs.cloudflare.com/ajax/libs/jquery/1.7.1/jquery.min.js</a></p>
<p>Path to dependency file: /tmp/ws-scm/lgsvl_simulator/WebUI/node_modules/@enact/cli/node_modules/sockjs/examples/multiplex/index.html</p>
<p>Path to vulnerable library: /lgsvl_simulator/WebUI/node_modules/@enact/cli/node_modules/sockjs/examples/multiplex/index.html,/lgsvl_simulator/WebUI/node_modules/@enact/cli/node_modules/sockjs/examples/echo/index.html,/lgsvl_simulator/WebUI/node_modules/@enact/cli/node_modules/sockjs/examples/hapi/html/index.html,/lgsvl_simulator/WebUI/node_modules/@enact/cli/node_modules/sockjs/examples/express-3.x/index.html,/lgsvl_simulator/WebUI/node_modules/@enact/cli/node_modules/vm-browserify/example/run/index.html</p>
<p>
Dependency Hierarchy:
- :x: **jquery-1.7.1.min.js** (Vulnerable Library)
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
jQuery before 1.9.0 is vulnerable to Cross-site Scripting (XSS) attacks. The jQuery(strInput) function does not differentiate selectors from HTML in a reliable fashion. In vulnerable versions, jQuery determined whether the input was HTML by looking for the '<' character anywhere in the string, giving attackers more flexibility when attempting to construct a malicious payload. In fixed versions, jQuery only deems the input to be HTML if it explicitly starts with the '<' character, limiting exploitability only to attackers who can control the beginning of a string, which is far less common.
<p>Publish Date: 2018-01-18
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2012-6708>CVE-2012-6708</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.1</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Changed
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://nvd.nist.gov/vuln/detail/CVE-2012-6708">https://nvd.nist.gov/vuln/detail/CVE-2012-6708</a></p>
<p>Release Date: 2018-01-18</p>
<p>Fix Resolution: jQuery - v1.9.0</p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":false,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"JavaScript","packageName":"jquery","packageVersion":"1.7.1","isTransitiveDependency":false,"dependencyTree":"jquery:1.7.1","isMinimumFixVersionAvailable":true,"minimumFixVersion":"jQuery - v1.9.0"}],"vulnerabilityIdentifier":"CVE-2012-6708","vulnerabilityDetails":"jQuery before 1.9.0 is vulnerable to Cross-site Scripting (XSS) attacks. The jQuery(strInput) function does not differentiate selectors from HTML in a reliable fashion. In vulnerable versions, jQuery determined whether the input was HTML by looking for the \u0027\u003c\u0027 character anywhere in the string, giving attackers more flexibility when attempting to construct a malicious payload. In fixed versions, jQuery only deems the input to be HTML if it explicitly starts with the \u0027\u003c\u0027 character, limiting exploitability only to attackers who can control the beginning of a string, which is far less common.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2012-6708","cvss3Severity":"medium","cvss3Score":"6.1","cvss3Metrics":{"A":"None","AC":"Low","PR":"None","S":"Changed","C":"Low","UI":"Required","AV":"Network","I":"Low"},"extraData":{}}</REMEDIATE> -->
|
non_process
|
cve medium detected in jquery min js cve medium severity vulnerability vulnerable library jquery min js javascript library for dom operations library home page a href path to dependency file tmp ws scm lgsvl simulator webui node modules enact cli node modules sockjs examples multiplex index html path to vulnerable library lgsvl simulator webui node modules enact cli node modules sockjs examples multiplex index html lgsvl simulator webui node modules enact cli node modules sockjs examples echo index html lgsvl simulator webui node modules enact cli node modules sockjs examples hapi html index html lgsvl simulator webui node modules enact cli node modules sockjs examples express x index html lgsvl simulator webui node modules enact cli node modules vm browserify example run index html dependency hierarchy x jquery min js vulnerable library vulnerability details jquery before is vulnerable to cross site scripting xss attacks the jquery strinput function does not differentiate selectors from html in a reliable fashion in vulnerable versions jquery determined whether the input was html by looking for the character anywhere in the string giving attackers more flexibility when attempting to construct a malicious payload in fixed versions jquery only deems the input to be html if it explicitly starts with the character limiting exploitability only to attackers who can control the beginning of a string which is far less common publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction required scope changed impact metrics confidentiality impact low integrity impact low availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution jquery isopenpronvulnerability false ispackagebased true isdefaultbranch true packages vulnerabilityidentifier cve vulnerabilitydetails jquery before is vulnerable to cross site scripting xss attacks the jquery strinput function does not differentiate selectors from html in a reliable fashion in vulnerable versions jquery determined whether the input was html by looking for the character anywhere in the string giving attackers more flexibility when attempting to construct a malicious payload in fixed versions jquery only deems the input to be html if it explicitly starts with the character limiting exploitability only to attackers who can control the beginning of a string which is far less common vulnerabilityurl
| 0
|
22,187
| 30,737,262,161
|
IssuesEvent
|
2023-07-28 08:39:30
|
dita-ot/dita-ot
|
https://api.github.com/repos/dita-ot/dita-ot
|
closed
|
NullPointerException in AbstractReaderModule
|
bug priority/high preprocess2
|
I'm attaching a sample DITA project, published to PDF using DITA OT 3.1 the following NullPointerException is reported:
```
D:\projects\oxygen-dita-ot-3.x\build.xml:45: The following error occurred while executing this line:
D:\projects\oxygen-dita-ot-3.x\plugins\org.dita.base\build_preprocess2.xml:175: java.lang.NullPointerException
at org.dita.dost.module.reader.AbstractReaderModule.addToWaitList(AbstractReaderModule.java:535)
at java.util.ArrayList.forEach(Unknown Source)
at org.dita.dost.module.reader.TopicReaderModule.readStartFile(TopicReaderModule.java:133)
at org.dita.dost.module.reader.TopicReaderModule.execute(TopicReaderModule.java:70)
at org.dita.dost.ant.ExtensibleAntInvoker.execute(ExtensibleAntInvoker.java:169)
at org.apache.tools.ant.UnknownElement.execute(UnknownElement.java:292)
at sun.reflect.GeneratedMethodAccessor6.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
at java.lang.reflect.Method.invoke(Unknown Source)
```
[npe_pdf_reader_module.zip](https://github.com/dita-ot/dita-ot/files/2144456/npe_pdf_reader_module.zip)
From what I looked at, probably just guarding the NPE should be fine. There is a key definition there with a copy-to, and the key is later referenced with a related link from a topic. After guarding the NPE and publishing to PDF, the related link to the key seems to work.
|
1.0
|
NullPointerException in AbstractReaderModule - I'm attaching a sample DITA project, published to PDF using DITA OT 3.1 the following NullPointerException is reported:
```
D:\projects\oxygen-dita-ot-3.x\build.xml:45: The following error occurred while executing this line:
D:\projects\oxygen-dita-ot-3.x\plugins\org.dita.base\build_preprocess2.xml:175: java.lang.NullPointerException
at org.dita.dost.module.reader.AbstractReaderModule.addToWaitList(AbstractReaderModule.java:535)
at java.util.ArrayList.forEach(Unknown Source)
at org.dita.dost.module.reader.TopicReaderModule.readStartFile(TopicReaderModule.java:133)
at org.dita.dost.module.reader.TopicReaderModule.execute(TopicReaderModule.java:70)
at org.dita.dost.ant.ExtensibleAntInvoker.execute(ExtensibleAntInvoker.java:169)
at org.apache.tools.ant.UnknownElement.execute(UnknownElement.java:292)
at sun.reflect.GeneratedMethodAccessor6.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
at java.lang.reflect.Method.invoke(Unknown Source)
```
[npe_pdf_reader_module.zip](https://github.com/dita-ot/dita-ot/files/2144456/npe_pdf_reader_module.zip)
From what I looked at, probably just guarding the NPE should be fine. There is a key definition there with a copy-to, and the key is later referenced with a related link from a topic. After guarding the NPE and publishing to PDF, the related link to the key seems to work.
|
process
|
nullpointerexception in abstractreadermodule i m attaching a sample dita project published to pdf using dita ot the following nullpointerexception is reported d projects oxygen dita ot x build xml the following error occurred while executing this line d projects oxygen dita ot x plugins org dita base build xml java lang nullpointerexception at org dita dost module reader abstractreadermodule addtowaitlist abstractreadermodule java at java util arraylist foreach unknown source at org dita dost module reader topicreadermodule readstartfile topicreadermodule java at org dita dost module reader topicreadermodule execute topicreadermodule java at org dita dost ant extensibleantinvoker execute extensibleantinvoker java at org apache tools ant unknownelement execute unknownelement java at sun reflect invoke unknown source at sun reflect delegatingmethodaccessorimpl invoke unknown source at java lang reflect method invoke unknown source from what i looked probably just guarding the npe should be fine there is a key definition there with a copy to and the key is later referenced with a related link from a topic just guarding the npe and then publishing to pdf the related link to the key seems to work
| 1
|
4,710
| 7,548,873,927
|
IssuesEvent
|
2018-04-18 12:42:54
|
DynareTeam/dynare
|
https://api.github.com/repos/DynareTeam/dynare
|
closed
|
Fix predetermined_variables command with model-local variables
|
bug preprocessor
|
The mod-file
```
@#define predet=1
var K q;
@#if predet
predetermined_variables K;
@#endif
parameters
a b delta r alpha dt
K_inf q_inf;
a=1;
b=1;
delta = 0.023;
alpha = 0.33;
r = 0.01;
dt=1;
q_inf = 1+(1+a)*b*delta^a;
K_inf = (((r+delta)*q_inf-a*b*delta^(a+1))/alpha)^(1/(alpha-1));
@#if predet==0
// Model block without predetermined_variables statement
model;
# I = K(-1)*((q-1)/((1+a)*b))^(1/a);
# w = alpha*K(-1)^(alpha-1)+a*b*(I/K(-1))^(a+1);
K = K(-1) + (I-delta*K(-1))*dt;
q(+1) = q + (r+delta)*q*dt - w*dt;
end;
@#else
// Model block with predetermined_variables statement
model;
# I = K*((q-1)/((1+a)*b))^(1/a);
# w = alpha*K^(alpha-1)+a*b*(I/K)^(a+1);
K(+1) = K + (I-delta*K)*dt;
q(+1) = q + (r+delta)*q*dt - w*dt;
end;
@#endif
steady_state_model;
K = K_inf;
q = q_inf;
end;
initval;
K = 20;
q = q_inf;
end;
endval;
K = K_inf;
q = q_inf1;
end;
check;
simul(periods=100, tolx=1e-12, tolf=1e-12, no_homotopy);
# I = K*((q-1)/((1+a)*b))^(1/a);
# w = alpha*K^(alpha-1)+a*b*(I/K)^(a+1);
K(+1) = K + (I-delta*K)*dt;
rplot K;
rplot q;
```
yields wrong results. The relevant parts of the dynamic file corresponding to
```
# I = K*((q-1)/((1+a)*b))^(1/a);
K(+1) = K + (I-delta*K)*dt;
```
are
```
I__ = y(2)*((y(3)-1)/((1+params(1))*params(2)))^(1/params(1));
lhs =y(2);
rhs =y(1)+params(6)*(I__-params(3)*y(1));
```
Here, `y(1)` stores `K` and `y(2)` stores `K(+1)`. As can be seen, in the created model-local variable, the capital stock is not shifted backwards, despite the `predetermined_variables` statement.
Upon fixing this, we should turn the file into a unit test.
|
1.0
|
Fix predetermined_variables command with model-local variables - The mod-file
```
@#define predet=1
var K q;
@#if predet
predetermined_variables K;
@#endif
parameters
a b delta r alpha dt
K_inf q_inf;
a=1;
b=1;
delta = 0.023;
alpha = 0.33;
r = 0.01;
dt=1;
q_inf = 1+(1+a)*b*delta^a;
K_inf = (((r+delta)*q_inf-a*b*delta^(a+1))/alpha)^(1/(alpha-1));
@#if predet==0
// Model block without predetermined_variables statement
model;
# I = K(-1)*((q-1)/((1+a)*b))^(1/a);
# w = alpha*K(-1)^(alpha-1)+a*b*(I/K(-1))^(a+1);
K = K(-1) + (I-delta*K(-1))*dt;
q(+1) = q + (r+delta)*q*dt - w*dt;
end;
@#else
// Model block with predetermined_variables statement
model;
# I = K*((q-1)/((1+a)*b))^(1/a);
# w = alpha*K^(alpha-1)+a*b*(I/K)^(a+1);
K(+1) = K + (I-delta*K)*dt;
q(+1) = q + (r+delta)*q*dt - w*dt;
end;
@#endif
steady_state_model;
K = K_inf;
q = q_inf;
end;
initval;
K = 20;
q = q_inf;
end;
endval;
K = K_inf;
q = q_inf1;
end;
check;
simul(periods=100, tolx=1e-12, tolf=1e-12, no_homotopy);
# I = K*((q-1)/((1+a)*b))^(1/a);
# w = alpha*K^(alpha-1)+a*b*(I/K)^(a+1);
K(+1) = K + (I-delta*K)*dt;
rplot K;
rplot q;
```
yields wrong results. The relevant parts of the dynamic file corresponding to
```
# I = K*((q-1)/((1+a)*b))^(1/a);
K(+1) = K + (I-delta*K)*dt;
```
are
```
I__ = y(2)*((y(3)-1)/((1+params(1))*params(2)))^(1/params(1));
lhs =y(2);
rhs =y(1)+params(6)*(I__-params(3)*y(1));
```
Here, `y(1)` stores `K` and `y(2)` stores `K(+1)`. As can be seen, in the created model-local variable, the capital stock is not shifted backwards, despite the `predetermined_variables` statement.
Upon fixing this, we should turn the file into a unit test.
|
process
|
fix predetermined variables command with model local variables the mod file define predet var k q if predet predetermined variables k endif parameters a b delta r alpha dt k inf q inf a b delta alpha r dt q inf a b delta a k inf r delta q inf a b delta a alpha alpha if predet model block without predetermined variables statement model i k q a b a w alpha k alpha a b i k a k k i delta k dt q q r delta q dt w dt end else model block with predetermined variables statement model i k q a b a w alpha k alpha a b i k a k k i delta k dt q q r delta q dt w dt end endif steady state model k k inf q q inf end initval k q q inf end endval k k inf q q end check simul periods tolx tolf no homotopy i k q a b a w alpha k alpha a b i k a k k i delta k dt rplot k rplot q yields wrong results the relevant parts of the dynamic file corresponding to i k q a b a k k i delta k dt are i y y params params params lhs y rhs y params i params y here y stores k and y stores k as can be seen in the created model local variable the capital stock is not shifted backwards despite the predetermined variables statement upon fixing this we should turn the file into a unit test
| 1
|
423
| 2,855,178,062
|
IssuesEvent
|
2015-06-02 07:53:21
|
Sheep-y/Sheep-y.github.io
|
https://api.github.com/repos/Sheep-y/Sheep-y.github.io
|
opened
|
Proper multilingual (en,zh,ja) support
|
Data DDD Process UI
|
1. [ ] Fill in Japanese text.
2. [ ] Update transphase module to support multilingual highlight.
3. [ ] Dynamically create CSS rules to fit language order. (Use Flexbox for single column)
|
1.0
|
Proper multilingual (en,zh,ja) support - 1. [ ] Fill in Japanese text.
2. [ ] Update transphase module to support multilingual highlight.
3. [ ] Dynamically create CSS rules to fit language order. (Use Flexbox for single column)
|
process
|
proper multilingual en zh ja support fill in japanese text update transphase module to support multilingual highlight dynamically create css rules to fit language order use flexbox for single column
| 1
|
17,078
| 22,579,226,381
|
IssuesEvent
|
2022-06-28 10:03:38
|
apache/arrow-rs
|
https://api.github.com/repos/apache/arrow-rs
|
closed
|
Clap Deprecations
|
good first issue enhancement development-process help wanted
|
**Is your feature request related to a problem or challenge? Please describe what you are trying to do.**
The most recent clap release deprecated some functionality which is causing clippy to fail on master
```
use of deprecated unit variant clap::ArgAction::StoreValue
```
https://epage.github.io/blog/2022/02/clap-31-a-step-towards-40/
https://github.com/clap-rs/clap/issues/3822
**Describe the solution you'd like**
We should fix this
|
1.0
|
Clap Deprecations - **Is your feature request related to a problem or challenge? Please describe what you are trying to do.**
The most recent clap release deprecated some functionality which is causing clippy to fail on master
```
use of deprecated unit variant clap::ArgAction::StoreValue
```
https://epage.github.io/blog/2022/02/clap-31-a-step-towards-40/
https://github.com/clap-rs/clap/issues/3822
**Describe the solution you'd like**
We should fix this
|
process
|
clap deprecations is your feature request related to a problem or challenge please describe what you are trying to do the most recent clap release deprecated some functionality which is causing clippy to fail on master use of deprecated unit variant clap argaction storevalue describe the solution you d like we should fix this
| 1
|
306,999
| 23,177,632,060
|
IssuesEvent
|
2022-07-31 16:57:26
|
brad-cannell/freqtables
|
https://api.github.com/repos/brad-cannell/freqtables
|
closed
|
Make a package down site for freqtables
|
documentation
|
You might want to wait until after you make the `group_by()` changes. The vignettes might change quite a bit when you do that.
Start with something really simple. Just get it going. You can improve over time.
This will have more detail than R4Epi, and it will be more formal than R Notes.
https://pkgdown.r-lib.org/index.html
|
1.0
|
Make a package down site for freqtables - You might want to wait until after you make the `group_by()` changes. The vignettes might change quite a bit when you do that.
Start with something really simple. Just get it going. You can improve over time.
This will have more detail than R4Epi, and it will be more formal than R Notes.
https://pkgdown.r-lib.org/index.html
|
non_process
|
make a package down site for freqtables you might want to wait until after you make the group by changes the vignettes might change quite a bit when you do that start with something really simple just get it going you can improve over time this will have more detail than and it will be more formal than r notes
| 0
|
8,617
| 11,772,663,086
|
IssuesEvent
|
2020-03-16 04:45:12
|
dCentralizedSystems/customer-support
|
https://api.github.com/repos/dCentralizedSystems/customer-support
|
closed
|
Add proximity egress/ingress counters using user defined tethers on ground tasks, produce alerts
|
UI enhancement fleet management stream processing
|
using the task UI or just the data sample client UI, add a tether and have the user specify a radius, then start tracking in/out counters. we can add alerts as well
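A sketch of the stream-side bookkeeping, assuming a tether is a center plus radius and counters are edge-triggered on boundary crossings (all names are illustrative):
```python
# Hypothetical tether with edge-triggered ingress/egress counters.
from math import asin, cos, radians, sin, sqrt

def haversine_m(lat1, lon1, lat2, lon2):
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 2 * 6371000 * asin(sqrt(a))  # metres

class Tether:
    def __init__(self, lat, lon, radius_m):
        self.lat, self.lon, self.radius_m = lat, lon, radius_m
        self.inside = None  # unknown until the first sample
        self.ingress = self.egress = 0

    def update(self, lat, lon):
        now_inside = haversine_m(self.lat, self.lon, lat, lon) <= self.radius_m
        if self.inside is not None and now_inside != self.inside:
            if now_inside:
                self.ingress += 1  # crossed in: candidate alert
            else:
                self.egress += 1   # crossed out: candidate alert
        self.inside = now_inside
```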
|
1.0
|
Add proximity egress/ingress counters using user defined tethers on ground tasks, produce alerts - using the task UI or just the data sample client UI, add a tether and have the user specify a radius, then start tracking in/out counters. we can add alerts as well
|
process
|
add proximity egress ingress counters using user defined tethers on ground tasks produce alerts using the task ui or just the data sample client ui add a tether and have the user specify a radius then start tracking in out counters we can add alerts as well
| 1
|
37,501
| 8,407,650,346
|
IssuesEvent
|
2018-10-11 21:39:59
|
riksanyal/et
|
https://api.github.com/repos/riksanyal/et
|
closed
|
Superfluous error message (Trac #54)
|
Migrated from Trac SimFactory defect mthomas
|
I submitted a job for a simulation which already existed. This led to the following output, where there is an error message about "mkdir". Even though there is an error, simfactory proceeds (it probably shouldn't). In this case, however, I assume that the error is benign because the directory existed before, and thus the error message should not be shown at all.
$ ./simfactory/sim submit par/static_tov.par --procs=24 --walltime=4:0:0 --num-threads=4
sim: manage and submit cactus jobs
defs: /nics/a/proj/cactus/eschnett/xt5/EinsteinToolkit-hg/simfactory/etc/defs.ini
defs.local: /nics/a/proj/cactus/eschnett/xt5/EinsteinToolkit-hg/simfactory/etc/defs.local.ini
Cactus Directory: /nics/a/proj/cactus/eschnett/xt5/EinsteinToolkit-hg
SimEnvironment.COMMAND: submit
Executing command: submit
Parfile: par/static_tov.par
[log] Assigned restart_id of: 0001
[log] Found the following restart_ids: [0]
[log] Maximum restart id determined to be: 0000
[log] Determined submit restart id: 1
writing to internalDir: /lustre/scratch/eschnett/simulations/static_tov/output-0001/SIMFACTORY
writing to: /lustre/scratch/eschnett/simulations/static_tov/output-0001/SIMFACTORY/PreparedSubmitScript
mkdir: cannot create directory `/lustre/scratch/@USER@': Permission denied
Executing submit command: /opt/torque/2.3.5/bin/qsub /lustre/scratch/eschnett/simulations/static_tov/output-0001/SIMFACTORY/PreparedSubmitScript
Submit finished, job id is 788131.nid00016
Migrated from https://trac.einsteintoolkit.org/ticket/54
```json
{
"status": "closed",
"changetime": "2010-10-20T16:14:27",
"description": "I submitted a job for a simulation which already existed. This lead to the following output, where there is an error message about \"mkdir\". Even though there is an error, simfactory proceeds (it probably shouldn't). In this case, however, I assume that the error is benign because the directory existed before, and thus the error message should not be shown at all.\n\n\n\n$ ./simfactory/sim submit par/static_tov.par --procs=24 --walltime=4:0:0 --num-threads=4\nsim: manage and submit cactus jobs\n\ndefs: /nics/a/proj/cactus/eschnett/xt5/EinsteinToolkit-hg/simfactory/etc/defs.ini\ndefs.local: /nics/a/proj/cactus/eschnett/xt5/EinsteinToolkit-hg/simfactory/etc/defs.local.ini\n\nCactus Directory: /nics/a/proj/cactus/eschnett/xt5/EinsteinToolkit-hg\nSimEnvironment.COMMAND: submit\nExecuting command: submit\nParfile: par/static_tov.par\n[log] Assigned restart_id of: 0001\n[log] Found the following restart_ids: [0]\n[log] Maximum restart id determined to be: 0000\n[log] Determined submit restart id: 1 \nwriting to internalDir: /lustre/scratch/eschnett/simulations/static_tov/output-0001/SIMFACTORY\nwriting to: /lustre/scratch/eschnett/simulations/static_tov/output-0001/SIMFACTORY/PreparedSubmitScript\nmkdir: cannot create directory `/lustre/scratch/@USER@': Permission denied\nExecuting submit command: /opt/torque/2.3.5/bin/qsub /lustre/scratch/eschnett/simulations/static_tov/output-0001/SIMFACTORY/PreparedSubmitScript\nSubmit finished, job id is 788131.nid00016\n",
"reporter": "eschnett",
"cc": "",
"resolution": "fixed",
"_ts": "1287591267709359",
"component": "SimFactory",
"summary": "Superfluous error message",
"priority": "minor",
"keywords": "",
"version": "",
"time": "2010-10-19T02:40:57",
"milestone": "",
"owner": "mthomas",
"type": "defect"
}
```
|
1.0
|
Superfluous error message (Trac #54) - I submitted a job for a simulation which already existed. This led to the following output, where there is an error message about "mkdir". Even though there is an error, simfactory proceeds (it probably shouldn't). In this case, however, I assume that the error is benign because the directory existed before, and thus the error message should not be shown at all.
$ ./simfactory/sim submit par/static_tov.par --procs=24 --walltime=4:0:0 --num-threads=4
sim: manage and submit cactus jobs
defs: /nics/a/proj/cactus/eschnett/xt5/EinsteinToolkit-hg/simfactory/etc/defs.ini
defs.local: /nics/a/proj/cactus/eschnett/xt5/EinsteinToolkit-hg/simfactory/etc/defs.local.ini
Cactus Directory: /nics/a/proj/cactus/eschnett/xt5/EinsteinToolkit-hg
SimEnvironment.COMMAND: submit
Executing command: submit
Parfile: par/static_tov.par
[log] Assigned restart_id of: 0001
[log] Found the following restart_ids: [0]
[log] Maximum restart id determined to be: 0000
[log] Determined submit restart id: 1
writing to internalDir: /lustre/scratch/eschnett/simulations/static_tov/output-0001/SIMFACTORY
writing to: /lustre/scratch/eschnett/simulations/static_tov/output-0001/SIMFACTORY/PreparedSubmitScript
mkdir: cannot create directory `/lustre/scratch/@USER@': Permission denied
Executing submit command: /opt/torque/2.3.5/bin/qsub /lustre/scratch/eschnett/simulations/static_tov/output-0001/SIMFACTORY/PreparedSubmitScript
Submit finished, job id is 788131.nid00016
Migrated from https://trac.einsteintoolkit.org/ticket/54
```json
{
"status": "closed",
"changetime": "2010-10-20T16:14:27",
"description": "I submitted a job for a simulation which already existed. This lead to the following output, where there is an error message about \"mkdir\". Even though there is an error, simfactory proceeds (it probably shouldn't). In this case, however, I assume that the error is benign because the directory existed before, and thus the error message should not be shown at all.\n\n\n\n$ ./simfactory/sim submit par/static_tov.par --procs=24 --walltime=4:0:0 --num-threads=4\nsim: manage and submit cactus jobs\n\ndefs: /nics/a/proj/cactus/eschnett/xt5/EinsteinToolkit-hg/simfactory/etc/defs.ini\ndefs.local: /nics/a/proj/cactus/eschnett/xt5/EinsteinToolkit-hg/simfactory/etc/defs.local.ini\n\nCactus Directory: /nics/a/proj/cactus/eschnett/xt5/EinsteinToolkit-hg\nSimEnvironment.COMMAND: submit\nExecuting command: submit\nParfile: par/static_tov.par\n[log] Assigned restart_id of: 0001\n[log] Found the following restart_ids: [0]\n[log] Maximum restart id determined to be: 0000\n[log] Determined submit restart id: 1 \nwriting to internalDir: /lustre/scratch/eschnett/simulations/static_tov/output-0001/SIMFACTORY\nwriting to: /lustre/scratch/eschnett/simulations/static_tov/output-0001/SIMFACTORY/PreparedSubmitScript\nmkdir: cannot create directory `/lustre/scratch/@USER@': Permission denied\nExecuting submit command: /opt/torque/2.3.5/bin/qsub /lustre/scratch/eschnett/simulations/static_tov/output-0001/SIMFACTORY/PreparedSubmitScript\nSubmit finished, job id is 788131.nid00016\n",
"reporter": "eschnett",
"cc": "",
"resolution": "fixed",
"_ts": "1287591267709359",
"component": "SimFactory",
"summary": "Superfluous error message",
"priority": "minor",
"keywords": "",
"version": "",
"time": "2010-10-19T02:40:57",
"milestone": "",
"owner": "mthomas",
"type": "defect"
}
```
|
non_process
|
superfluous error message trac i submitted a job for a simulation which already existed this lead to the following output where there is an error message about mkdir even though there is an error simfactory proceeds it probably shouldn t in this case however i assume that the error is benign because the directory existed before and thus the error message should not be shown at all simfactory sim submit par static tov par procs walltime num threads sim manage and submit cactus jobs defs nics a proj cactus eschnett einsteintoolkit hg simfactory etc defs ini defs local nics a proj cactus eschnett einsteintoolkit hg simfactory etc defs local ini cactus directory nics a proj cactus eschnett einsteintoolkit hg simenvironment command submit executing command submit parfile par static tov par assigned restart id of found the following restart ids maximum restart id determined to be determined submit restart id writing to internaldir lustre scratch eschnett simulations static tov output simfactory writing to lustre scratch eschnett simulations static tov output simfactory preparedsubmitscript mkdir cannot create directory lustre scratch user permission denied executing submit command opt torque bin qsub lustre scratch eschnett simulations static tov output simfactory preparedsubmitscript submit finished job id is migrated from json status closed changetime description i submitted a job for a simulation which already existed this lead to the following output where there is an error message about mkdir even though there is an error simfactory proceeds it probably shouldn t in this case however i assume that the error is benign because the directory existed before and thus the error message should not be shown at all n n n n simfactory sim submit par static tov par procs walltime num threads nsim manage and submit cactus jobs n ndefs nics a proj cactus eschnett einsteintoolkit hg simfactory etc defs ini ndefs local nics a proj cactus eschnett einsteintoolkit hg simfactory etc defs local ini n ncactus directory nics a proj cactus eschnett einsteintoolkit hg nsimenvironment command submit nexecuting command submit nparfile par static tov par n assigned restart id of n found the following restart ids n maximum restart id determined to be n determined submit restart id nwriting to internaldir lustre scratch eschnett simulations static tov output simfactory nwriting to lustre scratch eschnett simulations static tov output simfactory preparedsubmitscript nmkdir cannot create directory lustre scratch user permission denied nexecuting submit command opt torque bin qsub lustre scratch eschnett simulations static tov output simfactory preparedsubmitscript nsubmit finished job id is n reporter eschnett cc resolution fixed ts component simfactory summary superfluous error message priority minor keywords version time milestone owner mthomas type defect
| 0
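On the ticket's point that the error is benign when the directory already exists, a small Rust sketch of idempotent directory creation (the path below is hypothetical); unlike a raw `mkdir`, `std::fs::create_dir_all` succeeds silently on an existing directory, so no spurious message is printed:
```rust
// Idempotent directory creation: succeeds when the path already
// exists, unlike a raw `mkdir`. The path is purely illustrative.
use std::fs;

fn main() -> std::io::Result<()> {
    let dir = "/tmp/simfactory-demo/output-0001/SIMFACTORY";
    fs::create_dir_all(dir)?; // creates the whole chain
    fs::create_dir_all(dir)?; // second call is a silent no-op
    println!("prepared {dir}");
    Ok(())
}
```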
|
297,801
| 25,765,082,882
|
IssuesEvent
|
2022-12-09 00:48:18
|
Myrfion/LinkFree
|
https://api.github.com/repos/Myrfion/LinkFree
|
opened
|
New Testimonial
|
testimonial
|
### Name
test-name
### Title
Some title
### Description
asdas dasd asd asd
|
1.0
|
New Testimonial - ### Name
test-name
### Title
Some title
### Description
asdas dasd asd asd
|
non_process
|
new testimonial name test name title some title description asdas dasd asd asd
| 0
|
269,780
| 23,465,138,074
|
IssuesEvent
|
2022-08-16 16:04:54
|
intelowlproject/IntelOwl
|
https://api.github.com/repos/intelowlproject/IntelOwl
|
closed
|
[React New Frontend] provide navigable JSON result for each analyzer
|
enhance/testing
|
In the IntelOwl v3 version we have this navigable component that allows us to easily surf the JSON result of an analyzer

In IntelOwl v4, by default, we get all the JSON data at the same time. While this can be helpful, most of the time it is not.
Example:

My idea is to replace that new component with something similar to the previous one available in the Angular GUI
|
1.0
|
[React New Frontend] provide navigable JSON result for each analyzer - In the IntelOwl v3 version we have this navigable component that allows us to easily surf the JSON result of an analyzer

In IntelOwl v4, by default, we get all the JSON data at the same time. While this can be helpful, most of the time it is not.
Example:

My idea is to replace that new component with something similar to the previous one available in the Angular GUI
|
non_process
|
provide navigable json result for each analyzer in the intelowl version we have this navigable component that allows us to easily surf the json result of an analyzers in intelowl by default we get all the json data at the same time while this can be helpful most of the time is not example my idea is to replace that new component with something similar to the previous one available in the angular gui
| 0
|
98,616
| 30,018,165,745
|
IssuesEvent
|
2023-06-26 20:29:19
|
dotnet/runtime
|
https://api.github.com/repos/dotnet/runtime
|
closed
|
Wrong computed RID on Alpine Linux alpha / beta releases
|
source-build area-Host
|
### Description
Runtime does not support the RID format for alpha/beta releases of Alpine Linux, which use a `VERSION_ID` of the form `x.xx_alphayyyymmdd` (i.e. `alpine.3.17_alpha20220715-x64`). During build, even when the RID is added to the JSON RID graph using source-build's `_IsBootstrapping=true` flag and the `eng/native/init-distro-rid.sh` logic is modified not to cut what's after `alpine.3`, the build fails with `MSB4018: System.FormatException: Input string was not in a correct format.` in `src/libraries/Microsoft.NETCore.Platforms/src/Microsoft.NETCore.Platforms.csproj(55,5)`
### Reproduction Steps
* Run `abuild -r` using aport as available [here](https://gitlab.alpinelinux.org/ayakael/aports/-/tree/community/dotnet6-fix-dev-rid/community/dotnet6-stage0) in current Alpine Edge environment
### Expected behavior
Either `GenerateRuntimeGraph` should be able to cope with that `VERSION_ID` format, as it is within spec of `/etc/os-release`, or `[System.Runtime.InteropServices.RuntimeInformation]::RuntimeIdentifier` should cut whatever's after `_` as irrelevant information, just as it does with subversions (i.e. 3.17.1 becomes 3.17; likewise 3.17_alpha20220715 should become 3.17).
### Actual behavior
Computed RID through `eng/native/init-distro-rid` is `alpine.3-x64` and through `[System.Runtime.InteropServices.RuntimeInformation]::RuntimeIdentifier` it is `alpine.3.17_alpha20220715-x64`
### Regression?
_No response_
### Known Workarounds
_No response_
### Configuration
Note that this is done within a source-build environment with the `6.0.107` release on an Alpine Linux LXC container. Many patches are also applied by Alpine's build system. Patches relevant to runtime are prefixed by `runtime_` in the aport [here](https://gitlab.alpinelinux.org/ayakael/aports/-/tree/community/dotnet6-fix-dev-rid/community/dotnet6-stage0)
### Other information
Full error is as follows:
```
/var/build/dotnet6-fix-rid/community/dotnet6-stage0/src/source-build-tarball-6.0.107/src/runtime/artifacts/source-build/self/src/src/libraries/Microsoft.NETCore.Platforms/src/Microsoft.NETCore.Platforms.csproj(55,5): error MSB4018: The "GenerateRuntimeGraph" task failed unexpectedly.
/var/build/dotnet6-fix-rid/community/dotnet6-stage0/src/source-build-tarball-6.0.107/src/runtime/artifacts/source-build/self/src/src/libraries/Microsoft.NETCore.Platforms/src/Microsoft.NETCore.Platforms.csproj(55,5): error MSB4018: System.FormatException: Input string was not in a correct format.
/var/build/dotnet6-fix-rid/community/dotnet6-stage0/src/source-build-tarball-6.0.107/src/runtime/artifacts/source-build/self/src/src/libraries/Microsoft.NETCore.Platforms/src/Microsoft.NETCore.Platforms.csproj(55,5): error MSB4018: at System.Number.ThrowOverflowOrFormatException(ParsingStatus status, TypeCode type)
/var/build/dotnet6-fix-rid/community/dotnet6-stage0/src/source-build-tarball-6.0.107/src/runtime/artifacts/source-build/self/src/src/libraries/Microsoft.NETCore.Platforms/src/Microsoft.NETCore.Platforms.csproj(55,5): error MSB4018: at System.Version.TryParseComponent(ReadOnlySpan`1 component, String componentName, Boolean throwOnFailure, Int32& parsedComponent)
/var/build/dotnet6-fix-rid/community/dotnet6-stage0/src/source-build-tarball-6.0.107/src/runtime/artifacts/source-build/self/src/src/libraries/Microsoft.NETCore.Platforms/src/Microsoft.NETCore.Platforms.csproj(55,5): error MSB4018: at System.Version.ParseVersion(ReadOnlySpan`1 input, Boolean throwOnFailure)
/var/build/dotnet6-fix-rid/community/dotnet6-stage0/src/source-build-tarball-6.0.107/src/runtime/artifacts/source-build/self/src/src/libraries/Microsoft.NETCore.Platforms/src/Microsoft.NETCore.Platforms.csproj(55,5): error MSB4018: at System.Version.Parse(String input)
/var/build/dotnet6-fix-rid/community/dotnet6-stage0/src/source-build-tarball-6.0.107/src/runtime/artifacts/source-build/self/src/src/libraries/Microsoft.NETCore.Platforms/src/Microsoft.NETCore.Platforms.csproj(55,5): error MSB4018: at Microsoft.NETCore.Platforms.BuildTasks.RuntimeVersion..ctor(String versionString) in /var/build/dotnet6-fix-rid/community/dotnet6-stage0/src/source-build-tarball-6.0.107/src/runtime/artifacts/source-build/self/src/src/libraries/Microsoft.NETCore.Platforms/src/RuntimeVersion.cs:line 37
/var/build/dotnet6-fix-rid/community/dotnet6-stage0/src/source-build-tarball-6.0.107/src/runtime/artifacts/source-build/self/src/src/libraries/Microsoft.NETCore.Platforms/src/Microsoft.NETCore.Platforms.csproj(55,5): error MSB4018: at Microsoft.NETCore.Platforms.BuildTasks.RID.Parse(String runtimeIdentifier) in /var/build/dotnet6-fix-rid/community/dotnet6-stage0/src/source-build-tarball-6.0.107/src/runtime/artifacts/source-build/self/src/src/libraries/Microsoft.NETCore.Platforms/src/RID.cs:line 152
/var/build/dotnet6-fix-rid/community/dotnet6-stage0/src/source-build-tarball-6.0.107/src/runtime/artifacts/source-build/self/src/src/libraries/Microsoft.NETCore.Platforms/src/Microsoft.NETCore.Platforms.csproj(55,5): error MSB4018: at Microsoft.NETCore.Platforms.BuildTasks.GenerateRuntimeGraph.AddRuntimeIdentifiers(ICollection`1 runtimeGroups) in /var/build/dotnet6-fix-rid/community/dotnet6-stage0/src/source-build-tarball-6.0.107/src/runtime/artifacts/source-build/self/src/src/libraries/Microsoft.NETCore.Platforms/src/GenerateRuntimeGraph.cs:line 325
/var/build/dotnet6-fix-rid/community/dotnet6-stage0/src/source-build-tarball-6.0.107/src/runtime/artifacts/source-build/self/src/src/libraries/Microsoft.NETCore.Platforms/src/Microsoft.NETCore.Platforms.csproj(55,5): error MSB4018: at Microsoft.NETCore.Platforms.BuildTasks.GenerateRuntimeGraph.Execute() in /var/build/dotnet6-fix-rid/community/dotnet6-stage0/src/source-build-tarball-6.0.107/src/runtime/artifacts/source-build/self/src/src/libraries/Microsoft.NETCore.Platforms/src/GenerateRuntimeGraph.cs:line 157
/var/build/dotnet6-fix-rid/community/dotnet6-stage0/src/source-build-tarball-6.0.107/src/runtime/artifacts/source-build/self/src/src/libraries/Microsoft.NETCore.Platforms/src/Microsoft.NETCore.Platforms.csproj(55,5): error MSB4018: at Microsoft.Build.BackEnd.TaskExecutionHost.Microsoft.Build.BackEnd.ITaskExecutionHost.Execute()
/var/build/dotnet6-fix-rid/community/dotnet6-stage0/src/source-build-tarball-6.0.107/src/runtime/artifacts/source-build/self/src/src/libraries/Microsoft.NETCore.Platforms/src/Microsoft.NETCore.Platforms.csproj(55,5): error MSB4018: at Microsoft.Build.BackEnd.TaskBuilder.ExecuteInstantiatedTask(ITaskExecutionHost taskExecutionHost, TaskLoggingContext taskLoggingContext, TaskHost taskHost, ItemBucket bucket, TaskExecutionMode howToExecuteTask)
478 Warning(s)
1 Error(s)
Time Elapsed 00:10:20.07
Build failed with exit code 1. Check errors above.
/var/build/dotnet6-fix-rid/community/dotnet6-stage0/src/source-build-tarball-6.0.107/Tools/source-built/Microsoft.DotNet.Arcade.Sdk/tools/SourceBuild/SourceBuildArcadeBuild.targets(194,5): error MSB3073: The command "./build.sh --arch x64 --configuration Release --allconfigurations --verbosity minimal --nodereuse false --warnAsError false /p:MicrosoftNetFrameworkReferenceAssembliesVersion=1.0.2 /p:PackageRid=alpine.3.17_alpha20220715-x64 /p:NoPgoOptimize=true /p:KeepNativeSymbols=true /p:RuntimeOS=alpine.3.17_alpha20220715 /p:PortableBuild=false /p:BuildDebPackage=false --cmakeargs -DCLR_CMAKE_USE_SYSTEM_LIBUNWIND=TRUE /p:ArcadeInnerBuildFromSource=true /p:DotNetBuildFromSource=true /p:RepoRoot=/var/build/dotnet6-fix-rid/community/dotnet6-stage0/src/source-build-tarball-6.0.107/src/runtime/artifacts/source-build/self/src/ /p:ArtifactsDir=/var/build/dotnet6-fix-rid/community/dotnet6-stage0/src/source-build-tarball-6.0.107/src/runtime/artifacts/source-build/self/src/artifacts/ /bl:/var/build/dotnet6-fix-rid/community/dotnet6-stage0/src/source-build-tarball-6.0.107/src/runtime/artifacts/source-build/self/src/artifacts/sourcebuild.binlog /p:ContinuousIntegrationBuild=true /p:SourceBuildOutputDir=/var/build/dotnet6-fix-rid/community/dotnet6-stage0/src/source-build-tarball-6.0.107/src/runtime/artifacts/source-build/ /p:SourceBuiltBlobFeedDir= /p:EnableSourceControlManagerQueries=false /p:EnableSourceLink=false /p:DeterministicSourcePaths=false /p:DotNetBuildOffline=true /p:DotNetPackageVersionPropsPath=/var/build/dotnet6-fix-rid/community/dotnet6-stage0/src/source-build-tarball-6.0.107/artifacts/obj/x64/Release/PackageVersions.props" exited with code 1. [/var/build/dotnet6-fix-rid/community/dotnet6-stage0/src/source-build-tarball-6.0.107/Tools/source-built/Microsoft.DotNet.Arcade.Sdk/tools/Build.proj]
```
Full log of build [here](https://gitlab.alpinelinux.org/ayakael/aports/-/jobs/793921/raw)
|
1.0
|
Wrong computed RID on Alpine Linux alpha / beta releases - ### Description
Runtime does not support the RID format for alpha/beta releases of Alpine Linux, which use a `VERSION_ID` of the form `x.xx_alphayyyymmdd` (i.e. `alpine.3.17_alpha20220715-x64`). During build, even when the RID is added to the JSON RID graph using source-build's `_IsBootstrapping=true` flag and the `eng/native/init-distro-rid.sh` logic is modified not to cut what's after `alpine.3`, the build fails with `MSB4018: System.FormatException: Input string was not in a correct format.` in `src/libraries/Microsoft.NETCore.Platforms/src/Microsoft.NETCore.Platforms.csproj(55,5)`
### Reproduction Steps
* Run `abuild -r` using aport as available [here](https://gitlab.alpinelinux.org/ayakael/aports/-/tree/community/dotnet6-fix-dev-rid/community/dotnet6-stage0) in current Alpine Edge environment
### Expected behavior
Either `GenerateRuntimeGraph` should be able to cope with that `VERSION_ID` format, as it is within spec of `/etc/os-release`, or `[System.Runtime.InteropServices.RuntimeInformation]::RuntimeIdentifier` should cut whatever's after `_` as irrelevant information, just as it does with subversions (i.e. 3.17.1 becomes 3.17; likewise 3.17_alpha20220715 should become 3.17).
### Actual behavior
Computed RID through `eng/native/init-distro-rid` is `alpine.3-x64` and through `[System.Runtime.InteropServices.RuntimeInformation]::RuntimeIdentifier` it is `alpine.3.17_alpha20220715-x64`
### Regression?
_No response_
### Known Workarounds
_No response_
### Configuration
Note that this is done within a source-build environment with the `6.0.107` release on an Alpine Linux LXC container. Many patches are also applied by Alpine's build system. Patches relevant to runtime are prefixed by `runtime_` in the aport [here](https://gitlab.alpinelinux.org/ayakael/aports/-/tree/community/dotnet6-fix-dev-rid/community/dotnet6-stage0)
### Other information
Full error is as follows:
```
/var/build/dotnet6-fix-rid/community/dotnet6-stage0/src/source-build-tarball-6.0.107/src/runtime/artifacts/source-build/self/src/src/libraries/Microsoft.NETCore.Platforms/src/Microsoft.NETCore.Platforms.csproj(55,5): error MSB4018: The "GenerateRuntimeGraph" task failed unexpectedly.
/var/build/dotnet6-fix-rid/community/dotnet6-stage0/src/source-build-tarball-6.0.107/src/runtime/artifacts/source-build/self/src/src/libraries/Microsoft.NETCore.Platforms/src/Microsoft.NETCore.Platforms.csproj(55,5): error MSB4018: System.FormatException: Input string was not in a correct format.
/var/build/dotnet6-fix-rid/community/dotnet6-stage0/src/source-build-tarball-6.0.107/src/runtime/artifacts/source-build/self/src/src/libraries/Microsoft.NETCore.Platforms/src/Microsoft.NETCore.Platforms.csproj(55,5): error MSB4018: at System.Number.ThrowOverflowOrFormatException(ParsingStatus status, TypeCode type)
/var/build/dotnet6-fix-rid/community/dotnet6-stage0/src/source-build-tarball-6.0.107/src/runtime/artifacts/source-build/self/src/src/libraries/Microsoft.NETCore.Platforms/src/Microsoft.NETCore.Platforms.csproj(55,5): error MSB4018: at System.Version.TryParseComponent(ReadOnlySpan`1 component, String componentName, Boolean throwOnFailure, Int32& parsedComponent)
/var/build/dotnet6-fix-rid/community/dotnet6-stage0/src/source-build-tarball-6.0.107/src/runtime/artifacts/source-build/self/src/src/libraries/Microsoft.NETCore.Platforms/src/Microsoft.NETCore.Platforms.csproj(55,5): error MSB4018: at System.Version.ParseVersion(ReadOnlySpan`1 input, Boolean throwOnFailure)
/var/build/dotnet6-fix-rid/community/dotnet6-stage0/src/source-build-tarball-6.0.107/src/runtime/artifacts/source-build/self/src/src/libraries/Microsoft.NETCore.Platforms/src/Microsoft.NETCore.Platforms.csproj(55,5): error MSB4018: at System.Version.Parse(String input)
/var/build/dotnet6-fix-rid/community/dotnet6-stage0/src/source-build-tarball-6.0.107/src/runtime/artifacts/source-build/self/src/src/libraries/Microsoft.NETCore.Platforms/src/Microsoft.NETCore.Platforms.csproj(55,5): error MSB4018: at Microsoft.NETCore.Platforms.BuildTasks.RuntimeVersion..ctor(String versionString) in /var/build/dotnet6-fix-rid/community/dotnet6-stage0/src/source-build-tarball-6.0.107/src/runtime/artifacts/source-build/self/src/src/libraries/Microsoft.NETCore.Platforms/src/RuntimeVersion.cs:line 37
/var/build/dotnet6-fix-rid/community/dotnet6-stage0/src/source-build-tarball-6.0.107/src/runtime/artifacts/source-build/self/src/src/libraries/Microsoft.NETCore.Platforms/src/Microsoft.NETCore.Platforms.csproj(55,5): error MSB4018: at Microsoft.NETCore.Platforms.BuildTasks.RID.Parse(String runtimeIdentifier) in /var/build/dotnet6-fix-rid/community/dotnet6-stage0/src/source-build-tarball-6.0.107/src/runtime/artifacts/source-build/self/src/src/libraries/Microsoft.NETCore.Platforms/src/RID.cs:line 152
/var/build/dotnet6-fix-rid/community/dotnet6-stage0/src/source-build-tarball-6.0.107/src/runtime/artifacts/source-build/self/src/src/libraries/Microsoft.NETCore.Platforms/src/Microsoft.NETCore.Platforms.csproj(55,5): error MSB4018: at Microsoft.NETCore.Platforms.BuildTasks.GenerateRuntimeGraph.AddRuntimeIdentifiers(ICollection`1 runtimeGroups) in /var/build/dotnet6-fix-rid/community/dotnet6-stage0/src/source-build-tarball-6.0.107/src/runtime/artifacts/source-build/self/src/src/libraries/Microsoft.NETCore.Platforms/src/GenerateRuntimeGraph.cs:line 325
/var/build/dotnet6-fix-rid/community/dotnet6-stage0/src/source-build-tarball-6.0.107/src/runtime/artifacts/source-build/self/src/src/libraries/Microsoft.NETCore.Platforms/src/Microsoft.NETCore.Platforms.csproj(55,5): error MSB4018: at Microsoft.NETCore.Platforms.BuildTasks.GenerateRuntimeGraph.Execute() in /var/build/dotnet6-fix-rid/community/dotnet6-stage0/src/source-build-tarball-6.0.107/src/runtime/artifacts/source-build/self/src/src/libraries/Microsoft.NETCore.Platforms/src/GenerateRuntimeGraph.cs:line 157
/var/build/dotnet6-fix-rid/community/dotnet6-stage0/src/source-build-tarball-6.0.107/src/runtime/artifacts/source-build/self/src/src/libraries/Microsoft.NETCore.Platforms/src/Microsoft.NETCore.Platforms.csproj(55,5): error MSB4018: at Microsoft.Build.BackEnd.TaskExecutionHost.Microsoft.Build.BackEnd.ITaskExecutionHost.Execute()
/var/build/dotnet6-fix-rid/community/dotnet6-stage0/src/source-build-tarball-6.0.107/src/runtime/artifacts/source-build/self/src/src/libraries/Microsoft.NETCore.Platforms/src/Microsoft.NETCore.Platforms.csproj(55,5): error MSB4018: at Microsoft.Build.BackEnd.TaskBuilder.ExecuteInstantiatedTask(ITaskExecutionHost taskExecutionHost, TaskLoggingContext taskLoggingContext, TaskHost taskHost, ItemBucket bucket, TaskExecutionMode howToExecuteTask)
478 Warning(s)
1 Error(s)
Time Elapsed 00:10:20.07
Build failed with exit code 1. Check errors above.
/var/build/dotnet6-fix-rid/community/dotnet6-stage0/src/source-build-tarball-6.0.107/Tools/source-built/Microsoft.DotNet.Arcade.Sdk/tools/SourceBuild/SourceBuildArcadeBuild.targets(194,5): error MSB3073: The command "./build.sh --arch x64 --configuration Release --allconfigurations --verbosity minimal --nodereuse false --warnAsError false /p:MicrosoftNetFrameworkReferenceAssembliesVersion=1.0.2 /p:PackageRid=alpine.3.17_alpha20220715-x64 /p:NoPgoOptimize=true /p:KeepNativeSymbols=true /p:RuntimeOS=alpine.3.17_alpha20220715 /p:PortableBuild=false /p:BuildDebPackage=false --cmakeargs -DCLR_CMAKE_USE_SYSTEM_LIBUNWIND=TRUE /p:ArcadeInnerBuildFromSource=true /p:DotNetBuildFromSource=true /p:RepoRoot=/var/build/dotnet6-fix-rid/community/dotnet6-stage0/src/source-build-tarball-6.0.107/src/runtime/artifacts/source-build/self/src/ /p:ArtifactsDir=/var/build/dotnet6-fix-rid/community/dotnet6-stage0/src/source-build-tarball-6.0.107/src/runtime/artifacts/source-build/self/src/artifacts/ /bl:/var/build/dotnet6-fix-rid/community/dotnet6-stage0/src/source-build-tarball-6.0.107/src/runtime/artifacts/source-build/self/src/artifacts/sourcebuild.binlog /p:ContinuousIntegrationBuild=true /p:SourceBuildOutputDir=/var/build/dotnet6-fix-rid/community/dotnet6-stage0/src/source-build-tarball-6.0.107/src/runtime/artifacts/source-build/ /p:SourceBuiltBlobFeedDir= /p:EnableSourceControlManagerQueries=false /p:EnableSourceLink=false /p:DeterministicSourcePaths=false /p:DotNetBuildOffline=true /p:DotNetPackageVersionPropsPath=/var/build/dotnet6-fix-rid/community/dotnet6-stage0/src/source-build-tarball-6.0.107/artifacts/obj/x64/Release/PackageVersions.props" exited with code 1. [/var/build/dotnet6-fix-rid/community/dotnet6-stage0/src/source-build-tarball-6.0.107/Tools/source-built/Microsoft.DotNet.Arcade.Sdk/tools/Build.proj]
```
Full log of build [here](https://gitlab.alpinelinux.org/ayakael/aports/-/jobs/793921/raw)
|
non_process
|
wrong computed rid on alpine linux alpha beta releases description runtime does not support rid format for alpha beta releases of alpine linux which uses a version id format which goes as such x xx alphayyyymmdd i e alpine during build even when rid is added to json rid graph using source build s isbootstrapping true flag and eng native init distro rid sh logics are modified to not cut what s after alpine fails with system formatexception input string was not in a correct format in src libraries microsoft netcore platforms src microsoft netcore platforms csproj reproduction steps run abuild r using aport as available in current alpine edge environment expected behavior either generateruntimegraph should be able to cope with that version id format as it is within spec of etc os release or runtimeidentifier should cut whatever s after as irrelevant information just as it does with subversions i e becomes thus should become actual behavior computed rid through eng native init distro rid is alpine and through runtimeidentifier it is alpine regression no response known workarounds no response configuration note that this is done within source build environment with release on an alpine linux lxc container many patches are also applied by alpine s build system patches relevant to runtime are prefixed by runtime in aport other information full error is as follows var build fix rid community src source build tarball src runtime artifacts source build self src src libraries microsoft netcore platforms src microsoft netcore platforms csproj error the generateruntimegraph task failed unexpectedly var build fix rid community src source build tarball src runtime artifacts source build self src src libraries microsoft netcore platforms src microsoft netcore platforms csproj error system formatexception input string was not in a correct format var build fix rid community src source build tarball src runtime artifacts source build self src src libraries microsoft netcore platforms src microsoft netcore platforms csproj error at system number throwoverfloworformatexception parsingstatus status typecode type var build fix rid community src source build tarball src runtime artifacts source build self src src libraries microsoft netcore platforms src microsoft netcore platforms csproj error at system version tryparsecomponent readonlyspan component string componentname boolean throwonfailure parsedcomponent var build fix rid community src source build tarball src runtime artifacts source build self src src libraries microsoft netcore platforms src microsoft netcore platforms csproj error at system version parseversion readonlyspan input boolean throwonfailure var build fix rid community src source build tarball src runtime artifacts source build self src src libraries microsoft netcore platforms src microsoft netcore platforms csproj error at system version parse string input var build fix rid community src source build tarball src runtime artifacts source build self src src libraries microsoft netcore platforms src microsoft netcore platforms csproj error at microsoft netcore platforms buildtasks runtimeversion ctor string versionstring in var build fix rid community src source build tarball src runtime artifacts source build self src src libraries microsoft netcore platforms src runtimeversion cs line var build fix rid community src source build tarball src runtime artifacts source build self src src libraries microsoft netcore platforms src microsoft netcore platforms csproj error at microsoft netcore 
platforms buildtasks rid parse string runtimeidentifier in var build fix rid community src source build tarball src runtime artifacts source build self src src libraries microsoft netcore platforms src rid cs line var build fix rid community src source build tarball src runtime artifacts source build self src src libraries microsoft netcore platforms src microsoft netcore platforms csproj error at microsoft netcore platforms buildtasks generateruntimegraph addruntimeidentifiers icollection runtimegroups in var build fix rid community src source build tarball src runtime artifacts source build self src src libraries microsoft netcore platforms src generateruntimegraph cs line var build fix rid community src source build tarball src runtime artifacts source build self src src libraries microsoft netcore platforms src microsoft netcore platforms csproj error at microsoft netcore platforms buildtasks generateruntimegraph execute in var build fix rid community src source build tarball src runtime artifacts source build self src src libraries microsoft netcore platforms src generateruntimegraph cs line var build fix rid community src source build tarball src runtime artifacts source build self src src libraries microsoft netcore platforms src microsoft netcore platforms csproj error at microsoft build backend taskexecutionhost microsoft build backend itaskexecutionhost execute var build fix rid community src source build tarball src runtime artifacts source build self src src libraries microsoft netcore platforms src microsoft netcore platforms csproj error at microsoft build backend taskbuilder executeinstantiatedtask itaskexecutionhost taskexecutionhost taskloggingcontext taskloggingcontext taskhost taskhost itembucket bucket taskexecutionmode howtoexecutetask warning s error s time elapsed build failed with exit code check errors above var build fix rid community src source build tarball tools source built microsoft dotnet arcade sdk tools sourcebuild sourcebuildarcadebuild targets error the command build sh arch configuration release allconfigurations verbosity minimal nodereuse false warnaserror false p microsoftnetframeworkreferenceassembliesversion p packagerid alpine p nopgooptimize true p keepnativesymbols true p runtimeos alpine p portablebuild false p builddebpackage false cmakeargs dclr cmake use system libunwind true p arcadeinnerbuildfromsource true p dotnetbuildfromsource true p reporoot var build fix rid community src source build tarball src runtime artifacts source build self src p artifactsdir var build fix rid community src source build tarball src runtime artifacts source build self src artifacts bl var build fix rid community src source build tarball src runtime artifacts source build self src artifacts sourcebuild binlog p continuousintegrationbuild true p sourcebuildoutputdir var build fix rid community src source build tarball src runtime artifacts source build p sourcebuiltblobfeeddir p enablesourcecontrolmanagerqueries false p enablesourcelink false p deterministicsourcepaths false p dotnetbuildoffline true p dotnetpackageversionpropspath var build fix rid community src source build tarball artifacts obj release packageversions props exited with code full log of build
| 0
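To make the report's "cut whatever's after `_`" suggestion concrete, a hedged Rust sketch; the function is hypothetical and is not the runtime's actual RID computation:
```rust
// Normalize an /etc/os-release VERSION_ID such as "3.17_alpha20220715"
// to "3.17" before version parsing; plain versions pass through as-is.
fn normalize_version_id(raw: &str) -> &str {
    raw.split('_').next().unwrap_or(raw)
}

fn main() {
    assert_eq!(normalize_version_id("3.17_alpha20220715"), "3.17");
    assert_eq!(normalize_version_id("3.17.1"), "3.17.1");
    println!("normalization OK");
}
```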
|
791,573
| 27,868,295,043
|
IssuesEvent
|
2023-03-21 11:50:01
|
googleapis/google-api-go-client
|
https://api.github.com/repos/googleapis/google-api-go-client
|
opened
|
NodePool Get/Describe returns the desired number of node
|
type: question priority: p3
|
This question is regarding to the container service (GKE).
I am trying to get the `desired number of nodes of a nodepool` (especially when the nodepool is managed by a CA, cluster autoscaler). However, I tried both the go SDK and the gcloud CLI. Neither returns the desired number. I wonder if this is a way to achieve it. Thanks!
|
1.0
|
NodePool Get/Describe returns the desired number of node - This question regards the container service (GKE).
I am trying to get the `desired number of nodes of a nodepool` (especially when the nodepool is managed by a CA, cluster autoscaler). However, I tried both the Go SDK and the gcloud CLI, and neither returns the desired number. I wonder if there is a way to achieve it. Thanks!
|
non_process
|
nodepool get describe returns the desired number of node this question is regarding to the container service gke i am trying to get the desired number of nodes of a nodepool especially when the nodepool is managed by a ca cluster autoscaler however i tried both the go sdk and the gcloud cli neither returns the desired number i wonder if this is a way to achieve it thanks
| 0
|
80,851
| 15,589,009,582
|
IssuesEvent
|
2021-03-18 07:23:51
|
soumya132/pomscan
|
https://api.github.com/repos/soumya132/pomscan
|
closed
|
CVE-2016-0762 (Medium) detected in tomcat-embed-core-8.5.4.jar
|
security vulnerability
|
## CVE-2016-0762 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>tomcat-embed-core-8.5.4.jar</b></p></summary>
<p>Core Tomcat implementation</p>
<p>Library home page: <a href="http://tomcat.apache.org/">http://tomcat.apache.org/</a></p>
<p>Path to dependency file: pomscan/pom.xml</p>
<p>Path to vulnerable library: /home/wss-scanner/.m2/repository/org/apache/tomcat/embed/tomcat-embed-core/8.5.4/tomcat-embed-core-8.5.4.jar</p>
<p>
Dependency Hierarchy:
- spring-boot-starter-jersey-1.4.0.RELEASE.jar (Root Library)
- spring-boot-starter-tomcat-1.4.0.RELEASE.jar
- :x: **tomcat-embed-core-8.5.4.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/soumya132/pomscan/commit/861f87574eb468ca8fdec6e4b2ea25783804ec34">861f87574eb468ca8fdec6e4b2ea25783804ec34</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
The Realm implementations in Apache Tomcat versions 9.0.0.M1 to 9.0.0.M9, 8.5.0 to 8.5.4, 8.0.0.RC1 to 8.0.36, 7.0.0 to 7.0.70 and 6.0.0 to 6.0.45 did not process the supplied password if the supplied user name did not exist. This made a timing attack possible to determine valid user names. Note that the default configuration includes the LockOutRealm which makes exploitation of this vulnerability harder.
<p>Publish Date: 2017-08-10
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2016-0762>CVE-2016-0762</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.9</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: None
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://lists.apache.org/thread.html/1872f96bad43647832bdd84a408794cd06d9cbb557af63085ca10009@%3Cannounce.tomcat.apache.org%3E">https://lists.apache.org/thread.html/1872f96bad43647832bdd84a408794cd06d9cbb557af63085ca10009@%3Cannounce.tomcat.apache.org%3E</a></p>
<p>Release Date: 2017-08-10</p>
<p>Fix Resolution: 9.0.0.M10, 8.0.37,8.5.5, 7.0.72, 6.0.46</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
CVE-2016-0762 (Medium) detected in tomcat-embed-core-8.5.4.jar - ## CVE-2016-0762 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>tomcat-embed-core-8.5.4.jar</b></p></summary>
<p>Core Tomcat implementation</p>
<p>Library home page: <a href="http://tomcat.apache.org/">http://tomcat.apache.org/</a></p>
<p>Path to dependency file: pomscan/pom.xml</p>
<p>Path to vulnerable library: /home/wss-scanner/.m2/repository/org/apache/tomcat/embed/tomcat-embed-core/8.5.4/tomcat-embed-core-8.5.4.jar</p>
<p>
Dependency Hierarchy:
- spring-boot-starter-jersey-1.4.0.RELEASE.jar (Root Library)
- spring-boot-starter-tomcat-1.4.0.RELEASE.jar
- :x: **tomcat-embed-core-8.5.4.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/soumya132/pomscan/commit/861f87574eb468ca8fdec6e4b2ea25783804ec34">861f87574eb468ca8fdec6e4b2ea25783804ec34</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
The Realm implementations in Apache Tomcat versions 9.0.0.M1 to 9.0.0.M9, 8.5.0 to 8.5.4, 8.0.0.RC1 to 8.0.36, 7.0.0 to 7.0.70 and 6.0.0 to 6.0.45 did not process the supplied password if the supplied user name did not exist. This made a timing attack possible to determine valid user names. Note that the default configuration includes the LockOutRealm which makes exploitation of this vulnerability harder.
<p>Publish Date: 2017-08-10
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2016-0762>CVE-2016-0762</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.9</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: None
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://lists.apache.org/thread.html/1872f96bad43647832bdd84a408794cd06d9cbb557af63085ca10009@%3Cannounce.tomcat.apache.org%3E">https://lists.apache.org/thread.html/1872f96bad43647832bdd84a408794cd06d9cbb557af63085ca10009@%3Cannounce.tomcat.apache.org%3E</a></p>
<p>Release Date: 2017-08-10</p>
<p>Fix Resolution: 9.0.0.M10, 8.0.37,8.5.5, 7.0.72, 6.0.46</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_process
|
cve medium detected in tomcat embed core jar cve medium severity vulnerability vulnerable library tomcat embed core jar core tomcat implementation library home page a href path to dependency file pomscan pom xml path to vulnerable library home wss scanner repository org apache tomcat embed tomcat embed core tomcat embed core jar dependency hierarchy spring boot starter jersey release jar root library spring boot starter tomcat release jar x tomcat embed core jar vulnerable library found in head commit a href found in base branch master vulnerability details the realm implementations in apache tomcat versions to to to to and to did not process the supplied password if the supplied user name did not exist this made a timing attack possible to determine valid user names note that the default configuration includes the lockoutrealm which makes exploitation of this vulnerability harder publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity high privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact none availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with whitesource
| 0
|
3,824
| 6,802,323,099
|
IssuesEvent
|
2017-11-02 19:47:25
|
WikiWatershed/model-my-watershed
|
https://api.github.com/repos/WikiWatershed/model-my-watershed
|
closed
|
Hide user endpoints in swagger api
|
Geoprocessing API WPF
|
We had to remove the `namespace` that was excluding these endpoints in swagger (#2377). Figure out another way to exclude them.
Testing for this card should make sure ITSI sign-on still works.

|
1.0
|
Hide user endpoints in swagger api - We had to remove the `namespace` that was excluding these endpoints in swagger (#2377). Figure out another way to exclude them.
Testing for this card should make sure ITSI sign-on still works.

|
process
|
hide user endpoints in swagger api we had to remove the namespace that was having these endpoints excluded in swagger figure out another way to exclude them testing for this card should make sure itsi sign on still works
| 1
|