Unnamed: 0 int64 0 832k | id float64 2.49B 32.1B | type stringclasses 1 value | created_at stringlengths 19 19 | repo stringlengths 7 112 | repo_url stringlengths 36 141 | action stringclasses 3 values | title stringlengths 2 665 | labels stringlengths 4 554 | body stringlengths 3 235k | index stringclasses 6 values | text_combine stringlengths 96 235k | label stringclasses 2 values | text stringlengths 96 196k | binary_label int64 0 1 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
250,841 | 21,365,548,179 | IssuesEvent | 2022-04-20 00:54:38 | microsoft/vscode | https://api.github.com/repos/microsoft/vscode | opened | Test: Rename in markdown files | testplan-item | Refs #146291
- [ ] Mac
- [ ] Linux
- [ ] Windows
Complexity: 4
---
#146291 adds support for using rename (`F2`) inside markdown files.
Here's a quick review of the places where this can be triggered and the expected types of references that should be picked up
## `# Some Header`
On a header in an md file. Expected refs should include:
- `# Some Header` — The header itself
- `[text](#some-header)` — Link within the file to the header (the links are slugified)
- `[text]: #some-header` — Definition link within the file to the header (the links are slugified)
- `[text](./other.md#some-header)` — Links across files to the header
- `[text](./other#some-header)` — Links across files to the header. In this case the file does not have a file extension
- `[text](/path/to/other#some-header)` — Link from the workspace root to the file
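The slug mapping implied above (`# Some Header` → `#some-header`) can be sketched as below. This assumes simple lowercase/strip-punctuation/hyphenate rules and is not necessarily VS Code's exact slugifier:

```shell
# Rough sketch of header slugification (assumed rules: lowercase,
# drop punctuation, spaces become hyphens); not the exact algorithm.
slugify() {
  printf '%s' "$1" \
    | tr '[:upper:]' '[:lower:]' \
    | sed -e 's/[^a-z0-9 -]//g' -e 's/ /-/g'
}
slugify "Some Header"   # prints: some-header
```

Renaming a header then amounts to rewriting every link target whose fragment matches the old slug with the new one.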
## `[text](#header)`
On `#header` in a markdown link. Expected refs are the same as above
## `[text](./other.md#header)`
On `#header` in another markdown file referencing a header. Expected refs are the same as above
## `[text][ref]`
On a reference link (`ref`) in a markdown file. Expected refs include:
- `[text][ref]` — All uses of ref for links
- `[ref]` — Shorthand reference links
- `[ref]: https://example.com` — The link definition
Make sure reference links are not picked up across files for this case
## `<http://example.com>`
You should also be able to find references to http(s) URLs. Expected references include
- `[text](http://example.com)` — Links
- `<http://example.com>` — Auto links
- `[text]: http://example.com` — Definition links
---
For this test plan item, try using rename in markdown. See what works as expected and what doesn't
I've intentionally kept this issue open ended in order to collect feedback on the expected behavior of this feature. I know there's a lot to test here, so please give this item what you feel like is an appropriate amount of testing and don't go overboard | 1.0 | Test: Rename in markdown files - Refs #146291
- [ ] Mac
- [ ] Linux
- [ ] Windows
Complexity: 4
---
#146291 adds support for using rename (`F2`) inside markdown files.
Here's a quick review of the places where this can be triggered and the expected types of references that should be picked up
## `# Some Header`
On a header in an md file. Expected refs should include:
- `# Some Header` — The header itself
- `[text](#some-header)` — Link within the file to the header (the links are slugified)
- `[text]: #some-header` — Definition link within the file to the header (the links are slugified)
- `[text](./other.md#some-header)` — Links across files to the header
- `[text](./other#some-header)` — Links across files to the header. In this case the file does not have a file extension
- `[text](/path/to/other#some-header)` — Link from the workspace root to the file
## `[text](#header)`
On `#header` in a markdown link. Expected refs are the same as above
## `[text](./other.md#header)`
On `#header` in another markdown file referencing a header. Expected refs are the same as above
## `[text][ref]`
On a reference link (`ref`) in a markdown file. Expected refs include:
- `[text][ref]` — All uses of ref for links
- `[ref]` — Shorthand reference links
- `[ref]: https://example.com` — The link definition
Make sure reference links are not picked up across files for this case
## `<http://example.com>`
You should also be able to find references to http(s) URLs. Expected references include
- `[text](http://example.com)` — Links
- `<http://example.com>` — Auto links
- `[text]: http://example.com` — Definition links
---
For this test plan item, try using rename in markdown. See what works as expected and what doesn't
I've intentionally kept this issue open ended in order to collect feedback on the expected behavior of this feature. I know there's a lot to test here, so please give this item what you feel like is an appropriate amount of testing and don't go overboard | non_infrastructure | test rename in markdown files refs mac linux windows complexity adds support for using rename inside markdown files here s a quick review of the places where this can be triggered and the expected types of references that should be picked up some header on a header in a md file expected refs should include some header — the header itself some header — link within the file to the header the links are slugified some header — definition link within the file to the header the links are slugified other md some header — links across files to the header other some header — links across files to the header in this case the file does not have an file extension path to other some header — link from the workspace root to the file header on header in a markdown link expected refers are the same as above other md header on header in a another markdown file referencing a header expected refers are the same as above on a reference link ref in a markdown file expected refs include — all uses of ref for links — shorthand reference links — the link definition make sure reference links are not picked up across files for this case you should also be able to find references to http s expected references include — links — auto links — definition links for this test plan item try using rename in markdown see what works as expected and what doesn t i ve intentionally kept this issue open ended in order to collect feedback on the expected behavior of this feature i know there s a lot to test here so please give this item what you feel like is an appropriate amount of testing and don t go overboard | 0 |
683 | 2,850,220,822 | IssuesEvent | 2015-05-31 11:29:31 | KSP-CKAN/CKAN | https://api.github.com/repos/KSP-CKAN/CKAN | closed | The great repo merge. | in progress infrastructure | As per #807, I'm going to be merging the repos <s>today</s> soon. The plan is:
- [x] Merge any outstanding PRs that I can. PRs which aren't merge-ready won't block this process, as we'll be integrating the git trees to re-merge.
- [x] Merge the repos themselves. This is a "simple matter" of importing all the trees into the `CKAN` repo, repositioning files as required, and merging them. @RichardLake has done a proof of concept of this which appears to have worked. This will be done on a side branch.
- [x] Heal our development files: All the code in the repo should be available via a single solution.
- [x] Heal our build processes: Everything should be buildable (along with running tests) via a single command (likely `build.sh`, with a `make` target that is a thin alias).
- [x] Heal our CI testing. Jenkins will need adjusting.
- [x] Heal our release processes. We used to be able to make a release by pressing the 'release' button on github. I want this again. I want this so bad.
- [x] Heal our documentation. README and other files should represent the new changes.
- [x] Check what the auto-update notes are looking for. If they're looking for special formatting, we'll need to adjust their expectations.
- [x] Merge the tickets, by flinging them from the other repos into the CKAN repo.
- [x] Close the other repos for new ticket creation.
- [x] Make the merged branch the new `master` branch.
- [x] Triage and classify everything, using [Waffle](https://waffle.io/KSP-CKAN/CKAN). This will also heal much of our workflow.
In particular, I'm merging `CKAN-Core`, `CKAN-GUI`, `CKAN-Cmdline`, and `CKAN-NetKAN` into one.
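The tree-import step above is essentially git's classic subtree-merge recipe. A minimal sketch under that assumption (the remote name, branch, and prefix below are hypothetical, not necessarily the exact procedure used):

```shell
# Merge an external repo into a subdirectory of the current repo while
# keeping its history (subtree-merge recipe; all names are hypothetical).
merge_repo_into_subdir() {
  repo_path=$1
  remote=$2
  branch=$3
  prefix=$4                     # must end with a slash, e.g. Core/
  git remote add "$remote" "$repo_path"
  git fetch -q "$remote"
  git merge -s ours --no-commit --allow-unrelated-histories "$remote/$branch"
  git read-tree --prefix="$prefix" -u "$remote/$branch"
  git commit -q -m "Merge $remote into $prefix"
}
# e.g. merge_repo_into_subdir ../CKAN-Core core master Core/
```

Run once per incoming repo; later upstream changes can still be pulled with the `subtree` merge strategy.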
Optional:
- [x] Evaluate what we're doing with tickets in the support repo. If we need a better way of promoting these to dev when appropriate.
- [ ] Re-create the past releases in the repos. The post-split repos that contain code are currently *not* tagged. It'd be great if we could just `git checkout <version>` to get exactly what was released.
If I complete all of these tasks today, then I'll look into trialing better support systems.
This is being done as [part of my sprint](https://patreon.com/pjf0) today. | 1.0 | The great repo merge. - As per #807, I'm going to be merging the repos <s>today</s> soon. The plan is:
- [x] Merge any outstanding PRs that I can. PRs which aren't merge-ready won't block this process, as we'll be integrating the git trees to re-merge.
- [x] Merge the repos themselves. This is a "simple matter" of importing all the trees into the `CKAN` repo, repositioning files as required, and merging them. @RichardLake has done a proof of concept of this which appears to have worked. This will be done on a side branch.
- [x] Heal our development files: All the code in the repo should be available via a single solution.
- [x] Heal our build processes: Everything should be buildable (along with running tests) via a single command (likely `build.sh`, with a `make` target that is a thin alias).
- [x] Heal our CI testing. Jenkins will need adjusting.
- [x] Heal our release processes. We used to be able to make a release by pressing the 'release' button on github. I want this again. I want this so bad.
- [x] Heal our documentation. README and other files should represent the new changes.
- [x] Check what the auto-update notes are looking for. If they're looking for special formatting, we'll need to adjust their expectations.
- [x] Merge the tickets, by flinging them from the other repos into the CKAN repo.
- [x] Close the other repos for new ticket creation.
- [x] Make the merged branch the new `master` branch.
- [x] Triage and classify everything, using [Waffle](https://waffle.io/KSP-CKAN/CKAN). This will also heal much of our workflow.
In particular, I'm merging `CKAN-Core`, `CKAN-GUI`, `CKAN-Cmdline`, and `CKAN-NetKAN` into one.
Optional:
- [x] Evaluate what we're doing with tickets in the support repo. If we need a better way of promoting these to dev when appropriate.
- [ ] Re-create the past releases in the repos. The post-split repos that contain code are currently *not* tagged. It'd be great if we could just `git checkout <version>` to get exactly what was released.
If I complete all of these tasks today, then I'll look into trialing better support systems.
This is being done as [part of my sprint](https://patreon.com/pjf0) today. | infrastructure | the great repo merge as per i m going to be merging the repos today soon the plan is merge any outstanding prs that i can prs which aren t merge ready won t block this process as we ll be integrating the git trees to re merge merge the repos themselves this is a simple matter of importing all the trees into the ckan repo repositioning files as requied and merging them richardlake as done a proof of concept of this which appears to have worked this will be done on a side branch heal our development files all the code in the repo should be available via a single solution heal our build processes everything should be buildable along with running tests via a single command likely build sh with a make target that is a thin alias heal our ci testing jenkins will need adjusting heal our release processes we used to be able to make a release by pressing the release button on github i want this again i want this so bad heal our documentation readme and other files should represent the new changes check what the auto update notes is looking for if it s looking for special formatting we ll need to adjust its expectations merge the tickets by flinging them from the other repos into the ckan repo close the other repos for new ticket creation make the merged branch the new master branch triage and classify everything using this will also heal much of our workflow in particular i m merging ckan core ckan gui ckan cmdline and ckan netkan into one optional evaluate what we re doing with tickets in the support repo if we need a better way of promoting these to dev when appropriate re create the past releases in the repos the post split repos that contain code are currently not tagged it d be great if we could just git checkout to get exactly what was released if i complete all of these tasks today then i ll look into trialing better support systems this is being done as today | 1 |
17,982 | 12,710,261,364 | IssuesEvent | 2020-06-23 13:38:30 | libero/reviewer | https://api.github.com/repos/libero/reviewer | closed | Update reviewer chart to match tidied up config | Infrastructure | Infrastructure work from #779
Todo
- [ ] Client newrelic and hotjar config as configmap mounted file served through nginx
- [x] Continuum adaptor: move all config to environment variables
- [x] Submission: move all config to environment variables (client config is going into client) | 1.0 | Update reviewer chart to match tidied up config - Infrastructure work from #779
Todo
- [ ] Client newrelic and hotjar config as configmap mounted file served through nginx
- [x] Continuum adaptor: move all config to environment variables
- [x] Submission: move all config to environment variables (client config is going into client) | infrastructure | update reviewer chart to match tidied up config infrastructure work from todo client newrelic and hotjar config as configmap mounted file served through nginx continuum adaptor move all config to environment variables submission move all config to environment variables client config is going into client | 1 |
464,967 | 13,348,981,466 | IssuesEvent | 2020-08-29 21:36:35 | kubernetes-sigs/krew | https://api.github.com/repos/kubernetes-sigs/krew | closed | Dry install snippets in scripts | help wanted kind/cleanup lifecycle/rotten priority/P3 | Under `hack` folder, there are scripts to ensure code quality.
In general, there is a script for validation to generate a diff between current code and expected, and there is another script to apply the diff.
Both of the scripts need to install the tool that does the heavy lifting. That's why the install snippet (e.g. the `install_shfmt` function in `hack/run-lint.sh` and `hack/format-scripts.sh`) is repeated.
Extract repeated snippet(s) into its own script and call it where relevant to reduce duplication.
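A minimal sketch of that pattern (the file names and the install body here are hypothetical; the point is sourcing one shared file from both scripts):

```shell
# Demo in a throwaway directory: the shared install function lives in one
# file, and each script sources it instead of redefining it.
cd "$(mktemp -d)"
mkdir hack
cat > hack/install-tools.sh <<'EOF'
install_shfmt() {
  command -v shfmt >/dev/null 2>&1 && return 0
  echo "installing shfmt..."   # the real install command would go here
}
EOF
cat > hack/run-lint.sh <<'EOF'
#!/bin/sh
. "$(dirname "$0")/install-tools.sh"
install_shfmt
echo "linting"
EOF
sh hack/run-lint.sh
```

`hack/format-scripts.sh` would source the same file, so the install logic exists in exactly one place.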
/kind cleanup | 1.0 | Dry install snippets in scripts - Under `hack` folder, there are scripts to ensure code quality.
In general, there is a script for validation to generate a diff between current code and expected, and there is another script to apply the diff.
Both of the scripts need to install the tool that does the heavy lifting. That's why the install snippet (e.g. the `install_shfmt` function in `hack/run-lint.sh` and `hack/format-scripts.sh`) is repeated.
Extract repeated snippet(s) into its own script and call it where relevant to reduce duplication.
/kind cleanup | non_infrastructure | dry install snippets in scripts under hack folder there are scripts to ensure code quality in general there is a script for validation to generate a diff between current code and expected and there is another script to apply the diff both of the scripts need to install the tool that does heavy lifting that s why install snippet ex install shfmt function in hack run lint sh and hack format scripts sh is repeated extract repeated snippet s into its own script and call it where relevant to reduce duplication kind cleanup | 0 |
207,514 | 23,451,017,538 | IssuesEvent | 2022-08-16 02:47:02 | postgres-ai/database-lab-engine | https://api.github.com/repos/postgres-ai/database-lab-engine | closed | CVE-2022-1996 (High) detected in github.com/containerd/containerd-v1.6.1 - autoclosed | security vulnerability | ## CVE-2022-1996 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>github.com/containerd/containerd-v1.6.1</b></p></summary>
<p>An open and reliable container runtime</p>
<p>
Dependency Hierarchy:
- github.com/moby/moby-v20.10.17 (Root Library)
- :x: **github.com/containerd/containerd-v1.6.1** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/postgres-ai/database-lab-engine/commit/b3ac62d12e3d43994ff7ad836e34da801ed665fb">b3ac62d12e3d43994ff7ad836e34da801ed665fb</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Authorization Bypass Through User-Controlled Key in GitHub repository emicklei/go-restful prior to v3.8.0.
<p>Publish Date: 2022-06-08
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-1996>CVE-2022-1996</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.1</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-1996">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-1996</a></p>
<p>Release Date: 2022-06-08</p>
<p>Fix Resolution: v3.8.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | CVE-2022-1996 (High) detected in github.com/containerd/containerd-v1.6.1 - autoclosed - ## CVE-2022-1996 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>github.com/containerd/containerd-v1.6.1</b></p></summary>
<p>An open and reliable container runtime</p>
<p>
Dependency Hierarchy:
- github.com/moby/moby-v20.10.17 (Root Library)
- :x: **github.com/containerd/containerd-v1.6.1** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/postgres-ai/database-lab-engine/commit/b3ac62d12e3d43994ff7ad836e34da801ed665fb">b3ac62d12e3d43994ff7ad836e34da801ed665fb</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Authorization Bypass Through User-Controlled Key in GitHub repository emicklei/go-restful prior to v3.8.0.
<p>Publish Date: 2022-06-08
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-1996>CVE-2022-1996</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.1</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-1996">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-1996</a></p>
<p>Release Date: 2022-06-08</p>
<p>Fix Resolution: v3.8.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | non_infrastructure | cve high detected in github com containerd containerd autoclosed cve high severity vulnerability vulnerable library github com containerd containerd an open and reliable container runtime dependency hierarchy github com moby moby root library x github com containerd containerd vulnerable library found in head commit a href found in base branch master vulnerability details authorization bypass through user controlled key in github repository emicklei go restful prior to publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with mend | 0 |
4,206 | 2,718,350,061 | IssuesEvent | 2015-04-12 07:00:05 | HwaYo/parrot | https://api.github.com/repos/HwaYo/parrot | closed | Consider position: fixed for the waveform | A-Design C-Proposal | Prerequisite: merge feature/best-design-ever (#39)
Show the waveform fixed below the navbar (consider a minimap, etc.) | 1.0 | Consider position: fixed for the waveform - Prerequisite: merge feature/best-design-ever (#39)
Show the waveform fixed below the navbar (consider a minimap, etc.) | non_infrastructure | 음파 position fixed 고려하기 prerequisite feature best design ever 머지하기 음파를 navbar 밑에 fix 시켜서 보여주기 minimap 등을 고려하자 | 0 |
397,100 | 27,148,842,597 | IssuesEvent | 2023-02-16 22:34:02 | conekta/conekta-php | https://api.github.com/repos/conekta/conekta-php | closed | Conekta Customer find | documentation_required | The `find` method does not work on customer
Package version 4.0.4
API version 2.0.0
`Conekta\Customer::find($this->getConektaId())`
```
El recurso no ha sido encontrado.
Conekta\\ResourceNotFoundError
``` | 1.0 | Conekta Customer find - The `find` method does not work on customer
Package version 4.0.4
API version 2.0.0
`Conekta\Customer::find($this->getConektaId())`
```
El recurso no ha sido encontrado.
Conekta\\ResourceNotFoundError
``` | non_infrastructure | conekta customer find el método find no funciona en customer version de paquete api version conekta customer find this getconektaid el recurso no ha sido encontrado conekta resourcenotfounderror | 0 |
66,649 | 14,791,025,738 | IssuesEvent | 2021-01-12 12:55:47 | Kijacode/mwengeSMS | https://api.github.com/repos/Kijacode/mwengeSMS | opened | CVE-2020-7788 (High) detected in ini-1.3.5.tgz | security vulnerability | ## CVE-2020-7788 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>ini-1.3.5.tgz</b></p></summary>
<p>An ini encoder/decoder for node</p>
<p>Library home page: <a href="https://registry.npmjs.org/ini/-/ini-1.3.5.tgz">https://registry.npmjs.org/ini/-/ini-1.3.5.tgz</a></p>
<p>Path to dependency file: mwengeSMS/package.json</p>
<p>Path to vulnerable library: mwengeSMS/node_modules/grpc/node_modules/ini/package.json</p>
<p>
Dependency Hierarchy:
- africastalking-0.4.5.tgz (Root Library)
- grpc-1.24.2.tgz
- node-pre-gyp-0.14.0.tgz
- rc-1.2.8.tgz
- :x: **ini-1.3.5.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/Kijacode/mwengeSMS/commit/51402e4003b52f720682d8da0a1ccee128508385">51402e4003b52f720682d8da0a1ccee128508385</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
This affects the package ini before 1.3.6. If an attacker submits a malicious INI file to an application that parses it with ini.parse, they will pollute the prototype on the application. This can be exploited further depending on the context.
<p>Publish Date: 2020-12-11
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-7788>CVE-2020-7788</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.3</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: Low
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-7788">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-7788</a></p>
<p>Release Date: 2020-12-11</p>
<p>Fix Resolution: v1.3.6</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | CVE-2020-7788 (High) detected in ini-1.3.5.tgz - ## CVE-2020-7788 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>ini-1.3.5.tgz</b></p></summary>
<p>An ini encoder/decoder for node</p>
<p>Library home page: <a href="https://registry.npmjs.org/ini/-/ini-1.3.5.tgz">https://registry.npmjs.org/ini/-/ini-1.3.5.tgz</a></p>
<p>Path to dependency file: mwengeSMS/package.json</p>
<p>Path to vulnerable library: mwengeSMS/node_modules/grpc/node_modules/ini/package.json</p>
<p>
Dependency Hierarchy:
- africastalking-0.4.5.tgz (Root Library)
- grpc-1.24.2.tgz
- node-pre-gyp-0.14.0.tgz
- rc-1.2.8.tgz
- :x: **ini-1.3.5.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/Kijacode/mwengeSMS/commit/51402e4003b52f720682d8da0a1ccee128508385">51402e4003b52f720682d8da0a1ccee128508385</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
This affects the package ini before 1.3.6. If an attacker submits a malicious INI file to an application that parses it with ini.parse, they will pollute the prototype on the application. This can be exploited further depending on the context.
<p>Publish Date: 2020-12-11
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-7788>CVE-2020-7788</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.3</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: Low
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-7788">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-7788</a></p>
<p>Release Date: 2020-12-11</p>
<p>Fix Resolution: v1.3.6</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | non_infrastructure | cve high detected in ini tgz cve high severity vulnerability vulnerable library ini tgz an ini encoder decoder for node library home page a href path to dependency file mwengesms package json path to vulnerable library mwengesms node modules grpc node modules ini package json dependency hierarchy africastalking tgz root library grpc tgz node pre gyp tgz rc tgz x ini tgz vulnerable library found in head commit a href found in base branch master vulnerability details this affects the package ini before if an attacker submits a malicious ini file to an application that parses it with ini parse they will pollute the prototype on the application this can be exploited further depending on the context publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact low integrity impact low availability impact low for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with whitesource | 0 |
30,724 | 25,016,573,525 | IssuesEvent | 2022-11-03 19:22:48 | dotnet/runtime | https://api.github.com/repos/dotnet/runtime | opened | [release/6.0] tvOS test queues failing a lot | area-Infrastructure-mono os-tvos test-failure | Many backport PRs for the 6.0 branch are getting lots of tvOS failures. See for example the branding PR results, which finished yesterday:
PR: https://github.com/dotnet/runtime/pull/77750
Results: https://dev.azure.com/dnceng-public/public/_build/results?buildId=71208&view=results
I am unsure what could be causing the problem. The test logs show the tests pass but the execution is reported as failed. | 1.0 | [release/6.0] tvOS test queues failing a lot - Many backport PRs for the 6.0 branch are getting lots of tvOS failures. See for example the branding PR results, which finished yesterday:
PR: https://github.com/dotnet/runtime/pull/77750
Results: https://dev.azure.com/dnceng-public/public/_build/results?buildId=71208&view=results
I am unsure what could be causing the problem. The test logs show the tests pass but the execution is reported as failed. | infrastructure | tvos test queues failing a lot many backport prs for the branch are getting lots of tvos failures see for example the branding pr results which finished yesterday pr results i am unsure what could be causing the problem the test logs show the tests pass but the execution is reported as failed | 1 |
16,113 | 11,841,095,195 | IssuesEvent | 2020-03-23 20:07:28 | liferay/clay | https://api.github.com/repos/liferay/clay | opened | Update to Prettier v2.0.0 | comp: infrastructure | copied from https://github.com/liferay/liferay-npm-tools/issues/418
> Low-ish priority on this one because there may be some bugs that need to get sorted out in the next few patch releases, and this could generate a fair bit of churn:
>
> https://prettier.io/blog/2020/03/21/2.0.0.html
>
> But still, something we'll want to do at some point in order to benefit from the bug fixes. | 1.0 | Update to Prettier v2.0.0 - copied from https://github.com/liferay/liferay-npm-tools/issues/418
> Low-ish priority on this one because there may be some bugs that need to get sorted out in the next few patch releases, and this could generate a fair bit of churn:
>
> https://prettier.io/blog/2020/03/21/2.0.0.html
>
> But still, something we'll want to do at some point in order to benefit from the bug fixes. | infrastructure | update to prettier copied from low ish priority on this one because there may be some bugs that need to get sorted out in the next few patch releases and this could generate a fair bit of churn but still something we ll want to do at some point in order to benefit from the bug fixes | 1 |
108,950 | 4,364,608,907 | IssuesEvent | 2016-08-03 07:37:53 | octobercms/october | https://api.github.com/repos/octobercms/october | closed | Error reporting in debug mode | Priority: Low Status: Review Needed Type: Unconfirmed Bug | This one is not urgent, but would really help a lot when developing plugins.
##### Expected behavior
When snippets throw exceptions (i.e. ModelNotFoundException) and debug mode is active (app.debug), a nice error message with stack trace should appear.
##### Actual behavior

##### Reproduce steps
Create a component and register it as a snippet. Throw an Exception in the ``onRun()`` method.
##### October build
353
| 1.0 | Error reporting in debug mode - This one is not urgent, but would really help a lot when developing plugins.
##### Expected behavior
When snippets throw exceptions (i.e. ModelNotFoundException) and debug mode is active (app.debug), a nice error message with stack trace should appear.
##### Actual behavior

##### Reproduce steps
Create a component and register it as a snippet. Throw an Exception in the ``onRun()`` method.
##### October build
353
| non_infrastructure | error reporting in debug mode this one is not urgent but would really help a lot when developing plugins expected behavior when snippets throw exceptions i e modelnotfoundexception and debug mode is active app debug a nice error message with stack trace should appear actual behavior reproduce steps create a component and register it as a snippet throw an exception in onrun method october build | 0 |
2,391 | 2,525,835,897 | IssuesEvent | 2015-01-21 06:32:56 | graybeal/ont | https://api.github.com/repos/graybeal/ont | opened | Vocabulary as an extension of one or more others | 1 star enhancement imported Priority-Medium voc2rdf | _From [caru...@gmail.com](https://code.google.com/u/113886747689301365533/) on November 19, 2008 08:20:17_
What capability do you want added or improved? The ability to define a vocabulary that is an extension of another vocabulary. Where do you want this capability to be accessible? In Voc2RDF What is the desired output (content, format, location)? Other details of your desired capability? What version of the product are you using? Please provide any additional information below (particular ontology/ies,
text contents of vocabulary (voc2rdf), operating system, browser/version
(Firefox, Safari, IE, etc.), screenshot, etc.)
_Original issue: http://code.google.com/p/mmisw/issues/detail?id=66_ | 1.0 | Vocabulary as an extension of one or more others - _From [caru...@gmail.com](https://code.google.com/u/113886747689301365533/) on November 19, 2008 08:20:17_
What capability do you want added or improved? The ability to define a vocabulary that is an extension of another vocabulary. Where do you want this capability to be accessible? In Voc2RDF What is the desired output (content, format, location)? Other details of your desired capability? What version of the product are you using? Please provide any additional information below (particular ontology/ies,
text contents of vocabulary (voc2rdf), operating system, browser/version
(Firefox, Safari, IE, etc.), screenshot, etc.)
_Original issue: http://code.google.com/p/mmisw/issues/detail?id=66_ | non_infrastructure | vocabulary as an extension of one or more others from on november what capability do you want added or improved the ability to define a vocabulary that is a extension of another vocabulary where do you want this capability to be accessible in what is the desired output content format location other details of your desired capability what version of the product are you using please provide any additional information below particular ontology ies text contents of vocabulary operating system browser version firefox safari ie etc screenshot etc original issue | 0 |
742,816 | 25,870,984,298 | IssuesEvent | 2022-12-14 02:36:36 | aacitelli/wowcraftingorders.com | https://api.github.com/repos/aacitelli/wowcraftingorders.com | closed | Provide only partial amount of reagents | enhancement good first issue priority-high | It should be an all-or-nothing, yes-or-no kind of thing with reagents. | 1.0 | Provide only partial amount of reagents - It should be an all-or-nothing, yes-or-no kind of thing with reagents. | non_infrastructure | provide only partial amount of reagents it should be an all or nothing yes or no kind of thing with reagents | 0 |
207,156 | 23,428,922,248 | IssuesEvent | 2022-08-14 20:29:36 | MidnightBSD/src | https://api.github.com/repos/MidnightBSD/src | reopened | CVE-2022-28805 (High) detected in freebsd-srcrelease/13.1.0 | security vulnerability | ## CVE-2022-28805 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>freebsd-srcrelease/13.1.0</b></p></summary>
<p>
<p>FreeBSD src tree (read-only mirror)</p>
<p>Library home page: <a href=https://github.com/freebsd/freebsd-src.git>https://github.com/freebsd/freebsd-src.git</a></p>
<p>Found in HEAD commit: <a href="https://github.com/MidnightBSD/src/commit/816463d989cc5839c1cca2efb5bf2503408507fb">816463d989cc5839c1cca2efb5bf2503408507fb</a></p>
<p>Found in base branch: <b>stable/2.2</b></p></p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (1)</summary>
<p></p>
<p>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/lparser.c</b>
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
singlevar in lparser.c in Lua from (including) 5.4.0 up to (excluding) 5.4.4 lacks a certain luaK_exp2anyregup call, leading to a heap-based buffer over-read that might affect a system that compiles untrusted Lua code.
<p>Publish Date: 2022-04-08
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-28805>CVE-2022-28805</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.1</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | CVE-2022-28805 (High) detected in freebsd-srcrelease/13.1.0 - ## CVE-2022-28805 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>freebsd-srcrelease/13.1.0</b></p></summary>
<p>
<p>FreeBSD src tree (read-only mirror)</p>
<p>Library home page: <a href=https://github.com/freebsd/freebsd-src.git>https://github.com/freebsd/freebsd-src.git</a></p>
<p>Found in HEAD commit: <a href="https://github.com/MidnightBSD/src/commit/816463d989cc5839c1cca2efb5bf2503408507fb">816463d989cc5839c1cca2efb5bf2503408507fb</a></p>
<p>Found in base branch: <b>stable/2.2</b></p></p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (1)</summary>
<p></p>
<p>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/lparser.c</b>
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
singlevar in lparser.c in Lua from (including) 5.4.0 up to (excluding) 5.4.4 lacks a certain luaK_exp2anyregup call, leading to a heap-based buffer over-read that might affect a system that compiles untrusted Lua code.
<p>Publish Date: 2022-04-08
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-28805>CVE-2022-28805</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.1</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | non_infrastructure | cve high detected in freebsd srcrelease cve high severity vulnerability vulnerable library freebsd srcrelease freebsd src tree read only mirror library home page a href found in head commit a href found in base branch stable vulnerable source files lparser c vulnerability details singlevar in lparser c in lua from including up to excluding lacks a certain luak call leading to a heap based buffer over read that might affect a system that compiles untrusted lua code publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact none availability impact high for more information on scores click a href step up your open source security game with mend | 0 |
4,802 | 5,282,963,491 | IssuesEvent | 2017-02-07 20:10:03 | dotnet/corefx | https://api.github.com/repos/dotnet/corefx | closed | Workflow on an individual library broken | area-Infrastructure bug dev-eng | If I make a change to src (e.g. src\System.Collections.Concurrent\src) and do "msbuild /t:rebuild", then switch to the tests and try to re-compile/run them with "msbuild /t:rebuildandtest", I get an error like:
```
Unhandled Exception: System.BadImageFormatException: Could not load file or assembly 'System.Console, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a'. Ref
erence assemblies should not be loaded for execution. They can only be loaded in the Reflection-only loader context. (Exception from HRESULT: 0x80131058) ---> System.BadIma
geFormatException: Cannot load a reference assembly for execution.
```
The only way I've found to recover is to do a full build.cmd/build-tests.cmd from the root.
cc: @weshaggard | 1.0 | Workflow on an individual library broken - If I make a change to src (e.g. src\System.Collections.Concurrent\src) and do "msbuild /t:rebuild", then switch to the tests and try to re-compile/run them with "msbuild /t:rebuildandtest", I get an error like:
```
Unhandled Exception: System.BadImageFormatException: Could not load file or assembly 'System.Console, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a'. Ref
erence assemblies should not be loaded for execution. They can only be loaded in the Reflection-only loader context. (Exception from HRESULT: 0x80131058) ---> System.BadIma
geFormatException: Cannot load a reference assembly for execution.
```
The only way I've found to recover is to do a full build.cmd/build-tests.cmd from the root.
cc: @weshaggard | infrastructure | workflow on an individual library broken if i make a change to src e g src system collections concurrent src and do msbuild t rebuild then switch to the tests and try to re compile run them with msbuild t rebuildandtest i get an error like unhandled exception system badimageformatexception could not load file or assembly system console version culture neutral publickeytoken ref erence assemblies should not be loaded for execution they can only be loaded in the reflection only loader context exception from hresult system badima geformatexception cannot load a reference assembly for execution the only way i ve found to recover is to do a full build cmd build tests cmd from the root cc weshaggard | 1 |
467,857 | 13,456,815,098 | IssuesEvent | 2020-09-09 08:22:41 | pingcap/tidb-operator | https://api.github.com/repos/pingcap/tidb-operator | closed | failover and scaling are blocked if one Pod failed during rolling update | priority:P1 status/help-wanted | ## Feature Request
**Is your feature request related to a problem? Please describe:**
<!-- A clear and concise description of what the problem is. Ex. I'm always frustrated when [...] -->
Take TiKV for example: during a TiKV rolling update, if one TiKV Pod fails, e.g. something is wrong with the node it was running on and it cannot be scheduled to any node, then the upgrade will be stuck waiting for this Pod to become ready and its store to be UP. However, in this case failover cannot occur, because it is blocked by the logic at https://github.com/pingcap/tidb-operator/blob/master/pkg/manager/member/tikv_upgrader.go#L110-L115, and if users want to scale out a new TiKV to increase the replicas, that is still impossible for the same reason.
**Describe the feature you'd like:**
<!-- A clear and concise description of what you want to happen. -->
**Describe alternatives you've considered:**
<!-- A clear and concise description of any alternative solutions or features you've considered. -->
**Teachability, Documentation, Adoption, Migration Strategy:**
<!-- If you can, explain some scenarios how users might use this, situations it would be helpful in. Any API designs, mockups, or diagrams are also helpful. -->
| 1.0 | failover and scaling are blocked if one Pod failed during rolling update - ## Feature Request
**Is your feature request related to a problem? Please describe:**
<!-- A clear and concise description of what the problem is. Ex. I'm always frustrated when [...] -->
Take TiKV for example: during a TiKV rolling update, if one TiKV Pod fails, e.g. something is wrong with the node it was running on and it cannot be scheduled to any node, then the upgrade will be stuck waiting for this Pod to become ready and its store to be UP. However, in this case failover cannot occur, because it is blocked by the logic at https://github.com/pingcap/tidb-operator/blob/master/pkg/manager/member/tikv_upgrader.go#L110-L115, and if users want to scale out a new TiKV to increase the replicas, that is still impossible for the same reason.
**Describe the feature you'd like:**
<!-- A clear and concise description of what you want to happen. -->
**Describe alternatives you've considered:**
<!-- A clear and concise description of any alternative solutions or features you've considered. -->
**Teachability, Documentation, Adoption, Migration Strategy:**
<!-- If you can, explain some scenarios how users might use this, situations it would be helpful in. Any API designs, mockups, or diagrams are also helpful. -->
| non_infrastructure | failover and scaling are blocked if one pod failed during rolling update feature request is your feature request related to a problem please describe take tikv for example during tikv rolling update if one tikv pod failed e g something wrong with the node it was running and it cannot be scheduled to any node then the upgrade will be stuck in waiting this pod ready and its store up however in this case if the failover cannot occur because it s blocked by the logic and if users want to scale out a new tikv to increase the replicas it still is impossible due to the same reason describe the feature you d like describe alternatives you ve considered teachability documentation adoption migration strategy | 0 |
25,803 | 19,188,155,456 | IssuesEvent | 2021-12-05 15:01:56 | ephios-dev/ephios | https://api.github.com/repos/ephios-dev/ephios | closed | Release-drafter is broken | [C] bug [P] minor [C] infrastructure | For some reason, release-drafter isn't doing anything on this repo anymore. We could switch to https://docs.github.com/en/repositories/releasing-projects-on-github/automatically-generated-release-notes instead. | 1.0 | Release-drafter is broken - For some reason, release-drafter isn't doing anything on this repo anymore. We could switch to https://docs.github.com/en/repositories/releasing-projects-on-github/automatically-generated-release-notes instead. | infrastructure | release drafter is broken for some reason release drafter isn t doing anything on this repo anymore we could switch to instead | 1 |
6,168 | 6,208,842,458 | IssuesEvent | 2017-07-07 01:28:49 | dotnet/core-setup | https://api.github.com/repos/dotnet/core-setup | closed | Proprietary license | area-Infrastructure | The main repository contains the MIT license in the https://github.com/dotnet/core-setup/blob/master/LICENSE file. But the packaging dir contains a proprietary license file: https://github.com/dotnet/core-setup/blob/master/packaging/LICENSE.txt.
Specifically, this other license file says:
> You may not
> * work around any technical limitations in the software;
> * reverse engineer, decompile or disassemble the software, except and only to the extent that applicable law expressly permits, despite this limitation;
> * publish the software for others to copy;
There are other copies of this license too:
- https://github.com/dotnet/core-setup/blob/master/resources/LICENSE.txt
- https://github.com/dotnet/core-setup/blob/master/packaging/osx/sharedframework/resources/zh-hans.lproj/eula.rtf (and various variants)
I believe this license file ends up in dotnet.tar.gz that's available from: https://www.microsoft.com/net/core
Could you please replace this with the LICENSE file in the main dir? | 1.0 | Proprietary license - The main repository contains the MIT license in the https://github.com/dotnet/core-setup/blob/master/LICENSE file. But the packaging dir contains a proprietary license file: https://github.com/dotnet/core-setup/blob/master/packaging/LICENSE.txt.
Specifically, this other license file says:
> You may not
> * work around any technical limitations in the software;
> * reverse engineer, decompile or disassemble the software, except and only to the extent that applicable law expressly permits, despite this limitation;
> * publish the software for others to copy;
There are other copies of this license too:
- https://github.com/dotnet/core-setup/blob/master/resources/LICENSE.txt
- https://github.com/dotnet/core-setup/blob/master/packaging/osx/sharedframework/resources/zh-hans.lproj/eula.rtf (and various variants)
I believe this license file ends up in dotnet.tar.gz that's available from: https://www.microsoft.com/net/core
Could you please replace this with the LICENSE file in the main dir? | infrastructure | proprietary license the main repository contains the mit license in the file but the packaging dir contains a proprietary license file specifically this other license file says you may not work around any technical limitations in the software reverse engineer decompile or disassemble the software except and only to the extent that applicable law expressly permits despite this limitation publish the software for others to copy there are other copies of this license too and various variants i believe this license file ends up in dotnet tar gz that s availalbe from could you please replace this with the license file in the main dir | 1 |
25,117 | 18,111,059,711 | IssuesEvent | 2021-09-23 04:04:16 | E3SM-Project/scream | https://api.github.com/repos/E3SM-Project/scream | closed | Group "tracers_prev" not handled correctly for dynamics | bug infrastructure homme priority:high | The "tracers_prev" group should be a hard copy of the "tracers" group, both defined on the dyn grid. The "tracers_prev" group is used by Homme to back out a tendency for the tracers.
Currently, Homme declares a group request for "tracers_prev", declared as an `Alias` of the "tracers" group. This, however, makes "tracers_prev" contain _the same_ fields as "tracers", rather than copies.
The difficulty with groups is that, at the time when they are declared, we still don't know their size. E.g., Homme does not know how many tracers there are in the "tracers" group. For the "tracers" group, things are "easy", since other atm procs will provide that info. For "tracers_prev", the only way we have to funnel all tracers in it is to make it "related" to another group. That's what the "relationship" var inside a GroupRequest object is for: it allows making the list of fields in this group dependent on another group (adding/excluding fields from the other group).
This issue is of high priority, since it's making it impossible for Homme to load the correct initial condition for "tracers_prev". | 1.0 | Group "tracers_prev" not handled correctly for dynamics - The "tracers_prev" group should be a hard copy of the "tracers" group, both defined on the dyn grid. The "tracers_prev" group is used by Homme to back out a tendency for the tracers.
Currently, Homme declares a group request for "tracers_prev", declared as an `Alias` of the "tracers" group. This, however, makes "tracers_prev" contain _the same_ fields as "tracers", rather than copies.
The difficulty with groups is that, at the time when they are declared, we still don't know their size. E.g., Homme does not know how many tracers there are in the "tracers" group. For the "tracers" group, things are "easy", since other atm procs will provide that info. For "tracers_prev", the only way we have to funnel all tracers in it is to make it "related" to another group. That's what the "relationship" var inside a GroupRequest object is for: it allows making the list of fields in this group dependent on another group (adding/excluding fields from the other group).
This issue is of high priority, since it's making it impossible for Homme to load the correct initial condition for "tracers_prev". | infrastructure | group tracers prev not handled correctly for dynamics the tracers prev group should be a hard copy of the tracers group both defined on the dyn grid the tracers prev group is used by homme to back out a tendency for the tracers currently homme declares a group request for tracers prev declared as an alias of the tracers group this however makes tracers prev contains the same fields as tracers rather than copies the difficulty with groups is that at the time where they are declared we still don t know their size e g homme does not know how many tracers there are in the tracers group for the tracers group things are easy since other atm procs will provide that info for tracers prev the only way we have to funnel all tracers in it is to make it related to another group that s what the relationship var inside a grouprequest object is for it allows to make the list of fields in this group dependent on another group addinig excluding fields from the other group this issue is of high priority since it s making it impossible for homme to load the correct initial condition for tracers prev | 1 |
95,999 | 12,069,506,291 | IssuesEvent | 2020-04-16 16:09:29 | phetsims/ph-scale | https://api.github.com/repos/phetsims/ph-scale | closed | identify featured elements | design:phet-io | Feedback on initial PhET-iO instrumentation was completed in https://github.com/phetsims/ph-scale/issues/117, and all resulting GitHub issues have been either addressed or deferred.
The sim is now ready to identify featured elements using Studio.
| 1.0 | identify featured elements - Feedback on initial PhET-iO instrumentation was completed in https://github.com/phetsims/ph-scale/issues/117, and all resulting GitHub issues have been either addressed or deferred.
The sim is now ready to identify featured elements using Studio.
| non_infrastructure | identify featured elements feedback on initial phet io instrumentation was completed in and all resulting github issues have been either addressed or deferred the sim is now ready to identify featured elements using studio | 0 |
18,962 | 13,179,105,917 | IssuesEvent | 2020-08-12 10:18:10 | lpc-rs/lpc8xx-hal | https://api.github.com/repos/lpc-rs/lpc8xx-hal | opened | Consider integrating parts of LPC845 Test Stand | type: infrastructure | I've been working on a client project this year that included lots of improvements to this HAL. In addition, the client insisted on having automated tests for all features that I add. This resulted in the creation of [LPC845 Test Stand](https://github.com/braun-embedded/lpc845-test-stand).
LPC845 Test Stand basically consists of the following parts:
- A test suite for some LPC8xx HAL APIs, running on the host PC.
- Libraries for communication between the test suite and firmware.
- The target firmware, which uses the HAL APIs as directed by the test suite.
- The assistant firmware, which assists the test suite in directing the target and verifying its results.
Both target and assistant run on LPC845-BRK boards.
Right now, the test stand is specific to the LPC845/LPC845-BRK, but I've structured it in such a way that separating the LPC845 test suite and target from infrastructure components should be relatively straightforward (there's been talk about a project that would include creating a test suite for another HAL, which would require me to work on this separation, but nothing's set in stone yet).
I'd like to suggest the possibility of moving the LPC845-specific parts (which by extension also cover the LPC82x) into this repository, basically hosting the test suite here and adding the infrastructure parts of the test stand as a dependency of that test suite. I don't think we should necessarily do that now or in its current form. I just want to start the discussion. (There's also the question of manpower. I might not be able to do this work any time soon.)
The test stand in its current form has some drawbacks, most of which we might want to fix before integrating it here:
- It is not fully automated. It would be nice to integrate probe-rs to automatically flash firmware, use a USB library to make sure the test suite talks to the right serial ports, etc. Right now, you need to do some manual work (connecting USB plugs in the right order; having two terminals open for running `cargo embed`, a third for running `cargo test`).
- The assistant board brings a lot of complexity with it, mostly in the form of lots of wiring between target and assistant. This is necessary for what my client wants to do with it, but I think it's largely unnecessary for a HAL API test suite. What the assistant mostly does currently is serve as a counterpart for USART/I2C/SPI, but I think we can achieve the same effect using simpler means (as all those peripherals support some form of loopback mode). For our use case here, it should be possible to do away with the assistant completely and have everything run on a single LPC845-BRK.
Once the test stand has been improved and simplified sufficiently, I believe it would be a great asset for this project. It would give us instant feedback on how changes affect existing functionality. Integrating it would require adding the following crates to the repository:
- Test suite: This is a standard crate with tests and some wrappers around test stand APIs, to provide the tests with a tailor-made API for their specific use case.
- Test target: The firmware that does the actual testing, as directed by the test suite. Currently, there's only one big target firmware (which is in need of some clean-up), but if we had the aforementioned automation based on probe-rs, we could have small and focused firmwares (e.g. one for I2C, one for SPI, one for async USART, one for sync USART, etc.).
- Protocol library: A small `no_std` library that defines the protocol the suite and the target use to communicate. Communication is currently based on Postcard/Serde.
I would suggest to add all those crates in a new subdirectory, to not clutter things up too much.
So much for the possible plan. I think it would be great if we could make this happen some time. I'd be interested in your thoughts, @david-sawatzke. | 1.0 | Consider integrating parts of LPC845 Test Stand - I've been working on a client project this year that included lots of improvements to this HAL. In addition, the client insisted on having automated tests for all features that I add. This resulted in the creation of [LPC845 Test Stand](https://github.com/braun-embedded/lpc845-test-stand).
LPC845 Test Stand basically consists of the following parts:
- A test suite for some LPC8xx HAL APIs, running on the host PC.
- Libraries for communication between the test suite and firmware.
- The target firmware, which uses the HAL APIs as directed by the test suite.
- The assistant firmware, which assists the test suite in directing the target and verifying its results.
Both target and assistant run on LPC845-BRK boards.
Right now, the test stand is specific to the LPC845/LPC845-BRK, but I've structured it in such a way that separating the LPC845 test suite and target from infrastructure components should be relatively straightforward (there's been talk about a project that would include creating a test suite for another HAL, which would require me to work on this separation, but nothing's set in stone yet).
I'd like to suggest the possibility of moving the LPC845-specific parts (which by extension also cover the LPC82x) into this repository, basically hosting the test suite here and adding the infrastructure parts of the test stand as a dependency of that test suite. I don't think we should necessarily do that now or in its current form. I just want to start the discussion. (There's also the question of manpower. I might not be able to do this work any time soon.)
The test stand in its current form has some drawbacks, most of which we might want to fix before integrating it here:
- It is not fully automated. It would be nice to integrate probe-rs to automatically flash firmware, use a USB library to make sure the test suite talks to the right serial ports, etc. Right now, you need to do some manual work (connecting USB plugs in the right order; having two terminals open for running `cargo embed`, a third for running `cargo test`).
- The assistant board brings a lot of complexity with it, mostly in the form of lots of wiring between target and assistant. This is necessary for what my client wants to do with it, but I think it's largely unnecessary for a HAL API test suite. What the assistant mostly does currently is serve as a counterpart for USART/I2C/SPI, but I think we can achieve the same effect using simpler means (as all those peripherals support some form of loopback mode). For our use case here, it should be possible to do away with the assistant completely and have everything run on a single LPC845-BRK.
Once the test stand has been improved and simplified sufficiently, I believe it would be a great asset for this project. It would give us instant feedback on how changes affect existing functionality. Integrating it would require adding the following crates to the repository:
- Test suite: This is a standard crate with tests and some wrappers around test stand APIs, to provide the tests with a tailor-made API for their specific use case.
- Test target: The firmware that does the actual testing, as directed by the test suite. Currently, there's only one big target firmware (which is in need of some clean-up), but if we had the aforementioned automation based on probe-rs, we could have small and focused firmwares (e.g. one for I2C, one for SPI, one for async USART, one for sync USART, etc.).
- Protocol library: A small `no_std` library that defines the protocol the suite and the target use to communicate. Communication is currently based on Postcard/Serde.
I would suggest adding all those crates in a new subdirectory, so as not to clutter things up too much.
So much for the possible plan. I think it would be great if we could make this happen some time. I'd be interested in your thoughts, @david-sawatzke. | infrastructure | consider integrating parts of test stand i ve been working on a client project this year that included lots of improvements to this hal in addition the client insisted on having automated tests for all features that i add this resulted in the creation of test stand basically consists of the following parts a test suite for some hal apis running on the host pc libraries for communication between the test suite and firmware the target firmware which uses the hal apis as directed by the test suite the assistant firmware which assists the test suite in directing the target and verifying its results both target and assistant run on brk boards right now the test stand is specific to the brk but i ve structured it in such a way that separating the test suite and target from infrastructure components should be relatively straight forward there s been talk about a project that would include creating a test suite for another hal which would require me to work on this separation but nothing s set in stone yet i d like to suggest the possibility of moving the specific parts which by extension also cover the into this repository basically hosting the test suite here and adding the infrastructure parts of the test stand as a dependency of that test suite i don t think we should necessarily do that now or in its current form i just want to start the discussion there s also the question of manpower i might not be able to do this work any time soon the test stand in its current form has some drawbacks most of which we might want to fix before integrating it here it is not fully automated it would be nice to integrate probe rs to automatically flash firmware use a usb library to make sure the test suite talks to the right serial ports etc right now you need to do some manual work connecting usb plugs in the right 
order having two terminals open for running cargo embed a third for running cargo test the assistant board board brings a lot of complexity with it mostly in the form of lots of wiring between target and assistant this is necessary for what my clients wants to do with it but i think it s largely unnecessary for a hal api test suite what the assistant mostly does currently is serve as a counterpart for usart spi but i think we can achieve the same effect using simpler means as all those peripherals support some form of loopback mode for our use case here it should be possible to do away with the assistant completely and have everything run on a single brk once the test stand has been improved and simplified sufficiently i believe it would be a great asset for this project it would give us instant feedback on how changes affect existing functionality integrating it would require adding the following crates to the repository test suite this is a standard crate with tests and some wrappers around test stand apis to provide the tests with a tailor made api for their specific use case test target the firmware that does the actual testing as directed by the test suite currently there s only one big target firmware which is in need of some clean up but if we had the aforementioned automation based on probe rs we could have small and focused firmwares e g one for one for spi one for async usart one for sync usart etc protocol library a small no std library that defines the protocol the suite and the target use to communicate communication is currently based on postcard serde i would suggest to add all those crates in a new subdirectory to not clutter things up too much so much for the possible plan i think it would be great if we could make this happen some time i d be interested in your thoughts david sawatzke | 1 |
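The protocol-library bullet in the record above mentions a small `no_std` protocol built on Postcard/Serde over a serial link. Packets on a raw byte stream need framing so the host can find message boundaries, and COBS is the scheme commonly paired with postcard. Here is a minimal sketch in Python; the framing details are an assumption for illustration, not taken from the issue.

```python
def cobs_encode(data: bytes) -> bytes:
    """Consistent Overhead Byte Stuffing: remove 0x00 from the payload so
    a single 0x00 byte can delimit packets on the wire."""
    out, block = bytearray(), bytearray()
    for byte in data:
        if byte == 0:
            out.append(len(block) + 1)  # code byte: block length + 1
            out += block
            block.clear()
        else:
            block.append(byte)
            if len(block) == 254:       # longest run without an implied zero
                out.append(255)
                out += block
                block.clear()
    out.append(len(block) + 1)
    out += block
    return bytes(out)

def cobs_decode(data: bytes) -> bytes:
    out, i = bytearray(), 0
    while i < len(data):
        code = data[i]
        out += data[i + 1 : i + code]
        i += code
        if code < 255 and i < len(data):
            out.append(0)               # the zero this block replaced
    return bytes(out)

def frame(payload: bytes) -> bytes:
    """One packet on the serial line: COBS-encoded body plus 0x00 delimiter."""
    return cobs_encode(payload) + b"\x00"
```

The host-side suite could then split the incoming stream on `b"\x00"` and decode each chunk before deserializing the message.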
34,499 | 30,029,268,720 | IssuesEvent | 2023-06-27 08:31:41 | grafana/agent | https://api.github.com/repos/grafana/agent | closed | blackbox metric is collected when blackbox integration is disabled | upstream type/infrastructure | `blackbox_exporter_config_last_reload_successful: 0` metric returned by grafana agent 0.29 even if blackbox_exporter integration is not enabled.
I think there could be two issues with this:
- We try to collect only metrics that are useful for users, so users don’t pay too much if they use Grafana Cloud. Of course it is just one metric, but…
- Also, since it has 0, it could trigger some blackbox exporter related alerts , as 0 actually means there is an issue with reloading config | 1.0 | blackbox metric is collected when blackbox integration is disabled - `blackbox_exporter_config_last_reload_successful: 0` metric returned by grafana agent 0.29 even if blackbox_exporter integration is not enabled.
I think there could be two issues with this:
- We try to collect only metrics that are useful for users, so users don’t pay too much if they use Grafana Cloud. Of course it is just one metric, but…
- Also, since it has 0, it could trigger some blackbox exporter related alerts , as 0 actually means there is an issue with reloading config | infrastructure | blackbox metric is collected when blackbox integration is disabled blackbox exporter config last reload successful metric returned by grafana agent even if blackbox exporter integration is not enabled i think there could be two issues with this we try to collect only metrics that are useful for users so so users don’t pay too much if they use grafana cloud of course it just one metrics but… also since it has it could trigger some blackbox exporter related alerts as actually means there is an issue with reloading config | 1 |
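Until the exporter in the record above stops emitting the metric while the integration is disabled, a common workaround is to drop it at scrape time. A hypothetical Prometheus-style relabeling fragment (placement and surrounding job config are assumptions):

```yaml
# Drop the spurious blackbox reload metric when the integration is unused,
# so it neither costs active series nor trips reload-failure alerts.
metric_relabel_configs:
  - source_labels: [__name__]
    regex: blackbox_exporter_config_last_reload_successful
    action: drop
```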
33,859 | 27,953,302,019 | IssuesEvent | 2023-03-24 10:28:04 | dart-lang/sdk | https://api.github.com/repos/dart-lang/sdk | closed | Update checked-in Dart SDK | area-infrastructure | In https://github.com/flutter/engine/pull/40394 I ran into the issue that the checked-in Dart SDK doesn't yet have class modifiers enabled by default. Could the checked-in Dart SDK be updated past 3.0.0-325.0.dev to get that support? | 1.0 | Update checked-in Dart SDK - In https://github.com/flutter/engine/pull/40394 I ran into the issue that the checked-in Dart SDK doesn't yet have class modifiers enabled by default. Could the checked-in Dart SDK be updated past 3.0.0-325.0.dev to get that support? | infrastructure | update checked in dart sdk in i ran into the issue that the checked in dart sdk doesn t yet have class modifiers enabled by default could the checked in dart sdk be updated past dev to get that support | 1 |
166,463 | 6,305,082,156 | IssuesEvent | 2017-07-21 17:28:48 | minio/minio-go | https://api.github.com/repos/minio/minio-go | closed | v3.0.0 breaks restic backend tests? | priority: medium triage | Hi,
I've tried to update the vendored minio-go for restic, but now the backend tests (using minio as the server) fail:
```
# restic/backend/s3
time="2017-07-17T19:49:10+02:00" level=error msg="{\"method\":\"PUT\",\"reqURI\":\"/restictestbucket/test-1500313750128396834/data/c3/c3ab8ff13720e8ad9047dd39466b3c8974e592c2fa383d4a3960714caef0c4f2\",\"header\":{\"Accept-Encoding\":[\"gzip\"],\"Authorization\":[\"AWS4-HMAC-SHA256 Credential=ad2bb89ea0d6eeb70899/20170717/us-east-1/s3/aws4_request,SignedHeaders=content-encoding;host;x-amz-content-sha256;x-amz-date;x-amz-decoded-content-length,Signature=1f51da997169f2f3aa0824bbf651e078212da68efead06a14e63e1875f7ab339\"],\"Content-Encoding\":[\"aws-chunked\"],\"Content-Length\":[\"178\"],\"Content-Type\":[\"application/octet-stream\"],\"Host\":[\"localhost:9000\"],\"User-Agent\":[\"Minio (linux; amd64) minio-go/2.1.0\"],\"X-Amz-Content-Sha256\":[\"STREAMING-AWS4-HMAC-SHA256-PAYLOAD\"],\"X-Amz-Date\":[\"20170717T174910Z\"],\"X-Amz-Decoded-Content-Length\":[\"6\"]}}" cause="Signature does not match" source="[object-handlers.go:510:objectAPIHandlers.PutObjectHandler()]"
tests.go:419: unexpected error: The request signature we calculated does not match the signature you provided. Check your key and signing method.
client.PutObject
restic/backend/s3.(*Backend).Save
/home/fd0/shared/work/restic/restic/src/restic/backend/s3/s3.go:282
restic/backend/test.store
/home/fd0/shared/work/restic/restic/src/restic/backend/test/tests.go:418
restic/backend/test.(*Suite).TestBackend
/home/fd0/shared/work/restic/restic/src/restic/backend/test/tests.go:517
runtime.call32
/usr/lib/go/src/runtime/asm_amd64.s:514
reflect.callMethod
/usr/lib/go/src/reflect/value.go:640
reflect.methodValueCall
/usr/lib/go/src/reflect/asm_amd64.s:29
testing.tRunner
/usr/lib/go/src/testing/testing.go:657
runtime.goexit
/usr/lib/go/src/runtime/asm_amd64.s:2197
```
Bisecting from fe53a65ebc43b5d22626b29a19a3de81170e42d3 to bd8e1d8a93f006a0207e026353bf0644ffcdd320 shows that this is the first bad commit:
```
ebce2a3eeb6cf07c95aa71f37965244773c48ea5 is the first bad commit
commit ebce2a3eeb6cf07c95aa71f37965244773c48ea5
Author: Harshavardhana <harsha@minio.io>
Date: Sun Jul 2 10:49:04 2017 -0700
api: Pass down encryption metadata to all the multipart callers. (#736)
```
Any idea what's going on here? | 1.0 | v3.0.0 breaks restic backend tests? - Hi,
I've tried to update the vendored minio-go for restic, but now the backend tests (using minio as the server) fail:
```
# restic/backend/s3
time="2017-07-17T19:49:10+02:00" level=error msg="{\"method\":\"PUT\",\"reqURI\":\"/restictestbucket/test-1500313750128396834/data/c3/c3ab8ff13720e8ad9047dd39466b3c8974e592c2fa383d4a3960714caef0c4f2\",\"header\":{\"Accept-Encoding\":[\"gzip\"],\"Authorization\":[\"AWS4-HMAC-SHA256 Credential=ad2bb89ea0d6eeb70899/20170717/us-east-1/s3/aws4_request,SignedHeaders=content-encoding;host;x-amz-content-sha256;x-amz-date;x-amz-decoded-content-length,Signature=1f51da997169f2f3aa0824bbf651e078212da68efead06a14e63e1875f7ab339\"],\"Content-Encoding\":[\"aws-chunked\"],\"Content-Length\":[\"178\"],\"Content-Type\":[\"application/octet-stream\"],\"Host\":[\"localhost:9000\"],\"User-Agent\":[\"Minio (linux; amd64) minio-go/2.1.0\"],\"X-Amz-Content-Sha256\":[\"STREAMING-AWS4-HMAC-SHA256-PAYLOAD\"],\"X-Amz-Date\":[\"20170717T174910Z\"],\"X-Amz-Decoded-Content-Length\":[\"6\"]}}" cause="Signature does not match" source="[object-handlers.go:510:objectAPIHandlers.PutObjectHandler()]"
tests.go:419: unexpected error: The request signature we calculated does not match the signature you provided. Check your key and signing method.
client.PutObject
restic/backend/s3.(*Backend).Save
/home/fd0/shared/work/restic/restic/src/restic/backend/s3/s3.go:282
restic/backend/test.store
/home/fd0/shared/work/restic/restic/src/restic/backend/test/tests.go:418
restic/backend/test.(*Suite).TestBackend
/home/fd0/shared/work/restic/restic/src/restic/backend/test/tests.go:517
runtime.call32
/usr/lib/go/src/runtime/asm_amd64.s:514
reflect.callMethod
/usr/lib/go/src/reflect/value.go:640
reflect.methodValueCall
/usr/lib/go/src/reflect/asm_amd64.s:29
testing.tRunner
/usr/lib/go/src/testing/testing.go:657
runtime.goexit
/usr/lib/go/src/runtime/asm_amd64.s:2197
```
Bisecting from fe53a65ebc43b5d22626b29a19a3de81170e42d3 to bd8e1d8a93f006a0207e026353bf0644ffcdd320 shows that this is the first bad commit:
```
ebce2a3eeb6cf07c95aa71f37965244773c48ea5 is the first bad commit
commit ebce2a3eeb6cf07c95aa71f37965244773c48ea5
Author: Harshavardhana <harsha@minio.io>
Date: Sun Jul 2 10:49:04 2017 -0700
api: Pass down encryption metadata to all the multipart callers. (#736)
```
Any idea what's going on here? | non_infrastructure | breaks restic backend tests hi i ve tried to update the vendored minio go for restic but now the backend tests using minio as the server fail restic backend time level error msg method put requri restictestbucket test data header accept encoding authorization content encoding content length content type host user agent x amz content x amz date x amz decoded content length cause signature does not match source tests go unexpected error the request signature we calculated does not match the signature you provided check your key and signing method client putobject restic backend backend save home shared work restic restic src restic backend go restic backend test store home shared work restic restic src restic backend test tests go restic backend test suite testbackend home shared work restic restic src restic backend test tests go runtime usr lib go src runtime asm s reflect callmethod usr lib go src reflect value go reflect methodvaluecall usr lib go src reflect asm s testing trunner usr lib go src testing testing go runtime goexit usr lib go src runtime asm s bisecting from to shows that this is the first bad commit is the first bad commit commit author harshavardhana date sun jul api pass down encryption metadata to all the multipart callers any idea what s going on here | 0 |
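The bisect in the record above is a binary search over the commit range: assuming the range starts good and ends bad, each test run halves it until the first bad commit is isolated. A sketch of that core loop in Python (a hypothetical harness, not the actual restic test driver):

```python
def first_bad_commit(commits, is_bad):
    """Return the first bad commit in O(log n) test runs.

    commits[0] must test good and commits[-1] must test bad; is_bad(commit)
    plays the role of the test script run by `git bisect run`.
    """
    lo, hi = 0, len(commits) - 1  # invariant: commits[lo] good, commits[hi] bad
    while hi - lo > 1:
        mid = (lo + hi) // 2
        if is_bad(commits[mid]):
            hi = mid
        else:
            lo = mid
    return commits[hi]
```

With a few dozen commits between the two SHAs in the log above, this needs only about five or six test runs.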
30,526 | 24,894,953,370 | IssuesEvent | 2022-10-28 15:01:57 | safe-global/safe-android | https://api.github.com/repos/safe-global/safe-android | opened | Update Infura credentials | infrastructure | Infura credentials have changed.
We should update both production and staging credentials.
Where to update:
1. Buildkite server's credentials
2. Your local configuration
The old tokens will expire in 1 month and we'll have to force-update users to new app version by that time.
Where to find new tokens: Ask @DmitryBespalov | 1.0 | Update Infura credentials - Infura credentials have changed.
We should update both production and staging credentials.
Where to update:
1. Buildkite server's credentials
2. Your local configuration
The old tokens will expire in 1 month and we'll have to force-update users to new app version by that time.
Where to find new tokens: Ask @DmitryBespalov | infrastructure | update infura credentials infura credentials have changed we should update both production and staging credentials where to update buildkite server s credentials your local configuration the old tokens will expire in month and we ll have to force update users to new app version by that time where to find new tokens ask dmitrybespalov | 1 |
69,092 | 17,570,557,897 | IssuesEvent | 2021-08-14 15:51:16 | org-jonnala/lab | https://api.github.com/repos/org-jonnala/lab | closed | test4 | type:build/install type:others subtype:Mendel Linux subtype:ubuntu/linux Hardware:USB Accelerator comp:thirdparty | ### Description
test5
<details><summary>Click to expand!</summary>
### Issue Type
Build/Install
### Operating System
Mendel Linux
### Coral Device
USB Accelerator
### Other Devices
Rapsberry Pi 4
### Programming Language
Python 3.5
### Relevant Log Output
_No response_</details> | 1.0 | test4 - ### Description
test5
<details><summary>Click to expand!</summary>
### Issue Type
Build/Install
### Operating System
Mendel Linux
### Coral Device
USB Accelerator
### Other Devices
Rapsberry Pi 4
### Programming Language
Python 3.5
### Relevant Log Output
_No response_</details> | non_infrastructure | description click to expand issue type build install operating system mendel linux coral device usb accelerator other devices rapsberry pi programming language python relevant log output no response | 0 |
32,259 | 26,576,002,210 | IssuesEvent | 2023-01-21 20:37:34 | ArturWincenciak/Blef | https://api.github.com/repos/ArturWincenciak/Blef | closed | Make sure this permissive `CORS` policy is safe here | technical infrastructure | [Make sure this permissive CORS policy is safe here](https://sonarcloud.io/project/security_hotspots?id=ArturWincenciak_Blef&hotspots=AYWoOfBQiyjUyu43PAuX)
`src/Shared/Blef.Shared.Infrastructure/Extensions/Extensions.Infrastructure.cs`

| 1.0 | Make sure this permissive `CORS` policy is safe here - [Make sure this permissive CORS policy is safe here](https://sonarcloud.io/project/security_hotspots?id=ArturWincenciak_Blef&hotspots=AYWoOfBQiyjUyu43PAuX)
`src/Shared/Blef.Shared.Infrastructure/Extensions/Extensions.Infrastructure.cs`

| infrastructure | make sure this permissive cors policy is safe here src shared blef shared infrastructure extensions extensions infrastructure cs | 1 |
247,420 | 20,978,242,608 | IssuesEvent | 2022-03-28 17:11:02 | opensearch-project/opensearch-build | https://api.github.com/repos/opensearch-project/opensearch-build | closed | Automate the RPM distribution builds validation | ci-test-automation deb/rpm | RPM distribution is really close (https://github.com/opensearch-project/opensearch-build/issues/1117), we need an automated way to validate that the constructed RPM image works. This will make sure that we can catch any issues that prevent the components from being viable.
Note: this might require an additional realm of testing that we do not have today; we should consider how we can best integrate this with our processes and make sure it is documented.
Acceptance criteria:
- Given an RPM with OpenSearch, verify that the service operates | 1.0 | Automate the RPM distribution builds validation - RPM distribution is really close (https://github.com/opensearch-project/opensearch-build/issues/1117), we need an automated way to validate that the constructed RPM image works. This will make sure that we can catch any issues that prevent the components from being viable.
Note; this might require an additional realm of testing that we do not have today we should consider how we can best integrate this with our processes and make sure it is documented.
Acceptance criteria:
- Given an RPM with OpenSearch, verify that the service operates | non_infrastructure | automate the rpm distribution builds validation rpm distribution is really close we need an automated way to validate that the constructed rpm image works this will make sure that we can catch any issues that prevent the components from being viable note this might require an additional realm of testing that we do not have today we should consider how we can best integrate this with our processes and make sure it is documented acceptance criteria given an rpm with opensearch verify that the service operates | 0
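For the acceptance criterion in the record above, an automated check typically installs the RPM, starts the service, and polls its port until it answers. A minimal polling helper in Python (the OpenSearch port and the timings are assumptions):

```python
import socket
import time

def wait_for_port(host, port, timeout=30.0, interval=0.1):
    """Poll until a TCP connect to (host, port) succeeds; return True on
    success, False once the deadline passes. After e.g. `systemctl start
    opensearch`, a harness might call wait_for_port("localhost", 9200)."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            with socket.create_connection((host, port), timeout=interval):
                return True
        except OSError:
            time.sleep(interval)
    return False
```

A fuller check would follow this with an HTTP request to the cluster health endpoint, but reachability is the first gate.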
19,379 | 3,442,097,151 | IssuesEvent | 2015-12-14 21:11:48 | mozilla/teach.mozilla.org | https://api.github.com/repos/mozilla/teach.mozilla.org | opened | [Design Audit] - Master Style Guide | design design audit | As we progress with the design audit we can document all the design decisions and progress in one place for designers and developers to refer to. Also, we can have one folder in google drive with all the assets.
cc/ @kristinashu @mmmavis @cassiemc | 2.0 | [Design Audit] - Master Style Guide - As we progress with the design audit we can document all the design decisions and progress in one place for designers and developers to refer to. Also, we can have one folder in google drive with all the assets.
cc/ @kristinashu @mmmavis @cassiemc | non_infrastructure | master style guide as we progress with the design audit we can document all the design decisions and progress in one place for designers and developers to refer to also we can have one folder in google drive with all the assets cc kristinashu mmmavis cassiemc | 0 |
8,467 | 7,456,883,116 | IssuesEvent | 2018-03-30 00:14:36 | dotnet/roslyn | https://api.github.com/repos/dotnet/roslyn | closed | Move Roslyn over to .xlf for localization | Area-Infrastructure | Right now we still using some legacy localization tools for building our localization for Roslyn that has a few bad impacts:
1. The localizations themselves are held elsewhere.
2. We can't build our localized setup packages in our current build.
3. Insertions can potentially get flaky because of the previous item.
We should move to the newer .xlf-based system, once the remaining concerns are understood there. | 1.0 | Move Roslyn over to .xlf for localization - Right now we still using some legacy localization tools for building our localization for Roslyn that has a few bad impacts:
1. The localizations themselves are held elsewhere.
2. We can't build our localized setup packages in our current build.
3. Insertions can potentially get flaky because of the previous item.
We should move to the newer .xlf-based system, once the remaining concerns are understood there. | infrastructure | move roslyn over to xlf for localization right now we still using some legacy localization tools for building our localization for roslyn that has a few bad impacts the localizations themselves are held elsewhere we can t build our localized setup packages in our current build insertions can potentially get flaky because of the previous item we should move to the newer xlf based system once the remaining concerns are understood there | 1 |
68,267 | 28,310,944,664 | IssuesEvent | 2023-04-10 15:20:05 | amplication/amplication | https://api.github.com/repos/amplication/amplication | closed | 🐛 Bug Report: Onboarding (new service) wizard - getting error: Cannot read properties of null (reading 'isOverrideGitRepository') | type: bug epic: Service Creation | ### What happened?
In various flows found by Ofek, like adding a message broker, or skipping the GH repo selection, you get an error:
Cannot read properties of null (reading 'isOverrideGitRepository')
See here:
https://jam.dev/c/d03d9ee8-8b5e-4277-9671-bbe336aa1322
And here:
https://jam.dev/c/76829931-2e45-406d-9536-1ecc46fd844e
### What you expected to happen
Correctly populate _isOverrideGitRepository_ or be aware it is nullable
### How to reproduce
See videos in the description
### Amplication version
1.4.7 Sandbox
### Environment
_No response_
### Are you willing to submit PR?
_No response_ | 1.0 | 🐛 Bug Report: Onboarding (new service) wizard - getting error: Cannot read properties of null (reading 'isOverrideGitRepository') - ### What happened?
In various flows found by Ofek, like adding a message broker, or skipping the GH repo selection, you get an error:
Cannot read properties of null (reading 'isOverrideGitRepository')
See here:
https://jam.dev/c/d03d9ee8-8b5e-4277-9671-bbe336aa1322
And here:
https://jam.dev/c/76829931-2e45-406d-9536-1ecc46fd844e
### What you expected to happen
Correctly populate _isOverrideGitRepository_ or be aware it is nullable
### How to reproduce
See videos in the description
### Amplication version
1.4.7 Sandbox
### Environment
_No response_
### Are you willing to submit PR?
_No response_ | non_infrastructure | 🐛 bug report onboarding new service wizard getting error cannot read properties of null reading isoverridegitrepository what happened in various flows found by ofek like adding a message broker or skipping the gh repo selection you get an error cannot read properties of null reading isoverridegitrepository see here and here what you expected to happen correctly populate isoverridegitrepository or be aware it is nullable how to reproduce see videos in the description amplication version sandbox environment no response are you willing to submit pr no response | 0 |
24,217 | 17,015,133,478 | IssuesEvent | 2021-07-02 10:53:11 | bashi-nobu/qumitoru | https://api.github.com/repos/bashi-nobu/qumitoru | closed | [Task] CI/CD environment setup | Task Type: Infrastructure | ## Deliverables:
CI/CD environment using GitHub Actions
- Run tests with PyTest & Jest/NightWatch on commit
- On merge to the staging branch, push the container to ECR & update ECS (service)
## Tasks:
- [x] Create yml file | 1.0 | [Task] CI/CD environment setup - ## Deliverables:
CI/CD environment using GitHub Actions
- Run tests with PyTest & Jest/NightWatch on commit
- On merge to the staging branch, push the container to ECR & update ECS (service)
## Tasks:
- [x] Create yml file | infrastructure | ci cd environment setup deliverables ci cd environment using github actions run tests with pytest jest nightwatch on commit on merge to staging branch push container to ecr update ecs service tasks create yml file | 1
28,740 | 23,472,401,897 | IssuesEvent | 2022-08-17 00:01:05 | antlr/grammars-v4 | https://api.github.com/repos/antlr/grammars-v4 | closed | Go build--issues with connection | infrastructure | ```
2022-08-08T15:25:37.0856980Z go: github.com/antlr/antlr4/runtime/Go/antlr@4.10: invalid version: Get "https://proxy.golang.org/github.com/antlr/antlr4/runtime/%21go/antlr/@v/4.10.info": stream error: stream ID 7; INTERNAL_ERROR; received from peer
2022-08-08T15:25:37.0877720Z Build failed
```
Github has crappy links to the network. The Dart target [failed before](https://github.com/antlr/grammars-v4/issues/2743). Now it's Go. The solution employed in the Dart target was to loop 5x's until it got it right. This seems ok for the ["go get"](https://github.com/antlr/grammars-v4/blob/4133623828bac83757a150c96e7f7a9f7978c087/_scripts/templates/Go/tester.psm1#L19) line.
I don't think it'll fail for the [go build](https://github.com/antlr/grammars-v4/blob/4133623828bac83757a150c96e7f7a9f7978c087/_scripts/templates/Go/tester.psm1#L26) line because I don't think there are any further dependencies??? Hmm, I thought "go build" looks at the go.mod file and actually does the fetch from the internet. | 1.0 | Go build--issues with connection - ```
2022-08-08T15:25:37.0856980Z go: github.com/antlr/antlr4/runtime/Go/antlr@4.10: invalid version: Get "https://proxy.golang.org/github.com/antlr/antlr4/runtime/%21go/antlr/@v/4.10.info": stream error: stream ID 7; INTERNAL_ERROR; received from peer
2022-08-08T15:25:37.0877720Z Build failed
```
Github has crappy links to the network. The Dart target [failed before](https://github.com/antlr/grammars-v4/issues/2743). Now it's Go. The solution employed in the Dart target was to loop 5x's until it got it right. This seems ok for the ["go get"](https://github.com/antlr/grammars-v4/blob/4133623828bac83757a150c96e7f7a9f7978c087/_scripts/templates/Go/tester.psm1#L19) line.
I don't think it'll fail for the [go build](https://github.com/antlr/grammars-v4/blob/4133623828bac83757a150c96e7f7a9f7978c087/_scripts/templates/Go/tester.psm1#L26) line because I don't think there are any further dependencies??? Hmm, I thought "go build" looks at the go.mod file and actually does the fetch from the internet. | infrastructure | go build issues with connection go github com antlr runtime go antlr invalid version get stream error stream id internal error received from peer build failed github has crappy links to the network the dart target now it s go the solution employed in the dart target was to loop s until it got it right this seems ok for the line i don t think it ll fail for the line because i don t think there are any further dependencies hmm i thought go build looks at the go mod file and actually does the fetch from the internet | 1
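The Dart-target workaround mentioned in the record above, looping up to five times until the flaky fetch succeeds, generalizes to retry with exponential backoff. A Python sketch (the attempt count and delays are assumptions; the real logic lives in the PowerShell tester scripts):

```python
import time

def retry(fn, attempts=5, base_delay=1.0, sleep=time.sleep):
    """Call fn(); on exception, wait base_delay * 2**i seconds and try
    again, re-raising after the final attempt. `sleep` is injectable so
    tests can run without real delays."""
    for i in range(attempts):
        try:
            return fn()
        except Exception:
            if i == attempts - 1:
                raise
            sleep(base_delay * 2 ** i)
```

Wrapping the `go get` invocation this way turns a transient proxy stream error into, at worst, a few seconds of delay instead of a failed build.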
130,034 | 12,422,091,098 | IssuesEvent | 2020-05-23 20:11:06 | Minhacps/votacidade-site | https://api.github.com/repos/Minhacps/votacidade-site | closed | Traduzir read.me | documentation | O Gatsby gera uma documentação bem legal de start do projeto no read.me, mas está em inglês. Seria legal traduzirmos para ficar mais acessível a todos. | 1.0 | Traduzir read.me - O Gatsby gera uma documentação bem legal de start do projeto no read.me, mas está em inglês. Seria legal traduzirmos para ficar mais acessível a todos. | non_infrastructure | traduzir read me o gatsby gera uma documentação bem legal de start do projeto no read me mas está em inglês seria legal traduzirmos para ficar mais acessível a todos | 0 |
31,025 | 25,261,837,265 | IssuesEvent | 2022-11-15 23:37:21 | dotnet/roslyn | https://api.github.com/repos/dotnet/roslyn | closed | Question: nuget package release schedule | Area-Infrastructure untriaged | Hi team, is there a release schedule for the NuGet packages?
Since .NET 7 has been released, is there a release for the NuGet packages?
It seemed the NuGet packages are still in preview

And when I use the preview package(`4.4.0-4.final`) to pack a stable package it reports a `NU5104` warning, should I ignore this warning or wait for the NuGet package release?
Thanks
| 1.0 | Question: nuget package release schedule - Hi team, is there a release schedule for the NuGet packages?
Since .NET 7 has been released, is there a release for the NuGet packages?
It seems the NuGet packages are still in preview

And when I use the preview package(`4.4.0-4.final`) to pack a stable package it reports a `NU5104` warning, should I ignore this warning or wait for the NuGet package release?
Thanks
| infrastructure | question nuget package release schedule hi team is there a release schedule for the nuget packages since net has been released is there a release for the nuget packages it seemed the nuget packages are still in preview and when i use the preview package final to pack a stable package it reports a warning should i ignore this warning or wait for the nuget package release thanks | 1 |
1,747 | 3,357,817,383 | IssuesEvent | 2015-11-19 04:40:22 | dotnet/roslyn | https://api.github.com/repos/dotnet/roslyn | closed | Please publish debug symbols for Release builds | Area-Infrastructure | Can you please publish PDBs for Roslyn assemblies in NuGet release & VS previews on a symbol server? (or even just as a download somewhere)
I'd be able to give much more detailed bug reports if I could load symbols and see what's actually going on inside a debugger. | 1.0 | Please publish debug symbols for Release builds - Can you please publish PDBs for Roslyn assemblies in NuGet release & VS previews on a symbol server? (or even just as a download somewhere)
I'd be able to give much more detailed bug reports if I could load symbols and see what's actually going on inside a debugger. | infrastructure | please publish debug symbols for release builds can you please publish pdbs for roslyn assemblies in nuget release vs previews on a symbol server or even just as a download somewhere i d be able to give much more detailed bug reports if i could load symbols and see what s actually going on inside a debugger | 1 |
48,352 | 12,195,735,792 | IssuesEvent | 2020-04-29 17:52:35 | kwk/test-llvm-bz-import-5 | https://api.github.com/repos/kwk/test-llvm-bz-import-5 | closed | cmake build enables exceptions in more files than configure | BZ-BUG-STATUS: RESOLVED BZ-RESOLUTION: FIXED Build scripts/cmake dummy import from bugzilla | This issue was imported from Bugzilla https://bugs.llvm.org/show_bug.cgi?id=9886. | 1.0 | cmake build enables exceptions in more files than configure - This issue was imported from Bugzilla https://bugs.llvm.org/show_bug.cgi?id=9886. | non_infrastructure | cmake build enables exceptions in more files than configure this issue was imported from bugzilla | 0 |
92,770 | 10,763,364,822 | IssuesEvent | 2019-11-01 03:39:25 | fkvn/Hiring_process | https://api.github.com/repos/fkvn/Hiring_process | opened | Finish README.md as soon as possible | documentation | We need to move on to Data Requirement and Data modeling | 1.0 | Finish README.md as soon as possible - We need to move on to Data Requirement and Data modeling | non_infrastructure | finish readme md as soon as possible we need to move on to data requirement and data modeling | 0 |
39,284 | 8,621,748,501 | IssuesEvent | 2018-11-20 18:13:46 | autoforce/APIcasso | https://api.github.com/repos/autoforce/APIcasso | closed | Fix "dangerous_send" issue in app/controllers/apicasso/crud_controller.rb | codeclimate enhancement security | User controlled method execution
https://codeclimate.com/github/autoforce/APIcasso/app/controllers/apicasso/crud_controller.rb#issue_5be5c81ae7e0a2000100003a | 1.0 | Fix "dangerous_send" issue in app/controllers/apicasso/crud_controller.rb - User controlled method execution
https://codeclimate.com/github/autoforce/APIcasso/app/controllers/apicasso/crud_controller.rb#issue_5be5c81ae7e0a2000100003a | non_infrastructure | fix dangerous send issue in app controllers apicasso crud controller rb user controlled method execution | 0 |
31,638 | 25,962,592,315 | IssuesEvent | 2022-12-19 01:55:03 | TNG-dev/Tachi | https://api.github.com/repos/TNG-dev/Tachi | closed | Sending logs to discord should be the concern of Seq | bug Infrastructure | Or another external service. Could we add discord as an outbound transport from seq, or generally refactor our logging framework?
As it stands, the discord transport we use is attached onto *every* process we run. When we try to boot everything back up at the same time, we get a horrific brownout, as we get 429'd by discord and killed.
I've hackily patched over this with [d790063](https://github.com/TNG-dev/Tachi/commit/d7900638ba2e8ec033a6fb0a4cf99bbc1ed2f0da). Could we do better?
@ereti | 1.0 | Sending logs to discord should be the concern of Seq - Or another external service. Could we add discord as an outbound transport from seq, or generally refactor our logging framework?
As it stands, the discord transport we use is attached onto *every* process we run. When we try to boot everything back up at the same time, we get a horrific brownout, as we get 429'd by discord and killed.
I've hackily patched over this with [d790063](https://github.com/TNG-dev/Tachi/commit/d7900638ba2e8ec033a6fb0a4cf99bbc1ed2f0da). Could we do better?
@ereti | infrastructure | sending logs to discord should be the concern of seq or another external service could we add discord as an outbound transport from seq or generally refactor our logging framework as it stands the discord transport we use is attached onto every process we run when we try to boot everything back up at the same time we get a horrific brownout as we get d by discord and killed i ve hackily patched over this with could we do better ereti | 1 |
5,152 | 26,252,517,782 | IssuesEvent | 2023-01-05 20:43:04 | chocolatey-community/chocolatey-package-requests | https://api.github.com/repos/chocolatey-community/chocolatey-package-requests | closed | RFM - plexmediaserver | Status: Available For Maintainer(s) | ## Current Maintainer
- [x] I am the maintainer of the package and wish to pass it to someone else;
## Checklist
- [x] Issue title starts with 'RFM - '
## Existing Package Details
Package URL: https://community.chocolatey.org/packages/plexmediaserver
Package source URL: https://github.com/mikecole/chocolatey-packages/tree/master/automatic/plexmediaserver
This is a working package with a functional AU script. I simply don't have the capacity to keep it updated as it is a popular package. Currently, there are a handful of requests to add the 64-bit version and I have not been able to field these requests. I will help as much as I can to transfer ownership. | True | RFM - plexmediaserver - ## Current Maintainer
- [x] I am the maintainer of the package and wish to pass it to someone else;
## Checklist
- [x] Issue title starts with 'RFM - '
## Existing Package Details
Package URL: https://community.chocolatey.org/packages/plexmediaserver
Package source URL: https://github.com/mikecole/chocolatey-packages/tree/master/automatic/plexmediaserver
This is a working package with a functional AU script. I simply don't have the capacity to keep it updated as it is a popular package. Currently, there are a handful of requests to add the 64-bit version and I have not been able to field these requests. I will help as much as I can to transfer ownership. | non_infrastructure | rfm plexmediaserver current maintainer i am the maintainer of the package and wish to pass it to someone else checklist issue title starts with rfm existing package details package url package source url this is a working package with a functional au script i simply don t have the capacity to keep it updated as it is a popular package currently there are a handful of requests to add the bit version and i have not been able to field these requests i will help as much as i can to transfer ownership | 0 |
21,783 | 14,856,523,221 | IssuesEvent | 2021-01-18 14:15:51 | airyhq/airy | https://api.github.com/repos/airyhq/airy | closed | Ingress host rules should be configurable | feature infrastructure | The current ingress rules have hardcoded values, such as `api.airy` and `chatplugin.airy`. We need to make them customizable, so that we can start core instance in AWS/GCP or other cloud environments.
Also the ingress rules can be deployed inside helm. | 1.0 | Ingress host rules should be configurable - The current ingress rules have hardcoded values, such as `api.airy` and `chatplugin.airy`. We need to make them customizable, so that we can start core instance in AWS/GCP or other cloud environments.
Also the ingress rules can be deployed inside helm. | infrastructure | ingress host rules should be configurable the current ingress rules have hardcoded values such as api airy and chatplugin airy we need to make them customizable so that we can start core instance in aws gcp or other cloud environments also the ingress rules can be deployed inside helm | 1 |
31,851 | 26,192,683,246 | IssuesEvent | 2023-01-03 10:31:40 | dotnet/runtime | https://api.github.com/repos/dotnet/runtime | closed | Disabling test on Linux Bionic is not effective with `PlatformSpecific` | area-Infrastructure-libraries test-bug | ### Description
`System.Formats.Tar` tests were failing on `runtime-staging`, so they got temporary disabled in https://github.com/dotnet/runtime/pull/72256. These, that got disabled with `ConditionalFact` did not get triggered, but these that got disabled with `PlatformSpecific`:
```
[PlatformSpecific(TestPlatforms.AnyUnix & ~TestPlatforms.tvOS & ~TestPlatforms.LinuxBionic)]
```
were still triggered and failing. Eventually they got disabled in a separate PR: https://github.com/dotnet/runtime/pull/72355 with `ConditionalFact`.
### Reproduction Steps
Rerun the PR:
/azp run runtime-staging
### Expected behavior
Both ways of disabling should work according to comment: https://github.com/dotnet/runtime/pull/72355#pullrequestreview-1041650673.
### Actual behavior
See `runtime-staging` job in https://github.com/dotnet/runtime/pull/72256/ - it has 6 failures, two of which are the tests that should have been disabled:

### Regression?
_No response_
### Known Workarounds
_No response_
### Configuration
_No response_
### Other information
_No response_ | 1.0 | Disabling test on Linux Bionic is not effective with `PlatformSpecific` - ### Description
`System.Formats.Tar` tests were failing on `runtime-staging`, so they got temporary disabled in https://github.com/dotnet/runtime/pull/72256. These, that got disabled with `ConditionalFact` did not get triggered, but these that got disabled with `PlatformSpecific`:
```
[PlatformSpecific(TestPlatforms.AnyUnix & ~TestPlatforms.tvOS & ~TestPlatforms.LinuxBionic)]
```
were still triggered and failing. Eventually they got disabled in a separate PR: https://github.com/dotnet/runtime/pull/72355 with `ConditionalFact`.
### Reproduction Steps
Rerun the PR:
/azp run runtime-staging
### Expected behavior
Both ways of disabling should work according to comment: https://github.com/dotnet/runtime/pull/72355#pullrequestreview-1041650673.
### Actual behavior
See `runtime-staging` job in https://github.com/dotnet/runtime/pull/72256/ - it has 6 failures, two of which are the tests that should have been disabled:

### Regression?
_No response_
### Known Workarounds
_No response_
### Configuration
_No response_
### Other information
_No response_ | infrastructure | disabling test on linux bionic is not effective with platformspecific description system formats tar tests were failing on runtime staging so they got temporary disabled in these that got disabled with conditionalfact did not get triggered but these that got disabled with platformspecific were still triggered and failing eventually they got disabled in a separate pr with conditionalfact reproduction steps rerun the pr azp run runtime staging expected behavior both ways of disabling should work according to comment actual behavior see runtime staging job in it has failures two of which are the tests that should have been disabled regression no response known workarounds no response configuration no response other information no response | 1 |
132,894 | 12,520,961,917 | IssuesEvent | 2020-06-03 16:42:29 | DataONEorg/bookkeeper | https://api.github.com/repos/DataONEorg/bookkeeper | opened | Document the initial bookkeeper release | documentation | Move documentation from https://github.com/csjx/d1-membership-plan-mgmt/blob/master/membership-plan-management.rst to the `docs` directory in this repo, and modify as needed for the `1.0.0` release. | 1.0 | Document the initial bookkeeper release - Move documentation from https://github.com/csjx/d1-membership-plan-mgmt/blob/master/membership-plan-management.rst to the `docs` directory in this repo, and modify as needed for the `1.0.0` release. | non_infrastructure | document the initial bookkeeper release move documentation from to the docs directory in this repo and modify as needed for the release | 0 |
16,966 | 12,152,293,411 | IssuesEvent | 2020-04-24 21:53:16 | InstituteforDiseaseModeling/covasim | https://api.github.com/repos/InstituteforDiseaseModeling/covasim | closed | Update DNS URL to covasim.idmod.org with HTTPS Support | highpriority infrastructure | Reproducing slack chat:
> @celiot-IDM
> HTTPS support has come up as an issue recently. For example, our DOD partners using locked-down browsers can't get to idmod.org. So let's plan to have https support from the start.
>
> @cliffckerr 17:23
> Would it be possible to have covasim.idmod.org? VOI stands for "value of information" and is for a completely different model and I just used that because it was the only DNS I had access to
>
> @John-Sheppard 17:31
> We could. I'll check with ITOPS on the availability. I thought that is what you wanted to use the whole time so I didn't pursue anything different. | 1.0 | Update DNS URL to covasim.idmod.org with HTTPS Support - Reproducing slack chat:
> @celiot-IDM
> HTTPS support has come up as an issue recently. For example, our DOD partners using locked-down browsers can't get to idmod.org. So let's plan to have https support from the start.
>
> @cliffckerr 17:23
> Would it be possible to have covasim.idmod.org? VOI stands for "value of information" and is for a completely different model and I just used that because it was the only DNS I had access to
>
> @John-Sheppard 17:31
> We could. I'll check with ITOPS on the availability. I thought that is what you wanted to use the whole time so I didn't pursue anything different. | infrastructure | update dns url to covasim idmod org with https support reproducing slack chat celiot idm https support has come up as an issue recently for example our dod partners using locked down browsers can t get to idmod org so let s plan to have https support from the start cliffckerr would it be possible to have covasim idmod org voi stands for value of information and is for a completely different model and i just used that because it was the only dns i had access to john sheppard we could i ll check with itops on the availability i thought that is what you wanted to use the whole time so i didn t pursue anything different | 1 |
34,004 | 28,084,272,859 | IssuesEvent | 2023-03-30 08:44:29 | Altinn/altinn-platform | https://api.github.com/repos/Altinn/altinn-platform | opened | Deploy Events Function app NAT Gateway to PROD | area/infrastructure | Need to add support for NAT Gateway for Events function app.
NAT Gateway will give the function app a single static IP for external communication.
## Tasks
- [ ] 19.04.2023 12:00, perform the change in PROD
- [ ] Investigate poison queue for events we're unable to send
- [ ] Follow up support questions from App owners related to missing events | 1.0 | Deploy Events Function app NAT Gateway to PROD - Need to add support for NAT Gateway for Events function app.
NAT Gateway will give the function app a single static IP for external communication.
## Tasks
- [ ] 19.04.2023 12:00, perform the change in PROD
- [ ] Investigate poison queue for events we're unable to send
- [ ] Follow up support questions from App owners related to missing events | infrastructure | deploy events function app nat gateway to prod need to add support for nat gateway for events function app nat gateway will give the function app a single static ip for external communication tasks perform the change in prod investigate poison queue for events we re unable to send follow up support questions from app owners related to missing events | 1 |
32,454 | 26,709,242,494 | IssuesEvent | 2023-01-27 21:28:16 | dotnet/aspnetcore | https://api.github.com/repos/dotnet/aspnetcore | opened | Inconsistent translation of template strings | investigate area-infrastructure feature-localization feature-templates | ### Is there an existing issue for this?
- [X] I have searched the existing issues
### Describe the bug
As described in [Bug 1719748, [DevExE2E] [Loc] 'ASP.NET Core Web App' and 'ASP.NET Core Razor Pages' aren't localized.](https://devdiv.visualstudio.com/DevDiv/_workitems/edit/1719748>), some strings in the German `RazorPagesWeb-CSharp` template are shown in English. The problem is actually inconsistent use and capitalization of "Web App", "Razor Pages", and "razor pages" in our English sources and inconsistent translations from the loc team.
### Expected Behavior
These terms should be consistently capitalized and translated (or not). I asked the "translated (or not)" question in [LOC QUESTIONS 771866, [SW] How to handle translation of "Razor Pages" and "Web App"?](https://dev.azure.com/ceapex/CEINTL/_workitems/edit/771866/).
### Steps To Reproduce
1. Prepare DE OS, install VS language packs and apply.
2. Install ASP.NET and web development workload
3. Open VS -> Create a new project -> Search 'asp'
## Note:
1.It isn't a regression issue, also repro on Version 17.4.4.
2.Repro VM: 172.16.195.94.
## Expected result:
All Strings should be localized.
## Actual result:
'ASP.NET Core Web App' and 'ASP.NET Core Razor Pages' aren't localized.

### Exceptions (if any)
n/a
### .NET Version
8.0.0-alpha1
### Anything else?
n/a | 1.0 | Inconsistent translation of template strings - ### Is there an existing issue for this?
- [X] I have searched the existing issues
### Describe the bug
As described in [Bug 1719748, [DevExE2E] [Loc] 'ASP.NET Core Web App' and 'ASP.NET Core Razor Pages' aren't localized.](https://devdiv.visualstudio.com/DevDiv/_workitems/edit/1719748>), some strings in the German `RazorPagesWeb-CSharp` template are shown in English. The problem is actually inconsistent use and capitalization of "Web App", "Razor Pages", and "razor pages" in our English sources and inconsistent translations from the loc team.
### Expected Behavior
These terms should be consistently capitalized and translated (or not). I asked the "translated (or not)" question in [LOC QUESTIONS 771866, [SW] How to handle translation of "Razor Pages" and "Web App"?](https://dev.azure.com/ceapex/CEINTL/_workitems/edit/771866/).
### Steps To Reproduce
1. Prepare DE OS, install VS language packs and apply.
2. Install ASP.NET and web development workload
3. Open VS -> Create a new project -> Search 'asp'
## Note:
1.It isn't a regression issue, also repro on Version 17.4.4.
2.Repro VM: 172.16.195.94.
## Expected result:
All Strings should be localized.
## Actual result:
'ASP.NET Core Web App' and 'ASP.NET Core Razor Pages' aren't localized.

### Exceptions (if any)
n/a
### .NET Version
8.0.0-alpha1
### Anything else?
n/a | infrastructure | inconsistent translation of template strings is there an existing issue for this i have searched the existing issues describe the bug as described in asp net core web app and asp net core razor pages aren t localized some strings in the german razorpagesweb csharp template are shown in english the problem is actually inconsistent use and capitalization of web app razor pages and razor pages in our english sources and inconsistent translations from the loc team expected behavior these terms should be consistently capitalized and translated or not i asked the translated or not question in how to handle translation of razor pages and web app steps to reproduce prepare de os install vs language packs and apply install asp net and web development workload open vs create a new project search asp note it isn t a regression issue also repro on version repro vm expected result all strings should be localized actual result asp net core web app and asp net core razor pages aren t localized exceptions if any n a net version anything else n a | 1 |
482,517 | 13,908,516,101 | IssuesEvent | 2020-10-20 13:54:08 | svthalia/concrexit | https://api.github.com/repos/svthalia/concrexit | opened | Use Django permission for event permissions instead of ChoiceField | priority: low | ### Describe the change
Currently, the `Profile` model contains a field that states whether `Member`s are allowed to attend borrels and/or events. This information, however, is only used/relevant for the `events` app. Therefore it would be nicer to separate logic, just as we did with `PaymentUser`s in `payments`:
- create an `EventUser` proxy model of `Member`
- add custom Django permissions `can_attend_borrels` and `can_attend_non-borrel_events`
- create a custom `EventUserAdmin` with actions to manage this (like in `payments`)
Basically just redo #1320 / #1277 on `payments`
### Motivation
Separation of duties. The `Member` model shouldn't contain information that only has a meaning in `Events`
| 1.0 | Use Django permission for event permissions instead of ChoiceField - ### Describe the change
Currently, the `Profile` model contains a field that states whether `Member`s are allowed to attend borrels and/or events. This information, however, is only used/relevant for the `events` app. Therefore it would be nicer to separate logic, just as we did with `PaymentUser`s in `payments`:
- create an `EventUser` proxy model of `Member`
- add custom Django permissions `can_attend_borrels` and `can_attend_non-borrel_events`
- create a custom `EventUserAdmin` with actions to manage this (like in `payments`)
Basically just redo #1320 / #1277 on `payments`
### Motivation
Separation of duties. The `Member` model shouldn't contain information that only has a meaning in `Events`
| non_infrastructure | use django permission for event permissions instead of choicefield describe the change currently the profile model contains a field that states whether member s are allowed to attend borrels and or events this information however is only used relevant for the events app therefore it would be nicer to separate logic just as we did with paymentuser s in payments create an eventuser proxy model of member add custom django permissions can attend borrels and can attend non borrel events create a custom eventuseradmin with actions to manage this like in payments basically just redo on payments motivation separation of duties the member model shouldn t contain information that only has a meaning in events | 0 |
27,327 | 21,628,289,752 | IssuesEvent | 2022-05-05 06:49:49 | spdk/spdk | https://api.github.com/repos/spdk/spdk | opened | [VM-host-CH2] Fetch failure: premature end of Content-Length delimited message body | Infrastructure Intermittent Failure | # CI Intermittent Failure
Looks like a failure during fetching sources.
```
00:00:12.827 Found a total of 1 nodes with the 'sorcerer' label
00:00:12.838 [Pipeline] httpRequest
00:00:13.037 HttpMethod: GET
00:00:13.038 URL: http://spdk-GP-10.igk.intel.com/spdk_af4214d1bf79895b9996eb1aabb02e5eac1d8aa9.tar.gz
00:00:13.039 Sending request to url: http://spdk-GP-10.igk.intel.com/spdk_af4214d1bf79895b9996eb1aabb02e5eac1d8aa9.tar.gz
00:00:13.442 Response Code: HTTP/1.1 200 OK
00:00:13.442 Success: Status code 200 is in the accepted range: 200,404
00:00:13.443 Saving response body to /var/jenkins/workspace/scanbuild-vg-autotest/spdk_af4214d1bf79895b9996eb1aabb02e5eac1d8aa9.tar.gz
00:01:29.154 [Pipeline] }
00:01:29.177 [Pipeline] // stage
00:01:29.186 [Pipeline] }
00:01:29.204 [Pipeline] // node
00:01:29.222 [Pipeline] End of Pipeline
00:01:29.237 org.apache.http.ConnectionClosedException: Premature end of Content-Length delimited message body (expected: 201,771,937; received: 56,051,464)
00:01:29.237 at org.apache.http.impl.io.ContentLengthInputStream.read(ContentLengthInputStream.java:178)
00:01:29.237 at org.apache.http.conn.EofSensorInputStream.read(EofSensorInputStream.java:135)
00:01:29.237 at org.apache.http.conn.EofSensorInputStream.read(EofSensorInputStream.java:148)
00:01:29.237 at org.apache.commons.io.IOUtils.copyLarge(IOUtils.java:1309)
00:01:29.237 at org.apache.commons.io.IOUtils.copy(IOUtils.java:978)
00:01:29.237 at org.apache.commons.io.IOUtils.copyLarge(IOUtils.java:1282)
00:01:29.237 at org.apache.commons.io.IOUtils.copy(IOUtils.java:953)
00:01:29.237 at jenkins.plugins.http_request.HttpRequestExecution.processResponse(HttpRequestExecution.java:484)
00:01:29.237 at jenkins.plugins.http_request.HttpRequestExecution.authAndRequest(HttpRequestExecution.java:367)
00:01:29.237 at jenkins.plugins.http_request.HttpRequestExecution.call(HttpRequestExecution.java:271)
```
## Link to the failed CI build
https://ci.spdk.io/results/autotest-per-patch/builds/76472/archive/scanbuild-vg-autotest/build.log
https://ci.spdk.io/public_build/autotest-per-patch_76472.html
| 1.0 | [VM-host-CH2] Fetch failure: premature end of Content-Length delimited message body - # CI Intermittent Failure
Looks like a failure during fetching sources.
```
00:00:12.827 Found a total of 1 nodes with the 'sorcerer' label
00:00:12.838 [Pipeline] httpRequest
00:00:13.037 HttpMethod: GET
00:00:13.038 URL: http://spdk-GP-10.igk.intel.com/spdk_af4214d1bf79895b9996eb1aabb02e5eac1d8aa9.tar.gz
00:00:13.039 Sending request to url: http://spdk-GP-10.igk.intel.com/spdk_af4214d1bf79895b9996eb1aabb02e5eac1d8aa9.tar.gz
00:00:13.442 Response Code: HTTP/1.1 200 OK
00:00:13.442 Success: Status code 200 is in the accepted range: 200,404
00:00:13.443 Saving response body to /var/jenkins/workspace/scanbuild-vg-autotest/spdk_af4214d1bf79895b9996eb1aabb02e5eac1d8aa9.tar.gz
00:01:29.154 [Pipeline] }
00:01:29.177 [Pipeline] // stage
00:01:29.186 [Pipeline] }
00:01:29.204 [Pipeline] // node
00:01:29.222 [Pipeline] End of Pipeline
00:01:29.237 org.apache.http.ConnectionClosedException: Premature end of Content-Length delimited message body (expected: 201,771,937; received: 56,051,464)
00:01:29.237 at org.apache.http.impl.io.ContentLengthInputStream.read(ContentLengthInputStream.java:178)
00:01:29.237 at org.apache.http.conn.EofSensorInputStream.read(EofSensorInputStream.java:135)
00:01:29.237 at org.apache.http.conn.EofSensorInputStream.read(EofSensorInputStream.java:148)
00:01:29.237 at org.apache.commons.io.IOUtils.copyLarge(IOUtils.java:1309)
00:01:29.237 at org.apache.commons.io.IOUtils.copy(IOUtils.java:978)
00:01:29.237 at org.apache.commons.io.IOUtils.copyLarge(IOUtils.java:1282)
00:01:29.237 at org.apache.commons.io.IOUtils.copy(IOUtils.java:953)
00:01:29.237 at jenkins.plugins.http_request.HttpRequestExecution.processResponse(HttpRequestExecution.java:484)
00:01:29.237 at jenkins.plugins.http_request.HttpRequestExecution.authAndRequest(HttpRequestExecution.java:367)
00:01:29.237 at jenkins.plugins.http_request.HttpRequestExecution.call(HttpRequestExecution.java:271)
```
## Link to the failed CI build
https://ci.spdk.io/results/autotest-per-patch/builds/76472/archive/scanbuild-vg-autotest/build.log
https://ci.spdk.io/public_build/autotest-per-patch_76472.html
| infrastructure | fetch failure premature end of content length delimited message body ci intermittent failure looks like a failure during fetching sources found a total of nodes with the sorcerer label httprequest httpmethod get url sending request to url response code http ok success status code is in the accepted range saving response body to var jenkins workspace scanbuild vg autotest spdk tar gz stage node end of pipeline org apache http connectionclosedexception premature end of content length delimited message body expected received at org apache http impl io contentlengthinputstream read contentlengthinputstream java at org apache http conn eofsensorinputstream read eofsensorinputstream java at org apache http conn eofsensorinputstream read eofsensorinputstream java at org apache commons io ioutils copylarge ioutils java at org apache commons io ioutils copy ioutils java at org apache commons io ioutils copylarge ioutils java at org apache commons io ioutils copy ioutils java at jenkins plugins http request httprequestexecution processresponse httprequestexecution java at jenkins plugins http request httprequestexecution authandrequest httprequestexecution java at jenkins plugins http request httprequestexecution call httprequestexecution java link to the failed ci build | 1 |
21,978 | 14,948,498,697 | IssuesEvent | 2021-01-26 10:09:48 | RasaHQ/rasa | https://api.github.com/repos/RasaHQ/rasa | opened | Setup integration tests for Slack Channel | area:rasa-oss :ferris_wheel: area:rasa-oss/infrastructure :bullettrain_front: type:maintenance :wrench: | **Description of Problem**:
We should add integration tests for our `Facebook` channel which test the entire roundtrip of sending a message, processing it via Rasa Open Source and receiving the bot's response.
**Overview of the Solution**:
TBD
**Blockers**
This task is dependent on https://github.com/RasaHQ/rasa/issues/7804.
**Definition of Done**:
- [ ] Tests are added
| 1.0 | Setup integration tests for Slack Channel - **Description of Problem**:
We should add integration tests for our `Facebook` channel which test the entire roundtrip of sending a message, processing it via Rasa Open Source and receiving the bot's response.
**Overview of the Solution**:
TBD
**Blockers**
This task is dependent on https://github.com/RasaHQ/rasa/issues/7804.
**Definition of Done**:
- [ ] Tests are added
| infrastructure | setup integration tests for slack channel description of problem we should add integration tests for our facebook channel which test the entire roundtrip of sending a message processing it via rasa open source and receiving the bot s response overview of the solution tbd blockers this task is dependent on definition of done tests are added | 1 |
501 | 2,756,402,181 | IssuesEvent | 2015-04-27 08:07:45 | swl10/pyslet | https://api.github.com/repos/swl10/pyslet | closed | Stop using the host header in WSGI framework | security | The WSGI specification says: "Note, however, that HTTP_HOST , if present, should be used in preference to SERVER_NAME for reconstructing the request URL."
This would be good advice if middleware had validated the host header for your before placing it in the environ but this is not generally the case leaving us open to attacks based on spoofed host headers. Currently the wsgi module uses the URL resolution method described in the spec but this needs to be changed to use the SERVER_NAME or to select a host name from a configured list based on the HTTP_HOST header value. | True | Stop using the host header in WSGI framework - The WSGI specification says: "Note, however, that HTTP_HOST , if present, should be used in preference to SERVER_NAME for reconstructing the request URL."
This would be good advice if middleware had validated the host header for your before placing it in the environ but this is not generally the case leaving us open to attacks based on spoofed host headers. Currently the wsgi module uses the URL resolution method described in the spec but this needs to be changed to use the SERVER_NAME or to select a host name from a configured list based on the HTTP_HOST header value. | non_infrastructure | stop using the host header in wsgi framework the wsgi specification says note however that http host if present should be used in preference to server name for reconstructing the request url this would be good advice if middleware had validated the host header for your before placing it in the environ but this is not generally the case leaving us open to attacks based on spoofed host headers currently the wsgi module uses the url resolution method described in the spec but this needs to be changed to use the server name or to select a host name from a configured list based on the http host header value | 0 |
23,290 | 16,038,365,871 | IssuesEvent | 2021-04-22 02:49:30 | t3kt/raytk | https://api.github.com/repos/t3kt/raytk | closed | Keep track of versioning on infrastructure components | development infrastructure wontfix | It would be good to be able to detect and fix issues with breaking infrastructure changes | 1.0 | Keep track of versioning on infrastructure components - It would be good to be able to detect and fix issues with breaking infrastructure changes | infrastructure | keep track of versioning on infrastructure components it would be good to be able to detect and fix issues with breaking infrastructure changes | 1 |
28,211 | 23,091,131,053 | IssuesEvent | 2022-07-26 15:18:05 | arduino/arduino-ide | https://api.github.com/repos/arduino/arduino-ide | closed | Check for changes to formatter output resulting from clangd bump | type: enhancement topic: infrastructure topic: language server | ## Describe the current behavior
The Arduino IDE's **Tools > Auto Format** functionality is provided by [ClangFormat](https://clang.llvm.org/docs/ClangFormat.html).
Arduino IDE contains an embedded ClangFormat configuration that defines the standard Arduino code formatting style, which is used by default when the user formats their code via **Tools > Auto Format**.:
https://github.com/arduino/arduino-ide/blob/main/arduino-ide-extension/src/node/clang-formatter.ts
This configuration was developed using [ClangFormat 11.0.1](https://releases.llvm.org/11.0.1/tools/clang/docs/ClangFormatStyleOptions.html), but will be used with whatever version of clangd is installed with the Arduino IDE:
https://github.com/arduino/arduino-ide/blob/main/arduino-ide-extension/package.json#L165
We received valued advice from someone with experience using ClangFormat on Arduino code:
https://github.com/arduino/Arduino/pull/11543#issuecomment-850080301
> we use clang-format-8 and believe me we have learned they are not all the same so you really have to pick one and stick to it. that said, if y'all want to pick one clang format version i'm happy to change our CI to match
ClangFormat has a strict approach to formatting. While the formatting style is very configurable, often it is not possible to configure it to leave the code as-is. This means that newly introduced configurations are likely to have a default setting that imposes formatting of some form, with no guarantees that it will align with the official Arduino code style.
## To reproduce
1. Start the Arduino IDE.
1. Open a sketch that does not [contain a `.clang-format` file](https://github.com/arduino/arduino-ide/issues/42#issuecomment-954682764), on a machine that does not have [a custom global `.clang-format` file](https://github.com/arduino/arduino-ide/pull/1019).
1. Select **Tools > Auto Format** from the Arduino IDE menus.
The sketch file currently open in the editor will be formatted according to the ClangFormat configuration embedded in Arduino IDE.
## Describe the request
Set up a formal system to check for formatter output changes at every update to the Arduino IDE 2.x clangd dependency:
https://github.com/arduino/arduino-ide/blob/5499c255283ed4d125b6f9bb8f1b64e746df3b7d/arduino-ide-extension/package.json#L164-L166
My proposal is that we produce a file containing test data code that will exercise the significant C++ formatting capabilities of ClangFormat then check for a diff after formatting that code with the new version. If there is no diff, then we have a reasonable certainty that the bump will not necessitate any adjustments to the ClangFormat configuration.
Ideally this would be set up to run automatically as part of the CI/CD system of the appropriate repository. Since the clangd version in use is currently defined in this repository, it seems to be the best place.
## Additional context
Related:
- https://github.com/arduino/arduino-ide/issues/42 | 1.0 | Check for changes to formatter output resulting from clangd bump - ## Describe the current behavior
The Arduino IDE's **Tools > Auto Format** functionality is provided by [ClangFormat](https://clang.llvm.org/docs/ClangFormat.html).
Arduino IDE contains an embedded ClangFormat configuration that defines the standard Arduino code formatting style, which is used by default when the user formats their code via **Tools > Auto Format**.:
https://github.com/arduino/arduino-ide/blob/main/arduino-ide-extension/src/node/clang-formatter.ts
This configuration was developed using [ClangFormat 11.0.1](https://releases.llvm.org/11.0.1/tools/clang/docs/ClangFormatStyleOptions.html), but will be used with whatever version of clangd is installed with the Arduino IDE:
https://github.com/arduino/arduino-ide/blob/main/arduino-ide-extension/package.json#L165
We received valued advice from someone with experience using ClangFormat on Arduino code:
https://github.com/arduino/Arduino/pull/11543#issuecomment-850080301
> we use clang-format-8 and believe me we have learned they are not all the same so you really have to pick one and stick to it. that said, if y'all want to pick one clang format version i'm happy to change our CI to match
ClangFormat has a strict approach to formatting. While the formatting style is very configurable, often it is not possible to configure it to leave the code as-is. This means that newly introduced configurations are likely to have a default setting that imposes formatting of some form, with no guarantees that it will align with the official Arduino code style.
## To reproduce
1. Start the Arduino IDE.
1. Open a sketch that does not [contain a `.clang-format` file](https://github.com/arduino/arduino-ide/issues/42#issuecomment-954682764), on a machine that does not have [a custom global `.clang-format` file](https://github.com/arduino/arduino-ide/pull/1019).
1. Select **Tools > Auto Format** from the Arduino IDE menus.
The sketch file currently open in the editor will be formatted according to the ClangFormat configuration embedded in Arduino IDE.
## Describe the request
Set up a formal system to check for formatter output changes at every update to the Arduino IDE 2.x clangd dependency:
https://github.com/arduino/arduino-ide/blob/5499c255283ed4d125b6f9bb8f1b64e746df3b7d/arduino-ide-extension/package.json#L164-L166
My proposal is that we produce a file containing test data code that will exercise the significant C++ formatting capabilities of ClangFormat then check for a diff after formatting that code with the new version. If there is no diff, then we have a reasonable certainty that the bump will not necessitate any adjustments to the ClangFormat configuration.
Ideally this would be set up to run automatically as part of the CI/CD system of the appropriate repository. Since the clangd version in use is currently defined in this repository, it seems to be the best place.
## Additional context
Related:
- https://github.com/arduino/arduino-ide/issues/42 | infrastructure | check for changes to formatter output resulting from clangd bump describe the current behavior the arduino ide s tools auto format functionality is provided by arduino ide contains an embedded clangformat configuration that defines the standard arduino code formatting style which is used by default when the user formats their code via tools auto format this configuration was developed using but will be used with whatever version of clangd is installed with the arduino ide we received valued advice from someone with experience using clangformat on arduino code we use clang format and believe me we have learned they are not all the same so you really have to pick one and stick to it that said if y all want to pick one clang format version i m happy to change our ci to match clangformat has a strict approach to formatting while the formatting style is very configurable often it is not possible to configure it to leave the code as is this means that newly introduced configurations are likely to have a default setting that imposes formatting of some form with no guarantees that it will align with the official arduino code style to reproduce start the arduino ide open a sketch that does not on a machine that does not have select tools auto format from the arduino ide menus the sketch file currently open in the editor will be formatted according to the clangformat configuration embedded in arduino ide describe the request set up a formal system to check for formatter output changes at every update to the arduino ide x clangd dependency my proposal is that we produce a file containing test data code that will exercise the significant c formatting capabilities of clangformat then check for a diff after formatting that code with the new version if there is no diff then we have a reasonable certainty that the bump will not necessitate any adjustments to the clangformat configuration ideally this would be 
set up to run automatically as part of the ci cd system of the appropriate repository since the clangd version in use is currently defined in this repository it seems to be the best place additional context related | 1 |
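The diff-check proposal in the clangd-bump record above can be sketched in a few lines. This is a hypothetical illustration, not a workflow the Arduino project actually adopted: the helper names and the `--style=file` wiring are assumptions, and the demo below exercises only the pure diff helper with in-memory strings so it runs without clang-format installed.

```python
import difflib
import subprocess

def format_with_clang(source: str, binary: str = "clang-format") -> str:
    """Format `source` with a clang-format binary (hypothetical wiring;
    assumes a .clang-format file is discoverable from the working directory)."""
    result = subprocess.run(
        [binary, "--style=file"],
        input=source, capture_output=True, text=True, check=True,
    )
    return result.stdout

def formatting_diff(reference: str, candidate: str) -> list:
    """Unified diff between the old clangd's output (reference) and the new
    clangd's output (candidate); an empty list means the bump looks safe."""
    return list(difflib.unified_diff(
        reference.splitlines(), candidate.splitlines(),
        fromfile="reference", tofile="candidate", lineterm="",
    ))

# Demo with in-memory strings standing in for the two formatter outputs.
old_output = "void setup() {\n  pinMode(13, OUTPUT);\n}\n"
new_output = "void setup() {\n    pinMode(13, OUTPUT);\n}\n"
print(formatting_diff(old_output, old_output))  # [] -> no config changes needed
print("\n".join(formatting_diff(old_output, new_output)))  # shows the drift
```

In a CI job, `format_with_clang` would be run over the committed test-data file with the newly bumped binary, and a non-empty `formatting_diff` against the checked-in reference output would fail the build.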
33,151 | 27,263,688,255 | IssuesEvent | 2023-02-22 16:29:34 | GIScience/openrouteservice | https://api.github.com/repos/GIScience/openrouteservice | closed | docker volume folder connection not working properly | bug :bug: infrastructure | See forum post:
https://ask.openrouteservice.org/t/problems-with-local-installation-of-ors/4558
Probably introduced with https://github.com/GIScience/openrouteservice/pull/1272
#### Here's what I did
`docker compose up`
#### Here's what I got
no new files were created inside the folders `conf`, `elevation_cache`, `graphs`, and `logs`.
---
#### Here's what I was expecting
the docker compose workflow to work
---
#### Here's what I think could be improved
fixing the workflow | 1.0 | docker volume folder connection not working properly - See forum post:
https://ask.openrouteservice.org/t/problems-with-local-installation-of-ors/4558
Probably introduced with https://github.com/GIScience/openrouteservice/pull/1272
#### Here's what I did
`docker compose up`
#### Here's what I got
no new files were created inside the folders `conf`, `elevation_cache`, `graphs`, and `logs`.
---
#### Here's what I was expecting
the docker compose workflow to work
---
#### Here's what I think could be improved
fixing the workflow | infrastructure | docker volume folder connection not working properly see forum post probably introduced with here s what i did docker compose up here s what i got no new files were created inside the folders conf elevation cahche graphs and logs here s what i was expecting the docker compose workflow to work here s what i think could be improved fixing the workflow | 1 |
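The check the reporter did by hand — "were the bind-mounted folders populated after `docker compose up`?" — can be automated. This is a hedged sketch, not part of openrouteservice: the folder names follow the issue (normalizing the apparent `elevation_cahche` typo to `elevation_cache`), and the demo uses a throwaway directory standing in for a real compose project.

```python
import os
import tempfile

EXPECTED_DIRS = ("conf", "elevation_cache", "graphs", "logs")

def empty_volume_dirs(base: str, names=EXPECTED_DIRS) -> list:
    """Return the folder names under `base` that are missing or still empty,
    i.e. the ones the container apparently never wrote into."""
    empty = []
    for name in names:
        path = os.path.join(base, name)
        if not os.path.isdir(path) or not os.listdir(path):
            empty.append(name)
    return empty

# Demo against a throwaway layout standing in for the compose project dir:
# create all four folders, but populate only "graphs".
project = tempfile.mkdtemp()
for name in EXPECTED_DIRS:
    os.makedirs(os.path.join(project, name))
with open(os.path.join(project, "graphs", "placeholder.ghz"), "w") as fh:
    fh.write("stub")
print(empty_volume_dirs(project))  # every folder except "graphs" is still empty
```

A script like this run after `docker compose up` would have turned the reporter's manual inspection into a reproducible pass/fail check.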
272,505 | 8,514,249,203 | IssuesEvent | 2018-10-31 18:02:38 | unfoldingWord-dev/translationCore | https://api.github.com/repos/unfoldingWord-dev/translationCore | closed | App allows users to import usfm3 projects from Door43 but indicates that all verses are missing | Epic Priority/High QA/Pass | 0.10.0 (6f1cb6a)
https://git.door43.org/tCore-test-data/AlignedUlb_hi
The project is imported but the book name is not picked up and all verses are reported as empty.
Need to either warn and block the user from importing usfm3 from Door43, or fully implement #4552.
Block them from importing it. | 1.0 | App allows users to import usfm3 projects from Door43 but indicates that all verses are missing - 0.10.0 (6f1cb6a)
https://git.door43.org/tCore-test-data/AlignedUlb_hi
The project is imported but the book name is not picked up and all verses are reported as empty.
Need to either warn and block the user from importing usfm3 from Door43, or fully implement #4552.
Block them from importing it. | non_infrastructure | app allows users to import projects from but indicate that all verses are missing the project is imported but the book name is not picked up and all verses are reported as empty need to either warn and block the user from importing from or fully implement block them from importing it | 0 |
250,087 | 21,259,255,285 | IssuesEvent | 2022-04-13 01:02:18 | RamiMustafa/WAF_Sec_Test | https://api.github.com/repos/RamiMustafa/WAF_Sec_Test | opened | Establish a designated point of contact to receive Azure incident notifications from Microsoft | WARP-Import WAF_Sec_Test Security Security & Compliance Separation of duties | [Establish a designated point of contact to receive Azure incident notifications from Microsoft](https://docs.microsoft.com/azure/defender-for-cloud/configure-email-notifications)
**Why Consider This?**
Security alerts need to reach the right people in your organization. It is important to ensure a designated security contact receives Azure incident notifications, or alerts from Microsoft Defender for Cloud - for example, a notification that a resource is compromised and/or attacking another customer.
**Context**
**Suggested Actions**
Establish a designated point of contact to receive Azure incident notifications from Microsoft. Ensure that administrator contact information in the Azure enrollment portal (Enterprise/EA portal) includes contact information which will notify security operations directly (or rapidly via an internal process).
**Learn More**
[Configure email notifications for security alerts](https://docs.microsoft.com/en-us/azure/defender-for-cloud/configure-email-notifications)
[Update notification settings](https://docs.microsoft.com/en-us/azure/cost-management-billing/manage/ea-portal-administration#update-notification-settings) | 1.0 | Establish a designated point of contact to receive Azure incident notifications from Microsoft - [Establish a designated point of contact to receive Azure incident notifications from Microsoft](https://docs.microsoft.com/azure/defender-for-cloud/configure-email-notifications)
**Why Consider This?**
Security alerts need to reach the right people in your organization. It is important to ensure a designated security contact receives Azure incident notifications, or alerts from Microsoft Defender for Cloud - for example, a notification that a resource is compromised and/or attacking another customer.
**Context**
**Suggested Actions**
Establish a designated point of contact to receive Azure incident notifications from Microsoft. Ensure that administrator contact information in the Azure enrollment portal (Enterprise/EA portal) includes contact information which will notify security operations directly (or rapidly via an internal process).
**Learn More**
[Configure email notifications for security alerts](https://docs.microsoft.com/en-us/azure/defender-for-cloud/configure-email-notifications)
[Update notification settings](https://docs.microsoft.com/en-us/azure/cost-management-billing/manage/ea-portal-administration#update-notification-settings) | non_infrastructure | establish a designated point of contact to receive azure incident notifications from microsoft why consider this security alerts need to reach the right people in your organization it is important to ensure a designated security contact receives azure incident notifications or alerts from microsoft defender for cloud for example a notification that a resource is compromised and or attacking another customer context suggested actions establish a designated point of contact to receive azure incident notifications from microsoft ensure that administrator contact information in the azure enrollment portal enterprise ea portal includes contact information which will notify security operations directly or rapidly via an internal process learn more configure email notifications for security alerts update notification settings | 0
773,182 | 27,148,756,940 | IssuesEvent | 2023-02-16 22:28:34 | wesnoth/wesnoth | https://api.github.com/repos/wesnoth/wesnoth | closed | wmlxgettext: add option to set style of paths placed in comments | Enhancement Translations WML Tools Low Priority | ### Describe the desired feature
If two developers are collaborating on an add-on, and one is on a system with Windows-style paths, and the other is on a system with Unix-style paths, and they both want to regenerate the potfile with `wmlxgettext` occasionally, each regeneration will introduce a huge amount of noise due to all the paths in comments in the potfile changing style. It would be nice to have an option to choose just 1 style to avoid this.
(prompted by inferno8/wesnoth-To_Lands_Unknown#11) | 1.0 | wmlxgettext: add option to set style of paths placed in comments - ### Describe the desired feature
If two developers are collaborating on an add-on, and one is on a system with Windows-style paths, and the other is on a system with Unix-style paths, and they both want to regenerate the potfile with `wmlxgettext` occasionally, each regeneration will introduce a huge amount of noise due to all the paths in comments in the potfile changing style. It would be nice to have an option to choose just 1 style to avoid this.
(prompted by inferno8/wesnoth-To_Lands_Unknown#11) | non_infrastructure | wmlxgettext add option to set style of paths placed in comments describe the desired feature if two developers are collaborating on an add on and one is on a system with windows style paths and the other is on a system with unix style paths and they both want to regenerate the potfile with wmlxgettext occasionally each regeneration will introduce a huge amount of noise due to all the paths in comments in the potfile changing style it would be nice to have an option to choose just style to avoid this prompted by wesnoth to lands unknown | 0 |
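The fix the wesnoth record asks for boils down to rendering every source reference in one fixed separator style before it is written into the potfile comments. A minimal sketch of that normalization (the function name and the `posix` default are assumptions, not wmlxgettext's actual API):

```python
from pathlib import PureWindowsPath

def normalize_ref(path: str, style: str = "posix") -> str:
    """Render a source-file reference in one fixed separator style.

    PureWindowsPath is used purely for parsing, because it accepts both
    "/" and "\\" as input separators regardless of the host OS.
    """
    parts = PureWindowsPath(path).parts
    sep = "/" if style == "posix" else "\\"
    return sep.join(parts)

# The same reference comes out identically from both developers' machines,
# so regenerating the potfile no longer produces separator-only churn.
print(normalize_ref("data\\scenarios\\01_intro.cfg"))           # data/scenarios/01_intro.cfg
print(normalize_ref("data/scenarios/01_intro.cfg"))             # data/scenarios/01_intro.cfg
print(normalize_ref("data/scenarios/01_intro.cfg", "windows"))  # data\scenarios\01_intro.cfg
```

Defaulting to POSIX separators is the usual choice here, since gettext tooling and diffs on forges render them consistently.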
26,310 | 19,984,842,410 | IssuesEvent | 2022-01-30 13:54:57 | yt-project/yt | https://api.github.com/repos/yt-project/yt | closed | Reduce size of pep8speaks config file | new contributor friendly infrastructure | After https://github.com/OrkoHunter/pep8speaks/pull/106 has been merged, it looks like we can reduce the config file presence -- and reduce duplication -- for pep8speaks.
I believe it would be sufficient to remove our .pep8speaks.yml file, but we should investigate if we can remove the "ignore" and "exclude" sections and leave the bits where we define how the bot should talk. | 1.0 | Reduce size of pep8speaks config file - After https://github.com/OrkoHunter/pep8speaks/pull/106 has been merged, it looks like we can reduce the config file presence -- and reduce duplication -- for pep8speaks.
I believe it would be sufficient to remove our .pep8speaks.yml file, but we should investigate if we can remove the "ignore" and "exclude" sections and leave the bits where we define how the bot should talk. | infrastructure | reduce size of config file after has been merged it looks like we can reduce the config file presence and reduce duplication for i believe it would be sufficient to remove our yml file but we should investigate if we can remove the ignore and exclude sections and leave the bits where we define how the bot should talk | 1 |
2,069 | 3,491,947,736 | IssuesEvent | 2016-01-04 17:58:33 | codeforamerica/communities | https://api.github.com/repos/codeforamerica/communities | opened | Clean up brigade-staff infrastructure | infrastructure | We've gone through a lot of different names as a team. Let's focus and do one thing: support the Brigade program. All of our internal stuff should reflect that.
- [ ] Change this GitHub repo's name
- [ ] Change the Slack channels
- [ ] Change our name on the staff meeting sheet | 1.0 | Clean up brigade-staff infrastructure - We've gone through a lot of different names as a team. Let's focus and do one thing: support the Brigade program. All of our internal stuff should reflect that.
- [ ] Change this GitHub repo's name
- [ ] Change the Slack channels
- [ ] Change our name on the staff meeting sheet | infrastructure | clean up brigade staff infrastructure we ve gone through a lot of different names as a team lets focus and do one thing support the brigade program all of our internal stuff should reflect that change this github repos name change the slack channels change our name on the staff meeting sheet | 1 |
30,890 | 4,226,267,872 | IssuesEvent | 2016-07-02 10:27:52 | FAC-GM/app | https://api.github.com/repos/FAC-GM/app | opened | Our star indicator for favourite candidates disappears | new-design | @Adam-JF star indicator on the home page and candidate view page has been overridden to be colour: white and this is why we can't see it.
This is the screenshot which shows that the star is in html.

And by changing the colour we can see that it pops back into the view. Colour and position have to be amended.

| 1.0 | Our star indicator for favourite candidates disappears - @Adam-JF star indicator on the home page and candidate view page has been overridden to be colour: white and this is why we can't see it.
This is the screenshot which shows that the star is in html.

And by changing the colour we can see that it pops back into the view. Colour and position have to be amended.

| non_infrastructure | our star indicator for favourite candidates dissapear adam jf star indicator on the home page and candidate view page has been overridden to be colour white and this is why we can t see it this is the screenshot which shows that the star is in html and by changing the color we can see that is popped back to the view colour and position has to be amended | 0 |
14,262 | 10,730,520,202 | IssuesEvent | 2019-10-28 17:34:41 | jhu-sheridan-libraries/slarx-general-issues | https://api.github.com/repos/jhu-sheridan-libraries/slarx-general-issues | closed | Github permissions are preventing linking Issues/PRs to Projects | infrastructure | There's something in the GitHub permissions that is preventing developers / PMs from setting the Project association of an issue or PR. | 1.0 | Github permissions are preventing linking Issues/PRs to Projects - There's something in the GitHub permissions that is preventing developers / PMs from setting the Project association of an issue or PR. | infrastructure | github permissions are preventing linking issues prs to projects there s something in the github permissions that is preventing developers pms of setting the project association of a issue or pr | 1
9,059 | 7,794,968,559 | IssuesEvent | 2018-06-08 06:08:42 | APSIMInitiative/ApsimX | https://api.github.com/repos/APSIMInitiative/ApsimX | opened | Empty column in soil water data copied from old APSIM | bug interface/infrastructure | After copying and pasting a soil water node from APSIM classic to next gen, the pawc columns are empty. | 1.0 | Empty column in soil water data copied from old APSIM - After copying and pasting a soil water node from APSIM classic to next gen, the pawc columns are empty. | infrastructure | empty column in soil water data copied from old apsim after copying and pasting a soil water node from apsim classic to next gen the pawc columns are empty | 1 |
357 | 2,524,590,407 | IssuesEvent | 2015-01-20 18:47:33 | SemanticMediaWiki/SemanticMediaWiki | https://api.github.com/repos/SemanticMediaWiki/SemanticMediaWiki | opened | CacheableResultCollector::findPropertyTableByType uses undefined field | code quality | It accessed a $store field, which is not defined in the class itself. It implicitly relies on the deriving classes to define one. | 1.0 | CacheableResultCollector::findPropertyTableByType uses undefined field - It accessed a $store field, which is not defined in the class itself. It implicitly relies on the deriving classes to define one. | non_infrastructure | cacheableresultcollector findpropertytablebytype uses undefined field it accessed a store field which is not defined in the class itself it implicitly relies on the deriving classes to define one | 0 |
21,290 | 14,498,142,602 | IssuesEvent | 2020-12-11 15:08:19 | dotnet/runtime | https://api.github.com/repos/dotnet/runtime | opened | Update runtime tests on Android devices | area-Infrastructure os-android | Once the functionality of https://github.com/dotnet/xharness/issues/397 is available, update runtime tests to only install/uninstall once per test wrapper, when they are running on Android devices. And enable tests mentioned here:
https://github.com/dotnet/runtime/blob/master/src/tests/issues.targets#L3352-L3374
With this work, not all the work items mentioned in https://github.com/dotnet/runtime/issues/45568 need to be split. | 1.0 | Update runtime tests on Android devices - Once the functionality of https://github.com/dotnet/xharness/issues/397 is available, update runtime tests to only install/uninstall once per test wrapper, when they are running on Android devices. And enable tests mentioned here:
https://github.com/dotnet/runtime/blob/master/src/tests/issues.targets#L3352-L3374
With this work, not all the work items mentioned in https://github.com/dotnet/runtime/issues/45568 need to be split. | infrastructure | update runtime tests on android devices once the functionality of is available update runtime tests to only install uninstall once per test wrapper when they are running on android devices and enable tests mentioned here with this work not all the work items mentioned in need to be split | 1 |
28,539 | 23,322,036,396 | IssuesEvent | 2022-08-08 17:18:53 | UnitTestBot/UTBotJava | https://api.github.com/repos/UnitTestBot/UTBotJava | closed | Remove `CLI: publish image` workflow | bug infrastructure | **Description**
`CLI: publish image` workflow is not removed but Python script it uses is removed so the workflow fails.
It should have been removed with the PR: https://github.com/UnitTestBot/UTBotJava/pull/666
**To Reproduce**
Open the **main** branch and see that the pipeline fails because the `CLI: publish image` workflow fails.
**Expected behavior**
`CLI: publish image` workflow is removed.
**Actual behavior**
`CLI: publish image` workflow is not removed but Python script it uses is removed so the workflow fails.
**Visual proofs (screenshots, logs, images)**
Not attached.
**Environment**
Not applicable.
**Additional context**
No context. | 1.0 | Remove `CLI: publish image` workflow - **Description**
`CLI: publish image` workflow is not removed but Python script it uses is removed so the workflow fails.
It should have been removed with the PR: https://github.com/UnitTestBot/UTBotJava/pull/666
**To Reproduce**
Open the **main** branch and see that the pipeline fails because the `CLI: publish image` workflow fails.
**Expected behavior**
`CLI: publish image` workflow is removed.
**Actual behavior**
`CLI: publish image` workflow is not removed but Python script it uses is removed so the workflow fails.
**Visual proofs (screenshots, logs, images)**
Not attached.
**Environment**
Not applicable.
**Additional context**
No context. | infrastructure | remove cli publish image workflow description cli publish image workflow is not removed but python script it uses is removed so the workflow fails it should has been removed with the pr to reproduce open main branch and see pipeline fails due to cli publish image workflow fails expected behavior cli publish image workflow is removed actual behavior cli publish image workflow is not removed but python script it uses is removed so the workflow fails visual proofs screenshots logs images not attached environment not applicable additional context no context | 1 |
150,725 | 11,982,564,728 | IssuesEvent | 2020-04-07 13:10:01 | Coderockr/backstage | https://api.github.com/repos/Coderockr/backstage | opened | Microservices for the Frontend - Single Spa | frontend frontend tools not tested yet | Link: https://single-spa.js.org/
<img width="1228" alt="Screen Shot 2020-04-07 at 10 09 31" src="https://user-images.githubusercontent.com/2267327/78673017-e84e1100-78b7-11ea-855f-29feca07e427.png">
| 1.0 | Microservices for the Frontend - Single Spa - Link: https://single-spa.js.org/
<img width="1228" alt="Screen Shot 2020-04-07 at 10 09 31" src="https://user-images.githubusercontent.com/2267327/78673017-e84e1100-78b7-11ea-855f-29feca07e427.png">
| non_infrastructure | microserviços para o frontend single spa link img width alt screen shot at src | 0 |
8,648 | 7,544,973,985 | IssuesEvent | 2018-04-17 20:08:05 | shaughnessyar/driftR | https://api.github.com/repos/shaughnessyar/driftR | closed | Unit testing for dr_replace | type:infrastructure | The new `dr_replace` function does not currently have unit testing. We'll need to create a new "clean" data set to compare against, and then write tests for each approach as well as different configurations of `cleanVar` and `sourceVar` with `overwrite`. | 1.0 | Unit testing for dr_replace - The new `dr_replace` function does not currently have unit testing. We'll need to create a new "clean" data set to compare against, and then write tests for each approach as well as different configurations of `cleanVar` and `sourceVar` with `overwrite`. | infrastructure | unit testing for dr replace the new dr replace function does not currently have unit testing we ll need to create a new clean data set to compare against and then write tests for each approach as well as different configurations of cleanvar and sourcevar with overwrite | 1 |
18,432 | 10,228,285,064 | IssuesEvent | 2019-08-17 00:56:23 | CodaProtocol/coda | https://api.github.com/repos/CodaProtocol/coda | opened | Improve offline/partition detection | security | The current implementation simply checks if we've seen a block within some constant time period.
ISTM there are two separate things we're interested in when we talk about whether the node is online.
## Non-adversarial network failures
For users in the testnets, they look at `coda client status` to figure out if they're connected properly. If their status is offline, their client is probably misconfigured, their network is down, etc. We could answer that question by querying the network layer. E.g. ask how many peers we've successfully connected to in the last 30 seconds.
## Potentially adversarial network partitions
But the second thing is whether or not >50% of stake is online on the same network as the user. If the attacker controls the network then they may censor blocks arbitrarily, and cause the target to believe a state which will never be finalized. It's very plausible an attacker could do this, e.g. with a [stingray](https://en.wikipedia.org/wiki/Stingray_phone_tracker), malicious WiFi, or in more extreme situations if they own or have compromised the target's ISP. It's unacceptable if I can go into a coffee shop, switch to their WiFi, sell something to a guy and see my account balance increase, then leave, switch back to cellular and see the money disappear. A less bad but still problematic scenario is where the attacker makes me think I *haven't* received a transaction that I have on the chain that will be final.
We can estimate the fraction of stake online in the last *n* slots through Bayesian inference, although the choice of hyperparameters is non-obvious since the fraction of stake that is active may rise and fall over time. An attacker may manipulate our estimate of the active stake fraction as well.
Having thought of all that it seems to me what we *really* want is not the probability that >50% of stake is online, but the probabilities that my best tip a) will be finalized and b) is the global best tip. We want the finalization probability for old blocks as well. This probability is equal to 1 - the probability there exists a stronger, distinct, chain. Under the assumption that if such a chain existed it'd be broadcast. My intuition says there's no efficient closed form equation for that probability (at least in the presence of min_window) and we'd have to go to Monte Carlo, which is not great but might be OK. | True | Improve offline/partition detection - The current implementation simply checks if we've seen a block within some constant time period.
ISTM there are two separate things we're interested in when we talk about whether the node is online.
## Non-adversarial network failures
For users in the testnets, they look at `coda client status` to figure out if they're connected properly. If their status is offline, their client is probably misconfigured, their network is down, etc. We could answer that question by querying the network layer. E.g. ask how many peers we've successfully connected to in the last 30 seconds.
## Potentially adversarial network partitions
But the second thing is whether or not >50% of stake is online on the same network as the user. If the attacker controls the network then they may censor blocks arbitrarily, and cause the target to believe a state which will never be finalized. It's very plausible an attacker could do this, e.g. with a [stingray](https://en.wikipedia.org/wiki/Stingray_phone_tracker), malicious WiFi, or in more extreme situations if they own or have compromised the target's ISP. It's unacceptable if I can go into a coffee shop, switch to their WiFi, sell something to a guy and see my account balance increase, then leave, switch back to cellular and see the money disappear. A less bad but still problematic scenario is where the attacker makes me think I *haven't* received a transaction that I have on the chain that will be final.
We can estimate the fraction of stake online in the last *n* slots through Bayesian inference, although the choice of hyperparameters is non-obvious since the fraction of stake that is active may rise and fall over time. An attacker may manipulate our estimate of the active stake fraction as well.
Having thought of all that it seems to me what we *really* want is not the probability that >50% of stake is online, but the probabilities that my best tip a) will be finalized and b) is the global best tip. We want the finalization probability for old blocks as well. This probability is equal to 1 - the probability there exists a stronger, distinct, chain. Under the assumption that if such a chain existed it'd be broadcast. My intuition says there's no efficient closed form equation for that probability (at least in the presence of min_window) and we'd have to go to Monte Carlo, which is not great but might be OK. | non_infrastructure | improve offline partition detection the current implementation simply checks if we ve seen a block within some constant time period istm there are two separate things we re interested in when we talk about whether the node is online non adversarial network failures for users in the testnets they look at coda client status to figure out if they re connected properly if they re status if offline their client is probably misconfigured their network is down etc we could answer that question by querying the network layer e g ask how many peers we ve successfully connected to in the last seconds potentially adversarial network partitions but the second thing is whether or not of stake is online on the same network as the user if the attacker controls the network then they may censor blocks arbitrarily and cause the target to believe a state which will never be finalized it s very plausible an attacker could do this e g with a malicious wifi or in more extreme situations if they own or have compromised the target s isp it s unacceptable if i can go into a coffee shop switch to their wifi sell something to a guy and see my account balance increase then leave switch back to cellular and see the money disappear a less bad but still problematic scenario is where the attacker makes me think i haven t received a transaction that i have on the 
chain that will be final we can estimate the fraction of stake online in the last n slots through bayesian inference although the choice of hyperparameters is non obvious since the fraction of stake that is active may rise and fall over time an attacker may manipulate our estimate of the active stake fraction as well having thought of all that it seems to me what we really want is not the probability that of stake is online but the probabilities that my best tip a will be finalized and b is the global best tip we want the finalization probability for old blocks as well this probability is equal to the probability there exists a stronger distinct chain under the assumption that if such a chain existed it d be broadcast my intuition says there s no efficient closed form equation for that probability at least in the presence of min window and we d have to go to monte carlo which is not great but might be ok | 0 |
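The Bayesian estimate mentioned in the Coda record can be illustrated with a toy model. This is purely a sketch under assumed parameters — it is not Coda's actual consensus math: assume each slot yields a visible block with probability f·s, where s is the online stake fraction and f is a hypothetical active-slot coefficient, put a uniform prior on the per-slot block probability, and sample the resulting Beta posterior to ask how likely s > 1/2 is.

```python
import random

def p_majority_online(blocks_seen: int, slots: int,
                      active_slot_coeff: float = 0.5,
                      samples: int = 20000, seed: int = 0) -> float:
    """Monte Carlo estimate of P(online stake fraction > 1/2).

    Toy model: P(block in a slot) = active_slot_coeff * online_fraction,
    so with a uniform prior the per-slot block probability has posterior
    Beta(blocks_seen + 1, slots - blocks_seen + 1).
    """
    rng = random.Random(seed)
    hits = 0
    for _ in range(samples):
        p_block = rng.betavariate(blocks_seen + 1, slots - blocks_seen + 1)
        # Implied online fraction above 1/2 <=> p_block above coeff / 2.
        if p_block / active_slot_coeff > 0.5:
            hits += 1
    return hits / samples

# 40 blocks in 100 slots implies roughly 80% of stake online under this model...
print(p_majority_online(40, 100))  # close to 1.0
# ...while 5 blocks in 100 slots suggests most stake is offline or partitioned.
print(p_majority_online(5, 100))   # close to 0.0
```

As the record notes, an attacker who censors blocks can push the observed count down and so manipulate this estimate, which is why the issue ultimately asks about finalization probabilities rather than the online-stake fraction itself.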
8,734 | 7,602,595,518 | IssuesEvent | 2018-04-29 03:25:20 | matrumz/matrumz-toolbox | https://api.github.com/repos/matrumz/matrumz-toolbox | closed | NPM - Not Importable | bug infrastructure | After installing matrumz-toolbox via NPM into a NodeJS project, VSCode gives error "Cannot find module 'matrumz-toolbox'".
This may be because package.json is specifying main & types to be dist/index.js & dist/index.d.ts, respectively, while index.ts does not exist.
I'm thinking to fix this issue, I'll have to move everything down from src/ to src/lib, and have a src/index.ts that re-exports everything. Will require updating test/NodeJSTests/tests.ci.js | 1.0 | NPM - Not Importable - After installing matrumz-toolbox via NPM into a NodeJS project, VSCode gives error "Cannot find module 'matrumz-toolbox'".
This may be because package.json is specifying main & types to be dist/index.js & dist/index.d.ts, respectively, while index.ts does not exist.
I'm thinking to fix this issue, I'll have to move everything down from src/ to src/lib, and have a src/index.ts that re-exports everything. Will require updating test/NodeJSTests/tests.ci.js | infrastructure | npm not importable after installing matrumz toolbox via npm into a nodejs project vscode gives error cannot find module matrumz toolbox this may be because package json is specifying main types to be dist index js dist index d ts respectively while index ts does not exist i m thinking to fix this issue i ll have to move everything down from src to src lib and have a src index ts that re exports everything will require updating test nodejstests tests ci js | 1 |
29,419 | 23,999,523,269 | IssuesEvent | 2022-09-14 10:14:47 | tskit-dev/tskit | https://api.github.com/repos/tskit-dev/tskit | closed | 32bit CI fails with GSL error | bug Infrastructure and tools | See https://app.circleci.com/pipelines/github/tskit-dev/tskit/7448/workflows/4c8538ec-396a-4870-9e5d-063ed7494c20/jobs/9829
I think this just needs some extra packages for apt-get, although I'm not sure why this CI passed yesterday if it is broken now. | 1.0 | 32bit CI fails with GSL error - See https://app.circleci.com/pipelines/github/tskit-dev/tskit/7448/workflows/4c8538ec-396a-4870-9e5d-063ed7494c20/jobs/9829
I think this just needs some extra packages for apt-get, although I'm not sure why this CI passed yesterday if it is broken now. | infrastructure | ci fails with gsl error see i think this just needs some extra packages for apt get although i m not sure why this ci passed yesterday if it is broken now | 1 |
119,898 | 12,054,217,488 | IssuesEvent | 2020-04-15 10:44:07 | operator-framework/operator-sdk | https://api.github.com/repos/operator-framework/operator-sdk | closed | Doc how to contribute and test the deploy of binaries | kind/documentation | ## Feature Request
**Is your feature request related to a problem? Please describe.**
We need to describe where the deploy of different types are built as was described in the comment https://github.com/operator-framework/operator-sdk/issues/2686#issuecomment-603938038
and then, how to contribute to and/or test these deployments. See: https://github.com/operator-framework/operator-sdk/pull/2742#issuecomment-606363491 and note that it is also required to add a valid token in the Travis env var COVERALLS_TOKEN in order to avoid it. The token can be obtained at coveralls.io.
**Describe the solution you'd like**
Add a doc with this information. | 1.0 | Doc how to contribute and test the deploy of binaries - ## Feature Request
**Is your feature request related to a problem? Please describe.**
We need to describe where the deploy of different types are built as was described in the comment https://github.com/operator-framework/operator-sdk/issues/2686#issuecomment-603938038
and then, how to contribute to and/or test these deployments. See: https://github.com/operator-framework/operator-sdk/pull/2742#issuecomment-606363491 and note that it is also required to add a valid token in the Travis env var COVERALLS_TOKEN in order to avoid it. The token can be obtained at coveralls.io.
**Describe the solution you'd like**
Add a doc with this information. | non_infrastructure | doc how to contribute and test the deploy of binaries feature request is your feature request related to a problem please describe we need to describe where the deploy of different types are built as was described in the comment and then how to contribute and or test these deployments see and check that also is required to add a valid token in the travis env var coveralls token in order to not face it the token can be obtained in the coveralls io describe the solution you d like add a doc with this information | 0 |
11,318 | 9,103,119,516 | IssuesEvent | 2019-02-20 15:14:53 | HumanCellAtlas/metadata-schema | https://api.github.com/repos/HumanCellAtlas/metadata-schema | closed | Add /health endpoint for https://schema.humancellatlas.org | infrastructure | To conform with DCP operational policies, all components must have a `/health` endpoint that can be checked to see if the component is up. https://schema.humancellatlas.org needs this.
| 1.0 | Add /health endpoint for https://schema.humancellatlas.org - To conform with DCP operational policies, all components must have a `/health` endpoint that can be checked to see if the component is up. https://schema.humancellatlas.org needs this.
| infrastructure | add health endpoint for to conform with dcp operational policies all components must have a health endpoint that can be checked to see if the component is up needs this | 1 |
121,351 | 4,807,833,656 | IssuesEvent | 2016-11-02 22:45:54 | JTSwagger/GoogleBot | https://api.github.com/repos/JTSwagger/GoogleBot | opened | No response for "don't have driver's license". | Priority: 3-Minor Type: Bug | #### Brief description of the issue
Customer can say "I don't even have a driver's license" but the bot will still go into the intro rather than end the call.
#### What you expected to happen
Cheryl to end the call. We can't offer car insurance quotes to a dude who doesn't have a car.
#### What actually happened
Cheryl doesn't understand this, and continues.
#### Steps to reproduce
1.) Say "I don't have a driver's license" after intro
2.) Watch as the bot continues with the call
#### Additional info:
- **Bot Revision**: Pre-Github
- **Anything else you may wish to add**: (Related issues, for example.) | 1.0 | No response for "don't have driver's license". - #### Brief description of the issue
Customer can say "I don't even have a driver's license" but the bot will still go into the intro rather than end the call.
#### What you expected to happen
Cheryl to end the call. We can't offer car insurance quotes to a dude who doesn't have a car.
#### What actually happened
Cheryl doesn't understand this, and continues.
#### Steps to reproduce
1.) Say "I don't have a driver's license" after intro
2.) Watch as the bot continues with the call
#### Additional info:
- **Bot Revision**: Pre-Github
- **Anything else you may wish to add**: (Related issues, for example.) | non_infrastructure | no response for don t have driver s license brief description of the issue customer can say i don t even have a driver s license but the bot will still go into the intro rather than end the call what you expected to happen cheryl to end the call we can t offer car insurance quotes to a dude who doesn t have a car what actually happened cheryl doesn t understand this and continues steps to reproduce say i don t have a driver s license after intro watch as the bot continues with the call additional info bot revision pre github anything else you may wish to add related issues for example | 0 |
28,639 | 23,413,625,688 | IssuesEvent | 2022-08-12 20:36:59 | OregonDigital/OD2 | https://api.github.com/repos/OregonDigital/OD2 | closed | Run a profiler against some key processes | Infrastructure Migration | ### Descriptive summary
We don't know why key parts of the app are so incredibly slow, and we need to figure this out. Even if we can't do much about some areas, it could help us know where we should focus. And maybe there are ways to tweak settings for other libraries or even push up code to improve things that are hurting us more than we would realize.
A promising stack-based profiler is at https://github.com/tmm1/stackprof.
I suggest profiling reindexing an asset as well as migrating an asset. Other profile target may make sense, but offhand these two seem the most abusive operations. (migration may in fact be a red herring if the indexing is the real culprit)
### Expected behavior
Know our stack a little better. Maybe make new tickets to address shortcomings. | 1.0 | Run a profiler against some key processes - ### Descriptive summary
We don't know why key parts of the app are so incredibly slow, and we need to figure this out. Even if we can't do much about some areas, it could help us know where we should focus. And maybe there are ways to tweak settings for other libraries or even push up code to improve things that are hurting us more than we would realize.
A promising stack-based profiler is at https://github.com/tmm1/stackprof.
I suggest profiling reindexing an asset as well as migrating an asset. Other profile target may make sense, but offhand these two seem the most abusive operations. (migration may in fact be a red herring if the indexing is the real culprit)
### Expected behavior
Know our stack a little better. Maybe make new tickets to address shortcomings. | infrastructure | run a profiler against some key processes descriptive summary we don t know why key parts of the app are so incredibly slow and we need to figure this out even if we can t do much about some areas it could help us know where we should focus and maybe there are ways to tweak settings for other libraries or even push up code to improve things that are hurting us more than we would realize a promising stack based profiler is at i suggest profiling reindexing an asset as well as migrating an asset other profile target may make sense but offhand these two seem the most abusive operations migration may in fact be a red herring if the indexing is the real culprit expected behavior know our stack a little better maybe make new tickets to address shortcomings | 1 |
11,801 | 9,428,986,586 | IssuesEvent | 2019-04-12 03:46:22 | APSIMInitiative/ApsimX | https://api.github.com/repos/APSIMInitiative/ApsimX | closed | Multi-process job runner cannot run all IRunnable implementations | bug interface/infrastructure | The multi process job runner seems to assume that all `IRunnable` jobs are of type `RunSimulation`:
```csharp
IRunnable jobToRun = job.job;
RunSimulation simulationRunner = job.job as RunSimulation;
```
The problem is that `job.job` could be any `IRunnable` - e.g. an `ExcelInput`. As a result, we get a null reference exception before the job gets run.
To reproduce the problem just use the multi-process runner on any file which contains an excel input. | 1.0 | Multi-process job runner cannot run all IRunnable implementations - The multi process job runner seems to assume that all `IRunnable` jobs are of type `RunSimulation`:
```csharp
IRunnable jobToRun = job.job;
RunSimulation simulationRunner = job.job as RunSimulation;
```
The problem is that `job.job` could be any `IRunnable` - e.g. an `ExcelInput`. As a result, we get a null reference exception before the job gets run.
To reproduce the problem just use the multi-process runner on any file which contains an excel input. | infrastructure | multi process job runner cannot run all irunnable implementations the multi process job runner seems to assume that all irunnable jobs are of type runsimulation csharp irunnable jobtorun job job runsimulation simulationrunner job job as runsimulation the problem is that job job could be any irunnable e g an excelinput as a result we get a null reference exception before the job gets run to reproduce the problem just use the multi process runner on any file which contains an excel input | 1 |
144,125 | 22,281,813,020 | IssuesEvent | 2022-06-11 01:59:36 | DeveloperAcademy-POSTECH/MC2-Team14-OXY | https://api.github.com/repos/DeveloperAcademy-POSTECH/MC2-Team14-OXY | closed | [Feature] Card long press & swipe gesture | feature design 다니 | ## Description
Implement a gesture that sends a card up/down by long-pressing it
## ScreenShot
## To-do
- [ ] Combine the card long-press gesture with the drag gesture
- [ ] Send the card up
- [ ] Send the card down
## Etc
 | 1.0 | [Feature] Card long press & swipe gesture - ## Description
Implement a gesture that sends a card up/down by long-pressing it
## ScreenShot
## To-do
- [ ] Combine the card long-press gesture with the drag gesture
- [ ] Send the card up
- [ ] Send the card down
## Etc
 | non_infrastructure | card long press swipe gesture description implement a gesture that sends a card up down by long pressing it screenshot to do combine the card long press gesture with the drag gesture send the card up send the card down etc | 0
27,762 | 22,318,030,740 | IssuesEvent | 2022-06-14 01:31:25 | google/iree | https://api.github.com/repos/google/iree | closed | Using GCC on Linux fails with error undefined reference to `dlsym' | infrastructure infrastructure/cmake | Using GCC-10 on Linux fails with an undefined reference to `dlsym`. The CMake configuration
`cmake -DCMAKE_C_COMPILER=gcc -DCMAKE_CXX_COMPILER=g++ -DCMAKE_BUILD_TYPE=Debug`
Based on what's described here
https://github.com/Intel-Media-SDK/MediaSDK/issues/34
The fix was to add `-Wl,--no-as-needed -ldl`
This change to CMakeLists.txt at the root directory seemed to get past it
```
diff --git a/CMakeLists.txt b/CMakeLists.txt
index 243e3e618..fc9554248 100644
--- a/CMakeLists.txt
+++ b/CMakeLists.txt
@@ -233,11 +233,13 @@ set(CMAKE_POSITION_INDEPENDENT_CODE TRUE)
iree_append_list_to_string(CMAKE_C_FLAGS_DEBUG ${IREE_C_FLAGS_DEBUG_LIST})
iree_append_list_to_string(CMAKE_CXX_FLAGS_DEBUG ${IREE_CXX_FLAGS_DEBUG_LIST})
+set(CMAKE_EXE_LINKER_FLAGS "-Wl,--no-as-needed -ldl")
set(CMAKE_CXX_FLAGS_FASTBUILD "-gmlt" CACHE STRING "Flags used by the C++ compiler during fast builds." FORCE)
set(CMAKE_C_FLAGS_FASTBUILD "-gmlt" CACHE STRING "Flags used by the C compiler during fast builds." FORCE)
set(CMAKE_EXE_LINKER_FLAGS_FASTBUILD "-Wl,-S" CACHE STRING "Flags used for linking binaries during fast builds." FORCE)
set(CMAKE_SHARED_LINKER_FLAGS_FASTBUILD "-Wl,-S" CACHE STRING "Flags used by the shared libraries linker binaries during fast builds." FORCE)
mark_as_advanced(
+ CMAKE_EXE_LINKER_FLAGS
CMAKE_CXX_FLAGS_FASTBUILD
CMAKE_C_FLAGS_FASTBUILD
CMAKE_EXE_LINKER_FLAGS_FASTBUILD
```
| 2.0 | Using GCC on Linux fails with error undefined reference to `dlsym' - Using GCC-10 on Linux fails with an undefined reference to `dlsym`. The Cmake configuration
`cmake -DCMAKE_C_COMPILER=gcc -DCMAKE_CXX_COMPILER=g++ -DCMAKE_BUILD_TYPE=Debug`
Based on what's described here
https://github.com/Intel-Media-SDK/MediaSDK/issues/34
The fix was to add `-Wl,--no-as-needed -ldl`
This change to CMakeLists.txt at the root directory seemed to get past it
```
diff --git a/CMakeLists.txt b/CMakeLists.txt
index 243e3e618..fc9554248 100644
--- a/CMakeLists.txt
+++ b/CMakeLists.txt
@@ -233,11 +233,13 @@ set(CMAKE_POSITION_INDEPENDENT_CODE TRUE)
iree_append_list_to_string(CMAKE_C_FLAGS_DEBUG ${IREE_C_FLAGS_DEBUG_LIST})
iree_append_list_to_string(CMAKE_CXX_FLAGS_DEBUG ${IREE_CXX_FLAGS_DEBUG_LIST})
+set(CMAKE_EXE_LINKER_FLAGS "-Wl,--no-as-needed -ldl")
set(CMAKE_CXX_FLAGS_FASTBUILD "-gmlt" CACHE STRING "Flags used by the C++ compiler during fast builds." FORCE)
set(CMAKE_C_FLAGS_FASTBUILD "-gmlt" CACHE STRING "Flags used by the C compiler during fast builds." FORCE)
set(CMAKE_EXE_LINKER_FLAGS_FASTBUILD "-Wl,-S" CACHE STRING "Flags used for linking binaries during fast builds." FORCE)
set(CMAKE_SHARED_LINKER_FLAGS_FASTBUILD "-Wl,-S" CACHE STRING "Flags used by the shared libraries linker binaries during fast builds." FORCE)
mark_as_advanced(
+ CMAKE_EXE_LINKER_FLAGS
CMAKE_CXX_FLAGS_FASTBUILD
CMAKE_C_FLAGS_FASTBUILD
CMAKE_EXE_LINKER_FLAGS_FASTBUILD
```
| infrastructure | using gcc on linux fails with error undefined reference to dlsym using gcc on linux fails with an undefined reference to dlsym the cmake configuration cmake dcmake c compiler gcc dcmake cxx compiler g dcmake build type debug based on whats described here the fix was to add wl no as needed ldl this change to cmakelists txt at the root directory seemed to get past it diff git a cmakelists txt b cmakelists txt index a cmakelists txt b cmakelists txt set cmake position independent code true iree append list to string cmake c flags debug iree c flags debug list iree append list to string cmake cxx flags debug iree cxx flags debug list set cmake exe linker flags wl no as needed ldl set cmake cxx flags fastbuild gmlt cache string flags used by the c compiler during fast builds force set cmake c flags fastbuild gmlt cache string flags used by the c compiler during fast builds force set cmake exe linker flags fastbuild wl s cache string flags used for linking binaries during fast builds force set cmake shared linker flags fastbuild wl s cache string flags used by the shared libraries linker binaries during fast builds force mark as advanced cmake exe linker flags cmake cxx flags fastbuild cmake c flags fastbuild cmake exe linker flags fastbuild | 1 |
13,869 | 10,513,934,021 | IssuesEvent | 2019-09-27 22:09:08 | ryardley/pdsl | https://api.github.com/repos/ryardley/pdsl | closed | Refactor to monorepo to hold related libs | infrastructure priority | * Syntax highlighting
* Babel plugin
* Compiler
___
Along with this we need to ensure we have our release structure organised in a script that will publish the `canary` as well as the `latest` tag.
WIP is over at the [`monorepo`](https://github.com/ryardley/pdsl/tree/monorepo) branch | 1.0 | Refactor to monorepo to hold related libs - * Syntax highlighting
* Babel plugin
* Compiler
___
Along with this we need to ensure we have our release structure organised in a script that will publish the `canary` as well as the `latest` tag.
WIP is over at the [`monorepo`](https://github.com/ryardley/pdsl/tree/monorepo) branch | infrastructure | refactor to monorepo to hold related libs syntax highlighting babel plugin compiler along with this we need to ensure we have our release structure organised in a script that will publish the canary as well as the latest tag wip is over at the branch | 1 |
26,323 | 19,988,334,620 | IssuesEvent | 2022-01-31 00:36:18 | APSIMInitiative/ApsimX | https://api.github.com/repos/APSIMInitiative/ApsimX | closed | The GUI is somewhat unstable and crashes | bug interface/infrastructure | This is hard to reproduce reliably. It seems to happen after much clicking on nodes in the tree, copying and pasting within and between manager scripts and other components. In the screenshot below I copied a large chunk of c# from a manager script and renamed a node in the tree and hit ctrl V to paste the c# code. Not a valid thing to do but I was trying to get it to crash. The APSIM status message I think was from a previous attempt to paste something invalid on a node in the tree.

| 1.0 | The GUI is somewhat unstable and crashes - This is hard to reproduce reliably. It seems to happen after much clicking on nodes in the tree, copying and pasting within and between manager scripts and other components. In the screenshot below I copied a large chunk of c# from a manager script and renamed a node in the tree and hit ctrl V to paste the c# code. Not a valid thing to do but I was trying to get it to crash. The APSIM status message I think was from a previous attempt to paste something invalid on a node in the tree.

 | infrastructure | the gui is somewhat unstable and crashes this is hard to reproduce reliably it seems to happen after much clicking on nodes in the tree copying and pasting within and between manager scripts and other components in the screenshot below i copied a large chunk of c from a manager script and renamed a node in the tree and hit ctrl v to paste the c code not a valid thing to do but i was trying to get it to crash the apsim status message i think was from a previous attempt to paste something invalid on a node in the tree | 1
26,155 | 19,692,918,846 | IssuesEvent | 2022-01-12 09:10:24 | OpenLiberty/openliberty.io | https://api.github.com/repos/OpenLiberty/openliberty.io | closed | Move guide specific scrolling code into respective guide javascript | infrastructure good first issue | Move this part of nav scrolling code in openliberty.js to the correct guide javascript instead.
```js
//handles where the top of the code column should be
if(typeof(inSingleColumnView) === 'function'){
if (!inSingleColumnView()) {
//at the top of the browser window in multi-column view
$("#code_column").css({"position":"fixed", "top":"0px"})
} else {
//below the hotspot in single column view
$("#code_column").css("position", "fixed");
}
}
```
This was added when fixing https://github.com/OpenLiberty/openliberty.io/issues/2307 which caused the nav to disappear when scrolling the page on any page and a temporary fix was put in https://github.com/OpenLiberty/openliberty.io/pull/2375. | 1.0 | Move guide specific scrolling code into respective guide javascript - Move this part of nav scrolling code in openliberty.js to the correct guide javascript instead.
```js
//handles where the top of the code column should be
if(typeof(inSingleColumnView) === 'function'){
if (!inSingleColumnView()) {
//at the top of the browser window in multi-column view
$("#code_column").css({"position":"fixed", "top":"0px"})
} else {
//below the hotspot in single column view
$("#code_column").css("position", "fixed");
}
}
```
This was added when fixing https://github.com/OpenLiberty/openliberty.io/issues/2307 which caused the nav to disappear when scrolling the page on any page and a temporary fix was put in https://github.com/OpenLiberty/openliberty.io/pull/2375. | infrastructure | move guide specific scrolling code into respective guide javascript move this part of nav scrolling code in openliberty js to the correct guide javascript instead handles where the top of the code column should be if typeof insinglecolumnview function if insinglecolumnview at the top of the browser window in multi column view code column css position fixed top else below the hotspot in single column view code column css position fixed this was added when fixing which caused the nav to disappear when scrolling the page on any page and a temporary fix was put in | 1 |
29,249 | 23,852,157,359 | IssuesEvent | 2022-09-06 19:01:43 | dotnet/aspnetcore | https://api.github.com/repos/dotnet/aspnetcore | closed | Microsoft.AspNetCore.App.Ref needs to be produced and versioned correctly in servicing for source-build | area-infrastructure | Currently building release/6.0 for 6.0.1 doesn't produce a Microsoft.AspNetCore.App.Ref package. This is because the package wasn't serviced in 6.0.1. 6.0.0 is the active version yet the build doesn't produce it. This breaks source-build which needs to produce the entire product.
In previous versions, source-build handled this by treating the package as a reference package via https://github.com/dotnet/source-build-reference-packages. This is no longer an option in 6.0 with the introduction of roslyn analyzers (e.g source) in the package.
dotnet/runtime made changes in 6.0 to support this [here](https://github.com/dotnet/arcade/blob/e7ede87875f41a9b3df898ae08da5ebc96e24f56/src/Microsoft.DotNet.SharedFramework.Sdk/targets/Microsoft.DotNet.SharedFramework.Sdk.targets#L19).
| 1.0 | Microsoft.AspNetCore.App.Ref needs to be produced and versioned correctly in servicing for source-build - Currently building release/6.0 for 6.0.1 doesn't produce a Microsoft.AspNetCore.App.Ref package. This is because the package wasn't serviced in 6.0.1. 6.0.0 is the active version yet the build doesn't produce it. This breaks source-build which needs to produce the entire product.
In previous versions, source-build handled this by treating the package as a reference package via https://github.com/dotnet/source-build-reference-packages. This is no longer an option in 6.0 with the introduction of roslyn analyzers (e.g source) in the package.
dotnet/runtime made changes in 6.0 to support this [here](https://github.com/dotnet/arcade/blob/e7ede87875f41a9b3df898ae08da5ebc96e24f56/src/Microsoft.DotNet.SharedFramework.Sdk/targets/Microsoft.DotNet.SharedFramework.Sdk.targets#L19).
| infrastructure | microsoft aspnetcore app ref needs to be produced and versioned correctly in servicing for source build currently building release for doesn t produce a microsoft aspnetcore app ref package this is because the package wasn t serviced in is the active version yet the build doesn t produce it this breaks source build which needs to produce the entire product in previous versions source build handled this by treating the package as a reference package via this is no longer an option in with the introduction of roslyn analyzers e g source in the package dotnet runtime made changes in to support this | 1 |
22,675 | 15,367,834,628 | IssuesEvent | 2021-03-02 04:13:11 | APSIMInitiative/ApsimX | https://api.github.com/repos/APSIMInitiative/ApsimX | closed | More explicit error messages from experiment | bug interface/infrastructure | - When a factor contains multiple composite factor children, the descriptor should be left blank. In the event that it's not blank, an appropriate error should be shown
- When a composite factor's number of paths is not equal to its number of children it should throw an error | 1.0 | More explicit error messages from experiment - - When a factor contains multiple composite factor children, the descriptor should be left blank. In the event that it's not blank, an appropriate error should be shown
- When a composite factor's number of paths is not equal to its number of children it should throw an error | infrastructure | more explicit error messages from experiment when a factor contains multiple composite factor children the descriptor should be left blank in the event that it s not blank an appropriate error should be shown when a composite factor s number of paths is not equal to its number of children it should throw an error | 1 |
1,415 | 3,201,147,581 | IssuesEvent | 2015-10-02 03:27:28 | emberjs/guides | https://api.github.com/repos/emberjs/guides | closed | Flip version ordering in the version drop down | infrastructure | Over time it seems to make more sense to have older versions of the guides at the bottom of the versions menu instead of at the top ... | 1.0 | Flip version ordering in the version drop down - Over time it seems to make more sense to have older versions of the guides at the bottom of the versions menu instead of at the top ... | infrastructure | flip version ordering in the version drop down over time it seems to make more sense to have older versions of the guides at the bottom of the versions menu instead of at the top | 1 |
32,718 | 26,934,949,757 | IssuesEvent | 2023-02-07 19:49:36 | centerofci/mathesar | https://api.github.com/repos/centerofci/mathesar | closed | Unable to create a schema in the browser while running in the development environment | type: bug work: infrastructure status: ready | ## Description
When the local environment is set up and run using `docker compose up`, I am unable to create a schema via the browser. I am getting an internal server error with the following description: "When making an XHR request, the server responded with an error, but the response body was not valid JSON."
## Expected behavior
User should be able to create schema
## To Reproduce
1. Clone the repository at `master` at https://github.com/centerofci/mathesar/commit/8f47f3412467eee4a2e899845392194484e33232
2. Follow the steps to set up the local environment
3. Go to the web page 127.0.0.1:8000
4. Click on the new schema icon
5. Once the submit button is clicked, the following error shows up: "When making an XHR request, the server responded with an error, but the response body was not valid JSON."
## Environment
- OS:macos
- Browser: chrome
## Additional context
<!-- Add any other context about the problem or screenshots here. -->
| 1.0 | Unable to create a schema in the browser while running in the development environment - ## Description
When the local environment is set up and run using `docker compose up`, I am unable to create a schema via the browser. I am getting an internal server error with the following description: "When making an XHR request, the server responded with an error, but the response body was not valid JSON."
## Expected behavior
User should be able to create schema
## To Reproduce
1. Clone the repository at `master` at https://github.com/centerofci/mathesar/commit/8f47f3412467eee4a2e899845392194484e33232
2. Follow the steps to set up the local environment
3. Go to the web page 127.0.0.1:8000
4. Click on the new schema icon
5. Once the submit button is clicked, the following error shows up: "When making an XHR request, the server responded with an error, but the response body was not valid JSON."
## Environment
- OS:macos
- Browser: chrome
## Additional context
<!-- Add any other context about the problem or screenshots here. -->
| infrastructure | unable to create a schema in the browser while running in the development environment description when local environment is set up and run using docker compose up i am unable to create a schema via the browser i am getting an internal server error with the following description when making an xhr request the server responded with an error but the response body was not valid json expected behavior user should be able to create schema to reproduce clone repository at master at follow the steps to set the local environment go to the web page click on the new schema icon once submit button is clicked the following error shows up when making an xhr request the server responded with an error but the response body was not valid json environment os macos browser chrome additional context | 1 |
1,549 | 3,265,689,062 | IssuesEvent | 2015-10-22 17:20:09 | dotnet/roslyn | https://api.github.com/repos/dotnet/roslyn | closed | Let nuget dependencies on Immutable and Metadata packages float | Area-Infrastructure Resolution-Fixed | Currently, the roslyn nuget packages depend on fixed versions of BCL packages.
See for example: http://www.nuget.org/packages/Microsoft.CodeAnalysis.Common/1.0.0-rc2
**Dependencies**
* System.Collections.Immutable (**=** 1.1.33-beta)
* System.Reflection.Metadata (**=** 1.0.18-beta)
Once you adopt the stable versions, these should **>=**. There won't be breaking changes and developers should be allowed to upgrade the above without upgrading Roslyn. On the desktop framework, it will require them to have binding redirects, but those can be generated by VS, and on every other platform it will just work.
cc @jaredpar @davkean | 1.0 | Let nuget dependencies on Immutable and Metadata packages float - Currently, the roslyn nuget packages depend on fixed versions of BCL packages.
See for example: http://www.nuget.org/packages/Microsoft.CodeAnalysis.Common/1.0.0-rc2
**Dependencies**
* System.Collections.Immutable (**=** 1.1.33-beta)
* System.Reflection.Metadata (**=** 1.0.18-beta)
Once you adopt the stable versions, these should **>=**. There won't be breaking changes and developers should be allowed to upgrade the above without upgrading Roslyn. On the desktop framework, it will require them to have binding redirects, but those can be generated by VS, and on every other platform it will just work.
cc @jaredpar @davkean | infrastructure | let nuget dependencies on immutable and metadata packages float currently the roslyn nuget packages depend on fixed versions of bcl packages see for example dependencies system collections immutable beta system reflection metadata beta once you adopt the stable versions these should there won t be breaking changes and developers should be allowed to upgrade the above without upgrading roslyn on the desktop framework it will require them to have binding redirects but those can be generated by vs and on every other platform it will just work cc jaredpar davkean | 1 |
317,890 | 23,693,682,471 | IssuesEvent | 2022-08-29 13:04:11 | software-mansion/starknet-jvm | https://api.github.com/repos/software-mansion/starknet-jvm | closed | Do not store history when deploying docs on gh pages | documentation enhancement | Storing old versions of deployed docs in gh-pages branch can result in very large repository size. Old deployments (commits) should not be stored. | 1.0 | Do not store history when deploying docs on gh pages - Storing old versions of deployed docs in gh-pages branch can result in very large repository size. Old deployments (commits) should not be stored. | non_infrastructure | do not store history when deploying docs on gh pages storing old versions of deployed docs in gh pages branch can result in very large repository size old deployments commits should not be stored | 0 |
420,367 | 12,237,156,110 | IssuesEvent | 2020-05-04 17:32:34 | RoeiRom/NewSpectrum-Client | https://api.github.com/repos/RoeiRom/NewSpectrum-Client | closed | Add a "Today" button to the calendar | enhancement priority:noraml | On the calendar page a new "Today" button will appear in the toolbar; when the user browses between months, clicking the "Today" button will move the view to the current month

 | 1.0 | Add a "Today" button to the calendar - On the calendar page a new "Today" button will appear in the toolbar; when the user browses between months, clicking the "Today" button will move the view to the current month

 | non_infrastructure | add a today button to the calendar on the calendar page a new today button will appear in the toolbar when the user browses between months clicking the today button will move the view to the current month | 0 |
26,654 | 20,384,470,088 | IssuesEvent | 2022-02-22 04:25:09 | itchysats/itchysats | https://api.github.com/repos/itchysats/itchysats | opened | Fix release workflow to bump version of crates correctly | bug infrastructure | The "draft new release" workflow bumps the version of the wrong crate since after the crate split: https://github.com/itchysats/itchysats/blob/a6f858e496ee7385092d7896ab86efb769f781fc/.github/workflows/draft-new-release.yml#L32-L36
This needs to be changed to `maker` & `taker`. | 1.0 | Fix release workflow to bump version of crates correctly - The "draft new release" workflow bumps the version of the wrong crate since after the crate split: https://github.com/itchysats/itchysats/blob/a6f858e496ee7385092d7896ab86efb769f781fc/.github/workflows/draft-new-release.yml#L32-L36
This needs to be changed to `maker` & `taker`. | infrastructure | fix release workflow to bump version of crates correctly the draft new release workflow bumps the version of the wrong crate since after the crate split this needs to be changed to maker taker | 1 |
13,817 | 10,475,912,999 | IssuesEvent | 2019-09-23 17:23:51 | coq/coq | https://api.github.com/repos/coq/coq | closed | Deploy to coq-on-cachix fails to deploy tags. | kind: infrastructure | Example: https://gitlab.com/coq/coq/-/jobs/235978502.
Fixing this is mostly a matter of treating tags specially. | 1.0 | Deploy to coq-on-cachix fails to deploy tags. - Example: https://gitlab.com/coq/coq/-/jobs/235978502.
Fixing this is mostly a matter of treating tags specially. | infrastructure | deploy to coq on cachix fails to deploy tags example fixing this is mostly a matter of treating tags specially | 1 |
150,629 | 13,349,597,059 | IssuesEvent | 2020-08-30 02:02:59 | spacelab-ufsc/interface-board | https://api.github.com/repos/spacelab-ufsc/interface-board | closed | Hardware: Update images, diagrams, and templates to "embedded source" | bug documentation hardware | In the properties, enable the embedded option, since it works on any computer and file system in case the source images are not attached.

 | 1.0 | Hardware: Update images, diagrams, and templates to "embedded source" - In the properties, enable the embedded option, since it works on any computer and file system in case the source images are not attached.

| non_infrastructure | hardware update imagens diagrams and templates to embbeded source in properties enable the embedded option since it works in any computer and file system in case of non attachment of the source images | 0 |
28,485 | 23,290,126,508 | IssuesEvent | 2022-08-05 21:30:47 | IMLS/estimating-wifi | https://api.github.com/repos/IMLS/estimating-wifi | closed | Add basic security scanning | story infrastructure internal | As a security engineer, so that issues may be identified quickly and without manual intervention, I would like to have automated tooling that scans our code, containers, etc. for common security concerns, dependency updates, etc..
# Acceptance Criteria
1. automated processes run against our code base either on periodic or event-driven basis
2. findings are tracked and communicated
## Optional Criteria
1. SARIF support
# Resources:
* semgrep.dev
* snyk.io
* codeql.github.com
* dependabot.com
### Notes
As with #14 , these are common processes used by other GSA / 10x projects and are run in a CI/CD context | 1.0 | Add basic security scanning - As a security engineer, so that issues may be identified quickly and without manual intervention, I would like to have automated tooling that scans our code, containers, etc. for common security concerns, dependency updates, etc..
# Acceptance Criteria
1. automated processes run against our code base either on periodic or event-driven basis
2. findings are tracked and communicated
## Optional Criteria
1. SARIF support
# Resources:
* semgrep.dev
* snyk.io
* codeql.github.com
* dependabot.com
### Notes
As with #14 , these are common processes used by other GSA / 10x projects and are run in a CI/CD context | infrastructure | add basic security scanning as a security engineer so that issues may be identified quickly and without manual intervention i would like to have automated tooling that scans our code containers etc for common security concerns dependency updates etc acceptance criteria automated processes run against our code base either on periodic or event driven basis findings are tracked and communicated optional criteria sarif support resources semgrep dev snyk io codeql github com dependabot com notes as with these are common processes used by other gsa projects and are run in a ci cd context | 1 |
437,341 | 30,594,936,011 | IssuesEvent | 2023-07-21 20:50:48 | DCC-EX/dcc-ex.github.io | https://api.github.com/repos/DCC-EX/dcc-ex.github.io | closed | [Documentation Update]: Add signal summary page | Documentation | ### Documentation details
To help users understand how to configure the software, and what hardware to use to drive the various sorts of signals, we need to add a signal summary page in the Inputs and Outputs section.
This should cover the various sorts of signals available (servo/semaphore, two/three aspect, etc.) and provide some guidance on what options will enable each type, with links to the command summary/reference, EX-RAIL, and/or Big Picture pages as appropriate.
### Page with issues
_No response_ | 1.0 | [Documentation Update]: Add signal summary page - ### Documentation details
To help users understand how to configure the software, and what hardware to use to drive the various sorts of signals, we need to add a signal summary page in the Inputs and Outputs section.
This should cover the various sorts of signals available (servo/semaphore, two/three aspect, etc.) and provide some guidance on what options will enable each type, with links to the command summary/reference, EX-RAIL, and/or Big Picture pages as appropriate.
### Page with issues
_No response_ | non_infrastructure | add signal summary page documentation details to help users understand how to configure the software and what hardware to use to drive the various sorts of signals we need to add a signal summary page in the inputs and outputs section this should cover the various sorts of signals available servo semaphore two three aspect etc and provide some guidance on what options will enable each type with links to the command summary reference ex rail and or big picture pages as appropriate page with issues no response | 0 |
53,741 | 11,135,180,761 | IssuesEvent | 2019-12-20 13:47:04 | tlienart/JuDoc.jl | https://api.github.com/repos/tlienart/JuDoc.jl | closed | merge resolve_input_*code functions | code_quality easy | removed usage of Highlight.jl for now, seems superfluous for now so there's no point in distinguishing the two functions. | 1.0 | merge resolve_input_*code functions - removed usage of Highlight.jl for now, seems superfluous for now so there's no point in distinguishing the two functions. | non_infrastructure | merge resolve input code functions removed usage of highlight jl for now seems superfluous for now so there s no point in distinguishing the two functions | 0 |
25,952 | 19,492,092,608 | IssuesEvent | 2021-12-27 08:32:22 | IBM-Cloud/terraform-provider-ibm | https://api.github.com/repos/IBM-Cloud/terraform-provider-ibm | closed | ibm_hardware_firewall_shared creation timeout | service/Classic Infrastructure | Hi there,
I hit a timeout when provisioning a Hardware Firewall (Shared) instance for a bare metal server. Not sure if this is a regular occurrence, or just a once-off.
### Terraform Version
```
$ terraform -v
Terraform v0.14.5
+ provider registry.terraform.io/ibm-cloud/ibm v1.21.0
```
### Affected Resource(s)
- ibm_hardware_firewall_shared
If this issue appears to affect multiple resources, it may be an issue with Terraform's core, so please mention this.
### Terraform Configuration Files
```hcl
resource "ibm_compute_bare_metal" "my_bm_test" {
hourly_billing = false
hostname = "my_bm_test"
domain = "my.lab.com"
datacenter = "dal10"
package_key_name = "DUAL_INTEL_XEON_PROCESSOR_SCALABLE_FAMILY_4_DRIVES"
process_key_name = "INTEL_INTEL_XEON_4110_2_10"
os_key_name = "OS_RHEL_7_1_64_BIT"
memory = 32
disk_key_names = [ "HARD_DRIVE_800GB_SSD" ]
network_speed = 1000
public_bandwidth = 500
public_vlan_id = <snip>
private_vlan_id = <snip>
ssh_key_ids = [<snip>]
}
resource "ibm_hardware_firewall_shared" "my_test_fw" {
firewall_type = "1000MBPS_HARDWARE_FIREWALL"
hardware_instance_id = ibm_compute_bare_metal.my_bm_test.id
}
```
### Expected Behavior
"Apply" waits until the firewall is created successfully, or throws an error returned by the API.
### Actual Behavior
Resource creation timed out resulting in an error:
```
ibm_hardware_firewall_shared.my_test_fw: Creating...
ibm_hardware_firewall_shared.my_test_fw: Still creating... [10s elapsed]
...
ibm_hardware_firewall_shared.my_test_fw: Still creating... [10m30s elapsed]
Error: timeout while waiting for state to become 'completed' (last state: 'pending', timeout: 10m0s)
```
When I navigate to the device's page, there is a pending "Firewall Setup" transaction:
```
ID <snip>
Started <snip>
Group Firewall Setup
Status Assign firewall context (FIREWALL_ASSIGN_CONTEXT)
Elapsed time 62 Minutes
Average duration 184.45 Minutes
```
### Steps to Reproduce
1. `terraform plan -out=wip`
2. `terraform apply "wip"`
| 1.0 | ibm_hardware_firewall_shared creation timeout - Hi there,
I hit a timeout when provisioning a Hardware Firewall (Shared) instance for a bare metal server. Not sure if this is a regular occurrence, or just a once-off.
### Terraform Version
```
$ terraform -v
Terraform v0.14.5
+ provider registry.terraform.io/ibm-cloud/ibm v1.21.0
```
### Affected Resource(s)
- ibm_hardware_firewall_shared
If this issue appears to affect multiple resources, it may be an issue with Terraform's core, so please mention this.
### Terraform Configuration Files
```hcl
resource "ibm_compute_bare_metal" "my_bm_test" {
hourly_billing = false
hostname = "my_bm_test"
domain = "my.lab.com"
datacenter = "dal10"
package_key_name = "DUAL_INTEL_XEON_PROCESSOR_SCALABLE_FAMILY_4_DRIVES"
process_key_name = "INTEL_INTEL_XEON_4110_2_10"
os_key_name = "OS_RHEL_7_1_64_BIT"
memory = 32
disk_key_names = [ "HARD_DRIVE_800GB_SSD" ]
network_speed = 1000
public_bandwidth = 500
public_vlan_id = <snip>
private_vlan_id = <snip>
ssh_key_ids = [<snip>]
}
resource "ibm_hardware_firewall_shared" "my_test_fw" {
firewall_type = "1000MBPS_HARDWARE_FIREWALL"
hardware_instance_id = ibm_compute_bare_metal.my_bm_test.id
}
```
### Expected Behavior
"Apply" waits until the firewall is created successfully, or throws an error returned by the API.
### Actual Behavior
Resource creation timed out resulting in an error:
```
ibm_hardware_firewall_shared.my_test_fw: Creating...
ibm_hardware_firewall_shared.my_test_fw: Still creating... [10s elapsed]
...
ibm_hardware_firewall_shared.my_test_fw: Still creating... [10m30s elapsed]
Error: timeout while waiting for state to become 'completed' (last state: 'pending', timeout: 10m0s)
```
When I navigate to the device's page, there is a pending "Firewall Setup" transaction:
```
ID <snip>
Started <snip>
Group Firewall Setup
Status Assign firewall context (FIREWALL_ASSIGN_CONTEXT)
Elapsed time 62 Minutes
Average duration 184.45 Minutes
```
### Steps to Reproduce
1. `terraform plan -out=wip`
2. `terraform apply "wip"`
| infrastructure | ibm hardware firewall shared creation timeout hi there i hit a timeout when provisioning a hardware firewall shared instance for a bare metal server not sure if this is a regular occurrence or just a once off terraform version terraform v terraform provider registry terraform io ibm cloud ibm affected resource s ibm hardware firewall shared if this issue appears to affect multiple resources it may be an issue with terraform s core so please mention this terraform configuration files hcl resource ibm compute bare metal my bm test hourly billing false hostname my bm test domain my lab com datacenter package key name dual intel xeon processor scalable family drives process key name intel intel xeon os key name os rhel bit memory disk key names network speed public bandwidth public vlan id private vlan id ssh key ids resource ibm hardware firewall shared my test fw firewall type hardware firewall hardware instance id ibm compute bare metal my bm test id expected behavior apply waits until the firewall is created successfully or throws an error returned by the api actual behavior resource creation timed out resulting in an error ibm hardware firewall shared my test fw creating ibm hardware firewall shared my test fw still creating ibm hardware firewall shared my test fw still creating error timeout while waiting for state to become completed last state pending timeout when i navigate to the device s page there is a pending firewall setup transaction id started group firewall setup status assign firewall context firewall assign context elapsed time minutes average duration minutes steps to reproduce terraform plan out wip terraform apply wip | 1 |
10,626 | 8,656,573,155 | IssuesEvent | 2018-11-27 18:50:35 | dotnet/corefx | https://api.github.com/repos/dotnet/corefx | closed | Cleaning up ETL sessions | area-Infrastructure test enhancement | On lab machines during CoreFX runs we will occasionally start to see errors like this. I investigated this and the problem is that the System process is still holding a handle to an ETL file in that workspace. When we then try to delete the workspace we can't because that handle prevents deletion. We need to do a better job of cleaning up our ETL sessions. Since test crashes and timeouts are mostly what cause this we should probably handle this issue at the run level and not the harness level.
@billwert @brianrob
```ERROR: [WS-CLEANUP] Cannot delete workspace: remote file operation failed: C:\J\w\perf_windows_---356c2fc4 at hudson.remoting.Channel@7fecbfe2:JNLP4-connect connection from 131.107.147.138/131.107.147.138:5616: java.io.IOException: Unable to delete 'C:\J\w\perf_windows_---356c2fc4\artifacts\bin\tests\System.Text.Encoding.Performance.Tests\netcoreapp-Windows_NT-Release-x64'. Tried 3 times (of a maximum of 3) waiting 0.1 sec between attempts.``` | 1.0 | Cleaning up ETL sessions - On lab machines during CoreFX runs we will occasionally start to see errors like this. I investigated this and the problem is that the System process is still holding a handle to an ETL file in that workspace. When we then try to delete the workspace we can't because that handle prevents deletion. We need to do a better job of cleaning up our ETL sessions. Since test crashes and timeouts are mostly what cause this we should probably handle this issue at the run level and not the harness level.
@billwert @brianrob
```ERROR: [WS-CLEANUP] Cannot delete workspace: remote file operation failed: C:\J\w\perf_windows_---356c2fc4 at hudson.remoting.Channel@7fecbfe2:JNLP4-connect connection from 131.107.147.138/131.107.147.138:5616: java.io.IOException: Unable to delete 'C:\J\w\perf_windows_---356c2fc4\artifacts\bin\tests\System.Text.Encoding.Performance.Tests\netcoreapp-Windows_NT-Release-x64'. Tried 3 times (of a maximum of 3) waiting 0.1 sec between attempts.``` | infrastructure | cleaning up etl sessions on lab machines during corefx runs we will occasionally start to see errors like this i investigated this and the problem is that the system process is still holding a handle to an etl file in that workspace when we then try to delete the workspace we can t because that handle prevents deletion we need to do a better job of cleaning up our etl sessions since test crashes and timeouts are mostly what cause this we should probably handle this issue at the run level and not the harness level billwert brianrob error cannot delete workspace remote file operation failed c j w perf windows at hudson remoting channel connect connection from java io ioexception unable to delete c j w perf windows artifacts bin tests system text encoding performance tests netcoreapp windows nt release tried times of a maximum of waiting sec between attempts | 1 |
7,083 | 6,755,945,343 | IssuesEvent | 2017-10-24 03:53:01 | marklogic-community/marklogic-spring-batch | https://api.github.com/repos/marklogic-community/marklogic-spring-batch | opened | Recursively traverse a directory for files | infrastructure | Need to modify the file reader to run this type of behavior
Need to ingest CSV files recursively down the file directory | 1.0 | Recursively traverse a directory for files - Need to modify the file reader to run this type of behavior
Need to ingest CSV files recursively down the file directory | infrastructure | recursively traverse a directory for files need to modify the file reader to run this type of behavior need to ingest csv files recursively down the file directory | 1 |
175,028 | 13,529,383,661 | IssuesEvent | 2020-09-15 18:12:25 | astropy/specutils | https://api.github.com/repos/astropy/specutils | opened | Run bandit on specutils and integrate it into the CI | testing | [Bandit](https://pypi.org/project/bandit/) is a now I think fairly standard tool for auditing Python packages for known security issues. We should try running that on specutils to make sure there aren't issues, and if not set it up to run on new PRs as part of the CI. (There's a pretty much ready-to-go github action for this it looks like, at least based on what I see @pllim implemented in https://github.com/spacetelescope/synphot_refactor). | 1.0 | Run bandit on specutils and integrate it into the CI - [Bandit](https://pypi.org/project/bandit/) is a now I think fairly standard tool for auditing Python packages for known security issues. We should try running that on specutils to make sure there aren't issues, and if not set it up to run on new PRs as part of the CI. (There's a pretty much ready-to-go github action for this it looks like, at least based on what I see @pllim implemented in https://github.com/spacetelescope/synphot_refactor). | non_infrastructure | run bandit on specutils and integrate it into the ci is a now i think fairly standard tool for auditing python packages for known security issues we should try running that on specutils to make sure there aren t issues and if not set it up to run on new prs as part of the ci there s a pretty much ready to go github action for this it looks like at least based on what i see pllim implemented in | 0 |
214,656 | 16,602,858,661 | IssuesEvent | 2021-06-01 22:10:17 | links-lang/links | https://api.github.com/repos/links-lang/links | closed | Travis build fails due to Sphinx requiring Python >= 3.6 | testsuite | Travis currently [fails](https://travis-ci.org/github/links-lang/links/builds/770921976#L848) at `make doc` on master because the `sphinx` Python library seems to have started using features that are only available in Python >= 3.6.
There are several remedies:
1. Try to hard-code using an older version of `sphinx`.
2. Manually request installation of Python 3.6 (Ubuntu Xenial/16.04, which we currently use in Travis, has a default version of 3.5 when installing "python3", but Python 3.6 can be requested manually).
3. Update our Travis bots to use Ubuntu Bionic/18.04, which installs Python 3.6 as default.
The last option seems favorable to me, as we've had several problems in the past due do using an ancient Ubuntu version. However, this would mean that we never make the requirement for Python >= 3.6 explicit anywhere. I have verified in an experiment that simply raising the Ubuntu version in Travis from Xenial to Bionic fixes the problem, without needing to change anything else in the Travis config. | 1.0 | Travis build fails due to Sphinx requiring Python >= 3.6 - Travis currently [fails](https://travis-ci.org/github/links-lang/links/builds/770921976#L848) at `make doc` on master because the `sphinx` Python library seems to have started using features that are only available in Python >= 3.6.
There are several remedies:
1. Try to hard-code using an older version of `sphinx`.
2. Manually request installation of Python 3.6 (Ubuntu Xenial/16.04, which we currently use in Travis, has a default version of 3.5 when installing "python3", but Python 3.6 can be requested manually).
3. Update our Travis bots to use Ubuntu Bionic/18.04, which installs Python 3.6 as default.
The last option seems favorable to me, as we've had several problems in the past due do using an ancient Ubuntu version. However, this would mean that we never make the requirement for Python >= 3.6 explicit anywhere. I have verified in an experiment that simply raising the Ubuntu version in Travis from Xenial to Bionic fixes the problem, without needing to change anything else in the Travis config. | non_infrastructure | travis build fails due to sphinx requiring python travis currently at make doc on master because the sphinx python library seems to have started using features that are only available in python there are several remedies try to hard code using an older version of sphinx manually request installation of python ubuntu xenial which we currently use in travis has a default version of when installing but python can be requested manually update our travis bots to use ubuntu bionic which installs python as default the last option seems favorable to me as we ve had several problems in the past due do using an ancient ubuntu version however this would mean that we never make the requirement for python explicit anywhere i have verified in an experiment that simply raising the ubuntu version in travis from xenial to bionic fixes the problem without needing to change anything else in the travis config | 0 |
472,600 | 13,627,916,196 | IssuesEvent | 2020-09-24 13:15:32 | googleapis/java-monitoring | https://api.github.com/repos/googleapis/java-monitoring | closed | Add support for Cloud Monitoring API v3 Client Library | api: monitoring priority: p2 type: bug | Currently the monitoring client does not provide support for executing an MQL query. To execute an MQL query, the following library is necessary.
```
<dependency>
<groupId>com.google.apis</groupId>
<artifactId>google-api-services-monitoring</artifactId>
<version>{google-api-services-monitoring.version}</version>
</dependency>
```
./cc @chingor13 | 1.0 | Add support for Cloud Monitoring API v3 Client Library - Currently the monitoring client does not provide support for executing an MQL query. To execute an MQL query, the following library is necessary.
```
<dependency>
<groupId>com.google.apis</groupId>
<artifactId>google-api-services-monitoring</artifactId>
<version>{google-api-services-monitoring.version}</version>
</dependency>
```
./cc @chingor13 | non_infrastructure | add support for cloud monitoring api client library currently monitoring client does not provide support to execute mql query for execute a mql query following library is necessary com google apis google api services monitoring google api services monitoring version cc | 0 |
21,013 | 14,274,398,075 | IssuesEvent | 2020-11-22 03:40:20 | google/iree | https://api.github.com/repos/google/iree | closed | Setup clang-tidy github action | help wanted infrastructure | We should be able to get clang-tidy running on github actions along with the linters. The compile-commands json we generate for ninja should be all we need.
Locally, I'm running clang-tidy like this:
```
clang-tidy d:\Dev\iree\iree\vm\bytecode_module.cc \
--export-fixes=- \
--checks=abseil-*,bugprone-*,-bugprone-exception-escape,google-*,misc-*,-misc-unused-parameters,performance-* \
-p=D:\Dev\iree-build\
```
There is likely a github action already on the market we can plug in and use, otherwise it should be pretty easy to do ourselves for the output. Ideally there's an action that would apply the fixes from clang-tidy (using [github actions](https://developer.github.com/v3/checks/runs/#actions-object):

| 1.0 | Setup clang-tidy github action - We should be able to get clang-tidy running on github actions along with the linters. The compile-commands json we generate for ninja should be all we need.
Locally, I'm running clang-tidy like this:
```
clang-tidy d:\Dev\iree\iree\vm\bytecode_module.cc \
--export-fixes=- \
--checks=abseil-*,bugprone-*,-bugprone-exception-escape,google-*,misc-*,-misc-unused-parameters,performance-* \
-p=D:\Dev\iree-build\
```
There is likely a github action already on the market we can plug in and use, otherwise it should be pretty easy to do ourselves for the output. Ideally there's an action that would apply the fixes from clang-tidy (using [github actions](https://developer.github.com/v3/checks/runs/#actions-object):

| infrastructure | setup clang tidy github action we should be able to get clang tidy running on github actions along with the linters the compile commands json we generate for ninja should be all we need locally i m running clang tidy like this clang tidy d dev iree iree vm bytecode module cc export fixes checks abseil bugprone bugprone exception escape google misc misc unused parameters performance p d dev iree build there is likely a github action already on the market we can plug in and use otherwise it should be pretty easy to do ourselves for the output ideally there s an action that would apply the fixes from clang tidy using | 1 |
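Each row above pairs a raw issue title and body with a lowercased, punctuation-stripped `text` column. The dataset's actual cleaning pipeline is not shown here; the following is a rough sketch of how such a column could be derived, where the function name and exact rules (letters-only, URLs dropped, whitespace collapsed) are assumptions inferred from the visible rows:

```python
import re

def normalize_issue_text(title: str, body: str) -> str:
    """Hypothetical reconstruction of the cleaned `text` column:
    lowercase, drop URLs, keep only letters, collapse whitespace."""
    combined = f"{title} {body}".lower()
    combined = re.sub(r"https?://\S+", " ", combined)  # drop URLs
    combined = re.sub(r"[^a-z\s]", " ", combined)      # keep letters only
    return re.sub(r"\s+", " ", combined).strip()       # collapse whitespace
```

This matches the observable behavior in the rows above, e.g. version numbers disappear ("Terraform v0.14.5" becomes "terraform v") and identifiers split on underscores ("ibm_hardware_firewall_shared" becomes "ibm hardware firewall shared"), but it is only a sketch, not the pipeline that produced this dataset.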