| Unnamed: 0 (int64, 0-832k) | id (float64, 2.49B-32.1B) | type (string, 1 class) | created_at (string, length 19) | repo (string, length 7-112) | repo_url (string, length 36-141) | action (string, 3 classes) | title (string, length 2-665) | labels (string, length 4-554) | body (string, length 3-235k) | index (string, 6 classes) | text_combine (string, length 96-235k) | label (string, 2 classes) | text (string, length 96-196k) | binary_label (int64, 0-1) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
174,248 | 27,606,003,106 | IssuesEvent | 2023-03-09 13:02:56 | Kotlin/kotlinx.coroutines | https://api.github.com/repos/Kotlin/kotlinx.coroutines | closed | Consider deprecating or changing the behaviour of CoroutineContext.isActive | enhancement design breaking change for 1.7 | According to the [doc](https://kotlin.github.io/kotlinx.coroutines/kotlinx-coroutines-core/kotlinx.coroutines/is-active.html), `isActive` has the following property:
>The coroutineContext.isActive expression is a shortcut for coroutineContext[Job]?.isActive == true. See [Job.isActive](https://kotlin.github.io/kotlinx.coroutines/kotlinx-coroutines-core/kotlinx.coroutines/-job/is-active.html).
It means that, if the `Job` is not present, `isActive` always returns `false`.
We have multiple reports that such behaviour can be error-prone when used with non-`kotlinx.coroutines` entry points, such as Ktor and `suspend fun main`, because it is inconsistent with the overall contract:
>(Job) has not been completed and was not cancelled yet
`CoroutineContext.isActive` predates both `CoroutineScope` (which should always have a `Job` in it, if it's not `GlobalScope`) and `job` extension, so it may be the case that it can be safely deprecated.
Basically, we have three options:
* Do nothing, leave things as is. It doesn't solve the original issue, but also doesn't introduce any potentially breaking changes
* Deprecate `CoroutineContext.isActive`. Such change has multiple potential downsides
* Its only possible replacement is `this.job.isActive`, but this replacement is not equivalent to the original method -- `.job` throws an exception for contexts without a `Job`. An absence of replacement can be too disturbing as [a lot of code](https://grep.app/search?q=context.isActive&filter[lang][0]=Kotlin) relies on a perfectly fine `ctxWithJob.isActive`
* Code that relies on `.job.isActive` can no longer be called from such entry points safely
* Change the default behaviour -- return `true`. It also "fixes" such patterns as `GlobalScope.isActive` but basically is a breaking change | 1.0 | Consider deprecating or changing the behaviour of CoroutineContext.isActive - According to the [doc](https://kotlin.github.io/kotlinx.coroutines/kotlinx-coroutines-core/kotlinx.coroutines/is-active.html), `isActive` has the following property:
>The coroutineContext.isActive expression is a shortcut for coroutineContext[Job]?.isActive == true. See [Job.isActive](https://kotlin.github.io/kotlinx.coroutines/kotlinx-coroutines-core/kotlinx.coroutines/-job/is-active.html).
It means that, if the `Job` is not present, `isActive` always returns `false`.
We have multiple reports that such behaviour can be error-prone when used with non-`kotlinx.coroutines` entry points, such as Ktor and `suspend fun main`, because it is inconsistent with the overall contract:
>(Job) has not been completed and was not cancelled yet
`CoroutineContext.isActive` predates both `CoroutineScope` (which should always have a `Job` in it, if it's not `GlobalScope`) and `job` extension, so it may be the case that it can be safely deprecated.
Basically, we have three options:
* Do nothing, leave things as is. It doesn't solve the original issue, but also doesn't introduce any potentially breaking changes
* Deprecate `CoroutineContext.isActive`. Such change has multiple potential downsides
* Its only possible replacement is `this.job.isActive`, but this replacement is not equivalent to the original method -- `.job` throws an exception for contexts without a `Job`. An absence of replacement can be too disturbing as [a lot of code](https://grep.app/search?q=context.isActive&filter[lang][0]=Kotlin) relies on a perfectly fine `ctxWithJob.isActive`
* Code that relies on `.job.isActive` can no longer be called from such entry points safely
* Change the default behaviour -- return `true`. It also "fixes" such patterns as `GlobalScope.isActive` but basically is a breaking change | non_infrastructure | consider deprecating or changing the behaviour of coroutinecontext isactive according to the isactive has the following property the coroutinecontext isactive expression is a shortcut for coroutinecontext isactive true see it means that if the job is not present isactive always returns false we have multiple reports that such behaviour can be error prone when used with non kotlinx coroutines entry points such as ktor and suspend fun main because it is inconsistent with the overall contract job has not been completed and was not cancelled yet coroutinecontext isactive predates both coroutinescope which should always have a job in it if it s not globalscope and job extension so it may be the case that it can be safely deprecated basically we have three options do nothing left things as is it doesn t solve the original issue but also doesn t introduce any potentially breaking changes deprecate coroutinecontext isactive such change has multiple potential downsides its only possible replacement is this job isactive but this replacement is not equivalent to the original method job throws an exception for contexts without a job an absence of replacement can be too disturbing as kotlin rely on a perfectly fine ctxwithjob isactive code that relies on job isactive no longer can be called from such entry points safely change the default behaviour return true it also fixes such patterns as globalscope isactive but basically is a breaking change | 0 |
31,905 | 26,230,972,289 | IssuesEvent | 2023-01-05 00:08:34 | iree-org/iree | https://api.github.com/repos/iree-org/iree | closed | Schedule release candidate CI workflow picks up commits with failed checks | bug 🐞 infrastructure | ### What happened?
https://github.com/iree-org/iree/actions/runs/3469376607/jobs/5796297215
picked a failing commit (https://github.com/iree-org/iree/commit/3ff9d517054313a63d20012dcf9510e307915df1) as the "last green commit"
The code to pick up the commit is at https://github.com/talentpair/last-green-commit-action/blob/d95cfa836b22ef047dd0a8ddb1e6d9567982d702/src/main.ts#L34
GitHub REST API returns the `check_suites` via https://docs.github.com/en/rest/checks/suites#list-check-suites-for-a-git-reference
In the example above, it returns a list of check_suites with 13 checks, and some of them are with status `queued` and some of them are with the status `completed`
https://api.github.com/repos/iree-org/iree/check-suites/9307573321
https://api.github.com/repos/iree-org/iree/check-suites/9316620900
### Steps to reproduce your issue
_No response_
### What component(s) does this issue relate to?
Other
### Version information
_No response_
### Additional context
_No response_ | 1.0 | Schedule release candidate CI workflow picks up commits with failed checks - ### What happened?
https://github.com/iree-org/iree/actions/runs/3469376607/jobs/5796297215
picked a failing commit (https://github.com/iree-org/iree/commit/3ff9d517054313a63d20012dcf9510e307915df1) as the "last green commit"
The code to pick up the commit is at https://github.com/talentpair/last-green-commit-action/blob/d95cfa836b22ef047dd0a8ddb1e6d9567982d702/src/main.ts#L34
GitHub REST API returns the `check_suites` via https://docs.github.com/en/rest/checks/suites#list-check-suites-for-a-git-reference
In the example above, it returns a list of check_suites with 13 checks, and some of them are with status `queued` and some of them are with the status `completed`
https://api.github.com/repos/iree-org/iree/check-suites/9307573321
https://api.github.com/repos/iree-org/iree/check-suites/9316620900
### Steps to reproduce your issue
_No response_
### What component(s) does this issue relate to?
Other
### Version information
_No response_
### Additional context
_No response_ | infrastructure | schedule release candidate ci workflow picks up commits with failed checks what happened picked a failing commit as the last green commit the code to pick up the commit is at github rest api returns the check suites via in the example above it returns a list of check suites with checks and some of them are with status queued and some of them are with the status completed steps to reproduce your issue no response what component s does this issue relate to other version information no response additional context no response | 1 |
9,232 | 7,879,406,414 | IssuesEvent | 2018-06-26 13:20:20 | APSIMInitiative/ApsimX | https://api.github.com/repos/APSIMInitiative/ApsimX | closed | Grid copy and paste seems to be broken | bug interface/infrastructure | Attempts to paste into a grid cause ApsimX to crash. The problem appears to be a null reference exception. This should be fixed, and we should also have a more robust exception handling mechanism, to try to handle just failures more gracefully (and ideally without data loss). | 1.0 | Grid copy and paste seems to be broken - Attempts to paste into a grid cause ApsimX to crash. The problem appears to be a null reference exception. This should be fixed, and we should also have a more robust exception handling mechanism, to try to handle just failures more gracefully (and ideally without data loss). | infrastructure | grid copy and paste seems to be broken attempts to paste into a grid cause apsimx to crash the problem appears to be a null reference exception this should be fixed and we should also have a more robust exception handling mechanism to try to handle just failures more gracefully and ideally without data loss | 1 |
17,369 | 12,321,682,131 | IssuesEvent | 2020-05-13 09:06:35 | microsoft/WindowsTemplateStudio | https://api.github.com/repos/microsoft/WindowsTemplateStudio | closed | VS Emulator: Recreate app | Can Close Out Soon Infrastructure enhancement | Add a button in the generation details that recreates the same app (user selection and folder destination).
This will be very useful because when we are creating templates there are times when you fix something in the template and you want to generate the same app to check if the issue is fixed or not and you always have to open the wizard and set the same user selection. | 1.0 | VS Emulator: Recreate app - Add a button in the generation details that recreates the same app (user selection and folder destination).
This will be very useful because when we are creating templates there are times when you fix something in the template and you want to generate the same app to check if the issue is fixed or not and you always have to open the wizard and set the same user selection. | infrastructure | vs emulator recreate app add a button in the generation details that recreates the same app user selection and folder destination this will be very useful because when we are creating templates there are times when you fix something in the template and you want to generate the same app to check if the issue is fixed or not and you always have to open the wizard and set the same user selection | 1 |
11,330 | 9,104,771,880 | IssuesEvent | 2019-02-20 19:02:27 | square/misk-web | https://api.github.com/repos/square/misk-web | closed | Rework misk-web repo to have a single Rush managed directory | enhancement infrastructure | Currently we have 2 Rush managed directories
- `examples`
- `misk-web/web/packages`
A single one will reduce the legwork to bumping and publishing packages since version updates will be updated across the `@misk/*` packages and the example code.
| 1.0 | Rework misk-web repo to have a single Rush managed directory - Currently we have 2 Rush managed directories
- `examples`
- `misk-web/web/packages`
A single one will reduce the legwork to bumping and publishing packages since version updates will be updated across the `@misk/*` packages and the example code.
| infrastructure | rework misk web repo to have a single rush managed directory currently we have rush managed directories examples misk web web packages a single one will reduce the legwork to bumping and publishing packages since version updates will be updated across the misk packages and the example code | 1 |
14,962 | 11,272,264,892 | IssuesEvent | 2020-01-14 14:35:26 | approvals/ApprovalTests.cpp | https://api.github.com/repos/approvals/ApprovalTests.cpp | closed | Naming of targets in third_party is non-standard | bug infrastructure | If the Catch2 project is included via CMake's `add_subdirectory()` or `FetchContent`, then the following targets are created, as far as I can tell:
* `Catch2`
* `Catch2::Catch2`
Unfortunately I didn't appreciate the significance of this when creating third_party/catch2/CMakeLists.txt - which creates the target `catch`
The name `Catch2::Catch2` is preferred, as if that is missing, a warning is issued when CMake runs, making it easier to track down missing dependencies.
I think it should be possible to create aliases in the CMake files in third_party to retain the old naming, in case any users are already depending on the third_party target-names I created earlier. | 1.0 | Naming of targets in third_party is non-standard - If the Catch2 project is included via CMake's `add_subdirectory()` or `FetchContent`, then the following targets are created, as far as I can tell:
* `Catch2`
* `Catch2::Catch2`
Unfortunately I didn't appreciate the significance of this when creating third_party/catch2/CMakeLists.txt - which creates the target `catch`
The name `Catch2::Catch2` is preferred, as if that is missing, a warning is issued when CMake runs, making it easier to track down missing dependencies.
I think it should be possible to create aliases in the CMake files in third_party to retain the old naming, in case any users are already depending on the third_party target-names I created earlier. | infrastructure | naming of targets in third party is non standard if the project is included via cmake s add subdirectory or fetchcontent then the following targets are created as far as i can tell unfortunately i didn t appreciate the significance of this when creating third party cmakelists txt which creates the target catch the name is preferred as if that is missing a warning is issued when cmake runs making it easier to track down missing dependencies i think it should be possible to create aliases in the cmake files in third party to retain the old naming in case any users are already depending on the third party target names i created earlier | 1 |
18,068 | 12,748,414,342 | IssuesEvent | 2020-06-26 20:04:17 | hyphacoop/organizing | https://api.github.com/repos/hyphacoop/organizing | opened | Re-evaluate external services | [priority-★☆☆] wg:infrastructure | <sup>_This initial comment is collaborative and open to modification by all._</sup>
## Task Summary
🎟️ **Re-ticketed from:** #
🗣 **Loomio:** N/A
📅 **Due date:** end-Oct
🎯 **Success criteria:** Have a plan for continued hosting (or deprecation) of each service listed below.
In our discussions, we planned to not migrate these services:
>- VM8: email 💧💾💾💾
>- VM9: matrix + whatsapp bridge + chatbot 💧💧💾
>- VM4: nextcloud + onlyoffice 💧💧💾💾
>- VM10: android vm 💧
We should re-evaluate the long-term plan for these services, and deprecate ones that are no longer important.
## To Do
- [ ] Discuss and have a plan for each service
| 1.0 | Re-evaluate external services - <sup>_This initial comment is collaborative and open to modification by all._</sup>
## Task Summary
🎟️ **Re-ticketed from:** #
🗣 **Loomio:** N/A
📅 **Due date:** end-Oct
🎯 **Success criteria:** Have a plan for continued hosting (or deprecation) of each service listed below.
In our discussions, we planned to not migrate these services:
>- VM8: email 💧💾💾💾
>- VM9: matrix + whatsapp bridge + chatbot 💧💧💾
>- VM4: nextcloud + onlyoffice 💧💧💾💾
>- VM10: android vm 💧
We should re-evaluate the long-term plan for these services, and deprecate ones that are no longer important.
## To Do
- [ ] Discuss and have a plan for each service
| infrastructure | re evaluate external services this initial comment is collaborative and open to modification by all task summary 🎟️ re ticketed from 🗣 loomio n a 📅 due date end oct 🎯 success criteria have a plan for continued hosting or deprecation of each service listed below in our discussions we planned to not migrate these services email 💧💾💾💾 matrix whatsapp bridge chatbot 💧💧💾 nextcloud onlyoffice 💧💧💾💾 android vm 💧 we should re evaluate the long term plan for these services and deprecate ones that are no longer important to do discuss and have a plan for each service | 1 |
831,539 | 32,051,978,481 | IssuesEvent | 2023-09-23 17:10:02 | Hamlib/Hamlib | https://api.github.com/repos/Hamlib/Hamlib | opened | FT-DX101MP ST command fix | bug priority | To maintain compatibility with older firmware need to test if ST command is available.
Send ST; and if ?; is received command is not available.
| 1.0 | FT-DX101MP ST command fix - To maintain compatibility with older firmware need to test if ST command is available.
Send ST; and if ?; is received command is not available.
| non_infrastructure | ft st command fix to maintain compatibility with older firmware need to test if st command is available send st and if is received command is not available | 0 |
28,656 | 23,422,211,677 | IssuesEvent | 2022-08-13 21:53:45 | oppia/oppia-android | https://api.github.com/repos/oppia/oppia-android | closed | [A11Y Advanced] Lessons tab flow needs to be improved | Type: Improvement Priority: Essential issue_type_infrastructure issue_user_impact_low user_team | Lessons tab flow needs to be improved. Current experience is shown below:
https://user-images.githubusercontent.com/9396084/136244615-eb6b4f3f-e676-485d-9477-579822845039.mp4
| 1.0 | [A11Y Advanced] Lessons tab flow needs to be improved - Lessons tab flow needs to be improved. Current experience is shown below:
https://user-images.githubusercontent.com/9396084/136244615-eb6b4f3f-e676-485d-9477-579822845039.mp4
| infrastructure | lessons tab flow needs to be improved lessons tab flow needs to be improved current experience is shown below | 1 |
417,795 | 12,179,342,906 | IssuesEvent | 2020-04-28 10:30:33 | web-platform-tests/wpt | https://api.github.com/repos/web-platform-tests/wpt | closed | Commits touching many files fail to run in Taskcluster | Taskcluster infra priority:roadmap | https://wpt.fyi/runs?max-count=100&label=beta shows these runs so far this year:

Beta runs are triggered weekly, so many dates are missing here, like all of July, Aug 5 and Aug 19.
To figure out what went wrong, one has to know what the weekly SHA was, and https://wpt.fyi/api/revisions/list?epochs=weekly&num_revisions=100 lists them going back in time.
Then a URL like https://api.github.com/repos/web-platform-tests/wpt/statuses/8561d630fb3c4ede85b33df61f91847a21c1989e will lead to the task group:
https://tools.taskcluster.net/groups/WfP5NQ-ISzahOnNbiJugEQ
This last time, it looks like all tasks failed like this:
```
[taskcluster 2019-08-19 00:00:58.151Z] === Task Starting ===
standard_init_linux.go:190: exec user process caused "argument list too long"
[taskcluster 2019-08-19 00:00:59.121Z] === Task Finished ===
```
For [Aug 5](https://tools.taskcluster.net/groups/Dq8CaooRRreUDE1TBSPcBg) it was the same.
@jgraham any idea why this happens, and if it's disproportionately affecting Beta runs?
Related: https://github.com/web-platform-tests/wpt/issues/14210 | 1.0 | Commits touching many files fail to run in Taskcluster - https://wpt.fyi/runs?max-count=100&label=beta shows these runs so far this year:

Beta runs are triggered weekly, so many dates are missing here, like all of July, Aug 5 and Aug 19.
To figure out what went wrong, one has to know what the weekly SHA was, and https://wpt.fyi/api/revisions/list?epochs=weekly&num_revisions=100 lists them going back in time.
Then a URL like https://api.github.com/repos/web-platform-tests/wpt/statuses/8561d630fb3c4ede85b33df61f91847a21c1989e will lead to the task group:
https://tools.taskcluster.net/groups/WfP5NQ-ISzahOnNbiJugEQ
This last time, it looks like all tasks failed like this:
```
[taskcluster 2019-08-19 00:00:58.151Z] === Task Starting ===
standard_init_linux.go:190: exec user process caused "argument list too long"
[taskcluster 2019-08-19 00:00:59.121Z] === Task Finished ===
```
For [Aug 5](https://tools.taskcluster.net/groups/Dq8CaooRRreUDE1TBSPcBg) it was the same.
@jgraham any idea why this happens, and if it's disproportionately affecting Beta runs?
Related: https://github.com/web-platform-tests/wpt/issues/14210 | non_infrastructure | commits touching many files fail to run in taskcluster shows these runs so far this year beta runs are triggered weekly so many dates are missing here like all of july aug and aug to figure out what went wrong one has to know what the weekly sha was and lists them going back in time then a url like will lead to the task group this last time it looks like all tasks failed like this task starting standard init linux go exec user process caused argument list too long task finished for it was the same jgraham any idea why this happens and if it s disproportionately affecting beta runs related | 0 |
183,174 | 21,714,470,828 | IssuesEvent | 2022-05-10 16:30:13 | svg-GHC-2/test_django.nv | https://api.github.com/repos/svg-GHC-2/test_django.nv | closed | CVE-2016-2513 (Low) detected in Django-1.8.3-py2.py3-none-any.whl - autoclosed | security vulnerability | ## CVE-2016-2513 - Low Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>Django-1.8.3-py2.py3-none-any.whl</b></p></summary>
<p>A high-level Python Web framework that encourages rapid development and clean, pragmatic design.</p>
<p>Library home page: <a href="https://files.pythonhosted.org/packages/a3/e1/0f3c17b1caa559ba69513ff72e250377c268d5bd3e8ad2b22809c7e2e907/Django-1.8.3-py2.py3-none-any.whl">https://files.pythonhosted.org/packages/a3/e1/0f3c17b1caa559ba69513ff72e250377c268d5bd3e8ad2b22809c7e2e907/Django-1.8.3-py2.py3-none-any.whl</a></p>
<p>Path to dependency file: /requirements.txt</p>
<p>Path to vulnerable library: /requirements.txt</p>
<p>
Dependency Hierarchy:
- :x: **Django-1.8.3-py2.py3-none-any.whl** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/svg-GHC-2/test_django.nv/commit/9c82557a12ed8d1bf704180a7d351aa1518ef16c">9c82557a12ed8d1bf704180a7d351aa1518ef16c</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/low_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
The password hasher in contrib/auth/hashers.py in Django before 1.8.10 and 1.9.x before 1.9.3 allows remote attackers to enumerate users via a timing attack involving login requests.
<p>Publish Date: 2016-04-08
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2016-2513>CVE-2016-2513</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>3.1</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: None
- User Interaction: Required
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: None
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://nvd.nist.gov/vuln/detail/CVE-2016-2513">https://nvd.nist.gov/vuln/detail/CVE-2016-2513</a></p>
<p>Release Date: 2016-04-08</p>
<p>Fix Resolution: 1.8.10,1.9.3</p>
</p>
</details>
<p></p>
***
<!-- REMEDIATE-OPEN-PR-START -->
- [ ] Check this box to open an automated fix PR
<!-- REMEDIATE-OPEN-PR-END -->
<!-- <REMEDIATE>{"isOpenPROnVulnerability":false,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"Python","packageName":"Django","packageVersion":"1.8.3","packageFilePaths":["/requirements.txt"],"isTransitiveDependency":false,"dependencyTree":"Django:1.8.3","isMinimumFixVersionAvailable":true,"minimumFixVersion":"1.8.10,1.9.3","isBinary":false}],"baseBranches":["main"],"vulnerabilityIdentifier":"CVE-2016-2513","vulnerabilityDetails":"The password hasher in contrib/auth/hashers.py in Django before 1.8.10 and 1.9.x before 1.9.3 allows remote attackers to enumerate users via a timing attack involving login requests.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2016-2513","cvss3Severity":"low","cvss3Score":"3.1","cvss3Metrics":{"A":"None","AC":"High","PR":"None","S":"Unchanged","C":"Low","UI":"Required","AV":"Network","I":"None"},"extraData":{}}</REMEDIATE> --> | True | CVE-2016-2513 (Low) detected in Django-1.8.3-py2.py3-none-any.whl - autoclosed - ## CVE-2016-2513 - Low Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>Django-1.8.3-py2.py3-none-any.whl</b></p></summary>
<p>A high-level Python Web framework that encourages rapid development and clean, pragmatic design.</p>
<p>Library home page: <a href="https://files.pythonhosted.org/packages/a3/e1/0f3c17b1caa559ba69513ff72e250377c268d5bd3e8ad2b22809c7e2e907/Django-1.8.3-py2.py3-none-any.whl">https://files.pythonhosted.org/packages/a3/e1/0f3c17b1caa559ba69513ff72e250377c268d5bd3e8ad2b22809c7e2e907/Django-1.8.3-py2.py3-none-any.whl</a></p>
<p>Path to dependency file: /requirements.txt</p>
<p>Path to vulnerable library: /requirements.txt</p>
<p>
Dependency Hierarchy:
- :x: **Django-1.8.3-py2.py3-none-any.whl** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/svg-GHC-2/test_django.nv/commit/9c82557a12ed8d1bf704180a7d351aa1518ef16c">9c82557a12ed8d1bf704180a7d351aa1518ef16c</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/low_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
The password hasher in contrib/auth/hashers.py in Django before 1.8.10 and 1.9.x before 1.9.3 allows remote attackers to enumerate users via a timing attack involving login requests.
<p>Publish Date: 2016-04-08
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2016-2513>CVE-2016-2513</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>3.1</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: None
- User Interaction: Required
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: None
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://nvd.nist.gov/vuln/detail/CVE-2016-2513">https://nvd.nist.gov/vuln/detail/CVE-2016-2513</a></p>
<p>Release Date: 2016-04-08</p>
<p>Fix Resolution: 1.8.10,1.9.3</p>
</p>
</details>
<p></p>
***
<!-- REMEDIATE-OPEN-PR-START -->
- [ ] Check this box to open an automated fix PR
<!-- REMEDIATE-OPEN-PR-END -->
<!-- <REMEDIATE>{"isOpenPROnVulnerability":false,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"Python","packageName":"Django","packageVersion":"1.8.3","packageFilePaths":["/requirements.txt"],"isTransitiveDependency":false,"dependencyTree":"Django:1.8.3","isMinimumFixVersionAvailable":true,"minimumFixVersion":"1.8.10,1.9.3","isBinary":false}],"baseBranches":["main"],"vulnerabilityIdentifier":"CVE-2016-2513","vulnerabilityDetails":"The password hasher in contrib/auth/hashers.py in Django before 1.8.10 and 1.9.x before 1.9.3 allows remote attackers to enumerate users via a timing attack involving login requests.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2016-2513","cvss3Severity":"low","cvss3Score":"3.1","cvss3Metrics":{"A":"None","AC":"High","PR":"None","S":"Unchanged","C":"Low","UI":"Required","AV":"Network","I":"None"},"extraData":{}}</REMEDIATE> --> | non_infrastructure | cve low detected in django none any whl autoclosed cve low severity vulnerability vulnerable library django none any whl a high level python web framework that encourages rapid development and clean pragmatic design library home page a href path to dependency file requirements txt path to vulnerable library requirements txt dependency hierarchy x django none any whl vulnerable library found in head commit a href found in base branch main vulnerability details the password hasher in contrib auth hashers py in django before and x before allows remote attackers to enumerate users via a timing attack involving login requests publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity high privileges required none user interaction required scope unchanged impact metrics confidentiality impact low integrity impact none availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution check this box 
to open an automated fix pr isopenpronvulnerability false ispackagebased true isdefaultbranch true packages istransitivedependency false dependencytree django isminimumfixversionavailable true minimumfixversion isbinary false basebranches vulnerabilityidentifier cve vulnerabilitydetails the password hasher in contrib auth hashers py in django before and x before allows remote attackers to enumerate users via a timing attack involving login requests vulnerabilityurl | 0 |
22,043 | 14,972,682,530 | IssuesEvent | 2021-01-27 23:20:53 | neopragma/cobol-check | https://api.github.com/repos/neopragma/cobol-check | closed | Build approval test into the gradle build | enhancement infrastructure | Due to the fact this is a batch process, we can't catch enough errors with unit checks only. Need to set up an approval test using sample Cobol source and test suites and include it in the Gradle build with a dependency on the integration test step. | 1.0 | Build approval test into the gradle build - Due to the fact this is a batch process, we can't catch enough errors with unit checks only. Need to set up an approval test using sample Cobol source and test suites and include it in the Gradle build with a dependency on the integration test step. | infrastructure | build approval test into the gradle build due to the fact this is a batch process we can t catch enough errors with unit checks only need to set up an approval test using sample cobol source and test suites and include it in the gradle build with a dependency on the integration test step | 1 |
253,596 | 19,142,761,689 | IssuesEvent | 2021-12-02 02:02:17 | supabase/supabase | https://api.github.com/repos/supabase/supabase | closed | Realtime Blog Post 12/1/2021 has very bad chart | documentation | # Improve documentation
## Link
https://supabase.com/blog/2021/12/01/realtime-row-level-security-in-postgresql
## Describe the problem
I was quite scared when I saw this graph....
https://supabase.com/images/blog/launch-week-three/realtime-row-level-security-in-postgresql/supabase-realtime-processing-per-subscription.png
## Describe the improvement
Swap the X and Y axis labels (hopefully...)

| 1.0 | Realtime Blog Post 12/1/2021 has very bad chart - # Improve documentation
## Link
https://supabase.com/blog/2021/12/01/realtime-row-level-security-in-postgresql
## Describe the problem
I was quite scared when I saw this graph....
https://supabase.com/images/blog/launch-week-three/realtime-row-level-security-in-postgresql/supabase-realtime-processing-per-subscription.png
## Describe the improvement
Swap the X and Y axis labels (hopefully...)

| non_infrastructure | realtime blog post has very bad chart improve documentation link describe the problem i was quite scared when i saw this graph describe the improvement swap the x and y axis labels hopefully | 0 |
22,642 | 6,278,591,108 | IssuesEvent | 2017-07-18 14:39:39 | eclipse/che | https://api.github.com/repos/eclipse/che | closed | No error notification when trying to create a workspace with wrong name | kind/bug severity/P2 status/code-review team/plugin | While creating a new workspace add to the workspace name "-" symbol. After clicking the Save button nothing changes.

| 1.0 | No error notification when trying to create a workspace with wrong name - While creating a new workspace add to the workspace name "-" symbol. After clicking the Save button nothing changes.

| non_infrastructure | no error notification when trying to create a workspace with wrong name while creating a new workspace add to the workspace name symbol after clicking the save button nothing changes | 0 |
18,309 | 12,889,250,172 | IssuesEvent | 2020-07-13 14:15:53 | ansible/galaxy_ng | https://api.github.com/repos/ansible/galaxy_ng | closed | RPMs: Decide how to package the UI | area/infrastructure priority/high sprint/2 status/ready-for-QE type/enhancement | - Determine the best strategy for packaging the UI
- Produce PyPi and/or NPM packages that can be consumed by the RPM build process
- Coordinate with Evgeni
Subtask of #145 | 1.0 | RPMs: Decide how to package the UI - - Determine the best strategy for packaging the UI
- Produce PyPi and/or NPM packages that can be consumed by the RPM build process
- Coordinate with Evgeni
Subtask of #145 | infrastructure | rpms decide how to package the ui determine the best strategy for packaging the ui produce pypi and or npm packages that can be consumed by the rpm build process coordinate with evgeni subtask of | 1 |
788 | 2,904,916,276 | IssuesEvent | 2015-06-18 20:40:28 | openEXO/cloud-kepler | https://api.github.com/repos/openEXO/cloud-kepler | opened | First draft of Output FITS Structure | in progress infrastructure | There is no documentation for how the output file is structured. | 1.0 | First draft of Output FITS Structure - There is no documentation for how the output file is structured. | infrastructure | first draft of output fits structure there is no documentation for how the output file is structured | 1 |
13,910 | 10,543,505,078 | IssuesEvent | 2019-10-02 15:05:54 | fablabbcn/fablabs.io | https://api.github.com/repos/fablabbcn/fablabs.io | opened | Backstage search differs from frontend view | Infrastructure bug | **Describe the bug**
Some approved labs under the backstage are not appearing as they are on the frontend.
**To Reproduce**
Steps to reproduce the behavior:
Shown in screenshots!
**Expected behavior**
Example: for Denmark, on the front end you see 9 fablabs as approved; on the backstage, you only see 6 and there is no way to find them. I tried all possible search methods and it basically tells me that the labs don't exist, even though their link is working on the platform.
**Screenshots**
frontend:
<img width="594" alt="Screen Shot 2019-10-02 at 9 55 31 AM" src="https://user-images.githubusercontent.com/24419466/66055958-fdf31000-e4fb-11e9-8071-6f2f0aa8e124.png">
backstage:
<img width="560" alt="Screen Shot 2019-10-02 at 9 57 31 AM" src="https://user-images.githubusercontent.com/24419466/66055970-03505a80-e4fc-11e9-9607-ac99d68ffd77.png">
**Desktop (please complete the following information):**
- OS: macOS 10.14.6
- Browser: Chrome
- Version: Version 77.0.3865.90 (Official Build) (64-bit)
**Additional context**
not helping with actual metrics.
| 1.0 | Backstage search differs from frontend view - **Describe the bug**
Some approved labs under the backstage are not appearing as they are on the frontend.
**To Reproduce**
Steps to reproduce the behavior:
Shown in screenshots!
**Expected behavior**
Example: for Denmark, on the front end you see 9 fablabs as approved; on the backstage, you only see 6 and there is no way to find them. I tried all possible search methods and it basically tells me that the labs don't exist, even though their link is working on the platform.
**Screenshots**
frontend:
<img width="594" alt="Screen Shot 2019-10-02 at 9 55 31 AM" src="https://user-images.githubusercontent.com/24419466/66055958-fdf31000-e4fb-11e9-8071-6f2f0aa8e124.png">
backstage:
<img width="560" alt="Screen Shot 2019-10-02 at 9 57 31 AM" src="https://user-images.githubusercontent.com/24419466/66055970-03505a80-e4fc-11e9-9607-ac99d68ffd77.png">
**Desktop (please complete the following information):**
- OS: macOS 10.14.6
- Browser: Chrome
- Version: Version 77.0.3865.90 (Official Build) (64-bit)
**Additional context**
not helping with actual metrics.
| infrastructure | backstage search differs from frontend view describe the bug some approved labs under the backstage are not appearing as they are on the fronted to reproduce steps to reproduce the behavior shown in screenshots expected behavior example for denmark on the front end you see fablabs as approved on the backstage you only see and there is no way to find them try all possible search methods and it basically tells me that the labs don t exist even though their link is working on the platform screenshots frontend img width alt screen shot at am src backstage img width alt screen shot at am src desktop please complete the following information os macos browser chrome version version official build bit additional context not helping with actual metrics | 1 |
238,134 | 18,234,578,319 | IssuesEvent | 2021-10-01 04:22:57 | CoinAlpha/hummingbot | https://api.github.com/repos/CoinAlpha/hummingbot | closed | Documentation for `HangingOrdersTracker` class | documentation | ## Why
The `HangingOrdersTracker` class is one of the two clear components (the other is `APIThrottler`) that we have individualized even in the code that are going to be used as they are and something that we want to make it usable by the community.
## What
Add to developer documentation and include as much information as possible about **how to use** this class. | 1.0 | Documentation for `HangingOrdersTracker` class - ## Why
The `HangingOrdersTracker` class is one of the two clear components (the other is `APIThrottler`) that we have individualized even in the code that are going to be used as they are and something that we want to make it usable by the community.
## What
Add to developer documentation and include as much information as possible about **how to use** this class. | non_infrastructure | documentation for hangingorderstracker class why the hangingorderstracker class is one of the two clear components the other is apithrottler that we have individualized even in the code that are going to be used as they are and something that we want to make it usable by the community what add to developer documentation and include as much information as possible about how to use this class | 0 |
35,645 | 31,930,956,722 | IssuesEvent | 2023-09-19 07:23:20 | SonarSource/sonarlint-visualstudio | https://api.github.com/repos/SonarSource/sonarlint-visualstudio | closed | Fix MEF importing constructors - calls GetService | Type: Task Infrastructure Threading | Sub Group of Ticket https://github.com/SonarSource/sonarlint-visualstudio/issues/4512
Depends on #4859
TODO
- [x] ActiveDocumentLocator
- [x] VsInfoService
- [x] AbsoluteFilePathLocator
- [x] ToolWindowService
- [x] TeamExplorerController
- [x] VcxRequestFactory
- [x] StatusBarNotifier
- [x] InfoBarManager -> Only needs a test to confirm it is free threaded
- [x] IssueLocationActionsSourceProvider
- [x] StatusRequestHandler
- [x] ErrorListHelper
- [x] TaintIssuesSynchronizer | 1.0 | Fix MEF importing constructors - calls GetService - Sub Group of Ticket https://github.com/SonarSource/sonarlint-visualstudio/issues/4512
Depends on #4859
TODO
- [x] ActiveDocumentLocator
- [x] VsInfoService
- [x] AbsoluteFilePathLocator
- [x] ToolWindowService
- [x] TeamExplorerController
- [x] VcxRequestFactory
- [x] StatusBarNotifier
- [x] InfoBarManager -> Only needs a test to confirm it is free threaded
- [x] IssueLocationActionsSourceProvider
- [x] StatusRequestHandler
- [x] ErrorListHelper
- [x] TaintIssuesSynchronizer | infrastructure | fix mef importing constructors calls getservice sub group of ticket depends on todo activedocumentlocator vsinfoservice absolutefilepathlocator toolwindowservice teamexplorercontroller vcxrequestfactory statusbarnotifier infobarmanager only needs a test to confirm it is free threaded issuelocationactionssourceprovider statusrequesthandler errorlisthelper taintissuessynchronizer | 1 |
231,302 | 25,499,103,928 | IssuesEvent | 2022-11-28 01:07:00 | joshbnewton31080/NodeGoat | https://api.github.com/repos/joshbnewton31080/NodeGoat | closed | CVE-2017-16137 (Medium) detected in debug-2.2.0.tgz - autoclosed | security vulnerability | ## CVE-2017-16137 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>debug-2.2.0.tgz</b></p></summary>
<p>small debugging utility</p>
<p>Library home page: <a href="https://registry.npmjs.org/debug/-/debug-2.2.0.tgz">https://registry.npmjs.org/debug/-/debug-2.2.0.tgz</a></p>
<p>Path to dependency file: /package.json</p>
<p>Path to vulnerable library: /node_modules/connect/node_modules/debug/package.json</p>
<p>
Dependency Hierarchy:
- helmet-2.3.0.tgz (Root Library)
- connect-3.4.1.tgz
- :x: **debug-2.2.0.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/joshbnewton31080/NodeGoat/commit/a3c66c1e0636f4caeff5096ac64c1f21ebad3387">a3c66c1e0636f4caeff5096ac64c1f21ebad3387</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
The debug module is vulnerable to regular expression denial of service when untrusted user input is passed into the o formatter. It takes around 50k characters to block for 2 seconds making this a low severity issue.
<p>Publish Date: 2018-06-07
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2017-16137>CVE-2017-16137</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.3</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: Low
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://nvd.nist.gov/view/vuln/detail?vulnId=CVE-2017-16137">https://nvd.nist.gov/view/vuln/detail?vulnId=CVE-2017-16137</a></p>
<p>Release Date: 2018-06-07</p>
<p>Fix Resolution (debug): 2.6.9</p>
<p>Direct dependency fix Resolution (helmet): 3.8.2</p>
</p>
</details>
<p></p>
***
:rescue_worker_helmet: Automatic Remediation is available for this issue | True | CVE-2017-16137 (Medium) detected in debug-2.2.0.tgz - autoclosed - ## CVE-2017-16137 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>debug-2.2.0.tgz</b></p></summary>
<p>small debugging utility</p>
<p>Library home page: <a href="https://registry.npmjs.org/debug/-/debug-2.2.0.tgz">https://registry.npmjs.org/debug/-/debug-2.2.0.tgz</a></p>
<p>Path to dependency file: /package.json</p>
<p>Path to vulnerable library: /node_modules/connect/node_modules/debug/package.json</p>
<p>
Dependency Hierarchy:
- helmet-2.3.0.tgz (Root Library)
- connect-3.4.1.tgz
- :x: **debug-2.2.0.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/joshbnewton31080/NodeGoat/commit/a3c66c1e0636f4caeff5096ac64c1f21ebad3387">a3c66c1e0636f4caeff5096ac64c1f21ebad3387</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
The debug module is vulnerable to regular expression denial of service when untrusted user input is passed into the o formatter. It takes around 50k characters to block for 2 seconds making this a low severity issue.
<p>Publish Date: 2018-06-07
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2017-16137>CVE-2017-16137</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.3</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: Low
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://nvd.nist.gov/view/vuln/detail?vulnId=CVE-2017-16137">https://nvd.nist.gov/view/vuln/detail?vulnId=CVE-2017-16137</a></p>
<p>Release Date: 2018-06-07</p>
<p>Fix Resolution (debug): 2.6.9</p>
<p>Direct dependency fix Resolution (helmet): 3.8.2</p>
</p>
</details>
<p></p>
***
:rescue_worker_helmet: Automatic Remediation is available for this issue | non_infrastructure | cve medium detected in debug tgz autoclosed cve medium severity vulnerability vulnerable library debug tgz small debugging utility library home page a href path to dependency file package json path to vulnerable library node modules connect node modules debug package json dependency hierarchy helmet tgz root library connect tgz x debug tgz vulnerable library found in head commit a href found in base branch master vulnerability details the debug module is vulnerable to regular expression denial of service when untrusted user input is passed into the o formatter it takes around characters to block for seconds making this a low severity issue publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact low for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution debug direct dependency fix resolution helmet rescue worker helmet automatic remediation is available for this issue | 0 |
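The record above describes CVE-2017-16137, a regular-expression denial of service in the `debug` module's `o` formatter. The sketch below is not the module's actual JavaScript code — it is a generic Python illustration of the catastrophic-backtracking mechanism behind such ReDoS bugs, plus a length-cap mitigation; the pattern and the 1000-character cap are illustrative assumptions (the actual fix is upgrading `debug` to 2.6.9 as the record states).

```python
import re

# Not the pattern from `debug` -- just a classic catastrophically
# backtracking regex: nested quantifiers over the same character. On a
# near-miss input the engine tries exponentially many ways to split the
# run of "a"s before finally failing.
EVIL = re.compile(r"(a+)+$")

matching = EVIL.search("a" * 18)           # succeeds immediately
near_miss = EVIL.search("a" * 18 + "!")    # exponential backtracking, then None

# One common mitigation: bound untrusted input before it reaches the
# regex, so the backtracking work stays small regardless of input size.
MAX_LEN = 1000
def safe_search(pattern: re.Pattern, text: str):
    return pattern.search(text[:MAX_LEN])
```

Lengthening the near-miss input makes the failing search dramatically slower, which is why the advisory quotes ~50k characters blocking for ~2 seconds.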
33,730 | 27,759,902,765 | IssuesEvent | 2023-03-16 07:20:10 | nilearn/nilearn | https://api.github.com/repos/nilearn/nilearn | closed | Document all GitHub Actions workflows | Infrastructure Developer Experience | The [README.md](https://github.com/nilearn/nilearn/blob/main/.github/workflows/README.md) in `.github/workflows` only documents the documentation build workflow. It can be useful to comprehensively document all the GiHub Actions workflows here.
Linking comments: https://github.com/nilearn/nilearn/pull/3536#pullrequestreview-1315426922 and https://github.com/nilearn/nilearn/pull/3536#issuecomment-1446297374 | 1.0 | Document all GitHub Actions workflows - The [README.md](https://github.com/nilearn/nilearn/blob/main/.github/workflows/README.md) in `.github/workflows` only documents the documentation build workflow. It can be useful to comprehensively document all the GiHub Actions workflows here.
Linking comments: https://github.com/nilearn/nilearn/pull/3536#pullrequestreview-1315426922 and https://github.com/nilearn/nilearn/pull/3536#issuecomment-1446297374 | infrastructure | document all github actions workflows the in github workflows only documents the documentation build workflow it can be useful to comprehensively document all the gihub actions workflows here linking comments and | 1 |
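A check like the one the nilearn issue above asks for — "comprehensively document all the GitHub Actions workflows" in `.github/workflows/README.md` — can be automated. The sketch below is a hypothetical helper, not nilearn tooling; it only checks that each workflow file name is mentioned somewhere in that README, and the demo runs against a synthetic layout.

```python
import tempfile
from pathlib import Path

def undocumented_workflows(repo_root: Path) -> list:
    """Workflow files under .github/workflows that the local README.md
    never mentions by file name."""
    wf_dir = repo_root / ".github" / "workflows"
    readme = wf_dir / "README.md"
    text = readme.read_text() if readme.exists() else ""
    return [wf.name
            for wf in sorted(wf_dir.glob("*.yml")) + sorted(wf_dir.glob("*.yaml"))
            if wf.name not in text]

# Demo against a synthetic repository layout.
root = Path(tempfile.mkdtemp())
wf = root / ".github" / "workflows"
wf.mkdir(parents=True)
(wf / "docs.yml").write_text("name: docs\n")
(wf / "tests.yml").write_text("name: tests\n")
(wf / "README.md").write_text("# Workflows\n\n`docs.yml` builds the documentation.\n")
missing = undocumented_workflows(root)   # tests.yml is not documented
```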
75,175 | 25,569,390,607 | IssuesEvent | 2022-11-30 16:30:22 | department-of-veterans-affairs/va.gov-cms | https://api.github.com/repos/department-of-veterans-affairs/va.gov-cms | opened | CMS: Explore behavior where banners can be published but not assigned to a system. | Defect Needs refining | ## Describe the defect
There are currently ~100 banner alerts for "Website coming soon" that are published, but that are not assigned to a system (NOTE, I've only checked a few, the assumption is that this is the case for all.) This shouldn't be possible, the assigned system is a required field.
A couple of examples:
http://prod.cms.va.gov/va-hines-health-care/vamc-banner-alert/2021-09-22/website-coming-soon-not-the-official-va-hines-health-care-website
http://prod.cms.va.gov/va-pittsburgh-health-care/vamc-banner-alert/2021-04-07/website-coming-soon-not-the-official-va-houston-health-care-website
> Hunch from Swirt:
> Then my next hunch is that it is actually set, but something about the winnower or the view is not making it appear as set..... though if it were truly set, it would show on the FE. This definitely needs a ticket for more investigation. There is some logic on that select list that disables items that are not within your section. It may run afoul for admins who have no section.
## To Reproduce
Steps to reproduce the behavior:
1. Go to '...'
2. Click on '....'
3. Scroll down to '....'
4. See error
## AC / Expected behavior
A clear and concise description of what you expected to happen.
## Screenshots
If applicable, add screenshots to help explain your problem.
## Additional context
Add any other context about the problem here. Reach out to the Product Managers to determine if it should be escalated as critical (prevents users from accomplishing their work with no known workaround and needs to be addressed within 2 business days).
## Desktop (please complete the following information if relevant, or delete)
- OS: [e.g. iOS]
- Browser [e.g. chrome, safari]
- Version [e.g. 22]
## Labels
(You can delete this section once it's complete)
- [x] Issue type (red) (defaults to "Defect")
- [ ] CMS subsystem (green)
- [ ] CMS practice area (blue)
- [x] CMS workstream (orange) (not needed for bug tickets)
- [ ] CMS-supported product (black)
### CMS Team
Please check the team(s) that will do this work.
- [ ] `Program`
- [ ] `Platform CMS Team`
- [ ] `Sitewide Crew`
- [ ] `⭐️ Sitewide CMS`
- [ ] `⭐️ Public Websites`
- [x] `⭐️ Facilities`
- [ ] `⭐️ User support`
| 1.0 | CMS: Explore behavior where banners can be published but not assigned to a system. - ## Describe the defect
There are currently ~100 banner alerts for "Website coming soon" that are published, but that are not assigned to a system (NOTE, I've only checked a few, the assumption is that this is the case for all.) This shouldn't be possible, the assigned system is a required field.
A couple of examples:
http://prod.cms.va.gov/va-hines-health-care/vamc-banner-alert/2021-09-22/website-coming-soon-not-the-official-va-hines-health-care-website
http://prod.cms.va.gov/va-pittsburgh-health-care/vamc-banner-alert/2021-04-07/website-coming-soon-not-the-official-va-houston-health-care-website
> Hunch from Swirt:
> Then my next hunch is that it is actually set, but something about the winnower or the view is not making it appear as set..... though if it were truly set, it would show on the FE. This definitely needs a ticket for more investigation. There is some logic on that select list that disables items that are not within your section. It may run afoul for admins who have no section.
## To Reproduce
Steps to reproduce the behavior:
1. Go to '...'
2. Click on '....'
3. Scroll down to '....'
4. See error
## AC / Expected behavior
A clear and concise description of what you expected to happen.
## Screenshots
If applicable, add screenshots to help explain your problem.
## Additional context
Add any other context about the problem here. Reach out to the Product Managers to determine if it should be escalated as critical (prevents users from accomplishing their work with no known workaround and needs to be addressed within 2 business days).
## Desktop (please complete the following information if relevant, or delete)
- OS: [e.g. iOS]
- Browser [e.g. chrome, safari]
- Version [e.g. 22]
## Labels
(You can delete this section once it's complete)
- [x] Issue type (red) (defaults to "Defect")
- [ ] CMS subsystem (green)
- [ ] CMS practice area (blue)
- [x] CMS workstream (orange) (not needed for bug tickets)
- [ ] CMS-supported product (black)
### CMS Team
Please check the team(s) that will do this work.
- [ ] `Program`
- [ ] `Platform CMS Team`
- [ ] `Sitewide Crew`
- [ ] `⭐️ Sitewide CMS`
- [ ] `⭐️ Public Websites`
- [x] `⭐️ Facilities`
- [ ] `⭐️ User support`
| non_infrastructure | cms explore behavior where banners can be published but not assigned to a system describe the defect there are currently banner alerts for website coming soon that are published but that are not assigned to a system note i ve only checked a few the assumption is that this is the case for all this shouldn t be possible the assigned system is a required field a couple example hunch from swirt then my next hunch is that it is actually set but something about the winnower or the view is not making it appear as set though if it were truly set it would show on the fe this definitely needs a ticket for more investigation there is some logic on that select list that disables items that are not within your section it may run afoul for admins who have no section to reproduce steps to reproduce the behavior go to click on scroll down to see error ac expected behavior a clear and concise description of what you expected to happen screenshots if applicable add screenshots to help explain your problem additional context add any other context about the problem here reach out to the product managers to determine if it should be escalated as critical prevents users from accomplishing their work with no known workaround and needs to be addressed within business days desktop please complete the following information if relevant or delete os browser version labels you can delete this section once it s complete issue type red defaults to defect cms subsystem green cms practice area blue cms workstream orange not needed for bug tickets cms supported product black cms team please check the team s that will do this work program platform cms team sitewide crew ⭐️ sitewide cms ⭐️ public websites ⭐️ facilities ⭐️ user support | 0 |
20,873 | 14,222,468,590 | IssuesEvent | 2020-11-17 16:55:26 | pulibrary/dspace-cli | https://api.github.com/repos/pulibrary/dspace-cli | closed | Update the git repository URLs on updatespace to no longer use bitbucket repositories | infrastructure | Currently there are repositories using bitbucket remote URLs on the server environment. | 1.0 | Update the git repository URLs on updatespace to no longer use bitbucket repositories - Currently there are repositories using bitbucket remote URLs on the server environment. | infrastructure | update the git repository urls on updatespace to no longer use bitbucket repositories currently there are repositories using bitbucket remote urls on the server environment | 1 |
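For a migration like the dspace-cli one above — repositories on a server still pointing at Bitbucket remotes — the URL translation step can be sketched as a pure function. Everything here is hypothetical: the `MIGRATED` table stands in for the real list of moved repositories, and `pulibrary/example-repo` is a made-up name; applying the result would still be a `git remote set-url` per repository.

```python
import re

# Hypothetical migration table; the real list of moved repositories
# would replace this.
MIGRATED = {
    "pulibrary/example-repo": "https://github.com/pulibrary/example-repo.git",
}

# Matches both https and ssh Bitbucket remote forms.
BITBUCKET = re.compile(
    r"(?:https://bitbucket\.org/|git@bitbucket\.org:)([^/]+/[^/]+?)(?:\.git)?$")

def rewrite_remote(url: str) -> str:
    """Map a Bitbucket remote URL to its migrated location, or return it
    unchanged if it is not a known Bitbucket remote."""
    m = BITBUCKET.match(url)
    return MIGRATED.get(m.group(1), url) if m else url
```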
13,091 | 10,119,601,226 | IssuesEvent | 2019-07-31 11:54:48 | raiden-network/raiden-services | https://api.github.com/repos/raiden-network/raiden-services | opened | Add withdraw support to register-service script | Enhancement :star2: Infrastructure :office: | We should add support for withdrawing the deposit from the ServiceRegistry. | 1.0 | Add withdraw support to register-service script - We should add support for withdrawing the deposit from the ServiceRegistry. | infrastructure | add withdraw support to register service script we should add support for withdrawing the deposit from the serviceregistry | 1 |
14,381 | 10,776,195,323 | IssuesEvent | 2019-11-03 19:11:58 | xxks-kkk/shuati | https://api.github.com/repos/xxks-kkk/shuati | opened | Add test infrastructure for python code | infrastructure | Add test infrastructure code for python to allow push-button test on all python programs | 1.0 | Add test infrastructure for python code - Add test infrastructure code for python to allow push-button test on all python programs | infrastructure | add test infrastructure for python code add test infrastructure code for python to allow push button test on all python programs | 1 |
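The "push-button test on all python programs" the record above asks for is essentially what `unittest` discovery provides. The sketch below is a minimal illustration, not that repository's harness; the demo drops one trivial test file into a scratch directory and runs it.

```python
import tempfile
import textwrap
import unittest
from pathlib import Path

def run_all(start_dir: str, pattern: str = "test_*.py"):
    """Discover and run every unittest under `start_dir` in one shot."""
    suite = unittest.defaultTestLoader.discover(start_dir, pattern=pattern)
    return unittest.TextTestRunner(verbosity=0).run(suite)

# Demo: one trivial test file in a scratch directory.
scratch = Path(tempfile.mkdtemp())
(scratch / "test_sample.py").write_text(textwrap.dedent("""\
    import unittest

    class Sample(unittest.TestCase):
        def test_add(self):
            self.assertEqual(1 + 1, 2)
"""))
result = run_all(str(scratch))
```

A CI job (or a `make test` target) would just call `run_all` on the repository root and fail on a non-successful result.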
17,971 | 23,983,657,651 | IssuesEvent | 2022-09-13 17:03:54 | mdsreq-fga-unb/2022.1-Capita-C | https://api.github.com/repos/mdsreq-fga-unb/2022.1-Capita-C | closed | Processo de Requisitos | requisito ProcessoRequisitos REQ Comentários Professor | In the same way as was done in Unit 1, a set of activities is being listed: when they will be done, by whom, etc. Where are these activities placed, or where will they be placed, in the work process?
**Source**
https://mdsreq-fga-unb.github.io/2022.1-Capita-C/processoER/
| 1.0 | Processo de Requisitos - In the same way as was done in Unit 1, a set of activities is being listed: when they will be done, by whom, etc. Where are these activities placed, or where will they be placed, in the work process?
**Source**
https://mdsreq-fga-unb.github.io/2022.1-Capita-C/processoER/
| non_infrastructure | processo de requisitos in the same way as was done in unit a set of activities is being listed when they will be done by whom etc where are these activities placed or where will they be placed in the work process source | 0
752,863 | 26,329,740,628 | IssuesEvent | 2023-01-10 09:53:40 | swhustla/comparison_of_time_series_methods | https://api.github.com/repos/swhustla/comparison_of_time_series_methods | closed | Prophet error for Straight line | bug High priority | Looks like seasonality for the straight line data returns an empty list that causes the further error. | 1.0 | Prophet error for Straight line - Looks like seasonality for the straight line data returns an empty list that causes the further error. | non_infrastructure | prophet error for straight line looks like seasonality for the straight line data returns an empty list that causes the further error | 0 |
18,827 | 13,129,291,116 | IssuesEvent | 2020-08-06 13:41:18 | bootstrapworld/curriculum | https://api.github.com/repos/bootstrapworld/curriculum | opened | In printed pyret workbooks where students write code for functions the word "end" hangs far below the last line | Infrastructure | Could it be moved up?
For example on p. 26 of the algebra workbook... | 1.0 | In printed pyret workbooks where students write code for functions the word "end" hangs far below the last line - Could it be moved up?
For example on p. 26 of the algebra workbook... | infrastructure | in printed pyret workbooks where students write code for functions the word end hangs far below the last line could it be moved up for example on p of the algebra workbook | 1 |
24,445 | 17,268,961,541 | IssuesEvent | 2021-07-22 17:05:06 | dotnet/runtime | https://api.github.com/repos/dotnet/runtime | opened | [wasm] wasm-tools workload installation fails randomly | area-Infrastructure-mono | The installation for the performance CI builds is failing randomly like here https://helixri8s23ayyeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-heads-alicial-wasmaotmicr448c314f6f6941daa4/x64.micro.net6.0.Partition11/console.dbe24a6e.log?sv=2019-07-07&se=2021-10-19T17%3A57%3A11Z&sr=c&sp=rl&sig=uFl%2FQ3RhxB9G%2BIQx120Qtws8L1maPMrQIg4vLmt8auM%3D
Usually even after retry.
```
[2021/07/21 11:53:11][INFO] $ dotnet --info
[2021/07/21 11:53:11][INFO] .NET SDK (reflecting any global.json):
[2021/07/21 11:53:11][INFO] Version: 6.0.100-rc.1.21371.4
[2021/07/21 11:53:11][INFO] Commit: ebd2d1d607
[2021/07/21 11:53:11][INFO]
[2021/07/21 11:53:11][INFO] Runtime Environment:
[2021/07/21 11:53:11][INFO] OS Name: ubuntu
[2021/07/21 11:53:11][INFO] OS Version: 18.04
[2021/07/21 11:53:11][INFO] OS Platform: Linux
[2021/07/21 11:53:11][INFO] RID: ubuntu.18.04-x64
[2021/07/21 11:53:11][INFO] Base Path: /home/helixbot/work/A7280930/p/performance/tools/dotnet/x64/sdk/6.0.100-rc.1.21371.4/
[2021/07/21 11:53:11][INFO]
[2021/07/21 11:53:11][INFO] Host (useful for support):
[2021/07/21 11:53:11][INFO] Version: 6.0.0-rc.1.21369.14
[2021/07/21 11:53:11][INFO] Commit: bd35632892
[2021/07/21 11:53:11][INFO]
[2021/07/21 11:53:11][INFO] .NET SDKs installed:
[2021/07/21 11:53:11][INFO] 6.0.100-rc.1.21371.4 [/home/helixbot/work/A7280930/p/performance/tools/dotnet/x64/sdk]
[2021/07/21 11:53:11][INFO]
[2021/07/21 11:53:11][INFO] .NET runtimes installed:
[2021/07/21 11:53:11][INFO] Microsoft.AspNetCore.App 6.0.0-rc.1.21370.12 [/home/helixbot/work/A7280930/p/performance/tools/dotnet/x64/shared/Microsoft.AspNetCore.App]
[2021/07/21 11:53:11][INFO] Microsoft.NETCore.App 6.0.0-rc.1.21369.14 [/home/helixbot/work/A7280930/p/performance/tools/dotnet/x64/shared/Microsoft.NETCore.App]
[2021/07/21 11:53:11][INFO]
[2021/07/21 11:53:11][INFO] To install additional .NET runtimes or SDKs:
[2021/07/21 11:53:11][INFO] https://aka.ms/dotnet-download
[2021/07/21 11:53:11][INFO] $ pushd "/home/helixbot/work/A7280930/p/performance/src/benchmarks/micro/wasmaot"
[2021/07/21 11:53:11][INFO] $ dotnet workload install wasm-tools
[2021/07/21 11:53:11][INFO]
[2021/07/21 11:53:12][INFO] Skip NuGet package signing validation. NuGet signing validation is not available on Linux or macOS https://aka.ms/workloadskippackagevalidation .
[2021/07/21 11:53:12][INFO] Updated advertising manifest microsoft.net.sdk.android.
[2021/07/21 11:53:12][INFO] Updated advertising manifest microsoft.net.sdk.ios.
[2021/07/21 11:53:12][INFO] Updated advertising manifest microsoft.net.sdk.maui.
[2021/07/21 11:53:12][INFO] Updated advertising manifest microsoft.net.sdk.macos.
[2021/07/21 11:53:12][INFO] Updated advertising manifest microsoft.net.workload.emscripten.
[2021/07/21 11:53:12][INFO] Updated advertising manifest microsoft.net.workload.mono.toolchain.
[2021/07/21 11:53:12][INFO] Updated advertising manifest microsoft.net.sdk.tvos.
[2021/07/21 11:53:12][INFO] Updated advertising manifest microsoft.net.sdk.maccatalyst.
[2021/07/21 11:53:12][INFO] Installing workload manifest microsoft.net.sdk.macos version 12.0.100-preview.7183.
[2021/07/21 11:53:13][INFO] Installing workload manifest microsoft.net.sdk.ios version 15.0.100-preview.7183.
[2021/07/21 11:53:13][INFO] Installing workload manifest microsoft.net.sdk.maui version 6.0.100-preview.6.1003+sha.5c159aabf-azdo.4977641.
[2021/07/21 11:53:14][INFO] Installing workload manifest microsoft.net.sdk.android version 30.0.100-preview.7.91.
[2021/07/21 11:53:14][INFO] Installing workload manifest microsoft.net.sdk.maccatalyst version 15.0.100-preview.7183.
[2021/07/21 11:53:14][INFO] Installing workload manifest microsoft.net.workload.mono.toolchain version 6.0.0-rc.1.21371.7.
[2021/07/21 11:53:15][INFO] Installing workload manifest microsoft.net.sdk.tvos version 15.0.100-preview.7183.
[2021/07/21 11:53:16][INFO] Installing pack Microsoft.NET.Runtime.WebAssembly.Sdk version 6.0.0-rc.1.21371.7...
[2021/07/21 11:53:17][INFO] Writing workload pack installation record for Microsoft.NET.Runtime.WebAssembly.Sdk version 6.0.0-rc.1.21371.7...
[2021/07/21 11:53:17][INFO] Installing pack Microsoft.NETCore.App.Runtime.Mono.browser-wasm version 6.0.0-rc.1.21371.7...
[2021/07/21 11:53:18][INFO] Workload installation failed, rolling back installed packs...
[2021/07/21 11:53:18][INFO] Installing workload manifest microsoft.net.sdk.macos version 11.3.100-ci.main.723.
[2021/07/21 11:53:18][INFO] Installation roll back failed: Failed to install manifest microsoft.net.sdk.macos version 11.3.100-ci.main.723: The transaction has aborted..
[2021/07/21 11:53:18][INFO] Rolling back pack Microsoft.NET.Runtime.WebAssembly.Sdk installation...
[2021/07/21 11:53:18][INFO] Uninstalling workload pack Microsoft.NET.Runtime.WebAssembly.Sdk version 6.0.0-rc.1.21371.7.
[2021/07/21 11:53:18][INFO] Rolling back pack Microsoft.NETCore.App.Runtime.Mono.browser-wasm installation...
[2021/07/21 11:53:18][INFO] Workload installation failed: Downloading microsoft.netcore.app.runtime.mono.browser-wasm version 6.0.0-rc.1.21371.7 failed
[2021/07/21 11:53:18][INFO] install
[2021/07/21 11:53:18][INFO] Install a workload.
[2021/07/21 11:53:18][INFO]
[2021/07/21 11:53:18][INFO] Usage:
[2021/07/21 11:53:18][INFO] dotnet [options] workload install [<WORKLOAD_ID>...]
[2021/07/21 11:53:18][INFO]
[2021/07/21 11:53:18][INFO] Arguments:
[2021/07/21 11:53:18][INFO] <WORKLOAD_ID> The NuGet Package Id of the workload to install.
[2021/07/21 11:53:18][INFO]
[2021/07/21 11:53:18][INFO] Options:
[2021/07/21 11:53:18][INFO] --sdk-version <VERSION> The version of the SDK.
[2021/07/21 11:53:18][INFO] --configfile <FILE> The NuGet configuration file to use.
[2021/07/21 11:53:18][INFO] -s, --source <SOURCE> The NuGet package source to use
[2021/07/21 11:53:18][INFO] during the restore.
[2021/07/21 11:53:18][INFO] --skip-manifest-update Skip updating the workload manifests.
[2021/07/21 11:53:18][INFO] --from-cache <from-cache> Complete the operation from cache
[2021/07/21 11:53:18][INFO] (offline).
[2021/07/21 11:53:18][INFO] --download-to-cache Download packages needed to install a
[2021/07/21 11:53:18][INFO] <download-to-cache> workload to a folder which can be
[2021/07/21 11:53:18][INFO] used for offline installation.
[2021/07/21 11:53:18][INFO] --include-previews Allow prerelease workload manifests.
[2021/07/21 11:53:18][INFO] --temp-dir <temp-dir> Configure the temporary directory
[2021/07/21 11:53:18][INFO] used for this command (must be
[2021/07/21 11:53:18][INFO] secure).
[2021/07/21 11:53:18][INFO] --disable-parallel Prevent restoring multiple projects
[2021/07/21 11:53:18][INFO] in parallel.
[2021/07/21 11:53:18][INFO] --ignore-failed-sources Treat package source failures as
[2021/07/21 11:53:18][INFO] warnings.
[2021/07/21 11:53:18][INFO] --no-cache Do not cache packages and http
[2021/07/21 11:53:18][INFO] requests.
[2021/07/21 11:53:18][INFO] --interactive Allows the command to stop and wait
[2021/07/21 11:53:18][INFO] for user input or action (for example
[2021/07/21 11:53:18][INFO] to complete authentication).
[2021/07/21 11:53:18][INFO] -v, --verbosity Set the MSBuild verbosity level.
[2021/07/21 11:53:18][INFO] <d|detailed|diag|diagnostic|m|minimal| Allowed values are q[uiet],
[2021/07/21 11:53:18][INFO] n|normal|q|quiet> m[inimal], n[ormal], d[etailed], and
[2021/07/21 11:53:18][INFO] diag[nostic].
[2021/07/21 11:53:18][INFO] -?, -h, --help Show command line help.
[2021/07/21 11:53:18][INFO]
[2021/07/21 11:53:18][INFO] $ popd
[2021/07/21 11:53:18][INFO] $ pushd "/home/helixbot/work/A7280930/p/performance/src/benchmarks/micro/wasmaot"
[2021/07/21 11:53:18][INFO] $ dotnet workload install wasm-tools
[2021/07/21 11:53:18][INFO]
[2021/07/21 11:53:18][INFO] Skip NuGet package signing validation. NuGet signing validation is not available on Linux or macOS https://aka.ms/workloadskippackagevalidation .
[2021/07/21 11:53:18][INFO] Updated advertising manifest microsoft.net.sdk.maui.
[2021/07/21 11:53:18][INFO] Updated advertising manifest microsoft.net.workload.emscripten.
[2021/07/21 11:53:18][INFO] Updated advertising manifest microsoft.net.sdk.android.
[2021/07/21 11:53:19][INFO] Updated advertising manifest microsoft.net.sdk.maccatalyst.
[2021/07/21 11:53:19][INFO] Updated advertising manifest microsoft.net.sdk.macos.
[2021/07/21 11:53:19][INFO] Updated advertising manifest microsoft.net.sdk.tvos.
[2021/07/21 11:53:19][INFO] Updated advertising manifest microsoft.net.workload.mono.toolchain.
[2021/07/21 11:53:19][INFO] Updated advertising manifest microsoft.net.sdk.ios.
[2021/07/21 11:53:19][INFO] Installing pack Microsoft.NET.Runtime.WebAssembly.Sdk version 6.0.0-rc.1.21371.7...
[2021/07/21 11:53:20][INFO] Writing workload pack installation record for Microsoft.NET.Runtime.WebAssembly.Sdk version 6.0.0-rc.1.21371.7...
[2021/07/21 11:53:20][INFO] Installing pack Microsoft.NETCore.App.Runtime.Mono.browser-wasm version 6.0.0-rc.1.21371.7...
[2021/07/21 11:53:20][INFO] Workload installation failed, rolling back installed packs...
[2021/07/21 11:53:20][INFO] Uninstalling workload pack Microsoft.NET.Runtime.WebAssembly.Sdk version 6.0.0-rc.1.21371.7.
[2021/07/21 11:53:20][INFO] Rolling back pack Microsoft.NET.Runtime.WebAssembly.Sdk installation...
[2021/07/21 11:53:20][INFO] Rolling back pack Microsoft.NETCore.App.Runtime.Mono.browser-wasm installation...
[2021/07/21 11:53:20][INFO] Workload installation failed: Downloading microsoft.netcore.app.runtime.mono.browser-wasm version 6.0.0-rc.1.21371.7 failed
[2021/07/21 11:53:20][INFO] install
[2021/07/21 11:53:20][INFO] Install a workload.
[2021/07/21 11:53:20][INFO]
[2021/07/21 11:53:20][INFO] Usage:
[2021/07/21 11:53:20][INFO] dotnet [options] workload install [<WORKLOAD_ID>...]
[2021/07/21 11:53:20][INFO]
[2021/07/21 11:53:20][INFO] Arguments:
[2021/07/21 11:53:20][INFO] <WORKLOAD_ID> The NuGet Package Id of the workload to install.
[2021/07/21 11:53:20][INFO]
[2021/07/21 11:53:20][INFO] Options:
[2021/07/21 11:53:20][INFO] --sdk-version <VERSION> The version of the SDK.
[2021/07/21 11:53:20][INFO] --configfile <FILE> The NuGet configuration file to use.
[2021/07/21 11:53:20][INFO] -s, --source <SOURCE> The NuGet package source to use
[2021/07/21 11:53:20][INFO] during the restore.
[2021/07/21 11:53:20][INFO] --skip-manifest-update Skip updating the workload manifests.
[2021/07/21 11:53:20][INFO] --from-cache <from-cache> Complete the operation from cache
[2021/07/21 11:53:20][INFO] (offline).
[2021/07/21 11:53:20][INFO] --download-to-cache Download packages needed to install a
[2021/07/21 11:53:20][INFO] <download-to-cache> workload to a folder which can be
[2021/07/21 11:53:20][INFO] used for offline installation.
[2021/07/21 11:53:20][INFO] --include-previews Allow prerelease workload manifests.
[2021/07/21 11:53:20][INFO] --temp-dir <temp-dir> Configure the temporary directory
[2021/07/21 11:53:20][INFO] used for this command (must be
[2021/07/21 11:53:20][INFO] secure).
[2021/07/21 11:53:20][INFO] --disable-parallel Prevent restoring multiple projects
[2021/07/21 11:53:20][INFO] in parallel.
[2021/07/21 11:53:20][INFO] --ignore-failed-sources Treat package source failures as
[2021/07/21 11:53:20][INFO] warnings.
[2021/07/21 11:53:20][INFO] --no-cache Do not cache packages and http
[2021/07/21 11:53:20][INFO] requests.
[2021/07/21 11:53:20][INFO] --interactive Allows the command to stop and wait
[2021/07/21 11:53:20][INFO] for user input or action (for example
[2021/07/21 11:53:20][INFO] to complete authentication).
[2021/07/21 11:53:20][INFO] -v, --verbosity Set the MSBuild verbosity level.
[2021/07/21 11:53:20][INFO] <d|detailed|diag|diagnostic|m|minimal| Allowed values are q[uiet],
[2021/07/21 11:53:20][INFO] n|normal|q|quiet> m[inimal], n[ormal], d[etailed], and
[2021/07/21 11:53:20][INFO] diag[nostic].
[2021/07/21 11:53:20][INFO] -?, -h, --help Show command line help.
``` | 1.0 | [wasm] wasm-tools workload installation fails randomly - The installation for the performance CI builds is failing randomly like here https://helixri8s23ayyeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-heads-alicial-wasmaotmicr448c314f6f6941daa4/x64.micro.net6.0.Partition11/console.dbe24a6e.log?sv=2019-07-07&se=2021-10-19T17%3A57%3A11Z&sr=c&sp=rl&sig=uFl%2FQ3RhxB9G%2BIQx120Qtws8L1maPMrQIg4vLmt8auM%3D
Usually even after retry.
```
[2021/07/21 11:53:11][INFO] $ dotnet --info
[2021/07/21 11:53:11][INFO] .NET SDK (reflecting any global.json):
[2021/07/21 11:53:11][INFO] Version: 6.0.100-rc.1.21371.4
[2021/07/21 11:53:11][INFO] Commit: ebd2d1d607
[2021/07/21 11:53:11][INFO]
[2021/07/21 11:53:11][INFO] Runtime Environment:
[2021/07/21 11:53:11][INFO] OS Name: ubuntu
[2021/07/21 11:53:11][INFO] OS Version: 18.04
[2021/07/21 11:53:11][INFO] OS Platform: Linux
[2021/07/21 11:53:11][INFO] RID: ubuntu.18.04-x64
[2021/07/21 11:53:11][INFO] Base Path: /home/helixbot/work/A7280930/p/performance/tools/dotnet/x64/sdk/6.0.100-rc.1.21371.4/
[2021/07/21 11:53:11][INFO]
[2021/07/21 11:53:11][INFO] Host (useful for support):
[2021/07/21 11:53:11][INFO] Version: 6.0.0-rc.1.21369.14
[2021/07/21 11:53:11][INFO] Commit: bd35632892
[2021/07/21 11:53:11][INFO]
[2021/07/21 11:53:11][INFO] .NET SDKs installed:
[2021/07/21 11:53:11][INFO] 6.0.100-rc.1.21371.4 [/home/helixbot/work/A7280930/p/performance/tools/dotnet/x64/sdk]
[2021/07/21 11:53:11][INFO]
[2021/07/21 11:53:11][INFO] .NET runtimes installed:
[2021/07/21 11:53:11][INFO] Microsoft.AspNetCore.App 6.0.0-rc.1.21370.12 [/home/helixbot/work/A7280930/p/performance/tools/dotnet/x64/shared/Microsoft.AspNetCore.App]
[2021/07/21 11:53:11][INFO] Microsoft.NETCore.App 6.0.0-rc.1.21369.14 [/home/helixbot/work/A7280930/p/performance/tools/dotnet/x64/shared/Microsoft.NETCore.App]
[2021/07/21 11:53:11][INFO]
[2021/07/21 11:53:11][INFO] To install additional .NET runtimes or SDKs:
[2021/07/21 11:53:11][INFO] https://aka.ms/dotnet-download
[2021/07/21 11:53:11][INFO] $ pushd "/home/helixbot/work/A7280930/p/performance/src/benchmarks/micro/wasmaot"
[2021/07/21 11:53:11][INFO] $ dotnet workload install wasm-tools
[2021/07/21 11:53:11][INFO]
[2021/07/21 11:53:12][INFO] Skip NuGet package signing validation. NuGet signing validation is not available on Linux or macOS https://aka.ms/workloadskippackagevalidation .
[2021/07/21 11:53:12][INFO] Updated advertising manifest microsoft.net.sdk.android.
[2021/07/21 11:53:12][INFO] Updated advertising manifest microsoft.net.sdk.ios.
[2021/07/21 11:53:12][INFO] Updated advertising manifest microsoft.net.sdk.maui.
[2021/07/21 11:53:12][INFO] Updated advertising manifest microsoft.net.sdk.macos.
[2021/07/21 11:53:12][INFO] Updated advertising manifest microsoft.net.workload.emscripten.
[2021/07/21 11:53:12][INFO] Updated advertising manifest microsoft.net.workload.mono.toolchain.
[2021/07/21 11:53:12][INFO] Updated advertising manifest microsoft.net.sdk.tvos.
[2021/07/21 11:53:12][INFO] Updated advertising manifest microsoft.net.sdk.maccatalyst.
[2021/07/21 11:53:12][INFO] Installing workload manifest microsoft.net.sdk.macos version 12.0.100-preview.7183.
[2021/07/21 11:53:13][INFO] Installing workload manifest microsoft.net.sdk.ios version 15.0.100-preview.7183.
[2021/07/21 11:53:13][INFO] Installing workload manifest microsoft.net.sdk.maui version 6.0.100-preview.6.1003+sha.5c159aabf-azdo.4977641.
[2021/07/21 11:53:14][INFO] Installing workload manifest microsoft.net.sdk.android version 30.0.100-preview.7.91.
[2021/07/21 11:53:14][INFO] Installing workload manifest microsoft.net.sdk.maccatalyst version 15.0.100-preview.7183.
[2021/07/21 11:53:14][INFO] Installing workload manifest microsoft.net.workload.mono.toolchain version 6.0.0-rc.1.21371.7.
[2021/07/21 11:53:15][INFO] Installing workload manifest microsoft.net.sdk.tvos version 15.0.100-preview.7183.
[2021/07/21 11:53:16][INFO] Installing pack Microsoft.NET.Runtime.WebAssembly.Sdk version 6.0.0-rc.1.21371.7...
[2021/07/21 11:53:17][INFO] Writing workload pack installation record for Microsoft.NET.Runtime.WebAssembly.Sdk version 6.0.0-rc.1.21371.7...
[2021/07/21 11:53:17][INFO] Installing pack Microsoft.NETCore.App.Runtime.Mono.browser-wasm version 6.0.0-rc.1.21371.7...
[2021/07/21 11:53:18][INFO] Workload installation failed, rolling back installed packs...
[2021/07/21 11:53:18][INFO] Installing workload manifest microsoft.net.sdk.macos version 11.3.100-ci.main.723.
[2021/07/21 11:53:18][INFO] Installation roll back failed: Failed to install manifest microsoft.net.sdk.macos version 11.3.100-ci.main.723: The transaction has aborted..
[2021/07/21 11:53:18][INFO] Rolling back pack Microsoft.NET.Runtime.WebAssembly.Sdk installation...
[2021/07/21 11:53:18][INFO] Uninstalling workload pack Microsoft.NET.Runtime.WebAssembly.Sdk version 6.0.0-rc.1.21371.7.
[2021/07/21 11:53:18][INFO] Rolling back pack Microsoft.NETCore.App.Runtime.Mono.browser-wasm installation...
[2021/07/21 11:53:18][INFO] Workload installation failed: Downloading microsoft.netcore.app.runtime.mono.browser-wasm version 6.0.0-rc.1.21371.7 failed
[2021/07/21 11:53:18][INFO] install
[2021/07/21 11:53:18][INFO] Install a workload.
[2021/07/21 11:53:18][INFO]
[2021/07/21 11:53:18][INFO] Usage:
[2021/07/21 11:53:18][INFO] dotnet [options] workload install [<WORKLOAD_ID>...]
[2021/07/21 11:53:18][INFO]
[2021/07/21 11:53:18][INFO] Arguments:
[2021/07/21 11:53:18][INFO] <WORKLOAD_ID> The NuGet Package Id of the workload to install.
[2021/07/21 11:53:18][INFO]
[2021/07/21 11:53:18][INFO] Options:
[2021/07/21 11:53:18][INFO] --sdk-version <VERSION> The version of the SDK.
[2021/07/21 11:53:18][INFO] --configfile <FILE> The NuGet configuration file to use.
[2021/07/21 11:53:18][INFO] -s, --source <SOURCE> The NuGet package source to use
[2021/07/21 11:53:18][INFO] during the restore.
[2021/07/21 11:53:18][INFO] --skip-manifest-update Skip updating the workload manifests.
[2021/07/21 11:53:18][INFO] --from-cache <from-cache> Complete the operation from cache
[2021/07/21 11:53:18][INFO] (offline).
[2021/07/21 11:53:18][INFO] --download-to-cache Download packages needed to install a
[2021/07/21 11:53:18][INFO] <download-to-cache> workload to a folder which can be
[2021/07/21 11:53:18][INFO] used for offline installation.
[2021/07/21 11:53:18][INFO] --include-previews Allow prerelease workload manifests.
[2021/07/21 11:53:18][INFO] --temp-dir <temp-dir> Configure the temporary directory
[2021/07/21 11:53:18][INFO] used for this command (must be
[2021/07/21 11:53:18][INFO] secure).
[2021/07/21 11:53:18][INFO] --disable-parallel Prevent restoring multiple projects
[2021/07/21 11:53:18][INFO] in parallel.
[2021/07/21 11:53:18][INFO] --ignore-failed-sources Treat package source failures as
[2021/07/21 11:53:18][INFO] warnings.
[2021/07/21 11:53:18][INFO] --no-cache Do not cache packages and http
[2021/07/21 11:53:18][INFO] requests.
[2021/07/21 11:53:18][INFO] --interactive Allows the command to stop and wait
[2021/07/21 11:53:18][INFO] for user input or action (for example
[2021/07/21 11:53:18][INFO] to complete authentication).
[2021/07/21 11:53:18][INFO] -v, --verbosity Set the MSBuild verbosity level.
[2021/07/21 11:53:18][INFO] <d|detailed|diag|diagnostic|m|minimal| Allowed values are q[uiet],
[2021/07/21 11:53:18][INFO] n|normal|q|quiet> m[inimal], n[ormal], d[etailed], and
[2021/07/21 11:53:18][INFO] diag[nostic].
[2021/07/21 11:53:18][INFO] -?, -h, --help Show command line help.
[2021/07/21 11:53:18][INFO]
[2021/07/21 11:53:18][INFO] $ popd
[2021/07/21 11:53:18][INFO] $ pushd "/home/helixbot/work/A7280930/p/performance/src/benchmarks/micro/wasmaot"
[2021/07/21 11:53:18][INFO] $ dotnet workload install wasm-tools
[2021/07/21 11:53:18][INFO]
[2021/07/21 11:53:18][INFO] Skip NuGet package signing validation. NuGet signing validation is not available on Linux or macOS https://aka.ms/workloadskippackagevalidation .
[2021/07/21 11:53:18][INFO] Updated advertising manifest microsoft.net.sdk.maui.
[2021/07/21 11:53:18][INFO] Updated advertising manifest microsoft.net.workload.emscripten.
[2021/07/21 11:53:18][INFO] Updated advertising manifest microsoft.net.sdk.android.
[2021/07/21 11:53:19][INFO] Updated advertising manifest microsoft.net.sdk.maccatalyst.
[2021/07/21 11:53:19][INFO] Updated advertising manifest microsoft.net.sdk.macos.
[2021/07/21 11:53:19][INFO] Updated advertising manifest microsoft.net.sdk.tvos.
[2021/07/21 11:53:19][INFO] Updated advertising manifest microsoft.net.workload.mono.toolchain.
[2021/07/21 11:53:19][INFO] Updated advertising manifest microsoft.net.sdk.ios.
[2021/07/21 11:53:19][INFO] Installing pack Microsoft.NET.Runtime.WebAssembly.Sdk version 6.0.0-rc.1.21371.7...
[2021/07/21 11:53:20][INFO] Writing workload pack installation record for Microsoft.NET.Runtime.WebAssembly.Sdk version 6.0.0-rc.1.21371.7...
[2021/07/21 11:53:20][INFO] Installing pack Microsoft.NETCore.App.Runtime.Mono.browser-wasm version 6.0.0-rc.1.21371.7...
[2021/07/21 11:53:20][INFO] Workload installation failed, rolling back installed packs...
[2021/07/21 11:53:20][INFO] Uninstalling workload pack Microsoft.NET.Runtime.WebAssembly.Sdk version 6.0.0-rc.1.21371.7.
[2021/07/21 11:53:20][INFO] Rolling back pack Microsoft.NET.Runtime.WebAssembly.Sdk installation...
[2021/07/21 11:53:20][INFO] Rolling back pack Microsoft.NETCore.App.Runtime.Mono.browser-wasm installation...
[2021/07/21 11:53:20][INFO] Workload installation failed: Downloading microsoft.netcore.app.runtime.mono.browser-wasm version 6.0.0-rc.1.21371.7 failed
[2021/07/21 11:53:20][INFO] install
[2021/07/21 11:53:20][INFO] Install a workload.
[2021/07/21 11:53:20][INFO]
[2021/07/21 11:53:20][INFO] Usage:
[2021/07/21 11:53:20][INFO] dotnet [options] workload install [<WORKLOAD_ID>...]
[2021/07/21 11:53:20][INFO]
[2021/07/21 11:53:20][INFO] Arguments:
[2021/07/21 11:53:20][INFO] <WORKLOAD_ID> The NuGet Package Id of the workload to install.
[2021/07/21 11:53:20][INFO]
[2021/07/21 11:53:20][INFO] Options:
[2021/07/21 11:53:20][INFO] --sdk-version <VERSION> The version of the SDK.
[2021/07/21 11:53:20][INFO] --configfile <FILE> The NuGet configuration file to use.
[2021/07/21 11:53:20][INFO] -s, --source <SOURCE> The NuGet package source to use
[2021/07/21 11:53:20][INFO] during the restore.
[2021/07/21 11:53:20][INFO] --skip-manifest-update Skip updating the workload manifests.
[2021/07/21 11:53:20][INFO] --from-cache <from-cache> Complete the operation from cache
[2021/07/21 11:53:20][INFO] (offline).
[2021/07/21 11:53:20][INFO] --download-to-cache Download packages needed to install a
[2021/07/21 11:53:20][INFO] <download-to-cache> workload to a folder which can be
[2021/07/21 11:53:20][INFO] used for offline installation.
[2021/07/21 11:53:20][INFO] --include-previews Allow prerelease workload manifests.
[2021/07/21 11:53:20][INFO] --temp-dir <temp-dir> Configure the temporary directory
[2021/07/21 11:53:20][INFO] used for this command (must be
[2021/07/21 11:53:20][INFO] secure).
[2021/07/21 11:53:20][INFO] --disable-parallel Prevent restoring multiple projects
[2021/07/21 11:53:20][INFO] in parallel.
[2021/07/21 11:53:20][INFO] --ignore-failed-sources Treat package source failures as
[2021/07/21 11:53:20][INFO] warnings.
[2021/07/21 11:53:20][INFO] --no-cache Do not cache packages and http
[2021/07/21 11:53:20][INFO] requests.
[2021/07/21 11:53:20][INFO] --interactive Allows the command to stop and wait
[2021/07/21 11:53:20][INFO] for user input or action (for example
[2021/07/21 11:53:20][INFO] to complete authentication).
[2021/07/21 11:53:20][INFO] -v, --verbosity Set the MSBuild verbosity level.
[2021/07/21 11:53:20][INFO] <d|detailed|diag|diagnostic|m|minimal| Allowed values are q[uiet],
[2021/07/21 11:53:20][INFO] n|normal|q|quiet> m[inimal], n[ormal], d[etailed], and
[2021/07/21 11:53:20][INFO] diag[nostic].
[2021/07/21 11:53:20][INFO] -?, -h, --help Show command line help.
``` | infrastructure | wasm tools workload installation fails randomly the installation for the performace ci builds is failing randomly like here usually even after retry dotnet info net sdk reflecting any global json version rc commit runtime environment os name ubuntu os version os platform linux rid ubuntu base path home helixbot work p performance tools dotnet sdk rc host useful for support version rc commit net sdks installed rc net runtimes installed microsoft aspnetcore app rc microsoft netcore app rc to install additional net runtimes or sdks pushd home helixbot work p performance src benchmarks micro wasmaot dotnet workload install wasm tools skip nuget package signing validation nuget signing validation is not available on linux or macos updated advertising manifest microsoft net sdk android updated advertising manifest microsoft net sdk ios updated advertising manifest microsoft net sdk maui updated advertising manifest microsoft net sdk macos updated advertising manifest microsoft net workload emscripten updated advertising manifest microsoft net workload mono toolchain updated advertising manifest microsoft net sdk tvos updated advertising manifest microsoft net sdk maccatalyst installing workload manifest microsoft net sdk macos version preview installing workload manifest microsoft net sdk ios version preview installing workload manifest microsoft net sdk maui version preview sha azdo installing workload manifest microsoft net sdk android version preview installing workload manifest microsoft net sdk maccatalyst version preview installing workload manifest microsoft net workload mono toolchain version rc installing workload manifest microsoft net sdk tvos version preview installing pack microsoft net runtime webassembly sdk version rc writing workload pack installation record for microsoft net runtime webassembly sdk version rc installing pack microsoft netcore app runtime mono browser wasm version rc workload installation failed rolling back 
installed packs installing workload manifest microsoft net sdk macos version ci main installation roll back failed failed to install manifest microsoft net sdk macos version ci main the transaction has aborted rolling back pack microsoft net runtime webassembly sdk installation uninstalling workload pack microsoft net runtime webassembly sdk version rc rolling back pack microsoft netcore app runtime mono browser wasm installation workload installation failed downloading microsoft netcore app runtime mono browser wasm version rc failed install install a workload usage dotnet workload install arguments the nuget package id of the workload to install options sdk version the version of the sdk configfile the nuget configuration file to use s source the nuget package source to use during the restore skip manifest update skip updating the workload manifests from cache complete the operation from cache offline download to cache download packages needed to install a workload to a folder which can be used for offline installation include previews allow prerelease workload manifests temp dir configure the temporary directory used for this command must be secure disable parallel prevent restoring multiple projects in parallel ignore failed sources treat package source failures as warnings no cache do not cache packages and http requests interactive allows the command to stop and wait for user input or action for example to complete authentication v verbosity set the msbuild verbosity level d detailed diag diagnostic m minimal allowed values are q n normal q quiet m n d and diag h help show command line help popd pushd home helixbot work p performance src benchmarks micro wasmaot dotnet workload install wasm tools skip nuget package signing validation nuget signing validation is not available on linux or macos updated advertising manifest microsoft net sdk maui updated advertising manifest microsoft net workload emscripten updated advertising manifest microsoft net sdk android 
updated advertising manifest microsoft net sdk maccatalyst updated advertising manifest microsoft net sdk macos updated advertising manifest microsoft net sdk tvos updated advertising manifest microsoft net workload mono toolchain updated advertising manifest microsoft net sdk ios installing pack microsoft net runtime webassembly sdk version rc writing workload pack installation record for microsoft net runtime webassembly sdk version rc installing pack microsoft netcore app runtime mono browser wasm version rc workload installation failed rolling back installed packs uninstalling workload pack microsoft net runtime webassembly sdk version rc rolling back pack microsoft net runtime webassembly sdk installation rolling back pack microsoft netcore app runtime mono browser wasm installation workload installation failed downloading microsoft netcore app runtime mono browser wasm version rc failed install install a workload usage dotnet workload install arguments the nuget package id of the workload to install options sdk version the version of the sdk configfile the nuget configuration file to use s source the nuget package source to use during the restore skip manifest update skip updating the workload manifests from cache complete the operation from cache offline download to cache download packages needed to install a workload to a folder which can be used for offline installation include previews allow prerelease workload manifests temp dir configure the temporary directory used for this command must be secure disable parallel prevent restoring multiple projects in parallel ignore failed sources treat package source failures as warnings no cache do not cache packages and http requests interactive allows the command to stop and wait for user input or action for example to complete authentication v verbosity set the msbuild verbosity level d detailed diag diagnostic m minimal allowed values are q n normal q quiet m n d and diag h help show command line help | 1 |
121,896 | 10,197,277,030 | IssuesEvent | 2019-08-12 23:37:41 | rancher/rancher | https://api.github.com/repos/rancher/rancher | closed | Remove Ability to Add Unknown Overrides from Cluster Templates | [zube]: To Test area/ui team/ui | <!--
Please search for existing issues first, then read https://rancher.com/docs/rancher/v2.x/en/contributing/#bugs-issues-or-questions to see what we expect in an issue
For security issues, please email security@rancher.com instead of posting a public issue in GitHub. You may (but are not required to) use the GPG key located on Keybase.
-->
**What kind of request is this (question/bug/enhancement/feature request):**
enhancement
We are going to remove the ability to add unknown overrides to the overrides section for 2.3 until a better solution can be found for the non-scalar type questions.
This means the "add override" button will be removed from the UI. | 1.0 | Remove Ability to Add Unknown Overrides from Cluster Templates - <!--
Please search for existing issues first, then read https://rancher.com/docs/rancher/v2.x/en/contributing/#bugs-issues-or-questions to see what we expect in an issue
For security issues, please email security@rancher.com instead of posting a public issue in GitHub. You may (but are not required to) use the GPG key located on Keybase.
-->
**What kind of request is this (question/bug/enhancement/feature request):**
enhancement
We are going to remove the ability to add unknown overrides to the overrides section for 2.3 until a better solution can be found for the non-scalar type questions.
This means the "add override" button will be removed from the UI. | non_infrastructure | remove ability to add unknown overrides from cluster templates please search for existing issues first then read to see what we expect in an issue for security issues please email security rancher com instead of posting a public issue in github you may but are not required to use the gpg key located on keybase what kind of request is this question bug enhancement feature request enhancement we are going to remove the ability to add unknown overrides to the overrides section for until a better solution can be found for the non scalar type questions this means the add override button will be removed from the ui | 0 |
30,722 | 25,016,014,276 | IssuesEvent | 2022-11-03 18:52:53 | dotnet/runtime | https://api.github.com/repos/dotnet/runtime | closed | Improve hosting test/dev experience | area-Infrastructure-coreclr area-Host | Right now there are instructions about running all coreclr tests in the [workflow](https://github.com/dotnet/runtime/blob/main/docs/workflow/testing/coreclr/testing.md) but the instructions on individual tests just says to run `dotnet build --test`, but there are a number of test projects in `src/test/hosting` and it's not easy to run all of them. The tests themselves also implicitly require other partitions to be built, but do not have an explicit requirement.
We could make some small improvements here:
- Add some instructions
- Add a script that represents the "canonical" way to run tests for this component
- See if the dev loop is still fast if we add an explicit reference to the `clr.runtime` partition | 1.0 | Improve hosting test/dev experience - Right now there are instructions about running all coreclr tests in the [workflow](https://github.com/dotnet/runtime/blob/main/docs/workflow/testing/coreclr/testing.md) but the instructions on individual tests just says to run `dotnet build --test`, but there are a number of test projects in `src/test/hosting` and it's not easy to run all of them. The tests themselves also implicitly require other partitions to be built, but do not have an explicit requirement.
We could make some small improvements here:
- Add some instructions
- Add a script that represents the "canonical" way to run tests for this component
- See if the dev loop is still fast if we add an explicit reference to the `clr.runtime` partition | infrastructure | improve hosting test dev experience right now there are instructions about running all coreclr tests in the but the instructions on individual tests just says to run dotnet build test but there are a number of test projects in src test hosting and it s not easy to run all of them the tests themselves also implicitly require other partitions to be built but do not have an explicit requirement we could make some small improvements here add some instructions add a script that represents the canonical way to run tests for this component see if the dev loop is still fast if we add an explicit reference to the clr runtime partition | 1 |
390,172 | 11,525,809,425 | IssuesEvent | 2020-02-15 11:09:14 | wso2/docs-is | https://api.github.com/repos/wso2/docs-is | closed | Issue in the doc of User Managed Access with WSO2 Identity Server | Affected/5.10.0 Priority/High Severity/Major | In the doc https://is.docs.wso2.com/en/next/learn/user-managed-access-with-wso2-identity-server/,
the provided 'Tip' under '**Obtain the Protection API Access token (PAT)**' says 'Be sure to replace the <CLIENT_ID> and <CLIENT_SECRET> tags with the values you obtained when you configured the service provider for the **client**.' and links to the client SP configuration which is wrong. It should be corrected as,
**client -> resource owner**
and the link needs to be pointed to https://is.docs.wso2.com/en/next/learn/user-managed-access-with-wso2-identity-server/
| 1.0 | Issue in the doc of User Managed Access with WSO2 Identity Server - In the doc https://is.docs.wso2.com/en/next/learn/user-managed-access-with-wso2-identity-server/,
the provided 'Tip' under '**Obtain the Protection API Access token (PAT)**' says 'Be sure to replace the <CLIENT_ID> and <CLIENT_SECRET> tags with the values you obtained when you configured the service provider for the **client**.' and links to the client SP configuration which is wrong. It should be corrected as,
**client -> resource owner**
and the link needs to be pointed to https://is.docs.wso2.com/en/next/learn/user-managed-access-with-wso2-identity-server/
| non_infrastructure | issue in the doc of user managed access with identity server in the doc the provided tip under obtain the protection api access token pat says be sure to replace the and tags with the values you obtained when you configured the service provider for the client and links to the client sp configuration which is wrong it should be corrected as client resource owner and the link needs to be pointed to | 0 |
286,342 | 31,553,237,070 | IssuesEvent | 2023-09-02 10:14:43 | hinoshiba/news | https://api.github.com/repos/hinoshiba/news | closed | [SecurityWeek] 500k Impacted by Data Breach at Fashion Retailer Forever 21 | SecurityWeek Stale |
Fashion retailer Forever 21 says that the personal information of more than 500,000 individuals was compromised in a data breach.
The post [500k Impacted by Data Breach at Fashion Retailer Forever 21](https://www.securityweek.com/500k-impacted-by-data-breach-at-fashion-retailer-forever-21/) appeared first on [SecurityWeek](https://www.securityweek.com).
<https://www.securityweek.com/500k-impacted-by-data-breach-at-fashion-retailer-forever-21/>
| True | [SecurityWeek] 500k Impacted by Data Breach at Fashion Retailer Forever 21 -
Fashion retailer Forever 21 says that the personal information of more than 500,000 individuals was compromised in a data breach.
The post [500k Impacted by Data Breach at Fashion Retailer Forever 21](https://www.securityweek.com/500k-impacted-by-data-breach-at-fashion-retailer-forever-21/) appeared first on [SecurityWeek](https://www.securityweek.com).
<https://www.securityweek.com/500k-impacted-by-data-breach-at-fashion-retailer-forever-21/>
| non_infrastructure | impacted by data breach at fashion retailer forever fashion retailer forever says that the personal information of more than individuals was compromised in a data breach the post appeared first on | 0 |
27,213 | 21,466,234,596 | IssuesEvent | 2022-04-26 04:16:01 | dotnet/runtime | https://api.github.com/repos/dotnet/runtime | closed | Backport Move to Windows.10.Amd64.Server2022.ES.Open to 6.0 | area-Infrastructure untriaged | CI in backport PRs are now failing because Windows.10.Amd64.Server19H1.ES.Open is reaching EOL, example: https://dev.azure.com/dnceng/public/_build/results?buildId=1674630&view=logs&j=457f7e88-dfa2-5bd9-f871-fdf124c2477d&t=bfe52dfb-2099-5c7f-ee52-70a1d81c544e&l=53
`##[error]SENDHELIXJOB(0,0): error : Helix queue windows.10.amd64.server19h1.es.open is set for estimated removal date of 2022-03-31. In most cases the queue will be removed permanently due to end-of-life; please contact dnceng for any questions or concerns, and we can help you decide how to proceed and discuss other options.
`
Please backport https://github.com/dotnet/runtime/pull/66404 to 6.0 | 1.0 | Backport Move to Windows.10.Amd64.Server2022.ES.Open to 6.0 - CI in backport PRs are now failing because Windows.10.Amd64.Server19H1.ES.Open is reaching EOL, example: https://dev.azure.com/dnceng/public/_build/results?buildId=1674630&view=logs&j=457f7e88-dfa2-5bd9-f871-fdf124c2477d&t=bfe52dfb-2099-5c7f-ee52-70a1d81c544e&l=53
`##[error]SENDHELIXJOB(0,0): error : Helix queue windows.10.amd64.server19h1.es.open is set for estimated removal date of 2022-03-31. In most cases the queue will be removed permanently due to end-of-life; please contact dnceng for any questions or concerns, and we can help you decide how to proceed and discuss other options.
`
Please backport https://github.com/dotnet/runtime/pull/66404 to 6.0 | infrastructure | backport move to windows es open to ci in backport prs are now failing because windows es open is reaching eol example sendhelixjob error helix queue windows es open is set for estimated removal date of in most cases the queue will be removed permanently due to end of life please contact dnceng for any questions or concerns and we can help you decide how to proceed and discuss other options please backport to | 1 |
15,035 | 11,303,416,166 | IssuesEvent | 2020-01-17 20:03:28 | skuzzle/cmp | https://api.github.com/repos/skuzzle/cmp | closed | Use GitHub Build Infrastructure | infrastructure | * Build should publish maven artifacts to GitHub Registry
* Build should publish docker artifacts to GitHub Registry
See: https://help.github.com/en/github/managing-packages-with-github-packages/configuring-apache-maven-for-use-with-github-packages
* Deployment must pull docker artifacts from GitHub Registry | 1.0 | Use GitHub Build Infrastructure - * Build should publish maven artifacts to GitHub Registry
* Build should publish docker artifacts to GitHub Registry
See: https://help.github.com/en/github/managing-packages-with-github-packages/configuring-apache-maven-for-use-with-github-packages
* Deployment must pull docker artifacts from GitHub Registry | infrastructure | use github build infrastructure build should publish maven artifacts to github registry build should publish docker artifacts to github registry see deployment must pull docker artifacts from github registry | 1 |
35,566 | 31,836,608,733 | IssuesEvent | 2023-09-14 13:53:03 | linea-it/tno | https://api.github.com/repos/linea-it/tno | opened | Structural changes to the public occultation predictions page | enhancement Frontend Backend Infrastructure | - [ ] Separate it from the TNO portal's public page.
- [ ] Require a dedicated URL for this public page.
- [ ] Indicate which period the total number of occultations shown in the table refers to.
- [ ] Indicate on the landing page the date of the table predictions' last update/run. Today there is a 'last updated' that is not related to this.
| 1.0 | Structural changes to the public occultation predictions page - - [ ] Separate it from the TNO portal's public page.
- [ ] Require a dedicated URL for this public page.
- [ ] Indicate which period the total number of occultations shown in the table refers to.
- [ ] Indicate on the landing page the date of the table predictions' last update/run. Today there is a 'last updated' that is not related to this.
| infrastructure | structural changes to the public occultation predictions page separate it from the tno portal s public page require a dedicated url for this public page indicate which period the total number of occultations shown in the table refers to indicate on the landing page the date of the table predictions last update run today there is a 'last updated' that is not related to this | 1 |
13,044 | 10,089,383,660 | IssuesEvent | 2019-07-26 08:47:32 | dotnet/corefx | https://api.github.com/repos/dotnet/corefx | opened | Enable yaml stages based publishing | area-Infrastructure | https://github.com/dotnet/arcade/blob/master/Documentation/YamlStagesRepoStatus.md. Target completion date is 8/13.
We need to do this sooner than later.
cc @safern @wtgodbe @ericstj | 1.0 | Enable yaml stages based publishing - https://github.com/dotnet/arcade/blob/master/Documentation/YamlStagesRepoStatus.md. Target completion date is 8/13.
We need to do this sooner than later.
cc @safern @wtgodbe @ericstj | infrastructure | enable yaml stages based publishing target completion date is we need to do this sooner than later cc safern wtgodbe ericstj | 1 |
27,009 | 21,002,908,646 | IssuesEvent | 2022-03-29 19:18:36 | The-Compilers/CryptoYard | https://api.github.com/repos/The-Compilers/CryptoYard | closed | Create server setup script | infrastructure | As a DevOps member I want to have a shell script which I could run on a clean Ubuntu 20.04 server to set up a production environment, including the following:
- Update all Ubuntu packages
- Install Nginx
- Set up HTTP for Nginx
- Install Docker
- Install Docker compose
- Install GIT
- Clone GIT repo to a given directory | 1.0 | Create server setup script - As a DevOps member I want to have a shell script which I could run on a clean Ubuntu 20.04 server to set up a production environment, including the following:
- Update all Ubuntu packages
- Install Nginx
- Set up HTTP for Nginx
- Install Docker
- Install Docker compose
- Install GIT
- Clone GIT repo to a given directory | infrastructure | create server setup script as a devops member i want to have a shell script which i could run on a clean ubuntu server to set up a production environment including the following update all ubuntu packages install nginx set up http for nginx install docker install docker compose install git clone git repo to a given directory | 1 |
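The provisioning checklist this row describes could be sketched as an ordered command plan. The following is an illustrative Python sketch only — the package names (`docker.io`, `docker-compose`) and the clone target directory are assumptions, not taken from the CryptoYard repository:

```python
# Hypothetical sketch of the provisioning plan for a clean Ubuntu 20.04 server.
# Package names and paths are assumptions, not from the issue or repo.
import subprocess

def setup_commands(repo_url: str, target_dir: str) -> list:
    """Return the ordered commands matching the issue's checklist."""
    return [
        ["apt-get", "update"],                  # refresh the package index first
        ["apt-get", "-y", "upgrade"],           # update all Ubuntu packages
        ["apt-get", "-y", "install", "nginx"],  # install Nginx (serves HTTP out of the box)
        ["apt-get", "-y", "install", "docker.io", "docker-compose", "git"],
        ["git", "clone", repo_url, target_dir], # clone the GIT repo to a given directory
    ]

def run_setup(repo_url: str, target_dir: str, dry_run: bool = True) -> None:
    """Print the plan (dry run) or execute it, stopping on the first failure."""
    for cmd in setup_commands(repo_url, target_dir):
        if dry_run:
            print(" ".join(cmd))
        else:
            subprocess.run(cmd, check=True)

if __name__ == "__main__":
    # Dry run by default so the sketch is safe to execute anywhere.
    run_setup("https://github.com/The-Compilers/CryptoYard.git", "/opt/cryptoyard")
```

A real implementation would likely be the plain shell script the issue asks for; the dry-run default here only prints the plan.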
2,630 | 3,789,759,444 | IssuesEvent | 2016-03-21 19:02:00 | ilri/DSpace | https://api.github.com/repos/ilri/DSpace | closed | Migrate to Let's Encrypt for TLS certificates | infrastructure | We need to migrate to Let's Encrypt's certificate authority for TLS certificates for the following domains:
- cgspace.cgiar.org
- mahider.ilri.org
- dspace.ilri.org
First these need to be activated with Let's Encrypt, then we need to add support to the [infrastructure playbooks](https://github.com/ilri/rmg-ansible-public), both for the nginx vhost templates as well as a cron job to do the certificate renewals. This first phase will happen during the next deploy window for CGSpace, since we'll have a window of downtime anyways. | 1.0 | Migrate to Let's Encrypt for TLS certificates - We need to migrate to Let's Encrypt's certificate authority for TLS certificates for the following domains:
- cgspace.cgiar.org
- mahider.ilri.org
- dspace.ilri.org
First these need to be activated with Let's Encrypt, then we need to add support to the [infrastructure playbooks](https://github.com/ilri/rmg-ansible-public), both for the nginx vhost templates as well as a cron job to do the certificate renewals. This first phase will happen during the next deploy window for CGSpace, since we'll have a window of downtime anyways. | infrastructure | migrate to let s encrypt for tls certificates we need to migrate to let s encrypt s certificate authority for tls certificates for the following domains cgspace cgiar org mahider ilri org dspace ilri org first these need to be activated with let s encrypt then we need to add support to the both for the nginx vhost templates as well as a cron job to do the certificate renewals this first phase will happen during the next deploy window for cgspace since we ll have a window of downtime anyways | 1 |
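The two moving parts the migration above needs — per-domain nginx TLS stanzas for the vhost templates and a cron job for renewals — could be sketched as below. This is an assumption-laden illustration (certbot's standard `/etc/letsencrypt/live/` layout and a nightly cron slot), not the project's actual playbook:

```python
# Hypothetical sketch: nginx TLS stanza and renewal cron entry per domain.
# Paths follow certbot's standard live-directory layout; the 03:00 slot is an
# assumption.
DOMAINS = ["cgspace.cgiar.org", "mahider.ilri.org", "dspace.ilri.org"]

def nginx_tls_snippet(domain: str) -> str:
    """Lines for the nginx vhost template once the certificate exists."""
    live = f"/etc/letsencrypt/live/{domain}"
    return (
        f"ssl_certificate {live}/fullchain.pem;\n"
        f"ssl_certificate_key {live}/privkey.pem;"
    )

def renewal_cron_line(hour: int = 3) -> str:
    """Let's Encrypt certs expire after 90 days, so renewal must be automated;
    `certbot renew` is a no-op until a certificate is near expiry."""
    return (f"0 {hour} * * * /usr/bin/certbot renew --quiet "
            "--post-hook 'systemctl reload nginx'")

for d in DOMAINS:
    print(f"# {d}\n{nginx_tls_snippet(d)}")
print(renewal_cron_line())
```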
810,714 | 30,256,705,904 | IssuesEvent | 2023-07-07 03:43:51 | googleapis/python-aiplatform | https://api.github.com/repos/googleapis/python-aiplatform | closed | tests.system.aiplatform.test_language_models.TestLanguageModels: test_batch_prediction failed | type: bug priority: p1 flakybot: issue api: vertex-ai | This test failed!
To configure my behavior, see [the Flaky Bot documentation](https://github.com/googleapis/repo-automation-bots/tree/main/packages/flakybot).
If I'm commenting on this issue too often, add the `flakybot: quiet` label and
I will stop commenting.
---
commit: 2235305c7714835ff331e5294f90a6a23e31391d
buildURL: [Build Status](https://source.cloud.google.com/results/invocations/282b2d95-c702-4500-b272-c68e5ff03574), [Sponge](http://sponge2/282b2d95-c702-4500-b272-c68e5ff03574)
status: failed
<details><summary>Test output</summary><br><pre>self = <tests.system.aiplatform.test_language_models.TestLanguageModels object at 0x7f899c0a3ee0>
def test_batch_prediction(self):
source_uri = "gs://ucaip-samples-us-central1/model/llm/batch_prediction/batch_prediction_prompts1.jsonl"
destination_uri_prefix = "gs://ucaip-samples-us-central1/model/llm/batch_prediction/predictions/text-bison@001_"
aiplatform.init(project=e2e_base._PROJECT, location=e2e_base._LOCATION)
model = TextGenerationModel.from_pretrained("text-bison@001")
> job = model.batch_predict(
source_uri=source_uri,
destination_uri_prefix=destination_uri_prefix,
model_parameters={"temperature": 0, "top_p": 1, "top_k": 5},
)
tests/system/aiplatform/test_language_models.py:158:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
vertexai/language_models/_language_models.py:379: in batch_predict
job = aiplatform.BatchPredictionJob.create(
google/cloud/aiplatform/jobs.py:794: in create
return cls._create(
google/cloud/aiplatform/base.py:814: in wrapper
return method(*args, **kwargs)
google/cloud/aiplatform/jobs.py:875: in _create
batch_prediction_job._block_until_complete()
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <google.cloud.aiplatform.jobs.BatchPredictionJob object at 0x7f89981ac070>
resource name: projects/580378083368/locations/us-central1/batchPredictionJobs/3499956480004587520
def _block_until_complete(self):
"""Helper method to block and check on job until complete.
Raises:
RuntimeError: If job failed or cancelled.
"""
log_wait = _LOG_WAIT_TIME
previous_time = time.time()
while self.state not in _JOB_COMPLETE_STATES:
current_time = time.time()
if current_time - previous_time >= log_wait:
self._log_job_state()
log_wait = min(log_wait * _WAIT_TIME_MULTIPLIER, _MAX_WAIT_TIME)
previous_time = current_time
time.sleep(_JOB_WAIT_TIME)
self._log_job_state()
# Error is only populated when the job state is
# JOB_STATE_FAILED or JOB_STATE_CANCELLED.
if self._gca_resource.state in _JOB_ERROR_STATES:
> raise RuntimeError("Job failed with:\n%s" % self._gca_resource.error)
E RuntimeError: Job failed with:
E code: 13
E message: "INTERNAL"
google/cloud/aiplatform/jobs.py:241: RuntimeError</pre></details> | 1.0 | tests.system.aiplatform.test_language_models.TestLanguageModels: test_batch_prediction failed - This test failed!
To configure my behavior, see [the Flaky Bot documentation](https://github.com/googleapis/repo-automation-bots/tree/main/packages/flakybot).
If I'm commenting on this issue too often, add the `flakybot: quiet` label and
I will stop commenting.
---
commit: 2235305c7714835ff331e5294f90a6a23e31391d
buildURL: [Build Status](https://source.cloud.google.com/results/invocations/282b2d95-c702-4500-b272-c68e5ff03574), [Sponge](http://sponge2/282b2d95-c702-4500-b272-c68e5ff03574)
status: failed
<details><summary>Test output</summary><br><pre>self = <tests.system.aiplatform.test_language_models.TestLanguageModels object at 0x7f899c0a3ee0>
def test_batch_prediction(self):
source_uri = "gs://ucaip-samples-us-central1/model/llm/batch_prediction/batch_prediction_prompts1.jsonl"
destination_uri_prefix = "gs://ucaip-samples-us-central1/model/llm/batch_prediction/predictions/text-bison@001_"
aiplatform.init(project=e2e_base._PROJECT, location=e2e_base._LOCATION)
model = TextGenerationModel.from_pretrained("text-bison@001")
> job = model.batch_predict(
source_uri=source_uri,
destination_uri_prefix=destination_uri_prefix,
model_parameters={"temperature": 0, "top_p": 1, "top_k": 5},
)
tests/system/aiplatform/test_language_models.py:158:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
vertexai/language_models/_language_models.py:379: in batch_predict
job = aiplatform.BatchPredictionJob.create(
google/cloud/aiplatform/jobs.py:794: in create
return cls._create(
google/cloud/aiplatform/base.py:814: in wrapper
return method(*args, **kwargs)
google/cloud/aiplatform/jobs.py:875: in _create
batch_prediction_job._block_until_complete()
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <google.cloud.aiplatform.jobs.BatchPredictionJob object at 0x7f89981ac070>
resource name: projects/580378083368/locations/us-central1/batchPredictionJobs/3499956480004587520
def _block_until_complete(self):
"""Helper method to block and check on job until complete.
Raises:
RuntimeError: If job failed or cancelled.
"""
log_wait = _LOG_WAIT_TIME
previous_time = time.time()
while self.state not in _JOB_COMPLETE_STATES:
current_time = time.time()
if current_time - previous_time >= log_wait:
self._log_job_state()
log_wait = min(log_wait * _WAIT_TIME_MULTIPLIER, _MAX_WAIT_TIME)
previous_time = current_time
time.sleep(_JOB_WAIT_TIME)
self._log_job_state()
# Error is only populated when the job state is
# JOB_STATE_FAILED or JOB_STATE_CANCELLED.
if self._gca_resource.state in _JOB_ERROR_STATES:
> raise RuntimeError("Job failed with:\n%s" % self._gca_resource.error)
E RuntimeError: Job failed with:
E code: 13
E message: "INTERNAL"
google/cloud/aiplatform/jobs.py:241: RuntimeError</pre></details> | non_infrastructure | tests system aiplatform test language models testlanguagemodels test batch prediction failed this test failed to configure my behavior see if i m commenting on this issue too often add the flakybot quiet label and i will stop commenting commit buildurl status failed test output self def test batch prediction self source uri gs ucaip samples us model llm batch prediction batch prediction jsonl destination uri prefix gs ucaip samples us model llm batch prediction predictions text bison aiplatform init project base project location base location model textgenerationmodel from pretrained text bison job model batch predict source uri source uri destination uri prefix destination uri prefix model parameters temperature top p top k tests system aiplatform test language models py vertexai language models language models py in batch predict job aiplatform batchpredictionjob create google cloud aiplatform jobs py in create return cls create google cloud aiplatform base py in wrapper return method args kwargs google cloud aiplatform jobs py in create batch prediction job block until complete self resource name projects locations us batchpredictionjobs def block until complete self helper method to block and check on job until complete raises runtimeerror if job failed or cancelled log wait log wait time previous time time time while self state not in job complete states current time time time if current time previous time log wait self log job state log wait min log wait wait time multiplier max wait time previous time current time time sleep job wait time self log job state error is only populated when the job state is job state failed or job state cancelled if self gca resource state in job error states raise runtimeerror job failed with n s self gca resource error e runtimeerror job failed with e code e message internal google cloud aiplatform jobs py runtimeerror | 0 |
755,203 | 26,420,845,900 | IssuesEvent | 2023-01-13 20:18:13 | 42-webserv/SpaceX | https://api.github.com/repos/42-webserv/SpaceX | opened | [⚙] test script using curl, telnet | Priority: ⭑⭑⭑ [Reason] Todo: ⌨ Status: ▶ | **Content**
* Write the content here
<br/><br/>
**Tasks**
- [ ] telnet
- [ ] curl
e.g.
```
curl -X POST -H "Content-Type: plain/text" --data [SOME DATA]
curl --resolve example.com:80:127.0.0.1 http://example.com/
```
| 1.0 | [⚙] test script using curl, telnet - **Content**
* Write the content here
<br/><br/>
**Tasks**
- [ ] telnet
- [ ] curl
e.g.
```
curl -X POST -H "Content-Type: plain/text" --data [SOME DATA]
curl --resolve example.com:80:127.0.0.1 http://example.com/
```
| non_infrastructure | test script using curl telnet content write the content here tasks telnet curl e g curl x post h "content type plain text" --data curl --resolve example com | 0 |
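The telnet task in the row above amounts to writing a raw HTTP/1.1 request straight to a socket, which is what a test script would automate. A hedged Python sketch — the request shape mirrors the row's curl example, including its literal `plain/text` content type; nothing here is from the repository:

```python
# Hypothetical sketch of the "telnet" half of the test script: write a raw
# HTTP/1.1 request to a socket and read the reply until the server closes.
import socket

def raw_http_request(host: str, port: int, payload: str) -> bytes:
    body = payload.encode()
    request = (
        "POST / HTTP/1.1\r\n"
        f"Host: {host}\r\n"
        "Content-Type: plain/text\r\n"   # mirrors the curl example above
        f"Content-Length: {len(body)}\r\n"
        "Connection: close\r\n"
        "\r\n"
    ).encode() + body
    with socket.create_connection((host, port), timeout=5) as sock:
        sock.sendall(request)
        chunks = []
        while chunk := sock.recv(4096):  # server closes after the response
            chunks.append(chunk)
    return b"".join(chunks)
```

Because the bytes on the wire are explicit, a webserver's status line, headers, and framing can all be asserted on directly — something curl hides.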
327,722 | 28,079,705,844 | IssuesEvent | 2023-03-30 04:51:06 | pulp/pulp_rpm | https://api.github.com/repos/pulp/pulp_rpm | closed | Refactor `test_advisory_upload` test module to use pytest fixtures | Task Tests | The `test_advisory_upload` module needs to be refactored to use pytest fixtures. | 1.0 | Refactor `test_advisory_upload` test module to use pytest fixtures - The `test_advisory_upload` module needs to be refactored to use pytest fixtures. | non_infrastructure | refactor test advisory upload test module to use pytest fixtures the test advisory upload module needs to be refactored to use pytest fixtures | 0 |
34,631 | 30,233,923,185 | IssuesEvent | 2023-07-06 08:55:34 | ministryofjustice/data-platform | https://api.github.com/repos/ministryofjustice/data-platform | closed | Stretch goal: Catalogue Discovery: Investigate integrating CKAN with DataHub | Data Platform Core Infrastructure | ## User Story
The two candidates we explored offer different benefits, and as we experimented with them, we increasingly suspect that our desired tool will have features that are available only in one or the other. Since both catalogues come with comprehensive APIs, we would like to explore the possibility of integrating them with each other to best exploit the features of each. For example, could we, using APIs, allow CKAN to fall back to DataHub's authentication mechanism which easily integrates with AzureAD? Or could we expose things like Lineage in CKAN's interface.
## User Type(s)
Data Platform Consumers
## Value
This will give us additional flexibility when identifying catalogue candidates.
## Questions / Assumptions / Hypothesis
We assume that CKAN provides enough configuration to allow graphically exposing data scraped from DataHub
We assume the experience will be largely seamless.
We assume we have enough skills within our team to give it a real test.
### Proposal
We should test the two scenarios outlined in the intro to see if this is feasible.
## Definition of done
<!-- Checklist for definition of done and acceptance criteria, for example: -->
- [ ] Integration experimentation carried out
- [ ] Outcomes captured
- [ ] Findings reflected in User Research
- [ ] Integration tests assessed as potential candidates for the hackathon
- [ ] Demo
- [ ] Follow-on stories raised.
## Reference
[How to write good user stories](https://www.gov.uk/service-manual/agile-delivery/writing-user-stories)
| 1.0 | Stretch goal: Catalogue Discovery: Investigate integrating CKAN with DataHub - ## User Story
The two candidates we explored offer different benefits, and as we experimented with them, we increasingly suspect that our desired tool will have features that are available only in one or the other. Since both catalogues come with comprehensive APIs, we would like to explore the possibility of integrating them with each other to best exploit the features of each. For example, could we, using APIs, allow CKAN to fall back to DataHub's authentication mechanism which easily integrates with AzureAD? Or could we expose things like Lineage in CKAN's interface.
## User Type(s)
Data Platform Consumers
## Value
This will give us additional flexibility when identifying catalogue candidates.
## Questions / Assumptions / Hypothesis
We assume that CKAN provides enough configuration to allow graphically exposing data scraped from DataHub
We assume the experience will be largely seamless.
We assume we have enough skills within our team to give it a real test.
### Proposal
We should test the two scenarios outlined in the intro to see if this is feasible.
## Definition of done
<!-- Checklist for definition of done and acceptance criteria, for example: -->
- [ ] Integration experimentation carried out
- [ ] Outcomes captured
- [ ] Findings reflected in User Research
- [ ] Integration tests assessed as potential candidates for the hackathon
- [ ] Demo
- [ ] Follow-on stories raised.
## Reference
[How to write good user stories](https://www.gov.uk/service-manual/agile-delivery/writing-user-stories)
| infrastructure | stretch goal catalogue discovery investigate integrating ckan with datahub user story the two candidates we explored offer different benefits and as we experimented with them we increasingly suspect that our desired tool will have features that are available only in one or the other since both catalogues come with comprehensive apis we would like to explore the possibility of integrating them with each other to best exploit the features of each for example could we using apis allow ckan to fall back to datahub s authentication mechanism which easily integrates with azuread or could we expose things like lineage in ckan s interface user type s data platform consumers value this will give us additional flexibility when identifying catalogue candidates questions assumptions hypothesis we assume that ckan provides enough configuration to allow graphically exposing data scraped from datahub we assume the experience will be largely seamless we assume we have enough skills within our team to give it a real test proposal we should test the two scenarios outlined in the intro to see if this is feasible definition of done integration experimentation carried out outcomes captured findings reflected in user research integration tests assessed as potential candidates for the hackathon demo follow on stories raised reference | 1 |
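As a concrete illustration of the CKAN side of the integration this row proposes, metadata pulled from DataHub could be mapped onto a CKAN `package_create` payload. This is a sketch under assumptions — the entity shape and sample values are hypothetical, and no real API call is made:

```python
# Hypothetical sketch (entity shape and sample values invented): translate a
# DataHub dataset entity into a CKAN `package_create` payload, so metadata
# scraped from DataHub's API could be surfaced in CKAN's interface.
def datahub_to_ckan_package(entity: dict) -> dict:
    """Map a DataHub dataset (urn/name/description) onto CKAN package fields."""
    # CKAN package names must be lowercase slugs; dots/underscores become dashes.
    slug = entity["name"].lower().replace(".", "-").replace("_", "-")
    return {
        "name": slug,
        "title": entity["name"],
        "notes": entity.get("description", ""),
        # Keep the DataHub URN so lineage can be looked up from the CKAN record.
        "extras": [{"key": "datahub_urn", "value": entity["urn"]}],
    }

sample = {
    "urn": "urn:li:dataset:(urn:li:dataPlatform:glue,sales.orders,PROD)",
    "name": "sales.orders",
    "description": "Order-level extract",
}
print(datahub_to_ckan_package(sample))
```

Storing the URN as a package extra is one possible way to let CKAN link back to DataHub features such as lineage.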
7,535 | 6,979,998,670 | IssuesEvent | 2017-12-12 23:16:32 | zulip/zulip | https://api.github.com/repos/zulip/zulip | opened | Set up test suite on CircleCI | area: testing-infrastructure priority: high | Travis CI isn't very stable or very fast, and I think we can do better running on CircleCI.
Key differences include:
* We control the base image -- so it won't change out from under us and cause failures, and also we can preload things that save time in provision.
* Caching is more hermetic. This is actually good and bad, as Circle went a bit overboard on this; results will be totally reproducible, which is good, but it'll take a little work to maintain good performance with the venv and npm/yarn caches.
* You can run a build and SSH into the machine to look around -- super handy for debugging.
* Queue times seem much shorter.
The sequence to get us off of Travis and onto Circle will look something like:
[] Get the backend suite (with Python 3.4) running on Circle. I have this most of the way.
[] Set up our main GitHub repo to have Circle run on master and all PRs.
[] Get the frontend suite running on Circle.
[] Get the backend suite with Python 3.5 running on Circle. Probably use a Xenial base image.
[] Get the production suite running on Circle. I think there may be fancy things we can do to make this suite better than it is now in Travis, but for now just make something functionally equivalent.
[] Let these run for everyone for a week or two. Fix any issues.
[] Once we're happy, turn off Travis.
I did some work on this a while ago. Most of it is in master; take a look through `gitk --grep=circle` to read it, particularly the comments and commit messages.
For a few things, I made hacky changes to non-Circle-specific files; those are [in my `circle` branch](https://github.com/zulip/zulip/compare/master...gnprice:circle). The result [almost works!](https://circleci.com/gh/gnprice/zulip/146), running the backend suite.
To get started on this:
* set up CircleCI to run on your own GitHub `zulip` repo
* borrow my `circle` branch
* get the remaining backend tests working
* make non-hacky versions of all the fixes, and send a PR for them
* then we'll move on to the other suites!
| 1.0 | Set up test suite on CircleCI - Travis CI isn't very stable or very fast, and I think we can do better running on CircleCI.
Key differences include:
* We control the base image -- so it won't change out from under us and cause failures, and also we can preload things that save time in provision.
* Caching is more hermetic. This is actually good and bad, as Circle went a bit overboard on this; results will be totally reproducible, which is good, but it'll take a little work to maintain good performance with the venv and npm/yarn caches.
* You can run a build and SSH into the machine to look around -- super handy for debugging.
* Queue times seem much shorter.
The sequence to get us off of Travis and onto Circle will look something like:
[] Get the backend suite (with Python 3.4) running on Circle. I have this most of the way.
[] Set up our main GitHub repo to have Circle run on master and all PRs.
[] Get the frontend suite running on Circle.
[] Get the backend suite with Python 3.5 running on Circle. Probably use a Xenial base image.
[] Get the production suite running on Circle. I think there may be fancy things we can do to make this suite better than it is now in Travis, but for now just make something functionally equivalent.
[] Let these run for everyone for a week or two. Fix any issues.
[] Once we're happy, turn off Travis.
I did some work on this a while ago. Most of it is in master; take a look through `gitk --grep=circle` to read it, particularly the comments and commit messages.
For a few things, I made hacky changes to non-Circle-specific files; those are [in my `circle` branch](https://github.com/zulip/zulip/compare/master...gnprice:circle). The result [almost works!](https://circleci.com/gh/gnprice/zulip/146), running the backend suite.
To get started on this:
* set up CircleCI to run on your own GitHub `zulip` repo
* borrow my `circle` branch
* get the remaining backend tests working
* make non-hacky versions of all the fixes, and send a PR for them
* then we'll move on to the other suites!
| infrastructure | set up test suite on circleci travis ci isn t very stable or very fast and i think we can do better running on circleci key differences include we control the base image so it won t change out from under us and cause failures and also we can preload things that save time in provision caching is more hermetic this is actually good and bad as circle went a bit overboard on this results will be totally reproducible which is good but it ll take a little work to maintain good performance with the venv and npm yarn caches you can run a build and ssh into the machine to look around super handy for debugging queue times seem much shorter the sequence to get us off of travis and onto circle will look something like get the backend suite with python running on circle i have this most of the way set up our main github repo to have circle run on master and all prs get the frontend suite running on circle get the backend suite with python running on circle probably use a xenial base image get the production suite running on circle i think there may be fancy things we can do to make this suite better than it is now in travis but for now just make something functionally equivalent let these run for everyone for a week or two fix any issues once we re happy turn off travis i did some work on this a while ago most of it is in master take a look through gitk grep circle to read it particularly the comments and commit messages for a few things i made hacky changes to non circle specific files those are the result running the backend suite to get started on this set up circleci to run on your own github zulip repo borrow my circle branch get the remaining backend tests working make non hacky versions of all the fixes and send a pr for them then we ll move on to the other suites | 1 |
78,871 | 22,490,057,801 | IssuesEvent | 2022-06-23 00:17:04 | google/mediapipe | https://api.github.com/repos/google/mediapipe | opened | Error running hello_world on macOS | type:build/install | <em>Please make sure that this is a build/installation issue and also refer to the [troubleshooting](https://google.github.io/mediapipe/getting_started/troubleshooting.html) documentation before raising any issues.</em>
**System information** (Please provide as much relevant information as possible)
- OS Platform and Distribution (e.g. Linux Ubuntu 16.04, Android 11, iOS 14.4): MacOS 12.1
- Compiler version (e.g. gcc/g++ 8 /Apple clang version 12.0.0): Apple clang version 13.1.6
- Programming Language and version ( e.g. C++ 14, Python 3.6, Java ):
- Installed using virtualenv? pip? Conda? (if python):
- [MediaPipe version](https://github.com/google/mediapipe/releases): 0.8.10
- Bazel version: 5.0.0
- XCode and Tulsi versions (if iOS):
- Android SDK and NDK versions (if android):
- Android [AAR](https://google.github.io/mediapipe/getting_started/android_archive_library.html) ( if android):
- OpenCV version (if running on desktop):
**Describe the problem**:
I was following these installation steps: https://google.github.io/mediapipe/getting_started/install.html#installing-on-macos
But failed to run hello_world at the end.
I believe this has to be something with failing to download some dependencies via HTTP 404.
**Complete Logs:**
$ bazel run --define MEDIAPIPE_DISABLE_GPU=1 mediapipe/examples/desktop/hello_world:hello_world
Extracting Bazel installation...
Starting local Bazel server and connecting to it...
DEBUG: Rule 'rules_foreign_cc' indicated that a canonical reproducible form can be obtained by modifying arguments sha256 = "c2cdcf55ffaf49366725639e45dedd449b8c3fe22b54e31625eb80ce3a240f1e"
DEBUG: Repository rules_foreign_cc instantiated at:
/Users/jjanggu/carat-client/mediapipe/WORKSPACE:42:13: in <toplevel>
Repository rule http_archive defined at:
/private/var/tmp/_bazel_jjanggu/009595b94c86fe5d8f182dd3d70a2234/external/bazel_tools/tools/build_defs/repo/http.bzl:364:31: in <toplevel>
WARNING: Download from http://mirror.tensorflow.org/github.com/bazelbuild/rules_closure/archive/cf1e44edb908e9616030cc83d085989b8e6cd6df.tar.gz failed: class java.io.FileNotFoundException GET returned 404 Not Found
WARNING: Download from https://storage.googleapis.com/mirror.tensorflow.org/github.com/tensorflow/runtime/archive/2f6de37d68a4c69e2ff9eec3cebbf1369e496940.tar.gz failed: class java.io.FileNotFoundException GET returned 404 Not Found
DEBUG: /private/var/tmp/_bazel_jjanggu/009595b94c86fe5d8f182dd3d70a2234/external/org_tensorflow/third_party/repo.bzl:124:14:
Warning: skipping import of repository 'com_google_absl' because it already exists.
DEBUG: /private/var/tmp/_bazel_jjanggu/009595b94c86fe5d8f182dd3d70a2234/external/org_tensorflow/third_party/repo.bzl:124:14:
Warning: skipping import of repository 'com_google_benchmark' because it already exists.
DEBUG: /private/var/tmp/_bazel_jjanggu/009595b94c86fe5d8f182dd3d70a2234/external/org_tensorflow/third_party/repo.bzl:124:14:
Warning: skipping import of repository 'pybind11_bazel' because it already exists.
DEBUG: /private/var/tmp/_bazel_jjanggu/009595b94c86fe5d8f182dd3d70a2234/external/org_tensorflow/third_party/repo.bzl:124:14:
Warning: skipping import of repository 'com_google_protobuf' because it already exists.
DEBUG: /private/var/tmp/_bazel_jjanggu/009595b94c86fe5d8f182dd3d70a2234/external/org_tensorflow/third_party/repo.bzl:124:14:
Warning: skipping import of repository 'com_google_googletest' because it already exists.
DEBUG: /private/var/tmp/_bazel_jjanggu/009595b94c86fe5d8f182dd3d70a2234/external/org_tensorflow/third_party/repo.bzl:124:14:
Warning: skipping import of repository 'com_github_gflags_gflags' because it already exists.
DEBUG: /private/var/tmp/_bazel_jjanggu/009595b94c86fe5d8f182dd3d70a2234/external/org_tensorflow/third_party/repo.bzl:124:14:
Warning: skipping import of repository 'build_bazel_rules_apple' because it already exists.
DEBUG: /private/var/tmp/_bazel_jjanggu/009595b94c86fe5d8f182dd3d70a2234/external/org_tensorflow/third_party/repo.bzl:124:14:
Warning: skipping import of repository 'build_bazel_rules_swift' because it already exists.
DEBUG: /private/var/tmp/_bazel_jjanggu/009595b94c86fe5d8f182dd3d70a2234/external/org_tensorflow/third_party/repo.bzl:124:14:
Warning: skipping import of repository 'build_bazel_apple_support' because it already exists.
DEBUG: /private/var/tmp/_bazel_jjanggu/009595b94c86fe5d8f182dd3d70a2234/external/org_tensorflow/third_party/repo.bzl:124:14:
Warning: skipping import of repository 'xctestrunner' because it already exists.
DEBUG: /private/var/tmp/_bazel_jjanggu/009595b94c86fe5d8f182dd3d70a2234/external/org_tensorflow/third_party/repo.bzl:124:14:
Warning: skipping import of repository 'pybind11' because it already exists.
DEBUG: Rule 'rules_cc' indicated that a canonical reproducible form can be obtained by modifying arguments sha256 = "73106859751c2bc314861adc136d5cbecee3f7ae7d05539dc8235efbf4efdcbe"
DEBUG: Repository rules_cc instantiated at:
/Users/jjanggu/carat-client/mediapipe/WORKSPACE:36:13: in <toplevel>
Repository rule http_archive defined at:
/private/var/tmp/_bazel_jjanggu/009595b94c86fe5d8f182dd3d70a2234/external/bazel_tools/tools/build_defs/repo/http.bzl:364:31: in <toplevel>
ERROR: Traceback (most recent call last):
File "/private/var/tmp/_bazel_jjanggu/009595b94c86fe5d8f182dd3d70a2234/external/rules_cc/cc/private/rules_impl/cc_flags_supplier.bzl", line 16, column 76, in <toplevel>
load("@bazel_tools//tools/cpp:toolchain_utils.bzl", "find_cpp_toolchain", "use_cpp_toolchain")
Error: file '@bazel_tools//tools/cpp:toolchain_utils.bzl' does not contain symbol 'use_cpp_toolchain' (did you mean 'find_cpp_toolchain'?)
INFO: Repository com_google_protobuf instantiated at:
/Users/jjanggu/carat-client/mediapipe/WORKSPACE:131:13: in <toplevel>
Repository rule http_archive defined at:
/private/var/tmp/_bazel_jjanggu/009595b94c86fe5d8f182dd3d70a2234/external/bazel_tools/tools/build_defs/repo/http.bzl:364:31: in <toplevel>
ERROR: /Users/jjanggu/carat-client/mediapipe/mediapipe/examples/desktop/hello_world/BUILD:19:10: error loading package 'mediapipe/framework': at /Users/jjanggu/carat-client/mediapipe/mediapipe/framework/port/build_config.bzl:7:6: at /Users/jjanggu/carat-client/mediapipe/mediapipe/framework/tool/mediapipe_graph.bzl:23:6: at /private/var/tmp/_bazel_jjanggu/009595b94c86fe5d8f182dd3d70a2234/external/org_tensorflow/tensorflow/lite/core/shims/cc_library_with_tflite.bzl:4:5: at /private/var/tmp/_bazel_jjanggu/009595b94c86fe5d8f182dd3d70a2234/external/org_tensorflow/tensorflow/lite/build_def.bzl:4:5: at /private/var/tmp/_bazel_jjanggu/009595b94c86fe5d8f182dd3d70a2234/external/org_tensorflow/tensorflow/tensorflow.bzl:13:5: at /private/var/tmp/_bazel_jjanggu/009595b94c86fe5d8f182dd3d70a2234/external/org_tensorflow/tensorflow/core/platform/rules_cc.bzl:4:5: at /private/var/tmp/_bazel_jjanggu/009595b94c86fe5d8f182dd3d70a2234/external/org_tensorflow/tensorflow/core/platform/default/rules_cc.bzl:11:5: at /private/var/tmp/_bazel_jjanggu/009595b94c86fe5d8f182dd3d70a2234/external/rules_cc/cc/defs.bzl:17:6: initialization of module 'cc/private/rules_impl/cc_flags_supplier.bzl' failed and referenced by '//mediapipe/examples/desktop/hello_world:hello_world'
ERROR: Analysis of target '//mediapipe/examples/desktop/hello_world:hello_world' failed; build aborted: Analysis failed
INFO: Elapsed time: 107.492s
INFO: 0 processes.
FAILED: Build did NOT complete successfully (43 packages loaded, 192 targets configured)
FAILED: Build did NOT complete successfully (43 packages loaded, 192 targets configured)
currently loading: mediapipe/framework ... (2 packages)
Fetching @local_config_cc; Running xcode-locator
Fetching https://github.com/protocolbuffers/protobuf/archive/v3.19.1.tar.gz; 937,412B
Thank you in advance :) | 1.0 | non_infrastructure | 0
7,353 | 6,916,644,349 | IssuesEvent | 2017-11-29 03:49:48 | uccser/cs-unplugged | https://api.github.com/repos/uccser/cs-unplugged | closed | Implement automatic update of .po file | infrastructure | The .po file needs to be updated with all static strings for translation, by running `python manage.py makemessages`. This needs to be run upon any change to templates or database content.
This process should be automated on Travis to run when any such files are updated on `develop`. | 1.0 | infrastructure | 1
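One possible shape for the automation this issue asks for, as a CI config fragment. This is a sketch of a simpler variant that fails the build when the catalogue is stale, rather than committing the regenerated file automatically; the locale directory and flags are assumptions, not details from the cs-unplugged repository:

```yaml
# Hypothetical .travis.yml fragment: regenerate the message catalogue and
# fail the build if the committed .po files are stale.
script:
  - python manage.py makemessages --locale en --no-wrap
  - git diff --exit-code -- locale/   # exits non-zero if strings changed but the .po was not regenerated
```

Running this as a check keeps CI from having to push commits, while still forcing the `.po` update to land with any change to templates or database content.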
7,670 | 7,047,342,256 | IssuesEvent | 2018-01-02 13:03:07 | openpaperwork/paperwork | https://api.github.com/repos/openpaperwork/paperwork | opened | Mails & mailing-list | infrastructure | Would be nice:
* Not have the mailing hosted on googlegroups.
* Be able to provide `xxx@openpaper.work` emails to main contributors if they want one.
Regarding the mailing-lists: do we wait for the switch to Gnome infrastructure? Or do we decide to keep using our own even after switching to Gnome infrastructure? | 1.0 | infrastructure | 1
28,989 | 23,648,658,962 | IssuesEvent | 2022-08-26 02:54:28 | dotnet/project-system | https://api.github.com/repos/dotnet/project-system | closed | Consolidate build pipeline orgs and use consistent pipeline naming | Area-Infrastructure Triage-Approved | Relates to: https://github.com/dotnet/project-system/issues/7915
This issue extends beyond this repo to all the repos our team owns. We own these repos:
- https://github.com/dotnet/project-system
- https://github.com/dotnet/project-system-tools
- https://github.com/dotnet/NuGet.BuildTasks
- https://github.com/dotnet/ProjFileTools
We currently use 2 different DevOps orgs (1 project within each org). They are:
- https://dev.azure.com/devdiv/DevDiv
- Org: `devdiv`
- Project: `DevDiv`
- https://dev.azure.com/dnceng/public
- Org: `dnceng`
- Project: `public`
The specific reason for this is that they have different access settings: [private vs public](https://docs.microsoft.com/en-us/azure/devops/organizations/projects/about-projects#private-and-public-projects). The `DevDiv` project is **private** (aka *Enterprise*) and the `public` project is **public**. For our GitHub pull-request pipelines, we require a **public** project. For our build (signing, packaging, etc.), we require a **private** project.
As it currently stands, every project in the `devdiv` org is **private**. However, the `dnceng` org contains both a **public** (`public`) and a **private** (`internal`) project. From my understanding, our Microsoft org, **.NET**, would be billed to the `dnceng` (DotNet Engineering) DevOps org. Technically speaking, the only thing that is required to be in the `devdiv` org is our insertion PRs to Visual Studio, since the VS repo exists within that org. One restriction is the number of *variable groups* we rely on in our pipelines to make them function. Since we create VS components, we rely on their variables. However, there might be similar variables in the `dnceng` org.
This issue is to investigate whether it is possible to consolidate the pipelines into the same org, and whether that is even worthwhile. Right now, the downsides of using 2 orgs are:
- Different variable groups (for secrets primarily)
- Different pipeline images/image pools
- (Potentially) using resources that we aren't billed for
- Confusion when investigating infrastructure issues or documenting infrastructure
- Different naming and folder conventions for pipelines
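To illustrate the first downside: a pipeline YAML references variable groups by name, and those groups are defined per DevOps project, so they do not carry over between `devdiv` and `dnceng`. The group name below is hypothetical:

```yaml
# Hypothetical azure-pipelines.yml fragment. Variable groups live in a
# specific org/project, so a pipeline moved between devdiv and dnceng
# would need its groups recreated (or re-pointed) in the new project.
variables:
- group: VS-Insertion-Secrets    # assumed group name; exists only in one org
- name: BuildConfiguration       # plain variables travel with the YAML file
  value: Release
```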
This issue would also establish consistent naming for pipelines. Here are the current names (and folders) of pipelines, separated by pipeline type and repo:
- GitHub PR build pipeline
- `project-system`: dotnet\project-system: [**unit-tests**](https://dev.azure.com/dnceng/public/_build?definitionId=406)
- `project-system-tools`: dotnet\project-system-tools: [**build**](https://dev.azure.com/dnceng/public/_build?definitionId=446)
- `NuGet.BuildTasks`: dotnet\NuGet.BuildTasks: [**dotnet.NuGet.BuildTasks-CI**](https://dev.azure.com/dnceng/public/_build?definitionId=567)
- `ProjFileTools`: *None*
- GitHub PR RichNav pipeline
- `project-system`: dotnet\project-system: [**project-system-richnav**](https://dev.azure.com/dnceng/public/_build?definitionId=910)
- `project-system-tools`: *None*
- `NuGet.BuildTasks`: *None*
- `ProjFileTools`: *None*
- Localization pipeline (after PR merge)
- `project-system`: dotnet\project-system: [**one-loc-build**](https://dev.azure.com/dnceng/public/_build?definitionId=981)
- `project-system-tools`: *None*
- `NuGet.BuildTasks`: *None*
- `ProjFileTools`: *None*
- Signed build/packaging pipeline (after PR merge)
- `project-system`: DotNet\project-system: [**DotNet-Project-System**](https://dev.azure.com/devdiv/DevDiv/_build?definitionId=9675)
- `project-system-tools`: DotNet\project-system-tools: [**project-system-tools**](https://dev.azure.com/devdiv/DevDiv/_build?definitionId=7294)
- `NuGet.BuildTasks`: *No Folder*: [**dotnet.NuGet.BuildTasks**](https://dev.azure.com/devdiv/DevDiv/_build?definitionId=11797)
- `ProjFileTools`: *None*
- Compliance pipeline (after PR merge)
- `project-system`: DotNet\project-system: [**DotNet-Project-System-Compliance**](https://dev.azure.com/devdiv/DevDiv/_build?definitionId=15013)
- `project-system-tools`: *None*
- `NuGet.BuildTasks`: *No Folder*: [**DotNet.NuGet.BuildTasks-Compliance**](https://dev.azure.com/devdiv/DevDiv/_build?definitionId=15125)
- `ProjFileTools`: *None*
- VS Insertion release pipeline
- `project-system`: Managed Languages\Project System: [**Project System Insertion (main -> main)**](https://dev.azure.com/devdiv/DevDiv/_release?view=all&_a=releases&definitionId=1242)
- `project-system-tools`: *None*
- `NuGet.BuildTasks`: Managed Languages: [**Nuget.BuildTasks**](https://dev.azure.com/devdiv/DevDiv/_release?view=all&_a=releases&definitionId=1937)
- `ProjFileTools`: *None*
- *Additional pipelines*
- `project-system`: dotnet\project-system: [**integration-tests**](https://dev.azure.com/dnceng/public/_build?definitionId=417) (standard pipeline)
- `project-system`: Managed Languages\Project System: [**Project System - OptProf**](https://dev.azure.com/devdiv/DevDiv/_release?view=all&_a=releases&definitionId=3197) (release pipeline)
- There are also multiple variants of the *Project System Insertion* release pipeline targeting different branches and a validation release pipeline | 1.0 | infrastructure
system dotnet project system project system tools dotnet project system tools nuget buildtasks dotnet nuget buildtasks projfiletools none github pr richnav pipeline project system dotnet project system project system tools none nuget buildtasks none projfiletools none localization pipeline after pr merge project system dotnet project system project system tools none nuget buildtasks none projfiletools none signed build packaging pipeline after pr merge project system dotnet project system project system tools dotnet project system tools nuget buildtasks no folder projfiletools none compliance pipeline after pr merge project system dotnet project system project system tools none nuget buildtasks no folder projfiletools none vs insertion release pipeline project system managed languages project system project system tools none nuget buildtasks managed languages projfiletools none additional pipelines project system dotnet project system standard pipeline project system managed languages project system release pipeline there are also multiple variants of the project system insertion release pipeline targeting different branches and a validation release pipeline | 1 |
196,817 | 6,949,614,043 | IssuesEvent | 2017-12-06 07:27:28 | telerik/kendo-ui-core | https://api.github.com/repos/telerik/kendo-ui-core | opened | The DateTimePicker with DateInput returns null if the value is set before the min value (same for max) | Bug C: DatePicker C: DateTimePicker Kendo1 Priority 1 SEV: Low | ### Bug report
The DateTimePicker with DateInput returns null if the value is set before the min value (same for max). In the same scenario, the DateInput will return the min value, creating different behavior between the two widgets.
### Reproduction of the problem
The issue could be reproduced in the following Dojo by setting a value lower than the min one and logging the results: http://dojo.telerik.com/OmEfOv
### Environment
* **Browser:** [all]
| 1.0 | The DateTimePicker with DateInput returns null if the value is set before the min value (same for max) - ### Bug report
The DateTimePicker with DateInput returns null if the value is set before the min value (same for max). In the same scenario, the DateInput will return the min value, creating different behavior between the two widgets.
### Reproduction of the problem
The issue could be reproduced in the following Dojo by setting a value lower than the min one and logging the results: http://dojo.telerik.com/OmEfOv
### Environment
* **Browser:** [all]
| non_infrastructure | the datetimepicker with dateinput return nulll if the value is set before the min value same for max bug report the datetimepicker with dateinput returns null if the value is set before the min value same for max the the same scenario the dateinput will return the min value creating a different behavior for both widgets reproduction of the problem the issue could be reproduced in the following dojo by setting a value lower than the min one and logging the results environment browser | 0 |
18,868 | 13,149,332,447 | IssuesEvent | 2020-08-09 04:20:45 | timhaley94/holdem | https://api.github.com/repos/timhaley94/holdem | closed | Integrate Circle CI and terraform | infrastructure | On merges to master, Circle CI should run `terraform apply` if and only if the test/build passes. Blocked by #1, #2, #3 | 1.0 | Integrate Circle CI and terraform - On merges to master, Circle CI should run `terraform apply` if and only if the test/build passes. Blocked by #1, #2, #3 | infrastructure | integrate circle ci and terraform on merges to master circle ci should run terraform apply if and only if the test build passes blocked by | 1 |
54,297 | 13,540,747,222 | IssuesEvent | 2020-09-16 15:01:41 | radon-h2020/radon-iac-miner | https://api.github.com/repos/radon-h2020/radon-iac-miner | opened | R-T3.4-14: The IaC miner must have a gui to export the crawled projects | Defect prediction IDE MUST WP3 | ID | R-T3.4-14
-- | --
Section | WP3: Methodology and Quality Assurance Requirements
Type | FUNCTIONAL_SUITABILITY
User Story | As an Operations Engineer/QoS Engineer/Release Manager, I want to export the data concerning the projects that I crawled using the IaC miner
Requirement | The IaC miner must allow developers to dump the crawled repository in a suitable structured format (e.g., CSV, SQL)
Priority | Must have
Affected Tools | DEFECT_PRED_TOOL
Means of Verification | Direct implementation on IDE, feature checklist, case-study | 1.0 | R-T3.4-14: The IaC miner must have a gui to export the crawled projects - ID | R-T3.4-14
-- | --
Section | WP3: Methodology and Quality Assurance Requirements
Type | FUNCTIONAL_SUITABILITY
User Story | As an Operations Engineer/QoS Engineer/Release Manager, I want to export the data concerning the projects that I crawled using the IaC miner
Requirement | The IaC miner must allow developers to dump the crawled repository in a suitable structured format (e.g., CSV, SQL)
Priority | Must have
Affected Tools | DEFECT_PRED_TOOL
Means of Verification | Direct implementation on IDE, feature checklist, case-study | non_infrastructure | r the iac miner must have a gui to export the crawled projects id r section methodology and quality assurance requirements type functional suitability user story as an operations engineer qos engineer release manager i want to export the data concerning the projects that i crawled using the iac miner requirement the iac miner must allow developers to dump the crawled repository in a suitable structured format e g csv sql priority must have affected tools defect pred tool means of verification direct implementation on ide feature checklist case study | 0 |
24,933 | 17,929,900,986 | IssuesEvent | 2021-09-10 07:50:13 | dotnet/runtime | https://api.github.com/repos/dotnet/runtime | closed | Simplify .NETFramework tfms by avoiding the "-windows" RID | area-Infrastructure-libraries in pr | Libraries which target .NET Framework usually have rid agnostic tfms, i.e. `net461`. If the library targets `netstandard2.0-windows` as well, the .NET Framework tfm must be rid specific, as rid specific .NET Framework apps would otherwise pick the .NETStandard asset over the .NETFramework one (based on the RID compatibility rules). There is yet another reason that requires .NETFramework tfms to be RID specific, which is when a project P2Ps other projects which are rid-specific. Without the RID specific .NETFramework tfm, a compatible .NETStandard asset would be picked instead.
NuGet doesn't support setting a `TargetPlatform` in the TargetFramework alias when targeting .NETFramework or .NETStandard. Any such tfms in dotnet/runtime are currently leveraging a hack that strips the TargetPlatform / TargetFrameworkSuffix away during restore and packaging (as NuGet Pack uses the project.assets.json file). For any project that includes a RID specific .NETFramework or .NETStandard tfm, a NuGet.config file must be present next to the solution file so that Visual Studio doesn't attempt to restore these projects as the mentioned hack doesn't work inside Visual Studio. FWIW, generating such NuGet.config file and placing it next to solution files when required is currently handled by slngen.proj: https://github.com/dotnet/runtime/blob/c156ebe0f7be4a81584336d3a152aabad791db25/src/libraries/slngen.proj#L33
I propose that we remove all "TargetFrameworkSuffixes" / TargetPlatforms / RIDs (whatever you would like to call them) from .NETFramework tfms and let the packaging targets handle the cases where a RID specific asset is required in the package. As NuGet will likely never support RID specific .NETFramework tfm aliases, the distinction between a RID specific and a RID agnostic .NETFramework tfm is unnecessary.
cc @joperezr @ericstj | 1.0 | Simplify .NETFramework tfms by avoiding the "-windows" RID - Libraries which target .NET Framework usually have rid agnostic tfms, i.e. `net461`. If the library targets `netstandard2.0-windows` as well, the .NET Framework tfm must be rid specific, as rid specific .NET Framework apps would otherwise pick the .NETStandard asset over the .NETFramework one (based on the RID compatibility rules). There is yet another reason that requires .NETFramework tfms to be RID specific, which is when a project P2Ps other projects which are rid-specific. Without the RID specific .NETFramework tfm, a compatible .NETStandard asset would be picked instead.
NuGet doesn't support setting a `TargetPlatform` in the TargetFramework alias when targeting .NETFramework or .NETStandard. Any such tfms in dotnet/runtime are currently leveraging a hack that strips the TargetPlatform / TargetFrameworkSuffix away during restore and packaging (as NuGet Pack uses the project.assets.json file). For any project that includes a RID specific .NETFramework or .NETStandard tfm, a NuGet.config file must be present next to the solution file so that Visual Studio doesn't attempt to restore these projects as the mentioned hack doesn't work inside Visual Studio. FWIW, generating such NuGet.config file and placing it next to solution files when required is currently handled by slngen.proj: https://github.com/dotnet/runtime/blob/c156ebe0f7be4a81584336d3a152aabad791db25/src/libraries/slngen.proj#L33
I propose that we remove all "TargetFrameworkSuffixes" / TargetPlatforms / RIDs (whatever you would like to call them) from .NETFramework tfms and let the packaging targets handle the cases where a RID specific asset is required in the package. As NuGet will likely never support RID specific .NETFramework tfm aliases, the distinction between a RID specific and a RID agnostic .NETFramework tfm is unnecessary.
cc @joperezr @ericstj | infrastructure | simplify netframework tfms by avoiding the windows rid libraries which target net framework usually have rid agnostic tfms i e if the library targets windows as well the net framework tfm must be rid specific as rid specific net framework apps would otherwise pick the netstandard asset over the netframework one based on the rid compatibility rules there is yet another reason that requires netframework tfms to be rid specific which is when a project other projects which are rid specific without the rid specific netframework tfm a compatible netstandard asset would be picked instead nuget doesn t support setting a targetplatform in the targetframework alias when targeting netframework or netstandard any such tfms in dotnet runtime are currently leveraging a hack that strips the targetplatform targetframeworksuffix away during restore and packaging as nuget pack uses the project assets json file for any project that includes a rid specific netframework or netstandard tfm a nuget config file must be present next to the solution file so that visual studio doesn t attempt to restore these projects as the mentioned hack doesn t work inside visual studio fwiw generating such nuget config file and placing it next to solution files when required is currently handled by slngen proj i propose that we remove all targetframeworksuffixes targetplatforms rids whatever you would like to call them from netframework tfms and let the packaging targets handle the cases where a rid specific asset is required in the package as nuget will likely never support rid specific netframework tfm aliases the distinction between a rid specific and a rid agnostic netframework tfm is unnecessary cc joperezr ericstj | 1 |
733,930 | 25,329,779,117 | IssuesEvent | 2022-11-18 12:19:04 | insightsengineering/scda.2022 | https://api.github.com/repos/insightsengineering/scda.2022 | closed | Remove random.cdisc.data submodule | bug priority sme | 
I then get error when pulling main:

I assume this isn't right @shajoezhu | 1.0 | Remove random.cdisc.data submodule - 
I then get error when pulling main:

I assume this isn't right @shajoezhu | non_infrastructure | remove random cdisc data submodule i then get error when pulling main i assume this isn t right shajoezhu | 0 |
20,425 | 13,912,482,204 | IssuesEvent | 2020-10-20 18:56:27 | dotnet/runtime | https://api.github.com/repos/dotnet/runtime | opened | Build scripts should automatically create configuration-specific libs build log files | area-Infrastructure-libraries | Currently, the libraries build does not create any log files by default (e.g., when building locally, not in the CI system).
Apparently, when the CI system passes the "-ci" argument to the build script, that causes binlog files to be generated to the `artifacts\log\buildConfig` directory, due to it passing the "/bl" flag to msbuild.
Also, apparently, if you pass this manually, subsequent builds will use the same log file names and hence overwrite the previous log files. This is a problem if you build multiple platform/configurations, especially when building using the `build -arch x64,x86,arm,arm64` multi-arch syntax.
This should be improved, by:
1. Always generate log files for all parts of the build (the CoreCLR build already does this; I'm mainly talking about the libs build here)
2. Log files should be named by the configuration (OS, processor, build flavor) so as to not be overwritten by other builds.
@dotnet/runtime-infrastructure | 1.0 | Build scripts should automatically create configuration-specific libs build log files - Currently, the libraries build does not create any log files by default (e.g., when building locally, not in the CI system).
Apparently, when the CI system passes the "-ci" argument to the build script, that causes binlog files to be generated to the `artifacts\log\buildConfig` directory, due to it passing the "/bl" flag to msbuild.
Also, apparently, if you pass this manually, subsequent builds will use the same log file names and hence overwrite the previous log files. This is a problem if you build multiple platform/configurations, especially when building using the `build -arch x64,x86,arm,arm64` multi-arch syntax.
This should be improved, by:
1. Always generate log files for all parts of the build (the CoreCLR build already does this; I'm mainly talking about the libs build here)
2. Log files should be named by the configuration (OS, processor, build flavor) so as to not be overwritten by other builds.
@dotnet/runtime-infrastructure | infrastructure | build scripts should automatically create configuration specific libs build log files currently the libraries build does not create any log files by default e g when building locally not in the ci system apparently when the ci system passes the ci argument to the build script that causes binlog files to be generated to the artifacts log buildconfig directory due to it passing the bl flag to msbuild also apparently if you pass this manually subsequent builds will use the same log file names and hence overwrite the previous log files this is a problem if you build multiple platform configurations especially when building using the build arch arm multi arch syntax this should be improved by always generate log files for all parts of the build the coreclr build already does this i m mainly talking about the libs build here log files should be named by the configuration os processor build flavor so as to not be overwritten by other builds dotnet runtime infrastructure | 1 |
5,114 | 5,444,446,477 | IssuesEvent | 2017-03-07 02:49:46 | Daniel-Mietchen/ideas | https://api.github.com/repos/Daniel-Mietchen/ideas | opened | Any good open-source hashtag trackers? | 5min far-future infrastructure | None of those that I've seen so far are open.
5min to get started. | 1.0 | Any good open-source hashtag trackers? - None of those that I've seen so far are open.
5min to get started. | infrastructure | any good open source hashtag trackers none of those that i ve seen so far are open to get started | 1 |
241,661 | 18,470,301,631 | IssuesEvent | 2021-10-17 16:17:14 | CuboidDroid/cuboidoutpost | https://api.github.com/repos/CuboidDroid/cuboidoutpost | closed | Recipe conflict between Pam's popcorn and corn on the cob. | documentation | **Modpack Version:**
e.g. 0.2.8
**Describe the bug**
Recipe Conflict between popcorn and corn on the cob. | 1.0 | Recipe conflict between Pam's popcorn and corn on the cob. - **Modpack Version:**
e.g. 0.2.8
**Describe the bug**
Recipe Conflict between popcorn and corn on the cob. | non_infrastructure | recipe conflict between pam s popcorn and corn on the cob modpack version e g describe the bug recipe conflict between popcorn and corn on the cob | 0 |
27,956 | 22,642,000,471 | IssuesEvent | 2022-07-01 03:46:06 | iree-org/iree | https://api.github.com/repos/iree-org/iree | closed | Figure out magic flags to build LLVM/MLIR as installable and have our cmake use it | infrastructure | Right now we require an llvm checkout and directly reach into the submodule directory for include paths and tools. We should instead be able to support out-of-tree prebuilt LLVM installs.
Specifically, if we have an LLVM install directory with the appropriate headers, prebuilt shared libraries for ones that we depend on, and the tools we require (namely just mlir-tblgen, I believe) we should be able to use cmake's `find_dependency` if the local in-tree submodule is missing.
@stellaraccident has a discussion on the MLIR discourse here: https://llvm.discourse.group/t/separate-install-target-for-mlir/1005
Whatever that's doing is likely what we want as well. Most of this work is about fixing our cmake files to not assume the presence of in-tree LLVM and the specific structure of the third_party directory. | 1.0 | Figure out magic flags to build LLVM/MLIR as installable and have our cmake use it - Right now we require an llvm checkout and directly reach into the submodule directory for include paths and tools. We should instead be able to support out-of-tree prebuilt LLVM installs.
Specifically, if we have an LLVM install directory with the appropriate headers, prebuilt shared libraries for ones that we depend on, and the tools we require (namely just mlir-tblgen, I believe) we should be able to use cmake's `find_dependency` if the local in-tree submodule is missing.
@stellaraccident has a discussion on the MLIR discourse here: https://llvm.discourse.group/t/separate-install-target-for-mlir/1005
Whatever that's doing is likely what we want as well. Most of this work is about fixing our cmake files to not assume the presence of in-tree LLVM and the specific structure of the third_party directory. | infrastructure | figure out magic flags to build llvm mlir as installable and have our cmake use it right now we require an llvm checkout and directly reach into the submodule directory for include paths and tools we should instead be able to support out of tree prebuilt llvm installs specifically if we have an llvm install directory with the appropriate headers prebuilt shared libraries for ones that we depend on and the tools we require namely just mlir tblgen i believe we should be able to use cmake s find dependency if the local in tree submodule is missing stellaraccident has a discussion on the mlir discourse here whatever that s doing is likely what we want as well most of this work is about fixing our cmake files to not assume the presence of in tree llvm and the specific structure of the third party directory | 1 |
186,193 | 15,050,566,443 | IssuesEvent | 2021-02-03 13:02:42 | usnistgov/ElectionResultsReporting | https://api.github.com/repos/usnistgov/ElectionResultsReporting | closed | "district ballot style" -> "distinct ballot style" typo | bug documentation | The spec PDF (p22/146, section 2.2) currently has:
> It is possible that, despite best efforts, very low numbers of voters or even just one voter will require a district ballot style.
It should probably read:
> It is possible that, despite best efforts, very low numbers of voters or even just one voter will require a distinct ballot style. | 1.0 | "district ballot style" -> "distinct ballot style" typo - The spec PDF (p22/146, section 2.2) currently has:
> It is possible that, despite best efforts, very low numbers of voters or even just one voter will require a district ballot style.
It should probably read:
> It is possible that, despite best efforts, very low numbers of voters or even just one voter will require a distinct ballot style. | non_infrastructure | district ballot style distinct ballot style typo the spec pdf section currently has it is possible that despite best efforts very low numbers of voters or even just one voter will require a district ballot style it should probably read it is possible that despite best efforts very low numbers of voters or even just one voter will require a distinct ballot style | 0 |
94,638 | 27,251,281,540 | IssuesEvent | 2023-02-22 08:18:51 | microsoft/appcenter | https://api.github.com/repos/microsoft/appcenter | closed | Connecting to Self-Hosted Bitbucket | feature request build | **Describe the solution you'd like**
I would like to connect the App Center to a self-hosted Bitbucket account.
**Additional context**
A request was already attempted with #2411 and was closed. This is a second attempt.
Connecting to Self-Hosted bitbucket Instances.
| 1.0 | Connecting to Self-Hosted Bitbucket - **Describe the solution you'd like**
I would like to connect the App Center to a self-hosted Bitbucket account.
**Additional context**
A request was already attempted with #2411 and was closed. This is a second attempt.
Connecting to Self-Hosted bitbucket Instances.
| non_infrastructure | connecting to self hosted bitbucket describe the solution you d like i would like to connect the app center to a self hosted bitbucket account additional context a request was already attempted with and was closed this is a second attempt connecting to self hosted bitbucket instances | 0 |
34,029 | 14,257,838,572 | IssuesEvent | 2020-11-20 04:47:21 | Azure/azure-sdk-for-java | https://api.github.com/repos/Azure/azure-sdk-for-java | opened | SB T2 Rename getAmqpAnnotatedMessage to getRawAmqpMessage in Service bus message | Client Service Bus | In order to keep consistency between all the languages (JS, python, .net) we have decided to rename this.
- Rename getAmqpAnnotatedMessage to getRawAmqpMessage
Change in Following classes
[ ] ServiceBusMessage
[ ] ServiceBusReceivedMessage
| 1.0 | SB T2 Rename getAmqpAnnotatedMessage to getRawAmqpMessage in Service bus message - In order to keep consistency between all the languages (JS, python, .net) we have decided to rename this.
- Rename getAmqpAnnotatedMessage to getRawAmqpMessage
Change in Following classes
[ ] ServiceBusMessage
[ ] ServiceBusReceivedMessage
| non_infrastructure | sb rename getamqpannotatedmessage to getrawamqpmessage in service bus message in order to keep consistency between all the languages js python net we have decided to rename this rename getamqpannotatedmessage to getrawamqpmessage change in following classes servicebusmessage servicebusreceivedmessage | 0 |
8,375 | 7,371,724,752 | IssuesEvent | 2018-03-13 12:46:35 | openshift/origin | https://api.github.com/repos/openshift/origin | opened | Unable to restart service origin-node | area/infrastructure kind/test-flake priority/P1 sig/pod | Seen in https://openshift-gce-devel.appspot.com/build/origin-ci-test/pr-logs/pull/18957/test_pull_request_origin_extended_conformance_install/9007/
```
1. Hosts: localhost
Play: Configure nodes
Task: restart node
Message: Unable to restart service origin-node: Job for origin-node.service failed because the control process exited with error code. See "systemctl status origin-node.service" and "journalctl -xe" for details.
```
Not sure if this failure is related:
```
Configure nodes [localhost] nickhammond.logrotate : nickhammond.logrotate | Setup logrotate.d scripts 21m31s
go run hack/e2e.go -v -test --test_args='--ginkgo.focus=Configure\snodes\s\[localhost\]\snickhammond\.logrotate\s\:\snickhammond\.logrotate\s\|\sSetup\slogrotate\.d\sscripts$'
``` | 1.0 | Unable to restart service origin-node - Seen in https://openshift-gce-devel.appspot.com/build/origin-ci-test/pr-logs/pull/18957/test_pull_request_origin_extended_conformance_install/9007/
```
1. Hosts: localhost
Play: Configure nodes
Task: restart node
Message: Unable to restart service origin-node: Job for origin-node.service failed because the control process exited with error code. See "systemctl status origin-node.service" and "journalctl -xe" for details.
```
Not sure if this failure is related:
```
Configure nodes [localhost] nickhammond.logrotate : nickhammond.logrotate | Setup logrotate.d scripts 21m31s
go run hack/e2e.go -v -test --test_args='--ginkgo.focus=Configure\snodes\s\[localhost\]\snickhammond\.logrotate\s\:\snickhammond\.logrotate\s\|\sSetup\slogrotate\.d\sscripts$'
``` | infrastructure | unable to restart service origin node seen in hosts localhost play configure nodes task restart node message unable to restart service origin node job for origin node service failed because the control process exited with error code see systemctl status origin node service and journalctl xe for details not sure if this failure is related configure nodes nickhammond logrotate nickhammond logrotate setup logrotate d scripts go run hack go v test test args ginkgo focus configure snodes s snickhammond logrotate s snickhammond logrotate s ssetup slogrotate d sscripts | 1 |
266,161 | 20,122,661,921 | IssuesEvent | 2022-02-08 05:15:58 | OpenAstronomy/packaging-guide | https://api.github.com/repos/OpenAstronomy/packaging-guide | opened | Finalize user documentation by filling out "needs writing" placeholders | documentation | Particularly:
* https://github.com/OpenAstronomy/packaging-guide/blob/master/docs/ci.rst
* https://github.com/OpenAstronomy/packaging-guide/blob/master/docs/scripts.rst
Motivation: Astropy cannot reroute people from [package-template](https://github.com/astropy/package-template) to this site if this site appears unfinished.
Blocks:
* astropy/package-template#519 | 1.0 | Finalize user documentation by filling out "needs writing" placeholders - Particularly:
* https://github.com/OpenAstronomy/packaging-guide/blob/master/docs/ci.rst
* https://github.com/OpenAstronomy/packaging-guide/blob/master/docs/scripts.rst
Motivation: Astropy cannot reroute people from [package-template](https://github.com/astropy/package-template) to this site if this site appears unfinished.
Blocks:
* astropy/package-template#519 | non_infrastructure | finalize user documentation by filling out needs writing placeholders particularly motivation astropy cannot reroute people from to this site if this site appears unfinished blocks astropy package template | 0 |
17,695 | 10,758,626,849 | IssuesEvent | 2019-10-31 15:17:11 | opensensorhub/osh-core | https://api.github.com/repos/opensensorhub/osh-core | closed | Add DoS protections | enhancement service | <a href="https://github.com/sensiasoft"><img src="https://avatars.githubusercontent.com/u/9446498?v=3" align="left" width="96" height="96" hspace="10"></img></a> **Issue by [sensiasoft](https://github.com/sensiasoft)**
_Saturday Feb 14, 2015 at 17:43 GMT_
_Originally opened as https://github.com/sensiasoft/sensorhub-core/issues/15_
---
It is currently very easy to overload the server by sending many data streams to it. Maybe some low level protections should be activated even when no security front end is used.
| 1.0 | Add DoS protections - <a href="https://github.com/sensiasoft"><img src="https://avatars.githubusercontent.com/u/9446498?v=3" align="left" width="96" height="96" hspace="10"></img></a> **Issue by [sensiasoft](https://github.com/sensiasoft)**
_Saturday Feb 14, 2015 at 17:43 GMT_
_Originally opened as https://github.com/sensiasoft/sensorhub-core/issues/15_
---
It is currently very easy to overload the server by sending many data streams to it. Maybe some low level protections should be activated even when no security front end is used.
| non_infrastructure | add dos protections issue by saturday feb at gmt originally opened as it is currently very easy to overload the server by sending many data streams to it maybe some low level protections should be activated even when no security front end is used | 0 |
7,710 | 7,056,733,622 | IssuesEvent | 2018-01-04 13:59:48 | Vastra-Gotalandsregionen/komponentkartan | https://api.github.com/repos/Vastra-Gotalandsregionen/komponentkartan | closed | Create a proper library bundle | effort2: medium (days) infrastructure changes P2: Required | ### Description
Create a bundle for the komponentkartan library that is published to NPM
### Type of issue (mark one with "x")
- [ ] Bug
- [X ] Improvement suggestion
### Your environment
### Steps to reproduce the problem
### Expected behavior
A smaller, slimmed-down bundle + typings and CSS
### Actual behavior
Right now, all source code is published to NPM
### Other information, code examples, and motivation for the change
Reduce the size of komponentkartan and make it more library-like
### Screenshots etc.
See https://www.npmjs.com/package/ng-packagr for creating a library bundle | 1.0 | Create a proper library bundle - ### Description
Create a bundle for the komponentkartan library that is published to NPM
### Type of issue (mark one with "x")
- [ ] Bug
- [X ] Improvement suggestion
### Your environment
### Steps to reproduce the problem
### Expected behavior
A smaller, slimmed-down bundle + typings and CSS
### Actual behavior
Right now, all source code is published to NPM
### Other information, code examples, and motivation for the change
Reduce the size of komponentkartan and make it more library-like
### Screenshots etc.
See https://www.npmjs.com/package/ng-packagr for creating a library bundle | infrastructure | skapa riktig library bundle beskrivning skapa en bundle för komponentkartan biblioteket som publiceras till npm typ av ärende kryssa i en med x bugg förbättringsförslag din miljö steg för att påvisa problemet förväntat beteende mindre slimmat bundle typings och css faktiskt beteende just nu så publiceras all källkod till npm övrig information kodexempel och motivering till förändring få ner storleken på komponentkartan och göra den mer library aktig skärmdumpar mm se för skapa en library bundle | 1
338,976 | 30,333,653,205 | IssuesEvent | 2023-07-11 08:12:35 | etcd-io/etcd | https://api.github.com/repos/etcd-io/etcd | closed | Run arm64 integration and e2e workflows against a supported release branch | area/testing help wanted type/feature | ### What would you like to be added?
There have been recent efforts to resolve issues with arm64 tests and improve overall support for arm64, refer:
- https://github.com/etcd-io/etcd/pull/15829
- https://github.com/etcd-io/etcd/pull/15233
- https://github.com/etcd-io/etcd/pull/15230
These tests have been running more reliably now:
- https://github.com/etcd-io/etcd/actions/workflows/tests-arm64.yaml
- https://github.com/etcd-io/etcd/actions/workflows/e2e-arm64.yaml
With that in mind, should we run these workflows against `release-3.5` to get them running against a stable release branch?
cc @ahrtr, @serathius, @geetasg, @chaochn47
### Why is this needed?
There have been several discussions and questions lately on improving the tier of support for arm64. I believe this would be a required step toward progressing that? | 1.0 | Run arm64 integration and e2e workflows against a supported release branch - ### What would you like to be added?
There have been recent efforts to resolve issues with arm64 tests and improve overall support for arm64, refer:
- https://github.com/etcd-io/etcd/pull/15829
- https://github.com/etcd-io/etcd/pull/15233
- https://github.com/etcd-io/etcd/pull/15230
These tests have been running more reliably now:
- https://github.com/etcd-io/etcd/actions/workflows/tests-arm64.yaml
- https://github.com/etcd-io/etcd/actions/workflows/e2e-arm64.yaml
With that in mind, should we run these workflows against `release-3.5` to get them running against a stable release branch?
cc @ahrtr, @serathius, @geetasg, @chaochn47
### Why is this needed?
There have been several discussions and questions lately on improving the tier of support for arm64. I believe this would be a required step toward progressing that? | non_infrastructure | run integration and workflows against a supported release branch what would you like to be added there have been recent efforts to resolve issues with tests and improve overall support for refer these tests have been running more reliably now with that in mind should we run these workflows against release to get them running against a stable release branch cc ahrtr serathius geetasg why is this needed there have been several discussions and questions lately on improving the tier of support for i believe this would be a required step toward progressing that | 0
14,881 | 11,212,229,899 | IssuesEvent | 2020-01-06 17:05:58 | patternfly/patternfly-react | https://api.github.com/repos/patternfly/patternfly-react | opened | Prop descriptions are missing from the docs | documentation :memo: infrastructure | **Describe the issue. What is the expected and unexpected behavior?**
**Please provide the steps to reproduce. Feel free to link CodeSandbox or another tool.**
<!-- PatternFly-React Codesandbox template: https://codesandbox.io/s/recursing-khorana-kmind -->
**Is this a bug or enhancement? If this issue is a bug, is this issue blocking you or is there a work-around?**
**What is your product and what release version are you targeting?**
| 1.0 | Prop descriptions are missing from the docs - **Describe the issue. What is the expected and unexpected behavior?**
**Please provide the steps to reproduce. Feel free to link CodeSandbox or another tool.**
<!-- PatternFly-React Codesandbox template: https://codesandbox.io/s/recursing-khorana-kmind -->
**Is this a bug or enhancement? If this issue is a bug, is this issue blocking you or is there a work-around?**
**What is your product and what release version are you targeting?**
| infrastructure | prop descriptions are missing from the docs describe the issue what is the expected and unexpected behavior please provide the steps to reproduce feel free to link codesandbox or another tool is this a bug or enhancement if this issue is a bug is this issue blocking you or is there a work around what is your product and what release version are you targeting | 1 |
1,001 | 3,286,392,682 | IssuesEvent | 2015-10-29 02:12:11 | lucasangelon/centralwayfinderios | https://api.github.com/repos/lucasangelon/centralwayfinderios | opened | Create Map Options Menu | requirement | On the Maps View Controller, containing the following options:
- Walking / Driving
- Map View Options | 1.0 | Create Map Options Menu - On the Maps View Controller, containing the following options:
- Walking / Driving
- Map View Options | non_infrastructure | create map options menu on the maps view controller containing the following options walking driving map view options | 0 |
35,012 | 30,679,441,645 | IssuesEvent | 2023-07-26 08:10:28 | arduino/arduino-ide | https://api.github.com/repos/arduino/arduino-ide | closed | Redundant localization data files | topic: infrastructure type: imperfection | ### Describe the problem
Arduino IDE has been [localized](https://en.wikipedia.org/wiki/Internationalization_and_localization) to several languages thanks to the amazing [contributions of translations by the community](https://github.com/arduino/arduino-ide/blob/main/docs/contributor-guide/translation.md).
The localization data is stored in the files under the [`i18n` folder](https://github.com/arduino/arduino-ide/tree/main/i18n) of the repository. There is a separate file for each of the locales that have been added to [the "**Arduino IDE 2.0**" project](https://explore.transifex.com/arduino-1/ide2/) on the **Transifex** localization platform. I notice that there appear to be multiple files for equivalent locales:
- [`ca.json`](https://github.com/arduino/arduino-ide/blob/main/i18n/ca.json) / [`ca_ES.json`](https://github.com/arduino/arduino-ide/blob/main/i18n/ca_ES.json)
- [`my.json`](https://github.com/arduino/arduino-ide/blob/main/i18n/my.json) / [`my_MM.json`](https://github.com/arduino/arduino-ide/blob/main/i18n/my_MM.json)
- [`sv.json`](https://github.com/arduino/arduino-ide/blob/main/i18n/sv.json) / [`sv_SE.json`](https://github.com/arduino/arduino-ide/blob/main/i18n/sv_SE.json)
- [`uk.json`](https://github.com/arduino/arduino-ide/blob/main/i18n/uk.json) / [`uk_UA.json`](https://github.com/arduino/arduino-ide/blob/main/i18n/uk_UA.json)
🐛 The redundant files add unnecessary complication to the repository.
🐛 The localization configuration might accidentally be configured to use the unmaintained file instead of the one that is active on Transifex.
### To reproduce
1. Open the [`i18n` folder](https://github.com/arduino/arduino-ide/tree/main/i18n) of the repository.
1. Compare the locale codes from the filenames against the names of the languages they are associated with on **Transifex**.
**ⓘ** The **Transifex** website is terrible, but you can manage to see them by logging in to your Transifex account and then searching for the language name (NOT language code) in the "**Select Language**" menu on this page:
https://app.transifex.com/join/?o=arduino-1&p=ide2

🐛 Multiple data files are present for a single language.
### Expected behavior
There is only one data file for each locale.
### Arduino IDE version
f6a43254f5c416a2e4fa888875358336b42dd4d5
### Operating system
N/A
### Operating system version
N/A
### Issue checklist
- [X] I searched for previous reports in [the issue tracker](https://github.com/arduino/arduino-ide/issues?q=)
- [X] I verified the problem still occurs when using the latest [nightly build](https://www.arduino.cc/en/software#nightly-builds)
- [X] My report contains all necessary details | 1.0 | Redundant localization data files - ### Describe the problem
Arduino IDE has been [localized](https://en.wikipedia.org/wiki/Internationalization_and_localization) to several languages thanks to the amazing [contributions of translations by the community](https://github.com/arduino/arduino-ide/blob/main/docs/contributor-guide/translation.md).
The localization data is stored in the files under the [`i18n` folder](https://github.com/arduino/arduino-ide/tree/main/i18n) of the repository. There is a separate file for each of the locales that have been added to [the "**Arduino IDE 2.0**" project](https://explore.transifex.com/arduino-1/ide2/) on the **Transifex** localization platform. I notice that there appear to be multiple files for equivalent locales:
- [`ca.json`](https://github.com/arduino/arduino-ide/blob/main/i18n/ca.json) / [`ca_ES.json`](https://github.com/arduino/arduino-ide/blob/main/i18n/ca_ES.json)
- [`my.json`](https://github.com/arduino/arduino-ide/blob/main/i18n/my.json) / [`my_MM.json`](https://github.com/arduino/arduino-ide/blob/main/i18n/my_MM.json)
- [`sv.json`](https://github.com/arduino/arduino-ide/blob/main/i18n/sv.json) / [`sv_SE.json`](https://github.com/arduino/arduino-ide/blob/main/i18n/sv_SE.json)
- [`uk.json`](https://github.com/arduino/arduino-ide/blob/main/i18n/uk.json) / [`uk_UA.json`](https://github.com/arduino/arduino-ide/blob/main/i18n/uk_UA.json)
🐛 The redundant files add unnecessary complication to the repository.
🐛 The localization configuration might accidentally be configured to use the unmaintained file instead of the one that is active on Transifex.
### To reproduce
1. Open the [`i18n` folder](https://github.com/arduino/arduino-ide/tree/main/i18n) of the repository.
1. Compare the locale codes from the filenames against the names of the languages they are associated with on **Transifex**.
**ⓘ** The **Transifex** website is terrible, but you can manage to see them by logging in to your Transifex account and then searching for the language name (NOT language code) in the "**Select Language**" menu on this page:
https://app.transifex.com/join/?o=arduino-1&p=ide2

🐛 Multiple data files are present for a single language.
### Expected behavior
There is only one data file for each locale.
### Arduino IDE version
f6a43254f5c416a2e4fa888875358336b42dd4d5
### Operating system
N/A
### Operating system version
N/A
### Issue checklist
- [X] I searched for previous reports in [the issue tracker](https://github.com/arduino/arduino-ide/issues?q=)
- [X] I verified the problem still occurs when using the latest [nightly build](https://www.arduino.cc/en/software#nightly-builds)
- [X] My report contains all necessary details | infrastructure | redundant localization data files describe the problem arduino ide has been to several languages thanks to the amazing the localization data is stored in the files under the of the repository there is a separate file for each of the locales that have been added to on the transifex localization platform i notice that there appear to be multiple files for equivalent locales 🐛 the redundant files add unnecessary complication to the repository 🐛 the localization configuration might accidentally be configured to use the unmaintained file instead of the one that is active on transifex to reproduce open the of the repository compare the locale codes from the filenames against the names of the languages they are associate with on transifex ⓘ the transifex website is terrible but you can manage to see them by logging in to your transifex account and then searching for the language name not language code in the select language menu on this page 🐛 multiple data files are present for a single language expected behavior there is only one data file for each locale arduino ide version operating system n a operating system version n a issue checklist i searched for previous reports in i verified the problem still occurs when using the latest my report contains all necessary details | 1 |
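The redundant-locale pairs listed in the issue above follow a simple pattern (a bare language code plus a regional variant of the same language), so they could be flagged mechanically, for example in a CI check. The sketch below is illustrative only and is not part of the Arduino IDE tooling; the file list is a hypothetical stand-in for the contents of the `i18n` folder.

```python
from collections import defaultdict

def find_redundant_locales(filenames):
    """Group locale JSON files by base language code and report any language
    that has both a bare code (ca.json) and a regional variant (ca_ES.json)."""
    by_base = defaultdict(list)
    for name in filenames:
        if not name.endswith(".json"):
            continue
        # "ca_ES" -> base language code "ca"
        base = name[: -len(".json")].split("_")[0]
        by_base[base].append(name)
    # Only languages with more than one data file are redundant.
    return {base: sorted(files) for base, files in by_base.items() if len(files) > 1}

# Hypothetical stand-in for the contents of the i18n/ folder.
i18n_files = ["ca.json", "ca_ES.json", "de.json", "my.json", "my_MM.json",
              "sv.json", "sv_SE.json", "uk.json", "uk_UA.json"]
print(find_redundant_locales(i18n_files))
```

A check like this would catch the four pairs reported in the issue while leaving single-file locales such as `de.json` alone.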
13,440 | 10,261,242,744 | IssuesEvent | 2019-08-22 09:24:10 | ampproject/amp-wp | https://api.github.com/repos/ampproject/amp-wp | opened | Add functional tests for CLI commands | [Integration] WP-CLI [Type] Infrastructure | The obvious choice would be reusing `wp-cli/wp-cli-tests` to write Behat tests, but there seems to be a blocking technical issue so far that doesn't allow this without resorting to a hack.
The main issue is that you need to install the plugin within the WordPress installation that Behat will set up. To do so, you use a built plugin version and send it over for installation as a ZIP archive.
However, this built version will contain the same Composer autoloader (with the same hash included in the class name) as the Composer autoloader that Behat was actually run from. This causes a redeclaration error from PHP when the WordPress installation tries to load the plugin.
I can think of two possible solutions right now to solve this:
1.) Rebuild a fresh ZIP archive with a regenerated autoloader that will have a different hash than the one Behat was loaded from.
2.) Avoid using Behat directly as a Composer dependency and use a binary/Phar instead, that comes with its own autoloader.
Before trying to tackle this, I'd like to investigate this from the WP-CLI side, to see whether this can be streamlined through the official package first.
Related #3056 | 1.0 | Add functional tests for CLI commands - The obvious choice would be reusing `wp-cli/wp-cli-tests` to write Behat tests, but there seems to be a blocking technical issue so far that doesn't allow this without resorting to a hack.
The main issue is that you need to install the plugin within the WordPress installation that Behat will set up. To do so, you use a built plugin version and send it over for installation as a ZIP archive.
However, this built version will contain the same Composer autoloader (with the same hash included in the class name) as the Composer autoloader that Behat was actually run from. This causes a redeclaration error from PHP when the WordPress installation tries to load the plugin.
I can think of two possible solutions right now to solve this:
1.) Rebuild a fresh ZIP archive with a regenerated autoloader that will have a different hash than the one Behat was loaded from.
2.) Avoid using Behat directly as a Composer dependency and use a binary/Phar instead, that comes with its own autoloader.
Before trying to tackle this, I'd like to investigate this from the WP-CLI side, to see whether this can be streamlined through the official package first.
Related #3056 | infrastructure | add functional tests for cli commands the obvious choice would be reusing wp cli wp cli tests to write behat tests but there seems to be a blocking technical issue so far that doesn t allow this without resorting to a hack the main issue is that you need to install the plugin within the wordpress installation that behat will set up to do so you use a built plugin version and send it over for installation as a zip archive however this built version will contain the same composer autoloader with the same hash included in the class name than the composer autoloader that behat was actually run from this causes a redeclaration error from php when the wordpress installation tries to load the plugin i can think of two possible solutions right now to solve this rebuild a fresh zip archive with a regenerated autoloader that will have a different hash than the one behat was loaded from avoid using behat directly as a composer dependency and use a binary phar instead that comes with its own autoloader before trying to tackle this i d like to investigate this from the wp cli side to see whether this can be streamlined through the official package first related | 1 |
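The clash described in the issue above comes from Composer naming its generated autoloader class with a content hash (`ComposerAutoloaderInit<hash>`), so a plugin ZIP built from the same vendor tree as the running test harness declares the same class a second time. A toy Python model of that failure mode and of solution 1 (regenerating the autoloader so it gets a fresh hash), assuming nothing about the actual amp-wp build scripts:

```python
# Toy model of PHP's "cannot redeclare class" failure: declaring the same
# hash-suffixed autoloader class twice fails, while a regenerated hash is fine.
class ClassTable:
    def __init__(self):
        self.declared = set()

    def declare(self, name):
        if name in self.declared:
            raise RuntimeError(f"Cannot redeclare class {name}")
        self.declared.add(name)

php = ClassTable()
php.declare("ComposerAutoloaderInit" + "aaaa1111")  # Behat's own autoloader

try:
    # Plugin ZIP built from the same vendor tree: identical hash, so it clashes.
    php.declare("ComposerAutoloaderInit" + "aaaa1111")
except RuntimeError as err:
    print(err)

# Solution 1 from the issue: rebuild the ZIP with a regenerated autoloader,
# which gets a fresh hash and therefore a distinct class name.
php.declare("ComposerAutoloaderInit" + "bbbb2222")
```

Solution 2 (running Behat from a Phar with its own autoloader) avoids the collision the same way: the two autoloader class names never coincide.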
13,260 | 10,170,875,910 | IssuesEvent | 2019-08-08 06:54:30 | elastic/beats | https://api.github.com/repos/elastic/beats | closed | [Metricbeat] Cronjob support | :infrastructure Metricbeat [zube]: In Progress containers module | **Describe the enhancement:**
As [discussed](https://discuss.elastic.co/t/kubernetes-metrics-cronjobs/180377) on the board, it seems like Metricbeat doesn't support Kubernetes Cronjob metrics, i.e. when cronjobs create pods, those pods' metrics are not monitored and so there is no data about those pods sent over to ES. We use Cronjobs in our clusters often and we could really use the metrics for those pods.
**Describe a specific use case for the enhancement or feature:**
Monitoring CPU and Memory usage of Cronjobs to optimize their hardware/software. | 1.0 | [Metricbeat] Cronjob support - **Describe the enhancement:**
As [discussed](https://discuss.elastic.co/t/kubernetes-metrics-cronjobs/180377) on the board, it seems like Metricbeat doesn't support Kubernetes Cronjob metrics, i.e. when cronjobs create pods, those pods' metrics are not monitored and so there is no data about those pods sent over to ES. We use Cronjobs in our clusters often and we could really use the metrics for those pods.
**Describe a specific use case for the enhancement or feature:**
Monitoring CPU and Memory usage of Cronjobs to optimize their hardware/software. | infrastructure | cronjob support describe the enhancement as on the board it seems like metricbeat doesn t support kubernetes cronjob metrics i e when cronjobs create pods those pods metrics are not monitored and so there is no data about those pods sent over to es we use cronjobs in our clusters often and we could really use the metrics for those pods describe a specific use case for the enhancement or feature monitoring cpu and memory usage of cronjobs to optimize their hardware software | 1 |
10,351 | 8,514,061,029 | IssuesEvent | 2018-10-31 17:33:20 | OpenLiberty/open-liberty | https://api.github.com/repos/OpenLiberty/open-liberty | opened | Enhance Liberty Server config backups for FAT | in:Test Infrastructure team:Security SSO | Update the Liberty Server FAT tooling to save copies of all expanded files for individual test cases. Right now, only the last one is saved.
This will involve updates to reconfigureServerUsingExpandedConfiguration in LibertyServer.java. | 1.0 | Enhance Liberty Server config backups for FAT - Update the Liberty Server FAT tooling to save copies of all expanded files for individual test cases. Right now, only the last one is saved.
This will involve updates to reconfigureServerUsingExpandedConfiguration in LibertyServer.java. | infrastructure | enhance liberty server config backups for fat update the liberty server fat tooling to save copies of all expanded files for individual test cases right now only the last one is saved this will involve updates to reconfigureserverusingexpandedconfiguration in libertyserver java | 1 |
267,573 | 23,306,829,565 | IssuesEvent | 2022-08-08 02:35:19 | apache/pulsar | https://api.github.com/repos/apache/pulsar | closed | Multi-cluster geo-replication problems when the broker has functions enabled | component/test flaky-tests lifecycle/stale Stale | Pulsar cluster 1: 10.66.107.31/32/33, with one broker instance and one bookie instance on each server
Local ZooKeeper cluster: 10.66.107.34:2181,10.66.107.34:2182,10.66.107.34:2183
Pulsar cluster 2: 10.66.107.37/38/39, with one broker instance and one bookie instance on each server
Local ZooKeeper cluster: 10.66.107.36:2181,10.66.107.36:2182,10.66.107.36:2183
Shared configuration store ZooKeeper cluster: 10.66.107.35:2181,10.66.107.35:2182,10.66.107.35:2183
Cluster 1 configuration (using .31 as an example):
broker.conf:
zookeeperServers=10.66.107.34:2181,10.66.107.34:2182,10.66.107.34:2183
configurationStoreServers=10.66.107.35:2181,10.66.107.35:2182,10.66.107.35:2183
brokerServicePortTls=6651
webServicePortTls=8443
advertisedAddress=10.66.107.31
clusterName=pulsar-cluster-1
functionsWorkerEnabled=true
bookkeeper.conf
advertisedAddress=10.66.107.31
bookieId=31
zkServers=10.66.107.34:2181,10.66.107.34:2182,10.66.107.34:2183
httpServerEnabled=true
functions_worker.yml
workerId: 31
workerHostname: 10.66.107.31
configurationStoreServers: 10.66.107.34:2181
pulsarFunctionsCluster: pulsar-cluster-1
stateStorageServiceUrl: bk://localhost:4181
pulsarFunctionsNamespace: public/functions1
Cluster 2 configuration (using .37 as an example):
broker.conf
zookeeperServers=10.66.107.36:2181,10.66.107.36:2182,10.66.107.36:2183
configurationStoreServers=10.66.107.35:2181,10.66.107.35:2182,10.66.107.35:2183
brokerServicePortTls=6651
webServicePortTls=8443
advertisedAddress=10.66.107.37
clusterName=pulsar-cluster-2
functionsWorkerEnabled=true
bookkeeper.conf
advertisedAddress=10.66.107.37
bookieId=37
zkServers=10.66.107.36:2181,10.66.107.36:2182,10.66.107.36:2183
httpServerEnabled=true
functions_worker.yml
workerId: 37
workerHostname: 10.66.107.37
configurationStoreServers: 10.66.107.36:2181
pulsarFunctionsCluster: pulsar-cluster-2
stateStorageServiceUrl: bk://localhost:4181
pulsarFunctionsNamespace: public/functions2
Question 1:
functionsWorkerEnabled=true is set in all six broker.conf files across the two clusters.
The bookies and brokers of the first cluster start normally, and the three bookies of the second cluster start normally, but its three brokers fail on startup with the error below (they start normally after functionsWorkerEnabled is changed to false everywhere):
15:17:55.899 [ForkJoinPool.commonPool-worker-1] WARN org.apache.pulsar.broker.web.PulsarWebResource - Namespace missing local cluster name in clusters list: local_cluster=pulsar-cluster-2 ns=public/functions clusters=[pulsar-cluster-1]
15:17:55.924 [pulsar-web-40-15] INFO org.eclipse.jetty.server.RequestLog - 10.66.107.37 - - [26/Jan/2022:15:17:55 +0800] "PUT /admin/v2/persistent/public/functions/assignments HTTP/1.1" 412 60 "-" "Pulsar-Java-v2.8.0" 139
15:17:55.933 [AsyncHttpClient-57-1] WARN org.apache.pulsar.client.admin.internal.BaseResource - [http://10.66.107.37:8080/admin/v2/persistent/public/functions/assignments] Failed to perform http put request: javax.ws.rs.ClientErrorException: HTTP 412 Precondition Failed
15:17:55.944 [main] ERROR org.apache.pulsar.functions.worker.PulsarWorkerService - Error Starting up in worker
org.apache.pulsar.client.admin.PulsarAdminException$PreconditionFailedException: Namespace does not have any clusters configured
at org.apache.pulsar.client.admin.internal.BaseResource.getApiException(BaseResource.java:236) ~[org.apache.pulsar-pulsar-client-admin-original-2.8.0.jar:2.8.0]
at org.apache.pulsar.client.admin.internal.BaseResource$1.failed(BaseResource.java:130) ~[org.apache.pulsar-pulsar-client-admin-original-2.8.0.jar:2.8.0]
at org.glassfish.jersey.client.JerseyInvocation$1.failed(JerseyInvocation.java:882) ~[org.glassfish.jersey.core-jersey-client-2.34.jar:?]
at org.glassfish.jersey.client.JerseyInvocation$1.completed(JerseyInvocation.java:863) ~[org.glassfish.jersey.core-jersey-client-2.34.jar:?]
at org.glassfish.jersey.client.ClientRuntime.processResponse(ClientRuntime.java:229) ~[org.glassfish.jersey.core-jersey-client-2.34.jar:?]
at org.glassfish.jersey.client.ClientRuntime.access$200(ClientRuntime.java:62) ~[org.glassfish.jersey.core-jersey-client-2.34.jar:?]
at org.glassfish.jersey.client.ClientRuntime$2.lambda$response$0(ClientRuntime.java:173) ~[org.glassfish.jersey.core-jersey-client-2.34.jar:?]
at org.glassfish.jersey.internal.Errors$1.call(Errors.java:248) ~[org.glassfish.jersey.core-jersey-common-2.34.jar:?]
at org.glassfish.jersey.internal.Errors$1.call(Errors.java:244) ~[org.glassfish.jersey.core-jersey-common-2.34.jar:?]
at org.glassfish.jersey.internal.Errors.process(Errors.java:292) ~[org.glassfish.jersey.core-jersey-common-2.34.jar:?]
at org.glassfish.jersey.internal.Errors.process(Errors.java:274) ~[org.glassfish.jersey.core-jersey-common-2.34.jar:?]
at org.glassfish.jersey.internal.Errors.process(Errors.java:244) ~[org.glassfish.jersey.core-jersey-common-2.34.jar:?]
at org.glassfish.jersey.process.internal.RequestScope.runInScope(RequestScope.java:288) ~[org.glassfish.jersey.core-jersey-common-2.34.jar:?]
at org.glassfish.jersey.client.ClientRuntime$2.response(ClientRuntime.java:173) ~[org.glassfish.jersey.core-jersey-client-2.34.jar:?]
at org.apache.pulsar.client.admin.internal.http.AsyncHttpConnector.lambda$apply$1(AsyncHttpConnector.java:212) ~[org.apache.pulsar-pulsar-client-admin-original-2.8.0.jar:2.8.0]
at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:760) ~[?:1.8.0_131]
at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:736) ~[?:1.8.0_131]
at java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:474) ~[?:1.8.0_131]
at java.util.concurrent.CompletableFuture.complete(CompletableFuture.java:1962) ~[?:1.8.0_131]
at org.apache.pulsar.client.admin.internal.http.AsyncHttpConnector.lambda$retryOperation$4(AsyncHttpConnector.java:254) ~[org.apache.pulsar-pulsar-client-admin-original-2.8.0.jar:2.8.0]
at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:760) ~[?:1.8.0_131]
at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:736) ~[?:1.8.0_131]
at java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:474) ~[?:1.8.0_131]
at java.util.concurrent.CompletableFuture.complete(CompletableFuture.java:1962) ~[?:1.8.0_131]
at org.asynchttpclient.netty.NettyResponseFuture.loadContent(NettyResponseFuture.java:222) ~[org.asynchttpclient-async-http-client-2.12.1.jar:?]
at org.asynchttpclient.netty.NettyResponseFuture.done(NettyResponseFuture.java:257) ~[org.asynchttpclient-async-http-client-2.12.1.jar:?]
at org.asynchttpclient.netty.handler.AsyncHttpClientHandler.finishUpdate(AsyncHttpClientHandler.java:241) ~[org.asynchttpclient-async-http-client-2.12.1.jar:?]
at org.asynchttpclient.netty.handler.HttpHandler.handleChunk(HttpHandler.java:114) ~[org.asynchttpclient-async-http-client-2.12.1.jar:?]
at org.asynchttpclient.netty.handler.HttpHandler.handleRead(HttpHandler.java:143) ~[org.asynchttpclient-async-http-client-2.12.1.jar:?]
at org.asynchttpclient.netty.handler.AsyncHttpClientHandler.channelRead(AsyncHttpClientHandler.java:78) ~[org.asynchttpclient-async-http-client-2.12.1.jar:?]
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) ~[io.netty-netty-transport-4.1.63.Final.jar:4.1.63.Final]
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) ~[io.netty-netty-transport-4.1.63.Final.jar:4.1.63.Final]
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357) ~[io.netty-netty-transport-4.1.63.Final.jar:4.1.63.Final]
at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:103) ~[io.netty-netty-codec-4.1.63.Final.jar:4.1.63.Final]
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) ~[io.netty-netty-transport-4.1.63.Final.jar:4.1.63.Final]
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) ~[io.netty-netty-transport-4.1.63.Final.jar:4.1.63.Final]
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357) ~[io.netty-netty-transport-4.1.63.Final.jar:4.1.63.Final]
at io.netty.channel.CombinedChannelDuplexHandler$DelegatingChannelHandlerContext.fireChannelRead(CombinedChannelDuplexHandler.java:436) ~[io.netty-netty-transport-4.1.63.Final.jar:4.1.63.Final]
at io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:324) ~[io.netty-netty-codec-4.1.63.Final.jar:4.1.63.Final]
at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:296) ~[io.netty-netty-codec-4.1.63.Final.jar:4.1.63.Final]
at io.netty.channel.CombinedChannelDuplexHandler.channelRead(CombinedChannelDuplexHandler.java:251) ~[io.netty-netty-transport-4.1.63.Final.jar:4.1.63.Final]
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) ~[io.netty-netty-transport-4.1.63.Final.jar:4.1.63.Final]
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) ~[io.netty-netty-transport-4.1.63.Final.jar:4.1.63.Final]
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357) ~[io.netty-netty-transport-4.1.63.Final.jar:4.1.63.Final]
at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) ~[io.netty-netty-transport-4.1.63.Final.jar:4.1.63.Final]
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) ~[io.netty-netty-transport-4.1.63.Final.jar:4.1.63.Final]
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) ~[io.netty-netty-transport-4.1.63.Final.jar:4.1.63.Final]
at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) ~[io.netty-netty-transport-4.1.63.Final.jar:4.1.63.Final]
at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) ~[io.netty-netty-transport-4.1.63.Final.jar:4.1.63.Final]
at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:719) ~[io.netty-netty-transport-4.1.63.Final.jar:4.1.63.Final]
at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:655) ~[io.netty-netty-transport-4.1.63.Final.jar:4.1.63.Final]
at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:581) ~[io.netty-netty-transport-4.1.63.Final.jar:4.1.63.Final]
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:493) ~[io.netty-netty-transport-4.1.63.Final.jar:4.1.63.Final]
at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) ~[io.netty-netty-common-4.1.63.Final.jar:4.1.63.Final]
at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) ~[io.netty-netty-common-4.1.63.Final.jar:4.1.63.Final]
at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ~[io.netty-netty-common-4.1.63.Final.jar:4.1.63.Final]
at java.lang.Thread.run(Thread.java:748) ~[?:1.8.0_131]
Caused by: javax.ws.rs.ClientErrorException: HTTP 412 Precondition Failed
at org.glassfish.jersey.client.JerseyInvocation.createExceptionForFamily(JerseyInvocation.java:985) ~[org.glassfish.jersey.core-jersey-client-2.34.jar:?]
at org.glassfish.jersey.client.JerseyInvocation.convertToException(JerseyInvocation.java:967) ~[org.glassfish.jersey.core-jersey-client-2.34.jar:?]
at org.glassfish.jersey.client.JerseyInvocation.access$700(JerseyInvocation.java:82) ~[org.glassfish.jersey.core-jersey-client-2.34.jar:?]
... 54 more
Question 2:
Everything starts normally after functionsWorkerEnabled is changed to false everywhere, but when the following command is run on a node of cluster 1 to configure geo-replication from pulsar-cluster-1 to pulsar-cluster-2:
bin/pulsar-admin clusters create \
--broker-url pulsar://10.66.107.37:6650,10.66.107.38:6650,10.66.107.39:6650 \
--url http://10.66.107.37:8080,10.66.107.38:8080,10.66.107.39:8080 \
pulsar-cluster-2
an error is reported:
22:26:57.207 [AsyncHttpClient-7-1] WARN org.apache.pulsar.client.admin.internal.BaseResource - [http://10.66.107.32:8080/admin/v2/clusters/pulsar-cluster-2] Failed to perform http put request: javax.ws.rs.ClientErrorException: HTTP 409 Conflict
Cluster already exists
Reason: Cluster already exists
Does completing the setup with the steps above automatically give fully connected bidirectional replication? If so, how should one-way replication and failover modes be set up?
Question 3:
# Test results:
1. Send messages in both clusters from the command line: consume on cluster 1 first; after consumption finishes, connect to cluster 2 and restart the Java consumer. No messages were consumed twice;
2. Send messages in both clusters from the command line: consume on cluster 2 first; after consumption finishes, connect to cluster 1 and restart the Java consumer. Some messages were consumed twice;
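One likely explanation for the HTTP 409 in Question 2, stated here as an assumption since the issue does not confirm it: both clusters share the same configuration store ZooKeeper (10.66.107.35), so pulsar-cluster-2 was already registered there when that cluster was initialized, and `clusters create` refuses to overwrite an existing entry. A toy Python model of that create-vs-update distinction:

```python
# Minimal sketch of a create-only metadata registry: "create" on an existing
# name conflicts (analogous to HTTP 409), while "update" modifies it in place.
class ClusterRegistry:
    def __init__(self):
        self._clusters = {}

    def create(self, name, service_url):
        if name in self._clusters:
            raise ValueError(f"409 Conflict: cluster {name!r} already exists")
        self._clusters[name] = service_url

    def update(self, name, service_url):
        if name not in self._clusters:
            raise KeyError(name)
        self._clusters[name] = service_url

registry = ClusterRegistry()
registry.create("pulsar-cluster-2", "pulsar://10.66.107.37:6650")
try:
    # Re-registering a cluster that the shared config store already knows about.
    registry.create("pulsar-cluster-2", "pulsar://10.66.107.37:6650")
except ValueError as err:
    print(err)
registry.update("pulsar-cluster-2", "pulsar://10.66.107.38:6650")
```

In Pulsar terms, changing an existing cluster's metadata would be `pulsar-admin clusters update` rather than `create`; whether that is appropriate here depends on the intended replication topology.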
| 2.0 | Multi-cluster geo-replication problems when the broker has functions enabled - Pulsar cluster 1: 10.66.107.31/32/33, with one broker instance and one bookie instance on each server
Local ZooKeeper cluster: 10.66.107.34:2181,10.66.107.34:2182,10.66.107.34:2183
Pulsar cluster 2: 10.66.107.37/38/39, with one broker instance and one bookie instance on each server
Local ZooKeeper cluster: 10.66.107.36:2181,10.66.107.36:2182,10.66.107.36:2183
Shared configuration store ZooKeeper cluster: 10.66.107.35:2181,10.66.107.35:2182,10.66.107.35:2183
Cluster 1 configuration (using .31 as an example):
broker.conf:
zookeeperServers=10.66.107.34:2181,10.66.107.34:2182,10.66.107.34:2183
configurationStoreServers=10.66.107.35:2181,10.66.107.35:2182,10.66.107.35:2183
brokerServicePortTls=6651
webServicePortTls=8443
advertisedAddress=10.66.107.31
clusterName=pulsar-cluster-1
functionsWorkerEnabled=true
bookkeeper.conf
advertisedAddress=10.66.107.31
bookieId=31
zkServers=10.66.107.34:2181,10.66.107.34:2182,10.66.107.34:2183
httpServerEnabled=true
functions_worker.yml
workerId: 31
workerHostname: 10.66.107.31
configurationStoreServers: 10.66.107.34:2181
pulsarFunctionsCluster: pulsar-cluster-1
stateStorageServiceUrl: bk://localhost:4181
pulsarFunctionsNamespace: public/functions1
集群2的配置如下(以37为例):
broker.conf
zookeeperServers=10.66.107.36:2181,10.66.107.36:2182,10.66.107.36:2183
configurationStoreServers=10.66.107.35:2181,10.66.107.35:2182,10.66.107.35:2183
brokerServicePortTls=6651
webServicePortTls=8443
advertisedAddress=10.66.107.37
clusterName=pulsar-cluster-2
functionsWorkerEnabled=true
bookkeeper.conf
advertisedAddress=10.66.107.37
bookieId=37
zkServers=10.66.107.36:2181,10.66.107.36:2182,10.66.107.36:2183
httpServerEnabled=true
functions_worker.yml
workerId: 37
workerHostname: 10.66.107.37
configurationStoreServers: 10.66.107.36:2181
pulsarFunctionsCluster: pulsar-cluster-2
stateStorageServiceUrl: bk://localhost:4181
pulsarFunctionsNamespace: public/functions2
问题1:
两个集群一共6个broker.conf中的functionsWorkerEnabled=true。
第一个集群bookie、broker启动正常,第二个集群的三个bookie启动正常,但是三个broker启动报错(后将functionsWorkerEnabled全部改为false后正常):
15:17:55.899 [ForkJoinPool.commonPool-worker-1] WARN org.apache.pulsar.broker.web.PulsarWebResource - Namespace missing local cluster name in clusters list: local_cluster=pulsar-cluster-2 ns=public/functions clusters=[pulsar-cluster-1]
15:17:55.924 [pulsar-web-40-15] INFO org.eclipse.jetty.server.RequestLog - 10.66.107.37 - - [26/Jan/2022:15:17:55 +0800] "PUT /admin/v2/persistent/public/functions/assignments HTTP/1.1" 412 60 "-" "Pulsar-Java-v2.8.0" 139
15:17:55.933 [AsyncHttpClient-57-1] WARN org.apache.pulsar.client.admin.internal.BaseResource - [http://10.66.107.37:8080/admin/v2/persistent/public/functions/assignments] Failed to perform http put request: javax.ws.rs.ClientErrorException: HTTP 412 Precondition Failed
15:17:55.944 [main] ERROR org.apache.pulsar.functions.worker.PulsarWorkerService - Error Starting up in worker
org.apache.pulsar.client.admin.PulsarAdminException$PreconditionFailedException: Namespace does not have any clusters configured
at org.apache.pulsar.client.admin.internal.BaseResource.getApiException(BaseResource.java:236) ~[org.apache.pulsar-pulsar-client-admin-original-2.8.0.jar:2.8.0]
at org.apache.pulsar.client.admin.internal.BaseResource$1.failed(BaseResource.java:130) ~[org.apache.pulsar-pulsar-client-admin-original-2.8.0.jar:2.8.0]
at org.glassfish.jersey.client.JerseyInvocation$1.failed(JerseyInvocation.java:882) ~[org.glassfish.jersey.core-jersey-client-2.34.jar:?]
at org.glassfish.jersey.client.JerseyInvocation$1.completed(JerseyInvocation.java:863) ~[org.glassfish.jersey.core-jersey-client-2.34.jar:?]
at org.glassfish.jersey.client.ClientRuntime.processResponse(ClientRuntime.java:229) ~[org.glassfish.jersey.core-jersey-client-2.34.jar:?]
at org.glassfish.jersey.client.ClientRuntime.access$200(ClientRuntime.java:62) ~[org.glassfish.jersey.core-jersey-client-2.34.jar:?]
at org.glassfish.jersey.client.ClientRuntime$2.lambda$response$0(ClientRuntime.java:173) ~[org.glassfish.jersey.core-jersey-client-2.34.jar:?]
at org.glassfish.jersey.internal.Errors$1.call(Errors.java:248) ~[org.glassfish.jersey.core-jersey-common-2.34.jar:?]
at org.glassfish.jersey.internal.Errors$1.call(Errors.java:244) ~[org.glassfish.jersey.core-jersey-common-2.34.jar:?]
at org.glassfish.jersey.internal.Errors.process(Errors.java:292) ~[org.glassfish.jersey.core-jersey-common-2.34.jar:?]
at org.glassfish.jersey.internal.Errors.process(Errors.java:274) ~[org.glassfish.jersey.core-jersey-common-2.34.jar:?]
at org.glassfish.jersey.internal.Errors.process(Errors.java:244) ~[org.glassfish.jersey.core-jersey-common-2.34.jar:?]
at org.glassfish.jersey.process.internal.RequestScope.runInScope(RequestScope.java:288) ~[org.glassfish.jersey.core-jersey-common-2.34.jar:?]
at org.glassfish.jersey.client.ClientRuntime$2.response(ClientRuntime.java:173) ~[org.glassfish.jersey.core-jersey-client-2.34.jar:?]
at org.apache.pulsar.client.admin.internal.http.AsyncHttpConnector.lambda$apply$1(AsyncHttpConnector.java:212) ~[org.apache.pulsar-pulsar-client-admin-original-2.8.0.jar:2.8.0]
at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:760) ~[?:1.8.0_131]
at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:736) ~[?:1.8.0_131]
at java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:474) ~[?:1.8.0_131]
at java.util.concurrent.CompletableFuture.complete(CompletableFuture.java:1962) ~[?:1.8.0_131]
at org.apache.pulsar.client.admin.internal.http.AsyncHttpConnector.lambda$retryOperation$4(AsyncHttpConnector.java:254) ~[org.apache.pulsar-pulsar-client-admin-original-2.8.0.jar:2.8.0]
at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:760) ~[?:1.8.0_131]
at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:736) ~[?:1.8.0_131]
at java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:474) ~[?:1.8.0_131]
at java.util.concurrent.CompletableFuture.complete(CompletableFuture.java:1962) ~[?:1.8.0_131]
at org.asynchttpclient.netty.NettyResponseFuture.loadContent(NettyResponseFuture.java:222) ~[org.asynchttpclient-async-http-client-2.12.1.jar:?]
at org.asynchttpclient.netty.NettyResponseFuture.done(NettyResponseFuture.java:257) ~[org.asynchttpclient-async-http-client-2.12.1.jar:?]
at org.asynchttpclient.netty.handler.AsyncHttpClientHandler.finishUpdate(AsyncHttpClientHandler.java:241) ~[org.asynchttpclient-async-http-client-2.12.1.jar:?]
at org.asynchttpclient.netty.handler.HttpHandler.handleChunk(HttpHandler.java:114) ~[org.asynchttpclient-async-http-client-2.12.1.jar:?]
at org.asynchttpclient.netty.handler.HttpHandler.handleRead(HttpHandler.java:143) ~[org.asynchttpclient-async-http-client-2.12.1.jar:?]
at org.asynchttpclient.netty.handler.AsyncHttpClientHandler.channelRead(AsyncHttpClientHandler.java:78) ~[org.asynchttpclient-async-http-client-2.12.1.jar:?]
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) ~[io.netty-netty-transport-4.1.63.Final.jar:4.1.63.Final]
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) ~[io.netty-netty-transport-4.1.63.Final.jar:4.1.63.Final]
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357) ~[io.netty-netty-transport-4.1.63.Final.jar:4.1.63.Final]
at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:103) ~[io.netty-netty-codec-4.1.63.Final.jar:4.1.63.Final]
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) ~[io.netty-netty-transport-4.1.63.Final.jar:4.1.63.Final]
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) ~[io.netty-netty-transport-4.1.63.Final.jar:4.1.63.Final]
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357) ~[io.netty-netty-transport-4.1.63.Final.jar:4.1.63.Final]
at io.netty.channel.CombinedChannelDuplexHandler$DelegatingChannelHandlerContext.fireChannelRead(CombinedChannelDuplexHandler.java:436) ~[io.netty-netty-transport-4.1.63.Final.jar:4.1.63.Final]
at io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:324) ~[io.netty-netty-codec-4.1.63.Final.jar:4.1.63.Final]
at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:296) ~[io.netty-netty-codec-4.1.63.Final.jar:4.1.63.Final]
at io.netty.channel.CombinedChannelDuplexHandler.channelRead(CombinedChannelDuplexHandler.java:251) ~[io.netty-netty-transport-4.1.63.Final.jar:4.1.63.Final]
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) ~[io.netty-netty-transport-4.1.63.Final.jar:4.1.63.Final]
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) ~[io.netty-netty-transport-4.1.63.Final.jar:4.1.63.Final]
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357) ~[io.netty-netty-transport-4.1.63.Final.jar:4.1.63.Final]
at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) ~[io.netty-netty-transport-4.1.63.Final.jar:4.1.63.Final]
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) ~[io.netty-netty-transport-4.1.63.Final.jar:4.1.63.Final]
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) ~[io.netty-netty-transport-4.1.63.Final.jar:4.1.63.Final]
at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) ~[io.netty-netty-transport-4.1.63.Final.jar:4.1.63.Final]
at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) ~[io.netty-netty-transport-4.1.63.Final.jar:4.1.63.Final]
at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:719) ~[io.netty-netty-transport-4.1.63.Final.jar:4.1.63.Final]
at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:655) ~[io.netty-netty-transport-4.1.63.Final.jar:4.1.63.Final]
at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:581) ~[io.netty-netty-transport-4.1.63.Final.jar:4.1.63.Final]
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:493) ~[io.netty-netty-transport-4.1.63.Final.jar:4.1.63.Final]
at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) ~[io.netty-netty-common-4.1.63.Final.jar:4.1.63.Final]
at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) ~[io.netty-netty-common-4.1.63.Final.jar:4.1.63.Final]
at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ~[io.netty-netty-common-4.1.63.Final.jar:4.1.63.Final]
at java.lang.Thread.run(Thread.java:748) ~[?:1.8.0_131]
Caused by: javax.ws.rs.ClientErrorException: HTTP 412 Precondition Failed
at org.glassfish.jersey.client.JerseyInvocation.createExceptionForFamily(JerseyInvocation.java:985) ~[org.glassfish.jersey.core-jersey-client-2.34.jar:?]
at org.glassfish.jersey.client.JerseyInvocation.convertToException(JerseyInvocation.java:967) ~[org.glassfish.jersey.core-jersey-client-2.34.jar:?]
at org.glassfish.jersey.client.JerseyInvocation.access$700(JerseyInvocation.java:82) ~[org.glassfish.jersey.core-jersey-client-2.34.jar:?]
... 54 more
问题2:
将functionsWorkerEnabled全部改为false后正常,但是在集群1某个节点上执行如下命令配置从pulsar-cluster-1到pulsar-cluster-2的跨地域复制时:
bin/pulsar-admin clusters create \
--broker-url pulsar://10.66.107.37:6650,10.66.107.38:6650,10.66.107.39:6650 \
--url http://10.66.107.37:8080,10.66.107.38:8080,10.66.107.39:8080 \
pulsar-cluster-2
报错:
22:26:57.207 [AsyncHttpClient-7-1] WARN org.apache.pulsar.client.admin.internal.BaseResource - [http://10.66.107.32:8080/admin/v2/clusters/pulsar-cluster-2] Failed to perform http put request: javax.ws.rs.ClientErrorException: HTTP 409 Conflict
Cluster already exists
Reason: Cluster already exists
难道按照上述步骤搭建完之后,自动就是双向的全连通复制了?如果是这样的话,单向复制模式和failover模式该如何搭建呢?
问题3:
# 测试结果:
1.使用命令行在两个集群中发送消息:先在集群1上进行消费,消费完之后连到集群2上重启Java代码消费者,没有重复消费;
2.使用命令行在两个集群中发送消息:先在集群2上进行消费,消费完之后连到集群1上重启Java代码消费者,有重复消费;
| non_infrastructure | broker开启function功能时的多集群跨地域复制问题 : ,每台服务器上一个broker实例、一个bookie实例 local zookeeper集群: : ,每台服务器上一个broker实例、一个bookie实例 local zookeeper集群: 共享的存储配置zookeeper集群: ( ): broker conf: zookeeperservers configurationstoreservers brokerserviceporttls webserviceporttls advertisedaddress clustername pulsar cluster functionsworkerenabled true bookkeeper conf advertisedaddress bookieid zkservers httpserverenabled true functions worker yml workerid workerhostname configurationstoreservers pulsarfunctionscluster pulsar cluster statestorageserviceurl bk localhost pulsarfunctionsnamespace public ( ): broker conf zookeeperservers configurationstoreservers brokerserviceporttls webserviceporttls advertisedaddress clustername pulsar cluster functionsworkerenabled true bookkeeper conf advertisedaddress bookieid zkservers httpserverenabled true functions worker yml workerid workerhostname configurationstoreservers pulsarfunctionscluster pulsar cluster statestorageserviceurl bk localhost pulsarfunctionsnamespace public : conf中的functionsworkerenabled true。 第一个集群bookie、broker启动正常,第二个集群的三个bookie启动正常,但是三个broker启动报错(后将functionsworkerenabled全部改为false后正常): warn org apache pulsar broker web pulsarwebresource namespace missing local cluster name in clusters list local cluster pulsar cluster ns public functions clusters info org eclipse jetty server requestlog put admin persistent public functions assignments http pulsar java warn org apache pulsar client admin internal baseresource failed to perform http put request javax ws rs clienterrorexception http precondition failed error org apache pulsar functions worker pulsarworkerservice error starting up in worker org apache pulsar client admin pulsaradminexception preconditionfailedexception namespace does not have any clusters configured at org apache pulsar client admin internal baseresource getapiexception baseresource java at org apache pulsar client admin internal baseresource failed baseresource java at org glassfish jersey client 
jerseyinvocation failed jerseyinvocation java at org glassfish jersey client jerseyinvocation completed jerseyinvocation java at org glassfish jersey client clientruntime processresponse clientruntime java at org glassfish jersey client clientruntime access clientruntime java at org glassfish jersey client clientruntime lambda response clientruntime java at org glassfish jersey internal errors call errors java at org glassfish jersey internal errors call errors java at org glassfish jersey internal errors process errors java at org glassfish jersey internal errors process errors java at org glassfish jersey internal errors process errors java at org glassfish jersey process internal requestscope runinscope requestscope java at org glassfish jersey client clientruntime response clientruntime java at org apache pulsar client admin internal http asynchttpconnector lambda apply asynchttpconnector java at java util concurrent completablefuture uniwhencomplete completablefuture java at java util concurrent completablefuture uniwhencomplete tryfire completablefuture java at java util concurrent completablefuture postcomplete completablefuture java at java util concurrent completablefuture complete completablefuture java at org apache pulsar client admin internal http asynchttpconnector lambda retryoperation asynchttpconnector java at java util concurrent completablefuture uniwhencomplete completablefuture java at java util concurrent completablefuture uniwhencomplete tryfire completablefuture java at java util concurrent completablefuture postcomplete completablefuture java at java util concurrent completablefuture complete completablefuture java at org asynchttpclient netty nettyresponsefuture loadcontent nettyresponsefuture java at org asynchttpclient netty nettyresponsefuture done nettyresponsefuture java at org asynchttpclient netty handler asynchttpclienthandler finishupdate asynchttpclienthandler java at org asynchttpclient netty handler httphandler handlechunk 
httphandler java at org asynchttpclient netty handler httphandler handleread httphandler java at org asynchttpclient netty handler asynchttpclienthandler channelread asynchttpclienthandler java at io netty channel abstractchannelhandlercontext invokechannelread abstractchannelhandlercontext java at io netty channel abstractchannelhandlercontext invokechannelread abstractchannelhandlercontext java at io netty channel abstractchannelhandlercontext firechannelread abstractchannelhandlercontext java at io netty handler codec messagetomessagedecoder channelread messagetomessagedecoder java at io netty channel abstractchannelhandlercontext invokechannelread abstractchannelhandlercontext java at io netty channel abstractchannelhandlercontext invokechannelread abstractchannelhandlercontext java at io netty channel abstractchannelhandlercontext firechannelread abstractchannelhandlercontext java at io netty channel combinedchannelduplexhandler delegatingchannelhandlercontext firechannelread combinedchannelduplexhandler java at io netty handler codec bytetomessagedecoder firechannelread bytetomessagedecoder java at io netty handler codec bytetomessagedecoder channelread bytetomessagedecoder java at io netty channel combinedchannelduplexhandler channelread combinedchannelduplexhandler java at io netty channel abstractchannelhandlercontext invokechannelread abstractchannelhandlercontext java at io netty channel abstractchannelhandlercontext invokechannelread abstractchannelhandlercontext java at io netty channel abstractchannelhandlercontext firechannelread abstractchannelhandlercontext java at io netty channel defaultchannelpipeline headcontext channelread defaultchannelpipeline java at io netty channel abstractchannelhandlercontext invokechannelread abstractchannelhandlercontext java at io netty channel abstractchannelhandlercontext invokechannelread abstractchannelhandlercontext java at io netty channel defaultchannelpipeline firechannelread defaultchannelpipeline java at io 
netty channel nio abstractniobytechannel niobyteunsafe read abstractniobytechannel java at io netty channel nio nioeventloop processselectedkey nioeventloop java at io netty channel nio nioeventloop processselectedkeysoptimized nioeventloop java at io netty channel nio nioeventloop processselectedkeys nioeventloop java at io netty channel nio nioeventloop run nioeventloop java at io netty util concurrent singlethreadeventexecutor run singlethreadeventexecutor java at io netty util internal threadexecutormap run threadexecutormap java at io netty util concurrent fastthreadlocalrunnable run fastthreadlocalrunnable java at java lang thread run thread java caused by javax ws rs clienterrorexception http precondition failed at org glassfish jersey client jerseyinvocation createexceptionforfamily jerseyinvocation java at org glassfish jersey client jerseyinvocation converttoexception jerseyinvocation java at org glassfish jersey client jerseyinvocation access jerseyinvocation java more : 将functionsworkerenabled全部改为false后正常, cluster cluster : bin pulsar admin clusters create broker url pulsar url pulsar cluster 报错: warn org apache pulsar client admin internal baseresource failed to perform http put request javax ws rs clienterrorexception http conflict cluster already exists reason cluster already exists 难道按照上述步骤搭建完之后,自动就是双向的全连通复制了?如果是这样的话,单向复制模式和failover模式该如何搭建呢? : 测试结果: 使用命令行在两个集群中发送消息: , ,没有重复消费; 使用命令行在两个集群中发送消息: , ,有重复消费; | 0 |
18,873 | 13,151,361,469 | IssuesEvent | 2020-08-09 16:21:47 | MathiasMen/FreeFit | https://api.github.com/repos/MathiasMen/FreeFit | opened | ExerciseEditor: Create tests regarding new exercises | Infrastructure Testing | - [ ] Test that adding an exercise works
- [ ] Test that text input works
- [ ] Test that validation of text works
- [ ] Test that slider works
- [ ] Test that delete button works
- [ ] Test that multiple new exercises don't interfere when being edited
When checking for input, one could either check for changed data in the `FreeFit::Data::Exercise` object or actually carry out a demand by clicking add to exercises. | 1.0 | infrastructure | 1 |
247,979 | 26,771,132,311 | IssuesEvent | 2023-01-31 14:12:27 | TreyM-WSS/whitesource-demo-1 | https://api.github.com/repos/TreyM-WSS/whitesource-demo-1 | opened | CVE-2022-25881 (Medium) detected in http-cache-semantics-3.8.1.tgz | security vulnerability | ## CVE-2022-25881 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>http-cache-semantics-3.8.1.tgz</b></p></summary>
<p>Parses Cache-Control and other headers. Helps building correct HTTP caches and proxies</p>
<p>Library home page: <a href="https://registry.npmjs.org/http-cache-semantics/-/http-cache-semantics-3.8.1.tgz">https://registry.npmjs.org/http-cache-semantics/-/http-cache-semantics-3.8.1.tgz</a></p>
<p>Path to dependency file: /package.json</p>
<p>Path to vulnerable library: /node_modules/http-cache-semantics/package.json</p>
<p>
Dependency Hierarchy:
- cli-9.0.0-next.10.tgz (Root Library)
- pacote-9.5.8.tgz
- make-fetch-happen-5.0.2.tgz
- :x: **http-cache-semantics-3.8.1.tgz** (Vulnerable Library)
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
This affects versions of the package http-cache-semantics before 4.1.1. The issue can be exploited via malicious request header values sent to a server, when that server reads the cache policy from the request using this library.
<p>Publish Date: 2023-01-31
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2022-25881>CVE-2022-25881</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.3</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: Low
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://www.cve.org/CVERecord?id=CVE-2022-25881">https://www.cve.org/CVERecord?id=CVE-2022-25881</a></p>
<p>Release Date: 2023-01-31</p>
<p>Fix Resolution: http-cache-semantics - 4.1.1</p>
</p>
</details>
<p></p>
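Beyond the suggested upgrade above, one possible remediation for a transitive dependency like this, assuming the project can use npm 8.3 or newer (which honors the `overrides` field in package.json), is to pin the vulnerable package to the patched release. This is a sketch rather than a vendor-confirmed fix:

```shell
# Show which dependency chain pulls in the vulnerable version.
npm ls http-cache-semantics

# Pin the transitive dependency to the patched release via "overrides"
# (requires npm >= 8.3), then reinstall to update the lockfile.
npm pkg set overrides.http-cache-semantics="^4.1.1"
npm install
```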
| True | non_infrastructure | 0 |
386,688 | 11,448,810,914 | IssuesEvent | 2020-02-06 04:57:02 | webcompat/web-bugs | https://api.github.com/repos/webcompat/web-bugs | closed | www.miekemosmuller.com - Text overflows the viewport | browser-firefox-mobile engine-gecko priority-normal severity-important | <!-- @browser: Firefox Mobile 68.0 -->
<!-- @ua_header: Mozilla/5.0 (Android 8.0.0; Mobile; rv:68.0) Gecko/68.0 Firefox/68.0 -->
<!-- @reported_with: mobile-reporter -->
**URL**: https://www.miekemosmuller.com/nl/blog/ernst-en-spel
**Browser / Version**: Firefox Mobile 68.0
**Operating System**: Android 8.0.0
**Tested Another Browser**: Yes
**Problem type**: Desktop site instead of mobile site
**Description**: text to big for screen
**Steps to Reproduce**:
Text can't read end of every sentence right side
[Screenshot](https://webcompat.com/uploads/2020/2/81099b97-7fed-4d5b-9a2f-cc0b038190c7.jpeg)
<details>
<summary>Browser Configuration</summary>
<ul>
<li>gfx.webrender.all: false</li><li>gfx.webrender.blob-images: true</li><li>gfx.webrender.enabled: false</li><li>image.mem.shared: true</li><li>buildID: 20200125223657</li><li>channel: beta</li><li>hasTouchScreen: true</li><li>mixed active content blocked: false</li><li>mixed passive content blocked: false</li><li>tracking content blocked: false</li>
</ul>
</details>
[View console log messages](https://webcompat.com/console_logs/2020/2/115b99dc-9e97-4d31-b7bc-e74ebc455eba)
_From [webcompat.com](https://webcompat.com/) with ❤️_ | 1.0 | non_infrastructure | 0 |
380,241 | 11,256,072,719 | IssuesEvent | 2020-01-12 13:57:04 | kubernetes/website | https://api.github.com/repos/kubernetes/website | closed | No diagram on /concepts/overview/components/ | kind/feature language/en lifecycle/stale priority/important-longterm | <!-- Thanks for filing an issue! Before submitting, please fill in the following information. -->
<!-- See https://kubernetes.io/docs/contribute/start/ for guidance on writing an actionable issue description. -->
<!--Required Information-->
**This is a...**
<!-- choose one by changing [ ] to [x] -->
- [x] Feature Request
- [ ] Bug Report
**Problem:**
On the page describing components there is no diagram showing how they are tied together. There is such a diagram on https://kubernetes.io/docs/concepts/architecture/cloud-controller/, but it's not obvious that one should look for it there.
**Proposed Solution:**
Add diagram from https://kubernetes.io/docs/concepts/architecture/cloud-controller/ to https://kubernetes.io/docs/concepts/overview/components/
**Page to Update:**
https://kubernetes.io/docs/concepts/overview/components/
<!--Optional Information (remove the comment tags around information you would like to include)-->
<!--Kubernetes Version:-->
<!--Additional Information:-->
| 1.0 | No diagram on /concepts/overview/components/ - <!-- Thanks for filing an issue! Before submitting, please fill in the following information. -->
<!-- See https://kubernetes.io/docs/contribute/start/ for guidance on writing an actionable issue description. -->
<!--Required Information-->
**This is a...**
<!-- choose one by changing [ ] to [x] -->
- [x] Feature Request
- [ ] Bug Report
**Problem:**
On the page describing components there is no diagram presenting, how they are tied together. There is such diagram on https://kubernetes.io/docs/concepts/architecture/cloud-controller/, but it's not obvious that one should look for one there.
**Proposed Solution:**
Add diagram from https://kubernetes.io/docs/concepts/architecture/cloud-controller/ to https://kubernetes.io/docs/concepts/overview/components/
**Page to Update:**
https://kubernetes.io/docs/concepts/overview/components/
<!--Optional Information (remove the comment tags around information you would like to include)-->
<!--Kubernetes Version:-->
<!--Additional Information:-->
| non_infrastructure | no diagram on concepts overview components this is a feature request bug report problem on the page describing components there is no diagram presenting how they are tied together there is such diagram on but it s not obvious that one should look for one there proposed solution add diagram from to page to update | 0 |
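Each row above stores the raw issue body alongside a lowercased, punctuation- and digit-stripped `text` column. The exact pipeline that produced this dataset is unknown; the Python sketch below only approximates the visible transformation (HTML-comment and URL stripping, lowercasing, dropping digits and punctuation, collapsing whitespace):

```python
import re

def normalize(text: str) -> str:
    """Approximate the dataset's processed `text` column from a raw issue body."""
    text = re.sub(r"<!--.*?-->", " ", text, flags=re.DOTALL)  # drop HTML comments
    text = re.sub(r"https?://\S+", " ", text)                 # drop bare URLs
    text = text.lower()
    text = re.sub(r"\d+", " ", text)                          # digits are dropped
    text = re.sub(r"[^a-z\s]", " ", text)                     # drop punctuation
    return re.sub(r"\s+", " ", text).strip()                  # collapse whitespace

print(normalize("No diagram on /concepts/overview/components/"))
# → no diagram on concepts overview components
```

This reproduces the title fragment of the row above, but it will not match every row exactly (for instance, the ❤️ emoji survives in the first row's processed text), which is why it is labelled an approximation rather than the dataset's actual preprocessing code.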
35,120 | 30,770,952,781 | IssuesEvent | 2023-07-30 22:27:48 | APSIMInitiative/ApsimX | https://api.github.com/repos/APSIMInitiative/ApsimX | opened | Need to rethink how we create windows releases | interface/infrastructure refactor | ### Describe the new feature
We got the email below from Google Compute.
We use Windows VM images for apsimdev.apsim.info and for creating APSIM releases.
1. Migrating from windows server to debian for apsimdev.apsim.info is a big job. It is something we need to do but hopefully we can do this over time and not before end of October. The email below says:
*Virtual machines based on these images created prior to October 30, 2023, will continue to run*
2. Can we stop using Windows images and use debian images instead for creating APSIM releases? This should be doable in the short term.
**Email from Google Compute:**
We’re writing to let you know that starting October 30, 2023, the "Windows Server 2019 Datacenter for Containers" Virtual Machines (VM) image family will be discontinued, and all images will be removed.
**What do you need to know?**
This image family contains the Mirantis Container Runtime (formerly Docker EE) which was historically distributed and supported on Windows by Microsoft for no additional charge. Because Microsoft ceased distributing and supporting the Mirantis Container Runtime, we’ll be unable to continue distributing it as part of the "Windows Server 2019 Datacenter for Containers" VM image family after October 30, 2023.
Virtual machines based on these images created prior to October 30, 2023, will continue to run, but you will not be able to create new instances or images based on images from the "Windows Server 2019 Datacenter for Containers" VM image family. After this date, support, security updates, and patch fixes will be unavailable for the "Windows Server 2019 Datacenter for Containers" VM image family.
The [open-source](https://notifications.google.com/g/p/ADa0GC-s3eO1d11kzqJClr4fLshQdGOHlorWTFBiIP9OtYwVG229rvpCcjmlcg0bKNO8sDOCSR47cWz4cDhDQm_8scwJA2a8SaWmibFTnw_40IxN8xtnVdqDR6s80cKAv0ycWFZ5kslHpmnpXZckP5fyGsMQbsyF26ylXLH2AAv6xxvJOzSXIIgNjYmR-H4amL0osY16Dcp0IRMFgOOPccA7ZzdUM9YrrxJH5v3_ussPXunnFev3iOKookZfkrrFuho5LvDVsSPhP0ImveOIQR3tvatSgqFn) GKE Windows Builder will be updated to use the open-source Docker CE Runtime, after October 20, 2023.
**What do you need to do?**
Please review the instructions described below and take the appropriate action depending on how you are using VM images.
If you’re using VM images as part of the GKE Windows Builder:
* If you don’t require support from a commercial vendor, no action is required.
* If you require support from a commercial vendor, please reach out to your account manager, support or sales for guidance.
If you’re using VM images outside of GKE Windows Builder, please migrate to an alternative image with your chosen container runtime:
* If you want to continue using the Mirantis Container Runtime, starting July 2023, Mirantis is offering [Mirantis Container Runtime for Windows Server on Google Cloud Marketplace](https://notifications.google.com/g/p/ADa0GC8mwSPswt-fQ4KYjlVCQlE-y0KcWaEXEgHae2xqLh6Obx_RePXiJDbX80_O8SXCTB1qEb8SG4AfjrsFKYnJTUGbA_JBySj8ITgJfkMWphmUM6sp7BhbEQLu9iD8kMNs5HdXWoYtgA8qKu3J45EwsRZ-FeFFi3yifsIlUbYIVVAj5T1yHT0ueh6Ckqp6AZqgo_okcn8CXNuswLszhsjuEouYS3ybTIv72nt4BDfw0CxJqs67JGRCTqzRmI5dx2aiT_22VQvwTUhhe7Y2L-Xk30NMY-Ld5eOHh3u6NDquaX_7nhHHHtOnFdkwELQQ5g). The cost of this VM image includes support and licensing for Mirantis Container Runtime directly from Mirantis. Customer applications will continue to run as before.
The only difference being that these VM images are now offered by Mirantis, Inc. through Google Cloud Marketplace. Customers interested in using the standalone Mirantis Container Runtime can download it directly from the [Mirantis website](https://notifications.google.com/g/p/ADa0GC9MNsau0NPsae4zMVy83GvGVns1Z8fUzKzzxX0rKFMvYN0YAekDqxVs4ivzocYHVg7r1VjU5QsY7tV34j5rOf0BjWaxbICSWANqVKFM-yOIuLkGU1AvGb1KFkAzYx7gaYg-eK5Jk_i1iskBu-_K8hotX8IQkOql91FiD6ld19_fD95zMh8o7zYSyC4gIYLgpbw-Vr_WYHjYvmYs6j0lF__0kjkf6Qr3Qg).
* If you want to migrate to an alternative container runtime like Docker CE, please begin with the “Windows Server 2019 Datacenter” image family and install that runtime during your image build process.
Your affected projects are below:
* [APSIM Web Services (apsim-web-services)](https://notifications.google.com/g/p/ADa0GC-hKDSDx4-Q3fZ4RCtX48XDZS7DbqzS-pill9uPR_eSa59Kr3P7tBL-NyM85HHF8axgZ8b-eIlQbPAFSP-DMnswNvL7YxyxF7Pvs-M1BJ_GQ35tBdkeb5FSlMMJaeKZUTTHdttDLQKfo9dvtRh48i-gvodGcaJOxomqKhTai0ovcio-DvVMEXFeAKDaAu93iDUVVFapWQoHtkgKRePl4P-aRlkzMQzY43E)
**We’re here to help**
If you need more information please refer to the following resources:
* [GKE Windows Builder documentation](https://notifications.google.com/g/p/ADa0GC9pEO9ZppaMr4utk9Kh643doVgRu35DON84mGSl4Pi6M8fsMRpzKkbvidg--sIpP_jJEQhKUAd6ISVmoXqXpxLLmTar3udH1oI9aCQ0CABEFbMQtIALReqlxPeo88It5wPf5ZiVjzmmQP8Lrbk0uDSU9XKEYwnlj4MgDimJKpyXOLiZTt_TCoY32n8YRXa7yQ0iDyzByz7yKI0jDkObBnZxqqFGAz6SudX0fDYR1IDjNIlhXRt11aK5l5na4sl8rOH_BbtxXP0WZqAnVS3wr-FVPz4F0McbLk2NzYc8fBAh5KE0uK6w-Dzo_ulAuEyhjC-Kh3GEecdmxp-S2Zh7eQ0rz_si)
* [Deprecation of Docker Virtual Machine Images announcement](https://notifications.google.com/g/p/ADa0GC9ahXcuZdVTuav-FbhSI-dKBvJ0q2qmESeY6fiUt8ZsXNBkl5DReDn1pf-bsJ42PVYOpx14D14Nw46IsRy-u_ss_Iej83f82jTw3nXML1fdbaJYef4alSYJZdsnkJz4RUQXep92wt2ZAWT81O8AfqM2UHpVC6YTL5mrUKX8qTtyaS-AeRRnBF1gdue2JkIJFKuKxmewdgolY-Bah_hKb54DF6r2qknBt33zcVu9mDb198W9F1Vf12yVupohh4fyqOn-sRaxW0cCbCzwTaZJ0k5mNnzQb0CmP62uxhnF2o8ac8vLZOBxGNww8A)
| 1.0 | Need to rethink how we create windows releases - ### Describe the new feature
We got the email below from Google Compute.
We use Windows VM images for apsimdev.apsim.info and for creating APSIM releases.
1. Migrating from windows server to debian for apsimdev.apsim.info is a big job. It is something we need to do but hopefully we can do this over time and not before end of October. The email below says:
*Virtual machines based on these images created prior to October 30, 2023, will continue to run*
2. Can we stop using Windows images and use debian images instead for creating APSIM releases? This should be doable in the short term.
**Email from Google Compute:**
We’re writing to let you know that starting October 30, 2023, the "Windows Server 2019 Datacenter for Containers" Virtual Machines (VM) image family will be discontinued, and all images will be removed.
**What do you need to know?**
This image family contains the Mirantis Container Runtime (formerly Docker EE) which was historically distributed and supported on Windows by Microsoft for no additional charge. Because Microsoft ceased distributing and supporting the Mirantis Container Runtime, we’ll be unable to continue distributing it as part of the "Windows Server 2019 Datacenter for Containers" VM image family after October 30, 2023.
Virtual machines based on these images created prior to October 30, 2023, will continue to run, but you will not be able to create new instances or images based on images from the "Windows Server 2019 Datacenter for Containers" VM image family. After this date, support, security updates, and patch fixes will be unavailable for the "Windows Server 2019 Datacenter for Containers" VM image family.
The [open-source](https://notifications.google.com/g/p/ADa0GC-s3eO1d11kzqJClr4fLshQdGOHlorWTFBiIP9OtYwVG229rvpCcjmlcg0bKNO8sDOCSR47cWz4cDhDQm_8scwJA2a8SaWmibFTnw_40IxN8xtnVdqDR6s80cKAv0ycWFZ5kslHpmnpXZckP5fyGsMQbsyF26ylXLH2AAv6xxvJOzSXIIgNjYmR-H4amL0osY16Dcp0IRMFgOOPccA7ZzdUM9YrrxJH5v3_ussPXunnFev3iOKookZfkrrFuho5LvDVsSPhP0ImveOIQR3tvatSgqFn) GKE Windows Builder will be updated to use the open-source Docker CE Runtime, after October 20, 2023.
**What do you need to do?**
Please review the instructions described below and take the appropriate action depending on how you are using VM images.
If you’re using VM images as part of the GKE Windows Builder:
* If you don’t require support from a commercial vendor, no action is required.
* If you require support from a commercial vendor, please reach out to your account manager, support or sales for guidance.
If you’re using VM images outside of GKE Windows Builder, please migrate to an alternative image with your chosen container runtime:
* If you want to continue using the Mirantis Container Runtime, starting July 2023, Mirantis is offering [Mirantis Container Runtime for Windows Server on Google Cloud Marketplace](https://notifications.google.com/g/p/ADa0GC8mwSPswt-fQ4KYjlVCQlE-y0KcWaEXEgHae2xqLh6Obx_RePXiJDbX80_O8SXCTB1qEb8SG4AfjrsFKYnJTUGbA_JBySj8ITgJfkMWphmUM6sp7BhbEQLu9iD8kMNs5HdXWoYtgA8qKu3J45EwsRZ-FeFFi3yifsIlUbYIVVAj5T1yHT0ueh6Ckqp6AZqgo_okcn8CXNuswLszhsjuEouYS3ybTIv72nt4BDfw0CxJqs67JGRCTqzRmI5dx2aiT_22VQvwTUhhe7Y2L-Xk30NMY-Ld5eOHh3u6NDquaX_7nhHHHtOnFdkwELQQ5g). The cost of this VM image includes support and licensing for Mirantis Container Runtime directly from Mirantis. Customer applications will continue to run as before.
The only difference being that these VM images are now offered by Mirantis, Inc. through Google Cloud Marketplace. Customers interested in using the standalone Mirantis Container Runtime can download it directly from the [Mirantis website](https://notifications.google.com/g/p/ADa0GC9MNsau0NPsae4zMVy83GvGVns1Z8fUzKzzxX0rKFMvYN0YAekDqxVs4ivzocYHVg7r1VjU5QsY7tV34j5rOf0BjWaxbICSWANqVKFM-yOIuLkGU1AvGb1KFkAzYx7gaYg-eK5Jk_i1iskBu-_K8hotX8IQkOql91FiD6ld19_fD95zMh8o7zYSyC4gIYLgpbw-Vr_WYHjYvmYs6j0lF__0kjkf6Qr3Qg).
* If you want to migrate to an alternative container runtime like Docker CE, please begin with the “Windows Server 2019 Datacenter” image family and install that runtime during your image build process.
Your affected projects are below:
* [APSIM Web Services (apsim-web-services)](https://notifications.google.com/g/p/ADa0GC-hKDSDx4-Q3fZ4RCtX48XDZS7DbqzS-pill9uPR_eSa59Kr3P7tBL-NyM85HHF8axgZ8b-eIlQbPAFSP-DMnswNvL7YxyxF7Pvs-M1BJ_GQ35tBdkeb5FSlMMJaeKZUTTHdttDLQKfo9dvtRh48i-gvodGcaJOxomqKhTai0ovcio-DvVMEXFeAKDaAu93iDUVVFapWQoHtkgKRePl4P-aRlkzMQzY43E)
**We’re here to help**
If you need more information please refer to the following resources:
* [GKE Windows Builder documentation](https://notifications.google.com/g/p/ADa0GC9pEO9ZppaMr4utk9Kh643doVgRu35DON84mGSl4Pi6M8fsMRpzKkbvidg--sIpP_jJEQhKUAd6ISVmoXqXpxLLmTar3udH1oI9aCQ0CABEFbMQtIALReqlxPeo88It5wPf5ZiVjzmmQP8Lrbk0uDSU9XKEYwnlj4MgDimJKpyXOLiZTt_TCoY32n8YRXa7yQ0iDyzByz7yKI0jDkObBnZxqqFGAz6SudX0fDYR1IDjNIlhXRt11aK5l5na4sl8rOH_BbtxXP0WZqAnVS3wr-FVPz4F0McbLk2NzYc8fBAh5KE0uK6w-Dzo_ulAuEyhjC-Kh3GEecdmxp-S2Zh7eQ0rz_si)
* [Deprecation of Docker Virtual Machine Images announcement](https://notifications.google.com/g/p/ADa0GC9ahXcuZdVTuav-FbhSI-dKBvJ0q2qmESeY6fiUt8ZsXNBkl5DReDn1pf-bsJ42PVYOpx14D14Nw46IsRy-u_ss_Iej83f82jTw3nXML1fdbaJYef4alSYJZdsnkJz4RUQXep92wt2ZAWT81O8AfqM2UHpVC6YTL5mrUKX8qTtyaS-AeRRnBF1gdue2JkIJFKuKxmewdgolY-Bah_hKb54DF6r2qknBt33zcVu9mDb198W9F1Vf12yVupohh4fyqOn-sRaxW0cCbCzwTaZJ0k5mNnzQb0CmP62uxhnF2o8ac8vLZOBxGNww8A)
| infrastructure | need to rethink how we create windows releases describe the new feature we got the email below from google compute we use windows vm images for apsimdev apsim info and for creating apsim releases migrating from windows server to debian for apsimdev apsim info is a big job it is something we need to do but hopefully we can do this over time and not before end of october the email below says virtual machines based on these images created prior to october will continue to run can we stop using windows images and use debian images instead for creating apsim releases this should be doable in the short term email from google compute we’re writing to let you know that starting october the windows server datacenter for containers virtual machines vm image family will be discontinued and all images will be removed what do you need to know this image family contains the mirantis container runtime formerly docker ee which was historically distributed and supported on windows by microsoft for no additional charge because microsoft ceased distributing and supporting the mirantis container runtime we’ll be unable to continue distributing it as part of the windows server datacenter for containers vm image family after october virtual machines based on these images created prior to october will continue to run but you will not be able to create new instances or images based on images from the windows server datacenter for containers vm image family after this date support security updates and patch fixes will be unavailable for the windows server datacenter for containers vm image family the gke windows builder will be updated to use the open source docker ce runtime after october what do you need to do please review the instructions described below and take the appropriate action depending on how you are using vm images if you’re using vm images as part of the gke windows builder if you don’t require support from a commercial vendor no action is required if you 
require support from a commercial vendor please reach out to your account manager support or sales for guidance if you’re using vm images outside of gke windows builder please migrate to an alternative image with your chosen container runtime if you want to continue using the mirantis container runtime starting july mirantis is offering the cost of this vm image includes support and licensing for mirantis container runtime directly from mirantis customer applications will continue to run as before the only difference being that these vm images are now offered by mirantis inc through google cloud marketplace customers interested in using the standalone mirantis container runtime can download it directly from the if you want to migrate to an alternative container runtime like docker ce please begin with the “windows server datacenter” image family and install that runtime during your image build process your affected projects are below we’re here to help if you need more information please refer to the following resources | 1 |
7,010 | 6,712,463,866 | IssuesEvent | 2017-10-13 09:28:11 | QQuick/Transcrypt | https://api.github.com/repos/QQuick/Transcrypt | closed | Transcrypt requires write priviledges on its own installation folder | IS: limitation SUB: infrastructure | 1. I upgraded transcrypt in linux using `sudo pip3 install -U transcrypt`
2. Doing anything with it as a standard user (eg: try to build a file), I get permission denied errors or missing modules are reported. Seems I was missing the time module:
```
Transcrypt (TM) Python to JavaScript Small Sane Subset Transpiler Version 3.6.50
[...]
Error while compiling (offending file last):
[...]
File '/usr/local/lib/python3.5/dist-packages/transcrypt/modules/time/__init__.py', line 1, namely:
Can't import module 'time'
Aborted
```
3. I located my transcrypt installation folder, and tried to compile the automated tests. Got something like:
```
Transcrypt (TM) Python to JavaScript Small Sane Subset Transpiler Version 3.6.50
[...]
Error while compiling (offending file last):
File '/usr/local/lib/python3.5/dist-packages/transcrypt/development/automated_tests/transcrypt/autotest.py', line 1, at import of:
File '/usr/local/lib/python3.5/dist-packages/transcrypt/modules/org/transcrypt/autotester/__init__.py', line 10, at import of:
File '/usr/local/lib/python3.5/dist-packages/transcrypt/modules/org/transcrypt/autotester/html.py', line 10, namely:
Can't import from module 'org.transcrypt.autotester.html'
Aborted
```
4. Granted write permission to anybody over the transcrypt folder: `sudo chmod 777 -R /usr/local/lib/python3.5/dist-packages/transcrypt`
5. Problem solved.
Feel like it will happen again next upgrade.
| 1.0 | Transcrypt requires write priviledges on its own installation folder - 1. I upgraded transcrypt in linux using `sudo pip3 install -U transcrypt`
2. Doing anything with it as a standard user (eg: try to build a file), I get permission denied errors or missing modules are reported. Seems I was missing the time module:
```
Transcrypt (TM) Python to JavaScript Small Sane Subset Transpiler Version 3.6.50
[...]
Error while compiling (offending file last):
[...]
File '/usr/local/lib/python3.5/dist-packages/transcrypt/modules/time/__init__.py', line 1, namely:
Can't import module 'time'
Aborted
```
3. I located my transcrypt installation folder, and tried to compile the automated tests. Got something like:
```
Transcrypt (TM) Python to JavaScript Small Sane Subset Transpiler Version 3.6.50
[...]
Error while compiling (offending file last):
File '/usr/local/lib/python3.5/dist-packages/transcrypt/development/automated_tests/transcrypt/autotest.py', line 1, at import of:
File '/usr/local/lib/python3.5/dist-packages/transcrypt/modules/org/transcrypt/autotester/__init__.py', line 10, at import of:
File '/usr/local/lib/python3.5/dist-packages/transcrypt/modules/org/transcrypt/autotester/html.py', line 10, namely:
Can't import from module 'org.transcrypt.autotester.html'
Aborted
```
4. Granted write permission to anybody over the transcrypt folder: `sudo chmod 777 -R /usr/local/lib/python3.5/dist-packages/transcrypt`
5. Problem solved.
Feel like it will happen again next upgrade.
| infrastructure | transcrypt requires write priviledges on its own installation folder i upgraded transcrypt in linux using sudo install u transcrypt doing anything with it as a standard user eg try to build a file i get permission denied errors or missing modules are reported seems i was missing the time module transcrypt tm python to javascript small sane subset transpiler version error while compiling offending file last file usr local lib dist packages transcrypt modules time init py line namely can t import module time aborted i located my transcrypt installation folder and tried to compile the automated tests got something like transcrypt tm python to javascript small sane subset transpiler version error while compiling offending file last file usr local lib dist packages transcrypt development automated tests transcrypt autotest py line at import of file usr local lib dist packages transcrypt modules org transcrypt autotester init py line at import of file usr local lib dist packages transcrypt modules org transcrypt autotester html py line namely can t import from module org transcrypt autotester html aborted granted write permission to anybody over the transcrypt folder sudo chmod r usr local lib dist packages transcrypt problem solved feel like it will happen again next upgrade | 1 |
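The Transcrypt row above reduces to a single failure mode: the transpiler needs write access inside its own install directory. As a sketch (the path is taken from the report; its `chmod 777 -R` workaround succeeds but is broader than necessary — a per-user `pip install --user` or a group-write bit is usually the safer fix), a preflight check might look like:

```python
import os
import tempfile

def check_writable(path: str) -> bool:
    """True if *path* is a directory the current user can write into."""
    return os.path.isdir(path) and os.access(path, os.W_OK)

# The directory from the report; on that machine it was root-owned, so
# this returned False for a normal user until permissions were loosened.
site_dir = "/usr/local/lib/python3.5/dist-packages/transcrypt"

# Demo against a directory we certainly own:
with tempfile.TemporaryDirectory() as tmp:
    print(check_writable(tmp))  # True
```

Note that `os.access` checks against the real (not effective) uid, which is exactly the situation in the report: the install ran under `sudo`, the transpile ran as a standard user.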
12,868 | 9,985,500,350 | IssuesEvent | 2019-07-10 16:42:34 | canada-ca/TBS-OCIO-ESP | https://api.github.com/repos/canada-ca/TBS-OCIO-ESP | opened | Develop automation recommendations and guidance | Cloud DigitalWorkSpacce Infrastructure OpenSource artifact | There is a cornucopia of manual processes in the GC. We need to automate. To facilitate, let's develop some guidance and recommendations (pointers to existing solutions and success / failures) that can help anyone expedite their own automation challenges. (eventually these could become standards) | 1.0 | Develop automation recommendations and guidance - There is a cornucopia of manual processes in the GC. We need to automate. To facilitate, let's develop some guidance and recommendations (pointers to existing solutions and success / failures) that can help anyone expedite their own automation challenges. (eventually these could become standards) | infrastructure | develop automation recommendations and guidance there is a cornucopia of manual processes in the gc we need to automate to facilitate let s develop some guidance and recommendations pointers to existing solutions and success failures that can help anyone expedite their own automation challenges eventually these could become standards | 1 |
54,177 | 23,194,494,840 | IssuesEvent | 2022-08-01 15:12:34 | open-services-group/community | https://api.github.com/repos/open-services-group/community | closed | [SIG Services][Guideline] Have a customer 0 | sig/services kind/guideline | Before releasing publicly consider getting a customer 0 so you can get instant feedback on the service before users start complaining right after initial release. Gather UX feedback, don't share any insider info with the customer, let them use it but watch them closely and be in contact as a close support.
Not a strong requirement, rather a nice to have. | 1.0 | [SIG Services][Guideline] Have a customer 0 - Before releasing publicly consider getting a customer 0 so you can get instant feedback on the service before users start complaining right after initial release. Gather UX feedback, don't share any insider info with the customer, let them use it but watch them closely and be in contact as a close support.
Not a strong requirement, rather a nice to have. | non_infrastructure | have a customer before releasing publicly consider getting a customer so you can get instant feedback on the service before users start complaining right after initial release gather ux feedback don t share any insider info with the customer let them use it but watch them closely and be in contact as a close support not a strong requirement rather a nice to have | 0 |
2,061 | 4,320,865,109 | IssuesEvent | 2016-07-25 07:49:55 | aAXEe/online_chart_ol3 | https://api.github.com/repos/aAXEe/online_chart_ol3 | closed | replace url-routing with react-router implementation | requirement | we should use a solid routing library instead of implementing out own routing!? | 1.0 | replace url-routing with react-router implementation - we should use a solid routing library instead of implementing out own routing!? | non_infrastructure | replace url routing with react router implementation we should use a solid routing library instead of implementing out own routing | 0 |
14,320 | 10,741,480,319 | IssuesEvent | 2019-10-29 20:20:04 | filecoin-project/go-filecoin | https://api.github.com/repos/filecoin-project/go-filecoin | closed | run high-level functional test (submitPoSt, commitSector, verification) against nightly devnet each morning | A-FAST A-infrastructure A-tests | ### Description
This story is a stub.
### Acceptance criteria
We need some test which:
1. creates a miner
1. stores a piece or two with that miner, causing it to seal the piece into a sector
1. retrieves a piece from that miner
1. verifies that the `commitSector` message appeared on chain
1. verifies that the `submitPoSt` message appeared on chain
### Risks + pitfalls
### Protocol Changes
### Where to begin
The existing `functional-tests/retrieval` is nearly what we want. This new test should be able to share a lot of code with the existing retrieval test, with some exceptions:
1. We'll need to create a new address instead of importing a keypair produced by gengen
1. We'll need to hit the faucet for FIL
1. We probably won't need to create a bootstrap miner (as the devnet should already have one running... right?) | 1.0 | run high-level functional test (submitPoSt, commitSector, verification) against nightly devnet each morning - ### Description
This story is a stub.
### Acceptance criteria
We need some test which:
1. creates a miner
1. stores a piece or two with that miner, causing it to seal the piece into a sector
1. retrieves a piece from that miner
1. verifies that the `commitSector` message appeared on chain
1. verifies that the `submitPoSt` message appeared on chain
### Risks + pitfalls
### Protocol Changes
### Where to begin
The existing `functional-tests/retrieval` is nearly what we want. This new test should be able to share a lot of code with the existing retrieval test, with some exceptions:
1. We'll need to create a new address instead of importing a keypair produced by gengen
1. We'll need to hit the faucet for FIL
1. We probably won't need to create a bootstrap miner (as the devnet should already have one running... right?) | infrastructure | run high level functional test submitpost commitsector verification against nightly devnet each morning description this story is a stub acceptance criteria we need some test which creates a miner stores a piece or two with that miner causing it to seal the piece into a sector retrieves a piece from that miner verifies that the commitsector message appeared on chain verifies that the submitpost message appeared on chain risks pitfalls protocol changes where to begin the existing functional tests retrieval is nearly what we want this new test should be able to share a lot of code with the existing retrieval test with some exceptions we ll need to create a new address instead of importing a keypair produced by gengen we ll need to hit the faucet for fil we probably won t need to create a bootstrap miner as the devnet should already have one running right | 1 |
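The acceptance criteria in the row above form a linear flow, so the harness shape is simple. Everything below is hypothetical — `DevnetNode` and its methods stand in for whatever client API the real `functional-tests/retrieval` code exposes; the issue does not define a Python binding:

```python
# Hypothetical stub: records messages the way a devnet chain would, so
# the five acceptance steps can be expressed as one linear test.
class DevnetNode:
    def __init__(self):
        self._chain = []          # messages observed on chain
        self._pieces = {}

    def create_miner(self) -> str:
        self._chain.append("createMiner")
        return "miner-0"

    def store_piece(self, miner: str, data: bytes) -> None:
        self._pieces[miner] = data
        self._chain.append("commitSector")   # sealing commits the sector
        self._chain.append("submitPoSt")     # proving period posts follow

    def retrieve_piece(self, miner: str) -> bytes:
        return self._pieces[miner]

    def messages(self):
        return list(self._chain)

node = DevnetNode()
miner = node.create_miner()                        # step 1: create a miner
node.store_piece(miner, b"deal-data")              # step 2: store a piece
assert node.retrieve_piece(miner) == b"deal-data"  # step 3: retrieve it
assert "commitSector" in node.messages()           # step 4: commitSector on chain
assert "submitPoSt" in node.messages()             # step 5: submitPoSt on chain
print("all five steps pass")
```

Against a real devnet, the three differences the issue calls out (fresh keypair instead of a gengen import, hitting the faucet for FIL, reusing the devnet's bootstrap miner) would replace the stubbed setup.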
64,515 | 18,722,556,129 | IssuesEvent | 2021-11-03 13:23:30 | primefaces/primefaces | https://api.github.com/repos/primefaces/primefaces | closed | Core: primefaces.TOUCHABLE does not globally disable touchable | defect | **Environment:**
- PF Version: _10.0_
- JSF + version: _ALL_
- Affected browsers: _ALL_
**Hierarchy**
If you have a datatable with horizontal scrolling inside a tabview the tabview takes precedence in swipe event. The tabview is just a UI container and should by default not override the content touch events since it limits it use case. Only very simple tabviews with static content can benefit from this behavior.
**Global deactivating has no effect**
I tried deactivating it at global level using "primefaces.TOUCHABLE" web.xml context-param but it does not work. Somehow i need to deactivate it on component level. Maybe not available in 10.0.0?
**Default false?**
Should touchable behavior not be false by default so you can turn it on for mobile specific application and in CRUD applications it breaks backwards compatibility when upgrading to 10.0.0. I must now add touchable="false" to very many components cause we and our clients use the "desktop" version also on mobile and basically every table there is horizontally scrollable.
Reported by @djmj here: https://github.com/primefaces/primefaces/issues/5744#issuecomment-955576984 | 1.0 | Core: primefaces.TOUCHABLE does not globally disable touchable - **Environment:**
- PF Version: _10.0_
- JSF + version: _ALL_
- Affected browsers: _ALL_
**Hierarchy**
If you have a datatable with horizontal scrolling inside a tabview the tabview takes precedence in swipe event. The tabview is just a UI container and should by default not override the content touch events since it limits it use case. Only very simple tabviews with static content can benefit from this behavior.
**Global deactivating has no effect**
I tried deactivating it at global level using "primefaces.TOUCHABLE" web.xml context-param but it does not work. Somehow i need to deactivate it on component level. Maybe not available in 10.0.0?
**Default false?**
Should touchable behavior not be false by default so you can turn it on for mobile specific application and in CRUD applications it breaks backwards compatibility when upgrading to 10.0.0. I must now add touchable="false" to very many components cause we and our clients use the "desktop" version also on mobile and basically every table there is horizontally scrollable.
Reported by @djmj here: https://github.com/primefaces/primefaces/issues/5744#issuecomment-955576984 | non_infrastructure | core primefaces touchable does not globally disable touchable environment pf version jsf version all affected browsers all hierarchy if you have a datatable with horizontal scrolling inside a tabview the tabview takes precedence in swipe event the tabview is just a ui container and should by default not override the content touch events since it limits it use case only very simple tabviews with static content can benefit from this behavior global deactivating has no effect i tried deactivating it at global level using primefaces touchable web xml context param but it does not work somehow i need to deactivate it on component level maybe not available in default false should touchable behavior not be false by default so you can turn it on for mobile specific application and in crud applications it breaks backwards compatibility when upgrading to i must now add touchable false to very many components cause we and our clients use the desktop version also on mobile and basically every table there is horizontally scrollable reported by djmj here | 0 |
10,322 | 8,489,863,185 | IssuesEvent | 2018-10-26 21:28:29 | hashmapinc/WitsmlApi-Server | https://api.github.com/repos/hashmapinc/WitsmlApi-Server | closed | Creation of readthedocs endpoint | Infrastructure | Child of #1
Creation of read the docks endpoint for the witsml api. | 1.0 | Creation of readthedocs endpoint - Child of #1
Creation of read the docks endpoint for the witsml api. | infrastructure | creation of readthedocs endpoint child of creation of read the docks endpoint for the witsml api | 1 |
256,451 | 27,561,675,072 | IssuesEvent | 2023-03-07 22:39:21 | samqws-marketing/electronicarts_ava-capture | https://api.github.com/repos/samqws-marketing/electronicarts_ava-capture | closed | CVE-2019-8331 (Medium) detected in bootstrap-3.3.7.min.js - autoclosed | Mend: dependency security vulnerability | ## CVE-2019-8331 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>bootstrap-3.3.7.min.js</b></p></summary>
<p>The most popular front-end framework for developing responsive, mobile first projects on the web.</p>
<p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/twitter-bootstrap/3.3.7/js/bootstrap.min.js">https://cdnjs.cloudflare.com/ajax/libs/twitter-bootstrap/3.3.7/js/bootstrap.min.js</a></p>
<p>Path to vulnerable library: /website-backend/ava/static/rest_framework/js/bootstrap.min.js</p>
<p>
Dependency Hierarchy:
- :x: **bootstrap-3.3.7.min.js** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/samqws-marketing/electronicarts_ava-capture/commit/a04e5f9a7ee817317d0d58ce800eefc6bf4bd150">a04e5f9a7ee817317d0d58ce800eefc6bf4bd150</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
In Bootstrap before 3.4.1 and 4.3.x before 4.3.1, XSS is possible in the tooltip or popover data-template attribute.
<p>Publish Date: 2019-02-20
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2019-8331>CVE-2019-8331</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.1</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Changed
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Release Date: 2019-02-20</p>
<p>Fix Resolution: bootstrap - 3.4.1,4.3.1;bootstrap-sass - 3.4.1,4.3.1</p>
</p>
</details>
<p></p>
| True | CVE-2019-8331 (Medium) detected in bootstrap-3.3.7.min.js - autoclosed | non_infrastructure | 0 |
14,235 | 10,720,551,350 | IssuesEvent | 2019-10-26 18:30:02 | dotnet/coreclr | https://api.github.com/repos/dotnet/coreclr | closed | Lots of unused compilation flags on newer clang | area-Infrastructure | Hi, When compiling on latest clang, I need to turn off `-Werror` in [configurecompiler.cmake](https://github.com/dotnet/coreclr/blob/9b832e6eee9c1e93e2bfa4422d9c91a0f6da9452/configurecompiler.cmake#L477)
Because CoreCLR is passing the following flags, which newer clangs don't really appreciate:
```bash
clang-9: warning: -Wl,--copy-dt-needed-entries: 'linker' input unused [-Wunused-command-line-argument]
clang-9: warning: -Wl,-z: 'linker' input unused [-Wunused-command-line-argument]
clang-9: warning: -Wl,now: 'linker' input unused [-Wunused-command-line-argument]
clang-9: warning: -Wl,-z: 'linker' input unused [-Wunused-command-line-argument]
clang-9: warning: -Wl,relro: 'linker' input unused [-Wunused-command-line-argument]
clang-9: warning: optimization flag '-ftree-loop-distribute-patterns' is not supported [-Wignored-optimization-argument]
clang-9: warning: optimization flag '-fno-semantic-interposition' is not supported [-Wignored-optimization-argument]
clang-9: warning: optimization flag '-ftree-loop-vectorize' is not supported [-Wignored-optimization-argument]
```
Would it make sense to track this down (e.g. figure out why it's being passed to begin with) and since when it has been ignored by newer clang?
After disabling `-Werror`, the CoreCLR master branch seems to build nicely and seemingly works out of the box with clang-9.
The reason I "insist" on working with clang-9 is that my distro (clearlinux) is very opinionated about not providing older/less safe versions of the compiler.
I'd be happy to track this down and create a PR...
| 1.0 | Lots of unused compilation flags on newer clang | infrastructure | 1 |
14,962 | 3,437,707,240 | IssuesEvent | 2015-12-13 12:47:47 | blackwatchint/blackwatchint | https://api.github.com/repos/blackwatchint/blackwatchint | opened | Landing helicopters partially in water results in drowning | Low Priority Modpack Needs Testing | Landing a helicopter partially in water reportedly causes the occupants to go into an irreversible but non-deadly "downing" state. It also slowly damages the helicopter as if it was fully submerged. The drowning effects on the occupants can only be removed via death. | 1.0 | Landing helicopters partially in water results in drowning | non_infrastructure | 0 |
362,785 | 25,389,633,048 | IssuesEvent | 2022-11-22 02:12:11 | ossf/scorecard | https://api.github.com/repos/ossf/scorecard | closed | Reviewer/maintainer guidance/expectations | documentation no-issue-activity work-in-progress | How can we guide contributors/maintainers in doing code reviews/maintaining the project?
---
> > @laurentsimon -- Another nit to briefly continue the convo from [#1532 (comment)](https://github.com/ossf/scorecard/pull/1532#issuecomment-1022481885):
> > The notes you left in the PR description are much clearer!
> > > docs/checks.md: updated the doc
> > > docs/checks/internal/checks.yaml: updated the source of truth for docs
> >
> >
> > What I was suggesting in the previous PR was to make these not the PR description, but the actual commit messages.
> > That way, when the PR content gets squashed and merged, the details get included as part of the git history.
>
> Gocha. The info also seems useful for the PR reviewer. So need them in both places?
do you have a doc on updating a past commit? I can try adding it
_Originally posted by @laurentsimon in https://github.com/ossf/scorecard/issues/1545#issuecomment-1023522170_ | 1.0 | Reviewer/maintainer guidance/expectations | non_infrastructure | 0 |
31,532 | 25,857,202,191 | IssuesEvent | 2022-12-13 14:33:45 | CDCgov/data-exchange-hl7 | https://api.github.com/repos/CDCgov/data-exchange-hl7 | closed | All functions using Redis to acquire configuration | enhancement infrastructure | SHALL
- Use TF Redis
Functions
- [x] MMG Validator
- [x] MMG Redis Table
- [x] Vocab Redis Table
- [X] Vocab Function @dtx0111
- [X] MMG Validation Function @dtx0111
- [X] load MMGs from MMG AT
- [x] #223 load Legacy MMGs
- [x] Message Transformer > MMG-based model transformer 2022-12-01 @dtx0111 Confirm this is done in the transformer
N/A
- Receiver-Debatcher
- Structure Validator | 1.0 | All functions using Redis to acquire configuration | infrastructure | 1 |
16,003 | 11,795,886,464 | IssuesEvent | 2020-03-18 09:47:52 | reapit/foundations | https://api.github.com/repos/reapit/foundations | opened | Create broker service to faciliate embedding of related journal entry resources | feature infrastructure platform-team | We should introduce a broker to the journal service to allow us to more easily build upon it and provide a place for interaction with other microservices for embed functionality. | 1.0 | Create broker service to faciliate embedding of related journal entry resources | infrastructure | 1 |
55,230 | 14,286,703,617 | IssuesEvent | 2020-11-23 15:28:12 | google/truth | https://api.github.com/repos/google/truth | opened | Deprecate StringSubject.doesNotMatch? | P3 type=defect | As noted in https://github.com/google/truth/pull/789#issuecomment-732213007, it's _very_ frequently misused.
Users who want that behavior would likely be better served by having to opt into it more explicitly with a call like `doesNotContainMatch("(?s)^.*something.*$")`. Unfortunately, even that is likely to lead to misuse: If users omit `(?s)`, then `.` does not match newlines, so any string with a newline will pass a test like `doesNotContainMatch("^.*something.*$")` :( Perhaps our regex methods would ideally have enabled `(?s)` by default. But then that would be different from the `Pattern`-accepting overloads. There is no great solution here. | 1.0 | Deprecate StringSubject.doesNotMatch? | non_infrastructure | 0 |
15,630 | 11,622,028,914 | IssuesEvent | 2020-02-27 05:08:03 | GIScience/openpoiservice | https://api.github.com/repos/GIScience/openpoiservice | opened | Take tests out of main package | infrastructure | Take `tests` out of the main package: can have very undesirable side effects otherwise as the root `__init__.py` is executed when running the tests | 1.0 | Take tests out of main package | infrastructure | 1 |
96,514 | 12,136,141,758 | IssuesEvent | 2020-04-23 13:58:46 | demokratie-live/democracy-client | https://api.github.com/repos/demokratie-live/democracy-client | opened | 🚀 [Feature]📱PushAktivierung nach der Abstimmung vorschlagen | Design Feature 📱 Mobile App | This ticket deals with addressing the UX feedback from Mathias:
a) "I find it somewhat confusing that the notification about Hartz IV is shown every time during the actual vote. I keep thinking I am voting on Hartz IV. Couldn't the info from the selected decision template be integrated here instead?"
b1) Under "Schon Gewusst" ("Did you know"), Mathias did not see the 'Aktivieren' (activate) button behind 'Deine Stimme ist...' ('Your vote is...'); after my hint he says:
b2) with the button toggled and while voting, "[I] don't click activate, because I think that would then end the voting process"
Solution
- Suggest activation *after* the vote (cf. screenshot)
- always include the just-voted $Gesetz (bill) in the push notification
<img width="565" alt="Bildschirmfoto 2020-04-23 um 15 58 24" src="https://user-images.githubusercontent.com/32302889/80107385-4ac33480-857b-11ea-97f7-0c2b00d2a5f9.png">
| 1.0 | 🚀 [Feature]📱PushAktivierung nach der Abstimmung vorschlagen | non_infrastructure | 0 |
154,418 | 12,213,815,738 | IssuesEvent | 2020-05-01 08:12:17 | openethereum/openethereum | https://api.github.com/repos/openethereum/openethereum | opened | Automatic json test files discovery | F4-tests 💻 P7-nicetohave 🐕 | Build the tests that do not require us to manually maintain a list of test file folders, from https://github.com/openethereum/openethereum/issues/11085. | 1.0 | Automatic json test files discovery | non_infrastructure | 0 |
797,010 | 28,135,156,190 | IssuesEvent | 2023-04-01 09:38:33 | space-wizards/space-station-14 | https://api.github.com/repos/space-wizards/space-station-14 | opened | Action icon change seems non-predicted again | Issue: Bug Priority: 2-Before Release Difficulty: 2-Medium | Try toggling combat mode: the popup should be predicted but the icon seems to be server-side. | 1.0 | Action icon change seems non-predicted again | non_infrastructure | 0 |
7,389 | 6,935,553,550 | IssuesEvent | 2017-12-03 10:32:56 | fsr-itse/EvaP | https://api.github.com/repos/fsr-itse/EvaP | opened | Update to Django 2.0 | [C] Core [C] Infrastructure [P] Minor | Django 2.0 was released: https://docs.djangoproject.com/en/dev/releases/2.0/
* [ ] run the tests and see what happens
* [ ] run the tests with warnings enabled (-Wall?) and check for deprecations
* [ ] (maybe re-export test_data so later diffs are cleaner)
* [ ] https://docs.djangoproject.com/en/dev/releases/2.0/#simplified-url-routing-syntax
* [ ] https://docs.djangoproject.com/en/dev/releases/2.0/#abstractuser-last-name-max-length-increased-to-150
* [ ] Subclasses of AbstractBaseUser are no longer required to implement get_short_name() and get_full_name()
And: "A model instance’s primary key now appears in the default Model.__str__() method, e.g. Question object (1)" :) | 1.0 | Update to Django 2.0 - Django 2.0 was released: https://docs.djangoproject.com/en/dev/releases/2.0/
* [ ] run the tests and see what happens
* [ ] run the tests with warnings enabled (-Wall?) and check for deprecations
* [ ] (maybe re-export test_data so later diffs are cleaner)
* [ ] https://docs.djangoproject.com/en/dev/releases/2.0/#simplified-url-routing-syntax
* [ ] https://docs.djangoproject.com/en/dev/releases/2.0/#abstractuser-last-name-max-length-increased-to-150
* [ ] Subclasses of AbstractBaseUser are no longer required to implement get_short_name() and get_full_name()
And: "A model instance’s primary key now appears in the default Model.__str__() method, e.g. Question object (1)" :) | infrastructure | update to django django was released run the tests and see what happens run the tests with warnings enabled wall and check for deprecations maybe re export test data so later diffs are cleaner subclasses of abstractbaseuser are no longer required to implement get short name and get full name and a model instance’s primary key now appears in the default model str method e g question object | 1 |
13,040 | 10,083,675,081 | IssuesEvent | 2019-07-25 14:09:13 | InsightSoftwareConsortium/ITK | https://api.github.com/repos/InsightSoftwareConsortium/ITK | closed | Dashboard configure errors with Python 2.7 | type:Infrastructure | #938 # Description
https://open.cdash.org/index.php?project=Insight&filtercount=1&showfilters=0&field1=revision&compare1=63&value1=a4bbdfd&showfeed=0

``` none
-- Found PythonLibs: /usr/lib/libpython2.7.dylib (found version "2.7.10")
CMake Warning at Wrapping/Generators/Python/CMakeLists.txt:8 (message):
Python versions less than 3.5 are not supported. Python version: "2.7.16".
CMake Warning at Wrapping/Generators/Python/CMakeLists.txt:14 (message):
Python executable ("2.7.16") and library ("2.7.10") version mismatch.
```
### Impact analysis
CDash reports are failing for python builds.
### Expected behavior
ITK master branch should configure cleanly in the CI environments with python 3.5 or greater.
### Actual behavior
Python 2.7 is found and used.
### Versions
ITK master branch
### Environment
CI Python build environments. | 1.0 | Dashboard configure errors with Python 2.7 | infrastructure | 1 |
32,701 | 26,922,027,290 | IssuesEvent | 2023-02-07 11:08:30 | openforis/fra-platform | https://api.github.com/repos/openforis/fra-platform | closed | Previous text in HTML may not be recognizable | infrastructure | @minotogna
Maybe we should let the previous text written in html be recognizable since it is not readable.

| 1.0 | Previous text in HTML may not be recognizable | infrastructure | 1 |
919 | 3,001,840,302 | IssuesEvent | 2015-07-24 14:01:59 | codhab/plataform | https://api.github.com/repos/codhab/plataform | closed | deploy via capistrano | infrastructure | #### description
- [ ] deploy via capistrano
- [ ] deploy staging
- [ ] deploy production
| 1.0 | deploy via capistrano | infrastructure | 1 |
6,231 | 6,261,946,878 | IssuesEvent | 2017-07-15 05:09:19 | dotnet/coreclr | https://api.github.com/repos/dotnet/coreclr | closed | ResourceManager should restore logic for when to use satellite resource lookup under AppX | area-Infrastructure test-run-uwp-coreclr | We should change ResourceManager.cs to do https://github.com/dotnet/coreclr/pull/12117/files#diff-eb71ec85c8bb94ecf707440b98c59716R894 once we fix our uap test runner to reprocess the test resources into PRI so that ResourceManager string loading works correctly inside APX.
cc: @jkotas | 1.0 | ResourceManager should restore logic for when to use satellite resource lookup under AppX | infrastructure | 1 |