Unnamed: 0 int64 0 832k | id float64 2.49B 32.1B | type stringclasses 1 value | created_at stringlengths 19 19 | repo stringlengths 7 112 | repo_url stringlengths 36 141 | action stringclasses 3 values | title stringlengths 2 665 | labels stringlengths 4 554 | body stringlengths 3 235k | index stringclasses 6 values | text_combine stringlengths 96 235k | label stringclasses 2 values | text stringlengths 96 196k | binary_label int64 0 1 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
66,244 | 3,251,412,545 | IssuesEvent | 2015-10-19 09:39:55 | kubernetes/kubernetes | https://api.github.com/repos/kubernetes/kubernetes | opened | ReplicationController failed to notice Pod Eviction | kind/bug priority/P0 team/CSI team/node | TL;DR: I don't have a reliable repro yet, it might be a one-off flake. If it's not it can be serious, as we can permanently kill a Pod.
I was running 250 Node tests (denisty 30') looking at heapster resource consumption. The first run was close to the limit (3G), so the second one was likely to end in OOM. To nobody's surprise it did. Because it was eating by far most memory it was (most likely) killed by the sys oom killer, which, for some reason ended up in:
```
Mon, 19 Oct 2015 11:11:11 +0200 Mon, 19 Oct 2015 11:11:11 +0200 1 heapster-v10-j8a4y Pod NodeControllerEviction {controllermanager } Marking for deletion Pod heapster-v10-j8a4y from Node e2e-test-gmarek-minion-9ei
```
(Node itself was healthy at the moment)
Small problem is that the Pod is not running anywhere now:
```
kubectl --namespace=kube-system get pod | grep heapster
```
returns an empty result.
The real question is why this is the case:
```
gmarek@breakwater:~/go/src/k8s.io/kubernetes$ kubectl --namespace=kube-system describe rc heapster-v10
Name: heapster-v10
Namespace: kube-system
Image(s): gcr.io/google_containers/heapster:v0.18.2
Selector: k8s-app=heapster,version=v10
Labels: k8s-app=heapster,kubernetes.io/cluster-service=true,version=v10
Replicas: 1 current / 1 desired
Pods Status: 0 Running / 0 Waiting / 0 Succeeded / 0 Failed
No events.
```
cc @wojtek-t @fgrzadkowski @davidopp @lavalamp @dchen1107 | 1.0 | ReplicationController failed to notice Pod Eviction - TL;DR: I don't have a reliable repro yet, it might be a one-off flake. If it's not it can be serious, as we can permanently kill a Pod.
I was running 250 Node tests (denisty 30') looking at heapster resource consumption. The first run was close to the limit (3G), so the second one was likely to end in OOM. To nobody's surprise it did. Because it was eating by far most memory it was (most likely) killed by the sys oom killer, which, for some reason ended up in:
```
Mon, 19 Oct 2015 11:11:11 +0200 Mon, 19 Oct 2015 11:11:11 +0200 1 heapster-v10-j8a4y Pod NodeControllerEviction {controllermanager } Marking for deletion Pod heapster-v10-j8a4y from Node e2e-test-gmarek-minion-9ei
```
(Node itself was healthy at the moment)
Small problem is that the Pod is not running anywhere now:
```
kubectl --namespace=kube-system get pod | grep heapster
```
returns an empty result.
The real question is why this is the case:
```
gmarek@breakwater:~/go/src/k8s.io/kubernetes$ kubectl --namespace=kube-system describe rc heapster-v10
Name: heapster-v10
Namespace: kube-system
Image(s): gcr.io/google_containers/heapster:v0.18.2
Selector: k8s-app=heapster,version=v10
Labels: k8s-app=heapster,kubernetes.io/cluster-service=true,version=v10
Replicas: 1 current / 1 desired
Pods Status: 0 Running / 0 Waiting / 0 Succeeded / 0 Failed
No events.
```
cc @wojtek-t @fgrzadkowski @davidopp @lavalamp @dchen1107 | non_infrastructure | replicationcontroller failed to notice pod eviction tl dr i don t have a reliable repro yet it might be a one off flake if it s not it can be serious as we can permanently kill a pod i was running node tests denisty looking at heapster resource consumption the first run was close to the limit so the second one was likely to end in oom to nobody s surprise it did because it was eating by far most memory it was most likely killed by the sys oom killer which for some reason ended up in mon oct mon oct heapster pod nodecontrollereviction controllermanager marking for deletion pod heapster from node test gmarek minion node itself was healthy at the moment small problem is that the pod is not running anywhere now kubectl namespace kube system get pod grep heapster returns an empty result the real question is why this is the case gmarek breakwater go src io kubernetes kubectl namespace kube system describe rc heapster name heapster namespace kube system image s gcr io google containers heapster selector app heapster version labels app heapster kubernetes io cluster service true version replicas current desired pods status running waiting succeeded failed no events cc wojtek t fgrzadkowski davidopp lavalamp | 0 |
220,992 | 7,372,742,906 | IssuesEvent | 2018-03-13 15:28:45 | idaholab/raven | https://api.github.com/repos/idaholab/raven | opened | RAVEN interface and ExternalXML | improvement priority_normal | --------
Issue Description
--------
##### What did you expect to see happen?
When using the RAVEN interface:
If the "inner" RAVEN uses ExteranlXML, the outer run should not be affected.
##### What did you see instead?
If, for example, the OutStreams are in the main "inner" file and the DataObjects are in an ExternalXML file, the RAVENparser will crash with (for example):
```
IOError: RAVEN_PARSER ERROR: The OutStream of type "Print" named "dumpOPT" is linked to not existing DataObject!
```
##### Do you have a suggested fix for the development team?
Expand the necessary ExternalXML nodes before parsing the XML.
----------------
For Change Control Board: Issue Review
----------------
This review should occur before any development is performed as a response to this issue.
- [x] 1. Is it tagged with a type: defect or improvement?
- [x] 2. Is it tagged with a priority: critical, normal or minor?
- [x] 3. If it will impact requirements or requirements tests, is it tagged with requirements?
- [x] 4. If it is a defect, can it cause wrong results for users? If so an email needs to be sent to the users.
- [x] 5. Is a rationale provided? (Such as explaining why the improvement is needed or why current code is wrong.)
-------
For Change Control Board: Issue Closure
-------
This review should occur when the issue is imminently going to be closed.
- [ ] 1. If the issue is a defect, is the defect fixed?
- [ ] 2. If the issue is a defect, is the defect tested for in the regression test system? (If not explain why not.)
- [ ] 3. If the issue can impact users, has an email to the users group been written (the email should specify if the defect impacts stable or master)?
- [ ] 4. If the issue is a defect, does it impact the latest stable branch? If yes, is there any issue tagged with stable (create if needed)?
- [ ] 5. If the issue is being closed without a merge request, has an explanation of why it is being closed been provided?
| 1.0 | RAVEN interface and ExternalXML - --------
Issue Description
--------
##### What did you expect to see happen?
When using the RAVEN interface:
If the "inner" RAVEN uses ExteranlXML, the outer run should not be affected.
##### What did you see instead?
If, for example, the OutStreams are in the main "inner" file and the DataObjects are in an ExternalXML file, the RAVENparser will crash with (for example):
```
IOError: RAVEN_PARSER ERROR: The OutStream of type "Print" named "dumpOPT" is linked to not existing DataObject!
```
##### Do you have a suggested fix for the development team?
Expand the necessary ExternalXML nodes before parsing the XML.
----------------
For Change Control Board: Issue Review
----------------
This review should occur before any development is performed as a response to this issue.
- [x] 1. Is it tagged with a type: defect or improvement?
- [x] 2. Is it tagged with a priority: critical, normal or minor?
- [x] 3. If it will impact requirements or requirements tests, is it tagged with requirements?
- [x] 4. If it is a defect, can it cause wrong results for users? If so an email needs to be sent to the users.
- [x] 5. Is a rationale provided? (Such as explaining why the improvement is needed or why current code is wrong.)
-------
For Change Control Board: Issue Closure
-------
This review should occur when the issue is imminently going to be closed.
- [ ] 1. If the issue is a defect, is the defect fixed?
- [ ] 2. If the issue is a defect, is the defect tested for in the regression test system? (If not explain why not.)
- [ ] 3. If the issue can impact users, has an email to the users group been written (the email should specify if the defect impacts stable or master)?
- [ ] 4. If the issue is a defect, does it impact the latest stable branch? If yes, is there any issue tagged with stable (create if needed)?
- [ ] 5. If the issue is being closed without a merge request, has an explanation of why it is being closed been provided?
| non_infrastructure | raven interface and externalxml issue description what did you expect to see happen when using the raven interface if the inner raven uses exteranlxml the outer run should not be affected what did you see instead if for example the outstreams are in the main inner file and the dataobjects are in an externalxml file the ravenparser will crash with for example ioerror raven parser error the outstream of type print named dumpopt is linked to not existing dataobject do you have a suggested fix for the development team expand the necessary externalxml nodes before parsing the xml for change control board issue review this review should occur before any development is performed as a response to this issue is it tagged with a type defect or improvement is it tagged with a priority critical normal or minor if it will impact requirements or requirements tests is it tagged with requirements if it is a defect can it cause wrong results for users if so an email needs to be sent to the users is a rationale provided such as explaining why the improvement is needed or why current code is wrong for change control board issue closure this review should occur when the issue is imminently going to be closed if the issue is a defect is the defect fixed if the issue is a defect is the defect tested for in the regression test system if not explain why not if the issue can impact users has an email to the users group been written the email should specify if the defect impacts stable or master if the issue is a defect does it impact the latest stable branch if yes is there any issue tagged with stable create if needed if the issue is being closed without a merge request has an explanation of why it is being closed been provided | 0 |
249,084 | 7,953,757,763 | IssuesEvent | 2018-07-12 03:37:42 | StrangeLoopGames/EcoIssues | https://api.github.com/repos/StrangeLoopGames/EcoIssues | closed | Dedicated steam eco server not showing in list | Medium Priority | I have been searching to fix the same problem firewall is open on eco and eco server , port are forwarded correctly and I am running the server as an administrator . In steam server list can't even see my server as active with internal ip as a localhost or with my web address but I can access my server in-game but my friends cannot . tried to link my steam account on eco server but the link is not working for now.
please help fix this ! | 1.0 | Dedicated steam eco server not showing in list - I have been searching to fix the same problem firewall is open on eco and eco server , port are forwarded correctly and I am running the server as an administrator . In steam server list can't even see my server as active with internal ip as a localhost or with my web address but I can access my server in-game but my friends cannot . tried to link my steam account on eco server but the link is not working for now.
please help fix this ! | non_infrastructure | dedicated steam eco server not showing in list i have been searching to fix the same problem firewall is open on eco and eco server port are forwarded correctly and i am running the server as an administrator in steam server list can t even see my server as active with internal ip as a localhost or with my web address but i can access my server in game but my friends cannot tried to link my steam account on eco server but the link is not working for now please help fix this | 0 |
172,098 | 21,031,333,808 | IssuesEvent | 2022-03-31 01:21:26 | srivatsamarichi/spring-petclinic | https://api.github.com/repos/srivatsamarichi/spring-petclinic | opened | CVE-2022-22950 (Medium) detected in spring-expression-5.3.6.jar | security vulnerability | ## CVE-2022-22950 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>spring-expression-5.3.6.jar</b></p></summary>
<p>Spring Expression Language (SpEL)</p>
<p>Library home page: <a href="https://github.com/spring-projects/spring-framework">https://github.com/spring-projects/spring-framework</a></p>
<p>Path to dependency file: /pom.xml</p>
<p>Path to vulnerable library: /home/wss-scanner/.m2/repository/org/springframework/spring-expression/5.3.6/spring-expression-5.3.6.jar</p>
<p>
Dependency Hierarchy:
- spring-boot-starter-web-2.4.5.jar (Root Library)
- spring-webmvc-5.3.6.jar
- :x: **spring-expression-5.3.6.jar** (Vulnerable Library)
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
In Spring Framework versions 5.3.0 - 5.3.16 and older unsupported versions, it is possible for a user to provide a specially crafted SpEL expression that may cause a denial of service condition
<p>Publish Date: 2022-01-11
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-22950>CVE-2022-22950</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.4</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: Low
- Availability Impact: Low
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://tanzu.vmware.com/security/cve-2022-22950">https://tanzu.vmware.com/security/cve-2022-22950</a></p>
<p>Release Date: 2022-01-11</p>
<p>Fix Resolution: org.springframework:spring-expression:5.3.17</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | CVE-2022-22950 (Medium) detected in spring-expression-5.3.6.jar - ## CVE-2022-22950 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>spring-expression-5.3.6.jar</b></p></summary>
<p>Spring Expression Language (SpEL)</p>
<p>Library home page: <a href="https://github.com/spring-projects/spring-framework">https://github.com/spring-projects/spring-framework</a></p>
<p>Path to dependency file: /pom.xml</p>
<p>Path to vulnerable library: /home/wss-scanner/.m2/repository/org/springframework/spring-expression/5.3.6/spring-expression-5.3.6.jar</p>
<p>
Dependency Hierarchy:
- spring-boot-starter-web-2.4.5.jar (Root Library)
- spring-webmvc-5.3.6.jar
- :x: **spring-expression-5.3.6.jar** (Vulnerable Library)
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
In Spring Framework versions 5.3.0 - 5.3.16 and older unsupported versions, it is possible for a user to provide a specially crafted SpEL expression that may cause a denial of service condition
<p>Publish Date: 2022-01-11
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-22950>CVE-2022-22950</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.4</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: Low
- Availability Impact: Low
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://tanzu.vmware.com/security/cve-2022-22950">https://tanzu.vmware.com/security/cve-2022-22950</a></p>
<p>Release Date: 2022-01-11</p>
<p>Fix Resolution: org.springframework:spring-expression:5.3.17</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | non_infrastructure | cve medium detected in spring expression jar cve medium severity vulnerability vulnerable library spring expression jar spring expression language spel library home page a href path to dependency file pom xml path to vulnerable library home wss scanner repository org springframework spring expression spring expression jar dependency hierarchy spring boot starter web jar root library spring webmvc jar x spring expression jar vulnerable library found in base branch master vulnerability details in spring framework versions and older unsupported versions it is possible for a user to provide a specially crafted spel expression that may cause a denial of service condition publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction required scope unchanged impact metrics confidentiality impact none integrity impact low availability impact low for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution org springframework spring expression step up your open source security game with whitesource | 0 |
795,103 | 28,061,574,959 | IssuesEvent | 2023-03-29 12:58:37 | unlock-protocol/unlock | https://api.github.com/repos/unlock-protocol/unlock | closed | Walletless "registration" for events | π¨ High Priority Events | On free events where we subsidize gas, we should be able to not require users to use their wallets when they checkout by just using the walletless airdrops.
Note: we need to make sure the event does **not** use hooks either.
| 1.0 | Walletless "registration" for events - On free events where we subsidize gas, we should be able to not require users to use their wallets when they checkout by just using the walletless airdrops.
Note: we need to make sure the event does **not** use hooks either.
| non_infrastructure | walletless registration for events on free events where we subsidize gas we should be able to not require users to use their wallets when they checkout by just using the walletless airdrops note we need to make sure the event does not use hooks either | 0 |
194,503 | 15,434,177,507 | IssuesEvent | 2021-03-07 01:46:48 | wds9601/film-club | https://api.github.com/repos/wds9601/film-club | opened | Some films don't have release dates π | bug documentation | Has caused an error server-side: https://github.com/gstro/film-club-server/issues/38
This ticket captures work to be done to update the API spec, and also as a reminder to check for `null` in the client code when using `releaseDate`. | 1.0 | Some films don't have release dates π - Has caused an error server-side: https://github.com/gstro/film-club-server/issues/38
This ticket captures work to be done to update the API spec, and also as a reminder to check for `null` in the client code when using `releaseDate`. | non_infrastructure | some films don t have release dates π has caused an error server side this ticket captures work to be done to update the api spec and also as a reminder to check for null in the client code when using releasedate | 0 |
29,773 | 24,259,591,287 | IssuesEvent | 2022-09-27 21:06:29 | JupiterBroadcasting/jupiterbroadcasting.com | https://api.github.com/repos/JupiterBroadcasting/jupiterbroadcasting.com | opened | Testing `develop` branch workflow | dev infrastructure | Testing part of the workflow for our (soon) eventual move to a [Gitflow](https://www.atlassian.com/git/tutorials/comparing-workflows/gitflow-workflow) workflow to protect main and offer - in the future - live full-production testing of the develop branch.
We'll see if I understand all of this well ; )
Steps for this test:
* [x] do a PR #431 to merge `main` -> `develop` to bring `develop` in sync w `main`
* [ ] have someone merge said PR (thanks @elreydetoda !)
* [ ] apply PR #400 to `develop`
* [ ] see how the E2E Tests do w that one (for fun!). Not expecting anything PR #400 specific other than a green light..
* [ ] If satisfied, do a PR from `develop` to `main`
| 1.0 | Testing `develop` branch workflow - Testing part of the workflow for our (soon) eventual move to a [Gitflow](https://www.atlassian.com/git/tutorials/comparing-workflows/gitflow-workflow) workflow to protect main and offer - in the future - live full-production testing of the develop branch.
We'll see if I understand all of this well ; )
Steps for this test:
* [x] do a PR #431 to merge `main` -> `develop` to bring `develop` in sync w `main`
* [ ] have someone merge said PR (thanks @elreydetoda !)
* [ ] apply PR #400 to `develop`
* [ ] see how the E2E Tests do w that one (for fun!). Not expecting anything PR #400 specific other than a green light..
* [ ] If satisfied, do a PR from `develop` to `main`
| infrastructure | testing develop branch workflow testing part of the workflow for our soon eventual move to a workflow to protect main and offer in the future live full production testing of the develop branch we ll see if i understand all of this well steps for this test do a pr to merge main develop to bring develop in sync w main have someone merge said pr thanks elreydetoda apply pr to develop see how the tests do w that one for fun not expecting anything pr specific other than a green light if satisfied do a pr from develop to main | 1 |
53,162 | 22,637,806,746 | IssuesEvent | 2022-06-30 20:57:50 | microsoft/BotFramework-Composer | https://api.github.com/repos/microsoft/BotFramework-Composer | closed | QnAmaker multi-turn prompts don't show on Teams, Composer prompts do | customer-reported Bot Services | I am making a chatbot using Composer that calls a QnAMaker knowledge base that contains multi-turn prompts. These prompts work fine when tested in Composer, Web Chat, and QnAMaker itself:

When testing the bot in Teams, however, the prompts don't appear:

I have tried using the Teams channel in the bot resource, as well as connecting it to a Teams app through the App Studio, and neither of them produce the prompts. I can't find anything specific to multi-turn integration with Teams in the documentation so am unsure why this would be happening?
Prompts created within Composer do appear in Teams:

So it is only QnAMaker multi-turn prompts that are causing this problem. However, there are no settings in QnAmaker or Teams that relate to this, so it must be something that needs changing in Composer to get it to work. Any help would be appreciated. | 1.0 | QnAmaker multi-turn prompts don't show on Teams, Composer prompts do - I am making a chatbot using Composer that calls a QnAMaker knowledge base that contains multi-turn prompts. These prompts work fine when tested in Composer, Web Chat, and QnAMaker itself:

When testing the bot in Teams, however, the prompts don't appear:

I have tried using the Teams channel in the bot resource, as well as connecting it to a Teams app through the App Studio, and neither of them produce the prompts. I can't find anything specific to multi-turn integration with Teams in the documentation so am unsure why this would be happening?
Prompts created within Composer do appear in Teams:

So it is only QnAMaker multi-turn prompts that are causing this problem. However, there are no settings in QnAmaker or Teams that relate to this, so it must be something that needs changing in Composer to get it to work. Any help would be appreciated. | non_infrastructure | qnamaker multi turn prompts don t show on teams composer prompts do i am making a chatbot using composer that calls a qnamaker knowledge base that contains multi turn prompts these prompts work fine when tested in composer web chat and qnamaker itself when testing the bot in teams however the prompts don t appear i have tried using the teams channel in the bot resource as well as connecting it to a teams app through the app studio and neither of them produce the prompts i can t find anything specific to multi turn integration with teams in the documentation so am unsure why this would be happening prompts created within composer do appear in teams so it is only qnamaker multi turn prompts that are causing this problem however there are no settings in qnamaker or teams that relate to this so it must be something that needs changing in composer to get it to work any help would be appreciated | 0 |
20,002 | 13,624,179,737 | IssuesEvent | 2020-09-24 07:39:29 | globaldothealth/list | https://api.github.com/repos/globaldothealth/list | closed | Fix 60-second request timeout / socket hang up in dev/prod | Infrastructure P1 Launch blocker | **Describe the bug**
Requests in dev/prod time out after 60 seconds.
**To Reproduce**
Send an API request, say to batch upsert, with a large amount of data (>10k rows).
**Expected behavior**
Requests, at least batch upsert, should allow more time to complete.
**Environment (please complete the following information):**
Only occurs in dev/prod -- can't repro locally. I've seen the issue both for bulk upload and ADI, so the issue isn't specific to the UI/browser.
I've previously tweaked our nginx config to avoid 504s, but in these cases, the API client gets `500: socket hang up`. | 1.0 | Fix 60-second request timeout / socket hang up in dev/prod - **Describe the bug**
Requests in dev/prod time out after 60 seconds.
**To Reproduce**
Send an API request, say to batch upsert, with a large amount of data (>10k rows).
**Expected behavior**
Requests, at least batch upsert, should allow more time to complete.
**Environment (please complete the following information):**
Only occurs in dev/prod -- can't repro locally. I've seen the issue both for bulk upload and ADI, so the issue isn't specific to the UI/browser.
I've previously tweaked our nginx config to avoid 504s, but in these cases, the API client gets `500: socket hang up`. | infrastructure | fix second request timeout socket hang up in dev prod describe the bug requests in dev prod time out after seconds to reproduce send an api request say to batch upsert with a large amount of data rows expected behavior requests at least batch upsert should allow more time to complete environment please complete the following information only occurs in dev prod can t repro locally i ve seen the issue both for bulk upload and adi so the issue isn t specific to the ui browser i ve previously tweaked our nginx config to avoid but in these cases the api client gets socket hang up | 1 |
22,562 | 15,279,611,689 | IssuesEvent | 2021-02-23 04:25:50 | LLNL/maestrowf | https://api.github.com/repos/LLNL/maestrowf | opened | Tests for DataStructures package | Infrastructure | Create tests that increase coverage for the maestrowf/datastructures package modules. | 1.0 | Tests for DataStructures package - Create tests that increase coverage for the maestrowf/datastructures package modules. | infrastructure | tests for datastructures package create tests that increase coverage for the maestrowf datastructures package modules | 1 |
28,977 | 23,645,587,925 | IssuesEvent | 2022-08-25 21:43:44 | meltano/squared | https://api.github.com/repos/meltano/squared | closed | Rerunning Paritially Completed CI Fails at dbt | data/Infrastructure data/Product Dogfooding | In this CI run https://github.com/meltano/squared/actions/runs/2784683224 github had some API errors but following a re-run it succeeded, probably a throttling thing. The problem is that the re-run caused the CI_BRANCH variable to update to the next increment so half of the EL sources werent available for the transform tests to pass.
The original purpose of adding a run ID and run attempt `CI_BRANCH: 'b${{ github.RUN_ID }}_${{ github.RUN_ATTEMPT }}'` as part of the branch unique id was to make different changes on the same branch test in isolation. For example if I push up a branch with a bug, it failed, then I push another change I'd want that second change to run in isolation from scratch otherwise the "fix" changes could cause a separate uncaught error.
I dont think our unique id is doing what we want. What we really want is to use the branch name plus the commit hash of the most recent commit, so after new commits are added to the branch our CI environment is reset but retries using the same commits should re-use whats already created. | 1.0 | Rerunning Paritially Completed CI Fails at dbt - In this CI run https://github.com/meltano/squared/actions/runs/2784683224 github had some API errors but following a re-run it succeeded, probably a throttling thing. The problem is that the re-run caused the CI_BRANCH variable to update to the next increment so half of the EL sources werent available for the transform tests to pass.
The original purpose of adding a run ID and run attempt `CI_BRANCH: 'b${{ github.RUN_ID }}_${{ github.RUN_ATTEMPT }}'` as part of the branch unique id was to make different changes on the same branch test in isolation. For example if I push up a branch with a bug, it failed, then I push another change I'd want that second change to run in isolation from scratch otherwise the "fix" changes could cause a separate uncaught error.
I dont think our unique id is doing what we want. What we really want is to use the branch name plus the commit hash of the most recent commit, so after new commits are added to the branch our CI environment is reset but retries using the same commits should re-use whats already created. | infrastructure | rerunning paritially completed ci fails at dbt in this ci run github had some api errors but following a re run it succeeded probably a throttling thing the problem is that the re run caused the ci branch variable to update to the next increment so half of the el sources werent available for the transform tests to pass the original purpose of adding a run id and run attempt ci branch b github run id github run attempt as part of the branch unique id was to make different changes on the same branch test in isolation for example if i push up a branch with a bug it failed then i push another change i d want that second change to run in isolation from scratch otherwise the fix changes could cause a separate uncaught error i dont think our unique id is doing what we want what we really want is to use the branch name plus the commit hash of the most recent commit so after new commits are added to the branch our ci environment is reset but retries using the same commits should re use whats already created | 1 |
34,933 | 30,595,915,513 | IssuesEvent | 2023-07-21 22:04:28 | OpenXRay/xray-16 | https://api.github.com/repos/OpenXRay/xray-16 | opened | Actions improvements | Enhancement Help wanted Portability Infrastructure Player Experience Developer Experience Linux good first issue macOS | TODO:
- [ ] Output binaries for macOS builds
- [ ] Provide AppImage package | 1.0 | Actions improvements - TODO:
- [ ] Output binaries for macOS builds
- [ ] Provide AppImage package | infrastructure | actions improvements todo output binaries for macos builds provide appimage package | 1 |
29,161 | 23,764,483,296 | IssuesEvent | 2022-09-01 11:41:05 | wellcomecollection/platform | https://api.github.com/repos/wellcomecollection/platform | closed | Point libsys external DNS to III hosted Sierra | π§ Infrastructure πCatalogue | ### Background
Sierra is being migrated from on-premises to hosting from III. This is happening on Tuesday 30th August. I have confirmed that there will be no IP whitelisting for the REST API and so the only change we need to make is to point the external DNS for libsys.wellcomelibrary.org to the new hosted server from III.
### Details
- This should not be done until we have had the notification from LSS that the migration has started.
- External DNS for libsys.wellcomelibrary.org should CNAME to welli.iii.com
- Once Sierra is back up, we should check the adapter is working. This will likely be the next working day due to time zone differences. | 1.0 | Point libsys external DNS to III hosted Sierra - ### Background
Sierra is being migrated from on-premises to hosting from III. This is happening on Tuesday 30th August. I have confirmed that there will be no IP whitelisting for the REST API and so the only change we need to make is to point the external DNS for libsys.wellcomelibrary.org to the new hosted server from III.
### Details
- This should not be done until we have had the notification from LSS that the migration has started.
- External DNS for libsys.wellcomelibrary.org should CNAME to welli.iii.com
- Once Sierra is back up, we should check the adapter is working. This will likely be the next working day due to time zone differences. | infrastructure | point libsys external dns to iii hosted sierra background sierra is being migrated from on premises to hosting from iii this is happening on tuesday august i have confirmed that there will be no ip whitelisting for the rest api and so the only change we need to make is to point the external dns for libsys wellcomelibrary org to the new hosted server from iii details this should not be done until we have had the notification from lss that the migration has started external dns for libsys wellcomelibrary org should cname to welli iii com once sierra is back up we should check the adapter is working this will likely be the next working day due to time zone differences | 1 |
19,979 | 11,354,113,964 | IssuesEvent | 2020-01-24 16:53:29 | terraform-providers/terraform-provider-aws | https://api.github.com/repos/terraform-providers/terraform-provider-aws | closed | Recently enabled EFS service in AWS China regions is using incorrect domain name. | partition/aws-cn service/efs | <!---
Please note the following potential times when an issue might be in Terraform core:
* [Configuration Language](https://www.terraform.io/docs/configuration/index.html) or resource ordering issues
* [State](https://www.terraform.io/docs/state/index.html) and [State Backend](https://www.terraform.io/docs/backends/index.html) issues
* [Provisioner](https://www.terraform.io/docs/provisioners/index.html) issues
* [Registry](https://registry.terraform.io/) issues
* Spans resources across multiple providers
If you are running into one of these scenarios, we recommend opening an issue in the [Terraform core repository](https://github.com/hashicorp/terraform/) instead.
--->
<!--- Please keep this note for the community --->
### Community Note
* Please vote on this issue by adding a π [reaction](https://blog.github.com/2016-03-10-add-reactions-to-pull-requests-issues-and-comments/) to the original issue to help the community and maintainers prioritize this request
* Please do not leave "+1" or other comments that do not add relevant new information or questions, they generate extra noise for issue followers and do not help prioritize the request
* If you are interested in working on this issue or have submitted a pull request, please leave a comment
<!--- Thank you for keeping this note for the community --->
### Terraform Version
<!--- Please run `terraform -v` to show the Terraform core version and provider version(s). If you are not running the latest version of Terraform or the provider, please upgrade because your issue may have already been fixed. [Terraform documentation on provider versioning](https://www.terraform.io/docs/configuration/providers.html#provider-versions). --->
```
Terraform v0.12.19
+ provider.aws v2.45.0
+ provider.template v2.1.2
```
### Affected Resource(s)
<!--- Please list the affected resources and data sources. --->
* aws_efs_file_system
### Terraform Configuration Files
<!--- Information about code formatting: https://help.github.com/articles/basic-writing-and-formatting-syntax/#quoting-code --->
```hcl
resource "aws_efs_file_system" "efs" {
performance_mode = "generalPurpose"
encrypted = true
}
```
### Debug Output
<!---
Please provide a link to a GitHub Gist containing the complete debug output. Please do NOT paste the debug output in the issue; just paste a link to the Gist.
To obtain the debug output, see the [Terraform documentation on debugging](https://www.terraform.io/docs/internals/debugging.html).
--->
### Panic Output
<!--- If Terraform produced a panic, please provide a link to a GitHub Gist containing the output of the `crash.log`. --->
### Expected Behavior
<!--- What should have happened? --->
```hcl
$ terraform state show module.dbsdevlsp.aws_efs_file_system.efs\[0\]
# module.dbsdevlsp.aws_efs_file_system.efs[0]:
resource "aws_efs_file_system" "efs" {
arn = "arn:aws-cn:elasticfilesystem:cn-north-1:xxxxxxxxxx:file-system/fs-abcdefgh"
creation_token = "terraform-20200124105304526000000001"
dns_name = "fs-abcdefgh.efs.cn-north-1.amazonaws.com.cn"
encrypted = true
id = "fs-abcdefgh"
kms_key_id = "arn:aws-cn:kms:cn-north-1:xxxxxxxxxx:key/abcdefgh-1234-5678-90ab-cdefghijklmn"
performance_mode = "generalPurpose"
provisioned_throughput_in_mibps = 0
throughput_mode = "bursting"
}
```
### Actual Behavior
<!--- What actually happened? --->
```hcl
$ terraform state show module.dbsdevlsp.aws_efs_file_system.efs\[0\]
# module.dbsdevlsp.aws_efs_file_system.efs[0]:
resource "aws_efs_file_system" "efs" {
arn = "arn:aws-cn:elasticfilesystem:cn-north-1:xxxxxxxxxx:file-system/fs-abcdefgh"
creation_token = "terraform-20200124105304526000000001"
dns_name = "fs-abcdefgh.efs.cn-north-1.amazonaws.com"
encrypted = true
id = "fs-abcdefgh"
kms_key_id = "arn:aws-cn:kms:cn-north-1:xxxxxxxxxx:key/abcdefgh-1234-5678-90ab-cdefghijklmn"
performance_mode = "generalPurpose"
provisioned_throughput_in_mibps = 0
throughput_mode = "bursting"
}
```
### Steps to Reproduce
<!--- Please list the steps required to reproduce the issue. --->
1. `terraform apply`
### Important Factoids
<!--- Are there anything atypical about your accounts that we should know? For example: Running in EC2 Classic? --->
This is in AWS China region cn-north-1.
### References
<!---
Information about referencing Github Issues: https://help.github.com/articles/basic-writing-and-formatting-syntax/#referencing-issues-and-pull-requests
Are there any other GitHub issues (open or closed) or pull requests that should be linked here? Vendor documentation? For example:
--->
* It seems to me that aws/resource_aws_efs_file_system.go line 396: `func resourceAwsEfsDnsName(fileSystemId, region string) string` uses improper hardcoding for the domain name ("amazonaws.com") while the actual domain name used in cn-north-1 seems to be "amazonaws.com.cn".
| 1.0 | Recently enabled EFS service in AWS China regions is using incorrect domain name. - <!---
Please note the following potential times when an issue might be in Terraform core:
* [Configuration Language](https://www.terraform.io/docs/configuration/index.html) or resource ordering issues
* [State](https://www.terraform.io/docs/state/index.html) and [State Backend](https://www.terraform.io/docs/backends/index.html) issues
* [Provisioner](https://www.terraform.io/docs/provisioners/index.html) issues
* [Registry](https://registry.terraform.io/) issues
* Spans resources across multiple providers
If you are running into one of these scenarios, we recommend opening an issue in the [Terraform core repository](https://github.com/hashicorp/terraform/) instead.
--->
<!--- Please keep this note for the community --->
### Community Note
* Please vote on this issue by adding a π [reaction](https://blog.github.com/2016-03-10-add-reactions-to-pull-requests-issues-and-comments/) to the original issue to help the community and maintainers prioritize this request
* Please do not leave "+1" or other comments that do not add relevant new information or questions, they generate extra noise for issue followers and do not help prioritize the request
* If you are interested in working on this issue or have submitted a pull request, please leave a comment
<!--- Thank you for keeping this note for the community --->
### Terraform Version
<!--- Please run `terraform -v` to show the Terraform core version and provider version(s). If you are not running the latest version of Terraform or the provider, please upgrade because your issue may have already been fixed. [Terraform documentation on provider versioning](https://www.terraform.io/docs/configuration/providers.html#provider-versions). --->
```
Terraform v0.12.19
+ provider.aws v2.45.0
+ provider.template v2.1.2
```
### Affected Resource(s)
<!--- Please list the affected resources and data sources. --->
* aws_efs_file_system
### Terraform Configuration Files
<!--- Information about code formatting: https://help.github.com/articles/basic-writing-and-formatting-syntax/#quoting-code --->
```hcl
resource "aws_efs_file_system" "efs" {
performance_mode = "generalPurpose"
encrypted = true
}
```
### Debug Output
<!---
Please provide a link to a GitHub Gist containing the complete debug output. Please do NOT paste the debug output in the issue; just paste a link to the Gist.
To obtain the debug output, see the [Terraform documentation on debugging](https://www.terraform.io/docs/internals/debugging.html).
--->
### Panic Output
<!--- If Terraform produced a panic, please provide a link to a GitHub Gist containing the output of the `crash.log`. --->
### Expected Behavior
<!--- What should have happened? --->
```hcl
$ terraform state show module.dbsdevlsp.aws_efs_file_system.efs\[0\]
# module.dbsdevlsp.aws_efs_file_system.efs[0]:
resource "aws_efs_file_system" "efs" {
arn = "arn:aws-cn:elasticfilesystem:cn-north-1:xxxxxxxxxx:file-system/fs-abcdefgh"
creation_token = "terraform-20200124105304526000000001"
dns_name = "fs-abcdefgh.efs.cn-north-1.amazonaws.com.cn"
encrypted = true
id = "fs-abcdefgh"
kms_key_id = "arn:aws-cn:kms:cn-north-1:xxxxxxxxxx:key/abcdefgh-1234-5678-90ab-cdefghijklmn"
performance_mode = "generalPurpose"
provisioned_throughput_in_mibps = 0
throughput_mode = "bursting"
}
```
### Actual Behavior
<!--- What actually happened? --->
```hcl
$ terraform state show module.dbsdevlsp.aws_efs_file_system.efs\[0\]
# module.dbsdevlsp.aws_efs_file_system.efs[0]:
resource "aws_efs_file_system" "efs" {
arn = "arn:aws-cn:elasticfilesystem:cn-north-1:xxxxxxxxxx:file-system/fs-abcdefgh"
creation_token = "terraform-20200124105304526000000001"
dns_name = "fs-abcdefgh.efs.cn-north-1.amazonaws.com"
encrypted = true
id = "fs-abcdefgh"
kms_key_id = "arn:aws-cn:kms:cn-north-1:xxxxxxxxxx:key/abcdefgh-1234-5678-90ab-cdefghijklmn"
performance_mode = "generalPurpose"
provisioned_throughput_in_mibps = 0
throughput_mode = "bursting"
}
```
### Steps to Reproduce
<!--- Please list the steps required to reproduce the issue. --->
1. `terraform apply`
### Important Factoids
<!--- Are there anything atypical about your accounts that we should know? For example: Running in EC2 Classic? --->
This is in AWS China region cn-north-1.
### References
<!---
Information about referencing Github Issues: https://help.github.com/articles/basic-writing-and-formatting-syntax/#referencing-issues-and-pull-requests
Are there any other GitHub issues (open or closed) or pull requests that should be linked here? Vendor documentation? For example:
--->
* It seems to me that aws/resource_aws_efs_file_system.go line 396: `func resourceAwsEfsDnsName(fileSystemId, region string) string` uses improper hardcoding for the domain name ("amazonaws.com") while the actual domain name used in cn-north-1 seems to be "amazonaws.com.cn".
| non_infrastructure | recently enabled efs service in aws china regions is using incorrect domain name please note the following potential times when an issue might be in terraform core or resource ordering issues and issues issues issues spans resources across multiple providers if you are running into one of these scenarios we recommend opening an issue in the instead community note please vote on this issue by adding a π to the original issue to help the community and maintainers prioritize this request please do not leave or other comments that do not add relevant new information or questions they generate extra noise for issue followers and do not help prioritize the request if you are interested in working on this issue or have submitted a pull request please leave a comment terraform version terraform provider aws provider template affected resource s aws efs file system terraform configuration files hcl resource aws efs file system efs performance mode generalpurpose encrypted true debug output please provide a link to a github gist containing the complete debug output please do not paste the debug output in the issue just paste a link to the gist to obtain the debug output see the panic output expected behavior hcl terraform state show module dbsdevlsp aws efs file system efs module dbsdevlsp aws efs file system efs resource aws efs file system efs arn arn aws cn elasticfilesystem cn north xxxxxxxxxx file system fs abcdefgh creation token terraform dns name fs abcdefgh efs cn north amazonaws com cn encrypted true id fs abcdefgh kms key id arn aws cn kms cn north xxxxxxxxxx key abcdefgh cdefghijklmn performance mode generalpurpose provisioned throughput in mibps throughput mode bursting actual behavior hcl terraform state show module dbsdevlsp aws efs file system efs module dbsdevlsp aws efs file system efs resource aws efs file system efs arn arn aws cn elasticfilesystem cn north xxxxxxxxxx file system fs abcdefgh creation token terraform dns name fs abcdefgh efs cn north amazonaws com encrypted true id fs abcdefgh kms key id arn aws cn kms cn north xxxxxxxxxx key abcdefgh cdefghijklmn performance mode generalpurpose provisioned throughput in mibps throughput mode bursting steps to reproduce terraform apply important factoids this is in aws china region cn north references information about referencing github issues are there any other github issues open or closed or pull requests that should be linked here vendor documentation for example it seems to me that aws resource aws efs file system go line func resourceawsefsdnsname filesystemid region string string uses improper hardcoding for the domain name amazonaws com while the actual domain name used in cn north seems to be amazonaws com cn | 0 |
9,241 | 7,881,705,981 | IssuesEvent | 2018-06-26 20:00:26 | great-lakes/project-egypt | https://api.github.com/repos/great-lakes/project-egypt | closed | Create instruction for running tests | infrastructure | Parent #24
- [x] Write new MD or add to README.md explaining how to create a new test and how to run tests | 1.0 | Create instruction for running tests - Parent #24
- [x] Write new MD or add to README.md explaining how to create a new test and how to run tests | infrastructure | create instruction for running tests parent write new md or add to readme md explaining how to create a new test and how to run tests | 1 |
383,143 | 11,351,316,590 | IssuesEvent | 2020-01-24 10:55:42 | ooni/ooni.org | https://api.github.com/repos/ooni/ooni.org | closed | Perform data analysis of India websites for OONI fellow | data analysis effort/L priority/high | This entails looking at web_connectivity measurements from a specific set of report_ids and looking at the blocking of websites depending on the target region. | 1.0 | Perform data analysis of India websites for OONI fellow - This entails looking at web_connectivity measurements from a specific set of report_ids and looking at the blocking of websites depending on the target region. | non_infrastructure | perform data analysis of india websites for ooni fellow this entails looking at web connectivity measurements from a specific set of report ids and looking at the blocking of websites depending on the target region | 0 |
5,453 | 5,660,769,105 | IssuesEvent | 2017-04-10 15:48:48 | vmware/docker-volume-vsphere | https://api.github.com/repos/vmware/docker-volume-vsphere | closed | VIB Installation failures in CI | component/test-infrastructure kind/test P1 | Intermittently seen following failures in CI;
For e.g. https://ci.vmware.run/vmware/docker-volume-vsphere/2036
```
=> Deploying to ESX root@192.168.31.62 Fri Mar 31 18:49:47 UTC 2017
Connection to 192.168.31.62 closed by remote host. <<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<
Installation Result:
Message: Operation finished successfully.
Reboot Required: false
VIBs Installed: VMWare_bootbank_esx-vmdkops-service_0.13.03aa2c0-0.0.1
VIBs Removed:
VIBs Skipped:
=> deployESXInstall: Installation hit an error on root@192.168.31.62 Fri Mar 31 18:50:04 UTC 2017
make[1]: *** [deploy-esx] Error 2
make: *** [deploy-esx] Error 2
``` | 1.0 | VIB Installation failures in CI - Intermittently seen following failures in CI;
For e.g. https://ci.vmware.run/vmware/docker-volume-vsphere/2036
```
=> Deploying to ESX root@192.168.31.62 Fri Mar 31 18:49:47 UTC 2017
Connection to 192.168.31.62 closed by remote host. <<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<
Installation Result:
Message: Operation finished successfully.
Reboot Required: false
VIBs Installed: VMWare_bootbank_esx-vmdkops-service_0.13.03aa2c0-0.0.1
VIBs Removed:
VIBs Skipped:
=> deployESXInstall: Installation hit an error on root@192.168.31.62 Fri Mar 31 18:50:04 UTC 2017
make[1]: *** [deploy-esx] Error 2
make: *** [deploy-esx] Error 2
``` | infrastructure | vib installation failures in ci intermittently seen following failures in ci for e g deploying to esx root fri mar utc connection to closed by remote host installation result message operation finished successfully reboot required false vibs installed vmware bootbank esx vmdkops service vibs removed vibs skipped deployesxinstall installation hit an error on root fri mar utc make error make error | 1 |
684,995 | 23,441,049,745 | IssuesEvent | 2022-08-15 14:54:40 | LinkNacional/wc_cielo_payment_gateway | https://api.github.com/repos/LinkNacional/wc_cielo_payment_gateway | closed | ImplementaΓ§Γ£o de parcelamento de compras | enhancement priority | - [x] Adicionar configuraΓ§Γ£o que habilita o parcelamento das compras;
- [x] Implementar seletor de parcelas;
- [x] Fazer integraΓ§Γ£o com API 3.0. | 1.0 | ImplementaΓ§Γ£o de parcelamento de compras - - [x] Adicionar configuraΓ§Γ£o que habilita o parcelamento das compras;
- [x] Implementar seletor de parcelas;
- [x] Fazer integraΓ§Γ£o com API 3.0. | non_infrastructure | implementaΓ§Γ£o de parcelamento de compras adicionar configuraΓ§Γ£o que habilita o parcelamento das compras implementar seletor de parcelas fazer integraΓ§Γ£o com api | 0 |
263,377 | 8,288,726,023 | IssuesEvent | 2018-09-19 12:54:37 | regardscitoyens/the-law-factory-parser | https://api.github.com/repos/regardscitoyens/the-law-factory-parser | closed | Add last pending step (including CC) to all textes en cours | bug priority | Handle cases where texts are not published in the right order :
- for instance CC published but not final texte adoptΓ©
cf https://github.com/regardscitoyens/the-law-factory-parser/commit/dea0565f3406d5203863f35c179c021679ef5331
~I thought it was already the case, but it seems it is not always the case, for instance here:
https://www.lafabriquedelaloi.fr/articles.html?loi=ppl17-337 with the TA AN hΓ©micycle is awaiting publication http://www.assemblee-nationale.fr/15/ta/ta0164.asp
pjl17-249~
| 1.0 | Add last pending step (including CC) to all textes en cours - Handle cases where texts are not published in the right order :
- for instance CC published but not final texte adoptΓ©
cf https://github.com/regardscitoyens/the-law-factory-parser/commit/dea0565f3406d5203863f35c179c021679ef5331
~I thought it was already the case, but it seems it is not always the case, for instance here:
https://www.lafabriquedelaloi.fr/articles.html?loi=ppl17-337 with the TA AN hΓ©micycle is awaiting publication http://www.assemblee-nationale.fr/15/ta/ta0164.asp
pjl17-249~
| non_infrastructure | add last pending step including cc to all textes en cours handle cases where texts are not published in the right order for instance cc published but not final texte adoptΓ© cf i thought it was already the case but it seems it is not always the case for instance here with the ta an hΓ©micycle is awaiting publication | 0 |
17,158 | 12,238,393,412 | IssuesEvent | 2020-05-04 19:43:49 | apple/turicreate | https://api.github.com/repos/apple/turicreate | opened | Temporarily disable s3 upload test for SFrame | S3 infrastructure | The current internal S3 proxy service doesn't allow up delete directories. When SFrame uploads files, it will first check the files and then delete them all. This check and deletion action is not atomic and causes a lot of build failures when multiple machines try to upload at the same time.
After we move to a service that can allow up delete directories, we can use `uuid` for each runner to let them upload without interfering with other runners. | 1.0 | Temporarily disable s3 upload test for SFrame - The current internal S3 proxy service doesn't allow up delete directories. When SFrame uploads files, it will first check the files and then delete them all. This check and deletion action is not atomic and causes a lot of build failures when multiple machines try to upload at the same time.
After we move to a service that can allow up delete directories, we can use `uuid` for each runner to let them upload without interfering with other runners. | infrastructure | temporarily disable upload test for sframe the current internal proxy service doesn t allow up delete directories when sframe uploads files it will first check the files and then delete them all this check and deletion action is not atomic and causes a lot of build failures when multiple machines try to upload at the same time after we move to a service that can allow up delete directories we can use uuid for each runner to let them upload without interfering with other runners | 1 |
172,881 | 6,517,332,546 | IssuesEvent | 2017-08-27 21:54:10 | robertsanseries/ciano | https://api.github.com/repos/robertsanseries/ciano | closed | Conversion list | Priority 2 - [Normal] Status 4 - [Confirmed] Status 5 - [In Progress] Status 6 - [Finished] Type 3 - [Enhancement] | Conversion list should equal torrential application.
- [x] Name of the file selected for conversion
- [x] ProgressBar
- [x] Time
- [x] Size
- [x] Display for which type is to be converted
- [x] Button cancel and remove line

| 1.0 | Conversion list - Conversion list should equal torrential application.
- [x] Name of the file selected for conversion
- [x] ProgressBar
- [x] Time
- [x] Size
- [x] Display for which type is to be converted
- [x] Button cancel and remove line

| non_infrastructure | conversion list conversion list should equal torrential application name of the file selected for conversion progressbar time size display for which type is to be converted button cancel and remove line | 0 |
6,873 | 24,005,697,465 | IssuesEvent | 2022-09-14 14:39:33 | tm24fan8/Home-Assistant-Configs | https://api.github.com/repos/tm24fan8/Home-Assistant-Configs | closed | Add 2-hour delay and cancellation modes for school | enhancement lighting security convenience presence detection automation TTS | Need a couple buttons to easily change scheduling for when the schools decide to cause havoc | 1.0 | Add 2-hour delay and cancellation modes for school - Need a couple buttons to easily change scheduling for when the schools decide to cause havoc | non_infrastructure | add hour delay and cancellation modes for school need a couple buttons to easily change scheduling for when the schools decide to cause havoc | 0 |
6,395 | 6,379,321,995 | IssuesEvent | 2017-08-02 14:32:09 | scikit-beam/scikit-beam | https://api.github.com/repos/scikit-beam/scikit-beam | closed | Switch to py.test | infrastructure | It would be great if we could switch to py.test for our testing framework for a number of reasons.
1. It is very easy to use
2. It has a very large number of plugins
3. Errybody doin' it
4. ...and many others.
| 1.0 | Switch to py.test - It would be great if we could switch to py.test for our testing framework for a number of reasons.
1. It is very easy to use
2. It has a very large number of plugins
3. Errybody doin' it
4. ...and many others.
| infrastructure | switch to py test it would be great if we could switch to py test for our testing framework for a number of reasons it is very easy to use it has a very large number of plugins errybody doin it and many others | 1 |
16,730 | 12,129,377,697 | IssuesEvent | 2020-04-22 22:26:25 | 18F/tts-tech-portfolio | https://api.github.com/repos/18F/tts-tech-portfolio | closed | decommission `Department of Labor - Wage and Hour - Section 14c` repository | epic: software and infrastructure grooming: draft - initial | https://github.com/18F/dol-whd-14c
Seems the 18F work has ended, but the issues are still active. Doesn't make sense for it to stay under the 18F org as is.
- [ ] Figure out 18F stakeholders, if any remain
- [ ] [Transfer repository](https://handbook.18f.gov/github/#rules) to Department of Labor, or
- [ ] Archive the repository
- [ ] Remove the GitHub [users](https://github.com/orgs/18F/teams/dol-whd-partner/members) (that aren't TTS staff) and [team](https://github.com/orgs/18F/teams/dol-whd-partner)
See email thread `Zenhub Problem`.
cc @18F/dol-whd-partner | 1.0 | decommission `Department of Labor - Wage and Hour - Section 14c` repository - https://github.com/18F/dol-whd-14c
Seems the 18F work has ended, but the issues are still active. Doesn't make sense for it to stay under the 18F org as is.
- [ ] Figure out 18F stakeholders, if any remain
- [ ] [Transfer repository](https://handbook.18f.gov/github/#rules) to Department of Labor, or
- [ ] Archive the repository
- [ ] Remove the GitHub [users](https://github.com/orgs/18F/teams/dol-whd-partner/members) (that aren't TTS staff) and [team](https://github.com/orgs/18F/teams/dol-whd-partner)
See email thread `Zenhub Problem`.
cc @18F/dol-whd-partner | infrastructure | decommission department of labor wage and hour section repository seems the work has ended but the issues are still active doesn t make sense for it to stay under the org as is figure out stakeholders if any remain to department of labor or archive the repository remove the github that aren t tts staff and see email thread zenhub problem cc dol whd partner | 1 |
24,753 | 24,235,808,936 | IssuesEvent | 2022-09-26 23:06:07 | simonw/datasette | https://api.github.com/repos/simonw/datasette | closed | Preserve query on timeout | enhancement usability | If a query hits the timeout it shows a message like:
> SQL query took too long. The time limit is controlled by the [sql_time_limit_ms](https://docs.datasette.io/en/stable/settings.html#sql-time-limit-ms) configuration option.
But the query is lost. Hitting the browser back button shows the query _before_ the one that errored.
It would be nice if the query that errored was preserved for more tweaking. This would make it similar to how "invalid syntax" works since #1346 / #619. | True | Preserve query on timeout - If a query hits the timeout it shows a message like:
> SQL query took too long. The time limit is controlled by the [sql_time_limit_ms](https://docs.datasette.io/en/stable/settings.html#sql-time-limit-ms) configuration option.
But the query is lost. Hitting the browser back button shows the query _before_ the one that errored.
It would be nice if the query that errored was preserved for more tweaking. This would make it similar to how "invalid syntax" works since #1346 / #619. | non_infrastructure | preserve query on timeout if a query hits the timeout it shows a message like sql query took too long the time limit is controlled by the configuration option but the query is lost hitting the browser back button shows the query before the one that errored it would be nice if the query that errored was preserved for more tweaking this would make it similar to how invalid syntax works since | 0 |
30,250 | 24,700,174,928 | IssuesEvent | 2022-10-19 14:46:55 | dotnet/dotnet-docker | https://api.github.com/repos/dotnet/dotnet-docker | closed | Support pre-release servicing drops | bug area-infrastructure | We should be providing updates of our nightly images whenever possible for servicing releases. These would be drops that are not MSRC-related and thus would be publicly available. For example, 5.0.1 is not a MSRC release so it has builds available at `https://dotnetcli.blob.core.windows.net/dotnet/sdk/5.0.1-servicing.<build>`. Doing this for servicing drops is important for the same reason it's important for preview releases: it provides an additional level of validation of the release and specifically for container environments.
There are a few issues that need to be resolved in order to provide these updates:
- The Dockerfile templates do not currently support the path syntax required to reference these servicing drops. The path includes the build version for the directory name but only the product version for the filename (e.g. https://dotnetcli.blob.core.windows.net/dotnet/Sdk/5.0.101-servicing.20601.5/dotnet-sdk-5.0.101-win-x64.zip). But the Dockerfile templates use the same version in both the directory name and the filename: https://github.com/dotnet/dotnet-docker/blob/eb12720ccea648c2e543ffa1c358f47ba0cc292d/eng/dockerfile-templates/sdk/5.0/Dockerfile.nanoserver#L14 The Dockerfile template needs the ability to provide distinct values for the version specified between the directory and filename.
- The same issue exists for the update-dependencies tool for the URL that it constructs in order to retrieve the SHA values of the files. While it does have logic to provide distinct version values between the directory and filename: https://github.com/dotnet/dotnet-docker/blob/eb12720ccea648c2e543ffa1c358f47ba0cc292d/eng/update-dependencies/DockerfileShaUpdater.cs#L36 It doesn't actually set those variables to distinct values for servicing drops due to its special case logic which only accounts for RTM releases: https://github.com/dotnet/dotnet-docker/blob/eb12720ccea648c2e543ffa1c358f47ba0cc292d/eng/update-dependencies/DockerfileShaUpdater.cs#L101
- In order to automate the creation of PRs that update the Dockerfile for new servicing drops, there needs to be changes to the build pipeline which does this. Currently, the pipeline can only handle two channels: one for the the core .NET product (runtime, aspnet, sdk) and one for the .NET monitor tool. These are represented as stages within the pipeline: https://github.com/dotnet/dotnet-docker/blob/eb12720ccea648c2e543ffa1c358f47ba0cc292d/eng/pipelines/update-dependencies.yml#L12 In order to be able to support nightly updates for both preview releases and servicing releases, there needs to be support for handling an additional channel.
| 1.0 | Support pre-release servicing drops - We should be providing updates of our nightly images whenever possible for servicing releases. These would be drops that are not MSRC-related and thus would be publicly available. For example, 5.0.1 is not a MSRC release so it has builds available at `https://dotnetcli.blob.core.windows.net/dotnet/sdk/5.0.1-servicing.<build>`. Doing this for servicing drops is important for the same reason it's important for preview releases: it provides an additional level of validation of the release and specifically for container environments.
There are a few issues that need to be resolved in order to provide these updates:
- The Dockerfile templates do not currently support the path syntax required to reference these servicing drops. The path includes the build version for the directory name but only the product version for the filename (e.g. https://dotnetcli.blob.core.windows.net/dotnet/Sdk/5.0.101-servicing.20601.5/dotnet-sdk-5.0.101-win-x64.zip). But the Dockerfile templates use the same version in both the directory name and the filename: https://github.com/dotnet/dotnet-docker/blob/eb12720ccea648c2e543ffa1c358f47ba0cc292d/eng/dockerfile-templates/sdk/5.0/Dockerfile.nanoserver#L14 The Dockerfile template needs the ability to provide distinct values for the version specified between the directory and filename.
- The same issue exists for the update-dependencies tool for the URL that it constructs in order to retrieve the SHA values of the files. While it does have logic to provide distinct version values between the directory and filename: https://github.com/dotnet/dotnet-docker/blob/eb12720ccea648c2e543ffa1c358f47ba0cc292d/eng/update-dependencies/DockerfileShaUpdater.cs#L36 It doesn't actually set those variables to distinct values for servicing drops due to its special case logic which only accounts for RTM releases: https://github.com/dotnet/dotnet-docker/blob/eb12720ccea648c2e543ffa1c358f47ba0cc292d/eng/update-dependencies/DockerfileShaUpdater.cs#L101
- In order to automate the creation of PRs that update the Dockerfile for new servicing drops, there needs to be changes to the build pipeline which does this. Currently, the pipeline can only handle two channels: one for the the core .NET product (runtime, aspnet, sdk) and one for the .NET monitor tool. These are represented as stages within the pipeline: https://github.com/dotnet/dotnet-docker/blob/eb12720ccea648c2e543ffa1c358f47ba0cc292d/eng/pipelines/update-dependencies.yml#L12 In order to be able to support nightly updates for both preview releases and servicing releases, there needs to be support for handling an additional channel.
| infrastructure | support pre release servicing drops we should be providing updates of our nightly images whenever possible for servicing releases these would be drops that are not msrc related and thus would be publicly available for example is not a msrc release so it has builds available at doing this for servicing drops is important for the same reason it s important for preview releases it provides an additional level of validation of the release and specifically for container environments there are a few issues that need to be resolved in order to provide these updates the dockerfile templates do not currently support the path syntax required to reference these servicing drops the path includes the build version for the directory name but only the product version for the filename e g but the dockerfile templates use the same version in both the directory name and the filename the dockerfile template needs the ability to provide distinct values for the version specified between the directory and filename the same issue exists for the update dependencies tool for the url that it constructs in order to retrieve the sha values of the files while it does have logic to provide distinct version values between the directory and filename it doesn t actually set those variables to distinct values for servicing drops due to its special case logic which only accounts for rtm releases in order to automate the creation of prs that update the dockerfile for new servicing drops there needs to be changes to the build pipeline which does this currently the pipeline can only handle two channels one for the the core net product runtime aspnet sdk and one for the net monitor tool these are represented as stages within the pipeline in order to be able to support nightly updates for both preview releases and servicing releases there needs to be support for handling an additional channel | 1 |
4,901 | 5,325,930,827 | IssuesEvent | 2017-02-15 01:39:26 | mirai-audio/mir | https://api.github.com/repos/mirai-audio/mir | opened | mir release script | infrastructure | # Goal
Easily create tagged release with a release branch off master.
### Expected Behavior
When a release has been tested, QA'ed and ready for launch:
```bash
./run-release 1.3.4
```
* tags master with release number
* creates release branch
* pushes both to github
## Considerations
```bash
# Generate release notes from last MINOR release tag, crediting each author per commit.
git log `git describe --abbrev=0 --tags`.. --pretty=format:"* %s - @%an"
# Generate release notes from last MINOR release tag, rollup commits to each author
git shortlog `git describe --abbrev=0 --tags`..
```
Creation of git tags, see https://github.com/0xadada/dockdj/blob/master/bin/deploy#L86
## Tasks
List all of the subtasks that will contribute to completion of this issue. Once
all subtasks are complete, that will indicate the issue is "done".
* [ ] Create bash script
* [ ] Test on a test repo w/o pushing
| 1.0 | mir release script - # Goal
Easily create tagged release with a release branch off master.
### Expected Behavior
When a release has been tested, QA'ed and ready for launch:
```bash
./run-release 1.3.4
```
* tags master with release number
* creates release branch
* pushes both to github
## Considerations
```bash
# Generate release notes from last MINOR release tag, crediting each author per commit.
git log `git describe --abbrev=0 --tags`.. --pretty=format:"* %s - @%an"
# Generate release notes from last MINOR release tag, rollup commits to each author
git shortlog `git describe --abbrev=0 --tags`..
```
Creation of git tags, see https://github.com/0xadada/dockdj/blob/master/bin/deploy#L86
## Tasks
List all of the subtasks that will contribute to completion of this issue. Once
all subtasks are complete, that will indicate the issue is "done".
* [ ] Create bash script
* [ ] Test on a test repo w/o pushing
| infrastructure | mir release script goal easily create tagged release with a release branch off master expected behavior when a release has been tested qa ed and ready for launch bash run release tags master with release number creates release branch pushes both to github considerations bash generate release notes from last minor release tag crediting each author per commit git log git describe abbrev tags pretty format s an generate release notes from last minor release tag rollup commits to each author git shortlog git describe abbrev tags creation of git tags see tasks list all of the subtasks that will contribute to completion of this issue once all subtasks are complete that will indicate the issue is done create bash script test on a test repo w o pushing | 1 |
62,817 | 8,641,245,989 | IssuesEvent | 2018-11-24 15:43:40 | OpenAPITools/openapi-generator | https://api.github.com/repos/OpenAPITools/openapi-generator | closed | [Documentation] Missing doc export from default codegen | Feature: Documentation | Missing options which are exported by the DefaultCodegen like modelNamePrefix for example.
##### Related issues/PRs
<!-- has a similar issue/PR been reported/opened before? Please do a search in https://github.com/openapitools/openapi-generator/issues?utf8=%E2%9C%93&q=is%3Aissue%20 -->
#932
| 1.0 | [Documentation] Missing doc export from default codegen - Missing options which are exported by the DefaultCodegen like modelNamePrefix for example.
##### Related issues/PRs
<!-- has a similar issue/PR been reported/opened before? Please do a search in https://github.com/openapitools/openapi-generator/issues?utf8=%E2%9C%93&q=is%3Aissue%20 -->
#932
| non_infrastructure | missing doc export from default codegen missing options which are exported by the defaultcodegen like modelnameprefix for example related issues prs | 0 |
377,852 | 11,185,420,267 | IssuesEvent | 2020-01-01 01:31:26 | webcompat/web-bugs | https://api.github.com/repos/webcompat/web-bugs | closed | br.hao123.com - see bug description | browser-firefox engine-gecko ml-needsdiagnosis-false ml-probability-high priority-critical | <!-- @browser: Firefox 73.0 -->
<!-- @ua_header: Mozilla/5.0 (Windows NT 6.3; Win64; x64; rv:73.0) Gecko/20100101 Firefox/73.0 -->
<!-- @reported_with: desktop-reporter -->
**URL**: http://br.hao123.com/?tn=sft_hp_hao123_br
**Browser / Version**: Firefox 73.0
**Operating System**: Windows 8.1
**Tested Another Browser**: Yes
**Problem type**: Something else
**Description**: It doesn't disapear
**Steps to Reproduce**:
Even though I keep trying to change my home page, This site continues to appear
[](https://webcompat.com/uploads/2020/1/f53090b1-0130-4d25-bb93-a87bfde12d9d.jpeg)
<details>
<summary>Browser Configuration</summary>
<ul>
<li>gfx.webrender.all: false</li><li>gfx.webrender.blob-images: true</li><li>gfx.webrender.enabled: false</li><li>image.mem.shared: true</li><li>buildID: 20191231213920</li><li>channel: nightly</li><li>hasTouchScreen: false</li><li>mixed active content blocked: false</li><li>mixed passive content blocked: false</li><li>tracking content blocked: false</li>
</ul>
</details>
[View console log messages](https://webcompat.com/console_logs/2020/1/b8a9ccc3-c807-4bb6-be20-bc25d5986928)
_From [webcompat.com](https://webcompat.com/) with β€οΈ_ | 1.0 | br.hao123.com - see bug description - <!-- @browser: Firefox 73.0 -->
<!-- @ua_header: Mozilla/5.0 (Windows NT 6.3; Win64; x64; rv:73.0) Gecko/20100101 Firefox/73.0 -->
<!-- @reported_with: desktop-reporter -->
**URL**: http://br.hao123.com/?tn=sft_hp_hao123_br
**Browser / Version**: Firefox 73.0
**Operating System**: Windows 8.1
**Tested Another Browser**: Yes
**Problem type**: Something else
**Description**: It doesn't disapear
**Steps to Reproduce**:
Even though I keep trying to change my home page, This site continues to appear
[](https://webcompat.com/uploads/2020/1/f53090b1-0130-4d25-bb93-a87bfde12d9d.jpeg)
<details>
<summary>Browser Configuration</summary>
<ul>
<li>gfx.webrender.all: false</li><li>gfx.webrender.blob-images: true</li><li>gfx.webrender.enabled: false</li><li>image.mem.shared: true</li><li>buildID: 20191231213920</li><li>channel: nightly</li><li>hasTouchScreen: false</li><li>mixed active content blocked: false</li><li>mixed passive content blocked: false</li><li>tracking content blocked: false</li>
</ul>
</details>
[View console log messages](https://webcompat.com/console_logs/2020/1/b8a9ccc3-c807-4bb6-be20-bc25d5986928)
_From [webcompat.com](https://webcompat.com/) with β€οΈ_ | non_infrastructure | br com see bug description url browser version firefox operating system windows tested another browser yes problem type something else description it doesn t disapear steps to reproduce even though i keep trying to change my home page this site continues to appear browser configuration gfx webrender all false gfx webrender blob images true gfx webrender enabled false image mem shared true buildid channel nightly hastouchscreen false mixed active content blocked false mixed passive content blocked false tracking content blocked false from with β€οΈ | 0 |
13,699 | 10,426,787,867 | IssuesEvent | 2019-09-16 18:24:30 | celo-org/celo-monorepo | https://api.github.com/repos/celo-org/celo-monorepo | opened | Developers SBAT deploy a testnet without ethstats | infrastructure | ### Expected Behavior
Developers SBAT to just deploy the testnet, without having to know to deploy ethstats.
### Current Behavior
In, we separated out ethstats out of a testnet deploy. However, the testnet helm-chart currently seems to rely on the ethstat-secret config to be available in the namespace
| 1.0 | Developers SBAT deploy a testnet without ethstats - ### Expected Behavior
Developers SBAT to just deploy the testnet, without having to know to deploy ethstats.
### Current Behavior
In, we separated out ethstats out of a testnet deploy. However, the testnet helm-chart currently seems to rely on the ethstat-secret config to be available in the namespace
| infrastructure | developers sbat deploy a testnet without ethstats expected behavior developers sbat to just deploy the testnet without having to know to deploy ethstats current behavior in we separated out ethstats out of a testnet deploy however the testnet helm chart currently seems to rely on the ethstat secret config to be available in the namespace | 1 |
208,682 | 7,157,253,344 | IssuesEvent | 2018-01-26 19:13:32 | capitalone/cloud-custodian | https://api.github.com/repos/capitalone/cloud-custodian | closed | provisioning lambda to vpc appears to be broken still | area/core kind/bug priority/P1 | Hi,
I've tested https://github.com/capitalone/cloud-custodian/pull/1919.
I see the validation now passes, but it looks like the vpc attributes are not being passed when the lambda is created.
Policy:
```
policies:
- name: vpc-test-sandbox
resource: ec2
mode:
type: config-rule
role: arn:aws:iam::{{ACCOUNT}}:role/service-role/tscloud_lambda_role
timeout: 180
security_groups: [sg-ea399290]
subnets: [subnet-3c3d8367,subnet-a09275c6,subnet-278bb56e]
description: |
Testing vpc provisioning
filters:
- "tag:c7n_testing": present
actions:
- type: mark-for-op
tag: c7n_tag_compliance
op: terminate
days: 1
```
Deploy:
```
$ custodian run -s /tmp/c7n --cache-period 0 vpc-test.yml
2018-01-18 07:48:42,873: custodian.policy:INFO Provisioning policy lambda vpc-test-sandbox
2018-01-18 07:48:43,233: custodian.lambda:INFO Publishing custodian policy lambda function custodian-vpc-test-sandbox
```
Get function vpc config:
```
$ aws lambda get-function --function-name custodian-vpc-test-sandbox | jq '.Configuration.VpcConfig'
null
Expected result:
$ aws lambda get-function --function-name custodian-vpc-test-sandbox | jq '.Configuration.VpcConfig'
{
"SubnetIds": [
"subnet-3c3d8367",
"subnet-a09275c6",
"subnet-278bb56e"
],
"SecurityGroupIds": [
"sg-ea399290"
],
"VpcId": "vpc-d3519bb5"
}
```
The role I'm using has access to vpc provisioning, as far as I can tell, using the canned AWS Policy AWSLambdaVPCAccessExecutionRole. I'm able to use the role to add vpc config manually after provisioning with c7n. | 1.0 | provisioning lambda to vpc appears to be broken still - Hi,
I've tested https://github.com/capitalone/cloud-custodian/pull/1919.
I see the validation now passes, but it looks like the vpc attributes are not being passed when the lambda is created.
Policy:
```
policies:
- name: vpc-test-sandbox
resource: ec2
mode:
type: config-rule
role: arn:aws:iam::{{ACCOUNT}}:role/service-role/tscloud_lambda_role
timeout: 180
security_groups: [sg-ea399290]
subnets: [subnet-3c3d8367,subnet-a09275c6,subnet-278bb56e]
description: |
Testing vpc provisioning
filters:
- "tag:c7n_testing": present
actions:
- type: mark-for-op
tag: c7n_tag_compliance
op: terminate
days: 1
```
Deploy:
```
$ custodian run -s /tmp/c7n --cache-period 0 vpc-test.yml
2018-01-18 07:48:42,873: custodian.policy:INFO Provisioning policy lambda vpc-test-sandbox
2018-01-18 07:48:43,233: custodian.lambda:INFO Publishing custodian policy lambda function custodian-vpc-test-sandbox
```
Get function vpc config:
```
$ aws lambda get-function --function-name custodian-vpc-test-sandbox | jq '.Configuration.VpcConfig'
null
Expected result:
$ aws lambda get-function --function-name custodian-vpc-test-sandbox | jq '.Configuration.VpcConfig'
{
"SubnetIds": [
"subnet-3c3d8367",
"subnet-a09275c6",
"subnet-278bb56e"
],
"SecurityGroupIds": [
"sg-ea399290"
],
"VpcId": "vpc-d3519bb5"
}
```
The role I'm using has access to vpc provisioning, as far as I can tell, using the canned AWS Policy AWSLambdaVPCAccessExecutionRole. I'm able to use the role to add vpc config manually after provisioning with c7n. | non_infrastructure | provisioning lambda to vpc appears to be broken still hi i ve tested i see the validation now passes but it looks like the vpc attributes are not being passed when the lambda is created policy policies name vpc test sandbox resource mode type config rule role arn aws iam account role service role tscloud lambda role timeout security groups subnets description testing vpc provisioning filters tag testing present actions type mark for op tag tag compliance op terminate days deploy custodian run s tmp cache period vpc test yml custodian policy info provisioning policy lambda vpc test sandbox custodian lambda info publishing custodian policy lambda function custodian vpc test sandbox get function vpc config aws lambda get function function name custodian vpc test sandbox jq configuration vpcconfig null expected result aws lambda get function function name custodian vpc test sandbox jq configuration vpcconfig subnetids subnet subnet subnet securitygroupids sg vpcid vpc the role i m using has access to vpc provisioning as far as i can tell using the canned aws policy awslambdavpcaccessexecutionrole i m able to use the role to add vpc config manually after provisioning with | 0 |
30,436 | 24,824,410,590 | IssuesEvent | 2022-10-25 19:16:43 | keep-network/keep-core | https://api.github.com/repos/keep-network/keep-core | closed | Configure go formatting in CI | :cloud: infrastructure πhelp-wanted go | Currently we run `gofmt` to verify the code formatting (https://github.com/keep-network/keep-core/pull/3175). It may not be enough, according to https://sparkbox.com/foundry/go_vet_gofmt_golint_to_code_check_in_Go `go vet` and `golint` should be added.
As [`golint` got deprecated](https://github.com/golang/lint) one of the recommendations is https://staticcheck.io/.
TODO:
- [x] research the tools that should be used in CI
- [x] implement CI jobs
- [x] fix the problems reported by the checks
Refs:
- https://github.com/golang/go/issues/38968
| 1.0 | Configure go formatting in CI - Currently we run `gofmt` to verify the code formatting (https://github.com/keep-network/keep-core/pull/3175). It may not be enough, according to https://sparkbox.com/foundry/go_vet_gofmt_golint_to_code_check_in_Go `go vet` and `golint` should be added.
As [`golint` got deprecated](https://github.com/golang/lint) one of the recommendations is https://staticcheck.io/.
TODO:
- [x] research the tools that should be used in CI
- [x] implement CI jobs
- [x] fix the problems reported by the checks
Refs:
- https://github.com/golang/go/issues/38968
| infrastructure | configure go formatting in ci currently we run gofmt to verify the code formatting it may not be enough according to go vet and golint should be added as one of the recommendations is todo research the tools that should be used in ci implement ci jobs fix the problems reported by the checks refs | 1 |
9,291 | 7,893,097,170 | IssuesEvent | 2018-06-28 16:54:59 | Graylog2/graylog2-server | https://api.github.com/repos/Graylog2/graylog2-server | closed | `GenericErrorCsvWriterTest` fails with `BindException` | infrastructure | Under some circumstances this test fails with an error unrelated to actual test results.
From [jenkins build log](https://jenkins.torch.sh/job/graylog-project-snapshot/892/consoleFull):
```
testSearchError(org.graylog2.rest.GenericErrorCsvWriterTest) Time elapsed: 9.261 sec <<< ERROR!
org.glassfish.jersey.test.spi.TestContainerException: java.net.BindException: Address already in use
Caused by: java.net.BindException: Address already in use
``` | 1.0 | `GenericErrorCsvWriterTest` fails with `BindException` - Under some circumstances this test fails with an error unrelated to actual test results.
From [jenkins build log](https://jenkins.torch.sh/job/graylog-project-snapshot/892/consoleFull):
```
testSearchError(org.graylog2.rest.GenericErrorCsvWriterTest) Time elapsed: 9.261 sec <<< ERROR!
org.glassfish.jersey.test.spi.TestContainerException: java.net.BindException: Address already in use
Caused by: java.net.BindException: Address already in use
``` | infrastructure | genericerrorcsvwritertest fails with bindexception under some circumstances this test fails with an error unrelated to actual test results from testsearcherror org rest genericerrorcsvwritertest time elapsed sec error org glassfish jersey test spi testcontainerexception java net bindexception address already in use caused by java net bindexception address already in use | 1 |
78,852 | 3,518,260,879 | IssuesEvent | 2016-01-12 11:58:36 | brata-hsdc/brata | https://api.github.com/repos/brata-hsdc/brata | opened | Add audio sampling interface to the framework | brata-framework OS-Android priority:1-drop-everything type:3-enhancement | We need to port @dvjones12 code for audio sampling into the framework for this year's challenge. This is top priority as we need to get the framework released this week. | 1.0 | Add audio sampling interface to the framework - We need to port @dvjones12 code for audio sampling into the framework for this year's challenge. This is top priority as we need to get the framework released this week. | non_infrastructure | add audio sampling interface to the framework we need to port code for audio sampling into the framework for this year s challenge this is top priority as we need to get the framework released this week | 0 |
19,363 | 13,224,233,089 | IssuesEvent | 2020-08-17 18:44:45 | oci-labs/check-ins | https://api.github.com/repos/oci-labs/check-ins | closed | Create Sample TestContainer Usage via Role Service (M) | infrastructure spike sprint 8 | Definition of Done
- [x] convert role service test to use test containers
- [x] no tests depend on an externally running database
- [ ] document the necessary process to build out the new test container | 1.0 | Create Sample TestContainer Usage via Role Service (M) - Definition of Done
- [x] convert role service test to use test containers
- [x] no tests depend on an externally running database
- [ ] document the necessary process to build out the new test container | infrastructure | create sample testcontainer usage via role service m definition of done convert role service test to use test containers no tests depend on an externally running database document the necessary process to build out the new test container | 1 |
14,055 | 10,593,471,116 | IssuesEvent | 2019-10-09 14:55:52 | zowe/zlux | https://api.github.com/repos/zowe/zlux | closed | Put iframe adapter into zlux-platform | Client Infrastructure | Acceptance Criteria:
Iframe adapter code does not depend on "sample-iframe-app"
Adapter can be added to iframes by simple script inclusion
Any iframe app can depend on zlux-platform to get & use the adapter file | 1.0 | Put iframe adapter into zlux-platform - Acceptance Criteria:
Iframe adapter code does not depend on "sample-iframe-app"
Adapter can be added to iframes by simple script inclusion
Any iframe app can depend on zlux-platform to get & use the adapter file | infrastructure | put iframe adapter into zlux platform acceptance criteria iframe adapter code does not depend on sample iframe app adapter can be added to iframes by simple script inclusion any iframe app can depend on zlux platform to get use the adapter file | 1 |
9,236 | 7,880,829,015 | IssuesEvent | 2018-06-26 17:03:52 | NCEAS/arctic-data-outreach | https://api.github.com/repos/NCEAS/arctic-data-outreach | closed | POLAR2018 - Best practices for data/metadata | infrastructure next outreach training | # Session: Best practices for data & metadata submission
- Storing and preparing data in open source formats
- Stability, longevity, interoperability
- Metadata best practices & automated metadata quality checks
### Pre-existing resources:
- [Introduction to Research Data Management - case studies (Oxford)](https://zenodo.org/record/28326)
- [Preservation and Archiving of Digital Media](https://classroom.oceanteacher.org/course/view.php?id=111)
- [Research Data Management at CODATA-RDA Summer School in Research Data Science Aug 1-12 2016 - specifically the "Reasons not to share Exercise.docx](https://zenodo.org/record/154433/files/ReasonsNotToShare-Exercise.docx)
- [Sharing and Archiving Your Research Data Workshop Materials (UEL) - specifically the Appraisal Exercise.doc](https://zenodo.org/record/28323
- [Digital Preservation for Researchers - teaching modules](https://zenodo.org/record/28544)
- [EUDAT Summer School - How FAIR are your data](https://zenodo.org/record/1065991)
- [Introduction to Research Data Management - half-day course (Oxford)] - specifically the Intro_to_RDM_handout_3_David_Shotton_Twenty_Questions_for_Research_Data_Management.pdf](https://zenodo.org/record/28325)
- [Data One Education Module - Accessing Data in the Literature](https://www.dataone.org/sites/all/documents/education-modules/exercises/L01_Exercise.pdf)
### Session idea: Accessing data in the literature
Adjust to [DataOne Education Module "Accessing Data in the Literature](https://www.dataone.org/sites/all/documents/education-modules/exercises/L01_Exercise.pdf) to be shorter. Use Arctic examples - 5 to 10 articles.
1. [Recent Warming Reverses Long-Term Arctic Cooling](https://doi.org/10.1126/science.1173983)
2. [The Contribution of Bering Sea Water to the Arctic Ocean (1961)](www.jstor.org/stable/40506914)
3. [The Arctic oscillation signature in the wintertime geopotential height and temperature fields (1998) - 1982 citations](https://doi.org/10.1029/98GL00950)
4. [Development of best practices for scientific research vessel operations in a
changing Arctic: A case study for R/V Sikuliaq (2014) - 1 citation](https://doi.org/10.1016/j.marpol.2017.09.021)
5. [Are Mixed Economies Persistent or Transitional? Evidence
Using Social Networks from Arctic Alaska (2016) - 11 citations](https://doi.org/10.1111/aman.12447)
6. [Who Cares about Polar Regions? Results from a Survey of U.S. Public Opinion (2008)](https://doi.org/10.1657/1523-0430(07-105)[HAMILTON]2.0.CO;2)
7. [Developing an arctic subsistence observation system (2011)](https://doi.org/10.1080/1088937X.2011.584448)
idea 1: Work in groups to discuss and search using laptops / phones if they have them.
idea 2: Have them find an article they like, and then make a 2 min plan for getting and re-using the data. 10 mins - Have a game where they roll dice and move across a board. Discuss how their plan differed from the gameplay
Draft / example gameboard

#### Metadata
- [Data One Education Module - Metadata hands on](https://www.dataone.org/education-modules)
Adjust to be shorter, use Arctic examples from 3 - 5 different fields of study, print out sample copies of metadata and data for quick review and discussion. Make one a real example from the ADC, talk about that afterwards in discussion portion | 1.0 | POLAR2018 - Best practices for data/metadata - # Session: Best practices for data & metadata submission
- Storing and preparing data in open source formats
- Stability, longevity, interoperability
- Metadata best practices & automated metadata quality checks
### Pre-existing resources:
- [Introduction to Research Data Management - case studies (Oxford)](https://zenodo.org/record/28326)
- [Preservation and Archiving of Digital Media](https://classroom.oceanteacher.org/course/view.php?id=111)
- [Research Data Management at CODATA-RDA Summer School in Research Data Science Aug 1-12 2016 - specifically the "Reasons not to share Exercise.docx](https://zenodo.org/record/154433/files/ReasonsNotToShare-Exercise.docx)
- [Sharing and Archiving Your Research Data Workshop Materials (UEL) - specifically the Appraisal Exercise.doc](https://zenodo.org/record/28323
- [Digital Preservation for Researchers - teaching modules](https://zenodo.org/record/28544)
- [EUDAT Summer School - How FAIR are your data](https://zenodo.org/record/1065991)
- [Introduction to Research Data Management - half-day course (Oxford)] - specifically the Intro_to_RDM_handout_3_David_Shotton_Twenty_Questions_for_Research_Data_Management.pdf](https://zenodo.org/record/28325)
- [Data One Education Module - Accessing Data in the Literature](https://www.dataone.org/sites/all/documents/education-modules/exercises/L01_Exercise.pdf)
### Session idea: Accessing data in the literature
Adjust to [DataOne Education Module "Accessing Data in the Literature](https://www.dataone.org/sites/all/documents/education-modules/exercises/L01_Exercise.pdf) to be shorter. Use Arctic examples - 5 to 10 articles.
1. [Recent Warming Reverses Long-Term Arctic Cooling](https://doi.org/10.1126/science.1173983)
2. [The Contribution of Bering Sea Water to the Arctic Ocean (1961)](www.jstor.org/stable/40506914)
3. [The Arctic oscillation signature in the wintertime geopotential height and temperature fields (1998) - 1982 citations](https://doi.org/10.1029/98GL00950)
4. [Development of best practices for scientific research vessel operations in a
changing Arctic: A case study for R/V Sikuliaq (2014) - 1 citation](https://doi.org/10.1016/j.marpol.2017.09.021)
5. [Are Mixed Economies Persistent or Transitional? Evidence
Using Social Networks from Arctic Alaska (2016) - 11 citations](https://doi.org/10.1111/aman.12447)
6. [Who Cares about Polar Regions? Results from a Survey of U.S. Public Opinion (2008)](https://doi.org/10.1657/1523-0430(07-105)[HAMILTON]2.0.CO;2)
7. [Developing an arctic subsistence observation system (2011)](https://doi.org/10.1080/1088937X.2011.584448)
idea 1: Work in groups to discuss and search using laptops / phones if they have them.
idea 2: Have them find an article they like, and then make a 2 min plan for getting and re-using the data. 10 mins - Have a game where they roll dice and move across a board. Discuss how their plan differed from the gameplay
Draft / example gameboard

#### Metadata
- [Data One Education Module - Metadata hands on](https://www.dataone.org/education-modules)
Adjust to be shorter, use Arctic examples from 3 - 5 different fields of study, print out sample copies of metadata and data for quick review and discussion. Make one a real example from the ADC, talk about that afterwards in discussion portion | infrastructure | best practices for data metadata session best practices for data metadata submission storing and preparing data in open source formats stability longevity interoperability metadata best practices automated metadata quality checks pre existing resources specifically the intro to rdm handout david shotton twenty questions for research data management pdf session idea accessing data in the literature adjust to to be shorter use arctic examples to articles development of best practices for scientific research vessel operations in a changing arctic a case study for r v sikuliaq citation are mixed economies persistent or transitional evidence using social networks from arctic alaska citations co idea work in groups to discuss and search using laptops phones if they have them idea have them find an article they like and then make a min plan for getting and re using the data mins have a game where they roll dice and move across a board discuss how their plan differed from the gameplay draft example gameboard metadata adjust to be shorter use arctic examples from different fields of study print out sample copies of metadata and data for quick review and discussion make one a real example from the adc talk about that afterwards in discussion portion | 1 |
29,856 | 24,344,588,199 | IssuesEvent | 2022-10-02 06:02:31 | zer0Kerbal/RadialOmniSeparator | https://api.github.com/repos/zer0Kerbal/RadialOmniSeparator | closed | Create <RadialOmniSeparator.cfg> | issue: config type: localization type: infrastructure | # Create PicknPull.cfg
<!--
tagsConfig.md v1.0.0.0
created: 17 Aug 2022
updated:
-->
* Add localized tags to parts
* Create
* [x] <RadialOmniSeparator.cfg> v1.0.0.0
* [x] adds localized tags to parts
this file: This file: All Rights Reserved by zer0Kerbal | 1.0 | Create <RadialOmniSeparator.cfg> - # Create PicknPull.cfg
<!--
tagsConfig.md v1.0.0.0
created: 17 Aug 2022
updated:
-->
* Add localized tags to parts
* Create
* [x] <RadialOmniSeparator.cfg> v1.0.0.0
* [x] adds localized tags to parts
this file: This file: All Rights Reserved by zer0Kerbal | infrastructure | create create picknpull cfg tagsconfig md created aug updated add localized tags to parts create adds localized tags to parts this file this file all rights reserved by | 1 |
1,459 | 3,226,760,284 | IssuesEvent | 2015-10-10 15:38:08 | monarch-initiative/monarch-app | https://api.github.com/repos/monarch-initiative/monarch-app | opened | URGENT: Load testing as part of release cycle | bug infrastructure web site front end | We urgently need to develop some load tests. We can all agree that the servers *can not* go down, especially when we have spikes of usage during after presentations. Who is the right person to take this on? | 1.0 | URGENT: Load testing as part of release cycle - We urgently need to develop some load tests. We can all agree that the servers *can not* go down, especially when we have spikes of usage during after presentations. Who is the right person to take this on? | infrastructure | urgent load testing as part of release cycle we urgently need to develop some load tests we can all agree that the servers can not go down especially when we have spikes of usage during after presentations who is the right person to take this on | 1 |
360,766 | 25,309,712,107 | IssuesEvent | 2022-11-17 16:31:40 | nikodallanoce/PokeBOT | https://api.github.com/repos/nikodallanoce/PokeBOT | closed | Update the readme with the most useful informations | documentation | Modificare il readme in modo che contenga le informazioni piΓΉ utili del progetto, in particolare:
- nome del gruppo e membri.
- quali argomenti sono stati considerati nel progetto (questo Γ¨ possibile scriverlo nel report).
- come giocare contro il bot in remoto.
- come testare il bot in un server locale ed il setup di quest'ultimo. | 1.0 | Update the readme with the most useful informations - Modificare il readme in modo che contenga le informazioni piΓΉ utili del progetto, in particolare:
- nome del gruppo e membri.
- quali argomenti sono stati considerati nel progetto (questo Γ¨ possibile scriverlo nel report).
- come giocare contro il bot in remoto.
- come testare il bot in un server locale ed il setup di quest'ultimo. | non_infrastructure | update the readme with the most useful informations modificare il readme in modo che contenga le informazioni piΓΉ utili del progetto in particolare nome del gruppo e membri quali argomenti sono stati considerati nel progetto questo Γ¨ possibile scriverlo nel report come giocare contro il bot in remoto come testare il bot in un server locale ed il setup di quest ultimo | 0 |
444,757 | 31,145,366,703 | IssuesEvent | 2023-08-16 05:52:55 | appsmithorg/appsmith-docs | https://api.github.com/repos/appsmithorg/appsmith-docs | closed | [Docs]: Widget Sidebar | Documentation User Education Pod | ### Is there an existing issue for this?
- [X] I have searched the existing issues
### Documentation Link
_No response_
### Discord/slack/intercom Link
_No response_
### Describe the problem and improvement.
[Docs]: Widget Sidebar | 1.0 | [Docs]: Widget Sidebar - ### Is there an existing issue for this?
- [X] I have searched the existing issues
### Documentation Link
_No response_
### Discord/slack/intercom Link
_No response_
### Describe the problem and improvement.
[Docs]: Widget Sidebar | non_infrastructure | widget sidebar is there an existing issue for this i have searched the existing issues documentation link no response discord slack intercom link no response describe the problem and improvement widget sidebar | 0 |
19,716 | 13,401,546,432 | IssuesEvent | 2020-09-03 17:29:22 | icgc-argo/roadmap | https://api.github.com/repos/icgc-argo/roadmap | closed | RDPC Infra / Devops May Release | Epic INFRASTRUCTURE devops | **Links**
- Network, Monitoring: https://docs.google.com/document/d/1h6a4cJe0X9FmgzWHvbn_e2tqyr2uOKgM5tye2AM1zKk/edit?usp=sharing
- Backups: https://docs.google.com/document/d/1SG9qWWjutr01rYg-bk1Ba1_cnM_g6tMur4WHBmh0_-0/edit?usp=sharing
**Network Security and Config**
- [x] Pod Security enabled.
- [x] Ingress Controllers (private and public) setup.
- [x] Network Policies enabled.
**Monitoring**
- [ ] Endpoint monitoring enabled
- [x] External cluster monitoring enabled (this is done for Collab already using Uptime Robot)
- [ ] Dashboards with simple templated metrics enabled and accessible (Grafana, Prometheus, Kibana)
**Backups**
- [ ] Kafka (filestore)
- [ ] Song (postgres)
- [ ] QC module (undecided storage)
- [x] Kubernetes configuration
- [ ] Jenkins configuration
- [ ] Replication to IT / Isilon for all backups
--------
- [ ] Secure and deployed Canadian RDPC setup with all services deployed
- [ ] Logging and monitoring
- [ ] Caching in Collab - Github, Quay, Dockerhub | 1.0 | RDPC Infra / Devops May Release - **Links**
- Network, Monitoring: https://docs.google.com/document/d/1h6a4cJe0X9FmgzWHvbn_e2tqyr2uOKgM5tye2AM1zKk/edit?usp=sharing
- Backups: https://docs.google.com/document/d/1SG9qWWjutr01rYg-bk1Ba1_cnM_g6tMur4WHBmh0_-0/edit?usp=sharing
**Network Security and Config**
- [x] Pod Security enabled.
- [x] Ingress Controllers (private and public) setup.
- [x] Network Policies enabled.
**Monitoring**
- [ ] Endpoint monitoring enabled
- [x] External cluster monitoring enabled (this is done for Collab already using Uptime Robot)
- [ ] Dashboards with simple templated metrics enabled and accessible (Grafana, Prometheus, Kibana)
**Backups**
- [ ] Kafka (filestore)
- [ ] Song (postgres)
- [ ] QC module (undecided storage)
- [x] Kubernetes configuration
- [ ] Jenkins configuration
- [ ] Replication to IT / Isilon for all backups
--------
- [ ] Secure and deployed Canadian RDPC setup with all services deployed
- [ ] Logging and monitoring
- [ ] Caching in Collab - Github, Quay, Dockerhub | infrastructure | rdpc infra devops may release links network monitoring backups network security and config pod security enabled ingress controllers private and public setup network policies enabled monitoring endpoint monitoring enabled external cluster monitoring enabled this is done for collab already using uptime robot dashboards with simple templated metrics enabled and accessible grafana prometheus kibana backups kafka filestore song postgres qc module undecided storage kubernetes configuration jenkins configuration replication to it isilon for all backups secure and deployed canadian rdpc setup with all services deployed logging and monitoring caching in collab github quay dockerhub | 1 |
11,543 | 17,396,959,267 | IssuesEvent | 2021-08-02 14:32:23 | renovatebot/renovate | https://api.github.com/repos/renovatebot/renovate | closed | Can't run under react native repositories | manager:gradle priority-3-normal reproduction:provided status:requirements | **What Renovate type are you using?**
<!-- Tell us if you're using the hosted App, or if you are self-hosted Renovate yourself. Platform too (GitHub, GitLab, etc) if you think it's relevant. -->
Renovate Pro (self hosted)
**Describe the bug**
<!-- A clear and concise description of what the bug is. -->
It seems like renovate doesn't run `yarn` before running the gradlew command, resulting in errors due missing files.
**Did you see anything helpful in debug logs?**
<!-- If you're running self-hosted, run with `--log-level=debug` or LOG_LEVEL=debug and search for whatever dependency/branch/PR that is causing the problem. If you are using the Renovate App, log into https://app.renovatebot.com/dashboard and locate the correct job log for when the problem occurred (e.g. when the PR was created). The Job ID will help us locate it. -->
```
DEBUG: Start gradle command (repository=mycompany/app/react)
"cmd": "./gradlew --init-script renovate-plugin.gradle renovate"
WARN: Gradle command ./gradlew --init-script renovate-plugin.gradle renovate failed. Exit code: 1. (repository=mycompany/app/react)
"err": {
"killed": false,
"code": 1,
"signal": null,
"cmd": "./gradlew --init-script renovate-plugin.gradle renovate",
"stdout": "",
"stderr": "\nFAILURE: Build failed with an exception.\n\n* Where:\nSettings file '/tmp/renovate/gitlab/mycompany/app/react/android/settings.gradle' line: 2\n\n* What went wrong:\nA problem occurred evaluating settings 'mycompany'.\n> Could not read script '/tmp/renovate/gitlab/mycompany/app/react/node_modules/react-native-unimodules/gradle.groovy' as it does not exist.\n\n* Try:\nRun with --stacktrace option to get the stack trace. Run with --info or --debug option to get more log output. Run with --scan to get full insights.\n\n* Get more help at https://help.gradle.org\n\nBUILD FAILED in 1s\n",
"message": "Command failed: ./gradlew --init-script renovate-plugin.gradle renovate\n\nFAILURE: Build failed with an exception.\n\n* Where:\nSettings file '/tmp/renovate/gitlab/mycompany/app/react/android/settings.gradle' line: 2\n\n* What went wrong:\nA problem occurred evaluating settings 'mycompany'.\n> Could not read script '/tmp/renovate/gitlab/mycompany/app/react/node_modules/react-native-unimodules/gradle.groovy' as it does not exist.\n\n* Try:\nRun with --stacktrace option to get the stack trace. Run with --info or --debug option to get more log output. Run with --scan to get full insights.\n\n* Get more help at https://help.gradle.org\n\nBUILD FAILED in 1s\n",
"stack": "Error: Command failed: ./gradlew --init-script renovate-plugin.gradle renovate\n\nFAILURE: Build failed with an exception.\n\n* Where:\nSettings file '/tmp/renovate/gitlab/mycompany/app/react/android/settings.gradle' line: 2\n\n* What went wrong:\nA problem occurred evaluating settings 'mycompany'.\n> Could not read script '/tmp/renovate/gitlab/mycompany/app/react/node_modules/react-native-unimodules/gradle.groovy' as it does not exist.\n\n* Try:\nRun with --stacktrace option to get the stack trace. Run with --info or --debug option to get more log output. Run with --scan to get full insights.\n\n* Get more help at https://help.gradle.org\n\nBUILD FAILED in 1s\n\n at ChildProcess.exithandler (child_process.js:294:12)\n at ChildProcess.emit (events.js:198:13)\n at ChildProcess.EventEmitter.emit (domain.js:448:20)\n at maybeClose (internal/child_process.js:982:16)\n at Process.ChildProcess._handle.onexit (internal/child_process.js:259:5)"
}
INFO: Aborting Renovate due to Gradle lookup errors (repository=mycompany/app/react)
INFO: Registry error - skipping (repository=mycompany/app/react)
INFO: Finished repository (repository=mycompany/app/react)
```
**To Reproduce**
<!-- To fix a bug, we nearly always need a *minimal* repo to reproduce it in, before verifying that our fix works using the same repo. If you provide a public repo that already reproduces the problem, then your bug will get highest priority for fixing. If you can't reproduce it in a simple repo, do your best to describe how it could be reproduced, or under what circumstances the bug occurs. -->
I suppose it's enough to create a react native app from the boilerplate though I haven't tried reproducing that way just yet. As it can be inferred from the logs above, this app uses unimodules, as it used to be an expo managed app which is now ejected. So maybe we would have better luck starting from expo and then ejecting.
**Additional context**
<!-- Add any other context about the problem here, including your own debugging or ideas on what went wrong. -->
Perhaps a generic solution for this issue would be creating a `preUpgradeTasks`, just like `postUpgradeTasks`. But maybe we want Renovate to "just work" with multiple, interdependent package providers, such as all package managers such as `yarn/npm install`, `pod install`, etc. get triggered before proceeding to the next steps in the upgrade process? | 1.0 | Can't run under react native repositories - **What Renovate type are you using?**
<!-- Tell us if you're using the hosted App, or if you are self-hosted Renovate yourself. Platform too (GitHub, GitLab, etc) if you think it's relevant. -->
Renovate Pro (self hosted)
**Describe the bug**
<!-- A clear and concise description of what the bug is. -->
It seems like renovate doesn't run `yarn` before running the gradlew command, resulting in errors due missing files.
**Did you see anything helpful in debug logs?**
<!-- If you're running self-hosted, run with `--log-level=debug` or LOG_LEVEL=debug and search for whatever dependency/branch/PR that is causing the problem. If you are using the Renovate App, log into https://app.renovatebot.com/dashboard and locate the correct job log for when the problem occurred (e.g. when the PR was created). The Job ID will help us locate it. -->
```
DEBUG: Start gradle command (repository=mycompany/app/react)
"cmd": "./gradlew --init-script renovate-plugin.gradle renovate"
WARN: Gradle command ./gradlew --init-script renovate-plugin.gradle renovate failed. Exit code: 1. (repository=mycompany/app/react)
"err": {
"killed": false,
"code": 1,
"signal": null,
"cmd": "./gradlew --init-script renovate-plugin.gradle renovate",
"stdout": "",
"stderr": "\nFAILURE: Build failed with an exception.\n\n* Where:\nSettings file '/tmp/renovate/gitlab/mycompany/app/react/android/settings.gradle' line: 2\n\n* What went wrong:\nA problem occurred evaluating settings 'mycompany'.\n> Could not read script '/tmp/renovate/gitlab/mycompany/app/react/node_modules/react-native-unimodules/gradle.groovy' as it does not exist.\n\n* Try:\nRun with --stacktrace option to get the stack trace. Run with --info or --debug option to get more log output. Run with --scan to get full insights.\n\n* Get more help at https://help.gradle.org\n\nBUILD FAILED in 1s\n",
"message": "Command failed: ./gradlew --init-script renovate-plugin.gradle renovate\n\nFAILURE: Build failed with an exception.\n\n* Where:\nSettings file '/tmp/renovate/gitlab/mycompany/app/react/android/settings.gradle' line: 2\n\n* What went wrong:\nA problem occurred evaluating settings 'mycompany'.\n> Could not read script '/tmp/renovate/gitlab/mycompany/app/react/node_modules/react-native-unimodules/gradle.groovy' as it does not exist.\n\n* Try:\nRun with --stacktrace option to get the stack trace. Run with --info or --debug option to get more log output. Run with --scan to get full insights.\n\n* Get more help at https://help.gradle.org\n\nBUILD FAILED in 1s\n",
"stack": "Error: Command failed: ./gradlew --init-script renovate-plugin.gradle renovate\n\nFAILURE: Build failed with an exception.\n\n* Where:\nSettings file '/tmp/renovate/gitlab/mycompany/app/react/android/settings.gradle' line: 2\n\n* What went wrong:\nA problem occurred evaluating settings 'mycompany'.\n> Could not read script '/tmp/renovate/gitlab/mycompany/app/react/node_modules/react-native-unimodules/gradle.groovy' as it does not exist.\n\n* Try:\nRun with --stacktrace option to get the stack trace. Run with --info or --debug option to get more log output. Run with --scan to get full insights.\n\n* Get more help at https://help.gradle.org\n\nBUILD FAILED in 1s\n\n at ChildProcess.exithandler (child_process.js:294:12)\n at ChildProcess.emit (events.js:198:13)\n at ChildProcess.EventEmitter.emit (domain.js:448:20)\n at maybeClose (internal/child_process.js:982:16)\n at Process.ChildProcess._handle.onexit (internal/child_process.js:259:5)"
}
INFO: Aborting Renovate due to Gradle lookup errors (repository=mycompany/app/react)
INFO: Registry error - skipping (repository=mycompany/app/react)
INFO: Finished repository (repository=mycompany/app/react)
```
**To Reproduce**
<!-- To fix a bug, we nearly always need a *minimal* repo to reproduce it in, before verifying that our fix works using the same repo. If you provide a public repo that already reproduces the problem, then your bug will get highest priority for fixing. If you can't reproduce it in a simple repo, do your best to describe how it could be reproduced, or under what circumstances the bug occurs. -->
I suppose it's enough to create a react native app from the boilerplate though I haven't tried reproducing that way just yet. As it can be inferred from the logs above, this app uses unimodules, as it used to be an expo managed app which is now ejected. So maybe we would have better luck starting from expo and then ejecting.
**Additional context**
<!-- Add any other context about the problem here, including your own debugging or ideas on what went wrong. -->
Perhaps a generic solution for this issue would be creating a `preUpgradeTasks`, just like `postUpgradeTasks`. But maybe we want Renovate to "just work" with multiple, interdependent package providers, such as all package managers such as `yarn/npm install`, `pod install`, etc. get triggered before proceeding to the next steps in the upgrade process? | non_infrastructure | can t run under react native repositories what renovate type are you using renovate pro self hosted describe the bug it seems like renovate doesn t run yarn before running the gradlew command resulting in errors due missing files did you see anything helpful in debug logs debug start gradle command repository mycompany app react cmd gradlew init script renovate plugin gradle renovate warn gradle command gradlew init script renovate plugin gradle renovate failed exit code repository mycompany app react err killed false code signal null cmd gradlew init script renovate plugin gradle renovate stdout stderr nfailure build failed with an exception n n where nsettings file tmp renovate gitlab mycompany app react android settings gradle line n n what went wrong na problem occurred evaluating settings mycompany n could not read script tmp renovate gitlab mycompany app react node modules react native unimodules gradle groovy as it does not exist n n try nrun with stacktrace option to get the stack trace run with info or debug option to get more log output run with scan to get full insights n n get more help at failed in n message command failed gradlew init script renovate plugin gradle renovate n nfailure build failed with an exception n n where nsettings file tmp renovate gitlab mycompany app react android settings gradle line n n what went wrong na problem occurred evaluating settings mycompany n could not read script tmp renovate gitlab mycompany app react node modules react native unimodules gradle groovy as it does not exist n n try nrun with stacktrace option to get the stack trace run with info or debug option to get more log output run with scan to get full insights n n get more help at failed in n stack error command failed gradlew init script renovate plugin gradle renovate n nfailure build failed with an exception n n where nsettings file tmp renovate gitlab mycompany app react android settings gradle line n n what went wrong na problem occurred evaluating settings mycompany n could not read script tmp renovate gitlab mycompany app react node modules react native unimodules gradle groovy as it does not exist n n try nrun with stacktrace option to get the stack trace run with info or debug option to get more log output run with scan to get full insights n n get more help at failed in n n at childprocess exithandler child process js n at childprocess emit events js n at childprocess eventemitter emit domain js n at maybeclose internal child process js n at process childprocess handle onexit internal child process js info aborting renovate due to gradle lookup errors repository mycompany app react info registry error skipping repository mycompany app react info finished repository repository mycompany app react to reproduce i suppose it s enough to create a react native app from the boilerplate though i haven t tried reproducing that way just yet as it can be inferred from the logs above this app uses unimodules as it used to be an expo managed app which is now ejected so maybe we would have better luck starting from expo and then ejecting additional context perhaps a generic solution for this issue would be creating a preupgradetasks just like postupgradetasks but maybe we want renovate to just work with multiple interdependent package providers such as all package managers such as yarn npm install pod install etc get triggered before proceeding to the next steps in the upgrade process | 0 |
23,667 | 16,509,017,501 | IssuesEvent | 2021-05-26 00:00:18 | microsoft/TypeScript | https://api.github.com/repos/microsoft/TypeScript | opened | Harden response delivery to ensure services is consistent with server | Domain: TSServer Infrastructure | Spoke with @andrewbranch
> If you \[add] a member to any `protocol.*Response`, there's a significant chance that your fourslash tests work but it doesn't actually work over the TS Server because `session.ts` picked apart the response in the process of converting positions to line/character and didn't know to put it back
How could we fix it?
> Destructuring with an object rest instead of requiring session.ts to name every property it wants to preserve
| 1.0 | Harden response delivery to ensure services is consistent with server - Spoke with @andrewbranch
> If you \[add] a member to any `protocol.*Response`, there's a significant chance that your fourslash tests work but it doesn't actually work over the TS Server because `session.ts` picked apart the response in the process of converting positions to line/character and didn't know to put it back
How could we fix it?
> Destructuring with an object rest instead of requiring session.ts to name every property it wants to preserve
| infrastructure | harden response delivery to ensure services is consistent with server spoke with andrewbranch if you a member to any protocol response there s a significant chance that your fourslash tests work but it doesn t actually work over the ts server because session ts picked apart the response in the process of converting positions to line character and didn t know to put it back how could we fix it destructuring with an object rest instead of requiring session ts to name every property it wants to preserve | 1 |
35,126 | 30,778,130,369 | IssuesEvent | 2023-07-31 08:08:22 | arduino/arduino-ide | https://api.github.com/repos/arduino/arduino-ide | opened | The default preference value generated for `arduino.ide.updateChannel` never worked | topic: infrastructure type: imperfection | ### Describe the problem
The default preference value is `'stable'`:
https://github.com/arduino/arduino-ide/blob/144df893d0dafec64a26565cf912a98f32572da9/arduino-ide-extension/src/browser/arduino-preferences.ts#L145
In the IDE2 packager code, the default preference value of the `electron-updater` channel is overridden with a generated one at packaging time.
The packager logic calculates the output channel name here:
https://github.com/arduino/arduino-ide/blob/144df893d0dafec64a26565cf912a98f32572da9/electron/packager/config.js#L126
It is `'stable'` for release builds and `'nightly'` for the nightly builds. Otherwise, it's omitted.
However, the generated default update channel property is merged into an incorrect location. It's put under `theia.`:
https://github.com/arduino/arduino-ide/blob/144df893d0dafec64a26565cf912a98f32572da9/electron/packager/config.js#L134
Output from the `2.1.1` release:
```sh
cat /Applications/Arduino\ IDE\ 2.1.1.app/Contents/Resources/app/package.json | jq '.theia.frontend.config'
```
```json
{
"applicationName": "Arduino IDE",
"defaultTheme": {
"light": "arduino-theme",
"dark": "arduino-theme-dark"
},
"validatePreferencesSchema": false,
"preferences": {
"window.title": "${rootName}${activeEditorShort}${appName}",
"files.autoSave": "afterDelay",
"editor.minimap.enabled": false,
"editor.tabSize": 2,
"editor.scrollBeyondLastLine": false,
"editor.quickSuggestions": {
"other": false,
"comments": false,
"strings": false
},
"editor.maxTokenizationLineLength": 500,
"editor.bracketPairColorization.enabled": false,
"breadcrumbs.enabled": false,
"workbench.tree.renderIndentGuides": "none",
"explorer.compactFolders": false
},
"arduino.ide.updateChannel": "stable",
"buildDate": "2023-06-30T16:00:43.829Z"
}
```
The generated `"arduino.ide.updateChannel": "stable",` entry must be under `preferences`. Otherwise, it has no effect. See https://github.com/eclipse-theia/theia/pull/4766. Since the default preference value of the update channel is `'stable'` the incorrect default preference value does not change anything in release and snapshot builds.
For the nightly, it's also broken:
```sh
cat ~/Desktop/Arduino\ IDE\.app/Contents/Resources/app/package.json | jq '.theia.frontend.config'
```
```json
{
"applicationName": "Arduino IDE",
"defaultTheme": {
"light": "arduino-theme",
"dark": "arduino-theme-dark"
},
"defaultIconTheme": "none",
"validatePreferencesSchema": false,
"preferences": {
"window.title": "${rootName}${activeEditorShort}${appName}",
"files.autoSave": "afterDelay",
"editor.minimap.enabled": false,
"editor.tabSize": 2,
"editor.scrollBeyondLastLine": false,
"editor.quickSuggestions": {
"other": false,
"comments": false,
"strings": false
},
"editor.maxTokenizationLineLength": 500,
"editor.bracketPairColorization.enabled": false,
"breadcrumbs.enabled": false,
"workbench.tree.renderIndentGuides": "none",
"explorer.compactFolders": false
},
"arduino.ide.updateChannel": "nightly",
"buildDate": "2023-07-31T03:04:11.024Z"
}
```
If I understand the intentions here, IDE2 wants to promote the `'nightly'` update site for nightly builds and the `'stable'` otherwise.
- this won't work if the user explicitly sets any values for `` in `~/.arduinoIDE/settings.json`.
- this won't work because IDE2 generates the default preference value to an incorrect location.
### To reproduce
See the issue description.
### Expected behavior
I don't know the intentions, but it doesn't work now.
### Arduino IDE version
2023-07-31T03:04:11.024Z
### Operating system
macOS
### Operating system version
13.4.1
### Additional context
#2144 will drop the invalid generation.
### Issue checklist
- [X] I searched for previous reports in [the issue tracker](https://github.com/arduino/arduino-ide/issues?q=)
- [X] I verified the problem still occurs when using the latest [nightly build](https://www.arduino.cc/en/software#nightly-builds)
- [X] My report contains all necessary details | 1.0 | The default preference value generated for `arduino.ide.updateChannel` never worked - ### Describe the problem
The default preference value is `'stable'`:
https://github.com/arduino/arduino-ide/blob/144df893d0dafec64a26565cf912a98f32572da9/arduino-ide-extension/src/browser/arduino-preferences.ts#L145
In the IDE2 packager code, the default preference value of the `electron-updater` channel is overridden with a generated one at packaging time.
The packager logic calculates the output channel name here:
https://github.com/arduino/arduino-ide/blob/144df893d0dafec64a26565cf912a98f32572da9/electron/packager/config.js#L126
It is `'stable'` for release builds and `'nightly'` for the nightly builds. Otherwise, it's omitted.
However, the generated default update channel property is merged into an incorrect location. It's put under `theia.`:
https://github.com/arduino/arduino-ide/blob/144df893d0dafec64a26565cf912a98f32572da9/electron/packager/config.js#L134
Output from the `2.1.1` release:
```sh
cat /Applications/Arduino\ IDE\ 2.1.1.app/Contents/Resources/app/package.json | jq '.theia.frontend.config'
```
```json
{
"applicationName": "Arduino IDE",
"defaultTheme": {
"light": "arduino-theme",
"dark": "arduino-theme-dark"
},
"validatePreferencesSchema": false,
"preferences": {
"window.title": "${rootName}${activeEditorShort}${appName}",
"files.autoSave": "afterDelay",
"editor.minimap.enabled": false,
"editor.tabSize": 2,
"editor.scrollBeyondLastLine": false,
"editor.quickSuggestions": {
"other": false,
"comments": false,
"strings": false
},
"editor.maxTokenizationLineLength": 500,
"editor.bracketPairColorization.enabled": false,
"breadcrumbs.enabled": false,
"workbench.tree.renderIndentGuides": "none",
"explorer.compactFolders": false
},
"arduino.ide.updateChannel": "stable",
"buildDate": "2023-06-30T16:00:43.829Z"
}
```
The generated `"arduino.ide.updateChannel": "stable",` entry must be under `preferences`. Otherwise, it has no effect. See https://github.com/eclipse-theia/theia/pull/4766. Since the default preference value of the update channel is `'stable'` the incorrect default preference value does not change anything in release and snapshot builds.
For the nightly, it's also broken:
```sh
cat ~/Desktop/Arduino\ IDE\.app/Contents/Resources/app/package.json | jq '.theia.frontend.config'
```
```json
{
"applicationName": "Arduino IDE",
"defaultTheme": {
"light": "arduino-theme",
"dark": "arduino-theme-dark"
},
"defaultIconTheme": "none",
"validatePreferencesSchema": false,
"preferences": {
"window.title": "${rootName}${activeEditorShort}${appName}",
"files.autoSave": "afterDelay",
"editor.minimap.enabled": false,
"editor.tabSize": 2,
"editor.scrollBeyondLastLine": false,
"editor.quickSuggestions": {
"other": false,
"comments": false,
"strings": false
},
"editor.maxTokenizationLineLength": 500,
"editor.bracketPairColorization.enabled": false,
"breadcrumbs.enabled": false,
"workbench.tree.renderIndentGuides": "none",
"explorer.compactFolders": false
},
"arduino.ide.updateChannel": "nightly",
"buildDate": "2023-07-31T03:04:11.024Z"
}
```
If I understand the intentions here, IDE2 wants to promote the `'nightly'` update site for nightly builds and the `'stable'` otherwise.
- this won't work if the user explicitly sets any values for `` in `~/.arduinoIDE/settings.json`.
- this won't work because IDE2 generates the default preference value to an incorrect location.
### To reproduce
See the issue description.
### Expected behavior
I don't know the intentions, but it doesn't work now.
### Arduino IDE version
2023-07-31T03:04:11.024Z
### Operating system
macOS
### Operating system version
13.4.1
### Additional context
#2144 will drop the invalid generation.
### Issue checklist
- [X] I searched for previous reports in [the issue tracker](https://github.com/arduino/arduino-ide/issues?q=)
- [X] I verified the problem still occurs when using the latest [nightly build](https://www.arduino.cc/en/software#nightly-builds)
- [X] My report contains all necessary details | infrastructure | the default preference value generated for arduino ide updatechannel never worked describe the problem the default preference value is stable in the packager code the default preference value of the electron updater channel is overridden with a generated one at packaging time the packager logic calculates the output channel name here it is stable for release builds and nightly for the nightly builds otherwise it s omitted however the generated default update channel property is merged into an incorrect location it s put under theia output from the release sh cat applications arduino ide app contents resources app package json jq theia frontend config json applicationname arduino ide defaulttheme light arduino theme dark arduino theme dark validatepreferencesschema false preferences window title rootname activeeditorshort appname files autosave afterdelay editor minimap enabled false editor tabsize editor scrollbeyondlastline false editor quicksuggestions other false comments false strings false editor maxtokenizationlinelength editor bracketpaircolorization enabled false breadcrumbs enabled false workbench tree renderindentguides none explorer compactfolders false arduino ide updatechannel stable builddate the generated arduino ide updatechannel stable entry must be under preferences otherwise it has no effect see since the default preference value of the update channel is stable the incorrect default preference value does not change anything in release and snapshot builds for the nightly it s also broken sh cat desktop arduino ide app contents resources app package json jq theia frontend config json applicationname arduino ide defaulttheme light arduino theme dark arduino theme dark defaulticontheme none validatepreferencesschema false preferences window title rootname activeeditorshort appname files autosave afterdelay editor minimap enabled false editor tabsize editor scrollbeyondlastline false editor quicksuggestions other false comments false strings false editor maxtokenizationlinelength editor bracketpaircolorization enabled false breadcrumbs enabled false workbench tree renderindentguides none explorer compactfolders false arduino ide updatechannel nightly builddate if i understand the intentions here wants to promote the nightly update site for nightly builds and the stable otherwise this won t work if the user explicitly sets any values for in arduinoide settings json this won t work because generates the default preference value to an incorrect location to reproduce see the issue description expected behavior i don t know the intentions but it doesn t work now arduino ide version operating system macos operating system version additional context will drop the invalid generation issue checklist i searched for previous reports in i verified the problem still occurs when using the latest my report contains all necessary details | 1 |
251,798 | 21,523,642,252 | IssuesEvent | 2022-04-28 16:14:18 | willowtreeapps/vocable-ios | https://api.github.com/repos/willowtreeapps/vocable-ios | closed | Categories List Screen | Last Category Down Button Is Not Disabled | bug v1.4 Test verified | **Description:**
In the categories list screen, last category (8th) down button is not disabled (SS1). But, when we add a custom category as 9th category, last category down button is disabled (SS2). If we remove last (9th) custom category and having totally 8 categories, last category down button is not disabled (SS3). If we add a custom category and remove a preset category, last category down button is disabled (SS4).
**Steps To Reproduce:**
1. Go to the `Settings`
2. Select `Categories and Phrase`
3. Add a custom category
4. Remove last custom category
5. Add a custom category
6. Remove a preset category
**Expected behavior:**
Last category down button is disabled.
**Actual behavior:**
Last category down button is not disabled.
**Device Information:**
iPad Pro - Test Device - iOS 15.3.1
**Build:**
1.4.0 - 2369 (TestFlight)
**Screenshots:**
**SS1**

**SS2**

**SS3**

**SS4**

| 1.0 | Categories List Screen | Last Category Down Button Is Not Disabled - **Description:**
In the categories list screen, last category (8th) down button is not disabled (SS1). But, when we add a custom category as 9th category, last category down button is disabled (SS2). If we remove last (9th) custom category and having totally 8 categories, last category down button is not disabled (SS3). If we add a custom category and remove a preset category, last category down button is disabled (SS4).
**Steps To Reproduce:**
1. Go to the `Settings`
2. Select `Categories and Phrase`
3. Add a custom category
4. Remove last custom category
5. Add a custom category
6. Remove a preset category
**Expected behavior:**
Last category down button is disabled.
**Actual behavior:**
Last category down button is not disabled.
**Device Information:**
iPad Pro - Test Device - iOS 15.3.1
**Build:**
1.4.0 - 2369 (TestFlight)
**Screenshots:**
**SS1**

**SS2**

**SS3**

**SS4**

| non_infrastructure | categories list screen last category down button is not disabled description in the categories list screen last category down button is not disabled but when we add a custom category as category last category down button is disabled if we remove last custom category and having totally categories last category down button is not disabled if we add a custom category and remove a preset category last category down button is disabled steps to reproduce go to the settings select categories and phrase add a custom category remove last custom category add a custom category remove a preset category expected behavior last category down button is disabled actual behavior last category down button is not disabled device information ipad pro test device ios build testflight screenshots | 0 |
10,444 | 8,568,551,454 | IssuesEvent | 2018-11-10 22:43:47 | triplea-game/triplea | https://api.github.com/repos/triplea-game/triplea | closed | Moderator (DB user) is not automatically created | Infrastructure | When creating infrastructure and doing automatic DB setup, the moderator user is something I did not get to. AFAIK it does not exist on prod.
Given our conversations for an http server in #3865 ; I recommend we instead focus on getting that up and running and providing endpoints to return information to moderator users. It's a different mechanism to achieve the same, but a more useful/flexible result that will be easier to use. I think that would be a better focus/effort, if there is general agreement we can potentially close this issue with that consensus/plan.
There is a question of how to proceed, marking as discussion until we determine how to move forward. | 1.0 | Moderator (DB user) is not automatically created - When creating infrastructure and doing automatic DB setup, the moderator user is something I did not get to. AFAIK it does not exist on prod.
Given our conversations for an http server in #3865 ; I recommend we instead focus on getting that up and running and providing endpoints to return information to moderator users. It's a different mechanism to achieve the same, but a more useful/flexible result that will be easier to use. I think that would be a better focus/effort, if there is general agreement we can potentially close this issue with that consensus/plan.
There is a question of how to proceed, marking as discussion until we determine how to move forward. | infrastructure | moderator db user is not automatically created when creating infrastructure and doing automatic db setup the moderator user is something i did not get to afaik it does not exist on prod given our conversations for an http server in i recommend we instead focus on getting that up and running and providing endpoints to return information to moderator users it s a different mechanism to achieve the same but a more useful flexible result that will be easier to use i think that would be a better focus effort if there is general agreement we can potentially close this issue with that consensus plan there is a question of how to proceed marking as discussion until we determine how to move forward | 1 |
24,965 | 17,953,452,866 | IssuesEvent | 2021-09-13 02:46:13 | APSIMInitiative/ApsimX | https://api.github.com/repos/APSIMInitiative/ApsimX | closed | Profiling and refactoring of forest model to improve speed | interface/infrastructure refactor | The forest model is slow to run and should be refactored to improve execution speed.
This will be implemented during the sprint Aug 30-Sep 03. | 1.0 | Profiling and refactoring of forest model to improve speed - The forest model is slow to run and should be refactored to improve execution speed.
This will be implemented during the sprint Aug 30-Sep 03. | infrastructure | profiling and refactoring of forest model to improve speed the forest model is slow to run and should be refactored to improve execution speed this will be implemented during the sprint aug sep | 1 |
25,331 | 18,477,253,549 | IssuesEvent | 2021-10-18 08:43:36 | fom-big-data-bike-path-quality/fom-big-data-bike-path-quality-analytics | https://api.github.com/repos/fom-big-data-bike-path-quality/fom-big-data-bike-path-quality-analytics | closed | Parametrize training on Google Cloud | infrastructure cloud | Extend Github actions so that the training can be parametrized. | 1.0 | Parametrize training on Google Cloud - Extend Github actions so that the training can be parametrized. | infrastructure | parametrize training on google cloud extend github actions so that the training can be parametrized | 1 |
10,542 | 8,628,850,386 | IssuesEvent | 2018-11-21 18:41:03 | square/misk-web | https://api.github.com/repos/square/misk-web | opened | Add createPackageJson function/tool to `@misk/dev` that pulls from miskTab.json | infrastructure | - Generates package json with latest packages determined from Docker image version in src/miskTab.json
- Put json file that is copied into Docker image that includes mappings of Docker image version to the @misk/ versions
- Make it so src/index.tsx and routes can pull slug from miskTab.json | 1.0 | Add createPackageJson function/tool to `@misk/dev` that pulls from miskTab.json - - Generates package json with latest packages determined from Docker image version in src/miskTab.json
- Put json file that is copied into Docker image that includes mappings of Docker image version to the @misk/ versions
- Make it so src/index.tsx and routes can pull slug from miskTab.json | infrastructure | add createpackagejson function tool to misk dev that pulls from misktab json generates package json with latest packages determined from docker image version in src misktab json put json file that is copied into docker image that includes mappings of docker image version to the misk versions make it so src index tsx and routes can pull slug from misktab json | 1 |
22,323 | 15,102,877,555 | IssuesEvent | 2021-02-08 09:35:25 | hyperledger-labs/business-partner-agent | https://api.github.com/repos/hyperledger-labs/business-partner-agent | closed | Deploy helm chart into helm repository | Infrastructure | In order to properly deploy a helm release, we need to provide a public helm repository (--> git hub pages) | 1.0 | Deploy helm chart into helm repository - In order to properly deploy a helm release, we need to provide a public helm repository (--> git hub pages) | infrastructure | deploy helm chart into helm repository in order to properly deploy a helm release we need to provide a public helm repository git hub pages | 1 |
680,196 | 23,261,960,114 | IssuesEvent | 2022-08-04 14:10:59 | woocommerce/woocommerce-gateway-stripe | https://api.github.com/repos/woocommerce/woocommerce-gateway-stripe | opened | Customers shown incorrect order confirmation when manipulating Stripe intent response | priority: low type: bug component: stripe checkout component: UPE | **Describe the bug**
During checkout, customers can be redirected and shown the Order Received page, when payment is still pending on the order. This can be achieved by manipulating the response from Stripe's API when confirming a payment intent to set the `amount` of the response to `0` and the `status` to `null`.
This was achieved during an attempted exploit on the checkout page while using the WC Stripe plugin. This does not actually successfully complete an order nor does it successfully avoid completing a payment, but it does show a customer the Order Received page, with full paid amount displayed, when the WC Order still has a status of "Pending payment". This could be used by a crafty customer to present a merchant with an "invoice" or confirmation of a completed order in order to convince the merchant that an amount had indeed been paid (although, this should definitely not be accepted as payment confirmation by any discerning store owner).
Please refer to [this relevant SIRT thread](p3btAN-1ND-p2) for more details.
**To Reproduce**
Steps to reproduce the behavior:
1. Setup test store and install and configure the WC Stripe plugin, **ensuring you use a version where the UPE can be enabled**. I believe this should be anything after the 5.6.0 release, though I only tested this on the latest released version of the plugin.
2. In the Experimental Features section inside Stripe's Advanced Settings (_WC Stripe Settings > Settings > Advanced Settings_), ensure that "Try the new checkout experience (early access)" is selected. The UPE is (almost) the only way that we can use the Intents API--as opposed to Sources--and is (almost) the only way I could find to trigger a request to `/v1/payment_intents/pi_.../confirm` endpoint.
3. We will also need to setup the [Burp Suite app](https://portswigger.net/burp/releases/professional-community-2022-7-1?requestededition=community&requestedplatform=) (or something similar), so that we can use its browser to follow all requests made by our WC Store and manipulate the response of one in particular.
4. While using Burp Suite, add an item to your cart, and proceed to checkout page.
5. While on the checkout page turn on Burp Suite interceptor (_Proxy > Intercept > Intercept is on_), fill in card payment details, and select "Place order" to initiate checkout.
6. Keep forwarding requests until a request to `POST /v1/payment_intents/pi_.../confirm` appears.
7. Intercept and edit the response to this request (_Action > Do intercept > Response to this request_). Change `amount` to `00000` (or I think just `0` should be fine) and `status` to `null`. Forward this response and continue forwarding remaining requests.
8. You should arrive on the Order Received page, with the page communicating that the order has been completed with the full amount of the product displayed.
**Expected behavior**
If payment has successfully been completed, I would expect to show the customer the Order Received page and for the WC Order to have a status of "Processing". If payment has not been transacted, I would expect the customer to be shown an error instead and for the WC Order to have a status of "Pending payment".
**Environment (please complete the following information):**
WC Stripe 5.6.0+
**Additional context**
Please refer to [this relevant SIRT thread](p3btAN-1ND-p2) for more details. | 1.0 | Customers shown incorrect order confirmation when manipulating Stripe intent response - **Describe the bug**
During checkout, customers can be redirected and shown the Order Received page, when payment is still pending on the order. This can be achieved by manipulating the response from Stripe's API when confirming a payment intent to set the `amount` of the response to `0` and the `status` to `null`.
This was achieved during an attempted exploit on the checkout page while using the WC Stripe plugin. This does not actually successfully complete an order nor does it successfully avoid completing a payment, but it does show a customer the Order Received page, with full paid amount displayed, when the WC Order still has a status of "Pending payment". This could be used by a crafty customer to present a merchant with an "invoice" or confirmation of a completed order in order to convince the merchant that an amount had indeed been paid (although, this should definitely not be accepted as payment confirmation by any discerning store owner).
Please refer to [this relevant SIRT thread](p3btAN-1ND-p2) for more details.
**To Reproduce**
Steps to reproduce the behavior:
1. Setup test store and install and configure the WC Stripe plugin, **ensuring you use a version where the UPE can be enabled**. I believe this should be anything after the 5.6.0 release, though I only tested this on the latest released version of the plugin.
2. In the Experimental Features section inside Stripe's Advanced Settings (_WC Stripe Settings > Settings > Advanced Settings_), ensure that "Try the new checkout experience (early access)" is selected. The UPE is (almost) the only way that we can use the Intents API--as opposed to Sources--and is (almost) the only way I could find to trigger a request to `/v1/payment_intents/pi_.../confirm` endpoint.
3. We will also need to setup the [Burp Suite app](https://portswigger.net/burp/releases/professional-community-2022-7-1?requestededition=community&requestedplatform=) (or something similar), so that we can use its browser to follow all requests made by our WC Store and manipulate the response of one in particular.
4. While using Burp Suite, add an item to your cart, and proceed to checkout page.
5. While on the checkout page turn on Burp Suite interceptor (_Proxy > Intercept > Intercept is on_), fill in card payment details, and select "Place order" to initiate checkout.
6. Keep forwarding requests until a request to `POST /v1/payment_intents/pi_.../confirm` appears.
7. Intercept and edit the response to this request (_Action > Do intercept > Response to this request_). Change `amount` to `00000` (or I think just `0` should be fine) and `status` to `null`. Forward this response and continue forwarding remaining requests.
8. You should arrive on the Order Received page, with the page communicating that the order has been completed with the full amount of the product displayed.
**Expected behavior**
If payment has successfully been completed, I would expect to show the customer the Order Received page and for the WC Order to have a status of "Processing". If payment has not been transacted, I would expect the customer to be shown an error instead and for the WC Order to have a status of "Pending payment".
**Environment (please complete the following information):**
WC Stripe 5.6.0+
**Additional context**
Please refer to [this relevant SIRT thread](p3btAN-1ND-p2) for more details. | non_infrastructure | customers shown incorrect order confirmation when manipulating stripe intent response describe the bug during checkout customers can be redirected and shown the order received page when payment is still pending on the order this can be achieved by manipulating the response from stripe s api when confirming a payment intent to set the amount of the response to and the status to null this was achieved during an attempted exploit on the checkout page while using the wc stripe plugin this does not actually successfully complete an order nor does it successfully avoid completing a payment but it does show a customer the order received page with full paid amount displayed when the wc order still has a status of pending payment this could be used by a crafty customer to present a merchant with an invoice or confirmation of a completed order in order to convince the merchant that an amount had indeed been paid although this should definitely not be accepted as payment confirmation by any discerning store owner please refer to for more details to reproduce steps to reproduce the behavior setup test store and install and configure the wc stripe plugin ensuring you use a version where the upe can be enabled i believe this should be anything after the release though i only tested this on the latest released version of the plugin in the experimental features section inside stripe s advanced settings wc stripe settings settings advanced settings ensure that try the new checkout experience early access is selected the upe is almost the only way that we can use the intents api as opposed to sources and is almost the only way i could find to trigger a request to payment intents pi confirm endpoint we will also need to setup the or something similar so that we can use its browser to follow all requests made by our wc store and manipulate the response of one in particular while using burp suite add an item to your cart and proceed to checkout page while on the checkout page turn on burp suite interceptor proxy intercept intercept is on fill in card payment details and select place order to initiate checkout keep forwarding requests until a request to post payment intents pi confirm appears intercept and edit the response to this request action do intercept response to this request change amount to or i think just should be fine and status to null forward this response and continue forwarding remaining requests you should arrive on the order received page with the page communicating that the order has been completed with the full amount of the product displayed expected behavior if payment has successfully been completed i would expect to show the customer the order received page and for the wc order to have a status of processing if payment has not been transacted i would expect the customer to be shown an error instead and for the wc order to have a status of pending payment environment please complete the following information wc stripe additional context please refer to for more details | 0 |
157,583 | 19,959,072,120 | IssuesEvent | 2022-01-28 05:24:07 | JeffResc/IP-API-Node.js | https://api.github.com/repos/JeffResc/IP-API-Node.js | closed | CVE-2016-10540 (High) detected in minimatch-0.2.14.tgz, minimatch-2.0.10.tgz | security vulnerability | ## CVE-2016-10540 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>minimatch-0.2.14.tgz</b>, <b>minimatch-2.0.10.tgz</b></p></summary>
<p>
<details><summary><b>minimatch-0.2.14.tgz</b></p></summary>
<p>a glob matcher in javascript</p>
<p>Library home page: <a href="https://registry.npmjs.org/minimatch/-/minimatch-0.2.14.tgz">https://registry.npmjs.org/minimatch/-/minimatch-0.2.14.tgz</a></p>
<p>Path to dependency file: IP-API-Node.js/package.json</p>
<p>Path to vulnerable library: IP-API-Node.js/node_modules/globule/node_modules/minimatch/package.json</p>
<p>
Dependency Hierarchy:
- gulp-3.8.11.tgz (Root Library)
- vinyl-fs-0.3.14.tgz
- glob-watcher-0.0.6.tgz
- gaze-0.5.2.tgz
- globule-0.1.0.tgz
- :x: **minimatch-0.2.14.tgz** (Vulnerable Library)
</details>
<details><summary><b>minimatch-2.0.10.tgz</b></p></summary>
<p>a glob matcher in javascript</p>
<p>Library home page: <a href="https://registry.npmjs.org/minimatch/-/minimatch-2.0.10.tgz">https://registry.npmjs.org/minimatch/-/minimatch-2.0.10.tgz</a></p>
<p>Path to dependency file: IP-API-Node.js/package.json</p>
<p>Path to vulnerable library: IP-API-Node.js/node_modules/minimatch/package.json</p>
<p>
Dependency Hierarchy:
- gulp-jshint-1.10.0.tgz (Root Library)
- :x: **minimatch-2.0.10.tgz** (Vulnerable Library)
</details>
<p>Found in HEAD commit: <a href="https://github.com/JeffResc/IP-API-Node.js/commit/99b7653bfce099be086c1b68c2b7b8499c3d63af">99b7653bfce099be086c1b68c2b7b8499c3d63af</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Minimatch is a minimal matching utility that works by converting glob expressions into JavaScript `RegExp` objects. The primary function, `minimatch(path, pattern)` in Minimatch 3.0.1 and earlier is vulnerable to ReDoS in the `pattern` parameter.
<p>Publish Date: 2018-05-31
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2016-10540>CVE-2016-10540</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://nodesecurity.io/advisories/118">https://nodesecurity.io/advisories/118</a></p>
<p>Release Date: 2016-06-20</p>
<p>Fix Resolution: Update to version 3.0.2 or later.</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | CVE-2016-10540 (High) detected in minimatch-0.2.14.tgz, minimatch-2.0.10.tgz - ## CVE-2016-10540 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>minimatch-0.2.14.tgz</b>, <b>minimatch-2.0.10.tgz</b></p></summary>
<p>
<details><summary><b>minimatch-0.2.14.tgz</b></p></summary>
<p>a glob matcher in javascript</p>
<p>Library home page: <a href="https://registry.npmjs.org/minimatch/-/minimatch-0.2.14.tgz">https://registry.npmjs.org/minimatch/-/minimatch-0.2.14.tgz</a></p>
<p>Path to dependency file: IP-API-Node.js/package.json</p>
<p>Path to vulnerable library: IP-API-Node.js/node_modules/globule/node_modules/minimatch/package.json</p>
<p>
Dependency Hierarchy:
- gulp-3.8.11.tgz (Root Library)
- vinyl-fs-0.3.14.tgz
- glob-watcher-0.0.6.tgz
- gaze-0.5.2.tgz
- globule-0.1.0.tgz
- :x: **minimatch-0.2.14.tgz** (Vulnerable Library)
</details>
<details><summary><b>minimatch-2.0.10.tgz</b></p></summary>
<p>a glob matcher in javascript</p>
<p>Library home page: <a href="https://registry.npmjs.org/minimatch/-/minimatch-2.0.10.tgz">https://registry.npmjs.org/minimatch/-/minimatch-2.0.10.tgz</a></p>
<p>Path to dependency file: IP-API-Node.js/package.json</p>
<p>Path to vulnerable library: IP-API-Node.js/node_modules/minimatch/package.json</p>
<p>
Dependency Hierarchy:
- gulp-jshint-1.10.0.tgz (Root Library)
- :x: **minimatch-2.0.10.tgz** (Vulnerable Library)
</details>
<p>Found in HEAD commit: <a href="https://github.com/JeffResc/IP-API-Node.js/commit/99b7653bfce099be086c1b68c2b7b8499c3d63af">99b7653bfce099be086c1b68c2b7b8499c3d63af</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Minimatch is a minimal matching utility that works by converting glob expressions into JavaScript `RegExp` objects. The primary function, `minimatch(path, pattern)` in Minimatch 3.0.1 and earlier is vulnerable to ReDoS in the `pattern` parameter.
<p>Publish Date: 2018-05-31
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2016-10540>CVE-2016-10540</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://nodesecurity.io/advisories/118">https://nodesecurity.io/advisories/118</a></p>
<p>Release Date: 2016-06-20</p>
<p>Fix Resolution: Update to version 3.0.2 or later.</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | non_infrastructure | cve high detected in minimatch tgz minimatch tgz cve high severity vulnerability vulnerable libraries minimatch tgz minimatch tgz minimatch tgz a glob matcher in javascript library home page a href path to dependency file ip api node js package json path to vulnerable library ip api node js node modules globule node modules minimatch package json dependency hierarchy gulp tgz root library vinyl fs tgz glob watcher tgz gaze tgz globule tgz x minimatch tgz vulnerable library minimatch tgz a glob matcher in javascript library home page a href path to dependency file ip api node js package json path to vulnerable library ip api node js node modules minimatch package json dependency hierarchy gulp jshint tgz root library x minimatch tgz vulnerable library found in head commit a href found in base branch master vulnerability details minimatch is a minimal matching utility that works by converting glob expressions into javascript regexp objects the primary function minimatch path pattern in minimatch and earlier is vulnerable to redos in the pattern parameter publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution update to version or later step up your open source security game with whitesource | 0 |
31,055 | 25,301,666,940 | IssuesEvent | 2022-11-17 11:09:44 | safe-global/safe-android | https://api.github.com/repos/safe-global/safe-android | closed | Disable SSL pinning temporarily | infrastructure | SSL certificates are about to change several times for our safe.global website, so DevOps (Raul) asked to temporarily switch off the SSL pinning for new service URLs after migration.
Please disable / do not configure SSL pinning for the new safe.global domains.
We'll need to enable it when the certificates are configured for sure. | 1.0 | Disable SSL pinning temporarily - SSL certificates are about to change several times for our safe.global website, so DevOps (Raul) asked to temporarily switch off the SSL pinning for new service URLs after migration.
Please disable / do not configure SSL pinning for the new safe.global domains.
We'll need to enable it when the certificates are configured for sure. | infrastructure | disable ssl pinning temporarily ssl certificates are about to change several times for our safe global website so devops raul asked to temporarily switch off the ssl pinning for new service urls after migration please disable do not configure ssl pinning for the new safe global domains we ll need to enable it when the certificates are configured for sure | 1 |
11,762 | 9,417,787,613 | IssuesEvent | 2019-04-10 17:35:50 | dotnet/corefx | https://api.github.com/repos/dotnet/corefx | closed | Should GenAPI be removing the DebuggerDisplay and DebuggerStepThrough attributes? | area-Infrastructure | Currently the automatic generation of reference assemblies excludes some of the Debugger* attributes:
https://github.com/dotnet/corefx/blob/7c37cfbd03d058b966160fdc39cd902ef3b2782c/eng/DefaultGenApiDocIds.txt#L12-L17
This is relevant to Xml.ReaderWriter:
https://github.com/dotnet/corefx/blob/7c37cfbd03d058b966160fdc39cd902ef3b2782c/src/System.Xml.ReaderWriter/ref/System.Xml.ReaderWriter.cs#L528
https://github.com/dotnet/corefx/blob/7c37cfbd03d058b966160fdc39cd902ef3b2782c/src/System.Xml.ReaderWriter/ref/System.Xml.ReaderWriter.cs#L815
Should we be emitting these or retain them?
For context: https://github.com/dotnet/corefx/pull/35557#discussion_r259776563
cc @ericstj, @krwq
| 1.0 | Should GenAPI be removing the DebuggerDisplay and DebuggerStepThrough attributes? - Currently the automatic generation of reference assemblies excludes some of the Debugger* attributes:
https://github.com/dotnet/corefx/blob/7c37cfbd03d058b966160fdc39cd902ef3b2782c/eng/DefaultGenApiDocIds.txt#L12-L17
This is relevant to Xml.ReaderWriter:
https://github.com/dotnet/corefx/blob/7c37cfbd03d058b966160fdc39cd902ef3b2782c/src/System.Xml.ReaderWriter/ref/System.Xml.ReaderWriter.cs#L528
https://github.com/dotnet/corefx/blob/7c37cfbd03d058b966160fdc39cd902ef3b2782c/src/System.Xml.ReaderWriter/ref/System.Xml.ReaderWriter.cs#L815
Should we be emitting these or retain them?
For context: https://github.com/dotnet/corefx/pull/35557#discussion_r259776563
cc @ericstj, @krwq
| infrastructure | should genapi be removing the debuggerdisplay and debuggerstepthrough attributes currently the automatic generation of reference assemblies excludes some of the debugger attributes this is relevant to xml readerwriter should we be emitting these or retain them for context cc ericstj krwq | 1 |
24,841 | 17,840,350,883 | IssuesEvent | 2021-09-03 09:15:55 | dotnet/runtime | https://api.github.com/repos/dotnet/runtime | closed | System.Diagnostics.Process.sln fails to build in Visual Studio | area-Infrastructure-libraries | ### Description
```
build -subset clr -configuration debug
build -subset libs -configuration debug -runtimeconfiguration debug
build -vs System.Diagnostics.Process
```
After this, I'm trying to build System.Diagnostics.Process.sln in Visual Studio and getting these errors:
```
Error occurred while restoring NuGet packages: Invalid restore input. Duplicate frameworks found: 'net6.0-windows7.0, net6.0, netstandard2.0, netstandard2.0, net461'. Input files: <PATH>\runtime\src\libraries\Microsoft.Win32.Registry\src\Microsoft.Win32.Registry.csproj.
CSC : error CS0006: Metadata file '<PATH>\runtime\artifacts\bin\System.Drawing.Common\net6.0-Unix-Debug\System.Drawing.Common.dll' could not be found
```
### Configuration
Windows 10 x64
.NET 6.0 Preview 2
VS 16.9.4
| 1.0 | System.Diagnostics.Process.sln fails to build in Visual Studio - ### Description
```
build -subset clr -configuration debug
build -subset libs -configuration debug -runtimeconfiguration debug
build -vs System.Diagnostics.Process
```
After this, I'm trying to build System.Diagnostics.Process.sln in Visual Studio and getting these errors:
```
Error occurred while restoring NuGet packages: Invalid restore input. Duplicate frameworks found: 'net6.0-windows7.0, net6.0, netstandard2.0, netstandard2.0, net461'. Input files: <PATH>\runtime\src\libraries\Microsoft.Win32.Registry\src\Microsoft.Win32.Registry.csproj.
CSC : error CS0006: Metadata file '<PATH>\runtime\artifacts\bin\System.Drawing.Common\net6.0-Unix-Debug\System.Drawing.Common.dll' could not be found
```
### Configuration
Windows 10 x64
.NET 6.0 Preview 2
VS 16.9.4
| infrastructure | system diagnostics process sln fails to build in visual studio description build subset clr configuration debug build subset libs configuration debug runtimeconfiguration debug build vs system diagnostics process after this i m trying to build system diagnostics process sln in visual studio and getting these errors error occurred while restoring nuget packages invalid restore input duplicate frameworks found input files runtime src libraries microsoft registry src microsoft registry csproj csc error metadata file runtime artifacts bin system drawing common unix debug system drawing common dll could not be found configuration windows net preview vs | 1 |
452,375 | 32,058,239,004 | IssuesEvent | 2023-09-24 10:52:15 | vrnimje/quick-ftxui | https://api.github.com/repos/vrnimje/quick-ftxui | opened | [Docs] Adding an example to use Quick-FTXUI as a library, using CMake FetchContent | documentation enhancement | This is for the guide that we have created in the documentation website: https://vrnimje.github.io/quick-ftxui/docs/guides/using-quick-ftxui/
I guess the `CMakeLists.txt` file will be similar to the one we currently use for the [`cpp_examples` folder](https://github.com/vrnimje/quick-ftxui/blob/master/cpp_examples/CMakeLists.txt)
This example will be hosted on another repository, so that users can refer to it as per their needs. But documentation will be added in this file: https://github.com/vrnimje/quick-ftxui/blob/docs/content/docs/guides/use.md | 1.0 | [Docs] Adding an example to use Quick-FTXUI as a library, using CMake FetchContent - This is for the guide that we have created in the documentation website: https://vrnimje.github.io/quick-ftxui/docs/guides/using-quick-ftxui/
I guess the `CMakeLists.txt` file will be similar to the one we currently use for the [`cpp_examples` folder](https://github.com/vrnimje/quick-ftxui/blob/master/cpp_examples/CMakeLists.txt)
This example will be hosted on another repository, so that users can refer to it as per their needs. But documentation will be added in this file: https://github.com/vrnimje/quick-ftxui/blob/docs/content/docs/guides/use.md | non_infrastructure | adding an example to use quick ftxui as a library using cmake fetchcontent this is for the guide that we have created in the documentation website i guess the cmakelists txt file will be similar to the one we currently use for the this example will be hosted on another repository so that users can refer to it as per their needs but documentation will be added in this file | 0 |
23,587 | 16,443,178,270 | IssuesEvent | 2021-05-20 16:25:20 | department-of-veterans-affairs/va.gov-team | https://api.github.com/repos/department-of-veterans-affairs/va.gov-team | closed | Monitoring - Set up DataDog agent to capture metrics in EKS | eks infrastructure monitoring operations | ## Issue Description
As an infrastructure/devops engineer, I need to install DataDog agent so that I can capture metrics from EKS clusters.
---
## Tasks
- [x] Install node agent
- [x] Install cluster agent
- [x] Install kubestate metrics service
- [x] DataDog general configuration
- [x] Basic testing to make sure it's up
- [x] CI Process for installing the above
## Acceptance Criteria
- [x] DataDog agent will be able to capture metrics from EKS
---
## How to configure this issue
- [X] **Attached to an Epic** (what body of work is this a part of?)
- [X] **Labeled with Team** (`product support`, `analytics-insights`, `operations`, `service-design`, `tools-be`, `tools-fe`)
- [X] **Labeled with Practice Area** (`backend`, `frontend`, `devops`, `design`, `research`, `product`, `ia`, `qa`, `analytics`, `contact center`, `research`, `accessibility`, `content`)
- [X] **Labeled with Type** (`bug`, `request`, `discovery`, `documentation`, etc.)
| 1.0 | Monitoring - Set up DataDog agent to capture metrics in EKS - ## Issue Description
As an infrastructure/devops engineer, I need to install DataDog agent so that I can capture metrics from EKS clusters.
---
## Tasks
- [x] Install node agent
- [x] Install cluster agent
- [x] Install kubestate metrics service
- [x] DataDog general configuration
- [x] Basic testing to make sure it's up
- [x] CI Process for installing the above
## Acceptance Criteria
- [x] DataDog agent will be able to capture metrics from EKS
---
## How to configure this issue
- [X] **Attached to an Epic** (what body of work is this a part of?)
- [X] **Labeled with Team** (`product support`, `analytics-insights`, `operations`, `service-design`, `tools-be`, `tools-fe`)
- [X] **Labeled with Practice Area** (`backend`, `frontend`, `devops`, `design`, `research`, `product`, `ia`, `qa`, `analytics`, `contact center`, `research`, `accessibility`, `content`)
- [X] **Labeled with Type** (`bug`, `request`, `discovery`, `documentation`, etc.)
| infrastructure | monitoring set up datadog agent to capture metrics in eks issue description as an infrastructure devops engineer i need to install datadog agent so that i can capture metrics from eks clusters tasks install node agent install cluster agent install kubestate metrics service datadog general configuration basic testing to make sure it s up ci process for installing the above acceptance criteria datadog agent will be able to capture metrics from eks how to configure this issue attached to an epic what body of work is this a part of labeled with team product support analytics insights operations service design tools be tools fe labeled with practice area backend frontend devops design research product ia qa analytics contact center research accessibility content labeled with type bug request discovery documentation etc | 1 |
407,370 | 27,612,401,947 | IssuesEvent | 2023-03-09 16:50:18 | nexB/vulnerablecode | https://api.github.com/repos/nexB/vulnerablecode | opened | Update documentation for v32 | Priority: high documentation | Review existing documentation for correctness
* installation instructions
* usage instructions
* API usage information
make sure the Changelog is complete
are there any Upgrade instructions
| 1.0 | Update documentation for v32 - Review existing documentation for correctness
* installation instructions
* usage instructions
* API usage information
make sure the Changelog is complete
are there any Upgrade instructions
| non_infrastructure | update documentation for review existing documentation for correctness installation instructions usage instructions api usage information make sure the changelog is complete are there any upgrade instructions | 0 |
4,641 | 5,206,221,640 | IssuesEvent | 2017-01-24 20:00:58 | gahansen/Albany | https://api.github.com/repos/gahansen/Albany | opened | Removing ALBANY_EPETRA_EXE ifdef around distributed response / parameters code | Infrastructure | If distributed responses / parameters are meant to work without Epetra in Albany, the ALBANY_EPETRA_EXE ifdef guards around the relevant functions should be removed. I am concerned if this is not done, there may be issues for physics that are being ported to Tpetra and that utilize this capability (e.g., ATO). | 1.0 | Removing ALBANY_EPETRA_EXE ifdef around distributed response / parameters code - If distributed responses / parameters are meant to work without Epetra in Albany, the ALBANY_EPETRA_EXE ifdef guards around the relevant functions should be removed. I am concerned if this is not done, there may be issues for physics that are being ported to Tpetra and that utilize this capability (e.g., ATO). | infrastructure | removing albany epetra exe ifdef around distributed response parameters code if distributed responses parameters are meant to work without epetra in albany the albany epetra exe ifdef guards around the relevant functions should be removed i am concerned if this is not done there may be issues for physics that are being ported to tpetra and that utilize this capability e g ato | 1 |
117,868 | 9,962,353,323 | IssuesEvent | 2019-07-07 13:53:40 | cockroachdb/cockroach | https://api.github.com/repos/cockroachdb/cockroach | closed | roachtest: clock/jump/large_backward_enabled failed | C-test-failure O-roachtest O-robot | SHA: https://github.com/cockroachdb/cockroach/commits/f1c9693da739fa5fc2c94d4d978fadd6710d17da
Parameters:
To repro, try:
```
# Don't forget to check out a clean suitable branch and experiment with the
# stress invocation until the desired results present themselves. For example,
# using stress instead of stressrace and passing the '-p' stressflag which
# controls concurrency.
./scripts/gceworker.sh start && ./scripts/gceworker.sh mosh
cd ~/go/src/github.com/cockroachdb/cockroach && \
stdbuf -oL -eL \
make stressrace TESTS=clock/jump/large_backward_enabled PKG=roachtest TESTTIMEOUT=5m STRESSFLAGS='-maxtime 20m -timeout 10m' 2>&1 | tee /tmp/stress.log
```
Failed test: https://teamcity.cockroachdb.com/viewLog.html?buildId=1371441&tab=buildLog
```
The test failed on branch=release-19.1, cloud=gce:
test artifacts and logs in: /home/agent/work/.go/src/github.com/cockroachdb/cockroach/artifacts/20190703-1371441/clock/jump/large_backward_enabled/run_1
clock_jump_crash.go:53,clock_jump_crash.go:128,test_runner.go:680: Node unexpectedly crashed
``` | 2.0 | roachtest: clock/jump/large_backward_enabled failed - SHA: https://github.com/cockroachdb/cockroach/commits/f1c9693da739fa5fc2c94d4d978fadd6710d17da
Parameters:
To repro, try:
```
# Don't forget to check out a clean suitable branch and experiment with the
# stress invocation until the desired results present themselves. For example,
# using stress instead of stressrace and passing the '-p' stressflag which
# controls concurrency.
./scripts/gceworker.sh start && ./scripts/gceworker.sh mosh
cd ~/go/src/github.com/cockroachdb/cockroach && \
stdbuf -oL -eL \
make stressrace TESTS=clock/jump/large_backward_enabled PKG=roachtest TESTTIMEOUT=5m STRESSFLAGS='-maxtime 20m -timeout 10m' 2>&1 | tee /tmp/stress.log
```
Failed test: https://teamcity.cockroachdb.com/viewLog.html?buildId=1371441&tab=buildLog
```
The test failed on branch=release-19.1, cloud=gce:
test artifacts and logs in: /home/agent/work/.go/src/github.com/cockroachdb/cockroach/artifacts/20190703-1371441/clock/jump/large_backward_enabled/run_1
clock_jump_crash.go:53,clock_jump_crash.go:128,test_runner.go:680: Node unexpectedly crashed
``` | non_infrastructure | roachtest clock jump large backward enabled failed sha parameters to repro try don t forget to check out a clean suitable branch and experiment with the stress invocation until the desired results present themselves for example using stress instead of stressrace and passing the p stressflag which controls concurrency scripts gceworker sh start scripts gceworker sh mosh cd go src github com cockroachdb cockroach stdbuf ol el make stressrace tests clock jump large backward enabled pkg roachtest testtimeout stressflags maxtime timeout tee tmp stress log failed test the test failed on branch release cloud gce test artifacts and logs in home agent work go src github com cockroachdb cockroach artifacts clock jump large backward enabled run clock jump crash go clock jump crash go test runner go node unexpectedly crashed | 0 |
473,347 | 13,640,977,973 | IssuesEvent | 2020-09-25 13:32:55 | willyborja95/Camello | https://api.github.com/repos/willyborja95/Camello | closed | Calendar view issue | Priority: Medium Type: Enhancement | The calendar in some screen sizes appears outside of the rounded background:

| 1.0 | Calendar view issue - The calendar in some screen sizes appears outside of the rounded background:

| non_infrastructure | calendar view issue the calendar in some screen sizes appears outside of the rounded background | 0 |
13,121 | 10,131,761,129 | IssuesEvent | 2019-08-01 20:24:07 | HumanCellAtlas/secondary-analysis | https://api.github.com/repos/HumanCellAtlas/secondary-analysis | closed | What is the current state of EDDy? | infrastructure | AC:
1. Walk through the code base of Eddy, figure out the current state.
1. Add sufficient documentation to the Eddy for other developers to ramp up quickly.
P.S. Talk to [~gwade] and keep him in the loop while working on this ticket\!
βIssue is synchronized with this [Jira Story](https://broadinstitute.atlassian.net/browse/GH-324)
βAttachments: <a href="https://broadinstitute.atlassian.net/secure/attachment/113131/image.png">image.png</a>
| 1.0 | What is the current state of EDDy? - AC:
1. Walk through the code base of Eddy, figure out the current state.
1. Add sufficient documentation to the Eddy for other developers to ramp up quickly.
P.S. Talk to [~gwade] and keep him in the loop while working on this ticket\!
βIssue is synchronized with this [Jira Story](https://broadinstitute.atlassian.net/browse/GH-324)
βAttachments: <a href="https://broadinstitute.atlassian.net/secure/attachment/113131/image.png">image.png</a>
| infrastructure | what is the current state of eddy ac walk through the code base of eddy figure out the current state add sufficient documentation to the eddy for other developers to ramp up quickly p s talk to and keep him in the loop while working on this ticket βissue is synchronized with this βattachments a href | 1 |
27,722 | 22,261,259,731 | IssuesEvent | 2022-06-10 01:02:23 | hackforla/food-oasis | https://api.github.com/repos/hackforla/food-oasis | opened | Upgrade to Material UI 5.8 | size: 8pt Feature: Infrastructure Release Note: System Update Missing: Milestone | ### Overview
The application currently uses @material-ui version 4.12. The current version is @mui/material@5.8.3. Upgrade the app to the new major version. It seems to be a radical set of changes and will be challenging.
### Action Items
| 1.0 | Upgrade to Material UI 5.8 - ### Overview
The application currently uses @material-ui version 4.12. The current version is @mui/material@5.8.3. Upgrade the app to the new major version. It seems to be a radical set of changes and will be challenging.
### Action Items
| infrastructure | upgrade to material ui overview the application currently uses material ui version the current version is mui material upgrade the app to the new major version it seems to be a radical set of changes and will be challenging action items | 1 |
144,105 | 11,595,192,745 | IssuesEvent | 2020-02-24 16:34:19 | elastic/kibana | https://api.github.com/repos/elastic/kibana | closed | graph app test failures | Feature:Graph Team:KibanaApp failed-test test test-cloud test_xpack_functional | **β fail: "graph app feature controls security global graph all privileges "before all" hook"**
β Error: timed out waiting for logout button visible -- last error: Error: retry.try timeout: ElementClickInterceptedError: element click intercepted: Element <button class="euiHeaderSectionItem__button" type="button" aria-controls="headerUserMenu" aria-expanded="false" aria-haspopup="true" aria-label="Account menu" data-test-subj="userMenuButton">...</button> is not clickable at point (1576, 24). Other element would receive the click: <header class="homWelcome__header">...</header>
β (Session info: headless chrome=74.0.3729.108)
β (Driver info: chromedriver=74.0.3729.6 (255758eccf3d244491b8a1317aa76e1ce10d57e9-refs/branch-heads/3729@{#29}),platform=Linux 4.4.0-142-generic x86_64)
β at Object.checkLegacyResponse (/home/liza/TESTING/kbn-cloud-testing/node_modules/selenium-webdriver/lib/error.js:585:15)
β at parseHttpResponse (/home/liza/TESTING/kbn-cloud-testing/node_modules/selenium-webdriver/lib/http.js:533:13)
β at Executor.execute (/home/liza/TESTING/kbn-cloud-testing/node_modules/selenium-webdriver/lib/http.js:468:26)
β at process._tickCallback (internal/process/next_tick.js:68:7)
β at lastError (/home/liza/TESTING/kbn-cloud-testing/test/common/services/retry/retry_for_success.ts:28:9)
β at onFailure (/home/liza/TESTING/kbn-cloud-testing/test/common/services/retry/retry_for_success.ts:68:13)
β at onFailure (/home/liza/TESTING/kbn-cloud-testing/test/common/services/retry/retry_for_truthy.ts:50:13)
β at onFailure (/home/liza/TESTING/kbn-cloud-testing/test/common/services/retry/retry_for_success.ts:68:13)z5ZlAxPGwaMP9E4l
β
**β fail: "graph app feature controls security global graph read-only privileges "before all" hook**"
β Error: timed out waiting for logout button visible -- last error: Error: retry.try timeout: ElementClickInterceptedError: element click intercepted: Element <button class="euiHeaderSectionItem__button" type="button" aria-controls="headerUserMenu" aria-expanded="false" aria-haspopup="true" aria-label="Account menu" data-test-subj="userMenuButton">...</button> is not clickable at point (1576, 24). Other element would receive the click: <header class="homWelcome__header">...</header>
β (Session info: headless chrome=74.0.3729.108)
β (Driver info: chromedriver=74.0.3729.6 (255758eccf3d244491b8a1317aa76e1ce10d57e9-refs/branch-heads/3729@{#29}),platform=Linux 4.4.0-142-generic x86_64)
β at Object.checkLegacyResponse (/home/liza/TESTING/kbn-cloud-testing/node_modules/selenium-webdriver/lib/error.js:585:15)
β at parseHttpResponse (/home/liza/TESTING/kbn-cloud-testing/node_modules/selenium-webdriver/lib/http.js:533:13)
β at Executor.execute (/home/liza/TESTING/kbn-cloud-testing/node_modules/selenium-webdriver/lib/http.js:468:26)
β at process._tickCallback (internal/process/next_tick.js:68:7)
β at lastError (/home/liza/TESTING/kbn-cloud-testing/test/common/services/retry/retry_for_success.ts:28:9)
β at onFailure (/home/liza/TESTING/kbn-cloud-testing/test/common/services/retry/retry_for_success.ts:68:13)
β at onFailure (/home/liza/TESTING/kbn-cloud-testing/test/common/services/retry/retry_for_truthy.ts:50:13)
β at onFailure (/home/liza/TESTING/kbn-cloud-testing/test/common/services/retry/retry_for_success.ts:68:13)
**β fail: "graph app feature controls security no graph privileges "before all" hook"**
β Error: timed out waiting for logout button visible -- last error: Error: retry.try timeout: ElementClickInterceptedError: element click intercepted: Element <button class="euiHeaderSectionItem__button" type="button" aria-controls="headerUserMenu" aria-expanded="false" aria-haspopup="true" aria-label="Account menu" data-test-subj="userMenuButton">...</button> is not clickable at point (1576, 24). Other element would receive the click: <header class="homWelcome__header">...</header>
β (Session info: headless chrome=74.0.3729.108)
β (Driver info: chromedriver=74.0.3729.6 (255758eccf3d244491b8a1317aa76e1ce10d57e9-refs/branch-heads/3729@{#29}),platform=Linux 4.4.0-142-generic x86_64)
β at Object.checkLegacyResponse (/home/liza/TESTING/kbn-cloud-testing/node_modules/selenium-webdriver/lib/error.js:585:15)
β at parseHttpResponse (/home/liza/TESTING/kbn-cloud-testing/node_modules/selenium-webdriver/lib/http.js:533:13)
β at Executor.execute (/home/liza/TESTING/kbn-cloud-testing/node_modules/selenium-webdriver/lib/http.js:468:26)
β at process._tickCallback (internal/process/next_tick.js:68:7)
β at lastError (/home/liza/TESTING/kbn-cloud-testing/test/common/services/retry/retry_for_success.ts:28:9)
β at onFailure (/home/liza/TESTING/kbn-cloud-testing/test/common/services/retry/retry_for_success.ts:68:13)
β at onFailure (/home/liza/TESTING/kbn-cloud-testing/test/common/services/retry/retry_for_truthy.ts:50:13)
β at onFailure (/home/liza/TESTING/kbn-cloud-testing/test/common/services/retry/retry_for_success.ts:68:13)
β
**Version: 7.2** | 4.0 | graph app test failures - **β fail: "graph app feature controls security global graph all privileges "before all" hook"**
β Error: timed out waiting for logout button visible -- last error: Error: retry.try timeout: ElementClickInterceptedError: element click intercepted: Element <button class="euiHeaderSectionItem__button" type="button" aria-controls="headerUserMenu" aria-expanded="false" aria-haspopup="true" aria-label="Account menu" data-test-subj="userMenuButton">...</button> is not clickable at point (1576, 24). Other element would receive the click: <header class="homWelcome__header">...</header>
β (Session info: headless chrome=74.0.3729.108)
β (Driver info: chromedriver=74.0.3729.6 (255758eccf3d244491b8a1317aa76e1ce10d57e9-refs/branch-heads/3729@{#29}),platform=Linux 4.4.0-142-generic x86_64)
β at Object.checkLegacyResponse (/home/liza/TESTING/kbn-cloud-testing/node_modules/selenium-webdriver/lib/error.js:585:15)
β at parseHttpResponse (/home/liza/TESTING/kbn-cloud-testing/node_modules/selenium-webdriver/lib/http.js:533:13)
β at Executor.execute (/home/liza/TESTING/kbn-cloud-testing/node_modules/selenium-webdriver/lib/http.js:468:26)
β at process._tickCallback (internal/process/next_tick.js:68:7)
β at lastError (/home/liza/TESTING/kbn-cloud-testing/test/common/services/retry/retry_for_success.ts:28:9)
β at onFailure (/home/liza/TESTING/kbn-cloud-testing/test/common/services/retry/retry_for_success.ts:68:13)
β at onFailure (/home/liza/TESTING/kbn-cloud-testing/test/common/services/retry/retry_for_truthy.ts:50:13)
β at onFailure (/home/liza/TESTING/kbn-cloud-testing/test/common/services/retry/retry_for_success.ts:68:13)z5ZlAxPGwaMP9E4l
β
**β fail: "graph app feature controls security global graph read-only privileges "before all" hook**"
β Error: timed out waiting for logout button visible -- last error: Error: retry.try timeout: ElementClickInterceptedError: element click intercepted: Element <button class="euiHeaderSectionItem__button" type="button" aria-controls="headerUserMenu" aria-expanded="false" aria-haspopup="true" aria-label="Account menu" data-test-subj="userMenuButton">...</button> is not clickable at point (1576, 24). Other element would receive the click: <header class="homWelcome__header">...</header>
β (Session info: headless chrome=74.0.3729.108)
β (Driver info: chromedriver=74.0.3729.6 (255758eccf3d244491b8a1317aa76e1ce10d57e9-refs/branch-heads/3729@{#29}),platform=Linux 4.4.0-142-generic x86_64)
β at Object.checkLegacyResponse (/home/liza/TESTING/kbn-cloud-testing/node_modules/selenium-webdriver/lib/error.js:585:15)
β at parseHttpResponse (/home/liza/TESTING/kbn-cloud-testing/node_modules/selenium-webdriver/lib/http.js:533:13)
β at Executor.execute (/home/liza/TESTING/kbn-cloud-testing/node_modules/selenium-webdriver/lib/http.js:468:26)
β at process._tickCallback (internal/process/next_tick.js:68:7)
β at lastError (/home/liza/TESTING/kbn-cloud-testing/test/common/services/retry/retry_for_success.ts:28:9)
β at onFailure (/home/liza/TESTING/kbn-cloud-testing/test/common/services/retry/retry_for_success.ts:68:13)
β at onFailure (/home/liza/TESTING/kbn-cloud-testing/test/common/services/retry/retry_for_truthy.ts:50:13)
β at onFailure (/home/liza/TESTING/kbn-cloud-testing/test/common/services/retry/retry_for_success.ts:68:13)
**β fail: "graph app feature controls security no graph privileges "before all" hook"**
β Error: timed out waiting for logout button visible -- last error: Error: retry.try timeout: ElementClickInterceptedError: element click intercepted: Element <button class="euiHeaderSectionItem__button" type="button" aria-controls="headerUserMenu" aria-expanded="false" aria-haspopup="true" aria-label="Account menu" data-test-subj="userMenuButton">...</button> is not clickable at point (1576, 24). Other element would receive the click: <header class="homWelcome__header">...</header>
β (Session info: headless chrome=74.0.3729.108)
β (Driver info: chromedriver=74.0.3729.6 (255758eccf3d244491b8a1317aa76e1ce10d57e9-refs/branch-heads/3729@{#29}),platform=Linux 4.4.0-142-generic x86_64)
β at Object.checkLegacyResponse (/home/liza/TESTING/kbn-cloud-testing/node_modules/selenium-webdriver/lib/error.js:585:15)
β at parseHttpResponse (/home/liza/TESTING/kbn-cloud-testing/node_modules/selenium-webdriver/lib/http.js:533:13)
β at Executor.execute (/home/liza/TESTING/kbn-cloud-testing/node_modules/selenium-webdriver/lib/http.js:468:26)
β at process._tickCallback (internal/process/next_tick.js:68:7)
β at lastError (/home/liza/TESTING/kbn-cloud-testing/test/common/services/retry/retry_for_success.ts:28:9)
β at onFailure (/home/liza/TESTING/kbn-cloud-testing/test/common/services/retry/retry_for_success.ts:68:13)
β at onFailure (/home/liza/TESTING/kbn-cloud-testing/test/common/services/retry/retry_for_truthy.ts:50:13)
β at onFailure (/home/liza/TESTING/kbn-cloud-testing/test/common/services/retry/retry_for_success.ts:68:13)
β
**Version: 7.2** | non_infrastructure | graph app test failures β fail graph app feature controls security global graph all privileges before all hook β error timed out waiting for logout button visible last error error retry try timeout elementclickinterceptederror element click intercepted element is not clickable at point other element would receive the click β session info headless chrome β driver info chromedriver refs branch heads platform linux generic β at object checklegacyresponse home liza testing kbn cloud testing node modules selenium webdriver lib error js β at parsehttpresponse home liza testing kbn cloud testing node modules selenium webdriver lib http js β at executor execute home liza testing kbn cloud testing node modules selenium webdriver lib http js β at process tickcallback internal process next tick js β at lasterror home liza testing kbn cloud testing test common services retry retry for success ts β at onfailure home liza testing kbn cloud testing test common services retry retry for success ts β at onfailure home liza testing kbn cloud testing test common services retry retry for truthy ts β at onfailure home liza testing kbn cloud testing test common services retry retry for success ts β β fail graph app feature controls security global graph read only privileges before all hook β error timed out waiting for logout button visible last error error retry try timeout elementclickinterceptederror element click intercepted element is not clickable at point other element would receive the click β session info headless chrome β driver info chromedriver refs branch heads platform linux generic β at object checklegacyresponse home liza testing kbn cloud testing node modules selenium webdriver lib error js β at parsehttpresponse home liza testing kbn cloud testing node modules selenium webdriver lib http js β at executor execute home liza testing kbn cloud testing node modules selenium webdriver lib http js β at process tickcallback internal process next tick js β at lasterror home liza testing kbn cloud testing test common services retry retry for success ts β at onfailure home liza testing kbn cloud testing test common services retry retry for success ts β at onfailure home liza testing kbn cloud testing test common services retry retry for truthy ts β at onfailure home liza testing kbn cloud testing test common services retry retry for success ts β fail graph app feature controls security no graph privileges before all hook β error timed out waiting for logout button visible last error error retry try timeout elementclickinterceptederror element click intercepted element is not clickable at point other element would receive the click β session info headless chrome β driver info chromedriver refs branch heads platform linux generic β at object checklegacyresponse home liza testing kbn cloud testing node modules selenium webdriver lib error js β at parsehttpresponse home liza testing kbn cloud testing node modules selenium webdriver lib http js β at executor execute home liza testing kbn cloud testing node modules selenium webdriver lib http js β at process tickcallback internal process next tick js β at lasterror home liza testing kbn cloud testing test common services retry retry for success ts β at onfailure home liza testing kbn cloud testing test common services retry retry for success ts β at onfailure home liza testing kbn cloud testing test common services retry retry for truthy ts β at onfailure home liza testing kbn cloud testing test common services retry retry for success ts β version | 0 |
372,841 | 11,029,106,492 | IssuesEvent | 2019-12-06 13:14:46 | OpenSRP/opensrp-client-reveal | https://api.github.com/repos/OpenSRP/opensrp-client-reveal | closed | RVL- 726 Task duplication for family members in task view of the family module | Priority: High | Task duplication for family members in task view of the family module
- [ ] Index the index case query
- [ ] Generate tasks on same threads that saves families | 1.0 | RVL- 726 Task duplication for family members in task view of the family module - Task duplication for family members in task view of the family module
- [ ] Index the index case query
- [ ] Generate tasks on same threads that saves families | non_infrastructure | rvl task duplication for family members in task view of the family module task duplication for family members in task view of the family module index the index case query generate tasks on same threads that saves families | 0 |
255,034 | 8,102,699,709 | IssuesEvent | 2018-08-13 03:39:44 | mesg-foundation/core | https://api.github.com/repos/mesg-foundation/core | opened | logs command doesn't work anymore | bug high priority | I think because of the refactoring of the container package, the `logs` command doesn't work anymore and I'm pretty sure many other features might have the same problem.
I still have to investigate more but it seems that we are using `context.WithTimeout` and when we have streams of data from docker then the timeout is reached and the context is terminated.
We should use `context.Background` in these cases
This issue might occurs in:
- build the docker image
- log the container
Let's make sure to test again all the different features using the `./dev-core` and `./dev-cli` to do our manual tests
@ilgooz can you have a look and confirm my thoughts ? | 1.0 | logs command doesn't work anymore - I think because of the refactoring of the container package, the `logs` command doesn't work anymore and I'm pretty sure many other features might have the same problem.
I still have to investigate more but it seems that we are using `context.WithTimeout` and when we have streams of data from docker then the timeout is reached and the context is terminated.
We should use `context.Background` in these cases
This issue might occurs in:
- build the docker image
- log the container
Let's make sure to test again all the different features using the `./dev-core` and `./dev-cli` to do our manual tests
@ilgooz can you have a look and confirm my thoughts ? | non_infrastructure | logs command doesn t work anymore i think because of the refactoring of the container package the logs command doesn t work anymore and i m pretty sure many other features might have the same problem i still have to investigate more but it seems that we are using context withtimeout and when we have streams of data from docker then the timeout is reached and the context is terminated we should use context background in these cases this issue might occurs in build the docker image log the container let s make sure to test again all the different features using the dev core and dev cli to do our manual tests ilgooz can you have a look and confirm my thoughts | 0 |
32,302 | 26,610,028,101 | IssuesEvent | 2023-01-23 23:00:49 | AndreTerra5348/appointment | https://api.github.com/repos/AndreTerra5348/appointment | closed | Add Appointment repository | issue: invalid layer: infrastructure spec: low level | ### Description
- The repository implementation shall use [DriftRepository](https://github.com/AndreTerra5348/appointment/blob/18a7ea43fd0ef7bdf593f60207d0cfb32cef6d0a/lib/infrastructure/core/repositories.dart#L9) class with its generics: `<Appointment, AppointmentModels, AppointmentModel>`
### Depends on
- [x] #13
- [ ] #26
### Part of
- #25 | 1.0 | Add Appointment repository - ### Description
- The repository implementation shall use [DriftRepository](https://github.com/AndreTerra5348/appointment/blob/18a7ea43fd0ef7bdf593f60207d0cfb32cef6d0a/lib/infrastructure/core/repositories.dart#L9) class with its generics: `<Appointment, AppointmentModels, AppointmentModel>`
### Depends on
- [x] #13
- [ ] #26
### Part of
- #25 | infrastructure | add appointment repository description the repository implementation shall use class with its generics depends on part of | 1 |
34,142 | 28,350,115,950 | IssuesEvent | 2023-04-12 01:32:36 | grafana/agent | https://api.github.com/repos/grafana/agent | closed | Get changes of `process-exporter` merged upstream and remove `replace` in go.mod | type/infrastructure | Removing the `replace` makes everything compile without problems. We only need to make sure that new changes in the exporter are added to integration/flow component. | 1.0 | Get changes of `process-exporter` merged upstream and remove `replace` in go.mod - Removing the `replace` makes everything compile without problems. We only need to make sure that new changes in the exporter are added to integration/flow component. | infrastructure | get changes of process exporter merged upstream and remove replace in go mod removing the replace makes everything compile without problems we only need to make sure that new changes in the exporter are added to integration flow component | 1 |
29,436 | 24,010,463,684 | IssuesEvent | 2022-09-14 18:19:28 | WordPress/performance | https://api.github.com/repos/WordPress/performance | opened | Bump minimum WordPress requirement to 6.0 | [Type] Enhancement Infrastructure Needs Discussion | **With this issue, I'm proposing to bump the minimum WordPress version requirement of the Performance Lab plugin to 6.0 in the upcoming 1.6.0 version (October 17).**
This is in line with https://github.com/WordPress/performance/blob/trunk/docs/Version-support-policy.md#wordpress-core-versions, and even more so, if we do this in the upcoming 1.6.0 release (October 17), we would be very close to even the WordPress 6.1 release (November 1), so realistically we would support the latest _two_ WordPress versions in the coming months (similar to how initially we supported 5.9 and 5.8 when 5.9 was the latest version).
As mentioned in the version policy, as soon as there's a benefit to bumping the version requirement, we should be able to do so as long as it is to a stable version. One of those is the introduction of the `filesize` metadata for attachments, which was added in 6.0 and heavily benefits logic in the WebP Uploads module. While this is just one benefit, with this being a feature plugin, we don't need to worry too much about keeping support for old WordPress core versions as long as the newer one provides us a clear benefit from a code perspective.
| 1.0 | Bump minimum WordPress requirement to 6.0 - **With this issue, I'm proposing to bump the minimum WordPress version requirement of the Performance Lab plugin to 6.0 in the upcoming 1.6.0 version (October 17).**
This is in line with https://github.com/WordPress/performance/blob/trunk/docs/Version-support-policy.md#wordpress-core-versions, and even more so, if we do this in the upcoming 1.6.0 release (October 17), we would be very close to even the WordPress 6.1 release (November 1), so realistically we would support the latest _two_ WordPress versions in the coming months (similar to how initially we supported 5.9 and 5.8 when 5.9 was the latest version).
As mentioned in the version policy, as soon as there's a benefit to bumping the version requirement, we should be able to do so as long as it is to a stable version. One of those is the introduction of the `filesize` metadata for attachments, which was added in 6.0 and heavily benefits logic in the WebP Uploads module. While this is just one benefit, with this being a feature plugin, we don't need to worry too much about keeping support for old WordPress core versions as long as the newer one provides us a clear benefit from a code perspective.
| infrastructure | bump minimum wordpress requirement to with this issue i m proposing to bump the minimum wordpress version requirement of the performance lab plugin to in the upcoming version october this is in line with and even more so if we do this in the upcoming release october we would be very close to even the wordpress release november so realistically we would support the latest two wordpress versions in the coming months similar to how initially we supported and when was the latest version as mentioned in the version policy as soon as there s a benefit to bumping the version requirement we should be able to do so as long as it is to a stable version one of those is the introduction of the filesize metadata for attachments which was added in and heavily benefits logic in the webp uploads module while this is just one benefit with this being a feature plugin we don t need to worry too much about keeping support for old wordpress core versions as long as the newer one provides us a clear benefit from a code perspective | 1 |
16,856 | 12,152,144,819 | IssuesEvent | 2020-04-24 21:30:40 | BCDevOps/developer-experience | https://api.github.com/repos/BCDevOps/developer-experience | closed | Investigate better API iptables rules | Infrastructure closed | https://trello.com/c/jiG6Z1PQ/116-investigate-better-api-iptables-rules
The ansible iptables module doesn't support `-w` and thus will fail if the SDN is manipulating the firewall at the same time. This impacts the role we have to manage the iptables on the masters to allow for graceful failover of the API and web console. | 1.0 | Investigate better API iptables rules - https://trello.com/c/jiG6Z1PQ/116-investigate-better-api-iptables-rules
The ansible iptables module doesn't support `-w` and thus will fail if the SDN is manipulating the firewall at the same time. This impacts the role we have to manage the iptables on the masters to allow for graceful failover of the API and web console. | infrastructure | investigate better api iptables rules the ansible iptables module doesn t support w and thus will fail if the sdn is manipulating the firewall at the same time this impacts the role we have to manage the iptables on the masters to allow for graceful failover of the api and web console | 1 |
107,454 | 23,415,083,086 | IssuesEvent | 2022-08-12 23:01:04 | ROCmSoftwarePlatform/composable_kernel | https://api.github.com/repos/ROCmSoftwarePlatform/composable_kernel | closed | Use CMake options to replace macros in config.hpp | code quality | This is to follow up a discussion https://github.com/ROCmSoftwarePlatform/composable_kernel/pull/130#discussion_r838692788
I suggest that the macros in config.hpp, at least those independent ones, are moved into cmake. This way users and developers do not need to change the code (ie, config.hpp) itself for different configurations. | 1.0 | Use CMake options to replace macros in config.hpp - This is to follow up a discussion https://github.com/ROCmSoftwarePlatform/composable_kernel/pull/130#discussion_r838692788
I suggest that the macros in config.hpp, at least those independent ones, are moved into cmake. This way users and developers do not need to change the code (ie, config.hpp) itself for different configurations. | non_infrastructure | use cmake options to replace macros in config hpp this is to follow up a discussion i suggest that the macros in config hpp at least those independent ones are moved into cmake this way users and developers do not need to change the code ie config hpp itself for different configurations | 0 |
432,607 | 12,495,688,618 | IssuesEvent | 2020-06-01 13:37:55 | buddyboss/buddyboss-platform | https://api.github.com/repos/buddyboss/buddyboss-platform | opened | Members Profile Types slugs | feature: enhancement priority: medium | **Is your feature request related to a problem? Please describe.**
The Profile type URLs are currently a set of GET parameters - ?post_type=bp-member-type&p=35
**Describe the solution you'd like**
I think a slug would be welcome either as root-level slug, or under the users directory "/members/" "/users/", whatever is configured.
e.g. /users/teachers, /users/students ...
**Describe alternatives you've considered**
SEO tools can probably do the trick, but this is good contender for a built-in platform feature, IMO.
**Support ticket links**
Haven't bothered support with it. Thanks for looking into it. | 1.0 | Members Profile Types slugs - **Is your feature request related to a problem? Please describe.**
The Profile type URLs are currently a set of GET parameters - ?post_type=bp-member-type&p=35
**Describe the solution you'd like**
I think a slug would be welcome either as root-level slug, or under the users directory "/members/" "/users/", whatever is configured.
e.g. /users/teachers, /users/students ...
**Describe alternatives you've considered**
SEO tools can probably do the trick, but this is good contender for a built-in platform feature, IMO.
**Support ticket links**
Haven't bothered support with it. Thanks for looking into it. | non_infrastructure | members profile types slugs is your feature request related to a problem please describe the profile type urls are currently a set of get parameters post type bp member type p describe the solution you d like i think a slug would be welcome either as root level slug or under the users directory members users whatever is configured e g users teachers users students describe alternatives you ve considered seo tools can probably do the trick but this is good contender for a built in platform feature imo support ticket links haven t bothered support with it thanks for looking into it | 0 |
72,583 | 31,768,974,640 | IssuesEvent | 2023-09-12 10:30:38 | gauravrs18/issue_onboarding | https://api.github.com/repos/gauravrs18/issue_onboarding | closed | dev-angular-code-account-services-outage-emergency-current-component
-edit-component | CX-account-services | dev-angular-code-account-services-outage-emergency-current-component
-edit-component | 1.0 | dev-angular-code-account-services-outage-emergency-current-component
-edit-component - dev-angular-code-account-services-outage-emergency-current-component
-edit-component | non_infrastructure | dev angular code account services outage emergency current component edit component dev angular code account services outage emergency current component edit component | 0 |
3,533 | 4,387,184,436 | IssuesEvent | 2016-08-08 15:05:34 | MinetestForFun/server-minetestforfun | https://api.github.com/repos/MinetestForFun/server-minetestforfun | closed | Modifications of reboot script | Infrastructure Performance Priority: Medium | The following modifications were discussed and should be implemented in the reboot script :
- [x] The killing method should be the following :
```bash
kill -2 $(pgrep 'minetest')
sleep 30
kill -15 $(pgrep 'minetest')
```
- [x] All repository cloning should use `--depth=1` to (slightly) speed up cloning, and reduce data storage | 1.0 | Modifications of reboot script - The following modifications were discussed and should be implemented in the reboot script :
- [x] The killing method should be the following :
```bash
kill -2 $(pgrep 'minetest')
sleep 30
kill -15 $(pgrep 'minetest')
```
- [x] All repository cloning should use `--depth=1` to (slightly) speed up cloning, and reduce data storage | infrastructure | modifications of reboot script the following modifications were discussed and should be implemented in the reboot script the killing method should be the following bash kill pgrep minetest sleep kill pgrep minetest all repository cloning should use depth to slightly speed up cloning and reduce data storage | 1 |
12,445 | 9,661,060,428 | IssuesEvent | 2019-05-20 16:58:55 | Azure/azure-sdk-for-js | https://api.github.com/repos/Azure/azure-sdk-for-js | closed | [Service Bus] Fix functionality failures in browserified SDK - Invalid parameter scenarios | Client Service Bus |
**Describe the bug**
Tests in `invalidParameters.spec.ts` fail when run against browserified version of Service Bus SDK.
Following are the errors noted. Similar errors are noted when using subscription client / session receiver as well.
```
FAILED TESTS:
Invalid parameters in QueueClient
Γ PeekBySequenceNumber: Invalid maxMessageCount in QueueClient
HeadlessChrome 75.0.3765 (Windows 10.0.0)
TypeError: Missing parameter "fromSequenceNumber"
at throwTypeErrorIfParameterMissing (test-browser/index.js:25540:29)
at ManagementClient.<anonymous> (test-browser/index.js:29172:17)
at Generator.next (<anonymous>)
at test-browser/index.js:100:75
at new Promise (<anonymous>)
at __awaiter (test-browser/index.js:96:16)
at ManagementClient.peekBySequenceNumber (test-browser/index.js:29168:20)
at QueueClient.<anonymous> (test-browser/index.js:30611:55)
at Generator.next (<anonymous>)
at test-browser/index.js:100:75
Γ PeekBySequenceNumber: Wrong type maxMessageCount in QueueClient
HeadlessChrome 75.0.3765 (Windows 10.0.0)
AssertionError: expected 'Missing parameter "fromSequenceNumber"' to equal 'The parameter "maxMessageCount" should be of type
"number"'
at Object.should.equal (test-browser/index.js:7759:39)
at Context.<anonymous> (test-browser/index.js:32018:26)
at Generator.throw (<anonymous>)
at rejected (test-browser/index.js:98:69)
```
**To Reproduce**
Steps to reproduce the behavior:
1. Clone https://github.com/ramya0820/azure-sdk-for-js/tree/service-bus-browser-tests-v1
2. Setup local workspace with appropriate `.env` file contents per README.
3. Mark tests in `invalidParameters.spec.ts` with `.only` to run only those tests.
3. run `npm run test:browser`
**Expected behavior**
All tests should pass, but failures appear as described above.
| 1.0 | [Service Bus] Fix functionality failures in browserified SDK - Invalid parameter scenarios -
**Describe the bug**
Tests in `invalidParameters.spec.ts` fail when run against browserified version of Service Bus SDK.
Following are the errors noted. Similar errors are noted when using subscription client / session receiver as well.
```
FAILED TESTS:
Invalid parameters in QueueClient
Γ PeekBySequenceNumber: Invalid maxMessageCount in QueueClient
HeadlessChrome 75.0.3765 (Windows 10.0.0)
TypeError: Missing parameter "fromSequenceNumber"
at throwTypeErrorIfParameterMissing (test-browser/index.js:25540:29)
at ManagementClient.<anonymous> (test-browser/index.js:29172:17)
at Generator.next (<anonymous>)
at test-browser/index.js:100:75
at new Promise (<anonymous>)
at __awaiter (test-browser/index.js:96:16)
at ManagementClient.peekBySequenceNumber (test-browser/index.js:29168:20)
at QueueClient.<anonymous> (test-browser/index.js:30611:55)
at Generator.next (<anonymous>)
at test-browser/index.js:100:75
Γ PeekBySequenceNumber: Wrong type maxMessageCount in QueueClient
HeadlessChrome 75.0.3765 (Windows 10.0.0)
AssertionError: expected 'Missing parameter "fromSequenceNumber"' to equal 'The parameter "maxMessageCount" should be of type
"number"'
at Object.should.equal (test-browser/index.js:7759:39)
at Context.<anonymous> (test-browser/index.js:32018:26)
at Generator.throw (<anonymous>)
at rejected (test-browser/index.js:98:69)
```
**To Reproduce**
Steps to reproduce the behavior:
1. Clone https://github.com/ramya0820/azure-sdk-for-js/tree/service-bus-browser-tests-v1
2. Setup local workspace with appropriate `.env` file contents per README.
3. Mark tests in `invalidParameters.spec.ts` with `.only` to run only those tests.
3. run `npm run test:browser`
**Expected behavior**
All tests should pass, but failures appear as described above.
| non_infrastructure | fix functionality failures in browserified sdk invalid parameter scenarios describe the bug tests in invalidparameters spec ts fail when run against browserified version of service bus sdk following are the errors noted similar errors are noted when using subscription client session receiver as well failed tests invalid parameters in queueclient Γ peekbysequencenumber invalid maxmessagecount in queueclient headlesschrome windows typeerror missing parameter fromsequencenumber at throwtypeerrorifparametermissing test browser index js at managementclient test browser index js at generator next at test browser index js at new promise at awaiter test browser index js at managementclient peekbysequencenumber test browser index js at queueclient test browser index js at generator next at test browser index js Γ peekbysequencenumber wrong type maxmessagecount in queueclient headlesschrome windows assertionerror expected missing parameter fromsequencenumber to equal the parameter maxmessagecount should be of type number at object should equal test browser index js at context test browser index js at generator throw at rejected test browser index js to reproduce steps to reproduce the behavior clone setup local workspace with appropriate env file contents per readme mark tests in invalidparameters spec ts with only to run only those tests run npm run test browser expected behavior all tests should pass but failures appear as described above | 0 |
25,503 | 18,796,302,832 | IssuesEvent | 2021-11-08 22:52:21 | dotnet/razor-tooling | https://api.github.com/repos/dotnet/razor-tooling | closed | aspnetcore-tooling-ci builds often fail due to transient versioning issues. | area-infrastructure fundamentals | Builds of aspnetcore-tooling-ci have been failing quite frequently lately due to "random" versioning issues. Currently that pipeline only has a pass rate of 52.53%.
[Here's an example build](https://dev.azure.com/dnceng/public/_build/results?buildId=1388007&view=logs&j=7c8326b9-0a5f-532a-e6de-db8515c72d9a&t=2f0f1e20-badc-52e7-dbc6-29a52429fc6c) and the text reads something like:
```
There was a conflict between "StreamJsonRpc, Version=2.7.0.0, Culture=neutral, PublicKeyToken=xxxx" and "StreamJsonRpc, Version=2.8.0.0, Culture=neutral, PublicKeyToken=xxxx".
"StreamJsonRpc, Version=2.7.0.0, Culture=neutral, PublicKeyToken=xxxx" was chosen because it was primary and "StreamJsonRpc, Version=2.8.0.0, Culture=neutral, PublicKeyToken=xxxx" was not.
References which depend on "StreamJsonRpc, Version=2.7.0.0, Culture=neutral, PublicKeyToken=xxxx" [D:\workspace\_work\1\s\.packages\streamjsonrpc\2.7.70\lib\netstandard2.0\StreamJsonRpc.dll].
```
We had previously identified this problem as being related to a flakey NuGet source that was being investigated, but either that source hasn't been fixed in a couple weeks or it's happening again and they need to harden their infrastructure. | 1.0 | aspnetcore-tooling-ci builds often fail due to transient versioning issues. - Builds of aspnetcore-tooling-ci have been failing quite frequently lately due to "random" versioning issues. Currently that pipeline only has a pass rate of 52.53%.
[Here's an example build](https://dev.azure.com/dnceng/public/_build/results?buildId=1388007&view=logs&j=7c8326b9-0a5f-532a-e6de-db8515c72d9a&t=2f0f1e20-badc-52e7-dbc6-29a52429fc6c) and the text reads something like:
```
There was a conflict between "StreamJsonRpc, Version=2.7.0.0, Culture=neutral, PublicKeyToken=xxxx" and "StreamJsonRpc, Version=2.8.0.0, Culture=neutral, PublicKeyToken=xxxx".
"StreamJsonRpc, Version=2.7.0.0, Culture=neutral, PublicKeyToken=xxxx" was chosen because it was primary and "StreamJsonRpc, Version=2.8.0.0, Culture=neutral, PublicKeyToken=xxxx" was not.
References which depend on "StreamJsonRpc, Version=2.7.0.0, Culture=neutral, PublicKeyToken=xxxx" [D:\workspace\_work\1\s\.packages\streamjsonrpc\2.7.70\lib\netstandard2.0\StreamJsonRpc.dll].
```
We had previously identified this problem as being related to a flakey NuGet source that was being investigated, but either that source hasn't been fixed in a couple weeks or it's happening again and they need to harden their infrastructure. | infrastructure | aspnetcore tooling ci builds often fail due to transient versioning issues builds of aspnetcore tooling ci have been failing quite frequently lately due to random versioning issues currently that pipeline only has a pass rate of and the text reads something like there was a conflict between streamjsonrpc version culture neutral publickeytoken xxxx and streamjsonrpc version culture neutral publickeytoken xxxx streamjsonrpc version culture neutral publickeytoken xxxx was chosen because it was primary and streamjsonrpc version culture neutral publickeytoken xxxx was not references which depend on streamjsonrpc version culture neutral publickeytoken xxxx we had previously identified this problem as being related to a flakey nuget source that was being investigated but either that source hasn t been fixed in a couple weeks or it s happening again and they need to harden their infrastructure | 1 |
9,751 | 8,133,371,151 | IssuesEvent | 2018-08-19 00:29:34 | dotnet/corefx | https://api.github.com/repos/dotnet/corefx | closed | Disable CLI's first time experience from within our repo | area-Infrastructure | We don't want to run the CLI's first time experience in our builds.
See discussion: https://github.com/dotnet/corefx/pull/19407#issuecomment-299530720
/cc @weshaggard @eerhardt @livarcocc | 1.0 | Disable CLI's first time experience from within our repo - We don't want to run the CLI's first time experience in our builds.
See discussion: https://github.com/dotnet/corefx/pull/19407#issuecomment-299530720
/cc @weshaggard @eerhardt @livarcocc | infrastructure | disable cli s first time experience from within our repo we don t want to run the cli s first time experience in our builds see discussion cc weshaggard eerhardt livarcocc | 1 |
22,497 | 15,224,144,849 | IssuesEvent | 2021-02-18 04:29:12 | hyphacoop/organizing | https://api.github.com/repos/hyphacoop/organizing | opened | Move active and useful OKRs to Github Task Board reformulated as tasks | wg:business-planning wg:finance wg:governance wg:infrastructure wg:operations | <sup>_This initial comment is collaborative and open to modification by all._</sup>
π
**Due date:** March 1, 2021
## Task Summary
Each WG to review their OKRs and add those that are important to the task board. Associated with Call me Chrysalis initiative.
## To Do for each WG, check off when complete
- [ ] bizdev
- [ ] gov
- [ ] ops
- [ ] infra
- [ ] finance
| 1.0 | Move active and useful OKRs to Github Task Board reformulated as tasks - <sup>_This initial comment is collaborative and open to modification by all._</sup>
π
**Due date:** March 1, 2021
## Task Summary
Each WG to review their OKRs and add those that are important to the task board. Associated with Call me Chrysalis initiative.
## To Do for each WG, check off when complete
- [ ] bizdev
- [ ] gov
- [ ] ops
- [ ] infra
- [ ] finance
| infrastructure | move active and useful okrs to github task board reformulated as tasks this initial comment is collaborative and open to modification by all π
due date march task summary each wg to review their okrs and add those that are important to the task board associated with call me chrysalis initiative to do for each wg check off when complete bizdev gov ops infra finance | 1 |
254,474 | 27,389,382,636 | IssuesEvent | 2023-02-28 15:20:15 | Dima2021/easybuggy | https://api.github.com/repos/Dima2021/easybuggy | closed | CVE-2016-10735 (Medium) detected in bootstrap-3.3.7.min.js - autoclosed | security vulnerability | ## CVE-2016-10735 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>bootstrap-3.3.7.min.js</b></p></summary>
<p>The most popular front-end framework for developing responsive, mobile first projects on the web.</p>
<p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/twitter-bootstrap/3.3.7/js/bootstrap.min.js">https://cdnjs.cloudflare.com/ajax/libs/twitter-bootstrap/3.3.7/js/bootstrap.min.js</a></p>
<p>Path to dependency file: /src/main/webapp/dfi/style_bootstrap.html</p>
<p>Path to vulnerable library: /src/main/webapp/dfi/style_bootstrap.html,/target/easybuggy-1-SNAPSHOT/dfi/style_bootstrap.html</p>
<p>
Dependency Hierarchy:
- :x: **bootstrap-3.3.7.min.js** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/Dima2021/easybuggy/commit/516304f979df23a052978fab3c6f4960c7967169">516304f979df23a052978fab3c6f4960c7967169</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
In Bootstrap 3.x before 3.4.0 and 4.x-beta before 4.0.0-beta.2, XSS is possible in the data-target attribute, a different vulnerability than CVE-2018-14041.
<p>Publish Date: 2019-01-09
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2016-10735>CVE-2016-10735</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.1</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Changed
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2016-10735">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2016-10735</a></p>
<p>Release Date: 2019-01-09</p>
<p>Fix Resolution: bootstrap - 3.4.0, 4.0.0-beta.2</p>
</p>
</details>
<p></p>
| True | CVE-2016-10735 (Medium) detected in bootstrap-3.3.7.min.js - autoclosed - ## CVE-2016-10735 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>bootstrap-3.3.7.min.js</b></p></summary>
<p>The most popular front-end framework for developing responsive, mobile first projects on the web.</p>
<p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/twitter-bootstrap/3.3.7/js/bootstrap.min.js">https://cdnjs.cloudflare.com/ajax/libs/twitter-bootstrap/3.3.7/js/bootstrap.min.js</a></p>
<p>Path to dependency file: /src/main/webapp/dfi/style_bootstrap.html</p>
<p>Path to vulnerable library: /src/main/webapp/dfi/style_bootstrap.html,/target/easybuggy-1-SNAPSHOT/dfi/style_bootstrap.html</p>
<p>
Dependency Hierarchy:
- :x: **bootstrap-3.3.7.min.js** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/Dima2021/easybuggy/commit/516304f979df23a052978fab3c6f4960c7967169">516304f979df23a052978fab3c6f4960c7967169</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
In Bootstrap 3.x before 3.4.0 and 4.x-beta before 4.0.0-beta.2, XSS is possible in the data-target attribute, a different vulnerability than CVE-2018-14041.
<p>Publish Date: 2019-01-09
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2016-10735>CVE-2016-10735</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.1</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Changed
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2016-10735">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2016-10735</a></p>
<p>Release Date: 2019-01-09</p>
<p>Fix Resolution: bootstrap - 3.4.0, 4.0.0-beta.2</p>
</p>
</details>
<p></p>
| non_infrastructure | cve medium detected in bootstrap min js autoclosed cve medium severity vulnerability vulnerable library bootstrap min js the most popular front end framework for developing responsive mobile first projects on the web library home page a href path to dependency file src main webapp dfi style bootstrap html path to vulnerable library src main webapp dfi style bootstrap html target easybuggy snapshot dfi style bootstrap html dependency hierarchy x bootstrap min js vulnerable library found in head commit a href found in base branch master vulnerability details in bootstrap x before and x beta before beta xss is possible in the data target attribute a different vulnerability than cve publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction required scope changed impact metrics confidentiality impact low integrity impact low availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution bootstrap beta | 0 |
24,243 | 17,032,081,057 | IssuesEvent | 2021-07-04 19:29:51 | gfx-rs/wgpu | https://api.github.com/repos/gfx-rs/wgpu | reopened | Figure Out Testing on CI on Linux | area: infrastructure type: enhancement | **Is your feature request related to a problem? Please describe.**
With #1538 merged, we have the ability to test our graphics code in CI. Once dx12 and dx11 are implemented, we can use WARP to test these backeds on the windows build.
**Describe the solution you'd like**
The two main software implementation of vulkan are lavapipe and swiftshader. Both have had issues. We need to evaluate them or any other options.
**Describe alternatives you've considered**
Relying solely on WARP and local testing of our test suite. We may also be able to rely on our todo network of testing machines. | 1.0 | Figure Out Testing on CI on Linux - **Is your feature request related to a problem? Please describe.**
With #1538 merged, we have the ability to test our graphics code in CI. Once dx12 and dx11 are implemented, we can use WARP to test these backeds on the windows build.
**Describe the solution you'd like**
The two main software implementation of vulkan are lavapipe and swiftshader. Both have had issues. We need to evaluate them or any other options.
**Describe alternatives you've considered**
Relying solely on WARP and local testing of our test suite. We may also be able to rely on our todo network of testing machines. | infrastructure | figure out testing on ci on linux is your feature request related to a problem please describe with merged we have the ability to test our graphics code in ci once and are implemented we can use warp to test these backeds on the windows build describe the solution you d like the two main software implementation of vulkan are lavapipe and swiftshader both have had issues we need to evaluate them or any other options describe alternatives you ve considered relying solely on warp and local testing of our test suite we may also be able to rely on our todo network of testing machines | 1 |
290,399 | 21,877,105,195 | IssuesEvent | 2022-05-19 11:12:24 | appsmithorg/appsmith | https://api.github.com/repos/appsmithorg/appsmith | closed | [Docs] #12319 Add BE analytics event for templates | Documentation User Education Pod | > TODO
- [ ] Evaluate if this task is needed. If not add the "Skip Docs" label on the parent ticket
- [ ] Fill these fields
- [ ] Prepare first draft
- [ ] Add label: "Ready for Docs Team"
Field | Details
-----|-----
**POD** | New Developers Pod
**Parent Ticket** | #12319
Engineer |
Release Date |
Live Date |
First Draft |
Auto Assign |
Priority |
Environment | | 1.0 | [Docs] #12319 Add BE analytics event for templates - > TODO
- [ ] Evaluate if this task is needed. If not add the "Skip Docs" label on the parent ticket
- [ ] Fill these fields
- [ ] Prepare first draft
- [ ] Add label: "Ready for Docs Team"
Field | Details
-----|-----
**POD** | New Developers Pod
**Parent Ticket** | #12319
Engineer |
Release Date |
Live Date |
First Draft |
Auto Assign |
Priority |
Environment | | non_infrastructure | add be analytics event for templates todo evaluate if this task is needed if not add the skip docs label on the parent ticket fill these fields prepare first draft add label ready for docs team field details pod new developers pod parent ticket engineer release date live date first draft auto assign priority environment | 0 |
95,513 | 19,705,629,317 | IssuesEvent | 2022-01-12 21:37:16 | detiuaveiro/RacingGame- | https://api.github.com/repos/detiuaveiro/RacingGame- | closed | Bug: Adding mana is according to the frames! Change it to be with Time.DeltaTime | bug Code | Adding mana will be different on different machines, so use Time.DeltaTime instead! | 1.0 | Bug: Adding mana is according to the frames! Change it to be with Time.DeltaTime - Adding mana will be different on different machines, so use Time.DeltaTime instead! | non_infrastructure | bug adding mana is according to the frames change it to be with time deltatime adding mana will be different on different machines so use time deltatime instead | 0 |
808,088 | 30,033,461,272 | IssuesEvent | 2023-06-27 11:10:09 | rangav/thunder-client-support | https://api.github.com/repos/rangav/thunder-client-support | closed | Operation not Permitted when I use a large csv file after ~ 200 iterations | bug Priority | **Describe the bug**
Error in Set Env: EPERM: operation not permitted, rename 'c:\Users\Public\my-tests\thunder-tests\.thunderEnvironment.json.tmp' -> 'c:\Users\Public\my-tests\thunder-tests\thunderEnvironment.json'Β
**To Reproduce**
We did run a larger test with one request but with a large csv file. approx 300 datasets (lines) and 15 columns.
The first run with 300 datasets stop at around 180 execution.
The second run did run smoothly.
The third run did stop at 210 exection. With the error above.
Means it is not alway reproducible :(
**Expected behavior**
A bit mor information would help, do identify the problem :)
**Platform:**
- OS: Windows 10
- vscode version: `Version: 1.78.2 (user setup)
Commit: b3e4e68a0bc097f0ae7907b217c1119af9e03435
Date: 2023-05-10T14:39:26.248Z
Electron: 22.5.2
Chromium: 108.0.5359.215
Node.js: 16.17.1
V8: 10.8.168.25-electron.0
OS: Windows_NT x64 10.0.19044
Sandboxed: Yes`
- extension version: v2.6.1
**Your Team Size Using TC:**
2-10
| 1.0 | Operation not Permitted when I use a large csv file after ~ 200 iterations - **Describe the bug**
Error in Set Env: EPERM: operation not permitted, rename 'c:\Users\Public\my-tests\thunder-tests\.thunderEnvironment.json.tmp' -> 'c:\Users\Public\my-tests\thunder-tests\thunderEnvironment.json'Β
**To Reproduce**
We did run a larger test with one request but with a large csv file. approx 300 datasets (lines) and 15 columns.
The first run with 300 datasets stop at around 180 execution.
The second run did run smoothly.
The third run did stop at 210 exection. With the error above.
Means it is not alway reproducible :(
**Expected behavior**
A bit mor information would help, do identify the problem :)
**Platform:**
- OS: Windows 10
- vscode version: `Version: 1.78.2 (user setup)
Commit: b3e4e68a0bc097f0ae7907b217c1119af9e03435
Date: 2023-05-10T14:39:26.248Z
Electron: 22.5.2
Chromium: 108.0.5359.215
Node.js: 16.17.1
V8: 10.8.168.25-electron.0
OS: Windows_NT x64 10.0.19044
Sandboxed: Yes`
- extension version: v2.6.1
**Your Team Size Using TC:**
2-10
| non_infrastructure | operation not permitted when i use a large csv file after iterations describe the bug error in set env eperm operation not permitted rename c users public my tests thunder tests thunderenvironment json tmp c users public my tests thunder tests thunderenvironment json Β to reproduce we did run a larger test with one request but with a large csv file approx datasets lines and columns the first run with datasets stop at around execution the second run did run smoothly the third run did stop at exection with the error above means it is not alway reproducible expected behavior a bit mor information would help do identify the problem platform os windows vscode version version user setup commit date electron chromium node js electron os windows nt sandboxed yes extension version your team size using tc | 0 |
115,034 | 24,709,743,833 | IssuesEvent | 2022-10-19 22:51:30 | mozilla-mobile/android-components | https://api.github.com/repos/mozilla-mobile/android-components | closed | Extract Logings and credit cards code from PromptFeature | β¨οΈ code <prompts> | We @pocmo, @csadilek we were chatting about refactoring the `PromptFeature` class to just have prompts that are coming from the web content and have the engine prompts like credit cards, logings and passwords in a separate feature/request as those need a special treatment, and mixing the feature/request makes the `PromptFeature` difficult to maintain.
βIssue is synchronized with this [Jira Task](https://mozilla-hub.atlassian.net/browse/FNXV2-18936)
| 1.0 | Extract Logings and credit cards code from PromptFeature - We @pocmo, @csadilek we were chatting about refactoring the `PromptFeature` class to just have prompts that are coming from the web content and have the engine prompts like credit cards, logings and passwords in a separate feature/request as those need a special treatment, and mixing the feature/request makes the `PromptFeature` difficult to maintain.
βIssue is synchronized with this [Jira Task](https://mozilla-hub.atlassian.net/browse/FNXV2-18936)
| non_infrastructure | extract logings and credit cards code from promptfeature we pocmo csadilek we were chatting about refactoring the promptfeature class to just have prompts that are coming from the web content and have the engine prompts like credit cards logings and passwords in a separate feature request as those need a special treatment and mixing the feature request makes the promptfeature difficult to maintain βissue is synchronized with this | 0 |
31,994 | 26,338,103,916 | IssuesEvent | 2023-01-10 15:42:55 | OpenLiberty/openliberty.io | https://api.github.com/repos/OpenLiberty/openliberty.io | closed | Upgrade Liberty version monthly instead of quarterly | infrastructure | By default, openliberty.io is running on the Liberty `quarterly` release from the buildpack. Consider enabling the usage of the Liberty `monthly` release.
To start using the `monthly` release...
```
To use the V19.0.01 Liberty monthly release, you must set the following environment variables:
JBP_CONFIG_LIBERTY = 'version: +'
IBM_LIBERTY_MONTHLY = true
```
19001 buildpack announcement
https://console.bluemix.net/status/notification/d5ebe0fb74647bcf134512e304e75588 | 1.0 | Upgrade Liberty version monthly instead of quarterly - By default, openliberty.io is running on the Liberty `quarterly` release from the buildpack. Consider enabling the usage of the Liberty `monthly` release.
To start using the `monthly` release...
```
To use the V19.0.01 Liberty monthly release, you must set the following environment variables:
JBP_CONFIG_LIBERTY = 'version: +'
IBM_LIBERTY_MONTHLY = true
```
19001 buildpack announcement
https://console.bluemix.net/status/notification/d5ebe0fb74647bcf134512e304e75588 | infrastructure | upgrade liberty version monthly instead of quarterly by default openliberty io is running on the liberty quarterly release from the buildpack consider enabling the usage of the liberty monthly release to start using the monthly release to use the liberty monthly release you must set the following environment variables jbp config liberty version ibm liberty monthly true buildpack announcement | 1 |
15,008 | 11,298,628,364 | IssuesEvent | 2020-01-17 09:26:31 | ecattez/shahmat | https://api.github.com/repos/ecattez/shahmat | opened | Board Projection : HAL Representation | draft infrastructure | Domain events should be listened to create projections of the board game.
For HTTP Clients, we should send a representation as below.
```
{
"_links": {
"self": {
"href": "/board/12345"
},
"white": {
"href": "/players/javadoc"
},
"black": {
"href": "/players/hostmax"
}
},
"_embedded": {
"pieces": [
{
"type": "BISHOP",
"color": "WHITE",
"location": "A1",
"_links": {
"self": "/board/12345/pieces/1"
},
"_templates": {
"move": {
"method": "POST",
"properties": [{
"name": "to",
"regex": "[aAbBcCdDeEfFgGhH][1-8]",
"suggest": [{
"value": "B2"
}]
}]
}
}
},
{
"type": "QUEEN",
"color": "BLACK",
"location": "A2",
"_links": {
"self": "/board/12345/pieces/2"
}
},
{
"type": "KING",
"color": "BLACK",
"location": "H2",
"checked": false,
"_links": {
"self": "/board/12345/pieces/3"
}
}
]
},
"files": [
"A", "B", "C", "D", "E", "F", "G", "H"
],
"ranks": [
1, 2, 3, 4, 5, 6, 7, 8
],
"turn-of": "WHITE",
"living-white-pieces": 5,
"living-black-pieces": 3
}
``` | 1.0 | Board Projection : HAL Representation - Domain events should be listened to create projections of the board game.
For HTTP Clients, we should send a representation as below.
```
{
"_links": {
"self": {
"href": "/board/12345"
},
"white": {
"href": "/players/javadoc"
},
"black": {
"href": "/players/hostmax"
}
},
"_embedded": {
"pieces": [
{
"type": "BISHOP",
"color": "WHITE",
"location": "A1",
"_links": {
"self": "/board/12345/pieces/1"
},
"_templates": {
"move": {
"method": "POST",
"properties": [{
"name": "to",
"regex": "[aAbBcCdDeEfFgGhH][1-8]",
"suggest": [{
"value": "B2"
}]
}]
}
}
},
{
"type": "QUEEN",
"color": "BLACK",
"location": "A2",
"_links": {
"self": "/board/12345/pieces/2"
}
},
{
"type": "KING",
"color": "BLACK",
"location": "H2",
"checked": false,
"_links": {
"self": "/board/12345/pieces/3"
}
}
]
},
"files": [
"A", "B", "C", "D", "E", "F", "G", "H"
],
"ranks": [
1, 2, 3, 4, 5, 6, 7, 8
],
"turn-of": "WHITE",
"living-white-pieces": 5,
"living-black-pieces": 3
}
``` | infrastructure | board projection hal representation domain events should be listened to create projections of the board game for http clients we should send a representation as below links self href board white href players javadoc black href players hostmax embedded pieces type bishop color white location links self board pieces templates move method post properties name to regex suggest value type queen color black location links self board pieces type king color black location checked false links self board pieces files a b c d e f g h ranks turn of white living white pieces living black pieces | 1 |
490,725 | 14,139,217,465 | IssuesEvent | 2020-11-10 09:31:57 | aau-giraf/weekplanner | https://api.github.com/repos/aau-giraf/weekplanner | closed | Text overflow in "Vælg billede fra galleri" screen | group 9 point: 13 priority: low type: bug | **Describe the bug**
When the search bar is pressed in the "Vælg billede fra galleri" screen. There is an overflow in the text.
**To Reproduce**
Steps to reproduce the behavior:
1. Go to "Vælg billede fra galleri" screen
2. Click on the search bar.
4. See error
**Expected behavior**
There should not be a pixel overflow.
**Actual behavior**
There is a pixel overflow
**Screenshots**

**Environment (please complete the following information):**
- OS: Android
- Emulator: Yes
- WXGA 10.1 inch tablet
- APK Version [30] | 1.0 | Text overflow in "Vælg billede fra galleri" screen - **Describe the bug**
When the search bar is pressed in the "Vælg billede fra galleri" screen. There is an overflow in the text.
**To Reproduce**
Steps to reproduce the behavior:
1. Go to "Vælg billede fra galleri" screen
2. Click on the search bar.
4. See error
**Expected behavior**
There should not be a pixel overflow.
**Actual behavior**
There is a pixel overflow
**Screenshots**

**Environment (please complete the following information):**
- OS: Android
- Emulator: Yes
- WXGA 10.1 inch tablet
- APK Version [30] | non_infrastructure | text overflow in vælg billede fra galleri screen describe the bug when the search bar is pressed in the vælg billede fra galleri screen there is an overflow in the text to reproduce steps to reproduce the behavior go to vælg billede fra galleri screen click on the search bar see error expected behavior there should not be a pixel overflow actual behavior there is a pixel overflow screenshots environment please complete the following information os android emulator yes wxga inch tablet apk version | 0 |
13,873 | 10,514,315,968 | IssuesEvent | 2019-09-27 23:55:27 | oppia/oppia-android | https://api.github.com/repos/oppia/oppia-android | closed | Introduce interface for TopicController | Priority: Essential Status: In implementation Type: Improvement Where: Infrastructure Workstream: Domain Interface | This is tracking introducing a stubbed interface for #15 without the real implementation being complete. | 1.0 | Introduce interface for TopicController - This is tracking introducing a stubbed interface for #15 without the real implementation being complete. | infrastructure | introduce interface for topiccontroller this is tracking introducing a stubbed interface for without the real implementation being complete | 1 |
13,293 | 10,194,888,899 | IssuesEvent | 2019-08-12 16:44:23 | dart-lang/sdk | https://api.github.com/repos/dart-lang/sdk | opened | Separate builders and testers | P1-High area-infrastructure | - [ ] Create build.py recipe:
1. checkout
2. build
3. upload to isolate server
4. set isolate hashes as output properties
- [ ] Create new neo recipe
1. Fetch test_matrix.json from gitiles API.
2. Read buildInputs from test_matrix.json & input commit.
3. build = bb.search(buildInputs).
4. if not build: build = bb.schedule(id=buildInputs.hash, bi). // led for led runs
5. <fs, hashes> = bb.collect(build).
6. shards = shard tests(hash)
7. collect shards
8. deflaking (only sharded)
9. process results (CIPD package to get tooling? gitiles API?)
10. bq upload
| 1.0 | Separate builders and testers - - [ ] Create build.py recipe:
1. checkout
2. build
3. upload to isolate server
4. set isolate hashes as output properties
- [ ] Create new neo recipe
1. Fetch test_matrix.json from gitiles API.
2. Read buildInputs from test_matrix.json & input commit.
3. build = bb.search(buildInputs).
4. if not build: build = bb.schedule(id=buildInputs.hash, bi). // led for led runs
5. <fs, hashes> = bb.collect(build).
6. shards = shard tests(hash)
7. collect shards
8. deflaking (only sharded)
9. process results (CIPD package to get tooling? gitiles API?)
10. bq upload
| infrastructure | separate builders and testers create build py recipe checkout build upload to isolate server set isolate hashes as output properties create new neo recipe fetch test matrix json from gitiles api read buildinputs from test matrix json input commit build bb search buildinputs if not build build bb schedule id buildinputs hash bi led for led runs bb collect build shards shard tests hash collect shards deflaking only sharded process results cipd package to get tooling gitiles api bq upload | 1 |
176,352 | 28,074,579,434 | IssuesEvent | 2023-03-29 22:00:56 | chapel-lang/chapel | https://api.github.com/repos/chapel-lang/chapel | closed | Should types be generic due to their initializers' argument lists rather than their fields? | type: Design area: Language | Capturing some recent incomplete thoughts related to generic types here for posterity or to see what it shakes loose for others:
Traditionally, in Chapel, I believe we've considered a type to be generic based on its fieldsβthat is, whether it has `param` fields, `type` fields, untyped fields, or fields whose declared types are generic. As we've been wrestling with generic types recently, I've been wondering whether this is completely wrong. For example, consider the case:
```chapel
record R {
param p: int;
type t;
var x;
}
```
Traditionally, we'd say this was generic because of `t`, `p`, and `x`. But now imagine its only initializer was:
```chapel
proc init() {
this.p = 2;
this.t = p*int;
this.x = 3.1415;
}
```
Because of this initializer's definition, `R` only has one possible definition, so is therefore is arguably concrete, not generic. We could view `p` and `t` as ways of creating symbolic names for the class's usage (note the relation to the conversation on https://github.com/chapel-lang/chapel/issues/12613 which I'd summarize as "sometimes I want `type t` to give me a shorthand and not to make my type generic"), and `x` as being a case of laziness / leveraging Chapel's type inference.
For me, there's a strong analogy to split initialization where this code is similarly not generic, it just defers some bindings until after the declaration point:
```chapel
param p: int;
type t;
var x;
p = 2;
t = p*int;
x = 3.1415;
```
This has led me to (re-?)wonder whether:
* the generic-ness of a type should be based on how generic its initializers and initializer arguments are rather than what fields it contains
* the phase 1 bodies of initializer routines should be unified by the compiler/required to be unifiable by the language similar to how the branches of a conditional statement are in the presence of split initialization, since they are reasonably analogous
* when combined with changes like those proposed in #21410 and/or #21455, whether this would make whether or not a class/record was generic easier to determine than it is today (by looking at how generic its initializer argument lists are)
Beyond those musings, the main challenge question for me currently is: If the arguments of an initializer were generic, what would the implications on the type's type signature be? For cases like `param` and `type` arguments to the initializer, I think it's straightforward. For example, if I replaced the 0-argument initializer above with:
```chapel
proc init(param p: int, type t, type xtype) {
this.p = p;
this.t = t;
this.x = new xtype();
}
```
It seems logical that R's type signature would be something like `R(param p, type t, type xtype)` so `R(2, int, C)` might be a concrete instantiation of the type. But if the initializer were:
```chapel
proc init(var x: int(?w)) {
this.p = w;
this.t = uint(w);
this.x = x;
}
```
then it's less clear what it should be. E.g., maybe `R(param w: int)`? Or, what if it were:
```chapel
proc init(var c: C(?)) { ... }
```
where C was a generic class (by this same definition)βwhat would it be then? And how much work would it be for a user or the compiler to determine this? Or how would they get the documentation for it?
Or, should it be the case that when an initializer does rely on generic arguments, the type author should have to write an explicit type initializer as well, for example, perhaps:
```chapel
proc init(param w: int) type {
this.w = w;
}
```
and the compiler will complain at them if they do not?
Anyway, despite this big lingering question, what I've liked about this thought process is that it seems to make field initialization and split variable initialization more similar to one another rather than less; and it seems to make types only as generic as they need to be (potentially not at all) rather than as generic as inspection of their fields might suggestβi.e., a naive reading of `R`'s fields suggests it's generic in three ways while the initializers above show that it might be generic in no ways or just 1 way. | 1.0 | Should types be generic due to their initializers' argument lists rather than their fields? - Capturing some recent incomplete thoughts related to generic types here for posterity or to see what it shakes loose for others:
Traditionally, in Chapel, I believe we've considered a type to be generic based on its fieldsβthat is, whether it has `param` fields, `type` fields, untyped fields, or fields whose declared types are generic. As we've been wrestling with generic types recently, I've been wondering whether this is completely wrong. For example, consider the case:
```chapel
record R {
param p: int;
type t;
var x;
}
```
Traditionally, we'd say this was generic because of `t`, `p`, and `x`. But now imagine its only initializer was:
```chapel
proc init() {
this.p = 2;
this.t = p*int;
this.x = 3.1415;
}
```
Because of this initializer's definition, `R` only has one possible definition, so is therefore is arguably concrete, not generic. We could view `p` and `t` as ways of creating symbolic names for the class's usage (note the relation to the conversation on https://github.com/chapel-lang/chapel/issues/12613 which I'd summarize as "sometimes I want `type t` to give me a shorthand and not to make my type generic"), and `x` as being a case of laziness / leveraging Chapel's type inference.
For me, there's a strong analogy to split initialization where this code is similarly not generic, it just defers some bindings until after the declaration point:
```chapel
param p: int;
type t;
var x;
p = 2;
t = p*int;
x = 3.1415;
```
This has led me to (re-?)wonder whether:
* the generic-ness of a type should be based on how generic its initializers and initializer arguments are rather than what fields it contains
* the phase 1 bodies of initializer routines should be unified by the compiler/required to be unifiable by the language similar to how the branches of a conditional statement are in the presence of split initialization, since they are reasonably analogous
* when combined with changes like those proposed in #21410 and/or #21455, whether this would make whether or not a class/record was generic easier to determine than it is today (by looking at how generic its initializer argument lists are)
Beyond those musings, the main challenge question for me currently is: If the arguments of an initializer were generic, what would the implications on the type's type signature be? For cases like `param` and `type` arguments to the initializer, I think it's straightforward. For example, if I replaced the 0-argument initializer above with:
```chapel
proc init(param p: int, type t, type xtype) {
this.p = p;
this.t = t;
this.x = new xtype();
}
```
It seems logical that R's type signature would be something like `R(param p, type t, type xtype)` so `R(2, int, C)` might be a concrete instantiation of the type. But if the initializer were:
```chapel
proc init(var x: int(?w)) {
this.p = w;
this.t = uint(w);
this.x = x;
}
```
then it's less clear what it should be. E.g., maybe `R(param w: int)`? Or, what if it were:
```chapel
proc init(var c: C(?)) { ... }
```
where C was a generic class (by this same definition)βwhat would it be then? And how much work would it be for a user or the compiler to determine this? Or how would they get the documentation for it?
Or, should it be the case that when an initializer does rely on generic arguments, the type author should have to write an explicit type initializer as well, for example, perhaps:
```chapel
proc init(param w: int) type {
this.w = w;
}
```
and the compiler will complain at them if they do not?
Anyway, despite this big lingering question, what I've liked about this thought process is that it seems to make field initialization and split variable initialization more similar to one another rather than less; and it seems to make types only as generic as they need to be (potentially not at all) rather than as generic as inspection of their fields might suggestβi.e., a naive reading of `R`'s fields suggests it's generic in three ways while the initializers above show that it might be generic in no ways or just 1 way. | non_infrastructure | should types be generic due to their initializers argument lists rather than their fields capturing some recent incomplete thoughts related to generic types here for posterity or to see what it shakes loose for others traditionally in chapel i believe we ve considered a type to be generic based on its fieldsβthat is whether it has param fields type fields untyped fields or fields whose declared types are generic as we ve been wrestling with generic types recently i ve been wondering whether this is completely wrong for example consider the case chapel record r param p int type t var x traditionally we d say this was generic because of t p and x but now imagine its only initializer was chapel proc init this p this t p int this x because of this initializer s definition r only has one possible definition so is therefore is arguably concrete not generic we could view p and t as ways of creating symbolic names for the class s usage note the relation to the conversation on which i d summarize as sometimes i want type t to give me a shorthand and not to make my type generic and x as being a case of laziness leveraging chapel s type inference for me there s a strong analogy to split initialization where this code is similarly not generic it just defers some bindings until after the declaration point chapel param p int type t var x p t p int x this has led me to re wonder whether the generic ness of a type should be based on how generic its initializers and initializer arguments are rather than what fields it contains the phase bodies of initializer routines should be unified by the compiler required to be unifiable by the language similar to how the branches of a conditional statement are in the presence of split initialization since they are reasonably analogous when combined with changes like those proposed in and or whether this would make whether or not a class record was generic easier to determine than it is today by looking at how generic its initializer argument lists are beyond those musings the main challenge question for me currently is if the arguments of an initializer were generic what would the implications on the type s type signature be for cases like param and type arguments to the initializer i think it s straightforward for example if i replaced the argument initializer above with chapel proc init param p int type t type xtype this p p this t t this x new xtype it seems logical that r s type signature would be something like r param p type t type xtype so r int c might be a concrete instantiation of the type but if the initializer were chapel proc init var x int w this p w this t uint w this x x then it s less clear what it should be e g maybe r param w int or what if it were chapel proc init var c c where c was a generic class by this same definition βwhat would it be then and how much work would it be for a user or the compiler to determine this or how would they get the documentation for it or should it be the case that when an initializer does rely on generic arguments the type author should have to write an explicit type initializer as well for example perhaps chapel proc init param w int type this w w and the compiler will complain at them if they do not anyway despite this big lingering question what i ve liked about this thought process is that it seems to make field initialization and split variable initialization more similar to one another rather than less and it seems to make types only as generic as they need to be potentially not at all rather than as generic as inspection of their fields might suggestβi e a naive reading of r s fields suggests it s generic in three ways while the initializers above show that it might be generic in no ways or just way | 0 |
7,349 | 6,916,269,003 | IssuesEvent | 2017-11-29 01:35:24 | zulip/zulip | https://api.github.com/repos/zulip/zulip | closed | Fix traceback error output from test_realm_scenarios | area: testing-infrastructure bug priority: high | I think this error output is the result of our having recently changed the queue processors to run the `consume` methods when `queue_json_publish` is called in tests. Not sure yet.
```
Running zerver.tests.test_messages.TestCrossRealmPMs.test_realm_scenarios
2017-10-27 23:29:50.575 ERR [] Error queueing internal message by welcome-bot@zulip.com: You can't send private messages outside of your organization.
Traceback (most recent call last):
File "/home/tabbott/zulip/zerver/lib/actions.py", line 1760, in check_message
forwarder_user_profile, sender)
File "/home/tabbott/zulip/zerver/lib/actions.py", line 1544, in recipient_for_user_profiles
recipient_profile_ids = validate_recipient_user_profiles(user_profiles, sender)
File "/home/tabbott/zulip/zerver/lib/actions.py", line 1523, in validate_recipient_user_profiles
raise ValidationError(_("You can't send private messages outside of your organization."))
django.core.exceptions.ValidationError: ["You can't send private messages outside of your organization."]
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/tabbott/zulip/zerver/lib/actions.py", line 1811, in _internal_prep_message
content, realm=realm)
File "/home/tabbott/zulip/zerver/lib/actions.py", line 1763, in check_message
raise JsonableError(e.messages[0])
zerver.lib.exceptions.JsonableError: You can't send private messages outside of your organization.
2017-10-27 23:29:50.645 ERR [] Error queueing internal message by welcome-bot@zulip.com: You can't send private messages outside of your organization.
Traceback (most recent call last):
File "/home/tabbott/zulip/zerver/lib/actions.py", line 1760, in check_message
forwarder_user_profile, sender)
File "/home/tabbott/zulip/zerver/lib/actions.py", line 1544, in recipient_for_user_profiles
recipient_profile_ids = validate_recipient_user_profiles(user_profiles, sender)
File "/home/tabbott/zulip/zerver/lib/actions.py", line 1523, in validate_recipient_user_profiles
raise ValidationError(_("You can't send private messages outside of your organization."))
django.core.exceptions.ValidationError: ["You can't send private messages outside of your organization."]
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/tabbott/zulip/zerver/lib/actions.py", line 1811, in _internal_prep_message
content, realm=realm)
File "/home/tabbott/zulip/zerver/lib/actions.py", line 1763, in check_message
raise JsonableError(e.messages[0])
zerver.lib.exceptions.JsonableError: You can't send private messages outside of your organization.
2017-10-27 23:29:50.713 ERR [] Error queueing internal message by welcome-bot@zulip.com: You can't send private messages outside of your organization.
Traceback (most recent call last):
File "/home/tabbott/zulip/zerver/lib/actions.py", line 1760, in check_message
forwarder_user_profile, sender)
File "/home/tabbott/zulip/zerver/lib/actions.py", line 1544, in recipient_for_user_profiles
recipient_profile_ids = validate_recipient_user_profiles(user_profiles, sender)
File "/home/tabbott/zulip/zerver/lib/actions.py", line 1523, in validate_recipient_user_profiles
raise ValidationError(_("You can't send private messages outside of your organization."))
django.core.exceptions.ValidationError: ["You can't send private messages outside of your organization."]
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/tabbott/zulip/zerver/lib/actions.py", line 1811, in _internal_prep_message
content, realm=realm)
File "/home/tabbott/zulip/zerver/lib/actions.py", line 1763, in check_message
raise JsonableError(e.messages[0])
zerver.lib.exceptions.JsonableError: You can't send private messages outside of your organization.
2017-10-27 23:29:50.775 ERR [] Error queueing internal message by welcome-bot@zulip.com: You can't send private messages outside of your organization.
Traceback (most recent call last):
File "/home/tabbott/zulip/zerver/lib/actions.py", line 1760, in check_message
forwarder_user_profile, sender)
File "/home/tabbott/zulip/zerver/lib/actions.py", line 1544, in recipient_for_user_profiles
recipient_profile_ids = validate_recipient_user_profiles(user_profiles, sender)
File "/home/tabbott/zulip/zerver/lib/actions.py", line 1523, in validate_recipient_user_profiles
raise ValidationError(_("You can't send private messages outside of your organization."))
django.core.exceptions.ValidationError: ["You can't send private messages outside of your organization."]
``` | 1.0 | Fix traceback error output from test_realm_scenarios - I think this error output is the result of our having recently changed the queue processors to run the `consume` methods when `queue_json_publish` is called in tests. Not sure yet.
```
Running zerver.tests.test_messages.TestCrossRealmPMs.test_realm_scenarios
2017-10-27 23:29:50.575 ERR [] Error queueing internal message by welcome-bot@zulip.com: You can't send private messages outside of your organization.
Traceback (most recent call last):
File "/home/tabbott/zulip/zerver/lib/actions.py", line 1760, in check_message
forwarder_user_profile, sender)
File "/home/tabbott/zulip/zerver/lib/actions.py", line 1544, in recipient_for_user_profiles
recipient_profile_ids = validate_recipient_user_profiles(user_profiles, sender)
File "/home/tabbott/zulip/zerver/lib/actions.py", line 1523, in validate_recipient_user_profiles
raise ValidationError(_("You can't send private messages outside of your organization."))
django.core.exceptions.ValidationError: ["You can't send private messages outside of your organization."]
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/tabbott/zulip/zerver/lib/actions.py", line 1811, in _internal_prep_message
content, realm=realm)
File "/home/tabbott/zulip/zerver/lib/actions.py", line 1763, in check_message
raise JsonableError(e.messages[0])
zerver.lib.exceptions.JsonableError: You can't send private messages outside of your organization.
2017-10-27 23:29:50.645 ERR [] Error queueing internal message by welcome-bot@zulip.com: You can't send private messages outside of your organization.
Traceback (most recent call last):
File "/home/tabbott/zulip/zerver/lib/actions.py", line 1760, in check_message
forwarder_user_profile, sender)
File "/home/tabbott/zulip/zerver/lib/actions.py", line 1544, in recipient_for_user_profiles
recipient_profile_ids = validate_recipient_user_profiles(user_profiles, sender)
File "/home/tabbott/zulip/zerver/lib/actions.py", line 1523, in validate_recipient_user_profiles
raise ValidationError(_("You can't send private messages outside of your organization."))
django.core.exceptions.ValidationError: ["You can't send private messages outside of your organization."]
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/tabbott/zulip/zerver/lib/actions.py", line 1811, in _internal_prep_message
content, realm=realm)
File "/home/tabbott/zulip/zerver/lib/actions.py", line 1763, in check_message
raise JsonableError(e.messages[0])
zerver.lib.exceptions.JsonableError: You can't send private messages outside of your organization.
2017-10-27 23:29:50.713 ERR [] Error queueing internal message by welcome-bot@zulip.com: You can't send private messages outside of your organization.
Traceback (most recent call last):
File "/home/tabbott/zulip/zerver/lib/actions.py", line 1760, in check_message
forwarder_user_profile, sender)
File "/home/tabbott/zulip/zerver/lib/actions.py", line 1544, in recipient_for_user_profiles
recipient_profile_ids = validate_recipient_user_profiles(user_profiles, sender)
File "/home/tabbott/zulip/zerver/lib/actions.py", line 1523, in validate_recipient_user_profiles
raise ValidationError(_("You can't send private messages outside of your organization."))
django.core.exceptions.ValidationError: ["You can't send private messages outside of your organization."]
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/tabbott/zulip/zerver/lib/actions.py", line 1811, in _internal_prep_message
content, realm=realm)
File "/home/tabbott/zulip/zerver/lib/actions.py", line 1763, in check_message
raise JsonableError(e.messages[0])
zerver.lib.exceptions.JsonableError: You can't send private messages outside of your organization.
2017-10-27 23:29:50.775 ERR [] Error queueing internal message by welcome-bot@zulip.com: You can't send private messages outside of your organization.
Traceback (most recent call last):
File "/home/tabbott/zulip/zerver/lib/actions.py", line 1760, in check_message
forwarder_user_profile, sender)
File "/home/tabbott/zulip/zerver/lib/actions.py", line 1544, in recipient_for_user_profiles
recipient_profile_ids = validate_recipient_user_profiles(user_profiles, sender)
File "/home/tabbott/zulip/zerver/lib/actions.py", line 1523, in validate_recipient_user_profiles
raise ValidationError(_("You can't send private messages outside of your organization."))
django.core.exceptions.ValidationError: ["You can't send private messages outside of your organization."]
``` | infrastructure | fix traceback error output from test realm scenarios i think this error output is the result of our having recently changed the queue processors to run the consume methods when queue json publish is called in tests not sure yet running zerver tests test messages testcrossrealmpms test realm scenarios err error queueing internal message by welcome bot zulip com you can t send private messages outside of your organization traceback most recent call last file home tabbott zulip zerver lib actions py line in check message forwarder user profile sender file home tabbott zulip zerver lib actions py line in recipient for user profiles recipient profile ids validate recipient user profiles user profiles sender file home tabbott zulip zerver lib actions py line in validate recipient user profiles raise validationerror you can t send private messages outside of your organization django core exceptions validationerror during handling of the above exception another exception occurred traceback most recent call last file home tabbott zulip zerver lib actions py line in internal prep message content realm realm file home tabbott zulip zerver lib actions py line in check message raise jsonableerror e messages zerver lib exceptions jsonableerror you can t send private messages outside of your organization err error queueing internal message by welcome bot zulip com you can t send private messages outside of your organization traceback most recent call last file home tabbott zulip zerver lib actions py line in check message forwarder user profile sender file home tabbott zulip zerver lib actions py line in recipient for user profiles recipient profile ids validate recipient user profiles user profiles sender file home tabbott zulip zerver lib actions py line in validate recipient user profiles raise validationerror you can t send private messages outside of your organization django core exceptions validationerror during handling of the above exception another exception occurred traceback most recent call last file home tabbott zulip zerver lib actions py line in internal prep message content realm realm file home tabbott zulip zerver lib actions py line in check message raise jsonableerror e messages zerver lib exceptions jsonableerror you can t send private messages outside of your organization err error queueing internal message by welcome bot zulip com you can t send private messages outside of your organization traceback most recent call last file home tabbott zulip zerver lib actions py line in check message forwarder user profile sender file home tabbott zulip zerver lib actions py line in recipient for user profiles recipient profile ids validate recipient user profiles user profiles sender file home tabbott zulip zerver lib actions py line in validate recipient user profiles raise validationerror you can t send private messages outside of your organization django core exceptions validationerror during handling of the above exception another exception occurred traceback most recent call last file home tabbott zulip zerver lib actions py line in internal prep message content realm realm file home tabbott zulip zerver lib actions py line in check message raise jsonableerror e messages zerver lib exceptions jsonableerror you can t send private messages outside of your organization err error queueing internal message by welcome bot zulip com you can t send private messages outside of your organization traceback most recent call last file home tabbott zulip zerver lib actions py line in check message forwarder user profile sender file home tabbott zulip zerver lib actions py line in recipient for user profiles recipient profile ids validate recipient user profiles user profiles sender file home tabbott zulip zerver lib actions py line in validate recipient user profiles raise validationerror you can t send private messages outside of your organization django core exceptions validationerror | 1 |
59,287 | 11,956,304,409 | IssuesEvent | 2020-04-04 09:43:50 | home-assistant/brands | https://api.github.com/repos/home-assistant/brands | closed | Yi Home Cameras is missing brand images | has-codeowner |
## The problem
The Yi Home Cameras integration has missing brand images.
We recently started this Brands repository, to create a centralized storage of all brand-related images. These images are used on our website and the Home Assistant frontend.
The following images are missing and would ideally be added:
- `src/yi/logo.png`
- `src/yi/icon@2x.png`
- `src/yi/logo@2x.png`
For image specifications and requirements, please see [README.md](https://github.com/home-assistant/brands/blob/master/README.md).
## Additional information
For more information about this repository, read the [README.md](https://github.com/home-assistant/brands/blob/master/README.md) file of this repository. It contains information on how this repository works, and image specification and requirements.
## Codeowner mention
Hi there, @bachya! Mind taking a look at this issue as it is with an integration (yi) you are listed as a [codeowner](https://github.com/home-assistant/core/blob/dev/homeassistant/components/yi/manifest.json) for? Thanks!
Resolving this issue is not limited to codeowners! If you want to help us out, feel free to resolve this issue! Thanks already!
| 1.0 | Yi Home Cameras is missing brand images -
## The problem
The Yi Home Cameras integration has missing brand images.
We recently started this Brands repository, to create a centralized storage of all brand-related images. These images are used on our website and the Home Assistant frontend.
The following images are missing and would ideally be added:
- `src/yi/logo.png`
- `src/yi/icon@2x.png`
- `src/yi/logo@2x.png`
For image specifications and requirements, please see [README.md](https://github.com/home-assistant/brands/blob/master/README.md).
## Additional information
For more information about this repository, read the [README.md](https://github.com/home-assistant/brands/blob/master/README.md) file of this repository. It contains information on how this repository works, and image specification and requirements.
## Codeowner mention
Hi there, @bachya! Mind taking a look at this issue as it is with an integration (yi) you are listed as a [codeowner](https://github.com/home-assistant/core/blob/dev/homeassistant/components/yi/manifest.json) for? Thanks!
Resolving this issue is not limited to codeowners! If you want to help us out, feel free to resolve this issue! Thanks already!
| non_infrastructure | yi home cameras is missing brand images the problem the yi home cameras integration has missing brand images we recently started this brands repository to create a centralized storage of all brand related images these images are used on our website and the home assistant frontend the following images are missing and would ideally be added src yi logo png src yi icon png src yi logo png for image specifications and requirements please see additional information for more information about this repository read the file of this repository it contains information on how this repository works and image specification and requirements codeowner mention hi there bachya mind taking a look at this issue as it is with an integration yi you are listed as a for thanks resolving this issue is not limited to codeowners if you want to help us out feel free to resolve this issue thanks already | 0 |
29,698 | 24,179,848,654 | IssuesEvent | 2022-09-23 07:49:46 | dart-lang/sdk | https://api.github.com/repos/dart-lang/sdk | reopened | [infra] Update ninja to v1.9.0 on Windows | area-infrastructure | Our current ninja version on Windows is 1.8.2:
```
google\dacoharkes@DACOHARKES-WIN3 C:\src\dart-sdk\sdk>C:\src\depot_tools\ninja.exe --version
1.8.2
```
This version fails to generate a useful `compile_commands.json` for C++ analysis support on Windows due to `.rsp` files.
Ninja 1.9.0 has a flag for expanding these contents of the rsp file rather than referring to it:
* https://github.com/ninja-build/ninja/pull/1223
Could we update ninja to at least 1.9.0?
Hopefully this will make development on Windows in vscode with the clangd plugin slightly more bearable. π
cc @athomas | 1.0 | [infra] Update ninja to v1.9.0 on Windows - Our current ninja version on Windows is 1.8.2:
```
google\dacoharkes@DACOHARKES-WIN3 C:\src\dart-sdk\sdk>C:\src\depot_tools\ninja.exe --version
1.8.2
```
This version fails to generate a useful `compile_commands.json` for C++ analysis support on Windows due to `.rsp` files.
Ninja 1.9.0 has a flag for expanding these contents of the rsp file rather than referring to it:
* https://github.com/ninja-build/ninja/pull/1223
Could we update ninja to at least 1.9.0?
Hopefully this will make development on Windows in vscode with the clangd plugin slightly more bearable. π
cc @athomas | infrastructure | update ninja to on windows our current ninja version on windows is google dacoharkes dacoharkes c src dart sdk sdk c src depot tools ninja exe version this version fails to generate a useful compile commands json for c analysis support on windows due to rsp files ninja has a flag for expanding these contents of the rsp file rather than referring to it could we update ninja to at least hopefully this will make development on windows in vscode with the clangd plugin slightly more bearable π cc athomas | 1 |
18,593 | 13,055,970,144 | IssuesEvent | 2020-07-30 03:16:09 | icecube-trac/tix2 | https://api.github.com/repos/icecube-trac/tix2 | opened | [buildbots] compiler updates (Trac #1819) | Incomplete Migration Migrated from Trac infrastructure task | Migrated from https://code.icecube.wisc.edu/ticket/1819
```json
{
"status": "closed",
"changetime": "2019-02-13T14:12:38",
"description": "On the last software call it was decided that we'd open up production projects for C++11 by September 5th. We'll need the compilers updated to support C++11 on any bots with old compilers. We know the SL6 bot needs this upgrade at the very least.",
"reporter": "olivas",
"cc": "",
"resolution": "fixed",
"_ts": "1550067158057333",
"component": "infrastructure",
"summary": "[buildbots] compiler updates",
"priority": "normal",
"keywords": "",
"time": "2016-08-14T17:22:05",
"milestone": "",
"owner": "nega",
"type": "task"
}
```
| 1.0 | [buildbots] compiler updates (Trac #1819) - Migrated from https://code.icecube.wisc.edu/ticket/1819
```json
{
"status": "closed",
"changetime": "2019-02-13T14:12:38",
"description": "On the last software call it was decided that we'd open up production projects for C++11 by September 5th. We'll need the compilers updated to support C++11 on any bots with old compilers. We know the SL6 bot needs this upgrade at the very least.",
"reporter": "olivas",
"cc": "",
"resolution": "fixed",
"_ts": "1550067158057333",
"component": "infrastructure",
"summary": "[buildbots] compiler updates",
"priority": "normal",
"keywords": "",
"time": "2016-08-14T17:22:05",
"milestone": "",
"owner": "nega",
"type": "task"
}
```
| infrastructure | compiler updates trac migrated from json status closed changetime description on the last software call it was decided that we d open up production projects for c by september we ll need the compilers updated to support c on any bots with old compilers we know the bot needs this upgrade at the very least reporter olivas cc resolution fixed ts component infrastructure summary compiler updates priority normal keywords time milestone owner nega type task | 1 |
13,683 | 16,441,284,863 | IssuesEvent | 2021-05-20 14:38:23 | muellners/ltcim | https://api.github.com/repos/muellners/ltcim | closed | Website For LTCIM | in process | We use a static site generator for the website , Jekyll with a custom theme. | 1.0 | Website For LTCIM - We use a static site generator for the website , Jekyll with a custom theme. | non_infrastructure | website for ltcim we use a static site generator for the website jekyll with a custom theme | 0 |
21,167 | 14,406,987,994 | IssuesEvent | 2020-12-03 21:06:06 | twisted/towncrier | https://api.github.com/repos/twisted/towncrier | closed | Get off Travis | infrastructure | My perspective is that Travis is pretty much done. I like GitHub Actions (GHA). Twisted uses GHA, albeit along with others. PR incoming.
https://www.traviscistatus.com/#month

| 1.0 | Get off Travis - My perspective is that Travis is pretty much done. I like GitHub Actions (GHA). Twisted uses GHA, albeit along with others. PR incoming.
https://www.traviscistatus.com/#month

| infrastructure | get off travis my perspective is that travis is pretty much done i like github actions gha twisted uses gha albeit along with others pr incoming | 1 |
223,369 | 17,112,198,756 | IssuesEvent | 2021-07-10 15:00:50 | govdirectory/website | https://api.github.com/repos/govdirectory/website | opened | Figure out data quality threshold | data :computer: documentation :writing_hand: question | We should define some threshold for how good the data of a country must be before we publish it on the website. This should then be clearly communicatied on https://www.wikidata.org/wiki/Wikidata:Gov_Directory#Add_your_country son that any volunteer know exactly what they need to do before a country gets added. | 1.0 | Figure out data quality threshold - We should define some threshold for how good the data of a country must be before we publish it on the website. This should then be clearly communicatied on https://www.wikidata.org/wiki/Wikidata:Gov_Directory#Add_your_country son that any volunteer know exactly what they need to do before a country gets added. | non_infrastructure | figure out data quality threshold we should define some threshold for how good the data of a country must be before we publish it on the website this should then be clearly communicatied on son that any volunteer know exactly what they need to do before a country gets added | 0 |
33,434 | 27,446,480,761 | IssuesEvent | 2023-03-02 14:36:53 | opentargets/issues | https://api.github.com/repos/opentargets/issues | closed | Use workspaces for managing different deployment states / environments | Enhancement Backend Platform Infrastructure | As a developer I'd like to simplify the way different instances of Open Targets Platform are deployed, and their states managed
## Background
At Open Targets, our infrastructure definition code has been using custom deployment environments via a profile management subsystem and configurable environments, from an abstract point of view via commands like
```
make tfactivate profile='dev-platform'
make tfbackendremote
make tfinit
make depactivate profile='dev_platform'
```
We need to simplify this underlying processes as part of the journey to a continuous deployment system, where the infrastructure changes are automatically handle by automated pipelines.
## Actions to take in the process (not all have been specified)
- Simplification of deployment context so it conforms to _terraform_ auto-detection of input variables.
- Unification of states under an operations bucket | 1.0 | Use workspaces for managing different deployment states / environments - As a developer I'd like to simplify the way different instances of Open Targets Platform are deployed, and their states managed
## Background
At Open Targets, our infrastructure definition code has been using custom deployment environments via a profile management subsystem and configurable environments, from an abstract point of view via commands like
```
make tfactivate profile='dev-platform'
make tfbackendremote
make tfinit
make depactivate profile='dev_platform'
```
We need to simplify this underlying processes as part of the journey to a continuous deployment system, where the infrastructure changes are automatically handle by automated pipelines.
## Actions to take in the process (not all have been specified)
- Simplification of deployment context so it conforms to _terraform_ auto-detection of input variables.
- Unification of states under an operations bucket | infrastructure | use workspaces for managing different deployment states environments as a developer i d like to simplify the way different instances of open targets platform are deployed and their states managed background at open targets our infrastructure definition code has been using custom deployment environments via a profile management subsystem and configurable environments from an abstract point of view via commands like make tfactivate profile dev platform make tfbackendremote make tfinit make depactivate profile dev platform we need to simplify this underlying processes as part of the journey to a continuous deployment system where the infrastructure changes are automatically handle by automated pipelines actions to take in the process not all have been specified simplification of deployment context so it conforms to terraform auto detection of input variables unification of states under an operations bucket | 1 |
18,793 | 13,106,245,822 | IssuesEvent | 2020-08-04 13:34:09 | covidgraph/documentation | https://api.github.com/repos/covidgraph/documentation | opened | New Virtual Machine for NLP pipeline | Status: Suggested Tag: Infrastructure Type: Feature | Create a new virutal machine for a new NLP pipeline capable of processing additional text in the graph.
**Minimum Specs**
32Gb RAM
100Gb HDD
GPU | 1.0 | New Virtual Machine for NLP pipeline - Create a new virutal machine for a new NLP pipeline capable of processing additional text in the graph.
**Minimum Specs**
32Gb RAM
100Gb HDD
GPU | infrastructure | new virtual machine for nlp pipeline create a new virutal machine for a new nlp pipeline capable of processing additional text in the graph minimum specs ram hdd gpu | 1 |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.