Unnamed: 0 int64 0 832k | id float64 2.49B 32.1B | type stringclasses 1 value | created_at stringlengths 19 19 | repo stringlengths 7 112 | repo_url stringlengths 36 141 | action stringclasses 3 values | title stringlengths 2 665 | labels stringlengths 4 554 | body stringlengths 3 235k | index stringclasses 6 values | text_combine stringlengths 96 235k | label stringclasses 2 values | text stringlengths 96 196k | binary_label int64 0 1 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
18,787 | 13,103,727,978 | IssuesEvent | 2020-08-04 09:03:27 | NUTFes/group-manager-2 | https://api.github.com/repos/NUTFes/group-manager-2 | closed | Create an administrator app | backend frontend infrastructure | - group-manager-2
- api
- view
- admin_api (new)
- admin_view (new)
- docs
- docker-compose.yml
| 1.0 | Create an administrator app - - group-manager-2
- api
- view
- admin_api (new)
- admin_view (new)
- docs
- docker-compose.yml
| infrastructure | create an administrator app group manager api view admin api new admin view new docs docker compose yml | 1 |
266 | 2,602,005,035 | IssuesEvent | 2015-02-24 03:06:58 | jquery/esprima | https://api.github.com/repos/jquery/esprima | closed | Unit Tests: break into much smaller files | easy enhancement infrastructure | We should take a page from espree here and break out the test files into much smaller pieces organized by the main language feature they are testing.
Right now it is unwieldily working with the giant file, and intimidating to newer contributors, like myself. | 1.0 | Unit Tests: break into much smaller files - We should take a page from espree here and break out the test files into much smaller pieces organized by the main language feature they are testing.
Right now it is unwieldily working with the giant file, and intimidating to newer contributors, like myself. | infrastructure | unit tests break into much smaller files we should take a page from espree here and break out the test files into much smaller pieces organized by the main language feature they are testing right now it is unwieldily working with the giant file and intimidating to newer contributors like myself | 1 |
27,583 | 21,941,905,938 | IssuesEvent | 2022-05-23 19:03:28 | cloud-native-toolkit/software-everywhere | https://api.github.com/repos/cloud-native-toolkit/software-everywhere | closed | Request access: {Sarang/Dinesh/Dileep} New ROKS Cluster with Portworx for cp4ba(Business Automation) | category:infrastructure access_request | **Email address**
@saratrip @dineshchandrapandey @DileepPaul ,brahm.singh@ibm.com , mverma17@in.ibm.com
**Cloud environment**
IBM Cloud account,
**Purpose**
Request to create new ROKS Cluster with Portworx for cp4ba(Business Automation). We are using OCP48 gitops cluster but are unable to do test on the cluster because the common service operator is giving issues/not installed. We are unable to proceed and it is major blocker for us.
**Duration of access**
3 Months

| 1.0 | Request access: {Sarang/Dinesh/Dileep} New ROKS Cluster with Portworx for cp4ba(Business Automation) - **Email address**
@saratrip @dineshchandrapandey @DileepPaul ,brahm.singh@ibm.com , mverma17@in.ibm.com
**Cloud environment**
IBM Cloud account,
**Purpose**
Request to create new ROKS Cluster with Portworx for cp4ba(Business Automation). We are using OCP48 gitops cluster but are unable to do test on the cluster because the common service operator is giving issues/not installed. We are unable to proceed and it is major blocker for us.
**Duration of access**
3 Months

| infrastructure | request access sarang dinesh dileep new roks cluster with portworx for business automation email address saratrip dineshchandrapandey dileeppaul brahm singh ibm com in ibm com cloud environment ibm cloud account purpose request to create new roks cluster with portworx for business automation we are using gitops cluster but are unable to do test on the cluster because the common service operator is giving issues not installed we are unable to proceed and it is major blocker for us duration of access months | 1 |
30,793 | 25,083,464,283 | IssuesEvent | 2022-11-07 21:26:49 | ProjectPythiaCookbooks/cookbook-template | https://api.github.com/repos/ProjectPythiaCookbooks/cookbook-template | closed | Build via binderbot | infrastructure | Following successful experiments in https://github.com/ProjectPythiaCookbooks/cmip6-cookbook/pull/27 and https://github.com/ProjectPythia/pythia-foundations/pull/322, it's time to build the binderbot functionality into the template and (once it's working) push those changes out to all cookbook repos.
My recent refactor of the infrastructure makes this much easier. Most (all?) the changes will actually occur in the reusable workflows over at https://github.com/ProjectPythiaCookbooks/cookbook-actions.
What I have in mind is a python script that parses `_config.yml` and `_toc.yml` to get things needed for the call to binderbot:
- the link to the binder (stored in the field `binderhub_url:` in `_config.yml`)
- the list of all notebook files to be executed (from `_toc.yml`)
That would all happen within the reusable https://github.com/ProjectPythiaCookbooks/cookbook-actions/blob/main/.github/workflows/build-book.yaml
One question is whether this should be automatic (i.e. every Cookbook executes this way), or whether there should be a switch for the individual Cookbook to choose whether to execute via binderbot or on GitHub Actions. | 1.0 | Build via binderbot - Following successful experiments in https://github.com/ProjectPythiaCookbooks/cmip6-cookbook/pull/27 and https://github.com/ProjectPythia/pythia-foundations/pull/322, it's time to build the binderbot functionality into the template and (once it's working) push those changes out to all cookbook repos.
My recent refactor of the infrastructure makes this much easier. Most (all?) the changes will actually occur in the reusable workflows over at https://github.com/ProjectPythiaCookbooks/cookbook-actions.
What I have in mind is a python script that parses `_config.yml` and `_toc.yml` to get things needed for the call to binderbot:
- the link to the binder (stored in the field `binderhub_url:` in `_config.yml`)
- the list of all notebook files to be executed (from `_toc.yml`)
That would all happen within the reusable https://github.com/ProjectPythiaCookbooks/cookbook-actions/blob/main/.github/workflows/build-book.yaml
One question is whether this should be automatic (i.e. every Cookbook executes this way), or whether there should be a switch for the individual Cookbook to choose whether to execute via binderbot or on GitHub Actions. | infrastructure | build via binderbot following successful experiments in and it s time to build the binderbot functionality into the template and once it s working push those changes out to all cookbook repos my recent refactor of the infrastructure makes this much easier most all the changes will actually occur in the reusable workflows over at what i have in mind is a python script that parses config yml and toc yml to get things needed for the call to binderbot the link to the binder stored in the field binderhub url in config yml the list of all notebook files to be executed from toc yml that would all happen within the reusable one question is whether this should be automatic i e every cookbook executes this way or whether there should be a switch for the individual cookbook to choose whether to execute via binderbot or on github actions | 1 |
7,550 | 6,989,427,913 | IssuesEvent | 2017-12-14 16:08:37 | seqan/lambda | https://api.github.com/repos/seqan/lambda | closed | refactor codebase | infrastructure | - [ ] make one binary with different "actions" (mkindex, search)
- [ ] resort the source code | 1.0 | refactor codebase - - [ ] make one binary with different "actions" (mkindex, search)
- [ ] resort the source code | infrastructure | refactor codebase make one binary with different actions mkindex search resort the source code | 1 |
7,545 | 6,988,084,042 | IssuesEvent | 2017-12-14 11:32:51 | h-da/geli | https://api.github.com/repos/h-da/geli | opened | Introduce API-Docu, build automatically | api enhancement infrastructure | # User Story
### As a:
Developer
### I want:
to have an API-Documentation
### so that:
I can see all URL-Paths and their functions.
## Acceptance criteria:
- [ ] All API-Functions are commented
- [ ] Build the Documentation on build time
- [ ] The documentation should be generated only on tagged builds
- [ ] Publish the build (as GH Pages)
- [ ] The version of the docu should be switched
## Additional info:
https://pages.github.com/
------
_Please tag this issue if you are sure to which tag(s) it belongs._
| 1.0 | Introduce API-Docu, build automatically - # User Story
### As a:
Developer
### I want:
to have an API-Documentation
### so that:
I can see all URL-Paths and their functions.
## Acceptance criteria:
- [ ] All API-Functions are commented
- [ ] Build the Documentation on build time
- [ ] The documentation should be generated only on tagged builds
- [ ] Publish the build (as GH Pages)
- [ ] The version of the docu should be switched
## Additional info:
https://pages.github.com/
------
_Please tag this issue if you are sure to which tag(s) it belongs._
| infrastructure | introduce api docu build automatically user story as a developer i want to have an api documentation so that i can see all url paths and their functions acceptance criteria all api functions are commented build the documentation on build time the documentation should be generated only on tagged builds publish the build as gh pages the version of the docu should be switched additional info please tag this issue if you are sure to which tag s it belongs | 1 |
12,790 | 9,956,684,231 | IssuesEvent | 2019-07-05 14:33:48 | elastic/beats | https://api.github.com/repos/elastic/beats | closed | generator.py fails silently if you specify an invalid --type | :Generator :infrastructure | For confirmed bugs, please report:
- Version: master
- Operating System: any
- Steps to Reproduce:
```
python ${GOPATH}/src/github.com/elastic/beats/script/generate.py --type=fakebeat --project_name=examplebeat
```
Since `fakebeat` doesn't exist, nothing happens. The script exits zero, no message is printed.
This is presumably because the python script is iterating over a directory that doesn't exist:
```
for root, dirs, files in os.walk(template_path + '/' + beat_type + '/{beat}'):
```
According to the pydoc, `os.walk` will ignore errors by default. We should either specify some error handling for `os.walk`, or check the validity of the type before hand.
| 1.0 | generator.py fails silently if you specify an invalid --type - For confirmed bugs, please report:
- Version: master
- Operating System: any
- Steps to Reproduce:
```
python ${GOPATH}/src/github.com/elastic/beats/script/generate.py --type=fakebeat --project_name=examplebeat
```
Since `fakebeat` doesn't exist, nothing happens. The script exits zero, no message is printed.
This is presumably because the python script is iterating over a directory that doesn't exist:
```
for root, dirs, files in os.walk(template_path + '/' + beat_type + '/{beat}'):
```
According to the pydoc, `os.walk` will ignore errors by default. We should either specify some error handling for `os.walk`, or check the validity of the type before hand.
| infrastructure | generator py fails silently if you specify an invalid type for confirmed bugs please report version master operating system any steps to reproduce python gopath src github com elastic beats script generate py type fakebeat project name examplebeat since fakebeat doesn t exist nothing happens the script exits zero no message is printed this is presumably because the python script is iterating over a directory that doesn t exist for root dirs files in os walk template path beat type beat according to the pydoc os walk will ignore errors by default we should either specify some error handling for os walk or check the validity of the type before hand | 1 |
12,719 | 9,935,404,424 | IssuesEvent | 2019-07-02 16:27:55 | raiden-network/raiden-services | https://api.github.com/repos/raiden-network/raiden-services | closed | Deployment guide | Infrastructure :office: | We need better docs for *Configuration and instructions for running Raiden Services*
A good example is the transport repo: https://github.com/raiden-network/raiden-transport/ | 1.0 | Deployment guide - We need better docs for *Configuration and instructions for running Raiden Services*
A good example is the transport repo: https://github.com/raiden-network/raiden-transport/ | infrastructure | deployment guide we need better docs for configuration and instructions for running raiden services a good example is the transport repo | 1 |
25,092 | 18,105,043,650 | IssuesEvent | 2021-09-22 18:15:07 | dotnet/fsharp | https://api.github.com/repos/dotnet/fsharp | closed | Remove IVT in our language service and consume it as other editors do | Area-Infrastructure | This is a tracking issue.
We are currently IVT our own language service. This makes things awkward, because we can consume our language service differently from other editors. It's also a real pain to deal with. We need to remove IVTs and consume our language service just like any other editor. | 1.0 | Remove IVT in our language service and consume it as other editors do - This is a tracking issue.
We are currently IVT our own language service. This makes things awkward, because we can consume our language service differently from other editors. It's also a real pain to deal with. We need to remove IVTs and consume our language service just like any other editor. | infrastructure | remove ivt in our language service and consume it as other editors do this is a tracking issue we are currently ivt our own language service this makes things awkward because we can consume our language service differently from other editors it s also a real pain to deal with we need to remove ivts and consume our language service just like any other editor | 1 |
9,724 | 3,314,774,350 | IssuesEvent | 2015-11-06 08:04:43 | gbv/paia | https://api.github.com/repos/gbv/paia | closed | URL | documentation question | After implementing DAIA in Bibdia for SLB Potsdam/KOBV I thought I'd have a look at the next interface PAIA. Why on earth does the specfication insist on explicit URLs? And why is the patron identifier also in the URL? First the explicit URLs mean I'd have to map /core internally into a URL to Bibdia. The site CANNOT use /core for anything else but PAIA. Secondly why is the patron identifier plain for all to see in the URL? Once again I'd have to rewrite this into a more conventional URL which, for example, DAIA uses. In DAIA the URL is not specified, just the parameters to it. OAI follows a similar procedure, so does NCIP via HTTPS. And why the patron ID is part of the URL instead of just simply a parameter (and then in a POST body) is beyond me, since the patron is actually a parameter and not a resource identifier. The last part of the URL is similarly a verb. | 1.0 | URL - After implementing DAIA in Bibdia for SLB Potsdam/KOBV I thought I'd have a look at the next interface PAIA. Why on earth does the specfication insist on explicit URLs? And why is the patron identifier also in the URL? First the explicit URLs mean I'd have to map /core internally into a URL to Bibdia. The site CANNOT use /core for anything else but PAIA. Secondly why is the patron identifier plain for all to see in the URL? Once again I'd have to rewrite this into a more conventional URL which, for example, DAIA uses. In DAIA the URL is not specified, just the parameters to it. OAI follows a similar procedure, so does NCIP via HTTPS. And why the patron ID is part of the URL instead of just simply a parameter (and then in a POST body) is beyond me, since the patron is actually a parameter and not a resource identifier. The last part of the URL is similarly a verb. 
| non_infrastructure | url after implementing daia in bibdia for slb potsdam kobv i thought i d have a look at the next interface paia why on earth does the specfication insist on explicit urls and why is the patron identifier also in the url first the explicit urls mean i d have to map core internally into a url to bibdia the site cannot use core for anything else but paia secondly why is the patron identifier plain for all to see in the url once again i d have to rewrite this into a more conventional url which for example daia uses in daia the url is not specified just the parameters to it oai follows a similar procedure so does ncip via https and why the patron id is part of the url instead of just simply a parameter and then in a post body is beyond me since the patron is actually a parameter and not a resource identifier the last part of the url is similarly a verb | 0 |
273,877 | 29,831,109,382 | IssuesEvent | 2023-06-18 09:33:23 | RG4421/ampere-centos-kernel | https://api.github.com/repos/RG4421/ampere-centos-kernel | closed | CVE-2019-19927 (Medium) detected in linuxv5.2 - autoclosed | Mend: dependency security vulnerability | ## CVE-2019-19927 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linuxv5.2</b></p></summary>
<p>
<p>Linux kernel source tree</p>
<p>Library home page: <a href=https://github.com/torvalds/linux.git>https://github.com/torvalds/linux.git</a></p>
<p>Found in base branch: <b>amp-centos-8.0-kernel</b></p></p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (2)</summary>
<p></p>
<p>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/drivers/gpu/drm/ttm/ttm_page_alloc.c</b>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/drivers/gpu/drm/ttm/ttm_page_alloc.c</b>
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png?' width=19 height=20> Vulnerability Details</summary>
<p>
In the Linux kernel 5.0.0-rc7 (as distributed in ubuntu/linux.git on kernel.ubuntu.com), mounting a crafted f2fs filesystem image and performing some operations can lead to slab-out-of-bounds read access in ttm_put_pages in drivers/gpu/drm/ttm/ttm_page_alloc.c. This is related to the vmwgfx or ttm module.
<p>Publish Date: 2019-12-31
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2019-19927>CVE-2019-19927</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.0</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: High
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Release Date: 2020-05-14</p>
<p>Fix Resolution: v5.1-rc6</p>
</p>
</details>
<p></p>
| True | CVE-2019-19927 (Medium) detected in linuxv5.2 - autoclosed - ## CVE-2019-19927 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linuxv5.2</b></p></summary>
<p>
<p>Linux kernel source tree</p>
<p>Library home page: <a href=https://github.com/torvalds/linux.git>https://github.com/torvalds/linux.git</a></p>
<p>Found in base branch: <b>amp-centos-8.0-kernel</b></p></p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (2)</summary>
<p></p>
<p>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/drivers/gpu/drm/ttm/ttm_page_alloc.c</b>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/drivers/gpu/drm/ttm/ttm_page_alloc.c</b>
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png?' width=19 height=20> Vulnerability Details</summary>
<p>
In the Linux kernel 5.0.0-rc7 (as distributed in ubuntu/linux.git on kernel.ubuntu.com), mounting a crafted f2fs filesystem image and performing some operations can lead to slab-out-of-bounds read access in ttm_put_pages in drivers/gpu/drm/ttm/ttm_page_alloc.c. This is related to the vmwgfx or ttm module.
<p>Publish Date: 2019-12-31
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2019-19927>CVE-2019-19927</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.0</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: High
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Release Date: 2020-05-14</p>
<p>Fix Resolution: v5.1-rc6</p>
</p>
</details>
<p></p>
| non_infrastructure | cve medium detected in autoclosed cve medium severity vulnerability vulnerable library linux kernel source tree library home page a href found in base branch amp centos kernel vulnerable source files drivers gpu drm ttm ttm page alloc c drivers gpu drm ttm ttm page alloc c vulnerability details in the linux kernel as distributed in ubuntu linux git on kernel ubuntu com mounting a crafted filesystem image and performing some operations can lead to slab out of bounds read access in ttm put pages in drivers gpu drm ttm ttm page alloc c this is related to the vmwgfx or ttm module publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity low privileges required high user interaction none scope unchanged impact metrics confidentiality impact high integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version release date fix resolution | 0 |
23,893 | 16,677,437,563 | IssuesEvent | 2021-06-07 18:05:06 | onnx/onnx | https://api.github.com/repos/onnx/onnx | closed | Time to upgrade protobuf | infrastructure | I got the error message below about a deprecated Python feature when running ONNX tests locally. It seems that it's time to upgrade ONNX to use newer protobuf version. Otherwise, our code might not work with Python 3.8.
```
c:\programdata\anaconda3\lib\site-packages\google\protobuf\descriptor.py:47
c:\programdata\anaconda3\lib\site-packages\google\protobuf\descriptor.py:47:
DeprecationWarning: Using or importing the ABCs from 'collections' instead of from 'collections.abc' is deprecated, and in 3.8 it will stop working
from google.protobuf.pyext import _message
```
Here is how it can be reproduced.
1. pip install -e onnx
2. cd onnx
3. pytest
Not using recent versions of protobuf also causes [problem](https://github.com/conda-forge/onnx-feedstock/issues/29) and [problem](https://github.com/conda-forge/onnx-feedstock/issues/14) to other packages. | 1.0 | Time to upgrade protobuf - I got the error message below about a deprecated Python feature when running ONNX tests locally. It seems that it's time to upgrade ONNX to use newer protobuf version. Otherwise, our code might not work with Python 3.8.
```
c:\programdata\anaconda3\lib\site-packages\google\protobuf\descriptor.py:47
c:\programdata\anaconda3\lib\site-packages\google\protobuf\descriptor.py:47:
DeprecationWarning: Using or importing the ABCs from 'collections' instead of from 'collections.abc' is deprecated, and in 3.8 it will stop working
from google.protobuf.pyext import _message
```
Here is how it can be reproduced.
1. pip install -e onnx
2. cd onnx
3. pytest
Not using recent versions of protobuf also causes [problem](https://github.com/conda-forge/onnx-feedstock/issues/29) and [problem](https://github.com/conda-forge/onnx-feedstock/issues/14) to other packages. | infrastructure | time to upgrade protobuf i got the error message below about a deprecated python feature when running onnx tests locally it seems that it s time to upgrade onnx to use newer protobuf version otherwise our code might not work with python c programdata lib site packages google protobuf descriptor py c programdata lib site packages google protobuf descriptor py deprecationwarning using or importing the abcs from collections instead of from collections abc is deprecated and in it will stop working from google protobuf pyext import message here is how it can be reproduced pip install e onnx cd onnx pytest not using recent versions of protobuf also causes and to other packages | 1 |
12,680 | 9,914,035,429 | IssuesEvent | 2019-06-28 13:27:39 | pulibrary/drds_sprints | https://api.github.com/repos/pulibrary/drds_sprints | closed | Cantaloupe | [zube]: Planned infrastructure planned | Setup Cantaloupe (either on the new image server, or alongside Loris) and configure Figgy to use it. | 1.0 | Cantaloupe - Setup Cantaloupe (either on the new image server, or alongside Loris) and configure Figgy to use it. | infrastructure | cantaloupe setup cantaloupe either on the new image server or alongside loris and configure figgy to use it | 1 |
144 | 2,537,005,822 | IssuesEvent | 2015-01-26 17:40:06 | dinyar/uGMTfirmware | https://api.github.com/repos/dinyar/uGMTfirmware | opened | Allow one write op via IPbus to fill more than one value in memories | infrastructure | Most values in the internal memories are significantly smaller than 32 bit while IPbus uses 32 bit words. To speed up the configuration step we can fit several of these smaller values into one IPbus write word. | 1.0 | Allow one write op via IPbus to fill more than one value in memories - Most values in the internal memories are significantly smaller than 32 bit while IPbus uses 32 bit words. To speed up the configuration step we can fit several of these smaller values into one IPbus write word. | infrastructure | allow one write op via ipbus to fill more than one value in memories most values in the internal memories are significantly smaller than bit while ipbus uses bit words to speed up the configuration step we can fit several of these smaller values into one ipbus write word | 1 |
187,375 | 6,756,598,817 | IssuesEvent | 2017-10-24 07:46:34 | threefoldfoundation/app_backend | https://api.github.com/repos/threefoldfoundation/app_backend | closed | Wallet on Dashboard | priority_minor state_verification type_feature | - see list of users + search
- detail of user for transaction history + balance of tokens
payment admins should be able to grant tokens in detail page of a user | 1.0 | Wallet on Dashboard - - see list of users + search
- detail of user for transaction history + balance of tokens
payment admins should be able to grant tokens in detail page of a user | non_infrastructure | wallet on dashboard see list of users search detail of user for transaction history balance of tokens payment admins should be able to grant tokens in detail page of a user | 0 |
24,778 | 17,773,022,285 | IssuesEvent | 2021-08-30 15:40:53 | google/iree | https://api.github.com/repos/google/iree | closed | Providing IREE Snapshot "releases" as a branch | infrastructure ๐ ๏ธ | Chatting with @hcindyl about integrating IREE into a project that uses [git-repo](https://gerrit.googlesource.com/git-repo). They're interested in having a less frequently updated version of IREE than ToT. Not necessarily stable, but updated less frequently. We discussed using the [snapshot release](https://github.com/google/iree/blob/main/.github/workflows/schedule_snapshot_release.yml), which creates a release twice daily. The wrinkle is that git-repo apparently has spotty support for fetching from tags, so they'd like a branch instead. I think that's something we could do reasonably easily. It would also be nice if this didn't just snapshot latest main, but instead took the last commit where all checks passed. I can look into how we would query this with the GitHub API.
@stellaraccident, since you created this snapshot. Can you comment on its stability and whether I could piggy back off this?
| 1.0 | Providing IREE Snapshot "releases" as a branch - Chatting with @hcindyl about integrating IREE into a project that uses [git-repo](https://gerrit.googlesource.com/git-repo). They're interested in having a less frequently updated version of IREE than ToT. Not necessarily stable, but updated less frequently. We discussed using the [snapshot release](https://github.com/google/iree/blob/main/.github/workflows/schedule_snapshot_release.yml), which creates a release twice daily. The wrinkle is that git-repo apparently has spotty support for fetching from tags, so they'd like a branch instead. I think that's something we could do reasonably easily. It would also be nice if this didn't just snapshot latest main, but instead took the last commit where all checks passed. I can look into how we would query this with the GitHub API.
@stellaraccident, since you created this snapshot. Can you comment on its stability and whether I could piggy back off this?
| infrastructure | providing iree snapshot releases as a branch chatting with hcindyl about integrating iree into a project that uses they re interested in having a less frequently updated version of iree than tot not necessarily stable but updated less frequently we discussed using the which creates a release twice daily the wrinkle is that git repo apparently has spotty support for fetching from tags so they d like a branch instead i think that s something we could do reasonably easily it would also be nice if this didn t just snapshot latest main but instead took the last commit where all checks passed i can look into how we would query this with the github api stellaraccident since you created this snapshot can you comment on its stability and whether i could piggy back off this | 1 |
29,971 | 24,444,029,474 | IssuesEvent | 2022-10-06 16:25:59 | PostHog/posthog | https://api.github.com/repos/PostHog/posthog | closed | Couldn't find a package.json file in "/code" in web container on hobby 1.40.0 | bug infrastructure | ## Bug description
On 1.40.0 the web package is not initialized so the server never starts. rolling back to 1.39.1 solves this
## How to reproduce
1. upgrade to 1.40.0 and start hobby with docker compose
2. check web container logs
3.
## Environment
- [ ] PostHog Cloud
- [x] self-hosted PostHog, version/commit: please provide
- hobby 1.40.0
## Additional context
(internal workspace: https://posthog.slack.com/archives/C03KZUU124U/p1664371893124139)
#### *Thank you* for your bug report – we love squashing them!
| 1.0 | Couldn't find a package.json file in "/code" in web container on hobby 1.40.0 - ## Bug description
On 1.40.0 the web package is not initialized so the server never starts. rolling back to 1.39.1 solves this
## How to reproduce
1. upgrade to 1.40.0 and start hobby with docker compose
2. check web container logs
3.
## Environment
- [ ] PostHog Cloud
- [x] self-hosted PostHog, version/commit: please provide
- hobby 1.40.0
## Additional context
(internal workspace: https://posthog.slack.com/archives/C03KZUU124U/p1664371893124139)
#### *Thank you* for your bug report โ we love squashing them!
| infrastructure | couldn t find a package json file in code in web container on hobby bug description on the web package is not initialized so the server never starts rolling back to solves this how to reproduce upgrade to and start hobby with docker compose check web container logs environment posthog cloud self hosted posthog version commit please provide hobby additional context internal workspace thank you for your bug report โ we love squashing them | 1 |
32,807 | 27,006,283,106 | IssuesEvent | 2023-02-10 11:57:25 | arduino/arduino-create-agent | https://api.github.com/repos/arduino/arduino-create-agent | closed | The CI should upload a json with version and checksum to S3, for the new update logic | type: enhancement os: macos topic: infrastructure | Similar to https://github.com/arduino/arduino-create-agent/issues/736, the json file has to be placed in https://downloads.arduino.cc/CreateAgent/Stable/darwin-arm64.json and should contain:
```
{
"Version":"1.2.8",
"Sha256": "3ahZ78JAs3cNfK60jofS/PsWRQiJvV1sZUchGvCqyLY="
}
``` | 1.0 | The CI should upload a json with version and checksum to S3, for the new update logic - Similar to https://github.com/arduino/arduino-create-agent/issues/736, the json file has to be placed in https://downloads.arduino.cc/CreateAgent/Stable/darwin-arm64.json and should contain:
```
{
"Version":"1.2.8",
"Sha256": "3ahZ78JAs3cNfK60jofS/PsWRQiJvV1sZUchGvCqyLY="
}
``` | infrastructure | the ci should upload a json with version and checksum to for the new update logic similar to the json file has to be placed in and should contain version | 1 |
28,623 | 23,395,929,268 | IssuesEvent | 2022-08-11 23:32:23 | grpc/grpc.io | https://api.github.com/repos/grpc/grpc.io | opened | Netlify: select a new build image | p1-high p2-medium infrastructure e0-minutes | As reported via deploy logs:
```nocode
---------------------------------------------------------------------
DEPRECATION NOTICE: Builds using the Xenial build image will fail after November 15th, 2022.
The build image for this site uses Ubuntu 16.04 Xenial Xerus, which is no longer supported.
All Netlify builds using the Xenial build image will begin failing in the week of November 15th, 2022.
To avoid service disruption, please select a newer build image at the following link:
https://app.netlify.com/sites/grpc-io/settings/deploys#build-image-selection
For more details, visit the build image migration guide:
https://answers.netlify.com/t/please-read-end-of-support-for-xenial-build-image-everything-you-need-to-know/68239
---------------------------------------------------------------------
| 1.0 | Netlify: select a new build image - As reported via deploy logs:
```nocode
---------------------------------------------------------------------
DEPRECATION NOTICE: Builds using the Xenial build image will fail after November 15th, 2022.
The build image for this site uses Ubuntu 16.04 Xenial Xerus, which is no longer supported.
All Netlify builds using the Xenial build image will begin failing in the week of November 15th, 2022.
To avoid service disruption, please select a newer build image at the following link:
https://app.netlify.com/sites/grpc-io/settings/deploys#build-image-selection
For more details, visit the build image migration guide:
https://answers.netlify.com/t/please-read-end-of-support-for-xenial-build-image-everything-you-need-to-know/68239
---------------------------------------------------------------------
| infrastructure | netlify select a new build image as reported via deploy logs nocode deprecation notice builds using the xenial build image will fail after november the build image for this site uses ubuntu xenial xerus which is no longer supported all netlify builds using the xenial build image will begin failing in the week of november to avoid service disruption please select a newer build image at the following link for more details visit the build image migration guide | 1 |
3,749 | 4,540,579,230 | IssuesEvent | 2016-09-09 15:03:29 | jquery/esprima | https://api.github.com/repos/jquery/esprima | opened | Drop support for io.js, Node.js v0.12 and v5 | infrastructure | This is to continue what has been started earlier (#1528).
* io.js: merged back to Node.js project long time ago
* Node.js v0.12: active LTS ended on 2016-04-01, maintenace will end on 2016-12-31
* Node.js v5: https://nodejs.org/en/blog/community/v5-to-v7/
Reference: https://github.com/nodejs/LTS | 1.0 | Drop support for io.js, Node.js v0.12 and v5 - This is to continue what has been started earlier (#1528).
* io.js: merged back to Node.js project long time ago
* Node.js v0.12: active LTS ended on 2016-04-01, maintenace will end on 2016-12-31
* Node.js v5: https://nodejs.org/en/blog/community/v5-to-v7/
Reference: https://github.com/nodejs/LTS | infrastructure | drop support for io js node js and this is to continue what has been started earlier io js merged back to node js project long time ago node js active lts ended on maintenace will end on node js reference | 1 |
146,655 | 23,099,754,473 | IssuesEvent | 2022-07-27 00:33:39 | Australian-Imaging-Service/pipelines | https://api.github.com/repos/Australian-Imaging-Service/pipelines | opened | [STORY] Create Pydra Task interface for dwi2response MRtrix3 command | pipelines-stream story analysis-design shallow 3pt ready | ### Description
As a pipeline developer, I would like to be able to use a ready built Pydra task interface for the `dwi2response` MRtrix3 command, so that I can call that function conveniently from Arcana.
### Acceptance Criteria
- [ ] dwi2response can be successfully called from Pydra
- [ ] all options can be set via the input interface
- [ ] all outputs can be retrieved from output interface
| 1.0 | [STORY] Create Pydra Task interface for dwi2response MRtrix3 command - ### Description
As a pipeline developer, I would like to be able to use a ready built Pydra task interface for the `dwi2response` MRtrix3 command, so that I can call that function conveniently from Arcana.
### Acceptance Criteria
- [ ] dwi2response can be successfully called from Pydra
- [ ] all options can be set via the input interface
- [ ] all outputs can be retrieved from output interface
| non_infrastructure | create pydra task interface for command description as a pipeline developer i would like to be able to use a ready built pydra task interface for the command so that i can call that function conveniently from arcana acceptance criteria can be successfully called from pydra all options can be set via the input interface all outputs can be retrieved from output interface | 0 |
81,113 | 7,768,116,691 | IssuesEvent | 2018-06-03 14:38:26 | nodejs/node | https://api.github.com/repos/nodejs/node | closed | investigate flaky parallel/test-crypto-dh-leak | CI / flaky test crypto | <!--
Thank you for reporting an issue.
This issue tracker is for bugs and issues found within Node.js core.
If you require more general support please file an issue on our help
repo. https://github.com/nodejs/help
Please fill in as much of the template below as you're able.
Version: output of `node -v`
Platform: output of `uname -a` (UNIX), or version and 32 or 64-bit (Windows)
Subsystem: if known, please specify affected core module name
If possible, please provide code that demonstrates the problem, keeping it as
simple and free of external dependencies as you are able.
-->
* **Version**: v11.0.0-pre (master)
* **Platform**: ubuntu1604_sharedlibs_debug_x64
* **Subsystem**: test crypto
<!-- Enter your issue details below this comment. -->
https://ci.nodejs.org/job/node-test-commit-linux-containered/4846/nodes=ubuntu1604_sharedlibs_debug_x64/console
```console
03:13:14 not ok 414 parallel/test-crypto-dh-leak
03:13:14 ---
03:13:14 duration_ms: 2.717
03:13:14 severity: fail
03:13:14 exitcode: 1
03:13:14 stack: |-
03:13:14 assert.js:270
03:13:14 throw err;
03:13:14 ^
03:13:14
03:13:14 AssertionError [ERR_ASSERTION]: The expression evaluated to a falsy value:
03:13:14
03:13:14 assert(after - before < 5 << 20)
03:13:14
03:13:14 at Object.<anonymous> (/home/iojs/build/workspace/node-test-commit-linux-containered/nodes/ubuntu1604_sharedlibs_debug_x64/test/parallel/test-crypto-dh-leak.js:26:1)
03:13:14 at Module._compile (internal/modules/cjs/loader.js:702:30)
03:13:14 at Object.Module._extensions..js (internal/modules/cjs/loader.js:713:10)
03:13:14 at Module.load (internal/modules/cjs/loader.js:612:32)
03:13:14 at tryModuleLoad (internal/modules/cjs/loader.js:551:12)
03:13:14 at Function.Module._load (internal/modules/cjs/loader.js:543:3)
03:13:14 at Function.Module.runMain (internal/modules/cjs/loader.js:744:10)
03:13:14 at startup (internal/bootstrap/node.js:261:19)
03:13:14 at bootstrapNodeJSCore (internal/bootstrap/node.js:595:3)
03:13:14 ...
``` | 1.0 | investigate flaky parallel/test-crypto-dh-leak - <!--
Thank you for reporting an issue.
This issue tracker is for bugs and issues found within Node.js core.
If you require more general support please file an issue on our help
repo. https://github.com/nodejs/help
Please fill in as much of the template below as you're able.
Version: output of `node -v`
Platform: output of `uname -a` (UNIX), or version and 32 or 64-bit (Windows)
Subsystem: if known, please specify affected core module name
If possible, please provide code that demonstrates the problem, keeping it as
simple and free of external dependencies as you are able.
-->
* **Version**: v11.0.0-pre (master)
* **Platform**: ubuntu1604_sharedlibs_debug_x64
* **Subsystem**: test crypto
<!-- Enter your issue details below this comment. -->
https://ci.nodejs.org/job/node-test-commit-linux-containered/4846/nodes=ubuntu1604_sharedlibs_debug_x64/console
```console
03:13:14 not ok 414 parallel/test-crypto-dh-leak
03:13:14 ---
03:13:14 duration_ms: 2.717
03:13:14 severity: fail
03:13:14 exitcode: 1
03:13:14 stack: |-
03:13:14 assert.js:270
03:13:14 throw err;
03:13:14 ^
03:13:14
03:13:14 AssertionError [ERR_ASSERTION]: The expression evaluated to a falsy value:
03:13:14
03:13:14 assert(after - before < 5 << 20)
03:13:14
03:13:14 at Object.<anonymous> (/home/iojs/build/workspace/node-test-commit-linux-containered/nodes/ubuntu1604_sharedlibs_debug_x64/test/parallel/test-crypto-dh-leak.js:26:1)
03:13:14 at Module._compile (internal/modules/cjs/loader.js:702:30)
03:13:14 at Object.Module._extensions..js (internal/modules/cjs/loader.js:713:10)
03:13:14 at Module.load (internal/modules/cjs/loader.js:612:32)
03:13:14 at tryModuleLoad (internal/modules/cjs/loader.js:551:12)
03:13:14 at Function.Module._load (internal/modules/cjs/loader.js:543:3)
03:13:14 at Function.Module.runMain (internal/modules/cjs/loader.js:744:10)
03:13:14 at startup (internal/bootstrap/node.js:261:19)
03:13:14 at bootstrapNodeJSCore (internal/bootstrap/node.js:595:3)
03:13:14 ...
``` | non_infrastructure | investigate flaky parallel test crypto dh leak thank you for reporting an issue this issue tracker is for bugs and issues found within node js core if you require more general support please file an issue on our help repo please fill in as much of the template below as you re able version output of node v platform output of uname a unix or version and or bit windows subsystem if known please specify affected core module name if possible please provide code that demonstrates the problem keeping it as simple and free of external dependencies as you are able version pre master platform sharedlibs debug subsystem test crypto console not ok parallel test crypto dh leak duration ms severity fail exitcode stack assert js throw err assertionerror the expression evaluated to a falsy value assert after before at object home iojs build workspace node test commit linux containered nodes sharedlibs debug test parallel test crypto dh leak js at module compile internal modules cjs loader js at object module extensions js internal modules cjs loader js at module load internal modules cjs loader js at trymoduleload internal modules cjs loader js at function module load internal modules cjs loader js at function module runmain internal modules cjs loader js at startup internal bootstrap node js at bootstrapnodejscore internal bootstrap node js | 0 |
36,698 | 17,867,692,281 | IssuesEvent | 2021-09-06 11:32:59 | getsentry/sentry-java | https://api.github.com/repos/getsentry/sentry-java | closed | Add support for tracing origins | performance | Currently the [SentryOkHttpInterceptor](https://github.com/getsentry/sentry-java/blob/aca78d9ca024c75225e59c36e35622055d2f6cbf/sentry-android-okhttp/src/main/java/io/sentry/android/okhttp/SentryOkHttpInterceptor.kt#L33-L35) adds the [sentry-trace HTTP header](https://develop.sentry.dev/sdk/performance/#header-sentry-trace) to every HTTP request. Some HTTP requests are targeting backends that don't have support for the sentry-trace HTTP header. Therefore JavaScript has the concept of [tracingOrigins](https://docs.sentry.io/platforms/javascript/performance/instrumentation/automatic-instrumentation/#tracingorigins), which is a list of URLs to which the integration should append the sentry-trace HTTP header. | True | Add support for tracing origins - Currently the [SentryOkHttpInterceptor](https://github.com/getsentry/sentry-java/blob/aca78d9ca024c75225e59c36e35622055d2f6cbf/sentry-android-okhttp/src/main/java/io/sentry/android/okhttp/SentryOkHttpInterceptor.kt#L33-L35) adds the [sentry-trace HTTP header](https://develop.sentry.dev/sdk/performance/#header-sentry-trace) to every HTTP request. Some HTTP requests are targeting backends that don't have support for the sentry-trace HTTP header. Therefore JavaScript has the concept of [tracingOrigins](https://docs.sentry.io/platforms/javascript/performance/instrumentation/automatic-instrumentation/#tracingorigins), which is a list of URLs to which the integration should append the sentry-trace HTTP header. 
| non_infrastructure | add support for tracing origins currently the adds the to every http request some http requests are targeting backends that don t have support for the sentry trace http header therefore javascript has the concept of which is a list of urls to which the integration should append the sentry trace http header | 0 |
38,770 | 2,850,254,371 | IssuesEvent | 2015-05-31 12:14:52 | damonkohler/android-scripting | https://api.github.com/repos/damonkohler/android-scripting | closed | PIL module please | auto-migrated Priority-Medium Type-Enhancement | ```
What should be supported?
Python Imaging Library (PIL)
I'm looking for a way to convert .jpg into .pcx
```
Original issue reported on code.google.com by `paulherweg` on 17 Jan 2013 at 7:45 | 1.0 | PIL module please - ```
What should be supported?
Python Imaging Library (PIL)
I'm looking for a way to convert .jpg into .pcx
```
Original issue reported on code.google.com by `paulherweg` on 17 Jan 2013 at 7:45 | non_infrastructure | pil module please what should be supported python imaging library pil i m looking for a way to convert jpg into pcx original issue reported on code google com by paulherweg on jan at | 0 |
25,545 | 18,846,523,583 | IssuesEvent | 2021-11-11 15:31:07 | replab/replab | https://api.github.com/repos/replab/replab | closed | Use MATLAB profiler for code coverage | Infrastructure Priority: Low | We could remove MoCOV from the dependencies (it's quite fragile) | 1.0 | Use MATLAB profiler for code coverage - We could remove MoCOV from the dependencies (it's quite fragile) | infrastructure | use matlab profiler for code coverage we could remove mocov from the dependencies it s quite fragile | 1 |
27,635 | 22,053,136,869 | IssuesEvent | 2022-05-30 10:25:37 | dotnet/runtime | https://api.github.com/repos/dotnet/runtime | closed | Fix ConnectTest.ConnectAsync_CancellationRequestedAfterConnect_ThrowsOperationCanceledException on NodeJS | arch-wasm area-Infrastructure-mono in-pr | Running test on NodeJS causes uncaught exception
```
info: Error: read ECONNRESET
info: at TCP.onStreamRead (node:internal/stream_base_commons:220:20) {
info: errno: -4077,
info: code: 'ECONNRESET',
info: syscall: 'read'
info: }
``` | 1.0 | Fix ConnectTest.ConnectAsync_CancellationRequestedAfterConnect_ThrowsOperationCanceledException on NodeJS - Running test on NodeJS causes uncaught exception
```
info: Error: read ECONNRESET
info: at TCP.onStreamRead (node:internal/stream_base_commons:220:20) {
info: errno: -4077,
info: code: 'ECONNRESET',
info: syscall: 'read'
info: }
``` | infrastructure | fix connecttest connectasync cancellationrequestedafterconnect throwsoperationcanceledexception on nodejs running test on nodejs causes uncaught exception info error read econnreset info at tcp onstreamread node internal stream base commons info errno info code econnreset info syscall read info | 1 |
415,110 | 12,124,932,745 | IssuesEvent | 2020-04-22 14:51:01 | opendifferentialprivacy/whitenoise-core | https://api.github.com/repos/opendifferentialprivacy/whitenoise-core | opened | License, Gitter, communication for Build | Effort 1 - Small :coffee: Priority 1: High | - Placeholder for communication with MS
- [ ] Add license
- [ ] When complete, add communication channels to README
- [ ] link to the minisite | 1.0 | License, Gitter, communication for Build - - Placeholder for communication with MS
- [ ] Add license
- [ ] When complete, add communication channels to README
- [ ] link to the minisite | non_infrastructure | license gitter communication for build placeholder for communication with ms add license when complete add communication channels to readme link to the minisite | 0 |
10,727 | 8,697,288,458 | IssuesEvent | 2018-12-04 19:50:38 | elastic/beats | https://api.github.com/repos/elastic/beats | closed | Breaking field changes for 6.0 | :infrastructure Metricbeat discuss meta module | Metricbeat uses the a convention for all its fields names: https://www.elastic.co/guide/en/beats/libbeat/master/event-conventions.html This convention evolves over time and gets improvements. Some of the older fields do not follow the full convention anymore and should be updated. The main issue with updating the fields is that it can break backward compatibility. That means if we do these changes they can only be done in a major relaese.
This issue is intended to track the fields which are not up-to-date and should potentially be changed for 6.0 and discuss potential migration paths.
Fields (current -> new)
* system.process.cpu.user -> system.process.cpu.user.ticks
* system.process.cpu.system -> system.process.cpu.system.ticks
| 1.0 | Breaking field changes for 6.0 - Metricbeat uses the a convention for all its fields names: https://www.elastic.co/guide/en/beats/libbeat/master/event-conventions.html This convention evolves over time and gets improvements. Some of the older fields do not follow the full convention anymore and should be updated. The main issue with updating the fields is that it can break backward compatibility. That means if we do these changes they can only be done in a major relaese.
This issue is intended to track the fields which are not up-to-date and should potentially be changed for 6.0 and discuss potential migration paths.
Fields (current -> new)
* system.process.cpu.user -> system.process.cpu.user.ticks
* system.process.cpu.system -> system.process.cpu.system.ticks
| infrastructure | breaking field changes for metricbeat uses the a convention for all its fields names this convention evolves over time and gets improvements some of the older fields do not follow the full convention anymore and should be updated the main issue with updating the fields is that it can break backward compatibility that means if we do these changes they can only be done in a major relaese this issue is intended to track the fields which are not up to date and should potentially be changed for and discuss potential migration paths fields current new system process cpu user system process cpu user ticks system process cpu system system process cpu system ticks | 1 |
631,978 | 20,166,907,954 | IssuesEvent | 2022-02-10 06:04:03 | space-wizards/space-station-14 | https://api.github.com/repos/space-wizards/space-station-14 | opened | Ahelp 'kick' button immediately kicks, no confirmation | Type: Bug Priority: 2-Before Release Difficulty: 1-Easy | ## Description
<!-- Explain your issue in detail, including the steps to reproduce it if applicable. Issues without proper explanation are liable to be closed by maintainers.-->
accidentally kicked a guy this way and it was awkward | 1.0 | Ahelp 'kick' button immediately kicks, no confirmation - ## Description
<!-- Explain your issue in detail, including the steps to reproduce it if applicable. Issues without proper explanation are liable to be closed by maintainers.-->
accidentally kicked a guy this way and it was awkward | non_infrastructure | ahelp kick button immediately kicks no confirmation description accidentally kicked a guy this way and it was awkward | 0 |
119,077 | 25,464,553,049 | IssuesEvent | 2022-11-25 01:47:47 | objectos/objectos | https://api.github.com/repos/objectos/objectos | closed | Field declarations TC06: array initializer | a:objectos-code | Objectos Code:
```java
field(t(_int(), dim()), id("a"), a());
field(t(_int(), dim()), id("b"), a(i(0)));
field(t(_int(), dim()), id("c"), a(i(0), i(1)));
```
Should generate:
```java
int[] a = {};
int[] b = {0};
int[] c = {0, 1};
``` | 1.0 | Field declarations TC06: array initializer - Objectos Code:
```java
field(t(_int(), dim()), id("a"), a());
field(t(_int(), dim()), id("b"), a(i(0)));
field(t(_int(), dim()), id("c"), a(i(0), i(1)));
```
Should generate:
```java
int[] a = {};
int[] b = {0};
int[] c = {0, 1};
``` | non_infrastructure | field declarations array initializer objectos code java field t int dim id a a field t int dim id b a i field t int dim id c a i i should generate java int a int b int c | 0 |
3,592 | 4,426,976,176 | IssuesEvent | 2016-08-16 19:59:44 | coherence-community/coherence-incubator | https://api.github.com/repos/coherence-community/coherence-incubator | closed | Create Coherence Incubator 13 (bridging release) based on Coherence 12.2.1+ | Module: Command Pattern Module: Common Module: Event Distribution Pattern Module: Functor Pattern Module: JVisualVM Module: Messaging Pattern Module: Processing Pattern Module: Push Replication Pattern Priority: Major Type: Infrastructure | To support live migration and compatibility with the latest Coherence 12.2.1+ releases, we need to create a new release of the Coherence Incubator.
This will support using Coherence Incubator 12 features, on Coherence 12.2.1+ releases, while applications migrate to use the latest Coherence 12.2.1 features, namely Federated Caching instead of Push Replication. | 1.0 | Create Coherence Incubator 13 (bridging release) based on Coherence 12.2.1+ - To support live migration and compatibility with the latest Coherence 12.2.1+ releases, we need to create a new release of the Coherence Incubator.
This will support using Coherence Incubator 12 features, on Coherence 12.2.1+ releases, while applications migrate to use the latest Coherence 12.2.1 features, namely Federated Caching instead of Push Replication. | infrastructure | create coherence incubator bridging release based on coherence to support live migration and compatibility with the latest coherence releases we need to create a new release of the coherence incubator this will support using coherence incubator features on coherence releases while applications migrate to use the latest coherence features namely federated caching instead of push replication | 1 |
154,112 | 19,710,788,344 | IssuesEvent | 2022-01-13 04:55:46 | ChoeMinji/react-17.0.2 | https://api.github.com/repos/ChoeMinji/react-17.0.2 | opened | CVE-2020-15215 (Medium) detected in electron-9.1.0.tgz | security vulnerability | ## CVE-2020-15215 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>electron-9.1.0.tgz</b></p></summary>
<p>Build cross platform desktop apps with JavaScript, HTML, and CSS</p>
<p>Library home page: <a href="https://registry.npmjs.org/electron/-/electron-9.1.0.tgz">https://registry.npmjs.org/electron/-/electron-9.1.0.tgz</a></p>
<p>
Dependency Hierarchy:
- react-devtools-4.8.2.tgz (Root Library)
- :x: **electron-9.1.0.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/ChoeMinji/react-17.0.2/commit/4669645897ed4ebcd4ee037f4dabb509ed4754c7">4669645897ed4ebcd4ee037f4dabb509ed4754c7</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Electron before versions 11.0.0-beta.6, 10.1.2, 9.3.1 or 8.5.2 is vulnerable to a context isolation bypass. Apps using both `contextIsolation` and `sandbox: true` are affected. Apps using both `contextIsolation` and `nodeIntegrationInSubFrames: true` are affected. This is a context isolation bypass, meaning that code running in the main world context in the renderer can reach into the isolated Electron context and perform privileged actions.
<p>Publish Date: 2020-10-06
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-15215>CVE-2020-15215</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.6</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: Low
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/electron/electron/security/advisories/GHSA-56pc-6jqp-xqj8">https://github.com/electron/electron/security/advisories/GHSA-56pc-6jqp-xqj8</a></p>
<p>Release Date: 2020-10-19</p>
<p>Fix Resolution: v8.5.2, v9.3.1, v10.1.2, v11.0.0-beta.6</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | CVE-2020-15215 (Medium) detected in electron-9.1.0.tgz - ## CVE-2020-15215 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>electron-9.1.0.tgz</b></p></summary>
<p>Build cross platform desktop apps with JavaScript, HTML, and CSS</p>
<p>Library home page: <a href="https://registry.npmjs.org/electron/-/electron-9.1.0.tgz">https://registry.npmjs.org/electron/-/electron-9.1.0.tgz</a></p>
<p>
Dependency Hierarchy:
- react-devtools-4.8.2.tgz (Root Library)
- :x: **electron-9.1.0.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/ChoeMinji/react-17.0.2/commit/4669645897ed4ebcd4ee037f4dabb509ed4754c7">4669645897ed4ebcd4ee037f4dabb509ed4754c7</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Electron before versions 11.0.0-beta.6, 10.1.2, 9.3.1 or 8.5.2 is vulnerable to a context isolation bypass. Apps using both `contextIsolation` and `sandbox: true` are affected. Apps using both `contextIsolation` and `nodeIntegrationInSubFrames: true` are affected. This is a context isolation bypass, meaning that code running in the main world context in the renderer can reach into the isolated Electron context and perform privileged actions.
<p>Publish Date: 2020-10-06
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-15215>CVE-2020-15215</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.6</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: Low
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/electron/electron/security/advisories/GHSA-56pc-6jqp-xqj8">https://github.com/electron/electron/security/advisories/GHSA-56pc-6jqp-xqj8</a></p>
<p>Release Date: 2020-10-19</p>
<p>Fix Resolution: v8.5.2, v9.3.1, v10.1.2, v11.0.0-beta.6</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | non_infrastructure | cve medium detected in electron tgz cve medium severity vulnerability vulnerable library electron tgz build cross platform desktop apps with javascript html and css library home page a href dependency hierarchy react devtools tgz root library x electron tgz vulnerable library found in head commit a href found in base branch master vulnerability details electron before versions beta or is vulnerable to a context isolation bypass apps using both contextisolation and sandbox true are affected apps using both contextisolation and nodeintegrationinsubframes true are affected this is a context isolation bypass meaning that code running in the main world context in the renderer can reach into the isolated electron context and perform privileged actions publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity high privileges required none user interaction none scope unchanged impact metrics confidentiality impact low integrity impact low availability impact low for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution beta step up your open source security game with whitesource | 0 |
10,949 | 8,826,818,059 | IssuesEvent | 2019-01-03 05:15:26 | astroML/astroML | https://api.github.com/repos/astroML/astroML | closed | deprecation decorator | infrastructure question | For version 0.4 a few things need to be deprecated. We don't have our own infrastructure to do it nicely, but:
* `astropy.utils.deprecated` is a very powerful decorator, but it issues an `AstropyDeprecationWarning`. If there isn't an easy way to swap the warning to either an `AstroMLDeprecationWarning` or just a simple `DeprecationWarning` we may just write a lightweight wrapper around the astropy one.
* An alternative would be to use `sklearn.utils.deprecated`. | 1.0 | deprecation decorator - For version 0.4 a few things need to be deprecated. We don't have our own infrastructure to do it nicely, but:
* `astropy.utils.deprecated` is a very powerful decorator, but it issues an `AstropyDeprecationWarning`. If there isn't an easy way to swap the warning to either an `AstroMLDeprecationWarning` or just a simple `DeprecationWarning` we may just write a lightweight wrapper around the astropy one.
* An alternative would be to use `sklearn.utils.deprecated`. | infrastructure | deprecation decorator for version a few things need to be deprecated we don t have our own infrastructure to do it nicely but astropy utils deprecated is a very powerful decorator but it issues an astropydeprecationwarning if there isn t an easy way to swap the warning to either an astromldeprecationwarning or just a simple deprecationwarning we may just write a lightweight wrapper around the astropy one an alternative would be to use sklearn utils deprecated | 1 |
428,122 | 12,403,271,123 | IssuesEvent | 2020-05-21 13:37:48 | yalla-coop/tempo | https://api.github.com/repos/yalla-coop/tempo | closed | Apple Phone Design Issues | backlog bug priority-5 | Apple Phone devices are throwing up a few design issues for us to look at
- [x] Bug 187 - After sending TC's successfully to another member, the graphic above the congratulations message is a bit squashed and stretched
- [x] Bug 188 - On the member's homepage the gift graphic in the pink 'Share the love…' box doesn't load properly. Have tried reloading multiple times and restarting the browser and relogging in
- [x] Bug 193 - Similar issue to #187 - after I send TC's to my group the graphic is stretched above the 'Gift Sent!' header. The page also opens too zoomed in here so the full sub-heading message isn't displayed.
| 1.0 | Apple Phone Design Issues - Apple Phone devices are throwing up a few design issues for us to look at
- [x] Bug 187 - After sending TC's successfully to another member, the graphic above the congratulations message is a bit squashed and stretched
- [x] Bug 188 - On the member's homepage the gift graphic in the pink 'Share the love…' box doesn't load properly. Have tried reloading multiple times and restarting the browser and relogging in
- [x] Bug 193 - Similar issue to #187 - after I send TC's to my group the graphic is stretched above the 'Gift Sent!' header. The page also opens too zoomed in here so the full sub-heading message isn't displayed.
| non_infrastructure | apple phone design issues apple phone devices are throwing up a few design issues for us to look at bug after sending tc s successfully to another member the graphic above the congratulations message is a bit squashed and stretched bug on the member s homepage the gift graphic in the pink share the love box doesn t load properly have tried reloading multiple times and restarting the browser and relogging in bug similar issue to after i send tc s to my group the graphic is stretched above the gift sent header the page also opens too zoomed in here so the full sub heading message isn t displayed | 0
10,658 | 8,665,737,595 | IssuesEvent | 2018-11-29 00:41:01 | astroML/astroML | https://api.github.com/repos/astroML/astroML | opened | Fix pytest 4 compatibility | infrastructure | We got quite a few `RemovedInPytest4Warning` that needs some attention first, and there are also issues that looks similar to https://github.com/astropy/astropy/issues/6025 | 1.0 | Fix pytest 4 compatibility - We got quite a few `RemovedInPytest4Warning` that needs some attention first, and there are also issues that looks similar to https://github.com/astropy/astropy/issues/6025 | infrastructure | fix pytest compatibility we got quite a few that needs some attention first and there are also issues that looks similar to | 1 |
6,795 | 6,612,650,606 | IssuesEvent | 2017-09-20 05:29:57 | archco/moss-ui | https://api.github.com/repos/archco/moss-ui | closed | Change scss directory structure. | css infrastructure | `scss/components/` ์ ๊ฐ์ด ์๋ scss-components์ vue-components๋ฅผ ๋ถ๋ฆฌํ๋ค. scss-components๋ฅผ scss-parts๋ก ๋ณ๊ฒฝํ๋ค.
```yml
scss:
- components # vue components.
- helpers # scss helper utilities.
- lib # libraries and functions.
- mixins # scss mixins.
- parts # scss parts.
``` | 1.0 | Change scss directory structure. - Separate the scss-components and vue-components that currently live together under `scss/components/`. Rename scss-components to scss-parts.
```yml
scss:
- components # vue components.
- helpers # scss helper utilities.
- lib # libraries and functions.
- mixins # scss mixins.
- parts # scss parts.
``` | infrastructure | change scss directory structure separate the scss components and vue components that currently live together under scss components rename scss components to scss parts yml scss components vue components helpers scss helper utilities lib libraries and functions mixins scss mixins parts scss parts | 1
16,824 | 12,152,117,272 | IssuesEvent | 2020-04-24 21:26:40 | BCDevOps/developer-experience | https://api.github.com/repos/BCDevOps/developer-experience | opened | Better sync with lab & prod templates eg.; jenkins | Infrastructure action-required | https://trello.com/c/XwDDreQy/31-better-sync-with-lab-prod-templates-eg-jenkins
Things like custom jenkins templates in Prod don't live in Lab. There should be some sync between these two. | 1.0 | Better sync with lab & prod templates eg.; jenkins - https://trello.com/c/XwDDreQy/31-better-sync-with-lab-prod-templates-eg-jenkins
Things like custom jenkins templates in Prod don't live in Lab. There should be some sync between these two. | infrastructure | better sync with lab prod templates eg jenkins things like custom jenkins templates in prod don t live in lab there should be some sync between these two | 1 |
20,303 | 13,797,400,778 | IssuesEvent | 2020-10-09 22:09:21 | dotnet/runtime | https://api.github.com/repos/dotnet/runtime | closed | osx-arm64 enable CoreCLR native component builds in CI. | arch-arm64 area-Infrastructure-coreclr os-mac-os-x-big-sur | Once #40435 merges we need to enable CI builds of CoreCLR osx-arm64 native components.
Blocked on #41133 | 1.0 | osx-arm64 enable CoreCLR native component builds in CI. - Once #40435 merges we need to enable CI builds of CoreCLR osx-arm64 native components.
Blocked on #41133 | infrastructure | osx enable coreclr native component builds in ci once merges we need to enable ci builds of coreclr osx native components blocked on | 1 |
11,939 | 9,529,591,196 | IssuesEvent | 2019-04-29 11:46:05 | buildit/gravity-ui-web | https://api.github.com/repos/buildit/gravity-ui-web | opened | Add tests! | :bulb: idea good first issue infrastructure ui | **Is your feature request related to a problem? Please describe.**
We do some linting on our SASS code, but other than that, there is currently no testing on the `gravity-ui-web` library. We ought to fix that!
**Describe the solution you'd like**
Off the top of my head, these are all kinds of tests we should consider adding:
* Directly on the SASS code:
* Testing our SASS mixins and functions using something like [`sass-true`](https://github.com/oddbird/true)
* Testing that the SASS files from our ITCSS `settings` and `tools` layers do not output any CSS. (I think this would currently fail due to `gravy`, but nonetheless it would be good to have a test covering this, so we can fix that and avoid future regressions)
* Directly on the client-side JS code
* Bog-standard unit-tests using Jest or whatever.
* Note, currently we only have a teeny bit of JS code for the toggle button, but I anticipate this will grow in future releases, so having some testing infrastructure in place would be good
* Via the pattern library:
* Run a11y tests (using either [pa11y](http://pa11y.org/) or [aXe](https://www.deque.com/axe/)) per component (if you look at the Pattern Lab output in `dist/`, you can see that each pattern is output to its own HTML file). We could potentially just load each of those in turn and run our tests on them
* Run visual regression tests per component.
* While we fully expect things to render subtly differently in different browsers, components should render identically in the same browser - unless we've intentionally changed something. Therefore, running visual regression tests in one or more representative browsers would flag up any unintentional styling regressions.
* Since some components change appearance across breakpoints, we ought to test each component at a selection of viewport sizes.
* The reference images could be quite large, so we may want to explore storing them somewhere other than our git repo
* Maybe do snapshot testing of the rendered HTML?
* Similar to the visual regression testing, our HTML structures shouldn't really change by accident. So, something like Jest's snapshot testing might be a convenient way of checking against a reference and warning when that changes.
* E2E tests
* Being able to test the behaviour of any interactive components (currently only toggle button - but there are likely to be more in future) in a browser would be good.
* Tools like Cypress or something that runs over WebDriver would be good
* Personally, I'd prefer the latter because we could then run our tests across many different browsers (e.g. via BrowserStack or similar). Cypress is convenient but is tied to the Blink engine (i.e. Chromium). I think we should make an effort to test in other browser engines too.
I'm sure there's more - but you get the idea. Feel free to add comments and suggestions in the comments below.
I wouldn't expect all of these right away. Each category probably deserves its own PR (and deeper discussion before implementing), but I wanted to brain-dump my thoughts somewhere :-D
I think the SASS testing would be a good place to start as that's going to give us the most value in the short term. | 1.0 | Add tests! - **Is your feature request related to a problem? Please describe.**
We do some linting on our SASS code, but other than that, there is currently no testing on the `gravity-ui-web` library. We ought to fix that!
**Describe the solution you'd like**
Off the top of my head, these are all kinds of tests we should consider adding:
* Directly on the SASS code:
* Testing our SASS mixins and functions using something like [`sass-true`](https://github.com/oddbird/true)
* Testing that the SASS files from our ITCSS `settings` and `tools` layers do not output any CSS. (I think this would currently fail due to `gravy`, but nonetheless it would be good to have a test covering this, so we can fix that and avoid future regressions)
* Directly on the client-side JS code
* Bog-standard unit-tests using Jest or whatever.
* Note, currently we only have a teeny bit of JS code for the toggle button, but I anticipate this will grow in future releases, so having some testing infrastructure in place would be good
* Via the pattern library:
* Run a11y tests (using either [pa11y](http://pa11y.org/) or [aXe](https://www.deque.com/axe/)) per component (if you look at the Pattern Lab output in `dist/`, you can see that each pattern is output to its own HTML file). We could potentially just load each of those in turn and run our tests on them
* Run visual regression tests per component.
* While we fully expect things to render subtly differently in different browsers, components should render identically in the same browser - unless we've intentionally changed something. Therefore, running visual regression tests in one or more representative browsers would flag up any unintentional styling regressions.
* Since some components change appearance across breakpoints, we ought to test each component at a selection of viewport sizes.
* The reference images could be quite large, so we may want to explore storing them somewhere other than our git repo
* Maybe do snapshot testing of the rendered HTML?
* Similar to the visual regression testing, our HTML structures shouldn't really change by accident. So, something like Jest's snapshot testing might be a convenient way of checking against a reference and warning when that changes.
* E2E tests
* Being able to test the behaviour of any interactive components (currently only toggle button - but there are likely to be more in future) in a browser would be good.
* Tools like Cypress or something that runs over WebDriver would be good
* Personally, I'd prefer the latter because we could then run our tests across many different browsers (e.g. via BrowserStack or similar). Cypress is convenient but is tied to the Blink engine (i.e. Chromium). I think we should make an effort to test in other browser engines too.
I'm sure there's more - but you get the idea. Feel free to add comments and suggestions in the comments below.
I wouldn't expect all of these right away. Each category probably deserves its own PR (and deeper discussion before implementing), but I wanted to brain-dump my thoughts somewhere :-D
I think the SASS testing would be a good place to start as that's going to give us the most value in the short term. | infrastructure | add tests is your feature request related to a problem please describe we do some linting on our sass code but other than that there is currently no testing on the gravity ui web library we ought to fix that describe the solution you d like off the top of my head these are all kinds of tests we should consider adding directly on the sass code testing our sass mixins and functions using something like testing that the sass files from our itcss settings and tools layers do not output any css i think this would currently fail due to gravy but nonetheless it would be good to have a test covering this so we can fix that and avoid future regressions directly on the client side js code bog standard unit tests using jest or whatever note currently we only have a teeny bit of js code for the toggle button but i anticipate this will grow in future releases so having some testing infrastrcture in place would be good via the pattern library run tests using either or per component if you look at the pattern lab output in dist you can see that each pattern is output to its own html file we could potentially just load each of those in turn and run our tests on them run visual regression tests per component while we fully expect things to render subtly differently in different browsers components should render identically in the same browser unless we ve intentionally changed something therefore running visual regression tests in one or more representative browsers would flag up any unintentional styling regressions since some components change appearance across breakpoints we ought to test each component at a selection of viewport sizes the reference images could be quite large so we may want to explore storing them somewhere other than our git repo maybe do snapshot testing of the rendered html similar to the visual regression testing our html 
structures shouldn t really change by accident so something like jest s snapshot testing might be a convenient way of checking against a reference and warning when that changes tests being able to test the bahviour of any interactive components currently only toggle button but there are likely to be more in future in a browser would be good toolks like cypress or something the runs over webdriver would be good personally i d prefer the latter because we could then run our tests across many different browsers e g via browserstack or similar cypress is convenient but is tied to the blink engine i e chromium i think we should make an effort to test in other browser engines too i m sure there s more but you get the idea feel free to add comments and suggestions in the comments below i wouldn t expect all of these right away each category probably deserves its own pr and deeper discussion before implementing but i wanted to brain dump my thoughts somewhere d i think the sass testing would be a good place to start as that s going to give us the most value in the short term | 1 |
515,588 | 14,965,628,905 | IssuesEvent | 2021-01-27 13:38:11 | FireDynamics/ARTSS | https://api.github.com/repos/FireDynamics/ARTSS | closed | Add input file to logging | effort: low priority: high type: enhancement | In case a user encounters a problem with ARTSS we need the option to extract the input XML file from the log file.
- is it efficient to use a new tag for the logger just to mark the input data?
- what is the best way to display the input file and which parameters are necessary?
- is it efficient to use a new tag for the logger just to mark the input data?
- what is the best way to display the input file and which parameters are necessary? | non_infrastructure | add input file to logging in case a user encounters a problem with artss we need the option to extract the input xml file from the log file is it efficient to use a new tag for the logger just to mark the input data what is the best way to display the input file and which parameters are necessary | 0
71,716 | 3,367,617,904 | IssuesEvent | 2015-11-22 10:19:04 | music-encoding/music-encoding | https://api.github.com/repos/music-encoding/music-encoding | closed | Make verse a member of att.color | Priority: Medium | _From [raffaele...@gmail.com](https://code.google.com/u/117612283088052098592/) on October 16, 2013 09:53:55_
This will allow @-color on \<verse>, but not on \<syl>. Individual syllables can be colored with \<rend>.
_Original issue: http://code.google.com/p/music-encoding/issues/detail?id=181_ | 1.0 | Make verse a member of att.color - _From [raffaele...@gmail.com](https://code.google.com/u/117612283088052098592/) on October 16, 2013 09:53:55_
This will allow @-color on \<verse>, but not on \<syl>. Individual syllables can be colored with \<rend>.
_Original issue: http://code.google.com/p/music-encoding/issues/detail?id=181_ | non_infrastructure | make verse a member of att color from on october this will allow color on but not on individual syllables can be colored with original issue | 0 |
47,021 | 6,035,766,587 | IssuesEvent | 2017-06-09 14:37:56 | elastic/kibana | https://api.github.com/repos/elastic/kibana | opened | ToolBarSearchBox directive should use replace: true | :Design bug | This is because this component uses flexbox to make the search box wider, but the directive in question inserts itself as a node in the DOM, which interferes with the flexbox layout. If we can set the directive to use `replace: true`, we should be able to fix this problem.
| 1.0 | ToolBarSearchBox directive should use replace: true - This is because this component uses flexbox to make the search box wider, but the directive in question inserts itself as a node in the DOM, which interferes with the flexbox layout. If we can set the directive to use `replace: true`, we should be able to fix this problem.
| non_infrastructure | toolbarsearchbox directive should use replace true this is because this component uses flexbox to make the search box wider but the directive in question inserts itself as a node in the dom which interferes with the flexbox layout if we can set the directive to use replace true we should be able to fix this problem | 0 |
3,993 | 4,756,195,475 | IssuesEvent | 2016-10-24 13:20:41 | camptocamp/ngeo | https://api.github.com/repos/camptocamp/ngeo | opened | Add mobile redirection for desktop version | Infrastructure Ready | When loading desktop version on mobile, it should redirect to mobile version. | 1.0 | Add mobile redirection for desktop version - When loading desktop version on mobile, it should redirect to mobile version. | infrastructure | add mobile redirection for desktop version when loading desktop version on mobile it should redirect to mobile version | 1 |
34,823 | 30,484,334,919 | IssuesEvent | 2023-07-17 23:52:10 | bootstrapworld/curriculum | https://api.github.com/repos/bootstrapworld/curriculum | closed | In standards alignment page, add a link back to the standards' websites | Infrastructure | We already store these links (see `lib/maker/read-alignments.lua`), but don't push them through to the dependency graph or dictionaries.
- [x] Dorai, we need to project these links (and the full standard name) out into the distribution directory
- [x] Emmanuel, we need to render that info on the standards alignment page | 1.0 | In standards alignment page, add a link back to the standards' websites - We already store these links (see `lib/maker/read-alignments.lua`), but don't push them through to the dependency graph or dictionaries.
- [x] Dorai, we need to project these links (and the full standard name) out into the distribution directory
- [x] Emmanuel, we need to render that info on the standards alignment page | infrastructure | in standards alignment page add a link back to the standards websites we already store these links see lib maker read alignments lua but don t push them through to the dependency graph or dictionaries dorai we need to project these links and the full standard name out into the distribution directory emmanuel we need to render that info on the standards alignment page | 1 |
11,687 | 9,376,571,960 | IssuesEvent | 2019-04-04 08:17:47 | elastic/beats | https://api.github.com/repos/elastic/beats | closed | Metricbeat parse docker image and node labels error | :infrastructure Metricbeat containers libbeat | - Version: metricbeat 6.4.3
- Operating System: centos 7.5
- Discuss Forum URL:
https://discuss.elastic.co/t/metricbeat-parse-docker-image-labels-error/152986
https://discuss.elastic.co/t/metricbeat-kubernetes-module-state-node-metricset-failed-to-parse-node-labels/156269
# metricbeat parse node label error:
```
$ kubectl get node m7-devops-128071 --show-labels
m7-devops-128071 Ready <none> 110d v1.8.15 active=true,beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,es-data=true,ip-masq-fix=true,kubernetes.io/hostname=m7-devops-128071,mysql-master=true,node.alpha.4pd.io/group-0816=true,node.alpha.4pd.io/group-group1112=true,project=maze.ingress,prophet.4paradigm.com/addon=true,prophet.4paradigm.co
m/app=true,prophet.4paradigm.com/elasticsearch=true,prophet.4paradigm.com/gpu=true,prophet.4paradigm.com/offline=true,prophet.4paradigm.com/online=true,prophet.4paradigm.com/prometheus=true,prophet.4paradigm.com/rtidb-nameserver=true,prophet.4paradigm.com/rtidb-tablet=true,prophet.4paradigm.com/system=true,prophet=true
```
metricbeat log:
> 2018-11-12T17:54:45.479+0800 WARN elasticsearch/client.go:520 Cannot index event publisher.Event{Content:beat.Event{Timestamp:time.Time{wall:0xbef27156c82fe362, ext:1463147003462, loc:(*time.Location)(0x37e0b20)}, Meta:common.MapStr(nil), Fields:common.MapStr{"metricset":common.MapStr{"host":"172.27.128.71:8794", "rtt":9019564, "namespace":"kubernetes.
> node", "name":"state_node", "module":"kubernetes"}, "kubernetes":common.MapStr{"labels":common.MapStr{"ip-masq-fix":"true", "node":common.MapStr{"alpha":common.MapStr{"4pd":common.MapStr{"io/group-group1112":"true"}}}, "beta":common.MapStr{"kubernetes":common.MapStr{"io/os":"linux", "io/arch":"amd64"}}, "kubernetes":common.MapStr{"io/hostname":"m7-devops-128071
> }, "prophet":common.MapStr{"4paradigm":common.MapStr{"com/offline":"true", "com/app":"true", "com/elasticsearch":"true", "com/online":"true", "com/system":"true", "com/addon":"true"}, "value":"true"}, "es-data":"true"}, "node":common.MapStr{"pod":common.MapStr{"capacity":common.MapStr{"total":220}, "allocatable":common.MapStr{"total":220}}, "status":common.MapSt
> r{"unschedulable":false, "ready":"true"}, "memory":common.MapStr{"allocatable":common.MapStr{"bytes":2.70272487424e+11}, "capacity":common.MapStr{"bytes":2.70377345024e+11}}, "cpu":common.MapStr{"allocatable":common.MapStr{"cores":40}, "capacity":common.MapStr{"cores":40}}, "name":"m7-devops-128071"}}, "beat":common.MapStr{"name":"metricbeat-128071-bin", "hostna
> me":"m7-devops-128071", "version":"6.4.3"}, "host":common.MapStr{"name":"m7-devops-128071", "id":"61432cc588f747f4aec71029ea9e9408", "containerized":true, "architecture":"x86_64", "os":common.MapStr{"platform":"centos", "version":"7 (Core)", "family":"redhat", "codename":"Core"}}}, Private:interface {}(nil)}, Flags:0x0} (status=400): {"type":"mapper_parsing_exce
> ption","reason":"failed to parse [kubernetes.labels.node]","caused_by":{"type":"illegal_state_exception","reason":"Can't get text on a START_OBJECT at 1:222"}}
# metricbeat parse image label error
docker version: 18.06.0-ce
```
{
"_index": "metricbeat-power-6.4.1-2018.10.18",
"_type": "doc",
"_id": "yD3PhmYBo726M9O3TIlB",
"_version": 1,
"_score": null,
"_source": {
"@timestamp": "2018-10-18T10:53:25.167Z",
"metricset": {
"module": "docker",
"host": "/var/run/docker.sock",
"rtt": 49248,
"name": "image"
},
"docker": {
"image": {
"labels": {
"org_label-schema_schema-version": "= 1.0 org.label-schema.name=CentOS Base Image org.label-schema.vendor=CentOS org.label-schema.license=GPLv2 org.label-schema.build-date=20180531"
},
"id": {
"current": "sha256:d3923d82ce305febd8a9b4c88e949995ebf4b3c725c7388e389c1b1ae40ee6ed",
"parent": ""
},
"created": "2018-10-11T12:39:24.000Z",
"size": {
"regular": 352640809,
"virtual": 352640809
},
"tags": []
}
},
"beat": {
"hostname": "m7-power-128050",
"version": "6.4.1",
"name": "m7-power-128050"
},
"host": {
"name": "m7-power-128050",
"architecture": "x86_64",
"os": {
"codename": "Core",
"platform": "centos",
"version": "7 (Core)",
"family": "redhat"
},
"id": "14759c8d771e43a2b10f7402e8060d8a",
"containerized": true
}
},
"fields": {
"@timestamp": [
"2018-10-18T10:53:25.167Z"
],
"docker.image.created": [
"2018-10-11T12:39:24.000Z"
]
},
"highlight": {
"beat.hostname": [
"@kibana-highlighted-field@m7-power-128050@/kibana-highlighted-field@"
],
"metricset.name": [
"@kibana-highlighted-field@image@/kibana-highlighted-field@"
],
"metricset.module": [
"@kibana-highlighted-field@docker@/kibana-highlighted-field@"
]
},
"sort": [
1539860005167
]
}
```
| 1.0 | Metricbeat parse docker image and node labels error - - Version: metricbeat 6.4.3
- Operating System: centos 7.5
- Discuss Forum URL:
https://discuss.elastic.co/t/metricbeat-parse-docker-image-labels-error/152986
https://discuss.elastic.co/t/metricbeat-kubernetes-module-state-node-metricset-failed-to-parse-node-labels/156269
# metricbeat parse node label error:
```
$ kubectl get node m7-devops-128071 --show-labels
m7-devops-128071 Ready <none> 110d v1.8.15 active=true,beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,es-data=true,ip-masq-fix=true,kubernetes.io/hostname=m7-devops-128071,mysql-master=true,node.alpha.4pd.io/group-0816=true,node.alpha.4pd.io/group-group1112=true,project=maze.ingress,prophet.4paradigm.com/addon=true,prophet.4paradigm.co
m/app=true,prophet.4paradigm.com/elasticsearch=true,prophet.4paradigm.com/gpu=true,prophet.4paradigm.com/offline=true,prophet.4paradigm.com/online=true,prophet.4paradigm.com/prometheus=true,prophet.4paradigm.com/rtidb-nameserver=true,prophet.4paradigm.com/rtidb-tablet=true,prophet.4paradigm.com/system=true,prophet=true
```
metricbeat log:
> 2018-11-12T17:54:45.479+0800 WARN elasticsearch/client.go:520 Cannot index event publisher.Event{Content:beat.Event{Timestamp:time.Time{wall:0xbef27156c82fe362, ext:1463147003462, loc:(*time.Location)(0x37e0b20)}, Meta:common.MapStr(nil), Fields:common.MapStr{"metricset":common.MapStr{"host":"172.27.128.71:8794", "rtt":9019564, "namespace":"kubernetes.
> node", "name":"state_node", "module":"kubernetes"}, "kubernetes":common.MapStr{"labels":common.MapStr{"ip-masq-fix":"true", "node":common.MapStr{"alpha":common.MapStr{"4pd":common.MapStr{"io/group-group1112":"true"}}}, "beta":common.MapStr{"kubernetes":common.MapStr{"io/os":"linux", "io/arch":"amd64"}}, "kubernetes":common.MapStr{"io/hostname":"m7-devops-128071
> }, "prophet":common.MapStr{"4paradigm":common.MapStr{"com/offline":"true", "com/app":"true", "com/elasticsearch":"true", "com/online":"true", "com/system":"true", "com/addon":"true"}, "value":"true"}, "es-data":"true"}, "node":common.MapStr{"pod":common.MapStr{"capacity":common.MapStr{"total":220}, "allocatable":common.MapStr{"total":220}}, "status":common.MapSt
> r{"unschedulable":false, "ready":"true"}, "memory":common.MapStr{"allocatable":common.MapStr{"bytes":2.70272487424e+11}, "capacity":common.MapStr{"bytes":2.70377345024e+11}}, "cpu":common.MapStr{"allocatable":common.MapStr{"cores":40}, "capacity":common.MapStr{"cores":40}}, "name":"m7-devops-128071"}}, "beat":common.MapStr{"name":"metricbeat-128071-bin", "hostna
> me":"m7-devops-128071", "version":"6.4.3"}, "host":common.MapStr{"name":"m7-devops-128071", "id":"61432cc588f747f4aec71029ea9e9408", "containerized":true, "architecture":"x86_64", "os":common.MapStr{"platform":"centos", "version":"7 (Core)", "family":"redhat", "codename":"Core"}}}, Private:interface {}(nil)}, Flags:0x0} (status=400): {"type":"mapper_parsing_exce
> ption","reason":"failed to parse [kubernetes.labels.node]","caused_by":{"type":"illegal_state_exception","reason":"Can't get text on a START_OBJECT at 1:222"}}
# metricbeat parse image label error
docker version: 18.06.0-ce
```
{
"_index": "metricbeat-power-6.4.1-2018.10.18",
"_type": "doc",
"_id": "yD3PhmYBo726M9O3TIlB",
"_version": 1,
"_score": null,
"_source": {
"@timestamp": "2018-10-18T10:53:25.167Z",
"metricset": {
"module": "docker",
"host": "/var/run/docker.sock",
"rtt": 49248,
"name": "image"
},
"docker": {
"image": {
"labels": {
"org_label-schema_schema-version": "= 1.0 org.label-schema.name=CentOS Base Image org.label-schema.vendor=CentOS org.label-schema.license=GPLv2 org.label-schema.build-date=20180531"
},
"id": {
"current": "sha256:d3923d82ce305febd8a9b4c88e949995ebf4b3c725c7388e389c1b1ae40ee6ed",
"parent": ""
},
"created": "2018-10-11T12:39:24.000Z",
"size": {
"regular": 352640809,
"virtual": 352640809
},
"tags": []
}
},
"beat": {
"hostname": "m7-power-128050",
"version": "6.4.1",
"name": "m7-power-128050"
},
"host": {
"name": "m7-power-128050",
"architecture": "x86_64",
"os": {
"codename": "Core",
"platform": "centos",
"version": "7 (Core)",
"family": "redhat"
},
"id": "14759c8d771e43a2b10f7402e8060d8a",
"containerized": true
}
},
"fields": {
"@timestamp": [
"2018-10-18T10:53:25.167Z"
],
"docker.image.created": [
"2018-10-11T12:39:24.000Z"
]
},
"highlight": {
"beat.hostname": [
"@kibana-highlighted-field@m7-power-128050@/kibana-highlighted-field@"
],
"metricset.name": [
"@kibana-highlighted-field@image@/kibana-highlighted-field@"
],
"metricset.module": [
"@kibana-highlighted-field@docker@/kibana-highlighted-field@"
]
},
"sort": [
1539860005167
]
}
```
| infrastructure | metricbeat parse docker image and node labels error version metricbeat operating system centos discuss forum url metricbeat parse node lable error๏ผ kubectl get node devops show labels devops ready active true beta kubernetes io arch beta kubernetes io os linux es data true ip masq fix true kubernetes io hostname devops mysql master true node alpha io group true node alpha io group true project maze ingress prophet com addon true prophet co m app true prophet com elasticsearch true prophet com gpu true prophet com offline true prophet com online true prophet com prometheus true prophet com rtidb nameserver true prophet com rtidb tablet true prophet com system true prophet true metricbeat log๏ผ warn elasticsearch client go cannot index event publisher event content beat event timestamp time time wall ext loc time location meta common mapstr nil fields common mapstr metricset common mapstr host rtt namespace kubernetes node name state node module kubernetes kubernetes common mapstr labels common mapstr ip masq fix true node common mapstr alpha common mapstr common mapstr io group true beta common mapstr kubernetes common mapstr io os linux io arch kubernetes common mapstr io hostname devops prophet common mapstr common mapstr com offline true com app true com elasticsearch true com online true com system true com addon true value true es data true node common mapstr pod common mapstr capacity common mapstr total allocatable common mapstr total status common mapst r unschedulable false ready true memory common mapstr allocatable common mapstr bytes capacity common mapstr bytes cpu common mapstr allocatable common mapstr cores capacity common mapstr cores name devops beat common mapstr name metricbeat bin hostna me devops version host common mapstr name devops id containerized true architecture os common mapstr platform centos version core family redhat codename core private interface nil flags status type mapper parsing exce ption reason failed to 
parse caused by type illegal state exception reason can t get text on a start object at metricbeat parse image label error docker version๏ผ ce index metricbeat power type doc id version score null source timestamp metricset module docker host var run docker sock rtt name image docker image labels org label schema schema version org label schema name centos base image org label schema vendor centos org label schema license org label schema build date id current parent created size regular virtual tags beat hostname power version name power host name power architecture os codename core platform centos version core family redhat id containerized true fields timestamp docker image created highlight beat hostname kibana highlighted field power kibana highlighted field metricset name kibana highlighted field image kibana highlighted field metricset module kibana highlighted field docker kibana highlighted field sort | 1 |
276,032 | 8,583,273,817 | IssuesEvent | 2018-11-13 19:16:26 | counterfactual/monorepo | https://api.github.com/repos/counterfactual/monorepo | closed | [monorepo] Cache files based on package.json not on yarn.lock | Priority: Low Status: Accepted Type: Maintenance | The CI breaks when yarn.lock is computed using a different version of `yarn` from `1.10.1`. I think a fix to this should be to change `circle.yaml` to cache based on the various `package.json` files instead of caching based on the `yarn.lock` file. | 1.0 | [monorepo] Cache files based on package.json not on yarn.lock - The CI breaks when yarn.lock is computed using a different version of `yarn` from `1.10.1`. I think a fix to this should be to change `circle.yaml` to cache based on the various `package.json` files instead of caching based on the `yarn.lock` file. | non_infrastructure | cache files based on package json not on yarn lock the ci breaks when yarn lock is computed using a different version of yarn from i think a fix to this should be to change circle yaml to cache based on the various package json files instead of caching based on the yarn lock file | 0 |
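The fix described in the row above — keying the CI cache on the `package.json` files instead of `yarn.lock` — amounts to hashing the manifests themselves, so the key no longer changes when a different yarn version rewrites the lockfile. A minimal Python sketch of the idea (CircleCI expresses this natively with `{{ checksum "package.json" }}` in a cache-key template; the function here is purely illustrative):

```python
import hashlib

def cache_key(manifest_contents):
    """Derive a CI cache key from the package manifests themselves, so the
    key does not depend on which yarn version regenerated yarn.lock."""
    h = hashlib.sha256()
    # Sort so the key is independent of file discovery order.
    for blob in sorted(manifest_contents):
        h.update(blob)
    return h.hexdigest()[:16]

key = cache_key([b'{"name":"a"}', b'{"name":"b"}'])
```

Sorting the inputs makes the key deterministic regardless of the order the `package.json` files are found in.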
19,908 | 13,536,599,606 | IssuesEvent | 2020-09-16 09:16:31 | appvia/kore | https://api.github.com/repos/appvia/kore | opened | Create an infrastructure environment custom resource + API endpoints | EPIC: Infrastructure environments | ## What
As an operator
* I want to define a default set of infrastructure environments for teams (e.g. a non-production and a production one)
* I want to be able to either set up account automation for these infra envs or I will manually assign existing cloud accounts to each infra env
* I want to be able to define different infra envs for teams who require additional environments or have a different setup (e.g. a team might not have a production environment, or needs dev/staging/prod)
## Things to consider
* Do we want to allow teams to define additional environments? If yes, how do we enforce they bind the new infra env to the right cloud account?
| 1.0 | Create an infrastructure environment custom resource + API endpoints - ## What
As an operator
* I want to define a default set of infrastructure environments for teams (e.g. a non-production and a production one)
* I want to be able to either set up account automation for these infra envs or I will manually assign existing cloud accounts to each infra env
* I want to be able to define different infra envs for teams who require additional environments or have a different setup (e.g. a team might not have a production environment, or needs dev/staging/prod)
## Things to consider
* Do we want to allow teams to define additional environments? If yes, how do we enforce they bind the new infra env to the right cloud account?
| infrastructure | create an infrastructure environment custom resource api endpoints what as an operator i want to define a default set of infrastructure environments for teams e g a non production and a production one i want to be able to either set up account automation for these infra envs or i will manually assign existing cloud accounts to each infra env i want to be able to define different infra envs for teams who require additional environments or have a different setup e g a team might not have a production environment or needs dev staging prod things to consider do we want to allow teams to define additional environments if yes how do we enforce they bind the new infra env to the right cloud account | 1 |
27,033 | 21,045,658,016 | IssuesEvent | 2022-03-31 15:47:25 | google/iree | https://api.github.com/repos/google/iree | opened | Add generic python test rule for non-python-binding tests | infrastructure/cmake | Currently we have `iree_py_test` cmake rule to test python binding tests. It includes some specified python paths for testing python binding code.
Since we are going to have other python tests outside python binding (e.g. benchmark tools), it is better to have a generic python test rules for all kinds of tests. For example, the rule should allow the test to specify the needed python paths. | 1.0 | Add generic python test rule for non-python-binding tests - Currently we have `iree_py_test` cmake rule to test python binding tests. It includes some specified python paths for testing python binding code.
Since we are going to have other python tests outside python binding (e.g. benchmark tools), it is better to have a generic python test rules for all kinds of tests. For example, the rule should allow the test to specify the needed python paths. | infrastructure | add generic python test rule for non python binding tests currently we have iree py test cmake rule to test python binding tests it includes some specified python paths for testing python binding code since we are going to have other python tests outside python binding e g benchmark tools it is better to have a generic python test rules for all kinds of tests for example the rule should allow the test to specify the needed python paths | 1 |
13,549 | 10,321,425,923 | IssuesEvent | 2019-08-31 01:52:23 | servo/servo | https://api.github.com/repos/servo/servo | closed | Some homu queues are busted | A-infrastructure I-bustage | The homu log is full of errors for the gleam and core-foundation-rs repos in particular, and the queue pages give internal server errors: https://build.servo.org/homu/queue/gleam . This means that we can't even press the synchronize button, and homu doesn't respond in PRs for those repos. | 1.0 | Some homu queues are busted - The homu log is full of errors for the gleam and core-foundation-rs repos in particular, and the queue pages give internal server errors: https://build.servo.org/homu/queue/gleam . This means that we can't even press the synchronize button, and homu doesn't respond in PRs for those repos. | infrastructure | some homu queues are busted the homu log is full of errors for the gleam and core foundation rs repos in particular and the queue pages give internal server errors this means that we can t even press the synchronize button and homu doesn t respond in prs for those repos | 1 |
77 | 2,514,561,877 | IssuesEvent | 2015-01-15 12:34:47 | Starcounter/Starcounter | https://api.github.com/repos/Starcounter/Starcounter | closed | Access denied on build server during process kill | Infrastructure stability issue | I experienced in [my build](https://scbuildserver/viewLog.html?tab=buildLog&buildTypeId=Starcounter_DevelopDaily&buildId=25250) the following error:
>[09:33:39][Step 1/1] Build utility exception:
[09:33:39][Step 1/1] 2015-01-15 09:33:39 (C:\TeamCity\TeamCity8\buildAgent\work\sc-2.0.8807.2\Level1\bsbin\Release\PostBuildTasks.exe): System.ComponentModel.Win32Exception (0x80004005): Access is denied
[09:33:39][Step 1/1] at System.Diagnostics.Process.Kill()
[09:33:39][Step 1/1] at BuildSystemHelper.BuildSystem.KillDisturbingProcesses(String[] procNames) in c:\TeamCity\TeamCity8\buildAgent\work\sc-2.0.8807.2\BuildSystem\src\BuildSystemHelper\BuildSystemHelper.cs:line 251
[09:33:39][Step 1/1] at BuildSystemHelper.BuildSystem.KillAll() in c:\TeamCity\TeamCity8\buildAgent\work\sc-2.0.8807.2\BuildSystem\src\BuildSystemHelper\BuildSystemHelper.cs:line 265
[09:33:39][Step 1/1] at PostBuildTasks.PostBuildTasks.Main(String[] args) in c:\TeamCity\TeamCity8\buildAgent\work\sc-2.0.8807.2\BuildSystem\src\PostBuildTasks\PostBuildTasks.cs:line 237 | 1.0 | Access denied on build server during process kill - I experienced in [my build](https://scbuildserver/viewLog.html?tab=buildLog&buildTypeId=Starcounter_DevelopDaily&buildId=25250) the following error:
>[09:33:39][Step 1/1] Build utility exception:
[09:33:39][Step 1/1] 2015-01-15 09:33:39 (C:\TeamCity\TeamCity8\buildAgent\work\sc-2.0.8807.2\Level1\bsbin\Release\PostBuildTasks.exe): System.ComponentModel.Win32Exception (0x80004005): Access is denied
[09:33:39][Step 1/1] at System.Diagnostics.Process.Kill()
[09:33:39][Step 1/1] at BuildSystemHelper.BuildSystem.KillDisturbingProcesses(String[] procNames) in c:\TeamCity\TeamCity8\buildAgent\work\sc-2.0.8807.2\BuildSystem\src\BuildSystemHelper\BuildSystemHelper.cs:line 251
[09:33:39][Step 1/1] at BuildSystemHelper.BuildSystem.KillAll() in c:\TeamCity\TeamCity8\buildAgent\work\sc-2.0.8807.2\BuildSystem\src\BuildSystemHelper\BuildSystemHelper.cs:line 265
[09:33:39][Step 1/1] at PostBuildTasks.PostBuildTasks.Main(String[] args) in c:\TeamCity\TeamCity8\buildAgent\work\sc-2.0.8807.2\BuildSystem\src\PostBuildTasks\PostBuildTasks.cs:line 237 | infrastructure | access denied on build server during process kill i experienced in the following error build utility exception c teamcity buildagent work sc bsbin release postbuildtasks exe system componentmodel access is denied at system diagnostics process kill at buildsystemhelper buildsystem killdisturbingprocesses string procnames in c teamcity buildagent work sc buildsystem src buildsystemhelper buildsystemhelper cs line at buildsystemhelper buildsystem killall in c teamcity buildagent work sc buildsystem src buildsystemhelper buildsystemhelper cs line at postbuildtasks postbuildtasks main string args in c teamcity buildagent work sc buildsystem src postbuildtasks postbuildtasks cs line | 1 |
341,662 | 24,707,406,095 | IssuesEvent | 2022-10-19 20:21:38 | Handenurcoskun/SWE577 | https://api.github.com/repos/Handenurcoskun/SWE577 | opened | Read and Summarise "INVESTIGATING THE PERFORMANCE OF SEGMENTATION METHODS WITH DEEP LEARNING MODELS FOR SENTIMENT ANALYSIS ON TURKISH INFORMAL TEXTS" | documentation paper Summarise | **INVESTIGATING THE PERFORMANCE OF SEGMENTATION METHODS WITH DEEP LEARNING MODELS FOR SENTIMENT ANALYSIS ON TURKISH INFORMAL TEXTS**
FATIH KURT | 1.0 | Read and Summarise "INVESTIGATING THE PERFORMANCE OF SEGMENTATION METHODS WITH DEEP LEARNING MODELS FOR SENTIMENT ANALYSIS ON TURKISH INFORMAL TEXTS" - **INVESTIGATING THE PERFORMANCE OF SEGMENTATION METHODS WITH DEEP LEARNING MODELS FOR SENTIMENT ANALYSIS ON TURKISH INFORMAL TEXTS**
FATIH KURT | non_infrastructure | read and summarise investigating the performance of segmentation methods with deep learning models for sentiment analysis on turkish informal texts investigating the performance of segmentation methods with deep learning models for sentiment analysis on turkish informal texts fatih kurt | 0 |
3,860 | 4,668,210,567 | IssuesEvent | 2016-10-06 00:56:17 | heidelberg-makerspace/do-something | https://api.github.com/repos/heidelberg-makerspace/do-something | closed | figure out recurring events in wiki calendar | IT infrastructure | We can still not display recurring events correctly which makes the calendar look sadly empty. The current "process" (so far, some malfunctioning code from the MediaWiki wiki) is tracked in our wiki on the [events help page](https://wiki.heidelberg-makerspace.de/wiki/Help:Events#Recurring_Events). | 1.0 | figure out recurring events in wiki calendar - We can still not display recurring events correctly which makes the calendar look sadly empty. The current "process" (so far, some malfunctioning code from the MediaWiki wiki) is tracked in our wiki on the [events help page](https://wiki.heidelberg-makerspace.de/wiki/Help:Events#Recurring_Events). | infrastructure | figure out recurring events in wiki calendar we can still not display recurring events correctly which makes the calendar look sadly empty the current process so far some malfunctioning code from the mediawiki wiki is tracked in our wiki on the | 1 |
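The recurring-events problem in the row above is, at its core, an expansion step: turning one recurrence rule into the concrete dates a calendar page displays. Independent of whatever the MediaWiki extension does internally, a simple weekly expansion can be sketched as:

```python
from datetime import date, timedelta

def weekly_occurrences(start, count):
    """Expand a simple weekly recurrence rule (FREQ=WEEKLY) into the
    concrete dates a calendar page would actually render."""
    return [start + timedelta(weeks=i) for i in range(count)]

occurrences = weekly_occurrences(date(2016, 10, 6), 3)
```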
11,159 | 8,968,838,482 | IssuesEvent | 2019-01-29 09:16:09 | Snusbolaget/product | https://api.github.com/repos/Snusbolaget/product | opened | Transform: Infra | feature infrastructure | - [ ] Create and config EC2 instance to run app on
- [ ] Create role and with policies needed attached
- [ ] Deployment | 1.0 | Transform: Infra - - [ ] Create and config EC2 instance to run app on
- [ ] Create role and with policies needed attached
- [ ] Deployment | infrastructure | transform infra create and config instance to run app on create role and with policies needed attached deployment | 1 |
580,721 | 17,265,110,350 | IssuesEvent | 2021-07-22 12:56:40 | mozilla/web-ext | https://api.github.com/repos/mozilla/web-ext | closed | "web-ext run" with custom manifest file | priority: enhancement | ### Is this a feature request or a bug?
This is a feature request (unless it's currently possible and I haven't figured it out). To help with developing / debugging extensions locally, it would be of great help to allow loading a separate `manifest.json` file. In my case, to add `localhost` to the URL patterns for some content scripts, which should not be present in production.
### What is the current behavior?
Currently, I'm adding `<all_urls>` to the content script while working locally, then being careful with each Git commit to not submit the changed manifest file.
### What is the expected or desired behavior?
It would be great to be able to `web-ext run --manifest=manifest.local.json`. Ideally (but not essentially), if it accepted a CommonJS file for the `--manifest` option, I'd be able to extend the main manifest with just the changes I need:
```js
const manifest = require('./manifest.json');
module.exports = {
...manifest,
content_scripts: manifest.content_scripts.map(
it => ({
...it,
matches: it.matches.concat('localhost')
})
)
};
```
Alternatively, I'd be glad to know if there's already a solution in the current version of `web-ext`, or otherwise a workflow that doesn't depend on me remembering to not commit `manifest.json` test data.
````
node --version && npm --version && npx web-ext --version
v15.2.0
6.14.10
6.1.0
````
| 1.0 | "web-ext run" with custom manifest file - ### Is this a feature request or a bug?
This is a feature request (unless it's currently possible and I haven't figured it out). To help with developing / debugging extensions locally, it would be of great help to allow loading a separate `manifest.json` file. In my case, to add `localhost` to the URL patterns for some content scripts, which should not be present in production.
### What is the current behavior?
Currently, I'm adding `<all_urls>` to the content script while working locally, then being careful with each Git commit to not submit the changed manifest file.
### What is the expected or desired behavior?
It would be great to be able to `web-ext run --manifest=manifest.local.json`. Ideally (but not essentially), if it accepted a CommonJS file for the `--manifest` option, I'd be able to extend the main manifest with just the changes I need:
```js
const manifest = require('./manifest.json');
module.exports = {
...manifest,
content_scripts: manifest.content_scripts.map(
it => ({
...it,
matches: it.matches.concat('localhost')
})
)
};
```
Alternatively, I'd be glad to know if there's already a solution in the current version of `web-ext`, or otherwise a workflow that doesn't depend on me remembering to not commit `manifest.json` test data.
````
node --version && npm --version && npx web-ext --version
v15.2.0
6.14.10
6.1.0
````
| non_infrastructure | web ext run with custom manifest file is this a feature request or a bug this is a feature request unless it s currently possible and i haven t figured it out to help with developing debugging extensions locally it would be of great help to allow loading a separate manifest json file in my case to add localhost to the url patterns for some content scripts which should not be present in production what is the current behavior currently i m adding to the content script while working locally then being careful with each git commit to not submit the changed manifest file what is the expected or desired behavior it would be great to be able to web ext run manifest manifest local json ideally but not essentially if it accepted a commonjs file for the manifest option i d be able to extend the main manifest with just the changes i need js const manifest require manifest json module exports manifest content scripts manifest content scripts map it it matches it matches concat localhost alternatively i d be glad to know if there s already a solution in the current version of web ext or otherwise a workflow that doesn t depend on me remembering to not commit manifest json test data node version npm version npx web ext version | 0 |
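The CommonJS pattern in the issue above — spreading a base manifest and appending `localhost` to the content scripts' `matches` — translates directly to any language. A hedged Python sketch of the same merge; the extension name and manifest contents are made up for illustration:

```python
import copy

def extend_manifest(base, extra_match):
    """Return a deep copy of a WebExtension manifest with an extra URL
    pattern appended to every content script's `matches` list; the base
    manifest is left untouched."""
    dev = copy.deepcopy(base)
    for script in dev.get("content_scripts", []):
        if extra_match not in script["matches"]:
            script["matches"].append(extra_match)
    return dev

# Toy manifest for illustration only.
base_manifest = {
    "name": "demo-extension",
    "content_scripts": [
        {"matches": ["https://example.com/*"], "js": ["content.js"]},
    ],
}
dev_manifest = extend_manifest(base_manifest, "http://localhost/*")
```

Because the base is deep-copied, the production manifest on disk never needs to change — only the derived development variant does.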
201,223 | 7,027,644,540 | IssuesEvent | 2017-12-25 00:50:03 | google/google-auth-library-nodejs | https://api.github.com/repos/google/google-auth-library-nodejs | closed | login.getPayload() not returning given_name and family_name anymore | Priority: P2+ | I'm using the lib like this:
```
client.verifyIdToken(
id_token,
[googleId],
(err, login) => {
const { aud, hd, sub, email_verified, email, given_name, family_name, picture } = login.getPayload()
...
}
)
```
Before I had given_name and family_name. I can't say since when, but now given_name and family_name are empty.
Is it something related to this lib? I don't think we changed anything in the way we get the id_token...
```
client.verifyIdToken(
id_token,
[googleId],
(err, login) => {
const { aud, hd, sub, email_verified, email, given_name, family_name, picture } = login.getPayload()
...
}
)
```
Before I had given_name and family_name. I can't say since when, but now given_name and family_name are empty.
Is it something related to this lib? I don't think we changed anything in the way we get the id_token... | non_infrastructure | login getpayload not returning given name and family name anymore i m using the lib like this client verifyidtoken id token err login const aud hd sub email verified email given name family name picture login getpayload before i had given name and family name i can t say since when but now given name and family name are empty is it something related to this lib i don t think we changed anything in the way we get the id token | 0 |
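One way to narrow down an issue like the one above is to check whether `given_name`/`family_name` are even present in the raw token, as opposed to being dropped by the library. The sketch below decodes a JWT's payload segment directly — it skips signature verification, so it is for inspection only and never a substitute for `verifyIdToken` — and builds a toy token locally so it is self-contained:

```python
import base64
import json

def jwt_payload(id_token):
    """Decode the middle (payload) segment of a JWT without verifying
    the signature -- inspection only."""
    payload_b64 = id_token.split(".")[1]
    padded = payload_b64 + "=" * (-len(payload_b64) % 4)  # restore stripped padding
    return json.loads(base64.urlsafe_b64decode(padded))

def _segment(obj):
    """Base64url-encode one JWT segment, padding stripped as usual."""
    return base64.urlsafe_b64encode(json.dumps(obj).encode()).decode().rstrip("=")

# Toy token (header.payload.signature) purely to exercise the decoder.
claims = {"email": "user@example.com", "given_name": "Ada", "family_name": "Lovelace"}
token = ".".join([_segment({"alg": "RS256"}), _segment(claims), "signature"])
decoded = jwt_payload(token)
```

If the names are missing from the decoded payload itself, the cause is upstream of the client library (e.g. the token issuer or requested scopes), not the `getPayload()` call.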
61,304 | 14,968,661,907 | IssuesEvent | 2021-01-27 17:07:15 | sandboxie-plus/Sandboxie | https://api.github.com/repos/sandboxie-plus/Sandboxie | closed | Problems with modal windows | fixed in next build | 1) Some modal windows block main window but not shown itself:
<details>
<summary>Show spoiler (gif with problem visualization)</summary>

</details>
2) If enabled "Always on Top" all modal windows showed behind main window:
<details>
<summary>Show spoiler (gif with problem visualization)</summary>

</details>
SBIE version is 0.5.5 / 5.46.4 (64 bit) | 1.0 | Problems with modal windows - 1) Some modal windows block main window but not shown itself:
<details>
<summary>Show spoiler (gif with problem visualization)</summary>

</details>
2) If enabled "Always on Top" all modal windows showed behind main window:
<details>
<summary>Show spoiler (gif with problem visualization)</summary>

</details>
SBIE version is 0.5.5 / 5.46.4 (64 bit) | non_infrastructure | problems with modal windows some modal windows block main window but not shown itself show spoiler gif with problem visualization if enabled always on top all modal windows showed behind main window show spoiler gif with problem visualization sbie version is bit | 0 |
8,882 | 7,718,235,470 | IssuesEvent | 2018-05-23 15:41:44 | GoogleCloudPlatform/forseti-security | https://api.github.com/repos/GoogleCloudPlatform/forseti-security | closed | Ensure Global Uniqueness of Bucket Name Across Many Different Deployments | module: infrastructure priority: p0 release-testing: 2.0 RC2 triaged: yes | Bucket names need to be globally unique, i.e. they have their own namespace outside the organization level. But our current naming is not long enough to ensure that global uniqueness, across many different deployments.
https://cloud.google.com/storage/docs/naming

| 1.0 | Ensure Global Uniqueness of Bucket Name Across Many Different Deployments - Bucket names need to be globally unique, i.e. they have their own namespace outside the organization level. But our current naming is not long enough to ensure that global uniqueness, across many different deployments.
https://cloud.google.com/storage/docs/naming

| infrastructure | ensure global uniqueness of bucket name across many different deployments bucket names need to be globally unique i e they have their own namespace outside the organization level but our current naming is not long enough to ensure that global uniqueness across many different deployments | 1 |
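A common way to get the global uniqueness the issue above asks for is to append a random suffix to the deployment's prefix and sanity-check the result against the GCS naming rules. A sketch, with the `forseti-server` prefix used purely as an example:

```python
import re
import uuid

def unique_bucket_name(prefix, suffix_len=8):
    """Append a random hex suffix so the same deployment prefix can be
    reused across many organizations without colliding in the global
    bucket namespace, then check basic GCS naming constraints."""
    name = "{}-{}".format(prefix, uuid.uuid4().hex[:suffix_len]).lower()
    # GCS bucket names: 3-63 chars; lowercase letters, digits, dashes,
    # dots; must start and end with a letter or digit.
    if not re.fullmatch(r"[a-z0-9][a-z0-9._-]{1,61}[a-z0-9]", name):
        raise ValueError("invalid bucket name: " + name)
    return name

name = unique_bucket_name("forseti-server")
```

Eight hex characters give 16^8 (about 4 billion) possible suffixes per prefix, which is plenty for deployment-level uniqueness.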
11,824 | 9,442,664,817 | IssuesEvent | 2019-04-15 07:27:41 | akvo/akvo-flow | https://api.github.com/repos/akvo/akvo-flow | closed | Add JSX transpiler to dashboard build process (estimated: 1) | Deployment & infrastructure | As part of the introduction of react components in the dashboard, we need to be able to compile [JSX](https://facebook.github.io/jsx/) to JS code that will be embedded in different parts of the ember dashboard code. This requires including JSX transpiler tools in our current build process.
Child of #2796
| 1.0 | Add JSX transpiler to dashboard build process (estimated: 1) - As part of the introduction of react components in the dashboard, we need to be able to compile [JSX](https://facebook.github.io/jsx/) to JS code that will be embedded in different parts of the ember dashboard code. This requires including JSX transpiler tools in our current build process.
Child of #2796
| infrastructure | add jsx transpiler to dashboard build process estimated as part of the introduction of react components in the dashboard we need to be able to compile to js code that will be embedded in different parts of the ember dashboard code this requires including jsx transpiler tools in our current build process child of | 1 |
20,589 | 14,019,902,547 | IssuesEvent | 2020-10-29 18:51:31 | dotnet/aspnetcore | https://api.github.com/repos/dotnet/aspnetcore | closed | Add test automation for Alpine | area-infrastructure | We should have test automation on Alpine, perhaps as part of migration to Helix. | 1.0 | Add test automation for Alpine - We should have test automation on Alpine, perhaps as part of migration to Helix. | infrastructure | add test automation for alpine we should have test automation on alpine perhaps as part of migration to helix | 1 |
15,084 | 3,924,154,888 | IssuesEvent | 2016-04-22 14:18:15 | EngoEngine/engo | https://api.github.com/repos/EngoEngine/engo | closed | Add images/diagrams to the Camera Tutorial | documentation | I think the tutorial about the Camera would benefit from diagrams / images showing objects in some 2d-plot.
This is quite some effort, so I'd say this is low-priority (you can label it as such).
( [Camera Wiki](https://github.com/paked/engi/wiki/Camera) ) | 1.0 | Add images/diagrams to the Camera Tutorial - I think the tutorial about the Camera would benefit from diagrams / images showing objects in some 2d-plot.
This is quite some effort, so I'd say this is low-priority (you can label it as such).
( [Camera Wiki](https://github.com/paked/engi/wiki/Camera) ) | non_infrastructure | add images diagrams to the camera tutorial i think the tutorial about the camera would benefit from diagrams images showing objects in some plot this is quite some effort so i d say this is low priority you can label it as such | 0 |
55,069 | 6,425,226,878 | IssuesEvent | 2017-08-09 15:01:32 | spirit-code/spirit | https://api.github.com/repos/spirit-code/spirit | closed | Core: VF rotation functions and corresp. unit tests | core core-cuda enhancement unit-test | The rotation function is currently wrong!
Unit tests are needed and in this context rotation functions for whole vectorfields could be created and tested:
```C++
void rotate(const scalar & angle, const Vector3 & axis, vectorfield & out);
void rotate(const scalar & angle, const vectorfield & axes, vectorfield & out);
``` | 1.0 | Core: VF rotation functions and corresp. unit tests - The rotation function is currently wrong!
Unit tests are needed and in this context rotation functions for whole vectorfields could be created and tested:
```C++
void rotate(const scalar & angle, const Vector3 & axis, vectorfield & out);
void rotate(const scalar & angle, const vectorfield & axes, vectorfield & out);
``` | non_infrastructure | core vf rotation functions and corresp unit tests the rotation function is currently wrong unit tests are needed and in this context rotation functions for whole vectorfields could be created and tested c void rotate const scalar angle const axis vectorfield out void rotate const scalar angle const vectorfield axes vectorfield out | 0 |
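The `rotate(angle, axis, out)` signature proposed in the issue above is a per-vector application of Rodrigues' rotation formula, v' = v cosθ + (k×v) sinθ + k(k·v)(1−cosθ). The original code is C++; the following plain-Python sketch only illustrates the math of rotating a whole vector field about a single axis:

```python
import math

def rotate_field(field, angle, axis):
    """Rotate every 3-vector in `field` by `angle` (radians) about `axis`
    using Rodrigues' formula."""
    n = math.sqrt(sum(a * a for a in axis))
    kx, ky, kz = (a / n for a in axis)  # normalized rotation axis k
    c, s = math.cos(angle), math.sin(angle)
    out = []
    for vx, vy, vz in field:
        # k x v (cross product) and k . v (dot product)
        cx = ky * vz - kz * vy
        cy = kz * vx - kx * vz
        cz = kx * vy - ky * vx
        d = kx * vx + ky * vy + kz * vz
        out.append((
            vx * c + cx * s + kx * d * (1 - c),
            vy * c + cy * s + ky * d * (1 - c),
            vz * c + cz * s + kz * d * (1 - c),
        ))
    return out

# Rotating the x unit vector 90 degrees about z should give the y unit vector.
rotated = rotate_field([(1.0, 0.0, 0.0)], math.pi / 2, (0.0, 0.0, 1.0))
```

A unit test like the ones requested in the issue would assert exactly such known angle/axis/result triples.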
52,715 | 10,917,226,002 | IssuesEvent | 2019-11-21 14:48:13 | dcs4cop/xcube | https://api.github.com/repos/dcs4cop/xcube | closed | Address various warnings from 3rd-party packages | code important urgent | Address various warnings originating from 3rd-party packages and occuring during unit-testing. Not addressing them will likely break code and let tests fail when we update 3rd-party packages such as `xarray` etc.
Warnings from latest build:
=============================== warnings summary ===============================
/home/travis/miniconda/envs/xcube/lib/python3.7/site-packages/heapdict.py:11
/home/travis/miniconda/envs/xcube/lib/python3.7/site-packages/heapdict.py:11: DeprecationWarning: Using or importing the ABCs from 'collections' instead of from 'collections.abc' is deprecated, and in 3.8 it will stop working
class heapdict(collections.MutableMapping):
/home/travis/miniconda/envs/xcube/lib/python3.7/site-packages/botocore/vendored/requests/packages/urllib3/_collections.py:1
/home/travis/miniconda/envs/xcube/lib/python3.7/site-packages/botocore/vendored/requests/packages/urllib3/_collections.py:1: DeprecationWarning: Using or importing the ABCs from 'collections' instead of from 'collections.abc' is deprecated, and in 3.8 it will stop working
from collections import Mapping, MutableMapping
/home/travis/miniconda/envs/xcube/lib/python3.7/site-packages/osgeo/gdal.py:107
/home/travis/miniconda/envs/xcube/lib/python3.7/site-packages/osgeo/gdal.py:107: DeprecationWarning: gdal.py was placed in a namespace, it is now available as osgeo.gdal
DeprecationWarning)
xcube/webapi/service.py:360
/home/travis/build/dcs4cop/xcube/xcube/webapi/service.py:360: DeprecationWarning: invalid escape sequence \;
name_pattern = '(?P<%s>[^\;\/\?\:\@\&\=\+\$\,]+)'
/home/travis/miniconda/envs/xcube/lib/python3.7/site-packages/xarray/core/nanops.py:159
test/webapi/test_handlers.py::HandlersTest::test_fetch_time_series_geometry
test/webapi/controllers/test_catalogue.py::CatalogueControllerTest::test_dataset_with_details
test/webapi/controllers/test_time_series.py::TimeSeriesControllerTest::test_get_time_series_for_geometries
test/webapi/controllers/test_time_series.py::TimeSeriesControllerTest::test_get_time_series_for_geometry
test/webapi/controllers/test_time_series.py::TimeSeriesControllerTest::test_get_time_series_for_point_with_uncertainty
test/webapi/controllers/test_time_series.py::TimeSeriesControllerTest::test_get_time_series_info
test/webapi/controllers/test_wmts.py::WmtsControllerTest::test_get_wmts_capabilities_xml
/home/travis/miniconda/envs/xcube/lib/python3.7/site-packages/xarray/core/nanops.py:159: RuntimeWarning: Mean of empty slice
return np.nanmean(a, axis=axis, dtype=dtype)
/home/travis/miniconda/envs/xcube/lib/python3.7/site-packages/numpy/lib/nanfunctions.py:1628
test/cli/test_timeit.py::TimeitCliTest::test_simple_with_repetitions
test/webapi/controllers/test_catalogue.py::CatalogueControllerTest::test_dataset_with_details
test/webapi/controllers/test_time_series.py::TimeSeriesControllerTest::test_get_time_series_for_point_with_uncertainty
test/webapi/controllers/test_time_series.py::TimeSeriesControllerTest::test_get_time_series_info
test/webapi/controllers/test_wmts.py::WmtsControllerTest::test_get_wmts_capabilities_xml
/home/travis/miniconda/envs/xcube/lib/python3.7/site-packages/numpy/lib/nanfunctions.py:1628: RuntimeWarning: Degrees of freedom <= 0 for slice.
keepdims=keepdims)
test/webapi/test_service.py:95
/home/travis/build/dcs4cop/xcube/test/webapi/test_service.py:95: DeprecationWarning: invalid escape sequence \;
'(?P<num>[^\;\/\?\:\@\&\=\+\$\,]+)/get')
test/webapi/test_service.py:97
/home/travis/build/dcs4cop/xcube/test/webapi/test_service.py:97: DeprecationWarning: invalid escape sequence \;
'/open/(?P<ws_name>[^\;\/\?\:\@\&\=\+\$\,]+)')
test/webapi/test_service.py:99
/home/travis/build/dcs4cop/xcube/test/webapi/test_service.py:99: DeprecationWarning: invalid escape sequence \;
'/open/ws(?P<id1>[^\;\/\?\:\@\&\=\+\$\,]+)/wf(?P<id2>[^\;\/\?\:\@\&\=\+\$\,]+)')
test/api/test_ts.py::TsTest::test_polygon_using_groupby
/home/travis/miniconda/envs/xcube/lib/python3.7/site-packages/xarray/core/groupby.py:758: FutureWarning: Default reduction dimension will be changed to the grouped dimension in a future version of xarray. To silence this warning, pass dim=xarray.ALL_DIMS explicitly.
allow_lazy=True, **kwargs)
test/api/test_ts.py::TsTest::test_polygon_using_groupby
/home/travis/miniconda/envs/xcube/lib/python3.7/site-packages/xarray/core/groupby.py:764: FutureWarning: Default reduction dimension will be changed to the grouped dimension in a future version of xarray. To silence this warning, pass dim=xarray.ALL_DIMS explicitly.
**kwargs)
test/api/gen/default/test_gen.py::DefaultProcessTest::test_handle_360_lon
/home/travis/build/dcs4cop/xcube/xcube/api/gen/default/iproc.py:183: FutureWarning: roll_coords will be set to False in the future. Explicitly set roll_coords to silence warning.
dataset = dataset.roll(lon=lon_size_05)
test/api/gen/default/test_iproc.py::DefaultInputProcessorTest::test_get_time_range
test/api/gen/default/test_iproc.py::DefaultInputProcessorTest::test_post_process
test/api/gen/default/test_iproc.py::DefaultInputProcessorTest::test_pre_process
test/api/gen/default/test_iproc.py::DefaultInputProcessorTest::test_reprojection_info
/home/travis/miniconda/envs/xcube/lib/python3.7/site-packages/xarray/core/variable.py:139: FutureWarning: Converting timezone-aware DatetimeArray to timezone-naive ndarray with 'datetime64[ns]' dtype. In the future, this will return an ndarray with 'object' dtype where each element is a 'pandas.Timestamp' with the correct 'tz'.
To accept the future behavior, pass 'dtype=object'.
To keep the old behavior, pass 'dtype="datetime64[ns]"'.
return np.asarray(pd.Series(values.ravel())).reshape(values.shape)
test/cli/test_apply.py::ApplyCliTest::test_apply_with_init
test/cli/test_apply.py::ApplyCliTest::test_apply_without_init
/home/travis/miniconda/envs/xcube/lib/python3.7/site-packages/dask/array/blockwise.py:210: UserWarning: The da.atop function has moved to da.blockwise
warnings.warn("The da.atop function has moved to da.blockwise")
test/cli/test_timeit.py::TimeitCliTest::test_simple_with_repetitions
/home/travis/build/dcs4cop/xcube/xcube/cli/timeit.py:82: RuntimeWarning: Mean of empty slice
times_mean = np.nanmean(times, axis=0)
test/cli/test_timeit.py::TimeitCliTest::test_simple_with_repetitions
/home/travis/miniconda/envs/xcube/lib/python3.7/site-packages/numpy/lib/function_base.py:3405: RuntimeWarning: All-NaN slice encountered
r = func(a, **kwargs)
test/cli/test_timeit.py::TimeitCliTest::test_simple_with_repetitions
/home/travis/build/dcs4cop/xcube/xcube/cli/timeit.py:85: RuntimeWarning: All-NaN slice encountered
times_min = np.nanmin(times, axis=0)
test/cli/test_timeit.py::TimeitCliTest::test_simple_with_repetitions
/home/travis/build/dcs4cop/xcube/xcube/cli/timeit.py:86: RuntimeWarning: All-NaN slice encountered
times_max = np.nanmax(times, axis=0)
test/util/test_config.py::FlattenDictTest::test_from_yaml
/home/travis/build/dcs4cop/xcube/test/util/test_config.py:191: YAMLLoadWarning: calling yaml.load() without Loader=... is deprecated, as the default Loader is unsafe. Please read https://msg.pyyaml.org/load for full details.
d = yaml.load(stream)
test/util/test_config.py::GetConfigDictTest::test_config_file_alone
test/util/test_config.py::GetConfigDictTest::test_config_file_overwritten_by_config_obj
/home/travis/build/dcs4cop/xcube/test/util/test_config.py:353: YAMLLoadWarning: calling yaml.load() without Loader=... is deprecated, as the default Loader is unsafe. Please read https://msg.pyyaml.org/load for full details.
yaml.dump(yaml.load(config_yaml), outfile)
test/util/test_plugin.py::PluginTest::test_load_plugins_fail_call
test/util/test_plugin.py::PluginTest::test_load_plugins_fail_load
/home/travis/build/dcs4cop/xcube/xcube/util/plugin.py:63: UserWarning: Unexpected exception while loading xcube plugin with entry point 'test':
warnings.warn('Unexpected exception while loading xcube plugin '
test/util/test_plugin.py::PluginTest::test_load_plugins_not_callable
/home/travis/build/dcs4cop/xcube/xcube/util/plugin.py:46: UserWarning: xcube plugin with entry point 'test' must be a callable but got a <class 'str'>
warnings.warn(f'xcube plugin with entry point {entry_point.name!r} '
test/webapi/controllers/test_tiles.py::TilesControllerTest::test_get_legend
/home/travis/miniconda/envs/xcube/lib/python3.7/site-packages/matplotlib/tight_layout.py:231: UserWarning: tight_layout : falling back to Agg renderer
warnings.warn("tight_layout : falling back to Agg renderer")
| 1.0 | Address various warnings from 3rd-party packages - Address various warnings originating from 3rd-party packages and occurring during unit-testing. Not addressing them will likely break code and let tests fail when we update 3rd-party packages such as `xarray` etc.
Warnings from latest build:
=============================== warnings summary ===============================
/home/travis/miniconda/envs/xcube/lib/python3.7/site-packages/heapdict.py:11
/home/travis/miniconda/envs/xcube/lib/python3.7/site-packages/heapdict.py:11: DeprecationWarning: Using or importing the ABCs from 'collections' instead of from 'collections.abc' is deprecated, and in 3.8 it will stop working
class heapdict(collections.MutableMapping):
/home/travis/miniconda/envs/xcube/lib/python3.7/site-packages/botocore/vendored/requests/packages/urllib3/_collections.py:1
/home/travis/miniconda/envs/xcube/lib/python3.7/site-packages/botocore/vendored/requests/packages/urllib3/_collections.py:1: DeprecationWarning: Using or importing the ABCs from 'collections' instead of from 'collections.abc' is deprecated, and in 3.8 it will stop working
from collections import Mapping, MutableMapping
/home/travis/miniconda/envs/xcube/lib/python3.7/site-packages/osgeo/gdal.py:107
/home/travis/miniconda/envs/xcube/lib/python3.7/site-packages/osgeo/gdal.py:107: DeprecationWarning: gdal.py was placed in a namespace, it is now available as osgeo.gdal
DeprecationWarning)
xcube/webapi/service.py:360
/home/travis/build/dcs4cop/xcube/xcube/webapi/service.py:360: DeprecationWarning: invalid escape sequence \;
name_pattern = '(?P<%s>[^\;\/\?\:\@\&\=\+\$\,]+)'
/home/travis/miniconda/envs/xcube/lib/python3.7/site-packages/xarray/core/nanops.py:159
test/webapi/test_handlers.py::HandlersTest::test_fetch_time_series_geometry
test/webapi/controllers/test_catalogue.py::CatalogueControllerTest::test_dataset_with_details
test/webapi/controllers/test_time_series.py::TimeSeriesControllerTest::test_get_time_series_for_geometries
test/webapi/controllers/test_time_series.py::TimeSeriesControllerTest::test_get_time_series_for_geometry
test/webapi/controllers/test_time_series.py::TimeSeriesControllerTest::test_get_time_series_for_point_with_uncertainty
test/webapi/controllers/test_time_series.py::TimeSeriesControllerTest::test_get_time_series_info
test/webapi/controllers/test_wmts.py::WmtsControllerTest::test_get_wmts_capabilities_xml
/home/travis/miniconda/envs/xcube/lib/python3.7/site-packages/xarray/core/nanops.py:159: RuntimeWarning: Mean of empty slice
return np.nanmean(a, axis=axis, dtype=dtype)
/home/travis/miniconda/envs/xcube/lib/python3.7/site-packages/numpy/lib/nanfunctions.py:1628
test/cli/test_timeit.py::TimeitCliTest::test_simple_with_repetitions
test/webapi/controllers/test_catalogue.py::CatalogueControllerTest::test_dataset_with_details
test/webapi/controllers/test_time_series.py::TimeSeriesControllerTest::test_get_time_series_for_point_with_uncertainty
test/webapi/controllers/test_time_series.py::TimeSeriesControllerTest::test_get_time_series_info
test/webapi/controllers/test_wmts.py::WmtsControllerTest::test_get_wmts_capabilities_xml
/home/travis/miniconda/envs/xcube/lib/python3.7/site-packages/numpy/lib/nanfunctions.py:1628: RuntimeWarning: Degrees of freedom <= 0 for slice.
keepdims=keepdims)
test/webapi/test_service.py:95
/home/travis/build/dcs4cop/xcube/test/webapi/test_service.py:95: DeprecationWarning: invalid escape sequence \;
'(?P<num>[^\;\/\?\:\@\&\=\+\$\,]+)/get')
test/webapi/test_service.py:97
/home/travis/build/dcs4cop/xcube/test/webapi/test_service.py:97: DeprecationWarning: invalid escape sequence \;
'/open/(?P<ws_name>[^\;\/\?\:\@\&\=\+\$\,]+)')
test/webapi/test_service.py:99
/home/travis/build/dcs4cop/xcube/test/webapi/test_service.py:99: DeprecationWarning: invalid escape sequence \;
'/open/ws(?P<id1>[^\;\/\?\:\@\&\=\+\$\,]+)/wf(?P<id2>[^\;\/\?\:\@\&\=\+\$\,]+)')
test/api/test_ts.py::TsTest::test_polygon_using_groupby
/home/travis/miniconda/envs/xcube/lib/python3.7/site-packages/xarray/core/groupby.py:758: FutureWarning: Default reduction dimension will be changed to the grouped dimension in a future version of xarray. To silence this warning, pass dim=xarray.ALL_DIMS explicitly.
allow_lazy=True, **kwargs)
test/api/test_ts.py::TsTest::test_polygon_using_groupby
/home/travis/miniconda/envs/xcube/lib/python3.7/site-packages/xarray/core/groupby.py:764: FutureWarning: Default reduction dimension will be changed to the grouped dimension in a future version of xarray. To silence this warning, pass dim=xarray.ALL_DIMS explicitly.
**kwargs)
test/api/gen/default/test_gen.py::DefaultProcessTest::test_handle_360_lon
/home/travis/build/dcs4cop/xcube/xcube/api/gen/default/iproc.py:183: FutureWarning: roll_coords will be set to False in the future. Explicitly set roll_coords to silence warning.
dataset = dataset.roll(lon=lon_size_05)
test/api/gen/default/test_iproc.py::DefaultInputProcessorTest::test_get_time_range
test/api/gen/default/test_iproc.py::DefaultInputProcessorTest::test_post_process
test/api/gen/default/test_iproc.py::DefaultInputProcessorTest::test_pre_process
test/api/gen/default/test_iproc.py::DefaultInputProcessorTest::test_reprojection_info
/home/travis/miniconda/envs/xcube/lib/python3.7/site-packages/xarray/core/variable.py:139: FutureWarning: Converting timezone-aware DatetimeArray to timezone-naive ndarray with 'datetime64[ns]' dtype. In the future, this will return an ndarray with 'object' dtype where each element is a 'pandas.Timestamp' with the correct 'tz'.
To accept the future behavior, pass 'dtype=object'.
To keep the old behavior, pass 'dtype="datetime64[ns]"'.
return np.asarray(pd.Series(values.ravel())).reshape(values.shape)
test/cli/test_apply.py::ApplyCliTest::test_apply_with_init
test/cli/test_apply.py::ApplyCliTest::test_apply_without_init
/home/travis/miniconda/envs/xcube/lib/python3.7/site-packages/dask/array/blockwise.py:210: UserWarning: The da.atop function has moved to da.blockwise
warnings.warn("The da.atop function has moved to da.blockwise")
test/cli/test_timeit.py::TimeitCliTest::test_simple_with_repetitions
/home/travis/build/dcs4cop/xcube/xcube/cli/timeit.py:82: RuntimeWarning: Mean of empty slice
times_mean = np.nanmean(times, axis=0)
test/cli/test_timeit.py::TimeitCliTest::test_simple_with_repetitions
/home/travis/miniconda/envs/xcube/lib/python3.7/site-packages/numpy/lib/function_base.py:3405: RuntimeWarning: All-NaN slice encountered
r = func(a, **kwargs)
test/cli/test_timeit.py::TimeitCliTest::test_simple_with_repetitions
/home/travis/build/dcs4cop/xcube/xcube/cli/timeit.py:85: RuntimeWarning: All-NaN slice encountered
times_min = np.nanmin(times, axis=0)
test/cli/test_timeit.py::TimeitCliTest::test_simple_with_repetitions
/home/travis/build/dcs4cop/xcube/xcube/cli/timeit.py:86: RuntimeWarning: All-NaN slice encountered
times_max = np.nanmax(times, axis=0)
test/util/test_config.py::FlattenDictTest::test_from_yaml
/home/travis/build/dcs4cop/xcube/test/util/test_config.py:191: YAMLLoadWarning: calling yaml.load() without Loader=... is deprecated, as the default Loader is unsafe. Please read https://msg.pyyaml.org/load for full details.
d = yaml.load(stream)
test/util/test_config.py::GetConfigDictTest::test_config_file_alone
test/util/test_config.py::GetConfigDictTest::test_config_file_overwritten_by_config_obj
/home/travis/build/dcs4cop/xcube/test/util/test_config.py:353: YAMLLoadWarning: calling yaml.load() without Loader=... is deprecated, as the default Loader is unsafe. Please read https://msg.pyyaml.org/load for full details.
yaml.dump(yaml.load(config_yaml), outfile)
test/util/test_plugin.py::PluginTest::test_load_plugins_fail_call
test/util/test_plugin.py::PluginTest::test_load_plugins_fail_load
/home/travis/build/dcs4cop/xcube/xcube/util/plugin.py:63: UserWarning: Unexpected exception while loading xcube plugin with entry point 'test':
warnings.warn('Unexpected exception while loading xcube plugin '
test/util/test_plugin.py::PluginTest::test_load_plugins_not_callable
/home/travis/build/dcs4cop/xcube/xcube/util/plugin.py:46: UserWarning: xcube plugin with entry point 'test' must be a callable but got a <class 'str'>
warnings.warn(f'xcube plugin with entry point {entry_point.name!r} '
test/webapi/controllers/test_tiles.py::TilesControllerTest::test_get_legend
/home/travis/miniconda/envs/xcube/lib/python3.7/site-packages/matplotlib/tight_layout.py:231: UserWarning: tight_layout : falling back to Agg renderer
warnings.warn("tight_layout : falling back to Agg renderer")
| non_infrastructure | address various warnings from party packages address various warnings originating from party packages and occuring during unit testing not addressing them will likely break code and let tests fail when we update party packages such as xarray etc warnings from latest build warnings summary home travis miniconda envs xcube lib site packages heapdict py home travis miniconda envs xcube lib site packages heapdict py deprecationwarning using or importing the abcs from collections instead of from collections abc is deprecated and in it will stop working class heapdict collections mutablemapping home travis miniconda envs xcube lib site packages botocore vendored requests packages collections py home travis miniconda envs xcube lib site packages botocore vendored requests packages collections py deprecationwarning using or importing the abcs from collections instead of from collections abc is deprecated and in it will stop working from collections import mapping mutablemapping home travis miniconda envs xcube lib site packages osgeo gdal py home travis miniconda envs xcube lib site packages osgeo gdal py deprecationwarning gdal py was placed in a namespace it is now available as osgeo gdal deprecationwarning xcube webapi service py home travis build xcube xcube webapi service py deprecationwarning invalid escape sequence name pattern p home travis miniconda envs xcube lib site packages xarray core nanops py test webapi test handlers py handlerstest test fetch time series geometry test webapi controllers test catalogue py cataloguecontrollertest test dataset with details test webapi controllers test time series py timeseriescontrollertest test get time series for geometries test webapi controllers test time series py timeseriescontrollertest test get time series for geometry test webapi controllers test time series py timeseriescontrollertest test get time series for point with uncertainty test webapi controllers test time series py 
timeseriescontrollertest test get time series info test webapi controllers test wmts py wmtscontrollertest test get wmts capabilities xml home travis miniconda envs xcube lib site packages xarray core nanops py runtimewarning mean of empty slice return np nanmean a axis axis dtype dtype home travis miniconda envs xcube lib site packages numpy lib nanfunctions py test cli test timeit py timeitclitest test simple with repetitions test webapi controllers test catalogue py cataloguecontrollertest test dataset with details test webapi controllers test time series py timeseriescontrollertest test get time series for point with uncertainty test webapi controllers test time series py timeseriescontrollertest test get time series info test webapi controllers test wmts py wmtscontrollertest test get wmts capabilities xml home travis miniconda envs xcube lib site packages numpy lib nanfunctions py runtimewarning degrees of freedom for slice keepdims keepdims test webapi test service py home travis build xcube test webapi test service py deprecationwarning invalid escape sequence p get test webapi test service py home travis build xcube test webapi test service py deprecationwarning invalid escape sequence open p test webapi test service py home travis build xcube test webapi test service py deprecationwarning invalid escape sequence open ws p wf p test api test ts py tstest test polygon using groupby home travis miniconda envs xcube lib site packages xarray core groupby py futurewarning default reduction dimension will be changed to the grouped dimension in a future version of xarray to silence this warning pass dim xarray all dims explicitly allow lazy true kwargs test api test ts py tstest test polygon using groupby home travis miniconda envs xcube lib site packages xarray core groupby py futurewarning default reduction dimension will be changed to the grouped dimension in a future version of xarray to silence this warning pass dim xarray all dims explicitly kwargs test api 
gen default test gen py defaultprocesstest test handle lon home travis build xcube xcube api gen default iproc py futurewarning roll coords will be set to false in the future explicitly set roll coords to silence warning dataset dataset roll lon lon size test api gen default test iproc py defaultinputprocessortest test get time range test api gen default test iproc py defaultinputprocessortest test post process test api gen default test iproc py defaultinputprocessortest test pre process test api gen default test iproc py defaultinputprocessortest test reprojection info home travis miniconda envs xcube lib site packages xarray core variable py futurewarning converting timezone aware datetimearray to timezone naive ndarray with dtype in the future this will return an ndarray with object dtype where each element is a pandas timestamp with the correct tz to accept the future behavior pass dtype object to keep the old behavior pass dtype return np asarray pd series values ravel reshape values shape test cli test apply py applyclitest test apply with init test cli test apply py applyclitest test apply without init home travis miniconda envs xcube lib site packages dask array blockwise py userwarning the da atop function has moved to da blockwise warnings warn the da atop function has moved to da blockwise test cli test timeit py timeitclitest test simple with repetitions home travis build xcube xcube cli timeit py runtimewarning mean of empty slice times mean np nanmean times axis test cli test timeit py timeitclitest test simple with repetitions home travis miniconda envs xcube lib site packages numpy lib function base py runtimewarning all nan slice encountered r func a kwargs test cli test timeit py timeitclitest test simple with repetitions home travis build xcube xcube cli timeit py runtimewarning all nan slice encountered times min np nanmin times axis test cli test timeit py timeitclitest test simple with repetitions home travis build xcube xcube cli timeit py 
runtimewarning all nan slice encountered times max np nanmax times axis test util test config py flattendicttest test from yaml home travis build xcube test util test config py yamlloadwarning calling yaml load without loader is deprecated as the default loader is unsafe please read for full details d yaml load stream test util test config py getconfigdicttest test config file alone test util test config py getconfigdicttest test config file overwritten by config obj home travis build xcube test util test config py yamlloadwarning calling yaml load without loader is deprecated as the default loader is unsafe please read for full details yaml dump yaml load config yaml outfile test util test plugin py plugintest test load plugins fail call test util test plugin py plugintest test load plugins fail load home travis build xcube xcube util plugin py userwarning unexpected exception while loading xcube plugin with entry point test warnings warn unexpected exception while loading xcube plugin test util test plugin py plugintest test load plugins not callable home travis build xcube xcube util plugin py userwarning xcube plugin with entry point test must be a callable but got a warnings warn f xcube plugin with entry point entry point name r test webapi controllers test tiles py tilescontrollertest test get legend home travis miniconda envs xcube lib site packages matplotlib tight layout py userwarning tight layout falling back to agg renderer warnings warn tight layout falling back to agg renderer | 0 |
31,119 | 25,341,955,133 | IssuesEvent | 2022-11-18 22:43:38 | directus/docs | https://api.github.com/repos/directus/docs | closed | Dynamically link GitHub discussions | Infrastructure | For each section in the docs, dynamically link the relevant Github discussion.
[Giscus](https://giscus.app/) might be a viable solution for this. | 1.0 | Dynamically link GitHub discussions - For each section in the docs, dynamically link the relevant Github discussion.
[Giscus](https://giscus.app/) might be a viable solution for this. | infrastructure | dynamically link github discussions for each section in the docs dynamically link the relevant github discussion might be a viable solution for this | 1 |
426,242 | 29,513,529,846 | IssuesEvent | 2023-06-04 08:02:04 | aayushchugh/maya | https://api.github.com/repos/aayushchugh/maya | opened | docs: Create docs on how to setup code locally | documentation help wanted | ### Project
Core
### Description
This will include following contents
1. Forking and cloning the repo
2. Downloading and setting up database (don't need to write everything just add reference to official docs)
3. Installing packages
4. Adding `.env` file
and everything else needed to get started with the development process
### Anything else?
_No response_ | 1.0 | docs: Create docs on how to setup code locally - ### Project
Core
### Description
This will include following contents
1. Forking and cloning the repo
2. Downloading and setting up database (don't need to write everything just add reference to official docs)
3. Installing packages
4. Adding `.env` file
and everything else needed to get started with the development process
### Anything else?
_No response_ | non_infrastructure | docs create docs on how to setup code locally project core description this will include following contents forking and cloning the repo downloading and setting up database don t need to write everything just add reference to official docs installing packages adding env file and everything else needed to get started with the development process anything else no response | 0 |
19,808 | 13,465,108,754 | IssuesEvent | 2020-09-09 20:18:39 | oppia/oppia-android | https://api.github.com/repos/oppia/oppia-android | opened | Add Espresso tests to CI [Blocked: #973] | Priority: Essential Status: Not started Type: Improvement Where: Infrastructure mini-project | After #973 is completed, we should be enabling all UI-facing tests to also run in Espresso via GitHub actions (or a more powerful CI framework if actions is insufficient). | 1.0 | Add Espresso tests to CI [Blocked: #973] - After #973 is completed, we should be enabling all UI-facing tests to also run in Espresso via GitHub actions (or a more powerful CI framework if actions is insufficient). | infrastructure | add espresso tests to ci after is completed we should be enabling all ui facing tests to also run in espresso via github actions or a more powerful ci framework if actions is insufficient | 1 |
368,230 | 25,783,191,525 | IssuesEvent | 2022-12-09 17:46:20 | open-contracting/deploy | https://api.github.com/repos/open-contracting/deploy | closed | Document common Docker commands for deployment | documentation docker | OCP has never used Docker for regular deployments, so we need some basic instructions.
If you just make a note of the commands you've had to run (e.g. `up`, `down`) and why/when you had to run those commands, then we can figure the rest out from the docs - but we need a place to start.
For example, I see the following were frequently run. My understanding:
* [x] docker-compose pull: Download newer images: https://docs.docker.com/compose/reference/pull/ I've seen this run with `-d` - I assume that is a typo. As I understand, running `pull` doesn't lead to any changes until you run `up`.
* docker-compose down: Stops (and removes) containers created by up: https://docs.docker.com/compose/reference/down/ I've seen this run with `--remove` - not sure if that is for an older version of the CLI. I don't know if this needs to be run before `up`, if it should never be run except in development, or if there are undesirable side-effects.
* [x] docker-compose up -d: Builds, (re)creates, and starts containers. https://docs.docker.com/compose/reference/up/ I've seen this run without `-d` (detach), but I don't know in what scenario that's desirable.
* docker-compose restart: Restarts services. In what scenario is this used?
* [x] Has `docker-compose logs` ever been useful for debugging?
Full docs: https://docs.docker.com/compose/reference/ Other useful commands:
* [x] config (to validate a docker-compose.yaml file)
* [x] ps
* [x] top
* [x] run
| 1.0 | Document common Docker commands for deployment - OCP has never used Docker for regular deployments, so we need some basic instructions.
If you just make a note of the commands you've had to run (e.g. `up`, `down`) and why/when you had to run those commands, then we can figure the rest out from the docs - but we need a place to start.
For example, I see the following were frequently run. My understanding:
* [x] docker-compose pull: Download newer images: https://docs.docker.com/compose/reference/pull/ I've seen this run with `-d` - I assume that is a typo. As I understand, running `pull` doesn't lead to any changes until you run `up`.
* docker-compose down: Stops (and removes) containers created by up: https://docs.docker.com/compose/reference/down/ I've seen this run with `--remove` - not sure if that is for an older version of the CLI. I don't know if this needs to be run before `up`, if it should never be run except in development, or if there are undesirable side-effects.
* [x] docker-compose up -d: Builds, (re)creates, and starts containers. https://docs.docker.com/compose/reference/up/ I've seen this run without `-d` (detach), but I don't know in what scenario that's desirable.
* docker-compose restart: Restarts services. In what scenario is this used?
* [x] Has `docker-compose logs` ever been useful for debugging?
Full docs: https://docs.docker.com/compose/reference/ Other useful commands:
* [x] config (to validate a docker-compose.yaml file)
* [x] ps
* [x] top
* [x] run
| non_infrastructure | document common docker commands for deployment ocp has never used docker for regular deployments so we need some basic instructions if you just make a note of the commands you ve had to run e g up down and why when you had to run those commands then we can figure the rest out from the docs โ but we need a place to start for example i see the following were frequently run my understanding docker compose pull download newer images i ve seen this run with d โย i assume that is a typo as i understand running pull doesn t lead to any changes until you run up docker compose down stops and removes containers created by up i ve seen this run with remove โย not sure it that is for an older version of the cli i don t know if this needs to be run before up if it should never be run except in development or if there are undesirable side effects docker compose up d builds re creates and starts containers i ve seen this run without d detach but i don t know in what scenario that s desirable docker compose restart restarts services in what scenario is this used has docker compose logs ever been useful for debugging full docs other useful commands config to validate a docker compose yaml file ps top run | 0 |
14,409 | 10,822,058,633 | IssuesEvent | 2019-11-08 20:14:12 | celo-org/celo-monorepo | https://api.github.com/repos/celo-org/celo-monorepo | opened | Deploy a new testnet with a single celotool command | infrastructure | ### Expected Behavior
Take the current deployment instructions for an *initial* developer testnet and automate them.
- [ ] Ensure not blocked by manual configuration steps in current instructions (e.g. need to move WalletKit -> ContractKit to avoid manual addresses)
- [ ] Write functions to wait until each step of existing deploy is completed --- e.g function to wait for blocks to be mined; function to wait for blockscout to be up and indexing
- [ ] Write a routine to deploy celostats, testnet, contracts, flatten-contracts-for-bs, blockscout
- [ ] Load testing --- regular load, attestations, voting bots
### Current Behavior
Follow the docs. | 1.0 | Deploy a new testnet with a single celotool command - ### Expected Behavior
Take the current deployment instructions for an *initial* developer testnet and automate them.
- [ ] Ensure not blocked by manual configuration steps in current instructions (e.g. need to move WalletKit -> ContractKit to avoid manual addresses)
- [ ] Write functions to wait until each step of existing deploy is completed --- e.g function to wait for blocks to be mined; function to wait for blockscout to be up and indexing
- [ ] Write a routine to deploy celostats, testnet, contracts, flatten-contracts-for-bs, blockscout
- [ ] Load testing --- regular load, attestations, voting bots
### Current Behavior
Follow the docs. | infrastructure | deploy a new testnet with a single celotool command expected behavior take the current deployment instructions for a initial developer testnet and automate them ensure not blocked by manual configuration steps in current instructions e g need to move walletkit contractkit to avoid manual addresses write functions to wait until each step of existing deploy is completed e g function to wait for blocks to be mined function to wait for blockscout to be up and indexing write a routine to deploy celostats testnet contracts flatten contracts for bs blockscout load testing regular load attestations voting bots current behavior follow the docs | 1 |
89,846 | 10,617,665,541 | IssuesEvent | 2019-10-12 20:49:40 | ryanwersal/crosswind | https://api.github.com/repos/ryanwersal/crosswind | opened | Update README with details on using poetry for development | documentation enhancement | The README hasn't been updated since we started using poetry - that needs to be done. | 1.0 | Update README with details on using poetry for development - The README hasn't been updated since we started using poetry - that needs to be done. | non_infrastructure | update readme with details on using poetry for development the readme hasn t been updated since we started using poetry that needs to be done | 0 |
8,100 | 7,228,889,200 | IssuesEvent | 2018-02-11 14:38:25 | opencv/opencv | https://api.github.com/repos/opencv/opencv | closed | Timeout for accessing http://pullrequest.opencv.org/ | category: infrastructure | <!--
If you have a question rather than reporting a bug please go to http://answers.opencv.org where you get much faster responses.
If you need further assistance please read [How To Contribute](https://github.com/opencv/opencv/wiki/How_to_contribute).
This is a template helping you to create an issue which can be processed as quickly as possible. This is the bug reporting section for the OpenCV library.
-->
<!-- Example
- OpenCV => 3.1
- Operating System / Platform => Windows 64 Bit
- Compiler => Visual Studio 2015
-->
It takes a very very long time to open http://pullrequest.opencv.org/
<!-- your description -->
<!-- to add code example fence it with triple backticks and optional file extension
```.cpp
// C++ code example
```
or attach as .txt or .zip file
--> | 1.0 | Timeout for accessing http://pullrequest.opencv.org/ - <!--
If you have a question rather than reporting a bug please go to http://answers.opencv.org where you get much faster responses.
If you need further assistance please read [How To Contribute](https://github.com/opencv/opencv/wiki/How_to_contribute).
This is a template helping you to create an issue which can be processed as quickly as possible. This is the bug reporting section for the OpenCV library.
-->
<!-- Example
- OpenCV => 3.1
- Operating System / Platform => Windows 64 Bit
- Compiler => Visual Studio 2015
-->
It takes a very very long time to open http://pullrequest.opencv.org/
<!-- your description -->
<!-- to add code example fence it with triple backticks and optional file extension
```.cpp
// C++ code example
```
or attach as .txt or .zip file
--> | infrastructure | timeout for accessing if you have a question rather than reporting a bug please go to where you get much faster responses if you need further assistance please read this is a template helping you to create an issue which can be processed as quickly as possible this is the bug reporting section for the opencv library example opencv operating system platform windows bit compiler visual studio it takes a very very long time to open to add code example fence it with triple backticks and optional file extension cpp c code example or attach as txt or zip file | 1 |
30,494 | 24,871,939,387 | IssuesEvent | 2022-10-27 15:50:01 | hackforla/food-oasis | https://api.github.com/repos/hackforla/food-oasis | closed | Improve app performance | Role: Front-end size: 2pt Feature: Infrastructure Feature: Accessibility PM: Food Seekers | ### Overview
Currently, there has not been much focus on the actual performance of the app. This matters since we expect many users will access our app on budget phones with slow connections or on old computers.
### Actions
- [x] Audit the current performance of our app
- [x] Discuss possible solutions according to the audit
- [x] Invite dev team to this issue to review the Lighthouse results and see if there are easy ways to improve performance (i.e. remove unused javascript)
- [x] Create issue to do / correct ARIA (accessibility) tagging at some point to better support accessibility tools
### Ideas
To get optimal performance may require a full refactor. For now this issue just focuses on low hanging fruit.
### Resources
See below | 1.0 | Improve app performance - ### Overview
Currently, there has not been much focus on the actual performance of the app. This matters since we expect many users will access our app on budget phones with slow connections or on old computers.
### Actions
- [x] Audit the current performance of our app
- [x] Discuss possible solutions according to the audit
- [x] Invite dev team to this issue to review the Lighthouse results and see if there are easy ways to improve performance (i.e. remove unused javascript)
- [x] Create issue to do / correct ARIA (accessibility) tagging at some point to better support accessibility tools
### Ideas
To get optimal performance may require a full refactor. For now this issue just focuses on low hanging fruit.
### Resources
See below | infrastructure | improve app performance overview currently there has not been much focus on the actual performance of the app this matters since we expect many users will access our app on budget phones with slow connections or on old computers actions audit the current performance of our app discuss possible solutions according to the audit invite dev team to this issue to review the lighthouse results and see if there are easy ways to improve performance i e remove unused javascript create issue to do correct aria accessibility tagging at some point to better support accessibility tools ideas to get optimal performance may require a full refactor for now this issue just focuses on low hanging fruit ressources see below | 1 |
38,715 | 15,785,364,848 | IssuesEvent | 2021-04-01 16:14:38 | BCDevOps/developer-experience | https://api.github.com/repos/BCDevOps/developer-experience | closed | Use MTC to migrate the OIPC Lobbyist Registry System application from OCP3 to OCP4 (silver) | ops and shared services | Evaluate and migrate the LRS.
| 1.0 | Use MTC to migrate the OIPC Lobbyist Registry System application from OCP3 to OCP4 (silver) - Evaluate and migrate the LRS.
| non_infrastructure | use mtc to migrate the oipc lobbyist registry system application from to silver evaluate and migrate the lrs | 0 |
3,235 | 4,162,025,418 | IssuesEvent | 2016-06-17 18:45:05 | eslint/eslint | https://api.github.com/repos/eslint/eslint | opened | Proposal: Technical Steering Committee | evaluating infrastructure needs bikeshedding | ## Background
When I started transitioning ESLint to allow others to start getting more involved, I based the governance model off of the old [YUI model](https://github.com/yui/yui3/wiki/Contributor-Model). While I think that was a good start, and allowed us to scale to the point we're at today, I think we need to change things up now that the team has gotten larger and my availability has lessened due to my health. As such, I'd like to propose a model that is based on the [Node.js governance model](https://nodejs.org/en/about/governance/).
## The tl;dr Overview
* The project lead (currently just me) and reviewer (currently 5 people) levels of the current governance model will be collapsed into a single Technical Steering Committee (TSC).
* The TSC will have final authority over ESLint and will operate in a [consensus-seeking](https://en.wikipedia.org/wiki/Consensus-seeking_decision-making) manner.
* The TSC will have bi-weekly, open meetings where decisions are made.
## Why the change?
As ESLint has gained in popularity, the number of new issues being filed and the number of overall issues to manage has grown considerably. It's hard for any one person to be a gatekeeper or moderator to keep things moving. The result has been issues and pull requests being left open longer, sometimes without followup, and the focus always ends up on the incoming issues.
There are also gray areas where it's unclear who can make decisions on what. Originally, I wanted to approve everything. Then, as the team got bigger, I completely delegated approval of bugs, new rules, and rule options to everyone on the team but still maintained complete control over the core and its roadmap. Again, that was okay when our user base was small, but now that it's grown, there's far too much for me to keep track of. Also, it's unclear what "core" means and there's no documented way that the project makes decisions on anything other than bugs and rules.
I want ESLint to run in an open way and to know that its progress will not be sacrificed if I leave the project (which I don't plan to do, but who knows what my health has in store for me in the future) or otherwise need to disappear for a while.
## Practical Details
On a day-to-day basis, nothing is really going to change. The changes are more about what happens outside of the day-to-day maintenance of the project.
* We'll eliminate the "project lead" and "reviewer" levels in the governance model, and replace them with a single TSC level. The TSC member selection criteria will be the same as for reviewers right now plus we'll want to be sure that whomever joins the TSC is committed to attending meetings and participating on a regular basis.
* The initial TSC will be made up of the current reviewers (@mysticatea, @gyandeeps, @btmills, @alberto, @ilyavolodin) and me. Everyone has confirmed that they are willing to regularly attend meetings going forward.
* I'll draft a TSC charter explaining how things will work. In general, it will be modeled after the Node.js approach.
* Bugs, documentation changes, and new rules/rule options will still be handled as they are today, and the TSC will intervene only when progress has stalled or when requested.
* Anything else will need TSC approval to proceed, which can be accomplished either by discussion on the issue or by bringing it up explicitly in a meeting.
* The TSC will operate in a consensus-seeking manner.
* The TSC will have regular bi-weekly meetings by chat (day and time TBD).
* These meetings will be open, anyone can attend.
* Anyone can add an agenda item for an upcoming meeting (process TBD).
* Notes and transcript of meeting will be posted publicly afterwards.
* Issues can be labeled "tsc agenda" to automatically be added to the TSC agenda for the next meeting (this should be done only when TSC approval is necessary or desired for the proposal to move forward).
This is just a rough overview, I'll write up a more detailed explanation to formally add to the docs.
## Questions and Considerations
**Why not have the whole team involved?**
At this point, we have 18 people with commit access and 2 people on the issues team. Getting everyone to agree on something isn't practical and getting everyone to attend a single meeting is even more difficult. As such, I think it makes sense to adopt a model where a small subset of the team is able to move the project forward. We need to find a way to keep the project moving forward, and the best way to do that is to have a small group who can make decisions.
**Are committers still as important with this model?**
Yes! We still want everyone to participate the same as they've been doing. We want to avoid unilateral decision-making and make sure everyone who's interested in a particular topic has a chance to speak their mind. Committers are invited to attend TSC meetings if they want to discuss something, or just to hang out and see what we're talking about. The majority of the day-to-day work is still firmly in the hands of the committers and that won't change. In short, we are not removing any privileges of committers, we are only adding responsibilities for the former reviewers.
**Why not meet every week?**
Because everyone is working on ESLint in their free time, I want to be as respectful of people's time as possible. In initial discussions with reviewers, it seemed like meeting every week was a bit too much to ask while meeting once a month would mean longer meetings. Hopefully every two weeks will be a nice middle ground where we can be effective and not overly-taxing on people's time. | 1.0 | Proposal: Technical Steering Committee - ## Background
When I started transitioning ESLint to allow others to start getting more involved, I based the governance model off of the old [YUI model](https://github.com/yui/yui3/wiki/Contributor-Model). While I think that was a good start, and allowed us to scale to the point we're at today, I think we need to change things up now that the team has gotten larger and my availability has lessened due to my health. As such, I'd like to propose a model that is based on the [Node.js governance model](https://nodejs.org/en/about/governance/).
## The tl;dr Overview
* The project lead (currently just me) and reviewer (currently 5 people) levels of the current governance model will be collapsed into a single Technical Steering Committee (TSC).
* The TSC will have final authority over ESLint and will operate in a [consensus-seeking](https://en.wikipedia.org/wiki/Consensus-seeking_decision-making) manner.
* The TSC will have bi-weekly, open meetings where decisions are made.
## Why the change?
As ESLint has gained in popularity, the number of new issues being filed and the number of overall issues to manage has grown considerably. It's hard for any one person to be a gatekeeper or moderator to keep things moving. The result has been issues and pull requests being left open longer, sometimes without followup, and the focus always ends up on the incoming issues.
There are also gray areas where it's unclear who can make decisions on what. Originally, I wanted to approve everything. Then, as the team got bigger, I completely delegated approval of bugs, new rules, and rule options to everyone on the team but still maintained complete control over the core and its roadmap. Again, that was okay when our user base was small, but now that it's grown, there's far too much for me to keep track of. Also, it's unclear what "core" means and there's no documented way that the project makes decisions on anything other than bugs and rules.
I want ESLint to run in an open way and to know that its progress will not be sacrificed if I leave the project (which I don't plan to do, but who knows what my health has in store for me in the future) or otherwise need to disappear for a while.
## Practical Details
On a day-to-day basis, nothing is really going to change. The changes are more about what happens outside of the day-to-day maintenance of the project.
* We'll eliminate the "project lead" and "reviewer" levels in the governance model, and replace them with a single TSC level. The TSC member selection criteria will be the same as for reviewers right now plus we'll want to be sure that whomever joins the TSC is committed to attending meetings and participating on a regular basis.
* The initial TSC will be made up of the current reviewers (@mysticatea, @gyandeeps, @btmills, @alberto, @ilyavolodin) and me. Everyone has confirmed that they are willing to regularly attend meetings going forward.
* I'll draft a TSC charter explaining how things will work. In general, it will be modeled after the Node.js approach.
* Bugs, documentation changes, and new rules/rule options will still be handled as they are today, and the TSC will intervene only when progress has stalled or when requested.
* Anything else will need TSC approval to proceed, which can be accomplished either by discussion on the issue or by bringing it up explicitly in a meeting.
* The TSC will operate in a consensus-seeking manner.
* The TSC will have regular bi-weekly meetings by chat (day and time TBD).
* These meetings will be open, anyone can attend.
* Anyone can add an agenda item for an upcoming meeting (process TBD).
* Notes and transcript of meeting will be posted publicly afterwards.
* Issues can be labeled "tsc agenda" to automatically be added to the TSC agenda for the next meeting (this should be done only when TSC approval is necessary or desired for the proposal to move forward).
This is just a rough overview, I'll write up a more detailed explanation to formally add to the docs.
## Questions and Considerations
**Why not have the whole team involved?**
At this point, we have 18 people with commit access and 2 people on the issues team. Getting everyone to agree on something isn't practical and getting everyone to attend a single meeting is even more difficult. As such, I think it makes sense to adopt a model where a small subset of the team is able to move the project forward. We need to find a way to keep the project moving forward, and the best way to do that is to have a small group who can make decisions.
**Are committers still as important with this model?**
Yes! We still want everyone to participate the same as they've been doing. We want to avoid unilateral decision-making and make sure everyone who's interested in a particular topic has a chance to speak their mind. Committers are invited to attend TSC meetings if they want to discuss something, or just to hang out and see what we're talking about. The majority of the day-to-day work is still firmly in the hands of the committers and that won't change. In short, we are not removing any privileges of committers, we are only adding responsibilities for the former reviewers.
**Why not meet every week?**
Because everyone is working on ESLint in their free time, I want to be as respectful of people's time as possible. In initial discussions with reviewers, it seemed like meeting every week was a bit too much to ask while meeting once a month would mean longer meetings. Hopefully every two weeks will be a nice middle ground where we can be effective and not overly-taxing on people's time. | infrastructure | proposal technical steering committee background when i started transitioning eslint to allow others to start getting more involved i based the governance model off of the old while i think that was a good start and allowed us to scale to the point we re at today i think we need to change things up now that the team has gotten larger and my availability has lessened due to my health as such i d like to propose a model that is based on the the tl dr overview the project lead currently just me and reviewer currently people levels of the current governance model will be collapsed into a single technical steering committee tsc the tsc will have final authority over eslint and will operate in a manner the tsc will have bi weekly open meetings where decisions are made why the change as eslint has gained in popularity the number of new issues being filed and the number of overall issues to manage has grown considerably it s hard for any one person to be a gatekeeper or moderator to keep things moving the result has been issues and pull requests being left open longer sometimes without followup and the focus always ends up on the incoming issues there are also gray areas where it s unclear who can make decisions on what originally i wanted to approve everything then as the team got bigger i completely delegated approval of bugs new rules and rule options to everyone on the team but still maintained complete control over the core and its roadmap again that was okay when our user base was small but now that it s grown there s far too much for me to keep track of also it s 
unclear what core means and there s no documented way that the project makes decisions on anything other than bugs and rules i want eslint to run in an open way and to know that its progress will not be sacrificed if i leave the project which i don t plan to do but who knows what my health has in store for me in the future or otherwise need to disappear for a while practical details on a day to day basis nothing is really going to change the changes are more about what happens outside of the day to day maintenance of the project we ll eliminate the project lead and reviewer levels in the governance model and replace them with a single tsc level the tsc member selection criteria will be the same as for reviewers right now plus we ll want to be sure that whomever joins the tsc is committed to attending meetings and participating on a regular basis the initial tsc will be made up of the current reviewers mysticatea gyandeeps btmills alberto ilyavolodin and me everyone has confirmed that they are willing to regularly attend meetings going forward i ll draft a tsc charter explaining how things will work in general it will be modeled after the node js approach bugs documentation changes and new rules rule options will still be handled as they are today and the tsc will intervene only when progress has stalled or when requested anything else will need tsc approval to proceed which can be accomplished either by discussion on the issue or by bringing it up explicitly in a meeting the tsc will operate in a consensus seeking manner the tsc will have regular bi weekly meetings by chat day and time tbd these meetings will be open anyone can attend anyone can add an agenda item for an upcoming meeting process tbd notes and transcript of meeting will be posted publicly afterwards issues can be labeled tsc agenda to automatically be added to the tsc agenda for the next meeting this should be done only when tsc approval is necessary or desired for the proposal to move forward this 
is just a rough overview i ll write up a more detailed explanation to formally add to the docs questions and considerations why not have the whole team involved at this point we have people with commit access and people on the issues team getting everyone to agree on something isn t practical and getting everyone to attend a single meeting is even more difficult as such i think it makes sense to adopt a model where a small subset of the team is able to move the project forward we need to find a way to keep the project moving forward and the best way to do that is to have a small group who can make decisions are committers still as important with this model yes we still want everyone to participate the same as they ve been doing we want to avoid unilateral decision making and make sure everyone who s interested in a particular topic has a chance to speak their mind committers are invited to attending tsc meetings if they want to discuss something or just to hang out and see what we re talking about the majority of the day to day work is still firmly in the hands of the committers and that won t change in short we are not removing any privileges of committers we are only adding responsibilities for the former reviewers why not meet every week because everyone is working on eslint in their free time i want to be as respectful of people s time as possible in initial discussions with reviewers it seemed like meeting every week was a bit too much to ask while meeting once a month would mean longer meetings hopefully every two weeks will be a nice middle ground where we can be effective and not overly taxing on people s time | 1 |
113,706 | 17,150,880,489 | IssuesEvent | 2021-07-13 20:25:44 | snowdensb/braindump | https://api.github.com/repos/snowdensb/braindump | opened | CVE-2019-1010083 (High) detected in Flask-0.11.1-py2.py3-none-any.whl | security vulnerability | ## CVE-2019-1010083 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>Flask-0.11.1-py2.py3-none-any.whl</b></p></summary>
<p>A simple framework for building complex web applications.</p>
<p>Library home page: <a href="https://files.pythonhosted.org/packages/63/2b/01f5ed23a78391f6e3e73075973da0ecb467c831376a0b09c0ec5afd7977/Flask-0.11.1-py2.py3-none-any.whl">https://files.pythonhosted.org/packages/63/2b/01f5ed23a78391f6e3e73075973da0ecb467c831376a0b09c0ec5afd7977/Flask-0.11.1-py2.py3-none-any.whl</a></p>
<p>Path to dependency file: braindump/requirements.txt</p>
<p>Path to vulnerable library: braindump/requirements.txt</p>
<p>
Dependency Hierarchy:
- Flask-Mail-0.9.1.tar.gz (Root Library)
- :x: **Flask-0.11.1-py2.py3-none-any.whl** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/snowdensb/braindump/commit/815ae0afebcf867f02143f3ab9cf88b1d4dacdec">815ae0afebcf867f02143f3ab9cf88b1d4dacdec</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
The Pallets Project Flask before 1.0 is affected by: unexpected memory usage. The impact is: denial of service. The attack vector is: crafted encoded JSON data. The fixed version is: 1. NOTE: this may overlap CVE-2018-1000656.
<p>Publish Date: 2019-07-17
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-1010083>CVE-2019-1010083</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-1010083">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-1010083</a></p>
<p>Release Date: 2019-07-17</p>
<p>Fix Resolution: 1.0</p>
</p>
</details>
<p></p>
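Since Flask is only a transitive dependency here (pulled in via Flask-Mail), one way to apply the suggested fix is an explicit floor in the requirements file — a sketch; the exact pin depends on the project's other constraints:

```
# braindump/requirements.txt — explicit floor for the transitive dependency (sketch)
Flask>=1.0
```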
<!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"Python","packageName":"Flask","packageVersion":"0.11.1","packageFilePaths":["/requirements.txt"],"isTransitiveDependency":true,"dependencyTree":"Flask-Mail:0.9.1;Flask:0.11.1","isMinimumFixVersionAvailable":true,"minimumFixVersion":"1.0"}],"baseBranches":["master"],"vulnerabilityIdentifier":"CVE-2019-1010083","vulnerabilityDetails":"The Pallets Project Flask before 1.0 is affected by: unexpected memory usage. The impact is: denial of service. The attack vector is: crafted encoded JSON data. The fixed version is: 1. NOTE: this may overlap CVE-2018-1000656.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-1010083","cvss3Severity":"high","cvss3Score":"7.5","cvss3Metrics":{"A":"High","AC":"Low","PR":"None","S":"Unchanged","C":"None","UI":"None","AV":"Network","I":"None"},"extraData":{}}</REMEDIATE> --> | True | CVE-2019-1010083 (High) detected in Flask-0.11.1-py2.py3-none-any.whl - ## CVE-2019-1010083 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>Flask-0.11.1-py2.py3-none-any.whl</b></p></summary>
<p>A simple framework for building complex web applications.</p>
<p>Library home page: <a href="https://files.pythonhosted.org/packages/63/2b/01f5ed23a78391f6e3e73075973da0ecb467c831376a0b09c0ec5afd7977/Flask-0.11.1-py2.py3-none-any.whl">https://files.pythonhosted.org/packages/63/2b/01f5ed23a78391f6e3e73075973da0ecb467c831376a0b09c0ec5afd7977/Flask-0.11.1-py2.py3-none-any.whl</a></p>
<p>Path to dependency file: braindump/requirements.txt</p>
<p>Path to vulnerable library: braindump/requirements.txt</p>
<p>
Dependency Hierarchy:
- Flask-Mail-0.9.1.tar.gz (Root Library)
- :x: **Flask-0.11.1-py2.py3-none-any.whl** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/snowdensb/braindump/commit/815ae0afebcf867f02143f3ab9cf88b1d4dacdec">815ae0afebcf867f02143f3ab9cf88b1d4dacdec</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
The Pallets Project Flask before 1.0 is affected by: unexpected memory usage. The impact is: denial of service. The attack vector is: crafted encoded JSON data. The fixed version is: 1. NOTE: this may overlap CVE-2018-1000656.
<p>Publish Date: 2019-07-17
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-1010083>CVE-2019-1010083</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-1010083">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-1010083</a></p>
<p>Release Date: 2019-07-17</p>
<p>Fix Resolution: 1.0</p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"Python","packageName":"Flask","packageVersion":"0.11.1","packageFilePaths":["/requirements.txt"],"isTransitiveDependency":true,"dependencyTree":"Flask-Mail:0.9.1;Flask:0.11.1","isMinimumFixVersionAvailable":true,"minimumFixVersion":"1.0"}],"baseBranches":["master"],"vulnerabilityIdentifier":"CVE-2019-1010083","vulnerabilityDetails":"The Pallets Project Flask before 1.0 is affected by: unexpected memory usage. The impact is: denial of service. The attack vector is: crafted encoded JSON data. The fixed version is: 1. NOTE: this may overlap CVE-2018-1000656.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-1010083","cvss3Severity":"high","cvss3Score":"7.5","cvss3Metrics":{"A":"High","AC":"Low","PR":"None","S":"Unchanged","C":"None","UI":"None","AV":"Network","I":"None"},"extraData":{}}</REMEDIATE> --> | non_infrastructure | cve high detected in flask none any whl cve high severity vulnerability vulnerable library flask none any whl a simple framework for building complex web applications library home page a href path to dependency file braindump requirements txt path to vulnerable library braindump requirements txt dependency hierarchy flask mail tar gz root library x flask none any whl vulnerable library found in head commit a href found in base branch master vulnerability details the pallets project flask before is affected by unexpected memory usage the impact is denial of service the attack vector is crafted encoded json data the fixed version is note this may overlap cve publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type 
upgrade version origin a href release date fix resolution isopenpronvulnerability true ispackagebased true isdefaultbranch true packages istransitivedependency true dependencytree flask mail flask isminimumfixversionavailable true minimumfixversion basebranches vulnerabilityidentifier cve vulnerabilitydetails the pallets project flask before is affected by unexpected memory usage the impact is denial of service the attack vector is crafted encoded json data the fixed version is note this may overlap cve vulnerabilityurl | 0 |
329,113 | 10,012,422,425 | IssuesEvent | 2019-07-15 13:10:30 | BingLingGroup/autosub | https://api.github.com/repos/BingLingGroup/autosub | opened | Add lang codes support | Priority: Medium Status: Accepted Type: Bug Type: Maintenance | [lang codes](https://github.com/LuminosoInsight/langcodes)
Google [says](https://cloud.google.com/speech-to-text/docs/languages) it's using [BCP-47](https://tools.ietf.org/html/bcp47) as a standard. According to my [test (agermanidis/autosub pull request #136)](https://github.com/agermanidis/autosub/pull/136), more specific lang codes that comply with the standard get better results, which means they are able to avoid Google's IP-local optimization.
Obviously, the current strategy of using Google's recommended lang codes is not enough. (Sometimes it's hard to compare, due to Google's non-standard usage.) We need a library to compare lang codes, and perhaps to give the user a full lang-code reference.
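The kind of comparison such a library has to do can be sketched in plain Python. This is only an illustration of BCP-47's case conventions (lowercase language, Title-case script, uppercase region) — the `normalize_tag`/`same_language` helpers below are hypothetical, not the langcodes API, and a real library additionally validates subtags against the IANA registry:

```python
def normalize_tag(tag: str) -> str:
    """Normalize a BCP-47-style tag so that differently-cased or
    underscore-separated spellings compare equal."""
    parts = tag.replace("_", "-").split("-")
    out = [parts[0].lower()]                  # language subtag: lowercase
    for sub in parts[1:]:
        if len(sub) == 4 and sub.isalpha():
            out.append(sub.title())           # script subtag, e.g. Hans
        elif len(sub) == 2 and sub.isalpha():
            out.append(sub.upper())           # region subtag, e.g. CN
        else:
            out.append(sub.lower())           # variants etc.: lowercase
    return "-".join(out)

def same_language(a: str, b: str) -> bool:
    """Compare two tags after normalization."""
    return normalize_tag(a) == normalize_tag(b)

print(normalize_tag("zh_hans-cn"))       # zh-Hans-CN
print(same_language("en-US", "en_us"))   # True
```

A real implementation would also be able to measure the distance between related tags (e.g. `cmn` vs `zh`), which is what a user-facing lang-code reference would need.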
| 1.0 | Add lang codes support - [lang codes](https://github.com/LuminosoInsight/langcodes)
Google [says](https://cloud.google.com/speech-to-text/docs/languages) it's using [BCP-47](https://tools.ietf.org/html/bcp47) as a standard. According to my [test (agermanidis/autosub pull request #136)](https://github.com/agermanidis/autosub/pull/136), more specific lang codes that comply with the standard get better results, which means they are able to avoid Google's IP-local optimization.
Obviously, the current strategy of using Google's recommended lang codes is not enough. (Sometimes it's hard to compare, due to Google's non-standard usage.) We need a library to compare lang codes, and perhaps to give the user a full lang-code reference.
| non_infrastructure | add lang codes support google its using as a standard according to my a more specific and compliant with a standard s lang codes get a better result which means they are able to avoid google s ip local optimization obviously currently using google recommended lang codes strategy is not enough sometimes it s hard to compare due to the non standard usage by google we need a library to compare lang codes and perhaps give a full lang codes reference to user | 0 |
33,571 | 27,593,054,830 | IssuesEvent | 2023-03-09 02:48:22 | bootstrapworld/curriculum | https://api.github.com/repos/bootstrapworld/curriculum | opened | @lesson-link in textbook files leaks path | Infrastructure | [Live on our website right now](https://bootstrapworld.org/materials/spring2023/en-us/textbooks/IM-grade-6.html):

It looks like @lesson-link isn't resolving to the lesson titles, when used from these textbook pages. It's broken in both `master` and `fall2023` (new build system), so it's likely a racket preprocessor issue. | 1.0 | @lesson-link in textbook files leaks path - [Live on our website right now](https://bootstrapworld.org/materials/spring2023/en-us/textbooks/IM-grade-6.html):

It looks like @lesson-link isn't resolving to the lesson titles, when used from these textbook pages. It's broken in both `master` and `fall2023` (new build system), so it's likely a racket preprocessor issue. | infrastructure | lesson link in textbook files leaks path it looks like lesson link isn t resolving to the lesson titles when used from these textbook pages it s broken in both master and new build system so it s likely a racket preprocessor issue | 1 |
6,923 | 6,663,630,619 | IssuesEvent | 2017-10-02 17:07:19 | digitalmediabremen/hej | https://api.github.com/repos/digitalmediabremen/hej | opened | Blogaccount | infrastructure public | ###### DE
Die [DM-Blogs](https://blogs.digitalmedia-bremen.de) bieten einen Überblick über eine Menge der Kurse und werden als Resources genutzt. Das [Compendium Digitale](https://blogs.digitalmedia-bremen.de/compendium/) ist eine Sammelstelle für Tipps rund um die Digitalen Medien. Zu finden sind hier neben Leseempfehlungen Hinweise auf Galerien, Veröffentlichungswege, aber auch HfK-interne Abläufe etwa zum 3D-Druck oder den Werkstätten zu finden. Jeder ist herzlichst eingeladen, sein Wissen zu teilen.
###### *EN*
*The [DM-Blogs](https://blogs.digitalmedia-bremen.de) offer an insight into many courses and are often used for resources. The [Compendium Digitale](https://blogs.digitalmedia-bremen.de/compendium/) is a collection of tips and tricks for digital media. Here you can find things like recommendations for blogs, galleries or ways to publish your work as well as procedures inside the HfK regarding things like 3D-printing or workshops. Everyone is invited to add their own knowledge.* | 1.0 | Blogaccount - ###### DE
Die [DM-Blogs](https://blogs.digitalmedia-bremen.de) bieten einen Überblick über eine Menge der Kurse und werden als Resources genutzt. Das [Compendium Digitale](https://blogs.digitalmedia-bremen.de/compendium/) ist eine Sammelstelle für Tipps rund um die Digitalen Medien. Zu finden sind hier neben Leseempfehlungen Hinweise auf Galerien, Veröffentlichungswege, aber auch HfK-interne Abläufe etwa zum 3D-Druck oder den Werkstätten zu finden. Jeder ist herzlichst eingeladen, sein Wissen zu teilen.
###### *EN*
*The [DM-Blogs](https://blogs.digitalmedia-bremen.de) offer an insight into many courses and are often used for resources. The [Compendium Digitale](https://blogs.digitalmedia-bremen.de/compendium/) is a collection of tips and tricks for digital media. Here you can find things like recommendations for blogs, galleries or ways to publish your work as well as procedures inside the HfK regarding things like 3D-printing or workshops. Everyone is invited to add their own knowledge.* | infrastructure | blogaccount de die bieten einen überblick über eine menge der kurse und werden als resources genutzt das ist eine sammelstelle für tipps rund um die digitalen medien zu finden sind hier neben leseempfehlungen hinweise auf galerien veröffentlichungswege aber auch hfk interne abläufe etwa zum druck oder den werkstätten zu finden jeder ist herzlichst eingeladen sein wissen zu teilen en the offer an insight into many courses and are often used for resources the is a collection of tips and tricks for digital media here you can find things like recommendations for blogs galleries or ways to publish your work as well as procedures inside the hfk regarding things like printing or workshops everyone is invited to add their own knowledge | 1
9,441 | 7,974,645,390 | IssuesEvent | 2018-07-17 06:37:24 | php-coder/mystamps | https://api.github.com/repos/php-coder/mystamps | closed | Send service's logs to admin every day | area/infrastructure environment/prod ready | Let's add cron task that will sent output of `journalctl -u mystamps --since=yesterday --until=today` command to root (admin) user every day. It should be executed at 00:05.
This task requires the following modifications:
- create `/etc/cron.d/mystamps-send-logs` file
- modify `/etc/sudoers.d/10_mystamps` to allow execution of `journalctl` command
- update `mystamps-app` role to update aforementioned files
| 1.0 | Send service's logs to admin every day - Let's add cron task that will sent output of `journalctl -u mystamps --since=yesterday --until=today` command to root (admin) user every day. It should be executed at 00:05.
This task requires the following modifications:
- create `/etc/cron.d/mystamps-send-logs` file
- modify `/etc/sudoers.d/10_mystamps` to allow execution of `journalctl` command
- update `mystamps-app` role to update aforementioned files
| infrastructure | send service s logs to admin every day let s add cron task that will sent output of journalctl u mystamps since yesterday until today command to root admin user every day it should be executed at this task requires the following modifications create etc cron d mystamps send logs file modify etc sudoers d mystamps to allow execution of journalctl command update mystamps app role to update aforementioned files | 1 |
220,702 | 17,242,031,081 | IssuesEvent | 2021-07-21 00:52:41 | Becksteinlab/GromacsWrapper | https://api.github.com/repos/Becksteinlab/GromacsWrapper | opened | test on MacOS | tests | Since the switch of CI to GitHub actions #200 we are currently only testing on **Linux** runners. There are versions of GROMACS available for MacOS #202 so we should be running at least some tests (Python 2.7 and 3.8) on MacOS runners, too.
The GROMACS versions can be limited, e.g.
- [ ] Python 2.7, GROMACS 2021
- [ ] Python 3.8, GROMACS 2021
for a start. | 1.0 | test on MacOS - Since the switch of CI to GitHub actions #200 we are currently only testing on **Linux** runners. There are versions of GROMACS available for MacOS #202 so we should be running at least some tests (Python 2.7 and 3.8) on MacOS runners, too.
The GROMACS versions can be limited, e.g.
- [ ] Python 2.7, GROMACS 2021
- [ ] Python 3.8, GROMACS 2021
for a start. | non_infrastructure | test on macos since the switch of ci to github actions we are currently only testing on linux runners there are versions of gromacs available for macos so we should be running at least some tests python and on macos runners too the gromacs versions can be limited e g python gromacs python gromacs for a start | 0 |
443,349 | 30,886,247,189 | IssuesEvent | 2023-08-03 22:11:28 | garyokeeffe/NSA | https://api.github.com/repos/garyokeeffe/NSA | opened | Video walkthrough of API deployment | documentation | We need better overall documentation of the API deployment process. A step-by-step video tutorial embedded in the readme would be a good first step. | 1.0 | Video walkthrough of API deployment - We need better overall documentation of the API deployment process. A step-by-step video tutorial embedded in the readme would be a good first step. | non_infrastructure | video walkthrough of api deployment we need better overall documentation of the api deployment process a step by step video tutorial embedded in the readme would be a good first step | 0 |
85,267 | 16,624,265,552 | IssuesEvent | 2021-06-03 07:35:52 | HydrolienF/Formiko | https://api.github.com/repos/HydrolienF/Formiko | closed | implement View #175 with GUI2D : ViewGUI | code reorganization doing done graphics | - [x] Game should work with all function of ViewGUI2d
- [x] ViewGUI need to be link to all Panel.
- [ ] To switch to an other panel it need to swap main panel in Fenetre to an other 1.
- [x] Any panel can be draw anytime. So we need to check that frame exist and add it if not.
- [x] launch of PanneauMenu kill cheat code listening, it shoundn't. | 1.0 | implement View #175 with GUI2D : ViewGUI - - [x] Game should work with all function of ViewGUI2d
- [x] ViewGUI need to be link to all Panel.
- [ ] To switch to an other panel it need to swap main panel in Fenetre to an other 1.
- [x] Any panel can be draw anytime. So we need to check that frame exist and add it if not.
- [x] launch of PanneauMenu kill cheat code listening, it shoundn't. | non_infrastructure | implement view with viewgui game should work with all function of viewgui need to be link to all panel to switch to an other panel it need to swap main panel in fenetre to an other any panel can be draw anytime so we need to check that frame exist and add it if not launch of panneaumenu kill cheat code listening it shoundn t | 0 |
408,947 | 11,954,696,930 | IssuesEvent | 2020-04-04 00:41:15 | frederik-hoeft/pmdbs | https://api.github.com/repos/frederik-hoeft/pmdbs | closed | On Register: Allow Email Reuse If Account Is Not Verified And if The 2FA Code is Expired | bug medium priority wontfix | also: run garbage collection periodically | 1.0 | On Register: Allow Email Reuse If Account Is Not Verified And if The 2FA Code is Expired - also: run garbage collection periodically | non_infrastructure | on register allow email reuse if account is not verified and if the code is expired also run garbage collection periodically | 0 |
31,262 | 25,492,964,184 | IssuesEvent | 2022-11-27 10:11:41 | Tonomy-Foundation/Tonomy-ID-SDK | https://api.github.com/repos/Tonomy-Foundation/Tonomy-ID-SDK | opened | Github pipeline runs linters check on PR | infrastructure | Definition of done
- [ ] Github action will also run npm lint on the repository
Follow up:
other repositories | 1.0 | Github pipeline runs linters check on PR - Definition of done
- [ ] Github action will also run npm lint on the repository
Follow up:
other repositories | infrastructure | github pipeline runs linters check on pr definition of done github action will also run npm lint on the repository follow up other repositories | 1 |
63,593 | 3,197,033,304 | IssuesEvent | 2015-10-01 00:37:10 | fusioninventory/fusioninventory-for-glpi | https://api.github.com/repos/fusioninventory/fusioninventory-for-glpi | closed | import machine failed when speed of network interface is not set | Category: Computer inventory Component: For junior contributor Component: Found in version Priority: Normal Status: Closed Tracker: Bug | ---
Author Name: **Nico LAS** (Nico LAS)
Original Redmine Issue: 2942, http://forge.fusioninventory.org/issues/2942
Original Date: 2015-05-19
Original Assignee: David Durieux
---
on a solaris 10 zone, when inventory is done, it fails to analyse network interface with name nxge
Speed is set with no value : <SPEED> </SPEED>
it failed to be impoorted in glpi : PHP Fatal error: Unsupported operand types in /wlc/glpi/plugins/fusioninventory/inc/formatconvert.class.php on line 657
Line 657 try to make operation on speed value : $array_tmp['speed'] = $array_tmp['speed'] / 1000000;
| 1.0 | import machine failed when speed of network interface is not set - ---
Author Name: **Nico LAS** (Nico LAS)
Original Redmine Issue: 2942, http://forge.fusioninventory.org/issues/2942
Original Date: 2015-05-19
Original Assignee: David Durieux
---
on a solaris 10 zone, when inventory is done, it fails to analyse network interface with name nxge
Speed is set with no value : <SPEED> </SPEED>
it failed to be impoorted in glpi : PHP Fatal error: Unsupported operand types in /wlc/glpi/plugins/fusioninventory/inc/formatconvert.class.php on line 657
Line 657 try to make operation on speed value : $array_tmp['speed'] = $array_tmp['speed'] / 1000000;
| non_infrastructure | import machine failed when speed of network interface is not set author name nico las nico las original redmine issue original date original assignee david durieux on a solaris zone when inventory is done it fails to analyse network interface with name nxge speed is set with no value it failed to be impoorted in glpi php fatal error unsupported operand types in wlc glpi plugins fusioninventory inc formatconvert class php on line line try to make operation on speed value array tmp array tmp | 0 |
3,300 | 4,210,234,954 | IssuesEvent | 2016-06-29 09:12:12 | matthiasbeyer/imag | https://api.github.com/repos/matthiasbeyer/imag | opened | StoreId: Store-internal/Store-external confusion | complexity/high kind/enhancement kind/infrastructure kind/refactor meta/importance/high part/lib/imagstore | At the moment we `Store::storify_id()` each `StoreId` object once it enters a `Store` function.
This alters the `StoreId` object and is not really nice (it really creates a new `StoreId` object which than gets passed around).
We should reimplement this and differ between internal and external `StoreId` objects, meaning that
* external `StoreId` objects are for the interface to the `Store`
* internal `StoreId` objects are basically `StoreId` objects _including_ the `Store` path in them.
We should make them seperate types (`libimagstore`-private ones), so we can be sure that we do not accidentially pass these internal data types (with the full path from the FS root to the store entry) to the user of the library (who should only use the external types, having the path root at the store path).
---
I consider this a non-breaking change, as we do not alter the `libimagstore` interface if we do things right. It might be a little complex to get it right, though.
---
If anyone wants to implement this, go ahead. I would gladly answer questions.
| 1.0 | StoreId: Store-internal/Store-external confusion - At the moment we `Store::storify_id()` each `StoreId` object once it enters a `Store` function.
This alters the `StoreId` object and is not really nice (it really creates a new `StoreId` object which than gets passed around).
We should reimplement this and differ between internal and external `StoreId` objects, meaning that
* external `StoreId` objects are for the interface to the `Store`
* internal `StoreId` objects are basically `StoreId` objects _including_ the `Store` path in them.
We should make them seperate types (`libimagstore`-private ones), so we can be sure that we do not accidentially pass these internal data types (with the full path from the FS root to the store entry) to the user of the library (who should only use the external types, having the path root at the store path).
---
I consider this a non-breaking change, as we do not alter the `libimagstore` interface if we do things right. It might be a little complex to get it right, though.
---
If anyone wants to implement this, go ahead. I would gladly answer questions.
| infrastructure | storeid store internal store external confusion at the moment we store storify id each storeid object once it enters a store function this alters the storeid object and is not really nice it really creates a new storeid object which than gets passed around we should reimplement this and differ between internal and external storeid objects meaning that external storeid objects are for the interface to the store internal storeid objects are basically storeid objects including the store path in them we should make them seperate types libimagstore private ones so we can be sure that we do not accidentially pass these internal data types with the full path from the fs root to the store entry to the user of the library who should only use the external types having the path root at the store path i consider this a non breaking change as we do not alter the libimagstore interface if we do things right it might be a little complex to get it right though if anyone wants to implement this go ahead i would gladly answer questions | 1 |
261,028 | 27,785,124,146 | IssuesEvent | 2023-03-17 02:04:52 | nk7598/linux-4.19.72 | https://api.github.com/repos/nk7598/linux-4.19.72 | reopened | CVE-2022-4095 (High) detected in linuxlinux-4.19.269 | Mend: dependency security vulnerability | ## CVE-2022-4095 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linuxlinux-4.19.269</b></p></summary>
<p>
<p>The Linux Kernel</p>
<p>Library home page: <a href=https://mirrors.edge.kernel.org/pub/linux/kernel/v4.x/?wsslib=linux>https://mirrors.edge.kernel.org/pub/linux/kernel/v4.x/?wsslib=linux</a></p>
<p>Found in base branch: <b>master</b></p></p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (2)</summary>
<p></p>
<p>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/drivers/staging/rtl8712/rtl8712_cmd.c</b>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/drivers/staging/rtl8712/rtl8712_cmd.c</b>
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
CVE-2022-4095 kernel: Use-after-Free/Double-Free bug in read_bbreg_hdl in drivers/staging/rtl8712/rtl8712_cmd.c
<p>Publish Date: 2022-11-21
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2022-4095>CVE-2022-4095</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://www.linuxkernelcves.com/cves/CVE-2022-4095">https://www.linuxkernelcves.com/cves/CVE-2022-4095</a></p>
<p>Release Date: 2022-11-21</p>
<p>Fix Resolution: v4.9.328,v4.14.293,v4.19.258,v5.4.213,v5.10.142,v5.15.66</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | True | CVE-2022-4095 (High) detected in linuxlinux-4.19.269 - ## CVE-2022-4095 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linuxlinux-4.19.269</b></p></summary>
<p>
<p>The Linux Kernel</p>
<p>Library home page: <a href=https://mirrors.edge.kernel.org/pub/linux/kernel/v4.x/?wsslib=linux>https://mirrors.edge.kernel.org/pub/linux/kernel/v4.x/?wsslib=linux</a></p>
<p>Found in base branch: <b>master</b></p></p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (2)</summary>
<p></p>
<p>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/drivers/staging/rtl8712/rtl8712_cmd.c</b>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/drivers/staging/rtl8712/rtl8712_cmd.c</b>
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
CVE-2022-4095 kernel: Use-after-Free/Double-Free bug in read_bbreg_hdl in drivers/staging/rtl8712/rtl8712_cmd.c
<p>Publish Date: 2022-11-21
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2022-4095>CVE-2022-4095</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://www.linuxkernelcves.com/cves/CVE-2022-4095">https://www.linuxkernelcves.com/cves/CVE-2022-4095</a></p>
<p>Release Date: 2022-11-21</p>
<p>Fix Resolution: v4.9.328,v4.14.293,v4.19.258,v5.4.213,v5.10.142,v5.15.66</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github) | non_infrastructure | cve high detected in linuxlinux cve high severity vulnerability vulnerable library linuxlinux the linux kernel library home page a href found in base branch master vulnerable source files drivers staging cmd c drivers staging cmd c vulnerability details cve kernel use after free double free bug in read bbreg hdl in drivers staging cmd c publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity low privileges required low user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with mend | 0 |
13,002 | 10,060,602,637 | IssuesEvent | 2019-07-22 19:14:44 | dotnet/core-setup | https://api.github.com/repos/dotnet/core-setup | closed | source-build: Update build task project to use stable versions to remove prebuilts | Triaged area-Infrastructure | | | Prebuilt | Planned ref-only package
--- | --- | ---
NuGet.ProjectModel | 4.3.0-preview2-4095 | 4.9.4
Microsoft.Extensions.DependencyModel | 2.1.0-preview2-26306-03 | 2.1.0
For example:
https://github.com/dotnet/core-setup/blob/3a2d0293bc7717a6fa038f2a12a9a91391205b06/dependencies.props#L34
should be changed to:
https://github.com/dotnet/core-setup/blob/8d445c071754cbb95b28b7b84d066adf92689660/eng/Versions.props#L85
Currently it's a source-build prebuilt, and 4.9.4 is the planned ref package version: https://github.com/dotnet/source-build/issues/1090.
This is already done in `dev/arcade-migration`, this issue is to track it getting into `master` to help with source-build prebuilt reporting. | 1.0 | source-build: Update build task project to use stable versions to remove prebuilts - | | Prebuilt | Planned ref-only package
--- | --- | ---
NuGet.ProjectModel | 4.3.0-preview2-4095 | 4.9.4
Microsoft.Extensions.DependencyModel | 2.1.0-preview2-26306-03 | 2.1.0
For example:
https://github.com/dotnet/core-setup/blob/3a2d0293bc7717a6fa038f2a12a9a91391205b06/dependencies.props#L34
should be changed to:
https://github.com/dotnet/core-setup/blob/8d445c071754cbb95b28b7b84d066adf92689660/eng/Versions.props#L85
Currently it's a source-build prebuilt, and 4.9.4 is the planned ref package version: https://github.com/dotnet/source-build/issues/1090.
This is already done in `dev/arcade-migration`, this issue is to track it getting into `master` to help with source-build prebuilt reporting. | infrastructure | source build update build task project to use stable versions to remove prebuilts prebuilt planned ref only package nuget projectmodel microsoft extensions dependencymodel for example should be changed to currently it s a source build prebuilt and is the planned ref package version this is already done in dev arcade migration this issue is to track it getting into master to help with source build prebuilt reporting | 1 |
587,601 | 17,620,498,596 | IssuesEvent | 2021-08-18 14:47:26 | kitzeslab/opensoundscape | https://api.github.com/repos/kitzeslab/opensoundscape | closed | Prediction should save intermediate results | high priority | Prediction on large datasets can take hundreds of cpu-hours, so results should be saved periodically to avoid losing all progress if there's an exception. For instance, there could be an option to append results to a file after each batch finishes. | 1.0 | Prediction should save intermediate results - Prediction on large datasets can take hundreds of cpu-hours, so results should be saved periodically to avoid losing all progress if there's an exception. For instance, there could be an option to append results to a file after each batch finishes. | non_infrastructure | prediction should save intermediate results prediction on large datasets can take hundreds of cpu hours so results should be saved periodically to avoid losing all progress if there s an exception for instance there could be an option to append results to a file after each batch finishes | 0 |
59,818 | 7,296,436,266 | IssuesEvent | 2018-02-26 10:45:06 | matomo-org/matomo | https://api.github.com/repos/matomo-org/matomo | closed | update checker display issues | c: Design / UI | Followup to #12463 and #12459, related to #12485
I don't have time to create a fix, so I'll just post it here:
The gif is still on the right and oddly the box disappears and creates another arrow.

| 1.0 | update checker display issues - Followup to #12463 and #12459, related to #12485
I don't have time to create a fix, so I'll just post it here:
The gif is still on the right and oddly the box disappears and creates another arrow.

| non_infrastructure | update checker display issues followup to and related to i don t have time to create a fix so i ll just post it here the gif is still on the right and oddly the box disappears and creates another arrow | 0 |
29,995 | 24,463,675,510 | IssuesEvent | 2022-10-07 13:22:19 | iiif-prezi/iiif-prezi3 | https://api.github.com/repos/iiif-prezi/iiif-prezi3 | closed | Update PyPi publish action | infrastructure | The following warning showed up in the logs for the last action that made a release to PyPi:
>Warning: You are using "pypa/gh-action-pypi-publish@master". The "master" branch of this project has been sunset and will not receive any updates, not even security bug fixes. Please, make sure to use a supported version. If you want to pin to v1 major version, use "pypa/gh-action-pypi-publish@release/v1". If you feel adventurous, you may opt to use use "pypa/gh-action-pypi-publish@unstable/v1" instead. A more general recommendation is to pin to exact tags or commit shas. | 1.0 | Update PyPi publish action - The following warning showed up in the logs for the last action that made a release to PyPi:
>Warning: You are using "pypa/gh-action-pypi-publish@master". The "master" branch of this project has been sunset and will not receive any updates, not even security bug fixes. Please, make sure to use a supported version. If you want to pin to v1 major version, use "pypa/gh-action-pypi-publish@release/v1". If you feel adventurous, you may opt to use use "pypa/gh-action-pypi-publish@unstable/v1" instead. A more general recommendation is to pin to exact tags or commit shas. | infrastructure | update pypi publish action the following warning showed up in the logs for the last action that made a release to pypi warning you are using pypa gh action pypi publish master the master branch of this project has been sunset and will not receive any updates not even security bug fixes please make sure to use a supported version if you want to pin to major version use pypa gh action pypi publish release if you feel adventurous you may opt to use use pypa gh action pypi publish unstable instead a more general recommendation is to pin to exact tags or commit shas | 1 |
5,754 | 5,930,705,665 | IssuesEvent | 2017-05-24 02:39:17 | dotnet/corefx | https://api.github.com/repos/dotnet/corefx | closed | ConditionalFact/ConditionalTheory tests are skipped with` netfx | area-Infrastructure test bug | Following test fails with `msbuild /T:BuildAndTest` but passes with `msbuild /T:BuildAndTest /P:TargetGroup=netfx`. The test isn't being run.
```cs
public static bool AlwaysTrue { get; } = true;
[ConditionalFact(nameof(AlwaysTrue))]
public void Test()
{
Assert.False(true);
}
``` | 1.0 | ConditionalFact/ConditionalTheory tests are skipped with` netfx - Following test fails with `msbuild /T:BuildAndTest` but passes with `msbuild /T:BuildAndTest /P:TargetGroup=netfx`. The test isn't being run.
```cs
public static bool AlwaysTrue { get; } = true;
[ConditionalFact(nameof(AlwaysTrue))]
public void Test()
{
Assert.False(true);
}
``` | infrastructure | conditionalfact conditionaltheory tests are skipped with netfx following test fails with msbuild t buildandtest but passes with msbuild t buildandtest p targetgroup netfx the test isn t being run cs public static bool alwaystrue get true public void test assert false true | 1 |
14,765 | 11,134,826,575 | IssuesEvent | 2019-12-20 12:52:57 | thibaultmeyer/sparrow | https://api.github.com/repos/thibaultmeyer/sparrow | opened | Add task scheduler | area/async-task area/infrastructure kind/enhancement priority/medium | Implements task scheduler with cron-like feature to allow usage of background admin/management tasks. | 1.0 | Add task scheduler - Implements task scheduler with cron-like feature to allow usage of background admin/management tasks. | infrastructure | add task scheduler implements task scheduler with cron like feature to allow usage of background admin management tasks | 1 |
409,901 | 11,979,927,808 | IssuesEvent | 2020-04-07 08:28:27 | ahmedkaludi/accelerated-mobile-pages | https://api.github.com/repos/ahmedkaludi/accelerated-mobile-pages | closed | Need to have an option for auto slide of images in AMP page builder - slider module. | [Priority: HIGH] bug | The user is adding images in the slider module of AMP page builder which is working perfectly but the user don't want to use auto slide the images. so need to add an option such that 'images should be auto slide or not' (by default images are sliding automatically).
Exact Issue: https://monosnap.com/file/dYEqpyRFxK9N6azR7YGPnuvZrrh4hk
Ref URL:
https://secure.helpscout.net/conversation/1129255137/0/?folderId=3126461 | 1.0 | Need to have an option for auto slide of images in AMP page builder - slider module. - The user is adding images in the slider module of AMP page builder which is working perfectly but the user don't want to use auto slide the images. so need to add an option such that 'images should be auto slide or not' (by default images are sliding automatically).
Exact Issue: https://monosnap.com/file/dYEqpyRFxK9N6azR7YGPnuvZrrh4hk
Ref URL:
https://secure.helpscout.net/conversation/1129255137/0/?folderId=3126461 | non_infrastructure | need to have an option for auto slide of images in amp page builder slider module the user is adding images in the slider module of amp page builder which is working perfectly but the user don t want to use auto slide the images so need to add an option such that images should be auto slide or not by default images are sliding automatically exact issue ref url | 0 |
5,296 | 5,556,965,189 | IssuesEvent | 2017-03-24 10:39:29 | Framstag/libosmscout | https://api.github.com/repos/Framstag/libosmscout | closed | Add explicit download timeout to Qt network api | enhancement help wanted infrastructure | Qt don't support to configure download timeout. System default (about one hour) is used! This cause problems when you start map download (or online map tile) and go away from wifi signal... Downloading stucks then. Lost tcp connection is detected (by app) after hour and downloading is restarted. It is toooo late!
All tutorials for Qt networking api suggest to add custom download timeout and reset it when some data arrives (signal `QNetworkReply::downloadProgress`). | 1.0 | Add explicit download timeout to Qt network api - Qt don't support to configure download timeout. System default (about one hour) is used! This cause problems when you start map download (or online map tile) and go away from wifi signal... Downloading stucks then. Lost tcp connection is detected (by app) after hour and downloading is restarted. It is toooo late!
All tutorials for Qt networking api suggest to add custom download timeout and reset it when some data arrives (signal `QNetworkReply::downloadProgress`). | infrastructure | add explicit download timeout to qt network api qt don t support to configure download timeout system default about one hour is used this cause problems when you start map download or online map tile and go away from wifi signal downloading stucks then lost tcp connection is detected by app after hour and downloading is restarted it is toooo late all tutorials for qt networking api suggest to add custom download timeout and reset it when some data arrives signal qnetworkreply downloadprogress | 1 |
20,644 | 3,391,586,221 | IssuesEvent | 2015-11-30 16:05:26 | bridgedotnet/Bridge | https://api.github.com/repos/bridgedotnet/Bridge | closed | Cannot read property '$is' of undefined [1.10 regression?] | defect question | I just tried upgrading a little demo project from 1.9 to 1.10, and I now get this error:
Uncaught TypeError: Cannot read property '$is' of undefined
May be related to casting inside a `.Select()` or the `[Ignore]`s on the class with the implicit casts?
You can [repro this error on live.bridge.net here](http://live.bridge.net/#5ccfca1c51b09d959ffc) which is running 1.10.0. If you run the same code in 1.9, it seems to work fine.
Full code for convenience:
```
using System.Linq;
public class App
{
[Ready]
public static void Main()
{
var danny = new [] {
new Danny(),
new Danny()
};
var a = danny.Select(d => (ReactElementOrText)d).ToArray();
}
}
public class Danny : ReactElement {
}
public class ReactElement {
}
[Ignore]
public sealed class ReactElementOrText
{
private ReactElementOrText() { }
[Ignore] public extern static implicit operator ReactElementOrText(string text);
[Ignore] public extern static implicit operator ReactElementOrText(ReactElement element);
}
``` | 1.0 | Cannot read property '$is' of undefined [1.10 regression?] - I just tried upgrading a little demo project from 1.9 to 1.10, and I now get this error:
Uncaught TypeError: Cannot read property '$is' of undefined
May be related to casting inside a `.Select()` or the `[Ignore]`s on the class with the implicit casts?
You can [repro this error on live.bridge.net here](http://live.bridge.net/#5ccfca1c51b09d959ffc) which is running 1.10.0. If you run the same code in 1.9, it seems to work fine.
Full code for convenience:
```
using System.Linq;
public class App
{
[Ready]
public static void Main()
{
var danny = new [] {
new Danny(),
new Danny()
};
var a = danny.Select(d => (ReactElementOrText)d).ToArray();
}
}
public class Danny : ReactElement {
}
public class ReactElement {
}
[Ignore]
public sealed class ReactElementOrText
{
private ReactElementOrText() { }
[Ignore] public extern static implicit operator ReactElementOrText(string text);
[Ignore] public extern static implicit operator ReactElementOrText(ReactElement element);
}
``` | non_infrastructure | cannot read property is of undefined i just tried upgrading a little demo project from to and i now get this error uncaught typeerror cannot read property is of undefined may be related to casting inside a select or the s on the class with the implicit casts you can which is running if you run the same code in it seems to work fine full code for convenience using system linq public class app public static void main var danny new new danny new danny var a danny select d reactelementortext d toarray public class danny reactelement public class reactelement public sealed class reactelementortext private reactelementortext public extern static implicit operator reactelementortext string text public extern static implicit operator reactelementortext reactelement element | 0 |
29,487 | 24,042,946,953 | IssuesEvent | 2022-09-16 04:57:48 | oppia/oppia-android | https://api.github.com/repos/oppia/oppia-android | reopened | Add support for downloading interactions to drive exploration progress controller & other decision makers | issue_type_infrastructure issue_user_impact_low dev_team issue_temp_ben_triaged | Rather than making assumptions about interaction structures, domain controllers should keep an actual registry of interactions. This will also provide some future-proofing in the app as more interactions are added in the future. This is especially important for representing terminal interactions since there could conceivably be other terminal interactions in the future. | 1.0 | Add support for downloading interactions to drive exploration progress controller & other decision makers - Rather than making assumptions about interaction structures, domain controllers should keep an actual registry of interactions. This will also provide some future-proofing in the app as more interactions are added in the future. This is especially important for representing terminal interactions since there could conceivably be other terminal interactions in the future. | infrastructure | add support for downloading interactions to drive exploration progress controller other decision makers rather than making assumptions about interaction structures domain controllers should keep an actual registry of interactions this will also provide some future proofing in the app as more interactions are added in the future this is especially important for representing terminal interactions since there could conceivably be other terminal interactions in the future | 1 |
61,468 | 14,627,769,431 | IssuesEvent | 2020-12-23 12:56:21 | bitbar/test-samples | https://api.github.com/repos/bitbar/test-samples | opened | CVE-2020-11620 (High) detected in jackson-databind-2.6.0.jar | security vulnerability | ## CVE-2020-11620 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jackson-databind-2.6.0.jar</b></p></summary>
<p>General data-binding functionality for Jackson: works on core streaming API</p>
<p>Library home page: <a href="http://github.com/FasterXML/jackson">http://github.com/FasterXML/jackson</a></p>
<p>Path to dependency file: test-samples/samples/testing-frameworks/appium/server-side/image-recognition/pom.xml</p>
<p>Path to vulnerable library: /home/wss-scanner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.6.0/jackson-databind-2.6.0.jar</p>
<p>
Dependency Hierarchy:
- testdroid-api-2.38.jar (Root Library)
- :x: **jackson-databind-2.6.0.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/bitbar/test-samples/commit/12af4f854b64888df6e4492ecc94e141388e939a">12af4f854b64888df6e4492ecc94e141388e939a</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
FasterXML jackson-databind 2.x before 2.9.10.4 mishandles the interaction between serialization gadgets and typing, related to org.apache.commons.jelly.impl.Embedded (aka commons-jelly).
<p>Publish Date: 2020-04-07
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-11620>CVE-2020-11620</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>8.1</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-11620">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-11620</a></p>
<p>Release Date: 2020-04-07</p>
<p>Fix Resolution: com.fasterxml.jackson.core:jackson-databind:2.9.10.4</p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":false,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"Java","groupId":"com.fasterxml.jackson.core","packageName":"jackson-databind","packageVersion":"2.6.0","isTransitiveDependency":true,"dependencyTree":"com.testdroid:testdroid-api:2.38;com.fasterxml.jackson.core:jackson-databind:2.6.0","isMinimumFixVersionAvailable":true,"minimumFixVersion":"com.fasterxml.jackson.core:jackson-databind:2.9.10.4"}],"vulnerabilityIdentifier":"CVE-2020-11620","vulnerabilityDetails":"FasterXML jackson-databind 2.x before 2.9.10.4 mishandles the interaction between serialization gadgets and typing, related to org.apache.commons.jelly.impl.Embedded (aka commons-jelly).","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-11620","cvss3Severity":"high","cvss3Score":"8.1","cvss3Metrics":{"A":"High","AC":"High","PR":"None","S":"Unchanged","C":"High","UI":"None","AV":"Network","I":"High"},"extraData":{}}</REMEDIATE> --> | True | CVE-2020-11620 (High) detected in jackson-databind-2.6.0.jar - ## CVE-2020-11620 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jackson-databind-2.6.0.jar</b></p></summary>
<p>General data-binding functionality for Jackson: works on core streaming API</p>
<p>Library home page: <a href="http://github.com/FasterXML/jackson">http://github.com/FasterXML/jackson</a></p>
<p>Path to dependency file: test-samples/samples/testing-frameworks/appium/server-side/image-recognition/pom.xml</p>
<p>Path to vulnerable library: /home/wss-scanner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.6.0/jackson-databind-2.6.0.jar</p>
<p>
Dependency Hierarchy:
- testdroid-api-2.38.jar (Root Library)
- :x: **jackson-databind-2.6.0.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/bitbar/test-samples/commit/12af4f854b64888df6e4492ecc94e141388e939a">12af4f854b64888df6e4492ecc94e141388e939a</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
FasterXML jackson-databind 2.x before 2.9.10.4 mishandles the interaction between serialization gadgets and typing, related to org.apache.commons.jelly.impl.Embedded (aka commons-jelly).
<p>Publish Date: 2020-04-07
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-11620>CVE-2020-11620</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>8.1</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-11620">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-11620</a></p>
<p>Release Date: 2020-04-07</p>
<p>Fix Resolution: com.fasterxml.jackson.core:jackson-databind:2.9.10.4</p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":false,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"Java","groupId":"com.fasterxml.jackson.core","packageName":"jackson-databind","packageVersion":"2.6.0","isTransitiveDependency":true,"dependencyTree":"com.testdroid:testdroid-api:2.38;com.fasterxml.jackson.core:jackson-databind:2.6.0","isMinimumFixVersionAvailable":true,"minimumFixVersion":"com.fasterxml.jackson.core:jackson-databind:2.9.10.4"}],"vulnerabilityIdentifier":"CVE-2020-11620","vulnerabilityDetails":"FasterXML jackson-databind 2.x before 2.9.10.4 mishandles the interaction between serialization gadgets and typing, related to org.apache.commons.jelly.impl.Embedded (aka commons-jelly).","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-11620","cvss3Severity":"high","cvss3Score":"8.1","cvss3Metrics":{"A":"High","AC":"High","PR":"None","S":"Unchanged","C":"High","UI":"None","AV":"Network","I":"High"},"extraData":{}}</REMEDIATE> --> | non_infrastructure | cve high detected in jackson databind jar cve high severity vulnerability vulnerable library jackson databind jar general data binding functionality for jackson works on core streaming api library home page a href path to dependency file test samples samples testing frameworks appium server side image recognition pom xml path to vulnerable library home wss scanner repository com fasterxml jackson core jackson databind jackson databind jar dependency hierarchy testdroid api jar root library x jackson databind jar vulnerable library found in head commit a href found in base branch master vulnerability details fasterxml jackson databind x before mishandles the interaction between serialization gadgets and typing related to org apache commons jelly impl embedded aka commons jelly publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity high privileges required none user interaction none scope 
unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution com fasterxml jackson core jackson databind isopenpronvulnerability false ispackagebased true isdefaultbranch true packages vulnerabilityidentifier cve vulnerabilitydetails fasterxml jackson databind x before mishandles the interaction between serialization gadgets and typing related to org apache commons jelly impl embedded aka commons jelly vulnerabilityurl | 0 |
91,753 | 8,317,785,079 | IssuesEvent | 2018-09-25 13:07:54 | tomato42/tlsfuzzer | https://api.github.com/repos/tomato42/tlsfuzzer | closed | TLS 1.3 key_share omitted | good first issue help wanted new test script | # New test script idea
## What TLS message this idea relates to?
ClientHello
## What TLS extension this idea relates to?
`key_share`, `supported_groups`
## What is the behaviour the test script should test?
Check if TLS 1.3 ClientHello that omits `key_share` extension completely, but does send `supported_groups` is rejected with `missing_extension` alert
also check if a completely empty extension triggers `decode_error`
## Are there scripts that test related functionality?
none
## Additional information
mandatory to implement extensions check
| 1.0 | TLS 1.3 key_share omitted - # New test script idea
## What TLS message this idea relates to?
ClientHello
## What TLS extension this idea relates to?
`key_share`, `supported_groups`
## What is the behaviour the test script should test?
Check if TLS 1.3 ClientHello that omits `key_share` extension completely, but does send `supported_groups` is rejected with `missing_extension` alert
also check if a completely empty extension triggers `decode_error`
## Are there scripts that test related functionality?
none
## Additional information
mandatory to implement extensions check
| non_infrastructure | tls key share omitted new test script idea what tls message this idea relates to clienthello what tls extension this idea relates to key share supported groups what is the behaviour the test script should test check if tls clienthello that omits key share extension completely but does send supported groups is rejected with missing extension alert also check if a completely empty extension triggers decode error are there scripts that test related functionality none additional information mandatory to implement extensions check | 0 |
173,265 | 27,412,204,902 | IssuesEvent | 2023-03-01 11:18:32 | mui/mui-toolpad | https://api.github.com/repos/mui/mui-toolpad | opened | Inform users about the future of Toolpad | design: ui | ### Duplicates
- [X] I have searched the existing issues
### Latest version
- [X] I have tested the latest version
### Summary ๐ก
To all of our existing users we need to inform that the docker version is going to be discontinued and they'll be able to use Toolpad from NPM. This information needs to be shown in:
-[ ] Self-host version
-[ ] Demo
-[ ] Readme.md
Suggestion:
We replace this with the message.
<img width="1437" alt="Screenshot 2023-03-01 at 4 40 07 PM" src="https://user-images.githubusercontent.com/92228082/222122980-b9e42cd1-ee22-4729-8c17-f1f4436a1b95.png">
Message suggestion:
`Note: Big changes are coming - we're excited to announce a new direction for our product to better serve your needs. Stay tuned for further updates!`
### Examples ๐
_No response_
### Motivation ๐ฆ
_No response_ | 1.0 | Inform users about the future of Toolpad - ### Duplicates
- [X] I have searched the existing issues
### Latest version
- [X] I have tested the latest version
### Summary ๐ก
To all of our existing users we need to inform that the docker version is going to be discontinued and they'll be able to use Toolpad from NPM. This information needs to be shown in:
-[ ] Self-host version
-[ ] Demo
-[ ] Readme.md
Suggestion:
We replace this with the message.
<img width="1437" alt="Screenshot 2023-03-01 at 4 40 07 PM" src="https://user-images.githubusercontent.com/92228082/222122980-b9e42cd1-ee22-4729-8c17-f1f4436a1b95.png">
Message suggestion:
`Note: Big changes are coming - we're excited to announce a new direction for our product to better serve your needs. Stay tuned for further updates!`
### Examples ๐
_No response_
### Motivation ๐ฆ
_No response_ | non_infrastructure | inform users about the future of toolpad duplicates i have searched the existing issues latest version i have tested the latest version summary ๐ก to all of our existing users we need to inform that the docker version is going to be discontinued and they ll be able to use toolpad from npm this information needs to be shown in self host version demo readme md suggestion we replace this with the message img width alt screenshot at pm src message suggestion note big changes are coming we re excited to announce a new direction for our product to better serve your needs stay tuned for further updates examples ๐ no response motivation ๐ฆ no response | 0 |
14,334 | 10,755,421,971 | IssuesEvent | 2019-10-31 09:06:21 | nest/nest-simulator | https://api.github.com/repos/nest/nest-simulator | opened | Not all files are checked with static code analysis on large PRs | C: Infrastructure I: No breaking change P: Pending S: High T: Bug | On larger PRs, not all changed files are checked with static code analysis. This is because we are getting the changed files using the GitHub API. Because the GitHub API uses pagination, the code analysis will currently only get the first 30 changed files, while the remaining files are not checked for formatting errors.
Looking at for example [this Travis run](https://travis-ci.org/nest/nest-simulator/jobs/605035767#L1069) of #1282 with 947 changed files, [this Travis run](https://travis-ci.org/nest/nest-simulator/jobs/603833810#L1133) of #1283 with 460 changed files, or [this Travis run](https://travis-ci.org/nest/nest-simulator/jobs/411694777#L1289) of #920 with 279 changed files, only the the first 30 files are checked.
It is possible to increase the number of files returned using `per_page`, but this only goes up to 100. So for PRs with more than 100 changed files we must iterate the pages if we want to check all the files using the GitHub API.
Alternatively it might be possible to get the changed files using plain `git` and the variables set by Travis. The command
```
git diff --name-only --diff-filter=AM HEAD...$TRAVIS_BRANCH
```
could do the trick. @lekshmideepu Has this alternative been discussed before? | 1.0 | Not all files are checked with static code analysis on large PRs - On larger PRs, not all changed files are checked with static code analysis. This is because we are getting the changed files using the GitHub API. Because the GitHub API uses pagination, the code analysis will currently only get the first 30 changed files, while the remaining files are not checked for formatting errors.
Looking at for example [this Travis run](https://travis-ci.org/nest/nest-simulator/jobs/605035767#L1069) of #1282 with 947 changed files, [this Travis run](https://travis-ci.org/nest/nest-simulator/jobs/603833810#L1133) of #1283 with 460 changed files, or [this Travis run](https://travis-ci.org/nest/nest-simulator/jobs/411694777#L1289) of #920 with 279 changed files, only the the first 30 files are checked.
It is possible to increase the number of files returned using `per_page`, but this only goes up to 100. So for PRs with more than 100 changed files we must iterate the pages if we want to check all the files using the GitHub API.
Alternatively it might be possible to get the changed files using plain `git` and the variables set by Travis. The command
```
git diff --name-only --diff-filter=AM HEAD...$TRAVIS_BRANCH
```
could do the trick. @lekshmideepu Has this alternative been discussed before? | infrastructure | not all files are checked with static code analysis on large prs on larger prs not all changed files are checked with static code analysis this is because we are getting the changed files using the github api because the github api uses pagination the code analysis will currently only get the first changed files while the remaining files are not checked for formatting errors looking at for example of with changed files of with changed files or of with changed files only the the first files are checked it is possible to increase the number of files returned using per page but this only goes up to so for prs with more than changed files we must iterate the pages if we want to check all the files using the github api alternatively it might be possible to get the changed files using plain git and the variables set by travis the command git diff name only diff filter am head travis branch could do the trick lekshmideepu has this alternative been discussed before | 1 |
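The nest-simulator record above mentions that the GitHub API caps `per_page` at 100, so PRs with more changed files require iterating pages. As an illustration only (the endpoint `GET /repos/{owner}/{repo}/pulls/{number}/files` with `per_page`/`page` query parameters is real GitHub API; the function names and the injectable `fetch` callable are hypothetical, chosen so the pagination logic can be exercised without network access), a minimal sketch:

```python
import json
import urllib.request

API = "https://api.github.com/repos/{owner}/{repo}/pulls/{number}/files"

def fetch_page(owner, repo, number, page, per_page=100):
    """Fetch one page of a PR's changed files from the GitHub API (network call)."""
    url = API.format(owner=owner, repo=repo, number=number)
    url += f"?per_page={per_page}&page={page}"
    with urllib.request.urlopen(url) as resp:
        return json.load(resp)

def changed_files(fetch, per_page=100):
    """Yield filenames across all pages.

    `fetch` is a callable (page, per_page) -> list of file dicts; a short
    (or empty) page signals the last page, so iteration stops there.
    """
    page = 1
    while True:
        batch = fetch(page, per_page)
        for entry in batch:
            yield entry["filename"]
        if len(batch) < per_page:  # last page reached
            return
        page += 1
```

The `git diff --name-only --diff-filter=AM` alternative quoted in the record avoids this pagination entirely, at the cost of needing a local checkout of both branches.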
24,365 | 17,143,443,565 | IssuesEvent | 2021-07-13 12:15:52 | meateam/drive-project | https://api.github.com/repos/meateam/drive-project | closed | change S3 bucket postfix | infrastructure | because bucket id is unique.
Need to run a script to do that on all of the s3 bucket | 1.0 | change S3 bucket postfix - because bucket id is unique.
Need to run a script to do that on all of the s3 bucket | infrastructure | change bucket postfix because bucket id is unique need to run a script to do that on all of the bucket | 1 |