Dataset columns:

| column | dtype | values |
|---|---|---|
| Unnamed: 0 | int64 | 0 to 832k |
| id | float64 | 2.49B to 32.1B |
| type | string | 1 distinct value |
| created_at | string | lengths 19 to 19 |
| repo | string | lengths 7 to 112 |
| repo_url | string | lengths 36 to 141 |
| action | string | 3 distinct values |
| title | string | lengths 1 to 744 |
| labels | string | lengths 4 to 574 |
| body | string | lengths 9 to 211k |
| index | string | 10 distinct values |
| text_combine | string | lengths 96 to 211k |
| label | string | 2 distinct values |
| text | string | lengths 96 to 188k |
| binary_label | int64 | 0 to 1 |
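The example rows below follow this schema. As a minimal sketch of how such a dump could be loaded and sanity-checked, assuming it has been exported to a CSV file (the file name `issues_sample.csv` and the use of pandas are illustrative assumptions, not something stated by the dataset):

```python
import pandas as pd

# Illustrative assumption: the dump above is available as a CSV file named
# "issues_sample.csv"; neither the file nor its name is given in this preview.
df = pd.read_csv("issues_sample.csv")

# Columns expected from the schema table above.
expected = ["Unnamed: 0", "id", "type", "created_at", "repo", "repo_url",
            "action", "title", "labels", "body", "index", "text_combine",
            "label", "text", "binary_label"]
assert set(expected).issubset(df.columns)

# In the sample rows shown below, binary_label mirrors label:
# process -> 1, non_process -> 0.
print(df.groupby("label")["binary_label"].agg(["min", "max", "count"]))
```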

**Unnamed: 0:** 9,956 | **id:** 12,979,256,640 | **type:** IssuesEvent | **created_at:** 2020-07-22 01:30:53
**repo:** googleapis/python-bigquery | **repo_url:** https://api.github.com/repos/googleapis/python-bigquery
**action:** closed | **title:** release 1.26.0
**labels:** api: bigquery type: process
**body:**
Any chance we could get a release? I'd like #170 in a released version to support a tool I'm building on my current team.
**index:** 1.0
**text_combine:**
release 1.26.0 - Any chance we could get a release? I'd like #170 in a released version to support a tool I'm building on my current team.
**label:** process
**text:**
release any chance we could get a release i d like in a released version to support a tool i m building on my current team
**binary_label:** 1

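In the row above, `text_combine` is the title joined to the body, and `text` looks like a lowercased version with punctuation, digits, and issue references stripped. The exact cleaning pipeline is not documented in this preview; the sketch below is only a rough approximation that reproduces the pattern seen in this one example:

```python
import re

def approximate_clean(title: str, body: str) -> str:
    """Rough approximation of the text_combine -> text cleaning seen above:
    lowercase and keep only alphabetic tokens (drops digits, '#170', punctuation)."""
    combined = f"{title} - {body}"
    combined = re.sub(r"<[^>]+>", " ", combined)       # drop HTML tags
    combined = re.sub(r"https?://\S+", " ", combined)  # drop URLs
    tokens = re.findall(r"[a-z]+", combined.lower())   # alphabetic tokens only
    return " ".join(tokens)

title = "release 1.26.0"
body = ("Any chance we could get a release? I'd like #170 in a released "
        "version to support a tool I'm building on my current team.")
print(approximate_clean(title, body))
# release any chance we could get a release i d like in a released version
# to support a tool i m building on my current team
```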

**Unnamed: 0:** 3,615 | **id:** 6,061,817,407 | **type:** IssuesEvent | **created_at:** 2017-06-14 07:49:54
**repo:** aditya00j/Griffin | **repo_url:** https://api.github.com/repos/aditya00j/Griffin
**action:** opened | **title:** Create Simulation instance and test
**labels:** requirement
**body:**
Create a Simulation instance with 1 DynamicObject and test. This entails the following:
1. Properly instantiating the Simulation class, with inputs, states, and outputs.
2. Integrating DynamicObject interactions.
3. ODE solver.
4. Positive testing required. Negative testing not required.
**index:** 1.0
**text_combine:**
Create Simulation instance and test - Create a Simulation instance with 1 DynamicObject and test. This entails the following:
1. Properly instantiating the Simulation class, with inputs, states, and outputs.
2. Integrating DynamicObject interactions.
3. ODE solver.
4. Positive testing required. Negative testing not required.
**label:** non_process
**text:**
create simulation instance and test create a simulation instance with dynamicobject and test this entails the following properly instantiating the simulation class with inputs states and outputs integrating dynamicobject interactions ode solver positive testing required negative testing not required
**binary_label:** 0


**Unnamed: 0:** 636,307 | **id:** 20,596,943,256 | **type:** IssuesEvent | **created_at:** 2022-03-05 16:57:33
**repo:** AY2122S2-CS2103-F09-4/tp | **repo_url:** https://api.github.com/repos/AY2122S2-CS2103-F09-4/tp
**action:** closed | **title:** As a home baker that has multiple customers, I can look at all my customers
**labels:** type.story priority.high
**body:**
... so that I can access the information for different customers
**index:** 1.0
**text_combine:**
As a home baker that has multiple customers, I can look at all my customers - ... so that I can access the information for different customers
**label:** non_process
**text:**
as a home baker that has multiple customers i can look at all my customers so that i can access the information for different customers
**binary_label:** 0


**Unnamed: 0:** 102,019 | **id:** 16,543,129,244 | **type:** IssuesEvent | **created_at:** 2021-05-27 19:36:34
**repo:** RG4421/grafana | **repo_url:** https://api.github.com/repos/RG4421/grafana
**action:** opened | **title:** CVE-2020-7753 (High) detected in trim-0.0.1.tgz
**labels:** security vulnerability
**body:**
## CVE-2020-7753 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>trim-0.0.1.tgz</b></p></summary>
<p>Trim string whitespace</p>
<p>Library home page: <a href="https://registry.npmjs.org/trim/-/trim-0.0.1.tgz">https://registry.npmjs.org/trim/-/trim-0.0.1.tgz</a></p>
<p>Path to dependency file: grafana/package.json</p>
<p>Path to vulnerable library: grafana/node_modules/trim</p>
<p>
Dependency Hierarchy:
- @grafana/ui-8.1.0-pre.tgz (Root Library)
- addon-essentials-6.2.7.tgz
- addon-docs-6.2.7.tgz
- mdx-1.6.22.tgz
- remark-parse-8.0.3.tgz
- :x: **trim-0.0.1.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/RG4421/grafana/commit/ec817a5e348ac0cae2aa8950a6cd386b9a378f92">ec817a5e348ac0cae2aa8950a6cd386b9a378f92</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
All versions of package trim are vulnerable to Regular Expression Denial of Service (ReDoS) via trim().
<p>Publish Date: 2020-10-27
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-7753>CVE-2020-7753</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/component/trim/pull/8">https://github.com/component/trim/pull/8</a></p>
<p>Release Date: 2020-10-27</p>
<p>Fix Resolution: trim - 0.0.3</p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":false,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"javascript/Node.js","packageName":"trim","packageVersion":"0.0.1","packageFilePaths":["/package.json"],"isTransitiveDependency":true,"dependencyTree":"@grafana/ui:8.1.0-pre;@storybook/addon-essentials:6.2.7;@storybook/addon-docs:6.2.7;@mdx-js/mdx:1.6.22;remark-parse:8.0.3;trim:0.0.1","isMinimumFixVersionAvailable":true,"minimumFixVersion":"trim - 0.0.3"}],"baseBranches":["main"],"vulnerabilityIdentifier":"CVE-2020-7753","vulnerabilityDetails":"All versions of package trim are vulnerable to Regular Expression Denial of Service (ReDoS) via trim().","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-7753","cvss3Severity":"high","cvss3Score":"7.5","cvss3Metrics":{"A":"High","AC":"Low","PR":"None","S":"Unchanged","C":"None","UI":"None","AV":"Network","I":"None"},"extraData":{}}</REMEDIATE> -->
**index:** True
**text_combine:**
CVE-2020-7753 (High) detected in trim-0.0.1.tgz - ## CVE-2020-7753 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>trim-0.0.1.tgz</b></p></summary>
<p>Trim string whitespace</p>
<p>Library home page: <a href="https://registry.npmjs.org/trim/-/trim-0.0.1.tgz">https://registry.npmjs.org/trim/-/trim-0.0.1.tgz</a></p>
<p>Path to dependency file: grafana/package.json</p>
<p>Path to vulnerable library: grafana/node_modules/trim</p>
<p>
Dependency Hierarchy:
- @grafana/ui-8.1.0-pre.tgz (Root Library)
- addon-essentials-6.2.7.tgz
- addon-docs-6.2.7.tgz
- mdx-1.6.22.tgz
- remark-parse-8.0.3.tgz
- :x: **trim-0.0.1.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/RG4421/grafana/commit/ec817a5e348ac0cae2aa8950a6cd386b9a378f92">ec817a5e348ac0cae2aa8950a6cd386b9a378f92</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
All versions of package trim are vulnerable to Regular Expression Denial of Service (ReDoS) via trim().
<p>Publish Date: 2020-10-27
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-7753>CVE-2020-7753</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/component/trim/pull/8">https://github.com/component/trim/pull/8</a></p>
<p>Release Date: 2020-10-27</p>
<p>Fix Resolution: trim - 0.0.3</p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":false,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"javascript/Node.js","packageName":"trim","packageVersion":"0.0.1","packageFilePaths":["/package.json"],"isTransitiveDependency":true,"dependencyTree":"@grafana/ui:8.1.0-pre;@storybook/addon-essentials:6.2.7;@storybook/addon-docs:6.2.7;@mdx-js/mdx:1.6.22;remark-parse:8.0.3;trim:0.0.1","isMinimumFixVersionAvailable":true,"minimumFixVersion":"trim - 0.0.3"}],"baseBranches":["main"],"vulnerabilityIdentifier":"CVE-2020-7753","vulnerabilityDetails":"All versions of package trim are vulnerable to Regular Expression Denial of Service (ReDoS) via trim().","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-7753","cvss3Severity":"high","cvss3Score":"7.5","cvss3Metrics":{"A":"High","AC":"Low","PR":"None","S":"Unchanged","C":"None","UI":"None","AV":"Network","I":"None"},"extraData":{}}</REMEDIATE> -->
**label:** non_process
**text:**
cve high detected in trim tgz cve high severity vulnerability vulnerable library trim tgz trim string whitespace library home page a href path to dependency file grafana package json path to vulnerable library grafana node modules trim dependency hierarchy grafana ui pre tgz root library addon essentials tgz addon docs tgz mdx tgz remark parse tgz x trim tgz vulnerable library found in head commit a href found in base branch main vulnerability details all versions of package trim are vulnerable to regular expression denial of service redos via trim publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution trim isopenpronvulnerability false ispackagebased true isdefaultbranch true packages istransitivedependency true dependencytree grafana ui pre storybook addon essentials storybook addon docs mdx js mdx remark parse trim isminimumfixversionavailable true minimumfixversion trim basebranches vulnerabilityidentifier cve vulnerabilitydetails all versions of package trim are vulnerable to regular expression denial of service redos via trim vulnerabilityurl
**binary_label:** 0


**Unnamed: 0:** 274,541 | **id:** 30,052,891,798 | **type:** IssuesEvent | **created_at:** 2023-06-28 03:02:33
**repo:** temporalio/samples-java | **repo_url:** https://api.github.com/repos/temporalio/samples-java
**action:** opened | **title:** h2-2.1.214.jar: 1 vulnerabilities (highest severity is: 7.8)
**labels:** Mend: dependency security vulnerability
**body:**
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>h2-2.1.214.jar</b></p></summary>
<p>H2 Database Engine</p>
<p>Library home page: <a href="https://h2database.com">https://h2database.com</a></p>
<p>Path to dependency file: /springboot/build.gradle</p>
<p>Path to vulnerable library: /home/wss-scanner/.gradle/caches/modules-2/files-2.1/com.h2database/h2/2.1.214/d5c2005c9e3279201e12d4776c948578b16bf8b2/h2-2.1.214.jar</p>
<p>
</details>
## Vulnerabilities
| CVE | Severity | <img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS | Dependency | Type | Fixed in (h2 version) | Remediation Available |
| ------------- | ------------- | ----- | ----- | ----- | ------------- | --- |
| [CVE-2022-45868](https://www.mend.io/vulnerability-database/CVE-2022-45868) | <img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png?' width=19 height=20> High | 7.8 | h2-2.1.214.jar | Direct | N/A | ❌ |
## Details
<details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png?' width=19 height=20> CVE-2022-45868</summary>
### Vulnerable Library - <b>h2-2.1.214.jar</b></p>
<p>H2 Database Engine</p>
<p>Library home page: <a href="https://h2database.com">https://h2database.com</a></p>
<p>Path to dependency file: /springboot/build.gradle</p>
<p>Path to vulnerable library: /home/wss-scanner/.gradle/caches/modules-2/files-2.1/com.h2database/h2/2.1.214/d5c2005c9e3279201e12d4776c948578b16bf8b2/h2-2.1.214.jar</p>
<p>
Dependency Hierarchy:
- :x: **h2-2.1.214.jar** (Vulnerable Library)
<p>Found in base branch: <b>main</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
The web-based admin console in H2 Database Engine through 2.1.214 can be started via the CLI with the argument -webAdminPassword, which allows the user to specify the password in cleartext for the web admin console. Consequently, a local user (or an attacker that has obtained local access through some means) would be able to discover the password by listing processes and their arguments. NOTE: the vendor states "This is not a vulnerability of H2 Console ... Passwords should never be passed on the command line and every qualified DBA or system administrator is expected to know that."
<p>Publish Date: 2022-11-23
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2022-45868>CVE-2022-45868</a></p>
</p>
<p></p>
### CVSS 3 Score Details (<b>7.8</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
</details>
**index:** True
**text_combine:**
h2-2.1.214.jar: 1 vulnerabilities (highest severity is: 7.8) - <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>h2-2.1.214.jar</b></p></summary>
<p>H2 Database Engine</p>
<p>Library home page: <a href="https://h2database.com">https://h2database.com</a></p>
<p>Path to dependency file: /springboot/build.gradle</p>
<p>Path to vulnerable library: /home/wss-scanner/.gradle/caches/modules-2/files-2.1/com.h2database/h2/2.1.214/d5c2005c9e3279201e12d4776c948578b16bf8b2/h2-2.1.214.jar</p>
<p>
</details>
## Vulnerabilities
| CVE | Severity | <img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS | Dependency | Type | Fixed in (h2 version) | Remediation Available |
| ------------- | ------------- | ----- | ----- | ----- | ------------- | --- |
| [CVE-2022-45868](https://www.mend.io/vulnerability-database/CVE-2022-45868) | <img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png?' width=19 height=20> High | 7.8 | h2-2.1.214.jar | Direct | N/A | ❌ |
## Details
<details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png?' width=19 height=20> CVE-2022-45868</summary>
### Vulnerable Library - <b>h2-2.1.214.jar</b></p>
<p>H2 Database Engine</p>
<p>Library home page: <a href="https://h2database.com">https://h2database.com</a></p>
<p>Path to dependency file: /springboot/build.gradle</p>
<p>Path to vulnerable library: /home/wss-scanner/.gradle/caches/modules-2/files-2.1/com.h2database/h2/2.1.214/d5c2005c9e3279201e12d4776c948578b16bf8b2/h2-2.1.214.jar</p>
<p>
Dependency Hierarchy:
- :x: **h2-2.1.214.jar** (Vulnerable Library)
<p>Found in base branch: <b>main</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
The web-based admin console in H2 Database Engine through 2.1.214 can be started via the CLI with the argument -webAdminPassword, which allows the user to specify the password in cleartext for the web admin console. Consequently, a local user (or an attacker that has obtained local access through some means) would be able to discover the password by listing processes and their arguments. NOTE: the vendor states "This is not a vulnerability of H2 Console ... Passwords should never be passed on the command line and every qualified DBA or system administrator is expected to know that."
<p>Publish Date: 2022-11-23
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2022-45868>CVE-2022-45868</a></p>
</p>
<p></p>
### CVSS 3 Score Details (<b>7.8</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
</details>
**label:** non_process
**text:**
jar vulnerabilities highest severity is vulnerable library jar database engine library home page a href path to dependency file springboot build gradle path to vulnerable library home wss scanner gradle caches modules files com jar vulnerabilities cve severity cvss dependency type fixed in version remediation available high jar direct n a details cve vulnerable library jar database engine library home page a href path to dependency file springboot build gradle path to vulnerable library home wss scanner gradle caches modules files com jar dependency hierarchy x jar vulnerable library found in base branch main vulnerability details the web based admin console in database engine through can be started via the cli with the argument webadminpassword which allows the user to specify the password in cleartext for the web admin console consequently a local user or an attacker that has obtained local access through some means would be able to discover the password by listing processes and their arguments note the vendor states this is not a vulnerability of console passwords should never be passed on the command line and every qualified dba or system administrator is expected to know that publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity low privileges required low user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href
**binary_label:** 0


**Unnamed: 0:** 449,864 | **id:** 12,976,152,270 | **type:** IssuesEvent | **created_at:** 2020-07-21 18:15:32
**repo:** google/ground-platform | **repo_url:** https://api.github.com/repos/google/ground-platform
**action:** closed | **title:** [Side panel] Routing from one feature to another does not change contents of side panel
**labels:** priority: p1 type: bug
**body:**
**Describe the bug**
Routing from one feature to another changes the side panel, but the color is not correct (still the previous color)
**To Reproduce**
Steps to reproduce the behavior:
1. Go to home page.
2. Click on a feature.
3. Shows the feature in side panel.
4. Click on a feature from a different layer.
5. Does not show the right color for the marker in side panel.
**index:** 1.0
**text_combine:**
[Side panel] Routing from one feature to another does not change contents of side panel - **Describe the bug**
Routing from one feature to another changes the side panel, but the color is not correct (still the previous color)
**To Reproduce**
Steps to reproduce the behavior:
1. Go to home page.
2. Click on a feature.
3. Shows the feature in side panel.
4. Click on a feature from a different layer.
5. Does not show the right color for the marker in side panel.
**label:** non_process
**text:**
routing from one feature to another does not change contents of side panel describe the bug routing from one feature to another changes the side panel but the color is not correct still the previous color to reproduce steps to reproduce the behavior go to home page click on a feature shows the feature in side panel click on a feature from a different layer does not show the right color for the marker in side panel
**binary_label:** 0


**Unnamed: 0:** 9,655 | **id:** 12,625,149,758 | **type:** IssuesEvent | **created_at:** 2020-06-14 10:22:00
**repo:** Arch666Angel/mods | **repo_url:** https://api.github.com/repos/Arch666Angel/mods
**action:** opened | **title:** Take bio out of beta
**labels:** Angels Bio Processing Impact: Enhancement
**body:**
**Describe the bug**
Seems like bio is more or less to a stable state, maybe still some tweaking needed, but content wise seems to be stable, so we should take it out of beta?
**index:** 1.0
**text_combine:**
Take bio out of beta - **Describe the bug**
Seems like bio is more or less to a stable state, maybe still some tweaking needed, but content wise seems to be stable, so we should take it out of beta?
**label:** process
**text:**
take bio out of beta describe the bug seems like bio is more or less to a stable state maybe still some tweaking needed but content wise seems to be stable so we should take it out of beta
**binary_label:** 1


**Unnamed: 0:** 11,649 | **id:** 4,270,894,315 | **type:** IssuesEvent | **created_at:** 2016-07-13 09:04:28
**repo:** joomla/joomla-cms | **repo_url:** https://api.github.com/repos/joomla/joomla-cms
**action:** closed | **title:** [com_media] media form field gives javascript error
**labels:** No Code Attached Yet
**body:**
#### Steps to reproduce the issue
On a clean installation of Joomla 3.6.0 beta 2 with sample data:
1) Go to administrator
2) Component Banners -> Banners
3) Select any of the banners (for example Shop 1)
4) On details tab remove the current value of image
5) Select a new image on the modal window and insert
6) Check you development tool in Edge:
#### Expected result
No javascript error and save banner
#### Actual result
Saves the banner update, but gives a javascript error:
SCRIPT5007: Object expected
index.php (62,206)
[screen shot 2016-06-26 at 09 30 25](https://issues.joomla.org/uploads/1/c27f9df181fb2051e246e125c112ab43.jpg)
#### System information (as much as possible)
Windows Edge browser
Joomla 3.6.0 Beta 2
#### Additional comments
Also tested in Joomla 3.5.1 with same error
**index:** 1.0
**text_combine:**
[com_media] media form field gives javascript error - #### Steps to reproduce the issue
On a clean installation of Joomla 3.6.0 beta 2 with sample data:
1) Go to administrator
2) Component Banners -> Banners
3) Select any of the banners (for example Shop 1)
4) On details tab remove the current value of image
5) Select a new image on the modal window and insert
6) Check you development tool in Edge:
#### Expected result
No javascript error and save banner
#### Actual result
Saves the banner update, but gives a javascript error:
SCRIPT5007: Object expected
index.php (62,206)
[screen shot 2016-06-26 at 09 30 25](https://issues.joomla.org/uploads/1/c27f9df181fb2051e246e125c112ab43.jpg)
#### System information (as much as possible)
Windows Edge browser
Joomla 3.6.0 Beta 2
#### Additional comments
Also tested in Joomla 3.5.1 with same error
**label:** non_process
**text:**
media form field gives javascript error steps to reproduce the issue on a clean installation of joomla beta with sample data go to administrator component banners banners select any of the banners for example shop on details tab remove the current value of image select a new image on the modal window and insert check you development tool in edge expected result no javascript error and save banner actual result saves the banner update but gives a javascript error object expected index php system information as much as possible windows edge browser joomla beta additional comments also tested in joomla with same error
**binary_label:** 0


**Unnamed: 0:** 270,380 | **id:** 20,599,018,095 | **type:** IssuesEvent | **created_at:** 2022-03-06 00:31:31
**repo:** intelligent-environments-lab/bevo_iaq | **repo_url:** https://api.github.com/repos/intelligent-environments-lab/bevo_iaq
**action:** opened | **title:** BEVO Beacon Log: 46
**labels:** documentation
**body:**
# BEVO Beacon 46
### Sensors/Connections
| Module | Included | Condition | Connections |
| ---| --- | --- | --- |
| CO | 🟢 |Original | Factory |
| CO2 | 🟢 |Original | Factory |
| Fan | 🟢 |Original/Cleaned | Original |
| Light | 🟢 |Original | Factory |
| NO2 | 🔴 |- | - |
| PM | 🟢 |Original | Factory |
| Screen | 🟢 |Original | Factory |
| RTC | 🟢 |Original | Factory |
| TVOC | 🟢 |Original | Factory |
**RPi**: Zero
**PCB**: Hat
**Housing**: 6/7 (not Partition)
**Calibrated**: Yes
**Deployed**: No
**index:** 1.0
**text_combine:**
BEVO Beacon Log: 46 - # BEVO Beacon 46
### Sensors/Connections
| Module | Included | Condition | Connections |
| ---| --- | --- | --- |
| CO | 🟢 |Original | Factory |
| CO2 | 🟢 |Original | Factory |
| Fan | 🟢 |Original/Cleaned | Original |
| Light | 🟢 |Original | Factory |
| NO2 | 🔴 |- | - |
| PM | 🟢 |Original | Factory |
| Screen | 🟢 |Original | Factory |
| RTC | 🟢 |Original | Factory |
| TVOC | 🟢 |Original | Factory |
**RPi**: Zero
**PCB**: Hat
**Housing**: 6/7 (not Partition)
**Calibrated**: Yes
**Deployed**: No
**label:** non_process
**text:**
bevo beacon log bevo beacon sensors connections module included condition connections co 🟢 original factory 🟢 original factory fan 🟢 original cleaned original light 🟢 original factory 🔴 pm 🟢 original factory screen 🟢 original factory rtc 🟢 original factory tvoc 🟢 original factory rpi zero pcb hat housing not partition calibrated yes deployed no
**binary_label:** 0


**Unnamed: 0:** 22,210 | **id:** 30,762,150,656 | **type:** IssuesEvent | **created_at:** 2023-07-29 21:06:32
**repo:** apache/arrow-rs | **repo_url:** https://api.github.com/repos/apache/arrow-rs
**action:** reopened | **title:** Prototype ArrayView Types
**labels:** arrow enhancement development-process
**body:**
**Is your feature request related to a problem or challenge? Please describe what you are trying to do.**
<!--
A clear and concise description of what the problem is. Ex. I'm always frustrated when [...]
(This section helps Arrow developers understand the context and *why* for this feature, in addition to the *what*)
-->
There is ongoing discussion of introducing an ArrayView type to the format - https://lists.apache.org/thread/r28rw5n39jwtvn08oljl09d4q2c1ysvb
We should explore the design space around this, in particular to gather some empirical data as to the impact of introducing such a type.
**Describe the solution you'd like**
<!--
A clear and concise description of what you want to happen.
-->
I would like to prototype an implementation of StringView and explore integrating it into the parquet reader, where it ostensibly could yield to some non-trivial performance improvements
**Describe alternatives you've considered**
<!--
A clear and concise description of any alternative solutions or features you've considered.
-->
**Additional context**
<!--
Add any other context or screenshots about the feature request here.
-->
**index:** 1.0
**text_combine:**
Prototype ArrayView Types - **Is your feature request related to a problem or challenge? Please describe what you are trying to do.**
<!--
A clear and concise description of what the problem is. Ex. I'm always frustrated when [...]
(This section helps Arrow developers understand the context and *why* for this feature, in addition to the *what*)
-->
There is ongoing discussion of introducing an ArrayView type to the format - https://lists.apache.org/thread/r28rw5n39jwtvn08oljl09d4q2c1ysvb
We should explore the design space around this, in particular to gather some empirical data as to the impact of introducing such a type.
**Describe the solution you'd like**
<!--
A clear and concise description of what you want to happen.
-->
I would like to prototype an implementation of StringView and explore integrating it into the parquet reader, where it ostensibly could yield to some non-trivial performance improvements
**Describe alternatives you've considered**
<!--
A clear and concise description of any alternative solutions or features you've considered.
-->
**Additional context**
<!--
Add any other context or screenshots about the feature request here.
-->
**label:** process
**text:**
prototype arrayview types is your feature request related to a problem or challenge please describe what you are trying to do a clear and concise description of what the problem is ex i m always frustrated when this section helps arrow developers understand the context and why for this feature in addition to the what there is ongoing discussion of introducing an arrayview type to the format we should explore the design space around this in particular to gather some empirical data as to the impact of introducing such a type describe the solution you d like a clear and concise description of what you want to happen i would like to prototype an implementation of stringview and explore integrating it into the parquet reader where it ostensibly could yield to some non trivial performance improvements describe alternatives you ve considered a clear and concise description of any alternative solutions or features you ve considered additional context add any other context or screenshots about the feature request here
**binary_label:** 1


**Unnamed: 0:** 14,117 | **id:** 17,014,188,556 | **type:** IssuesEvent | **created_at:** 2021-07-02 09:37:04
**repo:** bazelbuild/bazel | **repo_url:** https://api.github.com/repos/bazelbuild/bazel
**action:** opened | **title:** Status of Bazel 5.0.0-pre.20210623.2
**labels:** P1 release team-XProduct type: process
**body:**
- Expected release date: July 2nd
Task list:
- [x] Pick release baseline: 8b453331163378071f1cfe0ae7c74d551c21b834 with cherrypick 223113c9202e8f338b183d1736d97327d28241ea
- [ ] Create release candidate:
- [ ] Post-submit:
- [ ] Push the release:
- [ ] Update the [release page](https://github.com/bazelbuild/bazel/releases/)
**index:** 1.0
**text_combine:**
Status of Bazel 5.0.0-pre.20210623.2 - - Expected release date: July 2nd
Task list:
- [x] Pick release baseline: 8b453331163378071f1cfe0ae7c74d551c21b834 with cherrypick 223113c9202e8f338b183d1736d97327d28241ea
- [ ] Create release candidate:
- [ ] Post-submit:
- [ ] Push the release:
- [ ] Update the [release page](https://github.com/bazelbuild/bazel/releases/)
**label:** process
**text:**
status of bazel pre expected release date july task list pick release baseline with cherrypick create release candidate post submit push the release update the
**binary_label:** 1


**Unnamed: 0:** 909 | **id:** 3,371,993,215 | **type:** IssuesEvent | **created_at:** 2015-11-23 21:31:38
**repo:** dotnet/corefx | **repo_url:** https://api.github.com/repos/dotnet/corefx
**action:** closed | **title:** Perf_Process.Kill test failed in CI on Windows
**labels:** 2 - In Progress System.Diagnostics.Process
**body:**
http://dotnet-ci.cloudapp.net/job/dotnet_corefx_windows_debug_prtest/6046/console
```
System.InvalidOperationException : Cannot process request because the process (10824) has exited.
Stack Trace:
d:\j\workspace\dotnet_corefx_windows_debug_prtest\src\System.Diagnostics.Process\src\System\Diagnostics\Process.Windows.cs(810,0): at System.Diagnostics.Process.GetProcessHandle(Int32 access, Boolean throwIfExited)
d:\j\workspace\dotnet_corefx_windows_debug_prtest\src\System.Diagnostics.Process\src\System\Diagnostics\Process.Windows.cs(839,0): at System.Diagnostics.Process.GetProcessHandle(Int32 access)
d:\j\workspace\dotnet_corefx_windows_debug_prtest\src\System.Diagnostics.Process\src\System\Diagnostics\Process.Windows.cs(71,0): at System.Diagnostics.Process.Kill()
d:\j\workspace\dotnet_corefx_windows_debug_prtest\src\System.Diagnostics.Process\tests\Performance\Perf.Process.cs(30,0): at System.Diagnostics.Tests.Perf_Process.Kill()
```
**index:** 1.0
**text_combine:**
Perf_Process.Kill test failed in CI on Windows - http://dotnet-ci.cloudapp.net/job/dotnet_corefx_windows_debug_prtest/6046/console
```
System.InvalidOperationException : Cannot process request because the process (10824) has exited.
Stack Trace:
d:\j\workspace\dotnet_corefx_windows_debug_prtest\src\System.Diagnostics.Process\src\System\Diagnostics\Process.Windows.cs(810,0): at System.Diagnostics.Process.GetProcessHandle(Int32 access, Boolean throwIfExited)
d:\j\workspace\dotnet_corefx_windows_debug_prtest\src\System.Diagnostics.Process\src\System\Diagnostics\Process.Windows.cs(839,0): at System.Diagnostics.Process.GetProcessHandle(Int32 access)
d:\j\workspace\dotnet_corefx_windows_debug_prtest\src\System.Diagnostics.Process\src\System\Diagnostics\Process.Windows.cs(71,0): at System.Diagnostics.Process.Kill()
d:\j\workspace\dotnet_corefx_windows_debug_prtest\src\System.Diagnostics.Process\tests\Performance\Perf.Process.cs(30,0): at System.Diagnostics.Tests.Perf_Process.Kill()
```
**label:** process
**text:**
perf process kill test failed in ci on windows system invalidoperationexception cannot process request because the process has exited stack trace d j workspace dotnet corefx windows debug prtest src system diagnostics process src system diagnostics process windows cs at system diagnostics process getprocesshandle access boolean throwifexited d j workspace dotnet corefx windows debug prtest src system diagnostics process src system diagnostics process windows cs at system diagnostics process getprocesshandle access d j workspace dotnet corefx windows debug prtest src system diagnostics process src system diagnostics process windows cs at system diagnostics process kill d j workspace dotnet corefx windows debug prtest src system diagnostics process tests performance perf process cs at system diagnostics tests perf process kill
**binary_label:** 1


**Unnamed: 0:** 9,158 | **id:** 12,217,582,740 | **type:** IssuesEvent | **created_at:** 2020-05-01 17:30:29
**repo:** GoogleCloudPlatform/python-docs-samples | **repo_url:** https://api.github.com/repos/GoogleCloudPlatform/python-docs-samples
**action:** closed | **title:** Create sample scripts for verifying HSM certificates and attestations
**labels:** api: kms type: process
**body:**
Scripts to verify HSM certificates and attestations need to be added to https://cloud.google.com/kms/docs/attest-key#script_for_verifying_an_attestation, but they should be hosted on Github rather than in the documentation.
**index:** 1.0
**text_combine:**
Create sample scripts for verifying HSM certificates and attestations - Scripts to verify HSM certificates and attestations need to be added to https://cloud.google.com/kms/docs/attest-key#script_for_verifying_an_attestation, but they should be hosted on Github rather than in the documentation.
**label:** process
**text:**
create sample scripts for verifying hsm certificates and attestations scripts to verify hsm certificates and attestations need to be added to but they should be hosted on github rather than in the documentation
**binary_label:** 1


**Unnamed: 0:** 10,322 | **id:** 13,161,677,907 | **type:** IssuesEvent | **created_at:** 2020-08-10 20:01:33
**repo:** MicrosoftDocs/azure-devops-docs | **repo_url:** https://api.github.com/repos/MicrosoftDocs/azure-devops-docs
**action:** closed | **title:** ACR 'endpoint myprivate.azurecr.io' not possible w/out preview
**labels:** Pri1 devops-cicd-process/tech devops/prod doc-bug
**body:**
Add notice that accessing private ACR repositories requires an endpoint that is in preview currently.
I was unpleasantly surprised by this and am now having to wait for support to give me access.
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: 3339a2e0-be29-1363-f588-b231d4472c02
* Version Independent ID: 72dd11a3-704d-d0fd-6dfa-cf49f3352de3
* Content: [Container Jobs in Azure Pipelines and TFS - Azure Pipelines](https://docs.microsoft.com/en-us/azure/devops/pipelines/process/container-phases?view=azure-devops#feedback)
* Content Source: [docs/pipelines/process/container-phases.md](https://github.com/MicrosoftDocs/vsts-docs/blob/master/docs/pipelines/process/container-phases.md)
* Product: **devops**
* Technology: **devops-cicd-process**
* GitHub Login: @juliakm
* Microsoft Alias: **jukullam**
**index:** 1.0
**text_combine:**
ACR 'endpoint myprivate.azurecr.io' not possible w/out preview - Add notice that accessing private ACR repositories requires an endpoint that is in preview currently.
I was unpleasantly surprised by this and am now having to wait for support to give me access.
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: 3339a2e0-be29-1363-f588-b231d4472c02
* Version Independent ID: 72dd11a3-704d-d0fd-6dfa-cf49f3352de3
* Content: [Container Jobs in Azure Pipelines and TFS - Azure Pipelines](https://docs.microsoft.com/en-us/azure/devops/pipelines/process/container-phases?view=azure-devops#feedback)
* Content Source: [docs/pipelines/process/container-phases.md](https://github.com/MicrosoftDocs/vsts-docs/blob/master/docs/pipelines/process/container-phases.md)
* Product: **devops**
* Technology: **devops-cicd-process**
* GitHub Login: @juliakm
* Microsoft Alias: **jukullam**
**label:** process
**text:**
acr endpoint myprivate azurecr io not possible w out preview add notice that accessing private acr repositories requires an endpoint that is in preview currently i was unpleasantly surprised by this and am now having to wait for support to give me access document details ⚠ do not edit this section it is required for docs microsoft com ➟ github issue linking id version independent id content content source product devops technology devops cicd process github login juliakm microsoft alias jukullam
**binary_label:** 1


**Unnamed: 0:** 19,461 | **id:** 25,753,179,015 | **type:** IssuesEvent | **created_at:** 2022-12-08 14:38:31
**repo:** googleapis/google-cloud-dotnet | **repo_url:** https://api.github.com/repos/googleapis/google-cloud-dotnet
**action:** closed | **title:** Enable UseSelfSignedJwts by default in Storage/Translation
**labels:** api: storage api: translation type: process
**body:**
Once GAX 2.3.0 is released, we should modify StorageClientBuilder and TranslationClientBuilder (and AdvancedTranslationClientBuilder) to enable the use of self-signed JWTs with scopes by default.
We can't do this *just* by default, as BigQuery doesn't appear to support self-signed JWTs with scopes at the moment, and we don't want to break users if they just update GAX. (Even if we explicitly disable it in the BQ library in an update, that won't help users who update GAX but not BQ.)
**index:** 1.0
**text_combine:**
Enable UseSelfSignedJwts by default in Storage/Translation - Once GAX 2.3.0 is released, we should modify StorageClientBuilder and TranslationClientBuilder (and AdvancedTranslationClientBuilder) to enable the use of self-signed JWTs with scopes by default.
We can't do this *just* by default, as BigQuery doesn't appear to support self-signed JWTs with scopes at the moment, and we don't want to break users if they just update GAX. (Even if we explicitly disable it in the BQ library in an update, that won't help users who update GAX but not BQ.)
**label:** process
**text:**
enable useselfsignedjwts by default in storage translation once gax is released we should modify storageclientbuilder and translationclientbuilder and advancedtranslationclientbuilder to enable the use of self signed jwts with scopes by default we can t do this just by default as bigquery doesn t appear to support self signed jwts with scopes at the moment and we don t want to break users if they just update gax even if we explicitly disable it in the bq library in an update that won t help users who update gax but not bq
**binary_label:** 1


**Unnamed: 0:** 17,462 | **id:** 23,286,430,702 | **type:** IssuesEvent | **created_at:** 2022-08-05 17:01:17
**repo:** googleapis/google-cloud-php | **repo_url:** https://api.github.com/repos/googleapis/google-cloud-php
**action:** opened | **title:** Warning: a recent release failed
**labels:** type: process
**body:**
The following release PRs may have failed:
* #5427 - The release job is 'autorelease: tagged', but expected 'autorelease: published'.
* #5370 - The release job is 'autorelease: tagged', but expected 'autorelease: published'.
* #5347 - The release job is 'autorelease: tagged', but expected 'autorelease: published'.
**index:** 1.0
**text_combine:**
Warning: a recent release failed - The following release PRs may have failed:
* #5427 - The release job is 'autorelease: tagged', but expected 'autorelease: published'.
* #5370 - The release job is 'autorelease: tagged', but expected 'autorelease: published'.
* #5347 - The release job is 'autorelease: tagged', but expected 'autorelease: published'.
**label:** process
**text:**
warning a recent release failed the following release prs may have failed the release job is autorelease tagged but expected autorelease published the release job is autorelease tagged but expected autorelease published the release job is autorelease tagged but expected autorelease published
**binary_label:** 1


**Unnamed: 0:** 39,250 | **id:** 12,651,379,026 | **type:** IssuesEvent | **created_at:** 2020-06-17 00:07:50
**repo:** rhari26/RailsDemo | **repo_url:** https://api.github.com/repos/rhari26/RailsDemo
**action:** opened | **title:** CVE-2019-16770 (High) detected in puma-3.6.2.gem
**labels:** security vulnerability
**body:**
## CVE-2019-16770 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>puma-3.6.2.gem</b></p></summary>
<p>Puma is a simple, fast, threaded, and highly concurrent HTTP 1.1 server for Ruby/Rack applications. Puma is intended for use in both development and production environments. In order to get the best throughput, it is highly recommended that you use a Ruby implementation with real threads like Rubinius or JRuby.</p>
<p>Library home page: <a href="https://rubygems.org/gems/puma-3.6.2.gem">https://rubygems.org/gems/puma-3.6.2.gem</a></p>
<p>Path to dependency file: /tmp/ws-scm/RailsDemo/Gemfile.lock</p>
<p>Path to vulnerable library: ms/2.5.0/cache/puma-3.6.2.gem</p>
<p>
Dependency Hierarchy:
- :x: **puma-3.6.2.gem** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/rhari26/RailsDemo/commit/516c7c469d1211d9b4e132c4780a1c93beb804ae">516c7c469d1211d9b4e132c4780a1c93beb804ae</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
In Puma before versions 3.12.2 and 4.3.1, a poorly-behaved client could use keepalive requests to monopolize Puma's reactor and create a denial of service attack. If more keepalive connections to Puma are opened than there are threads available, additional connections will wait permanently if the attacker sends requests frequently enough. This vulnerability is patched in Puma 4.3.1 and 3.12.2.
<p>Publish Date: 2019-12-05
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-16770>CVE-2019-16770</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-16770">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-16770</a></p>
<p>Release Date: 2019-12-05</p>
<p>Fix Resolution: v4.3.1</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
**index:** True
**text_combine:**
CVE-2019-16770 (High) detected in puma-3.6.2.gem - ## CVE-2019-16770 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>puma-3.6.2.gem</b></p></summary>
<p>Puma is a simple, fast, threaded, and highly concurrent HTTP 1.1 server for Ruby/Rack applications. Puma is intended for use in both development and production environments. In order to get the best throughput, it is highly recommended that you use a Ruby implementation with real threads like Rubinius or JRuby.</p>
<p>Library home page: <a href="https://rubygems.org/gems/puma-3.6.2.gem">https://rubygems.org/gems/puma-3.6.2.gem</a></p>
<p>Path to dependency file: /tmp/ws-scm/RailsDemo/Gemfile.lock</p>
<p>Path to vulnerable library: ms/2.5.0/cache/puma-3.6.2.gem</p>
<p>
Dependency Hierarchy:
- :x: **puma-3.6.2.gem** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/rhari26/RailsDemo/commit/516c7c469d1211d9b4e132c4780a1c93beb804ae">516c7c469d1211d9b4e132c4780a1c93beb804ae</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
In Puma before versions 3.12.2 and 4.3.1, a poorly-behaved client could use keepalive requests to monopolize Puma's reactor and create a denial of service attack. If more keepalive connections to Puma are opened than there are threads available, additional connections will wait permanently if the attacker sends requests frequently enough. This vulnerability is patched in Puma 4.3.1 and 3.12.2.
<p>Publish Date: 2019-12-05
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-16770>CVE-2019-16770</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-16770">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-16770</a></p>
<p>Release Date: 2019-12-05</p>
<p>Fix Resolution: v4.3.1</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
**label:** non_process
**text:**
cve high detected in puma gem cve high severity vulnerability vulnerable library puma gem puma is a simple fast threaded and highly concurrent http server for ruby rack applications puma is intended for use in both development and production environments in order to get the best throughput it is highly recommended that you use a ruby implementation with real threads like rubinius or jruby library home page a href path to dependency file tmp ws scm railsdemo gemfile lock path to vulnerable library ms cache puma gem dependency hierarchy x puma gem vulnerable library found in head commit a href vulnerability details in puma before versions and a poorly behaved client could use keepalive requests to monopolize puma s reactor and create a denial of service attack if more keepalive connections to puma are opened than there are threads available additional connections will wait permanently if the attacker sends requests frequently enough this vulnerability is patched in puma and publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with whitesource
**binary_label:** 0

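With a cleaned `text` column and a `binary_label` column like the rows above (1 for process issues, 0 otherwise), an obvious use of this dataset is a binary text classifier. The sketch below is purely illustrative, assuming the data has been loaded into the pandas DataFrame `df` from the earlier sketch; it is not a pipeline described by the dataset itself:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

# Illustrative baseline: TF-IDF features over the `text` column,
# predicting process (1) vs non_process (0) from binary_label.
X_train, X_test, y_train, y_test = train_test_split(
    df["text"], df["binary_label"], test_size=0.2, random_state=42,
    stratify=df["binary_label"])

clf = make_pipeline(TfidfVectorizer(min_df=2, ngram_range=(1, 2)),
                    LogisticRegression(max_iter=1000))
clf.fit(X_train, y_train)
print(classification_report(y_test, clf.predict(X_test)))
```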

**Unnamed: 0:** 21,084 | **id:** 28,038,948,132 | **type:** IssuesEvent | **created_at:** 2023-03-28 16:59:26
**repo:** metabase/metabase | **repo_url:** https://api.github.com/repos/metabase/metabase
**action:** closed | **title:** When Sandbox is granted to linked table column, but no access to the linked table, then "dirty" queries fail
**labels:** Type:Bug Priority:P3 Querying/Processor Administration/Permissions Querying/Nested Queries Difficulty:Easy .Backend .Reproduced Administration/Data Sandboxes .Product Input Needed .Team/FunPolice :police_officer:
**body:**
**Describe the bug**
If sandbox grant access is set to a linked table column, but without granting access to that table explicitly, then all "dirty" queries will result in `You do not have permissions to run this query` until the query is saved (and refreshed query/browser).
**To Reproduce**
1. Admin > People > create user "U1" with attribute `user_id`=`1`
2. Admin > Permissions > Data > Sample Dataset > revoke all access, and set sandbox access on Orders to "People.ID"=`user_id`

3. Login as "U1" and try to view Browse Data > Sample Dataset > Orders - will result in error `You do not have permissions to run this query`
<details><summary>Full stacktrace</summary>
```
2021-03-09 17:15:56,748 ERROR middleware.catch-exceptions :: Error processing query: null
{:database_id 1,
:started_at #t "2021-03-09T17:15:55.923739+01:00[Europe/Copenhagen]",
:error_type :missing-required-permissions,
:json_query
{:database 1,
:query {:source-table 2},
:type "query",
:parameters [],
:middleware {:js-int-to-string? true, :add-default-userland-constraints? true}},
:native nil,
:status :failed,
:class clojure.lang.ExceptionInfo,
:stacktrace
["--> query_processor.middleware.permissions$perms_exception.invokeStatic(permissions.clj:34)"
"query_processor.middleware.permissions$perms_exception.invoke(permissions.clj:33)"
"query_processor.middleware.permissions$fn__46263$check_ad_hoc_query_perms__46268$fn__46272.invoke(permissions.clj:54)"
"query_processor.middleware.permissions$fn__46263$check_ad_hoc_query_perms__46268.invoke(permissions.clj:43)"
"query_processor.middleware.permissions$fn__46300$check_query_permissions_STAR___46305$fn__46306.invoke(permissions.clj:65)"
"query_processor.middleware.permissions$fn__46300$check_query_permissions_STAR___46305.invoke(permissions.clj:58)"
"query_processor.middleware.permissions$check_query_permissions$fn__46319.invoke(permissions.clj:74)"
"query_processor.middleware.pre_alias_aggregations$pre_alias_aggregations$fn__47880.invoke(pre_alias_aggregations.clj:40)"
"query_processor.middleware.cumulative_aggregations$handle_cumulative_aggregations$fn__46517.invoke(cumulative_aggregations.clj:60)"
"metabase_enterprise.sandbox.query_processor.middleware.row_level_restrictions$apply_row_level_permissions$fn__49452.invoke(row_level_restrictions.clj:331)"
"query_processor.middleware.resolve_joined_fields$resolve_joined_fields$fn__48193.invoke(resolve_joined_fields.clj:94)"
"query_processor.middleware.resolve_joins$resolve_joins$fn__48498.invoke(resolve_joins.clj:178)"
"query_processor.middleware.add_implicit_joins$add_implicit_joins$fn__44842.invoke(add_implicit_joins.clj:181)"
"query_processor.middleware.large_int_id$convert_id_to_string$fn__47153.invoke(large_int_id.clj:44)"
"query_processor.middleware.format_rows$format_rows$fn__47133.invoke(format_rows.clj:74)"
"query_processor.middleware.desugar$desugar$fn__46583.invoke(desugar.clj:21)"
"query_processor.middleware.binning$update_binning_strategy$fn__45608.invoke(binning.clj:228)"
"query_processor.middleware.resolve_fields$resolve_fields$fn__46126.invoke(resolve_fields.clj:24)"
"query_processor.middleware.add_dimension_projections$add_remapping$fn__44472.invoke(add_dimension_projections.clj:316)"
"query_processor.middleware.add_implicit_clauses$add_implicit_clauses$fn__44703.invoke(add_implicit_clauses.clj:146)"
"metabase_enterprise.sandbox.query_processor.middleware.row_level_restrictions$apply_row_level_permissions$fn__49452.invoke(row_level_restrictions.clj:326)"
"query_processor.middleware.upgrade_field_literals$upgrade_field_literals$fn__48928.invoke(upgrade_field_literals.clj:45)"
"query_processor.middleware.add_source_metadata$add_source_metadata_for_source_queries$fn__44995.invoke(add_source_metadata.clj:122)"
"metabase_enterprise.sandbox.query_processor.middleware.column_level_perms_check$maybe_apply_column_level_perms_check$fn__48969.invoke(column_level_perms_check.clj:25)"
"query_processor.middleware.reconcile_breakout_and_order_by_bucketing$reconcile_breakout_and_order_by_bucketing$fn__48077.invoke(reconcile_breakout_and_order_by_bucketing.clj:97)"
"query_processor.middleware.auto_bucket_datetimes$auto_bucket_datetimes$fn__45195.invoke(auto_bucket_datetimes.clj:139)"
"query_processor.middleware.resolve_source_table$resolve_source_tables$fn__46173.invoke(resolve_source_table.clj:45)"
"query_processor.middleware.parameters$substitute_parameters$fn__47862.invoke(parameters.clj:111)"
"query_processor.middleware.resolve_referenced$resolve_referenced_card_resources$fn__46225.invoke(resolve_referenced.clj:79)"
"query_processor.middleware.expand_macros$expand_macros$fn__46839.invoke(expand_macros.clj:155)"
"query_processor.middleware.add_timezone_info$add_timezone_info$fn__45004.invoke(add_timezone_info.clj:15)"
"query_processor.middleware.splice_params_in_response$splice_params_in_response$fn__48864.invoke(splice_params_in_response.clj:32)"
"query_processor.middleware.resolve_database_and_driver$resolve_database_and_driver$fn__48088$fn__48092.invoke(resolve_database_and_driver.clj:31)"
"driver$do_with_driver.invokeStatic(driver.clj:60)"
"driver$do_with_driver.invoke(driver.clj:56)"
"query_processor.middleware.resolve_database_and_driver$resolve_database_and_driver$fn__48088.invoke(resolve_database_and_driver.clj:25)"
"query_processor.middleware.fetch_source_query$resolve_card_id_source_tables$fn__47079.invoke(fetch_source_query.clj:274)"
"query_processor.middleware.store$initialize_store$fn__48873$fn__48874.invoke(store.clj:11)"
"query_processor.store$do_with_store.invokeStatic(store.clj:44)"
"query_processor.store$do_with_store.invoke(store.clj:38)"
"query_processor.middleware.store$initialize_store$fn__48873.invoke(store.clj:10)"
"query_processor.middleware.validate$validate_query$fn__48935.invoke(validate.clj:10)"
"query_processor.middleware.normalize_query$normalize$fn__47205.invoke(normalize_query.clj:22)"
"query_processor.middleware.add_rows_truncated$add_rows_truncated$fn__44860.invoke(add_rows_truncated.clj:35)"
"metabase_enterprise.audit.query_processor.middleware.handle_audit_queries$handle_internal_queries$fn__31308.invoke(handle_audit_queries.clj:162)"
"query_processor.middleware.results_metadata$record_and_return_metadata_BANG_$fn__48849.invoke(results_metadata.clj:147)"
"query_processor.middleware.constraints$add_default_userland_constraints$fn__46460.invoke(constraints.clj:42)"
"query_processor.middleware.process_userland_query$process_userland_query$fn__47951.invoke(process_userland_query.clj:135)"
"query_processor.middleware.catch_exceptions$catch_exceptions$fn__46403.invoke(catch_exceptions.clj:173)"
"query_processor.reducible$async_qp$qp_STAR___33116$thunk__33117.invoke(reducible.clj:103)"
"query_processor.reducible$async_qp$qp_STAR___33116.invoke(reducible.clj:109)"
"query_processor.reducible$sync_qp$qp_STAR___33125$fn__33128.invoke(reducible.clj:135)"
"query_processor.reducible$sync_qp$qp_STAR___33125.invoke(reducible.clj:134)"
"query_processor$process_userland_query.invokeStatic(query_processor.clj:237)"
"query_processor$process_userland_query.doInvoke(query_processor.clj:233)"
"query_processor$fn__49498$process_query_and_save_execution_BANG___49507$fn__49510.invoke(query_processor.clj:249)"
"query_processor$fn__49498$process_query_and_save_execution_BANG___49507.invoke(query_processor.clj:241)"
"query_processor$fn__49542$process_query_and_save_with_max_results_constraints_BANG___49551$fn__49554.invoke(query_processor.clj:261)"
"query_processor$fn__49542$process_query_and_save_with_max_results_constraints_BANG___49551.invoke(query_processor.clj:254)"
"api.dataset$run_query_async$fn__63828.invoke(dataset.clj:56)"
"query_processor.streaming$streaming_response_STAR_$fn__63807$fn__63808.invoke(streaming.clj:72)"
"query_processor.streaming$streaming_response_STAR_$fn__63807.invoke(streaming.clj:71)"
"async.streaming_response$do_f_STAR_.invokeStatic(streaming_response.clj:65)"
"async.streaming_response$do_f_STAR_.invoke(streaming_response.clj:63)"
"async.streaming_response$do_f_async$fn__17489.invoke(streaming_response.clj:84)"],
:context :ad-hoc,
:error "You do not have permissions to run this query.",
:row_count 0,
:running_time 0,
:preprocessed nil,
:ex-data
{:type :missing-required-permissions,
:required-permissions #{"/db/1/schema/PUBLIC/table/3/query/"},
:actual-permissions
#{"/db/1/schema/PUBLIC/table/2/read/" "/collection/2/" "/collection/root/" "/collection/6/read/"
"/db/1/schema/PUBLIC/table/2/query/segmented/" "/collection/8/"},
:card-id nil,
:permissions-error? true},
:data {:rows [], :cols []}}
```
</details>
4. While on that error, save the question, and refresh query/browser (on the question ID, non-dirty) - will show results (all products with ID=1)
5. While on the question, do anything that would "dirty" it like filtering/summarizing, which again will show the error, but clicking "Save question" and refreshing query/browser will then show the filtered/summarized results.
**Information about your Metabase Installation:**
Tested 1.34.1 thru 1.38.1
**Additional context**
Previously EE454 https://github.com/metabase/metabase-enterprise/issues/454
Related to #8765
**index:** 1.0
**text_combine:**
When Sandbox is granted to linked table column, but no access to the linked table, then "dirty" queries fail - **Describe the bug**
If sandbox grant access is set to a linked table column, but without granting access to that table explicitly, then all "dirty" queries will result in `You do not have permissions to run this query` until the query is saved (and refreshed query/browser).
**To Reproduce**
1. Admin > People > create user "U1" with attribute `user_id`=`1`
2. Admin > Permissions > Data > Sample Dataset > revoke all access, and set sandbox access on Orders to "People.ID"=`user_id`

3. Login as "U1" and try to view Browse Data > Sample Dataset > Orders - will result in error `You do not have permissions to run this query`
<details><summary>Full stacktrace</summary>
```
2021-03-09 17:15:56,748 ERROR middleware.catch-exceptions :: Error processing query: null
{:database_id 1,
:started_at #t "2021-03-09T17:15:55.923739+01:00[Europe/Copenhagen]",
:error_type :missing-required-permissions,
:json_query
{:database 1,
:query {:source-table 2},
:type "query",
:parameters [],
:middleware {:js-int-to-string? true, :add-default-userland-constraints? true}},
:native nil,
:status :failed,
:class clojure.lang.ExceptionInfo,
:stacktrace
["--> query_processor.middleware.permissions$perms_exception.invokeStatic(permissions.clj:34)"
"query_processor.middleware.permissions$perms_exception.invoke(permissions.clj:33)"
"query_processor.middleware.permissions$fn__46263$check_ad_hoc_query_perms__46268$fn__46272.invoke(permissions.clj:54)"
"query_processor.middleware.permissions$fn__46263$check_ad_hoc_query_perms__46268.invoke(permissions.clj:43)"
"query_processor.middleware.permissions$fn__46300$check_query_permissions_STAR___46305$fn__46306.invoke(permissions.clj:65)"
"query_processor.middleware.permissions$fn__46300$check_query_permissions_STAR___46305.invoke(permissions.clj:58)"
"query_processor.middleware.permissions$check_query_permissions$fn__46319.invoke(permissions.clj:74)"
"query_processor.middleware.pre_alias_aggregations$pre_alias_aggregations$fn__47880.invoke(pre_alias_aggregations.clj:40)"
"query_processor.middleware.cumulative_aggregations$handle_cumulative_aggregations$fn__46517.invoke(cumulative_aggregations.clj:60)"
"metabase_enterprise.sandbox.query_processor.middleware.row_level_restrictions$apply_row_level_permissions$fn__49452.invoke(row_level_restrictions.clj:331)"
"query_processor.middleware.resolve_joined_fields$resolve_joined_fields$fn__48193.invoke(resolve_joined_fields.clj:94)"
"query_processor.middleware.resolve_joins$resolve_joins$fn__48498.invoke(resolve_joins.clj:178)"
"query_processor.middleware.add_implicit_joins$add_implicit_joins$fn__44842.invoke(add_implicit_joins.clj:181)"
"query_processor.middleware.large_int_id$convert_id_to_string$fn__47153.invoke(large_int_id.clj:44)"
"query_processor.middleware.format_rows$format_rows$fn__47133.invoke(format_rows.clj:74)"
"query_processor.middleware.desugar$desugar$fn__46583.invoke(desugar.clj:21)"
"query_processor.middleware.binning$update_binning_strategy$fn__45608.invoke(binning.clj:228)"
"query_processor.middleware.resolve_fields$resolve_fields$fn__46126.invoke(resolve_fields.clj:24)"
"query_processor.middleware.add_dimension_projections$add_remapping$fn__44472.invoke(add_dimension_projections.clj:316)"
"query_processor.middleware.add_implicit_clauses$add_implicit_clauses$fn__44703.invoke(add_implicit_clauses.clj:146)"
"metabase_enterprise.sandbox.query_processor.middleware.row_level_restrictions$apply_row_level_permissions$fn__49452.invoke(row_level_restrictions.clj:326)"
"query_processor.middleware.upgrade_field_literals$upgrade_field_literals$fn__48928.invoke(upgrade_field_literals.clj:45)"
"query_processor.middleware.add_source_metadata$add_source_metadata_for_source_queries$fn__44995.invoke(add_source_metadata.clj:122)"
"metabase_enterprise.sandbox.query_processor.middleware.column_level_perms_check$maybe_apply_column_level_perms_check$fn__48969.invoke(column_level_perms_check.clj:25)"
"query_processor.middleware.reconcile_breakout_and_order_by_bucketing$reconcile_breakout_and_order_by_bucketing$fn__48077.invoke(reconcile_breakout_and_order_by_bucketing.clj:97)"
"query_processor.middleware.auto_bucket_datetimes$auto_bucket_datetimes$fn__45195.invoke(auto_bucket_datetimes.clj:139)"
"query_processor.middleware.resolve_source_table$resolve_source_tables$fn__46173.invoke(resolve_source_table.clj:45)"
"query_processor.middleware.parameters$substitute_parameters$fn__47862.invoke(parameters.clj:111)"
"query_processor.middleware.resolve_referenced$resolve_referenced_card_resources$fn__46225.invoke(resolve_referenced.clj:79)"
"query_processor.middleware.expand_macros$expand_macros$fn__46839.invoke(expand_macros.clj:155)"
"query_processor.middleware.add_timezone_info$add_timezone_info$fn__45004.invoke(add_timezone_info.clj:15)"
"query_processor.middleware.splice_params_in_response$splice_params_in_response$fn__48864.invoke(splice_params_in_response.clj:32)"
"query_processor.middleware.resolve_database_and_driver$resolve_database_and_driver$fn__48088$fn__48092.invoke(resolve_database_and_driver.clj:31)"
"driver$do_with_driver.invokeStatic(driver.clj:60)"
"driver$do_with_driver.invoke(driver.clj:56)"
"query_processor.middleware.resolve_database_and_driver$resolve_database_and_driver$fn__48088.invoke(resolve_database_and_driver.clj:25)"
"query_processor.middleware.fetch_source_query$resolve_card_id_source_tables$fn__47079.invoke(fetch_source_query.clj:274)"
"query_processor.middleware.store$initialize_store$fn__48873$fn__48874.invoke(store.clj:11)"
"query_processor.store$do_with_store.invokeStatic(store.clj:44)"
"query_processor.store$do_with_store.invoke(store.clj:38)"
"query_processor.middleware.store$initialize_store$fn__48873.invoke(store.clj:10)"
"query_processor.middleware.validate$validate_query$fn__48935.invoke(validate.clj:10)"
"query_processor.middleware.normalize_query$normalize$fn__47205.invoke(normalize_query.clj:22)"
"query_processor.middleware.add_rows_truncated$add_rows_truncated$fn__44860.invoke(add_rows_truncated.clj:35)"
"metabase_enterprise.audit.query_processor.middleware.handle_audit_queries$handle_internal_queries$fn__31308.invoke(handle_audit_queries.clj:162)"
"query_processor.middleware.results_metadata$record_and_return_metadata_BANG_$fn__48849.invoke(results_metadata.clj:147)"
"query_processor.middleware.constraints$add_default_userland_constraints$fn__46460.invoke(constraints.clj:42)"
"query_processor.middleware.process_userland_query$process_userland_query$fn__47951.invoke(process_userland_query.clj:135)"
"query_processor.middleware.catch_exceptions$catch_exceptions$fn__46403.invoke(catch_exceptions.clj:173)"
"query_processor.reducible$async_qp$qp_STAR___33116$thunk__33117.invoke(reducible.clj:103)"
"query_processor.reducible$async_qp$qp_STAR___33116.invoke(reducible.clj:109)"
"query_processor.reducible$sync_qp$qp_STAR___33125$fn__33128.invoke(reducible.clj:135)"
"query_processor.reducible$sync_qp$qp_STAR___33125.invoke(reducible.clj:134)"
"query_processor$process_userland_query.invokeStatic(query_processor.clj:237)"
"query_processor$process_userland_query.doInvoke(query_processor.clj:233)"
"query_processor$fn__49498$process_query_and_save_execution_BANG___49507$fn__49510.invoke(query_processor.clj:249)"
"query_processor$fn__49498$process_query_and_save_execution_BANG___49507.invoke(query_processor.clj:241)"
"query_processor$fn__49542$process_query_and_save_with_max_results_constraints_BANG___49551$fn__49554.invoke(query_processor.clj:261)"
"query_processor$fn__49542$process_query_and_save_with_max_results_constraints_BANG___49551.invoke(query_processor.clj:254)"
"api.dataset$run_query_async$fn__63828.invoke(dataset.clj:56)"
"query_processor.streaming$streaming_response_STAR_$fn__63807$fn__63808.invoke(streaming.clj:72)"
"query_processor.streaming$streaming_response_STAR_$fn__63807.invoke(streaming.clj:71)"
"async.streaming_response$do_f_STAR_.invokeStatic(streaming_response.clj:65)"
"async.streaming_response$do_f_STAR_.invoke(streaming_response.clj:63)"
"async.streaming_response$do_f_async$fn__17489.invoke(streaming_response.clj:84)"],
:context :ad-hoc,
:error "You do not have permissions to run this query.",
:row_count 0,
:running_time 0,
:preprocessed nil,
:ex-data
{:type :missing-required-permissions,
:required-permissions #{"/db/1/schema/PUBLIC/table/3/query/"},
:actual-permissions
#{"/db/1/schema/PUBLIC/table/2/read/" "/collection/2/" "/collection/root/" "/collection/6/read/"
"/db/1/schema/PUBLIC/table/2/query/segmented/" "/collection/8/"},
:card-id nil,
:permissions-error? true},
:data {:rows [], :cols []}}
```
</details>
4. While on that error, save the question, and refresh query/browser (on the question ID, non-dirty) - will show results (all products with ID=1)
5. While on the question, do anything that would "dirty" it like filtering/summarizing, which again will show the error, but clicking "Save question" and refreshing query/browser will then show the filtered/summarized results.
**Information about your Metabase Installation:**
Tested 1.34.1 thru 1.38.1
**Additional context**
Previously EE454 https://github.com/metabase/metabase-enterprise/issues/454
Related to #8765
|
process
|
when sandbox is granted to linked table column but no access to the linked table then dirty queries fail describe the bug if sandbox grant access is set to a linked table column but without granting access to that table explicitly then all dirty queries will result in you do not have permissions to run this query until the query is saved and refreshed query browser to reproduce admin people create user with attribute user id admin permissions data sample dataset revoke all access and set sandbox access on orders to people id user id login as and try to view browse data sample dataset orders will result in error you do not have permissions to run this query full stacktrace error middleware catch exceptions error processing query null database id started at t error type missing required permissions json query database query source table type query parameters middleware js int to string true add default userland constraints true native nil status failed class clojure lang exceptioninfo stacktrace query processor middleware permissions perms exception invokestatic permissions clj query processor middleware permissions perms exception invoke permissions clj query processor middleware permissions fn check ad hoc query perms fn invoke permissions clj query processor middleware permissions fn check ad hoc query perms invoke permissions clj query processor middleware permissions fn check query permissions star fn invoke permissions clj query processor middleware permissions fn check query permissions star invoke permissions clj query processor middleware permissions check query permissions fn invoke permissions clj query processor middleware pre alias aggregations pre alias aggregations fn invoke pre alias aggregations clj query processor middleware cumulative aggregations handle cumulative aggregations fn invoke cumulative aggregations clj metabase enterprise sandbox query processor middleware row level restrictions apply row level permissions fn invoke row level restrictions clj query processor middleware resolve joined fields resolve joined fields fn invoke resolve joined fields clj query processor middleware resolve joins resolve joins fn invoke resolve joins clj query processor middleware add implicit joins add implicit joins fn invoke add implicit joins clj query processor middleware large int id convert id to string fn invoke large int id clj query processor middleware format rows format rows fn invoke format rows clj query processor middleware desugar desugar fn invoke desugar clj query processor middleware binning update binning strategy fn invoke binning clj query processor middleware resolve fields resolve fields fn invoke resolve fields clj query processor middleware add dimension projections add remapping fn invoke add dimension projections clj query processor middleware add implicit clauses add implicit clauses fn invoke add implicit clauses clj metabase enterprise sandbox query processor middleware row level restrictions apply row level permissions fn invoke row level restrictions clj query processor middleware upgrade field literals upgrade field literals fn invoke upgrade field literals clj query processor middleware add source metadata add source metadata for source queries fn invoke add source metadata clj metabase enterprise sandbox query processor middleware column level perms check maybe apply column level perms check fn invoke column level perms check clj query processor middleware reconcile breakout and order by bucketing reconcile breakout and order by bucketing fn invoke 
reconcile breakout and order by bucketing clj query processor middleware auto bucket datetimes auto bucket datetimes fn invoke auto bucket datetimes clj query processor middleware resolve source table resolve source tables fn invoke resolve source table clj query processor middleware parameters substitute parameters fn invoke parameters clj query processor middleware resolve referenced resolve referenced card resources fn invoke resolve referenced clj query processor middleware expand macros expand macros fn invoke expand macros clj query processor middleware add timezone info add timezone info fn invoke add timezone info clj query processor middleware splice params in response splice params in response fn invoke splice params in response clj query processor middleware resolve database and driver resolve database and driver fn fn invoke resolve database and driver clj driver do with driver invokestatic driver clj driver do with driver invoke driver clj query processor middleware resolve database and driver resolve database and driver fn invoke resolve database and driver clj query processor middleware fetch source query resolve card id source tables fn invoke fetch source query clj query processor middleware store initialize store fn fn invoke store clj query processor store do with store invokestatic store clj query processor store do with store invoke store clj query processor middleware store initialize store fn invoke store clj query processor middleware validate validate query fn invoke validate clj query processor middleware normalize query normalize fn invoke normalize query clj query processor middleware add rows truncated add rows truncated fn invoke add rows truncated clj metabase enterprise audit query processor middleware handle audit queries handle internal queries fn invoke handle audit queries clj query processor middleware results metadata record and return metadata bang fn invoke results metadata clj query processor middleware constraints add default userland constraints fn invoke constraints clj query processor middleware process userland query process userland query fn invoke process userland query clj query processor middleware catch exceptions catch exceptions fn invoke catch exceptions clj query processor reducible async qp qp star thunk invoke reducible clj query processor reducible async qp qp star invoke reducible clj query processor reducible sync qp qp star fn invoke reducible clj query processor reducible sync qp qp star invoke reducible clj query processor process userland query invokestatic query processor clj query processor process userland query doinvoke query processor clj query processor fn process query and save execution bang fn invoke query processor clj query processor fn process query and save execution bang invoke query processor clj query processor fn process query and save with max results constraints bang fn invoke query processor clj query processor fn process query and save with max results constraints bang invoke query processor clj api dataset run query async fn invoke dataset clj query processor streaming streaming response star fn fn invoke streaming clj query processor streaming streaming response star fn invoke streaming clj async streaming response do f star invokestatic streaming response clj async streaming response do f star invoke streaming response clj async streaming response do f async fn invoke streaming response clj context ad hoc error you do not have permissions to run this query row count running time preprocessed nil ex data 
type missing required permissions required permissions db schema public table query actual permissions db schema public table read collection collection root collection read db schema public table query segmented collection card id nil permissions error true data rows cols while on that error save the question and refresh query browser on the question id non dirty will show results all products with id while on the question do anything that would dirty it like filtering summarizing which again will show the error but clicking save question and refreshing query browser will then show the filtered summarized results information about your metabase installation tested thru additional context previously related to
| 1
|
14,410
| 17,462,336,329
|
IssuesEvent
|
2021-08-06 12:21:12
|
arcus-azure/arcus.messaging
|
https://api.github.com/repos/arcus-azure/arcus.messaging
|
opened
|
Provide access to the message router options via the message pump options for custom message router implementations
|
enhancement area:message-processing
|
**Is your feature request related to a problem? Please describe.**
When implementing a custom `IAzureServiceBusMessageRouter`, we can't use the same `configureMessagePump` function (message router options are part of the message pump options in this case), because the message router options are `internal` in the message pump options.
**Describe the solution you'd like**
We should consider making this `MessageRouterOptions` property either `public` or something along those lines, so we can re-use the `configureMessagePump` function and therefore configure the message router in the same way.
**Additional context**
https://github.com/arcus-azure/arcus.messaging/blob/7b46969afb2638563152878203e10bd1099d28fb/src/Arcus.Messaging.Pumps.ServiceBus/Configuration/AzureServiceBusMessagePumpOptions.cs#L112
|
1.0
|
Provide access to the message router options via the message pump options for custom message router implementations - **Is your feature request related to a problem? Please describe.**
When implementing a custom `IAzureServiceBusMessageRouter`, we can't use the same `configureMessagePump` function (message router options are part of the message pump options in this case), because the message router options are `internal` in the message pump options.
**Describe the solution you'd like**
We should consider making this `MessageRouterOptions` property either `public` or something in this line, so we can re-use the `configureMessagePump` function and therefore configure the message router the same way.
**Additional context**
https://github.com/arcus-azure/arcus.messaging/blob/7b46969afb2638563152878203e10bd1099d28fb/src/Arcus.Messaging.Pumps.ServiceBus/Configuration/AzureServiceBusMessagePumpOptions.cs#L112
|
process
|
provide access to the message router options via the message pump options for custom message router implementations is your feature request related to a problem please describe when implementing a custom iazureservicebusmessagerouter we can t use the same configuremessagepump function message router options are part of the message pump options in this case because the message router options are internal in the message pump options describe the solution you d like we should consider making this messagerouteroptions property either public or something in this line so we can re use the configuremessagepump function and therefore configure the message router the same way additional context
| 1
|
125,590
| 12,263,597,232
|
IssuesEvent
|
2020-05-07 01:31:55
|
rerost/issue-creator
|
https://api.github.com/repos/rerost/issue-creator
|
closed
|
[08/21/2019-08/28/2019] Sample
|
documentation
|
issue-creator
Create new issue from this issue
```
issue-creator create https://github.com/rerost/issue-creator/issues/1
```
Create new issue from this issue by every monday
```
issue-creator schedule apply '0 0 * * 1' https://github.com/rerost/issue-creator/issues/1
```
last issue: https://github.com/rerost/issue-creator/issues/9
_created from https://github.com/rerost/issue-creator/issues/1 by [issue-creator](https://github.com/rerost/issue-creator) _
|
1.0
|
[08/21/2019-08/28/2019] Sample - issue-creator
Create new issue from this issue
```
issue-creator create https://github.com/rerost/issue-creator/issues/1
```
Create new issue from this issue by every monday
```
issue-creator schedule apply '0 0 * * 1' https://github.com/rerost/issue-creator/issues/1
```
last issue: https://github.com/rerost/issue-creator/issues/9
_created from https://github.com/rerost/issue-creator/issues/1 by [issue-creator](https://github.com/rerost/issue-creator) _
|
non_process
|
sample issue creator create new issue from this issue issue creator create create new issue from this issue by every monday issue creator schedule apply last issue created from by
| 0
|
19,526
| 25,837,160,001
|
IssuesEvent
|
2022-12-12 20:38:16
|
microsoft/vscode
|
https://api.github.com/repos/microsoft/vscode
|
closed
|
Failed to split the terminal created by "open in integrated terminal"
|
bug *not-reproducible remote terminal-process
|
Does this issue occur when all extensions are disabled?: I don't know -- I need the remote development extension and "Extension Bisect" seems not to support test with only one extension enabled.
- VS Code Version: 1.58.2
- OS Version: Windows_ NT x64 10.0.19041
SSH Remote OS Version: Ubuntu 18.04 LTS
Steps to Reproduce:
Connect to an ubuntu SSH remote, and then
1. Right click on a file
2. Choose "open in integrated terminal"
3. Find the newly opened terminal window and try to split it
4. Get the error notification: "The terminal process failed to launch: Starting directory (cwd) "\path\to\somewhere" does not exist."
But it's okay to open a new terminal and split.


Updated: The same for opening a folder in integrated terminal :(
|
1.0
|
Failed to split the terminal created by "open in integrated terminal" -
Does this issue occur when all extensions are disabled?: I don't know -- I need the remote development extension and "Extension Bisect" seems not to support test with only one extension enabled.
- VS Code Version: 1.58.2
- OS Version: Windows_ NT x64 10.0.19041
SSH Remote OS Version: Ubuntu 18.04 LTS
Steps to Reproduce:
Connect to an ubuntu SSH remote, and then
1. Right click on a file
2. Choose "open in integrated terminal"
3. Find the newly opened terminal window and try to split it
4. Get the error notification: "The terminal process failed to launch: Starting directory (cwd) "\path\to\somewhere" does not exist."
But it's okay to open a new terminal and split.


Updated: The same for opening a folder in integrated terminal :(
|
process
|
failed to split the terminal created by open in integrated terminal does this issue occur when all extensions are disabled i don t know i need the remote development extension and extension bisect seems not to support test with only one extension enabled report issue dialog can assist with this vs code version os version windows nt ssh remote os version ubuntu lts steps to reproduce connect to an ubuntu ssh remote and then right click on a file choose open in integrated terminal find the newly opened terminal windows and try to split it get the error notification the terminal process failed to launch starting directory cwd path to somewhere does not exist but it s okay to open a new terminal and split updated the same for opening a folder in integrated terminal
| 1
|
18,268
| 24,347,508,682
|
IssuesEvent
|
2022-10-02 14:14:22
|
Ultimate-Hosts-Blacklist/whitelist
|
https://api.github.com/repos/Ultimate-Hosts-Blacklist/whitelist
|
closed
|
[FALSE-POSITIVE?] ccsu.edu
|
whitelisting process
|
**Domains or links**
```
web.ccsu.edu
www.ccsu.edu
www1.ccsu.edu
www2.ccsu.edu
chortle.ccsu.edu
```
**Example url**
`https://chortle.ccsu.edu/finiteautomata/Section07/sect07_12.html`
**More Information**
Domain blocked by UHB DNS.
**Have you requested removal from other sources?**
No, as it does not belong to any blacklist
|
1.0
|
[FALSE-POSITIVE?] ccsu.edu - **Domains or links**
```
web.ccsu.edu
www.ccsu.edu
www1.ccsu.edu
www2.ccsu.edu
chortle.ccsu.edu
```
**Example url**
`https://chortle.ccsu.edu/finiteautomata/Section07/sect07_12.html`
**More Information**
Domain blocked by UHB DNS.
**Have you requested removal from other sources?**
No, as it does not belong to any blacklist
|
process
|
ccsu edu domains or links web ccsu edu ccsu edu ccsu edu chortle ccsu edu example url more information domain blocked by uhb dns have you requested removal from other sources no as does not belong to any blacklist
| 1
|
87,525
| 15,779,928,756
|
IssuesEvent
|
2021-04-01 09:18:49
|
AlexRogalskiy/gradle-java-sample
|
https://api.github.com/repos/AlexRogalskiy/gradle-java-sample
|
closed
|
CVE-2021-21350 (High) detected in xstream-1.4.10.jar - autoclosed
|
security vulnerability
|
## CVE-2021-21350 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>xstream-1.4.10.jar</b></p></summary>
<p>XStream is a serialization library from Java objects to XML and back.</p>
<p>Library home page: <a href="http://x-stream.github.io">http://x-stream.github.io</a></p>
<p>Path to dependency file: gradle-java-sample/buildSrc/build.gradle</p>
<p>Path to vulnerable library: /home/wss-scanner/.gradle/caches/modules-2/files-2.1/com.thoughtworks.xstream/xstream/1.4.10/dfecae23647abc9d9fd0416629a4213a3882b101/xstream-1.4.10.jar</p>
<p>
Dependency Hierarchy:
- gradle-versions-plugin-0.28.0.jar (Root Library)
- :x: **xstream-1.4.10.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/AlexRogalskiy/gradle-java-sample/commit/1c1a55240565871e92ab2d546b45e8a2bad65ef3">1c1a55240565871e92ab2d546b45e8a2bad65ef3</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
XStream is a Java library to serialize objects to XML and back again. In XStream before version 1.4.16, there is a vulnerability which may allow a remote attacker to execute arbitrary code only by manipulating the processed input stream. No user who followed the recommendation to set up XStream's security framework with a whitelist limited to the minimal required types is affected. If you rely on XStream's default blacklist of the Security Framework, you will have to use at least version 1.4.16.
<p>Publish Date: 2021-03-23
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-21350>CVE-2021-21350</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/x-stream/xstream/security/advisories/GHSA-43gc-mjxg-gvrq">https://github.com/x-stream/xstream/security/advisories/GHSA-43gc-mjxg-gvrq</a></p>
<p>Release Date: 2021-03-23</p>
<p>Fix Resolution: com.thoughtworks.xstream:xstream:1.4.16</p>
</p>
</details>
<p></p>
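Independent of upgrading to 1.4.16 as suggested above, the mitigation described in the vulnerability details is to restrict XStream's security framework to an explicit whitelist. Below is a minimal Java sketch of that setup using XStream's standard security API; the allowed package pattern `com.example.model.**` is a placeholder rather than anything taken from this project:
```java
import com.thoughtworks.xstream.XStream;
import com.thoughtworks.xstream.security.NoTypePermission;
import com.thoughtworks.xstream.security.NullPermission;
import com.thoughtworks.xstream.security.PrimitiveTypePermission;

public final class SecureXStreamFactory {

    // Builds an XStream instance that denies every type by default and
    // re-allows only null, primitives, and the application's own model classes.
    public static XStream create() {
        XStream xstream = new XStream();
        xstream.addPermission(NoTypePermission.NONE);               // forbid everything
        xstream.addPermission(NullPermission.NULL);                 // allow null
        xstream.addPermission(PrimitiveTypePermission.PRIMITIVES);  // allow primitive types
        xstream.allowTypesByWildcard(new String[] { "com.example.model.**" }); // placeholder package
        return xstream;
    }

    private SecureXStreamFactory() {}
}
```
With a deny-by-default configuration like this, deserialization of unexpected types fails outright instead of relying on the default blacklist that this CVE bypasses.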
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
CVE-2021-21350 (High) detected in xstream-1.4.10.jar - autoclosed - ## CVE-2021-21350 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>xstream-1.4.10.jar</b></p></summary>
<p>XStream is a serialization library from Java objects to XML and back.</p>
<p>Library home page: <a href="http://x-stream.github.io">http://x-stream.github.io</a></p>
<p>Path to dependency file: gradle-java-sample/buildSrc/build.gradle</p>
<p>Path to vulnerable library: /home/wss-scanner/.gradle/caches/modules-2/files-2.1/com.thoughtworks.xstream/xstream/1.4.10/dfecae23647abc9d9fd0416629a4213a3882b101/xstream-1.4.10.jar</p>
<p>
Dependency Hierarchy:
- gradle-versions-plugin-0.28.0.jar (Root Library)
- :x: **xstream-1.4.10.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/AlexRogalskiy/gradle-java-sample/commit/1c1a55240565871e92ab2d546b45e8a2bad65ef3">1c1a55240565871e92ab2d546b45e8a2bad65ef3</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
XStream is a Java library to serialize objects to XML and back again. In XStream before version 1.4.16, there is a vulnerability which may allow a remote attacker to execute arbitrary code only by manipulating the processed input stream. No user who followed the recommendation to set up XStream's security framework with a whitelist limited to the minimal required types is affected. If you rely on XStream's default blacklist of the Security Framework, you will have to use at least version 1.4.16.
<p>Publish Date: 2021-03-23
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-21350>CVE-2021-21350</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/x-stream/xstream/security/advisories/GHSA-43gc-mjxg-gvrq">https://github.com/x-stream/xstream/security/advisories/GHSA-43gc-mjxg-gvrq</a></p>
<p>Release Date: 2021-03-23</p>
<p>Fix Resolution: com.thoughtworks.xstream:xstream:1.4.16</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_process
|
cve high detected in xstream jar autoclosed cve high severity vulnerability vulnerable library xstream jar xstream is a serialization library from java objects to xml and back library home page a href path to dependency file gradle java sample buildsrc build gradle path to vulnerable library home wss scanner gradle caches modules files com thoughtworks xstream xstream xstream jar dependency hierarchy gradle versions plugin jar root library x xstream jar vulnerable library found in head commit a href vulnerability details xstream is a java library to serialize objects to xml and back again in xstream before version there is a vulnerability which may allow a remote attacker to execute arbitrary code only by manipulating the processed input stream no user is affected who followed the recommendation to setup xstream s security framework with a whitelist limited to the minimal required types if you rely on xstream s default blacklist of the security framework you will have to use at least version publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution com thoughtworks xstream xstream step up your open source security game with whitesource
| 0
|
17,865
| 23,812,322,729
|
IssuesEvent
|
2022-09-04 23:26:53
|
bisq-network/proposals
|
https://api.github.com/repos/bisq-network/proposals
|
closed
|
Have a clearly defined process for how users with accepted DAO reimbursement requests can trade with Burning Man
|
was:approved a:proposal re:processes
|
> _This is a Bisq Network proposal. Please familiarize yourself with the [submission and review process](https://bisq.wiki/Proposals)._
<!-- Please do not remove the text above. -->
## Background
About 12 months ago @refund-agent2 started to [partially reimburse high volume trades](https://github.com/bisq-network/proposals/issues/296).
This was done to reduce the risk of human error (eg https://github.com/bisq-network/roles/issues/93#issuecomment-659568163) and the potential for bugs in Bisq (such as https://github.com/bisq-network/bisq/pull/5649) to affect payouts.
This meant that traders in arbitration with refunds greater than 0.5 BTC needed to approach the DAO for compensation. Recent examples of users that have been accepted for reimbursement are:
| User | Issue | BSQ payout|
| ------------- | ------------- | ------------- |
| @wildduck379 | [1042](https://github.com/bisq-network/support/issues/1042) | 64,000 |
| @polygonk | [1044](https://github.com/bisq-network/support/issues/1044) | 39,331.37 |
| @rtgeek | [1041](https://github.com/bisq-network/support/issues/1041) | 26,238.11 |
| @Akira45-0 | [908](https://github.com/bisq-network/support/issues/908) | 27,426.69 |
## Problem
Currently there is no consensus on:
- If users that have been reimbursed in BSQ can trade with burningman
- How users that have been reimbursed in BSQ can trade with burningman
Of the users above it appears that at least one, @Akira45-0, traded with burningman judging by the [comment](https://github.com/bisq-network/support/issues/908#issuecomment-846414138) they posted.
For the other users, I know I have personally supported @wildduck379, who was concerned about how they would be able to realize a fair price for their BSQ in a reasonable time frame.
The issue has also led to some perceived lack of clarity about how long funds in the donation address should be kept before being traded. Reference: read comments from https://github.com/bisq-network/roles/issues/80#issuecomment-1046334613 downwards.
The issue of partial reimbursements is also not documented anywhere in the Bisq wiki AFAIK.
A lack of consensus around this issue can lead to the following problems:
- Build up of funds in the donation address - risk to the DAO
- Disgruntled users, being surprised to be reimbursed in BSQ, and then frustrated when they are unable to trade at large volumes to BSQ/BTC prices similar to that on their reimbursement request.
- Uncertainty around the reimbursement process putting off people making large trades (> 0.5 BTC) on alt coin markets (XMR and ETH)
- Currently the donation address is responsible for a significant volume of the BSQ market. Not using BTC in the donation address means a lack of BSQ buying. A lack of BSQ buying leads to a downward pressure on BSQ price (even if only for a short term).
## Outcome
I see the principle of this proposal as being that the trader should have the option to be reimbursed in the same amount of BTC as per their arbitration summary.
A good outcome would be:
- Decrease risk to the DAO by not having donation address funds build up to high levels.
- No surprises for traders when they are reimbursed in BSQ
- Traders to be given the opportunity to trade with burningman at a BSQ/BTC price that is the same as their reimbursement request. Therefore the option to get the reimbursement in BTC rather than BSQ.
- Certainty around the reimbursement process making traders more comfortable to make trades > 0.5 BTC.
## Solution
I propose that there be a clearly defined process around reimbursement.
- How users that have been reimbursed in BSQ can trade with burningman
### Principles:
- Reimbursed traders should have the option to be reimbursed in the same amount of BTC as per their arbitration summary.
- Reimbursed traders should be able to trade their BSQ with burningman
### Process:
- Should the trader want to trade with burningman directly they have 2 cycles following their arbitration summary to make a reimbursement request. If they make a reimbursement request after 2 cycles they will forfeit the option to trade directly with burningman.
- Once the trader has had their DAO request for reimbursement approved, they are pointed to instructions on how to trade with burningman
- @refund-agent2 is responsible for advising the trader about the above process at the earliest possible stage, informing the trader about putting in a reimbursement request within 2 cycles, and making the connection between the trader and burningman once the DAO has accepted it.
- The trader has 1 cycle following reimbursement to trade with burningman for the exact BSQ/BTC price as in their proposal. This will mean they end up with the same amount of BTC as per their arbitration summary.
- Should the trader not trade with the burningman within 1 cycle they will not be able to trade with burningman directly and will instead have to sell BSQ on the open market in Bisq.
### To Do:
Should this proposal be accepted I will do the following:
- Create a new Github template for traders seeking partial reimbursements.
- Update the relevant pages in the wiki, and create a new one if required, to explain to users what happens during arbitration when payouts are over 0.5 BTC
## Expectations
Should this proposal be accepted these are the additional expectations of the parties involved:
### Expectations of trader:
- Trader must raise reimbursement request within 2 cycles of final arbitration date
- Should the reimbursed trader wish to trade with burningman for the BSQ/BTC price given in their reimbursement request they must do so within 1 cycle of their request being accepted.
### Expectations of refund agent:
- Advising trader about the process at the earliest possible stage
- Inform / support the trader about how to make a reimbursement request within 2 cycles
- Once reimbursement has been accepted by the DAO make the connection between the trader and burningman.
### Expectations of burningman:
- Communicate with refund agent about upcoming reimbursements and their amounts to ensure they have enough BTC in the donation address for trades with traders that have been partially reimbursed.
- Verify trader that is requesting trade is affected user.
- Report on trades that were done as partial reimbursements as part of their regular cycle reports.
## Risks
This proposal would create the following risks:
- Extra people for Burningman to communicate with. Maybe communication could be done via Refund Agent or Mediators as a proxy to avoid them communicating directly with traders.
- Potential for option trades. Traders could have the option to trade with burningman for a fixed price or use market price. This might mean traders will take the fixed amount when the BSQ price falls, but reject it when BSQ rises. I see this risk as small, and it would be part and parcel of making a trader take extra steps to be reimbursed. If they end up getting a better price then so be it.
## Summary
The above is my proposal created in discussion with the support team. I think it would be great to add more clarity to the process around partial reimbursements and achieve the objectives above.
|
1.0
|
Have a clearly defined process for how users with accepted DAO reimbursement requests can trade with Burning Man - > _This is a Bisq Network proposal. Please familiarize yourself with the [submission and review process](https://bisq.wiki/Proposals)._
<!-- Please do not remove the text above. -->
## Background
About 12 months ago @refund-agent2 started to [partially reimburse high volume trades](https://github.com/bisq-network/proposals/issues/296).
This was done to reduce the risk of human error (eg https://github.com/bisq-network/roles/issues/93#issuecomment-659568163) and the potential for bugs in Bisq (such as https://github.com/bisq-network/bisq/pull/5649) to affect payouts.
This meant that traders in arbitration with refunds greater than 0.5 BTC needed to approach the DAO for compensation. Recent examples of users that have been accepted for reimbursement are:
| User | Issue | BSQ payout|
| ------------- | ------------- | ------------- |
| @wildduck379 | [1042](https://github.com/bisq-network/support/issues/1042) | 64,000 |
| @polygonk | [1044](https://github.com/bisq-network/support/issues/1044) | 39,331.37 |
| @rtgeek | [1041](https://github.com/bisq-network/support/issues/1041) | 26,238.11 |
| @Akira45-0 | [908](https://github.com/bisq-network/support/issues/908) | 27,426.69 |
## Problem
Currently there is no consensus on:
- If users that have been reimbursed in BSQ can trade with burningman
- How users that have been reimbursed in BSQ can trade with burningman
Of the users above it appears that at least one, @Akira45-0, traded with burningman judging by the [comment](https://github.com/bisq-network/support/issues/908#issuecomment-846414138) they posted.
For the other users, I know I have personally supported @wildduck379, who was concerned about how they would be able to realize a fair price for their BSQ in a reasonable time frame.
The issue has also led to some perceived lack of clarity about how long funds in the donation address should be kept before being traded. Reference: read comments from https://github.com/bisq-network/roles/issues/80#issuecomment-1046334613 downwards.
The issue of partial reimbursements is also not documented anywhere in the Bisq wiki AFAIK.
A lack of consensus around this issue can lead to the following problems:
- Build up of funds in the donation address - risk to the DAO
- Disgruntled users, being surprised to be reimbursed in BSQ, and then frustrated when they are unable to trade at large volumes to BSQ/BTC prices similar to that on their reimbursement request.
- Uncertainty around the reimbursement process putting off people making large trades (> 0.5 BTC) on alt coin markets (XMR and ETH)
- Currently the donation address is responsible for a significant volume of the BSQ market. Not using BTC in the donation address means a lack of BSQ buying. A lack of BSQ buying leads to a downward pressure on BSQ price (even if only for a short term).
## Outcome
I see the principle of this proposal as being that the trader should have the option to be reimbursed in the same amount of BTC as per their arbitration summary.
A good outcome would be:
- Decrease risk to the DAO by not having donation address funds build up to high levels.
- No surprises for traders when they are reimbursed in BSQ
- Traders to be given the opportunity to trade with burningman at a BSQ/BTC price that is the same as their reimbursement request. Therefore the option to get the reimbursement in BTC rather than BSQ.
- Certainty around the reimbursement process making traders more comfortable to make trades > 0.5 BTC.
## Solution
I propose that there be a clearly defined process around reimbursement.
- How users that have been reimbursed in BSQ can trade with burningman
### Principles:
- Reimbursed traders should have the option to be reimbursed in the same amount of BTC as per their arbitration summary.
- Reimbursed traders should be able to trade their BSQ with burningman
### Process:
- Should the trader want to trade with burningman directly they have 2 cycles following their arbitration summary to make a reimbursement request. If they make a reimbursement request after 2 cycles they will forfeit the option to trade directly with burningman.
- Once the trader has had their DAO request for reimbursement approved, they are pointed to instructions on how to trade with burningman
- @refund-agent2 is responsible for advising the trader about the above process at the earliest possible stage, informing the trader about putting in a reimbursement request within 2 cycles, and making the connection between the trader and burningman once the DAO has accepted it.
- The trader has 1 cycle following reimbursement to trade with burningman for the exact BSQ/BTC price as in their proposal. This will mean they end up with the same amount of BTC as per their arbitration summary.
- Should the trader not trade with the burningman within 1 cycle they will not be able to trade with burningman directly and will instead have to sell BSQ on the open market in Bisq.
### To Do:
Should this proposal be accepted I will do the following:
- Create a new Github template for traders seeking partial reimbursements.
- Update the relevant pages in the wiki, and create a new one if required, to explain to users what happens during arbitration when payouts are over 0.5 BTC
## Expectations
Should this proposal be accepted these are the additional expectations of the parties involved:
### Expectations of trader:
- Trader must raise reimbursement request within 2 cycles of final arbitration date
- Should the reimbursed trader wish to trade with burningman for the BSQ/BTC price given in their reimbursement request they must do so within 1 cycle of their request being accepted.
### Expectations of refund agent:
- Advising trader about the process at the earliest possible stage
- Inform / support the trader about how to make a reimbursement request within 2 cycles
- Once reimbursement has been accepted by the DAO make the connection between the trader and burningman.
### Expectations of burningman:
- Communicate with refund agent about upcoming reimbursements and their amounts to ensure they have enough BTC in the donation address for trades with traders that have been partially reimbursed.
- Verify trader that is requesting trade is affected user.
- Report on trades that were done as partial reimbursements as part of their regular cycle reports.
## Risks
This proposal would create the following risks:
- Extra people for Burningman to communicate with. Maybe communication could be done via Refund Agent or Mediators as a proxy to avoid them communicating directly with traders.
- Potential for option trades. Traders could have the option to trade with burningman for a fixed price or use market price. This might mean traders will take the fixed amount when the BSQ price falls, but reject it when BSQ rises. I see this risk as small, and it would be part and parcel of making a trader take extra steps to be reimbursed. If they end up getting a better price then so be it.
## Summary
The above is my proposal created in discussion with the support team. I think it would be great to add more clarity to the process around partial reimbursements and achieve the objectives above.
|
process
|
have a clearly defined process for how users with accepted dao reimbursement requests can trade with burning man this is a bisq network proposal please familiarize yourself with the background about months ago refund started to this was done to reduce the risk of human error eg and the potential for bugs in bisq such as to affect payouts this meant that traders in arbitration with refunds greater than btc needed to approach the dao for compensation recent examples of users that have been accepted for reimbursement are user issue bsq payout polygonk rtgeek problem currently there is no consensus on if users that have been reimbursed in bsq can trade with burningman how users that have been reimbursed in bsq can trade with burningman of the users above it appears that at least one traded with burningman judging by the they posted for the other users i know i have personally supported who concerned about how they would be able to realize a fair price for their bsq in a reasonable time frame the issue has also lead to some perceived unclarity about how long funds in the donation address should be kept before being traded reference read comments from downwards the issue of partial reimbursements is also not documented anywhere in the bisq wiki afaik a lack of consensus around this issue can lead to the following problems build up of funds in the donation address risk to the dao disgruntled users being surprised to be reimbursed in bsq and then frustrated when they are unable to trade at large volumes to bsq btc prices similar to that on their reimbursement request uncertainty around the reimbursement process putting off people making large trades btc on alt coin markets xmr and eth currently the donation address is responsible for a significant volume of the bsq market not using btc in the donation address means a lack of bsq buying a lack of bsq buying leads to a downward pressure on bsq price even if only for a short term outcome i see the principle of this proposal as the trader should have the option to be reimbursed in the same amount of btc as per their arbitration summary a good outcome would be decrease risk to the dao by not having donation address funds build up to high levels no surprises for traders when they are reimbursed in bsq traders to be given the opportunity to trade with burningman at a bsq btc prices that is the same as their reimbursement request therefore the option to get the reimbursement in btc rather than bsq certainty around the reimbursement process making traders more comfortable to make trades btc solution i propose that their is a clearly defined process around reimbursement how users that have been reimbursed in bsq can trade with burningman principles reimbursed traders should have the option to be reimbursed in the same amount of btc as per their arbitration summary reimbursed traders should be able to trade their bsq with burningman process should the trader want to trade with burningman directly they have cycles following their arbitration summary to make a reimbursement request if they make a reimbursement request after cycles they will forfeit the option to trade directly with burningman once the trader that has had their dao request for reimbursement approved they are pointed to instructions as to how to trade with burningman refund is responsible for advising trader about the above process at the earliest possible stage informing the trader about putting in a reimburment request within cycles and making the connection between the trader and burningman 
once dao has accepted it the trader has cycle following reimbursement to trade with burningman for the exact bsq btc price as in their proposal this will mean they end up with the same amount of btc as per their arbitration summary should the trader not trade with the burningman within cycle they will not be able to trade with burningman directly and will instead have to sell bsq on the open market in bisq to do should this proposal be accepted i will do the following create a new github template for traders seeking partial reimbursements update the relevant pages in wiki and create a new one if requires to explain to users what happens during arbitration when payouts are over btc expectations should this proposal be accepted these are the additional expectations of the parties involved expectations of trader trader must raise reimbursement request within cycles of final arbitration date should the reimbursed trader wish to trade with burningman for the bsq btc price given in their reimbursement request they must do so within cycle of their request being accepted expectations of refund agent advising trader about the process at the earliest possible stage inform support the trader about how to make a reimburment request within cycles once reimbursement has been accepted by the dao make the connection between the trader and burningman expectations of burningman communicate with refund agent about upcoming reimbursements and their amounts to ensure they have enough btc in the donation address for trades with trader that have been partially reimbursed verify trader that is requesting trade is affected user report on trades that were done as partial reimbursements as part of their regular cycle reports risks this proposal would create the following risks extra people for burningman to communicate with maybe communication could be done via refund agent or mediators as a proxy to avoid them communicating directly with traders potential for option trades traders could have the option to trade with burningman for a fixed price or use market price might mean traders will take a fixed amount where bsq prices falls but reject the fixed amount when bsq rises i see this is risk as small and would be part and parcel of making a trader take extra steps to be reimbursed if they end up getting a better price then so be it summary the above is my proposal created in discussion with the support team i think it would be great to add more clarity to the process around partial reimbursements and achieve the objectives above
| 1
|
578,690
| 17,150,121,982
|
IssuesEvent
|
2021-07-13 19:21:52
|
brave/brave-browser
|
https://api.github.com/repos/brave/brave-browser
|
opened
|
Storybook: Build out Send Tab UI
|
OS/Desktop QA/No feature/wallet priority/P3 release-notes/exclude
|
<!-- Have you searched for similar issues? Before submitting this issue, please check the open issues and add a note before logging a new issue.
PLEASE USE THE TEMPLATE BELOW TO PROVIDE INFORMATION ABOUT THE ISSUE.
INSUFFICIENT INFO WILL GET THE ISSUE CLOSED. IT WILL ONLY BE REOPENED AFTER SUFFICIENT INFO IS PROVIDED-->
## Description
Build out the Send Tab UI for the Crypto Wallet Prototype.

You can find the specs for this component in Figma / Desktop Crypto Wallets.
Add the component to Storybook with mock data for testing.
## Expected result:
Component should be visible and functional in Storybook under the Wallet/Desktop/Concepts and Wallet/Desktop/Components tabs.
|
1.0
|
Storybook: Build out Send Tab UI - <!-- Have you searched for similar issues? Before submitting this issue, please check the open issues and add a note before logging a new issue.
PLEASE USE THE TEMPLATE BELOW TO PROVIDE INFORMATION ABOUT THE ISSUE.
INSUFFICIENT INFO WILL GET THE ISSUE CLOSED. IT WILL ONLY BE REOPENED AFTER SUFFICIENT INFO IS PROVIDED-->
## Description
Build out the Send Tab UI for the Crypto Wallet Prototype.

You can find the specs for this component in Figma / Desktop Crypto Wallets.
Add the component to Storybook with mock data for testing.
## Expected result:
Component should be visible and functional in Storybook under the Wallet/Desktop/Concepts and Wallet/Desktop/Components tabs.
|
non_process
|
storybook build out send tab ui have you searched for similar issues before submitting this issue please check the open issues and add a note before logging a new issue please use the template below to provide information about the issue insufficient info will get the issue closed it will only be reopened after sufficient info is provided description build out the send tab ui for the crypto wallet prototype you can find the specs for this component in figma desktop crypto wallets add the component too storybook with mock data for testing expected result component should be visible and functional in storybook under the wallet desktop concepts and wallet desktop components tabs
| 0
|
138,892
| 20,740,673,403
|
IssuesEvent
|
2022-03-14 17:22:38
|
cryptic-game/frontend
|
https://api.github.com/repos/cryptic-game/frontend
|
closed
|
Base Components without Styling
|
wontfix enhancement design
|
## Description
Add Base Components without Styling.
## Problem
No consistent design.
## Important
Do not merge for now, so that no new interaction elements can be created later.
|
1.0
|
Base Components without Styling - ## Description
Add Base Components without Styling.
## Problem
No consistent design.
## Important
Do not merge for now, so that no new interaction elements can be created later.
|
non_process
|
base components without styling description add base components without styling problem no consistent design important do not merge for now so that no new interaction elements can be created later
| 0
|
1,032
| 3,489,289,899
|
IssuesEvent
|
2016-01-03 19:16:26
|
Forket/connect2sa.co.za_01
|
https://api.github.com/repos/Forket/connect2sa.co.za_01
|
opened
|
Rating on thumbnail
|
In process
|
Rating on thumbnail
http://themes.themegoods2.com/rigel/demo/?rigelstyle=15
On the original design the rating for every post is displayed in the corner of thumbnails. Although we have changed our rating/review system, we’d like to display the ratings on the thumbnails in the same way. Can you please develop that for us?

|
1.0
|
Rating on thumbnail - Rating on thumbnail
http://themes.themegoods2.com/rigel/demo/?rigelstyle=15
On the original design the rating for every post is displayed in the corner of thumbnails. Although we have changed our rating/review system, we’d like to display the ratings on the thumbnails in the same way. Can you please develop that for us?

|
process
|
rating on thumbnail rating on thumbnail on the original design the rating for every post is displayed in the corner of thumbnails although we have changed our rating review system we’d like to display the ratings on the thumbnails in the same way can you please develop that for us
| 1
|
270,899
| 8,474,475,107
|
IssuesEvent
|
2018-10-24 16:14:50
|
syndesisio/syndesis.io
|
https://api.github.com/repos/syndesisio/syndesis.io
|
opened
|
Improve/update information and documentation
|
bug enhancement high priority
|
Similar, but separate, effort to #88 . This is to update the information that currently exists on the website (and is likely outdated), or lack thereof.
|
1.0
|
Improve/update information and documentation - Similar, but separate, effort to #88 . This is to update the information that currently exists on the website (and is likely outdated), or lack thereof.
|
non_process
|
improve update information and documentation similar but separate effort to this is to update the information that currently exists on the website and is likely outdated or lack thereof
| 0
|
39,825
| 12,704,316,921
|
IssuesEvent
|
2020-06-23 01:01:44
|
mpulsemobile/doccano
|
https://api.github.com/repos/mpulsemobile/doccano
|
opened
|
CVE-2020-13822 (High) detected in elliptic-6.4.0.tgz
|
security vulnerability
|
## CVE-2020-13822 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>elliptic-6.4.0.tgz</b></p></summary>
<p>EC cryptography</p>
<p>Library home page: <a href="https://registry.npmjs.org/elliptic/-/elliptic-6.4.0.tgz">https://registry.npmjs.org/elliptic/-/elliptic-6.4.0.tgz</a></p>
<p>Path to dependency file: /tmp/ws-scm/doccano/app/server/static/package.json</p>
<p>Path to vulnerable library: /tmp/ws-scm/doccano/app/server/static/node_modules/elliptic/package.json</p>
<p>
Dependency Hierarchy:
- webpack-4.12.0.tgz (Root Library)
- node-libs-browser-2.1.0.tgz
- crypto-browserify-3.12.0.tgz
- browserify-sign-4.0.4.tgz
- :x: **elliptic-6.4.0.tgz** (Vulnerable Library)
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
The Elliptic package 6.5.2 for Node.js allows ECDSA signature malleability via variations in encoding, leading '\0' bytes, or integer overflows. This could conceivably have a security-relevant impact if an application relied on a single canonical signature.
<p>Publish Date: 2020-06-04
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-13822>CVE-2020-13822</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/indutny/elliptic/tree/v6.5.3">https://github.com/indutny/elliptic/tree/v6.5.3</a></p>
<p>Release Date: 2020-06-04</p>
<p>Fix Resolution: v6.5.3</p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":false,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"javascript/Node.js","packageName":"elliptic","packageVersion":"6.4.0","isTransitiveDependency":true,"dependencyTree":"webpack:4.12.0;node-libs-browser:2.1.0;crypto-browserify:3.12.0;browserify-sign:4.0.4;elliptic:6.4.0","isMinimumFixVersionAvailable":true,"minimumFixVersion":"v6.5.3"}],"vulnerabilityIdentifier":"CVE-2020-13822","vulnerabilityDetails":"The Elliptic package 6.5.2 for Node.js allows ECDSA signature malleability via variations in encoding, leading \u0027\\0\u0027 bytes, or integer overflows. This could conceivably have a security-relevant impact if an application relied on a single canonical signature.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-13822","cvss3Severity":"high","cvss3Score":"9.8","cvss3Metrics":{"A":"High","AC":"Low","PR":"None","S":"Unchanged","C":"High","UI":"None","AV":"Network","I":"High"},"extraData":{}}</REMEDIATE> -->
|
True
|
CVE-2020-13822 (High) detected in elliptic-6.4.0.tgz - ## CVE-2020-13822 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>elliptic-6.4.0.tgz</b></p></summary>
<p>EC cryptography</p>
<p>Library home page: <a href="https://registry.npmjs.org/elliptic/-/elliptic-6.4.0.tgz">https://registry.npmjs.org/elliptic/-/elliptic-6.4.0.tgz</a></p>
<p>Path to dependency file: /tmp/ws-scm/doccano/app/server/static/package.json</p>
<p>Path to vulnerable library: /tmp/ws-scm/doccano/app/server/static/node_modules/elliptic/package.json</p>
<p>
Dependency Hierarchy:
- webpack-4.12.0.tgz (Root Library)
- node-libs-browser-2.1.0.tgz
- crypto-browserify-3.12.0.tgz
- browserify-sign-4.0.4.tgz
- :x: **elliptic-6.4.0.tgz** (Vulnerable Library)
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
The Elliptic package 6.5.2 for Node.js allows ECDSA signature malleability via variations in encoding, leading '\0' bytes, or integer overflows. This could conceivably have a security-relevant impact if an application relied on a single canonical signature.
<p>Publish Date: 2020-06-04
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-13822>CVE-2020-13822</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/indutny/elliptic/tree/v6.5.3">https://github.com/indutny/elliptic/tree/v6.5.3</a></p>
<p>Release Date: 2020-06-04</p>
<p>Fix Resolution: v6.5.3</p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":false,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"javascript/Node.js","packageName":"elliptic","packageVersion":"6.4.0","isTransitiveDependency":true,"dependencyTree":"webpack:4.12.0;node-libs-browser:2.1.0;crypto-browserify:3.12.0;browserify-sign:4.0.4;elliptic:6.4.0","isMinimumFixVersionAvailable":true,"minimumFixVersion":"v6.5.3"}],"vulnerabilityIdentifier":"CVE-2020-13822","vulnerabilityDetails":"The Elliptic package 6.5.2 for Node.js allows ECDSA signature malleability via variations in encoding, leading \u0027\\0\u0027 bytes, or integer overflows. This could conceivably have a security-relevant impact if an application relied on a single canonical signature.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-13822","cvss3Severity":"high","cvss3Score":"9.8","cvss3Metrics":{"A":"High","AC":"Low","PR":"None","S":"Unchanged","C":"High","UI":"None","AV":"Network","I":"High"},"extraData":{}}</REMEDIATE> -->
|
non_process
|
cve high detected in elliptic tgz cve high severity vulnerability vulnerable library elliptic tgz ec cryptography library home page a href path to dependency file tmp ws scm doccano app server static package json path to vulnerable library tmp ws scm doccano app server static node modules elliptic package json dependency hierarchy webpack tgz root library node libs browser tgz crypto browserify tgz browserify sign tgz x elliptic tgz vulnerable library vulnerability details the elliptic package for node js allows ecdsa signature malleability via variations in encoding leading bytes or integer overflows this could conceivably have a security relevant impact if an application relied on a single canonical signature publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution isopenpronvulnerability false ispackagebased true isdefaultbranch true packages vulnerabilityidentifier cve vulnerabilitydetails the elliptic package for node js allows ecdsa signature malleability via variations in encoding leading bytes or integer overflows this could conceivably have a security relevant impact if an application relied on a single canonical signature vulnerabilityurl
| 0
|
17,869
| 9,941,490,216
|
IssuesEvent
|
2019-07-03 11:45:13
|
JuliaReach/LazySets.jl
|
https://api.github.com/repos/JuliaReach/LazySets.jl
|
closed
|
Pass solver to removevredundancy of polytopes
|
performance
|
`Polyhedra.removevredundancy` has improved, but we are not using it correctly. We should pass an (LP) solver.
|
True
|
Pass solver to removevredundancy of polytopes - `Polyhedra.removevredundancy` has improved, but we are not using it correctly. We should pass an (LP) solver.
|
non_process
|
pass solver to removevredundancy of polytopes polyhedra removevredundancy has improved but we are not using it correctly we should pass an lp solver
| 0
|
11,361
| 14,175,663,259
|
IssuesEvent
|
2020-11-12 21:59:51
|
googleapis/python-storage
|
https://api.github.com/repos/googleapis/python-storage
|
closed
|
New test for 'Blob._do_multipart_upload' w/ metadata flakes
|
api: storage priority: p1 type: process
|
The [flaky test](https://source.cloud.google.com/results/invocations/26bf2df0-2ddb-4699-bc41-2430500326e9/targets/cloud-devrel%2Fclient-libraries%2Fpython%2Fgoogleapis%2Fpython-storage%2Fpresubmit%2Fpresubmit/log) was [introduced in #298](https://github.com/googleapis/python-storage/pull/298/files#diff-e4a098aa380a5eea289442e5395c3af9905e5208d2e89ed56e1bea1f8ee3ebeaR1916)
The issue is that the code which constructs the expected payload for the request is flaky given Python's [hash randomization](https://docs.python.org/3/reference/datamodel.html#object.__hash__):
```python
blob_data = b'{"name": "blob-name"}\r\n'
if metadata:
blob_data = (
b'{"name": "blob-name", "metadata": '
+ json.dumps(metadata).encode("utf-8")
+ b"}\r\n"
)
self.assertEqual(blob._changes, set(["metadata"]))
payload = (
b"--==0==\r\n"
+ b"content-type: application/json; charset=UTF-8\r\n\r\n"
+ b'{"name": "blob-name"}\r\n'
+ blob_data
+ b"--==0==\r\n"
+ b"content-type: application/xml\r\n\r\n"
+ data_read
```
|
1.0
|
New test for 'Blob._do_multipart_upload' w/ metadata flakes - The [flaky test](https://source.cloud.google.com/results/invocations/26bf2df0-2ddb-4699-bc41-2430500326e9/targets/cloud-devrel%2Fclient-libraries%2Fpython%2Fgoogleapis%2Fpython-storage%2Fpresubmit%2Fpresubmit/log) was [introduced in #298](https://github.com/googleapis/python-storage/pull/298/files#diff-e4a098aa380a5eea289442e5395c3af9905e5208d2e89ed56e1bea1f8ee3ebeaR1916)
The issue is that the code which constructs the expected payload for the request is flaky given Python's [hash randomization](https://docs.python.org/3/reference/datamodel.html#object.__hash__):
```python
blob_data = b'{"name": "blob-name"}\r\n'
if metadata:
blob_data = (
b'{"name": "blob-name", "metadata": '
+ json.dumps(metadata).encode("utf-8")
+ b"}\r\n"
)
self.assertEqual(blob._changes, set(["metadata"]))
payload = (
b"--==0==\r\n"
+ b"content-type: application/json; charset=UTF-8\r\n\r\n"
+ b'{"name": "blob-name"}\r\n'
+ blob_data
+ b"--==0==\r\n"
+ b"content-type: application/xml\r\n\r\n"
+ data_read
```
|
process
|
new test for blob do multipart upload w metadata flakes the was the issue is that the code which constructs the expected payload for the request is flaky given python s python blob data b name blob name r n if metadata blob data b name blob name metadata json dumps metadata encode utf b r n self assertequal blob changes set payload b r n b content type application json charset utf r n r n b name blob name r n blob data b r n b content type application xml r n r n data read
| 1
|
148,513
| 23,356,237,980
|
IssuesEvent
|
2022-08-10 07:41:38
|
DouyinFE/semi-design
|
https://api.github.com/repos/DouyinFE/semi-design
|
closed
|
[Input] 按下态 bg color 与 Select 对齐
|
💄 Design PR Welcome
|
### Which Component 出现bug的组件
- Input
### semi-ui version
- latest
### Expected result 期望的结果是什么

### Actual result 实际的结果是什么

### Additional information 补充说明
- 遇到这个bug的业务场景、上下文、或者你的需求场景
|
1.0
|
[Input] 按下态 bg color 与 Select 对齐 - ### Which Component 出现bug的组件
- Input
### semi-ui version
- latest
### Expected result 期望的结果是什么

### Actual result 实际的结果是什么

### Additional information 补充说明
- 遇到这个bug的业务场景、上下文、或者你的需求场景
|
non_process
|
按下态 bg color 与 select 对齐 which component 出现bug的组件 input semi ui version latest expected result 期望的结果是什么 actual result 实际的结果是什么 additional information 补充说明 遇到这个bug的业务场景、上下文、或者你的需求场景
| 0
|
18,581
| 24,564,242,581
|
IssuesEvent
|
2022-10-13 00:21:06
|
googleapis/nodejs-phishing-protection
|
https://api.github.com/repos/googleapis/nodejs-phishing-protection
|
closed
|
tests: we no longer have a good integration tests
|
type: process api: phishingprotection
|
Our samples tests recently began failing, due to a collision on resource name [see](https://github.com/googleapis/nodejs-phishing-protection/pull/190).
We should get back to a place where we have a good integration test.
|
1.0
|
tests: we no longer have a good integration tests - Our samples tests recently began failing, due to a collision on resource name [see](https://github.com/googleapis/nodejs-phishing-protection/pull/190).
We should get back to a place where we have a good integration test.
|
process
|
tests we no longer have a good integration tests our samples tests recently began failing due to a collision on resource name we should get back to a place where we have a good integration test
| 1
|
5,881
| 8,705,216,601
|
IssuesEvent
|
2018-12-05 21:43:15
|
googleapis/google-cloud-python
|
https://api.github.com/repos/googleapis/google-cloud-python
|
closed
|
Trace: lease do a new release
|
api: cloudtrace type: process
|
This package seems to be forgotten and the last release is from ~1y ago.
|
1.0
|
Trace: lease do a new release - This package seems to be forgotten and the last release is from ~1y ago.
|
process
|
trace lease do a new release this package seems to be forgotten and the last release is from ago
| 1
|
196,485
| 22,441,937,126
|
IssuesEvent
|
2022-06-21 02:21:03
|
arielorn/goalert
|
https://api.github.com/repos/arielorn/goalert
|
opened
|
CVE-2022-33987 (Medium) detected in got-7.1.0.tgz, got-8.3.2.tgz
|
security vulnerability
|
## CVE-2022-33987 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>got-7.1.0.tgz</b>, <b>got-8.3.2.tgz</b></p></summary>
<p>
<details><summary><b>got-7.1.0.tgz</b></p></summary>
<p>Simplified HTTP requests</p>
<p>Library home page: <a href="https://registry.npmjs.org/got/-/got-7.1.0.tgz">https://registry.npmjs.org/got/-/got-7.1.0.tgz</a></p>
<p>Path to dependency file: /web/src/package.json</p>
<p>Path to vulnerable library: /web/src/node_modules/got/package.json</p>
<p>
Dependency Hierarchy:
- image-webpack-loader-5.0.0.tgz (Root Library)
- imagemin-gifsicle-6.0.1.tgz
- gifsicle-4.0.1.tgz
- bin-build-3.0.0.tgz
- download-6.2.5.tgz
- :x: **got-7.1.0.tgz** (Vulnerable Library)
</details>
<details><summary><b>got-8.3.2.tgz</b></p></summary>
<p>Simplified HTTP requests</p>
<p>Library home page: <a href="https://registry.npmjs.org/got/-/got-8.3.2.tgz">https://registry.npmjs.org/got/-/got-8.3.2.tgz</a></p>
<p>Path to dependency file: /web/src/package.json</p>
<p>Path to vulnerable library: /web/src/node_modules/bin-wrapper/node_modules/got/package.json</p>
<p>
Dependency Hierarchy:
- image-webpack-loader-5.0.0.tgz (Root Library)
- imagemin-gifsicle-6.0.1.tgz
- gifsicle-4.0.1.tgz
- bin-wrapper-4.1.0.tgz
- download-7.1.0.tgz
- :x: **got-8.3.2.tgz** (Vulnerable Library)
</details>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
The got package before 12.1.0 for Node.js allows a redirect to a UNIX socket.
<p>Publish Date: 2022-06-18
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-33987>CVE-2022-33987</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.1</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Changed
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-33987">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-33987</a></p>
<p>Release Date: 2022-06-18</p>
<p>Fix Resolution: got - 11.8.5,12.1.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
CVE-2022-33987 (Medium) detected in got-7.1.0.tgz, got-8.3.2.tgz - ## CVE-2022-33987 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>got-7.1.0.tgz</b>, <b>got-8.3.2.tgz</b></p></summary>
<p>
<details><summary><b>got-7.1.0.tgz</b></p></summary>
<p>Simplified HTTP requests</p>
<p>Library home page: <a href="https://registry.npmjs.org/got/-/got-7.1.0.tgz">https://registry.npmjs.org/got/-/got-7.1.0.tgz</a></p>
<p>Path to dependency file: /web/src/package.json</p>
<p>Path to vulnerable library: /web/src/node_modules/got/package.json</p>
<p>
Dependency Hierarchy:
- image-webpack-loader-5.0.0.tgz (Root Library)
- imagemin-gifsicle-6.0.1.tgz
- gifsicle-4.0.1.tgz
- bin-build-3.0.0.tgz
- download-6.2.5.tgz
- :x: **got-7.1.0.tgz** (Vulnerable Library)
</details>
<details><summary><b>got-8.3.2.tgz</b></p></summary>
<p>Simplified HTTP requests</p>
<p>Library home page: <a href="https://registry.npmjs.org/got/-/got-8.3.2.tgz">https://registry.npmjs.org/got/-/got-8.3.2.tgz</a></p>
<p>Path to dependency file: /web/src/package.json</p>
<p>Path to vulnerable library: /web/src/node_modules/bin-wrapper/node_modules/got/package.json</p>
<p>
Dependency Hierarchy:
- image-webpack-loader-5.0.0.tgz (Root Library)
- imagemin-gifsicle-6.0.1.tgz
- gifsicle-4.0.1.tgz
- bin-wrapper-4.1.0.tgz
- download-7.1.0.tgz
- :x: **got-8.3.2.tgz** (Vulnerable Library)
</details>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
The got package before 12.1.0 for Node.js allows a redirect to a UNIX socket.
<p>Publish Date: 2022-06-18
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-33987>CVE-2022-33987</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.1</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Changed
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-33987">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-33987</a></p>
<p>Release Date: 2022-06-18</p>
<p>Fix Resolution: got - 11.8.5,12.1.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_process
|
cve medium detected in got tgz got tgz cve medium severity vulnerability vulnerable libraries got tgz got tgz got tgz simplified http requests library home page a href path to dependency file web src package json path to vulnerable library web src node modules got package json dependency hierarchy image webpack loader tgz root library imagemin gifsicle tgz gifsicle tgz bin build tgz download tgz x got tgz vulnerable library got tgz simplified http requests library home page a href path to dependency file web src package json path to vulnerable library web src node modules bin wrapper node modules got package json dependency hierarchy image webpack loader tgz root library imagemin gifsicle tgz gifsicle tgz bin wrapper tgz download tgz x got tgz vulnerable library vulnerability details the got package before for node js allows a redirect to a unix socket publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction required scope changed impact metrics confidentiality impact low integrity impact low availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution got step up your open source security game with mend
| 0
|
115,195
| 24,732,261,542
|
IssuesEvent
|
2022-10-20 18:39:13
|
nagios-plugins/nagios-plugins
|
https://api.github.com/repos/nagios-plugins/nagios-plugins
|
closed
|
check_disk.c: build failure with upcoming clang-16
|
Code Quality
|
clang-15 tried to enable the following by default:
* `-Werror=implicit-function-declaration`
* `-Werror=implicit-int`
* `-Werror=strict-prototypes`
This caused some breakage, so they're delaying it until clang-16. But it's still coming. With those flags, nagios-plugins fails to build:
```
check_disk.c:392:219: warning: format specifies type 'unsigned long long' but
the argument has type 'uintmax_t' (aka 'unsigned long') [-Wformat]
...path->dfree_inodes_percent, fsp.fsu_blocksize, mult);
```
``` ^~~~
check_disk.c:413:23: error: call to undeclared function 'min_state'; ISO C99 and
later do not support implicit function declarations
[-Werror,-Wimplicit-function-declaration]
```
|
1.0
|
check_disk.c: build failure with upcoming clang-16 - clang-15 tried to enable the following by default:
* `-Werror=implicit-function-declaration`
* `-Werror=implicit-int`
* `-Werror=strict-prototypes`
This caused some breakage, so they're delaying it until clang-16. But it's still coming. With those flags, nagios-plugins fails to build:
```
check_disk.c:392:219: warning: format specifies type 'unsigned long long' but
the argument has type 'uintmax_t' (aka 'unsigned long') [-Wformat]
...path->dfree_inodes_percent, fsp.fsu_blocksize, mult);
```
``` ^~~~
check_disk.c:413:23: error: call to undeclared function 'min_state'; ISO C99 and
later do not support implicit function declarations
[-Werror,-Wimplicit-function-declaration]
```
|
non_process
|
check disk c build failure with upcoming clang clang tried to enable the following by default werror implicit function declaration werror implicit int werror strict prototypes this caused some breakage so they re delaying it until clang but it s still coming with those flags nagios plugins fails to build check disk c warning format specifies type unsigned long long but the argument has type uintmax t aka unsigned long path dfree inodes percent fsp fsu blocksize mult check disk c error call to undeclared function min state iso and later do not support implicit function declarations
| 0
|
32,592
| 6,088,352,864
|
IssuesEvent
|
2017-06-18 20:59:53
|
uccser/cs-unplugged
|
https://api.github.com/repos/uccser/cs-unplugged
|
closed
|
Update authors in the docs
|
documentation
|
`authors` currently lists a few of our names, this should be changed to "University of Canterbury Computer Science Education Research Group" :)
|
1.0
|
Update authors in the docs - `authors` currently lists a few of our names, this should be changed to "University of Canterbury Computer Science Education Research Group" :)
|
non_process
|
update authors in the docs authors currently lists a few of our names this should be changed to university of canterbury computer science education research group
| 0
|
118,154
| 17,576,869,117
|
IssuesEvent
|
2021-08-15 19:34:52
|
ghc-dev/Ronald-Lynch
|
https://api.github.com/repos/ghc-dev/Ronald-Lynch
|
opened
|
CVE-2017-18077 (High) detected in brace-expansion-1.1.6.tgz
|
security vulnerability
|
## CVE-2017-18077 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>brace-expansion-1.1.6.tgz</b></p></summary>
<p>Brace expansion as known from sh/bash</p>
<p>Library home page: <a href="https://registry.npmjs.org/brace-expansion/-/brace-expansion-1.1.6.tgz">https://registry.npmjs.org/brace-expansion/-/brace-expansion-1.1.6.tgz</a></p>
<p>Path to dependency file: Ronald-Lynch/package.json</p>
<p>Path to vulnerable library: Ronald-Lynch/node_modules/brace-expansion</p>
<p>
Dependency Hierarchy:
- jest-cli-15.1.1.tgz (Root Library)
- istanbul-api-1.0.0-aplha.10.tgz
- istanbul-lib-source-maps-1.0.1.tgz
- rimraf-2.5.4.tgz
- glob-7.1.0.tgz
- minimatch-3.0.3.tgz
- :x: **brace-expansion-1.1.6.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/ghc-dev/Ronald-Lynch/commit/261fc8df587f5f58a8b22de30162b5685ad6adb3">261fc8df587f5f58a8b22de30162b5685ad6adb3</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
index.js in brace-expansion before 1.1.7 is vulnerable to Regular Expression Denial of Service (ReDoS) attacks, as demonstrated by an expand argument containing many comma characters.
<p>Publish Date: 2018-01-27
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2017-18077>CVE-2017-18077</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://nvd.nist.gov/vuln/detail/CVE-2017-18077">https://nvd.nist.gov/vuln/detail/CVE-2017-18077</a></p>
<p>Release Date: 2018-01-27</p>
<p>Fix Resolution: 1.1.7</p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"javascript/Node.js","packageName":"brace-expansion","packageVersion":"1.1.6","packageFilePaths":["/package.json"],"isTransitiveDependency":true,"dependencyTree":"jest-cli:15.1.1;istanbul-api:1.0.0-aplha.10;istanbul-lib-source-maps:1.0.1;rimraf:2.5.4;glob:7.1.0;minimatch:3.0.3;brace-expansion:1.1.6","isMinimumFixVersionAvailable":true,"minimumFixVersion":"1.1.7"}],"baseBranches":["master"],"vulnerabilityIdentifier":"CVE-2017-18077","vulnerabilityDetails":"index.js in brace-expansion before 1.1.7 is vulnerable to Regular Expression Denial of Service (ReDoS) attacks, as demonstrated by an expand argument containing many comma characters.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2017-18077","cvss3Severity":"high","cvss3Score":"7.5","cvss3Metrics":{"A":"High","AC":"Low","PR":"None","S":"Unchanged","C":"None","UI":"None","AV":"Network","I":"None"},"extraData":{}}</REMEDIATE> -->
|
True
|
CVE-2017-18077 (High) detected in brace-expansion-1.1.6.tgz - ## CVE-2017-18077 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>brace-expansion-1.1.6.tgz</b></p></summary>
<p>Brace expansion as known from sh/bash</p>
<p>Library home page: <a href="https://registry.npmjs.org/brace-expansion/-/brace-expansion-1.1.6.tgz">https://registry.npmjs.org/brace-expansion/-/brace-expansion-1.1.6.tgz</a></p>
<p>Path to dependency file: Ronald-Lynch/package.json</p>
<p>Path to vulnerable library: Ronald-Lynch/node_modules/brace-expansion</p>
<p>
Dependency Hierarchy:
- jest-cli-15.1.1.tgz (Root Library)
- istanbul-api-1.0.0-aplha.10.tgz
- istanbul-lib-source-maps-1.0.1.tgz
- rimraf-2.5.4.tgz
- glob-7.1.0.tgz
- minimatch-3.0.3.tgz
- :x: **brace-expansion-1.1.6.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/ghc-dev/Ronald-Lynch/commit/261fc8df587f5f58a8b22de30162b5685ad6adb3">261fc8df587f5f58a8b22de30162b5685ad6adb3</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
index.js in brace-expansion before 1.1.7 is vulnerable to Regular Expression Denial of Service (ReDoS) attacks, as demonstrated by an expand argument containing many comma characters.
<p>Publish Date: 2018-01-27
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2017-18077>CVE-2017-18077</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://nvd.nist.gov/vuln/detail/CVE-2017-18077">https://nvd.nist.gov/vuln/detail/CVE-2017-18077</a></p>
<p>Release Date: 2018-01-27</p>
<p>Fix Resolution: 1.1.7</p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"javascript/Node.js","packageName":"brace-expansion","packageVersion":"1.1.6","packageFilePaths":["/package.json"],"isTransitiveDependency":true,"dependencyTree":"jest-cli:15.1.1;istanbul-api:1.0.0-aplha.10;istanbul-lib-source-maps:1.0.1;rimraf:2.5.4;glob:7.1.0;minimatch:3.0.3;brace-expansion:1.1.6","isMinimumFixVersionAvailable":true,"minimumFixVersion":"1.1.7"}],"baseBranches":["master"],"vulnerabilityIdentifier":"CVE-2017-18077","vulnerabilityDetails":"index.js in brace-expansion before 1.1.7 is vulnerable to Regular Expression Denial of Service (ReDoS) attacks, as demonstrated by an expand argument containing many comma characters.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2017-18077","cvss3Severity":"high","cvss3Score":"7.5","cvss3Metrics":{"A":"High","AC":"Low","PR":"None","S":"Unchanged","C":"None","UI":"None","AV":"Network","I":"None"},"extraData":{}}</REMEDIATE> -->
|
non_process
|
cve high detected in brace expansion tgz cve high severity vulnerability vulnerable library brace expansion tgz brace expansion as known from sh bash library home page a href path to dependency file ronald lynch package json path to vulnerable library ronald lynch node modules brace expansion dependency hierarchy jest cli tgz root library istanbul api aplha tgz istanbul lib source maps tgz rimraf tgz glob tgz minimatch tgz x brace expansion tgz vulnerable library found in head commit a href found in base branch master vulnerability details index js in brace expansion before is vulnerable to regular expression denial of service redos attacks as demonstrated by an expand argument containing many comma characters publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution isopenpronvulnerability true ispackagebased true isdefaultbranch true packages istransitivedependency true dependencytree jest cli istanbul api aplha istanbul lib source maps rimraf glob minimatch brace expansion isminimumfixversionavailable true minimumfixversion basebranches vulnerabilityidentifier cve vulnerabilitydetails index js in brace expansion before is vulnerable to regular expression denial of service redos attacks as demonstrated by an expand argument containing many comma characters vulnerabilityurl
| 0
|
364,414
| 25,490,130,854
|
IssuesEvent
|
2022-11-27 00:11:25
|
Zsupi/VolumeRendering
|
https://api.github.com/repos/Zsupi/VolumeRendering
|
closed
|
Documentation
|
documentation framework volume rendering
|
# Task:
## Create documentation for the project
- [ ] Volume Rendering
- [ ] Physic
- [ ] Framework
|
1.0
|
Documentation - # Task:
## Create documentation for the project
- [ ] Volume Rendering
- [ ] Physic
- [ ] Framework
|
non_process
|
documentation task create documentation for the project volume rendering physic framework
| 0
|
318,757
| 9,696,956,641
|
IssuesEvent
|
2019-05-25 12:38:35
|
yalla-coop/earwig
|
https://api.github.com/repos/yalla-coop/earwig
|
opened
|
I cannot see images uploaded to Worksite profiles
|
bug priority-1
| ERROR: type should be string, got "\r\nhttps://www.loom.com/share/ee34cffff2514322adbc911bfa2e6363 | \r\n\r\nThis isn't rendering the photos when you look at the review in the view section\r\n\r\n\r\n\r\n"
|
1.0
|
I cannot see images uploaded to Worksite profiles -
https://www.loom.com/share/ee34cffff2514322adbc911bfa2e6363 |
This isn't rendering the photos when you look at the review in the view section
|
non_process
|
i cannot see images uploaded to worksite profiles this isn t rendering the photos when you look at the review in the view section
| 0
|
175,052
| 21,300,718,535
|
IssuesEvent
|
2022-04-15 02:28:41
|
YaronSpawn/NodeGoat
|
https://api.github.com/repos/YaronSpawn/NodeGoat
|
opened
|
CVE-2021-44906 (High) detected in multiple libraries
|
security vulnerability
|
## CVE-2021-44906 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>minimist-0.0.8.tgz</b>, <b>minimist-0.0.10.tgz</b>, <b>minimist-1.2.0.tgz</b></p></summary>
<p>
<details><summary><b>minimist-0.0.8.tgz</b></p></summary>
<p>parse argument options</p>
<p>Library home page: <a href="https://registry.npmjs.org/minimist/-/minimist-0.0.8.tgz">https://registry.npmjs.org/minimist/-/minimist-0.0.8.tgz</a></p>
<p>Path to dependency file: /package.json</p>
<p>Path to vulnerable library: /node_modules/nyc/node_modules/minimist/package.json</p>
<p>
Dependency Hierarchy:
- mocha-2.5.3.tgz (Root Library)
- mkdirp-0.5.1.tgz
- :x: **minimist-0.0.8.tgz** (Vulnerable Library)
</details>
<details><summary><b>minimist-0.0.10.tgz</b></p></summary>
<p>parse argument options</p>
<p>Library home page: <a href="https://registry.npmjs.org/minimist/-/minimist-0.0.10.tgz">https://registry.npmjs.org/minimist/-/minimist-0.0.10.tgz</a></p>
<p>Path to dependency file: /package.json</p>
<p>Path to vulnerable library: /node_modules/minimist/package.json</p>
<p>
Dependency Hierarchy:
- forever-0.15.3.tgz (Root Library)
- optimist-0.6.1.tgz
- :x: **minimist-0.0.10.tgz** (Vulnerable Library)
</details>
<details><summary><b>minimist-1.2.0.tgz</b></p></summary>
<p>parse argument options</p>
<p>Library home page: <a href="https://registry.npmjs.org/minimist/-/minimist-1.2.0.tgz">https://registry.npmjs.org/minimist/-/minimist-1.2.0.tgz</a></p>
<p>Path to dependency file: /package.json</p>
<p>Path to vulnerable library: /node_modules/nyc/node_modules/detect-indent/node_modules/minimist/package.json</p>
<p>
Dependency Hierarchy:
- grunt-if-0.2.0.tgz (Root Library)
- grunt-contrib-nodeunit-1.0.0.tgz
- nodeunit-0.9.5.tgz
- tap-7.1.2.tgz
- coveralls-2.13.3.tgz
- :x: **minimist-1.2.0.tgz** (Vulnerable Library)
</details>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Minimist <=1.2.5 is vulnerable to Prototype Pollution via file index.js, function setKey() (lines 69-95).
<p>Publish Date: 2022-03-17
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-44906>CVE-2021-44906</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/substack/minimist/issues/164">https://github.com/substack/minimist/issues/164</a></p>
<p>Release Date: 2022-03-17</p>
<p>Fix Resolution: minimist - 1.2.6</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
CVE-2021-44906 (High) detected in multiple libraries - ## CVE-2021-44906 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>minimist-0.0.8.tgz</b>, <b>minimist-0.0.10.tgz</b>, <b>minimist-1.2.0.tgz</b></p></summary>
<p>
<details><summary><b>minimist-0.0.8.tgz</b></p></summary>
<p>parse argument options</p>
<p>Library home page: <a href="https://registry.npmjs.org/minimist/-/minimist-0.0.8.tgz">https://registry.npmjs.org/minimist/-/minimist-0.0.8.tgz</a></p>
<p>Path to dependency file: /package.json</p>
<p>Path to vulnerable library: /node_modules/nyc/node_modules/minimist/package.json</p>
<p>
Dependency Hierarchy:
- mocha-2.5.3.tgz (Root Library)
- mkdirp-0.5.1.tgz
- :x: **minimist-0.0.8.tgz** (Vulnerable Library)
</details>
<details><summary><b>minimist-0.0.10.tgz</b></p></summary>
<p>parse argument options</p>
<p>Library home page: <a href="https://registry.npmjs.org/minimist/-/minimist-0.0.10.tgz">https://registry.npmjs.org/minimist/-/minimist-0.0.10.tgz</a></p>
<p>Path to dependency file: /package.json</p>
<p>Path to vulnerable library: /node_modules/minimist/package.json</p>
<p>
Dependency Hierarchy:
- forever-0.15.3.tgz (Root Library)
- optimist-0.6.1.tgz
- :x: **minimist-0.0.10.tgz** (Vulnerable Library)
</details>
<details><summary><b>minimist-1.2.0.tgz</b></p></summary>
<p>parse argument options</p>
<p>Library home page: <a href="https://registry.npmjs.org/minimist/-/minimist-1.2.0.tgz">https://registry.npmjs.org/minimist/-/minimist-1.2.0.tgz</a></p>
<p>Path to dependency file: /package.json</p>
<p>Path to vulnerable library: /node_modules/nyc/node_modules/detect-indent/node_modules/minimist/package.json</p>
<p>
Dependency Hierarchy:
- grunt-if-0.2.0.tgz (Root Library)
- grunt-contrib-nodeunit-1.0.0.tgz
- nodeunit-0.9.5.tgz
- tap-7.1.2.tgz
- coveralls-2.13.3.tgz
- :x: **minimist-1.2.0.tgz** (Vulnerable Library)
</details>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Minimist <=1.2.5 is vulnerable to Prototype Pollution via file index.js, function setKey() (lines 69-95).
<p>Publish Date: 2022-03-17
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-44906>CVE-2021-44906</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/substack/minimist/issues/164">https://github.com/substack/minimist/issues/164</a></p>
<p>Release Date: 2022-03-17</p>
<p>Fix Resolution: minimist - 1.2.6</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_process
|
cve high detected in multiple libraries cve high severity vulnerability vulnerable libraries minimist tgz minimist tgz minimist tgz minimist tgz parse argument options library home page a href path to dependency file package json path to vulnerable library node modules nyc node modules minimist package json dependency hierarchy mocha tgz root library mkdirp tgz x minimist tgz vulnerable library minimist tgz parse argument options library home page a href path to dependency file package json path to vulnerable library node modules minimist package json dependency hierarchy forever tgz root library optimist tgz x minimist tgz vulnerable library minimist tgz parse argument options library home page a href path to dependency file package json path to vulnerable library node modules nyc node modules detect indent node modules minimist package json dependency hierarchy grunt if tgz root library grunt contrib nodeunit tgz nodeunit tgz tap tgz coveralls tgz x minimist tgz vulnerable library found in base branch master vulnerability details minimist is vulnerable to prototype pollution via file index js function setkey lines publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution minimist step up your open source security game with whitesource
| 0
|
205,850
| 15,691,938,154
|
IssuesEvent
|
2021-03-25 18:29:19
|
M-Davies/eye-of-horus
|
https://api.github.com/repos/M-Davies/eye-of-horus
|
opened
|
Consider making lock gesture optional
|
bug gesture testing
|
Mainly thinking of scenarios where a user cannot log themselves out (although that could also be handled with timeouts)
|
1.0
|
Consider making lock gesture optional - Mainly thinking of scenarios where a user cannot log themselves out (although that could also be handled with timeouts)
|
non_process
|
consider making lock gesture optional mainly thinking of scenarios where a user cannot log themselves out although that could also be handled with timeouts
| 0
|
2,657
| 5,434,174,072
|
IssuesEvent
|
2017-03-05 03:34:55
|
coala/documentation
|
https://api.github.com/repos/coala/documentation
|
reopened
|
Docs: Add short links on the pages that have them
|
difficulty/low difficulty/newcomer process/pending review
|
Some presentation had direct links to the newcomer guide.
We have some "URL shortener":
https://coala.io/newcomer -> http://docs.coala.io/en/latest/Developers/Newcomers_Guide.html
https://coala.io/new -> https://github.com/issues?utf8=%E2%9C%93&q=is%3Aopen+is%3Aissue+user%3Acoala+label%3Adifficulty%2Fnewcomer++no%3Aassignee
etc...
We should add the URL shortened link on the page getting shortened in the documentation; so people browsing around the docs know there's a short URL pointing to the page.
URLS:
```
location /newcomer { return 302 http://docs.coala.io/en/latest/Developers/Newcomers_Guide.html; }
location /newcomers { return 302 http://docs.coala.io/en/latest/Developers/Newcomers_Guide.html; }
location /new { return 302 https://github.com/issues?utf8=%E2%9C%93&q=is%3Aopen+is%3Aissue+user%3Acoala+label%3Adifficulty%2Fnewcomer++no%3Aassignee; }
location /low { return 302 https://github.com/issues?utf8=%E2%9C%93&q=is%3Aopen+is%3Aissue+user%3Acoala+label%3Adifficulty%2Flow++no%3Aassignee; }
location /review { return 302 https://github.com/pulls?q=is%3Aopen+is%3Apr+user%3Acoala+label%3A%22process%2Fpending+review%22+sort%3Acreated-asc; }
location /languages { return 302 https://github.com/coala/bear-docs/blob/master/README.rst; }
location /chat { return 302 https://gitter.im/coala-analyzer/coala; }
location /git { return 302 http://docs.coala.io/en/latest/Developers/Git_Basics.html; }
location /commit { return 302 http://docs.coala.io/en/latest/Developers/Writing_Good_Commits.html; }
location /cep { return 302 https://github.com/coala/cEPs/blob/master/cEP-0000.md; }
location /tutorial { return 302 https://docs.coala.io/en/latest/Users/Tutorial.html; }
location /writingbears { return 302 https://docs.coala.io/en/latest/Developers/Writing_Bears.html; }
location /channels { return 302 https://github.com/coala/coala/wiki/Communication-Channels; }
location /newform { return 302 https://docs.google.com/forms/d/e/1FAIpQLSd7g_MU_c-BMQ62WHeznrvcoXwqW87O_Wq4Gz7-pp8PJ38Wdg/viewform; }
location /projects { return 302 https://github.com/coala/coala/wiki/Project-Ideas; }
location /reviewsprint { return 302 https://docs.google.com/forms/d/e/1FAIpQLSd4vHafTyY4RW--fOyIVecBM0WKNEeF-RyFvUn83jCF9ou2tg/viewform; }
location /reply { return 302 https://github.com/coala/coala/wiki/Reply-Templates; }
location /linespots { return 302 https://gitlab.com/sims1253/Linespots; }
location /usability { return 302 https://docs.google.com/forms/d/e/1FAIpQLSe9lZxuYEKlvxXzQUOTwrre3CQMNsks7eOzEl49_2q5vlDl0w/viewform; }
location /cep5 { return 302 https://github.com/coala/cEPs/blob/master/cEP-0005.md; }
location /starwars { return 302 https://www.youtube.com/watch?v=JWVCMjKU_10; }
```
|
1.0
|
Docs: Add short links on the pages that have them - Some presentation had direct links to the newcomer guide.
We have some "URL shortener":
https://coala.io/newcomer -> http://docs.coala.io/en/latest/Developers/Newcomers_Guide.html
https://coala.io/new -> https://github.com/issues?utf8=%E2%9C%93&q=is%3Aopen+is%3Aissue+user%3Acoala+label%3Adifficulty%2Fnewcomer++no%3Aassignee
etc...
We should add the URL shortened link on the page getting shortened in the documentation; so people browsing around the docs know there's a short URL pointing to the page.
URLS:
```
location /newcomer { return 302 http://docs.coala.io/en/latest/Developers/Newcomers_Guide.html; }
location /newcomers { return 302 http://docs.coala.io/en/latest/Developers/Newcomers_Guide.html; }
location /new { return 302 https://github.com/issues?utf8=%E2%9C%93&q=is%3Aopen+is%3Aissue+user%3Acoala+label%3Adifficulty%2Fnewcomer++no%3Aassignee; }
location /low { return 302 https://github.com/issues?utf8=%E2%9C%93&q=is%3Aopen+is%3Aissue+user%3Acoala+label%3Adifficulty%2Flow++no%3Aassignee; }
location /review { return 302 https://github.com/pulls?q=is%3Aopen+is%3Apr+user%3Acoala+label%3A%22process%2Fpending+review%22+sort%3Acreated-asc; }
location /languages { return 302 https://github.com/coala/bear-docs/blob/master/README.rst; }
location /chat { return 302 https://gitter.im/coala-analyzer/coala; }
location /git { return 302 http://docs.coala.io/en/latest/Developers/Git_Basics.html; }
location /commit { return 302 http://docs.coala.io/en/latest/Developers/Writing_Good_Commits.html; }
location /cep { return 302 https://github.com/coala/cEPs/blob/master/cEP-0000.md; }
location /tutorial { return 302 https://docs.coala.io/en/latest/Users/Tutorial.html; }
location /writingbears { return 302 https://docs.coala.io/en/latest/Developers/Writing_Bears.html; }
location /channels { return 302 https://github.com/coala/coala/wiki/Communication-Channels; }
location /newform { return 302 https://docs.google.com/forms/d/e/1FAIpQLSd7g_MU_c-BMQ62WHeznrvcoXwqW87O_Wq4Gz7-pp8PJ38Wdg/viewform; }
location /projects { return 302 https://github.com/coala/coala/wiki/Project-Ideas; }
location /reviewsprint { return 302 https://docs.google.com/forms/d/e/1FAIpQLSd4vHafTyY4RW--fOyIVecBM0WKNEeF-RyFvUn83jCF9ou2tg/viewform; }
location /reply { return 302 https://github.com/coala/coala/wiki/Reply-Templates; }
location /linespots { return 302 https://gitlab.com/sims1253/Linespots; }
location /usability { return 302 https://docs.google.com/forms/d/e/1FAIpQLSe9lZxuYEKlvxXzQUOTwrre3CQMNsks7eOzEl49_2q5vlDl0w/viewform; }
location /cep5 { return 302 https://github.com/coala/cEPs/blob/master/cEP-0005.md; }
location /starwars { return 302 https://www.youtube.com/watch?v=JWVCMjKU_10; }
```
|
process
|
docs add short links on the pages that have them some presentation had direct links to the newcomer guide we have some url shortener etc we should add the url shortened link on the page getting shortened in the documentation so people browsing around the docs know there s a short url pointing to the page urls location newcomer return location newcomers return location new return location low return location review return location languages return location chat return location git return location commit return location cep return location tutorial return location writingbears return location channels return location newform return location projects return location reviewsprint return location reply return location linespots return location usability return location return location starwars return
| 1
|
820,715
| 30,784,649,758
|
IssuesEvent
|
2023-07-31 12:29:07
|
PHI-base/PHI5_web_display
|
https://api.github.com/repos/PHI-base/PHI5_web_display
|
opened
|
Don't show 'Annotation extension term' in Advanced Search
|
medium priority
|
In the Advanced Search page, there is an option for 'Annotation extension term' that was meant to search based on the values of annotation extensions.
This search option currently doesn't do anything, and in issue #94 we decided that this search option probably won't be useful for most users of PHI-base 5.
So, we want to hide (or remove) this search option until the users of PHI-base 5 request it.

|
1.0
|
Don't show 'Annotation extension term' in Advanced Search - In the Advanced Search page, there is an option for 'Annotation extension term' that was meant to search based on the values of annotation extensions.
This search option currently doesn't do anything, and in issue #94 we decided that this search option probably won't be useful for most users of PHI-base 5.
So, we want to hide (or remove) this search option until the users of PHI-base 5 request it.

|
non_process
|
don t show annotation extension term in advanced search in the advanced search page there is an option for annotation extension term that was meant to search based on the values of annotation extensions this search option currently doesn t do anything and in issue we decided that this search option probably won t be useful for most users of phi base so we want to hide or remove this search option until the users of phi base request it
| 0
|
115,204
| 14,703,595,089
|
IssuesEvent
|
2021-01-04 15:16:13
|
dusk-network/plonk
|
https://api.github.com/repos/dusk-network/plonk
|
opened
|
Implement min constant generics
|
API-design area:cryptography constraint_system type:feature
|
Currently, the api allows the user to build two different arity lookup tables. Either 3 or 4.
The former is for witness of single operations types for lookups and the latter is for concatenated lookups. Instead of having both structs, we should implement min constant generics to have rules that permeates both use cases.
|
1.0
|
Implement min constant generics - Currently, the api allows the user to build two different arity lookup tables. Either 3 or 4.
The former is for witness of single operations types for lookups and the latter is for concatenated lookups. Instead of having both structs, we should implement min constant generics to have rules that permeates both use cases.
|
non_process
|
implement min constant generics currently the api allows the user to build two different arity lookup tables either or the former is for witness of single operations types for lookups and the latter is for concatenated lookups instead of having both structs we should implement min constant generics to have rules that permeates both use cases
| 0
|
13,390
| 15,865,857,526
|
IssuesEvent
|
2021-04-08 15:08:46
|
GoogleCloudPlatform/fda-mystudies
|
https://api.github.com/repos/GoogleCloudPlatform/fda-mystudies
|
closed
|
[Android] Study Activities > Need to correct the format to display the activity runs
|
Android P2 Process: Enhancement Process: Fixed Process: Tested dev
|

A/R:- **Run: 4/33, 3 done, 1 missed**
E/R:- **Run 4 of 33, 3 done, 1 missed**
|
3.0
|
[Android] Study Activities > Need to correct the format to display the activity runs - 
A/R:- **Run: 4/33, 3 done, 1 missed**
E/R:- **Run 4 of 33, 3 done, 1 missed**
|
process
|
study activities need to correct the format to display the activity runs a r run done missed e r run of done missed
| 1
|
15,667
| 19,847,154,340
|
IssuesEvent
|
2022-01-21 08:07:58
|
ooi-data/CE02SHSM-SBD12-08-FDCHPA000-recovered_host-fdchp_a_dcl_instrument_recovered
|
https://api.github.com/repos/ooi-data/CE02SHSM-SBD12-08-FDCHPA000-recovered_host-fdchp_a_dcl_instrument_recovered
|
opened
|
🛑 Processing failed: ValueError
|
process
|
## Overview
`ValueError` found in `processing_task` task during run ended on 2022-01-21T08:07:57.895380.
## Details
Flow name: `CE02SHSM-SBD12-08-FDCHPA000-recovered_host-fdchp_a_dcl_instrument_recovered`
Task name: `processing_task`
Error type: `ValueError`
Error message: not enough values to unpack (expected 3, got 0)
<details>
<summary>Traceback</summary>
```
Traceback (most recent call last):
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/ooi_harvester/processor/pipeline.py", line 165, in processing
final_path = finalize_data_stream(
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/ooi_harvester/processor/__init__.py", line 84, in finalize_data_stream
append_to_zarr(mod_ds, final_store, enc, logger=logger)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/ooi_harvester/processor/__init__.py", line 357, in append_to_zarr
_append_zarr(store, mod_ds)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/ooi_harvester/processor/utils.py", line 187, in _append_zarr
existing_arr.append(var_data.values)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/core/variable.py", line 519, in values
return _as_array_or_item(self._data)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/core/variable.py", line 259, in _as_array_or_item
data = np.asarray(data)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/dask/array/core.py", line 1541, in __array__
x = self.compute()
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/dask/base.py", line 288, in compute
(result,) = compute(self, traverse=False, **kwargs)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/dask/base.py", line 571, in compute
results = schedule(dsk, keys, **kwargs)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/dask/threaded.py", line 79, in get
results = get_async(
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/dask/local.py", line 507, in get_async
raise_exception(exc, tb)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/dask/local.py", line 315, in reraise
raise exc
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/dask/local.py", line 220, in execute_task
result = _execute_task(task, data)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/dask/core.py", line 119, in _execute_task
return func(*(_execute_task(a, cache) for a in args))
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/dask/array/core.py", line 116, in getter
c = np.asarray(c)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/core/indexing.py", line 357, in __array__
return np.asarray(self.array, dtype=dtype)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/core/indexing.py", line 551, in __array__
self._ensure_cached()
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/core/indexing.py", line 548, in _ensure_cached
self.array = NumpyIndexingAdapter(np.asarray(self.array))
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/core/indexing.py", line 521, in __array__
return np.asarray(self.array, dtype=dtype)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/core/indexing.py", line 422, in __array__
return np.asarray(array[self.key], dtype=None)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/coding/variables.py", line 70, in __array__
return self.func(self.array)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/coding/variables.py", line 137, in _apply_mask
data = np.asarray(data, dtype=dtype)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/core/indexing.py", line 422, in __array__
return np.asarray(array[self.key], dtype=None)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/backends/zarr.py", line 73, in __getitem__
return array[key.tuple]
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/zarr/core.py", line 673, in __getitem__
return self.get_basic_selection(selection, fields=fields)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/zarr/core.py", line 798, in get_basic_selection
return self._get_basic_selection_nd(selection=selection, out=out,
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/zarr/core.py", line 841, in _get_basic_selection_nd
return self._get_selection(indexer=indexer, out=out, fields=fields)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/zarr/core.py", line 1135, in _get_selection
lchunk_coords, lchunk_selection, lout_selection = zip(*indexer)
ValueError: not enough values to unpack (expected 3, got 0)
```
</details>
|
1.0
|
🛑 Processing failed: ValueError - ## Overview
`ValueError` found in `processing_task` task during run ended on 2022-01-21T08:07:57.895380.
## Details
Flow name: `CE02SHSM-SBD12-08-FDCHPA000-recovered_host-fdchp_a_dcl_instrument_recovered`
Task name: `processing_task`
Error type: `ValueError`
Error message: not enough values to unpack (expected 3, got 0)
<details>
<summary>Traceback</summary>
```
Traceback (most recent call last):
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/ooi_harvester/processor/pipeline.py", line 165, in processing
final_path = finalize_data_stream(
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/ooi_harvester/processor/__init__.py", line 84, in finalize_data_stream
append_to_zarr(mod_ds, final_store, enc, logger=logger)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/ooi_harvester/processor/__init__.py", line 357, in append_to_zarr
_append_zarr(store, mod_ds)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/ooi_harvester/processor/utils.py", line 187, in _append_zarr
existing_arr.append(var_data.values)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/core/variable.py", line 519, in values
return _as_array_or_item(self._data)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/core/variable.py", line 259, in _as_array_or_item
data = np.asarray(data)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/dask/array/core.py", line 1541, in __array__
x = self.compute()
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/dask/base.py", line 288, in compute
(result,) = compute(self, traverse=False, **kwargs)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/dask/base.py", line 571, in compute
results = schedule(dsk, keys, **kwargs)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/dask/threaded.py", line 79, in get
results = get_async(
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/dask/local.py", line 507, in get_async
raise_exception(exc, tb)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/dask/local.py", line 315, in reraise
raise exc
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/dask/local.py", line 220, in execute_task
result = _execute_task(task, data)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/dask/core.py", line 119, in _execute_task
return func(*(_execute_task(a, cache) for a in args))
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/dask/array/core.py", line 116, in getter
c = np.asarray(c)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/core/indexing.py", line 357, in __array__
return np.asarray(self.array, dtype=dtype)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/core/indexing.py", line 551, in __array__
self._ensure_cached()
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/core/indexing.py", line 548, in _ensure_cached
self.array = NumpyIndexingAdapter(np.asarray(self.array))
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/core/indexing.py", line 521, in __array__
return np.asarray(self.array, dtype=dtype)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/core/indexing.py", line 422, in __array__
return np.asarray(array[self.key], dtype=None)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/coding/variables.py", line 70, in __array__
return self.func(self.array)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/coding/variables.py", line 137, in _apply_mask
data = np.asarray(data, dtype=dtype)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/core/indexing.py", line 422, in __array__
return np.asarray(array[self.key], dtype=None)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/backends/zarr.py", line 73, in __getitem__
return array[key.tuple]
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/zarr/core.py", line 673, in __getitem__
return self.get_basic_selection(selection, fields=fields)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/zarr/core.py", line 798, in get_basic_selection
return self._get_basic_selection_nd(selection=selection, out=out,
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/zarr/core.py", line 841, in _get_basic_selection_nd
return self._get_selection(indexer=indexer, out=out, fields=fields)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/zarr/core.py", line 1135, in _get_selection
lchunk_coords, lchunk_selection, lout_selection = zip(*indexer)
ValueError: not enough values to unpack (expected 3, got 0)
```
</details>
|
process
|
🛑 processing failed valueerror overview valueerror found in processing task task during run ended on details flow name recovered host fdchp a dcl instrument recovered task name processing task error type valueerror error message not enough values to unpack expected got traceback traceback most recent call last file srv conda envs notebook lib site packages ooi harvester processor pipeline py line in processing final path finalize data stream file srv conda envs notebook lib site packages ooi harvester processor init py line in finalize data stream append to zarr mod ds final store enc logger logger file srv conda envs notebook lib site packages ooi harvester processor init py line in append to zarr append zarr store mod ds file srv conda envs notebook lib site packages ooi harvester processor utils py line in append zarr existing arr append var data values file srv conda envs notebook lib site packages xarray core variable py line in values return as array or item self data file srv conda envs notebook lib site packages xarray core variable py line in as array or item data np asarray data file srv conda envs notebook lib site packages dask array core py line in array x self compute file srv conda envs notebook lib site packages dask base py line in compute result compute self traverse false kwargs file srv conda envs notebook lib site packages dask base py line in compute results schedule dsk keys kwargs file srv conda envs notebook lib site packages dask threaded py line in get results get async file srv conda envs notebook lib site packages dask local py line in get async raise exception exc tb file srv conda envs notebook lib site packages dask local py line in reraise raise exc file srv conda envs notebook lib site packages dask local py line in execute task result execute task task data file srv conda envs notebook lib site packages dask core py line in execute task return func execute task a cache for a in args file srv conda envs notebook lib site packages dask array core py line in getter c np asarray c file srv conda envs notebook lib site packages xarray core indexing py line in array return np asarray self array dtype dtype file srv conda envs notebook lib site packages xarray core indexing py line in array self ensure cached file srv conda envs notebook lib site packages xarray core indexing py line in ensure cached self array numpyindexingadapter np asarray self array file srv conda envs notebook lib site packages xarray core indexing py line in array return np asarray self array dtype dtype file srv conda envs notebook lib site packages xarray core indexing py line in array return np asarray array dtype none file srv conda envs notebook lib site packages xarray coding variables py line in array return self func self array file srv conda envs notebook lib site packages xarray coding variables py line in apply mask data np asarray data dtype dtype file srv conda envs notebook lib site packages xarray core indexing py line in array return np asarray array dtype none file srv conda envs notebook lib site packages xarray backends zarr py line in getitem return array file srv conda envs notebook lib site packages zarr core py line in getitem return self get basic selection selection fields fields file srv conda envs notebook lib site packages zarr core py line in get basic selection return self get basic selection nd selection selection out out file srv conda envs notebook lib site packages zarr core py line in get basic selection nd return self get selection indexer indexer out out fields fields file srv conda envs notebook lib site packages zarr core py line in get selection lchunk coords lchunk selection lout selection zip indexer valueerror not enough values to unpack expected got
| 1
|
4,515
| 7,360,184,040
|
IssuesEvent
|
2018-03-10 15:59:35
|
ODiogoSilva/assemblerflow
|
https://api.github.com/repos/ODiogoSilva/assemblerflow
|
opened
|
Add definition of compute resources to Process classes
|
enhancement process
|
Add the `cpu` and `ram` attributes to the Process base class. These attributes will be used to build the nextflow configuration file based on the preset values. They can be later edited in the nextflow config as usual.
|
1.0
|
Add definition of compute resources to Process classes - Add the `cpu` and `ram` attributes to the Process base class. These attributes will be used to build the nextflow configuration file based on the preset values. They can be later edited in the nextflow config as usual.
|
process
|
add definition of compute resources to process classes add the cpu and ram attributes to the process base class these attributes will be used to build the nextflow configuration file based on the preset values they can be later edited in the nextflow config as usual
| 1
|
24,406
| 12,291,937,766
|
IssuesEvent
|
2020-05-10 12:28:45
|
Bantr/Spawn
|
https://api.github.com/repos/Bantr/Spawn
|
opened
|
useWhyDidYouUpdate
|
Performance
|
This hook makes it easy to see which prop changes are causing a component to re-render. If a function is particularly expensive to run
|
True
|
useWhyDidYouUpdate - This hook makes it easy to see which prop changes are causing a component to re-render. If a function is particularly expensive to run
|
non_process
|
usewhydidyouupdate this hook makes it easy to see which prop changes are causing a component to re render if a function is particularly expensive to run
| 0
|
13,160
| 15,589,713,407
|
IssuesEvent
|
2021-03-18 08:25:59
|
Ultimate-Hosts-Blacklist/whitelist
|
https://api.github.com/repos/Ultimate-Hosts-Blacklist/whitelist
|
opened
|
[FALSE-POSITIVE?] boards-api.greenhouse.io
|
whitelisting process
|
Greenhouse is an applicant tracking system and recruiting software that is designed to help make companies great at hiring and hire for what's next.
Many companies use it, and it shouldn't be blocked. Example of broken page: https://www.knock.com/careers#current-openings
|
1.0
|
[FALSE-POSITIVE?] boards-api.greenhouse.io - Greenhouse is an applicant tracking system and recruiting software that is designed to help make companies great at hiring and hire for what's next.
Many companies use it, and it shouldn't be blocked. Example of broken page: https://www.knock.com/careers#current-openings
|
process
|
boards api greenhouse io greenhouse is an applicant tracking system and recruiting software that is designed to help make companies great at hiring and hire for what s next many companies use it and it shouldn t be blocked example of broken page
| 1
|
22,053
| 30,571,748,330
|
IssuesEvent
|
2023-07-20 23:11:42
|
h4sh5/pypi-auto-scanner
|
https://api.github.com/repos/h4sh5/pypi-auto-scanner
|
opened
|
roblox-pyc 1.19.72 has 2 GuardDog issues
|
guarddog silent-process-execution
|
https://pypi.org/project/roblox-pyc
https://inspector.pypi.io/project/roblox-pyc
```{
"dependency": "roblox-pyc",
"version": "1.19.72",
"result": {
"issues": 2,
"errors": {},
"results": {
"silent-process-execution": [
{
"location": "roblox-pyc-1.19.72/src/robloxpy.py:134",
"code": " subprocess.call([\"luarocks\", \"--version\"], stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL, stdin=subprocess.DEVNULL)",
"message": "This package is silently executing an external binary, redirecting stdout, stderr and stdin to /dev/null"
},
{
"location": "roblox-pyc-1.19.72/src/robloxpy.py:141",
"code": " subprocess.call([\"moonc\", \"--version\"], stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL, stdin=subprocess.DEVNULL)",
"message": "This package is silently executing an external binary, redirecting stdout, stderr and stdin to /dev/null"
}
]
},
"path": "/tmp/tmpfakzkslj/roblox-pyc"
}
}```
|
1.0
|
roblox-pyc 1.19.72 has 2 GuardDog issues - https://pypi.org/project/roblox-pyc
https://inspector.pypi.io/project/roblox-pyc
```{
"dependency": "roblox-pyc",
"version": "1.19.72",
"result": {
"issues": 2,
"errors": {},
"results": {
"silent-process-execution": [
{
"location": "roblox-pyc-1.19.72/src/robloxpy.py:134",
"code": " subprocess.call([\"luarocks\", \"--version\"], stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL, stdin=subprocess.DEVNULL)",
"message": "This package is silently executing an external binary, redirecting stdout, stderr and stdin to /dev/null"
},
{
"location": "roblox-pyc-1.19.72/src/robloxpy.py:141",
"code": " subprocess.call([\"moonc\", \"--version\"], stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL, stdin=subprocess.DEVNULL)",
"message": "This package is silently executing an external binary, redirecting stdout, stderr and stdin to /dev/null"
}
]
},
"path": "/tmp/tmpfakzkslj/roblox-pyc"
}
}```
|
process
|
roblox pyc has guarddog issues dependency roblox pyc version result issues errors results silent process execution location roblox pyc src robloxpy py code subprocess call stdout subprocess devnull stderr subprocess devnull stdin subprocess devnull message this package is silently executing an external binary redirecting stdout stderr and stdin to dev null location roblox pyc src robloxpy py code subprocess call stdout subprocess devnull stderr subprocess devnull stdin subprocess devnull message this package is silently executing an external binary redirecting stdout stderr and stdin to dev null path tmp tmpfakzkslj roblox pyc
| 1
|
36,535
| 9,819,964,006
|
IssuesEvent
|
2019-06-14 00:12:30
|
Exawind/nalu-wind
|
https://api.github.com/repos/Exawind/nalu-wind
|
closed
|
Compilation failure with nalu-wind VOTD and Trilinos dev VOTD
|
build-issues question
|
Using NaluWind SHA 32994aff27c5 and Trilinos dev 8a82b322ba, I'm seeing this error. Is my Trilinos just too far ahead of NaluWind?
```
/opt/cray/pe/craype/2.5.15/bin/CC -DHYPRE_COGMRES -DNALU_USES_HYPRE -I/global/project/projectdirs/m2853/jjhu/build-try/source/nalu-wind/include -Iinclude -isystem /global/project/projectdirs/m2853/jjhu/build-try/install/trilinos/include -isystem /global/project/projectdirs/m2853/shreyas/exawind-green/spack/opt/spack/cray-cnl9-haswell/intel-18.0.1.163/superlu-4.3-d35xuc4stqx5jha67i4llnnk6kg5pveb/include -isystem /global/common/cori/software/zlib/1.2.8/hsw/intel/include -isystem /global/project/projectdirs/m2853/shreyas/exawind-green/spack/opt/spack/cray-cnl9-haswell/intel-18.0.1.163/boost-1.67.0-6ml43almbqi7cwm5qfhpyosvq2kn53wl/include -isystem /global/project/projectdirs/m2853/shreyas/exawind-green/spack/opt/spack/cray-cnl9-haswell/intel-18.0.1.163/netcdf-4.4.1.1-k67r6x6j44bf2pn25edw4xjm5m5xi23b/include -isystem /global/project/projectdirs/m2853/shreyas/exawind-green/spack/opt/spack/cray-cnl9-haswell/intel-18.0.1.163/hdf5-1.10.1-5hzg466kg6pcs7zhwcxfynkxvuu6ooc3/include -isystem /global/project/projectdirs/m2853/shreyas/exawind-green/spack/opt/spack/cray-cnl9-haswell/intel-18.0.1.163/parallel-netcdf-1.8.0-accgi67vigvqsuut3czxa22iuhchu7nb/include -isystem /global/project/projectdirs/m2853/shreyas/exawind-green/spack/opt/spack/cray-cnl9-haswell/intel-18.0.1.163/yaml-cpp-develop-oh7l2iywwzgo3gf4jokberavczyklkx3/lib/cmake/yaml-cpp/../../../include -Wall -Wextra -pedantic -diag-disable:11074,11076 -restrict -qopenmp -O3 -DNDEBUG -std=c++11 -MD -MT CMakeFiles/nalu.dir/src/xfer/Transfer.C.o -MF CMakeFiles/nalu.dir/src/xfer/Transfer.C.o.d -o CMakeFiles/nalu.dir/src/xfer/Transfer.C.o -c /global/project/projectdirs/m2853/jjhu/build-try/source/nalu-wind/src/xfer/Transfer.C
/global/project/projectdirs/m2853/jjhu/build-try/source/nalu-wind/src/xfer/Transfer.C(365): error: no instance of constructor "stk::transfer::GeometricTransfer<INTERPOLATE>::GeometricTransfer [with INTERPOLATE=sierra::nalu::LinInterp<sierra::nalu::FromMesh, sierra::nalu::ToMesh>]" matches the argument list
argument types are: (std::shared_ptr<sierra::nalu::FromMesh>, std::shared_ptr<sierra::nalu::ToMesh>, std::__cxx11::string, double, stk::search::SearchMethod)
transfer_.reset(new STKTransfer(from_mesh, to_mesh, name_, searchExpansionFactor_, searchMethod));
^
/global/project/projectdirs/m2853/jjhu/build-try/install/trilinos/include/stk_transfer/GeometricTransfer.hpp(78): note: this candidate was rejected because mismatch in count of arguments
template <class INTERPOLATE> class GeometricTransfer : public TransferBase {
^
/global/project/projectdirs/m2853/jjhu/build-try/install/trilinos/include/stk_transfer/GeometricTransfer.hpp(103): note: this candidate was rejected because arguments do not match
GeometricTransfer(boost::shared_ptr<MeshA> &mesha,
^
/global/project/projectdirs/m2853/jjhu/build-try/source/nalu-wind/src/xfer/Transfer.C(377): error: class "stk::transfer::GeometricTransfer<sierra::nalu::LinInterp<sierra::nalu::FromMesh, sierra::nalu::ToMesh>>" has no member "meshA"
const std::shared_ptr<STKTransfer::MeshA> mesha = transfer->meshA();
^
compilation aborted for /global/project/projectdirs/m2853/jjhu/build-try/source/nalu-wind/src/xfer/Transfer.C (code 2)
[2/5] Building CXX object CMakeFiles/nalu.dir/src/wind_energy/SyntheticLidar.C.o
```
|
1.0
|
Compilation failure with nalu-wind VOTD and Trilinos dev VOTD - Using NaluWind SHA 32994aff27c5 and Trilinos dev 8a82b322ba, I'm seeing this error. Is my Trilinos just too far ahead of NaluWind?
```
/opt/cray/pe/craype/2.5.15/bin/CC -DHYPRE_COGMRES -DNALU_USES_HYPRE -I/global/project/projectdirs/m2853/jjhu/build-try/source/nalu-wind/include -Iinclude -isystem /global/project/projectdirs/m2853/jjhu/build-try/install/trilinos/include -isystem /global/project/projectdirs/m2853/shreyas/exawind-green/spack/opt/spack/cray-cnl9-haswell/intel-18.0.1.163/superlu-4.3-d35xuc4stqx5jha67i4llnnk6kg5pveb/include -isystem /global/common/cori/software/zlib/1.2.8/hsw/intel/include -isystem /global/project/projectdirs/m2853/shreyas/exawind-green/spack/opt/spack/cray-cnl9-haswell/intel-18.0.1.163/boost-1.67.0-6ml43almbqi7cwm5qfhpyosvq2kn53wl/include -isystem /global/project/projectdirs/m2853/shreyas/exawind-green/spack/opt/spack/cray-cnl9-haswell/intel-18.0.1.163/netcdf-4.4.1.1-k67r6x6j44bf2pn25edw4xjm5m5xi23b/include -isystem /global/project/projectdirs/m2853/shreyas/exawind-green/spack/opt/spack/cray-cnl9-haswell/intel-18.0.1.163/hdf5-1.10.1-5hzg466kg6pcs7zhwcxfynkxvuu6ooc3/include -isystem /global/project/projectdirs/m2853/shreyas/exawind-green/spack/opt/spack/cray-cnl9-haswell/intel-18.0.1.163/parallel-netcdf-1.8.0-accgi67vigvqsuut3czxa22iuhchu7nb/include -isystem /global/project/projectdirs/m2853/shreyas/exawind-green/spack/opt/spack/cray-cnl9-haswell/intel-18.0.1.163/yaml-cpp-develop-oh7l2iywwzgo3gf4jokberavczyklkx3/lib/cmake/yaml-cpp/../../../include -Wall -Wextra -pedantic -diag-disable:11074,11076 -restrict -qopenmp -O3 -DNDEBUG -std=c++11 -MD -MT CMakeFiles/nalu.dir/src/xfer/Transfer.C.o -MF CMakeFiles/nalu.dir/src/xfer/Transfer.C.o.d -o CMakeFiles/nalu.dir/src/xfer/Transfer.C.o -c /global/project/projectdirs/m2853/jjhu/build-try/source/nalu-wind/src/xfer/Transfer.C
/global/project/projectdirs/m2853/jjhu/build-try/source/nalu-wind/src/xfer/Transfer.C(365): error: no instance of constructor "stk::transfer::GeometricTransfer<INTERPOLATE>::GeometricTransfer [with INTERPOLATE=sierra::nalu::LinInterp<sierra::nalu::FromMesh, sierra::nalu::ToMesh>]" matches the argument list
argument types are: (std::shared_ptr<sierra::nalu::FromMesh>, std::shared_ptr<sierra::nalu::ToMesh>, std::__cxx11::string, double, stk::search::SearchMethod)
transfer_.reset(new STKTransfer(from_mesh, to_mesh, name_, searchExpansionFactor_, searchMethod));
^
/global/project/projectdirs/m2853/jjhu/build-try/install/trilinos/include/stk_transfer/GeometricTransfer.hpp(78): note: this candidate was rejected because mismatch in count of arguments
template <class INTERPOLATE> class GeometricTransfer : public TransferBase {
^
/global/project/projectdirs/m2853/jjhu/build-try/install/trilinos/include/stk_transfer/GeometricTransfer.hpp(103): note: this candidate was rejected because arguments do not match
GeometricTransfer(boost::shared_ptr<MeshA> &mesha,
^
/global/project/projectdirs/m2853/jjhu/build-try/source/nalu-wind/src/xfer/Transfer.C(377): error: class "stk::transfer::GeometricTransfer<sierra::nalu::LinInterp<sierra::nalu::FromMesh, sierra::nalu::ToMesh>>" has no member "meshA"
const std::shared_ptr<STKTransfer::MeshA> mesha = transfer->meshA();
^
compilation aborted for /global/project/projectdirs/m2853/jjhu/build-try/source/nalu-wind/src/xfer/Transfer.C (code 2)
[2/5] Building CXX object CMakeFiles/nalu.dir/src/wind_energy/SyntheticLidar.C.o
```
|
non_process
|
compilation failure with nalu wind votd and trilinos dev votd using naluwind sha and trilinos dev i m seeing this error is my trilinos just too far ahead of naluwind opt cray pe craype bin cc dhypre cogmres dnalu uses hypre i global project projectdirs jjhu build try source nalu wind include iinclude isystem global project projectdirs jjhu build try install trilinos include isystem global project projectdirs shreyas exawind green spack opt spack cray haswell intel superlu include isystem global common cori software zlib hsw intel include isystem global project projectdirs shreyas exawind green spack opt spack cray haswell intel boost include isystem global project projectdirs shreyas exawind green spack opt spack cray haswell intel netcdf include isystem global project projectdirs shreyas exawind green spack opt spack cray haswell intel include isystem global project projectdirs shreyas exawind green spack opt spack cray haswell intel parallel netcdf include isystem global project projectdirs shreyas exawind green spack opt spack cray haswell intel yaml cpp develop lib cmake yaml cpp include wall wextra pedantic diag disable restrict qopenmp dndebug std c md mt cmakefiles nalu dir src xfer transfer c o mf cmakefiles nalu dir src xfer transfer c o d o cmakefiles nalu dir src xfer transfer c o c global project projectdirs jjhu build try source nalu wind src xfer transfer c global project projectdirs jjhu build try source nalu wind src xfer transfer c error no instance of constructor stk transfer geometrictransfer geometrictransfer matches the argument list argument types are std shared ptr std shared ptr std string double stk search searchmethod transfer reset new stktransfer from mesh to mesh name searchexpansionfactor searchmethod global project projectdirs jjhu build try install trilinos include stk transfer geometrictransfer hpp note this candidate was rejected because mismatch in count of arguments template class geometrictransfer public transferbase global project projectdirs jjhu build try install trilinos include stk transfer geometrictransfer hpp note this candidate was rejected because arguments do not match geometrictransfer boost shared ptr mesha global project projectdirs jjhu build try source nalu wind src xfer transfer c error class stk transfer geometrictransfer has no member mesha const std shared ptr mesha transfer mesha compilation aborted for global project projectdirs jjhu build try source nalu wind src xfer transfer c code building cxx object cmakefiles nalu dir src wind energy syntheticlidar c o
| 0
|
82,099
| 10,270,335,939
|
IssuesEvent
|
2019-08-23 11:20:05
|
primefaces/primefaces
|
https://api.github.com/repos/primefaces/primefaces
|
opened
|
Documentation: Add AJAX events
|
documentation
|
Based on this Stack Overflow: https://stackoverflow.com/questions/57616538/what-are-the-possible-ajax-events-for-a-primefaces-inputtext
I think our docs pages should have an AJAX Events table for each component that simply has...
AJAX Events
**Default Event:** valueChange;
**Events:** [blur, change, valueChange, click, dblclick, focus, keydown, keypress, keyup, mousedown, mousemove, mouseout, mouseover, mouseup, select]
|
1.0
|
Documentation: Add AJAX events - Based on this Stack Overflow: https://stackoverflow.com/questions/57616538/what-are-the-possible-ajax-events-for-a-primefaces-inputtext
I think our docs pages should have an AJAX Events table for each component that simply has...
AJAX Events
**Default Event:** valueChange;
**Events:** [blur, change, valueChange, click, dblclick, focus, keydown, keypress, keyup, mousedown, mousemove, mouseout, mouseover, mouseup, select]
|
non_process
|
documentation add ajax events based on this stack overflow i think our docs pages should have an ajax events table for each component that simply has ajax events default event valuechange events
| 0
|
16,027
| 20,188,243,255
|
IssuesEvent
|
2022-02-11 01:21:05
|
savitamittalmsft/WAS-SEC-TEST
|
https://api.github.com/repos/savitamittalmsft/WAS-SEC-TEST
|
opened
|
Establish a SecOps team and monitor security related events
|
WARP-Import WAF FEB 2021 Security Performance and Scalability Capacity Management Processes Health Modeling & Monitoring Application Level Monitoring
|
<a href="https://docs.microsoft.com/azure/architecture/framework/security/monitor-security-operations#incident-response">Establish a SecOps team and monitor security related events</a>
<p><b>Why Consider This?</b></p>
Is the organization effectively monitoring security posture across workloads, with a central SecOps team monitoring security-related telemetry data and investigating possible security breaches? Communication, investigation and hunting activities need to be aligned with the application team(s).
<p><b>Context</b></p>
<p><b>Suggested Actions</b></p>
<p><span>Consider using Azure Defender (Azure Security Center) to monitor security related events and get alerted automatically</span></p>
<p><b>Learn More</b></p>
<p><a href="https://docs.microsoft.com/en-us/azure/security-center/security-center-alerts-overview" target="_blank"><span>https://docs.microsoft.com/en-us/azure/security-center/security-center-alerts-overview</span></a><span /></p>
|
1.0
|
Establish a SecOps team and monitor security related events - <a href="https://docs.microsoft.com/azure/architecture/framework/security/monitor-security-operations#incident-response">Establish a SecOps team and monitor security related events</a>
<p><b>Why Consider This?</b></p>
Is the organization effectively monitoring security posture across workloads, with a central SecOps team monitoring security-related telemetry data and investigating possible security breaches? Communication, investigation and hunting activities need to be aligned with the application team(s).
<p><b>Context</b></p>
<p><b>Suggested Actions</b></p>
<p><span>Consider using Azure Defender (Azure Security Center) to monitor security related events and get alerted automatically</span></p>
<p><b>Learn More</b></p>
<p><a href="https://docs.microsoft.com/en-us/azure/security-center/security-center-alerts-overview" target="_blank"><span>https://docs.microsoft.com/en-us/azure/security-center/security-center-alerts-overview</span></a><span /></p>
|
process
|
establish a secops team and monitor security related events why consider this is the organization effectively monitoring security posture across workloads with a central secops team monitoring security related telemetry data and investigating possible security breaches communication investigation and hunting activities need to be aligned with the application team s context suggested actions consider using azure defender azure security center to monitor security related events and get alerted automatically learn more
| 1
|
10,544
| 13,326,325,571
|
IssuesEvent
|
2020-08-27 11:26:55
|
GoogleCloudPlatform/cloud-opensource-java
|
https://api.github.com/repos/GoogleCloudPlatform/cloud-opensource-java
|
opened
|
Check kokoro scripts in google3 for JDK choice
|
process
|
audit the various release scripts and kokoro configs in google3 to check whether they explicitly specify a JDK. java 8 or Java 11, this should be a deliberate choice, not an accident.
|
1.0
|
Check kokoro scripts in google3 for JDK choice - audit the various release scripts and kokoro configs in google3 to check whether they explicitly specify a JDK. java 8 or Java 11, this should be a deliberate choice, not an accident.
|
process
|
check kokoro scripts in for jdk choice audit the various release scripts and kokoro configs in to check whether they explicitly specify a jdk java or java this should be a deliberate choice not an accident
| 1
|
395,854
| 11,697,386,468
|
IssuesEvent
|
2020-03-06 11:42:25
|
codacy/codacy-meta
|
https://api.github.com/repos/codacy/codacy-meta
|
opened
|
Support new PHPMD configuration files
|
Medium Priority
|
> Was wondering why Codacy could not detect my phpmd.dist.xml file, and then noticed you are expecting one that doesn't appear to be documented as the default by phpmd. Ideally, you would use the same file as standard, so that phpmd config can be shared across multiple consumers.
Both `phpmd.xml` and `phpmd.xml.dist` are used as configuration files for PHPMD.
https://phpmd.org/documentation/index.html
https://gitlab.com/mamyn0va/php-template/blob/master/phpmd.xml.dist
|
1.0
|
Support new PHPMD configuration files - > Was wondering why Codacy could not detect my phpmd.dist.xml file, and then noticed you are expecting one that doesn't appear to be documented as the default by phpmd. Ideally, you would use the same file as standard, so that phpmd config can be shared across multiple consumers.
Both `phpmd.xml` and `phpmd.xml.dist` are used as configuration files for PHPMD.
https://phpmd.org/documentation/index.html
https://gitlab.com/mamyn0va/php-template/blob/master/phpmd.xml.dist
|
non_process
|
support new phpmd configuration files was wondering why codacy could not detect my phpmd dist xml file and then noticed you are expecting one that doesn t appear to be documented as the default by phpmd ideally you would use the same file as standard so that phpmd config can be shared across multiple consumers both phpmd xml and phpmd xml dist are used as configuration files for phpmd
| 0
|
186,715
| 6,742,250,084
|
IssuesEvent
|
2017-10-20 06:48:02
|
RSPluto/Web-UI
|
https://api.github.com/repos/RSPluto/Web-UI
|
closed
|
Real-time monitoring - After device positions in the floor-plan editing area are saved, the saved result is not visible after leaving and re-entering
|
bug Fixed High Priority
|
Test steps:
1. Go to "Real-time monitoring" -> "Switch to floor plan";
2. Select a region;
3. Click "Edit" and drag some devices in the region to new positions;
4. Click "Save"
5. Click "Switch to list", then repeat steps 1 and 2 to check what was just saved.
Expected result:
5. The dragged device positions are saved successfully and displayed;
Actual result:
5. The devices are not shown at the new positions they were dragged to; they are still at the previous ones.
|
1.0
|
Real-time monitoring - After device positions in the floor-plan editing area are saved, the saved result is not visible after leaving and re-entering - Test steps:
1. Go to "Real-time monitoring" -> "Switch to floor plan";
2. Select a region;
3. Click "Edit" and drag some devices in the region to new positions;
4. Click "Save"
5. Click "Switch to list", then repeat steps 1 and 2 to check what was just saved.
Expected result:
5. The dragged device positions are saved successfully and displayed;
Actual result:
5. The devices are not shown at the new positions they were dragged to; they are still at the previous ones.
|
non_process
|
real time monitoring after device positions in the floor plan editing area are saved the saved result is not visible after leaving and re entering test steps go to real time monitoring switch to floor plan select a region click edit and drag some devices in the region to new positions click save click switch to list check what was just saved expected result the dragged device positions are saved successfully and displayed actual result the devices are not shown at the new positions they were dragged to they are still at the previous ones
| 0
|
29,805
| 8,410,174,328
|
IssuesEvent
|
2018-10-12 09:43:47
|
PowerShell/PowerShell
|
https://api.github.com/repos/PowerShell/PowerShell
|
closed
|
Ubuntu: Software updater and apt-get update to 6.1.0-preview despite pre-release option being not ticked
|
Area-Build Issue-Enhancement OS-Linux
|
Steps to reproduce
------------------
Run the Ubuntu Software updater GUI or `sudo apt-get update; sudo apt-get upgrade`
Expected behavior
-----------------
When `pre-releases` are not ticked then the updater should not suggest an upgrade to preview versions.
Actual behavior
---------------
It suggest and update to the new preview version despite the pre-release option being not ticked in the settings.

The same is true for `apt-get`
Environment data
----------------
```powershell
> $PSVersionTable
Name Value
---- -----
PSVersion 6.0.2
PSEdition Core
GitCommitId v6.0.2
OS Linux 4.13.0-37-generic #42~16.04.1-Ubuntu SMP Wed Mar 7 16:03:28 UTC 2018
Platform Unix
PSCompatibleVersions {1.0, 2.0, 3.0, 4.0...}
PSRemotingProtocolVersion 2.3
SerializationVersion 1.1.0.1
WSManStackVersion 3.0
```
|
1.0
|
Ubuntu: Software updater and apt-get update to 6.1.0-preview despite pre-release option being not ticked - Steps to reproduce
------------------
Run the Ubuntu Software updater GUI or `sudo apt-get update; sudo apt-get upgrade`
Expected behavior
-----------------
When `pre-releases` are not ticked then the updater should not suggest an upgrade to preview versions.
Actual behavior
---------------
It suggest and update to the new preview version despite the pre-release option being not ticked in the settings.

The same is true for `apt-get`
Environment data
----------------
```powershell
> $PSVersionTable
Name Value
---- -----
PSVersion 6.0.2
PSEdition Core
GitCommitId v6.0.2
OS Linux 4.13.0-37-generic #42~16.04.1-Ubuntu SMP Wed Mar 7 16:03:28 UTC 2018
Platform Unix
PSCompatibleVersions {1.0, 2.0, 3.0, 4.0...}
PSRemotingProtocolVersion 2.3
SerializationVersion 1.1.0.1
WSManStackVersion 3.0
```
|
non_process
|
ubuntu software updater and apt get update to preview despite pre release option being not ticked steps to reproduce run the ubuntu software updater gui or sudo apt get update sudo apt get upgrade expected behavior when pre releases are not ticked then the updater should not suggest an upgrade to preview versions actual behavior it suggest and update to the new preview version despite the pre release option being not ticked in the settings the same is true for apt get environment data powershell psversiontable name value psversion psedition core gitcommitid os linux generic ubuntu smp wed mar utc platform unix pscompatibleversions psremotingprotocolversion serializationversion wsmanstackversion
| 0
|
5,019
| 7,845,527,818
|
IssuesEvent
|
2018-06-19 13:12:44
|
openvstorage/volumedriver
|
https://api.github.com/repos/openvstorage/volumedriver
|
closed
|
Volume migration times out due to slow volume shutdown
|
process_duplicate type_bug
|
* source:
```
2017-09-08 04:20:02 957916 -0400 - NY1SRV0008 - 5111/0x00007f79787f8700 - volumedriverfs/VFSLocalNode - 000000000420a9dc - info - transfer: Volume-d65df314-cebe-4d6b-97d4-fb6bae85e055: target_node data-ny1-02roLR7yTDk9hMdPGK
2017-09-08 04:20:02 958025 -0400 - NY1SRV0008 - 5111/0x00007f79787f8700 - volumedriverfs/VFSLocalNode - 000000000420a9dd - info - destroy_: d65df314-cebe-4d6b-97d4-fb6bae85e055: trying to sync to the backend
2017-09-08 04:20:02 961001 -0400 - NY1SRV0008 - 5111/0x00007f79787f8700 - volumedriverfs/DataStoreNG - 000000000420a9de - info - updateCurrentSCO_: created new write SCO 00_00000b7e_00 for Volume d65df314-cebe-4d6b-97d4-fb6bae85e055
2017-09-08 04:20:02 961579 -0400 - NY1SRV0008 - 5111/0x00007f79787f8700 - volumedriverfs/CachedMetaDataStore - 000000000420a9e1 - info - cork: d65df314-cebe-4d6b-97d4-fb6bae85e055: corking f67d2fb9-986d-4476-a6e4-7779edefee08
2017-09-08 04:20:03 984177 -0400 - NY1SRV0008 - 5111/0x00007f79787f8700 - volumedriverfs/VFSLocalNode - 000000000420aa07 - info - destroy_: d65df314-cebe-4d6b-97d4-fb6bae85e055: synced to the backend
2017-09-08 04:20:03 984265 -0400 - NY1SRV0008 - 5111/0x00007f79787f8700 - volumedriverfs/VolManager - 000000000420aa08 - notice - Destroy Volume, VolumeId: d65df314-cebe-4d6b-97d4-fb6bae85e055, delete local data: DeleteLocalData::T, remove volume completely RemoveVolumeCompletely::F, delete namespace DeleteVolumeNamespace::F, force deletion ForceVolumeDeletion::F, START
2017-09-08 04:20:03 984774 -0400 - NY1SRV0008 - 5111/0x00007f79787f8700 - volumedriverfs/Volume - 000000000420aa09 - info - destroy: d65df314-cebe-4d6b-97d4-fb6bae85e055: destroying volume: delete local data DeleteLocalData::T, remove volume completely RemoveVolumeCompletely::F, force_volume deletion ForceVolumeDeletion::F
2017-09-08 04:20:03 984820 -0400 - NY1SRV0008 - 5111/0x00007f79787f8700 - volumedriverfs/FailOverCacheProxy - 000000000420aa0a - info - unregister_: Not deleting failover dir for d65df314-cebe-4d6b-97d4-fb6bae85e055
2017-09-08 04:23:22 246209 -0400 - NY1SRV0008 - 5111/0x00007f79787f8700 - volumedriverfs/MDSMetaDataBackend - 000000000420acfd - info - ~MDSMetaDataBackend: d65df314-cebe-4d6b-97d4-fb6bae85e055: used clusters: 1602486
2017-09-08 04:23:22 246312 -0400 - NY1SRV0008 - 5111/0x00007f79787f8700 - volumedriverfs/DataStoreNG - 000000000420acfe - info - destroy: d65df314-cebe-4d6b-97d4-fb6bae85e055: destroying DataStore, DeleteLocalData::T
2017-09-08 04:23:22 246829 -0400 - NY1SRV0008 - 5111/0x00007f79787f8700 - volumedriverfs/SCOCacheMountPoint - 000000000420acff - info - removeNamespace: "/mnt/ssd2/data-ny1-02_write_sco_1": removing namespace d65df314-cebe-4d6b-97d4-fb6bae85e055 from mountpoint
2017-09-08 04:23:22 247143 -0400 - NY1SRV0008 - 5111/0x00007f79787f8700 - volumedriverfs/SCOCacheMountPoint - 000000000420ad00 - info - removeNamespace: "/mnt/ssd3/data-ny1-02_write_sco_1": removing namespace d65df314-cebe-4d6b-97d4-fb6bae85e055 from mountpoint
2017-09-08 04:23:22 247466 -0400 - NY1SRV0008 - 5111/0x00007f79787f8700 - volumedriverfs/Volume - 000000000420ad01 - info - destroy: d65df314-cebe-4d6b-97d4-fb6bae85e055: Unregistering volume from ClusterCache
2017-09-08 04:23:22 247502 -0400 - NY1SRV0008 - 5111/0x00007f79787f8700 - volumedriverfs/VolManager - 000000000420ad02 - notice - Destroy Volume, VolumeId: d65df314-cebe-4d6b-97d4-fb6bae85e055, delete local data: DeleteLocalData::T, remove volume completely RemoveVolumeCompletely::F, delete namespace DeleteVolumeNamespace::F, force deletion ForceVolumeDeletion::F, FINISHED
2017-09-08 04:23:22 247518 -0400 - NY1SRV0008 - 5111/0x00007f79787f8700 - volumedriverfs/Volume - 000000000420ad03 - info - ~Volume: d65df314-cebe-4d6b-97d4-fb6bae85e055: Destructor of 0x7f77869244a0
2017-09-08 04:23:22 250181 -0400 - NY1SRV0008 - 5111/0x00007f79787f8700 - volumedriverfs/ObjectRegistry - 000000000420ad04 - info - migrate: 3b180e86-6a76-481b-95e0-93d139f87f06/data-ny1-02JmyoUr04PHfa41vA: trying to move d65df314-cebe-4d6b-97d4-fb6bae85e055 from data-ny1-02JmyoUr04PHfa41vA to data-ny1-02roLR7yTDk9hMdPGK
2017-09-08 04:23:22 250219 -0400 - NY1SRV0008 - 5111/0x00007f79787f8700 - volumedriverfs/ObjectRegistry - 000000000420ad05 - info - prepare_migrate: 3b180e86-6a76-481b-95e0-93d139f87f06/data-ny1-02JmyoUr04PHfa41vA: trying to add migration of d65df314-cebe-4d6b-97d4-fb6bae85e055 from data-ny1-02JmyoUr04PHfa41vA to data-ny1-02roLR7yTDk9hMdPGK to sequence
```
* destination:
```
017-09-08 04:20:06 106338 -0400 - NY1SRV0014 - 26773/0x00007f77ad7fa700 -
volumedriverfs/VFSObjectRouter - 0000000000a1defc - info - migrate_: Trying to migrate d65df314-cebe-4d6b-97d4-fb6bae85e055 from data-ny1-02JmyoUr04PHfa41vA, only steal if offline OnlyStealFromOfflineNode::T
2017-09-08 04:22:06 106484 -0400 - NY1SRV0014 - 26773/0x00007f77ad7fa700 - volumedriverfs/VFSRemoteNode - 0000000000a1e0fc - info - handle_: data-ny1-02JmyoUr04PHfa41vA: remote did not respond within 120000 milliseconds milliseconds - giving up
2017-09-08 04:22:06 106869 -0400 - NY1SRV0014 - 26773/0x00007f77ad7fa700 -
volumedriverfs/XMLRPCTimingWrapper - 0000000000a1e0fd - error - execute: migrateVolume Caught fungi::IOException: request to remote node timed out
2017-09-08 04:22:06 106969 -0400 - NY1SRV0014 - 26773/0x00007f77ad7fa700 -
volumedriverfs/XMLRPCConnection - 0000000000a1e0fe - info - operator(): Closing socket 948
```
|
1.0
|
Volume migration times out due to slow volume shutdown - * source:
```
2017-09-08 04:20:02 957916 -0400 - NY1SRV0008 - 5111/0x00007f79787f8700 - volumedriverfs/VFSLocalNode - 000000000420a9dc - info - transfer: Volume-d65df314-cebe-4d6b-97d4-fb6bae85e055: target_node data-ny1-02roLR7yTDk9hMdPGK
2017-09-08 04:20:02 958025 -0400 - NY1SRV0008 - 5111/0x00007f79787f8700 - volumedriverfs/VFSLocalNode - 000000000420a9dd - info - destroy_: d65df314-cebe-4d6b-97d4-fb6bae85e055: trying to sync to the backend
2017-09-08 04:20:02 961001 -0400 - NY1SRV0008 - 5111/0x00007f79787f8700 - volumedriverfs/DataStoreNG - 000000000420a9de - info - updateCurrentSCO_: created new write SCO 00_00000b7e_00 for Volume d65df314-cebe-4d6b-97d4-fb6bae85e055
2017-09-08 04:20:02 961579 -0400 - NY1SRV0008 - 5111/0x00007f79787f8700 - volumedriverfs/CachedMetaDataStore - 000000000420a9e1 - info - cork: d65df314-cebe-4d6b-97d4-fb6bae85e055: corking f67d2fb9-986d-4476-a6e4-7779edefee08
2017-09-08 04:20:03 984177 -0400 - NY1SRV0008 - 5111/0x00007f79787f8700 - volumedriverfs/VFSLocalNode - 000000000420aa07 - info - destroy_: d65df314-cebe-4d6b-97d4-fb6bae85e055: synced to the backend
2017-09-08 04:20:03 984265 -0400 - NY1SRV0008 - 5111/0x00007f79787f8700 - volumedriverfs/VolManager - 000000000420aa08 - notice - Destroy Volume, VolumeId: d65df314-cebe-4d6b-97d4-fb6bae85e055, delete local data: DeleteLocalData::T, remove volume completely RemoveVolumeCompletely::F, delete namespace DeleteVolumeNamespace::F, force deletion ForceVolumeDeletion::F, START
2017-09-08 04:20:03 984774 -0400 - NY1SRV0008 - 5111/0x00007f79787f8700 - volumedriverfs/Volume - 000000000420aa09 - info - destroy: d65df314-cebe-4d6b-97d4-fb6bae85e055: destroying volume: delete local data DeleteLocalData::T, remove volume completely RemoveVolumeCompletely::F, force_volume deletion ForceVolumeDeletion::F
2017-09-08 04:20:03 984820 -0400 - NY1SRV0008 - 5111/0x00007f79787f8700 - volumedriverfs/FailOverCacheProxy - 000000000420aa0a - info - unregister_: Not deleting failover dir for d65df314-cebe-4d6b-97d4-fb6bae85e055
2017-09-08 04:23:22 246209 -0400 - NY1SRV0008 - 5111/0x00007f79787f8700 - volumedriverfs/MDSMetaDataBackend - 000000000420acfd - info - ~MDSMetaDataBackend: d65df314-cebe-4d6b-97d4-fb6bae85e055: used clusters: 1602486
2017-09-08 04:23:22 246312 -0400 - NY1SRV0008 - 5111/0x00007f79787f8700 - volumedriverfs/DataStoreNG - 000000000420acfe - info - destroy: d65df314-cebe-4d6b-97d4-fb6bae85e055: destroying DataStore, DeleteLocalData::T
2017-09-08 04:23:22 246829 -0400 - NY1SRV0008 - 5111/0x00007f79787f8700 - volumedriverfs/SCOCacheMountPoint - 000000000420acff - info - removeNamespace: "/mnt/ssd2/data-ny1-02_write_sco_1": removing namespace d65df314-cebe-4d6b-97d4-fb6bae85e055 from mountpoint
2017-09-08 04:23:22 247143 -0400 - NY1SRV0008 - 5111/0x00007f79787f8700 - volumedriverfs/SCOCacheMountPoint - 000000000420ad00 - info - removeNamespace: "/mnt/ssd3/data-ny1-02_write_sco_1": removing namespace d65df314-cebe-4d6b-97d4-fb6bae85e055 from mountpoint
2017-09-08 04:23:22 247466 -0400 - NY1SRV0008 - 5111/0x00007f79787f8700 - volumedriverfs/Volume - 000000000420ad01 - info - destroy: d65df314-cebe-4d6b-97d4-fb6bae85e055: Unregistering volume from ClusterCache
2017-09-08 04:23:22 247502 -0400 - NY1SRV0008 - 5111/0x00007f79787f8700 - volumedriverfs/VolManager - 000000000420ad02 - notice - Destroy Volume, VolumeId: d65df314-cebe-4d6b-97d4-fb6bae85e055, delete local data: DeleteLocalData::T, remove volume completely RemoveVolumeCompletely::F, delete namespace DeleteVolumeNamespace::F, force deletion ForceVolumeDeletion::F, FINISHED
2017-09-08 04:23:22 247518 -0400 - NY1SRV0008 - 5111/0x00007f79787f8700 - volumedriverfs/Volume - 000000000420ad03 - info - ~Volume: d65df314-cebe-4d6b-97d4-fb6bae85e055: Destructor of 0x7f77869244a0
2017-09-08 04:23:22 250181 -0400 - NY1SRV0008 - 5111/0x00007f79787f8700 - volumedriverfs/ObjectRegistry - 000000000420ad04 - info - migrate: 3b180e86-6a76-481b-95e0-93d139f87f06/data-ny1-02JmyoUr04PHfa41vA: trying to move d65df314-cebe-4d6b-97d4-fb6bae85e055 from data-ny1-02JmyoUr04PHfa41vA to data-ny1-02roLR7yTDk9hMdPGK
2017-09-08 04:23:22 250219 -0400 - NY1SRV0008 - 5111/0x00007f79787f8700 - volumedriverfs/ObjectRegistry - 000000000420ad05 - info - prepare_migrate: 3b180e86-6a76-481b-95e0-93d139f87f06/data-ny1-02JmyoUr04PHfa41vA: trying to add migration of d65df314-cebe-4d6b-97d4-fb6bae85e055 from data-ny1-02JmyoUr04PHfa41vA to data-ny1-02roLR7yTDk9hMdPGK to sequence
```
* destination:
```
017-09-08 04:20:06 106338 -0400 - NY1SRV0014 - 26773/0x00007f77ad7fa700 -
volumedriverfs/VFSObjectRouter - 0000000000a1defc - info - migrate_: Trying to migrate d65df314-cebe-4d6b-97d4-fb6bae85e055 from data-ny1-02JmyoUr04PHfa41vA, only steal if offline OnlyStealFromOfflineNode::T
2017-09-08 04:22:06 106484 -0400 - NY1SRV0014 - 26773/0x00007f77ad7fa700 - volumedriverfs/VFSRemoteNode - 0000000000a1e0fc - info - handle_: data-ny1-02JmyoUr04PHfa41vA: remote did not respond within 120000 milliseconds milliseconds - giving up
2017-09-08 04:22:06 106869 -0400 - NY1SRV0014 - 26773/0x00007f77ad7fa700 -
volumedriverfs/XMLRPCTimingWrapper - 0000000000a1e0fd - error - execute: migrateVolume Caught fungi::IOException: request to remote node timed out
2017-09-08 04:22:06 106969 -0400 - NY1SRV0014 - 26773/0x00007f77ad7fa700 -
volumedriverfs/XMLRPCConnection - 0000000000a1e0fe - info - operator(): Closing socket 948
```
|
process
|
volume migration times out due to slow volume shutdown source volumedriverfs vfslocalnode info transfer volume cebe target node data volumedriverfs vfslocalnode info destroy cebe trying to sync to the backend volumedriverfs datastoreng info updatecurrentsco created new write sco for volume cebe volumedriverfs cachedmetadatastore info cork cebe corking volumedriverfs vfslocalnode info destroy cebe synced to the backend volumedriverfs volmanager notice destroy volume volumeid cebe delete local data deletelocaldata t remove volume completely removevolumecompletely f delete namespace deletevolumenamespace f force deletion forcevolumedeletion f start volumedriverfs volume info destroy cebe destroying volume delete local data deletelocaldata t remove volume completely removevolumecompletely f force volume deletion forcevolumedeletion f volumedriverfs failovercacheproxy info unregister not deleting failover dir for cebe volumedriverfs mdsmetadatabackend info mdsmetadatabackend cebe used clusters volumedriverfs datastoreng info destroy cebe destroying datastore deletelocaldata t volumedriverfs scocachemountpoint info removenamespace mnt data write sco removing namespace cebe from mountpoint volumedriverfs scocachemountpoint info removenamespace mnt data write sco removing namespace cebe from mountpoint volumedriverfs volume info destroy cebe unregistering volume from clustercache volumedriverfs volmanager notice destroy volume volumeid cebe delete local data deletelocaldata t remove volume completely removevolumecompletely f delete namespace deletevolumenamespace f force deletion forcevolumedeletion f finished volumedriverfs volume info volume cebe destructor of volumedriverfs objectregistry info migrate data trying to move cebe from data to data volumedriverfs objectregistry info prepare migrate data trying to add migration of cebe from data to data to sequence destination volumedriverfs vfsobjectrouter info migrate trying to migrate cebe from data only steal if offline onlystealfromofflinenode t volumedriverfs vfsremotenode info handle data remote did not respond within milliseconds milliseconds giving up volumedriverfs xmlrpctimingwrapper error execute migratevolume caught fungi ioexception request to remote node timed out volumedriverfs xmlrpcconnection info operator closing socket
| 1
|
17,139
| 22,677,761,367
|
IssuesEvent
|
2022-07-04 07:03:29
|
microsoft/vscode
|
https://api.github.com/repos/microsoft/vscode
|
opened
|
Adopt utility process for shared process and set `app.enableSandbox()`
|
plan-item shared-process sandbox
|
This is a follow up from https://github.com/microsoft/vscode/issues/92164 and covers remaining work to eventually enable sandboxed renderers fully in Electron.
This means that our shared process has to move away from a node.js enabled browser window to the new utility process. Breaking down the usages today:
* extension management
* settings sync
* terminals
* file watcher
**Some initial thoughts:**
* the shared process should probably just change to be a utility process as a first step
* however, any code that relies on the browser window network stack instead has to leverage Electrons [`net`](https://www.electronjs.org/docs/latest/api/net) APIs from the `electron-main` process to not loose proxy support
* any child process has to decide whether it wants to lift up to a utility process off the main process or remain inside the shared process
//cc @alexdima
|
1.0
|
Adopt utility process for shared process and set `app.enableSandbox()` - This is a follow up from https://github.com/microsoft/vscode/issues/92164 and covers remaining work to eventually enable sandboxed renderers fully in Electron.
This means that our shared process has to move away from a node.js enabled browser window to the new utility process. Breaking down the usages today:
* extension management
* settings sync
* terminals
* file watcher
**Some initial thoughts:**
* the shared process should probably just change to be a utility process as a first step
* however, any code that relies on the browser window network stack instead has to leverage Electrons [`net`](https://www.electronjs.org/docs/latest/api/net) APIs from the `electron-main` process to not loose proxy support
* any child process has to decide whether it wants to lift up to a utility process off the main process or remain inside the shared process
//cc @alexdima
|
process
|
adopt utility process for shared process and set app enablesandbox this is a follow up from and covers remaining work to eventually enable sandboxed renderers fully in electron this means that our shared process has to move away from a node js enabled browser window to the new utility process breaking down the usages today extension management settings sync terminals file watcher some initial thoughts the shared process should probably just change to be a utility process as a first step however any code that relies on the browser window network stack instead has to leverage electrons apis from the electron main process to not loose proxy support any child process has to decide whether it wants to lift up to a utility process off the main process or remain inside the shared process cc alexdima
| 1
|
72,352
| 8,723,533,540
|
IssuesEvent
|
2018-12-09 22:39:36
|
hypnospinner/Project-Focus
|
https://api.github.com/repos/hypnospinner/Project-Focus
|
closed
|
Create the back-end solution and set up the infrastructure
|
design groundwork important
|
The back-end application will use .NET Core 2.1 microservices. F# is the most advantageous language choice (algebraic data types for commands and events will make the code easier to read and speed up development). Mongo DB is proposed as the DBMS (the structure of the stored data types will change significantly as the project evolves). The option of integrating with relational DBMSs at later stages of the project must be preserved. Rabbit MQ is proposed as the event bus.
|
1.0
|
Create the back-end solution and set up the infrastructure - The back-end application will use .NET Core 2.1 microservices. F# is the most advantageous language choice (algebraic data types for commands and events will make the code easier to read and speed up development). Mongo DB is proposed as the DBMS (the structure of the stored data types will change significantly as the project evolves). The option of integrating with relational DBMSs at later stages of the project must be preserved. Rabbit MQ is proposed as the event bus.
|
non_process
|
создать back end решение и настроить инфраструктуру back end приложение будет использовать net core микросервисы в качестве языка наиболее выгодно использовать f алгебраические типы данных для команд и событий улучшат восприятие кода и ускорят разработку в качестве субд предлагается использовать mongo db структура хранимых типов данных будет сильно изменяться в ходе развития проекта возможность интеграции с реляционными субд на поздних стадиях развития проекта должна быть сохранена в качестве шины событий предлагается использовать rabbit mq
| 0
|
201,990
| 15,818,277,975
|
IssuesEvent
|
2021-04-05 15:47:58
|
linrunner/TLP
|
https://api.github.com/repos/linrunner/TLP
|
closed
|
Docs vs manpage on battery thresholds
|
committed documentation change
|
On the latest docs version (1.3) for `setcharge` it [says](https://linrunner.de/tlp/usage/tlp.html#change-battery-charge-thresholds-to-temporary-values),
> Without parameters the configured settings for the main battery (BAT0) are applied. **Upon reboot, thresholds are reset to the configured settings.**
and latest `tlp` (1.3) manpage says,
> **Configured thresholds are restored upon next system boot-up.** When called without arguments, configured thresholds are set.
However, as far as I've tested temporary thresholds are restored across system reboots, as the manpage says. So, I guess docs are old, right?
In that case, it might be better to update manpage's one to "~Configured~Temporary thresholds are restored upon next system boot-up.", so that it's clear that only when you run `sudo tlp setcharge` then configured (from settings file) thresholds are actually restored.
By the way, where are those temporary thresholds stored?
|
1.0
|
Docs vs manpage on battery thresholds - On the latest docs version (1.3) for `setcharge` it [says](https://linrunner.de/tlp/usage/tlp.html#change-battery-charge-thresholds-to-temporary-values),
> Without parameters the configured settings for the main battery (BAT0) are applied. **Upon reboot, thresholds are reset to the configured settings.**
and latest `tlp` (1.3) manpage says,
> **Configured thresholds are restored upon next system boot-up.** When called without arguments, configured thresholds are set.
However, as far as I've tested temporary thresholds are restored across system reboots, as the manpage says. So, I guess docs are old, right?
In that case, it might be better to update manpage's one to "~Configured~Temporary thresholds are restored upon next system boot-up.", so that it's clear that only when you run `sudo tlp setcharge` then configured (from settings file) thresholds are actually restored.
By the way, where are those temporary thresholds stored?
|
non_process
|
docs vs manpage on battery thresholds on the latest docs version for setcharge it without parameters the configured settings for the main battery are applied upon reboot thresholds are reset to the configured settings and latest tlp manpage says configured thresholds are restored upon next system boot up when called without arguments configured thresholds are set however as far as i ve tested temporary thresholds are restored across system reboots as the manpage says so i guess docs are old right in that case it might be better to update manpage s one to configured temporary thresholds are restored upon next system boot up so that it s clear that only when you run sudo tlp setcharge then configured from settings file thresholds are actually restored by the way where are those temporary thresholds stored
| 0
|
12,377
| 14,897,108,338
|
IssuesEvent
|
2021-01-21 11:17:24
|
GoogleCloudPlatform/fda-mystudies
|
https://api.github.com/repos/GoogleCloudPlatform/fda-mystudies
|
closed
|
[Hydra - Mobile apps] [Audit Logs] "appVersion" is displayed null for the events
|
Bug Hydra P2 Process: Fixed Process: Tested dev
|
**Events:**
1. SIGNIN_SUCCEEDED
2. SIGNIN_FAILED_UNREGISTERED_USER
3. SIGNIN_FAILED_INVALID_PASSWORD
4. ACCOUNT_LOCKED
and other events in Hydra for mobile
Sample snippet for PASSWORD_RESET_EMAIL_SENT_FOR_LOCKED_ACCOUNT event
```
{
"insertId": "1sxm6f8g1bpho96",
"jsonPayload": {
"userIp": "59.92.248.109",
"mobilePlatform": "IOS",
"sourceApplicationVersion": "1.0",
"eventCode": "PASSWORD_RESET_EMAIL_SENT_FOR_LOCKED_ACCOUNT",
"participantId": null,
"occurred": 1610447259447,
"siteId": null,
"description": null,
"correlationId": "46BD3A68-DB67-4435-8D6A-0CE48D3E2C12",
"destination": "SCIM AUTH SERVER",
"userId": null,
"appVersion": null,
"studyId": null,
"userAccessLevel": null,
"platformVersion": "1.0",
"source": "MOBILE APPS",
"studyVersion": null,
"resourceServer": "PARTICIPANT USER DATASTORE",
"appId": "BTCDEV001",
"destinationApplicationVersion": "1.0"
},
"resource": {
"type": "global",
"labels": {
"project_id": "mystudies-open-impl-track1-dev"
}
},
"timestamp": "2021-01-12T10:27:39.447Z",
"severity": "INFO",
"logName": "projects/mystudies-open-impl-track1-dev/logs/application-audit-log",
"receiveTimestamp": "2021-01-12T10:27:39.574229454Z"
}
```
|
2.0
|
[Hydra - Mobile apps] [Audit Logs] "appVersion" is displayed null for the events - **Events:**
1. SIGNIN_SUCCEEDED
2. SIGNIN_FAILED_UNREGISTERED_USER
3. SIGNIN_FAILED_INVALID_PASSWORD
4. ACCOUNT_LOCKED
and other events in Hydra for mobile
Sample snippet for PASSWORD_RESET_EMAIL_SENT_FOR_LOCKED_ACCOUNT event
```
{
"insertId": "1sxm6f8g1bpho96",
"jsonPayload": {
"userIp": "59.92.248.109",
"mobilePlatform": "IOS",
"sourceApplicationVersion": "1.0",
"eventCode": "PASSWORD_RESET_EMAIL_SENT_FOR_LOCKED_ACCOUNT",
"participantId": null,
"occurred": 1610447259447,
"siteId": null,
"description": null,
"correlationId": "46BD3A68-DB67-4435-8D6A-0CE48D3E2C12",
"destination": "SCIM AUTH SERVER",
"userId": null,
"appVersion": null,
"studyId": null,
"userAccessLevel": null,
"platformVersion": "1.0",
"source": "MOBILE APPS",
"studyVersion": null,
"resourceServer": "PARTICIPANT USER DATASTORE",
"appId": "BTCDEV001",
"destinationApplicationVersion": "1.0"
},
"resource": {
"type": "global",
"labels": {
"project_id": "mystudies-open-impl-track1-dev"
}
},
"timestamp": "2021-01-12T10:27:39.447Z",
"severity": "INFO",
"logName": "projects/mystudies-open-impl-track1-dev/logs/application-audit-log",
"receiveTimestamp": "2021-01-12T10:27:39.574229454Z"
}
```
|
process
|
appversion is displayed null for the events events signin succeeded signin failed unregistered user signin failed invalid password account locked and other events in hydra for mobile sample snippet for password reset email sent for locked account event insertid jsonpayload userip mobileplatform ios sourceapplicationversion eventcode password reset email sent for locked account participantid null occurred siteid null description null correlationid destination scim auth server userid null appversion null studyid null useraccesslevel null platformversion source mobile apps studyversion null resourceserver participant user datastore appid destinationapplicationversion resource type global labels project id mystudies open impl dev timestamp severity info logname projects mystudies open impl dev logs application audit log receivetimestamp
| 1
|
614,225
| 19,161,554,192
|
IssuesEvent
|
2021-12-03 01:10:31
|
yukiHaga/regex-hunting
|
https://api.github.com/repos/yukiHaga/regex-hunting
|
opened
|
Set up routing for the SPA
|
Priority: high Type: new feature
|
## Overview
Set up routing for the SPA.
## Tasks
- [ ] Define a component in AccountSettings.jsx.
- [ ] Define a component in Games.jsx.
- [ ] Define a component in LandingPages.jsx.
- [ ] Define a component in MyPages.jsx.
- [ ] Define a component in PasswordResets.jsx.
- [ ] Define a component in PasswordUpdates.jsx.
- [ ] Define a component in PrivacyPolicies.jsx.
- [ ] Define a component in Rankings.jsx.
- [ ] Define a component in UseTreaties.jsx.
- [ ] Define the routing in src/App.js.
## Acceptance criteria
When each page is accessed, the content written in that page's component is displayed.
## Reference articles
None in particular.
|
1.0
|
Set up routing for the SPA - ## Overview
Set up routing for the SPA.
## Tasks
- [ ] Define a component in AccountSettings.jsx.
- [ ] Define a component in Games.jsx.
- [ ] Define a component in LandingPages.jsx.
- [ ] Define a component in MyPages.jsx.
- [ ] Define a component in PasswordResets.jsx.
- [ ] Define a component in PasswordUpdates.jsx.
- [ ] Define a component in PrivacyPolicies.jsx.
- [ ] Define a component in Rankings.jsx.
- [ ] Define a component in UseTreaties.jsx.
- [ ] Define the routing in src/App.js.
## Acceptance criteria
When each page is accessed, the content written in that page's component is displayed.
## Reference articles
None in particular.
|
non_process
|
spaのルーティングを設定する 概要 spaのルーティングを設定する。 やること accountsettings jsxにコンポーネントを定義する。 games jsxにコンポーネントを定義する。 landingpages jsxにコンポーネントを定義する。 mypages jsxにコンポーネントを定義する。 passwordresets jsxにコンポーネントを定義する。 passwordupdates jsxにコンポーネントを定義する。 privacypolicies jsxにコンポーネントを定義する。 rankings jsxにコンポーネントを定義する。 usetreaties jsxにコンポーネントを定義する。 src app jsにルーティングを定義する。 受け入れ条件 それぞれのページにアクセスしたら、そのページのコンポーネントに書いた内容が表示される。 参考記事 特になし。
| 0
|
670
| 3,143,384,302
|
IssuesEvent
|
2015-09-14 06:25:31
|
e-government-ua/i
|
https://api.github.com/repos/e-government-ua/i
|
closed
|
In the dashboard, fix the erroneous appearance of the "Не обран шаблон для друку" (no print template selected) dialog
|
active hi priority In process of testing test
|
This happens when a print template is selected in the combobox, then "Роздрукувати" (Print) is clicked, the dialog is closed, and you switch to another task

|
1.0
|
In the dashboard, fix the erroneous appearance of the "Не обран шаблон для друку" (no print template selected) dialog - This happens when a print template is selected in the combobox, then "Роздрукувати" (Print) is clicked, the dialog is closed, and you switch to another task

|
process
|
в дашборде устранить ошибочное появление диалога не обран шаблон для друку это происходит когда в комбобоксе выбран шаблон для печати а потом роздрукувати закрыть диалог и перейти на другую таску
| 1
|
10,625
| 13,439,391,072
|
IssuesEvent
|
2020-09-07 20:56:37
|
timberio/vector
|
https://api.github.com/repos/timberio/vector
|
opened
|
New URL functions for the remap syntax
|
domain: mapping domain: processing needs: requirements type: feature
|
Similar to #3761, I wanted to open an issue to represent a set of URL related functions.
|
1.0
|
New URL functions for the remap syntax - Similar to #3761, I wanted to open an issue to represent a set of URL related functions.
|
process
|
new url functions for the remap syntax similar to i wanted to open an issue to represent a set of url related functions
| 1
|
18,667
| 24,583,055,247
|
IssuesEvent
|
2022-10-13 17:10:53
|
keras-team/keras-cv
|
https://api.github.com/repos/keras-team/keras-cv
|
closed
|
Add RandomFog preprocessing layer
|
preprocessing
|
## Weather Augmentation
One of the real-world scenarios that pose challenges for training neural networks of Autonomous vehicles

Impl. Ref
- https://github.com/UjjwalSaxena/Automold--Road-Augmentation-Library
- https://albumentations.ai/docs/api_reference/augmentations/transforms/#albumentations.augmentations.transforms.RandomFog
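For illustration only (not part of the original issue), a minimal Python sketch of the Albumentations transform linked above; the parameter names (`fog_coef_lower`, `fog_coef_upper`, `alpha_coef`) are assumptions taken from that API page and may differ between Albumentations releases.
```python
# Minimal sketch, assuming the Albumentations API referenced above.
import albumentations as A
import numpy as np

# Parameter names are assumptions; check the installed Albumentations version.
fog = A.Compose([A.RandomFog(fog_coef_lower=0.3, fog_coef_upper=0.8, alpha_coef=0.1, p=1.0)])

image = np.random.randint(0, 255, size=(256, 256, 3), dtype=np.uint8)  # dummy RGB image
foggy = fog(image=image)["image"]  # apply the fog augmentation
print(foggy.shape)
```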
|
1.0
|
Add RandomFog preprocessing layer - ## Weather Augmentation
One of the real-world scenarios that pose challenges for training neural networks of Autonomous vehicles

Impl. Ref
- https://github.com/UjjwalSaxena/Automold--Road-Augmentation-Library
- https://albumentations.ai/docs/api_reference/augmentations/transforms/#albumentations.augmentations.transforms.RandomFog
|
process
|
add randomfog preprocessing layer weather augmentation one of the real world scenarios that pose challenges for training neural networks of autonomous vehicles impl ref
| 1
|
11,560
| 14,438,665,966
|
IssuesEvent
|
2020-12-07 13:22:32
|
prisma/prisma
|
https://api.github.com/repos/prisma/prisma
|
closed
|
Expand --version command
|
kind/improvement process/candidate team/client topic: binary topic: cli
|
Currently the version command just returns the CLI version and one commit hash for the binary which is hardcoded in the package.json - not the _actual_ hash that the binary returns when asked for it.
Ideally the CLI would request the version from all the binaries version endpoints and output something like this:
```
prisma2@2.0.0-alpha.473
All binaries are on the same commit : 9sfhshfjdhfjsdhfs....`
```
Or in the case they differ (due to the usage of env vars to point to local binaries)
```
prisma2@2.0.0-alpha.473
The binaries are on different commits:
Introspection-Engine : 9fadsfsdfsdf...
Query-Engine : 5j245h45kk2hhgfsfsdf...
Migration-Engine : 92047254hsfs7s8fs02f...
```
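As a rough sketch of the idea rather than Prisma's actual implementation, the Python snippet below shells out to each engine binary, asks for its version, and reports whether the commit hashes agree. The binary paths, the `--version` flag, and the output format are assumptions made for the example.
```python
# Hypothetical sketch: compare the commit hashes reported by several engine binaries.
import re
import subprocess

# Paths and flag are assumptions, for illustration only.
BINARIES = {
    "Introspection-Engine": "./introspection-engine",
    "Query-Engine": "./query-engine",
    "Migration-Engine": "./migration-engine",
}

def commit_hash(path: str) -> str:
    """Run `<binary> --version` and pull out the first hex string that looks like a commit."""
    out = subprocess.run([path, "--version"], capture_output=True, text=True, check=True).stdout
    match = re.search(r"\b[0-9a-f]{7,40}\b", out)
    return match.group(0) if match else "unknown"

hashes = {name: commit_hash(path) for name, path in BINARIES.items()}
if len(set(hashes.values())) == 1:
    print(f"All binaries are on the same commit : {next(iter(hashes.values()))}")
else:
    print("The binaries are on different commits:")
    for name, commit in hashes.items():
        print(f"{name} : {commit}")
```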
|
1.0
|
Expand --version command - Currently the version command just returns the CLI version and one commit hash for the binary which is hardcoded in the package.json - not the _actual_ hash that the binary returns when asked for it.
Ideally the CLI would request the version from all the binaries version endpoints and output something like this:
```
prisma2@2.0.0-alpha.473
All binaries are on the same commit : 9sfhshfjdhfjsdhfs....`
```
Or in the case they differ (due to the usage of env vars to point to local binaries)
```
prisma2@2.0.0-alpha.473
The binaries are on different commits:
Introspection-Engine : 9fadsfsdfsdf...
Query-Engine : 5j245h45kk2hhgfsfsdf...
Migration-Engine : 92047254hsfs7s8fs02f...
```
|
process
|
expand version command currently the version command just returns the cli version and one commit hash for the binary which is hardcoded in the package json not the actual hash that the binary returns when asked for it ideally the cli would request the version from all the binaries version endpoints and output something like this alpha all binaries are on the same commit or in the case they differ due to the usage of env vars to point to local binaries alpha the binaries are on different commits introspection engine query engine migration engine
| 1
|
16,930
| 22,274,358,745
|
IssuesEvent
|
2022-06-10 15:09:32
|
python/cpython
|
https://api.github.com/repos/python/cpython
|
closed
|
Library multiprocess leaks named resources.
|
performance stdlib 3.11 3.10 3.9 3.8 expert-multiprocessing
|
BPO | [46391](https://bugs.python.org/issue46391)
--- | :---
Nosy | @pitrou, @benjaminp, @1st1, @applio, @arhadthedev, @jxdabc, @BarkingBad
PRs | <li>python/cpython#30617</li>
Files | <li>[screen.png](https://bugs.python.org/file50580/screen.png "Uploaded as image/png at 2022-01-24.09:31:44 by @jxdabc"): screenshot</li>
<sup>*Note: these values reflect the state of the issue at the time it was migrated and might not reflect the current state.*</sup>
<details><summary>Show more details</summary><p>
GitHub fields:
```python
assignee = None
closed_at = None
created_at = <Date 2022-01-15.18:04:57.278>
labels = ['3.8', '3.9', '3.10', 'performance', '3.11', 'library']
title = 'Library multiprocess leaks named resources.'
updated_at = <Date 2022-02-18.19:39:09.294>
user = 'https://github.com/jxdabc'
```
bugs.python.org fields:
```python
activity = <Date 2022-02-18.19:39:09.294>
actor = 'BarkingBad'
assignee = 'none'
closed = False
closed_date = None
closer = None
components = ['Library (Lib)']
creation = <Date 2022-01-15.18:04:57.278>
creator = 'milestonejxd'
dependencies = []
files = ['50580']
hgrepos = []
issue_num = 46391
keywords = ['patch']
message_count = 5.0
messages = ['410654', '410657', '410853', '411456', '413504']
nosy_count = 8.0
nosy_names = ['pitrou', 'benjamin.peterson', 'sbt', 'yselivanov', 'davin', 'arhadthedev', 'milestonejxd', 'BarkingBad']
pr_nums = ['30617']
priority = 'normal'
resolution = None
stage = 'patch review'
status = 'open'
superseder = None
type = 'resource usage'
url = 'https://bugs.python.org/issue46391'
versions = ['Python 3.8', 'Python 3.9', 'Python 3.10', 'Python 3.11']
```
</p></details>
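The body above is migration metadata rather than an explanation, so purely as background: a minimal sketch of the kind of named resource the title likely refers to (here `multiprocessing.shared_memory`, which is an assumption), created and then released explicitly. This is illustrative context, not a reproduction of the reported leak.
```python
# Background sketch: named shared memory should be closed and unlinked explicitly,
# otherwise the named resource can outlive the process that created it.
from multiprocessing import shared_memory

# The block name is illustrative only.
shm = shared_memory.SharedMemory(create=True, size=1024, name="demo_block")
try:
    shm.buf[:5] = b"hello"   # use the block
finally:
    shm.close()              # detach this process from the block
    shm.unlink()             # remove the named resource itself
```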
|
1.0
|
Library multiprocess leaks named resources. - BPO | [46391](https://bugs.python.org/issue46391)
--- | :---
Nosy | @pitrou, @benjaminp, @1st1, @applio, @arhadthedev, @jxdabc, @BarkingBad
PRs | <li>python/cpython#30617</li>
Files | <li>[screen.png](https://bugs.python.org/file50580/screen.png "Uploaded as image/png at 2022-01-24.09:31:44 by @jxdabc"): screenshot</li>
<sup>*Note: these values reflect the state of the issue at the time it was migrated and might not reflect the current state.*</sup>
<details><summary>Show more details</summary><p>
GitHub fields:
```python
assignee = None
closed_at = None
created_at = <Date 2022-01-15.18:04:57.278>
labels = ['3.8', '3.9', '3.10', 'performance', '3.11', 'library']
title = 'Library multiprocess leaks named resources.'
updated_at = <Date 2022-02-18.19:39:09.294>
user = 'https://github.com/jxdabc'
```
bugs.python.org fields:
```python
activity = <Date 2022-02-18.19:39:09.294>
actor = 'BarkingBad'
assignee = 'none'
closed = False
closed_date = None
closer = None
components = ['Library (Lib)']
creation = <Date 2022-01-15.18:04:57.278>
creator = 'milestonejxd'
dependencies = []
files = ['50580']
hgrepos = []
issue_num = 46391
keywords = ['patch']
message_count = 5.0
messages = ['410654', '410657', '410853', '411456', '413504']
nosy_count = 8.0
nosy_names = ['pitrou', 'benjamin.peterson', 'sbt', 'yselivanov', 'davin', 'arhadthedev', 'milestonejxd', 'BarkingBad']
pr_nums = ['30617']
priority = 'normal'
resolution = None
stage = 'patch review'
status = 'open'
superseder = None
type = 'resource usage'
url = 'https://bugs.python.org/issue46391'
versions = ['Python 3.8', 'Python 3.9', 'Python 3.10', 'Python 3.11']
```
</p></details>
|
process
|
library multiprocess leaks named resources bpo nosy pitrou benjaminp applio arhadthedev jxdabc barkingbad prs python cpython files uploaded as image png at by jxdabc screenshot note these values reflect the state of the issue at the time it was migrated and might not reflect the current state show more details github fields python assignee none closed at none created at labels title library multiprocess leaks named resources updated at user bugs python org fields python activity actor barkingbad assignee none closed false closed date none closer none components creation creator milestonejxd dependencies files hgrepos issue num keywords message count messages nosy count nosy names pr nums priority normal resolution none stage patch review status open superseder none type resource usage url versions
| 1
|
557,721
| 16,517,037,583
|
IssuesEvent
|
2021-05-26 10:46:25
|
woocommerce/woocommerce-ios
|
https://api.github.com/repos/woocommerce/woocommerce-ios
|
closed
|
[Mobile Payments] Connecting a reader from payments leaves the app in an odd state and sometimes crashes
|
feature: mobile payments priority: critical type: bug type: crash
|
I'm not sure how to explain what's going on, but I recorded a video. When you connect to a reader directly from an Order details, there are a few issues:
1. The list of connected readers with the option to disconnect is briefly visible after connecting, before the modal is dismissed [00:16, 00:48]. Possibly related to #4220
2. When you go to settings and tap on Disconnect, a new modal (Connect your card reader) is presented, on top of an identical one [00:26] . The button in the presented one does nothing [00:28]. When you dismiss it, the one underneath is functional [00:30]. However, on the first run, connecting to a reader crashed the app.
3. On the second run, I didn't press the connect button on the presented modal and dismissed it immediately. This didn't crash the app. However, it immediately presented the modal to collect payment [01:03], even though I was in the app settings screen.
https://user-images.githubusercontent.com/8739/118800816-9d674100-b8a0-11eb-988f-c52c7d974db5.mp4
|
1.0
|
[Mobile Payments] Connecting a reader from payments leaves the app in an odd state and sometimes crashes - I'm not sure how to explain what's going on, but I recorded a video. When you connect to a reader directly from an Order details, there are a few issues:
1. The list of connected readers with the option to disconnect is briefly visible after connecting, before the modal is dismissed [00:16, 00:48]. Possibly related to #4220
2. When you go to settings and tap on Disconnect, a new modal (Connect your card reader) is presented, on top of an identical one [00:26] . The button in the presented one does nothing [00:28]. When you dismiss it, the one underneath is functional [00:30]. However, on the first run, connecting to a reader crashed the app.
3. On the second run, I didn't press the connect button on the presented modal and dismissed it immediately. This didn't crash the app. However, it immediately presented the modal to collect payment [01:03], even though I was in the app settings screen.
https://user-images.githubusercontent.com/8739/118800816-9d674100-b8a0-11eb-988f-c52c7d974db5.mp4
|
non_process
|
connecting a reader from payments leaves the app in an odd state and sometimes crashes i m not sure how to explain what s going on but i recorded a video when you connect to a reader directly from an order details there are a few issues the list of connected readers with the option to disconnect is briefly visible after connecting before the modal is dismissed possibly related to when you go to settings and tap on disconnect a new modal connect your card reader is presented on top of an identical one the button in the presented one does nothing when you dismiss it the one underneath is functional however on the first run connecting to a reader crashed the app on the second run i didn t press the connect button on the presented modal and dismissed it immediately this didn t crash the app however it immediately presented the modal to collect payment even though i was in the app settings screen
| 0
|
109,457
| 13,774,599,744
|
IssuesEvent
|
2020-10-08 06:32:27
|
PostHog/posthog.com
|
https://api.github.com/repos/PostHog/posthog.com
|
opened
|
Standardize .com design & make code modular
|
design enhancement
|
After speaking with @berntgl and @jamesefhawkins and looking through posthog.com + code, it seems like there are some key areas we can improve:
- More design consistency throughout different pages
- Make this easy with a component-based design system in Figma
- Figure out where we're at –– review & document inconsistencies in the site currently
- Make the code modular, more reusable UI components
- Add MDX hopefully, so we can have React components throughout Docs & Handbook (all markdown pages)
- Tame the CSS
- Minimize repetition in code by making classes more modular, introducing variables –– [Sass](https://sass-lang.com/) or [something similar](https://stackshare.io/sass/alternatives#:~:text=Stylus%2C%20styled%2Dcomponents%2C%20PostCSS,alternatives%20and%20competitors%20to%20Sass.) would help with this
- This would help decrease usage of `!important`, which can cause some hard-to-diagnose bugs down the line
|
1.0
|
Standardize .com design & make code modular - After speaking with @berntgl and @jamesefhawkins and looking through posthog.com + code, it seems like there are some key areas we can improve:
- More design consistency throughout different pages
- Make this easy with a component-based design system in Figma
- Figure out where we're at –– review & document inconsistencies in the site currently
- Make the code modular, more reusable UI components
- Add MDX hopefully, so we can have React components throughout Docs & Handbook (all markdown pages)
- Tame the CSS
- Minimize repetition in code by making classes more modular, introducing variables –– [Sass](https://sass-lang.com/) or [something similar](https://stackshare.io/sass/alternatives#:~:text=Stylus%2C%20styled%2Dcomponents%2C%20PostCSS,alternatives%20and%20competitors%20to%20Sass.) would help with this
- This would help decrease usage of `!important`, which can cause some hard-to-diagnose bugs down the line
|
non_process
|
standardize com design make code modular after speaking with berntgl and jamesefhawkins and looking through posthog com code it seems like there are some key areas we can improve more design consistency throughout different pages make this easy with a component based design system in figma figure out where we re at –– review document inconsistencies in the site currently make the code modular more reusable ui components add mdx hopefully so we can have react components throughout docs handbook all markdown pages tame the css minimize repetition in code by making classes more modular introducing variables –– or would help with this this would help decrease usage of important which can cause some hard to diagnose bugs down the line
| 0
|
9,547
| 12,512,085,379
|
IssuesEvent
|
2020-06-02 21:53:56
|
googleapis/google-cloud-ruby
|
https://api.github.com/repos/googleapis/google-cloud-ruby
|
closed
|
Add Ruby 2.7 to the CI
|
type: process
|
The Ruby 2.7 compatible build of google-protobuf (version 3.12.0) landed a few days ago. We should be able to add Ruby 2.7 to the CI now.
|
1.0
|
Add Ruby 2.7 to the CI - The Ruby 2.7 compatible build of google-protobuf (version 3.12.0) landed a few days ago. We should be able to add Ruby 2.7 to the CI now.
|
process
|
add ruby to the ci the ruby compatible build of google protobuf version landed a few days ago we should be able to add ruby to the ci now
| 1
|
18,786
| 24,690,980,309
|
IssuesEvent
|
2022-10-19 08:31:48
|
geneontology/go-ontology
|
https://api.github.com/repos/geneontology/go-ontology
|
closed
|
BP refactoring: homeostatic process: amino acid homeostasis process is_a cellular homeostatic process
|
cellular processes
|
- [x] GO:0080144 amino acid homeostasis' -> changed label to 'cellular amino acid homeostasis', changed parent to GO:0019725 cellular homeostasis & definition to "Any process involved in the maintenance of an internal steady state of amino acid within a cell." (done in #24218)
- [x] Change labels and definitions for all children:
GO:0090465 arginine homeostasis
GO:0090459 aspartate homeostasis
GO:0080145 cysteine homeostasis
GO:0090461 glutamate homeostasis
GO:0090464 histidine homeostasis
GO:0090463 lysine homeostasis
GO:0090462 ornithine homeostasis
GO:0090460 threonine homeostasis
|
1.0
|
BP refactoring: homeostatic process: amino acid homeostasis process is_a cellular homeostatic process - - [x] GO:0080144 amino acid homeostasis' -> changed label to 'cellular amino acid homeostasis', changed parent to GO:0019725 cellular homeostasis & definition to "Any process involved in the maintenance of an internal steady state of amino acid within a cell." (done in #24218)
- [x] Change labels and definitions for all children:
GO:0090465 arginine homeostasis
GO:0090459 aspartate homeostasis
GO:0080145 cysteine homeostasis
GO:0090461 glutamate homeostasis
GO:0090464 histidine homeostasis
GO:0090463 lysine homeostasis
GO:0090462 ornithine homeostasis
GO:0090460 threonine homeostasis
|
process
|
bp refactoring homeostatic process amino acid homeostatis process is a cellular homeostatic process go amino acid homeostasis changed label to cellular amino acid homeostasis changed parent to go cellular homeostasis definition to any process involved in the maintenance of an internal steady state of amino acid within a cell done in change labels and definitions for all children go arginine homeostasis go aspartate homeostasis go cysteine homeostasis go glutamate homeostasis go histidine homeostasis go lysine homeostasis go ornithine homeostasis go threonine homeostasis
| 1
|
8,486
| 11,645,643,155
|
IssuesEvent
|
2020-03-01 03:08:29
|
googleapis/google-cloud-node
|
https://api.github.com/repos/googleapis/google-cloud-node
|
closed
|
Reference doc publication needs to work with GitHub app
|
type: process
|
Currently, the publication of reference docs is kicked off by the `publish` kokoro job, as we start to move publication towards our new GitHub-app-based approach for publication, we need to figure out how we're going to kick off this job (perhaps we could just move it to being nightly?).
|
1.0
|
Reference doc publication needs to work with GitHub app - Currently, the publication of reference docs is kicked off by the `publish` kokoro job, as we start to move publication towards our new GitHub-app-based approach for publication, we need to figure out how we're going to kick off this job (perhaps we could just move it to being nightly?).
|
process
|
reference doc publication needs to work with github app currently the publication of reference docs is kicked off by the publish kokoro job as we start to move publication towards our new github app based approach for publication we need to figure out how we re going to kick off this job perhaps we could just move it to being nightly
| 1
|
269,812
| 23,467,888,947
|
IssuesEvent
|
2022-08-16 18:36:35
|
void-linux/void-packages
|
https://api.github.com/repos/void-linux/void-packages
|
opened
|
Libconfig
|
bug needs-testing
|
### Is this a new report?
Yes
### System Info
Void 5.18.16_1 x86_64-musl GenuineIntel uptodate rFFFF
### Package(s) Affected
libconfig-1.7.3_1, libconfig-devel-1.7.3_1
### Does a report exist for this bug with the project's home (upstream) and/or another distro?
https://github.com/yshui/picom/issues/872
### Expected behaviour
Compiling picom should generate an executable `picom` under `src/` under the building tree
### Actual behaviour
A bunch of linker error messages related to `libconfig` are displayed
### Steps to reproduce
1. Clone the repo `git clone https://github.com/yshui/picom`
2. Install make deps including `libconfig-devel`
3. meson --buildtype=release . build
4. ninja -C build
|
1.0
|
Libconfig - ### Is this a new report?
Yes
### System Info
Void 5.18.16_1 x86_64-musl GenuineIntel uptodate rFFFF
### Package(s) Affected
libconfig-1.7.3_1, libconfig-devel-1.7.3_1
### Does a report exist for this bug with the project's home (upstream) and/or another distro?
https://github.com/yshui/picom/issues/872
### Expected behaviour
Compiling picom should generate an executable `picom` under `src/` under the building tree
### Actual behaviour
A bunch of linker error messages related to `libconfig` are displayed
### Steps to reproduce
1. Clone the repo `git clone https://github.com/yshui/picom`
2. Install make deps including `libconfig-devel`
3. meson --buildtype=release . build
4. ninja -C build
|
non_process
|
libconfig is this a new report yes system info void musl genuineintel uptodate rffff package s affected libconfig libconfig devel does a report exist for this bug with the project s home upstream and or another distro expected behaviour compiling picom should generate an executable picom under src under the building tree actual behaviour a bunch of linker error messages related to libconfig are displayed steps to reproduce clone the repo git clone install make deps including libconfig devel meson buildtype release build ninja c build
| 0
|
15,642
| 19,845,881,406
|
IssuesEvent
|
2022-01-21 06:12:20
|
ooi-data/CE07SHSM-MFD37-01-OPTAAD000-telemetered-optaa_dj_dcl_instrument
|
https://api.github.com/repos/ooi-data/CE07SHSM-MFD37-01-OPTAAD000-telemetered-optaa_dj_dcl_instrument
|
opened
|
🛑 Processing failed: InvalidIndexError
|
process
|
## Overview
`InvalidIndexError` found in `processing_task` task during run ended on 2022-01-21T06:12:19.902557.
## Details
Flow name: `CE07SHSM-MFD37-01-OPTAAD000-telemetered-optaa_dj_dcl_instrument`
Task name: `processing_task`
Error type: `InvalidIndexError`
Error message: Reindexing only valid with uniquely valued Index objects
<details>
<summary>Traceback</summary>
```
Traceback (most recent call last):
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/ooi_harvester/processor/pipeline.py", line 165, in processing
final_path = finalize_data_stream(
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/ooi_harvester/processor/__init__.py", line 84, in finalize_data_stream
append_to_zarr(mod_ds, final_store, enc, logger=logger)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/ooi_harvester/processor/__init__.py", line 357, in append_to_zarr
_append_zarr(store, mod_ds)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/ooi_harvester/processor/utils.py", line 182, in _append_zarr
ds_to_append = ds_to_append.drop_isel({append_dim: 0})
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/core/dataset.py", line 4563, in drop_isel
ds = ds.loc[dimension_index]
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/core/dataset.py", line 563, in __getitem__
return self.dataset.sel(key)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/core/dataset.py", line 2504, in sel
pos_indexers, new_indexes = remap_label_indexers(
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/core/coordinates.py", line 421, in remap_label_indexers
pos_indexers, new_indexes = indexing.remap_label_indexers(
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/core/indexing.py", line 120, in remap_label_indexers
idxr, new_idx = index.query(labels, method=method, tolerance=tolerance)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/core/indexes.py", line 240, in query
indexer = get_indexer_nd(self.index, label, method, tolerance)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/core/indexes.py", line 142, in get_indexer_nd
flat_indexer = index.get_indexer(flat_labels, method=method, tolerance=tolerance)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/pandas/core/indexes/base.py", line 3442, in get_indexer
raise InvalidIndexError(self._requires_unique_msg)
pandas.errors.InvalidIndexError: Reindexing only valid with uniquely valued Index objects
```
</details>
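For context, a minimal sketch that reproduces this class of failure, assuming the root cause is a duplicated value in the `time` index; with the xarray version in the traceback, `drop_isel` routes through a label-based lookup that requires a unique index. The de-duplication step is one possible workaround, not the harvester's actual fix.
```python
# Minimal sketch: a non-unique index makes the label-based path raise InvalidIndexError.
import pandas as pd
import xarray as xr

time = pd.to_datetime(["2022-01-01", "2022-01-01", "2022-01-02"])  # duplicated timestamp
ds = xr.Dataset({"v": ("time", [1.0, 2.0, 3.0])}, coords={"time": time})

try:
    ds.drop_isel(time=[0])  # on affected xarray versions this goes through .loc and fails
except Exception as err:
    print(type(err).__name__, err)

# One possible workaround: drop duplicate labels before doing positional drops.
dedup = ds.isel(time=~ds.get_index("time").duplicated())
print(dedup.drop_isel(time=[0]))
```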
|
1.0
|
🛑 Processing failed: InvalidIndexError - ## Overview
`InvalidIndexError` found in `processing_task` task during run ended on 2022-01-21T06:12:19.902557.
## Details
Flow name: `CE07SHSM-MFD37-01-OPTAAD000-telemetered-optaa_dj_dcl_instrument`
Task name: `processing_task`
Error type: `InvalidIndexError`
Error message: Reindexing only valid with uniquely valued Index objects
<details>
<summary>Traceback</summary>
```
Traceback (most recent call last):
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/ooi_harvester/processor/pipeline.py", line 165, in processing
final_path = finalize_data_stream(
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/ooi_harvester/processor/__init__.py", line 84, in finalize_data_stream
append_to_zarr(mod_ds, final_store, enc, logger=logger)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/ooi_harvester/processor/__init__.py", line 357, in append_to_zarr
_append_zarr(store, mod_ds)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/ooi_harvester/processor/utils.py", line 182, in _append_zarr
ds_to_append = ds_to_append.drop_isel({append_dim: 0})
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/core/dataset.py", line 4563, in drop_isel
ds = ds.loc[dimension_index]
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/core/dataset.py", line 563, in __getitem__
return self.dataset.sel(key)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/core/dataset.py", line 2504, in sel
pos_indexers, new_indexes = remap_label_indexers(
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/core/coordinates.py", line 421, in remap_label_indexers
pos_indexers, new_indexes = indexing.remap_label_indexers(
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/core/indexing.py", line 120, in remap_label_indexers
idxr, new_idx = index.query(labels, method=method, tolerance=tolerance)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/core/indexes.py", line 240, in query
indexer = get_indexer_nd(self.index, label, method, tolerance)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/xarray/core/indexes.py", line 142, in get_indexer_nd
flat_indexer = index.get_indexer(flat_labels, method=method, tolerance=tolerance)
File "/srv/conda/envs/notebook/lib/python3.9/site-packages/pandas/core/indexes/base.py", line 3442, in get_indexer
raise InvalidIndexError(self._requires_unique_msg)
pandas.errors.InvalidIndexError: Reindexing only valid with uniquely valued Index objects
```
</details>
|
process
|
🛑 processing failed invalidindexerror overview invalidindexerror found in processing task task during run ended on details flow name telemetered optaa dj dcl instrument task name processing task error type invalidindexerror error message reindexing only valid with uniquely valued index objects traceback traceback most recent call last file srv conda envs notebook lib site packages ooi harvester processor pipeline py line in processing final path finalize data stream file srv conda envs notebook lib site packages ooi harvester processor init py line in finalize data stream append to zarr mod ds final store enc logger logger file srv conda envs notebook lib site packages ooi harvester processor init py line in append to zarr append zarr store mod ds file srv conda envs notebook lib site packages ooi harvester processor utils py line in append zarr ds to append ds to append drop isel append dim file srv conda envs notebook lib site packages xarray core dataset py line in drop isel ds ds loc file srv conda envs notebook lib site packages xarray core dataset py line in getitem return self dataset sel key file srv conda envs notebook lib site packages xarray core dataset py line in sel pos indexers new indexes remap label indexers file srv conda envs notebook lib site packages xarray core coordinates py line in remap label indexers pos indexers new indexes indexing remap label indexers file srv conda envs notebook lib site packages xarray core indexing py line in remap label indexers idxr new idx index query labels method method tolerance tolerance file srv conda envs notebook lib site packages xarray core indexes py line in query indexer get indexer nd self index label method tolerance file srv conda envs notebook lib site packages xarray core indexes py line in get indexer nd flat indexer index get indexer flat labels method method tolerance tolerance file srv conda envs notebook lib site packages pandas core indexes base py line in get indexer raise invalidindexerror self requires unique msg pandas errors invalidindexerror reindexing only valid with uniquely valued index objects
| 1
|
293,522
| 8,997,007,338
|
IssuesEvent
|
2019-02-02 07:20:42
|
tandemcode/tandem
|
https://api.github.com/repos/tandemcode/tandem
|
closed
|
tandem-cli tool should download app
|
completed feature priority: high
|
Look at Cypress for inspiration. Motivation for this is that the editor is _coupled_ to `*.pc` files, and the developer may need different versions of the application if they have different projects which use different `*.pc` files. This _also_ makes updating a bit easier since we'd be relying on the package version of the CLI tool vs using something like Squirrel.
TODOS:
- [ ] Store Tandem app in shared place
- [ ] Tandem binary
- [ ] If *.tdproject is not found in directory, prompt to create new project
- [ ] CLI should prompt when there's an update available based on the _minor_ parameter.
|
1.0
|
tandem-cli tool should download app - Look at Cypress for inspiration. Motivation for this is that the editor is _coupled_ to `*.pc` files, and the developer may need different versions of the application if they have different projects which use different `*.pc` files. This _also_ makes updating a bit easier since we'd be relying on the package version of the CLI tool vs using something like Squirrel.
TODOS:
- [ ] Store Tandem app in shared place
- [ ] Tandem binary
- [ ] If *.tdproject is not found in directory, prompt to create new project
- [ ] CLI should prompt when there's an update available based on the _minor_ parameter.
|
non_process
|
tandem cli tool should download app look at cypress for inspiration motivation for this is that the editor is coupled to pc files and the developer may need different versions of the application if they have different projects which use different pc files this also makes updating a bit easier since we d be relying on the package version of the cli tool vs using something like squirrel todos store tandem app in shared place tandem binary if tdproject is not found in directory prompt to create new project cli should prompt when there s an update available based on the minor parameter
| 0
|
15,105
| 5,888,488,118
|
IssuesEvent
|
2017-05-17 10:15:49
|
linuxkit/linuxkit
|
https://api.github.com/repos/linuxkit/linuxkit
|
closed
|
Can busybox in the root fs be stripped down more?
|
area/build kind/enhancement
|
I'm just looking through the minimal.yml base filesystem and noticing a lot of functionality in the Busybox symlinks that seem redundant now:
```
lrwxrwxrwx 1 root root 12 May 17 01:36 dnsd -> /bin/busybox
lrwxrwxrwx 1 root root 12 May 17 01:36 ether-wake -> /bin/busybox
lrwxrwxrwx 1 root root 12 May 17 01:36 fakeidentd -> /bin/busybox
lrwxrwxrwx 1 root root 12 May 17 01:36 fbset -> /bin/busybox
lrwxrwxrwx 1 root root 12 May 17 01:36 fdformat -> /bin/busybox
lrwxrwxrwx 1 root root 12 May 17 01:36 ftpd -> /bin/busybox
lrwxrwxrwx 1 root root 12 May 17 01:36 httpd -> /bin/busybox
lrwxrwxrwx 1 root root 12 May 17 01:36 inetd -> /bin/busybox
lrwxrwxrwx 1 root root 12 May 17 01:36 killall5 -> /bin/busybox
lrwxrwxrwx 1 root root 12 May 17 01:36 loadfont -> /bin/busybox
lrwxrwxrwx 1 root root 12 May 17 01:36 lspci -> /bin/busybox
```
e.g. dnsd, ftpd, httpd definitely seem not useful...
|
1.0
|
Can busybox in the root fs be stripped down more? - I'm just looking through the minimal.yml base filesystem and noticing a lot of functionality in the Busybox symlinks that seem redundant now:
```
lrwxrwxrwx 1 root root 12 May 17 01:36 dnsd -> /bin/busybox
lrwxrwxrwx 1 root root 12 May 17 01:36 ether-wake -> /bin/busybox
lrwxrwxrwx 1 root root 12 May 17 01:36 fakeidentd -> /bin/busybox
lrwxrwxrwx 1 root root 12 May 17 01:36 fbset -> /bin/busybox
lrwxrwxrwx 1 root root 12 May 17 01:36 fdformat -> /bin/busybox
lrwxrwxrwx 1 root root 12 May 17 01:36 ftpd -> /bin/busybox
lrwxrwxrwx 1 root root 12 May 17 01:36 httpd -> /bin/busybox
lrwxrwxrwx 1 root root 12 May 17 01:36 inetd -> /bin/busybox
lrwxrwxrwx 1 root root 12 May 17 01:36 killall5 -> /bin/busybox
lrwxrwxrwx 1 root root 12 May 17 01:36 loadfont -> /bin/busybox
lrwxrwxrwx 1 root root 12 May 17 01:36 lspci -> /bin/busybox
```
e.g. dnsd, ftpd, httpd definitely seem not useful...
|
non_process
|
can busybox in the root fs be stripped down more i m just looking through the minimal yml base filesystem and noticing a lot of functionality in the busybox symlinks that seem redundant now lrwxrwxrwx root root may dnsd bin busybox lrwxrwxrwx root root may ether wake bin busybox lrwxrwxrwx root root may fakeidentd bin busybox lrwxrwxrwx root root may fbset bin busybox lrwxrwxrwx root root may fdformat bin busybox lrwxrwxrwx root root may ftpd bin busybox lrwxrwxrwx root root may httpd bin busybox lrwxrwxrwx root root may inetd bin busybox lrwxrwxrwx root root may bin busybox lrwxrwxrwx root root may loadfont bin busybox lrwxrwxrwx root root may lspci bin busybox e g dnsd ftpd httpd definitely seem not useful
| 0
|
257,509
| 22,172,772,211
|
IssuesEvent
|
2022-06-06 04:06:57
|
pingcap/tiflow
|
https://api.github.com/repos/pingcap/tiflow
|
opened
|
cdc cron mq tests not guarding code properly
|
component/test
|
### Which jobs are flaking?
N/A
### Which test(s) are flaking?
N/A
### Jenkins logs or GitHub Actions link
https://github.com/pingcap/tiflow/actions/workflows/ticdc_integration_cron.yaml
### Anything else we need to know
The action contains two tests and runs them on a cron schedule. But from the results, I think almost nobody cares about them. The canal job has failed consistently for 4 months. And with the GA of the avro feature, we should take the corresponding tests seriously.
|
1.0
|
cdc cron mq tests not guarding code properly - ### Which jobs are flaking?
N/A
### Which test(s) are flaking?
N/A
### Jenkins logs or GitHub Actions link
https://github.com/pingcap/tiflow/actions/workflows/ticdc_integration_cron.yaml
### Anything else we need to know
The action contains two tests and runs them on a cron schedule. But from the results, I think almost nobody cares about them. The canal job has failed consistently for 4 months. And with the GA of the avro feature, we should take the corresponding tests seriously.
|
non_process
|
cdc cron mq tests not guarding code properly which jobs are flaking n a which test s are flaking n a jenkins logs or github actions link anything else we need to know the action contains two test and runs them cronly but from the result i think there is almost nobody careing about them the canal job fails consistently for months and with the ga of avro feature we should take the corresponding test seriously
| 0
|
3,927
| 6,847,354,552
|
IssuesEvent
|
2017-11-13 15:12:05
|
ouh-churchill/COPE
|
https://api.github.com/repos/ouh-churchill/COPE
|
closed
|
Date of procurement misaligned on DMC Adverse Events Report
|
App: Administration Priority: Low Process: Closed-Resolved
|
The procured date is being displayed twice on the HMP +o2 table (but correctly on the HMP -o2). Looks to have been this way since 4th Apr when the report was last amended (wondering why we didn't spot it at the last DMC meeting).
|
1.0
|
Date of procurement misaligned on DMC Adverse Events Report - The procured date is being displayed twice on the HMP +o2 table (but correctly on the HMP -o2). Looks to have been this way since 4th Apr when the report was last amended (wondering why we didn't spot it at the last DMC meeting).
|
process
|
date of procurement misaligned on dmc adverse events report the procured date is being displayed twice on the hmp table but correctly on the hmp looks to have been this way since apr when the report was last amended wondering why we didn t spot it at the last dmc meeting
| 1
|
261,233
| 19,703,216,014
|
IssuesEvent
|
2022-01-12 18:48:55
|
getditto/docs
|
https://api.github.com/repos/getditto/docs
|
opened
|
Problem: Missing Auth0 Integration Tutorial
|
documentation
|
We need an example where an iOS, React, and Android device can login with Auth0 and simultaneously get credentials to sync with Ditto.
|
1.0
|
Problem: Missing Auth0 Integration Tutorial - We need an example where an iOS, React, and Android device can login with Auth0 and simultaneously get credentials to sync with Ditto.
|
non_process
|
problem missing integration tutorial we need an example where an ios react and android device can login with and simultaneously get credentials to sync with ditto
| 0
|
20,190
| 26,756,842,259
|
IssuesEvent
|
2023-01-31 01:22:56
|
hsmusic/hsmusic-data
|
https://api.github.com/repos/hsmusic/hsmusic-data
|
closed
|
Wallpapers for the 'classic' Homestuck volumes
|
scope: official type: addition type: involved process what: art & layout media
|
Homestuck Vol. 4 (as well as its rerelease Vol. 1-4) released with an exclusive SBURB-themed wallpaper. We added it to the wiki with the additional files feature, but it never felt 100% appropriate to have a background only for Vol. 4 and not the other three initial albums (although it looks really good as an album background). I'd like to use some of [MSPA's wallpapers section](http://www.mspaintadventures.com/desktops.html) for this since they match the SBURB wallpaper's early Homestuck feel. The idea is for each volume to have a wallpaper of the appropriate kid's house and for Vol. 1-4 to use the SBURB wallpaper.
- [x] Vol. 1
- [x] Trawl through flashes for appropriate HQ assets
- [x] Vol. 2
- [x] Trawl through flashes for appropriate HQ assets
- [x] [Vol. 3](http://www.mspaintadventures.com/desktops/daveapt_1920x1080.jpg)
- [x] [Vol. 4](http://www.mspaintadventures.com/desktops/jadehouse_1920x1080.jpg)
- [x] Carefully fill in blank edges
- [x] [Vol. 1-4](https://hsmusic.wiki/media/album-additional/homestuck-vol-4/sburbwp_1920x1080.jpg)
- [x] Slightly adjust horizontally to prevent scrollbar from mucking with symmetry (5px to the left)
|
1.0
|
Wallpapers for the 'classic' Homestuck volumes - Homestuck Vol. 4 (as well as its rerelease Vol. 1-4) released with an exclusive SBURB-themed wallpaper. We added it to the wiki with the additional files feature, but it never felt 100% appropriate to have a background only for Vol. 4 and not the other three initial albums (although it looks really good as an album background). I'd like to use some of [MSPA's wallpapers section](http://www.mspaintadventures.com/desktops.html) for this since they match the SBURB wallpaper's early Homestuck feel. The idea is for each volume to have a wallpaper of the appropriate kid's house and for Vol. 1-4 to use the SBURB wallpaper.
- [x] Vol. 1
- [x] Trawl through flashes for appropriate HQ assets
- [x] Vol. 2
- [x] Trawl through flashes for appropriate HQ assets
- [x] [Vol. 3](http://www.mspaintadventures.com/desktops/daveapt_1920x1080.jpg)
- [x] [Vol. 4](http://www.mspaintadventures.com/desktops/jadehouse_1920x1080.jpg)
- [x] Carefully fill in blank edges
- [x] [Vol. 1-4](https://hsmusic.wiki/media/album-additional/homestuck-vol-4/sburbwp_1920x1080.jpg)
- [x] Slightly adjust horizontally to prevent scrollbar from mucking with symmetry (5px to the left)
|
process
|
wallpapers for the classic homestuck volumes homestuck vol as well as its rerelease vol released with an exclusive sburb themed wallpaper we added it to the wiki with the additional files feature but it never felt appropriate to have a background only for vol and not the other three initial albums although it looks really good as an album background i d like to use some of for this since they match the sburb wallpaper s early homestuck feel the idea is for each volume to have a wallpaper of the appropriate kid s house and for vol to use the sburb wallpaper vol trawl through flashes for appropriate hq assets vol trawl through flashes for appropriate hq assets carefully fill in blank edges slightly adjust horizontally to prevent scrollbar from mucking with symmetry to the left
| 1
|
16,560
| 21,572,911,566
|
IssuesEvent
|
2022-05-02 10:25:56
|
huutho77/CNPMNC_ThayAi
|
https://api.github.com/repos/huutho77/CNPMNC_ThayAi
|
opened
|
[API] Feature Get All Products
|
dev/thnguyen processing
|
- Return all products from the database:
- [ ] Product Name
- [ ] Price
- [ ] Quantity
- [ ] Description
Due Date: 06/05/2022
|
1.0
|
[API] Feature Get All Products - - Return all products from the database:
- [ ] Product Name
- [ ] Price
- [ ] Quantity
- [ ] Description
Due Date: 06/05/2022
|
process
|
feature get all products return all products from the database product name price quantity description due date
| 1
|
10,896
| 7,342,347,235
|
IssuesEvent
|
2018-03-07 07:26:16
|
neovim/neovim
|
https://api.github.com/repos/neovim/neovim
|
reopened
|
:terminal flooded by output (`yes`) does not respond to input
|
:terminal input io performance
|
- `nvim --version`: NVIM v0.2.3-768-gf72630b78
### Steps to reproduce using `nvim -u NORC`
Just run some terminal command that produces a lot of output, enter normal mode, and try to scroll up. The cursor in normal mode used to not track the last line of terminal output, but as of a recent commit, it does.
|
True
|
:terminal flooded by output (`yes`) does not respond to input -
- `nvim --version`: NVIM v0.2.3-768-gf72630b78
### Steps to reproduce using `nvim -u NORC`
Just run some terminal command that produces a lot of output, enter normal mode, and try to scroll up. The cursor in normal mode used to not track the last line of terminal output, but as of a recent commit, it does.
|
non_process
|
terminal flooded by output yes does not respond to input nvim version nvim steps to reproduce using nvim u norc just run some terminal command that produces a lot of output enter normal mode and try to scroll up the cursor in normal mode used to not track the last line of terminal output but as of a recent commit it does
| 0
|
16,587
| 3,541,518,733
|
IssuesEvent
|
2016-01-19 01:40:21
|
e-government-ua/i
|
https://api.github.com/repos/e-government-ua/i
|
closed
|
Extend the Service entity to limit the number of applications a citizen can submit simultaneously for a given service
|
active question test _wf-central
|
Add a field to the Service entity:
nOpenedLimit - the number of applications a citizen may have open at the same time (more open applications than this number are not allowed)
|
1.0
|
Extend the Service entity to limit the number of applications a citizen can submit simultaneously for a given service - Add a field to the Service entity:
nOpenedLimit - the number of applications a citizen may have open at the same time (more open applications than this number are not allowed)
|
non_process
|
доработать сущность сервиса для лимитирование одновременной подачи н ного числа заявок гражданином по определенной услуге в сущности service добавляем поле nopenedlimit число возможных одновременно открытых заявок от гражданина больше этого числа открытых заявок быть не может
| 0
|
11,625
| 14,484,892,825
|
IssuesEvent
|
2020-12-10 16:54:59
|
gwatkinson/projet-python-twitter
|
https://api.github.com/repos/gwatkinson/projet-python-twitter
|
closed
|
Formatting the data
|
data processing
|
**Which part of the project does this concern?**
- [ ] GitHub organisation
- [ ] Twitter Stream API
- [x] Data processing
- [ ] Data modelling
- [ ] Visualisation
---
**Details of the element to add**
- Goal of the element?
- Create dataframes from the .json files collected with the streaming (see the sketch after this section)
- Which steps to carry out?
- [ ] Write functions and classes to process the .json files
- [ ] Write the documentation for these functions
- [ ] Write tests and test the functions
- Who should do it?
- @gwatkinson
- @MathiasVigouroux
- @Wilesane
---
**Additional context**
This corresponds to the second major part of the project.
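Purely as an illustration of the step described above, a small sketch of the kind of helper this could produce, assuming the stream wrote one tweet JSON object per line; the directory layout and field names are assumptions, not the project's actual format.
```python
# Hypothetical sketch: build a DataFrame from streamed tweet .json files.
import json
from pathlib import Path

import pandas as pd

def tweets_to_dataframe(json_dir: str) -> pd.DataFrame:
    """Read every .json file in a directory (one JSON object per line) into one DataFrame."""
    records = []
    for path in Path(json_dir).glob("*.json"):
        with open(path, encoding="utf-8") as fh:
            for line in fh:
                line = line.strip()
                if line:
                    records.append(json.loads(line))
    # Flatten nested fields such as user.screen_name into columns.
    return pd.json_normalize(records)

# Example usage (the directory name is illustrative):
# df = tweets_to_dataframe("data/stream")
# print(df.columns)
```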
|
1.0
|
Formatting the data - **Which part of the project does this concern?**
- [ ] GitHub organisation
- [ ] Twitter Stream API
- [x] Data processing
- [ ] Data modelling
- [ ] Visualisation
---
**Details of the element to add**
- Goal of the element?
- Create dataframes from the .json files collected with the streaming
- Which steps to carry out?
- [ ] Write functions and classes to process the .json files
- [ ] Write the documentation for these functions
- [ ] Write tests and test the functions
- Who should do it?
- @gwatkinson
- @MathiasVigouroux
- @Wilesane
---
**Additional context**
This corresponds to the second major part of the project.
|
process
|
mise en forme des données concerne quelle partie du projet organisation du github stream api twitter processing des données modélisation des données visualisation détails de l élément à ajouter objectif de l élément créer des dataframes à partir des fichiers json récupérés avec le streaming quelles étapes effectuer faire des fonctions et des classes pour traiter les fichiers json faire la doc de ces fonctions ecrire des tests et tester les fonctions qui doit le faire gwatkinson mathiasvigouroux wilesane contexte supplémentaire cela correspond à la deuxième grosse partie du projet
| 1
|
201,718
| 7,035,494,950
|
IssuesEvent
|
2017-12-28 00:27:34
|
oldergod/red
|
https://api.github.com/repos/oldergod/red
|
closed
|
Shared Content animation for match detail view
|
enhancement priority 3 (should) size M
|
Set an animation when going from MatchList to MatchDetail with the headline as shared content
|
1.0
|
Shared Content animation for match detail view - Set an animation when going from MatchList to MatchDetail with the headline as shared content
|
non_process
|
shared content animation for match detail view set an animation when going from matchlist to matchdetail with the headline as shared content
| 0
|
43,168
| 9,382,372,029
|
IssuesEvent
|
2019-04-04 22:12:21
|
Azure/azure-sdk-for-java
|
https://api.github.com/repos/Azure/azure-sdk-for-java
|
opened
|
CheckStyle: No implementation in public API
|
Client Java Source Code Rules
|
Public API classes should not have any public methods that return or accept as arguments any non-public API.
1. This can be determined by looking at the fully-qualified name of all parameter types and return types for all public API (that is, all public classes that are not in an implementation package).
2. For each type, if it is within an implementation package, then this should be considered an error.
|
1.0
|
CheckStyle: No implementation in public API - Public API classes should not have any public methods that return or accept as arguments any non-public API.
1. This can be determined by looking at the fully-qualified name of all parameter types and return types for all public API (that is, all public classes that are not in an implementation package).
2. For each type, if it is within an implementation package, then this should be considered an error.
|
non_process
|
checkstyle no implementation in public api public api classes should not have any public methods that return or accept as arguments any non public api this can be determined by looking at the fully qualified name of all parameter types and return types for all public api that is all public classes that are not in an implementation package for each type if it is within an implementation package then this should be considered an error
| 0
|
748
| 3,219,396,620
|
IssuesEvent
|
2015-10-08 09:29:10
|
JoaoGFarias/string_processing_algorithms_in_c_and_python
|
https://api.github.com/repos/JoaoGFarias/string_processing_algorithms_in_c_and_python
|
opened
|
[Projeto 1] Roadmap
|
process
|
* 0.1 - Reescrever KMP e Wu-Manber em C/C++
* 0.2 - Criar frontend para requisitos mínimos. I/O burro.
* 0.3 - I/O inteligente
* 0.4 - KMP paralelo para 1 texto e n padrões
* 0.5 - Opção ignore-case
* 1.0 - KMP paralelo para m textos e n padrões
|
1.0
|
[Projeto 1] Roadmap - * 0.1 - Reescrever KMP e Wu-Manber em C/C++
* 0.2 - Criar frontend para requisitos mínimos. I/O burro.
* 0.3 - I/O inteligente
* 0.4 - KMP paralelo para 1 texto e n padrões
* 0.5 - Opção ignore-case
* 1.0 - KMP paralelo para m textos e n padrões
|
process
|
roadmap reescrever kmp e wu manber em c c criar frontend para requisitos mínimos i o burro i o inteligente kmp paralelo para texto e n padrões opção ignore case kmp paralelo para m textos e n padrões
| 1
|
21,677
| 30,121,327,884
|
IssuesEvent
|
2023-06-30 15:23:31
|
USGS-WiM/StreamStats
|
https://api.github.com/repos/USGS-WiM/StreamStats
|
closed
|
BP: Some BCs not appearing until Flow Statistic is selected
|
Batch Processor
|
How to recreate:
1. Select Alaska as State/Region
2. Check "Compute Flow Statistics" and "Compute Basin Characteristics"
3. Note how there are only 2 BCs: DRNAREA and PRECPRIS00
4. Check "Peak-Flow Statistics"
5. Note how the ELEV BC has showed up
All BCs should show up in the Basin Characteristic list from the start
|
1.0
|
BP: Some BCs not appearing until Flow Statistic is selected - How to recreate:
1. Select Alaska as State/Region
2. Check "Compute Flow Statistics" and "Compute Basin Characteristics"
3. Note how there are only 2 BCs: DRNAREA and PRECPRIS00
4. Check "Peak-Flow Statistics"
5. Note how the ELEV BC has showed up
All BCs should show up in the Basin Characteristic list from the start
|
process
|
bp some bcs not appearing until flow statistic is selected how to recreate select alaska as state region check compute flow statistics and compute basin characteristics note how there are only bcs drnarea and check peak flow statistics note how the elev bc has showed up all bcs should show up in the basin characteristic list from the start
| 1
|
24,720
| 12,145,311,950
|
IssuesEvent
|
2020-04-24 09:06:51
|
aws/aws-sdk-go
|
https://api.github.com/repos/aws/aws-sdk-go
|
closed
|
Can't parse detective.ListGraphs, ListInvitations output
|
service-api
|
### Version of AWS SDK for Go?
`github.com/aws/aws-sdk-go v1.29.11`
### Version of Go (`go version`)?
`go version go1.14 darwin/amd64`
### What issue did you see?
JSON RPC response from `detective.ListGraphs` call can't be parsed into `detective.ListGraphsOutput` with following error:
```
error listing graphs: SerializationError: failed decoding JSON RPC response
status code: 200, request id: 560d2b2a-33fe-4be9-a68e-8fb366c24da8
caused by: parsing time "2020-02-11T14:42:26.569+0000" as "2006-01-02T15:04:05.999999999Z": cannot parse "+0000" as "Z"
```
### Steps to reproduce
```go
masterSess := session.Must(session.NewSession(
&aws.Config{
Region: aws.String("eu-west-1"),
}))
d := detective.New(masterSess)
graphs, err := d.ListGraphs(nil)
if err != nil {
return nil, errors.Wrap(err, "error listing graphs")
}
```
|
1.0
|
Can't parse detective.ListGraphs, ListInvitations output - ### Version of AWS SDK for Go?
`github.com/aws/aws-sdk-go v1.29.11`
### Version of Go (`go version`)?
`go version go1.14 darwin/amd64`
### What issue did you see?
JSON RPC response from `detective.ListGraphs` call can't be parsed into `detective.ListGraphsOutput` with following error:
```
error listing graphs: SerializationError: failed decoding JSON RPC response
status code: 200, request id: 560d2b2a-33fe-4be9-a68e-8fb366c24da8
caused by: parsing time "2020-02-11T14:42:26.569+0000" as "2006-01-02T15:04:05.999999999Z": cannot parse "+0000" as "Z"
```
### Steps to reproduce
```go
masterSess := session.Must(session.NewSession(
&aws.Config{
Region: aws.String("eu-west-1"),
}))
d := detective.New(masterSess)
graphs, err := d.ListGraphs(nil)
if err != nil {
return nil, errors.Wrap(err, "error listing graphs")
}
```
|
non_process
|
can t parse detective listgraphs listinvitations output version of aws sdk for go github com aws aws sdk go version of go go version go version darwin what issue did you see json rpc response from detective listgraphs call can t be parsed into detective listgraphsoutput with following error error listing graphs serializationerror failed decoding json rpc response status code request id caused by parsing time as cannot parse as z steps to reproduce go mastersess session must session newsession aws config region aws string eu west d detective new mastersess graphs err d listgraphs nil if err nil return nil errors wrap err error listing graphs
| 0
|
14,290
| 4,865,850,888
|
IssuesEvent
|
2016-11-14 21:56:54
|
ZDDM/Time-lab
|
https://api.github.com/repos/ZDDM/Time-lab
|
closed
|
Godot Development discussion.
|
Code design discussion Godot
|
We're finally doing this, eh?
Well, we need to plan this out. Coding everything from scratch might be a pain in the ass,
but I'm sure it'll be very rewarding in the future, as we would have full control over everything.
|
1.0
|
Godot Development discussion. - We're finally doing this, eh?
Well, we need to plan this out. Coding everything from scratch might be a pain in the ass,
but I'm sure it'll be very rewarding in the future, as we would have full control over everything.
|
non_process
|
godot development discussion we re finally doing this eh well we need to plan this out coding everything from scratch might be a pain in the ass but i m sure it ll be very rewarding in the future as we would have full control over everything
| 0
|
90,736
| 3,829,927,526
|
IssuesEvent
|
2016-03-31 12:51:40
|
PX4/Firmware
|
https://api.github.com/repos/PX4/Firmware
|
closed
|
Ekf2 does not publish control state airspeed
|
bug priority-critical
|
All logic in PX4 which depends on control state is not working with ekf2 at the moment.
|
1.0
|
Ekf2 does not publish control state airspeed - All logic in PX4 which depends on control state is not working with ekf2 at the moment.
|
non_process
|
does not publish control state airspeed all logic in which depends on control state is not working with at the moment
| 0
|
8,601
| 11,759,996,222
|
IssuesEvent
|
2020-03-13 18:27:36
|
GetTerminus/terminus-ui
|
https://api.github.com/repos/GetTerminus/terminus-ui
|
closed
|
README: All local links are now incorrect
|
Focus: docs Goal: Process Improvement P2: Urgent Type: bug
|
During the recent large file structure refactor we didn't update the relative URLs inside our repos.
- [ ] Verify all links in primary readme are correct
- [ ] Verify all links in any nested readme's are correct
|
1.0
|
README: All local links are now incorrect - During the recent large file structure refactor we didn't update the relative URLs inside our repos.
- [ ] Verify all links in primary readme are correct
- [ ] Verify all links in any nested readme's are correct
|
process
|
readme all local links are now incorrect during the recent large file structure refactor we didn t update the relative urls inside our repos verify all links in primary readme are correct verify all links in any nested readme s are correct
| 1
|
481,705
| 13,890,359,379
|
IssuesEvent
|
2020-10-19 09:10:52
|
ntop/ntopng
|
https://api.github.com/repos/ntop/ntopng
|
closed
|
Risk not specified in alerts
|
low-priority bug
|

The above alert is misleading if not useless as it does not explain what is the problem. It is requested to add it to the error message
|
1.0
|
Risk not specified in alerts - 
The above alert is misleading if not useless as it does not explain what is the problem. It is requested to add it to the error message
|
non_process
|
risk not specified in alerts the above alert is misleading if not useless as it does not explain what is the problem it is requested to add it to the error message
| 0
|
608,563
| 18,842,469,154
|
IssuesEvent
|
2021-11-11 11:11:51
|
ballerina-platform/ballerina-lang
|
https://api.github.com/repos/ballerina-platform/ballerina-lang
|
closed
|
Observing OOM when doing subsequent compilations
|
Type/Bug Area/Compiler Priority/Blocker Team/CompilerFE SwanLakeDump
|
**Description:**
Get the nBallerina repo and open it with the VSCode plugin.
Now try editing a source in one of the modules or multiple modules.
This will lead to an OOM with the following heap observation with the `SemanticAnalyzer`
<img width="1388" alt="Screenshot 2021-11-10 at 14 13 56" src="https://user-images.githubusercontent.com/1329674/141079716-507f88b6-6e57-46b6-91fc-b6f81d1f4640.png">
**Affected Versions:**
SwanLake-Beta4-RC1 at least
|
1.0
|
Observing OOM when doing subsequent compilations - **Description:**
Get the nBallerina repo and open it with the VSCode plugin.
Now try editing a source in one of the modules or multiple modules.
This will lead to an OOM with the following heap observation with the `SemanticAnalyzer`
<img width="1388" alt="Screenshot 2021-11-10 at 14 13 56" src="https://user-images.githubusercontent.com/1329674/141079716-507f88b6-6e57-46b6-91fc-b6f81d1f4640.png">
**Affected Versions:**
SwanLake-Beta4-RC1 at least
|
non_process
|
observing oom when doing subsequent compilations description get the nballerina repo and open it with the vscode plugin now try editing a source in one of the modules or multiple modules this will lead to an oom with the following heap observation with the semanticanalyzer img width alt screenshot at src affected versions swanlake at least
| 0
|
21,736
| 30,248,468,392
|
IssuesEvent
|
2023-07-06 18:25:33
|
ORNL-AMO/AMO-Tools-Desktop
|
https://api.github.com/repos/ORNL-AMO/AMO-Tools-Desktop
|
closed
|
Cooling Tower Basin Calc Weather Header Styling
|
Process Cooling
|
Fix 'Weather Data' header tab to take full width of container (but 50% of header). Also create better divide between result tab and left side of calculator whitespace.

|
1.0
|
Cooling Tower Basin Calc Weather Header Styling - Fix 'Weather Data' header tab to take full width of container (but 50% of header). Also create better divide between result tab and left side of calculator whitespace.

|
process
|
cooling tower basin calc weather header styling fix weather data header tab to take full width of container but of header also create better divide between result tab and left side of calculator whitespace
| 1
|
732,425
| 25,259,041,076
|
IssuesEvent
|
2022-11-15 20:52:43
|
CCSI-Toolset/FOQUS
|
https://api.github.com/repos/CCSI-Toolset/FOQUS
|
opened
|
Nightly builds failing for Windows with pywin32 error
|
Priority:High
|
Our best guess currently is a breaking change being introduced with the upgrade from `pywin32` 304 and 305: https://github.com/mhammond/pywin32/compare/b304...b305
|
1.0
|
Nightly builds failing for Windows with pywin32 error - Our best guess currently is a breaking change being introduced with the upgrade from `pywin32` 304 and 305: https://github.com/mhammond/pywin32/compare/b304...b305
|
non_process
|
nightly builds failing for windows with error our best guess currently is a breaking change being introduced with the upgrade from and
| 0
|
18,451
| 24,547,175,710
|
IssuesEvent
|
2022-10-12 09:41:05
|
Altinn/altinn-studio
|
https://api.github.com/repos/Altinn/altinn-studio
|
closed
|
API to make applications BPMN process definition available from application controller
|
area/process solution/app-backend area/api-expose kind/user-story
|
## Description
make bmpn process definitions available from application controller, both in runtime and in storage.
## Considerations
- bpmn should be stored in platform on deploy.
- bpmn should be available both with Accept application/xml and application/json
## Acceptance criteria
- Process BPMN definition should be available on a public endpoint
## Development tasks
- [ ] ServiceImplementation to get Process definition document
- [ ] Change Designer deploy controller to post process definition as well as application metadata
- [ ] Runtime endpoint to return process definition: /org/app/process
- [ ] Storage enpoint to return process definition: /applications/org/app/process
- [ ] specify and implement how to store process definition in platform
- [ ] Automated test (if needed)
|
1.0
|
API to make applications BPMN process definition available from application controller - ## Description
make bmpn process definitions available from application controller, both in runtime and in storage.
## Considerations
- bpmn should be stored in platform on deploy.
- bpmn should be available both with Accept application/xml and application/json
## Acceptance criteria
- Process BPMN definition should be available on a public endpoint
## Development tasks
- [ ] ServiceImplementation to get Process definition document
- [ ] Change Designer deploy controller to post process definition as well as application metadata
- [ ] Runtime endpoint to return process definition: /org/app/process
- [ ] Storage enpoint to return process definition: /applications/org/app/process
- [ ] specify and implement how to store process definition in platform
- [ ] Automated test (if needed)
|
process
|
api to make applications bpmn process definition available from application controller description make bmpn process definitions available from application controller both in runtime and in storage considerations bpmn should be stored in platform on deploy bpmn should be available both with accept application xml and application json acceptance criteria process bpmn definition should be available on a public endpoint development tasks serviceimplementation to get process definition document change designer deploy controller to post process definition as well as application metadata runtime endpoint to return process definition org app process storage enpoint to return process definition applications org app process specify and implement how to store process definition in platform automated test if needed
| 1
|
13,806
| 16,566,432,291
|
IssuesEvent
|
2021-05-29 14:06:25
|
laugharn/link
|
https://api.github.com/repos/laugharn/link
|
closed
|
More Small Wins
|
kind/improvement process/selected size/xs team/front
|
A few small wins to knock out early in the morning:
- [x] Switch to humanTimeDiff until the full time filtering story is figured out
- [x] Add cursor querying to each post
- [x] Convert individual links to gSSP
- [x] Ensure correct time zone in our dev environment
|
1.0
|
More Small Wins - A few small wins to knock out early in the morning:
- [x] Switch to humanTimeDiff until the full time filtering story is figured out
- [x] Add cursor querying to each post
- [x] Convert individual links to gSSP
- [x] Ensure correct time zone in our dev environment
|
process
|
more small wins a few small wins to knock out early in the morning switch to humantimediff until the full time filtering story is figured out add cursor querying to each post convert individual links to gssp ensure correct time zone in our dev environment
| 1
|