Unnamed: 0
int64 0
832k
| id
float64 2.49B
32.1B
| type
stringclasses 1
value | created_at
stringlengths 19
19
| repo
stringlengths 7
112
| repo_url
stringlengths 36
141
| action
stringclasses 3
values | title
stringlengths 1
744
| labels
stringlengths 4
574
| body
stringlengths 9
211k
| index
stringclasses 10
values | text_combine
stringlengths 96
211k
| label
stringclasses 2
values | text
stringlengths 96
188k
| binary_label
int64 0
1
|
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
204
| 4,270,642,727
|
IssuesEvent
|
2016-07-13 08:04:58
|
wordpress-mobile/WordPress-Android
|
https://api.github.com/repos/wordpress-mobile/WordPress-Android
|
opened
|
People Management: Text fields at Invite screen are black on api 16 (4.1.1)
|
People Management [Type] Bug
|
### Expected behavior
Text fields look normal.
### Actual behavior

##### Tested on Galaxy Nexus, Android 4.1.1 using alpha-17
|
1.0
|
People Management: Text fields at Invite screen are black on api 16 (4.1.1) - ### Expected behavior
Text fields look normal.
### Actual behavior

##### Tested on Galaxy Nexus, Android 4.1.1 using alpha-17
|
non_process
|
people management text fields at invite screen are black on api expected behavior text fields look normal actual behavior tested on galaxy nexus android using alpha
| 0
|
312,417
| 26,862,970,188
|
IssuesEvent
|
2023-02-03 20:14:52
|
PalisadoesFoundation/talawa-api
|
https://api.github.com/repos/PalisadoesFoundation/talawa-api
|
closed
|
Test: src/lib/resolvers/Mutation/addUserToGroupChat.ts
|
bug test
|
**Describe the bug**
The test coverage for this file is currently 96.5%.
**Expected behavior**
The test coverage should be 100%.
**Actual behavior**
coverage is not 100%.
**Screenshots**

|
1.0
|
Test: src/lib/resolvers/Mutation/addUserToGroupChat.ts -
**Describe the bug**
The test coverage for this file is currently 96.5%.
**Expected behavior**
The test coverage should be 100%.
**Actual behavior**
coverage is not 100%.
**Screenshots**

|
non_process
|
test src lib resolvers mutation addusertogroupchat ts describe the bug the test coverage for this file is currently expected behavior the test coverage should be actual behavior coverage is not screenshots
| 0
|
27,505
| 11,494,522,004
|
IssuesEvent
|
2020-02-12 01:49:51
|
fufunoyu/shop
|
https://api.github.com/repos/fufunoyu/shop
|
opened
|
CVE-2019-17571 (High) detected in log4j-1.2.17.jar
|
security vulnerability
|
## CVE-2019-17571 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>log4j-1.2.17.jar</b></p></summary>
<p>Apache Log4j 1.2</p>
<p>Path to dependency file: /tmp/ws-scm/shop/pom.xml</p>
<p>Path to vulnerable library: downloadResource_70e5a3aa-4544-4d64-9df0-b7788a41296d/20200212014859/log4j-1.2.17.jar</p>
<p>
Dependency Hierarchy:
- :x: **log4j-1.2.17.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/fufunoyu/shop/commit/7256871f3c948ef4e01485247aff1dd20190da6c">7256871f3c948ef4e01485247aff1dd20190da6c</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Included in Log4j 1.2 is a SocketServer class that is vulnerable to deserialization of untrusted data which can be exploited to remotely execute arbitrary code when combined with a deserialization gadget when listening to untrusted network traffic for log data. This affects Log4j versions up to 1.2 up to 1.2.17.
<p>Publish Date: 2019-12-20
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-17571>CVE-2019-17571</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
|
True
|
CVE-2019-17571 (High) detected in log4j-1.2.17.jar - ## CVE-2019-17571 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>log4j-1.2.17.jar</b></p></summary>
<p>Apache Log4j 1.2</p>
<p>Path to dependency file: /tmp/ws-scm/shop/pom.xml</p>
<p>Path to vulnerable library: downloadResource_70e5a3aa-4544-4d64-9df0-b7788a41296d/20200212014859/log4j-1.2.17.jar</p>
<p>
Dependency Hierarchy:
- :x: **log4j-1.2.17.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/fufunoyu/shop/commit/7256871f3c948ef4e01485247aff1dd20190da6c">7256871f3c948ef4e01485247aff1dd20190da6c</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Included in Log4j 1.2 is a SocketServer class that is vulnerable to deserialization of untrusted data which can be exploited to remotely execute arbitrary code when combined with a deserialization gadget when listening to untrusted network traffic for log data. This affects Log4j versions up to 1.2 up to 1.2.17.
<p>Publish Date: 2019-12-20
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-17571>CVE-2019-17571</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
|
non_process
|
cve high detected in jar cve high severity vulnerability vulnerable library jar apache path to dependency file tmp ws scm shop pom xml path to vulnerable library downloadresource jar dependency hierarchy x jar vulnerable library found in head commit a href vulnerability details included in is a socketserver class that is vulnerable to deserialization of untrusted data which can be exploited to remotely execute arbitrary code when combined with a deserialization gadget when listening to untrusted network traffic for log data this affects versions up to up to publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href
| 0
|
271,858
| 23,637,144,258
|
IssuesEvent
|
2022-08-25 14:07:35
|
pikers/piker
|
https://api.github.com/repos/pikers/piker
|
opened
|
Backend test suite
|
testing broker-backend
|
In #331 comes a new backend test command `piker brokercheck` which i think we should run in CI against all backends that are officially supported.
Should be super simple to whip together with `pytest`.
Bonus points for extending things to be able to eventually spawn the tsdb and stream quotes from it use a data-layer set of tests 😉
|
1.0
|
Backend test suite - In #331 comes a new backend test command `piker brokercheck` which i think we should run in CI against all backends that are officially supported.
Should be super simple to whip together with `pytest`.
Bonus points for extending things to be able to eventually spawn the tsdb and stream quotes from it use a data-layer set of tests 😉
|
non_process
|
backend test suite in comes a new backend test command piker brokercheck which i think we should run in ci against all backends that are officially supported should be super simple to whip together with pytest bonus points for extending things to be able to eventually spawn the tsdb and stream quotes from it use a data layer set of tests 😉
| 0
|
414,129
| 12,099,652,991
|
IssuesEvent
|
2020-04-20 12:33:55
|
deora-earth/tealgarden
|
https://api.github.com/repos/deora-earth/tealgarden
|
opened
|
Develope Head Section + get the data from the
|
03 High Priority 350 Deora RepPoints
|
<!--
# Simple Summary
This policy allows to write out rewards to complete required tasks. Completed tasks are payed by the deora council to the claiming member.
# How to create a new bounty?
1. To start you'll have to fill out the bounty form below.
- If the bounty spans across multiple repositories, consider splitting it in a smaller per-repo bounties if possible.
- If the bounty is larger than M, then the best known expert in the bounty matter should be consulted and included in an
"Expert" field in the bounty description.
2. Communicate the bounty to the organisation by submitting the following form:
https://forms.gle/STSNjTBGygNtTUwLA
- The bounty will get published on the deora communication channel.
# Bounty sizes
XS / 50 to 200 / DAI
S / 200 to 350 / DAI
M / 350 to 550 / DAI
L / 550 to 900 / DAI
XL / 900 to 1400 / DAI
You can specify the range individually under #Roles
# Pair programming
If 2 people claim the bounty together, the payout increases by 1.5x.
# Bounty Challenge
Once a bounty is assigned, the worker is asked to start working immediately on the issue.
If the worker feels blocked in execution, he/she has to communicate the tensions to the gardener.
Only if tensions are not reported and the bounty get's no further attention, anyone can challenge the bounty or takeover.
Bounties should be delivered within time, even if work is left to be performed. Leftover work can be tackled by submitting a new bounty with support by the organisation.
Bounty forking: complexity of bounties that has been undersized can be forked out by a new bounty submission.
**START DESCRIBING YOUR BOUNTY HERE:**
-->
# Bounty
Our design has to be added to the processPage tamplate. Use the new `gatsby-node.js` to gather the content

## Scope
- create the header like in the picture
- get the data from `gatsby-node.js`
- do not implement the voting button and categories yet
## Deliverables
PR
## Gain for the project
## Roles
bounty gardener: @cyan-one / -
bounty worker: @cyan-one / 90%
bounty reviewer: name / share
|
1.0
|
Develope Head Section + get the data from the - <!--
# Simple Summary
This policy allows to write out rewards to complete required tasks. Completed tasks are payed by the deora council to the claiming member.
# How to create a new bounty?
1. To start you'll have to fill out the bounty form below.
- If the bounty spans across multiple repositories, consider splitting it in a smaller per-repo bounties if possible.
- If the bounty is larger than M, then the best known expert in the bounty matter should be consulted and included in an
"Expert" field in the bounty description.
2. Communicate the bounty to the organisation by submitting the following form:
https://forms.gle/STSNjTBGygNtTUwLA
- The bounty will get published on the deora communication channel.
# Bounty sizes
XS / 50 to 200 / DAI
S / 200 to 350 / DAI
M / 350 to 550 / DAI
L / 550 to 900 / DAI
XL / 900 to 1400 / DAI
You can specify the range individually under #Roles
# Pair programming
If 2 people claim the bounty together, the payout increases by 1.5x.
# Bounty Challenge
Once a bounty is assigned, the worker is asked to start working immediately on the issue.
If the worker feels blocked in execution, he/she has to communicate the tensions to the gardener.
Only if tensions are not reported and the bounty get's no further attention, anyone can challenge the bounty or takeover.
Bounties should be delivered within time, even if work is left to be performed. Leftover work can be tackled by submitting a new bounty with support by the organisation.
Bounty forking: complexity of bounties that has been undersized can be forked out by a new bounty submission.
**START DESCRIBING YOUR BOUNTY HERE:**
-->
# Bounty
Our design has to be added to the processPage tamplate. Use the new `gatsby-node.js` to gather the content

## Scope
- create the header like in the picture
- get the data from `gatsby-node.js`
- do not implement the voting button and categories yet
## Deliverables
PR
## Gain for the project
## Roles
bounty gardener: @cyan-one / -
bounty worker: @cyan-one / 90%
bounty reviewer: name / share
|
non_process
|
develope head section get the data from the simple summary this policy allows to write out rewards to complete required tasks completed tasks are payed by the deora council to the claiming member how to create a new bounty to start you ll have to fill out the bounty form below if the bounty spans across multiple repositories consider splitting it in a smaller per repo bounties if possible if the bounty is larger than m then the best known expert in the bounty matter should be consulted and included in an expert field in the bounty description communicate the bounty to the organisation by submitting the following form the bounty will get published on the deora communication channel bounty sizes xs to dai s to dai m to dai l to dai xl to dai you can specify the range individually under roles pair programming if people claim the bounty together the payout increases by bounty challenge once a bounty is assigned the worker is asked to start working immediately on the issue if the worker feels blocked in execution he she has to communicate the tensions to the gardener only if tensions are not reported and the bounty get s no further attention anyone can challenge the bounty or takeover bounties should be delivered within time even if work is left to be performed leftover work can be tackled by submitting a new bounty with support by the organisation bounty forking complexity of bounties that has been undersized can be forked out by a new bounty submission start describing your bounty here bounty our design has to be added to the processpage tamplate use the new gatsby node js to gather the content scope create the header like in the picture get the data from gatsby node js do not implement the voting button and categories yet deliverables pr gain for the project roles bounty gardener cyan one bounty worker cyan one bounty reviewer name share
| 0
|
599,844
| 18,284,272,646
|
IssuesEvent
|
2021-10-05 08:32:57
|
woocommerce/facebook-for-woocommerce
|
https://api.github.com/repos/woocommerce/facebook-for-woocommerce
|
opened
|
Implement Exponential Backoff preventing issues caused by unavailable API services.
|
priority: critical type: enhancement
|
A recent Facebook services outage has uncovered an issue with the plugin. When the DNS records have not been available the plugin was blocking the site for some of the merchants. We need to implement a logic that will detect network issues and prevent the plugin from blocking the whole site.
The proposed solution is to use Truncated Exponential Backoff to stop plugins from doing requests when the responses are not coming through.
|
1.0
|
Implement Exponential Backoff preventing issues caused by unavailable API services. - A recent Facebook services outage has uncovered an issue with the plugin. When the DNS records have not been available the plugin was blocking the site for some of the merchants. We need to implement a logic that will detect network issues and prevent the plugin from blocking the whole site.
The proposed solution is to use Truncated Exponential Backoff to stop plugins from doing requests when the responses are not coming through.
|
non_process
|
implement exponential backoff preventing issues caused by unavailable api services a recent facebook services outage has uncovered an issue with the plugin when the dns records have not been available the plugin was blocking the site for some of the merchants we need to implement a logic that will detect network issues and prevent the plugin from blocking the whole site the proposed solution is to use truncated exponential backoff to stop plugins from doing requests when the responses are not coming through
| 0
|
149,329
| 19,576,927,792
|
IssuesEvent
|
2022-01-04 16:26:33
|
Hugh-Cushing-Campaign/shopify-app-node
|
https://api.github.com/repos/Hugh-Cushing-Campaign/shopify-app-node
|
opened
|
CVE-2021-23382 (Medium) detected in multiple libraries
|
security vulnerability
|
## CVE-2021-23382 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>postcss-7.0.21.tgz</b>, <b>postcss-7.0.32.tgz</b>, <b>postcss-8.1.7.tgz</b>, <b>postcss-7.0.35.tgz</b></p></summary>
<p>
<details><summary><b>postcss-7.0.21.tgz</b></p></summary>
<p>Tool for transforming styles with JS plugins</p>
<p>Library home page: <a href="https://registry.npmjs.org/postcss/-/postcss-7.0.21.tgz">https://registry.npmjs.org/postcss/-/postcss-7.0.21.tgz</a></p>
<p>Path to dependency file: /package.json</p>
<p>Path to vulnerable library: /node_modules/resolve-url-loader/node_modules/postcss/package.json</p>
<p>
Dependency Hierarchy:
- next-10.0.4.tgz (Root Library)
- resolve-url-loader-3.1.2.tgz
- :x: **postcss-7.0.21.tgz** (Vulnerable Library)
</details>
<details><summary><b>postcss-7.0.32.tgz</b></p></summary>
<p>Tool for transforming styles with JS plugins</p>
<p>Library home page: <a href="https://registry.npmjs.org/postcss/-/postcss-7.0.32.tgz">https://registry.npmjs.org/postcss/-/postcss-7.0.32.tgz</a></p>
<p>Path to dependency file: /package.json</p>
<p>Path to vulnerable library: /node_modules/@ampproject/toolbox-optimizer/node_modules/postcss/package.json</p>
<p>
Dependency Hierarchy:
- next-10.0.4.tgz (Root Library)
- toolbox-optimizer-2.7.1-alpha.0.tgz
- :x: **postcss-7.0.32.tgz** (Vulnerable Library)
</details>
<details><summary><b>postcss-8.1.7.tgz</b></p></summary>
<p>Tool for transforming styles with JS plugins</p>
<p>Library home page: <a href="https://registry.npmjs.org/postcss/-/postcss-8.1.7.tgz">https://registry.npmjs.org/postcss/-/postcss-8.1.7.tgz</a></p>
<p>Path to dependency file: /package.json</p>
<p>Path to vulnerable library: /node_modules/next/node_modules/postcss/package.json</p>
<p>
Dependency Hierarchy:
- next-10.0.4.tgz (Root Library)
- :x: **postcss-8.1.7.tgz** (Vulnerable Library)
</details>
<details><summary><b>postcss-7.0.35.tgz</b></p></summary>
<p>Tool for transforming styles with JS plugins</p>
<p>Library home page: <a href="https://registry.npmjs.org/postcss/-/postcss-7.0.35.tgz">https://registry.npmjs.org/postcss/-/postcss-7.0.35.tgz</a></p>
<p>Path to dependency file: /package.json</p>
<p>Path to vulnerable library: /node_modules/next/node_modules/postcss-modules-extract-imports/node_modules/postcss/package.json,/node_modules/next/node_modules/postcss-modules-values/node_modules/postcss/package.json,/node_modules/postcss-safe-parser/node_modules/postcss/package.json,/node_modules/next/node_modules/postcss-modules-local-by-default/node_modules/postcss/package.json,/node_modules/next/node_modules/icss-utils/node_modules/postcss/package.json,/node_modules/cssnano-preset-simple/node_modules/postcss/package.json,/node_modules/next/node_modules/css-loader/node_modules/postcss/package.json,/node_modules/cssnano-simple/node_modules/postcss/package.json,/node_modules/next/node_modules/postcss-modules-scope/node_modules/postcss/package.json</p>
<p>
Dependency Hierarchy:
- next-10.0.4.tgz (Root Library)
- css-loader-4.3.0.tgz
- postcss-modules-extract-imports-2.0.0.tgz
- :x: **postcss-7.0.35.tgz** (Vulnerable Library)
</details>
<p>Found in HEAD commit: <a href="https://github.com/Hugh-Cushing-Campaign/shopify-app-node/commit/82c1864287418c6d0b45a27163e9d4a991e9e288">82c1864287418c6d0b45a27163e9d4a991e9e288</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
The package postcss before 8.2.13 are vulnerable to Regular Expression Denial of Service (ReDoS) via getAnnotationURL() and loadAnnotation() in lib/previous-map.js. The vulnerable regexes are caused mainly by the sub-pattern \/\*\s* sourceMappingURL=(.*).
<p>Publish Date: 2021-04-26
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-23382>CVE-2021-23382</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.3</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: Low
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-23382">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-23382</a></p>
<p>Release Date: 2021-04-26</p>
<p>Fix Resolution: postcss - 8.2.13</p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"javascript/Node.js","packageName":"postcss","packageVersion":"7.0.21","packageFilePaths":["/package.json"],"isTransitiveDependency":true,"dependencyTree":"next:10.0.4;resolve-url-loader:3.1.2;postcss:7.0.21","isMinimumFixVersionAvailable":true,"minimumFixVersion":"postcss - 8.2.13","isBinary":false},{"packageType":"javascript/Node.js","packageName":"postcss","packageVersion":"7.0.32","packageFilePaths":["/package.json"],"isTransitiveDependency":true,"dependencyTree":"next:10.0.4;@ampproject/toolbox-optimizer:2.7.1-alpha.0;postcss:7.0.32","isMinimumFixVersionAvailable":true,"minimumFixVersion":"postcss - 8.2.13","isBinary":false},{"packageType":"javascript/Node.js","packageName":"postcss","packageVersion":"8.1.7","packageFilePaths":["/package.json"],"isTransitiveDependency":true,"dependencyTree":"next:10.0.4;postcss:8.1.7","isMinimumFixVersionAvailable":true,"minimumFixVersion":"postcss - 8.2.13","isBinary":false},{"packageType":"javascript/Node.js","packageName":"postcss","packageVersion":"7.0.35","packageFilePaths":["/package.json"],"isTransitiveDependency":true,"dependencyTree":"next:10.0.4;css-loader:4.3.0;postcss-modules-extract-imports:2.0.0;postcss:7.0.35","isMinimumFixVersionAvailable":true,"minimumFixVersion":"postcss - 8.2.13","isBinary":false}],"baseBranches":["master"],"vulnerabilityIdentifier":"CVE-2021-23382","vulnerabilityDetails":"The package postcss before 8.2.13 are vulnerable to Regular Expression Denial of Service (ReDoS) via getAnnotationURL() and loadAnnotation() in lib/previous-map.js. The vulnerable regexes are caused mainly by the sub-pattern \\/\\*\\s* sourceMappingURL\u003d(.*).","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-23382","cvss3Severity":"medium","cvss3Score":"5.3","cvss3Metrics":{"A":"Low","AC":"Low","PR":"None","S":"Unchanged","C":"None","UI":"None","AV":"Network","I":"None"},"extraData":{}}</REMEDIATE> -->
|
True
|
CVE-2021-23382 (Medium) detected in multiple libraries - ## CVE-2021-23382 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>postcss-7.0.21.tgz</b>, <b>postcss-7.0.32.tgz</b>, <b>postcss-8.1.7.tgz</b>, <b>postcss-7.0.35.tgz</b></p></summary>
<p>
<details><summary><b>postcss-7.0.21.tgz</b></p></summary>
<p>Tool for transforming styles with JS plugins</p>
<p>Library home page: <a href="https://registry.npmjs.org/postcss/-/postcss-7.0.21.tgz">https://registry.npmjs.org/postcss/-/postcss-7.0.21.tgz</a></p>
<p>Path to dependency file: /package.json</p>
<p>Path to vulnerable library: /node_modules/resolve-url-loader/node_modules/postcss/package.json</p>
<p>
Dependency Hierarchy:
- next-10.0.4.tgz (Root Library)
- resolve-url-loader-3.1.2.tgz
- :x: **postcss-7.0.21.tgz** (Vulnerable Library)
</details>
<details><summary><b>postcss-7.0.32.tgz</b></p></summary>
<p>Tool for transforming styles with JS plugins</p>
<p>Library home page: <a href="https://registry.npmjs.org/postcss/-/postcss-7.0.32.tgz">https://registry.npmjs.org/postcss/-/postcss-7.0.32.tgz</a></p>
<p>Path to dependency file: /package.json</p>
<p>Path to vulnerable library: /node_modules/@ampproject/toolbox-optimizer/node_modules/postcss/package.json</p>
<p>
Dependency Hierarchy:
- next-10.0.4.tgz (Root Library)
- toolbox-optimizer-2.7.1-alpha.0.tgz
- :x: **postcss-7.0.32.tgz** (Vulnerable Library)
</details>
<details><summary><b>postcss-8.1.7.tgz</b></p></summary>
<p>Tool for transforming styles with JS plugins</p>
<p>Library home page: <a href="https://registry.npmjs.org/postcss/-/postcss-8.1.7.tgz">https://registry.npmjs.org/postcss/-/postcss-8.1.7.tgz</a></p>
<p>Path to dependency file: /package.json</p>
<p>Path to vulnerable library: /node_modules/next/node_modules/postcss/package.json</p>
<p>
Dependency Hierarchy:
- next-10.0.4.tgz (Root Library)
- :x: **postcss-8.1.7.tgz** (Vulnerable Library)
</details>
<details><summary><b>postcss-7.0.35.tgz</b></p></summary>
<p>Tool for transforming styles with JS plugins</p>
<p>Library home page: <a href="https://registry.npmjs.org/postcss/-/postcss-7.0.35.tgz">https://registry.npmjs.org/postcss/-/postcss-7.0.35.tgz</a></p>
<p>Path to dependency file: /package.json</p>
<p>Path to vulnerable library: /node_modules/next/node_modules/postcss-modules-extract-imports/node_modules/postcss/package.json,/node_modules/next/node_modules/postcss-modules-values/node_modules/postcss/package.json,/node_modules/postcss-safe-parser/node_modules/postcss/package.json,/node_modules/next/node_modules/postcss-modules-local-by-default/node_modules/postcss/package.json,/node_modules/next/node_modules/icss-utils/node_modules/postcss/package.json,/node_modules/cssnano-preset-simple/node_modules/postcss/package.json,/node_modules/next/node_modules/css-loader/node_modules/postcss/package.json,/node_modules/cssnano-simple/node_modules/postcss/package.json,/node_modules/next/node_modules/postcss-modules-scope/node_modules/postcss/package.json</p>
<p>
Dependency Hierarchy:
- next-10.0.4.tgz (Root Library)
- css-loader-4.3.0.tgz
- postcss-modules-extract-imports-2.0.0.tgz
- :x: **postcss-7.0.35.tgz** (Vulnerable Library)
</details>
<p>Found in HEAD commit: <a href="https://github.com/Hugh-Cushing-Campaign/shopify-app-node/commit/82c1864287418c6d0b45a27163e9d4a991e9e288">82c1864287418c6d0b45a27163e9d4a991e9e288</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
The package postcss before 8.2.13 are vulnerable to Regular Expression Denial of Service (ReDoS) via getAnnotationURL() and loadAnnotation() in lib/previous-map.js. The vulnerable regexes are caused mainly by the sub-pattern \/\*\s* sourceMappingURL=(.*).
<p>Publish Date: 2021-04-26
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-23382>CVE-2021-23382</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.3</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: Low
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-23382">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-23382</a></p>
<p>Release Date: 2021-04-26</p>
<p>Fix Resolution: postcss - 8.2.13</p>
</p>
</details>
<p></p>
<!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"javascript/Node.js","packageName":"postcss","packageVersion":"7.0.21","packageFilePaths":["/package.json"],"isTransitiveDependency":true,"dependencyTree":"next:10.0.4;resolve-url-loader:3.1.2;postcss:7.0.21","isMinimumFixVersionAvailable":true,"minimumFixVersion":"postcss - 8.2.13","isBinary":false},{"packageType":"javascript/Node.js","packageName":"postcss","packageVersion":"7.0.32","packageFilePaths":["/package.json"],"isTransitiveDependency":true,"dependencyTree":"next:10.0.4;@ampproject/toolbox-optimizer:2.7.1-alpha.0;postcss:7.0.32","isMinimumFixVersionAvailable":true,"minimumFixVersion":"postcss - 8.2.13","isBinary":false},{"packageType":"javascript/Node.js","packageName":"postcss","packageVersion":"8.1.7","packageFilePaths":["/package.json"],"isTransitiveDependency":true,"dependencyTree":"next:10.0.4;postcss:8.1.7","isMinimumFixVersionAvailable":true,"minimumFixVersion":"postcss - 8.2.13","isBinary":false},{"packageType":"javascript/Node.js","packageName":"postcss","packageVersion":"7.0.35","packageFilePaths":["/package.json"],"isTransitiveDependency":true,"dependencyTree":"next:10.0.4;css-loader:4.3.0;postcss-modules-extract-imports:2.0.0;postcss:7.0.35","isMinimumFixVersionAvailable":true,"minimumFixVersion":"postcss - 8.2.13","isBinary":false}],"baseBranches":["master"],"vulnerabilityIdentifier":"CVE-2021-23382","vulnerabilityDetails":"The package postcss before 8.2.13 are vulnerable to Regular Expression Denial of Service (ReDoS) via getAnnotationURL() and loadAnnotation() in lib/previous-map.js. The vulnerable regexes are caused mainly by the sub-pattern \\/\\*\\s* sourceMappingURL\u003d(.*).","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-23382","cvss3Severity":"medium","cvss3Score":"5.3","cvss3Metrics":{"A":"Low","AC":"Low","PR":"None","S":"Unchanged","C":"None","UI":"None","AV":"Network","I":"None"},"extraData":{}}</REMEDIATE> -->
|
non_process
|
cve medium detected in multiple libraries cve medium severity vulnerability vulnerable libraries postcss tgz postcss tgz postcss tgz postcss tgz postcss tgz tool for transforming styles with js plugins library home page a href path to dependency file package json path to vulnerable library node modules resolve url loader node modules postcss package json dependency hierarchy next tgz root library resolve url loader tgz x postcss tgz vulnerable library postcss tgz tool for transforming styles with js plugins library home page a href path to dependency file package json path to vulnerable library node modules ampproject toolbox optimizer node modules postcss package json dependency hierarchy next tgz root library toolbox optimizer alpha tgz x postcss tgz vulnerable library postcss tgz tool for transforming styles with js plugins library home page a href path to dependency file package json path to vulnerable library node modules next node modules postcss package json dependency hierarchy next tgz root library x postcss tgz vulnerable library postcss tgz tool for transforming styles with js plugins library home page a href path to dependency file package json path to vulnerable library node modules next node modules postcss modules extract imports node modules postcss package json node modules next node modules postcss modules values node modules postcss package json node modules postcss safe parser node modules postcss package json node modules next node modules postcss modules local by default node modules postcss package json node modules next node modules icss utils node modules postcss package json node modules cssnano preset simple node modules postcss package json node modules next node modules css loader node modules postcss package json node modules cssnano simple node modules postcss package json node modules next node modules postcss modules scope node modules postcss package json dependency hierarchy next tgz root library css loader tgz postcss modules extract imports tgz x postcss tgz vulnerable library found in head commit a href found in base branch master vulnerability details the package postcss before are vulnerable to regular expression denial of service redos via getannotationurl and loadannotation in lib previous map js the vulnerable regexes are caused mainly by the sub pattern s sourcemappingurl publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact low for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution postcss isopenpronvulnerability true ispackagebased true isdefaultbranch true packages istransitivedependency true dependencytree next resolve url loader postcss isminimumfixversionavailable true minimumfixversion postcss isbinary false packagetype javascript node js packagename postcss packageversion packagefilepaths istransitivedependency true dependencytree next ampproject toolbox optimizer alpha postcss isminimumfixversionavailable true minimumfixversion postcss isbinary false packagetype javascript node js packagename postcss packageversion packagefilepaths istransitivedependency true dependencytree next postcss isminimumfixversionavailable true minimumfixversion postcss isbinary false packagetype javascript node js packagename postcss packageversion packagefilepaths istransitivedependency true dependencytree next css loader postcss modules extract imports postcss isminimumfixversionavailable true minimumfixversion postcss isbinary false basebranches vulnerabilityidentifier cve vulnerabilitydetails the package postcss before are vulnerable to regular expression denial of service redos via getannotationurl and loadannotation in lib previous map js the vulnerable regexes are caused mainly by the sub pattern s sourcemappingurl vulnerabilityurl
| 0
|
10,813
| 13,609,289,573
|
IssuesEvent
|
2020-09-23 04:50:25
|
googleapis/java-servicedirectory
|
https://api.github.com/repos/googleapis/java-servicedirectory
|
closed
|
Dependency Dashboard
|
api: servicedirectory type: process
|
This issue contains a list of Renovate updates and their statuses.
## Open
These updates have all been created already. Click a checkbox below to force a retry/rebase of any.
- [ ] <!-- rebase-branch=renovate/com.google.cloud-google-cloud-servicedirectory-0.x -->chore(deps): update dependency com.google.cloud:google-cloud-servicedirectory to v0.2.2
- [ ] <!-- rebase-branch=renovate/com.google.cloud-libraries-bom-10.x -->chore(deps): update dependency com.google.cloud:libraries-bom to v10
---
- [ ] <!-- manual job -->Check this box to trigger a request for Renovate to run again on this repository
|
1.0
|
Dependency Dashboard - This issue contains a list of Renovate updates and their statuses.
## Open
These updates have all been created already. Click a checkbox below to force a retry/rebase of any.
- [ ] <!-- rebase-branch=renovate/com.google.cloud-google-cloud-servicedirectory-0.x -->chore(deps): update dependency com.google.cloud:google-cloud-servicedirectory to v0.2.2
- [ ] <!-- rebase-branch=renovate/com.google.cloud-libraries-bom-10.x -->chore(deps): update dependency com.google.cloud:libraries-bom to v10
---
- [ ] <!-- manual job -->Check this box to trigger a request for Renovate to run again on this repository
|
process
|
dependency dashboard this issue contains a list of renovate updates and their statuses open these updates have all been created already click a checkbox below to force a retry rebase of any chore deps update dependency com google cloud google cloud servicedirectory to chore deps update dependency com google cloud libraries bom to check this box to trigger a request for renovate to run again on this repository
| 1
|
75,359
| 3,461,824,421
|
IssuesEvent
|
2015-12-20 12:20:19
|
ceylon/ceylon-ide-eclipse
|
https://api.github.com/repos/ceylon/ceylon-ide-eclipse
|
opened
|
IDE does not display any documentation when hovering vert.x APIs
|
bug on last release bug on master high priority
|
Vert.x does the right thing and encodes documentation using `@DocAnnotation$annotation$`, but the IDE just refuses to display it. This is *terrible*, since it means there is simply no way to view the documentation for any vert.x APIs.
@davidfestal what's your take on this?
|
1.0
|
IDE does not display any documentation when hovering vert.x APIs - Vert.x does the right thing and encodes documentation using `@DocAnnotation$annotation$`, but the IDE just refuses to display it. This is *terrible*, since it means there is simply no way to view the documentation for any vert.x APIs.
@davidfestal what's your take on this?
|
non_process
|
ide does not display any documentation when hovering vert x apis vert x does the right thing and encodes documentation using docannotation annotation but the ide just refuses to display it this is terrible since it means there is simply no way to view the documentation for any vert x apis davidfestal what s your take on this
| 0
|
344,593
| 10,346,933,165
|
IssuesEvent
|
2019-09-04 16:13:52
|
OpenNebula/one
|
https://api.github.com/repos/OpenNebula/one
|
closed
|
Integrate GOCA with Travis
|
Category: API Priority: Normal Status: Accepted Type: Feature
|
**Description**
This issue is to integrate GOCA testing with travis
**Use case**
NaN
**Interface Changes**
NaN
**Additional Context**
NaN
<!--////////////////////////////////////////////-->
<!-- THIS SECTION IS FOR THE DEVELOPMENT TEAM -->
<!-- BOTH FOR BUGS AND ENHANCEMENT REQUESTS -->
<!-- PROGRESS WILL BE REFLECTED HERE -->
<!--////////////////////////////////////////////-->
## Progress Status
NaN
|
1.0
|
Integrate GOCA with Travis - **Description**
This issue is to integrate GOCA testing with travis
**Use case**
NaN
**Interface Changes**
NaN
**Additional Context**
NaN
<!--////////////////////////////////////////////-->
<!-- THIS SECTION IS FOR THE DEVELOPMENT TEAM -->
<!-- BOTH FOR BUGS AND ENHANCEMENT REQUESTS -->
<!-- PROGRESS WILL BE REFLECTED HERE -->
<!--////////////////////////////////////////////-->
## Progress Status
NaN
|
non_process
|
integrate goca with travis description this issue is to integrate goca testing with travis use case nan interface changes nan additional context nan progress status nan
| 0
|
19,887
| 26,331,134,425
|
IssuesEvent
|
2023-01-10 10:52:49
|
bitfocus/companion-module-requests
|
https://api.github.com/repos/bitfocus/companion-module-requests
|
opened
|
Shure UR4D module
|
NOT YET PROCESSED
|
I tired the generic UDP module to connect to my old Shure UR4D receivers, but I couldn't get any feedback from that.
It would be great to have a feedback like the ULXD module, with frequencies, battery status, channel name, RF, gain, etc.
I know it's an old discontinued machine, but it still works like a charm!
I attach the documentation I found on shure website with a list of network commands.
[UHFR Network String Commands.pdf](https://github.com/bitfocus/companion-module-requests/files/10381812/UHFR.Network.String.Commands.pdf)
|
1.0
|
Shure UR4D module - I tired the generic UDP module to connect to my old Shure UR4D receivers, but I couldn't get any feedback from that.
It would be great to have a feedback like the ULXD module, with frequencies, battery status, channel name, RF, gain, etc.
I know it's an old discontinued machine, but it still works like a charm!
I attach the documentation I found on shure website with a list of network commands.
[UHFR Network String Commands.pdf](https://github.com/bitfocus/companion-module-requests/files/10381812/UHFR.Network.String.Commands.pdf)
|
process
|
shure module i tired the generic udp module to connect to my old shure receivers but i couldn t get any feedback from that it would be great to have a feedback like the ulxd module with frequencies battery status channel name rf gain etc i know it s an old discontinued machine but it still works like a charm i attach the documentation i found on shure website with a list of network commands
| 1
|
15,582
| 19,704,458,382
|
IssuesEvent
|
2022-01-12 20:13:09
|
googleapis/nodejs-secret-manager
|
https://api.github.com/repos/googleapis/nodejs-secret-manager
|
closed
|
Your .repo-metadata.json file has a problem 🤒
|
type: process api: secretmanager repo-metadata: lint
|
You have a problem with your .repo-metadata.json file:
Result of scan 📈:
```
must have required property 'library_type' in .repo-metadata.json
release_level must be equal to one of the allowed values in .repo-metadata.json
```
☝️ Once you correct these problems, you can close this issue.
Reach out to **go/github-automation** if you have any questions.
|
1.0
|
Your .repo-metadata.json file has a problem 🤒 - You have a problem with your .repo-metadata.json file:
Result of scan 📈:
```
must have required property 'library_type' in .repo-metadata.json
release_level must be equal to one of the allowed values in .repo-metadata.json
```
☝️ Once you correct these problems, you can close this issue.
Reach out to **go/github-automation** if you have any questions.
|
process
|
your repo metadata json file has a problem 🤒 you have a problem with your repo metadata json file result of scan 📈 must have required property library type in repo metadata json release level must be equal to one of the allowed values in repo metadata json ☝️ once you correct these problems you can close this issue reach out to go github automation if you have any questions
| 1
|
1,611
| 4,227,023,095
|
IssuesEvent
|
2016-07-02 21:52:37
|
pelias/schema
|
https://api.github.com/repos/pelias/schema
|
closed
|
Create index script fails on newer elasticsearch v2.2
|
processed
|
I tried running it against latest elasticsearch v2.2 and it fails with error :
[put mapping] pelias { [Error: [mapper_parsing_exception] analyzer on field [neighbourhood_id] must be set when search_analyzer is set]
status: '400',
message: '[mapper_parsing_exception] analyzer on field [neighbourhood_id] must be set when search_analyzer is set',
path: '/pelias',
query: {},
...
|
1.0
|
Create index script fails on newer elasticsearch v2.2 - I tried running it against latest elasticsearch v2.2 and it fails with error :
[put mapping] pelias { [Error: [mapper_parsing_exception] analyzer on field [neighbourhood_id] must be set when search_analyzer is set]
status: '400',
message: '[mapper_parsing_exception] analyzer on field [neighbourhood_id] must be set when search_analyzer is set',
path: '/pelias',
query: {},
...
|
process
|
create index script fails on newer elasticsearch i tried running it against latest elasticsearch and it fails with error pelias analyzer on field must be set when search analyzer is set status message analyzer on field must be set when search analyzer is set path pelias query
| 1
|
22,107
| 30,637,203,906
|
IssuesEvent
|
2023-07-24 18:44:23
|
juspay/hyperswitch
|
https://api.github.com/repos/juspay/hyperswitch
|
closed
|
[BUG] update `api_key_expiry_workflow` to validate the expiry before scheduling the task
|
A-process-tracker C-bug good first issue help wanted
|
### Bug Description
We have a `process_tracker` which schedules the task to future date, and executes it whenever the time is up. We have a feature to schedule a reminder email to notify the merchants when their api_key is about to expire.
Currently, when the api_key is created with some expiry set, process tracker schedules the 1st email, 7 days prior to api_key expiry. But just in case merchant sets the expiry to next day, process tracker will schedule the email to past day which won't be executed by process tracker.
### Expected Behavior
During the api_key expiry if the merchant sets the expiry to next day or any other day before 7 days, we need to perform a validation something like - calculate the schedule_time of 1st email during api_key creation. if it is before the current_time, don't create an entry in process_tracker.
file to include the change -
https://github.com/juspay/hyperswitch/blob/e913bfc4958da613cd352eca9bc38b23ab7ac38e/crates/router/src/core/api_keys.rs#L193C1-L193C1
### Steps To Reproduce
1. Run below commands in 3 diff terminals
- cargo r --features email (application)
- SCHEDULER_FLOW=producer cargo r --bin scheduler (producer binary)
- SCHEDULER_FLOW=consumer cargo r --bin scheduler (consumer binary)
2. Create an api_key with expiry set to next day in postman.
3. Check the process_tracker table in db which will contain the schedule_time field set to past day.
### Context For The Bug
Just to cover the edge case
### Have you spent some time to check if this bug has been raised before?
- [X] I checked and didn't find similar issue
### Have you read the Contributing Guidelines?
- [X] I have read the [Contributing Guidelines](https://github.com/juspay/hyperswitch/blob/main/docs/CONTRIBUTING.md)
### Are you willing to submit a PR?
No, but I'm happy to collaborate on a PR with someone else
|
1.0
|
[BUG] update `api_key_expiry_workflow` to validate the expiry before scheduling the task - ### Bug Description
We have a `process_tracker` which schedules the task to future date, and executes it whenever the time is up. We have a feature to schedule a reminder email to notify the merchants when their api_key is about to expire.
Currently, when the api_key is created with some expiry set, process tracker schedules the 1st email, 7 days prior to api_key expiry. But just in case merchant sets the expiry to next day, process tracker will schedule the email to past day which won't be executed by process tracker.
### Expected Behavior
During the api_key expiry if the merchant sets the expiry to next day or any other day before 7 days, we need to perform a validation something like - calculate the schedule_time of 1st email during api_key creation. if it is before the current_time, don't create an entry in process_tracker.
file to include the change -
https://github.com/juspay/hyperswitch/blob/e913bfc4958da613cd352eca9bc38b23ab7ac38e/crates/router/src/core/api_keys.rs#L193C1-L193C1
### Steps To Reproduce
1. Run below commands in 3 diff terminals
- cargo r --features email (application)
- SCHEDULER_FLOW=producer cargo r --bin scheduler (producer binary)
- SCHEDULER_FLOW=consumer cargo r --bin scheduler (consumer binary)
2. Create an api_key with expiry set to next day in postman.
3. Check the process_tracker table in db which will contain the schedule_time field set to past day.
### Context For The Bug
Just to cover the edge case
### Have you spent some time to check if this bug has been raised before?
- [X] I checked and didn't find similar issue
### Have you read the Contributing Guidelines?
- [X] I have read the [Contributing Guidelines](https://github.com/juspay/hyperswitch/blob/main/docs/CONTRIBUTING.md)
### Are you willing to submit a PR?
No, but I'm happy to collaborate on a PR with someone else
|
process
|
update api key expiry workflow to validate the expiry before scheduling the task bug description we have a process tracker which schedules the task to future date and executes it whenever the time is up we have a feature to schedule a reminder email to notify the merchants when their api key is about to expire currently when the api key is created with some expiry set process tracker schedules the email days prior to api key expiry but just in case merchant sets the expiry to next day process tracker will schedule the email to past day which won t be executed by process tracker expected behavior during the api key expiry if the merchant sets the expiry to next day or any other day before days we need to perform a validation something like calculate the schedule time of email during api key creation if it is before the current time don t create an entry in process tracker file to include the change steps to reproduce run below commands in diff terminals cargo r features email application scheduler flow producer cargo r bin scheduler producer binary scheduler flow consumer cargo r bin scheduler consumer binary create an api key with expiry set to next day in postman check the process tracker table in db which will contain the schedule time field set to past day context for the bug just to cover the edge case have you spent some time to check if this bug has been raised before i checked and didn t find similar issue have you read the contributing guidelines i have read the are you willing to submit a pr no but i m happy to collaborate on a pr with someone else
| 1
|
2,965
| 5,960,476,039
|
IssuesEvent
|
2017-05-29 14:12:23
|
orbardugo/Hahot-Hameshulash
|
https://api.github.com/repos/orbardugo/Hahot-Hameshulash
|
closed
|
Create prototype
|
in process
|
## Feature Template
#### Related Issues/Tasks
- [ ] #3
#### Scenario
- [x] Read from XLSX file into the database
- [ ] Create GUI main form
- [ ] Expected result: a non functionality form
## User Story Template
- We want to generate the prototype of the main form of the project
- So that the project design will be done
## Bug Template
#### Expected behavior
#### Actual behavior
#### Steps to reproduce the behavior
## Project Submission Template
#### Iteration page: [here](https://github.com/orbardugo/Hahot-Hameshulash/milestone/2)
#### Checklist:
- [ ] Feature scenarios/tests passing
- [ ] Iteration page updated, including:
- [ ] Iteration retrospective
- [ ] Code reviews
- [ ] Client review
- [ ] Issues updates
- [ ] Section on application of course materials
- [ ] git tag
- [ ] Next iteration:
- [ ] Open page
- [ ] Select stories and plan issues
- [ ] Test scenarios
- [ ] All engineers filled peer-review
- [ ] Submitted
- [ ] Announcement in chat room
- [ ] Assign this issue to checker
- [ ] Register for a review meeting
|
1.0
|
Create prototype - ## Feature Template
#### Related Issues/Tasks
- [ ] #3
#### Scenario
- [x] Read from XLSX file into the database
- [ ] Create GUI main form
- [ ] Expected result: a non functionality form
## User Story Template
- We want to generate the prototype of the main form of the project
- So that the project design will be done
## Bug Template
#### Expected behavior
#### Actual behavior
#### Steps to reproduce the behavior
## Project Submission Template
#### Iteration page: [here](https://github.com/orbardugo/Hahot-Hameshulash/milestone/2)
#### Checklist:
- [ ] Feature scenarios/tests passing
- [ ] Iteration page updated, including:
- [ ] Iteration retrospective
- [ ] Code reviews
- [ ] Client review
- [ ] Issues updates
- [ ] Section on application of course materials
- [ ] git tag
- [ ] Next iteration:
- [ ] Open page
- [ ] Select stories and plan issues
- [ ] Test scenarios
- [ ] All engineers filled peer-review
- [ ] Submitted
- [ ] Announcement in chat room
- [ ] Assign this issue to checker
- [ ] Register for a review meeting
|
process
|
create prototype feature template related issues tasks scenario read from xlsx file into the database create gui main form expected result a non functionality form user story template we want to generate the prototype of the main form of the project so that the project design will be done bug template expected behavior actual behavior steps to reproduce the behavior project submission template iteration page checklist feature scenarios tests passing iteration page updated including iteration retrospective code reviews client review issues updates section on application of course materials git tag next iteration open page select stories and plan issues test scenarios all engineers filled peer review submitted announcement in chat room assign this issue to checker register for a review meeting
| 1
|
8,396
| 11,565,856,748
|
IssuesEvent
|
2020-02-20 11:18:34
|
aiidateam/aiida-core
|
https://api.github.com/repos/aiidateam/aiida-core
|
opened
|
Ensure that children processes are properly killed when parent killed and clean up nodes
|
priority/important topic/engine topic/processes type/bug
|
There are various scenarios possible where when killing a parent process not all children processes are properly killed as well, or maybe just the nodes are not properly updated. Make sure that the process tasks are properly acknowledged so they don't remain in the queue and make sure to wrap up the nodes.
|
1.0
|
Ensure that children processes are properly killed when parent killed and clean up nodes - There are various scenarios possible where when killing a parent process not all children processes are properly killed as well, or maybe just the nodes are not properly updated. Make sure that the process tasks are properly acknowledged so they don't remain in the queue and make sure to wrap up the nodes.
|
process
|
ensure that children processes are properly killed when parent killed and clean up nodes there are various scenarios possible where when killing a parent process not all children processes are properly killed as well or maybe just the nodes are not properly updated make sure that the process tasks are properly acknowledged so they don t remain in the queue and make sure to wrap up the nodes
| 1
|
10,200
| 13,065,404,550
|
IssuesEvent
|
2020-07-30 19:45:25
|
kubeflow/kfctl
|
https://api.github.com/repos/kubeflow/kfctl
|
closed
|
Release Kubeflow 0.7.1
|
area/kfctl kind/process priority/p2
|
Tracking bug for 0.7.1.
Issues we want fixed in 0.7.1
* kubeflow/kfserving#568 - KFServing mutating webhook is causing problems.
Before we can start cherry-picking changes to Kubeflow manifests onto the v0.7-branch we need to pin the version of kubeflow/manifests used in the v0.7.0 KFDef YAML specs (kubeflow/manifests#643)
To fix kubeflow/kfserving#568 there are a number of things that need to happen see
https://github.com/kubeflow/kfserving/issues/568#issuecomment-561974987
|
1.0
|
Release Kubeflow 0.7.1 - Tracking bug for 0.7.1.
Issues we want fixed in 0.7.1
* kubeflow/kfserving#568 - KFServing mutating webhook is causing problems.
Before we can start cherry-picking changes to Kubeflow manifests onto the v0.7-branch we need to pin the version of kubeflow/manifests used in the v0.7.0 KFDef YAML specs (kubeflow/manifests#643)
To fix kubeflow/kfserving#568 there are a number of things that need to happen see
https://github.com/kubeflow/kfserving/issues/568#issuecomment-561974987
|
process
|
release kubeflow tracking bug for issues we want fixed in kubeflow kfserving kfserving mutating webhook is causing problems before we can start cherry picking changes to kubeflow manifests onto the branch we need to pin the version of kubeflow manifests used in the kfdef yaml specs kubeflow manifests to fix kubeflow kfserving there are a number of things that need to happen see
| 1
|
7,408
| 10,526,224,161
|
IssuesEvent
|
2019-09-30 16:37:14
|
liskcenterutrecht/lisk.bike
|
https://api.github.com/repos/liskcenterutrecht/lisk.bike
|
closed
|
Onboarding Lock
|
Process Flow Virtual Lock Server
|
Onboarding means that the Virtual Lock Server creates a wallet for the lock on the lisk.bike sidechain. Now there's a connection between the lock and the pubkey, in the Lisk blockchain.
See ./client/create-bike.js
- [x] Lock sends the 'login' command to server
- [x] Server creates wallet for lock using ./client/create-account.js
- [x] Server registers lock onto the blockchain using ./client/create-bike.js
|
1.0
|
Onboarding Lock - Onboarding means that the Virtual Lock Server creates a wallet for the lock on the lisk.bike sidechain. Now there's a connection between the lock and the pubkey, in the Lisk blockchain.
See ./client/create-bike.js
- [x] Lock sends the 'login' command to server
- [x] Server creates wallet for lock using ./client/create-account.js
- [x] Server registers lock onto the blockchain using ./client/create-bike.js
|
process
|
onboarding lock onboarding means that the virtual lock server creates a wallet for the lock on the lisk bike sidechain now there s a connection between the lock and the pubkey in the lisk blockchain see client create bike js lock sends the login command to server server creates wallet for lock using client create account js server registers lock onto the blockchain using client create bike js
| 1
|
9,777
| 12,795,664,734
|
IssuesEvent
|
2020-07-02 09:07:36
|
googleapis/google-cloud-dotnet
|
https://api.github.com/repos/googleapis/google-cloud-dotnet
|
closed
|
Fix Travis log for PR change detection
|
type: process
|
We're still getting logs with junk console escape characters in detect-pr-changes.sh, which prevent us from seeing all the information. This is very frustrating.
I thought I'd fixed this, but apparently not. Will keep checking.
|
1.0
|
Fix Travis log for PR change detection - We're still getting logs with junk console escape characters in detect-pr-changes.sh, which prevent us from seeing all the information. This is very frustrating.
I thought I'd fixed this, but apparently not. Will keep checking.
|
process
|
fix travis log for pr change detection we re still getting logs with junk console escape characters in detect pr changes sh which prevent us from seeing all the information this is very frustrating i thought i d fixed this but apparently not will keep checking
| 1
|
8,230
| 11,415,575,179
|
IssuesEvent
|
2020-02-02 12:02:09
|
parcel-bundler/parcel
|
https://api.github.com/repos/parcel-bundler/parcel
|
closed
|
CSS-Module compiled wrong sometimes with cache.
|
:bug: Bug CSS Preprocessing Cache Stale
|


Source code:

Output:

Cache file in `.cache`:

***
I think that the repeat hash is the key that caused this problem. What's we expected is, compile `.Xxx {}` to `.Xxx_hash` , and so in `js` we could use `import styles from './Xxx.css';`, the value of `styles` should looks like `{".Xxx": ".Xxx_hash"}`.
But, we got `.Xxx_hash_hash`, and the value of `styles` is like `{".Xxx_hash": ".Xxx_hash_hash"}` , so all `className` set by `styles.Xxx` is `undefined`.
|
1.0
|
CSS-Module compiled wrong sometimes with cache. - 

Source code:

Output:

Cache file in `.cache`:

***
I think that the repeat hash is the key that caused this problem. What's we expected is, compile `.Xxx {}` to `.Xxx_hash` , and so in `js` we could use `import styles from './Xxx.css';`, the value of `styles` should looks like `{".Xxx": ".Xxx_hash"}`.
But, we got `.Xxx_hash_hash`, and the value of `styles` is like `{".Xxx_hash": ".Xxx_hash_hash"}` , so all `className` set by `styles.Xxx` is `undefined`.
|
process
|
css module compiled wrong sometimes with cache source code output cache file in cache i think that the repeat hash is the key that caused this problem what s we expected is compile xxx to xxx hash and so in js we could use import styles from xxx css the value of styles should looks like xxx xxx hash but we got xxx hash hash and the value of styles is like xxx hash xxx hash hash so all classname set by styles xxx is undefined
| 1
|
8,940
| 4,363,934,108
|
IssuesEvent
|
2016-08-03 03:28:45
|
ZeroCM/zcm
|
https://api.github.com/repos/ZeroCM/zcm
|
closed
|
Make build documentation clearer this should include instructions on how to build and run the examples
|
Build Sys enhancement
|
This should include information about where and how to set ZCM_DEFAULT_URL as well as notifying the user that in order to build examples, they must have configured zcm with --use-all
|
1.0
|
Make build documentation clearer this should include instructions on how to build and run the examples - This should include information about where and how to set ZCM_DEFAULT_URL as well as notifying the user that in order to build examples, they must have configured zcm with --use-all
|
non_process
|
make build documentation clearer this should include instructions on how to build and run the examples this should include information about where and how to set zcm default url as well as notifying the user that in order to build examples they must have configured zcm with use all
| 0
|
9,261
| 12,294,714,990
|
IssuesEvent
|
2020-05-11 01:15:56
|
allinurl/goaccess
|
https://api.github.com/repos/allinurl/goaccess
|
closed
|
Crashed by SIG 11 using on-disk storage
|
bug duplicate log-processing on-disk waiting reply
|
```
cat /apache.log | goaccess -a -o /path/to/log/apache.html --real-time-html --load-from-disk --keep-db-files
==19354== GoAccess 1.1.1 crashed by Sig 11
==19354==
==19354== VALUES AT CRASH POINT
==19354==
==19354== Line number: 86681692
==19354== Offset: 0
==19354== Invalid data: 163921
==19354== Piping: 1
==19354== Response size: 397685186818 bytes
==19354==
==19354== STACK TRACE:
==19354==
==19354== 0 goaccess(sigsegv_handler+0x13c) [0x40a75c]
==19354== 1 /lib64/libc.so.6() [0x3dfae32920]
==19354== 2 /usr/lib64/libtokyocabinet.so.8() [0x39e4840371]
==19354== 3 /usr/lib64/libtokyocabinet.so.8(tcbdbput+0xbf) [0x39e4843fdf]
==19354== 4 /usr/lib64/libtokyocabinet.so.8(tcadbput+0x1f4) [0x39e4868e54]
==19354== 5 goaccess() [0x42813a]
==19354== 6 goaccess() [0x4281f6]
==19354== 7 goaccess() [0x41a46f]
==19354== 8 goaccess() [0x41a902]
==19354== 9 goaccess(parse_log+0xc7) [0x41aa47]
==19354== 10 goaccess(main+0x208) [0x4101f8]
==19354== 11 /lib64/libc.so.6(__libc_start_main+0xfd) [0x3dfae1ecdd]
==19354== 12 goaccess() [0x407ee9]
```
|
1.0
|
Crashed by SIG 11 using on-disk storage - ```
cat /apache.log | goaccess -a -o /path/to/log/apache.html --real-time-html --load-from-disk --keep-db-files
==19354== GoAccess 1.1.1 crashed by Sig 11
==19354==
==19354== VALUES AT CRASH POINT
==19354==
==19354== Line number: 86681692
==19354== Offset: 0
==19354== Invalid data: 163921
==19354== Piping: 1
==19354== Response size: 397685186818 bytes
==19354==
==19354== STACK TRACE:
==19354==
==19354== 0 goaccess(sigsegv_handler+0x13c) [0x40a75c]
==19354== 1 /lib64/libc.so.6() [0x3dfae32920]
==19354== 2 /usr/lib64/libtokyocabinet.so.8() [0x39e4840371]
==19354== 3 /usr/lib64/libtokyocabinet.so.8(tcbdbput+0xbf) [0x39e4843fdf]
==19354== 4 /usr/lib64/libtokyocabinet.so.8(tcadbput+0x1f4) [0x39e4868e54]
==19354== 5 goaccess() [0x42813a]
==19354== 6 goaccess() [0x4281f6]
==19354== 7 goaccess() [0x41a46f]
==19354== 8 goaccess() [0x41a902]
==19354== 9 goaccess(parse_log+0xc7) [0x41aa47]
==19354== 10 goaccess(main+0x208) [0x4101f8]
==19354== 11 /lib64/libc.so.6(__libc_start_main+0xfd) [0x3dfae1ecdd]
==19354== 12 goaccess() [0x407ee9]
```
|
process
|
crashed by sig using on disk storage cat apache log goaccess a o path to log apache html real time html load from disk keep db files goaccess crashed by sig values at crash point line number offset invalid data piping response size bytes stack trace goaccess sigsegv handler libc so usr libtokyocabinet so usr libtokyocabinet so tcbdbput usr libtokyocabinet so tcadbput goaccess goaccess goaccess goaccess goaccess parse log goaccess main libc so libc start main goaccess
| 1
|
17,434
| 23,252,814,458
|
IssuesEvent
|
2022-08-04 06:26:51
|
inmanta/inmanta-core
|
https://api.github.com/repos/inmanta/inmanta-core
|
closed
|
Improve test suite setup time by caching pip index
|
process
|
**Problem description:**
The `local_module_package_index` fixture builds all modules present in the `tests/data/modules_v2` directory and publishes them on a temporary Python package repository for the runtime of the test suite. This procedure takes a long time to complete. Test cases that use this fixture cannot be developed efficiently because of this.
**Proposed solution:**
Cache this pip index in the project directory and load the cache (if exists) instead of building up the index from scratch every time.
|
1.0
|
Improve test suite setup time by caching pip index - **Problem description:**
The `local_module_package_index` fixture builds all modules present in the `tests/data/modules_v2` directory and publishes them on a temporary Python package repository for the runtime of the test suite. This procedure takes a long time to complete. Test cases that use this fixture cannot be developed efficiently because of this.
**Proposed solution:**
Cache this pip index in the project directory and load the cache (if exists) instead of building up the index from scratch every time.
|
process
|
improve test suite setup time by caching pip index problem description the local module package index fixture builds all modules present in the tests data modules directory and publishes them on a temporary python package repository for the runtime of the test suite this procedure takes a long time to complete test cases that use this fixture cannot be developed efficiently because of this proposed solution cache this pip index in the project directory and load the cache if exists instead of building up the index from scratch every time
| 1
|
330,487
| 10,041,295,515
|
IssuesEvent
|
2019-07-18 22:20:27
|
learningequality/studio
|
https://api.github.com/repos/learningequality/studio
|
closed
|
Error when copying via node menu
|
bug high priority
|
Sentry Issue: [STUDIO-2D5](https://sentry.io/organizations/learningequality/issues/1114134686/?referrer=github_integration)
```
AssertionError: The `request` argument must be an instance of `django.http.HttpRequest`, not `rest_framework.request.Request`.
(12 additional frame(s) were not displayed)
...
File "rest_framework/views.py", line 480, in dispatch
response = handler(request, *args, **kwargs)
File "rest_framework/decorators.py", line 53, in handler
return func(*args, **kwargs)
File "rest_framework/views.py", line 466, in dispatch
request = self.initialize_request(request, *args, **kwargs)
File "rest_framework/views.py", line 370, in initialize_request
parser_context=parser_context
File "rest_framework/request.py", line 159, in __init__
.format(request.__class__.__module__, request.__class__.__name__)
AssertionError: The `request` argument must be an instance of `django.http.HttpRequest`, not `rest_framework.request.Request`.
```
|
1.0
|
Error when copying via node menu - Sentry Issue: [STUDIO-2D5](https://sentry.io/organizations/learningequality/issues/1114134686/?referrer=github_integration)
```
AssertionError: The `request` argument must be an instance of `django.http.HttpRequest`, not `rest_framework.request.Request`.
(12 additional frame(s) were not displayed)
...
File "rest_framework/views.py", line 480, in dispatch
response = handler(request, *args, **kwargs)
File "rest_framework/decorators.py", line 53, in handler
return func(*args, **kwargs)
File "rest_framework/views.py", line 466, in dispatch
request = self.initialize_request(request, *args, **kwargs)
File "rest_framework/views.py", line 370, in initialize_request
parser_context=parser_context
File "rest_framework/request.py", line 159, in __init__
.format(request.__class__.__module__, request.__class__.__name__)
AssertionError: The `request` argument must be an instance of `django.http.HttpRequest`, not `rest_framework.request.Request`.
```
|
non_process
|
error when copying via node menu sentry issue assertionerror the request argument must be an instance of django http httprequest not rest framework request request additional frame s were not displayed file rest framework views py line in dispatch response handler request args kwargs file rest framework decorators py line in handler return func args kwargs file rest framework views py line in dispatch request self initialize request request args kwargs file rest framework views py line in initialize request parser context parser context file rest framework request py line in init format request class module request class name assertionerror the request argument must be an instance of django http httprequest not rest framework request request
| 0
|
370,533
| 25,912,682,647
|
IssuesEvent
|
2022-12-15 15:05:29
|
getindiekit/indiekit
|
https://api.github.com/repos/getindiekit/indiekit
|
opened
|
Docs: Railway tutorial
|
documentation
|
This is the service I am currently using, so just need to remind myself how I set it up and write it down!
- [ ] Add tutorial for setting up an Indiekit server on Railway (adding GitHub and MongoDB components to a project)
- [ ] Create a deploy button/project template
- [ ] Investigate [Open Source Kickback](https://railway.app/open-source-kickback) scheme (Indiekit is eligible)
|
1.0
|
Docs: Railway tutorial - This is the service I am currently using, so just need to remind myself how I set it up and write it down!
- [ ] Add tutorial for setting up an Indiekit server on Railway (adding GitHub and MongoDB components to a project)
- [ ] Create a deploy button/project template
- [ ] Investigate [Open Source Kickback](https://railway.app/open-source-kickback) scheme (Indiekit is eligible)
|
non_process
|
docs railway tutorial this is the service i am currently using so just need to remind myself how i set it up and write it down add tutorial for setting up an indiekit server on railway adding github and mongodb components to a project create a deploy button project template investigate scheme indiekit is eligible
| 0
|
14,051
| 16,855,761,971
|
IssuesEvent
|
2021-06-21 06:23:34
|
pingcap/tidb
|
https://api.github.com/repos/pingcap/tidb
|
closed
|
privileges_test.go:testPrivilegeSuite.TearDownSuite failed
|
component/coprocessor component/test
|
privileges_test.go:testPrivilegeSuite.TearDownSuite
```
[2020-07-14T06:41:20.072Z] ----------------------------------------------------------------------
[2020-07-14T06:41:20.072Z] FAIL: privileges_test.go:76: testPrivilegeSuite.TearDownSuite
[2020-07-14T06:41:20.072Z]
[2020-07-14T06:41:20.072Z] privileges_test.go:79:
[2020-07-14T06:41:20.072Z] testleak.AfterTest(c)()
[2020-07-14T06:41:20.072Z] /home/jenkins/agent/workspace/tidb_ghpr_unit_test/go/src/github.com/pingcap/tidb/util/testleak/leaktest.go:166:
[2020-07-14T06:41:20.072Z] c.Errorf("Test %s check-count %d appears to have leaked: %v", c.TestName(), cnt, g)
[2020-07-14T06:41:20.072Z] ... Error: Test check-count 50 appears to have leaked: github.com/pingcap/tidb/domain.(*Domain).globalBindHandleWorkerLoop.func1(0xc000840a20)
[2020-07-14T06:41:20.072Z] /home/jenkins/agent/workspace/tidb_ghpr_unit_test/go/src/github.com/pingcap/tidb/domain/domain.go:931 +0x162
[2020-07-14T06:41:20.072Z] created by github.com/pingcap/tidb/domain.(*Domain).globalBindHandleWorkerLoop
[2020-07-14T06:41:20.072Z] /home/jenkins/agent/workspace/tidb_ghpr_unit_test/go/src/github.com/pingcap/tidb/domain/domain.go:922 +0x5f
[2020-07-14T06:41:20.072Z]
```
Latest failed builds:
https://internal.pingcap.net/idc-jenkins/job/tidb_ghpr_unit_test/42694/display/redirect
|
1.0
|
privileges_test.go:testPrivilegeSuite.TearDownSuite failed - privileges_test.go:testPrivilegeSuite.TearDownSuite
```
[2020-07-14T06:41:20.072Z] ----------------------------------------------------------------------
[2020-07-14T06:41:20.072Z] FAIL: privileges_test.go:76: testPrivilegeSuite.TearDownSuite
[2020-07-14T06:41:20.072Z]
[2020-07-14T06:41:20.072Z] privileges_test.go:79:
[2020-07-14T06:41:20.072Z] testleak.AfterTest(c)()
[2020-07-14T06:41:20.072Z] /home/jenkins/agent/workspace/tidb_ghpr_unit_test/go/src/github.com/pingcap/tidb/util/testleak/leaktest.go:166:
[2020-07-14T06:41:20.072Z] c.Errorf("Test %s check-count %d appears to have leaked: %v", c.TestName(), cnt, g)
[2020-07-14T06:41:20.072Z] ... Error: Test check-count 50 appears to have leaked: github.com/pingcap/tidb/domain.(*Domain).globalBindHandleWorkerLoop.func1(0xc000840a20)
[2020-07-14T06:41:20.072Z] /home/jenkins/agent/workspace/tidb_ghpr_unit_test/go/src/github.com/pingcap/tidb/domain/domain.go:931 +0x162
[2020-07-14T06:41:20.072Z] created by github.com/pingcap/tidb/domain.(*Domain).globalBindHandleWorkerLoop
[2020-07-14T06:41:20.072Z] /home/jenkins/agent/workspace/tidb_ghpr_unit_test/go/src/github.com/pingcap/tidb/domain/domain.go:922 +0x5f
[2020-07-14T06:41:20.072Z]
```
Latest failed builds:
https://internal.pingcap.net/idc-jenkins/job/tidb_ghpr_unit_test/42694/display/redirect
|
process
|
privileges test go testprivilegesuite teardownsuite failed privileges test go testprivilegesuite teardownsuite fail privileges test go testprivilegesuite teardownsuite privileges test go testleak aftertest c home jenkins agent workspace tidb ghpr unit test go src github com pingcap tidb util testleak leaktest go c errorf test s check count d appears to have leaked v c testname cnt g error test check count appears to have leaked github com pingcap tidb domain domain globalbindhandleworkerloop home jenkins agent workspace tidb ghpr unit test go src github com pingcap tidb domain domain go created by github com pingcap tidb domain domain globalbindhandleworkerloop home jenkins agent workspace tidb ghpr unit test go src github com pingcap tidb domain domain go latest failed builds
| 1
|
20,861
| 27,645,425,605
|
IssuesEvent
|
2023-03-10 22:23:08
|
cse442-at-ub/project_s23-cinco
|
https://api.github.com/repos/cse442-at-ub/project_s23-cinco
|
opened
|
as a browser, I want to click on the login and signup buttons to direct me to the respective pages.
|
Processing Task Sprint 2
|
**Acceptance Tests**
test 1:
-go to the homepage by typing "npm start" in the terminal within the project folder.
- click on the login button and ensure it takes you to the login page
test 2:
-go to the homepage by typing "npm start" in the terminal within the project folder.
- click on the signup button and ensure it takes you to the signup page
|
1.0
|
as a browser, I want to click on the login and signup buttons to direct me to the respective pages. - **Acceptance Tests**
test 1:
-go to the homepage by typing "npm start" in the terminal within the project folder.
- click on the login button and ensure it takes you to the login page
test 2:
-go to the homepage by typing "npm start" in the terminal within the project folder.
- click on the signup button and ensure it takes you to the signup page
|
process
|
as a browser i want to click on the login and signup buttons to direct me to the respective pages acceptance tests test go to the homepage by typing npm start in the terminal within the project folder click on the login button and ensure it takes you to the login page test go to the homepage by typing npm start in the terminal within the project folder click on the signup button and ensure it takes you to the signup page
| 1
|
22,632
| 31,881,003,874
|
IssuesEvent
|
2023-09-16 11:27:14
|
AnishTiwari16/Portfolio
|
https://api.github.com/repos/AnishTiwari16/Portfolio
|
closed
|
feat: projects section needs more content
|
in-process
|
**Is your feature request related to a problem? Please describe.**
A clear and concise description of what the problem is. Ex. I'm always frustrated when [...]
**Describe the solution you'd like**
A clear and concise description of what you want to happen.
**Describe alternatives you've considered**
A clear and concise description of any alternative solutions or features you've considered.
**Additional context**
Add any other context or screenshots about the feature request here.
|
1.0
|
feat: projects section needs more content - **Is your feature request related to a problem? Please describe.**
A clear and concise description of what the problem is. Ex. I'm always frustrated when [...]
**Describe the solution you'd like**
A clear and concise description of what you want to happen.
**Describe alternatives you've considered**
A clear and concise description of any alternative solutions or features you've considered.
**Additional context**
Add any other context or screenshots about the feature request here.
|
process
|
feat projects section needs more content is your feature request related to a problem please describe a clear and concise description of what the problem is ex i m always frustrated when describe the solution you d like a clear and concise description of what you want to happen describe alternatives you ve considered a clear and concise description of any alternative solutions or features you ve considered additional context add any other context or screenshots about the feature request here
| 1
|
185,932
| 21,881,939,603
|
IssuesEvent
|
2022-05-19 15:00:33
|
odpi/egeria
|
https://api.github.com/repos/odpi/egeria
|
opened
|
Sonatype lift scan of repositories using egeria maven artifacts - vulnarability reported
|
security
|
When new connector sample code was added to a PR https://github.com/odpi/egeria-connector-repository-file-sample/pull/6 by @davidradl we had Sonatype Lift enabled.
The scan reports that some of the dependent egeria dependencies have a critical vulnarability ie
<img width="1105" alt="Screenshot 2022-05-19 at 15 55 55" src="https://user-images.githubusercontent.com/7292002/169327347-48da9f7f-1fb1-4b63-9bad-7412b11e2482.png">
However, as part of the release process, our published maven artifacts are scanned by sonatype. The report from the 3.8 staging area is at https://sbom.lift.sonatype.com/report/T1-a0368c8f29fdaa555824-56dff43ca17820-1651249806-0d4b2004f66e4e128e6df03823c62db1 and shows no components as vulnerable.
It's possible, that new vulnerabilities were found in dependent libraries after the staging scan was performed, though the ossindex site is not showing any concerns -- at odds with the scan.
I will contact sonatype for comment since this is concerning - especially if all consumers of egeria libraries will see such findings.
|
True
|
Sonatype lift scan of repositories using egeria maven artifacts - vulnarability reported - When new connector sample code was added to a PR https://github.com/odpi/egeria-connector-repository-file-sample/pull/6 by @davidradl we had Sonatype Lift enabled.
The scan reports that some of the dependent egeria dependencies have a critical vulnarability ie
<img width="1105" alt="Screenshot 2022-05-19 at 15 55 55" src="https://user-images.githubusercontent.com/7292002/169327347-48da9f7f-1fb1-4b63-9bad-7412b11e2482.png">
However, as part of the release process, our published maven artifacts are scanned by sonatype. The report from the 3.8 staging area is at https://sbom.lift.sonatype.com/report/T1-a0368c8f29fdaa555824-56dff43ca17820-1651249806-0d4b2004f66e4e128e6df03823c62db1 and shows no components as vulnerable.
It's possible, that new vulnerabilities were found in dependent libraries after the staging scan was performed, though the ossindex site is not showing any concerns -- at odds with the scan.
I will contact sonatype for comment since this is concerning - especially if all consumers of egeria libraries will see such findings.
|
non_process
|
sonatype lift scan of repositories using egeria maven artifacts vulnarability reported when new connector sample code was added to a pr by davidradl we had sonatype lift enabled the scan reports that some of the dependent egeria dependencies have a critical vulnarability ie img width alt screenshot at src however as part of the release process our published maven artifacts are scanned by sonatype the report from the staging area is at and shows no components as vulnerable it s possible that new vulnerabilities were found in dependent libraries after the staging scan was performed though the ossindex site is not showing any concerns at odds with the scan i will contact sonatype for comment since this is concerning especially if all consumers of egeria libraries will see such findings
| 0
|
141,822
| 11,438,160,560
|
IssuesEvent
|
2020-02-05 02:27:34
|
WordPress/gutenberg
|
https://api.github.com/repos/WordPress/gutenberg
|
reopened
|
Select all shortcut selects all blocks instead of content in block (Microsoft Edge)
|
Needs Testing
|
**Describe the bug**
In Microsoft Edge the select all shortcut `ctrl+A` selects all blocks instead of only selecting all of the content in a block.
This is not consistent with the behaviour in Chrome and Firefox. In other browsers only when using the shortcut again are all of the blocks selected.
**To reproduce**
Steps to reproduce the behavior:
1. Edit a post with multiple posts in Microsoft Edge
2. Use the select all shortcut `ctrl+A`
3. All blocks are selected
**Expected behavior**
Only the text within the paragraph block should be selected not all of the blocks.
**Environment**
- OS: Windows 10
- Browser: Edge
- Version: 42.17134.1.0
- WordPress: 5.3
- Gutenberg: 7.1
|
1.0
|
Select all shortcut selects all blocks instead of content in block (Microsoft Edge) - **Describe the bug**
In Microsoft Edge the select all shortcut `ctrl+A` selects all blocks instead of only selecting all of the content in a block.
This is not consistent with the behaviour in Chrome and Firefox. In other browsers only when using the shortcut again are all of the blocks selected.
**To reproduce**
Steps to reproduce the behavior:
1. Edit a post with multiple posts in Microsoft Edge
2. Use the select all shortcut `ctrl+A`
3. All blocks are selected
**Expected behavior**
Only the text within the paragraph block should be selected not all of the blocks.
**Environment**
- OS: Windows 10
- Browser: Edge
- Version: 42.17134.1.0
- WordPress: 5.3
- Gutenberg: 7.1
|
non_process
|
select all shortcut selects all blocks instead of content in block microsoft edge describe the bug in microsoft edge the select all shortcut ctrl a selects all blocks instead of only selecting all of the content in a block this is not consistent with the behaviour in chrome and firefox in other browsers only when using the shortcut again are all of the blocks selected to reproduce steps to reproduce the behavior edit a post with multiple posts in microsoft edge use the select all shortcut ctrl a all blocks are selected expected behavior only the text within the paragraph block should be selected not all of the blocks environment os windows browser edge version wordpress gutenberg
| 0
|
30,779
| 14,673,941,337
|
IssuesEvent
|
2020-12-30 14:15:58
|
gramps-project/gramps-webapi
|
https://api.github.com/repos/gramps-project/gramps-webapi
|
closed
|
Very slow relationship calculation
|
performance
|
Starting to work on a timeline view, I hit the issue that the person timeline endpoint is *very* slow on my tree. It takes more than 20 seconds to fetch my timeline (with default arguments), which makes it unusable in practice. I am pretty sure this is a problem within Gramps, not of our timeline code...
Most of the time is spent calculating the relationship between my 2 sisters and me, which each takes almost 10 seconds (the result being `Sister`). Something must be wrong there. Snakeviz shows that, for a single such relationship calculation, it instantiates more `Person` and `Family` objects than are present in the database.
Has anyone else encountered such problems?
|
True
|
Very slow relationship calculation - Starting to work on a timeline view, I hit the issue that the person timeline endpoint is *very* slow on my tree. It takes more than 20 seconds to fetch my timeline (with default arguments), which makes it unusable in practice. I am pretty sure this is a problem within Gramps, not of our timeline code...
Most of the time is spent calculating the relationship between my 2 sisters and me, which each takes almost 10 seconds (the result being `Sister`). Something must be wrong there. Snakeviz shows that, for a single such relationship calculation, it instantiates more `Person` and `Family` objects than are present in the database.
Has anyone else encountered such problems?
|
non_process
|
very slow relationship calculation starting to work on a timeline view i hit the issue that the person timeline endpoint is very slow on my tree it takes more than seconds to fetch my timeline with default arguments which makes it unusable in practice i am pretty sure this is a problem within gramps not of our timeline code most of the time is spent calculating the relationship between my sisters and me which each takes almost seconds the result being sister something must be wrong there snakeviz shows that for a single such relationship calculation it instantiates more person and family objects than are present in the database has anyone else encountered such problems
| 0
|
50,960
| 13,188,001,261
|
IssuesEvent
|
2020-08-13 05:16:06
|
icecube-trac/tix3
|
https://api.github.com/repos/icecube-trac/tix3
|
closed
|
[production_histograms] invalid syntax (Trac #1737)
|
Migrated from Trac cmake defect
|
```text
/Users/kmeagher/icecube/combo/release/sphinx_build/source/python/icecube.production_histograms.histogram_modules.simulation.rst:15: WARNING: autodoc: failed to import module u'icecube.production_histograms.histogram_modules.simulation.corsika_weight'; the following exception was raised:
Traceback (most recent call last):
File "/private/var/folders/rc/g_4_lyp9039cj1586zzg88f40000gn/T/pip-build-A327aa/sphinx/sphinx/ext/autodoc.py", line 385, in import_object
File "/Users/kmeagher/icecube/combo/release/lib/icecube/production_histograms/histogram_modules/simulation/corsika_weight.py", line 12
self.append(Histogram(, , , "FluxSum"))
^
SyntaxError: invalid syntax
/Users/kmeagher/icecube/combo/release/sphinx_build/source/python/icecube.production_histograms.histogram_modules.simulation.rst:63: WARNING: autodoc: failed to import module u'icecube.production_histograms.histogram_modules.simulation.nugen_weight'; the following exception was raised:
Traceback (most recent call last):
File "/private/var/folders/rc/g_4_lyp9039cj1586zzg88f40000gn/T/pip-build-A327aa/sphinx/sphinx/ext/autodoc.py", line 385, in import_object
File "/Users/kmeagher/icecube/combo/release/lib/icecube/production_histograms/histogram_modules/simulation/nugen_weight.py", line 12
self.append(Histogram(, , , "OneWeight"))
^
SyntaxError: invalid syntax
```
<details>
<summary><em>Migrated from <a href="https://code.icecube.wisc.edu/ticket/1737">https://code.icecube.wisc.edu/ticket/1737</a>, reported by kjmeagher and owned by olivas</em></summary>
<p>
```json
{
"status": "closed",
"changetime": "2019-02-13T14:12:38",
"description": "\n{{{\n/Users/kmeagher/icecube/combo/release/sphinx_build/source/python/icecube.production_histograms.histogram_modules.simulation.rst:15: WARNING: autodoc: failed to import module u'icecube.production_histograms.histogram_modules.simulation.corsika_weight'; the following exception was raised:\nTraceback (most recent call last):\n File \"/private/var/folders/rc/g_4_lyp9039cj1586zzg88f40000gn/T/pip-build-A327aa/sphinx/sphinx/ext/autodoc.py\", line 385, in import_object\n File \"/Users/kmeagher/icecube/combo/release/lib/icecube/production_histograms/histogram_modules/simulation/corsika_weight.py\", line 12\n self.append(Histogram(, , , \"FluxSum\"))\n ^\nSyntaxError: invalid syntax\n/Users/kmeagher/icecube/combo/release/sphinx_build/source/python/icecube.production_histograms.histogram_modules.simulation.rst:63: WARNING: autodoc: failed to import module u'icecube.production_histograms.histogram_modules.simulation.nugen_weight'; the following exception was raised:\nTraceback (most recent call last):\n File \"/private/var/folders/rc/g_4_lyp9039cj1586zzg88f40000gn/T/pip-build-A327aa/sphinx/sphinx/ext/autodoc.py\", line 385, in import_object\n File \"/Users/kmeagher/icecube/combo/release/lib/icecube/production_histograms/histogram_modules/simulation/nugen_weight.py\", line 12\n self.append(Histogram(, , , \"OneWeight\"))\n ^\nSyntaxError: invalid syntax\n\n}}}\n",
"reporter": "kjmeagher",
"cc": "",
"resolution": "fixed",
"_ts": "1550067158057333",
"component": "cmake",
"summary": "[production_histograms] invalid syntax",
"priority": "normal",
"keywords": "documentation",
"time": "2016-06-10T07:44:37",
"milestone": "",
"owner": "olivas",
"type": "defect"
}
```
</p>
</details>
|
1.0
|
[production_histograms] invalid syntax (Trac #1737) -
```text
/Users/kmeagher/icecube/combo/release/sphinx_build/source/python/icecube.production_histograms.histogram_modules.simulation.rst:15: WARNING: autodoc: failed to import module u'icecube.production_histograms.histogram_modules.simulation.corsika_weight'; the following exception was raised:
Traceback (most recent call last):
File "/private/var/folders/rc/g_4_lyp9039cj1586zzg88f40000gn/T/pip-build-A327aa/sphinx/sphinx/ext/autodoc.py", line 385, in import_object
File "/Users/kmeagher/icecube/combo/release/lib/icecube/production_histograms/histogram_modules/simulation/corsika_weight.py", line 12
self.append(Histogram(, , , "FluxSum"))
^
SyntaxError: invalid syntax
/Users/kmeagher/icecube/combo/release/sphinx_build/source/python/icecube.production_histograms.histogram_modules.simulation.rst:63: WARNING: autodoc: failed to import module u'icecube.production_histograms.histogram_modules.simulation.nugen_weight'; the following exception was raised:
Traceback (most recent call last):
File "/private/var/folders/rc/g_4_lyp9039cj1586zzg88f40000gn/T/pip-build-A327aa/sphinx/sphinx/ext/autodoc.py", line 385, in import_object
File "/Users/kmeagher/icecube/combo/release/lib/icecube/production_histograms/histogram_modules/simulation/nugen_weight.py", line 12
self.append(Histogram(, , , "OneWeight"))
^
SyntaxError: invalid syntax
```
<details>
<summary><em>Migrated from <a href="https://code.icecube.wisc.edu/ticket/1737">https://code.icecube.wisc.edu/ticket/1737</a>, reported by kjmeagher and owned by olivas</em></summary>
<p>
```json
{
"status": "closed",
"changetime": "2019-02-13T14:12:38",
"description": "\n{{{\n/Users/kmeagher/icecube/combo/release/sphinx_build/source/python/icecube.production_histograms.histogram_modules.simulation.rst:15: WARNING: autodoc: failed to import module u'icecube.production_histograms.histogram_modules.simulation.corsika_weight'; the following exception was raised:\nTraceback (most recent call last):\n File \"/private/var/folders/rc/g_4_lyp9039cj1586zzg88f40000gn/T/pip-build-A327aa/sphinx/sphinx/ext/autodoc.py\", line 385, in import_object\n File \"/Users/kmeagher/icecube/combo/release/lib/icecube/production_histograms/histogram_modules/simulation/corsika_weight.py\", line 12\n self.append(Histogram(, , , \"FluxSum\"))\n ^\nSyntaxError: invalid syntax\n/Users/kmeagher/icecube/combo/release/sphinx_build/source/python/icecube.production_histograms.histogram_modules.simulation.rst:63: WARNING: autodoc: failed to import module u'icecube.production_histograms.histogram_modules.simulation.nugen_weight'; the following exception was raised:\nTraceback (most recent call last):\n File \"/private/var/folders/rc/g_4_lyp9039cj1586zzg88f40000gn/T/pip-build-A327aa/sphinx/sphinx/ext/autodoc.py\", line 385, in import_object\n File \"/Users/kmeagher/icecube/combo/release/lib/icecube/production_histograms/histogram_modules/simulation/nugen_weight.py\", line 12\n self.append(Histogram(, , , \"OneWeight\"))\n ^\nSyntaxError: invalid syntax\n\n}}}\n",
"reporter": "kjmeagher",
"cc": "",
"resolution": "fixed",
"_ts": "1550067158057333",
"component": "cmake",
"summary": "[production_histograms] invalid syntax",
"priority": "normal",
"keywords": "documentation",
"time": "2016-06-10T07:44:37",
"milestone": "",
"owner": "olivas",
"type": "defect"
}
```
</p>
</details>
|
non_process
|
invalid syntax trac text users kmeagher icecube combo release sphinx build source python icecube production histograms histogram modules simulation rst warning autodoc failed to import module u icecube production histograms histogram modules simulation corsika weight the following exception was raised traceback most recent call last file private var folders rc g t pip build sphinx sphinx ext autodoc py line in import object file users kmeagher icecube combo release lib icecube production histograms histogram modules simulation corsika weight py line self append histogram fluxsum syntaxerror invalid syntax users kmeagher icecube combo release sphinx build source python icecube production histograms histogram modules simulation rst warning autodoc failed to import module u icecube production histograms histogram modules simulation nugen weight the following exception was raised traceback most recent call last file private var folders rc g t pip build sphinx sphinx ext autodoc py line in import object file users kmeagher icecube combo release lib icecube production histograms histogram modules simulation nugen weight py line self append histogram oneweight syntaxerror invalid syntax migrated from json status closed changetime description n n users kmeagher icecube combo release sphinx build source python icecube production histograms histogram modules simulation rst warning autodoc failed to import module u icecube production histograms histogram modules simulation corsika weight the following exception was raised ntraceback most recent call last n file private var folders rc g t pip build sphinx sphinx ext autodoc py line in import object n file users kmeagher icecube combo release lib icecube production histograms histogram modules simulation corsika weight py line n self append histogram fluxsum n nsyntaxerror invalid syntax n users kmeagher icecube combo release sphinx build source python icecube production histograms histogram modules simulation rst warning autodoc failed to import module u icecube production histograms histogram modules simulation nugen weight the following exception was raised ntraceback most recent call last n file private var folders rc g t pip build sphinx sphinx ext autodoc py line in import object n file users kmeagher icecube combo release lib icecube production histograms histogram modules simulation nugen weight py line n self append histogram oneweight n nsyntaxerror invalid syntax n n n reporter kjmeagher cc resolution fixed ts component cmake summary invalid syntax priority normal keywords documentation time milestone owner olivas type defect
| 0
|
17,536
| 23,345,595,325
|
IssuesEvent
|
2022-08-09 17:38:42
|
AdyanRios-NOAA/SEFSC-MH-Processing
|
https://api.github.com/repos/AdyanRios-NOAA/SEFSC-MH-Processing
|
closed
|
Species expansion final process flow
|
bug icebox Processing
|
- [x] mh_pre_process.R is going to be for only sector expansion and creating other variables needed for processing
- [x] mh_process.R creates clusters and fills in dates
- [x] spcies_expansion.R gets dataset ready for collections at the species level and dates are cleaned up
- [x] Review date cleanup
|
1.0
|
Species expansion final process flow - - [x] mh_pre_process.R is going to be for only sector expansion and creating other variables needed for processing
- [x] mh_process.R creates clusters and fills in dates
- [x] spcies_expansion.R gets dataset ready for collections at the species level and dates are cleaned up
- [x] Review date cleanup
|
process
|
species expansion final process flow mh pre process r is going to be for only sector expansion and creating other variables needed for processing mh process r creates clusters and fills in dates spcies expansion r gets dataset ready for collections at the species level and dates are cleaned up review date cleanup
| 1
|
6,243
| 2,586,306,459
|
IssuesEvent
|
2015-02-17 10:29:24
|
phusion/passenger
|
https://api.github.com/repos/phusion/passenger
|
closed
|
passenger-irb is broken in version 5
|
EnterpriseCustomer Priority/Critical
|
```
passenger-enterprise-server-5.0.0.beta2 ./bin/passenger-irb
LoadError: no such file to load -- phusion_passenger/admin_tools/server_instance
require at org/jruby/RubyKernel.java:1071
require at /Users/xxx/.rvm/rubies/jruby-1.7.18/lib/ruby/shared/rubygems/core_ext/kernel_require.rb:55
(root) at ./bin/passenger-irb:14
```
|
1.0
|
passenger-irb is broken in version 5 - ```
passenger-enterprise-server-5.0.0.beta2 ./bin/passenger-irb
LoadError: no such file to load -- phusion_passenger/admin_tools/server_instance
require at org/jruby/RubyKernel.java:1071
require at /Users/xxx/.rvm/rubies/jruby-1.7.18/lib/ruby/shared/rubygems/core_ext/kernel_require.rb:55
(root) at ./bin/passenger-irb:14
```
|
non_process
|
passenger irb is broken in version passenger enterprise server bin passenger irb loaderror no such file to load phusion passenger admin tools server instance require at org jruby rubykernel java require at users xxx rvm rubies jruby lib ruby shared rubygems core ext kernel require rb root at bin passenger irb
| 0
|
145,129
| 11,659,734,549
|
IssuesEvent
|
2020-03-03 00:59:00
|
rook/rook
|
https://api.github.com/repos/rook/rook
|
opened
|
Integration tests failing in master due to CephBlockPool cleanup
|
bug test
|
<!-- **Are you in the right place?**
1. For issues or feature requests, please create an issue in this repository.
2. For general technical and non-technical questions, we are happy to help you on our [Rook.io Slack](https://slack.rook.io/).
3. Did you already search the existing open issues for anything similar? -->
**Is this a bug report or feature request?**
* Bug Report
**Deviation from expected behavior:**
Integration tests should pass. The CephSmokeSuite is failing after the cleanup of a pool is not completed from a previous test suite. This was not seen on the PR before merge since the test suites run independently, which is not the case in master.
**Expected behavior:**
The CI is failing when setting up the CephSmokeSuite since the CephBlockPool CRD already exists. See the [test log](https://jenkins.rook.io/blue/rest/organizations/jenkins/pipelines/rook/pipelines/rook/branches/master/runs/1761/nodes/56/steps/126/log/?start=0)
This is related to #4915. The master CI has failed consistently since it merged. I suspect this is related to the finalizer that was added to the pool. Something must not be cleaned up from the CephMultiClusterSuite.
The cleanup of the CephMultiClusterSuite shows this error:
```
2020-03-03 00:34:35.982211 I | exec: Running command: kubectl delete crd cephclusters.ceph.rook.io cephblockpools.ceph.rook.io cephobjectstores.ceph.rook.io cephobjectstoreusers.ceph.rook.io cephfilesystems.ceph.rook.io cephnfses.ceph.rook.io cephclients.ceph.rook.io volumes.rook.io objectbuckets.objectbucket.io objectbucketclaims.objectbucket.io
2020-03-03 00:34:50.982773 I | exec: Timeout waiting for process kubectl to return. Sending interrupt signal to the process
2020-03-03 00:34:50.984566 E | utils: Failed to execute: kubectl [delete crd cephclusters.ceph.rook.io cephblockpools.ceph.rook.io cephobjectstores.ceph.rook.io cephobjectstoreusers.ceph.rook.io cephfilesystems.ceph.rook.io cephnfses.ceph.rook.io cephclients.ceph.rook.io volumes.rook.io objectbuckets.objectbucket.io objectbucketclaims.objectbucket.io] : Failed to complete 'kubectl': signal: interrupt. . customresourcedefinition.apiextensions.k8s.io "cephclusters.ceph.rook.io" deleted
```
The smoke suite then fails to create the CRDs since the cleanup failed previously:
```
Error from server (AlreadyExists): error when creating "STDIN": object is being deleted: customresourcedefinitions.apiextensions.k8s.io "cephblockpools.ceph.rook.io" already exists
```
**How to reproduce it (minimal and precise):**
<!-- Please let us know any circumstances for reproduction of your bug. -->
- Run the master CI
|
1.0
|
Integration tests failing in master due to CephBlockPool cleanup - <!-- **Are you in the right place?**
1. For issues or feature requests, please create an issue in this repository.
2. For general technical and non-technical questions, we are happy to help you on our [Rook.io Slack](https://slack.rook.io/).
3. Did you already search the existing open issues for anything similar? -->
**Is this a bug report or feature request?**
* Bug Report
**Deviation from expected behavior:**
Integration tests should pass. The CephSmokeSuite is failing after the cleanup of a pool is not completed from a previous test suite. This was not seen on the PR before merge since the test suites run independently, which is not the case in master.
**Expected behavior:**
The CI is failing when setting up the CephSmokeSuite since the CephBlockPool CRD already exists. See the [test log](https://jenkins.rook.io/blue/rest/organizations/jenkins/pipelines/rook/pipelines/rook/branches/master/runs/1761/nodes/56/steps/126/log/?start=0)
This is related to #4915. The master CI has failed consistently since it merged. I suspect this is related to the finalizer that was added to the pool. Something must not be cleaned up from the CephMultiClusterSuite.
The cleanup of the CephMultiClusterSuite shows this error:
```
2020-03-03 00:34:35.982211 I | exec: Running command: kubectl delete crd cephclusters.ceph.rook.io cephblockpools.ceph.rook.io cephobjectstores.ceph.rook.io cephobjectstoreusers.ceph.rook.io cephfilesystems.ceph.rook.io cephnfses.ceph.rook.io cephclients.ceph.rook.io volumes.rook.io objectbuckets.objectbucket.io objectbucketclaims.objectbucket.io
2020-03-03 00:34:50.982773 I | exec: Timeout waiting for process kubectl to return. Sending interrupt signal to the process
2020-03-03 00:34:50.984566 E | utils: Failed to execute: kubectl [delete crd cephclusters.ceph.rook.io cephblockpools.ceph.rook.io cephobjectstores.ceph.rook.io cephobjectstoreusers.ceph.rook.io cephfilesystems.ceph.rook.io cephnfses.ceph.rook.io cephclients.ceph.rook.io volumes.rook.io objectbuckets.objectbucket.io objectbucketclaims.objectbucket.io] : Failed to complete 'kubectl': signal: interrupt. . customresourcedefinition.apiextensions.k8s.io "cephclusters.ceph.rook.io" deleted
```
The smoke suite then fails to create the CRDs since the cleanup failed previously:
```
Error from server (AlreadyExists): error when creating "STDIN": object is being deleted: customresourcedefinitions.apiextensions.k8s.io "cephblockpools.ceph.rook.io" already exists
```
**How to reproduce it (minimal and precise):**
<!-- Please let us know any circumstances for reproduction of your bug. -->
- Run the master CI
|
non_process
|
integration tests failing in master due to cephblockpool cleanup are you in the right place for issues or feature requests please create an issue in this repository for general technical and non technical questions we are happy to help you on our did you already search the existing open issues for anything similar is this a bug report or feature request bug report deviation from expected behavior integration tests should pass the cephsmokesuite is failing after the cleanup of a pool is not completed from a previous test suite this was not seen on the pr before merge since the test suites run independently which is not the case in master expected behavior the ci is failing when setting up the cephsmokesuite since the cephblockpool crd already exists see the this is related to the master ci has failed consistently since it merged i suspect this is related to the finalizer that was added to the pool something must not be cleaned up from the cephmulticlustersuite the cleanup of the cephmulticlustersuite shows this error i exec running command kubectl delete crd cephclusters ceph rook io cephblockpools ceph rook io cephobjectstores ceph rook io cephobjectstoreusers ceph rook io cephfilesystems ceph rook io cephnfses ceph rook io cephclients ceph rook io volumes rook io objectbuckets objectbucket io objectbucketclaims objectbucket io i exec timeout waiting for process kubectl to return sending interrupt signal to the process e utils failed to execute kubectl failed to complete kubectl signal interrupt customresourcedefinition apiextensions io cephclusters ceph rook io deleted the smoke suite then fails to create the crds since the cleanup failed previously error from server alreadyexists error when creating stdin object is being deleted customresourcedefinitions apiextensions io cephblockpools ceph rook io already exists how to reproduce it minimal and precise run the master ci
| 0
|
425,277
| 12,338,113,163
|
IssuesEvent
|
2020-05-14 15:57:33
|
cb-geo/mpm
|
https://api.github.com/repos/cb-geo/mpm
|
opened
|
Parallel Solver for Semi-Implicit Navier-Stokes solver
|
Priority: High Status: Review needed Type: Discussion
|
## Summary
This RFC is to propose an enhancement of the Navier-Stokes solver in branch `solver/navier-stokes` and RFC #634. A parallel solver based on PETSc library is planned to be added to the implementation to solve a large scale linear systems of equation.
## Motivation
To add parallel capabilities to the NS solver so that it can be used to solve problems with large number of particles.
## Design Detail
The following classes and functionality were implemented:
1. A specific assembler of NS scheme for parallel usage.
2. Wrapper to include the capability of PETSc solver to CB-Geo. This includes the capability to wrap Eigen matrices and vectors to PETSc data structures and rank-to-global index mapping.
3. Integration of NS solver with domain-decomposition and halo-exchange features.
4. **(To be done)** Parallel free-surface detection.
5. **(To be done)** Add the capability of dynamic load-balancing.
6. **(To be done)** To implement parallel incomplete cholesky preconditioner for faster convergence.
## Drawbacks
No drawbacks in performance at the moment. It extended the capability of running the semi-implicit solver in a distributed memory machine. The current max number of particles reached 2.4 million, which can be completed in 1 hour for 1000 steps, running in 256 MPI tasks. Further optimization can definitely be done to improve performance.
|
1.0
|
Parallel Solver for Semi-Implicit Navier-Stokes solver - ## Summary
This RFC is to propose an enhancement of the Navier-Stokes solver in branch `solver/navier-stokes` and RFC #634. A parallel solver based on PETSc library is planned to be added to the implementation to solve a large scale linear systems of equation.
## Motivation
To add parallel capabilities to the NS solver so that it can be used to solve problems with large number of particles.
## Design Detail
The following classes and functionality were implemented:
1. A specific assembler of NS scheme for parallel usage.
2. Wrapper to include the capability of PETSc solver to CB-Geo. This includes the capability to wrap Eigen matrices and vectors to PETSc data structures and rank-to-global index mapping.
3. Integration of NS solver with domain-decomposition and halo-exchange features.
4. **(To be done)** Parallel free-surface detection.
5. **(To be done)** Add the capability of dynamic load-balancing.
6. **(To be done)** To implement parallel incomplete cholesky preconditioner for faster convergence.
## Drawbacks
No drawbacks in performance at the moment. It extended the capability of running the semi-implicit solver in a distributed memory machine. The current max number of particles reached 2.4 million, which can be completed in 1 hour for 1000 steps, running in 256 MPI tasks. Further optimization can definitely be done to improve performance.
|
non_process
|
parallel solver for semi implicit navier stokes solver summary this rfc is to propose an enhancement of the navier stokes solver in branch solver navier stokes and rfc a parallel solver based on petsc library is planned to be added to the implementation to solve a large scale linear systems of equation motivation to add parallel capabilities to the ns solver so that it can be used to solve problems with large number of particles design detail the following classes and functionality were implemented a specific assembler of ns scheme for parallel usage wrapper to include the capability of petsc solver to cb geo this includes the capability to wrap eigen matrices and vectors to petsc data structures and rank to global index mapping integration of ns solver with domain decomposition and halo exchange features to be done parallel free surface detection to be done add the capability of dynamic load balancing to be done to implement parallel incomplete cholesky preconditioner for faster convergence drawbacks no drawbacks in performance at the moment it extended the capability of running the semi implicit solver in a distributed memory machine the current max number of particles reached million which can be completed in hour for steps running in mpi tasks further optimization can definitely be done to improve performance
| 0
|
773
| 3,256,656,404
|
IssuesEvent
|
2015-10-20 14:42:07
|
g4gaurang/bcbsmaissuestracker
|
https://api.github.com/repos/g4gaurang/bcbsmaissuestracker
|
opened
|
Content Library Folder Auto-Generation - AB-174
|
Environment-Production HR_FixType-HR_Product Milestone-Post_1.1 Priority-High Status- In-Process Type-Defect
|
After searching through Content Library, it was discovered that there are duplicates of multiple folders, with the exceptions of commas, periods, hyphens, etc.
Bug AB-174 has been logged and confirmed that when the correctly-named Preferred Name attribute value is chosen, it is, in fact, creating a brand new folder structure with no punctuation.
For example - When choosing the Preferred Account Name on the Plan Details page of the Configurator - using the drop down to find 'Mintz...'. The account names in the Preferred Account Name drop down list in the configurator do not fully match what is in Content Library.
https://drive.google.com/file/d/0B346D2AloIVGcTF6ZlVFZVZaTDA/view?usp=sharing
|
1.0
|
Content Library Folder Auto-Generation - AB-174 - After searching through Content Library, it was discovered that there are duplicates of multiple folders, with the exceptions of commas, periods, hyphens, etc.
Bug AB-174 has been logged and confirmed that when the correctly-named Preferred Name attribute value is chosen, it is, in fact, creating a brand new folder structure with no punctuation.
For example - When choosing the Preferred Account Name on the Plan Details page of the Configurator - using the drop down to find 'Mintz...'. The account names in the Preferred Account Name drop down list in the configurator do not fully match what is in Content Library.
https://drive.google.com/file/d/0B346D2AloIVGcTF6ZlVFZVZaTDA/view?usp=sharing
|
process
|
content library folder auto generation ab after searching through content library it was discovered that there are duplicates of multiple folders with the exceptions of commas periods hyphens etc bug ab has been logged and confirmed that when the correctly named preferred name attribute value is chosen it is in fact creating a brand new folder structure with no punctuation for example when choosing the preferred account name on the plan details page of the configurator using the drop down to find mintz the account names in the preferred account name drop down list in the configurator do not fully match what is in content library
| 1
|
159,916
| 6,064,168,353
|
IssuesEvent
|
2017-06-14 13:46:51
|
VirtoCommerce/vc-platform
|
https://api.github.com/repos/VirtoCommerce/vc-platform
|
opened
|
Storefront: menu not working on mobile devices / devices with narrow screen
|
bug Priority: High
|
Version info:
- Browser version: Chrome 58 Desktop / for Android
- Platform version: 2.13.8 (public demo)
- Module version: Storefront 2.21.3 (public demo)
### Expected behavior
When you click/tap on `Menu` button, menu must appear.
### Actual behavior
Nothing happens.
### Steps to reproduce
1. Open demo.virtocommerce.com on mobile device or set width of your browser windows to less than 768px.
2. Sign in.
3. Open menu and go to `View account`.
4. Try to open menu now.
### Why it happens?
JS functions what allow you to open menu is defined in `shop.js.liquid`, which is not included in `account/scripts` bundle
|
1.0
|
Storefront: menu not working on mobile devices / devices with narrow screen - Version info:
- Browser version: Chrome 58 Desktop / for Android
- Platform version: 2.13.8 (public demo)
- Module version: Storefront 2.21.3 (public demo)
### Expected behavior
When you click/tap on `Menu` button, menu must appear.
### Actual behavior
Nothing happens.
### Steps to reproduce
1. Open demo.virtocommerce.com on mobile device or set width of your browser windows to less than 768px.
2. Sign in.
3. Open menu and go to `View account`.
4. Try to open menu now.
### Why it happens?
JS functions what allow you to open menu is defined in `shop.js.liquid`, which is not included in `account/scripts` bundle
|
non_process
|
storefront menu not working on mobile devices devices with narrow screen version info browser version chrome desktop for android platform version public demo module version storefront public demo expected behavior when you click tap on menu button menu must appear actual behavior nothing happens steps to reproduce open demo virtocommerce com on mobile device or set width of your browser windows to less than sign in open menu and go to view account try to open menu now why it happens js functions what allow you to open menu is defined in shop js liquid which is not included in account scripts bundle
| 0
|
226,093
| 17,948,440,220
|
IssuesEvent
|
2021-09-12 08:49:58
|
MetagaussInc/Blazeforms-Revamped-Frontend
|
https://api.github.com/repos/MetagaussInc/Blazeforms-Revamped-Frontend
|
closed
|
Style editor on publish page, backgorund color and backgroung image not looking at frontend.
|
bug medium Ready For Retest
|
1. Publish page and click on style editor.
2. set the background image or background color.
Actual result- background image or background color set in style editor not showing at frontend.
|
1.0
|
Style editor on publish page, backgorund color and backgroung image not looking at frontend. - 1. Publish page and click on style editor.
2. set the background image or background color.
Actual result- background image or background color set in style editor not showing at frontend.
|
non_process
|
style editor on publish page backgorund color and backgroung image not looking at frontend publish page and click on style editor set the background image or background color actual result background image or background color set in style editor not showing at frontend
| 0
|
3,761
| 6,734,899,689
|
IssuesEvent
|
2017-10-18 19:43:39
|
cypress-io/cypress
|
https://api.github.com/repos/cypress-io/cypress
|
closed
|
Update contributing with instructions on how to run tests in the driver package
|
first-timers-only process: contributing
|
*General note:* The best source of truth in figuring out how to run tests for each package is our `circle.yml` file found in the main `cypress` directory. The tasks defined in our `cypress.yml` are all run before anything is deployed.
**To run end-to-end tests for the `driver` package from the Cypress Test Runner:**
- In the `cypress` directory, run `npm install` & `npm start`.
- When the Cypress Test Runner opens, manually add the directory `cypress/packages/driver/test`.
- In the `cypress/packages/driver` directory, run `npm start`.
- Click into the `test` directory from the Cypress Test Runner.
- Select any test file you want to run.
**To run end-to-end tests in the `driver` package from the terminal:**
- In the `cypress` directory: run `npm install`.
- In the `cypress/packages/driver` directory, run `npm start` & `npm run test-integration`.
- The Cypress Test Runner should spawn and run through each test file individually.
|
1.0
|
Update contributing with instructions on how to run tests in the driver package - *General note:* The best source of truth in figuring out how to run tests for each package is our `circle.yml` file found in the main `cypress` directory. The tasks defined in our `cypress.yml` are all run before anything is deployed.
**To run end-to-end tests for the `driver` package from the Cypress Test Runner:**
- In the `cypress` directory, run `npm install` & `npm start`.
- When the Cypress Test Runner opens, manually add the directory `cypress/packages/driver/test`.
- In the `cypress/packages/driver` directory, run `npm start`.
- Click into the `test` directory from the Cypress Test Runner.
- Select any test file you want to run.
**To run end-to-end tests in the `driver` package from the terminal:**
- In the `cypress` directory: run `npm install`.
- In the `cypress/packages/driver` directory, run `npm start` & `npm run test-integration`.
- The Cypress Test Runner should spawn and run through each test file individually.
|
process
|
update contributing with instructions on how to run tests in the driver package general note the best source of truth in figuring out how to run tests for each package is our circle yml file found in the main cypress directory the tasks defined in our cypress yml are all run before anything is deployed to run end to end tests for the driver package from the cypress test runner in the cypress directory run npm install npm start when the cypress test runner opens manually add the directory cypress packages driver test in the cypress packages driver directory run npm start click into the test directory from the cypress test runner select any test file you want to run to run end to end tests in the driver package from the terminal in the cypress directory run npm install in the cypress packages driver directory run npm start npm run test integration the cypress test runner should spawn and run through each test file individually
| 1
|
291,524
| 21,928,366,682
|
IssuesEvent
|
2022-05-23 07:29:43
|
cloudnative-pg/cloudnative-pg.github.io
|
https://api.github.com/repos/cloudnative-pg/cloudnative-pg.github.io
|
closed
|
Add link to EDB's k8s landing page from home page
|
documentation
|
As original creater, allos EDB to include a link to the designated URL in the home page. The URL is https://www.enterprisedb.com/products/cloud-native-postgresql-kubernetes-ha-clusters-k8s-containers-scalable
|
1.0
|
Add link to EDB's k8s landing page from home page - As original creater, allos EDB to include a link to the designated URL in the home page. The URL is https://www.enterprisedb.com/products/cloud-native-postgresql-kubernetes-ha-clusters-k8s-containers-scalable
|
non_process
|
add link to edb s landing page from home page as original creater allos edb to include a link to the designated url in the home page the url is
| 0
|
1,270
| 2,615,157,167
|
IssuesEvent
|
2015-03-01 06:35:25
|
chrsmith/html5rocks
|
https://api.github.com/repos/chrsmith/html5rocks
|
closed
|
Review: 59dd6a5770
|
auto-migrated Milestone-4.2 Priority-P1 Type-CodeReview
|
```
Link to revision:
http://code.google.com/p/html5rocks/source/detail?r=59dd6a57707d7e109d4957bc79f5
16128d92c409
Purpose of code changes:
Merge.
```
Original issue reported on code.google.com by `han...@html5rocks.com` on 20 Dec 2010 at 4:49
|
1.0
|
Review: 59dd6a5770 - ```
Link to revision:
http://code.google.com/p/html5rocks/source/detail?r=59dd6a57707d7e109d4957bc79f5
16128d92c409
Purpose of code changes:
Merge.
```
Original issue reported on code.google.com by `han...@html5rocks.com` on 20 Dec 2010 at 4:49
|
non_process
|
review link to revision purpose of code changes merge original issue reported on code google com by han com on dec at
| 0
|
16,386
| 21,112,135,913
|
IssuesEvent
|
2022-04-05 03:44:39
|
zotero/zotero
|
https://api.github.com/repos/zotero/zotero
|
opened
|
Incorrect error handling when citeproc-rs is disabled
|
Word Processor Integration Bug
|
```
[JavaScript Error: "This command is not available because no document is open. [getDocument:\vboxsvr\adomas\zotero\word-for-windows-integration\build\zoterowinwordintegration\document.cpp]"]
[JavaScript Error: "TypeError: invalid 'instanceof' operand Zotero.CiteprocRs.CiteprocRsDriverError" {file: "chrome://zotero/content/xpcom/integration.js" line: 375}]
```
I'm guessing this is throwing incorrectly?
|
1.0
|
Incorrect error handling when citeproc-rs is disabled - ```
[JavaScript Error: "This command is not available because no document is open. [getDocument:\vboxsvr\adomas\zotero\word-for-windows-integration\build\zoterowinwordintegration\document.cpp]"]
[JavaScript Error: "TypeError: invalid 'instanceof' operand Zotero.CiteprocRs.CiteprocRsDriverError" {file: "chrome://zotero/content/xpcom/integration.js" line: 375}]
```
I'm guessing this is throwing incorrectly?
|
process
|
incorrect error handling when citeproc rs is disabled i m guessing this is throwing incorrectly
| 1
|
3,647
| 6,677,850,565
|
IssuesEvent
|
2017-10-05 12:14:42
|
dzhw/zofar
|
https://api.github.com/repos/dzhw/zofar
|
closed
|
create test cases
|
category: service.processes et: 8 prio: 1 status: discussion type: backlog.task
|
assigned to:
- Andrea
- Kim
- Nadin
Creating test cases for unit tests on HTML5 Questiontypes.
|
1.0
|
create test cases - assigned to:
- Andrea
- Kim
- Nadin
Creating test cases for unit tests on HTML5 Questiontypes.
|
process
|
create test cases assigned to andrea kim nadin creating test cases for unit tests on questiontypes
| 1
|
21,104
| 28,062,687,569
|
IssuesEvent
|
2023-03-29 13:35:24
|
FOLIO-FSE/folio_migration_tools
|
https://api.github.com/repos/FOLIO-FSE/folio_migration_tools
|
closed
|
Improve performance for ItemsTransformer by calling super().get_prop() only when needed.
|
Simplify migration process
|
Things to look into:
- [ ] Call super().get_prop() only when needed in ItemMapper's get_prop() method
- [ ] Look over the implementation for mapping_file_mapper_base.is_set_or_bool_or_numeric() and see if there is anything that could speed it up.
Items transformer seems significantly slower in recent releases. Nothing apparent stands out, but I recommend running a profiling on the task:
> python3 -m cProfile -o item_transformer_prof.prof -m folio_migration_tools mapping_files/configuration.json transform_items --base_folder_path ./
And then installing snakeviz
> pip install snakeviz
and run
> snakeviz item_transformer_prof.prof

|
1.0
|
Improve performance for ItemsTransformer by calling super().get_prop() only when needed. - Things to look into:
- [ ] Call super().get_prop() only when needed in ItemMapper's get_prop() method
- [ ] Look over the implementation for mapping_file_mapper_base.is_set_or_bool_or_numeric() and see if there is anything that could speed it up.
Items transformer seems significantly slower in recent releases. Nothing apparent stands out, but I recommend running a profiling on the task:
> python3 -m cProfile -o item_transformer_prof.prof -m folio_migration_tools mapping_files/configuration.json transform_items --base_folder_path ./
And then installing snakeviz
> pip install snakeviz
and run
> snakeviz item_transformer_prof.prof

|
process
|
improve performance for itemstransformer by calling super get prop only when needed things to look into call super get prop only when needed in itemmapper s get prop method look over the implementation for mapping file mapper base is set or bool or numeric and see if there is anything that could speed it up items transformer seems significantly slower in recent releases nothing apparent stands out but i recommend running a profiling on the task m cprofile o item transformer prof prof m folio migration tools mapping files configuration json transform items base folder path and then installing snakeviz pip install snakeviz and run snakeviz item transformer prof prof
| 1
|
575,770
| 17,049,411,515
|
IssuesEvent
|
2021-07-06 07:01:16
|
plotly/Dash.NET
|
https://api.github.com/repos/plotly/Dash.NET
|
opened
|
Adapt Feliz style DSL for Dash components
|
Area:Frontend Area:Meta Priority: High Type: Enhancement
|
While the core HTML components use the new Feliz-style DSL with `Feliz.Engine`, other Dash components still use the double list style `Component.component [<props>] [<children>]`.
Unifying the feel of components would be awesome. We can most likely tackle this as #16 progresses.
|
1.0
|
Adapt Feliz style DSL for Dash components - While the core HTML components use the new Feliz-style DSL with `Feliz.Engine`, other Dash components still use the double list style `Component.component [<props>] [<children>]`.
Unifying the feel of components would be awesome. We can most likely tackle this as #16 progresses.
|
non_process
|
adapt feliz style dsl for dash components while the core html components use the new feliz style dsl with feliz engine other dash components still use the double list style component component unifying the feel of components would be awesome we can most likely tackle this as progresses
| 0
|
690,510
| 23,662,693,191
|
IssuesEvent
|
2022-08-26 17:12:30
|
orbeon/orbeon-forms
|
https://api.github.com/repos/orbeon/orbeon-forms
|
opened
|
Fields Date: can't select or enter date
|
Priority: Regression Module: XForms Area: XBL Components
|
Either using the date picker or the fields. Possibly related to #5389.
|
1.0
|
Fields Date: can't select or enter date - Either using the date picker or the fields. Possibly related to #5389.
|
non_process
|
fields date can t select or enter date either using the date picker or the fields possibly related to
| 0
|
31,258
| 4,240,323,358
|
IssuesEvent
|
2016-07-06 13:03:34
|
mysociety/fixmystreet
|
https://api.github.com/repos/mysociety/fixmystreet
|
closed
|
FixMyStreet account management: “How do I change my password?”
|
Design
|
Most sites have an account management screen or menu, that lets users:
* Change their password
* Change their full name
* Change their username / email address (sometimes!)
* View all content created by them
* Manage their email subscriptions
The most common user support request for FixMyStreet is helping people change their password (especially in the wake of security scares like Heartbleed). The link to change your password is currently hidden on the “Your reports” page, because there is no user menu / account management page.
I suggest we add a user management page, so the workflow matches people’s expectations.
|
1.0
|
FixMyStreet account management: “How do I change my password?” - Most sites have an account management screen or menu, that lets users:
* Change their password
* Change their full name
* Change their username / email address (sometimes!)
* View all content created by them
* Manage their email subscriptions
The most common user support request for FixMyStreet is helping people change their password (especially in the wake of security scares like Heartbleed). The link to change your password is currently hidden on the “Your reports” page, because there is no user menu / account management page.
I suggest we add a user management page, so the workflow matches people’s expectations.
|
non_process
|
fixmystreet account management “how do i change my password ” most sites have an account management screen or menu that lets users change their password change their full name change their username email address sometimes view all content created by them manage their email subscriptions the most common user support request for fixmystreet is helping people change their password especially in the wake of security scares like heartbleed the link to change your password is currently hidden on the “your reports” page because there is no user menu account management page i suggest we add a user management page so the workflow matches people’s expectations
| 0
|
19,923
| 26,389,306,981
|
IssuesEvent
|
2023-01-12 14:36:43
|
prisma/prisma
|
https://api.github.com/repos/prisma/prisma
|
closed
|
`result` extension that shadows a field of the same name causes `Maximum call stack exceeded` at runtime
|
bug/2-confirmed kind/bug process/candidate tech/typescript team/client topic: clientExtensions
|
### Bug description
When a `result` extension adds a computed field that shadows a field of the same name, AND the extension field `needs` the shadowed field, the following error occurs:
```
RangeError: Maximum call stack size exceeded
at Array.flatMap (<anonymous>)
at /Users/stephen/Projects/prisma-client-extensions-shadowing/node_modules/@prisma/client/runtime/index.js:31121:48
at Cache.getOrCreate (/Users/stephen/Projects/prisma-client-extensions-shadowing/node_modules/@prisma/client/runtime/index.js:31090:19)
at resolveNeeds (/Users/stephen/Projects/prisma-client-extensions-shadowing/node_modules/@prisma/client/runtime/index.js:31119:18)
at Array.flatMap (<anonymous>)
at /Users/stephen/Projects/prisma-client-extensions-shadowing/node_modules/@prisma/client/runtime/index.js:31121:48
at Cache.getOrCreate (/Users/stephen/Projects/prisma-client-extensions-shadowing/node_modules/@prisma/client/runtime/index.js:31090:19)
at resolveNeeds (/Users/stephen/Projects/prisma-client-extensions-shadowing/node_modules/@prisma/client/runtime/index.js:31119:18)
at Array.flatMap (<anonymous>)
at /Users/stephen/Projects/prisma-client-extensions-shadowing/node_modules/@prisma/client/runtime/index.js:31121:48 {
clientVersion: '4.7.1'
}
```
Here is where the issue actually happens:
https://github.com/prisma/prisma/blob/main/packages/client/src/runtime/core/extensions/resultUtils.ts#L58
(`resolveNeeds` doesn't check for cycles in the dependencies)
### How to reproduce
Repository with a minimal reproduction and instructions in the README: https://github.com/sbking/prisma-client-extensions-shadowing
### Expected behavior
A computed field should be able to shadow a field of the same name, and use the original value to compute the computed value.
This works at the type level - the extended client uses the data type returned by the computed field, not the type of the original field.
The TypeScript types also seem suggest that you can chain extensions that shadow the same computed field. This would be intuitive, but the docs suggest that the last extension always wins in the case of conflicts. See the inferred inlay hints in this screenshot:
<img width="952" alt="Screen Shot 2022-12-05 at 5 43 04 PM" src="https://user-images.githubusercontent.com/3913213/205778295-5ec93d6f-b74f-48df-82b4-af4a150dd31f.png">
### Prisma information
<!-- Do not include your database credentials when sharing your Prisma schema! -->
```prisma
generator client {
provider = "prisma-client-js"
previewFeatures = ["clientExtensions"]
}
datasource db {
provider = "postgresql"
url = env("DATABASE_URL")
}
model Post {
id String @id @default(cuid())
title String
createdAt DateTime @default(now())
}
```
Here is the extension code:
```ts
const prisma = new PrismaClient().$extends({
name: "i18n",
result: {
post: {
createdAt: {
needs: { createdAt: true },
compute(post) {
return formatDistanceToNow(post.createdAt, { locale: de });
},
},
},
},
});
async function main() {
const posts = await prisma.post.findMany({
select: {
title: true,
createdAt: true,
},
take: 5,
});
for (const post of posts) {
console.info(`- ${post.title} (${post.createdAt})`);
}
}
```
And here is where the error occurs:
```ts
async function main() {
const posts = await prisma.post.findMany({
select: {
title: true,
createdAt: true,
},
take: 5,
});
for (const post of posts) {
console.info(`- ${post.title} (${post.createdAt})`);
}
}
```
### Environment & setup
- OS: macOS
- Database: PostgreSQL
- Node.js version: v18.10.0
### Prisma Version
```
4.7.1
```
|
1.0
|
`result` extension that shadows a field of the same name causes `Maximum call stack exceeded` at runtime - ### Bug description
When a `result` extension adds a computed field that shadows a field of the same name, AND the extension field `needs` the shadowed field, the following error occurs:
```
RangeError: Maximum call stack size exceeded
at Array.flatMap (<anonymous>)
at /Users/stephen/Projects/prisma-client-extensions-shadowing/node_modules/@prisma/client/runtime/index.js:31121:48
at Cache.getOrCreate (/Users/stephen/Projects/prisma-client-extensions-shadowing/node_modules/@prisma/client/runtime/index.js:31090:19)
at resolveNeeds (/Users/stephen/Projects/prisma-client-extensions-shadowing/node_modules/@prisma/client/runtime/index.js:31119:18)
at Array.flatMap (<anonymous>)
at /Users/stephen/Projects/prisma-client-extensions-shadowing/node_modules/@prisma/client/runtime/index.js:31121:48
at Cache.getOrCreate (/Users/stephen/Projects/prisma-client-extensions-shadowing/node_modules/@prisma/client/runtime/index.js:31090:19)
at resolveNeeds (/Users/stephen/Projects/prisma-client-extensions-shadowing/node_modules/@prisma/client/runtime/index.js:31119:18)
at Array.flatMap (<anonymous>)
at /Users/stephen/Projects/prisma-client-extensions-shadowing/node_modules/@prisma/client/runtime/index.js:31121:48 {
clientVersion: '4.7.1'
}
```
Here is where the issue actually happens:
https://github.com/prisma/prisma/blob/main/packages/client/src/runtime/core/extensions/resultUtils.ts#L58
(`resolveNeeds` doesn't check for cycles in the dependencies)
### How to reproduce
Repository with a minimal reproduction and instructions in the README: https://github.com/sbking/prisma-client-extensions-shadowing
### Expected behavior
A computed field should be able to shadow a field of the same name, and use the original value to compute the computed value.
This works at the type level - the extended client uses the data type returned by the computed field, not the type of the original field.
The TypeScript types also seem suggest that you can chain extensions that shadow the same computed field. This would be intuitive, but the docs suggest that the last extension always wins in the case of conflicts. See the inferred inlay hints in this screenshot:
<img width="952" alt="Screen Shot 2022-12-05 at 5 43 04 PM" src="https://user-images.githubusercontent.com/3913213/205778295-5ec93d6f-b74f-48df-82b4-af4a150dd31f.png">
### Prisma information
<!-- Do not include your database credentials when sharing your Prisma schema! -->
```prisma
generator client {
provider = "prisma-client-js"
previewFeatures = ["clientExtensions"]
}
datasource db {
provider = "postgresql"
url = env("DATABASE_URL")
}
model Post {
id String @id @default(cuid())
title String
createdAt DateTime @default(now())
}
```
Here is the extension code:
```ts
const prisma = new PrismaClient().$extends({
name: "i18n",
result: {
post: {
createdAt: {
needs: { createdAt: true },
compute(post) {
return formatDistanceToNow(post.createdAt, { locale: de });
},
},
},
},
});
async function main() {
const posts = await prisma.post.findMany({
select: {
title: true,
createdAt: true,
},
take: 5,
});
for (const post of posts) {
console.info(`- ${post.title} (${post.createdAt})`);
}
}
```
And here is where the error occurs:
```ts
async function main() {
const posts = await prisma.post.findMany({
select: {
title: true,
createdAt: true,
},
take: 5,
});
for (const post of posts) {
console.info(`- ${post.title} (${post.createdAt})`);
}
}
```
### Environment & setup
- OS: macOS
- Database: PostgreSQL
- Node.js version: v18.10.0
### Prisma Version
```
4.7.1
```
|
process
|
result extension that shadows a field of the same name causes maximum call stack exceeded at runtime bug description when a result extension adds a computed field that shadows a field of the same name and the extension field needs the shadowed field the following error occurs rangeerror maximum call stack size exceeded at array flatmap at users stephen projects prisma client extensions shadowing node modules prisma client runtime index js at cache getorcreate users stephen projects prisma client extensions shadowing node modules prisma client runtime index js at resolveneeds users stephen projects prisma client extensions shadowing node modules prisma client runtime index js at array flatmap at users stephen projects prisma client extensions shadowing node modules prisma client runtime index js at cache getorcreate users stephen projects prisma client extensions shadowing node modules prisma client runtime index js at resolveneeds users stephen projects prisma client extensions shadowing node modules prisma client runtime index js at array flatmap at users stephen projects prisma client extensions shadowing node modules prisma client runtime index js clientversion here is where the issue actually happens resolveneeds doesn t check for cycles in the dependencies how to reproduce repository with a minimal reproduction and instructions in the readme expected behavior a computed field should be able to shadow a field of the same name and use the original value to compute the computed value this works at the type level the extended client uses the data type returned by the computed field not the type of the original field the typescript types also seem suggest that you can chain extensions that shadow the same computed field this would be intuitive but the docs suggest that the last extension always wins in the case of conflicts see the inferred inlay hints in this screenshot img width alt screen shot at pm src prisma information prisma generator client provider prisma client js previewfeatures datasource db provider postgresql url env database url model post id string id default cuid title string createdat datetime default now here is the extension code ts const prisma new prismaclient extends name result post createdat needs createdat true compute post return formatdistancetonow post createdat locale de async function main const posts await prisma post findmany select title true createdat true take for const post of posts console info post title post createdat and here is where the error occurs ts async function main const posts await prisma post findmany select title true createdat true take for const post of posts console info post title post createdat environment setup os macos database postgresql node js version prisma version
| 1
|
764
| 3,250,901,201
|
IssuesEvent
|
2015-10-19 06:01:30
|
t3kt/vjzual2
|
https://api.github.com/repos/t3kt/vjzual2
|
opened
|
create a 3d color-based tiling module
|
enhancement video processing
|
also support using an external source for the tiling data
|
1.0
|
create a 3d color-based tiling module - also support using an external source for the tiling data
|
process
|
create a color based tiling module also support using an external source for the tiling data
| 1
|
25,327
| 18,475,900,934
|
IssuesEvent
|
2021-10-18 07:13:31
|
coq/coq
|
https://api.github.com/repos/coq/coq
|
closed
|
Switch bench to iris/examples
|
kind: infrastructure part: bench
|
With https://github.com/coq/coq/pull/12969 we switched the Coq CI to test iris/examples instead of lambda-rust. It was my expectation that with this change, lambda-rust no longer needs to be compatible with Coq master, so we stopped that CI job.
However, it looks like the Coq bench suite still uses lambda-rust, not iris/examples. I have no idea where that configuration is stored so it'd be great if someone could update this. :)
|
1.0
|
Switch bench to iris/examples - With https://github.com/coq/coq/pull/12969 we switched the Coq CI to test iris/examples instead of lambda-rust. It was my expectation that with this change, lambda-rust no longer needs to be compatible with Coq master, so we stopped that CI job.
However, it looks like the Coq bench suite still uses lambda-rust, not iris/examples. I have no idea where that configuration is stored so it'd be great if someone could update this. :)
|
non_process
|
switch bench to iris examples with we switched the coq ci to test iris examples instead of lambda rust it was my expectation that with this change lambda rust no longer needs to be compatible with coq master so we stopped that ci job however it looks like the coq bench suite still uses lambda rust not iris examples i have no idea where that configuration is stored so it d be great if someone could update this
| 0
|
20,522
| 27,181,089,016
|
IssuesEvent
|
2023-02-18 16:40:21
|
cse442-at-ub/project_s23-cinco
|
https://api.github.com/repos/cse442-at-ub/project_s23-cinco
|
opened
|
Create mini tutorial document for Figma
|
Processing Task
|
**Task Tests**
*Test 1*
1) Assure that the information included in the document can help guide you to make a basic example page
2) Include useful videos and include descriptions on how they help
|
1.0
|
Create mini tutorial document for Figma - **Task Tests**
*Test 1*
1) Assure that the information included in the document can help guide you to make a basic example page
2) Include useful videos and include descriptions on how they help
|
process
|
create mini tutorial document for figma task tests test assure that the information included in the document can help guide you to make a basic example page include useful videos and include descriptions on how they help
| 1
|
8,421
| 11,589,077,553
|
IssuesEvent
|
2020-02-24 00:09:59
|
microsoft/LightGBM
|
https://api.github.com/repos/microsoft/LightGBM
|
closed
|
make the breakage in docs into Dataset and Booster params
|
in-process
|
Refer to https://github.com/microsoft/LightGBM/pull/2594#discussion_r373819109.
> Can we have a subsection under `IO` params named `Dataset` (or somehow better) params? Yes, we have no `Booster`/`Dataset` abstraction at CLI level, but with this PR distinguishing becomes very important. Without such info, users have to understand where are Dataset params only via fatal errors they get.
Or maybe we should add a **Note** to each Dataset parameter about that it cannot be changed after dataset construction.
> I agree with the subsection for dataset object, it will be much clearer.
|
1.0
|
make the breakage in docs into Dataset and Booster params - Refer to https://github.com/microsoft/LightGBM/pull/2594#discussion_r373819109.
> Can we have a subsection under `IO` params named `Dataset` (or somehow better) params? Yes, we have no `Booster`/`Dataset` abstraction at CLI level, but with this PR distinguishing becomes very important. Without such info, users have to understand where are Dataset params only via fatal errors they get.
Or maybe we should add a **Note** to each Dataset parameter about that it cannot be changed after dataset construction.
> I agree with the subsection for dataset object, it will be much clearer.
|
process
|
make the breakage in docs into dataset and booster params refer to can we have a subsection under io params named dataset or somehow better params yes we have no booster dataset abstraction at cli level but with this pr distinguishing becomes very important without such info users have to understand where are dataset params only via fatal errors they get or maybe we should add a note to each dataset parameter about that it cannot be changed after dataset construction i agree with the subsection for dataset object it will be much clearer
| 1
|
173,379
| 14,409,102,981
|
IssuesEvent
|
2020-12-04 01:23:44
|
NosadfacesRBLX/csdrblx
|
https://api.github.com/repos/NosadfacesRBLX/csdrblx
|
reopened
|
Report Bugs about the Game here!
|
documentation
|
# Report Bugs via GitHub Issues
This is where you can report bugs that Camp Sunny Days may have.
## Basic Rules
**1. Do not use this as a way to suggest new Ideas to the game, this is only for bugs and issues the game has.**
**2. Use common sense, don't troll or be immature.**
**3. Only leave helpful comments on Issues, like possible fixes.**
*As always, collaborators reserve the right to change these rules, and may lock conversations without notice.*
## Links
[Camp Sunny Days Discord Server](https://www.discord.gg/Mcchmmv), this can be used to submit game suggestions!
[Camp Sunny Days Website](https://www.campsunnydaysrblx.com)
|
1.0
|
Report Bugs about the Game here! - # Report Bugs via GitHub Issues
This is where you can report bugs that Camp Sunny Days may have.
## Basic Rules
**1. Do not use this as a way to suggest new Ideas to the game, this is only for bugs and issues the game has.**
**2. Use common sense, don't troll or be immature.**
**3. Only leave helpful comments on Issues, like possible fixes.**
*As always, collaborators reserve the right to change these rules, and may lock conversations without notice.*
## Links
[Camp Sunny Days Discord Server](https://www.discord.gg/Mcchmmv), this can be used to submit game suggestions!
[Camp Sunny Days Website](https://www.campsunnydaysrblx.com)
|
non_process
|
report bugs about the game here report bugs via github issues this is where you can report bugs that camp sunny days may have basic rules do not use this as a way to suggest new ideas to the game this is only for bugs and issues the game has use common sense don t troll or be immature only leave helpful comments on issues like possible fixes as always collaborators reserve the right to change these rules and may lock conversations without notice links this can be used to submit game suggestions
| 0
|
7,418
| 10,542,375,770
|
IssuesEvent
|
2019-10-02 13:03:48
|
Hurence/logisland
|
https://api.github.com/repos/Hurence/logisland
|
opened
|
add SplitRecord processor
|
feature processor
|
this processor takes 1 record in and gives n records out according to dynamic parameters
example conf
- processor: split_record
component: com.hurence.logisland.processor.SplitRecord
configuration:
# default false
keep.parent.record: false
# default false, if true the new record_type will be the name of the dynamic property
keep.parent.record_type: false
# default true, if false the new record_time will is set to processing_time
keep.parent.record_time: false
# dynamic parameters
record_type1: fieldA, fieldB
record_type2: fieldC
record_type3: fieldA, fieldD
will give
R(record_time0, record_type0, record_id0, fieldA, fieldB, fieldC, fieldD)
=>
R1(record_time0, record_type1, record_id1, parent_record_id =record_id0, fieldA, fieldB)
R2(record_time0, record_type2, record_id2, parent_record_id =record_id0, fieldC)
R2(record_time0, record_type3, record_id3, parent_record_id =record_id0, fieldA, fieldD)
|
1.0
|
add SplitRecord processor - this processor takes 1 record in and gives n records out according to dynamic parameters
example conf
- processor: split_record
component: com.hurence.logisland.processor.SplitRecord
configuration:
# default false
keep.parent.record: false
# default false, if true the new record_type will be the name of the dynamic property
keep.parent.record_type: false
# default true, if false the new record_time will is set to processing_time
keep.parent.record_time: false
# dynamic parameters
record_type1: fieldA, fieldB
record_type2: fieldC
record_type3: fieldA, fieldD
will give
R(record_time0, record_type0, record_id0, fieldA, fieldB, fieldC, fieldD)
=>
R1(record_time0, record_type1, record_id1, parent_record_id =record_id0, fieldA, fieldB)
R2(record_time0, record_type2, record_id2, parent_record_id =record_id0, fieldC)
R2(record_time0, record_type3, record_id3, parent_record_id =record_id0, fieldA, fieldD)
|
process
|
add splitrecord processor this processor takes record in and gives n records out according to dynamic parameters example conf processor split record component com hurence logisland processor splitrecord configuration default false keep parent record false default false if true the new record type will be the name of the dynamic property keep parent record type false default true if false the new record time will is set to processing time keep parent record time false dynamic parameters record fielda fieldb record fieldc record fielda fieldd will give r record record record fielda fieldb fieldc fieldd record record record parent record id record fielda fieldb record record record parent record id record fieldc record record record parent record id record fielda fieldd
| 1
|
21,408
| 29,351,205,771
|
IssuesEvent
|
2023-05-27 00:34:42
|
devssa/onde-codar-em-salvador
|
https://api.github.com/repos/devssa/onde-codar-em-salvador
|
closed
|
[Remoto] Data Engineer na Coodesh
|
SALVADOR PJ BANCO DE DADOS BIG DATA DATA SCIENCE PYTHON SQL AWS ETL REQUISITOS REMOTO GITHUB CI AZURE SEGURANÇA UMA ANALYTICS IA CASOS DE USO R VENDAS PADRÕ ESTATÍSTICA NEGÓCIOS AUTOMAÇÃO DE PROCESSOS INTELIGÊNCIA ARTIFICIAL LGPD MONITORAMENTO Stale
|
## Descrição da vaga:
Esta é uma vaga de um parceiro da plataforma Coodesh, ao candidatar-se você terá acesso as informações completas sobre a empresa e benefícios.
Fique atento ao redirecionamento que vai te levar para uma url [https://coodesh.com](https://coodesh.com/vagas/data-engineer-204013062?utm_source=github&utm_medium=devssa-onde-codar-em-salvador&modal=open) com o pop-up personalizado de candidatura. 👋
<p>A <strong>Qbem</strong> busca <strong><ins>Data Engineer</ins></strong> para compor seu time!</p>
<p>Animado com o uso de grandes quantidades de dados, análises e modelos de aprendizado de máquina (ML) para transformar o setor de saúde? Quer ajudar nossos clientes a gerar valor de negócios por meio da adoção de Data Analytics e Inteligência Artificial? Ansioso para se tornar uma referência mundial no processamento, interoperabilidade, compreensão de dados de saúde e melhorar os serviços de saúde?</p>
<p>Nossa organização trabalha para abordar, por meio de dados, análises e IA altamente qualificados, os insights inteligentes e acionáveis certos para transformar a maneira como as organizações gerenciam a complexidade do sistema de saúde.</p>
<p><strong>Esperamos que você:</strong></p>
<ul>
<li>Compreenda a análise multidimensional de dados de assistência médica;</li>
<li>Explorar profundamente e correlacionar eventos como forense;</li>
<li>Seja um Data Storyteller apoiando nossos profissionais de saúde;</li>
<li>Crie a inteligência para automatizar o processo;</li>
<li>Ajude os clientes a ler e interpretar dados para serem orientados por dados;</li>
<li>Seja um Assessor do Cliente e ajude-o a enquadrar os seus problemas;</li>
<li>Ajude os clientes a entregar projetos de dados, análises e IA do começo ao fim;</li>
<li>Faça análises retrospectivas, comparações, comportamento, risco e previsão;</li>
<li>Trabalhe na hipótese juntando considerações de ciência de dados à mesa.</li>
</ul>
<p><strong>Suas rotinas diárias incluem e não se restringem:</strong></p>
<ul>
<li>Desenvolver soluções com grandes volumes de Dados;</li>
<li>Ajudar na expansão da plataforma analítica, com foco em Self-Service Analytics e ferramentas Big Data;</li>
<li>Ajudar na evolução do Data Lake, conectores para Ingestão de dados, catálogo, automação de pipelines de processamento, governança e visualização de dados;</li>
<li>Criar modelos de Dados analíticos e indicadores;</li>
<li>Desenvolver ETL;</li>
<li>Garantir políticas de segurança para acesso aos Dados;</li>
<li>Garantir o tratamento e preparação dos Dados;</li>
<li>Fomentar a utilização de técnicas de automação para criação, configuração e monitoramento dos ambientes analíticos;</li>
<li>Acompanhar e administrar SLAs de entrega de Dados e análises para usuários;</li>
<li>Analisar e corrigir eventuais problemas na execução de cargas de Dados;</li>
<li>Enriquecer, limpar, desduplicar, extrapolar, organizar, catalogar, versionar, gerar visualizações dados;</li>
<li>Desenvolver produtos de dados oriundos da exploração das bases, para auxiliar o negócio a crescer de forma sustentável.</li>
</ul>
## Qbem:
<p>A Qbem é uma empresa de inteligência digital em saúde corporativa, formada por uma "plataforma as a service" que integra corretor, RH e beneficiário gerando soluções em gestão e automação de processos (RPA) para corretoras e consultorias de benefícios. </p>
<p>Entregando o diferencial de integração da gestão e dos dados dos benefícios para gestão da saúde corporativa em um só local. Possuimos uma Solução Multibenefícios (vida, saúde e odontológico) , Conexão com o beneficiário , Segurança de dados LGPD, Plataforma de vendas. </p><a href='https://coodesh.com/empresas/qbem'>Veja mais no site</a>
## Habilidades:
- Python
- Banco de dados relacionais (SQL)
- AWS
## Local:
100% Remoto
## Requisitos:
- Bacharelado ou mestrado em um campo quantitativo (como ciências da vida, ciência da computação, pesquisa, bioinformática, estatística, matemática) ou experiência equivalente;
- Experiência no setor de saúde do Brasil e/ou no setor de seguros;
- Em plataformas de serviço de provedor de nuvem (AWS, Azure, Google);
- Habilidades em Python, R e SQL;
- Experiência prática em plataformas de visualização de dados;
- Experiência em projetar e fornecer soluções de dados escaláveis e seguras.
## Diferenciais:
- Doutorado em um campo quantitativo (Ciência da Computação, Aprendizado de Máquina, Pesquisa Operacional, Estatística, Matemática);
- Experiência em consultoria com casos de uso de Data & Analytics;
- Banco de dados de gráfico de relacionamento (Neo4j);
- Experiência com EMR, HIS, PACS, LIS e outras soluções de saúde;
- Conhecimento em HL7 e padrões de interoperabilidade.
## Como se candidatar:
Candidatar-se exclusivamente através da plataforma Coodesh no link a seguir: [Data Engineer na Qbem](https://coodesh.com/vagas/data-engineer-204013062?utm_source=github&utm_medium=devssa-onde-codar-em-salvador&modal=open)
Após candidatar-se via plataforma Coodesh e validar o seu login, você poderá acompanhar e receber todas as interações do processo por lá. Utilize a opção **Pedir Feedback** entre uma etapa e outra na vaga que se candidatou. Isso fará com que a pessoa **Recruiter** responsável pelo processo na empresa receba a notificação.
## Labels
#### Alocação
Remoto
#### Regime
PJ
#### Categoria
Data Science
|
1.0
|
[Remoto] Data Engineer na Coodesh - ## Descrição da vaga:
Esta é uma vaga de um parceiro da plataforma Coodesh, ao candidatar-se você terá acesso as informações completas sobre a empresa e benefícios.
Fique atento ao redirecionamento que vai te levar para uma url [https://coodesh.com](https://coodesh.com/vagas/data-engineer-204013062?utm_source=github&utm_medium=devssa-onde-codar-em-salvador&modal=open) com o pop-up personalizado de candidatura. 👋
<p>A <strong>Qbem</strong> busca <strong><ins>Data Engineer</ins></strong> para compor seu time!</p>
<p>Animado com o uso de grandes quantidades de dados, análises e modelos de aprendizado de máquina (ML) para transformar o setor de saúde? Quer ajudar nossos clientes a gerar valor de negócios por meio da adoção de Data Analytics e Inteligência Artificial? Ansioso para se tornar uma referência mundial no processamento, interoperabilidade, compreensão de dados de saúde e melhorar os serviços de saúde?</p>
<p>Nossa organização trabalha para abordar, por meio de dados, análises e IA altamente qualificados, os insights inteligentes e acionáveis certos para transformar a maneira como as organizações gerenciam a complexidade do sistema de saúde.</p>
<p><strong>Esperamos que você:</strong></p>
<ul>
<li>Compreenda a análise multidimensional de dados de assistência médica;</li>
<li>Explorar profundamente e correlacionar eventos como forense;</li>
<li>Seja um Data Storyteller apoiando nossos profissionais de saúde;</li>
<li>Crie a inteligência para automatizar o processo;</li>
<li>Ajude os clientes a ler e interpretar dados para serem orientados por dados;</li>
<li>Seja um Assessor do Cliente e ajude-o a enquadrar os seus problemas;</li>
<li>Ajude os clientes a entregar projetos de dados, análises e IA do começo ao fim;</li>
<li>Faça análises retrospectivas, comparações, comportamento, risco e previsão;</li>
<li>Trabalhe na hipótese juntando considerações de ciência de dados à mesa.</li>
</ul>
<p><strong>Suas rotinas diárias incluem e não se restringem:</strong></p>
<ul>
<li>Desenvolver soluções com grandes volumes de Dados;</li>
<li>Ajudar na expansão da plataforma analítica, com foco em Self-Service Analytics e ferramentas Big Data;</li>
<li>Ajudar na evolução do Data Lake, conectores para Ingestão de dados, catálogo, automação de pipelines de processamento, governança e visualização de dados;</li>
<li>Criar modelos de Dados analíticos e indicadores;</li>
<li>Desenvolver ETL;</li>
<li>Garantir políticas de segurança para acesso aos Dados;</li>
<li>Garantir o tratamento e preparação dos Dados;</li>
<li>Fomentar a utilização de técnicas de automação para criação, configuração e monitoramento dos ambientes analíticos;</li>
<li>Acompanhar e administrar SLAs de entrega de Dados e análises para usuários;</li>
<li>Analisar e corrigir eventuais problemas na execução de cargas de Dados;</li>
<li>Enriquecer, limpar, desduplicar, extrapolar, organizar, catalogar, versionar, gerar visualizações dados;</li>
<li>Desenvolver produtos de dados oriundos da exploração das bases, para auxiliar o negócio a crescer de forma sustentável.</li>
</ul>
## Qbem:
<p>A Qbem é uma empresa de inteligência digital em saúde corporativa, formada por uma "plataforma as a service" que integra corretor, RH e beneficiário gerando soluções em gestão e automação de processos (RPA) para corretoras e consultorias de benefícios. </p>
<p>Entregando o diferencial de integração da gestão e dos dados dos benefícios para gestão da saúde corporativa em um só local. Possuimos uma Solução Multibenefícios (vida, saúde e odontológico) , Conexão com o beneficiário , Segurança de dados LGPD, Plataforma de vendas. </p><a href='https://coodesh.com/empresas/qbem'>Veja mais no site</a>
## Habilidades:
- Python
- Banco de dados relacionais (SQL)
- AWS
## Local:
100% Remoto
## Requisitos:
- Bacharelado ou mestrado em um campo quantitativo (como ciências da vida, ciência da computação, pesquisa, bioinformática, estatística, matemática) ou experiência equivalente;
- Experiência no setor de saúde do Brasil e/ou no setor de seguros;
- Em plataformas de serviço de provedor de nuvem (AWS, Azure, Google);
- Habilidades em Python, R e SQL;
- Experiência prática em plataformas de visualização de dados;
- Experiência em projetar e fornecer soluções de dados escaláveis e seguras.
## Diferenciais:
- Doutorado em um campo quantitativo (Ciência da Computação, Aprendizado de Máquina, Pesquisa Operacional, Estatística, Matemática);
- Experiência em consultoria com casos de uso de Data & Analytics;
- Banco de dados de gráfico de relacionamento (Neo4j);
- Experiência com EMR, HIS, PACS, LIS e outras soluções de saúde;
- Conhecimento em HL7 e padrões de interoperabilidade.
## Como se candidatar:
Candidatar-se exclusivamente através da plataforma Coodesh no link a seguir: [Data Engineer na Qbem](https://coodesh.com/vagas/data-engineer-204013062?utm_source=github&utm_medium=devssa-onde-codar-em-salvador&modal=open)
Após candidatar-se via plataforma Coodesh e validar o seu login, você poderá acompanhar e receber todas as interações do processo por lá. Utilize a opção **Pedir Feedback** entre uma etapa e outra na vaga que se candidatou. Isso fará com que a pessoa **Recruiter** responsável pelo processo na empresa receba a notificação.
## Labels
#### Alocação
Remoto
#### Regime
PJ
#### Categoria
Data Science
|
process
|
data engineer na coodesh descrição da vaga esta é uma vaga de um parceiro da plataforma coodesh ao candidatar se você terá acesso as informações completas sobre a empresa e benefícios fique atento ao redirecionamento que vai te levar para uma url com o pop up personalizado de candidatura 👋 a qbem busca data engineer para compor seu time animado com o uso de grandes quantidades de dados análises e modelos de aprendizado de máquina ml para transformar o setor de saúde quer ajudar nossos clientes a gerar valor de negócios por meio da adoção de data analytics e inteligência artificial ansioso para se tornar uma referência mundial no processamento interoperabilidade compreensão de dados de saúde e melhorar os serviços de saúde nossa organização trabalha para abordar por meio de dados análises e ia altamente qualificados os insights inteligentes e acionáveis certos para transformar a maneira como as organizações gerenciam a complexidade do sistema de saúde esperamos que você compreenda a análise multidimensional de dados de assistência médica explorar profundamente e correlacionar eventos como forense seja um data storyteller apoiando nossos profissionais de saúde crie a inteligência para automatizar o processo ajude os clientes a ler e interpretar dados para serem orientados por dados seja um assessor do cliente e ajude o a enquadrar os seus problemas ajude os clientes a entregar projetos de dados análises e ia do começo ao fim faça análises retrospectivas comparações comportamento risco e previsão trabalhe na hipótese juntando considerações de ciência de dados à mesa suas rotinas diárias incluem e não se restringem desenvolver soluções com grandes volumes de dados ajudar na expansão da plataforma analítica com foco em self service analytics e ferramentas big data ajudar na evolução do data lake conectores para ingestão de dados catálogo automação de pipelines de processamento governança e visualização de dados criar modelos de dados analíticos e indicadores desenvolver etl garantir políticas de segurança para acesso aos dados garantir o tratamento e preparação dos dados fomentar a utilização de técnicas de automação para criação configuração e monitoramento dos ambientes analíticos acompanhar e administrar slas de entrega de dados e análises para usuários analisar e corrigir eventuais problemas na execução de cargas de dados enriquecer limpar desduplicar extrapolar organizar catalogar versionar gerar visualizações dados desenvolver produtos de dados oriundos da exploração das bases para auxiliar o negócio a crescer de forma sustentável qbem a qbem é uma empresa de inteligência digital em saúde corporativa formada por uma plataforma as a service que integra corretor rh e beneficiário gerando soluções em gestão e automação de processos rpa para corretoras e consultorias de benefícios nbsp entregando o diferencial de integração da gestão e dos dados dos benefícios para gestão da saúde corporativa em um só local possuimos uma solução multibenefícios vida saúde e odontológico conexão com o beneficiário segurança de dados lgpd plataforma de vendas nbsp habilidades python banco de dados relacionais sql aws local remoto requisitos bacharelado ou mestrado em um campo quantitativo como ciências da vida ciência da computação pesquisa bioinformática estatística matemática ou experiência equivalente experiência no setor de saúde do brasil e ou no setor de seguros em plataformas de serviço de provedor de nuvem aws azure google habilidades em python r e sql experiência prática em plataformas de visualização de dados experiência em projetar e fornecer soluções de dados escaláveis e seguras diferenciais doutorado em um campo quantitativo ciência da computação aprendizado de máquina pesquisa operacional estatística matemática experiência em consultoria com casos de uso de data analytics banco de dados de gráfico de relacionamento experiência com emr his pacs lis e outras soluções de saúde conhecimento em e padrões de interoperabilidade como se candidatar candidatar se exclusivamente através da plataforma coodesh no link a seguir após candidatar se via plataforma coodesh e validar o seu login você poderá acompanhar e receber todas as interações do processo por lá utilize a opção pedir feedback entre uma etapa e outra na vaga que se candidatou isso fará com que a pessoa recruiter responsável pelo processo na empresa receba a notificação labels alocação remoto regime pj categoria data science
| 1
|
15,491
| 19,699,488,319
|
IssuesEvent
|
2022-01-12 15:20:35
|
influxdata/telegraf
|
https://api.github.com/repos/influxdata/telegraf
|
closed
|
Add noise processor plugin
|
feature request security plugin/processor security/misc
|
## Feature Request
Implement a processor plugin which adds "noise" to fields.
### Proposal:
Introduce a method for 'Differential Privacy', which adds additional noise to specified values, before the data is saved to a db. Using Laplace or Gaussian distribution a random value can be generated which is then added to a field value. Depending on the distribution function various parameters must be configurable, eg. Laplace requires two: sensitivity & epsilon.
### Current behavior:
No plugin is yet available for this modification.
### Desired behavior:
A plugin which adds noise to defined field values.
### Use case:
The idea is to add some noise to sensitive customer data, to anonymise it and further prevent linkage attacks in case anything gets leaked or someone has unauthorized access to the data and might draw conclusions, which they are not allowed to do.
Proper methods for differential privacy should ensure that, when having a dataset twice (one with noise, the other "raw"), both will generate the same statistical output.
|
1.0
|
Add noise processor plugin - ## Feature Request
Implement a processor plugin which adds "noise" to fields.
### Proposal:
Introduce a method for 'Differential Privacy', which adds additional noise to specified values, before the data is saved to a db. Using Laplace or Gaussian distribution a random value can be generated which is then added to a field value. Depending on the distribution function various parameters must be configurable, eg. Laplace requires two: sensitivity & epsilon.
### Current behavior:
No plugin is yet available for this modification.
### Desired behavior:
A plugin which adds noise to defined field values.
### Use case:
The idea is to add some noise to sensitive customer data, to anonymise it and further prevent linkage attacks in case anything gets leaked or someone has unauthorized access to the data and might draw conclusions, which they are not allowed to do.
Proper methods for differential privacy should ensure that, when having a dataset twice (one with noise, the other "raw"), both will generate the same statistical output.
|
process
|
add noise processor plugin feature request implement a processor plugin which adds noise to fields proposal introduce a method for differential privacy which adds additional noise to specified values before the data is saved to a db using laplace or gaussian distribution a random value can be generated which is then added to a field value depending on the distribution function various parameters must be configurable eg laplace requires two sensitivity epsilon current behavior no plugin is yet available for this modification desired behavior a plugin which adds noise to defined field values use case the idea is to add some noise to sensitive customer data to anonymise it and further prevent linkage attacks in case anything gets leaked or someone has unauthorized access to the data and might draw conclusions which they are not allowed to do proper methods for differential privacy should ensure that when having a dataset twice one with noise the other raw both will generate the same statistical output
| 1
|
249,424
| 26,932,181,110
|
IssuesEvent
|
2023-02-07 17:36:11
|
pustovitDmytro/semantic-release-heroku
|
https://api.github.com/repos/pustovitDmytro/semantic-release-heroku
|
closed
|
CVE-2022-25881 (Medium) detected in http-cache-semantics-4.1.0.tgz - autoclosed
|
security vulnerability
|
## CVE-2022-25881 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>http-cache-semantics-4.1.0.tgz</b></p></summary>
<p>Parses Cache-Control and other headers. Helps building correct HTTP caches and proxies</p>
<p>Library home page: <a href="https://registry.npmjs.org/http-cache-semantics/-/http-cache-semantics-4.1.0.tgz">https://registry.npmjs.org/http-cache-semantics/-/http-cache-semantics-4.1.0.tgz</a></p>
<p>Path to dependency file: /package.json</p>
<p>Path to vulnerable library: /node_modules/npm/node_modules/http-cache-semantics/package.json,/node_modules/http-cache-semantics/package.json</p>
<p>
Dependency Hierarchy:
- semantic-release-19.0.2.tgz (Root Library)
- npm-9.0.0.tgz
- npm-8.4.1.tgz
- make-fetch-happen-10.0.0.tgz
- :x: **http-cache-semantics-4.1.0.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/pustovitDmytro/semantic-release-heroku/commit/d25b5ab9c76b5b79d633715f44515b5a3faa3923">d25b5ab9c76b5b79d633715f44515b5a3faa3923</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
This affects versions of the package http-cache-semantics before 4.1.1. The issue can be exploited via malicious request header values sent to a server, when that server reads the cache policy from the request using this library.
<p>Publish Date: 2023-01-31
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2022-25881>CVE-2022-25881</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.3</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: Low
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://www.cve.org/CVERecord?id=CVE-2022-25881">https://www.cve.org/CVERecord?id=CVE-2022-25881</a></p>
<p>Release Date: 2023-01-31</p>
<p>Fix Resolution: http-cache-semantics - 4.1.1</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
CVE-2022-25881 (Medium) detected in http-cache-semantics-4.1.0.tgz - autoclosed - ## CVE-2022-25881 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>http-cache-semantics-4.1.0.tgz</b></p></summary>
<p>Parses Cache-Control and other headers. Helps building correct HTTP caches and proxies</p>
<p>Library home page: <a href="https://registry.npmjs.org/http-cache-semantics/-/http-cache-semantics-4.1.0.tgz">https://registry.npmjs.org/http-cache-semantics/-/http-cache-semantics-4.1.0.tgz</a></p>
<p>Path to dependency file: /package.json</p>
<p>Path to vulnerable library: /node_modules/npm/node_modules/http-cache-semantics/package.json,/node_modules/http-cache-semantics/package.json</p>
<p>
Dependency Hierarchy:
- semantic-release-19.0.2.tgz (Root Library)
- npm-9.0.0.tgz
- npm-8.4.1.tgz
- make-fetch-happen-10.0.0.tgz
- :x: **http-cache-semantics-4.1.0.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/pustovitDmytro/semantic-release-heroku/commit/d25b5ab9c76b5b79d633715f44515b5a3faa3923">d25b5ab9c76b5b79d633715f44515b5a3faa3923</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
This affects versions of the package http-cache-semantics before 4.1.1. The issue can be exploited via malicious request header values sent to a server, when that server reads the cache policy from the request using this library.
<p>Publish Date: 2023-01-31
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2022-25881>CVE-2022-25881</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.3</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: Low
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://www.cve.org/CVERecord?id=CVE-2022-25881">https://www.cve.org/CVERecord?id=CVE-2022-25881</a></p>
<p>Release Date: 2023-01-31</p>
<p>Fix Resolution: http-cache-semantics - 4.1.1</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_process
|
cve medium detected in http cache semantics tgz autoclosed cve medium severity vulnerability vulnerable library http cache semantics tgz parses cache control and other headers helps building correct http caches and proxies library home page a href path to dependency file package json path to vulnerable library node modules npm node modules http cache semantics package json node modules http cache semantics package json dependency hierarchy semantic release tgz root library npm tgz npm tgz make fetch happen tgz x http cache semantics tgz vulnerable library found in head commit a href found in base branch master vulnerability details this affects versions of the package http cache semantics before the issue can be exploited via malicious request header values sent to a server when that server reads the cache policy from the request using this library publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact low for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution http cache semantics step up your open source security game with mend
| 0
|
10,368
| 13,188,513,775
|
IssuesEvent
|
2020-08-13 06:38:13
|
prisma/prisma
|
https://api.github.com/repos/prisma/prisma
|
closed
|
Fluent API chaining seems to be broken as of 2.3.0
|
bug/2-confirmed kind/regression process/candidate team/typescript
|
<!--
Thanks for helping us improve Prisma! 🙏 Please follow the sections in the template and provide as much information as possible about your problem, e.g. by setting the `DEBUG="*"` environment variable and enabling additional logging output in Prisma Client.
Learn more about writing proper bug reports here: https://pris.ly/d/bug-reports
-->
## Bug description
The query chaining part of the fluent API is throwing an error when chaining anything more than 1 relation, e.g.:
```
const customer = await prisma.booking
.findOne({ where: { id: booking.id } })
.property()
.customer()
```
Throws:
```
12 const customer = await prisma.booking
13 .findOne({ where: { id: booking.id } })
14 .property()
→ 15 .customer({
where: {
id: 'ckdprh4rp000092ch4d7pus08'
},
select: {
property: {
~~~~~~~~
select: {
customer: true
}
},
? id?: true,
? customer?: true,
? customerId?: true,
? bookings?: true
}
})
Unknown field `property` for select statement on model Property. Available options are listed in green.
```
## How to reproduce
1. Create schema
```
datasource db {
provider = "postgresql"
url = env("DATABASE_URL")
}
generator client {
provider = "prisma-client-js"
}
model Customer {
id String @default(cuid()) @id
// children
properties Property[]
}
model Property {
id String @default(cuid()) @id
// parents
customer Customer @relation(fields: [customerId], references: [id])
customerId String
// children
bookings Booking[]
}
model Booking {
id String @default(cuid()) @id
// parents
property Property @relation(fields: [propertyId], references: [id])
propertyId String
}
```
2. Run this script:
```
const { PrismaClient } = require('@prisma/client')
const prisma = new PrismaClient()
const main = async () => {
await prisma.executeRaw('TRUNCATE "Customer" CASCADE')
const booking = await prisma.booking.create({
data: { property: { create: { customer: { create: {} } } } }
})
const customer = await prisma.booking
.findOne({ where: { id: booking.id } })
.property()
.customer()
console.log(customer)
}
main().catch(err => console.log(err)).finally(async () => {
await prisma.disconnect()
})
```
## Expected behavior
Should work as before v2.3+
## Environment & setup
<!-- In which environment does the problem occur -->
- OS: Mac OS
- Database: PostgreSQL
- Node.js version: 14.5.0
- Prisma version:
<!--[Run `prisma -v` to see your Prisma version and paste it between the ´´´]-->
```
2.4.1 (seems to be broken as of 2.3.0)
```
|
1.0
|
Fluent API chaining seems to be broken as of 2.3.0 - <!--
Thanks for helping us improve Prisma! 🙏 Please follow the sections in the template and provide as much information as possible about your problem, e.g. by setting the `DEBUG="*"` environment variable and enabling additional logging output in Prisma Client.
Learn more about writing proper bug reports here: https://pris.ly/d/bug-reports
-->
## Bug description
The query chaining part of the fluent API is throwing an error when chaining anything more than 1 relation, e.g.:
```
const customer = await prisma.booking
.findOne({ where: { id: booking.id } })
.property()
.customer()
```
Throws:
```
12 const customer = await prisma.booking
13 .findOne({ where: { id: booking.id } })
14 .property()
→ 15 .customer({
where: {
id: 'ckdprh4rp000092ch4d7pus08'
},
select: {
property: {
~~~~~~~~
select: {
customer: true
}
},
? id?: true,
? customer?: true,
? customerId?: true,
? bookings?: true
}
})
Unknown field `property` for select statement on model Property. Available options are listed in green.
```
## How to reproduce
1. Create schema
```
datasource db {
provider = "postgresql"
url = env("DATABASE_URL")
}
generator client {
provider = "prisma-client-js"
}
model Customer {
id String @default(cuid()) @id
// children
properties Property[]
}
model Property {
id String @default(cuid()) @id
// parents
customer Customer @relation(fields: [customerId], references: [id])
customerId String
// children
bookings Booking[]
}
model Booking {
id String @default(cuid()) @id
// parents
property Property @relation(fields: [propertyId], references: [id])
propertyId String
}
```
2. Run this script:
```
const { PrismaClient } = require('@prisma/client')
const prisma = new PrismaClient()
const main = async () => {
await prisma.executeRaw('TRUNCATE "Customer" CASCADE')
const booking = await prisma.booking.create({
data: { property: { create: { customer: { create: {} } } } }
})
const customer = await prisma.booking
.findOne({ where: { id: booking.id } })
.property()
.customer()
console.log(customer)
}
main().catch(err => console.log(err)).finally(async () => {
await prisma.disconnect()
})
```
## Expected behavior
Should work as before v2.3+
## Environment & setup
<!-- In which environment does the problem occur -->
- OS: Mac OS
- Database: PostgreSQL
- Node.js version: 14.5.0
- Prisma version:
<!--[Run `prisma -v` to see your Prisma version and paste it between the ´´´]-->
```
2.4.1 (seems to be broken as of 2.3.0)
```
|
process
|
fluent api chaining seems to be broken as of thanks for helping us improve prisma 🙏 please follow the sections in the template and provide as much information as possible about your problem e g by setting the debug environment variable and enabling additional logging output in prisma client learn more about writing proper bug reports here bug description the query chaining part of the fluent api is throwing an error when chaining anything more than relation e g const customer await prisma booking findone where id booking id property customer throws const customer await prisma booking findone where id booking id property → customer where id select property select customer true id true customer true customerid true bookings true unknown field property for select statement on model property available options are listed in green how to reproduce create schema datasource db provider postgresql url env database url generator client provider prisma client js model customer id string default cuid id children properties property model property id string default cuid id parents customer customer relation fields references customerid string children bookings booking model booking id string default cuid id parents property property relation fields references propertyid string run this script const prismaclient require prisma client const prisma new prismaclient const main async await prisma executeraw truncate customer cascade const booking await prisma booking create data property create customer create const customer await prisma booking findone where id booking id property customer console log customer main catch err console log err finally async await prisma disconnect expected behavior should work as before environment setup os mac os database postgresql node js version prisma version seems to be broken as of
| 1
|
80,608
| 30,388,851,993
|
IssuesEvent
|
2023-07-13 04:48:29
|
zed-industries/community
|
https://api.github.com/repos/zed-industries/community
|
opened
|
Only one project name is shown after "Added a project & branch switcher under project name" in 0.94.3
|
defect triage admin read
|
### Check for existing issues
- [X] Completed
### Describe the bug / provide steps to reproduce it
For a project with more than one root folders, originally all of their names, along with their respective branch, would be shown in the titlebar.
After the introduction of "Added a project & branch switcher under project name" in 0.94.3, I can now only see one folder in the title bar, and can no longer see others, so there's also no way to switch their branch from the titlebar.
### Environment
Zed: v0.94.4 (stable)
OS: macOS 13.4.0
Memory: 16 GiB
Architecture: aarch64
### If applicable, add mockups / screenshots to help explain present your vision of the feature
_No response_
### If applicable, attach your `~/Library/Logs/Zed/Zed.log` file to this issue.
If you only need the most recent lines, you can run the `zed: open log` command palette action to see the last 1000.
_No response_
|
1.0
|
Only one project name is shown after "Added a project & branch switcher under project name" in 0.94.3 - ### Check for existing issues
- [X] Completed
### Describe the bug / provide steps to reproduce it
For a project with more than one root folders, originally all of their names, along with their respective branch, would be shown in the titlebar.
After the introduction of "Added a project & branch switcher under project name" in 0.94.3, I can now only see one folder in the title bar, and can no longer see others, so there's also no way to switch their branch from the titlebar.
### Environment
Zed: v0.94.4 (stable)
OS: macOS 13.4.0
Memory: 16 GiB
Architecture: aarch64
### If applicable, add mockups / screenshots to help explain present your vision of the feature
_No response_
### If applicable, attach your `~/Library/Logs/Zed/Zed.log` file to this issue.
If you only need the most recent lines, you can run the `zed: open log` command palette action to see the last 1000.
_No response_
|
non_process
|
only one project name is shown after added a project branch switcher under project name in check for existing issues completed describe the bug provide steps to reproduce it for a project with more than one root folders originally all of their names along with their respective branch would be shown in the titlebar after the introduction of added a project branch switcher under project name in i can now only see one folder in the title bar and can no longer see others so there s also no way to switch their branch from the titlebar environment zed stable os macos memory gib architecture if applicable add mockups screenshots to help explain present your vision of the feature no response if applicable attach your library logs zed zed log file to this issue if you only need the most recent lines you can run the zed open log command palette action to see the last no response
| 0
|
403,637
| 11,844,185,639
|
IssuesEvent
|
2020-03-24 04:56:55
|
minio/mc
|
https://api.github.com/repos/minio/mc
|
closed
|
mc cp -a not restoring correct mtime timestamp on files
|
community priority: medium
|
## Expected behavior
On Ubuntu 18.04 client, I'm using "mc cp -a" to try to preserve filesystem attributes when copying files to a MinIO cluster. Then, I use "mc cp -a -r" to download the files from the MinIO cluster to a fresh local directory on the client. I would generally expect that the atime and mtime info on the downloaded files to match the respective atime and mtime info as seen on the original files.
## Actual behavior
On the downloaded files, atime info appears to be restored but the mtime info is incorrect.
## Steps to reproduce the behavior
```
user1@ThinkPad-T440s:~/temp$ stat *
File: file1
Size: 104857600 Blocks: 204808 IO Block: 4096 regular file
Device: fd01h/64769d Inode: 28050162 Links: 1
Access: (0664/-rw-rw-r--) Uid: ( 1000/ user1) Gid: ( 1000/ user1)
Access: 2020-01-01 00:00:00.000000000 -0600
Modify: 2020-01-01 00:00:00.000000000 -0600
Change: 2020-03-05 16:37:28.735349803 -0600
Birth: -
File: file2
Size: 104857600 Blocks: 204800 IO Block: 4096 regular file
Device: fd01h/64769d Inode: 28050164 Links: 1
Access: (0664/-rw-rw-r--) Uid: ( 1000/ user1) Gid: ( 1000/ user1)
Access: 2020-01-01 00:00:00.000000000 -0600
Modify: 2020-01-01 00:00:00.000000000 -0600
Change: 2020-03-05 16:37:31.867370957 -0600
Birth: -
File: SHA256SUMS
Size: 392 Blocks: 8 IO Block: 4096 regular file
Device: fd01h/64769d Inode: 28050165 Links: 1
Access: (0664/-rw-rw-r--) Uid: ( 1000/ user1) Gid: ( 1000/ user1)
Access: 2020-03-05 10:40:23.710870128 -0600
Modify: 2019-02-28 10:54:26.000000000 -0600
Change: 2020-03-05 10:38:54.494151208 -0600
Birth: -
File: ubuntu-16.04.6-server-amd64.iso
Size: 915406848 Blocks: 1787912 IO Block: 4096 regular file
Device: fd01h/64769d Inode: 28050167 Links: 1
Access: (0664/-rw-rw-r--) Uid: ( 1000/ user1) Gid: ( 1000/ user1)
Access: 2020-03-05 10:40:23.762870357 -0600
Modify: 2019-02-26 18:07:02.000000000 -0600
Change: 2020-03-05 10:39:08.830315553 -0600
Birth: -
```
```
user1@ThinkPad-T440s:~/temp$ mc cp -a * cluster1-node1/bucket1
ubuntu-16.04.6-server-amd64.iso: 1.05 GiB / 1.05 GiB ┃▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓┃ 52.52 MiB/s 20s
```
```
user1@ThinkPad-T440s:~/temp$ mc stat -r cluster1-node1/bucket1
Name : bucket1/SHA256SUMS
Date : 2020-03-05 17:17:48 CST
Size : 392 B
ETag : 212f8f5a3892c1d7365d7faaa4734c1b-1
Type : file
Metadata :
Content-Type : application/octet-stream
X-Amz-Meta-Mc-Attrs: atime:1583426423/ctime:1583426334/gid:1000/gname:user1/mode:33204/mtime:1551372866/uid:1000/uname:user1
Name : bucket1/file1
Date : 2020-03-05 17:17:53 CST
Size : 100 MiB
ETag : d410ebd044f1206792fc813c69f2a0ef-1
Type : file
Metadata :
Content-Type : application/octet-stream
X-Amz-Meta-Mc-Attrs: atime:1577858400/ctime:1583447848/gid:1000/gname:user1/mode:33204/mtime:1577858400/uid:1000/uname:user1
Name : bucket1/file2
Date : 2020-03-05 17:17:52 CST
Size : 100 MiB
ETag : 2061725f81a8a8f5e6555824f9550294-1
Type : file
Metadata :
Content-Type : application/octet-stream
X-Amz-Meta-Mc-Attrs: atime:1577858400/ctime:1583447851/gid:1000/gname:user1/mode:33204/mtime:1577858400/uid:1000/uname:user1
Name : bucket1/ubuntu-16.04.6-server-amd64.iso
Date : 2020-03-05 17:18:08 CST
Size : 873 MiB
ETag : 3fffa14289b52f98b5ba141d7addd6a2-7
Type : file
Metadata :
Content-Type : application/x-iso9660-image
X-Amz-Meta-Mc-Attrs: atime:1583426423/ctime:1583426348/gid:1000/gname:user1/mode:33204/mtime:1551226022/uid:1000/uname:user1
```
file1 and file2, atime, Epoch conversion - January 1, 2020
file1 and file2, mtime, Epoch conversion - January 1, 2020
SHA256SUMS, atime, Epoch conversion - March 5, 2020
SHA256SUMS, mtime, Epoch conversion - February 28, 2019
ubuntu-16.04.6-server-amd64.iso, atime, Epoch conversion - March 5, 2020
ubuntu-16.04.6-server-amd64.iso, mtime, Epoch conversion - February 26, 2019
So, this looks good so far as MinIO is preserving the timestamps.
Let's download the files to my restore directory:
```
user1@ThinkPad-T440s:~/restore$ mc cp -a -r cluster1-node1/bucket1/ .
...ubuntu-16.04.6-server-amd64.iso: 1.05 GiB / 1.05 GiB ┃▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓┃ 78.07 MiB/s 13s
```
```
user1@ThinkPad-T440s:~/restore$ stat *
File: file1
Size: 104857600 Blocks: 204800 IO Block: 4096 regular file
Device: fd01h/64769d Inode: 28704838 Links: 1
Access: (0664/-rw-rw-r--) Uid: ( 1000/ user1) Gid: ( 1000/ user1)
Access: 2020-01-01 00:00:00.000000000 -0600
Modify: 2020-03-05 16:37:28.000000000 -0600
Change: 2020-03-05 17:36:32.982639374 -0600
Birth: -
File: file2
Size: 104857600 Blocks: 204808 IO Block: 4096 regular file
Device: fd01h/64769d Inode: 28704839 Links: 1
Access: (0664/-rw-rw-r--) Uid: ( 1000/ user1) Gid: ( 1000/ user1)
Access: 2020-01-01 00:00:00.000000000 -0600
Modify: 2020-03-05 16:37:31.000000000 -0600
Change: 2020-03-05 17:36:32.506635920 -0600
Birth: -
File: SHA256SUMS
Size: 392 Blocks: 8 IO Block: 4096 regular file
Device: fd01h/64769d Inode: 28704836 Links: 1
Access: (0664/-rw-rw-r--) Uid: ( 1000/ user1) Gid: ( 1000/ user1)
Access: 2020-03-05 10:40:23.000000000 -0600
Modify: 2020-03-05 10:38:54.000000000 -0600
Change: 2020-03-05 17:36:29.390613310 -0600
Birth: -
File: ubuntu-16.04.6-server-amd64.iso
Size: 915406848 Blocks: 1787912 IO Block: 4096 regular file
Device: fd01h/64769d Inode: 28704837 Links: 1
Access: (0664/-rw-rw-r--) Uid: ( 1000/ user1) Gid: ( 1000/ user1)
Access: 2020-03-05 10:40:23.000000000 -0600
Modify: 2020-03-05 10:39:08.000000000 -0600
Change: 2020-03-05 17:36:43.006712135 -0600
Birth: -
```
Notice that atimes for the files are correct but mtimes are quite wrong.
## mc --version
mc version RELEASE.2020-02-25T18-10-03Z
## System information
Ubuntu 18.04 on client and MinIO cluster.
```
user1@ThinkPad-T440s:~$ mc admin info cluster1-node1
● 10.199.0.11:9000
Uptime: 7 hours
Version: 2020-03-05T01:04:19Z
Network: 4/4 OK
Drives: 2/2 OK
● 10.199.0.12:9000
Uptime: 7 hours
Version: 2020-03-05T01:04:19Z
Network: 4/4 OK
Drives: 2/2 OK
● 10.199.0.13:9000
Uptime: 7 hours
Version: 2020-03-05T01:04:19Z
Network: 4/4 OK
Drives: 2/2 OK
● 10.199.0.10:9000
Uptime: 7 hours
Version: 2020-03-05T01:04:19Z
Network: 4/4 OK
Drives: 2/2 OK
47 GiB Used, 6 Buckets, 12 Objects
8 drives online, 0 drives offline
```
|
1.0
|
mc cp -a not restoring correct mtime timestamp on files - ## Expected behavior
On Ubuntu 18.04 client, I'm using "mc cp -a" to try to preserve filesystem attributes when copying files to a MinIO cluster. Then, I use "mc cp -a -r" to download the files from the MinIO cluster to a fresh local directory on the client. I would generally expect that the atime and mtime info on the downloaded files to match the respective atime and mtime info as seen on the original files.
## Actual behavior
On the downloaded files, atime info appears to be restored but the mtime info is incorrect.
## Steps to reproduce the behavior
```
user1@ThinkPad-T440s:~/temp$ stat *
File: file1
Size: 104857600 Blocks: 204808 IO Block: 4096 regular file
Device: fd01h/64769d Inode: 28050162 Links: 1
Access: (0664/-rw-rw-r--) Uid: ( 1000/ user1) Gid: ( 1000/ user1)
Access: 2020-01-01 00:00:00.000000000 -0600
Modify: 2020-01-01 00:00:00.000000000 -0600
Change: 2020-03-05 16:37:28.735349803 -0600
Birth: -
File: file2
Size: 104857600 Blocks: 204800 IO Block: 4096 regular file
Device: fd01h/64769d Inode: 28050164 Links: 1
Access: (0664/-rw-rw-r--) Uid: ( 1000/ user1) Gid: ( 1000/ user1)
Access: 2020-01-01 00:00:00.000000000 -0600
Modify: 2020-01-01 00:00:00.000000000 -0600
Change: 2020-03-05 16:37:31.867370957 -0600
Birth: -
File: SHA256SUMS
Size: 392 Blocks: 8 IO Block: 4096 regular file
Device: fd01h/64769d Inode: 28050165 Links: 1
Access: (0664/-rw-rw-r--) Uid: ( 1000/ user1) Gid: ( 1000/ user1)
Access: 2020-03-05 10:40:23.710870128 -0600
Modify: 2019-02-28 10:54:26.000000000 -0600
Change: 2020-03-05 10:38:54.494151208 -0600
Birth: -
File: ubuntu-16.04.6-server-amd64.iso
Size: 915406848 Blocks: 1787912 IO Block: 4096 regular file
Device: fd01h/64769d Inode: 28050167 Links: 1
Access: (0664/-rw-rw-r--) Uid: ( 1000/ user1) Gid: ( 1000/ user1)
Access: 2020-03-05 10:40:23.762870357 -0600
Modify: 2019-02-26 18:07:02.000000000 -0600
Change: 2020-03-05 10:39:08.830315553 -0600
Birth: -
```
```
user1@ThinkPad-T440s:~/temp$ mc cp -a * cluster1-node1/bucket1
ubuntu-16.04.6-server-amd64.iso: 1.05 GiB / 1.05 GiB ┃▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓┃ 52.52 MiB/s 20s
```
```
user1@ThinkPad-T440s:~/temp$ mc stat -r cluster1-node1/bucket1
Name : bucket1/SHA256SUMS
Date : 2020-03-05 17:17:48 CST
Size : 392 B
ETag : 212f8f5a3892c1d7365d7faaa4734c1b-1
Type : file
Metadata :
Content-Type : application/octet-stream
X-Amz-Meta-Mc-Attrs: atime:1583426423/ctime:1583426334/gid:1000/gname:user1/mode:33204/mtime:1551372866/uid:1000/uname:user1
Name : bucket1/file1
Date : 2020-03-05 17:17:53 CST
Size : 100 MiB
ETag : d410ebd044f1206792fc813c69f2a0ef-1
Type : file
Metadata :
Content-Type : application/octet-stream
X-Amz-Meta-Mc-Attrs: atime:1577858400/ctime:1583447848/gid:1000/gname:user1/mode:33204/mtime:1577858400/uid:1000/uname:user1
Name : bucket1/file2
Date : 2020-03-05 17:17:52 CST
Size : 100 MiB
ETag : 2061725f81a8a8f5e6555824f9550294-1
Type : file
Metadata :
Content-Type : application/octet-stream
X-Amz-Meta-Mc-Attrs: atime:1577858400/ctime:1583447851/gid:1000/gname:user1/mode:33204/mtime:1577858400/uid:1000/uname:user1
Name : bucket1/ubuntu-16.04.6-server-amd64.iso
Date : 2020-03-05 17:18:08 CST
Size : 873 MiB
ETag : 3fffa14289b52f98b5ba141d7addd6a2-7
Type : file
Metadata :
Content-Type : application/x-iso9660-image
X-Amz-Meta-Mc-Attrs: atime:1583426423/ctime:1583426348/gid:1000/gname:user1/mode:33204/mtime:1551226022/uid:1000/uname:user1
```
file1 and file2, atime, Epoch conversion - January 1, 2020
file1 and file2, mtime, Epoch conversion - January 1, 2020
SHA256SUMS, atime, Epoch conversion - March 5, 2020
SHA256SUMS, mtime, Epoch conversion - February 28, 2019
ubuntu-16.04.6-server-amd64.iso, atime, Epoch conversion - March 5, 2020
ubuntu-16.04.6-server-amd64.iso, mtime, Epoch conversion - February 26, 2019
So, this looks good so far as MinIO is preserving the timestamps.
Let's download the files to my restore directory:
```
user1@ThinkPad-T440s:~/restore$ mc cp -a -r cluster1-node1/bucket1/ .
...ubuntu-16.04.6-server-amd64.iso: 1.05 GiB / 1.05 GiB ┃▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓┃ 78.07 MiB/s 13s
```
```
user1@ThinkPad-T440s:~/restore$ stat *
File: file1
Size: 104857600 Blocks: 204800 IO Block: 4096 regular file
Device: fd01h/64769d Inode: 28704838 Links: 1
Access: (0664/-rw-rw-r--) Uid: ( 1000/ user1) Gid: ( 1000/ user1)
Access: 2020-01-01 00:00:00.000000000 -0600
Modify: 2020-03-05 16:37:28.000000000 -0600
Change: 2020-03-05 17:36:32.982639374 -0600
Birth: -
File: file2
Size: 104857600 Blocks: 204808 IO Block: 4096 regular file
Device: fd01h/64769d Inode: 28704839 Links: 1
Access: (0664/-rw-rw-r--) Uid: ( 1000/ user1) Gid: ( 1000/ user1)
Access: 2020-01-01 00:00:00.000000000 -0600
Modify: 2020-03-05 16:37:31.000000000 -0600
Change: 2020-03-05 17:36:32.506635920 -0600
Birth: -
File: SHA256SUMS
Size: 392 Blocks: 8 IO Block: 4096 regular file
Device: fd01h/64769d Inode: 28704836 Links: 1
Access: (0664/-rw-rw-r--) Uid: ( 1000/ user1) Gid: ( 1000/ user1)
Access: 2020-03-05 10:40:23.000000000 -0600
Modify: 2020-03-05 10:38:54.000000000 -0600
Change: 2020-03-05 17:36:29.390613310 -0600
Birth: -
File: ubuntu-16.04.6-server-amd64.iso
Size: 915406848 Blocks: 1787912 IO Block: 4096 regular file
Device: fd01h/64769d Inode: 28704837 Links: 1
Access: (0664/-rw-rw-r--) Uid: ( 1000/ user1) Gid: ( 1000/ user1)
Access: 2020-03-05 10:40:23.000000000 -0600
Modify: 2020-03-05 10:39:08.000000000 -0600
Change: 2020-03-05 17:36:43.006712135 -0600
Birth: -
```
Notice that atimes for the files are correct but mtimes are quite wrong.
## mc --version
mc version RELEASE.2020-02-25T18-10-03Z
## System information
Ubuntu 18.04 on client and MinIO cluster.
```
user1@ThinkPad-T440s:~$ mc admin info cluster1-node1
● 10.199.0.11:9000
Uptime: 7 hours
Version: 2020-03-05T01:04:19Z
Network: 4/4 OK
Drives: 2/2 OK
● 10.199.0.12:9000
Uptime: 7 hours
Version: 2020-03-05T01:04:19Z
Network: 4/4 OK
Drives: 2/2 OK
● 10.199.0.13:9000
Uptime: 7 hours
Version: 2020-03-05T01:04:19Z
Network: 4/4 OK
Drives: 2/2 OK
● 10.199.0.10:9000
Uptime: 7 hours
Version: 2020-03-05T01:04:19Z
Network: 4/4 OK
Drives: 2/2 OK
47 GiB Used, 6 Buckets, 12 Objects
8 drives online, 0 drives offline
```
|
non_process
|
mc cp a not restoring correct mtime timestamp on files expected behavior on ubuntu client i m using mc cp a to try to preserve filesystem attributes when copying files to a minio cluster then i use mc cp a r to download the files from the minio cluster to a fresh local directory on the client i would generally expect that the atime and mtime info on the downloaded files to match the respective atime and mtime info as seen on the original files actual behavior on the downloaded files atime info appears to be restored but the mtime info is incorrect steps to reproduce the behavior thinkpad temp stat file size blocks io block regular file device inode links access rw rw r uid gid access modify change birth file size blocks io block regular file device inode links access rw rw r uid gid access modify change birth file size blocks io block regular file device inode links access rw rw r uid gid access modify change birth file ubuntu server iso size blocks io block regular file device inode links access rw rw r uid gid access modify change birth thinkpad temp mc cp a ubuntu server iso gib gib ┃▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓┃ mib s thinkpad temp mc stat r name date cst size b etag type file metadata content type application octet stream x amz meta mc attrs atime ctime gid gname mode mtime uid uname name date cst size mib etag type file metadata content type application octet stream x amz meta mc attrs atime ctime gid gname mode mtime uid uname name date cst size mib etag type file metadata content type application octet stream x amz meta mc attrs atime ctime gid gname mode mtime uid uname name ubuntu server iso date cst size mib etag type file metadata content type application x image x amz meta mc attrs atime ctime gid gname mode mtime uid uname and atime epoch conversion january and mtime epoch conversion january atime epoch conversion march mtime epoch conversion february ubuntu server iso atime epoch conversion march ubuntu server iso mtime epoch conversion february so this looks good so far as minio is preserving the timestamps let s download the files to my restore directory thinkpad restore mc cp a r ubuntu server iso gib gib ┃▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓┃ mib s thinkpad restore stat file size blocks io block regular file device inode links access rw rw r uid gid access modify change birth file size blocks io block regular file device inode links access rw rw r uid gid access modify change birth file size blocks io block regular file device inode links access rw rw r uid gid access modify change birth file ubuntu server iso size blocks io block regular file device inode links access rw rw r uid gid access modify change birth notice that atimes for the files are correct but mtimes are quite wrong mc version mc version release system information ubuntu on client and minio cluster thinkpad mc admin info ● uptime hours version network ok drives ok ● uptime hours version network ok drives ok ● uptime hours version network ok drives ok ● uptime hours version network ok drives ok gib used buckets objects drives online drives offline
| 0
|
379,830
| 11,236,347,679
|
IssuesEvent
|
2020-01-09 10:16:08
|
mozilla/addons-server
|
https://api.github.com/repos/mozilla/addons-server
|
reopened
|
batch import_blocklist processing and don't try to reimport existing blocks
|
component: admin tools priority: p3
|
#12460 was a little naively written - it assumed that regex searches on the database would be relatively speedy, and the command was pretty much guaranteed to finish so would only ever need to be executed once per env. The experience from addons-dev is some of the regexs take a very long time to run and not all blocks are processed before the command times out - either a database timeout or the dev instance is replaced due to a new deploy. And then if you start it again the blocks have to be processed again from the beginning.
|
1.0
|
batch import_blocklist processing and don't try to reimport existing blocks - #12460 was a little naively written - it assumed that regex searches on the database would be relatively speedy, and the command was pretty much guaranteed to finish so would only ever need to be executed once per env. The experience from addons-dev is some of the regexs take a very long time to run and not all blocks are processed before the command times out - either a database timeout or the dev instance is replaced due to a new deploy. And then if you start it again the blocks have to be processed again from the beginning.
|
non_process
|
batch import blocklist processing and don t try to reimport existing blocks was a little naively written it assumed that regex searches on the database would be relatively speedy and the command was pretty much guaranteed to finish so would only ever need to be executed once per env the experience from addons dev is some of the regexs take a very long time to run and not all blocks are processed before the command times out either a database timeout or the dev instance is replaced due to a new deploy and then if you start it again the blocks have to be processed again from the beginning
| 0
|
6,957
| 10,113,968,047
|
IssuesEvent
|
2019-07-30 18:01:46
|
material-components/material-components-ios
|
https://api.github.com/repos/material-components/material-components-ios
|
closed
|
Internal bug: b/80076887
|
type:Process
|
## Definition of done
- [ ] We've set up an internal tracker for GitHub bugs.
---
This is an internal issue. If you are a Googler, please visit [b/80076887](http://b/80076887) for more details.
<!-- Auto-generated content below, do not modify -->
---
#### Internal data
- Associated internal bug: [b/80076887](http://b/80076887)
|
1.0
|
Internal bug: b/80076887 - ## Definition of done
- [ ] We've set up an internal tracker for GitHub bugs.
---
This is an internal issue. If you are a Googler, please visit [b/80076887](http://b/80076887) for more details.
<!-- Auto-generated content below, do not modify -->
---
#### Internal data
- Associated internal bug: [b/80076887](http://b/80076887)
|
process
|
internal bug b definition of done we ve set up an internal tracker for github bugs this is an internal issue if you are a googler please visit for more details internal data associated internal bug
| 1
|
596,196
| 18,099,878,344
|
IssuesEvent
|
2021-09-22 13:13:06
|
Sage-Bionetworks/rocc-app
|
https://api.github.com/repos/Sage-Bionetworks/rocc-app
|
closed
|
Seed MICCAI Challenge data
|
Priority: Critical
|
@jiaxinmachine88 Collected information about 6 MICCAI challenges. The next task is to convert this information to JSON objects so they can be seeded in the ROCC app. For now ignore the challenge that does not have a "standard" challenge platform.
In addition, create two additional users from MICCAI, Spyros and Annika Reinke (extra info given via Slack)
|
1.0
|
Seed MICCAI Challenge data - @jiaxinmachine88 Collected information about 6 MICCAI challenges. The next task is to convert this information to JSON objects so they can be seeded in the ROCC app. For now ignore the challenge that does not have a "standard" challenge platform.
In addition, create two additional users from MICCAI, Spyros and Annika Reinke (extra info given via Slack)
|
non_process
|
seed miccai challenge data collected information about miccai challenges the next task is to convert this information to json objects so they can be seeded in the rocc app for now ignore the challenge that does not have a standard challenge platform in addition create two additional users from miccai spyros and annika reinke extra info given via slack
| 0
|
2,034
| 4,847,340,802
|
IssuesEvent
|
2016-11-10 14:43:07
|
Alfresco/alfresco-ng2-components
|
https://api.github.com/repos/Alfresco/alfresco-ng2-components
|
opened
|
Visibility of widgets not taken into consideration for form within start event
|
browser: all bug comp: activiti-processList
|
Form visibility not taken into consideration for start events which contain forms when starting a process.
**activiti**

**component**

N.b. this is a form, that may help, which has been compressed
[form with all widgets.json.zip](https://github.com/Alfresco/alfresco-ng2-components/files/583577/form.with.all.widgets.json.zip)
|
1.0
|
Visibility of widgets not taken into consideration for form within start event - Form visibility not taken into consideration for start events which contain forms when starting a process.
**activiti**

**component**

N.b. this is a form, that may help, which has been compressed
[form with all widgets.json.zip](https://github.com/Alfresco/alfresco-ng2-components/files/583577/form.with.all.widgets.json.zip)
|
process
|
visibility of widgets not taken into consideration for form within start event form visibility not taken into consideration for start events which contain forms when starting a process activiti component n b this is a form that may help which has been compressed
| 1
|
71,265
| 18,668,881,692
|
IssuesEvent
|
2021-10-30 10:10:20
|
neovim/neovim
|
https://api.github.com/repos/neovim/neovim
|
closed
|
Build failure on LuaJIT with parallel build
|
build bug-regression dependencies
|
### Neovim version (nvim -v)
head
### Vim (not Nvim) behaves the same?
n/a
### Operating system/version
ubuntu 20.04 (via WSL)
### Terminal name/version
n/a
### $TERM environment variable
n/a
### Installation
n/a
### How to reproduce the issue
make CMAKE_BUILD_TYPE=RelWithDebInfo CMAKE_INSTALL_PREFIX=/home/dch/.local/nvim -j8
### Expected behavior
compiles
### Actual behavior
Many errors, starting like this:
```
/usr/bin/ld: libluajit.a(ljamalg.o): in function `lj_err_throw':
/home/dch/neovim/.deps/build/src/luajit/src/lj_err.c:735: multiple definition of `lj_err_throw'; libluajit.a(lj_err.o):/home/dch/neovim/.deps/build/src/luajit/src/lj_err.c:735: first defined here
/usr/bin/ld: libluajit.a(ljamalg.o): in function `lua_atpanic':
/home/dch/neovim/.deps/build/src/luajit/src/lj_err.c:1052: multiple definition of `lua_atpanic'; libluajit.a(lj_err.o):/home/dch/neovim/.deps/build/src/luajit/src/lj_err.c:1052: first defined here
/usr/bin/ld: libluajit.a(ljamalg.o): in function `lj_func_closeuv':
/home/dch/neovim/.deps/build/src/luajit/src/lj_func.c:84: multiple definition of `lj_func_closeuv'; libluajit.a(lj_func.o):/home/dch/neovim/.deps/build/src/luajit/src/lj_func.c:84: first defined here
/usr/bin/ld: libluajit.a(ljamalg.o): in function `lua_setlocal':
/home/dch/neovim/.deps/build/src/luajit/src/lj_debug.c:420: multiple definition of `lua_setlocal'; libluajit.a(lj_debug.o):/home/dch/neovim/.deps/build/src/luajit/src/lj_debug.c:420: first defined here
/usr/bin/ld: libluajit.a(ljamalg.o): in function `lua_getstack':
/home/dch/neovim/.deps/build/src/luajit/src/lj_debug.c:538: multiple definition of `lua_getstack'; libluajit.a(lj_debug.o):/home/dch/neovim/.deps/build/src/luajit/src/lj_debug.c:538: first defined here
/usr/bin/ld: libluajit.a(ljamalg.o): in function `lj_state_growstack':
/home/dch/neovim/.deps/build/src/luajit/src/lj_state.c:104: multiple definition of `lj_state_growstack'; libluajit.a(lj_state.o):/home/dch/neovim/.deps/build/src/luajit/src/lj_state.c:104: first defined here
/usr/bin/ld: libluajit.a(ljamalg.o): in function `lj_str_new':
/home/dch/neovim/.deps/build/src/luajit/src/lj_str.c:315: multiple definition of `lj_str_new'; libluajit.a(lj_str.o):/home/dch/neovim/.deps/build/src/luajit/src/lj_str.c:315: first defined here
/usr/bin/ld: libluajit.a(ljamalg.o): in function `luaL_where':
/home/dch/neovim/.deps/build/src/luajit/src/lj_err.c:1078: multiple definition of `luaL_where'; libluajit.a(lj_err.o):/home/dch/neovim/.deps/build/src/luajit/src/lj_err.c:1078: first defined here
/usr/bin/ld: libluajit.a(ljamalg.o):(.data.rel.local+0x0): multiple definition of `lj_err_allmsg'; libluajit.a(lj_err.o):(.data.rel.local+0x0): first defined here
/usr/bin/ld: libluajit.a(ljamalg.o): in function `lj_err_trace':
/home/dch/neovim/.deps/build/src/luajit/src/lj_err.c:856: multiple definition of `lj_err_trace'; libluajit.a(lj_err.o):/home/dch/neovim/.deps/build/src/luajit/src/lj_err.c:856: first defined here
/usr/bin/ld: libluajit.a(ljamalg.o): in function `lua_error':
/home/dch/neovim/.deps/build/src/luajit/src/lj_err.c:1060: multiple definition of `lua_error'; libluajit.a(lj_err.o):/home/dch/neovim/.deps/build/src/luajit/src/lj_err.c:1060: first defined here
/usr/bin/ld: libluajit.a(ljamalg.o): in function `luaL_argerror':
/home/dch/neovim/.deps/build/src/luajit/src/lj_err.c:1066: multiple definition of `luaL_argerror'; libluajit.a(lj_err.o):/home/dch/neovim/.deps/build/src/luajit/src/lj_err.c:1066: first defined here
/usr/bin/ld: libluajit.a(ljamalg.o):(.data.rel.ro.local+0x10e0): multiple definition of `lj_obj_itypename'; libluajit.a(lj_obj.o):(.data.rel.ro.local+0x0): first defined here
/usr/bin/ld: libluajit.a(ljamalg.o): in function `luaL_typerror':
/home/dch/neovim/.deps/build/src/luajit/src/lj_err.c:1072: multiple definition of `luaL_typerror'; libluajit.a(lj_err.o):/home/dch/neovim/.deps/build/src/luajit/src/lj_err.c:1072: first defined here
/usr/bin/ld: libluajit.a(ljamalg.o):(.data.rel.ro.local+0x1160): multiple definition of `lj_obj_typename'; libluajit.a(lj_obj.o):(.data.rel.ro.local+0x80): f
...
```
git bisect says that 6acfbd810d31e8c2771a475388568925dc90d141 is the first commit where this fails
|
1.0
|
Build failure on LuaJIT with parallel build - ### Neovim version (nvim -v)
head
### Vim (not Nvim) behaves the same?
n/a
### Operating system/version
ubuntu 20.04 (via WSL)
### Terminal name/version
n/a
### $TERM environment variable
n/a
### Installation
n/a
### How to reproduce the issue
make CMAKE_BUILD_TYPE=RelWithDebInfo CMAKE_INSTALL_PREFIX=/home/dch/.local/nvim -j8
### Expected behavior
compiles
### Actual behavior
Many errors, starting like this:
```
/usr/bin/ld: libluajit.a(ljamalg.o): in function `lj_err_throw':
/home/dch/neovim/.deps/build/src/luajit/src/lj_err.c:735: multiple definition of `lj_err_throw'; libluajit.a(lj_err.o):/home/dch/neovim/.deps/build/src/luajit/src/lj_err.c:735: first defined here
/usr/bin/ld: libluajit.a(ljamalg.o): in function `lua_atpanic':
/home/dch/neovim/.deps/build/src/luajit/src/lj_err.c:1052: multiple definition of `lua_atpanic'; libluajit.a(lj_err.o):/home/dch/neovim/.deps/build/src/luajit/src/lj_err.c:1052: first defined here
/usr/bin/ld: libluajit.a(ljamalg.o): in function `lj_func_closeuv':
/home/dch/neovim/.deps/build/src/luajit/src/lj_func.c:84: multiple definition of `lj_func_closeuv'; libluajit.a(lj_func.o):/home/dch/neovim/.deps/build/src/luajit/src/lj_func.c:84: first defined here
/usr/bin/ld: libluajit.a(ljamalg.o): in function `lua_setlocal':
/home/dch/neovim/.deps/build/src/luajit/src/lj_debug.c:420: multiple definition of `lua_setlocal'; libluajit.a(lj_debug.o):/home/dch/neovim/.deps/build/src/luajit/src/lj_debug.c:420: first defined here
/usr/bin/ld: libluajit.a(ljamalg.o): in function `lua_getstack':
/home/dch/neovim/.deps/build/src/luajit/src/lj_debug.c:538: multiple definition of `lua_getstack'; libluajit.a(lj_debug.o):/home/dch/neovim/.deps/build/src/luajit/src/lj_debug.c:538: first defined here
/usr/bin/ld: libluajit.a(ljamalg.o): in function `lj_state_growstack':
/home/dch/neovim/.deps/build/src/luajit/src/lj_state.c:104: multiple definition of `lj_state_growstack'; libluajit.a(lj_state.o):/home/dch/neovim/.deps/build/src/luajit/src/lj_state.c:104: first defined here
/usr/bin/ld: libluajit.a(ljamalg.o): in function `lj_str_new':
/home/dch/neovim/.deps/build/src/luajit/src/lj_str.c:315: multiple definition of `lj_str_new'; libluajit.a(lj_str.o):/home/dch/neovim/.deps/build/src/luajit/src/lj_str.c:315: first defined here
/usr/bin/ld: libluajit.a(ljamalg.o): in function `luaL_where':
/home/dch/neovim/.deps/build/src/luajit/src/lj_err.c:1078: multiple definition of `luaL_where'; libluajit.a(lj_err.o):/home/dch/neovim/.deps/build/src/luajit/src/lj_err.c:1078: first defined here
/usr/bin/ld: libluajit.a(ljamalg.o):(.data.rel.local+0x0): multiple definition of `lj_err_allmsg'; libluajit.a(lj_err.o):(.data.rel.local+0x0): first defined here
/usr/bin/ld: libluajit.a(ljamalg.o): in function `lj_err_trace':
/home/dch/neovim/.deps/build/src/luajit/src/lj_err.c:856: multiple definition of `lj_err_trace'; libluajit.a(lj_err.o):/home/dch/neovim/.deps/build/src/luajit/src/lj_err.c:856: first defined here
/usr/bin/ld: libluajit.a(ljamalg.o): in function `lua_error':
/home/dch/neovim/.deps/build/src/luajit/src/lj_err.c:1060: multiple definition of `lua_error'; libluajit.a(lj_err.o):/home/dch/neovim/.deps/build/src/luajit/src/lj_err.c:1060: first defined here
/usr/bin/ld: libluajit.a(ljamalg.o): in function `luaL_argerror':
/home/dch/neovim/.deps/build/src/luajit/src/lj_err.c:1066: multiple definition of `luaL_argerror'; libluajit.a(lj_err.o):/home/dch/neovim/.deps/build/src/luajit/src/lj_err.c:1066: first defined here
/usr/bin/ld: libluajit.a(ljamalg.o):(.data.rel.ro.local+0x10e0): multiple definition of `lj_obj_itypename'; libluajit.a(lj_obj.o):(.data.rel.ro.local+0x0): first defined here
/usr/bin/ld: libluajit.a(ljamalg.o): in function `luaL_typerror':
/home/dch/neovim/.deps/build/src/luajit/src/lj_err.c:1072: multiple definition of `luaL_typerror'; libluajit.a(lj_err.o):/home/dch/neovim/.deps/build/src/luajit/src/lj_err.c:1072: first defined here
/usr/bin/ld: libluajit.a(ljamalg.o):(.data.rel.ro.local+0x1160): multiple definition of `lj_obj_typename'; libluajit.a(lj_obj.o):(.data.rel.ro.local+0x80): f
...
```
git bisect says that 6acfbd810d31e8c2771a475388568925dc90d141 is the first commit where this fails
|
non_process
|
build failure on luajit with parallel build neovim version nvim v head vim not nvim behaves the same n a operating system version ubuntu via wsl terminal name version n a term environment variable n a installation n a how to reproduce the issue make cmake build type relwithdebinfo cmake install prefix home dch local nvim expected behavior compiles actual behavior many errors starting like this usr bin ld libluajit a ljamalg o in function lj err throw home dch neovim deps build src luajit src lj err c multiple definition of lj err throw libluajit a lj err o home dch neovim deps build src luajit src lj err c first defined here usr bin ld libluajit a ljamalg o in function lua atpanic home dch neovim deps build src luajit src lj err c multiple definition of lua atpanic libluajit a lj err o home dch neovim deps build src luajit src lj err c first defined here usr bin ld libluajit a ljamalg o in function lj func closeuv home dch neovim deps build src luajit src lj func c multiple definition of lj func closeuv libluajit a lj func o home dch neovim deps build src luajit src lj func c first defined here usr bin ld libluajit a ljamalg o in function lua setlocal home dch neovim deps build src luajit src lj debug c multiple definition of lua setlocal libluajit a lj debug o home dch neovim deps build src luajit src lj debug c first defined here usr bin ld libluajit a ljamalg o in function lua getstack home dch neovim deps build src luajit src lj debug c multiple definition of lua getstack libluajit a lj debug o home dch neovim deps build src luajit src lj debug c first defined here usr bin ld libluajit a ljamalg o in function lj state growstack home dch neovim deps build src luajit src lj state c multiple definition of lj state growstack libluajit a lj state o home dch neovim deps build src luajit src lj state c first defined here usr bin ld libluajit a ljamalg o in function lj str new home dch neovim deps build src luajit src lj str c multiple definition of lj str new libluajit a lj str o home dch neovim deps build src luajit src lj str c first defined here usr bin ld libluajit a ljamalg o in function lual where home dch neovim deps build src luajit src lj err c multiple definition of lual where libluajit a lj err o home dch neovim deps build src luajit src lj err c first defined here usr bin ld libluajit a ljamalg o data rel local multiple definition of lj err allmsg libluajit a lj err o data rel local first defined here usr bin ld libluajit a ljamalg o in function lj err trace home dch neovim deps build src luajit src lj err c multiple definition of lj err trace libluajit a lj err o home dch neovim deps build src luajit src lj err c first defined here usr bin ld libluajit a ljamalg o in function lua error home dch neovim deps build src luajit src lj err c multiple definition of lua error libluajit a lj err o home dch neovim deps build src luajit src lj err c first defined here usr bin ld libluajit a ljamalg o in function lual argerror home dch neovim deps build src luajit src lj err c multiple definition of lual argerror libluajit a lj err o home dch neovim deps build src luajit src lj err c first defined here usr bin ld libluajit a ljamalg o data rel ro local multiple definition of lj obj itypename libluajit a lj obj o data rel ro local first defined here usr bin ld libluajit a ljamalg o in function lual typerror home dch neovim deps build src luajit src lj err c multiple definition of lual typerror libluajit a lj err o home dch neovim deps build src luajit src lj err c first defined here usr bin ld libluajit a ljamalg o data rel ro local multiple definition of lj obj typename libluajit a lj obj o data rel ro local f git bisect says that is the first commit where this fails
| 0
|
3,854
| 6,808,617,971
|
IssuesEvent
|
2017-11-04 05:39:01
|
Great-Hill-Corporation/quickBlocks
|
https://api.github.com/repos/Great-Hill-Corporation/quickBlocks
|
reopened
|
ethscan.py should be able to open multiple tabs
|
status-inprocess tools-scripts type-enhancement
|
If I enter the command:
ethscan.py 0x3003208e77edf3b088b122b5de3a6fc8c8ef679d
it opens the Etherscan website to that address (as it should).
If I enter the command:
ethscan.py 0x3003208e77edf3b088b122b5de3a6fc8c8ef679d 0x314159265dd8dbb310642f98f50c066173c1259b
It should open two tabs. I should be able to specify multiple tabs, and each tab should be able to be any of an address, a hash, or a block number, thus:
ethscan.py 0x3003208e77edf3b088b122b5de3a6fc8c8ef679d 4001001 0x....hash....
This is just an optimization, but it makes the `ethscan.py` tool more usable.
For example, I use it to test the `isContract` command which does a compare of two contracts for identical code. To double check that, I want to open etherscan on two, not just one, addresses.
|
1.0
|
ethscan.py should be able to open multiple tabs - If I enter the command:
ethscan.py 0x3003208e77edf3b088b122b5de3a6fc8c8ef679d
it opens the Etherscan website to that address (as it should).
If I enter the command:
ethscan.py 0x3003208e77edf3b088b122b5de3a6fc8c8ef679d 0x314159265dd8dbb310642f98f50c066173c1259b
It should open two tabs. I should be able to specify multiple tabs, and each tab should be able to be any of an address, a hash, or a block number, thus:
ethscan.py 0x3003208e77edf3b088b122b5de3a6fc8c8ef679d 4001001 0x....hash....
This is just an optimization, but it makes the `ethscan.py` tool more usable.
For example, I use it to test the `isContract` command which does a compare of two contracts for identical code. To double check that, I want to open etherscan on two, not just one, addresses.
|
process
|
ethscan py should be able to open multiple tabs if i enter the command ethscan py it opens the etherscan website to that address as it should if i enter the command ethscan py it should open two tabs i should be able to specify multiple tabs and each tab should be able to be any of an address a hash or a block number thus ethscan py hash this is just an optimization but it makes the ethscan py tool more usable for example i use it to test the iscontract command which does a compare of two contracts for identical code to double check that i want to open etherscan on two not just one addresses
| 1
|
21,461
| 29,498,383,757
|
IssuesEvent
|
2023-06-02 19:07:37
|
metabase/metabase
|
https://api.github.com/repos/metabase/metabase
|
closed
|
[MLv2] `column-name`/`:lib/desired-column-alias` should be consistent with MLv1, at least for legacy queries
|
.Backend .metabase-lib .Team/QueryProcessor :hammer_and_wrench:
|
Suppose you have an expression aggregation like this:
```clj
[:+
{}
[:min {} (lib.tu/field-clause :venues :id)]
[:* {} 2 [:avg {} (lib.tu/field-clause :venues :price)]]]
```
`metabase.query-processor.middleware.annotate` will generate the wonderful name of `:expression` for this column in the results, while MLv2 will generate `min_ID_plus_2_times_avg_PRICE`. The second is obviously much nicer, **BUT** it's going to break things if you use the original query as a source query using `[:field <string-name>]` references or in a native query like
```sql
SELECT expression
FROM {{#1}}
```
Do we want to break everything for people? No, probably not; so I think we should revert the fancy name calculation stuff or maybe save it under a different key than `:name`. Maybe `:v2-name` or `:lib/name`?
Ideally, for newly-created queries, we can use the nice new-style names, but for legacy queries converted from legacy MBQL we can use the old-style names, to preserve existing usages of them.
|
1.0
|
[MLv2] `column-name`/`:lib/desired-column-alias` should be consistent with MLv1, at least for legacy queries - Suppose you have an expression aggregation like this:
```clj
[:+
{}
[:min {} (lib.tu/field-clause :venues :id)]
[:* {} 2 [:avg {} (lib.tu/field-clause :venues :price)]]]
```
`metabase.query-processor.middleware.annotate` will generate the wonderful name of `:expression` for this column in the results, while MLv2 will generate `min_ID_plus_2_times_avg_PRICE`. The second is obviously much nicer, **BUT** it's going to break things if you use the original query as a source query using `[:field <string-name>]` references or in a native query like
```sql
SELECT expression
FROM {{#1}}
```
Do we want to break everything for people? No, probably not; so I think we should revert the fancy name calculation stuff or maybe save it under a different key than `:name`. Maybe `:v2-name` or `:lib/name`?
Ideally, for newly-created queries, we can use the nice new-style names, but for legacy queries converted from legacy MBQL we can use the old-style names, to preserve existing usages of them.
|
process
|
column name lib desired column alias should be consistent with at least for legacy queries suppose you have an expression aggregation like this clj metabase query processor middleware annotate will generate the wonderful name of expression for this column in the results while will generate min id plus times avg price the second is obviously much nicer but it s going to break things if you use the original query as a source query using references or in a native query like sql select expression from do we want to break everything for people no probably not so i think we should revert the fancy name calculation stuff or maybe save it under a different key than name maybe name or lib name ideally for newly created queries we can use the nice new style names but for legacy queries converted from legacy mbql we can use the old style names to preserve existing usages of them
| 1
|
596,823
| 18,144,520,017
|
IssuesEvent
|
2021-09-25 07:09:22
|
GIST-Petition-Site-Project/GIST-petition-web
|
https://api.github.com/repos/GIST-Petition-Site-Project/GIST-petition-web
|
closed
|
Footer 작업
|
Type: Feature/UI Type: Feature/Function Status: To Do Priority: Medium
|
## Feature description
footer 작업함
### Use cases
## Benefits
For whom and why.
## Requirements
## Links / references
|
1.0
|
Footer 작업 - ## Feature description
footer 작업함
### Use cases
## Benefits
For whom and why.
## Requirements
## Links / references
|
non_process
|
footer 작업 feature description footer 작업함 use cases benefits for whom and why requirements links references
| 0
|
14,730
| 17,946,801,716
|
IssuesEvent
|
2021-09-12 00:19:21
|
beecorrea/midas
|
https://api.github.com/repos/beecorrea/midas
|
closed
|
[PROCESS] Open account
|
processes
|
# Description
Opens an account and binds it to an account owner
# Rules involved
- Does the account exist?
# Steps
1. Create a new account.
2. Bind it to the account owner's uuid.
3. Save it to the system.
|
1.0
|
[PROCESS] Open account - # Description
Opens an account and binds it to an account owner
# Rules involved
- Does the account exist?
# Steps
1. Create a new account.
2. Bind it to the account owner's uuid.
3. Save it to the system.
|
process
|
open account description opens an account and binds it to an account owner rules involved does the account exist steps create a new account bind it to the account owner s uuid save it to the system
| 1
|
267,473
| 8,389,283,380
|
IssuesEvent
|
2018-10-09 09:10:39
|
CS2113-AY1819S1-T16-4/main
|
https://api.github.com/repos/CS2113-AY1819S1-T16-4/main
|
opened
|
As a HR staff, I can delete old employee’s data
|
priority.high type.story
|
So that I can remove employee’s details that are no longer required.
|
1.0
|
As a HR staff, I can delete old employee’s data - So that I can remove employee’s details that are no longer required.
|
non_process
|
as a hr staff i can delete old employee’s data so that i can remove employee’s details that are no longer required
| 0
|
139,012
| 31,162,783,134
|
IssuesEvent
|
2023-08-16 17:15:06
|
toeverything/blocksuite
|
https://api.github.com/repos/toeverything/blocksuite
|
closed
|
Cursor position with code block
|
type:bug mod:code
|
It seems that there are some problems with the key down events of code block.
1. When the cursor is in the last code line, user press arrowdown, the cursor would not move to next block.
2. When cursor is at the block before code block, then press arrowdown, it will focus on the whole code block, not the first code line of this code block.
3. When cursor is at the block after code block, then press arrowup, it will focus on the whole code block, not the last code line of this code block.
https://github.com/toeverything/blocksuite/assets/99816898/c6ef4257-1331-403c-bcef-391a48fe3335
|
1.0
|
Cursor position with code block - It seems that there are some problems with the key down events of code block.
1. When the cursor is in the last code line, user press arrowdown, the cursor would not move to next block.
2. When cursor is at the block before code block, then press arrowdown, it will focus on the whole code block, not the first code line of this code block.
3. When cursor is at the block after code block, then press arrowup, it will focus on the whole code block, not the last code line of this code block.
https://github.com/toeverything/blocksuite/assets/99816898/c6ef4257-1331-403c-bcef-391a48fe3335
|
non_process
|
cursor position with code block it seems that there are some problems with the key down events of code block when the cursor is in the last code line user press arrowdown the cursor would not move to next block when cursor is at the block before code block then press arrowdown it will focus on the whole code block not the first code line of this code block when cursor is at the block after code block then press arrowup it will focus on the whole code block not the last code line of this code block
| 0
|
329,548
| 24,225,701,120
|
IssuesEvent
|
2022-09-26 14:16:30
|
firewalld/firewalld
|
https://api.github.com/repos/firewalld/firewalld
|
closed
|
Doc: --set-target gives wrong description of deault target
|
documentation
|
The behavior changed in f2896e43c3a548a299f87675a01e1a421b8897b8. Docs did not get updated.
|
1.0
|
Doc: --set-target gives wrong description of deault target - The behavior changed in f2896e43c3a548a299f87675a01e1a421b8897b8. Docs did not get updated.
|
non_process
|
doc set target gives wrong description of deault target the behavior changed in docs did not get updated
| 0
|
308,580
| 23,255,456,427
|
IssuesEvent
|
2022-08-04 08:52:28
|
appsmithorg/appsmith
|
https://api.github.com/repos/appsmithorg/appsmith
|
closed
|
[Docs]: Update DynamoDB docs; primary key/partition key
|
Documentation User Education Pod A-Force
|
## Primary / Partition Key
There was some confusion regarding how to use the primary key for DynamoDB. In our docs, we refer to it only as "Primary Key" -- in the DynamoDB docs, they also use terms Partition Key, Sort Key, Composite Primary Key; it can all be a bit confusing.
It may helpful in our docs to either include a brief description of what these keys are, or at least add a hint block that links to the appropriate AWS docs where it describes these terms.
[From AWS](https://aws.amazon.com/blogs/database/choosing-the-right-dynamodb-partition-key/):
> What is a partition key?
>
> DynamoDB supports two types of primary keys:
>
> * Partition key: A simple primary key, composed of one attribute known as the partition key. Attributes in DynamoDB are similar in many ways to fields or columns in other database systems.
>* Partition key and sort key: Referred to as a composite primary key, this type of key is composed of two attributes. The first attribute is the partition key, and the second attribute is the sort key. All data under a partition key is sorted by the sort key value. The following is an example.
## Operations
Lastly, a user had asked about retrieving multiple items from the DB; there are operations to do this (like a SCAN), which we have not documented on our end. It doesn't seem worthwhile to document all of the possible DynamoDB operations in our docs, since there are so many of them and they don't require any special configuration from the Appsmith platform.
I'm not sure if any action should be taken about the last part, but I included it for consideration.
|
1.0
|
[Docs]: Update DynamoDB docs; primary key/partition key - ## Primary / Partition Key
There was some confusion regarding how to use the primary key for DynamoDB. In our docs, we refer to it only as "Primary Key" -- in the DynamoDB docs, they also use terms Partition Key, Sort Key, Composite Primary Key; it can all be a bit confusing.
It may helpful in our docs to either include a brief description of what these keys are, or at least add a hint block that links to the appropriate AWS docs where it describes these terms.
[From AWS](https://aws.amazon.com/blogs/database/choosing-the-right-dynamodb-partition-key/):
> What is a partition key?
>
> DynamoDB supports two types of primary keys:
>
> * Partition key: A simple primary key, composed of one attribute known as the partition key. Attributes in DynamoDB are similar in many ways to fields or columns in other database systems.
>* Partition key and sort key: Referred to as a composite primary key, this type of key is composed of two attributes. The first attribute is the partition key, and the second attribute is the sort key. All data under a partition key is sorted by the sort key value. The following is an example.
## Operations
Lastly, a user had asked about retrieving multiple items from the DB; there are operations to do this (like a SCAN), which we have not documented on our end. It doesn't seem worthwhile to document all of the possible DynamoDB operations in our docs, since there are so many of them and they don't require any special configuration from the Appsmith platform.
I'm not sure if any action should be taken about the last part, but I included it for consideration.
|
non_process
|
update dynamodb docs primary key partition key primary partition key there was some confusion regarding how to use the primary key for dynamodb in our docs we refer to it only as primary key in the dynamodb docs they also use terms partition key sort key composite primary key it can all be a bit confusing it may helpful in our docs to either include a brief description of what these keys are or at least add a hint block that links to the appropriate aws docs where it describes these terms what is a partition key dynamodb supports two types of primary keys partition key a simple primary key composed of one attribute known as the partition key attributes in dynamodb are similar in many ways to fields or columns in other database systems partition key and sort key referred to as a composite primary key this type of key is composed of two attributes the first attribute is the partition key and the second attribute is the sort key all data under a partition key is sorted by the sort key value the following is an example operations lastly a user had asked about retrieving multiple items from the db there are operations to do this like a scan which we have not documented on our end it doesn t seem worthwhile to document all of the possible dynamodb operations in our docs since there are so many of them and they don t require any special configuration from the appsmith platform i m not sure if any action should be taken about the last part but i included it for consideration
| 0
|
193,260
| 14,645,064,851
|
IssuesEvent
|
2020-12-26 04:55:29
|
github-vet/rangeloop-pointer-findings
|
https://api.github.com/repos/github-vet/rangeloop-pointer-findings
|
closed
|
keikoproj/lifecycle-manager: pkg/service/nodes_test.go; 3 LoC
|
fresh test tiny
|
Found a possible issue in [keikoproj/lifecycle-manager](https://www.github.com/keikoproj/lifecycle-manager) at [pkg/service/nodes_test.go](https://github.com/keikoproj/lifecycle-manager/blob/71fbbbfcff695b1eab0a405760e24dc03af41564/pkg/service/nodes_test.go#L68-L70)
Below is the message reported by the analyzer for this snippet of code. Beware that the analyzer only reports the first
issue it finds, so please do not limit your consideration to the contents of the below message.
> function call which takes a reference to node at line 69 may start a goroutine
[Click here to see the code in its original context.](https://github.com/keikoproj/lifecycle-manager/blob/71fbbbfcff695b1eab0a405760e24dc03af41564/pkg/service/nodes_test.go#L68-L70)
<details>
<summary>Click here to show the 3 line(s) of Go which triggered the analyzer.</summary>
```go
for _, node := range fakeNodes {
kubeClient.CoreV1().Nodes().Create(&node)
}
```
</details>
<details>
<summary>Click here to show extra information the analyzer produced.</summary>
```
The following dot graph describes paths through the callgraph that could lead to a function calling a goroutine:
digraph G {
"(executeCredentialProcess, 0)" -> {}
"(newELBv2Client, 2)" -> {"(Context, 0)";}
"(Collect, 1)" -> {}
"(readLoop, 0)" -> {"(run, 0)";}
"(startFrameWrite, 1)" -> {}
"(Read, 1)" -> {}
"(Marshal, 1)" -> {"(marshalDoc, 2)";}
"(newStaticTable, 0)" -> {"(init, 0)";}
"(WithDeadline, 2)" -> {"(propagateCancel, 2)";"(WithCancel, 1)";}
"(Infof, 2)" -> {"(printf, 3)";}
"(printWithFileLine, 5)" -> {"(output, 5)";}
"(Invokes, 2)" -> {"(DeepCopy, 0)";}
"(UseOrCreateObject, 4)" -> {"(New, 1)";}
"(Len, 0)" -> {"(Get, 0)";}
"(processSettings, 1)" -> {"(scheduleFrameWrite, 0)";}
"(Close, 0)" -> {}
"(WithCancel, 1)" -> {"(propagateCancel, 2)";}
"(Start, 0)" -> {"(Handler, 0)";"(MustRegister, 1)";"(Process, 1)";}
"(marshalDoc, 2)" -> {"(init, 0)";}
"(Get, 0)" -> {"(Retrieve, 0)";}
"(Errorf, 2)" -> {"(printf, 3)";}
"(ConfigureTransport, 1)" -> {"(configureTransport, 1)";}
"(configureTransport, 1)" -> {"(addConnIfNeeded, 3)";}
"(Decode, 2)" -> {"(Read, 1)";}
"(, 0)" -> {"(newStaticTable, 0)";"(NewSchemeBuilder, 1)";"(newELBv2Client, 2)";"(newELBClient, 2)";"(handleSendError, 2)";"(NewUnboundedExecutor, 0)";"(Start, 0)";"(Context, 0)";}
"(run, 0)" -> {"(processHeaders, 1)";"(processSettings, 1)";"(processWindowUpdate, 1)";}
"(BorrowStream, 1)" -> {"(Get, 0)";}
"(NewClientConn, 1)" -> {"(newClientConn, 2)";}
"(drainLoadbalancerTarget, 1)" -> {"(executeDeregisterWaiters, 3)";}
"(executeDeregisterWaiters, 3)" -> {}
"(Body, 1)" -> {"(Encode, 2)";}
"(timeoutFlush, 1)" -> {}
"(printf, 3)" -> {"(output, 5)";}
"(tryThrottle, 0)" -> {"(String, 0)";}
"(Process, 1)" -> {"(handleEvent, 1)";}
"(on100, 0)" -> {}
"(Error, 0)" -> {"(Decode, 3)";"(String, 0)";"(Infof, 2)";}
"(BackgroundContext, 0)" -> {}
"(Encode, 1)" -> {"(marshalDoc, 2)";}
"(output, 5)" -> {"(timeoutFlush, 1)";}
"(MustRegister, 1)" -> {"(Register, 1)";}
"(Handler, 0)" -> {"(HandlerFor, 2)";}
"(newClientConn, 2)" -> {"(readLoop, 0)";}
"(init, 0)" -> {"(Register, 1)";}
"(DeepCopy, 0)" -> {"(Set, 1)";}
"(Context, 0)" -> {"(BackgroundContext, 0)";}
"(propagateCancel, 2)" -> {}
"(enableCSM, 3)" -> {"(Start, 2)";}
"(String, 0)" -> {"(Get, 0)";"(Write, 1)";}
"(Start, 2)" -> {"(connect, 1)";}
"(handleEvent, 1)" -> {"(drainLoadbalancerTarget, 1)";}
"(newELBClient, 2)" -> {"(Context, 0)";}
"(run, 3)" -> {"(NewClientConn, 1)";}
"(handleResponse, 2)" -> {"(on100, 0)";}
"(New, 1)" -> {"(restart, 0)";"(enableCSM, 3)";"(get, 1)";}
"(Set, 1)" -> {"(New, 1)";}
"(connect, 1)" -> {}
"(Get, 1)" -> {"(get, 1)";}
"(request, 1)" -> {"(, 0)";"(WithTimeout, 2)";"(Infof, 2)";"(String, 0)";"(Close, 0)";}
"(Unmarshal, 1)" -> {"(Close, 0)";}
"(WithTimeout, 2)" -> {"(WithDeadline, 2)";}
"(Retrieve, 0)" -> {"(executeCredentialProcess, 0)";}
"(NewUnboundedExecutor, 0)" -> {"(WithCancel, 1)";}
"(BorrowIterator, 1)" -> {"(Get, 0)";}
"(get, 1)" -> {"(SetTransportDefaults, 1)";}
"(SetTransportDefaults, 1)" -> {"(ConfigureTransport, 1)";}
"(Do, 0)" -> {"(request, 1)";"(tryThrottle, 0)";"(transformResponse, 2)";}
"(Decode, 3)" -> {"(Decode, 2)";"(Unmarshal, 1)";"(RecognizesData, 1)";"(New, 1)";"(UseOrCreateObject, 4)";}
"(processHeaders, 1)" -> {"(handleResponse, 2)";}
"(Into, 1)" -> {"(Decode, 3)";"(Error, 0)";}
"(restart, 0)" -> {}
"(NewSchemeBuilder, 1)" -> {"(Register, 1)";}
"(Register, 1)" -> {"(Describe, 1)";}
"(handleSendError, 2)" -> {"(Context, 0)";}
"(Write, 1)" -> {"(printWithFileLine, 5)";"(Get, 0)";}
"(Create, 1)" -> {"(Into, 1)";"(Invokes, 2)";"(Do, 0)";"(Body, 1)";}
"(Encode, 2)" -> {"(BorrowStream, 1)";"(Write, 1)";"(String, 0)";"(NewEncoder, 1)";"(Encode, 1)";"(Len, 0)";"(Close, 0)";"(Marshal, 1)";"(BorrowIterator, 1)";}
"(Gather, 0)" -> {"(Collect, 1)";}
"(addConnIfNeeded, 3)" -> {"(run, 3)";}
"(Describe, 1)" -> {}
"(transformResponse, 2)" -> {"(Get, 1)";"(Infof, 2)";"(Errorf, 2)";}
"(HandlerFor, 2)" -> {"(Gather, 0)";}
"(scheduleFrameWrite, 0)" -> {"(startFrameWrite, 1)";}
"(processWindowUpdate, 1)" -> {"(scheduleFrameWrite, 0)";}
"(RecognizesData, 1)" -> {"(Read, 1)";}
"(NewEncoder, 1)" -> {"(init, 0)";}
}
```
</details>
Leave a reaction on this issue to contribute to the project by classifying this instance as a **Bug** :-1:, **Mitigated** :+1:, or **Desirable Behavior** :rocket:
See the descriptions of the classifications [here](https://github.com/github-vet/rangeclosure-findings#how-can-i-help) for more information.
commit ID: 71fbbbfcff695b1eab0a405760e24dc03af41564
|
1.0
|
keikoproj/lifecycle-manager: pkg/service/nodes_test.go; 3 LoC -
Found a possible issue in [keikoproj/lifecycle-manager](https://www.github.com/keikoproj/lifecycle-manager) at [pkg/service/nodes_test.go](https://github.com/keikoproj/lifecycle-manager/blob/71fbbbfcff695b1eab0a405760e24dc03af41564/pkg/service/nodes_test.go#L68-L70)
Below is the message reported by the analyzer for this snippet of code. Beware that the analyzer only reports the first
issue it finds, so please do not limit your consideration to the contents of the below message.
> function call which takes a reference to node at line 69 may start a goroutine
[Click here to see the code in its original context.](https://github.com/keikoproj/lifecycle-manager/blob/71fbbbfcff695b1eab0a405760e24dc03af41564/pkg/service/nodes_test.go#L68-L70)
<details>
<summary>Click here to show the 3 line(s) of Go which triggered the analyzer.</summary>
```go
for _, node := range fakeNodes {
kubeClient.CoreV1().Nodes().Create(&node)
}
```
</details>
<details>
<summary>Click here to show extra information the analyzer produced.</summary>
```
The following dot graph describes paths through the callgraph that could lead to a function calling a goroutine:
digraph G {
"(executeCredentialProcess, 0)" -> {}
"(newELBv2Client, 2)" -> {"(Context, 0)";}
"(Collect, 1)" -> {}
"(readLoop, 0)" -> {"(run, 0)";}
"(startFrameWrite, 1)" -> {}
"(Read, 1)" -> {}
"(Marshal, 1)" -> {"(marshalDoc, 2)";}
"(newStaticTable, 0)" -> {"(init, 0)";}
"(WithDeadline, 2)" -> {"(propagateCancel, 2)";"(WithCancel, 1)";}
"(Infof, 2)" -> {"(printf, 3)";}
"(printWithFileLine, 5)" -> {"(output, 5)";}
"(Invokes, 2)" -> {"(DeepCopy, 0)";}
"(UseOrCreateObject, 4)" -> {"(New, 1)";}
"(Len, 0)" -> {"(Get, 0)";}
"(processSettings, 1)" -> {"(scheduleFrameWrite, 0)";}
"(Close, 0)" -> {}
"(WithCancel, 1)" -> {"(propagateCancel, 2)";}
"(Start, 0)" -> {"(Handler, 0)";"(MustRegister, 1)";"(Process, 1)";}
"(marshalDoc, 2)" -> {"(init, 0)";}
"(Get, 0)" -> {"(Retrieve, 0)";}
"(Errorf, 2)" -> {"(printf, 3)";}
"(ConfigureTransport, 1)" -> {"(configureTransport, 1)";}
"(configureTransport, 1)" -> {"(addConnIfNeeded, 3)";}
"(Decode, 2)" -> {"(Read, 1)";}
"(, 0)" -> {"(newStaticTable, 0)";"(NewSchemeBuilder, 1)";"(newELBv2Client, 2)";"(newELBClient, 2)";"(handleSendError, 2)";"(NewUnboundedExecutor, 0)";"(Start, 0)";"(Context, 0)";}
"(run, 0)" -> {"(processHeaders, 1)";"(processSettings, 1)";"(processWindowUpdate, 1)";}
"(BorrowStream, 1)" -> {"(Get, 0)";}
"(NewClientConn, 1)" -> {"(newClientConn, 2)";}
"(drainLoadbalancerTarget, 1)" -> {"(executeDeregisterWaiters, 3)";}
"(executeDeregisterWaiters, 3)" -> {}
"(Body, 1)" -> {"(Encode, 2)";}
"(timeoutFlush, 1)" -> {}
"(printf, 3)" -> {"(output, 5)";}
"(tryThrottle, 0)" -> {"(String, 0)";}
"(Process, 1)" -> {"(handleEvent, 1)";}
"(on100, 0)" -> {}
"(Error, 0)" -> {"(Decode, 3)";"(String, 0)";"(Infof, 2)";}
"(BackgroundContext, 0)" -> {}
"(Encode, 1)" -> {"(marshalDoc, 2)";}
"(output, 5)" -> {"(timeoutFlush, 1)";}
"(MustRegister, 1)" -> {"(Register, 1)";}
"(Handler, 0)" -> {"(HandlerFor, 2)";}
"(newClientConn, 2)" -> {"(readLoop, 0)";}
"(init, 0)" -> {"(Register, 1)";}
"(DeepCopy, 0)" -> {"(Set, 1)";}
"(Context, 0)" -> {"(BackgroundContext, 0)";}
"(propagateCancel, 2)" -> {}
"(enableCSM, 3)" -> {"(Start, 2)";}
"(String, 0)" -> {"(Get, 0)";"(Write, 1)";}
"(Start, 2)" -> {"(connect, 1)";}
"(handleEvent, 1)" -> {"(drainLoadbalancerTarget, 1)";}
"(newELBClient, 2)" -> {"(Context, 0)";}
"(run, 3)" -> {"(NewClientConn, 1)";}
"(handleResponse, 2)" -> {"(on100, 0)";}
"(New, 1)" -> {"(restart, 0)";"(enableCSM, 3)";"(get, 1)";}
"(Set, 1)" -> {"(New, 1)";}
"(connect, 1)" -> {}
"(Get, 1)" -> {"(get, 1)";}
"(request, 1)" -> {"(, 0)";"(WithTimeout, 2)";"(Infof, 2)";"(String, 0)";"(Close, 0)";}
"(Unmarshal, 1)" -> {"(Close, 0)";}
"(WithTimeout, 2)" -> {"(WithDeadline, 2)";}
"(Retrieve, 0)" -> {"(executeCredentialProcess, 0)";}
"(NewUnboundedExecutor, 0)" -> {"(WithCancel, 1)";}
"(BorrowIterator, 1)" -> {"(Get, 0)";}
"(get, 1)" -> {"(SetTransportDefaults, 1)";}
"(SetTransportDefaults, 1)" -> {"(ConfigureTransport, 1)";}
"(Do, 0)" -> {"(request, 1)";"(tryThrottle, 0)";"(transformResponse, 2)";}
"(Decode, 3)" -> {"(Decode, 2)";"(Unmarshal, 1)";"(RecognizesData, 1)";"(New, 1)";"(UseOrCreateObject, 4)";}
"(processHeaders, 1)" -> {"(handleResponse, 2)";}
"(Into, 1)" -> {"(Decode, 3)";"(Error, 0)";}
"(restart, 0)" -> {}
"(NewSchemeBuilder, 1)" -> {"(Register, 1)";}
"(Register, 1)" -> {"(Describe, 1)";}
"(handleSendError, 2)" -> {"(Context, 0)";}
"(Write, 1)" -> {"(printWithFileLine, 5)";"(Get, 0)";}
"(Create, 1)" -> {"(Into, 1)";"(Invokes, 2)";"(Do, 0)";"(Body, 1)";}
"(Encode, 2)" -> {"(BorrowStream, 1)";"(Write, 1)";"(String, 0)";"(NewEncoder, 1)";"(Encode, 1)";"(Len, 0)";"(Close, 0)";"(Marshal, 1)";"(BorrowIterator, 1)";}
"(Gather, 0)" -> {"(Collect, 1)";}
"(addConnIfNeeded, 3)" -> {"(run, 3)";}
"(Describe, 1)" -> {}
"(transformResponse, 2)" -> {"(Get, 1)";"(Infof, 2)";"(Errorf, 2)";}
"(HandlerFor, 2)" -> {"(Gather, 0)";}
"(scheduleFrameWrite, 0)" -> {"(startFrameWrite, 1)";}
"(processWindowUpdate, 1)" -> {"(scheduleFrameWrite, 0)";}
"(RecognizesData, 1)" -> {"(Read, 1)";}
"(NewEncoder, 1)" -> {"(init, 0)";}
}
```
</details>
Leave a reaction on this issue to contribute to the project by classifying this instance as a **Bug** :-1:, **Mitigated** :+1:, or **Desirable Behavior** :rocket:
See the descriptions of the classifications [here](https://github.com/github-vet/rangeclosure-findings#how-can-i-help) for more information.
commit ID: 71fbbbfcff695b1eab0a405760e24dc03af41564
|
non_process
|
keikoproj lifecycle manager pkg service nodes test go loc found a possible issue in at below is the message reported by the analyzer for this snippet of code beware that the analyzer only reports the first issue it finds so please do not limit your consideration to the contents of the below message function call which takes a reference to node at line may start a goroutine click here to show the line s of go which triggered the analyzer go for node range fakenodes kubeclient nodes create node click here to show extra information the analyzer produced the following dot graph describes paths through the callgraph that could lead to a function calling a goroutine digraph g executecredentialprocess context collect readloop run startframewrite read marshal marshaldoc newstatictable init withdeadline propagatecancel withcancel infof printf printwithfileline output invokes deepcopy useorcreateobject new len get processsettings scheduleframewrite close withcancel propagatecancel start handler mustregister process marshaldoc init get retrieve errorf printf configuretransport configuretransport configuretransport addconnifneeded decode read newstatictable newschemebuilder newelbclient handlesenderror newunboundedexecutor start context run processheaders processsettings processwindowupdate borrowstream get newclientconn newclientconn drainloadbalancertarget executederegisterwaiters executederegisterwaiters body encode timeoutflush printf output trythrottle string process handleevent error decode string infof backgroundcontext encode marshaldoc output timeoutflush mustregister register handler handlerfor newclientconn readloop init register deepcopy set context backgroundcontext propagatecancel enablecsm start string get write start connect handleevent drainloadbalancertarget newelbclient context run newclientconn handleresponse new restart enablecsm get set new connect get get request withtimeout infof string close unmarshal close withtimeout withdeadline retrieve executecredentialprocess newunboundedexecutor withcancel borrowiterator get get settransportdefaults settransportdefaults configuretransport do request trythrottle transformresponse decode decode unmarshal recognizesdata new useorcreateobject processheaders handleresponse into decode error restart newschemebuilder register register describe handlesenderror context write printwithfileline get create into invokes do body encode borrowstream write string newencoder encode len close marshal borrowiterator gather collect addconnifneeded run describe transformresponse get infof errorf handlerfor gather scheduleframewrite startframewrite processwindowupdate scheduleframewrite recognizesdata read newencoder init leave a reaction on this issue to contribute to the project by classifying this instance as a bug mitigated or desirable behavior rocket see the descriptions of the classifications for more information commit id
| 0
|
4,523
| 7,370,566,750
|
IssuesEvent
|
2018-03-13 08:56:23
|
DevExpress/testcafe-hammerhead
|
https://api.github.com/repos/DevExpress/testcafe-hammerhead
|
closed
|
Hammerhead does not inject own staff to pages with 'text/plain' content type
|
REASON: won't fix SYSTEM: resource processing TYPE: bug
|
It means that testcafe will hung after redirect (or perform any action) to such page .
Simple server to reproduce:
```js
var http = require('http');
var fs = require('fs');
http.createServer(function (req, res) {
res.setHeader('content-type', 'text/plain');
res.end('ok');
}).listen(3000);
```
|
1.0
|
Hammerhead does not inject own staff to pages with 'text/plain' content type - It means that testcafe will hung after redirect (or perform any action) to such page .
Simple server to reproduce:
```js
var http = require('http');
var fs = require('fs');
http.createServer(function (req, res) {
res.setHeader('content-type', 'text/plain');
res.end('ok');
}).listen(3000);
```
|
process
|
hammerhead does not inject own staff to pages with text plain content type it means that testcafe will hung after redirect or perform any action to such page simple server to reproduce js var http require http var fs require fs http createserver function req res res setheader content type text plain res end ok listen
| 1
|
297,709
| 25,758,132,912
|
IssuesEvent
|
2022-12-08 18:02:47
|
johnpaulrusso/svelte-text-logger
|
https://api.github.com/repos/johnpaulrusso/svelte-text-logger
|
closed
|
Once auto-scrolling has been implemented, add a button to toggle the feature.
|
enhancement R - Ready For Test
|
> estimate 1
Once auto-scrolling has been implemented, add a button to toggle the feature on and off.
|
1.0
|
Once auto-scrolling has been implemented, add a button to toggle the feature. - > estimate 1
Once auto-scrolling has been implemented, add a button to toggle the feature on and off.
|
non_process
|
once auto scrolling has been implemented add a button to toggle the feature estimate once auto scrolling has been implemented add a button to toggle the feature on and off
| 0
|
6,179
| 9,087,789,497
|
IssuesEvent
|
2019-02-18 14:37:13
|
kubetenancy/tenant-integrator
|
https://api.github.com/repos/kubetenancy/tenant-integrator
|
opened
|
Generic integrator library
|
enhancement in process
|
To integrate tenants from different systems we should create a generic integrator library that can be used by concrete integrators.
|
1.0
|
Generic integrator library - To integrate tenants from different systems we should create a generic integrator library that can be used by concrete integrators.
|
process
|
generic integrator library to integrate tenants from different systems we should create a generic integrator library that can be used by concrete integrators
| 1
|
20,914
| 27,753,691,270
|
IssuesEvent
|
2023-03-15 23:27:30
|
googleapis/google-api-java-client-services
|
https://api.github.com/repos/googleapis/google-api-java-client-services
|
closed
|
Use a better batching mechanism to prevent the 256 matrix job limit in `codegen.yaml`
|
type: process priority: p2
|
[`codegen.yaml`](https://github.com/googleapis/google-api-java-client-services/blob/main/.github/workflows/codegen.yaml) had a matrix job limit issue that was [fixed today](https://github.com/googleapis/google-api-java-client-services/pull/16114). This approach is very rudimentary and we may need to remove the duplicated job definitions, maybe by defining a job in a separate file (as in [here](https://github.com/googleapis/google-api-java-client-services/blob/b800184c89875262bb9df9937b58c7b9083422ce/.github/workflows/codegen.yaml#L10)) and referencing it twice with each slice of the service list.
|
1.0
|
Use a better batching mechanism to prevent the 256 matrix job limit in `codegen.yaml` - [`codegen.yaml`](https://github.com/googleapis/google-api-java-client-services/blob/main/.github/workflows/codegen.yaml) had a matrix job limit issue that was [fixed today](https://github.com/googleapis/google-api-java-client-services/pull/16114). This approach is very rudimentary and we may need to remove the duplicated job definitions, maybe by defining a job in a separate file (as in [here](https://github.com/googleapis/google-api-java-client-services/blob/b800184c89875262bb9df9937b58c7b9083422ce/.github/workflows/codegen.yaml#L10)) and referencing it twice with each slice of the service list.
|
process
|
use a better batching mechanism to prevent the matrix job limit in codegen yaml had a matrix job limit issue that was this approach is very rudimentary and we may need to remove the duplicated job definitions maybe by defining a job in a separate file as in and referencing it twice with each slice of the service list
| 1
|
67,860
| 13,041,815,985
|
IssuesEvent
|
2020-07-28 21:07:00
|
dotnet/aspnetcore
|
https://api.github.com/repos/dotnet/aspnetcore
|
closed
|
Adding a new Razor file nukes backing C# buffer
|
area-razor.tooling bug feature-razor.vscode
|
This is a regression from the current release:

|
1.0
|
Adding a new Razor file nukes backing C# buffer - This is a regression from the current release:

|
non_process
|
adding a new razor file nukes backing c buffer this is a regression from the current release
| 0
|
7,124
| 10,270,140,809
|
IssuesEvent
|
2019-08-23 10:46:17
|
vaerilius/angular8-course
|
https://api.github.com/repos/vaerilius/angular8-course
|
closed
|
Section 20
|
inProcess
|
- [x] 284. Module Introduction
- [x] 285. How Authentication Works
- [x] 286. Adding the Auth Page
- [x] 287. Switching Between Auth Modes
- [x] 288. Handling Form Input
- [x] 289. Preparing the Backend
- [x] 290. Make sure you got Recipes in your backend!
- [x] 291. Preparing the Signup Request
- [x] 292. Sending the Signup Request
- [x] 293. Adding a Loading Spinner & Error Handling Logic
- [x] 294. Improving Error Handling
- [x] 295. Sending Login Requests
- [x] 296. Login Error Handling
- [x] 297. Creating & Storing the User Data
- [x] 298. Reflecting the Auth State in the UI
- [x] 299. Adding the Token to Outgoing Requests
- [x] 300. Attaching the Token with an Interceptor
- [x] 301. Adding Logout
- [x] 302. Adding Auto-Login
- [x] 303. Adding Auto-Logout
- [x] 304. Adding an Auth Guard
- [x] 305. Wrap Up
- [x] 306. Useful Resources & Links
|
1.0
|
Section 20 - - [x] 284. Module Introduction
- [x] 285. How Authentication Works
- [x] 286. Adding the Auth Page
- [x] 287. Switching Between Auth Modes
- [x] 288. Handling Form Input
- [x] 289. Preparing the Backend
- [x] 290. Make sure you got Recipes in your backend!
- [x] 291. Preparing the Signup Request
- [x] 292. Sending the Signup Request
- [x] 293. Adding a Loading Spinner & Error Handling Logic
- [x] 294. Improving Error Handling
- [x] 295. Sending Login Requests
- [x] 296. Login Error Handling
- [x] 297. Creating & Storing the User Data
- [x] 298. Reflecting the Auth State in the UI
- [x] 299. Adding the Token to Outgoing Requests
- [x] 300. Attaching the Token with an Interceptor
- [x] 301. Adding Logout
- [x] 302. Adding Auto-Login
- [x] 303. Adding Auto-Logout
- [x] 304. Adding an Auth Guard
- [x] 305. Wrap Up
- [x] 306. Useful Resources & Links
|
process
|
section module introduction how authentication works adding the auth page switching between auth modes handling form input preparing the backend make sure you got recipes in your backend preparing the signup request sending the signup request adding a loading spinner error handling logic improving error handling sending login requests login error handling creating storing the user data reflecting the auth state in the ui adding the token to outgoing requests attaching the token with an interceptor adding logout adding auto login adding auto logout adding an auth guard wrap up useful resources links
| 1
|
62,174
| 7,549,844,824
|
IssuesEvent
|
2018-04-18 15:14:24
|
disco-lang/disco
|
https://api.github.com/repos/disco-lang/disco
|
opened
|
Reconsider syntax for anonymous functions
|
U-Language Design U-Syntax
|
Right now, the syntax is something like `x -> x+3`, or `(x:Z) -> x + 3` (with some alternative syntaxes also accepted in place of the arrow, *e.g.* `↦`). This is close to standard mathematical practice, with the giant caveats that (a) standard mathematical practice is actually to use one symbol (`→`) to express function types, and a different symbol (`↦`) for anonymous functions, which helps reduce confusion; (b) writing anonymous functions is not all that common in mathematics anyway, and it's quite likely that many students will not have seen it.
I'm concerned that allowing the same syntax (`->`) for both function types and anonymous functions is going to create massive confusion. In my experience students learning Haskell already get the levels (function type vs. function value) confused anyway, and this would not help.
|
1.0
|
Reconsider syntax for anonymous functions - Right now, the syntax is something like `x -> x+3`, or `(x:Z) -> x + 3` (with some alternative syntaxes also accepted in place of the arrow, *e.g.* `↦`). This is close to standard mathematical practice, with the giant caveats that (a) standard mathematical practice is actually to use one symbol (`→`) to express function types, and a different symbol (`↦`) for anonymous functions, which helps reduce confusion; (b) writing anonymous functions is not all that common in mathematics anyway, and it's quite likely that many students will not have seen it.
I'm concerned that allowing the same syntax (`->`) for both function types and anonymous functions is going to create massive confusion. In my experience students learning Haskell already get the levels (function type vs. function value) confused anyway, and this would not help.
|
non_process
|
reconsider syntax for anonymous functions right now the syntax is something like x x or x z x with some alternative syntaxes also accepted in place of the arrow e g ↦ this is close to standard mathematical practice with the giant caveats that a standard mathematical practice is actually to use one symbol → to express function types and a different symbol ↦ for anonymous functions which helps reduce confusion b writing anonymous functions is not all that common in mathematics anyway and it s quite likely that many students will not have seen it i m concerned that allowing the same syntax for both function types and anonymous functions is going to create massive confusion in my experience students learning haskell already get the levels function type vs function value confused anyway and this would not help
| 0
|
11,229
| 14,006,121,725
|
IssuesEvent
|
2020-10-28 19:28:31
|
cncf/cnf-conformance
|
https://api.github.com/repos/cncf/cnf-conformance
|
closed
|
Switch from TravisCI to GitHub Actions for CI builds and releases
|
5 pts enhancement process sprint18 sprint19
|
### Proof of Concept: GitHub Actions build process
Short Description:
- Travis CI on cnf conformance has failed several times with crystal spec and kind, etc
- Tested Circle CI in #428, and would like to compare with GHA
- CNCF is utilizing GHA more often currently
There was initially some testing with GHA in ticket #81.
See CircleCI #428 PoC and [code](https://github.com/cncf/cnf-conformance/tree/master/.circleci).
Requirements in a CI system:
- Dashboard
- searchable logs
- Runner
- Increase the power of the provided CI runner
- Support using custom runners
- Self host the runner
- Performance could be improved
- Caching
- GitHub integrations/hooks for commits
- Merge Pull request into new branch before promoted to master branch
- Multiple stages
- Parallel jobs running currently
- Arm64 support
### Documentation Tasks:
- [ ] Update release doc
- [ ] Update INSTALL doc
### GitHub Repo Tasks:
- [ ] Block PRs with GitHub Actions
- [ ] Disable TravisCI
### GitHub Actions tasks:
TBD
### QA tasks
Dev Review:
- [ ] walk through A/C
- [ ] do you get the expected result?
- [ ] if yes,
- [ ] move to `Needs Peer Review` column
- [ ] create Pull Request and follow check list
- [ ] Assign 1 or more people for peer review
- [ ] if no, document what additional tasks will be needed
Peer review:
- [ ] walk through A/C
- [ ] do you get the expected result?
- [ ] if yes,
- [ ] move to `Reviewer Approved` column
- [ ] Approve pull request
- [ ] if no,
- [ ] document what did not go as expected, including error messages and screenshots (if possible)
- [ ] Add comment to pull request
- [ ] request changes to pull request
|
1.0
|
Switch from TravisCI to GitHub Actions for CI builds and releases - ### Proof of Concept: GitHub Actions build process
Short Description:
- Travis CI on cnf conformance has failed several times with crystal spec and kind, etc
- Tested Circle CI in #428, and would like to compare with GHA
- CNCF is utilizing GHA more often currently
There was initially some testing with GHA in ticket #81.
See CircleCI #428 PoC and [code](https://github.com/cncf/cnf-conformance/tree/master/.circleci).
Requirements in a CI system:
- Dashboard
- searchable logs
- Runner
- Increase the power of the provided CI runner
- Support using custom runners
- Self host the runner
- Performance could be improved
- Caching
- GitHub integrations/hooks for commits
- Merge Pull request into new branch before promoted to master branch
- Multiple stages
- Parallel jobs running currently
- Arm64 support
### Documentation Tasks:
- [ ] Update release doc
- [ ] Update INSTALL doc
### GitHub Repo Tasks:
- [ ] Block PRs with GitHub Actions
- [ ] Disable TravisCI
### GitHub Actions tasks:
TBD
### QA tasks
Dev Review:
- [ ] walk through A/C
- [ ] do you get the expected result?
- [ ] if yes,
- [ ] move to `Needs Peer Review` column
- [ ] create Pull Request and follow check list
- [ ] Assign 1 or more people for peer review
- [ ] if no, document what additional tasks will be needed
Peer review:
- [ ] walk through A/C
- [ ] do you get the expected result?
- [ ] if yes,
- [ ] move to `Reviewer Approved` column
- [ ] Approve pull request
- [ ] if no,
- [ ] document what did not go as expected, including error messages and screenshots (if possible)
- [ ] Add comment to pull request
- [ ] request changes to pull request
|
process
|
switch from travisci to github actions for ci builds and releases proof of concept github actions build process short description travis ci on cnf conformance has failed several times with crystal spec and kind etc tested circle ci in and would like to compare with gha cncf is utilizing gha more often currently there was initially some testing with gha in ticket see circleci poc and requirements in a ci system dashboard searchable logs runner increase the power of the provided ci runner support using custom runners self host the runner performance could be improved caching github integrations hooks for commits merge pull request into new branch before promoted to master branch multiple stages parallel jobs running currently support documentation tasks update release doc update install doc github repo tasks block prs with github actions disable travisci github actions tasks tbd qa tasks dev review walk through a c do you get the expected result if yes move to needs peer review column create pull request and follow check list assign or more people for peer review if no document what additional tasks will be needed peer review walk through a c do you get the expected result if yes move to reviewer approved column approve pull request if no document what did not go as expected including error messages and screenshots if possible add comment to pull request request changes to pull request
| 1
|
19,899
| 26,350,254,813
|
IssuesEvent
|
2023-01-11 03:45:06
|
jointakahe/takahe
|
https://api.github.com/repos/jointakahe/takahe
|
closed
|
Stator runner should be able to exclude models
|
feature area/processing pri/low
|
As well as the current mode where you can select only some models.
|
1.0
|
Stator runner should be able to exclude models - As well as the current mode where you can select only some models.
|
process
|
stator runner should be able to exclude models as well as the current mode where you can select only some models
| 1
|
50,993
| 3,009,584,290
|
IssuesEvent
|
2015-07-28 07:37:15
|
OctopusDeploy/Issues
|
https://api.github.com/repos/OctopusDeploy/Issues
|
closed
|
Cloud Service Deployment Target dropdown values reload incorrectly
|
bug in progress priority
|
Reproduced on 3.0.6.2140
The values selected for Azure Account, Cloud Service and Storage Account should persist correctly when you review the Cloud Service Deployment Target
I've created two Azure accounts to cover two of our Azure Subscriptions ("BD MSDN" and "IT MSDN"), and three Cloud Service deployment targets.
Two of the these targets use the "first" Account ("BD MSDN"), and these remember the set values when I go in to review/edit the settings, however the third uses the "second" Account ("IT MSDN").
Whenever I open the third deployment target, the Account option incorrectly shows "BD MSDN" and the cloud service and storage accounts are empty, and I cannot save until I switch the account to "IT MSDN" at which point it shows the correct values for the Cloud Service and Storage Account and allows me to save the changes.

Source: http://help.octopusdeploy.com/discussions/questions/5179
|
1.0
|
Cloud Service Deployment Target dropdown values reload incorrectly - Reproduced on 3.0.6.2140
The values selected for Azure Account, Cloud Service and Storage Account should persist correctly when you review the Cloud Service Deployment Target
I've created two Azure accounts to cover two of our Azure Subscriptions ("BD MSDN" and "IT MSDN"), and three Cloud Service deployment targets.
Two of the these targets use the "first" Account ("BD MSDN"), and these remember the set values when I go in to review/edit the settings, however the third uses the "second" Account ("IT MSDN").
Whenever I open the third deployment target, the Account option incorrectly shows "BD MSDN" and the cloud service and storage accounts are empty, and I cannot save until I switch the account to "IT MSDN" at which point it shows the correct values for the Cloud Service and Storage Account and allows me to save the changes.

Source: http://help.octopusdeploy.com/discussions/questions/5179
|
non_process
|
cloud service deployment target dropdown values reload incorrectly reproduced on the values selected for azure account cloud service and storage account should persist correctly when you review the cloud service deployment target i ve created two azure accounts to cover two of our azure subscriptions bd msdn and it msdn and three cloud service deployment targets two of the these targets use the first account bd msdn and these remember the set values when i go in to review edit the settings however the third uses the second account it msdn whenever i open the third deployment target the account option incorrectly shows bd msdn and the cloud service and storage accounts are empty and i cannot save until i switch the account to it msdn at which point it shows the correct values for the cloud service and storage account and allows me to save the changes source
| 0
|
6,892
| 10,036,164,311
|
IssuesEvent
|
2019-07-18 09:59:43
|
parcel-bundler/parcel
|
https://api.github.com/repos/parcel-bundler/parcel
|
closed
|
.htmlnanorc ignored for package imports
|
:bug: Bug HTML Preprocessing
|
# 🐛 bug report
.htmlnanorc settings are honored for local template files, but not for those imported from a package
## 🎛 Configuration (.babelrc, package.json, cli command)
.htmlnanorc in project root as follows:
```js
{
"minifySvg": false
}
```
## 🤔 Expected Behavior
SVGs should not be minified.
## 😯 Current Behavior
SVG imports within the project source are ignored by the minifier as expected. However, SVGs pulled in from a package are still minified with any settings in htmlnanorc ignored.
## 💁 Possible Solution
Adding a copy of the .htmlnanorc file detailed above to the package directory within node_modules has the imported SVGs correctly ignored by the minifier, but this is not a practical solution.
## 🔦 Context
I'm trying to import an HTML template file ("icon.htm") with the following contents:
```
<svg class="icon" xmlns="http://www.w3.org/2000/svg" viewBox="0 0 24 24">
<path v-bind:d="path" />
</svg>
```
The vuejs attribute syntax in the path causes a minifer error. Setting the minifier to ignore SVGs should prevent this.
If the template is imported as e.g. "./icon.htm" all is well. However, importing from a package, e.g. "mypackage/templates/icon.htm" fails as the minifier parses the SVG within.
## 🌍 Your Environment
| Software | Version(s) |
| ---------------- | ---------- |
| Parcel | 1.12.3 |
| Node | v12.6.0 |
| npm/Yarn | 6.10.1 |
| Operating System | Ubuntu |
|
1.0
|
.htmlnanorc ignored for package imports - # 🐛 bug report
.htmlnanorc settings are honored for local template files, but not for those imported from a package
## 🎛 Configuration (.babelrc, package.json, cli command)
.htmlnanorc in project root as follows:
```js
{
"minifySvg": false
}
```
## 🤔 Expected Behavior
SVGs should not be minified.
## 😯 Current Behavior
SVG imports within the project source are ignored by the minifier as expected. However, SVGs pulled in from a package are still minified with any settings in htmlnanorc ignored.
## 💁 Possible Solution
Adding a copy of the .htmlnanorc file detailed above to the package directory within node_modules has the imported SVGs correctly ignored by the minifier, but this is not a practical solution.
## 🔦 Context
I'm trying to import an HTML template file ("icon.htm") with the following contents:
```
<svg class="icon" xmlns="http://www.w3.org/2000/svg" viewBox="0 0 24 24">
<path v-bind:d="path" />
</svg>
```
The vuejs attribute syntax in the path causes a minifer error. Setting the minifier to ignore SVGs should prevent this.
If the template is imported as e.g. "./icon.htm" all is well. However, importing from a package, e.g. "mypackage/templates/icon.htm" fails as the minifier parses the SVG within.
## 🌍 Your Environment
| Software | Version(s) |
| ---------------- | ---------- |
| Parcel | 1.12.3 |
| Node | v12.6.0 |
| npm/Yarn | 6.10.1 |
| Operating System | Ubuntu |
|
process
|
htmlnanorc ignored for package imports 🐛 bug report htmlnanorc settings are honored for local template files but not for those imported from a package 🎛 configuration babelrc package json cli command htmlnanorc in project root as follows js minifysvg false 🤔 expected behavior svgs should not be minified 😯 current behavior svg imports within the project source are ignored by the minifier as expected however svgs pulled in from a package are still minified with any settings in htmlnanorc ignored 💁 possible solution adding a copy of the htmlnanorc file detailed above to the package directory within node modules has the imported svgs correctly ignored by the minifier but this is not a practical solution 🔦 context i m trying to import an html template file icon htm with the following contents the vuejs attribute syntax in the path causes a minifer error setting the minifier to ignore svgs should prevent this if the template is imported as e g icon htm all is well however importing from a package e g mypackage templates icon htm fails as the minifier parses the svg within 🌍 your environment software version s parcel node npm yarn operating system ubuntu
| 1
|
22,145
| 30,684,714,309
|
IssuesEvent
|
2023-07-26 11:33:11
|
MicrosoftDocs/azure-docs
|
https://api.github.com/repos/MicrosoftDocs/azure-docs
|
closed
|
Wrong description in ja-jp doc : "Using a user-assigned managed identity for an Azure Automation account" regarding Hybrid Runbook Worker
|
automation/svc triaged assigned-to-author process-automation/subsvc Pri2 doc-issue
|
Hi I found some conflicts in ja-jp doc.
I believe the en-US doc is correct one.
Please update the ja-JP doc as en-US doc.
https://docs.microsoft.com/en-US/azure/automation/add-user-assigned-identity
https://docs.microsoft.com/ja-JP/azure/automation/add-user-assigned-identity
In en-US doc, I can see the Hybrid Runbook Worker is **not** available for user-assigned Managed identity.

But in ja-JP doc, the doc is saying Hybrid Runbook Worker is available for user-assigned Managed identity.
Even though the "User-assigned managed identities are supported for Azure jobs only" is mentioned as note.
Below highlighted one is the sentence which has the issue.

In Japanese
> ユーザー割り当てマネージド ID を使用してハイブリッド ジョブを実行する場合は、Hybrid Runbook Worker を最新バージョンに更新します。 最低限必要なバージョンは次のとおりです。
>
> Windows Hybrid Runbook Worker: バージョン 7.3.1125.0
> Linux Hybrid Runbook Worker: バージョン 1.7.4.0
If translated to English, this means...
> If you want to run a hybrid job by using a user-assigned managed ID, update the Hybrid Runbook Worker to the latest version. The minimum required versions are:
>
> Windows Hybrid Runbook Worker: Version 7.3.1125.0
> Linux Hybrid Runbook Worker: Version 1.7.4.0
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: 3b02b193-37a6-6ccf-7beb-ab91e43c229a
* Version Independent ID: ebf0e627-a87c-94d1-978f-57326acd85f7
* Content: [Using a user-assigned managed identity for an Azure Automation account](https://docs.microsoft.com/en-us/azure/automation/add-user-assigned-identity)
* Content Source: [articles/automation/add-user-assigned-identity.md](https://github.com/MicrosoftDocs/azure-docs/blob/main/articles/automation/add-user-assigned-identity.md)
* Service: **automation**
* Sub-service: **process-automation**
* GitHub Login: @SGSneha
* Microsoft Alias: **v-ssudhir**
|
1.0
|
Wrong description in ja-jp doc : "Using a user-assigned managed identity for an Azure Automation account" regarding Hybrid Runbook Worker - Hi I found some conflicts in ja-jp doc.
I believe the en-US doc is correct one.
Please update the ja-JP doc as en-US doc.
https://docs.microsoft.com/en-US/azure/automation/add-user-assigned-identity
https://docs.microsoft.com/ja-JP/azure/automation/add-user-assigned-identity
In en-US doc, I can see the Hybrid Runbook Worker is **not** available for user-assigned Managed identity.

But in ja-JP doc, the doc is saying Hybrid Runbook Worker is available for user-assigned Managed identity.
Even though the "User-assigned managed identities are supported for Azure jobs only" is mentioned as note.
Below highlighted one is the sentence which has the issue.

In Japanese
> ユーザー割り当てマネージド ID を使用してハイブリッド ジョブを実行する場合は、Hybrid Runbook Worker を最新バージョンに更新します。 最低限必要なバージョンは次のとおりです。
>
> Windows Hybrid Runbook Worker: バージョン 7.3.1125.0
> Linux Hybrid Runbook Worker: バージョン 1.7.4.0
If translated to English, this means...
> If you want to run a hybrid job by using a user-assigned managed ID, update the Hybrid Runbook Worker to the latest version. The minimum required versions are:
>
> Windows Hybrid Runbook Worker: Version 7.3.1125.0
> Linux Hybrid Runbook Worker: Version 1.7.4.0
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: 3b02b193-37a6-6ccf-7beb-ab91e43c229a
* Version Independent ID: ebf0e627-a87c-94d1-978f-57326acd85f7
* Content: [Using a user-assigned managed identity for an Azure Automation account](https://docs.microsoft.com/en-us/azure/automation/add-user-assigned-identity)
* Content Source: [articles/automation/add-user-assigned-identity.md](https://github.com/MicrosoftDocs/azure-docs/blob/main/articles/automation/add-user-assigned-identity.md)
* Service: **automation**
* Sub-service: **process-automation**
* GitHub Login: @SGSneha
* Microsoft Alias: **v-ssudhir**
|
process
|
wrong description in ja jp doc using a user assigned managed identity for an azure automation account regarding hybrid runbook worker hi i found some conflicts in ja jp doc i believe the en us doc is correct one please update the ja jp doc as en us doc in en us doc i can see the hybrid runbook worker is not available for user assigned managed identity but in ja jp doc the doc is saying hybrid runbook worker is available for user assigned managed identity even though the user assigned managed identities are supported for azure jobs only is mentioned as note below highlighted one is the sentence which has the issue in japanese ユーザー割り当てマネージド id を使用してハイブリッド ジョブを実行する場合は、hybrid runbook worker を最新バージョンに更新します。 最低限必要なバージョンは次のとおりです。 windows hybrid runbook worker バージョン linux hybrid runbook worker バージョン if translated to english this means if you want to run a hybrid job by using a user assigned managed id update the hybrid runbook worker to the latest version the minimum required versions are windows hybrid runbook worker version linux hybrid runbook worker version document details ⚠ do not edit this section it is required for docs microsoft com ➟ github issue linking id version independent id content content source service automation sub service process automation github login sgsneha microsoft alias v ssudhir
| 1
|
265,416
| 8,353,966,740
|
IssuesEvent
|
2018-10-02 11:54:38
|
webcompat/web-bugs
|
https://api.github.com/repos/webcompat/web-bugs
|
closed
|
www.google.com - see bug description
|
browser-firefox priority-critical
|
<!-- @browser: Firefox 62.0 -->
<!-- @ua_header: Mozilla/5.0 (Windows NT 6.1; rv:62.0) Gecko/20100101 Firefox/62.0 -->
<!-- @reported_with: desktop-reporter -->
**URL**: https://www.google.com/?gws_rd=ssl
**Browser / Version**: Firefox 62.0
**Operating System**: Windows 7
**Tested Another Browser**: Yes
**Problem type**: Something else
**Description**: secured connection failed is coming
**Steps to Reproduce**:
i change the date and time correctly
<details>
<summary>Browser Configuration</summary>
<ul>
<li>mixed active content blocked: false</li><li>buildID: 20180920131237</li><li>tracking content blocked: false</li><li>gfx.webrender.blob-images: true</li><li>gfx.webrender.all: false</li><li>mixed passive content blocked: false</li><li>gfx.webrender.enabled: false</li><li>image.mem.shared: true</li><li>channel: release</li>
</ul>
</details>
_From [webcompat.com](https://webcompat.com/) with ❤️_
|
1.0
|
www.google.com - see bug description - <!-- @browser: Firefox 62.0 -->
<!-- @ua_header: Mozilla/5.0 (Windows NT 6.1; rv:62.0) Gecko/20100101 Firefox/62.0 -->
<!-- @reported_with: desktop-reporter -->
**URL**: https://www.google.com/?gws_rd=ssl
**Browser / Version**: Firefox 62.0
**Operating System**: Windows 7
**Tested Another Browser**: Yes
**Problem type**: Something else
**Description**: secured connection failed is coming
**Steps to Reproduce**:
i change the date and time correctly
<details>
<summary>Browser Configuration</summary>
<ul>
<li>mixed active content blocked: false</li><li>buildID: 20180920131237</li><li>tracking content blocked: false</li><li>gfx.webrender.blob-images: true</li><li>gfx.webrender.all: false</li><li>mixed passive content blocked: false</li><li>gfx.webrender.enabled: false</li><li>image.mem.shared: true</li><li>channel: release</li>
</ul>
</details>
_From [webcompat.com](https://webcompat.com/) with ❤️_
|
non_process
|
see bug description url browser version firefox operating system windows tested another browser yes problem type something else description secured connection failed is coming steps to reproduce i change the date and time correctly browser configuration mixed active content blocked false buildid tracking content blocked false gfx webrender blob images true gfx webrender all false mixed passive content blocked false gfx webrender enabled false image mem shared true channel release from with ❤️
| 0
|
185,156
| 21,785,094,819
|
IssuesEvent
|
2022-05-14 02:28:19
|
dmartinez777/AzureDevOpsAngular
|
https://api.github.com/repos/dmartinez777/AzureDevOpsAngular
|
closed
|
CVE-2019-16769 (Medium) detected in serialize-javascript-1.9.1.tgz - autoclosed
|
security vulnerability
|
## CVE-2019-16769 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>serialize-javascript-1.9.1.tgz</b></p></summary>
<p>Serialize JavaScript to a superset of JSON that includes regular expressions and functions.</p>
<p>Library home page: <a href="https://registry.npmjs.org/serialize-javascript/-/serialize-javascript-1.9.1.tgz">https://registry.npmjs.org/serialize-javascript/-/serialize-javascript-1.9.1.tgz</a></p>
<p>Path to dependency file: /tmp/ws-scm/AzureDevOpsAngular/package.json</p>
<p>Path to vulnerable library: /tmp/ws-scm/AzureDevOpsAngular/node_modules/serialize-javascript/package.json</p>
<p>
Dependency Hierarchy:
- build-angular-0.803.20.tgz (Root Library)
- copy-webpack-plugin-5.0.4.tgz
- :x: **serialize-javascript-1.9.1.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/xlordt/AzureDevOpsAngular/commit/fcfce2f795c9b1a45655aad17b03c09e9c25bd3d">fcfce2f795c9b1a45655aad17b03c09e9c25bd3d</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Affected versions of this package are vulnerable to Cross-site Scripting (XSS). It does not properly mitigate against unsafe characters in serialized regular expressions. This vulnerability is not affected on Node.js environment since Node.js's implementation of RegExp.prototype.toString() backslash-escapes all forward slashes in regular expressions. If serialized data of regular expression objects are used in an environment other than Node.js, it is affected by this vulnerability.
<p>Publish Date: 2019-12-05
<p>URL: <a href=https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-16769>CVE-2019-16769</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 2 Score Details (<b>5.0</b>)</summary>
<p>
Base Score Metrics not available</p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-16769">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-16769</a></p>
<p>Release Date: 2019-12-05</p>
<p>Fix Resolution: v2.1.1</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
CVE-2019-16769 (Medium) detected in serialize-javascript-1.9.1.tgz - autoclosed - ## CVE-2019-16769 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>serialize-javascript-1.9.1.tgz</b></p></summary>
<p>Serialize JavaScript to a superset of JSON that includes regular expressions and functions.</p>
<p>Library home page: <a href="https://registry.npmjs.org/serialize-javascript/-/serialize-javascript-1.9.1.tgz">https://registry.npmjs.org/serialize-javascript/-/serialize-javascript-1.9.1.tgz</a></p>
<p>Path to dependency file: /tmp/ws-scm/AzureDevOpsAngular/package.json</p>
<p>Path to vulnerable library: /tmp/ws-scm/AzureDevOpsAngular/node_modules/serialize-javascript/package.json</p>
<p>
Dependency Hierarchy:
- build-angular-0.803.20.tgz (Root Library)
- copy-webpack-plugin-5.0.4.tgz
- :x: **serialize-javascript-1.9.1.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/xlordt/AzureDevOpsAngular/commit/fcfce2f795c9b1a45655aad17b03c09e9c25bd3d">fcfce2f795c9b1a45655aad17b03c09e9c25bd3d</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
Affected versions of this package are vulnerable to Cross-site Scripting (XSS). It does not properly mitigate against unsafe characters in serialized regular expressions. This vulnerability is not affected on Node.js environment since Node.js's implementation of RegExp.prototype.toString() backslash-escapes all forward slashes in regular expressions. If serialized data of regular expression objects are used in an environment other than Node.js, it is affected by this vulnerability.
<p>Publish Date: 2019-12-05
<p>URL: <a href=https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-16769>CVE-2019-16769</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 2 Score Details (<b>5.0</b>)</summary>
<p>
Base Score Metrics not available</p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-16769">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-16769</a></p>
<p>Release Date: 2019-12-05</p>
<p>Fix Resolution: v2.1.1</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_process
|
cve medium detected in serialize javascript tgz autoclosed cve medium severity vulnerability vulnerable library serialize javascript tgz serialize javascript to a superset of json that includes regular expressions and functions library home page a href path to dependency file tmp ws scm azuredevopsangular package json path to vulnerable library tmp ws scm azuredevopsangular node modules serialize javascript package json dependency hierarchy build angular tgz root library copy webpack plugin tgz x serialize javascript tgz vulnerable library found in head commit a href vulnerability details affected versions of this package are vulnerable to cross site scripting xss it does not properly mitigate against unsafe characters in serialized regular expressions this vulnerability is not affected on node js environment since node js s implementation of regexp prototype tostring backslash escapes all forward slashes in regular expressions if serialized data of regular expression objects are used in an environment other than node js it is affected by this vulnerability publish date url a href cvss score details base score metrics not available suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with whitesource
| 0
|
695,807
| 23,872,829,637
|
IssuesEvent
|
2022-09-07 16:09:47
|
credential-handler/credential-handler-polyfill
|
https://api.github.com/repos/credential-handler/credential-handler-polyfill
|
closed
|
Provide user feedback for browsers with blocked third party storage, etc.
|
Priority 2
|
If possible, we need to provide better feedback to the user when the mediator can't be loaded because of third party blocking tools in the user's browser/extensions. Ideally, we don't have these issues at all, but if there's no way around them, people using the Brave browser or using a Privacy Badger extension, etc. should not just see a broken site, they should be informed that they need to make some changes to use the site they are on. Users of these tools also typically understand that they must make exceptions (and are willing to ... presently anyway) for certain sites.
|
1.0
|
Provide user feedback for browsers with blocked third party storage, etc. - If possible, we need to provide better feedback to the user when the mediator can't be loaded because of third party blocking tools in the user's browser/extensions. Ideally, we don't have these issues at all, but if there's no way around them, people using the Brave browser or using a Privacy Badger extension, etc. should not just see a broken site, they should be informed that they need to make some changes to use the site they are on. Users of these tools also typically understand that they must make exceptions (and are willing to ... presently anyway) for certain sites.
|
non_process
|
provide user feedback for browsers with blocked third party storage etc if possible we need to provide better feedback to the user when the mediator can t be loaded because of third party blocking tools in the user s browser extensions ideally we don t have these issues at all but if there s no way around them people using the brave browser or using a privacy badger extension etc should not just see a broken site they should be informed that they need to make some changes to use the site they are on users of these tools also typically understand that they must make exceptions and are willing to presently anyway for certain sites
| 0
|
8,200
| 11,395,023,706
|
IssuesEvent
|
2020-01-30 10:32:27
|
prisma/prisma-client-js
|
https://api.github.com/repos/prisma/prisma-client-js
|
opened
|
Prisma Client should pick up required env vars from .env file during development
|
process/candidate
|
## Context
We currently encourage users to hardcode their DB credentials into the `schema.prisma` by this being the default getting started experience. We should strongly **discourage** developers to do so and therefore adjust our getting started experience + examples accordingly in order to encourage the use of environment variables.
In order to not degrade the DX during local development, I'm suggesting the following change in Prisma Client.
## Proposal
- Prisma Client should auto-import required env vars (i.e. env vars referenced in the Prisma schema in the `datasource.url` property) from the `.env` file during development.
- Implementation notes: It's important that this functionality/feature takes the **lowest precedent** (i.e. if the user provides their own env vars, call `require('dotenv')` or similar themselves, this feature shouldn't conflict in any way)
- It should **not** make the env vars available to the entire Node process but just the query engine.
|
1.0
|
Prisma Client should pick up required env vars from .env file during development - ## Context
We currently encourage users to hardcode their DB credentials into the `schema.prisma` by this being the default getting started experience. We should strongly **discourage** developers to do so and therefore adjust our getting started experience + examples accordingly in order to encourage the use of environment variables.
In order to not degrade the DX during local development, I'm suggesting the following change in Prisma Client.
## Proposal
- Prisma Client should auto-import required env vars (i.e. env vars referenced in the Prisma schema in the `datasource.url` property) from the `.env` file during development.
- Implementation notes: It's important that this functionality/feature takes the **lowest precedent** (i.e. if the user provides their own env vars, call `require('dotenv')` or similar themselves, this feature shouldn't conflict in any way)
- It should **not** make the env vars available to the entire Node process but just the query engine.
|
process
|
prisma client should pick up required env vars from env file during development context we currently encourage users to hardcode their db credentials into the schema prisma by this being the default getting started experience we should strongly discourage developers to do so and therefore adjust our getting started experience examples accordingly in order to encourage the use of environment variables in order to not degrade the dx during local development i m suggesting the following change in prisma client proposal prisma client should auto import required env vars i e env vars referenced in the prisma schema in the datasource url property from the env file during development implementation notes it s important that this functionality feature takes the lowest precedent i e if the user provides their own env vars call require dotenv or similar themselves this feature shouldn t conflict in any way it should not make the env vars available to the entire node process but just the query engine
| 1
|
16,627
| 21,700,232,363
|
IssuesEvent
|
2022-05-10 02:39:27
|
streamnative/flink
|
https://api.github.com/repos/streamnative/flink
|
opened
|
[SQL Connector] Manual test SQL Connector on 1.15 release and Wrap up
|
compute/data-processing
|
1. manual test the jar
2. test the pipeline after the merge request
3. write documentations on the process
No Estimate as its a cleanup job for previous tickets.
|
1.0
|
[SQL Connector] Manual test SQL Connector on 1.15 release and Wrap up - 1. manual test the jar
2. test the pipeline after the merge request
3. write documentations on the process
No Estimate as its a cleanup job for previous tickets.
|
process
|
manual test sql connector on release and wrap up manual test the jar test the pipeline after the merge request write documentations on the process no estimate as its a cleanup job for previous tickets
| 1
|
98,086
| 20,606,533,697
|
IssuesEvent
|
2022-03-07 01:25:47
|
inventree/InvenTree
|
https://api.github.com/repos/inventree/InvenTree
|
closed
|
[BUG] App QR scanner fails with server version 0.6
|
bug barcode app
|
**Describe the bug**
Barcode / QR code scanner in iPhone App does not work. App version 0.5.6 and Inventree version 0.6.1
**Steps to Reproduce**
Steps to reproduce the behavior:
1. Installed a fresh docker version of 0.6.1.
2. Added a part category and a single new part and displayed the QR code for that part.
3. Opened the iPhone App (version 0.5.6), connected to server.
4. Scan barcode within the app5.
5. App fails with "No match for barcode" message
**Expected behavior**
Expected to open the part in the App as it worked in 0.5.4 of Inventree with the App.
The App fails with the message "No match for barcode"
inventree-proxy log contains message "POST /api/barcode/ HTTP/1.1" 200 130 "-" "Dart/2.13 (dart:io)" "-"
**Deployment Method**
- [x ] Docker
- [ ] Bare Metal
**Version Information**
App version 0.5.6 and Inventree version 0.6.1
|
1.0
|
[BUG] App QR scanner fails with server version 0.6 - **Describe the bug**
Barcode / QR code scanner in iPhone App does not work. App version 0.5.6 and Inventree version 0.6.1
**Steps to Reproduce**
Steps to reproduce the behavior:
1. Installed a fresh docker version of 0.6.1.
2. Added a part category and a single new part and displayed the QR code for that part.
3. Opened the iPhone App (version 0.5.6), connected to server.
4. Scan barcode within the app5.
5. App fails with "No match for barcode" message
**Expected behavior**
Expected to open the part in the App as it worked in 0.5.4 of Inventree with the App.
The App fails with the message "No match for barcode"
inventree-proxy log contains message "POST /api/barcode/ HTTP/1.1" 200 130 "-" "Dart/2.13 (dart:io)" "-"
**Deployment Method**
- [x ] Docker
- [ ] Bare Metal
**Version Information**
App version 0.5.6 and Inventree version 0.6.1
|
non_process
|
app qr scanner fails with server version describe the bug barcode qr code scanner in iphone app does not work app version and inventree version steps to reproduce steps to reproduce the behavior installed a fresh docker version of added a part category and a single new part and displayed the qr code for that part opened the iphone app version connected to server scan barcode within the app fails with no match for barcode message expected behavior expected to open the part in the app as it worked in of inventree with the app the app fails with the message no match for barcode inventree proxy log contains message post api barcode http dart dart io deployment method docker bare metal version information app version and inventree version
| 0
|
5,385
| 8,211,464,034
|
IssuesEvent
|
2018-09-04 13:53:46
|
openvstorage/framework
|
https://api.github.com/repos/openvstorage/framework
|
closed
|
Look for potential issues in list caching
|
process_cantreproduce type_bug
|
### Problem description
@dejonghb had managed to create two vdisks which were not shown on the total overview of the vdisk page but they were shown on the vpool detail page
In short the overall list returned old results and the queried list showed the newest result
After a while the items resolved out of them selves (due to invalidation or expiration of the cache most likely)
|
1.0
|
Look for potential issues in list caching - ### Problem description
@dejonghb had managed to create two vdisks which were not shown on the total overview of the vdisk page but they were shown on the vpool detail page
In short the overall list returned old results and the queried list showed the newest result
After a while the items resolved out of them selves (due to invalidation or expiration of the cache most likely)
|
process
|
look for potential issues in list caching problem description dejonghb had managed to create two vdisks which were not shown on the total overview of the vdisk page but they were shown on the vpool detail page in short the overall list returned old results and the queried list showed the newest result after a while the items resolved out of them selves due to invalidation or expiration of the cache most likely
| 1
|
1,826
| 4,423,383,551
|
IssuesEvent
|
2016-08-16 08:22:37
|
mockito/mockito
|
https://api.github.com/repos/mockito/mockito
|
closed
|
Remove Whitebox class
|
1.* incompatible in progress refactoring
|
The original question stems from #422 where additional changes were requested. This class seems to stimulate bad testing practices. Mockito only uses this class in [JUnitFailureHacker](https://github.com/mockito/mockito/blob/196ff979da156caa07e19f57e4849637d8bede1a/src/main/java/org/mockito/internal/util/junit/JUnitFailureHacker.java) which consequently is only used in [VerboseMockitoJUnitRunner](https://github.com/mockito/mockito/blob/196ff979da156caa07e19f57e4849637d8bede1a/src/main/java/org/mockito/runners/VerboseMockitoJUnitRunner.java). Given the nature of this class and only 1 usage in the library, I think we should remove it to prevent users from obtaining bad testing habits.
|
True
|
Remove Whitebox class - The original question stems from #422 where additional changes were requested. This class seems to stimulate bad testing practices. Mockito only uses this class in [JUnitFailureHacker](https://github.com/mockito/mockito/blob/196ff979da156caa07e19f57e4849637d8bede1a/src/main/java/org/mockito/internal/util/junit/JUnitFailureHacker.java) which consequently is only used in [VerboseMockitoJUnitRunner](https://github.com/mockito/mockito/blob/196ff979da156caa07e19f57e4849637d8bede1a/src/main/java/org/mockito/runners/VerboseMockitoJUnitRunner.java). Given the nature of this class and only 1 usage in the library, I think we should remove it to prevent users from obtaining bad testing habits.
|
non_process
|
remove whitebox class the original question stems from where additional changes were requested this class seems to stimulate bad testing practices mockito only uses this class in which consequently is only used in given the nature of this class and only usage in the library i think we should remove it to prevent users from obtaining bad testing habits
| 0
|
15,486
| 19,694,765,173
|
IssuesEvent
|
2022-01-12 10:56:16
|
arcus-azure/arcus.messaging
|
https://api.github.com/repos/arcus-azure/arcus.messaging
|
closed
|
Add message handling extension based on the Azure Functions builder type
|
enhancement area:message-processing
|
**Is your feature request related to a problem? Please describe.**
We have already a set of extensions to add message handlers to the `IServiceCollection` but not for the `IFunctionsHostBuilder` which is available for Azure Functions projects.
**Describe the solution you'd like**
Add the whole set of extensions for this Azure Functions type so to make it more discoverable.
**Additional context**
Came across during review of #141
|
1.0
|
Add message handling extension based on the Azure Functions builder type - **Is your feature request related to a problem? Please describe.**
We have already a set of extensions to add message handlers to the `IServiceCollection` but not for the `IFunctionsHostBuilder` which is available for Azure Functions projects.
**Describe the solution you'd like**
Add the whole set of extensions for this Azure Functions type so to make it more discoverable.
**Additional context**
Came across during review of #141
|
process
|
add message handling extension based on the azure functions builder type is your feature request related to a problem please describe we have already a set of extensions to add message handlers to the iservicecollection but not for the ifunctionshostbuilder which is available for azure functions projects describe the solution you d like add the whole set of extensions for this azure functions type so to make it more discoverable additional context came across during review of
| 1
|
22,154
| 30,695,362,104
|
IssuesEvent
|
2023-07-26 18:09:20
|
open-telemetry/opentelemetry-collector-contrib
|
https://api.github.com/repos/open-telemetry/opentelemetry-collector-contrib
|
closed
|
Allow Transfroms Processor to Truncate log events
|
processor/transform pkg/ottl
|
**Is your feature request related to a problem? Please describe.**
A clear and concise description of what the problem is. Ex. I'm always frustrated when [...]
When collecting k8s pod logs with filelog receiver, I use the recombine operator to merge lines to create "multiline events" (ie. docker (16384) or containerd (8192) partial logs). Sometimes these recombined log lines are very large. ie. > 1MB up to 50MB+
After I do this I would like a processor to TRUNCATE the resulting log event aka "body", to protect downstream systems from massive events logged by an application, while retaining some arbitrary number of characters to get minimum value or insight into the log event. For example, many systems default to 10000 characters and allow config to increase to 100K, up to 1MB. etc.
There is a OTTL transform function that does this, for other data:
https://github.com/open-telemetry/opentelemetry-collector-contrib/tree/main/pkg/ottl/ottlfuncs#truncate_all
**Describe the solution you'd like**
The ability to apply truncate_all like option to stings in body for logs as a transforms processor.
**Describe alternatives you've considered**
A clear and concise description of any alternative solutions or features you've considered.
Tried using some regex based transforms, like replace_match, etc, but could not get it to allow me to control the length of the "body.string"
Also, while recombine operator lets you set max recombine lines or size, it doesnt let you recombine to a point then drop the rest and start recombining again with a start of line rule.
**Additional context**
Add any other context or screenshots about the feature request here.
|
1.0
|
Allow Transfroms Processor to Truncate log events - **Is your feature request related to a problem? Please describe.**
A clear and concise description of what the problem is. Ex. I'm always frustrated when [...]
When collecting k8s pod logs with filelog receiver, I use the recombine operator to merge lines to create "multiline events" (ie. docker (16384) or containerd (8192) partial logs). Sometimes these recombined log lines are very large. ie. > 1MB up to 50MB+
After I do this I would like a processor to TRUNCATE the resulting log event aka "body", to protect downstream systems from massive events logged by an application, while retaining some arbitrary number of characters to get minimum value or insight into the log event. For example, many systems default to 10000 characters and allow config to increase to 100K, up to 1MB. etc.
There is a OTTL transform function that does this, for other data:
https://github.com/open-telemetry/opentelemetry-collector-contrib/tree/main/pkg/ottl/ottlfuncs#truncate_all
**Describe the solution you'd like**
The ability to apply truncate_all like option to stings in body for logs as a transforms processor.
**Describe alternatives you've considered**
A clear and concise description of any alternative solutions or features you've considered.
Tried using some regex based transforms, like replace_match, etc, but could not get it to allow me to control the length of the "body.string"
Also, while recombine operator lets you set max recombine lines or size, it doesnt let you recombine to a point then drop the rest and start recombining again with a start of line rule.
**Additional context**
Add any other context or screenshots about the feature request here.
|
process
|
allow transfroms processor to truncate log events is your feature request related to a problem please describe a clear and concise description of what the problem is ex i m always frustrated when when collecting pod logs with filelog receiver i use the recombine operator to merge lines to create multiline events ie docker or containerd partial logs sometimes these recombined log lines are very large ie up to after i do this i would like a processor to truncate the resulting log event aka body to protect downstream systems from massive events logged by an application while retaining some arbitrary number of characters to get minimum value or insight into the log event for example many systems default to characters and allow config to increase to up to etc there is a ottl transform function that does this for other data describe the solution you d like the ability to apply truncate all like option to stings in body for logs as a transforms processor describe alternatives you ve considered a clear and concise description of any alternative solutions or features you ve considered tried using some regex based transforms like replace match etc but could not get it to allow me to control the length of the body string also while recombine operator lets you set max recombine lines or size it doesnt let you recombine to a point then drop the rest and start recombining again with a start of line rule additional context add any other context or screenshots about the feature request here
| 1
|
9,609
| 3,295,768,306
|
IssuesEvent
|
2015-11-01 08:31:05
|
coala-analyzer/coala
|
https://api.github.com/repos/coala-analyzer/coala
|
opened
|
advertise virtualenv
|
documentation
|
Hi,
`sudo pip install ...` is discouraged, instead the python community advertises the use of the much cleaner virtualenv's because pip is not a full package manager. Read up on http://opensourcehacker.com/2012/09/16/recommended-way-for-sudo-free-installation-of-python-software-with-virtualenv/ and https://packaging.python.org/en/latest/installing/ for this.
FWIW I want the installation to be as simple as possible, virtualenv is an additional command... maybe we can pack that in a script and serve that statically or so...
Any thoughts?
|
1.0
|
advertise virtualenv - Hi,
`sudo pip install ...` is discouraged, instead the python community advertises the use of the much cleaner virtualenv's because pip is not a full package manager. Read up on http://opensourcehacker.com/2012/09/16/recommended-way-for-sudo-free-installation-of-python-software-with-virtualenv/ and https://packaging.python.org/en/latest/installing/ for this.
FWIW I want the installation to be as simple as possible, virtualenv is an additional command... maybe we can pack that in a script and serve that statically or so...
Any thoughts?
|
non_process
|
advertise virtualenv hi sudo pip install is discouraged instead the python community advertises the use of the much cleaner virtualenv s because pip is not a full package manager read up on and for this fwiw i want the installation to be as simple as possible virtualenv is an additional command maybe we can pack that in a script and serve that statically or so any thoughts
| 0
|
8,494
| 11,649,171,618
|
IssuesEvent
|
2020-03-02 00:47:47
|
MicrosoftDocs/azure-docs
|
https://api.github.com/repos/MicrosoftDocs/azure-docs
|
closed
|
Got error when I ran this
|
Pri2 automation/svc cxp process-automation/subsvc product-issue triaged
|
2/26/2020, 2:38:32 PM
Error
Connections asset not found. To create this Connections asset, navigate to the Assets blade and create a Connections asset named: AzureRunAsConnection.
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: d7c2cc34-ba4a-1181-ccda-88dd901e0212
* Version Independent ID: 9b4a3b68-03fc-387a-ee5d-e7f73ee3c567
* Content: [Azure Quickstart - Create an Azure Automation account](https://docs.microsoft.com/en-us/azure/automation/automation-quickstart-create-account#feedback)
* Content Source: [articles/automation/automation-quickstart-create-account.md](https://github.com/Microsoft/azure-docs/blob/master/articles/automation/automation-quickstart-create-account.md)
* Service: **automation**
* Sub-service: **process-automation**
* GitHub Login: @MGoedtel
* Microsoft Alias: **magoedte**
|
1.0
|
Got error when I ran this - 2/26/2020, 2:38:32 PM
Error
Connections asset not found. To create this Connections asset, navigate to the Assets blade and create a Connections asset named: AzureRunAsConnection.
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: d7c2cc34-ba4a-1181-ccda-88dd901e0212
* Version Independent ID: 9b4a3b68-03fc-387a-ee5d-e7f73ee3c567
* Content: [Azure Quickstart - Create an Azure Automation account](https://docs.microsoft.com/en-us/azure/automation/automation-quickstart-create-account#feedback)
* Content Source: [articles/automation/automation-quickstart-create-account.md](https://github.com/Microsoft/azure-docs/blob/master/articles/automation/automation-quickstart-create-account.md)
* Service: **automation**
* Sub-service: **process-automation**
* GitHub Login: @MGoedtel
* Microsoft Alias: **magoedte**
|
process
|
got error when i ran this pm error connections asset not found to create this connections asset navigate to the assets blade and create a connections asset named azurerunasconnection document details ⚠ do not edit this section it is required for docs microsoft com ➟ github issue linking id ccda version independent id content content source service automation sub service process automation github login mgoedtel microsoft alias magoedte
| 1
|
16,081
| 20,251,728,094
|
IssuesEvent
|
2022-02-14 18:34:26
|
googleapis/java-logging-logback
|
https://api.github.com/repos/googleapis/java-logging-logback
|
closed
|
Dependency Dashboard
|
type: process api: logging
|
This issue provides visibility into Renovate updates and their statuses. [Learn more](https://docs.renovatebot.com/key-concepts/dashboard/)
## Edited/Blocked
These updates have been manually edited so Renovate will no longer make changes. To discard all commits and start over, click on a checkbox.
- [ ] <!-- rebase-branch=renovate/org.sonatype.plugins-nexus-staging-maven-plugin-1.x -->[build(deps): update dependency org.sonatype.plugins:nexus-staging-maven-plugin to v1.6.9](../pull/689)
- [ ] <!-- rebase-branch=renovate/com.google.cloud-google-cloud-logging-logback-0.x -->[chore(deps): update dependency com.google.cloud:google-cloud-logging-logback to v0.123.3-alpha](../pull/690)
## Ignored or Blocked
These are blocked by an existing closed PR and will not be recreated unless you click a checkbox below.
- [ ] <!-- recreate-branch=renovate/actions-github-script-6.x -->[deps: update actions/github-script action to v6](../pull/686)
- [ ] <!-- recreate-branch=renovate/actions-setup-java-2.x -->[deps: update actions/setup-java action to v2](../pull/369)
---
- [ ] <!-- manual job -->Check this box to trigger a request for Renovate to run again on this repository
|
1.0
|
Dependency Dashboard - This issue provides visibility into Renovate updates and their statuses. [Learn more](https://docs.renovatebot.com/key-concepts/dashboard/)
## Edited/Blocked
These updates have been manually edited so Renovate will no longer make changes. To discard all commits and start over, click on a checkbox.
- [ ] <!-- rebase-branch=renovate/org.sonatype.plugins-nexus-staging-maven-plugin-1.x -->[build(deps): update dependency org.sonatype.plugins:nexus-staging-maven-plugin to v1.6.9](../pull/689)
- [ ] <!-- rebase-branch=renovate/com.google.cloud-google-cloud-logging-logback-0.x -->[chore(deps): update dependency com.google.cloud:google-cloud-logging-logback to v0.123.3-alpha](../pull/690)
## Ignored or Blocked
These are blocked by an existing closed PR and will not be recreated unless you click a checkbox below.
- [ ] <!-- recreate-branch=renovate/actions-github-script-6.x -->[deps: update actions/github-script action to v6](../pull/686)
- [ ] <!-- recreate-branch=renovate/actions-setup-java-2.x -->[deps: update actions/setup-java action to v2](../pull/369)
---
- [ ] <!-- manual job -->Check this box to trigger a request for Renovate to run again on this repository
|
process
|
dependency dashboard this issue provides visibility into renovate updates and their statuses edited blocked these updates have been manually edited so renovate will no longer make changes to discard all commits and start over click on a checkbox pull pull ignored or blocked these are blocked by an existing closed pr and will not be recreated unless you click a checkbox below pull pull check this box to trigger a request for renovate to run again on this repository
| 1
|
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.