column        dtype          stats
Unnamed: 0    int64          0 .. 832k
id            float64        2.49B .. 32.1B
type          stringclasses  1 value
created_at    stringlengths  19 .. 19
repo          stringlengths  7 .. 112
repo_url      stringlengths  36 .. 141
action        stringclasses  3 values
title         stringlengths  1 .. 744
labels        stringlengths  4 .. 574
body          stringlengths  9 .. 211k
index         stringclasses  10 values
text_combine  stringlengths  96 .. 211k
label         stringclasses  2 values
text          stringlengths  96 .. 188k
binary_label  int64          0 .. 1
277,927
30,695,948,372
IssuesEvent
2023-07-26 18:36:37
RG4421/ampere-centos-kernel
https://api.github.com/repos/RG4421/ampere-centos-kernel
opened
CVE-2023-33203 (Medium) detected in linuxv5.2
Mend: dependency security vulnerability
## CVE-2023-33203 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linuxv5.2</b></p></summary> <p> <p>Linux kernel source tree</p> <p>Library home page: <a href=https://github.com/torvalds/linux.git>https://github.com/torvalds/linux.git</a></p> <p>Found in base branch: <b>amp-centos-8.0-kernel</b></p></p> </details> </p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (2)</summary> <p></p> <p> <img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/drivers/net/ethernet/qualcomm/emac/emac.c</b> <img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/drivers/net/ethernet/qualcomm/emac/emac.c</b> </p> </details> <p></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png?' width=19 height=20> Vulnerability Details</summary> <p> The Linux kernel before 6.2.9 has a race condition and resultant use-after-free in drivers/net/ethernet/qualcomm/emac/emac.c if a physically proximate attacker unplugs an emac based device. 
<p>Publish Date: 2023-05-18 <p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2023-33203>CVE-2023-33203</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.4</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Physical - Attack Complexity: High - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: High - Integrity Impact: High - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://www.linuxkernelcves.com/cves/CVE-2023-33203">https://www.linuxkernelcves.com/cves/CVE-2023-33203</a></p> <p>Release Date: 2023-05-18</p> <p>Fix Resolution: v4.14.312,v4.19.280,v5.4.240,v5.10.177,v5.15.105,v6.1.22,v6.2.9</p> </p> </details> <p></p>
True
CVE-2023-33203 (Medium) detected in linuxv5.2 - ## CVE-2023-33203 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linuxv5.2</b></p></summary> <p> <p>Linux kernel source tree</p> <p>Library home page: <a href=https://github.com/torvalds/linux.git>https://github.com/torvalds/linux.git</a></p> <p>Found in base branch: <b>amp-centos-8.0-kernel</b></p></p> </details> </p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (2)</summary> <p></p> <p> <img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/drivers/net/ethernet/qualcomm/emac/emac.c</b> <img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/drivers/net/ethernet/qualcomm/emac/emac.c</b> </p> </details> <p></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png?' width=19 height=20> Vulnerability Details</summary> <p> The Linux kernel before 6.2.9 has a race condition and resultant use-after-free in drivers/net/ethernet/qualcomm/emac/emac.c if a physically proximate attacker unplugs an emac based device. 
<p>Publish Date: 2023-05-18 <p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2023-33203>CVE-2023-33203</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.4</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Physical - Attack Complexity: High - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: High - Integrity Impact: High - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://www.linuxkernelcves.com/cves/CVE-2023-33203">https://www.linuxkernelcves.com/cves/CVE-2023-33203</a></p> <p>Release Date: 2023-05-18</p> <p>Fix Resolution: v4.14.312,v4.19.280,v5.4.240,v5.10.177,v5.15.105,v6.1.22,v6.2.9</p> </p> </details> <p></p>
non_process
cve medium detected in cve medium severity vulnerability vulnerable library linux kernel source tree library home page a href found in base branch amp centos kernel vulnerable source files drivers net ethernet qualcomm emac emac c drivers net ethernet qualcomm emac emac c vulnerability details the linux kernel before has a race condition and resultant use after free in drivers net ethernet qualcomm emac emac c if a physically proximate attacker unplugs an emac based device publish date url a href cvss score details base score metrics exploitability metrics attack vector physical attack complexity high privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution
0
18,267
24,347,071,737
IssuesEvent
2022-10-02 13:08:43
rathena/FluxCP
https://api.github.com/repos/rathena/FluxCP
opened
Vote - How do you process donations?
Enhancement Request Component: Payment Processor
### Provide Details So that I can gauge how you guys use FluxCP, can you please use the following emoji reactions for: 👍 I use the FluxCP Shop and provided Donation NPC. Shop Credits are stored via FluxCP. This is the default setup. 👎 I use the Item Shop on FluxCP but have changed the Shop Credits to use in-game CashPoints instead. Items are delivered via the provided Donations NPC. 💯 I don't use the Item Shop, I use the in-game CashShop and convert donation Credits into CashPoints. 🔢 I wrote my own script to handle it all. If you're using a different method to handle this process, please reply with a comment so I can evaluate how best to proceed.
1.0
Vote - How do you process donations? - ### Provide Details So that I can gauge how you guys use FluxCP, can you please use the following emoji reactions for: 👍 I use the FluxCP Shop and provided Donation NPC. Shop Credits are stored via FluxCP. This is the default setup. 👎 I use the Item Shop on FluxCP but have changed the Shop Credits to use in-game CashPoints instead. Items are delivered via the provided Donations NPC. 💯 I don't use the Item Shop, I use the in-game CashShop and convert donation Credits into CashPoints. 🔢 I wrote my own script to handle it all. If you're using a different method to handle this process, please reply with a comment so I can evaluate how best to proceed.
process
vote how do you process donations provide details so that i can gauge how you guys use fluxcp can you please use the following emoji reactions for 👍 i use the fluxcp shop and provided donation npc shop credits are stored via fluxcp this is the default setup 👎 i use the item shop on fluxcp but have changed the shop credits to use in game cashpoints instead items are delivered via the provided donations npc 💯 i don t use the item shop i use the in game cashshop and convert donation credits into cashpoints 🔢 i wrote my own script to handle it all if you re using a different method to handle this process please reply with a comment so i can evaluate how best to proceed
1
252,892
27,271,244,050
IssuesEvent
2023-02-22 22:34:22
jmservera/simplelogger
https://api.github.com/repos/jmservera/simplelogger
closed
CVE-2021-28860 (High) detected in mixme-0.3.5.tgz - autoclosed
security vulnerability
## CVE-2021-28860 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>mixme-0.3.5.tgz</b></p></summary> <p>A library for recursive merging of Javascript objects</p> <p>Library home page: <a href="https://registry.npmjs.org/mixme/-/mixme-0.3.5.tgz">https://registry.npmjs.org/mixme/-/mixme-0.3.5.tgz</a></p> <p>Path to dependency file: /package.json</p> <p>Path to vulnerable library: /node_modules/mixme/package.json</p> <p> Dependency Hierarchy: - csv-5.3.2.tgz (Root Library) - stream-transform-2.0.1.tgz - :x: **mixme-0.3.5.tgz** (Vulnerable Library) <p>Found in base branch: <b>master</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> In Node.js mixme, prior to v0.5.1, an attacker can add or alter properties of an object via '__proto__' through the mutate() and merge() functions. The polluted attribute will be directly assigned to every object in the program. This will put the availability of the program at risk causing a potential denial of service (DoS). <p>Publish Date: 2021-05-03 <p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2021-28860>CVE-2021-28860</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.1</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: High - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. 
</p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://github.com/advisories/GHSA-r5cq-9537-9rpf">https://github.com/advisories/GHSA-r5cq-9537-9rpf</a></p> <p>Release Date: 2021-05-03</p> <p>Fix Resolution (mixme): 0.5.1</p> <p>Direct dependency fix Resolution (csv): 5.4.0</p> </p> </details> <p></p> *** Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
True
CVE-2021-28860 (High) detected in mixme-0.3.5.tgz - autoclosed - ## CVE-2021-28860 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>mixme-0.3.5.tgz</b></p></summary> <p>A library for recursive merging of Javascript objects</p> <p>Library home page: <a href="https://registry.npmjs.org/mixme/-/mixme-0.3.5.tgz">https://registry.npmjs.org/mixme/-/mixme-0.3.5.tgz</a></p> <p>Path to dependency file: /package.json</p> <p>Path to vulnerable library: /node_modules/mixme/package.json</p> <p> Dependency Hierarchy: - csv-5.3.2.tgz (Root Library) - stream-transform-2.0.1.tgz - :x: **mixme-0.3.5.tgz** (Vulnerable Library) <p>Found in base branch: <b>master</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> In Node.js mixme, prior to v0.5.1, an attacker can add or alter properties of an object via '__proto__' through the mutate() and merge() functions. The polluted attribute will be directly assigned to every object in the program. This will put the availability of the program at risk causing a potential denial of service (DoS). <p>Publish Date: 2021-05-03 <p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2021-28860>CVE-2021-28860</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.1</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: High - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. 
</p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://github.com/advisories/GHSA-r5cq-9537-9rpf">https://github.com/advisories/GHSA-r5cq-9537-9rpf</a></p> <p>Release Date: 2021-05-03</p> <p>Fix Resolution (mixme): 0.5.1</p> <p>Direct dependency fix Resolution (csv): 5.4.0</p> </p> </details> <p></p> *** Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
non_process
cve high detected in mixme tgz autoclosed cve high severity vulnerability vulnerable library mixme tgz a library for recursive merging of javascript objects library home page a href path to dependency file package json path to vulnerable library node modules mixme package json dependency hierarchy csv tgz root library stream transform tgz x mixme tgz vulnerable library found in base branch master vulnerability details in node js mixme prior to an attacker can add or alter properties of an object via proto through the mutate and merge functions the polluted attribute will be directly assigned to every object in the program this will put the availability of the program at risk causing a potential denial of service dos publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution mixme direct dependency fix resolution csv step up your open source security game with mend
0
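The mixme record above describes prototype pollution: prior to 0.5.1, merging attacker-controlled input lets `'__proto__'` taint every object in the program. The vulnerable pattern, and the shape of the fix, can be sketched with a naive recursive merge (an illustrative stand-in, not mixme's actual source):

```javascript
// Naive recursive merge that descends into every enumerable key,
// including '__proto__' -- the vulnerable pattern, simplified.
function naiveMerge(target, source) {
  for (const key of Object.keys(source)) {
    const value = source[key];
    if (value && typeof value === 'object') {
      if (!target[key]) target[key] = {};
      // For key '__proto__', target[key] is Object.prototype,
      // so this recursion writes onto Object.prototype itself.
      naiveMerge(target[key], value);
    } else {
      target[key] = value;
    }
  }
  return target;
}

// Attacker-controlled JSON: JSON.parse keeps '__proto__' as an own key.
const payload = JSON.parse('{"__proto__": {"polluted": true}}');
naiveMerge({}, payload);

// Every object in the process now inherits the attacker's property.
console.log({}.polluted); // true

// The class of fix: refuse to merge prototype-related keys
// (a sketch of the idea, not mixme's actual 0.5.1 patch).
function safeMerge(target, source) {
  for (const key of Object.keys(source)) {
    if (key === '__proto__' || key === 'constructor' || key === 'prototype') continue;
    const value = source[key];
    if (value && typeof value === 'object') {
      if (!target[key]) target[key] = {};
      safeMerge(target[key], value);
    } else {
      target[key] = value;
    }
  }
  return target;
}
```

Because the polluted attribute lands on `Object.prototype`, it is assigned to every object in the program, which is why the advisory rates integrity and availability impact High.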
16,069
9,682,628,212
IssuesEvent
2019-05-23 09:35:27
bkimminich/juice-shop
https://api.github.com/repos/bkimminich/juice-shop
closed
WS-2019-0066 (Medium) detected in ecstatic-3.3.0.tgz
security vulnerability
## WS-2019-0066 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>ecstatic-3.3.0.tgz</b></p></summary> <p>A simple static file server middleware</p> <p>Library home page: <a href="https://registry.npmjs.org/ecstatic/-/ecstatic-3.3.0.tgz">https://registry.npmjs.org/ecstatic/-/ecstatic-3.3.0.tgz</a></p> <p>Path to dependency file: /juice-shop/package.json</p> <p>Path to vulnerable library: /tmp/git/juice-shop/node_modules/ecstatic/package.json</p> <p> Dependency Hierarchy: - http-server-0.11.1.tgz (Root Library) - :x: **ecstatic-3.3.0.tgz** (Vulnerable Library) </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary> <p> Versions of ecstatic prior to 4.1.2 fails to validate redirects, allowing attackers to craft requests that result in an HTTP 301 redirect to any other domains. <p>Publish Date: 2019-05-02 <p>URL: <a href=https://github.com/jfhbrook/node-ecstatic/commit/be6fc25a826f190b67f4d16158f9d67899e38ee4>WS-2019-0066</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 2 Score Details (<b>5.0</b>)</summary> <p> Base Score Metrics not available</p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://www.npmjs.com/advisories/830/versions">https://www.npmjs.com/advisories/830/versions</a></p> <p>Release Date: 2019-05-02</p> <p>Fix Resolution: 4.1.2</p> </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
True
WS-2019-0066 (Medium) detected in ecstatic-3.3.0.tgz - ## WS-2019-0066 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>ecstatic-3.3.0.tgz</b></p></summary> <p>A simple static file server middleware</p> <p>Library home page: <a href="https://registry.npmjs.org/ecstatic/-/ecstatic-3.3.0.tgz">https://registry.npmjs.org/ecstatic/-/ecstatic-3.3.0.tgz</a></p> <p>Path to dependency file: /juice-shop/package.json</p> <p>Path to vulnerable library: /tmp/git/juice-shop/node_modules/ecstatic/package.json</p> <p> Dependency Hierarchy: - http-server-0.11.1.tgz (Root Library) - :x: **ecstatic-3.3.0.tgz** (Vulnerable Library) </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary> <p> Versions of ecstatic prior to 4.1.2 fails to validate redirects, allowing attackers to craft requests that result in an HTTP 301 redirect to any other domains. 
<p>Publish Date: 2019-05-02 <p>URL: <a href=https://github.com/jfhbrook/node-ecstatic/commit/be6fc25a826f190b67f4d16158f9d67899e38ee4>WS-2019-0066</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 2 Score Details (<b>5.0</b>)</summary> <p> Base Score Metrics not available</p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://www.npmjs.com/advisories/830/versions">https://www.npmjs.com/advisories/830/versions</a></p> <p>Release Date: 2019-05-02</p> <p>Fix Resolution: 4.1.2</p> </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
non_process
ws medium detected in ecstatic tgz ws medium severity vulnerability vulnerable library ecstatic tgz a simple static file server middleware library home page a href path to dependency file juice shop package json path to vulnerable library tmp git juice shop node modules ecstatic package json dependency hierarchy http server tgz root library x ecstatic tgz vulnerable library vulnerability details versions of ecstatic prior to fails to validate redirects allowing attackers to craft requests that result in an http redirect to any other domains publish date url a href cvss score details base score metrics not available suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with whitesource
0
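The ecstatic advisory above is an open-redirect bug: unvalidated targets let a crafted request produce an HTTP 301 to an arbitrary domain. A minimal validation of the kind such a fix requires might look like this (an illustrative sketch, not ecstatic's actual 4.1.2 patch):

```javascript
// Accept only same-origin, path-style redirect targets.
function isSafeRedirect(target) {
  // Reject scheme-qualified targets ('https://evil.example', 'javascript:...').
  if (/^[a-z][a-z0-9+.-]*:/i.test(target)) return false;
  // Reject protocol-relative targets ('//evil.example') and the
  // backslash variant some browsers normalize to a slash.
  if (target.startsWith('//') || target.startsWith('/\\')) return false;
  // Require an absolute path on the current origin.
  return target.startsWith('/');
}

console.log(isSafeRedirect('/docs/'));                // true
console.log(isSafeRedirect('//evil.example/'));       // false
console.log(isSafeRedirect('https://evil.example/')); // false
```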
613,971
19,102,676,753
IssuesEvent
2021-11-30 01:16:42
MT-CTF/capturetheflag
https://api.github.com/repos/MT-CTF/capturetheflag
closed
Visually indicate pro players
Feature :star: Low Priority :sleeping: :gear: Audiovisuals
Currently, all players look the same. It is not clear which players one can count on and which are not, which players are dangerous and which are not. I propose to indicate players with access to pro-section somehow. It should be something simple and visible from all directions. Probably colorful hat.
1.0
Visually indicate pro players - Currently, all players look the same. It is not clear which players one can count on and which are not, which players are dangerous and which are not. I propose to indicate players with access to pro-section somehow. It should be something simple and visible from all directions. Probably colorful hat.
non_process
visually indicate pro players currently all players look the same it is not clear which players one can count on and which are not which players are dangerous and which are not i propose to indicate players with access to pro section somehow it should be something simple and visible from all directions probably colorful hat
0
316,514
9,648,635,088
IssuesEvent
2019-05-17 16:48:19
zeit/ncc
https://api.github.com/repos/zeit/ncc
closed
Supporting require.resolve dynamic passing
committed dynamic require priority
Within the use cases around custom loaders (think Babel plugins, webpack loaders), there are a number of edge cases of dynamic require that come up. While Webpack can get quite far in computing dynamic requires like `require(require.resolve('./asdf.js'))`, there is a tricky case where there is a separation between the resolution and the require: ```js // example use case something like "plugin: require.resolve('./asdf')" being passed as an argument const req = require.resolve('./asdf.js'); require(eval('"' + req + '"')); // or any other untraceable logic before passing to require ``` In this case, what Webpack does is replace the require.resolve part with the ID of the module in the webpack bundle, so we get something like: ```js const req = 234; __webpack_require__(123)(eval('"' + req + '"'))() ``` Where **123** is effectively an inlined "throwing" module which will give the error "Module 234 not found". The above then fails as a module not found, all of the time. The naive fix I was thinking to implement would be to instrument the "throwing module" (123 in the example) to first check the __webpack_require__ cache for the numeric ID.
1.0
Supporting require.resolve dynamic passing - Within the use cases around custom loaders (think Babel plugins, webpack loaders), there are a number of edge cases of dynamic require that come up. While Webpack can get quite far in computing dynamic requires like `require(require.resolve('./asdf.js'))`, there is a tricky case where there is a separation between the resolution and the require: ```js // example use case something like "plugin: require.resolve('./asdf')" being passed as an argument const req = require.resolve('./asdf.js'); require(eval('"' + req + '"')); // or any other untracable logic before passing to require ``` In this case, what Webpack does is replace the require.resolve part with the ID of the module in the webpack bundle, so we get something like: ```js const req = 234; __webpack_require__(123)(eval('"' + req + '"'))() ``` Where **123** is effectively an inlined "throwing" module which will give the error "Module 234 not found". The above then fails as a module not found, all of the time. The naive fix I was thinking to implement would be to instrument the "throwing module" (123 in the example) to first check the __webpack_require__ cache for the numeric ID.
non_process
supporting require resolve dynamic passing within the use cases around custom loaders think babel plugins webpack loaders there are a number of edge cases of dynamic require that come up while webpack can get quite far in computing dynamic requires like require require resolve asdf js there is a tricky case where there is a separation between the resolution and the require js example use case something like plugin require resolve asdf being passed as an argument const req require resolve asdf js require eval req or any other untracable logic before passing to require in this case what webpack does is replace the require resolve part with the id of the module in the webpack bundle so we get something like js const req webpack require eval req where is effectively an inlined throwing module which will give the error module not found the above then fails as a module not found all of the time the naive fix i was thinking to implement would to instrument the throwing module in the example to first check the webpack require cache for the numeric id
0
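The ncc/webpack record above can be modelled with a toy runtime: module 234 exists in the bundle, module 123 is the "throwing" stub that untraceable dynamic requires fall through to, and the proposed naive fix makes the stub consult the module table and cache before throwing. The ids come from the issue; the runtime itself is a simplified sketch, not webpack's actual one.

```javascript
// Toy module registry standing in for a webpack bundle.
const cache = {};
const modules = {
  234: () => 'asdf module exports', // the module require.resolve pointed at
};

function __webpack_require__(id) {
  if (cache[id]) return cache[id].exports;
  if (!modules[id]) throw new Error(`Module ${id} not found`);
  const mod = (cache[id] = { exports: modules[id]() });
  return mod.exports;
}

// The naive fix from the issue: before throwing "Module X not found",
// check whether the untraceable expression evaluated to a known id.
function throwingStub(request) {
  const id = Number(request);
  if (cache[id] || modules[id]) return __webpack_require__(id);
  throw new Error(`Module ${request} not found`);
}

// require.resolve('./asdf.js') was rewritten to the numeric id 234...
const req = 234;
// ...and the untraceable require now succeeds via the registry check.
console.log(throwingStub(eval('"' + req + '"'))); // prints: asdf module exports
```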
723,174
24,887,502,407
IssuesEvent
2022-10-28 09:02:44
status-im/status-desktop
https://api.github.com/repos/status-im/status-desktop
closed
Profiles opened via deeplink stack on top of each other
bug ui Profile priority 4: minor E:Bugfixes
# Bug Report ## Description Found during testing of https://github.com/status-im/status-desktop/pull/6450 If a profile is already open and a deep link is used for another profile then it is stacked on top. Each profile must then be closed one after the other to return to the main screen. ## Steps to reproduce 1. Open a profile 2. Navigate to another via deep link ( status-im://u/[ENS or chatkey] ) 3. Multiple profiles now open on top of each other https://user-images.githubusercontent.com/50769329/181018688-223c724b-6656-439b-83fa-d9f818bd5cec.mov #### Expected behavior [Assumed] Currently open profile should close and only the last one should remain open #### Actual behavior Profiles can stack on top of each other ### Additional Information - Status desktop version: master - Operating System: All
1.0
Profiles opened via deeplink stack on top of each other - # Bug Report ## Description Found during testing of https://github.com/status-im/status-desktop/pull/6450 If a profile is already open and a deep link is used for another profile then it is stacked on top. Each profile must then be closed one after the other to return to the main screen. ## Steps to reproduce 1. Open a profile 2. Navigate to another via deep link ( status-im://u/[ENS or chatkey] ) 3. Multiple profiles now open on top of each other https://user-images.githubusercontent.com/50769329/181018688-223c724b-6656-439b-83fa-d9f818bd5cec.mov #### Expected behavior [Assumed] Currently open profile should close and only the last one should remain open #### Actual behavior Profiles can stack on top of each other ### Additional Information - Status desktop version: master - Operating System: All
non_process
profiles opened via deeplink stack on top of each other bug report description found during testing of if a profile is already open and a deep link is used for another profile then it is stacked on top each profile must then be closed one after the other to return to the main screen steps to reproduce open a profile navigate to another via deep link status im u multiple profiles now open on top of each other expected behavior currently open profile should close and only the last one should remain open actual behavior profiles can stack on top of each other additional information status desktop version master operating system all
0
69,484
7,135,526,627
IssuesEvent
2018-01-23 01:33:51
strongbox/strongbox
https://api.github.com/repos/strongbox/strongbox
opened
Test the packed indexes using Artifactory
good first issue help wanted testing
* Start the `strongbox-distribution` * Deploy artifacts to one of the Maven 2 repositories * Run the cron task for rebuilding the Maven Indexes. This should produce a packed Maven Index once it's finished. * Create a new proxy repository in Artifactory pointing to the one in Strongbox that you've deployed the artifacts to. * Via the web interface, try to browse the remote index. If you see the artifact in it, the test has been successful. While you're at it, please also report your findings in regards to browsing the remote repository via the Artifactory web UI. This issue relates to #523.
1.0
Test the packed indexes using Artifactory - * Start the `strongbox-distribution` * Deploy artifacts to one of the Maven 2 repositories * Run the cron task for rebuilding the Maven Indexes. This should produce a packed Maven Index once it's finished. * Create a new proxy repository in Artifactory pointing to the one in Strongbox that you've deployed the artifacts to. * Via the web interface, try to browse the remote index. If you see the artifact in it, the test has been successful. While you're at it, please also report your findings in regards to browsing the remote repository via the Artifactory web UI. This issue relates to #523.
non_process
test the packed indexes using artifactory start the strongbox distribution deploy artifacts to one of the maven repositories run the cron task for rebuilding the maven indexes this should produce a packed maven index once it s finished create a new proxy repository in artifactory pointing to the one in strongbox the you ve deployed the artifacts to via the web interface try to browse the remote index if you see the artifact in it the test has been successful while you re at it please also report your findings in regards to browsing the remote repository via the artifactory web ui this issue relates to
0
46,910
7,294,842,488
IssuesEvent
2018-02-26 02:53:19
IntelVCL/Open3D
https://api.github.com/repos/IntelVCL/Open3D
closed
Optimize a pose graph, GlobalOptimizationOption
Documentation
http://www.open3d.org/docs/tutorial/Advanced/multiway_registration.html#input tells me that “Class GlobalOptimizationOption defines the loss function of the pose graph.”, but I cannot find how to define the loss function, and I also found no default loss function.
1.0
Optimize a pose graph, GlobalOptimizationOption - http://www.open3d.org/docs/tutorial/Advanced/multiway_registration.html#input tells me that “Class GlobalOptimizationOption defines the loss function of the pose graph.”, but I cannot find how to define the loss function, and I also found no default loss function.
non_process
optimize a pose graph globaloptimizationoption tell me that “class globaloptimizationoption defines the loss function of the pose graph ” but i am not find how to defines the loss function and i also found no default loss function
0
21,286
28,481,658,949
IssuesEvent
2023-04-18 03:32:04
hashgraph/hedera-mirror-node
https://api.github.com/repos/hashgraph/hedera-mirror-node
opened
Improve REST API throughput in Kubernetes
enhancement process rest
### Problem The performance of the REST API in Kubernetes is still not as good as the VM it replaces. ### Solution * Increase Traefik memory * Increase REST deployment limit to a little more than 1 CPU * Revert autoscaling back to CPU based but with higher requests * Reduce minimum replicas to 2 since resources have been raised ### Alternatives _No response_
1.0
Improve REST API throughput in Kubernetes - ### Problem The performance of the REST API in Kubernetes is still not as good as the VM it replaces. ### Solution * Increase Traefik memory * Increase REST deployment limit to a little more than 1 CPU * Revert autoscaling back to CPU based but with higher requests * Reduce minimum replicas to 2 since resources have been raised ### Alternatives _No response_
process
improve rest api throughput in kubernetes problem the performance of the rest api in kubernetes is still not as good as the vm it replaces solution increase traefik memory increase rest deployment limit to a little more than cpu revert autoscaling back to cpu based but with higher requests reduce minimum replicas to since resources have been raised alternatives no response
1
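Across records, the final lowercase field appears to be derived from the `title - body` string by lowercasing, stripping URLs, markdown punctuation, and digits, and collapsing whitespace. A minimal sketch of that preprocessing, assuming a simple regex pipeline (the dataset's actual cleaning code is not shown here):

```python
import re

def normalize(text: str) -> str:
    """Approximate the dataset's normalized-text column:
    lowercase, drop URLs, drop punctuation and digits, collapse whitespace."""
    text = text.lower()
    text = re.sub(r"https?://\S+", " ", text)   # strip URLs
    text = re.sub(r"[^a-z\s]", " ", text)       # strip punctuation and digits
    return " ".join(text.split())               # collapse runs of whitespace

combined = ("Improve REST API throughput in Kubernetes - ### Problem "
            "The performance of the REST API in Kubernetes is still not as good")
print(normalize(combined))
```

Running it on the combined text of this record reproduces the prefix of the normalized field above.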
9,444
12,426,673,680
IssuesEvent
2020-05-24 22:29:32
burnpiro/wod-bike-dataset-generator
https://api.github.com/repos/burnpiro/wod-bike-dataset-generator
opened
Data output json format
data processing
Because of the file size, we cannot perform filter/map operations in real-time. We have to change the format of the .json file to be something like: ``` { <day>: { <hour>: [ { "o": 1, "d": 2, "c": 15, } ] } } ``` Instead of the current format: ``` [ { "s": <datetime>, "o": 1, "d": 2, "c": 15, } ] ``` That's just because the July file has 483k records and executing .filter().map() on it is pointless and takes around a minute to parse all 483k string dates to dates and then compare with the selected period.
1.0
Data output json format - Because of the file size, we cannot perform filter/map operations in real-time. We have to change the format of the .json file to be something like: ``` { <day>: { <hour>: [ { "o": 1, "d": 2, "c": 15, } ] } } ``` Instead of the current format: ``` [ { "s": <datetime>, "o": 1, "d": 2, "c": 15, } ] ``` That's just because the July file has 483k records and executing .filter().map() on it is pointless and takes around a minute to parse all 483k string dates to dates and then compare with the selected period.
process
data output json format because of the file size we cannot perform filter map operations in real time we have to change the format of the json file to be sth like o d c instead of the current format s o d c that s just because july file has records and executing filter map on it is pointless and takes around a minute to parse all string dates to date and then compare with selected period
1
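The restructuring this record proposes (a flat list keyed by a start timestamp `"s"` turned into a `day -> hour -> trips` index, so the client no longer parses 483k date strings per filter) can be sketched as follows; the ISO day key and integer hour key are assumptions, since the record elides the exact key format:

```python
from collections import defaultdict
from datetime import datetime

def regroup(trips):
    """Index flat trip records by day and hour so that lookups for a
    selected period need no per-record date parsing."""
    index = defaultdict(lambda: defaultdict(list))
    for trip in trips:
        start = datetime.fromisoformat(trip["s"])
        day = start.date().isoformat()          # e.g. "2019-07-01"
        index[day][start.hour].append(
            {"o": trip["o"], "d": trip["d"], "c": trip["c"]})
    return {day: dict(hours) for day, hours in index.items()}

flat = [{"s": "2019-07-01T14:30:00", "o": 1, "d": 2, "c": 15}]
print(regroup(flat))
```

Lookups for a selected period then become plain dictionary indexing instead of a full `.filter().map()` pass over every record.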
7,771
10,904,647,186
IssuesEvent
2019-11-20 09:11:19
eclipse-theia/theia
https://api.github.com/repos/eclipse-theia/theia
opened
use `close` not `exit` event process
bug process terminal
`exit` event does not mean that all output was delivered, `close` means it: https://nodejs.org/api/child_process.html#child_process_event_close > The 'close' event is emitted when the stdio streams of a child process have been closed. This is distinct from the 'exit' event, since multiple processes might share the same stdio streams. vscode-jsonrpc uses `close` so it does not cause issues for filesystem watching, but our raw processes, terminals and so on can miss output sometimes
1.0
use `close` not `exit` event process - `exit` event does not mean that all output was delivered, `close` means it: https://nodejs.org/api/child_process.html#child_process_event_close > The 'close' event is emitted when the stdio streams of a child process have been closed. This is distinct from the 'exit' event, since multiple processes might share the same stdio streams. vscode-jsonrpc uses `close` so it does not cause issues for filesystem watching, but our raw processes, terminals and so on can miss output sometimes
process
use close not exit event process exit event does not mean that all output was delivered close means it he close event is emitted when the stdio streams of a child process have been closed this is distinct from the exit event since multiple processes might share the same stdio streams vscode jsonrpc uses close so it does not cause issues for filesystem watching but our raw processes terminals and so on can miss output sometimes
1
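The Node.js distinction quoted in this record ('exit' can fire before the stdio streams are drained; 'close' fires once they are) has a close analogue in Python's standard library, used here only to illustrate the same pitfall: `Popen.wait()` merely reaps the child, while `communicate()` also drains its pipes before returning, so no output can be missed.

```python
import subprocess
import sys

# Spawn a child process that writes a burst of output and exits.
proc = subprocess.Popen(
    [sys.executable, "-c", "print('x' * 100)"],
    stdout=subprocess.PIPE, text=True)

# communicate() both drains stdout/stderr and waits for termination,
# the analogue of waiting for Node's 'close' rather than 'exit'.
out, _ = proc.communicate()
print(len(out.strip()))
```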
43,930
5,717,997,017
IssuesEvent
2017-04-19 18:30:38
dotnet/roslyn
https://api.github.com/repos/dotnet/roslyn
closed
Type inference bug?
Area-Compilers Resolution-By Design
**Version Used**: Visual Studio 2017 **Steps to Reproduce**: ```c# public static class NullExt { public static T? Some<T>(this T? value, Action<T> func) where T : struct { if (value != null) func(value.Value); return value; } public static T Some<T>(this T value, Action<T> func) where T : class { if (value != null) func(value); return value; } } class Program { public static void Main() { string s = "s"; s.Some(_ => Console.Write("some")); int? i = 1; i.Some(_ => Console.Write("some")); } } ``` **Expected Behavior**: This code snippet compiles. **Actual Behavior**: This code snippet doesn't compile. error CS0121: The call is ambiguous between the following methods or properties: “NullExt.Some<T>(T?, Action<T>)” and “NullExt.Some<T>(T, Action<T>)” Change ```c# i.Some(_ => Console.Write("some")); ``` to ```c# i.Some<int>(_ => Console.Write("some")); ``` it compiles, so I think it's a type inference bug. What do you think of this?
1.0
Type inference bug? - **Version Used**: Visual Studio 2017 **Steps to Reproduce**: ```c# public static class NullExt { public static T? Some<T>(this T? value, Action<T> func) where T : struct { if (value != null) func(value.Value); return value; } public static T Some<T>(this T value, Action<T> func) where T : class { if (value != null) func(value); return value; } } class Program { public static void Main() { string s = "s"; s.Some(_ => Console.Write("some")); int? i = 1; i.Some(_ => Console.Write("some")); } } ``` **Expected Behavior**: This code snippet compiles. **Actual Behavior**: This code snippet doesn't compile. error CS0121: The call is ambiguous between the following methods or properties: “NullExt.Some<T>(T?, Action<T>)” and “NullExt.Some<T>(T, Action<T>)” Change ```c# i.Some(_ => Console.Write("some")); ``` to ```c# i.Some<int>(_ => Console.Write("some")); ``` it compiles, so I think it's a type inference bug. What do you think of this?
non_process
type inference bug version used visual studio steps to reproduce c public static class nullext public static t some this t value action func where t struct if value null func value value return value public static t some this t value action func where t class if value null func value return value class program public static void main string s s s some console write some int i i some console write some expected behavior this code snippet compiles actual behavior this code snippet doesn t compiles error the call is ambiguous between the following methods or properties “nullext some t action ” and “nullext some t action ” change c i some console write some to c i some console write some it compiles so i think it s a type inference bug what do you think of this
0
21,964
30,461,652,819
IssuesEvent
2023-07-17 07:25:22
metabase/metabase
https://api.github.com/repos/metabase/metabase
closed
[MLv2] Segments
.Backend .Epic .metabase-lib .Team/QueryProcessor :hammer_and_wrench:
For custom expression editor migration we need to port Segments. It is currently used like `query.table().segments.find.....` Example: `frontend/src/metabase-lib/expressions/index.js`
1.0
[MLv2] Segments - For custom expression editor migration we need to port Segments. It is currently used like `query.table().segments.find.....` Example: `frontend/src/metabase-lib/expressions/index.js`
process
segments for custom expression editor migration we need to port segments it is currently used like query table segments find example frontend src metabase lib expressions index js
1
9,814
12,824,724,713
IssuesEvent
2020-07-06 13:55:27
prisma/prisma
https://api.github.com/repos/prisma/prisma
opened
Clarify usage of datasource url when using SQLite - `file:` is what works and `sqlite:` `sqlite://` should error
bug/2-confirmed kind/bug process/candidate
## Problem For SQLite in schema.prisma this works everywhere (Prisma Client JS, Prisma Client Go, Migrate) and is documented: ``` file:./dev.db file:../dev.db ``` This is not valid (error from engine) ``` sqlite:dev.db ``` This is considered valid by the engine's parser but it doesn't work either in clients or Migrate ``` sqlite://dev.db ``` Example error from Prisma Client JS ``` Error in connector: Error querying the database: unable to open database: //dev.db at PrismaClientFetcher._request ({..}/runtime/getPrismaClient.ts:999:15) ``` ## Suggested solution - [ ] Update error message in engines "The URL for datasource `db` must start with the protocol `sqlite://`" should replace `sqlite://` by `file:` - [ ] Make engine's parsing of `sqlite://` an error Related: - [ ] [#497 Can not create SQLite database when using protocol `sqlite://`](https://github.com/prisma/migrate/issues/497) - [ ] https://github.com/prisma/specs/issues/385 ## Additional context [Internal conversation](https://prisma-company.slack.com/archives/CEYCG2MCN/p1594026418374700)
1.0
Clarify usage of datasource url when using SQLite - `file:` is what works and `sqlite:` `sqlite://` should error - ## Problem For SQLite in schema.prisma this works everywhere (Prisma Client JS, Prisma Client Go, Migrate) and is documented: ``` file:./dev.db file:../dev.db ``` This is not valid (error from engine) ``` sqlite:dev.db ``` This is considered valid by the engine's parser but it doesn't work either in clients or Migrate ``` sqlite://dev.db ``` Example error from Prisma Client JS ``` Error in connector: Error querying the database: unable to open database: //dev.db at PrismaClientFetcher._request ({..}/runtime/getPrismaClient.ts:999:15) ``` ## Suggested solution - [ ] Update error message in engines "The URL for datasource `db` must start with the protocol `sqlite://`" should replace `sqlite://` by `file:` - [ ] Make engine's parsing of `sqlite://` an error Related: - [ ] [#497 Can not create SQLite database when using protocol `sqlite://`](https://github.com/prisma/migrate/issues/497) - [ ] https://github.com/prisma/specs/issues/385 ## Additional context [Internal conversation](https://prisma-company.slack.com/archives/CEYCG2MCN/p1594026418374700)
process
clarify usage of datasource url when using sqlite file is what works and sqlite sqlite should error problem for sqlite in schema prisma this works everywhere prisma client js prisma client go migrate and is documented file dev db file dev db this is not valid error from engine sqlite dev db this is considered valid by the engine s parser but it doesn t work either in clients or migrate sqlite dev db example error from prisma client js error in connector error querying the database unable to open database dev db at prismaclientfetcher request runtime getprismaclient ts suggested solution update error message in engines the url for datasource db must start with the protocol sqlite should replace sqlite by file make engine s parsing of sqlite an error related additional context
1
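The behavior this record asks for, accept `file:` URLs for SQLite datasources and turn the `sqlite:`/`sqlite://` forms into a hard error, can be sketched as a small validator. The function name and message wording are illustrative, not Prisma's actual engine code:

```python
def validate_sqlite_url(url: str) -> str:
    """Accept only `file:` datasource URLs for SQLite; reject the
    `sqlite:`/`sqlite://` forms that the engine parsed but never honored."""
    if url.startswith("sqlite:"):
        raise ValueError(
            "The URL for a SQLite datasource must start with the "
            "protocol `file:`, not `sqlite://`")
    if not url.startswith("file:"):
        raise ValueError(
            "The URL for a SQLite datasource must start with the "
            "protocol `file:`")
    return url

print(validate_sqlite_url("file:./dev.db"))
```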
13,897
16,657,484,295
IssuesEvent
2021-06-05 19:52:26
zotero/zotero
https://api.github.com/repos/zotero/zotero
opened
Option to include annotation colors via Add Note
Enhancement Word Processor Integration
Once we do https://github.com/zotero/zotero/issues/2080, we could add an option (toggleable from the citation dialog, off by default) to include annotation colors when inserting notes into word processor documents.
1.0
Option to include annotation colors via Add Note - Once we do https://github.com/zotero/zotero/issues/2080, we could add an option (toggleable from the citation dialog, off by default) to include annotation colors when inserting notes into word processor documents.
process
option to include annotation colors via add note once we do we could add an option toggleable from the citation dialog off by default to include annotation colors when inserting notes into word processor documents
1
3,988
6,917,667,840
IssuesEvent
2017-11-29 09:25:01
w3c/html
https://api.github.com/repos/w3c/html
opened
CFC: Merge Web Workers into HTML
process
This is a Call For Consensus (CFC) to merge the [Web Workers](https://w3c.github.io/workers/) specification into the [HTML](https://w3c.github.io/html) specification. The reason for merging the two specifications is that it would make it easier to maintain Web Workers, and therefore more likely that the Web Workers parts of the HTML specification will be kept up to date (and issues addressed more responsively). The proposal was raised as issue #1075, and Sangwhan Moon (the current Web Workers editor) has [agreed to do the work](https://github.com/w3c/html/issues/1075#issuecomment-347460911). Please respond to this CFC by the end of day on Thursday 7th November. To support the proposal, add a "thumbs up" to this comment. If you don't support the proposal, add a "thumbs down" and post your reasons in a comment. If you choose not to respond it will be taken as silent support for the proposal. Actual responses are preferred however.
1.0
CFC: Merge Web Workers into HTML - This is a Call For Consensus (CFC) to merge the [Web Workers](https://w3c.github.io/workers/) specification into the [HTML](https://w3c.github.io/html) specification. The reason for merging the two specifications is that it would make it easier to maintain Web Workers, and therefore more likely that the Web Workers parts of the HTML specification will be kept up to date (and issues addressed more responsively). The proposal was raised as issue #1075, and Sangwhan Moon (the current Web Workers editor) has [agreed to do the work](https://github.com/w3c/html/issues/1075#issuecomment-347460911). Please respond to this CFC by the end of day on Thursday 7th November. To support the proposal, add a "thumbs up" to this comment. If you don't support the proposal, add a "thumbs down" and post your reasons in a comment. If you choose not to respond it will be taken as silent support for the proposal. Actual responses are preferred however.
process
cfc merge web workers into html this is a call for consensus cfc to merge the specification into the specification the reason for merging the two specifications is that it would make it easier to maintain web workers and therefore more likely that the web workers parts of the html specification will be kept up to date and issues addressed more responsively the proposal was raised as issue and sangwhan moon the current web workers editor has please respond to this cfc by the end of day on thursday november to support the proposal add a thumbs up to this comment if you don t support the proposal add a thumbs down and post your reasons in a comment if you choose not to respond it will be taken as silent support for the proposal actual responses are preferred however
1
804,383
29,485,449,813
IssuesEvent
2023-06-02 09:22:51
ballerina-platform/ballerina-lang
https://api.github.com/repos/ballerina-platform/ballerina-lang
opened
[Bug]: Invalid cyclic ref error for module level function
Type/Bug Priority/High Team/CompilerFE
### Description When there is a function call for another module and if there is a function definition in same name, the compiler detects a cyclic ref error. ### Steps to Reproduce ```ballerina import ballerina/edi; type EdiDeserialize function (); public function fromEdi835String() { json|error res = edi:fromEdiString("ediText", {delimiters: {segment: "", 'field: "", component: ""}, name: ""}); } final readonly & map<EdiDeserialize> ediDeserializers = { "835": fromEdi835String }; public isolated function fromEdiString() { // function same to `fromEdiString` EdiDeserialize? ediDeserialize = ediDeserializers["ediName"]; if ediDeserialize is () { return (); } _ = ediDeserialize(); } ``` ### Affected Version(s) U5 ### OS, DB, other environment details and versions _No response_ ### Related area -> Compilation ### Related issue(s) (optional) _No response_ ### Suggested label(s) (optional) _No response_ ### Suggested assignee(s) (optional) _No response_
1.0
[Bug]: Invalid cyclic ref error for module level function - ### Description When there is a function call for another module and if there is a function definition in same name, the compiler detects a cyclic ref error. ### Steps to Reproduce ```ballerina import ballerina/edi; type EdiDeserialize function (); public function fromEdi835String() { json|error res = edi:fromEdiString("ediText", {delimiters: {segment: "", 'field: "", component: ""}, name: ""}); } final readonly & map<EdiDeserialize> ediDeserializers = { "835": fromEdi835String }; public isolated function fromEdiString() { // function same to `fromEdiString` EdiDeserialize? ediDeserialize = ediDeserializers["ediName"]; if ediDeserialize is () { return (); } _ = ediDeserialize(); } ``` ### Affected Version(s) U5 ### OS, DB, other environment details and versions _No response_ ### Related area -> Compilation ### Related issue(s) (optional) _No response_ ### Suggested label(s) (optional) _No response_ ### Suggested assignee(s) (optional) _No response_
non_process
invalid cyclic ref error for module level function description when there is a function call for another module and if there is a function definition in same name the compiler detects a cyclic ref error steps to reproduce ballerina import ballerina edi type edideserialize function public function json error res edi fromedistring editext delimiters segment field component name final readonly map edideserializers public isolated function fromedistring function same to fromedistring edideserialize edideserialize edideserializers if edideserialize is return edideserialize affected version s os db other environment details and versions no response related area compilation related issue s optional no response suggested label s optional no response suggested assignee s optional no response
0
76,631
21,523,647,447
IssuesEvent
2022-04-28 16:14:35
elastic/elasticsearch
https://api.github.com/repos/elastic/elasticsearch
closed
Improve pull request feedback times on docs only changes
>enhancement >docs :Delivery/Build Team:Docs Team:Delivery v8.3.0
Currently the examples in the Elasticsearch documentation are tested using the functionality described here: https://github.com/elastic/elasticsearch/tree/master/docs#snippet-testing Tests that take a lot of setup make for very slow gradle checks. For example: > ./gradlew :docs:check BUILD SUCCESSFUL in 18m 8s That is already slow but if we re-enable code testing for machine learning examples, it will grow even slower. For example, for machine learning anomaly detection, you need to add data, create a data feed, create a job, start the data feed, open the job, then wait for the processing of data before you can get results or stats or anything interesting). Ideally, long-running tests can be tested asynchronously and not block other PRs (e.g. once per day). Per @colings86, in the Elasticsearch unit and integration tests there is the concept of "slow" tests that are only run on CI (not on a PR build or local build) and "nightly" test which are run only once a day, though neither of these options are currently available for documentation tests.
1.0
Improve pull request feedback times on docs only changes - Currently the examples in the Elasticsearch documentation are tested using the functionality described here: https://github.com/elastic/elasticsearch/tree/master/docs#snippet-testing Tests that take a lot of setup make for very slow gradle checks. For example: > ./gradlew :docs:check BUILD SUCCESSFUL in 18m 8s That is already slow but if we re-enable code testing for machine learning examples, it will grow even slower. For example, for machine learning anomaly detection, you need to add data, create a data feed, create a job, start the data feed, open the job, then wait for the processing of data before you can get results or stats or anything interesting). Ideally, long-running tests can be tested asynchronously and not block other PRs (e.g. once per day). Per @colings86, in the Elasticsearch unit and integration tests there is the concept of "slow" tests that are only run on CI (not on a PR build or local build) and "nightly" test which are run only once a day, though neither of these options are currently available for documentation tests.
non_process
improve pull request feedback times on docs only changes currently the examples in the elasticsearch documentation are tested using the functionality described here tests that take a lot of setup make for very slow gradle checks for example gradlew docs check build successful in that is already slow but if we re enable code testing for machine learning examples it will grow even slower for example for machine learning anomaly detection you need to add data create a data feed create a job start the data feed open the job then wait for the processing of data before you can get results or stats or anything interesting ideally long running tests can be tested asynchronously and not block other prs e g once per day per in the elasticsearch unit and integration tests there is the concept of slow tests that are only run on ci not on a pr build or local build and nightly test which are run only once a day though neither of these options are currently available for documentation tests
0
18,091
3,667,472,522
IssuesEvent
2016-02-20 01:08:58
kumulsoft/Fixed-Assets
https://api.github.com/repos/kumulsoft/Fixed-Assets
closed
Asset Transaction - Details Page - Add New Asset
bug Fixed HIGH Ready for testing
![image](https://cloud.githubusercontent.com/assets/10192106/13027327/42654626-d296-11e5-91d3-dd049cb5be3f.png) Refer above image and make following corrections 1. Remove/hide Custodian Field 2. Remove/hide Category field 3. Filter out MAINTENANCE, and IMPROVEMENT from this Transaction type DDL list. 4. Move Asset Description field to new location, and rename Description to 'Asset Description' 5. Rename 'Transaction New' to 'New Asset'
1.0
Asset Transaction - Details Page - Add New Asset - ![image](https://cloud.githubusercontent.com/assets/10192106/13027327/42654626-d296-11e5-91d3-dd049cb5be3f.png) Refer above image and make following corrections 1. Remove/hide Custodian Field 2. Remove/hide Category field 3. Filter out MAINTENANCE, and IMPROVEMENT from this Transaction type DDL list. 4. Move Asset Description field to new location, and rename Description to 'Asset Description' 5. Rename 'Transaction New' to 'New Asset'
non_process
asset transaction details page add new asset refer above image and make following corrections remove hide custodian field remove hide category field filter out maintenance and improvement from this transaction type ddl list move asset description field to new location and rename description to asset description rename transaction new to new asset
0
276,721
30,525,835,097
IssuesEvent
2023-07-19 11:11:40
ESC-CoM/esc-server
https://api.github.com/repos/ESC-CoM/esc-server
closed
[Setting] CORS configuration
setting security
## 🙋🏻‍♂️ Environment setup Configure CORS so that real tests can be run against the deployment server. ## 📖 Notes Add anything worth sharing: references, issues expected to come up later, screenshots, etc. - Please leave any additional details as a comment.
True
[Setting] CORS configuration - ## 🙋🏻‍♂️ Environment setup Configure CORS so that real tests can be run against the deployment server. ## 📖 Notes Add anything worth sharing: references, issues expected to come up later, screenshots, etc. - Please leave any additional details as a comment.
non_process
cors configuration 🙋🏻‍♂️ environment setup configure cors so that real tests can be run against the deployment server 📖 notes add anything worth sharing references issues expected to come up later screenshots etc please leave any additional details as a comment
0
212,533
16,458,062,246
IssuesEvent
2021-05-21 15:00:46
microsoft/WindowsTemplateStudio
https://api.github.com/repos/microsoft/WindowsTemplateStudio
closed
Add vstemplate tests
Can Close Out Soon Testing
We should add some basic checks for our vstemplate files as @mrlacey suggested in https://github.com/microsoft/WindowsTemplateStudio/issues/4271#issuecomment-836655705. - [x] All our vstemplate files have a TemplateID that ends with "WTS.local" - [x] All our vstemplate files have a Name that ends with "; local)" - [x] All our ProjectTemplates include the projecttype tag Windows Template Studio - [x] The ProjectTemplates that call the Wizard, must define the following parameters: $wts.platform$ - [x] All localized vstemplate files must have the same definition as the root vstemplate file (with exception of the description)
1.0
Add vstemplate tests - We should add some basic checks for our vstemplate files as @mrlacey suggested in https://github.com/microsoft/WindowsTemplateStudio/issues/4271#issuecomment-836655705. - [x] All our vstemplate files have a TemplateID that ends with "WTS.local" - [x] All our vstemplate files have a Name that ends with "; local)" - [x] All our ProjectTemplates include the projecttype tag Windows Template Studio - [x] The ProjectTemplates that call the Wizard, must define the following parameters: $wts.platform$ - [x] All localized vstemplate files must have the same definition as the root vstemplate file (with exception of the description)
non_process
add vstemplate tests we should add some basic checks for our vstemplate files as mrlacey suggested in all our vstemplate files have a templateid that ends with wts local all our vstemplate files have a name that ends with local all our projecttemplates include the projecttype tag windows template studio the projecttemplates that call the wizard must define the following parameters wts platform all localized vstemplate files must have the same definition as the root vstemplate file with exception of the description
0
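The first two checklist items in this record (every TemplateID ends with "WTS.local", every Name ends with "; local)") are mechanically checkable. A sketch with the standard-library XML parser, run here against a made-up minimal template; real vstemplate files carry an XML namespace that a production check would also have to handle:

```python
import xml.etree.ElementTree as ET

def check_vstemplate(xml_text: str) -> list:
    """Return a list of rule violations for one vstemplate document.
    Simplified: namespaces and the remaining checklist rules are omitted."""
    root = ET.fromstring(xml_text)
    problems = []
    template_id = root.findtext(".//TemplateID", default="")
    name = root.findtext(".//Name", default="")
    if not template_id.endswith("WTS.local"):
        problems.append("TemplateID must end with 'WTS.local'")
    if not name.endswith("; local)"):
        problems.append("Name must end with '; local)'")
    return problems

sample = """<VSTemplate>
  <TemplateData>
    <TemplateID>Microsoft.WTS.local</TemplateID>
    <Name>Blank (WinUI; local)</Name>
  </TemplateData>
</VSTemplate>"""
print(check_vstemplate(sample))  # -> []
```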
229,207
25,304,460,480
IssuesEvent
2022-11-17 13:13:35
MatBenfield/news
https://api.github.com/repos/MatBenfield/news
closed
[SecurityWeek] Web Giants to Submit User Data as EU Law Comes Into Effect
SecurityWeek Stale
**A new EU law imposing stricter online regulation comes into effect Wednesday and the biggest platforms like Facebook and Google will have until February 17 to reveal their user numbers.** [read more](https://www.securityweek.com/web-giants-submit-user-data-eu-law-comes-effect) <https://www.securityweek.com/web-giants-submit-user-data-eu-law-comes-effect>
True
[SecurityWeek] Web Giants to Submit User Data as EU Law Comes Into Effect - **A new EU law imposing stricter online regulation comes into effect Wednesday and the biggest platforms like Facebook and Google will have until February 17 to reveal their user numbers.** [read more](https://www.securityweek.com/web-giants-submit-user-data-eu-law-comes-effect) <https://www.securityweek.com/web-giants-submit-user-data-eu-law-comes-effect>
non_process
web giants to submit user data as eu law comes into effect a new eu law imposing stricter online regulation comes into effect wednesday and the biggest platforms like facebook and google will have until february to reveal their user numbers
0
19,060
25,078,269,638
IssuesEvent
2022-11-07 17:04:25
FreeCAD/FreeCAD
https://api.github.com/repos/FreeCAD/FreeCAD
closed
[Bug] Bug Reporting Template Requires Previous Forum Discussion
Process
### Is there an existing issue for this? - [X] I have searched the existing issues ### Forums discussion https://www.forum.freecadweb.org/viewtopic.php?p=638121#p638121 ### Version 0.21 (Development) ### Full version info ```shell [code] OS: Linux Mint 20.3 (X-Cinnamon/cinnamon) Word size of FreeCAD: 64-bit Version: 0.21.30767 +7 (Git) Build type: debug Branch: robustReferences Hash: f8572af5db0b71007a8813cf10bf9480409ecbc0 Python 3.8.10, Qt 5.12.8, Coin 4.0.0, Vtk , OCC 7.6.3 Locale: English/Canada (en_CA) Installed mods: * QuickMeasure-main * A2plus.backup1662229165.818354 * A2plus 0.4.56a [/code] ``` ### Subproject(s) affected? Other (specify in description) ### Issue description The bug reporting template indicates that all bug reports should be preceded by a forum discussion. CONTRIBUTING.md does not mention this requirement. This is confusing. The bug reporting template should be revised to indicate that a forum discussion is not required, or the requirement for a preliminary forum discussion should be added to CONTRIBUTING.md. ### Anything else? _No response_ ### Code of Conduct - [X] I agree to follow this project's Code of Conduct
1.0
[Bug] Bug Reporting Template Requires Previous Forum Discussion - ### Is there an existing issue for this? - [X] I have searched the existing issues ### Forums discussion https://www.forum.freecadweb.org/viewtopic.php?p=638121#p638121 ### Version 0.21 (Development) ### Full version info ```shell [code] OS: Linux Mint 20.3 (X-Cinnamon/cinnamon) Word size of FreeCAD: 64-bit Version: 0.21.30767 +7 (Git) Build type: debug Branch: robustReferences Hash: f8572af5db0b71007a8813cf10bf9480409ecbc0 Python 3.8.10, Qt 5.12.8, Coin 4.0.0, Vtk , OCC 7.6.3 Locale: English/Canada (en_CA) Installed mods: * QuickMeasure-main * A2plus.backup1662229165.818354 * A2plus 0.4.56a [/code] ``` ### Subproject(s) affected? Other (specify in description) ### Issue description The bug reporting template indicates that all bug reports should be preceded by a forum discussion. CONTRIBUTING.md does not mention this requirement. This is confusing. The bug reporting template should be revised to indicate that a forum discussion is not required, or the requirement for a preliminary forum discussion should be added to CONTRIBUTING.md. ### Anything else? _No response_ ### Code of Conduct - [X] I agree to follow this project's Code of Conduct
process
bug reporting template requires previous forum discussion is there an existing issue for this i have searched the existing issues forums discussion version development full version info shell os linux mint x cinnamon cinnamon word size of freecad bit version git build type debug branch robustreferences hash python qt coin vtk occ locale english canada en ca installed mods quickmeasure main subproject s affected other specify in description issue description the bug reporting template indicates that all bug reports should be preceded by a forum discussion contributing md does not mention this requirement this is confusing the bug reporting template should be revised to indicate that a forum discussion is not required or the requirement for a preliminary forum discussion should be added to contributing md anything else no response code of conduct i agree to follow this project s code of conduct
1
8,228
11,414,658,563
IssuesEvent
2020-02-02 04:53:53
MobileOrg/mobileorg
https://api.github.com/repos/MobileOrg/mobileorg
closed
Investigate switch from CocoaPods to Carthage.
development process
Determine if switching to Carthage and removing the dependency on CocoaPods to allow using `xcconfig` files is worth the effort (probably, yes). * https://github.com/Carthage/Carthage This should also enable removing the multiple plist config with various build settings (adhoc,debug,release). Using the latest BuildSettingExtractor release could help with migration: * https://github.com/dempseyatgithub/BuildSettingExtractor/releases/tag/v1.3.2
1.0
Investigate switch from CocoaPods to Carthage. - Determine if switching to Carthage and removing the dependency on CocoaPods to allow using `xcconfig` files is worth the effort (probably, yes). * https://github.com/Carthage/Carthage This should also enable removing the multiple plist config with various build settings (adhoc,debug,release). Using the latest BuildSettingExtractor release could help with migration: * https://github.com/dempseyatgithub/BuildSettingExtractor/releases/tag/v1.3.2
process
investigate switch from cocoapods to carthage determine if switching to carthage and removing the dependency on cocoapods to allow using xcconfig files is worth the effort probably yes this should also enable removing the multiple plist config with various build settings adhoc debug release using the latest buildsettingextractor release could help with migration
1
11,059
13,893,097,004
IssuesEvent
2020-10-19 13:06:30
googleapis/google-cloud-dotnet
https://api.github.com/repos/googleapis/google-cloud-dotnet
reopened
Add more APIs
type: process
The following APIs have been identified for generation: - [ ] Access Approval API - needs C# namespace option - [ ] App Engine Admin API - if this is google/appengine/v1, needs C# namespace option - [x] Cloud Bigtable Admin API (been generated for ages) - [ ] Binary Authorization API (in google/cloud/binaryauthorization; needs C# namespace option) - [ ] Cloud Build API (in google/devtools/cloudbuild - needs C# namespace option, which we'd need to consider carefully) - [x] Cloud IoT API (in google/cloud/iot) - [ ] Cloud Resource Manager API (in google/cloud/resourcemanager; has C# namespace option - but declares the common Folder resource, so we need to work out how to handle that; probably going to want to wait for v3) - [ ] IAM Service Account Credentials API (in google/iam/credentials; probably want C# namespace option) - [ ] ~~Pub/Sub Lite API (don't think we want to generate this)~~ - [ ] Web Security Scanner API (in google/cloud/websecurityscanner, needs C# namespace option) - [x] Area120 Tables API (released as alpha01 on 2020-09-22) - [ ] Data Labeling API (in google/cloud/datalabeling, needs C# namespace option) - [x] Media Translation API (in google/cloud/mediatranslation, has C# namespace option) - [ ] Policy Troubleshooter API (in beta since 2020-08-26; could go GA) - [ ] Recommendations AI (assuming this is google/cloud/recommendationengine, has C# namespace option - but it's missing it in one file; investigating) - [ ] Service Control API (expect this is google/api/servicecontrol; needs C# namespace option - and maybe include "Cloud"?) - [x] Service Management API (assuming this is google/api/servicemanagement/v1; has C# namespace option of Google.Cloud.ServiceManagement.V1) - [x] Workflow Executions API (generated as of 2020-10-14) We'll add a date and the kind of action as we go.
1.0
Add more APIs - The following APIs have been identified for generation: - [ ] Access Approval API - needs C# namespace option - [ ] App Engine Admin API - if this is google/appengine/v1, needs C# namespace option - [x] Cloud Bigtable Admin API (been generated for ages) - [ ] Binary Authorization API (in google/cloud/binaryauthorization; needs C# namespace option) - [ ] Cloud Build API (in google/devtools/cloudbuild - needs C# namespace option, which we'd need to consider carefully) - [x] Cloud IoT API (in google/cloud/iot) - [ ] Cloud Resource Manager API (in google/cloud/resourcemanager; has C# namespace option - but declares the common Folder resource, so we need to work out how to handle that; probably going to want to wait for v3) - [ ] IAM Service Account Credentials API (in google/iam/credentials; probably want C# namespace option) - [ ] ~~Pub/Sub Lite API (don't think we want to generate this)~~ - [ ] Web Security Scanner API (in google/cloud/websecurityscanner, needs C# namespace option) - [x] Area120 Tables API (released as alpha01 on 2020-09-22) - [ ] Data Labeling API (in google/cloud/datalabeling, needs C# namespace option) - [x] Media Translation API (in google/cloud/mediatranslation, has C# namespace option) - [ ] Policy Troubleshooter API (in beta since 2020-08-26; could go GA) - [ ] Recommendations AI (assuming this is google/cloud/recommendationengine, has C# namespace option - but it's missing it in one file; investigating) - [ ] Service Control API (expect this is google/api/servicecontrol; needs C# namespace option - and maybe include "Cloud"?) - [x] Service Management API (assuming this is google/api/servicemanagement/v1; has C# namespace option of Google.Cloud.ServiceManagement.V1) - [x] Workflow Executions API (generated as of 2020-10-14) We'll add a date and the kind of action as we go.
process
add more apis the following apis have been identified for generation access approval api needs c namespace option app engine admin api if this is google appengine needs c namespace option cloud bigtable admin api been generated for ages binary authorization api in google cloud binaryauthorization needs c namespace option cloud build api in google devtools cloudbuild needs c namespace option which we d need to consider carefully cloud iot api in google cloud iot cloud resource manager api in google cloud resourcemanager has c namespace option but declares the common folder resource so we need to work out how to handle that probably going to want to wait for iam service account credentials api in google iam credentials probably want c namespace option pub sub lite api don t think we want to generate this web security scanner api in google cloud websecurityscanner needs c namespace option tables api released as on data labeling api in google cloud datalabeling needs c namespace option media translation api in google cloud mediatranslation has c namespace option policy troubleshooter api in beta since could go ga recommendations ai assuming this is google cloud recommendationengine has c namespace option but it s missing it in one file investigating service control api expect this is google api servicecontrol needs c namespace option and maybe include cloud service management api assuming this is google api servicemanagement has c namespace option of google cloud servicemanagement workflow executions api generated as of we ll add a date and the kind of action as we go
1
16,052
20,194,609,441
IssuesEvent
2022-02-11 09:30:33
ooi-data/RS01SBPS-PC01A-05-ADCPTD102-streamed-adcp_velocity_beam
https://api.github.com/repos/ooi-data/RS01SBPS-PC01A-05-ADCPTD102-streamed-adcp_velocity_beam
opened
🛑 Processing failed: GroupNotFoundError
process
## Overview `GroupNotFoundError` found in `processing_task` task during run ended on 2022-02-11T09:30:32.846480. ## Details Flow name: `RS01SBPS-PC01A-05-ADCPTD102-streamed-adcp_velocity_beam` Task name: `processing_task` Error type: `GroupNotFoundError` Error message: group not found at path '' <details> <summary>Traceback</summary> ``` Traceback (most recent call last): File "/srv/conda/envs/notebook/lib/python3.9/site-packages/ooi_harvester/processor/pipeline.py", line 165, in processing final_path = finalize_data_stream( File "/srv/conda/envs/notebook/lib/python3.9/site-packages/ooi_harvester/processor/__init__.py", line 64, in finalize_data_stream final_group = zarr.open_group(final_store, mode='r+') File "/srv/conda/envs/notebook/lib/python3.9/site-packages/zarr/hierarchy.py", line 1168, in open_group raise GroupNotFoundError(path) zarr.errors.GroupNotFoundError: group not found at path '' ``` </details>
1.0
🛑 Processing failed: GroupNotFoundError - ## Overview `GroupNotFoundError` found in `processing_task` task during run ended on 2022-02-11T09:30:32.846480. ## Details Flow name: `RS01SBPS-PC01A-05-ADCPTD102-streamed-adcp_velocity_beam` Task name: `processing_task` Error type: `GroupNotFoundError` Error message: group not found at path '' <details> <summary>Traceback</summary> ``` Traceback (most recent call last): File "/srv/conda/envs/notebook/lib/python3.9/site-packages/ooi_harvester/processor/pipeline.py", line 165, in processing final_path = finalize_data_stream( File "/srv/conda/envs/notebook/lib/python3.9/site-packages/ooi_harvester/processor/__init__.py", line 64, in finalize_data_stream final_group = zarr.open_group(final_store, mode='r+') File "/srv/conda/envs/notebook/lib/python3.9/site-packages/zarr/hierarchy.py", line 1168, in open_group raise GroupNotFoundError(path) zarr.errors.GroupNotFoundError: group not found at path '' ``` </details>
process
🛑 processing failed groupnotfounderror overview groupnotfounderror found in processing task task during run ended on details flow name streamed adcp velocity beam task name processing task error type groupnotfounderror error message group not found at path traceback traceback most recent call last file srv conda envs notebook lib site packages ooi harvester processor pipeline py line in processing final path finalize data stream file srv conda envs notebook lib site packages ooi harvester processor init py line in finalize data stream final group zarr open group final store mode r file srv conda envs notebook lib site packages zarr hierarchy py line in open group raise groupnotfounderror path zarr errors groupnotfounderror group not found at path
1
1,627
4,239,486,768
IssuesEvent
2016-07-06 09:38:33
BriceChou/WeiboClient
https://api.github.com/repos/BriceChou/WeiboClient
closed
User personal center and message center issue
High In processing TODO
When phone in landscape mode, we can't see the page bottom content. We can't scroll down the page. 1. let your phone in landscape mode. 2. open the application. 3. can't scroll down the personal center and message bottom page.
1.0
User personal center and message center issue - When phone in landscape mode, we can't see the page bottom content. We can't scroll down the page. 1. let your phone in landscape mode. 2. open the application. 3. can't scroll down the personal center and message bottom page.
process
user personal center and message center issue when phone in landscape mode we can t see the page bottom content we can t scroll down the page let your phone in landscape mode open the application can t scroll down the personal center and message bottom page
1
759
3,244,506,708
IssuesEvent
2015-10-16 02:53:02
GFUCABAM/statler
https://api.github.com/repos/GFUCABAM/statler
opened
Link to Sentiment API
API server components Language processing
Enable the API to make a JSON request to the https://github.com/vivekn/sentiment API and store what it sends back.
1.0
Link to Sentiment API - Enable the API to make a JSON request to the https://github.com/vivekn/sentiment API and store what it sends back.
process
link to sentiment api enable the api to make a json request to the api and store what it sends back
1
16,105
20,329,807,671
IssuesEvent
2022-02-18 09:37:14
prisma/prisma
https://api.github.com/repos/prisma/prisma
opened
Error: [introspection-engine/connectors/sql-introspection-connector/src/introspection_helpers.rs:241:64] called `Option::unwrap()` on a `None` value on cockroachdb
kind/bug process/candidate topic: error reporting team/migrations topic: cockroachdb
<!-- If required, please update the title to be clear and descriptive --> Command: `prisma introspect` Version: `3.9.2` Binary Version: `bcc2ff906db47790ee902e7bbc76d7ffb1893009` Report: https://prisma-errors.netlify.app/report/13667 OS: `x64 linux 5.4.0-1067-azure` JS Stacktrace: ``` Error: [introspection-engine/connectors/sql-introspection-connector/src/introspection_helpers.rs:241:64] called `Option::unwrap()` on a `None` value at ChildProcess.<anonymous> (/<..>/node_modules/prisma/build/index.js:46439:30) at ChildProcess.emit (node:events:390:28) at Process.ChildProcess._handle.onexit (node:internal/child_process:290:12) ``` Rust Stacktrace: ``` 0: user_facing_errors::Error::new_in_panic_hook 1: user_facing_errors::panic_hook::set_panic_hook::{{closure}} 2: std::panicking::rust_panic_with_hook at /rustc/db9d1b20bba1968c1ec1fc49616d4742c1725b4b/library/std/src/panicking.rs:610:17 3: std::panicking::begin_panic_handler::{{closure}} at /rustc/db9d1b20bba1968c1ec1fc49616d4742c1725b4b/library/std/src/panicking.rs:500:13 4: std::sys_common::backtrace::__rust_end_short_backtrace at /rustc/db9d1b20bba1968c1ec1fc49616d4742c1725b4b/library/std/src/sys_common/backtrace.rs:139:18 5: rust_begin_unwind at /rustc/db9d1b20bba1968c1ec1fc49616d4742c1725b4b/library/std/src/panicking.rs:498:5 6: core::panicking::panic_fmt at /rustc/db9d1b20bba1968c1ec1fc49616d4742c1725b4b/library/core/src/panicking.rs:107:14 7: core::panicking::panic at /rustc/db9d1b20bba1968c1ec1fc49616d4742c1725b4b/library/core/src/panicking.rs:48:5 8: <core::iter::adapters::map::Map<I,F> as core::iter::traits::iterator::Iterator>::fold 9: sql_introspection_connector::introspection_helpers::calculate_relation_field 10: sql_introspection_connector::introspection::introspect 11: sql_introspection_connector::calculate_datamodel::calculate_datamodel 12: <core::future::from_generator::GenFuture<T> as core::future::future::Future>::poll 13: <core::future::from_generator::GenFuture<T> as core::future::future::Future>::poll 14: <futures_util::future::either::Either<A,B> as core::future::future::Future>::poll 15: <futures_util::future::future::Then<Fut1,Fut2,F> as core::future::future::Future>::poll 16: <futures_util::future::either::Either<A,B> as core::future::future::Future>::poll 17: json_rpc_stdio::handle_stdin_next_line::{{closure}} 18: <core::future::from_generator::GenFuture<T> as core::future::future::Future>::poll 19: introspection_engine::main 20: std::sys_common::backtrace::__rust_begin_short_backtrace 21: std::rt::lang_start::{{closure}} 22: core::ops::function::impls::<impl core::ops::function::FnOnce<A> for &F>::call_once at /rustc/db9d1b20bba1968c1ec1fc49616d4742c1725b4b/library/core/src/ops/function.rs:259:13 std::panicking::try::do_call at /rustc/db9d1b20bba1968c1ec1fc49616d4742c1725b4b/library/std/src/panicking.rs:406:40 std::panicking::try at /rustc/db9d1b20bba1968c1ec1fc49616d4742c1725b4b/library/std/src/panicking.rs:370:19 std::panic::catch_unwind at /rustc/db9d1b20bba1968c1ec1fc49616d4742c1725b4b/library/std/src/panic.rs:133:14 std::rt::lang_start_internal::{{closure}} at /rustc/db9d1b20bba1968c1ec1fc49616d4742c1725b4b/library/std/src/rt.rs:128:48 std::panicking::try::do_call at /rustc/db9d1b20bba1968c1ec1fc49616d4742c1725b4b/library/std/src/panicking.rs:406:40 std::panicking::try at /rustc/db9d1b20bba1968c1ec1fc49616d4742c1725b4b/library/std/src/panicking.rs:370:19 std::panic::catch_unwind at /rustc/db9d1b20bba1968c1ec1fc49616d4742c1725b4b/library/std/src/panic.rs:133:14 std::rt::lang_start_internal at /rustc/db9d1b20bba1968c1ec1fc49616d4742c1725b4b/library/std/src/rt.rs:128:20 23: std::rt::lang_start 24: __libc_start_main 25: <unknown> ```
1.0
Error: [introspection-engine/connectors/sql-introspection-connector/src/introspection_helpers.rs:241:64] called `Option::unwrap()` on a `None` value on cockroachdb - <!-- If required, please update the title to be clear and descriptive --> Command: `prisma introspect` Version: `3.9.2` Binary Version: `bcc2ff906db47790ee902e7bbc76d7ffb1893009` Report: https://prisma-errors.netlify.app/report/13667 OS: `x64 linux 5.4.0-1067-azure` JS Stacktrace: ``` Error: [introspection-engine/connectors/sql-introspection-connector/src/introspection_helpers.rs:241:64] called `Option::unwrap()` on a `None` value at ChildProcess.<anonymous> (/<..>/node_modules/prisma/build/index.js:46439:30) at ChildProcess.emit (node:events:390:28) at Process.ChildProcess._handle.onexit (node:internal/child_process:290:12) ``` Rust Stacktrace: ``` 0: user_facing_errors::Error::new_in_panic_hook 1: user_facing_errors::panic_hook::set_panic_hook::{{closure}} 2: std::panicking::rust_panic_with_hook at /rustc/db9d1b20bba1968c1ec1fc49616d4742c1725b4b/library/std/src/panicking.rs:610:17 3: std::panicking::begin_panic_handler::{{closure}} at /rustc/db9d1b20bba1968c1ec1fc49616d4742c1725b4b/library/std/src/panicking.rs:500:13 4: std::sys_common::backtrace::__rust_end_short_backtrace at /rustc/db9d1b20bba1968c1ec1fc49616d4742c1725b4b/library/std/src/sys_common/backtrace.rs:139:18 5: rust_begin_unwind at /rustc/db9d1b20bba1968c1ec1fc49616d4742c1725b4b/library/std/src/panicking.rs:498:5 6: core::panicking::panic_fmt at /rustc/db9d1b20bba1968c1ec1fc49616d4742c1725b4b/library/core/src/panicking.rs:107:14 7: core::panicking::panic at /rustc/db9d1b20bba1968c1ec1fc49616d4742c1725b4b/library/core/src/panicking.rs:48:5 8: <core::iter::adapters::map::Map<I,F> as core::iter::traits::iterator::Iterator>::fold 9: sql_introspection_connector::introspection_helpers::calculate_relation_field 10: sql_introspection_connector::introspection::introspect 11: sql_introspection_connector::calculate_datamodel::calculate_datamodel 12: <core::future::from_generator::GenFuture<T> as core::future::future::Future>::poll 13: <core::future::from_generator::GenFuture<T> as core::future::future::Future>::poll 14: <futures_util::future::either::Either<A,B> as core::future::future::Future>::poll 15: <futures_util::future::future::Then<Fut1,Fut2,F> as core::future::future::Future>::poll 16: <futures_util::future::either::Either<A,B> as core::future::future::Future>::poll 17: json_rpc_stdio::handle_stdin_next_line::{{closure}} 18: <core::future::from_generator::GenFuture<T> as core::future::future::Future>::poll 19: introspection_engine::main 20: std::sys_common::backtrace::__rust_begin_short_backtrace 21: std::rt::lang_start::{{closure}} 22: core::ops::function::impls::<impl core::ops::function::FnOnce<A> for &F>::call_once at /rustc/db9d1b20bba1968c1ec1fc49616d4742c1725b4b/library/core/src/ops/function.rs:259:13 std::panicking::try::do_call at /rustc/db9d1b20bba1968c1ec1fc49616d4742c1725b4b/library/std/src/panicking.rs:406:40 std::panicking::try at /rustc/db9d1b20bba1968c1ec1fc49616d4742c1725b4b/library/std/src/panicking.rs:370:19 std::panic::catch_unwind at /rustc/db9d1b20bba1968c1ec1fc49616d4742c1725b4b/library/std/src/panic.rs:133:14 std::rt::lang_start_internal::{{closure}} at /rustc/db9d1b20bba1968c1ec1fc49616d4742c1725b4b/library/std/src/rt.rs:128:48 std::panicking::try::do_call at /rustc/db9d1b20bba1968c1ec1fc49616d4742c1725b4b/library/std/src/panicking.rs:406:40 std::panicking::try at /rustc/db9d1b20bba1968c1ec1fc49616d4742c1725b4b/library/std/src/panicking.rs:370:19 std::panic::catch_unwind at /rustc/db9d1b20bba1968c1ec1fc49616d4742c1725b4b/library/std/src/panic.rs:133:14 std::rt::lang_start_internal at /rustc/db9d1b20bba1968c1ec1fc49616d4742c1725b4b/library/std/src/rt.rs:128:20 23: std::rt::lang_start 24: __libc_start_main 25: <unknown> ```
process
error called option unwrap on a none value on cockroachdb command prisma introspect version binary version report os linux azure js stacktrace error called option unwrap on a none value at childprocess node modules prisma build index js at childprocess emit node events at process childprocess handle onexit node internal child process rust stacktrace user facing errors error new in panic hook user facing errors panic hook set panic hook closure std panicking rust panic with hook at rustc library std src panicking rs std panicking begin panic handler closure at rustc library std src panicking rs std sys common backtrace rust end short backtrace at rustc library std src sys common backtrace rs rust begin unwind at rustc library std src panicking rs core panicking panic fmt at rustc library core src panicking rs core panicking panic at rustc library core src panicking rs as core iter traits iterator iterator fold sql introspection connector introspection helpers calculate relation field sql introspection connector introspection introspect sql introspection connector calculate datamodel calculate datamodel as core future future future poll as core future future future poll as core future future future poll as core future future future poll as core future future future poll json rpc stdio handle stdin next line closure as core future future future poll introspection engine main std sys common backtrace rust begin short backtrace std rt lang start closure core ops function impls for f call once at rustc library core src ops function rs std panicking try do call at rustc library std src panicking rs std panicking try at rustc library std src panicking rs std panic catch unwind at rustc library std src panic rs std rt lang start internal closure at rustc library std src rt rs std panicking try do call at rustc library std src panicking rs std panicking try at rustc library std src panicking rs std panic catch unwind at rustc library std src panic rs std rt lang start internal at rustc library std src rt rs std rt lang start libc start main
1
5,474
8,352,613,693
IssuesEvent
2018-10-02 07:17:56
facebook/graphql
https://api.github.com/repos/facebook/graphql
closed
Status of GraphQL spec change process
🐝 Process
This repository provides an awesome opportunity for broader community involvement in the evolution of GraphQL. The [change process](https://github.com/facebook/graphql/blob/master/CONTRIBUTING.md) is great start to making this engagement productive but it's been in draft status since #342 merged back on August 14, 2017. Is there anything blocking this process from moving out of draft status? As someone from the extended GraphQL community that's tracking a few issues that I'd love to see land in some form or another (#300 and #488 / #395), I think there are a few small changes/clarifications to the process that could lead to more productive community engagement: 1) The process mentions "community" several times (e.g. "Find member of community to be champion for this change", "Community consent on the proposed change") but the community isn't clearly defined. Is it a member of the GraphQL Working Group? Is it anyone who has previously made changes to the GraphQL spec? Is it anyone using GraphQL? 2) Similarly the GraphQL Working Group is mentioned a few times but the membership of that group isn't well defined (although the principles for who should be included in the group are relatively clear). 3) It's hard to correlate the issues/pull requests in this repository to stages in the process. This very much ties into the desire for a "predictable timeline of when things are going to be merged" from the working group [discussion](https://github.com/graphql/graphql-wg/blob/master/notes/2017-08-14.md) of this process. Github labels or a wiki page with the status of all proposal (e.g. the [tc39 proposals page](https://github.com/tc39/proposals)) are possible easy solutions. Hopefully with a few small tweaks to the change process there can be more effective engagement with the broader GraphQL community ensuring continued successful evolution of GraphQL.
1.0
Status of GraphQL spec change process - This repository provides an awesome opportunity for broader community involvement in the evolution of GraphQL. The [change process](https://github.com/facebook/graphql/blob/master/CONTRIBUTING.md) is great start to making this engagement productive but it's been in draft status since #342 merged back on August 14, 2017. Is there anything blocking this process from moving out of draft status? As someone from the extended GraphQL community that's tracking a few issues that I'd love to see land in some form or another (#300 and #488 / #395), I think there are a few small changes/clarifications to the process that could lead to more productive community engagement: 1) The process mentions "community" several times (e.g. "Find member of community to be champion for this change", "Community consent on the proposed change") but the community isn't clearly defined. Is it a member of the GraphQL Working Group? Is it anyone who has previously made changes to the GraphQL spec? Is it anyone using GraphQL? 2) Similarly the GraphQL Working Group is mentioned a few times but the membership of that group isn't well defined (although the principles for who should be included in the group are relatively clear). 3) It's hard to correlate the issues/pull requests in this repository to stages in the process. This very much ties into the desire for a "predictable timeline of when things are going to be merged" from the working group [discussion](https://github.com/graphql/graphql-wg/blob/master/notes/2017-08-14.md) of this process. Github labels or a wiki page with the status of all proposal (e.g. the [tc39 proposals page](https://github.com/tc39/proposals)) are possible easy solutions. Hopefully with a few small tweaks to the change process there can be more effective engagement with the broader GraphQL community ensuring continued successful evolution of GraphQL.
process
status of graphql spec change process this repository provides an awesome opportunity for broader community involvement in the evolution of graphql the is great start to making this engagement productive but it s been in draft status since merged back on august is there anything blocking this process from moving out of draft status as someone from the extended graphql community that s tracking a few issues that i d love to see land in some form or another and i think there are a few small changes clarifications to the process that could lead to more productive community engagement the process mentions community several times e g find member of community to be champion for this change community consent on the proposed change but the community isn t clearly defined is it a member of the graphql working group is it anyone who has previously made changes to the graphql spec is it anyone using graphql similarly the graphql working group is mentioned a few times but the membership of that group isn t well defined although the principles for who should be included in the group are relatively clear it s hard to correlate the issues pull requests in this repository to stages in the process this very much ties into the desire for a predictable timeline of when things are going to be merged from the working group of this process github labels or a wiki page with the status of all proposal e g the are possible easy solutions hopefully with a few small tweaks to the change process there can be more effective engagement with the broader graphql community ensuring continued successful evolution of graphql
1
49,281
12,308,611,272
IssuesEvent
2020-05-12 07:32:50
chocolate-doom/chocolate-doom
https://api.github.com/repos/chocolate-doom/chocolate-doom
closed
Clang compilation produces warnings
build
Compiling on macOS, using make, shows the following warnings. Ignore the warnings about libraries. http://pastebin.com/4ANqMtHW
1.0
Clang compilation produces warnings - Compiling on macOS, using make, shows the following warnings. Ignore the warnings about libraries. http://pastebin.com/4ANqMtHW
non_process
clang compilation produces warnings compiling on macos using make shows the following warnings ignore the warnings about libraries
0
6,489
9,559,138,396
IssuesEvent
2019-05-03 15:52:23
openopps/openopps-platform
https://api.github.com/repos/openopps/openopps-platform
opened
Add a question to the Education Page
Apply Process State Dept.
Who: Student and DoS What: Additional Education requirement Why: DoS would like to screen out applicants based off this question A/C - Add the following question to the "Education and Transcripts" page under "Will you continue your education after this internship has been completed?" and before the GPA question - If selected for an internship, will you be able to work between 32 and 40 hours a week for 10 consecutive weeks? - There will be a Yes and No button No Mock required **NOTE** This was brought up when we were testing with DoS on Thursday 5/2/19. They forgot to add this question initially.
1.0
Add a question to the Education Page - Who: Student and DoS What: Additional Education requirement Why: DoS would like to screen out applicants based off this question A/C - Add the following question to the "Education and Transcripts" page under "Will you continue your education after this internship has been completed?" and before the GPA question - If selected for an internship, will you be able to work between 32 and 40 hours a week for 10 consecutive weeks? - There will be a Yes and No button No Mock required **NOTE** This was brought up when we were testing with DoS on Thursday 5/2/19. They forgot to add this question initially.
process
add a question to the education page who student and dos what additional education requirement why dos would like to screen out applicants based off this question a c add the following question to the education and transcripts page under will you continue your education after this internship has been completed and before the gpa question if selected for an internship will you be able to work between and hours a week for consecutive weeks there will be a yes and no button no mock required note this was brought up when we were testing with dos on thursday they forgot to add this question initially
1
247,044
18,857,230,887
IssuesEvent
2021-11-12 08:20:29
TTraveller7/pe
https://api.github.com/repos/TTraveller7/pe
opened
UG screenshots inconsistency in "Sample Usage" part
type.DocumentationBug severity.VeryLow
What I did: Enter `tag` Expected: ![图片.png](https://raw.githubusercontent.com/TTraveller7/pe/main/files/96adf97a-7ca0-4133-846a-97e4092f3f88.png) Actual: ![图片.png](https://raw.githubusercontent.com/TTraveller7/pe/main/files/bfd982f8-8a99-4015-8fb2-5c0f8e2bb078.png) Comments: The same for `edit`. For Sample Usage section, perhaps the screenshots need to be updated. <!--session: 1636703627577-d14d2089-5115-40ce-bcc9-d951289e3e9d--> <!--Version: Web v3.4.1-->
1.0
UG screenshots inconsistency in "Sample Usage" part - What I did: Enter `tag` Expected: ![图片.png](https://raw.githubusercontent.com/TTraveller7/pe/main/files/96adf97a-7ca0-4133-846a-97e4092f3f88.png) Actual: ![图片.png](https://raw.githubusercontent.com/TTraveller7/pe/main/files/bfd982f8-8a99-4015-8fb2-5c0f8e2bb078.png) Comments: The same for `edit`. For Sample Usage section, perhaps the screenshots need to be updated. <!--session: 1636703627577-d14d2089-5115-40ce-bcc9-d951289e3e9d--> <!--Version: Web v3.4.1-->
non_process
ug screenshots inconsistency in sample usage part what i did enter tag expected actual comments the same for edit for sample usage section perhaps the screenshots need to be updated
0
4,719
4,553,436,167
IssuesEvent
2016-09-13 04:48:20
docker/for-mac
https://api.github.com/repos/docker/for-mac
closed
"Hang" during 'docker pull' extraction phase
area/network kind/performance osx/10.11.x version/1.12.0
### Expected behavior `docker pull` makes continuous progress while `Extracting`; bar progresses to 100% before completion. ### Actual behavior `docker pull` hangs for a minute, then reports `Pull complete` without apparently finishing extraction: ``` $ docker pull node:6 6: Pulling from library/node 357ea8c3d80b: Already exists 52befadefd24: Already exists 3c0732d5313c: Already exists ceb711c7e301: Already exists 868b1d0e2aad: Already exists 61d10f626f84: Extracting [======================================> ] 10.49 MB/13.6 MB ``` … then, a minute or so later: ``` 61d10f626f84: Pull complete Digest: sha256:12899eea666e85f23e9850bd3c309b1ee28dd0869f554a7a6895fc962d9094a3 Status: Downloaded newer image for node:6 ``` ### Information Diagnostic ID: 4168FFD0-2A6A-4390-B7B7-22F705F5FD3F Docker for Mac: 1.12.0-a (Build 11213) macOS: Version 10.11.6 (Build 15G31)
True
"Hang" during 'docker pull' extraction phase - ### Expected behavior `docker pull` makes continuous progress while `Extracting`; bar progresses to 100% before completion. ### Actual behavior `docker pull` hangs for a minute, then reports `Pull complete` without apparently finishing extraction: ``` $ docker pull node:6 6: Pulling from library/node 357ea8c3d80b: Already exists 52befadefd24: Already exists 3c0732d5313c: Already exists ceb711c7e301: Already exists 868b1d0e2aad: Already exists 61d10f626f84: Extracting [======================================> ] 10.49 MB/13.6 MB ``` … then, a minute or so later: ``` 61d10f626f84: Pull complete Digest: sha256:12899eea666e85f23e9850bd3c309b1ee28dd0869f554a7a6895fc962d9094a3 Status: Downloaded newer image for node:6 ``` ### Information Diagnostic ID: 4168FFD0-2A6A-4390-B7B7-22F705F5FD3F Docker for Mac: 1.12.0-a (Build 11213) macOS: Version 10.11.6 (Build 15G31)
non_process
hang during docker pull extraction phase expected behavior docker pull makes continuous progress while extracting bar progresses to before completion actual behavior docker pull hangs for a minute then reports pull complete without apparently finishing extraction docker pull node pulling from library node already exists already exists already exists already exists already exists extracting mb mb … then a minute or so later pull complete digest status downloaded newer image for node information diagnostic id docker for mac a build macos version build
0
9,382
12,390,892,571
IssuesEvent
2020-05-20 11:31:04
burnpiro/wod-bike-dataset-generator
https://api.github.com/repos/burnpiro/wod-bike-dataset-generator
opened
Generate .csv file with edges for each 15min time period in a day
data processing
For each 15min period in the day, we need to generate a file which contains: ``` | source | target | weight | |--- |--- |--- | | 1 | 2 | 3 | | 2 | 4 | 1 | | 12 | 123 | 12 | ``` It has to omit 0 values and be directed. In the end, there will be a file for each day in 10 months of rent.
1.0
Generate .csv file with edges for each 15min time period in a day - For each 15min period in the day, we need to generate a file which contains: ``` | source | target | weight | |--- |--- |--- | | 1 | 2 | 3 | | 2 | 4 | 1 | | 12 | 123 | 12 | ``` It has to omit 0 values and be directed. In the end, there will be a file for each day in 10 months of rent.
process
generate csv file with edges for each time period in a day for each period in the day we need to generate a file which contains source target weight it has to omit values and be directed in the end there will be a file for each day in months of rent
1
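The record above describes an aggregation step: for each 15-minute window in a day, emit a directed `source,target,weight` edge list with zero-weight pairs omitted. A minimal stdlib sketch of that step, assuming rentals arrive as `(start_time, from_station, to_station)` tuples (the tuple layout and timestamp handling are illustrative assumptions, not the repository's actual code):

```python
import csv
from collections import Counter
from datetime import datetime


def edges_for_window(rentals, window_start, window_minutes=15):
    """Count directed trips whose start time falls inside one window.

    `rentals` is an iterable of (start_time, from_station, to_station)
    tuples. (a, b) and (b, a) are kept separate because the graph is
    directed; Counter never stores zero counts, so 0-weight edges are
    omitted automatically.
    """
    lo = window_start.timestamp()
    hi = lo + window_minutes * 60
    counts = Counter(
        (src, dst)
        for start, src, dst in rentals
        if lo <= start.timestamp() < hi
    )
    return [(src, dst, w) for (src, dst), w in sorted(counts.items())]


def write_edges_csv(path, edges):
    """Write one window's edge list as source,target,weight rows."""
    with open(path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["source", "target", "weight"])
        writer.writerows(edges)
```

Looping `edges_for_window` over the 96 quarter-hour windows of each day and calling `write_edges_csv` per window yields one file per period, as the issue requests.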
874
3,332,574,748
IssuesEvent
2015-11-11 20:46:00
neuropoly/spinalcordtoolbox
https://api.github.com/repos/neuropoly/spinalcordtoolbox
opened
correct for through-slice partial volume effect
enhancement priority: medium sct_extract_metric sct_process_segmentation
if slice is not perfectly orthogonal to the cord centerline and slice thickness is large, there will be partial volume effect. This could theoretically be accounted for because we know: - centerline - slice orientation - slice thickness
1.0
correct for through-slice partial volume effect - if slice is not perfectly orthogonal to the cord centerline and slice thickness is large, there will be partial volume effect. This could theoretically be accounted for because we know: - centerline - slice orientation - slice thickness
process
correct for through slice partial volume effect if slice is not perfectly orthogonal to the cord centerline and slice thickness is large there will be partial volume effect this could theoretically be accounted for because we know centerline slice orientation slice thickness
1
246,746
26,612,158,644
IssuesEvent
2023-01-24 01:39:28
nidhihcl/linux-4.19.72
https://api.github.com/repos/nidhihcl/linux-4.19.72
reopened
CVE-2022-42895 (Medium) detected in linuxlinux-4.19.269
security vulnerability
## CVE-2022-42895 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linuxlinux-4.19.269</b></p></summary> <p> <p>The Linux Kernel</p> <p>Library home page: <a href=https://mirrors.edge.kernel.org/pub/linux/kernel/v4.x/?wsslib=linux>https://mirrors.edge.kernel.org/pub/linux/kernel/v4.x/?wsslib=linux</a></p> <p>Found in HEAD commit: <a href="https://github.com/nidhihcl/linux-4.19.72/commit/100f7b26054277849f7a1fbab8e41735138c924e">100f7b26054277849f7a1fbab8e41735138c924e</a></p> <p>Found in base branch: <b>master</b></p></p> </details> </p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (1)</summary> <p></p> <p> <img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/net/bluetooth/l2cap_core.c</b> </p> </details> <p></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary> <p> There is an infoleak vulnerability in the Linux kernel's net/bluetooth/l2cap_core.c's l2cap_parse_conf_req function which can be used to leak kernel pointers remotely.
We recommend upgrading past commit https://github.com/torvalds/linux/commit/b1a2cd50c0357f243b7435a732b4e62ba3157a2e https://www.google.com/url <p>Publish Date: 2022-11-23 <p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2022-42895>CVE-2022-42895</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.5</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Local - Attack Complexity: Low - Privileges Required: Low - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: High - Integrity Impact: None - Availability Impact: None </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://www.linuxkernelcves.com/cves/CVE-2022-42895">https://www.linuxkernelcves.com/cves/CVE-2022-42895</a></p> <p>Release Date: 2022-11-23</p> <p>Fix Resolution: v4.9.333,v4.14.299,v4.19.265,v5.4.224,v5.10.154,v5.15.78,v6.0.8</p> </p> </details> <p></p> *** Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
True
CVE-2022-42895 (Medium) detected in linuxlinux-4.19.269 - ## CVE-2022-42895 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linuxlinux-4.19.269</b></p></summary> <p> <p>The Linux Kernel</p> <p>Library home page: <a href=https://mirrors.edge.kernel.org/pub/linux/kernel/v4.x/?wsslib=linux>https://mirrors.edge.kernel.org/pub/linux/kernel/v4.x/?wsslib=linux</a></p> <p>Found in HEAD commit: <a href="https://github.com/nidhihcl/linux-4.19.72/commit/100f7b26054277849f7a1fbab8e41735138c924e">100f7b26054277849f7a1fbab8e41735138c924e</a></p> <p>Found in base branch: <b>master</b></p></p> </details> </p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (1)</summary> <p></p> <p> <img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/net/bluetooth/l2cap_core.c</b> </p> </details> <p></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary> <p> There is an infoleak vulnerability in the Linux kernel's net/bluetooth/l2cap_core.c's l2cap_parse_conf_req function which can be used to leak kernel pointers remotely.
We recommend upgrading past commit https://github.com/torvalds/linux/commit/b1a2cd50c0357f243b7435a732b4e62ba3157a2e https://www.google.com/url <p>Publish Date: 2022-11-23 <p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2022-42895>CVE-2022-42895</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.5</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Local - Attack Complexity: Low - Privileges Required: Low - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: High - Integrity Impact: None - Availability Impact: None </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://www.linuxkernelcves.com/cves/CVE-2022-42895">https://www.linuxkernelcves.com/cves/CVE-2022-42895</a></p> <p>Release Date: 2022-11-23</p> <p>Fix Resolution: v4.9.333,v4.14.299,v4.19.265,v5.4.224,v5.10.154,v5.15.78,v6.0.8</p> </p> </details> <p></p> *** Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
non_process
cve medium detected in linuxlinux cve medium severity vulnerability vulnerable library linuxlinux the linux kernel library home page a href found in head commit a href found in base branch master vulnerable source files net bluetooth core c vulnerability details there is an infoleak vulnerability in the linux kernel s net bluetooth core c s parse conf req function which can be used to leak kernel pointers remotely we recommend upgrading past commit publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity low privileges required low user interaction none scope unchanged impact metrics confidentiality impact high integrity impact none availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with mend
0
155,624
12,263,656,996
IssuesEvent
2020-05-07 01:43:40
QubesOS/updates-status
https://api.github.com/repos/QubesOS/updates-status
closed
linux-kernel-4-19 v4.19.114-1 (r4.0)
r4.0-dom0-cur-test
Update of linux-kernel-4-19 to v4.19.114-1 for Qubes r4.0, see comments below for details. Built from: https://github.com/QubesOS/qubes-linux-kernel/commit/f82d9ad67aed964542a9487cc6428e2d7f5b9caf [Changes since previous version](https://github.com/QubesOS/qubes-linux-kernel/compare/v4.19.107-1...v4.19.114-1): QubesOS/qubes-linux-kernel@f82d9ad Update to kernel-4.19.114 QubesOS/qubes-linux-kernel@9558dd6 Update to kernel-4.19.113 QubesOS/qubes-linux-kernel@84cd273 Update to kernel-4.19.109 QubesOS/qubes-linux-kernel@799e135 Makefile: update verify target QubesOS/qubes-linux-kernel@398660b Merge remote-tracking branch 'origin/pr/184' into stable-4.19 QubesOS/qubes-linux-kernel@ca79d2d Makefile: remove extra tab QubesOS/qubes-linux-kernel@d1e4752 update-sources: clean version modification in case of failure QubesOS/qubes-linux-kernel@9b3af1f Makefile: set default BRANCH variable to master QubesOS/qubes-linux-kernel@080c3ad Add scripts for kernel updates QubesOS/qubes-linux-kernel@e8d0840 gitignore: ignore pkgs QubesOS/qubes-linux-kernel@52e3f7b makefile: clean unused targets QubesOS/qubes-linux-kernel@07aa4f1 Update to kernel-4.19.108 Referenced issues: If you're release manager, you can issue GPG-inline signed command: * `Upload linux-kernel-4-19 f82d9ad67aed964542a9487cc6428e2d7f5b9caf r4.0 current repo` (available 7 days from now) * `Upload linux-kernel-4-19 f82d9ad67aed964542a9487cc6428e2d7f5b9caf r4.0 current (dists) repo`, you can choose subset of distributions, like `vm-fc24 vm-fc25` (available 7 days from now) * `Upload linux-kernel-4-19 f82d9ad67aed964542a9487cc6428e2d7f5b9caf r4.0 security-testing repo` Above commands will work only if packages in current-testing repository were built from given commit (i.e. no new version superseded it).
1.0
linux-kernel-4-19 v4.19.114-1 (r4.0) - Update of linux-kernel-4-19 to v4.19.114-1 for Qubes r4.0, see comments below for details. Built from: https://github.com/QubesOS/qubes-linux-kernel/commit/f82d9ad67aed964542a9487cc6428e2d7f5b9caf [Changes since previous version](https://github.com/QubesOS/qubes-linux-kernel/compare/v4.19.107-1...v4.19.114-1): QubesOS/qubes-linux-kernel@f82d9ad Update to kernel-4.19.114 QubesOS/qubes-linux-kernel@9558dd6 Update to kernel-4.19.113 QubesOS/qubes-linux-kernel@84cd273 Update to kernel-4.19.109 QubesOS/qubes-linux-kernel@799e135 Makefile: update verify target QubesOS/qubes-linux-kernel@398660b Merge remote-tracking branch 'origin/pr/184' into stable-4.19 QubesOS/qubes-linux-kernel@ca79d2d Makefile: remove extra tab QubesOS/qubes-linux-kernel@d1e4752 update-sources: clean version modification in case of failure QubesOS/qubes-linux-kernel@9b3af1f Makefile: set default BRANCH variable to master QubesOS/qubes-linux-kernel@080c3ad Add scripts for kernel updates QubesOS/qubes-linux-kernel@e8d0840 gitignore: ignore pkgs QubesOS/qubes-linux-kernel@52e3f7b makefile: clean unused targets QubesOS/qubes-linux-kernel@07aa4f1 Update to kernel-4.19.108 Referenced issues: If you're release manager, you can issue GPG-inline signed command: * `Upload linux-kernel-4-19 f82d9ad67aed964542a9487cc6428e2d7f5b9caf r4.0 current repo` (available 7 days from now) * `Upload linux-kernel-4-19 f82d9ad67aed964542a9487cc6428e2d7f5b9caf r4.0 current (dists) repo`, you can choose subset of distributions, like `vm-fc24 vm-fc25` (available 7 days from now) * `Upload linux-kernel-4-19 f82d9ad67aed964542a9487cc6428e2d7f5b9caf r4.0 security-testing repo` Above commands will work only if packages in current-testing repository were built from given commit (i.e. no new version superseded it).
non_process
linux kernel update of linux kernel to for qubes see comments below for details built from qubesos qubes linux kernel update to kernel qubesos qubes linux kernel update to kernel qubesos qubes linux kernel update to kernel qubesos qubes linux kernel makefile update verify target qubesos qubes linux kernel merge remote tracking branch origin pr into stable qubesos qubes linux kernel makefile remove extra tab qubesos qubes linux kernel update sources clean version modification in case of failure qubesos qubes linux kernel makefile set default branch variable to master qubesos qubes linux kernel add scripts for kernel updates qubesos qubes linux kernel gitignore ignore pkgs qubesos qubes linux kernel makefile clean unused targets qubesos qubes linux kernel update to kernel referenced issues if you re release manager you can issue gpg inline signed command upload linux kernel current repo available days from now upload linux kernel current dists repo you can choose subset of distributions like vm vm available days from now upload linux kernel security testing repo above commands will work only if packages in current testing repository were built from given commit i e no new version superseded it
0
292,682
25,229,027,962
IssuesEvent
2022-11-14 18:10:41
WPChill/download-monitor
https://api.github.com/repos/WPChill/download-monitor
closed
incompatibility with wpml
Bug needs testing tested
**Describe the bug** Hi, We are using your plugin. We have our website in two languages (English and French). For English download button (thru shortcode) is working fine but for French version, download button is not working. We are getting below link with that button, https://domain.com/?lang=fr%2Fdownload%2F20306%2F&tmstv=1667454245 You can see lang parameter is auto added with french version's website button and I think due to this button is not working. **To Reproduce** Steps to reproduce the behavior: 1. install wpml > create a page and its translation 2. add a download shortcode to each page 3. See error If the download button has the url like this: https://domain.com/?lang=fr%2Fdownload%2F20306%2F&tmstv=1667454245 it will not work **Extensions installed and activated:** - [ ] Advanced Access Manager - [ ] Amazon S3 - [ ] Buttons - [ ] Captcha - [ ] CSV Exporter - [ ] CSV Importer - [ ] Downloading Page - [ ] Email lock - [ ] Email Notifications - [ ] Google Drive - [ ] Gravity Forms - [ ] Mailchimp - [ ] Ninja Forms - [ ] Page Addon - [ ] Terms & Conditions - [ ] Twitter Lock **Additional context** https://secure.helpscout.net/conversation/2056572435/51283?folderId=4638443
2.0
incompatibility with wpml - **Describe the bug** Hi, We are using your plugin. We have our website in two languages (English and French). For English download button (thru shortcode) is working fine but for French version, download button is not working. We are getting below link with that button, https://domain.com/?lang=fr%2Fdownload%2F20306%2F&tmstv=1667454245 You can see lang parameter is auto added with french version's website button and I think due to this button is not working. **To Reproduce** Steps to reproduce the behavior: 1. install wpml > create a page and its translation 2. add a download shortcode to each page 3. See error If the download button has the url like this: https://domain.com/?lang=fr%2Fdownload%2F20306%2F&tmstv=1667454245 it will not work **Extensions installed and activated:** - [ ] Advanced Access Manager - [ ] Amazon S3 - [ ] Buttons - [ ] Captcha - [ ] CSV Exporter - [ ] CSV Importer - [ ] Downloading Page - [ ] Email lock - [ ] Email Notifications - [ ] Google Drive - [ ] Gravity Forms - [ ] Mailchimp - [ ] Ninja Forms - [ ] Page Addon - [ ] Terms & Conditions - [ ] Twitter Lock **Additional context** https://secure.helpscout.net/conversation/2056572435/51283?folderId=4638443
non_process
incompatibility with wpml describe the bug hi we are using your plugin we have our website in two languages english and french for english download button thru shortcode is working fine but for french version download button is not working we are getting below link with that button you can see lang parameter is auto added with french version s website button and i think due to this button is not working to reproduce steps to reproduce the behavior install wpml create a page and its translation add a download shortcode to each page see error if the download button has the url like this it will not work extensions installed and activated advanced access manager amazon buttons captcha csv exporter csv importer downloading page email lock email notifications google drive gravity forms mailchimp ninja forms page addon terms conditions twitter lock additional context
0
14,293
17,266,394,652
IssuesEvent
2021-07-22 14:16:00
googleapis/python-storage
https://api.github.com/repos/googleapis/python-storage
closed
Teardown of `blobs_to_delete` fixture flakes with TimeoutError.
api: storage type: process
From [this Kokoro failure](https://source.cloud.google.com/results/invocations/c4cab6ef-94f0-4609-a2ea-760344f14623/targets/cloud-devrel%2Fclient-libraries%2Fpython%2Fgoogleapis%2Fpython-storage%2Fpresubmit%2Fsystem-3.8/log): ```python self = <urllib3.connectionpool.HTTPSConnectionPool object at 0x7fdd9aa453a0> err = timeout('The read operation timed out') url = '/storage/v1/b/gcp-systest-kms-1626816467428/o/test-blob?generation=1626816473471724&prettyPrint=false' timeout_value = 60 def _raise_timeout(self, err, url, timeout_value): """Is the error actually a timeout? Will raise a ReadTimeout or pass""" if isinstance(err, SocketTimeout): > raise ReadTimeoutError( self, url, "Read timed out. (read timeout=%s)" % timeout_value ) E urllib3.exceptions.ReadTimeoutError: HTTPSConnectionPool(host='storage.googleapis.com', port=443): Read timed out. (read timeout=60) .nox/system-3-8/lib/python3.8/site-packages/urllib3/connectionpool.py:336: ReadTimeoutError During handling of the above exception, another exception occurred: @pytest.fixture(scope="function") def blobs_to_delete(): blobs_to_delete = [] yield blobs_to_delete for blob in blobs_to_delete: > _helpers.delete_blob(blob) ```
1.0
Teardown of `blobs_to_delete` fixture flakes with TimeoutError. - From [this Kokoro failure](https://source.cloud.google.com/results/invocations/c4cab6ef-94f0-4609-a2ea-760344f14623/targets/cloud-devrel%2Fclient-libraries%2Fpython%2Fgoogleapis%2Fpython-storage%2Fpresubmit%2Fsystem-3.8/log): ```python self = <urllib3.connectionpool.HTTPSConnectionPool object at 0x7fdd9aa453a0> err = timeout('The read operation timed out') url = '/storage/v1/b/gcp-systest-kms-1626816467428/o/test-blob?generation=1626816473471724&prettyPrint=false' timeout_value = 60 def _raise_timeout(self, err, url, timeout_value): """Is the error actually a timeout? Will raise a ReadTimeout or pass""" if isinstance(err, SocketTimeout): > raise ReadTimeoutError( self, url, "Read timed out. (read timeout=%s)" % timeout_value ) E urllib3.exceptions.ReadTimeoutError: HTTPSConnectionPool(host='storage.googleapis.com', port=443): Read timed out. (read timeout=60) .nox/system-3-8/lib/python3.8/site-packages/urllib3/connectionpool.py:336: ReadTimeoutError During handling of the above exception, another exception occurred: @pytest.fixture(scope="function") def blobs_to_delete(): blobs_to_delete = [] yield blobs_to_delete for blob in blobs_to_delete: > _helpers.delete_blob(blob) ```
process
teardown of blobs to delete fixture flakes with timeouterror from python self err timeout the read operation timed out url storage b gcp systest kms o test blob generation prettyprint false timeout value def raise timeout self err url timeout value is the error actually a timeout will raise a readtimeout or pass if isinstance err sockettimeout raise readtimeouterror self url read timed out read timeout s timeout value e exceptions readtimeouterror httpsconnectionpool host storage googleapis com port read timed out read timeout nox system lib site packages connectionpool py readtimeouterror during handling of the above exception another exception occurred pytest fixture scope function def blobs to delete blobs to delete yield blobs to delete for blob in blobs to delete helpers delete blob blob
1
9,330
12,340,580,963
IssuesEvent
2020-05-14 20:12:13
googleapis/nodejs-recommender
https://api.github.com/repos/googleapis/nodejs-recommender
closed
promoting library to GA
api: recommender type: process
Package name: **@google-cloud/recommender** Current release: **beta** Proposed release: **GA** ## Instructions Check the lists below, adding tests / documentation as required. Once all the "required" boxes are ticked, please create a release and close this issue. ## Required - [x] 28 days elapsed since last beta release with new API surface - [x] Server API is GA - [x] Package API is stable, and we can commit to backward compatibility - [x] All dependencies are GA ## Optional - [ ] Most common / important scenarios have descriptive samples - [ ] Public manual methods have at least one usage sample each (excluding overloads) - [x] Per-API README includes a full description of the API - [x] Per-API README contains at least one “getting started” sample using the most common API scenario - [x] Manual code has been reviewed by API producer - [x] Manual code has been reviewed by a DPE responsible for samples - [x] 'Client Libraries' page is added to the product documentation in 'APIs & Reference' section of the product's documentation on Cloud Site
1.0
promoting library to GA - Package name: **@google-cloud/recommender** Current release: **beta** Proposed release: **GA** ## Instructions Check the lists below, adding tests / documentation as required. Once all the "required" boxes are ticked, please create a release and close this issue. ## Required - [x] 28 days elapsed since last beta release with new API surface - [x] Server API is GA - [x] Package API is stable, and we can commit to backward compatibility - [x] All dependencies are GA ## Optional - [ ] Most common / important scenarios have descriptive samples - [ ] Public manual methods have at least one usage sample each (excluding overloads) - [x] Per-API README includes a full description of the API - [x] Per-API README contains at least one “getting started” sample using the most common API scenario - [x] Manual code has been reviewed by API producer - [x] Manual code has been reviewed by a DPE responsible for samples - [x] 'Client Libraries' page is added to the product documentation in 'APIs & Reference' section of the product's documentation on Cloud Site
process
promoting library to ga package name google cloud recommender current release beta proposed release ga instructions check the lists below adding tests documentation as required once all the required boxes are ticked please create a release and close this issue required days elapsed since last beta release with new api surface server api is ga package api is stable and we can commit to backward compatibility all dependencies are ga optional most common important scenarios have descriptive samples public manual methods have at least one usage sample each excluding overloads per api readme includes a full description of the api per api readme contains at least one “getting started” sample using the most common api scenario manual code has been reviewed by api producer manual code has been reviewed by a dpe responsible for samples client libraries page is added to the product documentation in apis reference section of the product s documentation on cloud site
1
98,364
16,373,810,096
IssuesEvent
2021-05-15 17:39:12
hugh-whitesource/NodeGoat-1
https://api.github.com/repos/hugh-whitesource/NodeGoat-1
opened
CVE-2019-19919 (High) detected in handlebars-4.0.5.tgz
security vulnerability
## CVE-2019-19919 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>handlebars-4.0.5.tgz</b></p></summary> <p>Handlebars provides the power necessary to let you build semantic templates effectively with no frustration</p> <p>Library home page: <a href="https://registry.npmjs.org/handlebars/-/handlebars-4.0.5.tgz">https://registry.npmjs.org/handlebars/-/handlebars-4.0.5.tgz</a></p> <p>Path to dependency file: NodeGoat-1/package.json</p> <p>Path to vulnerable library: NodeGoat-1/node_modules/nyc/node_modules/handlebars/package.json</p> <p> Dependency Hierarchy: - grunt-if-0.2.0.tgz (Root Library) - grunt-contrib-nodeunit-1.0.0.tgz - nodeunit-0.9.5.tgz - tap-7.1.2.tgz - nyc-7.1.0.tgz - istanbul-reports-1.0.0-alpha.8.tgz - :x: **handlebars-4.0.5.tgz** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/hugh-whitesource/NodeGoat-1/commit/1acb8446b41e455d2f087e892c9a9ce80609f601">1acb8446b41e455d2f087e892c9a9ce80609f601</a></p> <p>Found in base branch: <b>master</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> Versions of handlebars prior to 4.3.0 are vulnerable to Prototype Pollution leading to Remote Code Execution. Templates may alter an Object's __proto__ and __defineGetter__ properties, which may allow an attacker to execute arbitrary code through crafted payloads.
<p>Publish Date: 2019-12-20 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-19919>CVE-2019-19919</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.8</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: High - Integrity Impact: High - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://www.npmjs.com/advisories/1164">https://www.npmjs.com/advisories/1164</a></p> <p>Release Date: 2019-12-20</p> <p>Fix Resolution: 4.3.0</p> </p> </details> <p></p> <!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"javascript/Node.js","packageName":"handlebars","packageVersion":"4.0.5","packageFilePaths":["/package.json"],"isTransitiveDependency":true,"dependencyTree":"grunt-if:0.2.0;grunt-contrib-nodeunit:1.0.0;nodeunit:0.9.5;tap:7.1.2;nyc:7.1.0;istanbul-reports:1.0.0-alpha.8;handlebars:4.0.5","isMinimumFixVersionAvailable":true,"minimumFixVersion":"4.3.0"}],"baseBranches":["master"],"vulnerabilityIdentifier":"CVE-2019-19919","vulnerabilityDetails":"Versions of handlebars prior to 4.3.0 are vulnerable to Prototype Pollution leading to Remote Code Execution. 
Templates may alter an Object\u0027s __proto__ and __defineGetter__ properties, which may allow an attacker to execute arbitrary code through crafted payloads.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-19919","cvss3Severity":"high","cvss3Score":"9.8","cvss3Metrics":{"A":"High","AC":"Low","PR":"None","S":"Unchanged","C":"High","UI":"None","AV":"Network","I":"High"},"extraData":{}}</REMEDIATE> -->
True
CVE-2019-19919 (High) detected in handlebars-4.0.5.tgz - ## CVE-2019-19919 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>handlebars-4.0.5.tgz</b></p></summary> <p>Handlebars provides the power necessary to let you build semantic templates effectively with no frustration</p> <p>Library home page: <a href="https://registry.npmjs.org/handlebars/-/handlebars-4.0.5.tgz">https://registry.npmjs.org/handlebars/-/handlebars-4.0.5.tgz</a></p> <p>Path to dependency file: NodeGoat-1/package.json</p> <p>Path to vulnerable library: NodeGoat-1/node_modules/nyc/node_modules/handlebars/package.json</p> <p> Dependency Hierarchy: - grunt-if-0.2.0.tgz (Root Library) - grunt-contrib-nodeunit-1.0.0.tgz - nodeunit-0.9.5.tgz - tap-7.1.2.tgz - nyc-7.1.0.tgz - istanbul-reports-1.0.0-alpha.8.tgz - :x: **handlebars-4.0.5.tgz** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/hugh-whitesource/NodeGoat-1/commit/1acb8446b41e455d2f087e892c9a9ce80609f601">1acb8446b41e455d2f087e892c9a9ce80609f601</a></p> <p>Found in base branch: <b>master</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> Versions of handlebars prior to 4.3.0 are vulnerable to Prototype Pollution leading to Remote Code Execution. Templates may alter an Object's __proto__ and __defineGetter__ properties, which may allow an attacker to execute arbitrary code through crafted payloads.
<p>Publish Date: 2019-12-20 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-19919>CVE-2019-19919</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.8</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: High - Integrity Impact: High - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://www.npmjs.com/advisories/1164">https://www.npmjs.com/advisories/1164</a></p> <p>Release Date: 2019-12-20</p> <p>Fix Resolution: 4.3.0</p> </p> </details> <p></p> <!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"javascript/Node.js","packageName":"handlebars","packageVersion":"4.0.5","packageFilePaths":["/package.json"],"isTransitiveDependency":true,"dependencyTree":"grunt-if:0.2.0;grunt-contrib-nodeunit:1.0.0;nodeunit:0.9.5;tap:7.1.2;nyc:7.1.0;istanbul-reports:1.0.0-alpha.8;handlebars:4.0.5","isMinimumFixVersionAvailable":true,"minimumFixVersion":"4.3.0"}],"baseBranches":["master"],"vulnerabilityIdentifier":"CVE-2019-19919","vulnerabilityDetails":"Versions of handlebars prior to 4.3.0 are vulnerable to Prototype Pollution leading to Remote Code Execution. 
Templates may alter an Object\u0027s __proto__ and __defineGetter__ properties, which may allow an attacker to execute arbitrary code through crafted payloads.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-19919","cvss3Severity":"high","cvss3Score":"9.8","cvss3Metrics":{"A":"High","AC":"Low","PR":"None","S":"Unchanged","C":"High","UI":"None","AV":"Network","I":"High"},"extraData":{}}</REMEDIATE> -->
non_process
cve high detected in handlebars tgz cve high severity vulnerability vulnerable library handlebars tgz handlebars provides the power necessary to let you build semantic templates effectively with no frustration library home page a href path to dependency file nodegoat package json path to vulnerable library nodegoat node modules nyc node modules handlebars package json dependency hierarchy grunt if tgz root library grunt contrib nodeunit tgz nodeunit tgz tap tgz nyc tgz istanbul reports alpha tgz x handlebars tgz vulnerable library found in head commit a href found in base branch master vulnerability details versions of handlebars prior to are vulnerable to prototype pollution leading to remote code execution templates may alter an object s proto and definegetter properties which may allow an attacker to execute arbitrary code through crafted payloads publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution isopenpronvulnerability true ispackagebased true isdefaultbranch true packages istransitivedependency true dependencytree grunt if grunt contrib nodeunit nodeunit tap nyc istanbul reports alpha handlebars isminimumfixversionavailable true minimumfixversion basebranches vulnerabilityidentifier cve vulnerabilitydetails versions of handlebars prior to are vulnerable to prototype pollution leading to remote code execution templates may alter an object proto and definegetter properties which may allow an attacker to execute arbitrary code through crafted payloads vulnerabilityurl
0
45,655
13,131,642,343
IssuesEvent
2020-08-06 17:23:35
jgeraigery/kraft-heinz-merger
https://api.github.com/repos/jgeraigery/kraft-heinz-merger
opened
CVE-2018-20822 (Medium) detected in node-sass-v4.13.1
security vulnerability
## CVE-2018-20822 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>node-sassv4.13.1</b></p></summary> <p> <p>:rainbow: Node.js bindings to libsass</p> <p>Library home page: <a href=https://github.com/sass/node-sass.git>https://github.com/sass/node-sass.git</a></p> <p>Found in HEAD commit: <a href="https://github.com/jgeraigery/kraft-heinz-merger/commit/af6fe510cfa7228a06515d410aeabf6ecca51b7a">af6fe510cfa7228a06515d410aeabf6ecca51b7a</a></p> </p> </details> </p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (1)</summary> <p></p> <p> <img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>kraft-heinz-merger/node_modules/node-sass/src/libsass/src/ast.hpp</b> </p> </details> <p></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary> <p> LibSass 3.5.4 allows attackers to cause a denial-of-service (uncontrolled recursion in Sass::Complex_Selector::perform in ast.hpp and Sass::Inspect::operator in inspect.cpp). 
<p>Publish Date: 2019-04-23 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2018-20822>CVE-2018-20822</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.5</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: Required - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2018-20822">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2018-20822</a></p> <p>Release Date: 2019-08-06</p> <p>Fix Resolution: LibSass - 3.6.0;node-sass - 4.13.1</p> </p> </details> <p></p>
True
CVE-2018-20822 (Medium) detected in node-sass-v4.13.1 - ## CVE-2018-20822 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>node-sassv4.13.1</b></p></summary> <p> <p>:rainbow: Node.js bindings to libsass</p> <p>Library home page: <a href=https://github.com/sass/node-sass.git>https://github.com/sass/node-sass.git</a></p> <p>Found in HEAD commit: <a href="https://github.com/jgeraigery/kraft-heinz-merger/commit/af6fe510cfa7228a06515d410aeabf6ecca51b7a">af6fe510cfa7228a06515d410aeabf6ecca51b7a</a></p> </p> </details> </p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (1)</summary> <p></p> <p> <img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>kraft-heinz-merger/node_modules/node-sass/src/libsass/src/ast.hpp</b> </p> </details> <p></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary> <p> LibSass 3.5.4 allows attackers to cause a denial-of-service (uncontrolled recursion in Sass::Complex_Selector::perform in ast.hpp and Sass::Inspect::operator in inspect.cpp). 
<p>Publish Date: 2019-04-23 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2018-20822>CVE-2018-20822</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.5</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: Required - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2018-20822">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2018-20822</a></p> <p>Release Date: 2019-08-06</p> <p>Fix Resolution: LibSass - 3.6.0;node-sass - 4.13.1</p> </p> </details> <p></p>
non_process
cve medium detected in node sass cve medium severity vulnerability vulnerable library node rainbow node js bindings to libsass library home page a href found in head commit a href vulnerable source files kraft heinz merger node modules node sass src libsass src ast hpp vulnerability details libsass allows attackers to cause a denial of service uncontrolled recursion in sass complex selector perform in ast hpp and sass inspect operator in inspect cpp publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction required scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution libsass node sass
0
17,039
22,420,243,676
IssuesEvent
2022-06-20 01:42:26
lynnandtonic/nestflix.fun
https://api.github.com/repos/lynnandtonic/nestflix.fun
closed
Add [Orgazmo]
suggested title in process
Please add as much of the following info as you can: Title: Orgazmo Type (film/tv show): Comedy XXX Film or show in which it appears: Captain Orgazmo by Trey Parker Is the parent film/show streaming anywhere? Youtube About when in the parent film/show does it appear? Actual footage of the film/show can be seen (yes/no)? No
1.0
Add [Orgazmo] - Please add as much of the following info as you can: Title: Orgazmo Type (film/tv show): Comedy XXX Film or show in which it appears: Captain Orgazmo by Trey Parker Is the parent film/show streaming anywhere? Youtube About when in the parent film/show does it appear? Actual footage of the film/show can be seen (yes/no)? No
process
add please add as much of the following info as you can title orgazmo type film tv show comedy xxx film or show in which it appears captain orgazmo by trey parker is the parent film show streaming anywhere youtube about when in the parent film show does it appear actual footage of the film show can be seen yes no no
1
7,721
10,825,802,250
IssuesEvent
2019-11-09 18:09:31
dita-ot/dita-ot
https://api.github.com/repos/dita-ot/dita-ot
closed
Link to keyscoped topic not properly resolved.
bug preprocess/keyref priority/medium
I'm attaching a ZIP sample. [improperLinkToKeyScopedTopic.zip](https://github.com/dita-ot/dita-ot/files/631027/improperLinkToKeyScopedTopic.zip) So, the DITA Map looks like this: ```xml <topicgroup keyscope="product1_scope"> ........... <topicref keys="archiving" href="product1/c_archiving_mod.dita"/> </topicgroup> <topicgroup keyscope="product2_scope"> ............... <topicref keys="archiving" href="product1/c_archiving_mod.dita"/> </topicgroup> <topicref href="linking.dita"/> ``` Because "c_archiving_mod.dita" is referenced in two key scopes, we will have in the output file two HTMLs for it: "c_archiving_mod.html" and "c_archiving_mod-1.html". And that's good. The "linking.dita" should generate links to both those HTML files using fully qualified key scoped: ```xml <p>Links <xref keyref="product1_scope.archiving" /> to <xref keyref="product2_scope.archiving" />.</p> ``` But in the generated "linking.html" you will have both links point to the same HTML file (the one generated from the first context). I looked a little bit in the code, the method `org.dita.dost.writer.KeyrefPaser.processElement(Attributes)` resolves key references based on the current "KeyScope" map. Unfortunately the map contains these mappings: ``` product1_scope.archiving=product1_scope.archiving=product1/c_archiving_mod.dita product2_scope.archiving=product2_scope.archiving=product1/c_archiving_mod.dita ``` although in my opinion the "product2_scope.archiving" should have been re-written (possibly in the org.dita.dost.module.KeyrefModule.execute) to point to "product1/c_archiving_mod-1.dita" because the DITA Map has already been re-written to do so. Might also be related with #2523
1.0
Link to keyscoped topic not properly resolved. - I'm attaching a ZIP sample. [improperLinkToKeyScopedTopic.zip](https://github.com/dita-ot/dita-ot/files/631027/improperLinkToKeyScopedTopic.zip) So, the DITA Map looks like this: ```xml <topicgroup keyscope="product1_scope"> ........... <topicref keys="archiving" href="product1/c_archiving_mod.dita"/> </topicgroup> <topicgroup keyscope="product2_scope"> ............... <topicref keys="archiving" href="product1/c_archiving_mod.dita"/> </topicgroup> <topicref href="linking.dita"/> ``` Because "c_archiving_mod.dita" is referenced in two key scopes, we will have in the output file two HTMLs for it: "c_archiving_mod.html" and "c_archiving_mod-1.html". And that's good. The "linking.dita" should generate links to both those HTML files using fully qualified key scoped: ```xml <p>Links <xref keyref="product1_scope.archiving" /> to <xref keyref="product2_scope.archiving" />.</p> ``` But in the generated "linking.html" you will have both links point to the same HTML file (the one generated from the first context). I looked a little bit in the code, the method `org.dita.dost.writer.KeyrefPaser.processElement(Attributes)` resolves key references based on the current "KeyScope" map. Unfortunately the map contains these mappings: ``` product1_scope.archiving=product1_scope.archiving=product1/c_archiving_mod.dita product2_scope.archiving=product2_scope.archiving=product1/c_archiving_mod.dita ``` although in my opinion the "product2_scope.archiving" should have been re-written (possibly in the org.dita.dost.module.KeyrefModule.execute) to point to "product1/c_archiving_mod-1.dita" because the DITA Map has already been re-written to do so. Might also be related with #2523
process
link to keyscoped topic not properly resolved i m attaching a zip sample so the dita map looks like this xml because c archiving mod dita is referenced in two key scopes we will have in the output file two htmls for it c archiving mod html and c archiving mod html and that s good the linking dita should generate links to both those html files using fully qualified key scoped xml links xref keyref scope archiving to xref keyref scope archiving but in the generated linking html you will have both links point to the same html file the one generated from the first context i looked a little bit in the code the method org dita dost writer keyrefpaser processelement attributes resolves key references based on the current keyscope map unfortunately the map contains these mappings scope archiving scope archiving c archiving mod dita scope archiving scope archiving c archiving mod dita although in my opinion the scope archiving should have been re written possibly in the org dita dost module keyrefmodule execute to point to c archiving mod dita because the dita map has already been re written to do so might also be related with
1
12,682
15,048,016,695
IssuesEvent
2021-02-03 09:40:12
pystatgen/sgkit
https://api.github.com/repos/pystatgen/sgkit
opened
Move to NumPy's ArrayLike and DtypeLike
process + tools
Introduced in NumPy 1.20.0: https://numpy.org/doc/stable/release/1.20.0-notes.html#numpy-is-now-typed These would replace our types in `sgkit.typing`.
1.0
Move to NumPy's ArrayLike and DtypeLike - Introduced in NumPy 1.20.0: https://numpy.org/doc/stable/release/1.20.0-notes.html#numpy-is-now-typed These would replace our types in `sgkit.typing`.
process
move to numpy s arraylike and dtypelike introduced in numpy these would replace our types in sgkit typing
1
11,829
14,655,267,750
IssuesEvent
2020-12-28 10:36:27
GoogleCloudPlatform/fda-mystudies
https://api.github.com/repos/GoogleCloudPlatform/fda-mystudies
closed
[PM] [Dev] getting 500 error when site permission is not given
Bug P1 Participant manager Process: Dev Process: Fixed Process: Tested QA Process: Tested dev
Getting 500 error when site permission is not given AR : Displaying general error message ER : 'No sites found' (EC_0070) should be displayed ![permission1](https://user-images.githubusercontent.com/71445210/102902197-a7714900-4494-11eb-9665-e6dcd52c361e.png)
4.0
[PM] [Dev] getting 500 error when site permission is not given - Getting 500 error when site permission is not given AR : Displaying general error message ER : 'No sites found' (EC_0070) should be displayed ![permission1](https://user-images.githubusercontent.com/71445210/102902197-a7714900-4494-11eb-9665-e6dcd52c361e.png)
process
getting error when site permission is not given getting error when site permission is not given ar displaying general error message er no sites found ec should be displayed
1
11,398
9,336,977,514
IssuesEvent
2019-03-28 23:02:36
MicrosoftDocs/azure-docs
https://api.github.com/repos/MicrosoftDocs/azure-docs
closed
Persistence is not working
app-service/svc
The app initializes a single item, but, as no persistence is setup this is the only item available: POST (or PUT) will not create new or update the item --- #### Document Details ⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.* * ID: dbcc24f0-9856-a7db-f229-c383e5e5c57d * Version Independent ID: 55d51b2a-c8fe-086e-a1c2-b0592f57967c * Content: [Host RESTful API with CORS - Azure App Service](https://docs.microsoft.com/en-us/azure/app-service/app-service-web-tutorial-rest-api#feedback) * Content Source: [articles/app-service/app-service-web-tutorial-rest-api.md](https://github.com/Microsoft/azure-docs/blob/master/articles/app-service/app-service-web-tutorial-rest-api.md) * Service: **app-service** * GitHub Login: @cephalin * Microsoft Alias: **cephalin**
1.0
Persistence is not working - The app initializes a single item, but, as no persistence is setup this is the only item available: POST (or PUT) will not create new or update the item --- #### Document Details ⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.* * ID: dbcc24f0-9856-a7db-f229-c383e5e5c57d * Version Independent ID: 55d51b2a-c8fe-086e-a1c2-b0592f57967c * Content: [Host RESTful API with CORS - Azure App Service](https://docs.microsoft.com/en-us/azure/app-service/app-service-web-tutorial-rest-api#feedback) * Content Source: [articles/app-service/app-service-web-tutorial-rest-api.md](https://github.com/Microsoft/azure-docs/blob/master/articles/app-service/app-service-web-tutorial-rest-api.md) * Service: **app-service** * GitHub Login: @cephalin * Microsoft Alias: **cephalin**
non_process
persistence is not working the app initializes a single item but as no persistence is setup this is the only item available post or put will not create new or update the item document details ⚠ do not edit this section it is required for docs microsoft com ➟ github issue linking id version independent id content content source service app service github login cephalin microsoft alias cephalin
0
362,311
25,370,256,657
IssuesEvent
2022-11-21 10:02:24
IT-Academy-BCN/ita-directory
https://api.github.com/repos/IT-Academy-BCN/ita-directory
closed
Add "Recover password" link on login form's design
documentation frontend
Add to the login page prototype a link to "recover password" that is present in the project. <img width="332" alt="Captura de Pantalla 2021-10-18 a les 12 46 55" src="https://user-images.githubusercontent.com/8426157/137717201-97edf524-bb2b-44e2-87ec-df4abf1cc834.png">
1.0
Add "Recover password" link on login form's design - Add to the login page prototype a link to "recover password" that is present in the project. <img width="332" alt="Captura de Pantalla 2021-10-18 a les 12 46 55" src="https://user-images.githubusercontent.com/8426157/137717201-97edf524-bb2b-44e2-87ec-df4abf1cc834.png">
non_process
add recover password link on login form s design add to the login page prototype a link to recover password that is present in the project img width alt captura de pantalla a les src
0
77,252
26,880,858,170
IssuesEvent
2023-02-05 16:14:25
vector-im/element-android
https://api.github.com/repos/vector-im/element-android
opened
Documentation needed for Android Secure Backup
T-Defect
### Steps to reproduce ![Element - Android - Samsung Tablet - Secure Backup - IMG_20230130_004606](https://user-images.githubusercontent.com/31858100/216829924-9a8aaaa1-e4bc-4215-b307-659c9cc09e61.jpg) MY STEPS TO SHOW WHAT IS NOT DOCUMENTED ABOUT ANDROID SECURE BACKUP: Make: Samsung Model: SM-T580 (Galaxy Tab A) OS: Android 8.1.0 Element Version: 1.5.22 [40105221] (G-bdd431d2) Matrix SDK Version: 1.5.22 (bdd431d2) olm version: 3.2.12 Android - Samsung Tablet - Element App - Secure backup Old / Original Account: CatMan7Life NOTE - What you need to already have in advance in order to create a "Secure backup": What this actually means is that you have to first Have performed a 'Key with passphrase' backup before hand You will then need to select/choose a key backup file already on your device as a part of creating a "Secure backup" * Personal Recommendation: Create a new/current Key with passphrase *BEFORE* creating a "Secure backup" - do NOT use a pre-existing older key with passphrase file OPTIONS CHOSEN: Settings Security & Privacy Heading: Secure Backup Tap on "Set up Secure Backup" Info displayed beneath: "Safeguard against losing access to encrypted messages & data by backing up encryption keys on your server." Small grey window/box appears at bottom of screen ************************************ Secure backup Safeguard against losing access to encrypted messages & data by backing up encryption keys on your server. Setup > ! This will replace your current Key or Phrase. 
Above message is in Red color ************************************ Tap on "Setup" A different small grey window/box appears at bottom of screen: ************************************ Encryption upgrade available Enter your Key Backup recovery key to continue *** What this actually means is that you have to first Have performed a 'Key with passphrase' backup before hand You will then need to select/choose a key backup file already on your device as a part of creating a "Secure backup" Small green rectangular field appears underneath with The text in the middle: "Key Backup recovery key" In the middle of the screen is a gigantic list of session data for encrypted messages (E2EE) that i can scroll through with a green border on the sides You can tap the middle of this field and type or edit the code There is a very small grey colored scroll bar on the right-hand side of the input field Underneath the green rectangular input field are 2 buttons: USE FILE CONTINUE You can optionally click on "USE FILE" which will load up a file manager requester for you to select a file from I choose an already exported file from android element: "Element-megolm-export-@cat7life_matrix.org.... NOTE: I selected a little bit of the text in the field I then selected 'All' from the window and then choose copy The entire window list of all the Megolm encryption/decryption stuff suddenly scrolled all the way to the very bottom of the screen !!! WHAT I WANT TO BE ABLE TO DO: I use Element Web under linux using Browser Brave My situation is that under this element user account, i have a number of rooms that have non-decryptable messages in them on web element using the browser brave My Samsung TAB however can read all the messages and all the messages are also there (unlike on any of my 3 different linux PCs using element web with browser brave) I want to be able to have all 3 of my linux PCs be able to 1. Sync ALL the messages i have in my rooms 2. 
Be able to decrypt all the messages in the rooms - presumably by somehow the decryption keys from my android tablet I ended up with a situation where my samsung tablet had a requester at the top of the screen saying it needed to communicate with another matrix.org client app (not element) and i was only given the option to use the 'Secure Backup' - but i dont know how to use the android options here I did post in the official element android room on matrix - i got no responses Please help documented this and provide at least 1 real example of how to use Thanks devs :) ### Outcome #### What did you expect? I expected a function set to be exactly the same as the official desktop element app/element web #### What happened instead? Did not get to use features, as not documented ### Your phone model SM-T580 (Galaxy Tab A) ### Operating system version Android 8.1.0 ### Application version and app store Element Version: 1.5.22 [40105221] (G-bdd431d2), Element Version: 1.5.22 [40105221] (G-bdd431d2), olm version: 3.2.12 ### Homeserver _No response_ ### Will you send logs? No ### Are you willing to provide a PR? No
1.0
Documentation needed for Android Secure Backup - ### Steps to reproduce ![Element - Android - Samsung Tablet - Secure Backup - IMG_20230130_004606](https://user-images.githubusercontent.com/31858100/216829924-9a8aaaa1-e4bc-4215-b307-659c9cc09e61.jpg) MY STEPS TO SHOW WHAT IS NOT DOCUMENTED ABOUT ANDROID SECURE BACKUP: Make: Samsung Model: SM-T580 (Galaxy Tab A) OS: Android 8.1.0 Element Version: 1.5.22 [40105221] (G-bdd431d2) Matrix SDK Version: 1.5.22 (bdd431d2) olm version: 3.2.12 Android - Samsung Tablet - Element App - Secure backup Old / Original Account: CatMan7Life NOTE - What you need to already have in advance in order to create a "Secure backup": What this actually means is that you have to first Have performed a 'Key with passphrase' backup before hand You will then need to select/choose a key backup file already on your device as a part of creating a "Secure backup" * Personal Recommendation: Create a new/current Key with passphrase *BEFORE* creating a "Secure backup" - do NOT use a pre-existing older key with passphrase file OPTIONS CHOSEN: Settings Security & Privacy Heading: Secure Backup Tap on "Set up Secure Backup" Info displayed beneath: "Safeguard against losing access to encrypted messages & data by backing up encryption keys on your server." Small grey window/box appears at bottom of screen ************************************ Secure backup Safeguard against losing access to encrypted messages & data by backing up encryption keys on your server. Setup > ! This will replace your current Key or Phrase. 
Above message is in Red color ************************************ Tap on "Setup" A different small grey window/box appears at bottom of screen: ************************************ Encryption upgrade available Enter your Key Backup recovery key to continue *** What this actually means is that you have to first Have performed a 'Key with passphrase' backup before hand You will then need to select/choose a key backup file already on your device as a part of creating a "Secure backup" Small green rectangular field appears underneath with The text in the middle: "Key Backup recovery key" In the middle of the screen is a gigantic list of session data for encrypted messages (E2EE) that i can scroll through with a green border on the sides You can tap the middle of this field and type or edit the code There is a very small grey colored scroll bar on the right-hand side of the input field Underneath the green rectangular input field are 2 buttons: USE FILE CONTINUE You can optionally click on "USE FILE" which will load up a file manager requester for you to select a file from I choose an already exported file from android element: "Element-megolm-export-@cat7life_matrix.org.... NOTE: I selected a little bit of the text in the field I then selected 'All' from the window and then choose copy The entire window list of all the Megolm encryption/decryption stuff suddenly scrolled all the way to the very bottom of the screen !!! WHAT I WANT TO BE ABLE TO DO: I use Element Web under linux using Browser Brave My situation is that under this element user account, i have a number of rooms that have non-decryptable messages in them on web element using the browser brave My Samsung TAB however can read all the messages and all the messages are also there (unlike on any of my 3 different linux PCs using element web with browser brave) I want to be able to have all 3 of my linux PCs be able to 1. Sync ALL the messages i have in my rooms 2. 
Be able to decrypt all the messages in the rooms - presumably by somehow the decryption keys from my android tablet I ended up with a situation where my samsung tablet had a requester at the top of the screen saying it needed to communicate with another matrix.org client app (not element) and i was only given the option to use the 'Secure Backup' - but i dont know how to use the android options here I did post in the official element android room on matrix - i got no responses Please help documented this and provide at least 1 real example of how to use Thanks devs :) ### Outcome #### What did you expect? I expected a function set to be exactly the same as the official desktop element app/element web #### What happened instead? Did not get to use features, as not documented ### Your phone model SM-T580 (Galaxy Tab A) ### Operating system version Android 8.1.0 ### Application version and app store Element Version: 1.5.22 [40105221] (G-bdd431d2), Element Version: 1.5.22 [40105221] (G-bdd431d2), olm version: 3.2.12 ### Homeserver _No response_ ### Will you send logs? No ### Are you willing to provide a PR? No
non_process
documentation needed for android secure backup steps to reproduce my steps to show what is not documented about android secure backup make samsung model sm galaxy tab a os android element version g matrix sdk version olm version android samsung tablet element app secure backup old original account note what you need to already have in advance in order to create a secure backup what this actually means is that you have to first have performed a key with passphrase backup before hand you will then need to select choose a key backup file already on your device as a part of creating a secure backup personal recommendation create a new current key with passphrase before creating a secure backup do not use a pre existing older key with passphrase file options chosen settings security privacy heading secure backup tap on set up secure backup info displayed beneath safeguard against losing access to encrypted messages data by backing up encryption keys on your server small grey window box appears at bottom of screen secure backup safeguard against losing access to encrypted messages data by backing up encryption keys on your server setup this will replace your current key or phrase above message is in red color tap on setup a different small grey window box appears at bottom of screen encryption upgrade available enter your key backup recovery key to continue what this actually means is that you have to first have performed a key with passphrase backup before hand you will then need to select choose a key backup file already on your device as a part of creating a secure backup small green rectangular field appears underneath with the text in the middle key backup recovery key in the middle of the screen is a gigantic list of session data for encrypted messages that i can scroll through with a green border on the sides you can tap the middle of this field and type or edit the code there is a very small grey colored scroll bar on the right hand side of the input field 
underneath the green rectangular input field are buttons use file continue you can optionally click on use file which will load up a file manager requester for you to select a file from i choose an already exported file from android element element megolm export matrix org note i selected a little bit of the text in the field i then selected all from the window and then choose copy the entire window list of all the megolm encryption decryption stuff suddenly scrolled all the way to the very bottom of the screen what i want to be able to do i use element web under linux using browser brave my situation is that under this element user account i have a number of rooms that have non decryptable messages in them on web element using the browser brave my samsung tab however can read all the messages and all the messages are also there unlike on any of my different linux pcs using element web with browser brave i want to be able to have all of my linux pcs be able to sync all the messages i have in my rooms be able to decrypt all the messages in the rooms presumably by somehow the decryption keys from my android tablet i ended up with a situation where my samsung tablet had a requester at the top of the screen saying it needed to communicate with another matrix org client app not element and i was only given the option to use the secure backup but i dont know how to use the android options here i did post in the official element android room on matrix i got no responses please help documented this and provide at least real example of how to use thanks devs outcome what did you expect i expected a function set to be exactly the same as the official desktop element app element web what happened instead did not get to use features as not documented your phone model sm galaxy tab a operating system version android application version and app store element version g element version g olm version homeserver no response will you send logs no are you willing to provide a pr no
0
195,802
22,360,123,255
IssuesEvent
2022-06-15 19:35:42
videojs/videojs-overlay
https://api.github.com/repos/videojs/videojs-overlay
closed
CVE-2019-20149 (High) detected in kind-of-6.0.2.tgz
security vulnerability
## CVE-2019-20149 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>kind-of-6.0.2.tgz</b></p></summary> <p>Get the native type of a value.</p> <p>Library home page: <a href="https://registry.npmjs.org/kind-of/-/kind-of-6.0.2.tgz">https://registry.npmjs.org/kind-of/-/kind-of-6.0.2.tgz</a></p> <p> Dependency Hierarchy: - karma-3.0.0.tgz (Root Library) - chokidar-2.0.4.tgz - braces-2.3.2.tgz - snapdragon-node-2.1.1.tgz - define-property-1.0.0.tgz - is-descriptor-1.0.2.tgz - :x: **kind-of-6.0.2.tgz** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/brightcove/videojs-overlay/commit/fbdf8372a96f7f26965d33fedec5089038e609dc">fbdf8372a96f7f26965d33fedec5089038e609dc</a></p> <p>Found in base branch: <b>master</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> ctorName in index.js in kind-of v6.0.2 allows external user input to overwrite certain internal attributes via a conflicting name, as demonstrated by 'constructor': {'name':'Symbol'}. Hence, a crafted payload can overwrite this builtin attribute to manipulate the type detection result. 
<p>Publish Date: 2019-12-30 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-20149>CVE-2019-20149</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: High - Availability Impact: None </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="http://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2019-20149">http://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2019-20149</a></p> <p>Release Date: 2020-08-24</p> <p>Fix Resolution (kind-of): 6.0.3</p> <p>Direct dependency fix Resolution (karma): 6.3.18</p> </p> </details> <p></p> <!-- <REMEDIATE>{"isOpenPROnVulnerability":false,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"javascript/Node.js","packageName":"karma","packageVersion":"3.0.0","packageFilePaths":[],"isTransitiveDependency":false,"dependencyTree":"karma:3.0.0","isMinimumFixVersionAvailable":true,"minimumFixVersion":"6.3.18","isBinary":true}],"baseBranches":["master"],"vulnerabilityIdentifier":"CVE-2019-20149","vulnerabilityDetails":"ctorName in index.js in kind-of v6.0.2 allows external user input to overwrite certain internal attributes via a conflicting name, as demonstrated by \u0027constructor\u0027: {\u0027name\u0027:\u0027Symbol\u0027}. 
Hence, a crafted payload can overwrite this builtin attribute to manipulate the type detection result.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-20149","cvss3Severity":"high","cvss3Score":"7.5","cvss3Metrics":{"A":"None","AC":"Low","PR":"None","S":"Unchanged","C":"None","UI":"None","AV":"Network","I":"High"},"extraData":{}}</REMEDIATE> -->
True
CVE-2019-20149 (High) detected in kind-of-6.0.2.tgz - ## CVE-2019-20149 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>kind-of-6.0.2.tgz</b></p></summary> <p>Get the native type of a value.</p> <p>Library home page: <a href="https://registry.npmjs.org/kind-of/-/kind-of-6.0.2.tgz">https://registry.npmjs.org/kind-of/-/kind-of-6.0.2.tgz</a></p> <p> Dependency Hierarchy: - karma-3.0.0.tgz (Root Library) - chokidar-2.0.4.tgz - braces-2.3.2.tgz - snapdragon-node-2.1.1.tgz - define-property-1.0.0.tgz - is-descriptor-1.0.2.tgz - :x: **kind-of-6.0.2.tgz** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/brightcove/videojs-overlay/commit/fbdf8372a96f7f26965d33fedec5089038e609dc">fbdf8372a96f7f26965d33fedec5089038e609dc</a></p> <p>Found in base branch: <b>master</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> ctorName in index.js in kind-of v6.0.2 allows external user input to overwrite certain internal attributes via a conflicting name, as demonstrated by 'constructor': {'name':'Symbol'}. Hence, a crafted payload can overwrite this builtin attribute to manipulate the type detection result. 
<p>Publish Date: 2019-12-30 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-20149>CVE-2019-20149</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: High - Availability Impact: None </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="http://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2019-20149">http://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2019-20149</a></p> <p>Release Date: 2020-08-24</p> <p>Fix Resolution (kind-of): 6.0.3</p> <p>Direct dependency fix Resolution (karma): 6.3.18</p> </p> </details> <p></p> <!-- <REMEDIATE>{"isOpenPROnVulnerability":false,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"javascript/Node.js","packageName":"karma","packageVersion":"3.0.0","packageFilePaths":[],"isTransitiveDependency":false,"dependencyTree":"karma:3.0.0","isMinimumFixVersionAvailable":true,"minimumFixVersion":"6.3.18","isBinary":true}],"baseBranches":["master"],"vulnerabilityIdentifier":"CVE-2019-20149","vulnerabilityDetails":"ctorName in index.js in kind-of v6.0.2 allows external user input to overwrite certain internal attributes via a conflicting name, as demonstrated by \u0027constructor\u0027: {\u0027name\u0027:\u0027Symbol\u0027}. 
Hence, a crafted payload can overwrite this builtin attribute to manipulate the type detection result.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-20149","cvss3Severity":"high","cvss3Score":"7.5","cvss3Metrics":{"A":"None","AC":"Low","PR":"None","S":"Unchanged","C":"None","UI":"None","AV":"Network","I":"High"},"extraData":{}}</REMEDIATE> -->
non_process
cve high detected in kind of tgz cve high severity vulnerability vulnerable library kind of tgz get the native type of a value library home page a href dependency hierarchy karma tgz root library chokidar tgz braces tgz snapdragon node tgz define property tgz is descriptor tgz x kind of tgz vulnerable library found in head commit a href found in base branch master vulnerability details ctorname in index js in kind of allows external user input to overwrite certain internal attributes via a conflicting name as demonstrated by constructor name symbol hence a crafted payload can overwrite this builtin attribute to manipulate the type detection result publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact high availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution kind of direct dependency fix resolution karma isopenpronvulnerability false ispackagebased true isdefaultbranch true packages istransitivedependency false dependencytree karma isminimumfixversionavailable true minimumfixversion isbinary true basebranches vulnerabilityidentifier cve vulnerabilitydetails ctorname in index js in kind of allows external user input to overwrite certain internal attributes via a conflicting name as demonstrated by hence a crafted payload can overwrite this builtin attribute to manipulate the type detection result vulnerabilityurl
0
3,873
6,812,082,651
IssuesEvent
2017-11-06 00:13:03
learn-anything/maps
https://api.github.com/repos/learn-anything/maps
closed
best path for learning clojurescript
natural language processing programming study plan
Take a look [here](https://my.mindnode.com/gDrV35KLRsdT9Ajxvr7EnN9yanPrgTx7LRzmfBUK). If you think there is a better way one can learn clojurescript or you think the way the nodes are structured is wrong, please say it here. Also if you think there are some really amazing resources on clojurescript that are missing or you wish something was added, you can say it here.
1.0
best path for learning clojurescript - Take a look [here](https://my.mindnode.com/gDrV35KLRsdT9Ajxvr7EnN9yanPrgTx7LRzmfBUK). If you think there is a better way one can learn clojurescript or you think the way the nodes are structured is wrong, please say it here. Also if you think there are some really amazing resources on clojurescript that are missing or you wish something was added, you can say it here.
process
best path for learning clojurescript take a look if you think there is a better way one can learn clojurescript or you think the way the nodes are structured is wrong please say it here also if you think there are some really amazing resources on clojurescript that are missing or you wish something was added you can say it here
1
3,316
6,425,291,967
IssuesEvent
2017-08-09 15:08:34
dzhw/zofar
https://api.github.com/repos/dzhw/zofar
closed
New Plugin: automated Deployment
category: technical.processes et: 32 prio: 1 status: development type: backlog.task
General deployment platform (on presentation-Server): - [x] **Create New Plugin für contin. Integration.** _Task for another Time: Get another Virtual Machine for the presentation server._
1.0
New Plugin: automated Deployment - General deployment platform (on presentation-Server): - [x] **Create New Plugin für contin. Integration.** _Task for another Time: Get another Virtual Machine for the presentation server._
process
new plugin automated deployment general deployment platform on presentation server create new plugin für contin integration task for another time get another virtual machine for the presentation server
1
10,762
13,549,367,967
IssuesEvent
2020-09-17 08:06:02
easably/games
https://api.github.com/repos/easably/games
reopened
3. Словарь с ачивками
☭ in process
- [ ] добавить в сайдбаре ссылку "Мой словарь" выше ссылки "Аккаунт" (иконка на усмотрение разработчика) ![vocabulary](https://user-images.githubusercontent.com/20292939/92769674-e4ac4200-f3a1-11ea-8631-c88834b8fcc6.png) - [ ] добавить список слов, структура как на макете, на сам дизайн внимания не обращай, в рамках этой задачи нужна только структура словаря ![words](https://user-images.githubusercontent.com/20292939/93016678-b8740980-f5cb-11ea-9e05-b47e4a4cab89.png) - [ ] ачивка на слове пройденного без ошибок слова (звезда или алмаз) - [ ] ачивка на слове изученного слова (пройденного без ошибок слова 5 раз) - [ ] добавить сортировку выше списка слов: "Всего слов - (количество)", select "Показать все слова/без звёзд/со звёздами/изученные", select "Сортировать по алфавиту(A=>Z)/по дате добавления(новые сверху)", поиск слова ![sort](https://user-images.githubusercontent.com/20292939/93016752-36d0ab80-f5cc-11ea-8fd6-4d0527858a4a.png) Примерный прототип интерфейса, который будем имплементить на третьем спринте (или раньше :) [здеся](https://www.figma.com/file/IfAi5eM7tzW3Zlq8ARawN2/MVP-prototype?node-id=0%3A1)
1.0
3. Словарь с ачивками - - [ ] добавить в сайдбаре ссылку "Мой словарь" выше ссылки "Аккаунт" (иконка на усмотрение разработчика) ![vocabulary](https://user-images.githubusercontent.com/20292939/92769674-e4ac4200-f3a1-11ea-8631-c88834b8fcc6.png) - [ ] добавить список слов, структура как на макете, на сам дизайн внимания не обращай, в рамках этой задачи нужна только структура словаря ![words](https://user-images.githubusercontent.com/20292939/93016678-b8740980-f5cb-11ea-9e05-b47e4a4cab89.png) - [ ] ачивка на слове пройденного без ошибок слова (звезда или алмаз) - [ ] ачивка на слове изученного слова (пройденного без ошибок слова 5 раз) - [ ] добавить сортировку выше списка слов: "Всего слов - (количество)", select "Показать все слова/без звёзд/со звёздами/изученные", select "Сортировать по алфавиту(A=>Z)/по дате добавления(новые сверху)", поиск слова ![sort](https://user-images.githubusercontent.com/20292939/93016752-36d0ab80-f5cc-11ea-8fd6-4d0527858a4a.png) Примерный прототип интерфейса, который будем имплементить на третьем спринте (или раньше :) [здеся](https://www.figma.com/file/IfAi5eM7tzW3Zlq8ARawN2/MVP-prototype?node-id=0%3A1)
process
словарь с ачивками добавить в сайдбаре ссылку мой словарь выше ссылки аккаунт иконка на усмотрение разработчика добавить список слов структура как на макете на сам дизайн внимания не обращай в рамках этой задачи нужна только структура словаря ачивка на слове пройденного без ошибок слова звезда или алмаз ачивка на слове изученного слова пройденного без ошибок слова раз добавить сортировку выше списка слов всего слов количество select показать все слова без звёзд со звёздами изученные select сортировать по алфавиту a z по дате добавления новые сверху поиск слова примерный прототип интерфейса который будем имплементить на третьем спринте или раньше
1
19,984
14,884,261,477
IssuesEvent
2021-01-20 14:22:56
InfectedLibraries/Biohazrd
https://api.github.com/repos/InfectedLibraries/Biohazrd
opened
All user-defined records should be returned by reference for instance methods on Windows x64
Arch-x64 Area-Translation Bug Concept-Correctness Concept-OutputUsability
Biohazrd does not correctly handle instance methods which return records. As far as I can tell, in **all** scenarios, these types are returned by reference. (Even if they can be enregistered.) Additionally, these functions really should be emitted as returning a pointer to the return type as well since the buffer passed in is returned there again. (Although that's really only important in the context of virtual methods since ignoring the return value doesn't matter.) # Windows x64 [The nuance is subtle here](https://docs.microsoft.com/en-us/cpp/build/x64-calling-convention?view=msvc-160#return-values): > User-defined types can be returned by value from global functions and static member functions. Notice that this omits instance methods of all kinds. # Linux x64 The [relevant Itanium spec](https://itanium-cxx-abi.github.io/cxx-abi/abi.html#non-trivial-return-values) does not mention anything about records not being returned by value, and in practice they are enregistered as expected. The SysV x86-64 ABI defines the fact that a pointer to the return buffer is returned: (From 3.2.3 "Returning of Values"): > On return `%rax` will contain the address that has been passed in by the caller in `%rdi`. # Manual Verification Manual verification indicates that the above is correct. Additionally, when a return buffer is used, the return buffer is passed back in `rax` on both Windows and Linux. 
The ABI can be manually verified [using this Godbolt](https://godbolt.org/#z:OYLghAFBqd5QCxAYwPYBMCmBRdBLAF1QCcAaPECAM1QDsCBlZAQwBtMQBGAFlICsupVs1qhkAUgBMAISnTSAZ0ztkBPHUqZa6AMKpWAVwC2tQVvQAZPLUwA5YwCNMxEAHYADKQAOqBYXW0eoYmgj5%2BanRWNvZGTi4eisqYqgEMBMzEBEHGppyJKhG0aRkEUXaOzoIK6ZnZIXnVJWUxcVwAlIqoBsTIHADkUgDMeFRYVADUAPoAsgw6kwBq2ABK4u4AgkNj1pjj2AAaAAoA8ssAKlOTWMisCl7JEOis7AAePplta5uSg8pKX1tMFQdnsjqczgCfuYRgCNtViAZVOM8CiAa5ZBtxljkfRxi9xIMMd9hhN3OMAPTk8ZnVDAYDscZEcY0Hq7LzMBR%2BUTjBwAT3GADc2AZMF9sTiCONeaRxgAvGUAdxlzBlDhlEkJkN%2B2hhG3ErgAIgSiV9GmpkBLxgBxTAEACS9Agnz16LF2OItu6tHGnHc7mNaKNerh6XNyJR1ttdujTrRRPFKLw42I4wJBtT6J9fozRs1mPdnuI3uIAZdQc2GxuHIUoJO53G015OmEnLjXy8BgcrDwyBAbqxZp7luK5pt9sdzvW4v18fFBYIXqz/rzU%2Bx%2BvL/fGg4tifGI57Y%2Bjdtj%2BaxM83CYjKbTGekS5zpdXc%2BTheLj%2BnhthT8tDsatF6h4Tpu56nnOHoLkWS7vmun7Bt%2Bu6/uk/6YIeMaTh%2Bs7Pru16DOmM73uu0HPuBi4liuH4bqBAp4JkBhsJaCw0QubCAQQJ7fiB37iiRkG%2BsumEPnB4rUbR9G7oxomsKhx7oTBAmXkmOF4ZmfEPuRz4vhBb7qWesGbJ%2BK5fMwBhMswqa4ZG44EI%2BxlMg45nptJNkmag4wWjeNgKg2TYtgosYrrZrnoA53nNtWIAgPuyCsc5TK7DejZhZyEVRU5AUucyIVSAAbIlvkRYhIgAVG9Cxa5wBZZIuU%2BeFICFchaVEn0HSsCAfQAKx9KQph9O4XWoG1OhyHIW5dKyqY/JwXUEG1fVtB0ADWICDIMAB0ACcriuO13DcNlgx7dw7icK4QhtdwXVGCA7WcKtkiuAAHJt2UPZwD07Tdp09X1pADX0XUKCAngzb1zWkHAsBIGgRheHg7BkBQEDQ7D8MgAKyBeF4kwCpw62TJID2TC82W8MCrAEM4gMQA4s1dQ41gZLybVTaQ0NGFoBDHLQrBM6DpBYEYIj0v0P34B6KQCpggN85gLzJCZItddYFOtXz3YOMQjN6FgzPTcQeBXaDHQ0PQTBsBwPD8IIwiiCgw0yEIeAOIDkAdKgXiFID5LHJI%2BTJIUmjaHUuSkOYzQVC4eRhP4dDB6Evgx7Q4exJUDRJCkdDFLU%2Bg5FU6eFFnpQ7C0qeKDUWQ5/UZdNMXEftJ03S9PXqudd1tO/W1xPcOMRgKOjgq43dD3jBAuCECQE2DHk4x6DDcPOJPnBtDP9vSNNtPzaQCCYMwWAuE6pBLYMkirS962cJIkjuMf7hVTdZ19Bdbd839ANA6QINzeDMCICgqBz/DcglBkbzxcFWUQfFPBkwpsQKmNM%2Bb01oIzXWrN/7s3oFzHm7cBZC3YNgmi/s8CS2lj9WW8sKYoOVsodu6tNbECbBgRWH99aGymsbOgjAWB4MtgIX2NsxCr0ds7eAbsPYBGlgAWgUAgDImBgoSOOIMAG%2BcAiB10JXEOYda4p0jt4BOhQ45R30QEZOrQ04FFSOXQxfsM5FHLqY0ujRs7BBDk4ou0Q65LwbqyZubVW7fX6p3B62UJEkzcvwrMq13BRJHmPIg15JoylnijBeQxJDLyGjIOQ68jaLWWmtFahSinFIfk/AJHd/qKHfp/ZqLU2qSEulwP0z8fqvw/hvDok
tYGqO4EAA%3D%3D): ```cpp #ifdef _MSC_VER #define EXPORT __declspec(dllexport) #else #define EXPORT #endif struct iii { int x; #if 0 // Toggle to force passing by value int y, z, w, a, b, c; #endif }; static int GetInt() { return 100; } static iii GetIII() { iii r = { 100 }; return r; } class EXPORT MyClass { public: static int StaticGetInt() { return 100; } static iii StaticGetIII() { iii r = { 100 }; return r; } int InstanceGetInt() { return 100; } iii InstanceGetIII() { iii r = { 100 }; return r; } virtual int VirtualGetInt() { return 100; } virtual iii VirtualGetIII() { iii r = { 100 }; return r; } }; auto a = GetInt; auto b = GetIII; auto c = new MyClass(); auto d = MyClass::StaticGetInt; auto e = MyClass::StaticGetIII; auto f = &MyClass::InstanceGetInt; auto g = &MyClass::InstanceGetIII; ``` Summary: Function | Win0 | Linux0 | Win1 | Linux1 -----|-----|-----|-----|----- `GetInt` | Reg | Reg | Reg | Reg `GetIII` | Reg | Reg | Buf | Buf `StaticGetInt` | Reg | Reg | Reg | Reg `StaticGetIII` | Reg | Reg | Buf | Buf `IntanceGetInt` | Reg | Reg | Reg | Reg `InstanceGetIII` | **Buf** | Reg | Buf | Buf `VirtualGetInt` | Reg | Reg | Reg | Reg `VirtualGetIII` | **Buf** | Reg | Buf | Buf `Win0` vs `Win1` means the toggle on line 10 was switched to make `iii` unable to be enregistered. (No surprises with the behavior there, it's included for completeness.)
True
All user-defined records should be returned by reference for instance methods on Windows x64 - Biohazrd does not correctly handle instance methods which return records. As far as I can tell, in **all** scenarios, these types are returned by reference. (Even if they can be enregistered.) Additionally, these functions really should be emitted as returning a pointer to the return type as well since the buffer passed in is returned there again. (Although that's really only important in the context of virtual methods since ignoring the return value doesn't matter.) # Windows x64 [The nuance is subtle here](https://docs.microsoft.com/en-us/cpp/build/x64-calling-convention?view=msvc-160#return-values): > User-defined types can be returned by value from global functions and static member functions. Notice that this omits instance methods of all kinds. # Linux x64 The [relevant Itanium spec](https://itanium-cxx-abi.github.io/cxx-abi/abi.html#non-trivial-return-values) does not mention anything about records not being returned by value, and in practice they are enregistered as expected. The SysV x86-64 ABI defines the fact that a pointer to the return buffer is returned: (From 3.2.3 "Returning of Values"): > On return `%rax` will contain the address that has been passed in by the caller in `%rdi`. # Manual Verification Manual verification indicates that the above is correct. Additionally, when a return buffer is used, the return buffer is passed back in `rax` on both Windows and Linux. 
The ABI can be manually verified [using this Godbolt](https://godbolt.org/#z:OYLghAFBqd5QCxAYwPYBMCmBRdBLAF1QCcAaPECAM1QDsCBlZAQwBtMQBGAFlICsupVs1qhkAUgBMAISnTSAZ0ztkBPHUqZa6AMKpWAVwC2tQVvQAZPLUwA5YwCNMxEAHYADKQAOqBYXW0eoYmgj5%2BanRWNvZGTi4eisqYqgEMBMzEBEHGppyJKhG0aRkEUXaOzoIK6ZnZIXnVJWUxcVwAlIqoBsTIHADkUgDMeFRYVADUAPoAsgw6kwBq2ABK4u4AgkNj1pjj2AAaAAoA8ssAKlOTWMisCl7JEOis7AAePplta5uSg8pKX1tMFQdnsjqczgCfuYRgCNtViAZVOM8CiAa5ZBtxljkfRxi9xIMMd9hhN3OMAPTk8ZnVDAYDscZEcY0Hq7LzMBR%2BUTjBwAT3GADc2AZMF9sTiCONeaRxgAvGUAdxlzBlDhlEkJkN%2B2hhG3ErgAIgSiV9GmpkBLxgBxTAEACS9Agnz16LF2OItu6tHGnHc7mNaKNerh6XNyJR1ttdujTrRRPFKLw42I4wJBtT6J9fozRs1mPdnuI3uIAZdQc2GxuHIUoJO53G015OmEnLjXy8BgcrDwyBAbqxZp7luK5pt9sdzvW4v18fFBYIXqz/rzU%2Bx%2BvL/fGg4tifGI57Y%2Bjdtj%2BaxM83CYjKbTGekS5zpdXc%2BTheLj%2BnhthT8tDsatF6h4Tpu56nnOHoLkWS7vmun7Bt%2Bu6/uk/6YIeMaTh%2Bs7Pru16DOmM73uu0HPuBi4liuH4bqBAp4JkBhsJaCw0QubCAQQJ7fiB37iiRkG%2BsumEPnB4rUbR9G7oxomsKhx7oTBAmXkmOF4ZmfEPuRz4vhBb7qWesGbJ%2BK5fMwBhMswqa4ZG44EI%2BxlMg45nptJNkmag4wWjeNgKg2TYtgosYrrZrnoA53nNtWIAgPuyCsc5TK7DejZhZyEVRU5AUucyIVSAAbIlvkRYhIgAVG9Cxa5wBZZIuU%2BeFICFchaVEn0HSsCAfQAKx9KQph9O4XWoG1OhyHIW5dKyqY/JwXUEG1fVtB0ADWICDIMAB0ACcriuO13DcNlgx7dw7icK4QhtdwXVGCA7WcKtkiuAAHJt2UPZwD07Tdp09X1pADX0XUKCAngzb1zWkHAsBIGgRheHg7BkBQEDQ7D8MgAKyBeF4kwCpw62TJID2TC82W8MCrAEM4gMQA4s1dQ41gZLybVTaQ0NGFoBDHLQrBM6DpBYEYIj0v0P34B6KQCpggN85gLzJCZItddYFOtXz3YOMQjN6FgzPTcQeBXaDHQ0PQTBsBwPD8IIwiiCgw0yEIeAOIDkAdKgXiFID5LHJI%2BTJIUmjaHUuSkOYzQVC4eRhP4dDB6Evgx7Q4exJUDRJCkdDFLU%2Bg5FU6eFFnpQ7C0qeKDUWQ5/UZdNMXEftJ03S9PXqudd1tO/W1xPcOMRgKOjgq43dD3jBAuCECQE2DHk4x6DDcPOJPnBtDP9vSNNtPzaQCCYMwWAuE6pBLYMkirS962cJIkjuMf7hVTdZ19Bdbd839ANA6QINzeDMCICgqBz/DcglBkbzxcFWUQfFPBkwpsQKmNM%2Bb01oIzXWrN/7s3oFzHm7cBZC3YNgmi/s8CS2lj9WW8sKYoOVsodu6tNbECbBgRWH99aGymsbOgjAWB4MtgIX2NsxCr0ds7eAbsPYBGlgAWgUAgDImBgoSOOIMAG%2BcAiB10JXEOYda4p0jt4BOhQ45R30QEZOrQ04FFSOXQxfsM5FHLqY0ujRs7BBDk4ou0Q65LwbqyZubVW7fX6p3B62UJEkzcvwrMq13BRJHmPIg15JoylnijBeQxJDLyGjIOQ68jaLWWmtFahSinFIfk/AJHd/qKHfp/ZqLU2qSEulwP0z8fqvw/hvDok
tYGqO4EAA%3D%3D): ```cpp #ifdef _MSC_VER #define EXPORT __declspec(dllexport) #else #define EXPORT #endif struct iii { int x; #if 0 // Toggle to force passing by value int y, z, w, a, b, c; #endif }; static int GetInt() { return 100; } static iii GetIII() { iii r = { 100 }; return r; } class EXPORT MyClass { public: static int StaticGetInt() { return 100; } static iii StaticGetIII() { iii r = { 100 }; return r; } int InstanceGetInt() { return 100; } iii InstanceGetIII() { iii r = { 100 }; return r; } virtual int VirtualGetInt() { return 100; } virtual iii VirtualGetIII() { iii r = { 100 }; return r; } }; auto a = GetInt; auto b = GetIII; auto c = new MyClass(); auto d = MyClass::StaticGetInt; auto e = MyClass::StaticGetIII; auto f = &MyClass::InstanceGetInt; auto g = &MyClass::InstanceGetIII; ``` Summary: Function | Win0 | Linux0 | Win1 | Linux1 -----|-----|-----|-----|----- `GetInt` | Reg | Reg | Reg | Reg `GetIII` | Reg | Reg | Buf | Buf `StaticGetInt` | Reg | Reg | Reg | Reg `StaticGetIII` | Reg | Reg | Buf | Buf `IntanceGetInt` | Reg | Reg | Reg | Reg `InstanceGetIII` | **Buf** | Reg | Buf | Buf `VirtualGetInt` | Reg | Reg | Reg | Reg `VirtualGetIII` | **Buf** | Reg | Buf | Buf `Win0` vs `Win1` means the toggle on line 10 was switched to make `iii` unable to be enregistered. (No surprises with the behavior there, it's included for completeness.)
non_process
all user defined records should be returned by reference for instance methods on windows biohazrd does not correctly handle instance methods which return records as far as i can tell in all scenarios these types are returned by reference even if they can be enregistered additionally these functions really should be emitted as returning a pointer to the return type as well since the buffer passed in is returned there again although that s really only important in the context of virtual methods since ignoring the return value doesn t matter windows user defined types can be returned by value from global functions and static member functions notice that this omits instance methods of all kinds linux the does not mention anything about records not being returned by value and in practice they are enregistered as expected the sysv abi defines the fact that a pointer to the return buffer is returned from returning of values on return rax will contain the address that has been passed in by the caller in rdi manual verification manual verification indicates that the above is correct additionally when a return buffer is used the return buffer is passed back in rax on both windows and linux the abi can be manually verified cpp ifdef msc ver define export declspec dllexport else define export endif struct iii int x if toggle to force passing by value int y z w a b c endif static int getint return static iii getiii iii r return r class export myclass public static int staticgetint return static iii staticgetiii iii r return r int instancegetint return iii instancegetiii iii r return r virtual int virtualgetint return virtual iii virtualgetiii iii r return r auto a getint auto b getiii auto c new myclass auto d myclass staticgetint auto e myclass staticgetiii auto f myclass instancegetint auto g myclass instancegetiii summary function getint reg reg reg reg getiii reg reg buf buf staticgetint reg reg reg reg staticgetiii reg reg buf buf intancegetint reg reg reg reg 
instancegetiii buf reg buf buf virtualgetint reg reg reg reg virtualgetiii buf reg buf buf vs means the toggle on line was switched to make iii unable to be enregistered no surprises with the behavior there it s included for completeness
0
128,292
10,523,944,380
IssuesEvent
2019-09-30 12:15:39
kubernetes/kubeadm
https://api.github.com/repos/kubernetes/kubeadm
closed
update kubeadm/kinder tests for 1.16
area/testing kind/cleanup lifecycle/active priority/important-soon
needs PRs for adding 1.16 jobs: - [x] k/kubeadm/kinder https://github.com/kubernetes/kubeadm/pull/1744 - [x] test-infra https://github.com/kubernetes/test-infra/pull/14037 and PRs for removing 1.13 jobs once 1.16 is out: - [x] k/kubeadm/kinder https://github.com/kubernetes/kubeadm/pull/1804 - [x] test-infra https://github.com/kubernetes/test-infra/pull/14483 /assign /area testing /kind cleanup /priority important-soon
1.0
update kubeadm/kinder tests for 1.16 - needs PRs for adding 1.16 jobs: - [x] k/kubeadm/kinder https://github.com/kubernetes/kubeadm/pull/1744 - [x] test-infra https://github.com/kubernetes/test-infra/pull/14037 and PRs for removing 1.13 jobs once 1.16 is out: - [x] k/kubeadm/kinder https://github.com/kubernetes/kubeadm/pull/1804 - [x] test-infra https://github.com/kubernetes/test-infra/pull/14483 /assign /area testing /kind cleanup /priority important-soon
non_process
update kubeadm kinder tests for needs prs for adding jobs k kubeadm kinder test infra and prs for removing jobs once is out k kubeadm kinder test infra assign area testing kind cleanup priority important soon
0
148,884
13,249,780,332
IssuesEvent
2020-08-19 21:27:06
raybellwaves/xskillscore
https://api.github.com/repos/raybellwaves/xskillscore
opened
add xarray to intersphinx_mapping
documentation
Add xarray in here https://github.com/raybellwaves/xskillscore/blob/master/docs/source/conf.py#L122 To be able to link to the xarray docs in the See also section e.g. https://xskillscore.readthedocs.io/en/latest/api/xskillscore.pearson_r.html#xskillscore.pearson_r
1.0
add xarray to intersphinx_mapping - Add xarray in here https://github.com/raybellwaves/xskillscore/blob/master/docs/source/conf.py#L122 To be able to link to the xarray docs in the See also section e.g. https://xskillscore.readthedocs.io/en/latest/api/xskillscore.pearson_r.html#xskillscore.pearson_r
non_process
add xarray to intersphinx mapping add xarray in here to be able to link to the xarray docs in the see also section e g
0
22,298
30,854,055,117
IssuesEvent
2023-08-02 19:03:29
cohenlabUNC/clpipe
https://api.github.com/repos/cohenlabUNC/clpipe
closed
Support Wildcarding For Confound Column Selection
postprocess2 medium
Allow something like comp_cor* to be used in column selection, for either confound columns setting or scrub target columns.
1.0
Support Wildcarding For Confound Column Selection - Allow something like comp_cor* to be used in column selection, for either confound columns setting or scrub target columns.
process
support wildcarding for confound column selection allow something like comp cor to be used in column selection for either confound columns setting or scrub target columns
1
10,888
13,669,204,766
IssuesEvent
2020-09-29 01:15:05
knative/serving
https://api.github.com/repos/knative/serving
closed
Document the knative provisioning guide
area/API area/autoscale area/networking kind/feature kind/process lifecycle/stale
/area API /area autoscale /kind process ## Describe the feature Currently we provision Knative for a quite small load, by default. We should provide some guidance in how to provision the system; suggested replicas (for activator), resources (CPU/Memory) vs concurrency/# services/QPS.
1.0
Document the knative provisioning guide - /area API /area autoscale /kind process ## Describe the feature Currently we provision Knative for a quite small load, by default. We should provide some guidance in how to provision the system; suggested replicas (for activator), resources (CPU/Memory) vs concurrency/# services/QPS.
process
document the knative provisioning guide area api area autoscale kind process describe the feature currently we provision knative for a quite small load by default we should provide some guidance in how to provision the system suggested replicas for activator resources cpu memory vs concurrency services qps
1
37,902
10,110,352,119
IssuesEvent
2019-07-30 10:03:12
tensorflow/tensorflow
https://api.github.com/repos/tensorflow/tensorflow
closed
XLA Warning
comp:xla type:build/install
<!--This template is for miscellaneous issues not covered by the other issue categories. For questions on how to work with TensorFlow, or support for problems that are not verified bugs in TensorFlow, please go to [StackOverflow](https://stackoverflow.com/questions/tagged/tensorflow). If you are reporting a vulnerability, please use the [dedicated reporting process](https://github.com/tensorflow/tensorflow/blob/master/SECURITY.md). For high-level discussions about TensorFlow, please post to discuss@tensorflow.org, for questions about the development or internal workings of TensorFlow, or if you would like to know how to contribute to TensorFlow, please post to developers@tensorflow.org.--> I am receiving this warning, ``` W tensorflow/compiler/jit/mark_for_compilation_pass.cc:1412] (One-time warning): Not using XLA:CPU for cluster because envvar TF_XLA_FLAGS=--tf_xla_cpu_global_jit was not set. If you want XLA:CPU, either set that envvar, or use experimental_jit_scope to enable XLA:CPU. To confirm that XLA is active, pass --vmodule=xla_compilation_cache=1 (as a proper command-line flag, not via TF_XLA_FLAGS) or set the envvar XLA_FLAGS=--xla_hlo_profile. ``` in one of my travis ci builds. AFAIK, XLA is going through active development and therefore I prefer not to use `TF_XLA_FLAGS=--tf_xla_cpu_global_jit`. However, I cannot figure out how to silence this warning permanently during future builds. If any such technique exists then I would suggest to add that to the warning itself. Please let me know, if it can be done by manipulating env variables. Thanks.
1.0
XLA Warning - <!--This template is for miscellaneous issues not covered by the other issue categories. For questions on how to work with TensorFlow, or support for problems that are not verified bugs in TensorFlow, please go to [StackOverflow](https://stackoverflow.com/questions/tagged/tensorflow). If you are reporting a vulnerability, please use the [dedicated reporting process](https://github.com/tensorflow/tensorflow/blob/master/SECURITY.md). For high-level discussions about TensorFlow, please post to discuss@tensorflow.org, for questions about the development or internal workings of TensorFlow, or if you would like to know how to contribute to TensorFlow, please post to developers@tensorflow.org.--> I am receiving this warning, ``` W tensorflow/compiler/jit/mark_for_compilation_pass.cc:1412] (One-time warning): Not using XLA:CPU for cluster because envvar TF_XLA_FLAGS=--tf_xla_cpu_global_jit was not set. If you want XLA:CPU, either set that envvar, or use experimental_jit_scope to enable XLA:CPU. To confirm that XLA is active, pass --vmodule=xla_compilation_cache=1 (as a proper command-line flag, not via TF_XLA_FLAGS) or set the envvar XLA_FLAGS=--xla_hlo_profile. ``` in one of my travis ci builds. AFAIK, XLA is going through active development and therefore I prefer not to use `TF_XLA_FLAGS=--tf_xla_cpu_global_jit`. However, I cannot figure out how to silence this warning permanently during future builds. If any such technique exists then I would suggest to add that to the warning itself. Please let me know, if it can be done by manipulating env variables. Thanks.
non_process
xla warning this template is for miscellaneous issues not covered by the other issue categories for questions on how to work with tensorflow or support for problems that are not verified bugs in tensorflow please go to if you are reporting a vulnerability please use the for high level discussions about tensorflow please post to discuss tensorflow org for questions about the development or internal workings of tensorflow or if you would like to know how to contribute to tensorflow please post to developers tensorflow org i am receiving this warning w tensorflow compiler jit mark for compilation pass cc one time warning not using xla cpu for cluster because envvar tf xla flags tf xla cpu global jit was not set if you want xla cpu either set that envvar or use experimental jit scope to enable xla cpu to confirm that xla is active pass vmodule xla compilation cache as a proper command line flag not via tf xla flags or set the envvar xla flags xla hlo profile in one of my travis ci builds afaik xla is going through active development and therefore i prefer not to use tf xla flags tf xla cpu global jit however i cannot figure out how to silence this warning permanently during future builds if any such technique exists then i would suggest to add that to the warning itself please let me know if it can be done by manipulating env variables thanks
0
21,926
30,446,558,590
IssuesEvent
2023-07-15 18:48:23
h4sh5/pypi-auto-scanner
https://api.github.com/repos/h4sh5/pypi-auto-scanner
opened
pyutils 0.0.1b14 has 2 GuardDog issues
guarddog typosquatting silent-process-execution
https://pypi.org/project/pyutils https://inspector.pypi.io/project/pyutils ```{ "dependency": "pyutils", "version": "0.0.1b14", "result": { "issues": 2, "errors": {}, "results": { "typosquatting": "This package closely ressembles the following package names, and might be a typosquatting attempt: pytils, python-utils", "silent-process-execution": [ { "location": "pyutils/exec_utils.py/pyutils/exec_utils.py:205", "code": " subproc = subprocess.Popen(\n args,\n stdin=subprocess.DEVNULL,\n stdout=subprocess.DEVNULL,\n stderr=subprocess.DEVNULL,\n )", "message": "This package is silently executing an external binary, redirecting stdout, stderr and stdin to /dev/null" } ] }, "path": "/tmp/tmph7t8x_ep/pyutils" } }```
1.0
pyutils 0.0.1b14 has 2 GuardDog issues - https://pypi.org/project/pyutils https://inspector.pypi.io/project/pyutils ```{ "dependency": "pyutils", "version": "0.0.1b14", "result": { "issues": 2, "errors": {}, "results": { "typosquatting": "This package closely ressembles the following package names, and might be a typosquatting attempt: pytils, python-utils", "silent-process-execution": [ { "location": "pyutils/exec_utils.py/pyutils/exec_utils.py:205", "code": " subproc = subprocess.Popen(\n args,\n stdin=subprocess.DEVNULL,\n stdout=subprocess.DEVNULL,\n stderr=subprocess.DEVNULL,\n )", "message": "This package is silently executing an external binary, redirecting stdout, stderr and stdin to /dev/null" } ] }, "path": "/tmp/tmph7t8x_ep/pyutils" } }```
process
pyutils has guarddog issues dependency pyutils version result issues errors results typosquatting this package closely ressembles the following package names and might be a typosquatting attempt pytils python utils silent process execution location pyutils exec utils py pyutils exec utils py code subproc subprocess popen n args n stdin subprocess devnull n stdout subprocess devnull n stderr subprocess devnull n message this package is silently executing an external binary redirecting stdout stderr and stdin to dev null path tmp ep pyutils
1
598,048
18,235,453,682
IssuesEvent
2021-10-01 06:07:34
renovatebot/renovate
https://api.github.com/repos/renovatebot/renovate
closed
Add support for passing session token in AWS ECR requests
type:feature priority-3-normal datasource:docker status:ready
### What would you like Renovate to be able to do? When using temporary credentials for accessing AWS ECR it is necessary to pass [the so-called session token](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-ecr/modules/credentials.html#sessiontoken-1) in addition to access key ID and secret access key. Renovate currently supports passing access key ID and secret access key via `username`/`password` in host rules: https://github.com/renovatebot/renovate/blob/902ee0209635c3524ce76a2c240b56c909c87d41/lib/datasource/docker/common.ts#L34-L35 I want to be able to pass the session token additionally as otherwise authorization will not succeed. ### If you have any ideas on how this should be implemented, please tell us here. Renovate could, if specified, interpret `HostRule.token` as the session token and pass it in addition to access key id and secret access key, something like this: ```ts const config: ECRClientConfig = { region }; if (opts.username && opts.password) { config.credentials = { accessKeyId: opts.username, secretAccessKey: opts.password, sessionToken: opts.token, // <-- This is new }; } } const ecr = new ECR(config); ``` At the moment `token` is unused when the host rule refers to an ECR registry, so in my opinion this would be a fully backwards compatible change. ### Is this a feature you are interested in implementing yourself? Yes
1.0
Add support for passing session token in AWS ECR requests - ### What would you like Renovate to be able to do? When using temporary credentials for accessing AWS ECR it is necessary to pass [the so-called session token](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-ecr/modules/credentials.html#sessiontoken-1) in addition to access key ID and secret access key. Renovate currently supports passing access key ID and secret access key via `username`/`password` in host rules: https://github.com/renovatebot/renovate/blob/902ee0209635c3524ce76a2c240b56c909c87d41/lib/datasource/docker/common.ts#L34-L35 I want to be able to pass the session token additionally as otherwise authorization will not succeed. ### If you have any ideas on how this should be implemented, please tell us here. Renovate could, if specified, interpret `HostRule.token` as the session token and pass it in addition to access key id and secret access key, something like this: ```ts const config: ECRClientConfig = { region }; if (opts.username && opts.password) { config.credentials = { accessKeyId: opts.username, secretAccessKey: opts.password, sessionToken: opts.token, // <-- This is new }; } } const ecr = new ECR(config); ``` At the moment `token` is unused when the host rule refers to an ECR registry, so in my opinion this would be a fully backwards compatible change. ### Is this a feature you are interested in implementing yourself? Yes
non_process
add support for passing session token in aws ecr requests what would you like renovate to be able to do when using temporary credentials for accessing aws ecr it is necessary to pass in addition to access key id and secret access key renovate currently supports passing access key id and secret access key via username password in host rules i want to be able to pass the session token additionally as otherwise authorization will not succeed if you have any ideas on how this should be implemented please tell us here renovate could if specified interpret hostrule token as the session token and pass it in addition to access key id and secret access key something like this ts const config ecrclientconfig region if opts username opts password config credentials accesskeyid opts username secretaccesskey opts password sessiontoken opts token this is new const ecr new ecr config at the moment token is unused when the host rule refers to an ecr registry so in my opinion this would be a fully backwards compatible change is this a feature you are interested in implementing yourself yes
0
270,905
20,614,257,163
IssuesEvent
2022-03-07 11:38:21
jamal919/bandsos
https://api.github.com/repos/jamal919/bandsos
closed
:memo: Missing project information
documentation
The project information is missing. A small description can be found here - https://www.spaceclimateobservatory.org/band-sos-bengal-delta - for the first version.
1.0
:memo: Missing project information - The project information is missing. A small description can be found here - https://www.spaceclimateobservatory.org/band-sos-bengal-delta - for the first version.
non_process
memo missing project information the project information is missing a small description can be found here for the first version
0
41,201
12,831,755,549
IssuesEvent
2020-07-07 06:14:07
rvvergara/todolist-api-igaku
https://api.github.com/repos/rvvergara/todolist-api-igaku
closed
CVE-2019-10795 (Medium) detected in undefsafe-2.0.2.tgz
security vulnerability
## CVE-2019-10795 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>undefsafe-2.0.2.tgz</b></p></summary> <p>Undefined safe way of extracting object properties</p> <p>Library home page: <a href="https://registry.npmjs.org/undefsafe/-/undefsafe-2.0.2.tgz">https://registry.npmjs.org/undefsafe/-/undefsafe-2.0.2.tgz</a></p> <p>Path to dependency file: /tmp/ws-scm/todolist-api-igaku/package.json</p> <p>Path to vulnerable library: /tmp/ws-scm/todolist-api-igaku/node_modules/undefsafe/package.json</p> <p> Dependency Hierarchy: - nodemon-2.0.0.tgz (Root Library) - :x: **undefsafe-2.0.2.tgz** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/rvvergara/todolist-api-igaku/commit/e74ac424f4412547afcf733f031f27227a0f28e9">e74ac424f4412547afcf733f031f27227a0f28e9</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary> <p> undefsafe before 2.0.3 is vulnerable to Prototype Pollution. The 'a' function could be tricked into adding or modifying properties of Object.prototype using a __proto__ payload. <p>Publish Date: 2020-02-18 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-10795>CVE-2019-10795</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.3</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: Low - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: Low - Integrity Impact: Low - Availability Impact: Low </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. 
</p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-10795">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-10795</a></p> <p>Release Date: 2020-02-18</p> <p>Fix Resolution: 2.0.3</p> </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
True
CVE-2019-10795 (Medium) detected in undefsafe-2.0.2.tgz - ## CVE-2019-10795 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>undefsafe-2.0.2.tgz</b></p></summary> <p>Undefined safe way of extracting object properties</p> <p>Library home page: <a href="https://registry.npmjs.org/undefsafe/-/undefsafe-2.0.2.tgz">https://registry.npmjs.org/undefsafe/-/undefsafe-2.0.2.tgz</a></p> <p>Path to dependency file: /tmp/ws-scm/todolist-api-igaku/package.json</p> <p>Path to vulnerable library: /tmp/ws-scm/todolist-api-igaku/node_modules/undefsafe/package.json</p> <p> Dependency Hierarchy: - nodemon-2.0.0.tgz (Root Library) - :x: **undefsafe-2.0.2.tgz** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/rvvergara/todolist-api-igaku/commit/e74ac424f4412547afcf733f031f27227a0f28e9">e74ac424f4412547afcf733f031f27227a0f28e9</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary> <p> undefsafe before 2.0.3 is vulnerable to Prototype Pollution. The 'a' function could be tricked into adding or modifying properties of Object.prototype using a __proto__ payload. 
<p>Publish Date: 2020-02-18 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-10795>CVE-2019-10795</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.3</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: Low - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: Low - Integrity Impact: Low - Availability Impact: Low </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-10795">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-10795</a></p> <p>Release Date: 2020-02-18</p> <p>Fix Resolution: 2.0.3</p> </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
non_process
cve medium detected in undefsafe tgz cve medium severity vulnerability vulnerable library undefsafe tgz undefined safe way of extracting object properties library home page a href path to dependency file tmp ws scm todolist api igaku package json path to vulnerable library tmp ws scm todolist api igaku node modules undefsafe package json dependency hierarchy nodemon tgz root library x undefsafe tgz vulnerable library found in head commit a href vulnerability details undefsafe before is vulnerable to prototype pollution the a function could be tricked into adding or modifying properties of object prototype using a proto payload publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required low user interaction none scope unchanged impact metrics confidentiality impact low integrity impact low availability impact low for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with whitesource
0
467
2,904,721,765
IssuesEvent
2015-06-18 19:40:01
pwittchen/prefser
https://api.github.com/repos/pwittchen/prefser
closed
Create GitHub release of 1.0.5
release process
Release notes for 1.0.5 are available in #37. Create release after Maven Sync and updating README.md in #41.
1.0
Create GitHub release of 1.0.5 - Release notes for 1.0.5 are available in #37. Create release after Maven Sync and updating README.md in #41.
process
create github release of release notes for are available in create release after maven sync and updating readme md in
1
239,423
7,794,752,722
IssuesEvent
2018-06-08 04:45:23
braun-robotics/rust-lpc82x-hal
https://api.github.com/repos/braun-robotics/rust-lpc82x-hal
opened
Handle I2C errors
good first issue priority: medium type: enhancement
No error checking is currently done by the I2C API. This of course disqualifies it from any serious use.
1.0
Handle I2C errors - No error checking is currently done by the I2C API. This of course disqualifies it from any serious use.
non_process
handle errors no error checking is currently done by the api this of course disqualifies it from any serious use
0
14,584
17,703,503,284
IssuesEvent
2021-08-25 03:09:47
tdwg/dwc
https://api.github.com/repos/tdwg/dwc
closed
Change term - samplingProtocol
Term - change Class - Event normative Process - complete
## Change term * Submitter: Paula Zermoglio @pzermoglio * Justification (why is this change necessary?): To accommodate "summary" Events in which the specific protocols can not be attributed to specific Occurrences. * Proponents (who needs this change): Humboldt Core Task Group (representing multiple independent stakeholders) Current Term definition: https://dwc.tdwg.org/terms/#dwc:samplingProtocol Proposed new attributes of the term: * Term name (in lowerCamelCase): samplingProtocol * Organized in Class (e.g. Location, Taxon): Event * Definition of the term: **The names of, references to, or descriptions of the methods or protocols used during an Event.** * Usage comments (recommendations regarding content, etc.): **Recommended best practice is describe an Event with no more than one sampling protocol. In the case of a summary Event with multiple protocols, in which a specific protocol can not be attributed to specific Occurrences, the recommended best practice is to separate the values in a list with space vertical bar space ( | ).** * Examples: `UV light trap`, `mist net`, `bottom trawl`, **`ad hoc observation | point count`**, `Penguins from space: faecal stains reveal the location of emperor penguin colonies, https://doi.org/10.1111/j.1466-8238.2009.00467.x`, `Takats et al. 2001. Guidelines for Nocturnal Owl Monitoring in North America. Beaverhill Bird Observatory and Bird Studies Canada, Edmonton, Alberta. 
32 pp., http://www.bsc-eoc.org/download/Owl.pdf` * Refines (identifier of the broader term this term refines, if applicable): None * Replaces (identifier of the existing term that would be deprecated and replaced by this term, if applicable): http://rs.tdwg.org/dwc/terms/version/samplingProtocol-2017-10-06 * ABCD 2.06 (XPATH of the equivalent term in ABCD or EFG, if applicable): DataSets/DataSet/Units/Unit/Gathering/Method Currently [dwc:samplingProtocol](http://rs.tdwg.org/dwc/terms/samplingProtocol) defined as: > The name of, reference to, or description of the method or protocol used during an Event. and with no comments associated. Depending on how granular the reported event is, there may be more than one protocol to cite. Suggestion would be to tweak definition slightly to: > The **names** of, **references** to, or **descriptions** of the **methods** or **protocols** used during an Event. and to **add a comment** on how to separate multiple values, like: "If multiple values applicable, separate the values in a list with space vertical bar space ( | )."
1.0
Change term - samplingProtocol - ## Change term * Submitter: Paula Zermoglio @pzermoglio * Justification (why is this change necessary?): To accommodate "summary" Events in which the specific protocols can not be attributed to specific Occurrences. * Proponents (who needs this change): Humboldt Core Task Group (representing multiple independent stakeholders) Current Term definition: https://dwc.tdwg.org/terms/#dwc:samplingProtocol Proposed new attributes of the term: * Term name (in lowerCamelCase): samplingProtocol * Organized in Class (e.g. Location, Taxon): Event * Definition of the term: **The names of, references to, or descriptions of the methods or protocols used during an Event.** * Usage comments (recommendations regarding content, etc.): **Recommended best practice is describe an Event with no more than one sampling protocol. In the case of a summary Event with multiple protocols, in which a specific protocol can not be attributed to specific Occurrences, the recommended best practice is to separate the values in a list with space vertical bar space ( | ).** * Examples: `UV light trap`, `mist net`, `bottom trawl`, **`ad hoc observation | point count`**, `Penguins from space: faecal stains reveal the location of emperor penguin colonies, https://doi.org/10.1111/j.1466-8238.2009.00467.x`, `Takats et al. 2001. Guidelines for Nocturnal Owl Monitoring in North America. Beaverhill Bird Observatory and Bird Studies Canada, Edmonton, Alberta. 
32 pp., http://www.bsc-eoc.org/download/Owl.pdf` * Refines (identifier of the broader term this term refines, if applicable): None * Replaces (identifier of the existing term that would be deprecated and replaced by this term, if applicable): http://rs.tdwg.org/dwc/terms/version/samplingProtocol-2017-10-06 * ABCD 2.06 (XPATH of the equivalent term in ABCD or EFG, if applicable): DataSets/DataSet/Units/Unit/Gathering/Method Currently [dwc:samplingProtocol](http://rs.tdwg.org/dwc/terms/samplingProtocol) defined as: > The name of, reference to, or description of the method or protocol used during an Event. and with no comments associated. Depending on how granular the reported event is, there may be more than one protocol to cite. Suggestion would be to tweak definition slightly to: > The **names** of, **references** to, or **descriptions** of the **methods** or **protocols** used during an Event. and to **add a comment** on how to separate multiple values, like: "If multiple values applicable, separate the values in a list with space vertical bar space ( | )."
process
change term samplingprotocol change term submitter paula zermoglio pzermoglio justification why is this change necessary to accommodate summary events in which the specific protocols can not be attributed to specific occurrences proponents who needs this change humboldt core task group representing multiple independent stakeholders current term definition proposed new attributes of the term term name in lowercamelcase samplingprotocol organized in class e g location taxon event definition of the term the names of references to or descriptions of the methods or protocols used during an event usage comments recommendations regarding content etc recommended best practice is describe an event with no more than one sampling protocol in the case of a summary event with multiple protocols in which a specific protocol can not be attributed to specific occurrences the recommended best practice is to separate the values in a list with space vertical bar space examples uv light trap mist net bottom trawl ad hoc observation point count penguins from space faecal stains reveal the location of emperor penguin colonies takats et al guidelines for nocturnal owl monitoring in north america beaverhill bird observatory and bird studies canada edmonton alberta pp refines identifier of the broader term this term refines if applicable none replaces identifier of the existing term that would be deprecated and replaced by this term if applicable abcd xpath of the equivalent term in abcd or efg if applicable datasets dataset units unit gathering method currently defined as the name of reference to or description of the method or protocol used during an event and with no comments associated depending on how granular the reported event is there may be more than one protocol to cite suggestion would be to tweak definition slightly to the names of references to or descriptions of the methods or protocols used during an event and to add a comment on how to separate multiple values like if 
multiple values applicable separate the values in a list with space vertical bar space
1
18,216
24,274,963,283
IssuesEvent
2022-09-28 13:15:25
zammad/zammad
https://api.github.com/repos/zammad/zammad
closed
Zammad doesn't like cyrillic aliases
bug verified prioritised by payment mail processing
<!-- Hi there - thanks for filing an issue. Please ensure the following things before creating an issue - thank you! 🤓 Since november 15th we handle all requests, except real bugs, at our community board. Full explanation: https://community.zammad.org/t/major-change-regarding-github-issues-community-board/21 Please post: - Feature requests - Development questions - Technical questions on the board -> https://community.zammad.org ! If you think you hit a bug, please continue: - Search existing issues and the CHANGELOG.md for your issue - there might be a solution already - Make sure to use the latest version of Zammad if possible - Add the `log/production.log` file from your system. Attention: Make sure no confidential data is in it! - Please write the issue in english - Don't remove the template - otherwise we will close the issue without further comments - Ask questions about Zammad configuration and usage at our mailinglist. See: https://zammad.org/participate Note: We always do our best. Unfortunately, sometimes there are too many requests and we can't handle everything at once. If you want to prioritize/escalate your issue, you can do so by means of a support contract (see https://zammad.com/pricing#selfhosted). * The upper textblock will be removed automatically when you submit your issue * --> ### Infos: * Used Zammad version: stable/5.0.3 * Installation method (source, package, ..): any * Operating system: any * Database + version: any * Elasticsearch version: any * Browser + version: Verified on Opera v83 (Chromium based) * Ticket: #10100757 ### Expected behavior: Zammad accepts by RFC valid email aliases. ### Actual behavior: Zammad does not support all RFC valid aliases. Like those containing cyrillic names. 
Processing emails causes: ``` "ERROR: Can't process email, you will find it for bug reporting under /opt/zammad/tmp/unprocessable_mail/416161d5732d97e0ec8eb4314fcd5750.eml, please create an issue at https://github.com/zammad/zammad/issues" "ERROR: #<Exceptions::UnprocessableEntity: Invalid email 'viktorpoblккeссdyi@example.com'>" Traceback (most recent call last): 62: from bin/rails:9:in `<main>' 61: from /usr/local/rvm/gems/ruby-2.7.4/gems/bootsnap-1.9.1/lib/bootsnap/load_path_cache/core_ext/kernel_require.rb:31:in `require' 60: from /usr/local/rvm/gems/ruby-2.7.4/gems/bootsnap-1.9.1/lib/bootsnap/load_path_cache/core_ext/kernel_require.rb:22:in `require_with_bootsnap_lfi' 59: from /usr/local/rvm/gems/ruby-2.7.4/gems/bootsnap-1.9.1/lib/bootsnap/load_path_cache/loaded_features_index.rb:92:in `register' 58: from /usr/local/rvm/gems/ruby-2.7.4/gems/bootsnap-1.9.1/lib/bootsnap/load_path_cache/core_ext/kernel_require.rb:23:in `block in require_with_bootsnap_lfi' 57: from /usr/local/rvm/gems/ruby-2.7.4/gems/bootsnap-1.9.1/lib/bootsnap/load_path_cache/core_ext/kernel_require.rb:23:in `require' 56: from /usr/local/rvm/gems/ruby-2.7.4/gems/railties-6.0.4.4/lib/rails/commands.rb:18:in `<main>' 55: from /usr/local/rvm/gems/ruby-2.7.4/gems/railties-6.0.4.4/lib/rails/command.rb:46:in `invoke' 54: from /usr/local/rvm/gems/ruby-2.7.4/gems/railties-6.0.4.4/lib/rails/command/base.rb:69:in `perform' 53: from /usr/local/rvm/gems/ruby-2.7.4/gems/thor-1.1.0/lib/thor.rb:392:in `dispatch' 52: from /usr/local/rvm/gems/ruby-2.7.4/gems/thor-1.1.0/lib/thor/invocation.rb:127:in `invoke_command' 51: from /usr/local/rvm/gems/ruby-2.7.4/gems/thor-1.1.0/lib/thor/command.rb:27:in `run' 50: from /usr/local/rvm/gems/ruby-2.7.4/gems/railties-6.0.4.4/lib/rails/commands/runner/runner_command.rb:45:in `perform' 49: from /usr/local/rvm/gems/ruby-2.7.4/gems/railties-6.0.4.4/lib/rails/commands/runner/runner_command.rb:45:in `eval' 48: from 
/usr/local/rvm/gems/ruby-2.7.4/gems/railties-6.0.4.4/lib/rails/commands/runner/runner_command.rb:45:in `<main>' 47: from /opt/zammad/app/models/channel/email_parser.rb:504:in `process_unprocessable_mails' 46: from /opt/zammad/app/models/channel/email_parser.rb:504:in `glob' 45: from /opt/zammad/app/models/channel/email_parser.rb:505:in `block in process_unprocessable_mails' 44: from /opt/zammad/app/models/channel/email_parser.rb:119:in `process' 43: from /usr/local/rvm/rubies/ruby-2.7.4/lib/ruby/2.7.0/timeout.rb:110:in `timeout' 42: from /usr/local/rvm/rubies/ruby-2.7.4/lib/ruby/2.7.0/timeout.rb:33:in `catch' 41: from /usr/local/rvm/rubies/ruby-2.7.4/lib/ruby/2.7.0/timeout.rb:33:in `catch' 40: from /usr/local/rvm/rubies/ruby-2.7.4/lib/ruby/2.7.0/timeout.rb:33:in `block in catch' 39: from /usr/local/rvm/rubies/ruby-2.7.4/lib/ruby/2.7.0/timeout.rb:95:in `block in timeout' 38: from /opt/zammad/app/models/channel/email_parser.rb:120:in `block in process' 37: from /opt/zammad/app/models/channel/email_parser.rb:156:in `_process' 36: from /opt/zammad/app/models/channel/email_parser.rb:156:in `each' 35: from /opt/zammad/app/models/channel/email_parser.rb:159:in `block in _process' 34: from /opt/zammad/app/models/channel/filter/identify_sender.rb:53:in `run' 33: from /opt/zammad/app/models/channel/filter/identify_sender.rb:157:in `user_create' 32: from /usr/local/rvm/gems/ruby-2.7.4/gems/activerecord-6.0.4.4/lib/active_record/persistence.rb:55:in `create!' 31: from /usr/local/rvm/gems/ruby-2.7.4/gems/activerecord-6.0.4.4/lib/active_record/suppressor.rb:48:in `save!' 30: from /usr/local/rvm/gems/ruby-2.7.4/gems/activerecord-6.0.4.4/lib/active_record/transactions.rb:318:in `save!' 
29: from /usr/local/rvm/gems/ruby-2.7.4/gems/activerecord-6.0.4.4/lib/active_record/transactions.rb:366:in `with_transaction_returning_status' 28: from /usr/local/rvm/gems/ruby-2.7.4/gems/activerecord-6.0.4.4/lib/active_record/transactions.rb:212:in `transaction' 27: from /usr/local/rvm/gems/ruby-2.7.4/gems/activerecord-6.0.4.4/lib/active_record/connection_adapters/abstract/database_statements.rb:280:in `transaction' 26: from /usr/local/rvm/gems/ruby-2.7.4/gems/activerecord-6.0.4.4/lib/active_record/connection_adapters/abstract/transaction.rb:278:in `within_new_transaction' 25: from /usr/local/rvm/gems/ruby-2.7.4/gems/activesupport-6.0.4.4/lib/active_support/concurrency/load_interlock_aware_monitor.rb:21:in `synchronize' 24: from /usr/local/rvm/gems/ruby-2.7.4/gems/activesupport-6.0.4.4/lib/active_support/concurrency/load_interlock_aware_monitor.rb:21:in `handle_interrupt' 23: from /usr/local/rvm/gems/ruby-2.7.4/gems/activesupport-6.0.4.4/lib/active_support/concurrency/load_interlock_aware_monitor.rb:25:in `block in synchronize' 22: from /usr/local/rvm/gems/ruby-2.7.4/gems/activesupport-6.0.4.4/lib/active_support/concurrency/load_interlock_aware_monitor.rb:25:in `handle_interrupt' 21: from /usr/local/rvm/gems/ruby-2.7.4/gems/activesupport-6.0.4.4/lib/active_support/concurrency/load_interlock_aware_monitor.rb:26:in `block (2 levels) in synchronize' 20: from /usr/local/rvm/gems/ruby-2.7.4/gems/activerecord-6.0.4.4/lib/active_record/connection_adapters/abstract/transaction.rb:280:in `block in within_new_transaction' 19: from /usr/local/rvm/gems/ruby-2.7.4/gems/activerecord-6.0.4.4/lib/active_record/connection_adapters/abstract/database_statements.rb:280:in `block in transaction' 18: from /usr/local/rvm/gems/ruby-2.7.4/gems/activerecord-6.0.4.4/lib/active_record/transactions.rb:375:in `block in with_transaction_returning_status' 17: from /usr/local/rvm/gems/ruby-2.7.4/gems/activerecord-6.0.4.4/lib/active_record/transactions.rb:318:in `block in save!' 
Zammad doesn't like cyrillic aliases

### Infos:

* Used Zammad version: stable/5.0.3
* Installation method (source, package, ..): any
* Operating system: any
* Database + version: any
* Elasticsearch version: any
* Browser + version: Verified on Opera v83 (Chromium based)
* Ticket: #10100757

### Expected behavior:

Zammad accepts RFC-valid email aliases.

### Actual behavior:

Zammad does not support all RFC-valid aliases, such as those containing cyrillic characters.
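The underlying problem can be illustrated with a minimal sketch. This is **not** Zammad's actual `check_email` implementation (only its name appears in the trace below); both patterns here are illustrative assumptions. An ASCII-only pattern rejects an internationalized (SMTPUTF8-style) local part, while a Unicode-tolerant pattern accepts it:

```ruby
# Illustrative only - neither regex is from the Zammad code base.
ASCII_ONLY = /\A[a-zA-Z0-9._%+-]+@[a-zA-Z0-9.-]+\.[a-zA-Z]{2,}\z/
UNICODE_OK = /\A[^@\s]+@[^@\s]+\.[^@\s]+\z/

address = 'viktorpoblккeссdyi@example.com'  # cyrillic chars in the local part

puts ASCII_ONLY.match?('john.doe@example.com')  # true  - plain ASCII passes
puts ASCII_ONLY.match?(address)                 # false - cyrillic fails the ASCII class
puts UNICODE_OK.match?(address)                 # true  - a Unicode-tolerant check passes
```

A validator built on an ASCII-only character class behaves like the first pattern and raises for any internationalized address, which matches the `Invalid email` error in the log below.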
Processing emails causes: ``` "ERROR: Can't process email, you will find it for bug reporting under /opt/zammad/tmp/unprocessable_mail/416161d5732d97e0ec8eb4314fcd5750.eml, please create an issue at https://github.com/zammad/zammad/issues" "ERROR: #<Exceptions::UnprocessableEntity: Invalid email 'viktorpoblккeссdyi@example.com'>" Traceback (most recent call last): 62: from bin/rails:9:in `<main>' 61: from /usr/local/rvm/gems/ruby-2.7.4/gems/bootsnap-1.9.1/lib/bootsnap/load_path_cache/core_ext/kernel_require.rb:31:in `require' 60: from /usr/local/rvm/gems/ruby-2.7.4/gems/bootsnap-1.9.1/lib/bootsnap/load_path_cache/core_ext/kernel_require.rb:22:in `require_with_bootsnap_lfi' 59: from /usr/local/rvm/gems/ruby-2.7.4/gems/bootsnap-1.9.1/lib/bootsnap/load_path_cache/loaded_features_index.rb:92:in `register' 58: from /usr/local/rvm/gems/ruby-2.7.4/gems/bootsnap-1.9.1/lib/bootsnap/load_path_cache/core_ext/kernel_require.rb:23:in `block in require_with_bootsnap_lfi' 57: from /usr/local/rvm/gems/ruby-2.7.4/gems/bootsnap-1.9.1/lib/bootsnap/load_path_cache/core_ext/kernel_require.rb:23:in `require' 56: from /usr/local/rvm/gems/ruby-2.7.4/gems/railties-6.0.4.4/lib/rails/commands.rb:18:in `<main>' 55: from /usr/local/rvm/gems/ruby-2.7.4/gems/railties-6.0.4.4/lib/rails/command.rb:46:in `invoke' 54: from /usr/local/rvm/gems/ruby-2.7.4/gems/railties-6.0.4.4/lib/rails/command/base.rb:69:in `perform' 53: from /usr/local/rvm/gems/ruby-2.7.4/gems/thor-1.1.0/lib/thor.rb:392:in `dispatch' 52: from /usr/local/rvm/gems/ruby-2.7.4/gems/thor-1.1.0/lib/thor/invocation.rb:127:in `invoke_command' 51: from /usr/local/rvm/gems/ruby-2.7.4/gems/thor-1.1.0/lib/thor/command.rb:27:in `run' 50: from /usr/local/rvm/gems/ruby-2.7.4/gems/railties-6.0.4.4/lib/rails/commands/runner/runner_command.rb:45:in `perform' 49: from /usr/local/rvm/gems/ruby-2.7.4/gems/railties-6.0.4.4/lib/rails/commands/runner/runner_command.rb:45:in `eval' 48: from 
/usr/local/rvm/gems/ruby-2.7.4/gems/railties-6.0.4.4/lib/rails/commands/runner/runner_command.rb:45:in `<main>' 47: from /opt/zammad/app/models/channel/email_parser.rb:504:in `process_unprocessable_mails' 46: from /opt/zammad/app/models/channel/email_parser.rb:504:in `glob' 45: from /opt/zammad/app/models/channel/email_parser.rb:505:in `block in process_unprocessable_mails' 44: from /opt/zammad/app/models/channel/email_parser.rb:119:in `process' 43: from /usr/local/rvm/rubies/ruby-2.7.4/lib/ruby/2.7.0/timeout.rb:110:in `timeout' 42: from /usr/local/rvm/rubies/ruby-2.7.4/lib/ruby/2.7.0/timeout.rb:33:in `catch' 41: from /usr/local/rvm/rubies/ruby-2.7.4/lib/ruby/2.7.0/timeout.rb:33:in `catch' 40: from /usr/local/rvm/rubies/ruby-2.7.4/lib/ruby/2.7.0/timeout.rb:33:in `block in catch' 39: from /usr/local/rvm/rubies/ruby-2.7.4/lib/ruby/2.7.0/timeout.rb:95:in `block in timeout' 38: from /opt/zammad/app/models/channel/email_parser.rb:120:in `block in process' 37: from /opt/zammad/app/models/channel/email_parser.rb:156:in `_process' 36: from /opt/zammad/app/models/channel/email_parser.rb:156:in `each' 35: from /opt/zammad/app/models/channel/email_parser.rb:159:in `block in _process' 34: from /opt/zammad/app/models/channel/filter/identify_sender.rb:53:in `run' 33: from /opt/zammad/app/models/channel/filter/identify_sender.rb:157:in `user_create' 32: from /usr/local/rvm/gems/ruby-2.7.4/gems/activerecord-6.0.4.4/lib/active_record/persistence.rb:55:in `create!' 31: from /usr/local/rvm/gems/ruby-2.7.4/gems/activerecord-6.0.4.4/lib/active_record/suppressor.rb:48:in `save!' 30: from /usr/local/rvm/gems/ruby-2.7.4/gems/activerecord-6.0.4.4/lib/active_record/transactions.rb:318:in `save!' 
29: from /usr/local/rvm/gems/ruby-2.7.4/gems/activerecord-6.0.4.4/lib/active_record/transactions.rb:366:in `with_transaction_returning_status' 28: from /usr/local/rvm/gems/ruby-2.7.4/gems/activerecord-6.0.4.4/lib/active_record/transactions.rb:212:in `transaction' 27: from /usr/local/rvm/gems/ruby-2.7.4/gems/activerecord-6.0.4.4/lib/active_record/connection_adapters/abstract/database_statements.rb:280:in `transaction' 26: from /usr/local/rvm/gems/ruby-2.7.4/gems/activerecord-6.0.4.4/lib/active_record/connection_adapters/abstract/transaction.rb:278:in `within_new_transaction' 25: from /usr/local/rvm/gems/ruby-2.7.4/gems/activesupport-6.0.4.4/lib/active_support/concurrency/load_interlock_aware_monitor.rb:21:in `synchronize' 24: from /usr/local/rvm/gems/ruby-2.7.4/gems/activesupport-6.0.4.4/lib/active_support/concurrency/load_interlock_aware_monitor.rb:21:in `handle_interrupt' 23: from /usr/local/rvm/gems/ruby-2.7.4/gems/activesupport-6.0.4.4/lib/active_support/concurrency/load_interlock_aware_monitor.rb:25:in `block in synchronize' 22: from /usr/local/rvm/gems/ruby-2.7.4/gems/activesupport-6.0.4.4/lib/active_support/concurrency/load_interlock_aware_monitor.rb:25:in `handle_interrupt' 21: from /usr/local/rvm/gems/ruby-2.7.4/gems/activesupport-6.0.4.4/lib/active_support/concurrency/load_interlock_aware_monitor.rb:26:in `block (2 levels) in synchronize' 20: from /usr/local/rvm/gems/ruby-2.7.4/gems/activerecord-6.0.4.4/lib/active_record/connection_adapters/abstract/transaction.rb:280:in `block in within_new_transaction' 19: from /usr/local/rvm/gems/ruby-2.7.4/gems/activerecord-6.0.4.4/lib/active_record/connection_adapters/abstract/database_statements.rb:280:in `block in transaction' 18: from /usr/local/rvm/gems/ruby-2.7.4/gems/activerecord-6.0.4.4/lib/active_record/transactions.rb:375:in `block in with_transaction_returning_status' 17: from /usr/local/rvm/gems/ruby-2.7.4/gems/activerecord-6.0.4.4/lib/active_record/transactions.rb:318:in `block in save!' 
16: from /usr/local/rvm/gems/ruby-2.7.4/gems/activerecord-6.0.4.4/lib/active_record/validations.rb:53:in `save!' 15: from /usr/local/rvm/gems/ruby-2.7.4/gems/activerecord-6.0.4.4/lib/active_record/validations.rb:84:in `perform_validations' 14: from /usr/local/rvm/gems/ruby-2.7.4/gems/activerecord-6.0.4.4/lib/active_record/validations.rb:68:in `valid?' 13: from /usr/local/rvm/gems/ruby-2.7.4/gems/activemodel-6.0.4.4/lib/active_model/validations.rb:337:in `valid?' 12: from /usr/local/rvm/gems/ruby-2.7.4/gems/activemodel-6.0.4.4/lib/active_model/validations/callbacks.rb:117:in `run_validations!' 11: from /usr/local/rvm/gems/ruby-2.7.4/gems/activesupport-6.0.4.4/lib/active_support/callbacks.rb:825:in `_run_validation_callbacks' 10: from /usr/local/rvm/gems/ruby-2.7.4/gems/activesupport-6.0.4.4/lib/active_support/callbacks.rb:134:in `run_callbacks' 9: from /usr/local/rvm/gems/ruby-2.7.4/gems/activesupport-6.0.4.4/lib/active_support/callbacks.rb:513:in `invoke_before' 8: from /usr/local/rvm/gems/ruby-2.7.4/gems/activesupport-6.0.4.4/lib/active_support/callbacks.rb:513:in `each' 7: from /usr/local/rvm/gems/ruby-2.7.4/gems/activesupport-6.0.4.4/lib/active_support/callbacks.rb:513:in `block in invoke_before' 6: from /usr/local/rvm/gems/ruby-2.7.4/gems/activesupport-6.0.4.4/lib/active_support/callbacks.rb:201:in `block in halting' 5: from /usr/local/rvm/gems/ruby-2.7.4/gems/activesupport-6.0.4.4/lib/active_support/callbacks.rb:604:in `block in default_terminator' 4: from /usr/local/rvm/gems/ruby-2.7.4/gems/activesupport-6.0.4.4/lib/active_support/callbacks.rb:604:in `catch' 3: from /usr/local/rvm/gems/ruby-2.7.4/gems/activesupport-6.0.4.4/lib/active_support/callbacks.rb:605:in `block (2 levels) in default_terminator' 2: from /usr/local/rvm/gems/ruby-2.7.4/gems/activesupport-6.0.4.4/lib/active_support/callbacks.rb:200:in `block (2 levels) in halting' 1: from /usr/local/rvm/gems/ruby-2.7.4/gems/activesupport-6.0.4.4/lib/active_support/callbacks.rb:428:in `block in make_lambda' 
/opt/zammad/app/models/user.rb:908:in `check_email': Invalid email 'viktorpoblккeссdyi@exmaple.com' (Exceptions::UnprocessableEntity) 19: from bin/rails:9:in `<main>' 18: from /usr/local/rvm/gems/ruby-2.7.4/gems/bootsnap-1.9.1/lib/bootsnap/load_path_cache/core_ext/kernel_require.rb:31:in `require' 17: from /usr/local/rvm/gems/ruby-2.7.4/gems/bootsnap-1.9.1/lib/bootsnap/load_path_cache/core_ext/kernel_require.rb:22:in `require_with_bootsnap_lfi' 16: from /usr/local/rvm/gems/ruby-2.7.4/gems/bootsnap-1.9.1/lib/bootsnap/load_path_cache/loaded_features_index.rb:92:in `register' 15: from /usr/local/rvm/gems/ruby-2.7.4/gems/bootsnap-1.9.1/lib/bootsnap/load_path_cache/core_ext/kernel_require.rb:23:in `block in require_with_bootsnap_lfi' 14: from /usr/local/rvm/gems/ruby-2.7.4/gems/bootsnap-1.9.1/lib/bootsnap/load_path_cache/core_ext/kernel_require.rb:23:in `require' 13: from /usr/local/rvm/gems/ruby-2.7.4/gems/railties-6.0.4.4/lib/rails/commands.rb:18:in `<main>' 12: from /usr/local/rvm/gems/ruby-2.7.4/gems/railties-6.0.4.4/lib/rails/command.rb:46:in `invoke' 11: from /usr/local/rvm/gems/ruby-2.7.4/gems/railties-6.0.4.4/lib/rails/command/base.rb:69:in `perform' 10: from /usr/local/rvm/gems/ruby-2.7.4/gems/thor-1.1.0/lib/thor.rb:392:in `dispatch' 9: from /usr/local/rvm/gems/ruby-2.7.4/gems/thor-1.1.0/lib/thor/invocation.rb:127:in `invoke_command' 8: from /usr/local/rvm/gems/ruby-2.7.4/gems/thor-1.1.0/lib/thor/command.rb:27:in `run' 7: from /usr/local/rvm/gems/ruby-2.7.4/gems/railties-6.0.4.4/lib/rails/commands/runner/runner_command.rb:45:in `perform' 6: from /usr/local/rvm/gems/ruby-2.7.4/gems/railties-6.0.4.4/lib/rails/commands/runner/runner_command.rb:45:in `eval' 5: from /usr/local/rvm/gems/ruby-2.7.4/gems/railties-6.0.4.4/lib/rails/commands/runner/runner_command.rb:45:in `<main>' 4: from /opt/zammad/app/models/channel/email_parser.rb:504:in `process_unprocessable_mails' 3: from /opt/zammad/app/models/channel/email_parser.rb:504:in `glob' 2: from 
/opt/zammad/app/models/channel/email_parser.rb:505:in `block in process_unprocessable_mails' 1: from /opt/zammad/app/models/channel/email_parser.rb:117:in `process' /opt/zammad/app/models/channel/email_parser.rb:135:in `rescue in process': #<Exceptions::UnprocessableEntity: Invalid email 'viktorpoblккeссdyi@example.com'> (RuntimeError) /opt/zammad/app/models/user.rb:908:in `check_email' /usr/local/rvm/gems/ruby-2.7.4/gems/activesupport-6.0.4.4/lib/active_support/callbacks.rb:428:in `block in make_lambda' /usr/local/rvm/gems/ruby-2.7.4/gems/activesupport-6.0.4.4/lib/active_support/callbacks.rb:200:in `block (2 levels) in halting' /usr/local/rvm/gems/ruby-2.7.4/gems/activesupport-6.0.4.4/lib/active_support/callbacks.rb:605:in `block (2 levels) in default_terminator' /usr/local/rvm/gems/ruby-2.7.4/gems/activesupport-6.0.4.4/lib/active_support/callbacks.rb:604:in `catch' /usr/local/rvm/gems/ruby-2.7.4/gems/activesupport-6.0.4.4/lib/active_support/callbacks.rb:604:in `block in default_terminator' /usr/local/rvm/gems/ruby-2.7.4/gems/activesupport-6.0.4.4/lib/active_support/callbacks.rb:201:in `block in halting' /usr/local/rvm/gems/ruby-2.7.4/gems/activesupport-6.0.4.4/lib/active_support/callbacks.rb:513:in `block in invoke_before' /usr/local/rvm/gems/ruby-2.7.4/gems/activesupport-6.0.4.4/lib/active_support/callbacks.rb:513:in `each' /usr/local/rvm/gems/ruby-2.7.4/gems/activesupport-6.0.4.4/lib/active_support/callbacks.rb:513:in `invoke_before' /usr/local/rvm/gems/ruby-2.7.4/gems/activesupport-6.0.4.4/lib/active_support/callbacks.rb:134:in `run_callbacks' /usr/local/rvm/gems/ruby-2.7.4/gems/activesupport-6.0.4.4/lib/active_support/callbacks.rb:825:in `_run_validation_callbacks' /usr/local/rvm/gems/ruby-2.7.4/gems/activemodel-6.0.4.4/lib/active_model/validations/callbacks.rb:117:in `run_validations!' /usr/local/rvm/gems/ruby-2.7.4/gems/activemodel-6.0.4.4/lib/active_model/validations.rb:337:in `valid?' 
/usr/local/rvm/gems/ruby-2.7.4/gems/activerecord-6.0.4.4/lib/active_record/validations.rb:68:in `valid?' /usr/local/rvm/gems/ruby-2.7.4/gems/activerecord-6.0.4.4/lib/active_record/validations.rb:84:in `perform_validations' /usr/local/rvm/gems/ruby-2.7.4/gems/activerecord-6.0.4.4/lib/active_record/validations.rb:53:in `save!' /usr/local/rvm/gems/ruby-2.7.4/gems/activerecord-6.0.4.4/lib/active_record/transactions.rb:318:in `block in save!' /usr/local/rvm/gems/ruby-2.7.4/gems/activerecord-6.0.4.4/lib/active_record/transactions.rb:375:in `block in with_transaction_returning_status' /usr/local/rvm/gems/ruby-2.7.4/gems/activerecord-6.0.4.4/lib/active_record/connection_adapters/abstract/database_statements.rb:280:in `block in transaction' /usr/local/rvm/gems/ruby-2.7.4/gems/activerecord-6.0.4.4/lib/active_record/connection_adapters/abstract/transaction.rb:280:in `block in within_new_transaction' /usr/local/rvm/gems/ruby-2.7.4/gems/activesupport-6.0.4.4/lib/active_support/concurrency/load_interlock_aware_monitor.rb:26:in `block (2 levels) in synchronize' /usr/local/rvm/gems/ruby-2.7.4/gems/activesupport-6.0.4.4/lib/active_support/concurrency/load_interlock_aware_monitor.rb:25:in `handle_interrupt' /usr/local/rvm/gems/ruby-2.7.4/gems/activesupport-6.0.4.4/lib/active_support/concurrency/load_interlock_aware_monitor.rb:25:in `block in synchronize' /usr/local/rvm/gems/ruby-2.7.4/gems/activesupport-6.0.4.4/lib/active_support/concurrency/load_interlock_aware_monitor.rb:21:in `handle_interrupt' /usr/local/rvm/gems/ruby-2.7.4/gems/activesupport-6.0.4.4/lib/active_support/concurrency/load_interlock_aware_monitor.rb:21:in `synchronize' /usr/local/rvm/gems/ruby-2.7.4/gems/activerecord-6.0.4.4/lib/active_record/connection_adapters/abstract/transaction.rb:278:in `within_new_transaction' /usr/local/rvm/gems/ruby-2.7.4/gems/activerecord-6.0.4.4/lib/active_record/connection_adapters/abstract/database_statements.rb:280:in `transaction' 
/usr/local/rvm/gems/ruby-2.7.4/gems/activerecord-6.0.4.4/lib/active_record/transactions.rb:212:in `transaction' /usr/local/rvm/gems/ruby-2.7.4/gems/activerecord-6.0.4.4/lib/active_record/transactions.rb:366:in `with_transaction_returning_status' /usr/local/rvm/gems/ruby-2.7.4/gems/activerecord-6.0.4.4/lib/active_record/transactions.rb:318:in `save!' /usr/local/rvm/gems/ruby-2.7.4/gems/activerecord-6.0.4.4/lib/active_record/suppressor.rb:48:in `save!' /usr/local/rvm/gems/ruby-2.7.4/gems/activerecord-6.0.4.4/lib/active_record/persistence.rb:55:in `create!' /opt/zammad/app/models/channel/filter/identify_sender.rb:157:in `user_create' /opt/zammad/app/models/channel/filter/identify_sender.rb:53:in `run' /opt/zammad/app/models/channel/email_parser.rb:159:in `block in _process' /opt/zammad/app/models/channel/email_parser.rb:156:in `each' /opt/zammad/app/models/channel/email_parser.rb:156:in `_process' /opt/zammad/app/models/channel/email_parser.rb:120:in `block in process' /usr/local/rvm/rubies/ruby-2.7.4/lib/ruby/2.7.0/timeout.rb:95:in `block in timeout' /usr/local/rvm/rubies/ruby-2.7.4/lib/ruby/2.7.0/timeout.rb:33:in `block in catch' /usr/local/rvm/rubies/ruby-2.7.4/lib/ruby/2.7.0/timeout.rb:33:in `catch' /usr/local/rvm/rubies/ruby-2.7.4/lib/ruby/2.7.0/timeout.rb:33:in `catch' /usr/local/rvm/rubies/ruby-2.7.4/lib/ruby/2.7.0/timeout.rb:110:in `timeout' /opt/zammad/app/models/channel/email_parser.rb:119:in `process' /opt/zammad/app/models/channel/email_parser.rb:505:in `block in process_unprocessable_mails' /opt/zammad/app/models/channel/email_parser.rb:504:in `glob' /opt/zammad/app/models/channel/email_parser.rb:504:in `process_unprocessable_mails' /usr/local/rvm/gems/ruby-2.7.4/gems/railties-6.0.4.4/lib/rails/commands/runner/runner_command.rb:45:in `<main>' /usr/local/rvm/gems/ruby-2.7.4/gems/railties-6.0.4.4/lib/rails/commands/runner/runner_command.rb:45:in `eval' /usr/local/rvm/gems/ruby-2.7.4/gems/railties-6.0.4.4/lib/rails/commands/runner/runner_command.rb:45:in 
`perform' /usr/local/rvm/gems/ruby-2.7.4/gems/thor-1.1.0/lib/thor/command.rb:27:in `run' /usr/local/rvm/gems/ruby-2.7.4/gems/thor-1.1.0/lib/thor/invocation.rb:127:in `invoke_command' /usr/local/rvm/gems/ruby-2.7.4/gems/thor-1.1.0/lib/thor.rb:392:in `dispatch' /usr/local/rvm/gems/ruby-2.7.4/gems/railties-6.0.4.4/lib/rails/command/base.rb:69:in `perform' /usr/local/rvm/gems/ruby-2.7.4/gems/railties-6.0.4.4/lib/rails/command.rb:46:in `invoke' /usr/local/rvm/gems/ruby-2.7.4/gems/railties-6.0.4.4/lib/rails/commands.rb:18:in `<main>' /usr/local/rvm/gems/ruby-2.7.4/gems/bootsnap-1.9.1/lib/bootsnap/load_path_cache/core_ext/kernel_require.rb:23:in `require' /usr/local/rvm/gems/ruby-2.7.4/gems/bootsnap-1.9.1/lib/bootsnap/load_path_cache/core_ext/kernel_require.rb:23:in `block in require_with_bootsnap_lfi' /usr/local/rvm/gems/ruby-2.7.4/gems/bootsnap-1.9.1/lib/bootsnap/load_path_cache/loaded_features_index.rb:92:in `register' /usr/local/rvm/gems/ruby-2.7.4/gems/bootsnap-1.9.1/lib/bootsnap/load_path_cache/core_ext/kernel_require.rb:22:in `require_with_bootsnap_lfi' /usr/local/rvm/gems/ruby-2.7.4/gems/bootsnap-1.9.1/lib/bootsnap/load_path_cache/core_ext/kernel_require.rb:31:in `require' ```

User creation is affected as well: while trying to add such a user manually, my browser stepped in and complained instead of Zammad... :-D

![image](https://user-images.githubusercontent.com/6549061/151952188-fd79e1f8-0b71-4eca-9571-8c4997cb67c5.png)

### Steps to reproduce the behavior:

* Have a mail with a cyrillic mail address in the `From` header.

Yes, I'm sure this is a bug and no feature request or a general question.
rb in save from usr local rvm gems ruby gems activerecord lib active record validations rb in perform validations from usr local rvm gems ruby gems activerecord lib active record validations rb in valid from usr local rvm gems ruby gems activemodel lib active model validations rb in valid from usr local rvm gems ruby gems activemodel lib active model validations callbacks rb in run validations from usr local rvm gems ruby gems activesupport lib active support callbacks rb in run validation callbacks from usr local rvm gems ruby gems activesupport lib active support callbacks rb in run callbacks from usr local rvm gems ruby gems activesupport lib active support callbacks rb in invoke before from usr local rvm gems ruby gems activesupport lib active support callbacks rb in each from usr local rvm gems ruby gems activesupport lib active support callbacks rb in block in invoke before from usr local rvm gems ruby gems activesupport lib active support callbacks rb in block in halting from usr local rvm gems ruby gems activesupport lib active support callbacks rb in block in default terminator from usr local rvm gems ruby gems activesupport lib active support callbacks rb in catch from usr local rvm gems ruby gems activesupport lib active support callbacks rb in block levels in default terminator from usr local rvm gems ruby gems activesupport lib active support callbacks rb in block levels in halting from usr local rvm gems ruby gems activesupport lib active support callbacks rb in block in make lambda opt zammad app models user rb in check email invalid email viktorpoblккeссdyi exmaple com exceptions unprocessableentity from bin rails in from usr local rvm gems ruby gems bootsnap lib bootsnap load path cache core ext kernel require rb in require from usr local rvm gems ruby gems bootsnap lib bootsnap load path cache core ext kernel require rb in require with bootsnap lfi from usr local rvm gems ruby gems bootsnap lib bootsnap load path cache loaded features index rb in 
register from usr local rvm gems ruby gems bootsnap lib bootsnap load path cache core ext kernel require rb in block in require with bootsnap lfi from usr local rvm gems ruby gems bootsnap lib bootsnap load path cache core ext kernel require rb in require from usr local rvm gems ruby gems railties lib rails commands rb in from usr local rvm gems ruby gems railties lib rails command rb in invoke from usr local rvm gems ruby gems railties lib rails command base rb in perform from usr local rvm gems ruby gems thor lib thor rb in dispatch from usr local rvm gems ruby gems thor lib thor invocation rb in invoke command from usr local rvm gems ruby gems thor lib thor command rb in run from usr local rvm gems ruby gems railties lib rails commands runner runner command rb in perform from usr local rvm gems ruby gems railties lib rails commands runner runner command rb in eval from usr local rvm gems ruby gems railties lib rails commands runner runner command rb in from opt zammad app models channel email parser rb in process unprocessable mails from opt zammad app models channel email parser rb in glob from opt zammad app models channel email parser rb in block in process unprocessable mails from opt zammad app models channel email parser rb in process opt zammad app models channel email parser rb in rescue in process runtimeerror opt zammad app models user rb in check email usr local rvm gems ruby gems activesupport lib active support callbacks rb in block in make lambda usr local rvm gems ruby gems activesupport lib active support callbacks rb in block levels in halting usr local rvm gems ruby gems activesupport lib active support callbacks rb in block levels in default terminator usr local rvm gems ruby gems activesupport lib active support callbacks rb in catch usr local rvm gems ruby gems activesupport lib active support callbacks rb in block in default terminator usr local rvm gems ruby gems activesupport lib active support callbacks rb in block in halting usr local 
rvm gems ruby gems activesupport lib active support callbacks rb in block in invoke before usr local rvm gems ruby gems activesupport lib active support callbacks rb in each usr local rvm gems ruby gems activesupport lib active support callbacks rb in invoke before usr local rvm gems ruby gems activesupport lib active support callbacks rb in run callbacks usr local rvm gems ruby gems activesupport lib active support callbacks rb in run validation callbacks usr local rvm gems ruby gems activemodel lib active model validations callbacks rb in run validations usr local rvm gems ruby gems activemodel lib active model validations rb in valid usr local rvm gems ruby gems activerecord lib active record validations rb in valid usr local rvm gems ruby gems activerecord lib active record validations rb in perform validations usr local rvm gems ruby gems activerecord lib active record validations rb in save usr local rvm gems ruby gems activerecord lib active record transactions rb in block in save usr local rvm gems ruby gems activerecord lib active record transactions rb in block in with transaction returning status usr local rvm gems ruby gems activerecord lib active record connection adapters abstract database statements rb in block in transaction usr local rvm gems ruby gems activerecord lib active record connection adapters abstract transaction rb in block in within new transaction usr local rvm gems ruby gems activesupport lib active support concurrency load interlock aware monitor rb in block levels in synchronize usr local rvm gems ruby gems activesupport lib active support concurrency load interlock aware monitor rb in handle interrupt usr local rvm gems ruby gems activesupport lib active support concurrency load interlock aware monitor rb in block in synchronize usr local rvm gems ruby gems activesupport lib active support concurrency load interlock aware monitor rb in handle interrupt usr local rvm gems ruby gems activesupport lib active support concurrency load 
interlock aware monitor rb in synchronize usr local rvm gems ruby gems activerecord lib active record connection adapters abstract transaction rb in within new transaction usr local rvm gems ruby gems activerecord lib active record connection adapters abstract database statements rb in transaction usr local rvm gems ruby gems activerecord lib active record transactions rb in transaction usr local rvm gems ruby gems activerecord lib active record transactions rb in with transaction returning status usr local rvm gems ruby gems activerecord lib active record transactions rb in save usr local rvm gems ruby gems activerecord lib active record suppressor rb in save usr local rvm gems ruby gems activerecord lib active record persistence rb in create opt zammad app models channel filter identify sender rb in user create opt zammad app models channel filter identify sender rb in run opt zammad app models channel email parser rb in block in process opt zammad app models channel email parser rb in each opt zammad app models channel email parser rb in process opt zammad app models channel email parser rb in block in process usr local rvm rubies ruby lib ruby timeout rb in block in timeout usr local rvm rubies ruby lib ruby timeout rb in block in catch usr local rvm rubies ruby lib ruby timeout rb in catch usr local rvm rubies ruby lib ruby timeout rb in catch usr local rvm rubies ruby lib ruby timeout rb in timeout opt zammad app models channel email parser rb in process opt zammad app models channel email parser rb in block in process unprocessable mails opt zammad app models channel email parser rb in glob opt zammad app models channel email parser rb in process unprocessable mails usr local rvm gems ruby gems railties lib rails commands runner runner command rb in usr local rvm gems ruby gems railties lib rails commands runner runner command rb in eval usr local rvm gems ruby gems railties lib rails commands runner runner command rb in perform usr local rvm gems ruby gems 
thor lib thor command rb in run usr local rvm gems ruby gems thor lib thor invocation rb in invoke command usr local rvm gems ruby gems thor lib thor rb in dispatch usr local rvm gems ruby gems railties lib rails command base rb in perform usr local rvm gems ruby gems railties lib rails command rb in invoke usr local rvm gems ruby gems railties lib rails commands rb in usr local rvm gems ruby gems bootsnap lib bootsnap load path cache core ext kernel require rb in require usr local rvm gems ruby gems bootsnap lib bootsnap load path cache core ext kernel require rb in block in require with bootsnap lfi usr local rvm gems ruby gems bootsnap lib bootsnap load path cache loaded features index rb in register usr local rvm gems ruby gems bootsnap lib bootsnap load path cache core ext kernel require rb in require with bootsnap lfi usr local rvm gems ruby gems bootsnap lib bootsnap load path cache core ext kernel require rb in require while trying to add a user to double tab it s also the user creation affected my browser stepped in and complained instead of zammad d steps to reproduce the behavior have a mail with cyrillic mail address in the from header yes i m sure this is a bug and no feature request or a general question
1
1,707
4,350,448,182
IssuesEvent
2016-07-31 08:15:55
AkkadianGames/Nanoshooter
https://api.github.com/repos/AkkadianGames/Nanoshooter
closed
Separate Susa goals from Nanoshooter roadmap on Trello
Docs Process Ready
## Criteria - [x] Trello roadmaps are separated. - [x] Nanoshooter's milestones are adjusted to fit.
1.0
Separate Susa goals from Nanoshooter roadmap on Trello - ## Criteria - [x] Trello roadmaps are separated. - [x] Nanoshooter's milestones are adjusted to fit.
process
separate susa goals from nanoshooter roadmap on trello criteria trello roadmaps are separated nanoshooter s milestones are adjusted to fit
1
19,293
25,466,376,372
IssuesEvent
2022-11-25 05:06:15
GoogleCloudPlatform/fda-mystudies
https://api.github.com/repos/GoogleCloudPlatform/fda-mystudies
closed
[IDP] [PM] UI issue in the non-organizational admin popup message
Bug P1 Participant manager Process: Fixed Process: Tested QA Process: Tested dev
UI issue is observed in the non-organizational admin popup message while adding the admin in the application **AR:** ![PM3](https://user-images.githubusercontent.com/86007179/172861642-bc51a1f9-067e-4c3f-8586-226525feb30c.png) **ER:** ![PM5](https://user-images.githubusercontent.com/86007179/172863373-5e70c5ea-bc63-4bbe-b33f-6207fc248b67.png)
3.0
[IDP] [PM] UI issue in the non-organizational admin popup message - UI issue is observed in the non-organizational admin popup message while adding the admin in the application **AR:** ![PM3](https://user-images.githubusercontent.com/86007179/172861642-bc51a1f9-067e-4c3f-8586-226525feb30c.png) **ER:** ![PM5](https://user-images.githubusercontent.com/86007179/172863373-5e70c5ea-bc63-4bbe-b33f-6207fc248b67.png)
process
ui issue in the non organizational admin popup message ui issue is observed in the non organizational admin popup message while adding the admin in the application ar er
1
127,028
17,152,844,320
IssuesEvent
2021-07-14 00:03:40
ScratchAddons/ScratchAddons
https://api.github.com/repos/ScratchAddons/ScratchAddons
closed
Notices are more visible than warnings in dark mode
scope: design scope: webpages type: bug
In dark mode, the background color of notices stands out more than that of warnings.
1.0
Notices are more visible than warnings in dark mode - In dark mode, the background color of notices stands out more than that of warnings.
non_process
notices are more visible than warnings in dark mode in dark mode the background color of notices stands out more than that of warnings
0
6,413
9,498,725,790
IssuesEvent
2019-04-24 03:08:49
dotnet/corefx
https://api.github.com/repos/dotnet/corefx
closed
Test failure: System.Diagnostics.Tests.ProcessTests/ProcessStart_OpenFileOnLinux_UsesSpecifiedProgram(programToOpenWith: \"vi\")
area-System.Diagnostics.Process test-run-core
Opened on behalf of @AriNuer The test `System.Diagnostics.Tests.ProcessTests/ProcessStart_OpenFileOnLinux_UsesSpecifiedProgram(programToOpenWith: \"vi\")` has failed. Failure Message: ``` Assert.Equal() Failure ↓ (pos 2) Expected: vi Actual: vim-nox11 ↑ (pos 2) ``` Stack Trace: ``` at System.Diagnostics.Tests.ProcessTests.ProcessStart_OpenFileOnLinux_UsesSpecifiedProgram(String programToOpenWith) in /__w/1/s/src/System.Diagnostics.Process/tests/ProcessTests.Unix.cs:line 285 ``` Build : 3.0 - 20190421.5 (Core Tests) Failing configurations: - SLES.15.Amd64-x64 - Release - SLES.12.Amd64-x64 - Release - OpenSuse.42.Amd64-x64 - Release - Ubuntu.1810.Amd64-x64 - Release - Ubuntu.1804.Amd64-x64 - Release - Ubuntu.1604.Amd64-x64 - Release - Debian.9.Amd64-x64 - Release - Debian.8.Amd64-x64 - Release - Alpine.39.Amd64-x64 - Release - Alpine.38.Amd64-x64 - Release - Alpine.38.Arm64-arm64 - Release Details: https://mc.dot.net/#/product/netcore/30/source/official~2Fdotnet~2Fcorefx~2Frefs~2Fheads~2Fmaster/type/test~2Ffunctional~2Fcli~2F/build/20190421.5/workItem/System.Diagnostics.Process.Tests/analysis/xunit/System.Diagnostics.Tests.ProcessTests~2FProcessStart_OpenFileOnLinux_UsesSpecifiedProgram(programToOpenWith:%20%5C%22vi%5C%22)
1.0
Test failure: System.Diagnostics.Tests.ProcessTests/ProcessStart_OpenFileOnLinux_UsesSpecifiedProgram(programToOpenWith: \"vi\") - Opened on behalf of @AriNuer The test `System.Diagnostics.Tests.ProcessTests/ProcessStart_OpenFileOnLinux_UsesSpecifiedProgram(programToOpenWith: \"vi\")` has failed. Failure Message: ``` Assert.Equal() Failure ↓ (pos 2) Expected: vi Actual: vim-nox11 ↑ (pos 2) ``` Stack Trace: ``` at System.Diagnostics.Tests.ProcessTests.ProcessStart_OpenFileOnLinux_UsesSpecifiedProgram(String programToOpenWith) in /__w/1/s/src/System.Diagnostics.Process/tests/ProcessTests.Unix.cs:line 285 ``` Build : 3.0 - 20190421.5 (Core Tests) Failing configurations: - SLES.15.Amd64-x64 - Release - SLES.12.Amd64-x64 - Release - OpenSuse.42.Amd64-x64 - Release - Ubuntu.1810.Amd64-x64 - Release - Ubuntu.1804.Amd64-x64 - Release - Ubuntu.1604.Amd64-x64 - Release - Debian.9.Amd64-x64 - Release - Debian.8.Amd64-x64 - Release - Alpine.39.Amd64-x64 - Release - Alpine.38.Amd64-x64 - Release - Alpine.38.Arm64-arm64 - Release Details: https://mc.dot.net/#/product/netcore/30/source/official~2Fdotnet~2Fcorefx~2Frefs~2Fheads~2Fmaster/type/test~2Ffunctional~2Fcli~2F/build/20190421.5/workItem/System.Diagnostics.Process.Tests/analysis/xunit/System.Diagnostics.Tests.ProcessTests~2FProcessStart_OpenFileOnLinux_UsesSpecifiedProgram(programToOpenWith:%20%5C%22vi%5C%22)
process
test failure system diagnostics tests processtests processstart openfileonlinux usesspecifiedprogram programtoopenwith vi opened on behalf of arinuer the test system diagnostics tests processtests processstart openfileonlinux usesspecifiedprogram programtoopenwith vi has failed failure message assert equal failure ↓ pos expected vi actual vim ↑ pos stack trace at system diagnostics tests processtests processstart openfileonlinux usesspecifiedprogram string programtoopenwith in w s src system diagnostics process tests processtests unix cs line build core tests failing configurations sles release sles release opensuse release ubuntu release ubuntu release ubuntu release debian release debian release alpine release alpine release alpine release details
1
59,365
24,738,646,742
IssuesEvent
2022-10-21 01:43:14
MicrosoftDocs/dynamics-365-customer-engagement
https://api.github.com/repos/MicrosoftDocs/dynamics-365-customer-engagement
closed
Omnichannel custom menu items in Session panel
assigned-to-author in-progress Pri2 dynamics-365-customerservice/svc
In section 1. Session panel it shows contacts on the Sitemap on the left. Is it possible to add custom persistent menu items to the Sitemap or will it only display contact items in the menu as pictured? ![image](https://user-images.githubusercontent.com/8151045/114073704-6bea6700-9858-11eb-9236-a7c3e75744e3.png) --- #### Document Details ⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.* * ID: c9e283bf-255b-33d3-28df-26a112b77814 * Version Independent ID: 71063a7c-351a-259f-1ab0-fd6469e1b93a * Content: [Introduction to the agent interface of Omnichannel for Customer Service app](https://docs.microsoft.com/en-us/dynamics365/customer-service/oc-introduction-agent-interface) * Content Source: [ce/customer-service/oc-introduction-agent-interface.md](https://github.com/MicrosoftDocs/dynamics-365-customer-engagement/blob/main/ce/customer-service/oc-introduction-agent-interface.md) * Service: **dynamics-365-customerservice** * GitHub Login: @neeranelli * Microsoft Alias: **nenellim**
1.0
Omnichannel custom menu items in Session panel - In section 1. Session panel it shows contacts on the Sitemap on the left. Is it possible to add custom persistent menu items to the Sitemap or will it only display contact items in the menu as pictured? ![image](https://user-images.githubusercontent.com/8151045/114073704-6bea6700-9858-11eb-9236-a7c3e75744e3.png) --- #### Document Details ⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.* * ID: c9e283bf-255b-33d3-28df-26a112b77814 * Version Independent ID: 71063a7c-351a-259f-1ab0-fd6469e1b93a * Content: [Introduction to the agent interface of Omnichannel for Customer Service app](https://docs.microsoft.com/en-us/dynamics365/customer-service/oc-introduction-agent-interface) * Content Source: [ce/customer-service/oc-introduction-agent-interface.md](https://github.com/MicrosoftDocs/dynamics-365-customer-engagement/blob/main/ce/customer-service/oc-introduction-agent-interface.md) * Service: **dynamics-365-customerservice** * GitHub Login: @neeranelli * Microsoft Alias: **nenellim**
non_process
omnichannel custom menu items in session panel in section session panel it shows contacts on the sitemap on the left is it possible to add custom persistent menu items to the sitemap or will it only display contact items in the menu as pictured document details ⚠ do not edit this section it is required for docs microsoft com ➟ github issue linking id version independent id content content source service dynamics customerservice github login neeranelli microsoft alias nenellim
0
756,369
26,468,340,772
IssuesEvent
2023-01-17 03:38:20
kubesphere/console
https://api.github.com/repos/kubesphere/console
closed
Pod replicas can not auto refresh
kind/bug kind/need-to-verify priority/low
**Describe the bug** The current can not auto refresh. ![image](https://user-images.githubusercontent.com/68640256/197472310-d78658a6-f9bb-4778-8295-befa0c09bc92.png) **Versions used(KubeSphere/Kubernetes)** KubeSphere: `v3.3.1-rc.5` Kubernetes: (If KubeSphere installer used, you can skip this)
1.0
Pod replicas can not auto refresh - **Describe the bug** The current can not auto refresh. ![image](https://user-images.githubusercontent.com/68640256/197472310-d78658a6-f9bb-4778-8295-befa0c09bc92.png) **Versions used(KubeSphere/Kubernetes)** KubeSphere: `v3.3.1-rc.5` Kubernetes: (If KubeSphere installer used, you can skip this)
non_process
pod replicas can not auto refresh describe the bug the current can not auto refresh versions used kubesphere kubernetes kubesphere rc kubernetes if kubesphere installer used you can skip this
0
275,183
8,575,393,369
IssuesEvent
2018-11-12 17:08:33
HabitRPG/habitica-android
https://api.github.com/repos/HabitRPG/habitica-android
opened
Achievements: Bailey's 'Town Crier NPC' Achievement image is blank
Priority: minor
On bailey's profile when you view their achievements, that image doesn't show.
1.0
Achievements: Bailey's 'Town Crier NPC' Achievement image is blank - On bailey's profile when you view their achievements, that image doesn't show.
non_process
achievements bailey s town crier npc achievement image is blank on bailey s profile when you view their achievements that image doesn t show
0
42,190
5,431,312,273
IssuesEvent
2017-03-04 00:14:14
elegantthemes/Divi-Beta
https://api.github.com/repos/elegantthemes/Divi-Beta
closed
PHP Errors
!IMPORTANT BUG DESIGN SIGNOFF QUALITY ASSURED READY FOR REVIEW
### Problem: ``` [01-Mar-2017 23:08:11 UTC] PHP Notice: Undefined index: depends_on in et.falgout.us/wp-content/themes/Divi/includes/builder/functions.php on line 596 [01-Mar-2017 23:08:11 UTC] PHP Warning: Invalid argument supplied for foreach() in et.falgout.us/wp-content/themes/Divi/includes/builder/functions.php on line 601 [01-Mar-2017 23:08:11 UTC] PHP Warning: array_intersect_key(): Argument #1 is not an array in et.falgout.us/wp-content/themes/Divi/includes/builder/functions.php on line 618 ``` ### Steps To Reproduce: 1. Possibly coming from something on Homepage Extended or Basic Premade Layouts ### Attached PR * https://github.com/elegantthemes/submodule-builder/issues/1802
1.0
PHP Errors - ### Problem: ``` [01-Mar-2017 23:08:11 UTC] PHP Notice: Undefined index: depends_on in et.falgout.us/wp-content/themes/Divi/includes/builder/functions.php on line 596 [01-Mar-2017 23:08:11 UTC] PHP Warning: Invalid argument supplied for foreach() in et.falgout.us/wp-content/themes/Divi/includes/builder/functions.php on line 601 [01-Mar-2017 23:08:11 UTC] PHP Warning: array_intersect_key(): Argument #1 is not an array in et.falgout.us/wp-content/themes/Divi/includes/builder/functions.php on line 618 ``` ### Steps To Reproduce: 1. Possibly coming from something on Homepage Extended or Basic Premade Layouts ### Attached PR * https://github.com/elegantthemes/submodule-builder/issues/1802
non_process
php errors problem php notice undefined index depends on in et falgout us wp content themes divi includes builder functions php on line php warning invalid argument supplied for foreach in et falgout us wp content themes divi includes builder functions php on line php warning array intersect key argument is not an array in et falgout us wp content themes divi includes builder functions php on line steps to reproduce possibly coming from something on homepage extended or basic premade layouts attached pr
0
21,766
30,281,977,352
IssuesEvent
2023-07-08 07:06:15
X-Sharp/XSharpPublic
https://api.github.com/repos/X-Sharp/XSharpPublic
closed
The preprocessor puts an excess parenthesis (Xbase++ dialect)
bug Preprocessor
**Describe the bug** The preprocessor puts an excess parenthesis in the wrong place **To Reproduce .prg** ``` #xtranslate ORA_DRV(<Method>([<arg_list,...>])) => ; (__oraResult := ORA_<Method>([<arg_list>]),; iif(ISOBJECT(__oraResult) .and. __oraResult:isDerivedFrom(Error()),; (Eval(ErrorBlock(), __oraResult), nil),; __oraResult)) #xtranslate ORA_DRV_LOCALS => local __oraResult := nil class dbTableOra exported: method FileName hidden: var nHandle endclass method dbTableOra:FileName() ORA_DRV_LOCALS return Upper(ORA_DRV(Alias(::nHandle))) ``` **Expected behavior (xBase++ ppo)** Output ``` method dbTableOra:FileName() local __oraResult := nil return Upper((__oraResult := ORA_Alias( ::nHandle),iif(( VALTYPE(__oraResult)=="O" ) .and. __oraResult:isDerivedFrom(Error()),(Eval(ErrorBlock(), __oraResult), nil),__oraResult))) ``` **Actual behavior (X# ppo)** ``` method dbTableOra:FileName() local __oraResult := nil return Upper( ; (__oraResult := ORA_Alias(::nHandle)),; iif( ( VALTYPE(__oraResult)=="O" ) .and. __oraResult:isDerivedFrom(Error()),; (Eval(ErrorBlock(), __oraResult), nil),; __oraResult))) ``` **Additional context** X# Compiler version 2.16.0.5 (public) -dialect:xBase++ -xpp1 -lb -memvar -vo1 -vo3 -vo5 -vo10 -vo15 -vo16
1.0
The preprocessor puts an excess parenthesis (Xbase++ dialect) - **Describe the bug** The preprocessor puts an excess parenthesis in the wrong place **To Reproduce .prg** ``` #xtranslate ORA_DRV(<Method>([<arg_list,...>])) => ; (__oraResult := ORA_<Method>([<arg_list>]),; iif(ISOBJECT(__oraResult) .and. __oraResult:isDerivedFrom(Error()),; (Eval(ErrorBlock(), __oraResult), nil),; __oraResult)) #xtranslate ORA_DRV_LOCALS => local __oraResult := nil class dbTableOra exported: method FileName hidden: var nHandle endclass method dbTableOra:FileName() ORA_DRV_LOCALS return Upper(ORA_DRV(Alias(::nHandle))) ``` **Expected behavior (xBase++ ppo)** Output ``` method dbTableOra:FileName() local __oraResult := nil return Upper((__oraResult := ORA_Alias( ::nHandle),iif(( VALTYPE(__oraResult)=="O" ) .and. __oraResult:isDerivedFrom(Error()),(Eval(ErrorBlock(), __oraResult), nil),__oraResult))) ``` **Actual behavior (X# ppo)** ``` method dbTableOra:FileName() local __oraResult := nil return Upper( ; (__oraResult := ORA_Alias(::nHandle)),; iif( ( VALTYPE(__oraResult)=="O" ) .and. __oraResult:isDerivedFrom(Error()),; (Eval(ErrorBlock(), __oraResult), nil),; __oraResult))) ``` **Additional context** X# Compiler version 2.16.0.5 (public) -dialect:xBase++ -xpp1 -lb -memvar -vo1 -vo3 -vo5 -vo10 -vo15 -vo16
process
the preprocessor puts an excess parenthesis xbase dialect describe the bug the preprocessor puts an excess parenthesis in the wrong place to reproduce prg xtranslate ora drv oraresult ora iif isobject oraresult and oraresult isderivedfrom error eval errorblock oraresult nil oraresult xtranslate ora drv locals local oraresult nil class dbtableora exported method filename hidden var nhandle endclass method dbtableora filename ora drv locals return upper ora drv alias nhandle expected behavior xbase ppo output method dbtableora filename local oraresult nil return upper oraresult ora alias nhandle iif valtype oraresult o and oraresult isderivedfrom error eval errorblock oraresult nil oraresult actual behavior x ppo method dbtableora filename local oraresult nil return upper oraresult ora alias nhandle iif valtype oraresult o and oraresult isderivedfrom error eval errorblock oraresult nil oraresult additional context x compiler version public dialect xbase lb memvar
1
4,255
7,189,050,235
IssuesEvent
2018-02-02 12:34:05
Great-Hill-Corporation/quickBlocks
https://api.github.com/repos/Great-Hill-Corporation/quickBlocks
closed
Solution to DDOS trace explosion problem
libs-etherlib status-inprocess type-enhancement
Sometime after block 2363154 and before block 'Tangerine', there are traces with 10's of thousands of entries. This slows down a full scan of the blockchain data (when one is looking at a traces) significantly (1,000s of times slower than scanning traces from other time periods). This makes naive scans of the entire blockchain (if one is looking at traces) unusable. **Solution:** Run a scan looking for transactions with > 500 traces (or other known bad data). If one finds a 'bad' trace, read those traces once and store them in a separate cache. In the `getTraces` code, if the requested block number is between `A` and `B`, check to see if the trace is already in the cache, and read it from there if so. Otherwise (outside the time span and "no file present in the cache"), read the traces from the node. This is a compromise between cache file size and speed of access.
1.0
Solution to DDOS trace explosion problem - Sometime after block 2363154 and before block 'Tangerine', there are traces with 10's of thousands of entries. This slows down a full scan of the blockchain data (when one is looking at a traces) significantly (1,000s of times slower than scanning traces from other time periods). This makes naive scans of the entire blockchain (if one is looking at traces) unusable. **Solution:** Run a scan looking for transactions with > 500 traces (or other known bad data). If one finds a 'bad' trace, read those traces once and store them in a separate cache. In the `getTraces` code, if the requested block number is between `A` and `B`, check to see if the trace is already in the cache, and read it from there if so. Otherwise (outside the time span and "no file present in the cache"), read the traces from the node. This is a compromise between cache file size and speed of access.
process
solution to ddos trace explosion problem sometime after block and before block tangerine there are traces with s of thousands of entries this slows down a full scan of the blockchain data when one is looking at a traces significantly of times slower than scanning traces from other time periods this makes naive scans of the entire blockchain if one is looking at traces unusable solution run a scan looking for transactions with traces or other known bad data if one finds a bad trace read those traces once and store them in a separate cache in the gettraces code if the requested block number is between a and b check to see if the trace is already in the cache and read it from there if so otherwise outside the time span and no file present in the cache read the traces from the node this is a compromise between cache file size and speed of access
1
20,256
26,874,566,594
IssuesEvent
2023-02-04 21:59:59
kserve/kserve
https://api.github.com/repos/kserve/kserve
closed
KServe 0.10.0 release tracking
kind/feature kserve/release-process
/kind feature **Describe the solution you'd like** KServe 0.10 release tracking: RC release Date: 11/18/2022 Release Date: TBD ## KServe Model Serving: See [project dashboard](https://github.com/orgs/kserve/projects/3/views/1?sortedBy%5Bdirection%5D=asc&sortedBy%5BcolumnId%5D=Status) to see what all features/bugs are covered as part 0.10 and their current status ## ModelMesh: ## Models UI:: ## Website: # **TIMELINE** | Week | Start | End | When | Event (Actual timelines) | | ---- | -------------------------- | ---------------------------- | ------------------------- | ------------------------------------------------------------------------------------------------------- | | 0 | Friday, July 22, 2022 | Thursday, July 28, 2022 | | Gap week between releases. 0.9.0 release happened on July 22, 2022 | | 1 | Friday, July 29, 2022 | Thursday, August 4, 2022 | | Development | | 2 | Friday, August 5, 2022 | Thursday, August 11, 2022 | | Development | | 3 | Friday, August 12, 2022 | Thursday, August 18, 2022 | | Development | | 4 | Friday, August 19, 2022 | Thursday, August 25, 2022 | | Development | | 5 | Friday, August 26, 2022 | Thursday, September 1, 2022 | | Development | | 6 | Friday, September 2, 2022 | Thursday, September 8, 2022 | | Development | | 7 | Friday, September 9, 2022 | Thursday, September 15, 2022 | | Development | | 8 | Friday, September 16, 2022 | Thursday, September 22, 2022 | | Development | | 9 | Friday, September 23, 2022 | Thursday, September 29, 2022 | | Development | | 10 | Friday, September 30, 2022 | Thursday, October 6, 2022 | | Development | | 11 | Friday, October 7, 2022 | Thursday, October 13, 2022 | | Development | | 12 | Friday, October 14, 2022 | Thursday, October 20, 2022 | | Development | | 13 | Friday, October 21, 2022 | Thursday, October 27, 2022 | | Development | | 14 | Friday, October 28, 2022 | Thursday, November 3, 2022 | | Development | | 15 | Friday, November 4, 2022 | Thursday, November 10, 2022 | | Development | | 16 | 
Friday, November 11, 2022 | Thursday, November 17, 2022 | | Development +<br>Start the prep i.e.<br>Announce/Reminder about upcoming feature freeze date in 2 weeks | | 17 | Friday, November 18, 2022 | Thursday, November 24, 2022 | Friday, November 18, 2022 | Feature Freeze + RC0 Released + Documentation update starts | | 18 | Friday, November 25, 2022 | Thursday, December 1, 2022 | | Testing | | 19 | Friday, December 2, 2022 | Thursday, December 8, 2022 | | RC1 Release if necessary (Not released) | | 20 | Friday, December 9, 2022 | Thursday, December 15, 2022 | | Testing | | 21 | Friday, December 16, 2022 | Thursday, December 22, 2022 | | 0.10.0 will be released |
1.0
KServe 0.10.0 release tracking - /kind feature **Describe the solution you'd like** KServe 0.10 release tracking: RC release Date: 11/18/2022 Release Date: TBD ## KServe Model Serving: See [project dashboard](https://github.com/orgs/kserve/projects/3/views/1?sortedBy%5Bdirection%5D=asc&sortedBy%5BcolumnId%5D=Status) to see what all features/bugs are covered as part 0.10 and their current status ## ModelMesh: ## Models UI:: ## Website: # **TIMELINE** | Week | Start | End | When | Event (Actual timelines) | | ---- | -------------------------- | ---------------------------- | ------------------------- | ------------------------------------------------------------------------------------------------------- | | 0 | Friday, July 22, 2022 | Thursday, July 28, 2022 | | Gap week between releases. 0.9.0 release happened on July 22, 2022 | | 1 | Friday, July 29, 2022 | Thursday, August 4, 2022 | | Development | | 2 | Friday, August 5, 2022 | Thursday, August 11, 2022 | | Development | | 3 | Friday, August 12, 2022 | Thursday, August 18, 2022 | | Development | | 4 | Friday, August 19, 2022 | Thursday, August 25, 2022 | | Development | | 5 | Friday, August 26, 2022 | Thursday, September 1, 2022 | | Development | | 6 | Friday, September 2, 2022 | Thursday, September 8, 2022 | | Development | | 7 | Friday, September 9, 2022 | Thursday, September 15, 2022 | | Development | | 8 | Friday, September 16, 2022 | Thursday, September 22, 2022 | | Development | | 9 | Friday, September 23, 2022 | Thursday, September 29, 2022 | | Development | | 10 | Friday, September 30, 2022 | Thursday, October 6, 2022 | | Development | | 11 | Friday, October 7, 2022 | Thursday, October 13, 2022 | | Development | | 12 | Friday, October 14, 2022 | Thursday, October 20, 2022 | | Development | | 13 | Friday, October 21, 2022 | Thursday, October 27, 2022 | | Development | | 14 | Friday, October 28, 2022 | Thursday, November 3, 2022 | | Development | | 15 | Friday, November 4, 2022 | Thursday, November 10, 
2022 | | Development | | 16 | Friday, November 11, 2022 | Thursday, November 17, 2022 | | Development +<br>Start the prep i.e.<br>Announce/Reminder about upcoming feature freeze date in 2 weeks | | 17 | Friday, November 18, 2022 | Thursday, November 24, 2022 | Friday, November 18, 2022 | Feature Freeze + RC0 Released + Documentation update starts | | 18 | Friday, November 25, 2022 | Thursday, December 1, 2022 | | Testing | | 19 | Friday, December 2, 2022 | Thursday, December 8, 2022 | | RC1 Release if necessary (Not released) | | 20 | Friday, December 9, 2022 | Thursday, December 15, 2022 | | Testing | | 21 | Friday, December 16, 2022 | Thursday, December 22, 2022 | | 0.10.0 will be released |
process
kserve release tracking kind feature describe the solution you d like kserve release tracking rc release date release date tbd kserve model serving see to see what all features bugs are covered as part and their current status modelmesh models ui website timeline week start end when event actual timelines friday july thursday july gap week between releases release happened on july friday july thursday august development friday august thursday august development friday august thursday august development friday august thursday august development friday august thursday september development friday september thursday september development friday september thursday september development friday september thursday september development friday september thursday september development friday september thursday october development friday october thursday october development friday october thursday october development friday october thursday october development friday october thursday november development friday november thursday november development friday november thursday november development start the prep i e announce reminder about upcoming feature freeze date in weeks friday november thursday november friday november feature freeze released documentation update starts friday november thursday december testing friday december thursday december release if necessary not released friday december thursday december testing friday december thursday december will be released
1
5,091
26,006,744,234
IssuesEvent
2022-12-20 20:10:25
camunda/zeebe
https://api.github.com/repos/camunda/zeebe
opened
Refactor internal communication layer
kind/toil area/performance area/reliability area/observability area/maintainability
<!-- In case you have questions about our software we encourage everyone to participate in our community via the - Camunda Platform community forum https://forum.camunda.io/ or - Slack https://camunda-cloud.slack.com/ (For invite: https://camunda-slack-invite.herokuapp.com/) There you can exchange ideas with other Zeebe and Camunda Platform 8 users, as well as the product developers, and use the search to find answers to similar questions. This issue template is used by the Zeebe engineers to create general tasks. --> _Sorry if an issue for this already exists, but I was not able to find it._ **Description** We have recently seen many issues with the Atomix-based internal networking/communication layer, which we use between gateway and broker and between broker and broker. For example, in https://github.com/zeebe-io/zeebe-chaos/issues/294 we sent requests for several minutes without detecting that the node was already gone. When trying to fix this issue via https://github.com/camunda/zeebe/pull/11307, it turned out to be rather hard to test and reason about (as is the code in general). This means the networking part is currently hard to maintain, hard to test, and something of a black hole, since there are no metrics and no good logging. Ideally we should spend some time refactoring this part of our system so we can be more confident in our networking; this would include introducing better visibility (logging + metrics), improving maintainability and readability, and, very importantly, reducing complexity and improving testability. We have already considered several times (brought up initially by @npepinpe) replacing it with gRPC, which would be a good opportunity but of course also comes with significant costs and risks. We need to discuss this further within the team.
True
Refactor internal communication layer - <!-- In case you have questions about our software we encourage everyone to participate in our community via the - Camunda Platform community forum https://forum.camunda.io/ or - Slack https://camunda-cloud.slack.com/ (For invite: https://camunda-slack-invite.herokuapp.com/) There you can exchange ideas with other Zeebe and Camunda Platform 8 users, as well as the product developers, and use the search to find answer to similar questions. This issue template is used by the Zeebe engineers to create general tasks. --> _Sorry if there exist already an issue, but I was not able to find it._ **Description** We have seen recently that we have many issues with the Atomix based internal networking/communication layer, which we use between gateway and broker and between broker - broker. For example https://github.com/zeebe-io/zeebe-chaos/issues/294 were we send request over several minutes without detecting that the node was already gone. When trying to fix this issue via https://github.com/camunda/zeebe/pull/11307 it turned out to be rather hard to test and reason about (about the code in general). This means right now the networking part is hard to maintain, hard to test and somehow a blackhole, since there are no metrics and no good logging. Ideally we should spent some time to refactor this part of our system to be more confident in our networking, this would include introducing better visibility (logging + metrics ), improve maintability and readability and very important reduce the complexity and improve the testability. We have already thought several times about (brought up initially by @npepinpe ) it to replace it also with grpc which would be a good opportunity, which also comes with lot of costs and risks of course. We need to discuss this further within the team.
non_process
refactor internal communication layer in case you have questions about our software we encourage everyone to participate in our community via the camunda platform community forum or slack for invite there you can exchange ideas with other zeebe and camunda platform users as well as the product developers and use the search to find answer to similar questions this issue template is used by the zeebe engineers to create general tasks sorry if there exist already an issue but i was not able to find it description we have seen recently that we have many issues with the atomix based internal networking communication layer which we use between gateway and broker and between broker broker for example were we send request over several minutes without detecting that the node was already gone when trying to fix this issue via it turned out to be rather hard to test and reason about about the code in general this means right now the networking part is hard to maintain hard to test and somehow a blackhole since there are no metrics and no good logging ideally we should spent some time to refactor this part of our system to be more confident in our networking this would include introducing better visibility logging metrics improve maintability and readability and very important reduce the complexity and improve the testability we have already thought several times about brought up initially by npepinpe it to replace it also with grpc which would be a good opportunity which also comes with lot of costs and risks of course we need to discuss this further within the team
0
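The Zeebe record above describes requests being retried for several minutes against a node that was already gone. A minimal sketch of the fix direction — bounding retries with a deadline so a dead peer is detected quickly — in Python; the `RemoteNode` transport and its `send` method are hypothetical stand-ins, not Zeebe/Atomix code:

```python
import time

class RemoteNode:
    """Hypothetical stand-in for a cluster peer; not part of Zeebe/Atomix."""
    def __init__(self, name, reachable=True):
        self.name = name
        self.reachable = reachable

    def send(self, request):
        if not self.reachable:
            return None  # a real transport would block or drop the packet
        return f"ack:{request}"

def send_with_deadline(node, request, timeout_s=5.0, retry_interval_s=1.0):
    """Retry a request, but give up once the deadline passes instead of
    retrying indefinitely against a node that may already be gone."""
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        response = node.send(request)
        if response is not None:
            return response
        time.sleep(retry_interval_s)
    raise TimeoutError(f"no response from {node.name} within {timeout_s}s")
```

The deadline also gives the visibility the issue asks for: a `TimeoutError` per dead peer is something that can be logged and counted, unlike an open-ended retry loop.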
20,809
27,568,658,713
IssuesEvent
2023-03-08 07:17:16
open-telemetry/opentelemetry-collector-contrib
https://api.github.com/repos/open-telemetry/opentelemetry-collector-contrib
closed
Add size in bytes to batch processor
Stale processor/batch
**Is your feature request related to a problem? Please describe.** The batch processor controls the batch size via `send_batch_size`, which is the number of spans or metrics. However, spans can vary in size, so it would be useful to instruct the batch processor to send the batch once a size in bytes is reached. **Describe the solution you'd like** Add `send_batch_size_bytes` to the batch processor https://github.com/open-telemetry/opentelemetry-collector/blob/master/processor/batchprocessor/README.md#batch-processor The batch processor will send the batch once the size in bytes is reached. **Describe alternatives you've considered** **Additional context** In Jaeger we would like to put the batch processor in front of the ES exporter that uses the Elasticsearch bulk API (sends multiple requests in a batch). Hence we would like to control how much data is sent to the storage. Related to https://github.com/jaegertracing/jaeger/pull/2295.
1.0
Add size in bytes to batch processor - **Is your feature request related to a problem? Please describe.** The Bulk processor controls the batch size via `send_batch_size` which is the number of spans or metrics. However the spans can vary in size and therefore it would be useful to instruct the batch processor to send the batch once a size in bytes is reached. **Describe the solution you'd like** Add `send_batch_size_bytes` to the Batch processor https://github.com/open-telemetry/opentelemetry-collector/blob/master/processor/batchprocessor/README.md#batch-processor The batch processor will send the batch once size in bytes is reached. **Describe alternatives you've considered** **Additional context** In Jaeger we would like to put the batch processor in front of ES exporter that uses Elasticsearch bulk API (sends multiple requests in batch). Hence we would like to control how much data is sent to the storage. Related to https://github.com/jaegertracing/jaeger/pull/2295.
process
add size in bytes to batch processor is your feature request related to a problem please describe the bulk processor controls the batch size via send batch size which is the number of spans or metrics however the spans can vary in size and therefore it would be useful to instruct the batch processor to send the batch once a size in bytes is reached describe the solution you d like add send batch size bytes to the batch processor the batch processor will send the batch once size in bytes is reached describe alternatives you ve considered additional context in jaeger we would like to put the batch processor in front of es exporter that uses elasticsearch bulk api sends multiple requests in batch hence we would like to control how much data is sent to the storage related to
1
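The feature request above asks the batch processor to flush on a byte threshold in addition to the existing item-count threshold. A minimal sketch of such a dual-threshold batcher in Python — the parameter names mirror the proposal (`send_batch_size`, `send_batch_size_bytes`); this is an illustration, not the collector's actual Go implementation:

```python
class Batcher:
    """Flushes a batch when either an item-count or a byte-size threshold
    is reached -- the behavior `send_batch_size_bytes` would add."""
    def __init__(self, send_batch_size=100, send_batch_size_bytes=1024):
        self.max_items = send_batch_size
        self.max_bytes = send_batch_size_bytes
        self.items, self.size = [], 0
        self.flushed = []  # stands in for the downstream exporter

    def add(self, item: bytes):
        self.items.append(item)
        self.size += len(item)
        if len(self.items) >= self.max_items or self.size >= self.max_bytes:
            self.flush()

    def flush(self):
        if self.items:
            self.flushed.append(list(self.items))
            self.items, self.size = [], 0
```

With a byte cap in place, a handful of unusually large spans triggers a flush early instead of inflating a count-based batch past what the Elasticsearch bulk API comfortably accepts.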
201,571
15,214,001,142
IssuesEvent
2021-02-17 12:37:13
CSOIreland/PxStat
https://api.github.com/repos/CSOIreland/PxStat
closed
[BUG] Pivot param missing in PxAPIv1
bug fixed released tested
Pivoting should create a "pivot" parameter as sibling of the "format" in the API. Same behavior of JSON-RPC, if pivoting is not specified, then no pivot occurs. ![image](https://user-images.githubusercontent.com/53212047/103761915-5d00d800-500f-11eb-8195-02fa4e15f22b.png)
1.0
[BUG] Pivot param missing in PxAPIv1 - Pivoting should create a "pivot" parameter as sibling of the "format" in the API. Same behavior of JSON-RPC, if pivoting is not specified, then no pivot occurs. ![image](https://user-images.githubusercontent.com/53212047/103761915-5d00d800-500f-11eb-8195-02fa4e15f22b.png)
non_process
pivot param missing in pivoting should create a pivot parameter as sibling of the format in the api same behavior of json rpc if pivoting is not specified then no pivot occurs
0
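The PxStat bug above expects a `pivot` parameter alongside `format`, with no pivoting at all when it is omitted (matching the JSON-RPC behavior). A minimal sketch of that default-off behavior in Python; the handler and field names are illustrative, not PxStat's actual API:

```python
def query_dataset(rows, fmt="JSON-stat", pivot=None):
    """Return rows unchanged when no pivot is requested; otherwise group
    the remaining fields under each distinct value of the pivot dimension."""
    if pivot is None:
        return rows  # no pivot parameter supplied -> no pivot occurs
    pivoted = {}
    for row in rows:
        pivoted.setdefault(row[pivot], []).append(
            {k: v for k, v in row.items() if k != pivot})
    return pivoted
```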
20,007
26,481,016,405
IssuesEvent
2023-01-17 14:42:20
prisma/language-tools
https://api.github.com/repos/prisma/language-tools
opened
Add completions tests for MongoDB Native Types on composite types `type`
process/candidate tech/typescript topic: tests topic: language server topic: native types team/schema topic: mongodb topic: composite types
Found during a PR review in https://github.com/prisma/language-tools/pull/1353#pullrequestreview-1250396352 We should add tests (currently none)
1.0
Add completions tests for MongogoDB Native Types on composite types `type` - Found during a PR review in https://github.com/prisma/language-tools/pull/1353#pullrequestreview-1250396352 We should add tests (currently none)
process
add completions tests for mongogodb native types on composite types type found during a pr review in we should add tests currently none
1
13,979
16,749,502,282
IssuesEvent
2021-06-11 20:28:16
googleapis/repo-automation-bots
https://api.github.com/repos/googleapis/repo-automation-bots
reopened
A canary is chirping
type: process
The dependencies and their versions are: { "dayjs": "^1.10.5", "gcf-utils": "^8.0.2" } at 2021-06-11 13:25:05 🐦
1.0
A canary is chirping - The dependencies and their versions are: { "dayjs": "^1.10.5", "gcf-utils": "^8.0.2" } at 2021 06-11 13:25:05 🐦
process
a canary is chirping the dependencies and their versions are dayjs gcf utils at 🐦
1
15,973
20,188,182,645
IssuesEvent
2022-02-11 01:15:59
savitamittalmsft/WAS-SEC-TEST
https://api.github.com/repos/savitamittalmsft/WAS-SEC-TEST
opened
Establish a security operations center (SOC)
WARP-Import WAF FEB 2021 Security Performance and Scalability Capacity Management Processes Operational Procedures Incident Response
<a href="https://docs.microsoft.com/azure/architecture/framework/security/security-operations">Establish a security operations center (SOC)</a> <p><b>Why Consider This?</b></p> A SOC has a critical role in limiting the time and access an attacker can get to valuable systems and data. In addition, it provides the vital role of detecting the presence of adversaries, reacting to an alert of suspicious activity, or proactively hunting for anomalous events in the enterprise activity logs. If your organization doesn't maintain this capability, it may be unaware of critical security incidents until long after they occur. <p><b>Context</b></p> <p><span>A SOC is a vital investment for an enterprise as it is core to limiting how much time and access attackers have in the organization. This ultimately increases the attacker's cost and decreases the benefit, which damages their return on investment (ROI) and motivation for attacking your organization. The SOC focus should be oriented toward limiting the time and access attackers can gain to the organization's assets in an attack to mitigate business risk.</span></p><p><span>The tasks of security operations are described well by the NIST Cybersecurity Framework functions of Detect, Respond, and Recover.</span></p><ul style="list-style-type:disc"><li value="1" style="text-indent: 0px;"><span><b>Detect </b></span><span>- Security operations must detect the presence of adversaries in the system, who are incented to stay hidden in most cases as this allows them to achieve their objectives unimpeded. 
This can take the form of reacting to an alert of suspicious activity or proactively hunting for anomalous events in the enterprise activity logs.</span></li><li value="2" style="margin-right: 0px;text-indent: 0px;"><span><b>Respond</b></span><span> - Upon detection of potential adversary action or campaign, security operations must rapidly investigate to identify whether it is an actual attack (true positive) or a false alarm (false positive) and then enumerate the scope and goal of the adversary operation.</span></li><li value="3" style="margin-right: 0px;text-indent: 0px;"><span><b>Recover</b></span><span> - The ultimate goal of security operations is to preserve or restore the security assurances (confidentiality, integrity, availability) of business services during and after an attack.</span></li></ul> <p><b>Suggested Actions</b></p> <p><span>Implement or enhance the existing SOC to encompass the detect, respond, and recover capabilities."nbsp; </span></p> <p><b>Learn More</b></p> <p><a href="https://docs.microsoft.com/en-us/azure/architecture/framework/security/security-operations" target="_blank"><span>Security operations</span></a><span /></p>
1.0
Establish a security operations center (SOC) - <a href="https://docs.microsoft.com/azure/architecture/framework/security/security-operations">Establish a security operations center (SOC)</a> <p><b>Why Consider This?</b></p> A SOC has a critical role in limiting the time and access an attacker can get to valuable systems and data. In addition, it provides the vital role of detecting the presence of adversaries, reacting to an alert of suspicious activity, or proactively hunting for anomalous events in the enterprise activity logs. If your organization doesn't maintain this capability, it may be unaware of critical security incidents until long after they occur. <p><b>Context</b></p> <p><span>A SOC is a vital investment for an enterprise as it is core to limiting how much time and access attackers have in the organization. This ultimately increases the attacker's cost and decreases the benefit, which damages their return on investment (ROI) and motivation for attacking your organization. The SOC focus should be oriented toward limiting the time and access attackers can gain to the organization's assets in an attack to mitigate business risk.</span></p><p><span>The tasks of security operations are described well by the NIST Cybersecurity Framework functions of Detect, Respond, and Recover.</span></p><ul style="list-style-type:disc"><li value="1" style="text-indent: 0px;"><span><b>Detect </b></span><span>- Security operations must detect the presence of adversaries in the system, who are incented to stay hidden in most cases as this allows them to achieve their objectives unimpeded. 
This can take the form of reacting to an alert of suspicious activity or proactively hunting for anomalous events in the enterprise activity logs.</span></li><li value="2" style="margin-right: 0px;text-indent: 0px;"><span><b>Respond</b></span><span> - Upon detection of potential adversary action or campaign, security operations must rapidly investigate to identify whether it is an actual attack (true positive) or a false alarm (false positive) and then enumerate the scope and goal of the adversary operation.</span></li><li value="3" style="margin-right: 0px;text-indent: 0px;"><span><b>Recover</b></span><span> - The ultimate goal of security operations is to preserve or restore the security assurances (confidentiality, integrity, availability) of business services during and after an attack.</span></li></ul> <p><b>Suggested Actions</b></p> <p><span>Implement or enhance the existing SOC to encompass the detect, respond, and recover capabilities."nbsp; </span></p> <p><b>Learn More</b></p> <p><a href="https://docs.microsoft.com/en-us/azure/architecture/framework/security/security-operations" target="_blank"><span>Security operations</span></a><span /></p>
process
establish a security operations center soc why consider this a soc has a critical role in limiting the time and access an attacker can get to valuable systems and data in addition it provides the vital role of detecting the presence of adversaries reacting to an alert of suspicious activity or proactively hunting for anomalous events in the enterprise activity logs if your organization doesn t maintain this capability it may be unaware of critical security incidents until long after they occur context a soc is a vital investment for an enterprise as it is core to limiting how much time and access attackers have in the organization this ultimately increases the attacker s cost and decreases the benefit which damages their return on investment roi and motivation for attacking your organization the soc focus should be oriented toward limiting the time and access attackers can gain to the organization s assets in an attack to mitigate business risk the tasks of security operations are described well by the nist cybersecurity framework functions of detect respond and recover detect security operations must detect the presence of adversaries in the system who are incented to stay hidden in most cases as this allows them to achieve their objectives unimpeded this can take the form of reacting to an alert of suspicious activity or proactively hunting for anomalous events in the enterprise activity logs respond upon detection of potential adversary action or campaign security operations must rapidly investigate to identify whether it is an actual attack true positive or a false alarm false positive and then enumerate the scope and goal of the adversary operation recover the ultimate goal of security operations is to preserve or restore the security assurances confidentiality integrity availability of business services during and after an attack suggested actions implement or enhance the existing soc to encompass the detect respond and recover capabilities nbsp learn more 
security operations
1
18,705
24,599,419,559
IssuesEvent
2022-10-14 11:08:46
GoogleCloudPlatform/fda-mystudies
https://api.github.com/repos/GoogleCloudPlatform/fda-mystudies
closed
[Consent API] [PM] Consent API Disabled > Participants details screen > Getting 'No records found' in the consent history
Bug Blocker P0 Participant manager Process: Fixed Process: Tested QA Process: Tested dev
**Steps:** 1. Install the mobile app 2. Sign in / Sign up 3. Enroll to the study 4. Withdraw from that study 5. Go, to PM 6. Navigate to participants details screen and Verify **AR:** Getting 'No records found' in the consent history **ER:** Participants consent history should be retained even when they withdraw from the study ![PM](https://user-images.githubusercontent.com/86007179/180776194-a60e5367-7691-4168-843b-2550a83027d7.png)
3.0
[Consent API] [PM] Consent API Disabled > Participants details screen > Getting 'No records found' in the consent history - **Steps:** 1. Install the mobile app 2. Sign in / Sign up 3. Enroll to the study 4. Withdraw from that study 5. Go, to PM 6. Navigate to participants details screen and Verify **AR:** Getting 'No records found' in the consent history **ER:** Participants consent history should be retained even when they withdraw from the study ![PM](https://user-images.githubusercontent.com/86007179/180776194-a60e5367-7691-4168-843b-2550a83027d7.png)
process
consent api disabled participants details screen getting no records found in the consent history steps install the mobile app sign in sign up enroll to the study withdraw from that study go to pm navigate to participants details screen and verify ar getting no records found in the consent history er participants consent history should be retained even when they withdraw from the study
1
34,277
7,806,037,647
IssuesEvent
2018-06-11 12:56:48
jolocom/smartwallet-app
https://api.github.com/repos/jolocom/smartwallet-app
closed
Better feedback for pending actions
code review required feature finished
### Description After a button is pressed and a certain action is triggered, the button should be disabled to prevent accidental further taps. In addition to being disabled, the text on the button can change to let the user know that an action is in progress. ### TODO - [ ] Add feedback on claim addition screen - [ ] Add feedback on entropy submission screen - [ ] Add feedback on sso confirmation screen
1.0
Better feedback for pending actions - ### Description After a button is pressed and a certain action triggered, the button should be disabled to prevent accidental further taps. In addition to disabling, the text on the button can change to let the user know that an action is in progress. ### TODO - [ ] Add feedback on claim addition screen - [ ] Add feedback on entropy submission screen - [ ] Add feedback on sso confirmation screen
non_process
better feedback for pending actions description after a button is pressed and a certain action triggered the button should be disabled to prevent accidental further taps in addition to disabling the text on the button can change to let the user know that an action is in progress todo add feedback on claim addition screen add feedback on entropy submission screen add feedback on sso confirmation screen
0
21,392
29,202,232,079
IssuesEvent
2023-05-21 00:37:41
devssa/onde-codar-em-salvador
https://api.github.com/repos/devssa/onde-codar-em-salvador
closed
[Hybrid | São Paulo/SP, Rio de Janeiro/RJ and Belo Horizonte/MG] SRE/DevOps - Senior Level | HrSoul
TELECOM DEVOPS AWS PROCESSOS INGLÊS UMA HELP WANTED HTTP SRE Stale
Hello! I'm Lucas Oliveira, and I work in IT Recruitment and Selection at HrSoul Consultoria. About HrSoul: created to meet the needs of the IT/Telecom and Services market, filling the gap left by consultancies without expertise in the field, as well as modernizing Human Resources processes for other segments. ![HrSoul Logo](https://user-images.githubusercontent.com/98987631/212730516-47bfddec-61e3-4e55-beae-ae8f0270fd75.png) We offer an opportunity for professionals with the following profile: ## **SRE/DevOps - Senior Level (3 openings) - SP/RJ/MG** Solid experience as a Site Reliability Engineer and in DevOps; AWS certification required; Fluent English is essential, for work on a global project; Availability for a hybrid model in one of the 3 regions is essential: São Paulo, Rio de Janeiro and/or Belo Horizonte. Further details can be discussed with us during the process. Do you know someone who might be interested? Referrals to Aileen Azevedo - [aazevedo@hrsoul.com.br](mailto:aazevedo@hrsoul.com.br) or register your CV on our site – [www.hrsoul.com.br](http://www.hrsoul.com.br/) Thank you very much!
1.0
[Híbrido | São Paulo/SP, Rio de Janeiro/RJ e Belo Horizonte/MG] SRE/DevOps - Nível Sênior | HrSoul - Olá! Sou o Lucas Oliveira, trabalho com Recrutamento e Seleção - TI, na HrSoul Consultoria. Sobre a HrSoul: Criada visando atender as necessidades do mercado de TI/Telecom e Serviços, suprindo a carência de consultorias com expertise na área, assim como inovar os processos de Recursos Humanos para os demais segmentos. ![HrSoul Logo](https://user-images.githubusercontent.com/98987631/212730516-47bfddec-61e3-4e55-beae-ae8f0270fd75.png) Oferecemos oportunidade para profissionais com o seguinte perfil: ## **SRE/DevOps - Nível Sênior (3 vagas) - SP/RJ/MG** Sólida experiência como Site Reliability Engineer e DevOps; Necessária certificação AWS; Imprescindível inglês fluente, para atuação em projeto global; Imprescindível disponibilidade para atuação em modelo híbrido em uma das 3 regiões: São Paulo, Rio de janeiro e/ou Belo Horizonte. Mais informações podem ser alinhadas conosco durante o processo, você conhece alguém que possa se interessar? Indicações para Aileen Azevedo - [aazevedo@hrsoul.com.br](mailto:aazevedo@hrsoul.com.br) ou cadastre seu CV em nosso site – [www.hrsoul.com.br](http://www.hrsoul.com.br/) Muito obrigado!
process
sre devops nível sênior hrsoul olá sou o lucas oliveira trabalho com recrutamento e seleção ti na hrsoul consultoria sobre a hrsoul criada visando atender as necessidades do mercado de ti telecom e serviços suprindo a carência de consultorias com expertise na área assim como inovar os processos de recursos humanos para os demais segmentos oferecemos oportunidade para profissionais com o seguinte perfil sre devops nível sênior vagas sp rj mg sólida experiência como site reliability engineer e devops necessária certificação aws imprescindível inglês fluente para atuação em projeto global imprescindível disponibilidade para atuação em modelo híbrido em uma das regiões são paulo rio de janeiro e ou belo horizonte mais informações podem ser alinhadas conosco durante o processo você conhece alguém que possa se interessar indicações para aileen azevedo mailto aazevedo hrsoul com br ou cadastre seu cv em nosso site – muito obrigado
1
1,465
4,044,550,096
IssuesEvent
2016-05-21 11:45:56
sysown/proxysql
https://api.github.com/repos/sysown/proxysql
closed
Disable re-execution retry logic
ADMIN CONNECTION POOL MYSQL PROTOCOL QUERY PROCESSOR
If a query is killed on the MySQL server, ProxySQL will retry to re-execute it. This behavior is enabled by default and there is no way to disable it or make it configurable. We should add a new global variable.
1.0
Disable re-execution retry logic - If a query is being killed in the mysql server, ProxySQL will retry to re-executed. This behavior is enabled by default and there is no way to disable it or make it configurable. We should add a new global variable.
process
disable re execution retry logic if a query is being killed in the mysql server proxysql will retry to re executed this behavior is enabled by default and there is no way to disable it or make it configurable we should add a new global variable
1
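The ProxySQL record above asks for a global variable that disables re-execution of killed queries. A minimal sketch of such a switch in Python — the `query_retries_on_failure` name and the `QueryProxy` class are illustrative, not ProxySQL's actual configuration:

```python
class QueryProxy:
    """Sketch of the requested global variable: setting the retry count
    to 0 disables automatic re-execution of failed (e.g. killed) queries."""
    def __init__(self, query_retries_on_failure=1):
        self.retries = query_retries_on_failure  # 0 disables re-execution

    def execute(self, run_query):
        attempts = 1 + self.retries
        last_error = None
        for _ in range(attempts):
            try:
                return run_query()
            except RuntimeError as err:  # e.g. query killed on the backend
                last_error = err
        raise last_error
```

With the flag at 0, a killed query surfaces its error immediately instead of being silently re-run, which matters when the query was killed on purpose.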
1,366
3,923,646,278
IssuesEvent
2016-04-22 12:20:37
dotnet/corefx
https://api.github.com/repos/dotnet/corefx
reopened
ServiceController StopAndStart test is failing intermittently
System.ServiceProcess
I'm at about 50% over several builds from the last few days, it's usually that the service hasn't finished starting in the allotted time: ``` System.ServiceProcess.Tests.ServiceControllerTests.StopAndStart [FAIL] Assert.Equal() Failure Expected: Running Actual: StartPending Stack Trace: D:\dd\Blue\CoreFx\src\System.ServiceProcess.ServiceController\tests\ System.ServiceProcess.ServiceController.Tests\ServiceControllerTests.cs(154,0 ): at System.ServiceProcess.Tests.ServiceControllerTests.StopAndStart() ``` One time it seemed to leave the service process still running, which resulted in me having to hunt down the abandoned service to stop it so I could clear out the bin directory.
1.0
ServiceController StopAndStart test is failing intermittently - I'm at about 50% over several builds from the last few days, it's usually that the service hasn't finished starting in the allotted time: ``` System.ServiceProcess.Tests.ServiceControllerTests.StopAndStart [FAIL] Assert.Equal() Failure Expected: Running Actual: StartPending Stack Trace: D:\dd\Blue\CoreFx\src\System.ServiceProcess.ServiceController\tests\ System.ServiceProcess.ServiceController.Tests\ServiceControllerTests.cs(154,0 ): at System.ServiceProcess.Tests.ServiceControllerTests.StopAndStart() ``` One time it seemed to leave the service process still running, which resulted in me having to hunt down the abandoned service to stop it so I could clear out the bin directory.
process
servicecontroller stopandstart test is failing intermittently i m at about over several builds from the last few days it s usually that the service hasn t finished starting in the allotted time system serviceprocess tests servicecontrollertests stopandstart assert equal failure expected running actual startpending stack trace d dd blue corefx src system serviceprocess servicecontroller tests system serviceprocess servicecontroller tests servicecontrollertests cs at system serviceprocess tests servicecontrollertests stopandstart one time it seemed to leave the service process still running which resulted in me having to hunt down the abandoned service to stop it so i could clear out the bin directory
1
6,965
10,118,703,636
IssuesEvent
2019-07-31 09:39:56
BlesseNtumble/GalaxySpace
https://api.github.com/repos/BlesseNtumble/GalaxySpace
closed
Отключение параметра "enablePlateOreDict" ломает рецепт сжатого сплава "SDHC-120"
1.12.2 in the process of correcting
**Please fill in the form below:** ------------------------------------------------------------------------ 1. Minecraft version: 1.12.2 2. Galacticraft version: 4.0.2.211 3. GalaxySpace version: 2.0.8 4. AsmodeusCore version (for 2.0.1 version and above): 0.0.8 5. Side (Single player (SSP), Multiplayer (SMP), or SSP opened to LAN (LAN)): SSP Description of the issue: ------------------------------------------------------------------------ Отключение параметра `enablePlateOreDict` ломает рецепт сжатого сплава "SDHC-120". Возможно, какие-то другие рецепты также повреждены. Screenshot/Video: ------------------------------------------------------------------------ ![2019-07-30_14 07 07](https://user-images.githubusercontent.com/35406698/62125018-7e1ab080-b2d4-11e9-8cc1-33cdec2859a6.png) Attached log file (or url on pastebin.com): ------------------------------------------------------------------------
1.0
Отключение параметра "enablePlateOreDict" ломает рецепт сжатого сплава "SDHC-120" - **Please fill in the form below:** ------------------------------------------------------------------------ 1. Minecraft version: 1.12.2 2. Galacticraft version: 4.0.2.211 3. GalaxySpace version: 2.0.8 4. AsmodeusCore version (for 2.0.1 version and above): 0.0.8 5. Side (Single player (SSP), Multiplayer (SMP), or SSP opened to LAN (LAN)): SSP Description of the issue: ------------------------------------------------------------------------ Отключение параметра `enablePlateOreDict` ломает рецепт сжатого сплава "SDHC-120". Возможно, какие-то другие рецепты также повреждены. Screenshot/Video: ------------------------------------------------------------------------ ![2019-07-30_14 07 07](https://user-images.githubusercontent.com/35406698/62125018-7e1ab080-b2d4-11e9-8cc1-33cdec2859a6.png) Attached log file (or url on pastebin.com): ------------------------------------------------------------------------
process
отключение параметра enableplateoredict ломает рецепт сжатого сплава sdhc please fill in the form below minecraft version galacticraft version galaxyspace version asmodeuscore version for version and above side single player ssp multiplayer smp or ssp opened to lan lan ssp description of the issue отключение параметра enableplateoredict ломает рецепт сжатого сплава sdhc возможно какие то другие рецепты также повреждены screenshot video attached log file or url on pastebin com
1
21,678
30,121,338,040
IssuesEvent
2023-06-30 15:23:56
USGS-WiM/StreamStats
https://api.github.com/repos/USGS-WiM/StreamStats
closed
BP: Descriptions for checkboxes
Batch Processor
@harper-wavra: I wonder if we need some text or something saying what the checkboxes are for. Right now, they are kind of just hanging out by themselves-- "Select Computations:"
1.0
BP: Descriptions for checkboxes - @harper-wavra: I wonder if we need some text or something saying what the checkboxes are for. Right now, they are kind of just hanging out by themselves-- "Select Computations:"
process
bp descriptions for checkboxes harper wavra i wonder if we need some text or something saying what the checkboxes are for right now they are kind of just hanging out by themselves select computations
1
59,960
14,514,645,520
IssuesEvent
2020-12-13 09:14:09
TalErez86/Test-Repository123
https://api.github.com/repos/TalErez86/Test-Repository123
opened
CVE-2015-6420 (High) detected in commons-collections-3.2.1.jar
security vulnerability
## CVE-2015-6420 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>commons-collections-3.2.1.jar</b></p></summary> <p>Types that extend and augment the Java Collections Framework.</p> <p>Path to dependency file: Test-Repository123/multi-module-maven-project-master/register-service/pom.xml</p> <p>Path to vulnerable library: /home/wss-scanner/.m2/repository/commons-collections/commons-collections/3.2.1/commons-collections-3.2.1.jar</p> <p> Dependency Hierarchy: - uberfire-io-7.40.0.Final.jar (Root Library) - uberfire-commons-7.40.0.Final.jar - artemis-jms-client-2.3.0.jar - artemis-core-client-2.3.0.jar - artemis-commons-2.3.0.jar - commons-beanutils-1.9.2.jar - :x: **commons-collections-3.2.1.jar** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/TalErez86/Test-Repository123/commit/4e4afbf004b9fd8ee375cc5c910630e64cb30a8e">4e4afbf004b9fd8ee375cc5c910630e64cb30a8e</a></p> <p>Found in base branch: <b>main</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> Serialized-object interfaces in certain Cisco Collaboration and Social Media; Endpoint Clients and Client Software; Network Application, Service, and Acceleration; Network and Content Security Devices; Network Management and Provisioning; Routing and Switching - Enterprise and Service Provider; Unified Computing; Voice and Unified Communications Devices; Video, Streaming, TelePresence, and Transcoding Devices; Wireless; and Cisco Hosted Services products allow remote attackers to execute arbitrary commands via a crafted serialized Java object, related to the Apache Commons Collections (ACC) library. 
<p>Publish Date: 2015-12-15 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2015-6420>CVE-2015-6420</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 2 Score Details (<b>7.5</b>)</summary> <p> Base Score Metrics not available</p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://github.com/apache/commons-collections/tree/collections-3.2.2,https://github.com/apache/commons-collections/tree/collections-4.1">https://github.com/apache/commons-collections/tree/collections-3.2.2,https://github.com/apache/commons-collections/tree/collections-4.1</a></p> <p>Release Date: 2015-12-15</p> <p>Fix Resolution: commons-collections:commons-collections3.2.2,org.apache.commons:commons-collections4:4.1</p> </p> </details> <p></p> <!-- <REMEDIATE>{"isOpenPROnVulnerability":false,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"Java","groupId":"commons-collections","packageName":"commons-collections","packageVersion":"3.2.1","isTransitiveDependency":true,"dependencyTree":"org.uberfire:uberfire-io:7.40.0.Final;org.uberfire:uberfire-commons:7.40.0.Final;org.apache.activemq:artemis-jms-client:2.3.0;org.apache.activemq:artemis-core-client:2.3.0;org.apache.activemq:artemis-commons:2.3.0;commons-beanutils:commons-beanutils:1.9.2;commons-collections:commons-collections:3.2.1","isMinimumFixVersionAvailable":true,"minimumFixVersion":"commons-collections:commons-collections3.2.2,org.apache.commons:commons-collections4:4.1"}],"vulnerabilityIdentifier":"CVE-2015-6420","vulnerabilityDetails":"Serialized-object interfaces in certain Cisco Collaboration and Social Media; Endpoint Clients and Client Software; Network Application, Service, and Acceleration; Network and Content 
Security Devices; Network Management and Provisioning; Routing and Switching - Enterprise and Service Provider; Unified Computing; Voice and Unified Communications Devices; Video, Streaming, TelePresence, and Transcoding Devices; Wireless; and Cisco Hosted Services products allow remote attackers to execute arbitrary commands via a crafted serialized Java object, related to the Apache Commons Collections (ACC) library.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2015-6420","cvss2Severity":"high","cvss2Score":"7.5","extraData":{}}</REMEDIATE> -->
True
CVE-2015-6420 (High) detected in commons-collections-3.2.1.jar - ## CVE-2015-6420 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>commons-collections-3.2.1.jar</b></p></summary> <p>Types that extend and augment the Java Collections Framework.</p> <p>Path to dependency file: Test-Repository123/multi-module-maven-project-master/register-service/pom.xml</p> <p>Path to vulnerable library: /home/wss-scanner/.m2/repository/commons-collections/commons-collections/3.2.1/commons-collections-3.2.1.jar</p> <p> Dependency Hierarchy: - uberfire-io-7.40.0.Final.jar (Root Library) - uberfire-commons-7.40.0.Final.jar - artemis-jms-client-2.3.0.jar - artemis-core-client-2.3.0.jar - artemis-commons-2.3.0.jar - commons-beanutils-1.9.2.jar - :x: **commons-collections-3.2.1.jar** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/TalErez86/Test-Repository123/commit/4e4afbf004b9fd8ee375cc5c910630e64cb30a8e">4e4afbf004b9fd8ee375cc5c910630e64cb30a8e</a></p> <p>Found in base branch: <b>main</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> Serialized-object interfaces in certain Cisco Collaboration and Social Media; Endpoint Clients and Client Software; Network Application, Service, and Acceleration; Network and Content Security Devices; Network Management and Provisioning; Routing and Switching - Enterprise and Service Provider; Unified Computing; Voice and Unified Communications Devices; Video, Streaming, TelePresence, and Transcoding Devices; Wireless; and Cisco Hosted Services products allow remote attackers to execute arbitrary commands via a crafted serialized Java object, related to the Apache Commons Collections (ACC) library. 
<p>Publish Date: 2015-12-15 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2015-6420>CVE-2015-6420</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 2 Score Details (<b>7.5</b>)</summary> <p> Base Score Metrics not available</p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://github.com/apache/commons-collections/tree/collections-3.2.2,https://github.com/apache/commons-collections/tree/collections-4.1">https://github.com/apache/commons-collections/tree/collections-3.2.2,https://github.com/apache/commons-collections/tree/collections-4.1</a></p> <p>Release Date: 2015-12-15</p> <p>Fix Resolution: commons-collections:commons-collections3.2.2,org.apache.commons:commons-collections4:4.1</p> </p> </details> <p></p> <!-- <REMEDIATE>{"isOpenPROnVulnerability":false,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"Java","groupId":"commons-collections","packageName":"commons-collections","packageVersion":"3.2.1","isTransitiveDependency":true,"dependencyTree":"org.uberfire:uberfire-io:7.40.0.Final;org.uberfire:uberfire-commons:7.40.0.Final;org.apache.activemq:artemis-jms-client:2.3.0;org.apache.activemq:artemis-core-client:2.3.0;org.apache.activemq:artemis-commons:2.3.0;commons-beanutils:commons-beanutils:1.9.2;commons-collections:commons-collections:3.2.1","isMinimumFixVersionAvailable":true,"minimumFixVersion":"commons-collections:commons-collections3.2.2,org.apache.commons:commons-collections4:4.1"}],"vulnerabilityIdentifier":"CVE-2015-6420","vulnerabilityDetails":"Serialized-object interfaces in certain Cisco Collaboration and Social Media; Endpoint Clients and Client Software; Network Application, Service, and Acceleration; Network and Content 
Security Devices; Network Management and Provisioning; Routing and Switching - Enterprise and Service Provider; Unified Computing; Voice and Unified Communications Devices; Video, Streaming, TelePresence, and Transcoding Devices; Wireless; and Cisco Hosted Services products allow remote attackers to execute arbitrary commands via a crafted serialized Java object, related to the Apache Commons Collections (ACC) library.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2015-6420","cvss2Severity":"high","cvss2Score":"7.5","extraData":{}}</REMEDIATE> -->
non_process
cve high detected in commons collections jar cve high severity vulnerability vulnerable library commons collections jar types that extend and augment the java collections framework path to dependency file test multi module maven project master register service pom xml path to vulnerable library home wss scanner repository commons collections commons collections commons collections jar dependency hierarchy uberfire io final jar root library uberfire commons final jar artemis jms client jar artemis core client jar artemis commons jar commons beanutils jar x commons collections jar vulnerable library found in head commit a href found in base branch main vulnerability details serialized object interfaces in certain cisco collaboration and social media endpoint clients and client software network application service and acceleration network and content security devices network management and provisioning routing and switching enterprise and service provider unified computing voice and unified communications devices video streaming telepresence and transcoding devices wireless and cisco hosted services products allow remote attackers to execute arbitrary commands via a crafted serialized java object related to the apache commons collections acc library publish date url a href cvss score details base score metrics not available suggested fix type upgrade version origin a href release date fix resolution commons collections commons org apache commons commons isopenpronvulnerability false ispackagebased true isdefaultbranch true packages vulnerabilityidentifier cve vulnerabilitydetails serialized object interfaces in certain cisco collaboration and social media endpoint clients and client software network application service and acceleration network and content security devices network management and provisioning routing and switching enterprise and service provider unified computing voice and unified communications devices video streaming telepresence and transcoding 
devices wireless and cisco hosted services products allow remote attackers to execute arbitrary commands via a crafted serialized java object related to the apache commons collections acc library vulnerabilityurl
0
387,296
11,459,555,652
IssuesEvent
2020-02-07 07:36:29
input-output-hk/ouroboros-network
https://api.github.com/repos/input-output-hk/ouroboros-network
closed
Audit memory allocation
byron consensus optimisation priority medium
After letting a node sync the first 540k blocks: ``` 761,473,422,968 bytes allocated in the heap 63,238,559,440 bytes copied during GC 128,763,248 bytes maximum residency (292 sample(s)) 3,529,360 bytes maximum slop 122 MB total memory in use (0 MB lost due to fragmentation) Tot time (elapsed) Avg pause Max pause Gen 0 724781 colls, 0 par 730.289s 731.350s 0.0010s 0.0256s Gen 1 292 colls, 0 par 11.830s 11.962s 0.0410s 0.0940s TASKS: 7 (1 bound, 6 peak workers (6 total), using -N1) SPARKS: 0(0 converted, 0 overflowed, 0 dud, 0 GC'd, 0 fizzled) INIT time 0.000s ( 0.000s elapsed) MUT time 773.008s (709.649s elapsed) GC time 742.119s (743.313s elapsed) EXIT time 0.001s ( 0.009s elapsed) Total time 1515.128s (1452.970s elapsed) Alloc rate 985,078,173 bytes per MUT second Productivity 51.0% of total user, 48.8% of total elapsed ``` The productivity is low, but not dramatically low. We should do some allocation profiling to see whether we're not allocating too much at some places.
1.0
Audit memory allocation - After letting a node sync the first 540k blocks: ``` 761,473,422,968 bytes allocated in the heap 63,238,559,440 bytes copied during GC 128,763,248 bytes maximum residency (292 sample(s)) 3,529,360 bytes maximum slop 122 MB total memory in use (0 MB lost due to fragmentation) Tot time (elapsed) Avg pause Max pause Gen 0 724781 colls, 0 par 730.289s 731.350s 0.0010s 0.0256s Gen 1 292 colls, 0 par 11.830s 11.962s 0.0410s 0.0940s TASKS: 7 (1 bound, 6 peak workers (6 total), using -N1) SPARKS: 0(0 converted, 0 overflowed, 0 dud, 0 GC'd, 0 fizzled) INIT time 0.000s ( 0.000s elapsed) MUT time 773.008s (709.649s elapsed) GC time 742.119s (743.313s elapsed) EXIT time 0.001s ( 0.009s elapsed) Total time 1515.128s (1452.970s elapsed) Alloc rate 985,078,173 bytes per MUT second Productivity 51.0% of total user, 48.8% of total elapsed ``` The productivity is low, but not dramatically low. We should do some allocation profiling to see whether we're not allocating too much at some places.
non_process
audit memory allocation after letting a node sync the first blocks bytes allocated in the heap bytes copied during gc bytes maximum residency sample s bytes maximum slop mb total memory in use mb lost due to fragmentation tot time elapsed avg pause max pause gen colls par gen colls par tasks bound peak workers total using sparks converted overflowed dud gc d fizzled init time elapsed mut time elapsed gc time elapsed exit time elapsed total time elapsed alloc rate bytes per mut second productivity of total user of total elapsed the productivity is low but not dramatically low we should do some allocation profiling to see whether we re not allocating too much at some places
0
9,726
12,728,233,906
IssuesEvent
2020-06-25 01:53:46
googleapis/google-cloud-ruby
https://api.github.com/repos/googleapis/google-cloud-ruby
closed
spanner: run integration tests against the emulator on Kokoro
api: spanner type: process
We want to run the spanner integration tests against the Cloud Spanner Emulator in addition to the production backend.
1.0
spanner: run integration tests against the emulator on Kokoro - We want to run the spanner integration tests against the Cloud Spanner Emulator in addition to the production backend.
process
spanner run integration tests against the emulator on kokoro we want to run the spanner integration tests against the cloud spanner emulator in addition to the production backend
1
121,863
17,664,398,735
IssuesEvent
2021-08-22 06:42:52
ghc-dev/David-Gregory
https://api.github.com/repos/ghc-dev/David-Gregory
opened
CVE-2020-9488 (Low) detected in log4j-core-2.8.2.jar
security vulnerability
## CVE-2020-9488 - Low Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>log4j-core-2.8.2.jar</b></p></summary> <p>The Apache Log4j Implementation</p> <p>Library home page: <a href="https://logging.apache.org/log4j/2.x/log4j-core/">https://logging.apache.org/log4j/2.x/log4j-core/</a></p> <p>Path to dependency file: David-Gregory/pom.xml</p> <p>Path to vulnerable library: ository/org/apache/logging/log4j/log4j-core/2.8.2/log4j-core-2.8.2.jar</p> <p> Dependency Hierarchy: - :x: **log4j-core-2.8.2.jar** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/ghc-dev/David-Gregory/commit/d3b2618d00419192e837d6089e513dbe3cfce299">d3b2618d00419192e837d6089e513dbe3cfce299</a></p> <p>Found in base branch: <b>master</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/low_vul.png' width=19 height=20> Vulnerability Details</summary> <p> Improper validation of certificate with host mismatch in Apache Log4j SMTP appender. This could allow an SMTPS connection to be intercepted by a man-in-the-middle attack which could leak any log messages sent through that appender. <p>Publish Date: 2020-04-27 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-9488>CVE-2020-9488</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>3.7</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: High - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: Low - Integrity Impact: None - Availability Impact: None </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. 
</p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://issues.apache.org/jira/browse/LOG4J2-2819">https://issues.apache.org/jira/browse/LOG4J2-2819</a></p> <p>Release Date: 2020-04-27</p> <p>Fix Resolution: org.apache.logging.log4j:log4j-core:2.13.2</p> </p> </details> <p></p> *** :rescue_worker_helmet: Automatic Remediation is available for this issue <!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"Java","groupId":"org.apache.logging.log4j","packageName":"log4j-core","packageVersion":"2.8.2","packageFilePaths":["/pom.xml"],"isTransitiveDependency":false,"dependencyTree":"org.apache.logging.log4j:log4j-core:2.8.2","isMinimumFixVersionAvailable":true,"minimumFixVersion":"org.apache.logging.log4j:log4j-core:2.13.2"}],"baseBranches":["master"],"vulnerabilityIdentifier":"CVE-2020-9488","vulnerabilityDetails":"Improper validation of certificate with host mismatch in Apache Log4j SMTP appender. This could allow an SMTPS connection to be intercepted by a man-in-the-middle attack which could leak any log messages sent through that appender.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-9488","cvss3Severity":"low","cvss3Score":"3.7","cvss3Metrics":{"A":"None","AC":"High","PR":"None","S":"Unchanged","C":"Low","UI":"None","AV":"Network","I":"None"},"extraData":{}}</REMEDIATE> -->
True
CVE-2020-9488 (Low) detected in log4j-core-2.8.2.jar - ## CVE-2020-9488 - Low Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>log4j-core-2.8.2.jar</b></p></summary> <p>The Apache Log4j Implementation</p> <p>Library home page: <a href="https://logging.apache.org/log4j/2.x/log4j-core/">https://logging.apache.org/log4j/2.x/log4j-core/</a></p> <p>Path to dependency file: David-Gregory/pom.xml</p> <p>Path to vulnerable library: ository/org/apache/logging/log4j/log4j-core/2.8.2/log4j-core-2.8.2.jar</p> <p> Dependency Hierarchy: - :x: **log4j-core-2.8.2.jar** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/ghc-dev/David-Gregory/commit/d3b2618d00419192e837d6089e513dbe3cfce299">d3b2618d00419192e837d6089e513dbe3cfce299</a></p> <p>Found in base branch: <b>master</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/low_vul.png' width=19 height=20> Vulnerability Details</summary> <p> Improper validation of certificate with host mismatch in Apache Log4j SMTP appender. This could allow an SMTPS connection to be intercepted by a man-in-the-middle attack which could leak any log messages sent through that appender. 
<p>Publish Date: 2020-04-27 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-9488>CVE-2020-9488</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>3.7</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: High - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: Low - Integrity Impact: None - Availability Impact: None </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://issues.apache.org/jira/browse/LOG4J2-2819">https://issues.apache.org/jira/browse/LOG4J2-2819</a></p> <p>Release Date: 2020-04-27</p> <p>Fix Resolution: org.apache.logging.log4j:log4j-core:2.13.2</p> </p> </details> <p></p> *** :rescue_worker_helmet: Automatic Remediation is available for this issue <!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"Java","groupId":"org.apache.logging.log4j","packageName":"log4j-core","packageVersion":"2.8.2","packageFilePaths":["/pom.xml"],"isTransitiveDependency":false,"dependencyTree":"org.apache.logging.log4j:log4j-core:2.8.2","isMinimumFixVersionAvailable":true,"minimumFixVersion":"org.apache.logging.log4j:log4j-core:2.13.2"}],"baseBranches":["master"],"vulnerabilityIdentifier":"CVE-2020-9488","vulnerabilityDetails":"Improper validation of certificate with host mismatch in Apache Log4j SMTP appender. 
This could allow an SMTPS connection to be intercepted by a man-in-the-middle attack which could leak any log messages sent through that appender.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-9488","cvss3Severity":"low","cvss3Score":"3.7","cvss3Metrics":{"A":"None","AC":"High","PR":"None","S":"Unchanged","C":"Low","UI":"None","AV":"Network","I":"None"},"extraData":{}}</REMEDIATE> -->
non_process
cve low detected in core jar cve low severity vulnerability vulnerable library core jar the apache implementation library home page a href path to dependency file david gregory pom xml path to vulnerable library ository org apache logging core core jar dependency hierarchy x core jar vulnerable library found in head commit a href found in base branch master vulnerability details improper validation of certificate with host mismatch in apache smtp appender this could allow an smtps connection to be intercepted by a man in the middle attack which could leak any log messages sent through that appender publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity high privileges required none user interaction none scope unchanged impact metrics confidentiality impact low integrity impact none availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution org apache logging core rescue worker helmet automatic remediation is available for this issue isopenpronvulnerability true ispackagebased true isdefaultbranch true packages istransitivedependency false dependencytree org apache logging core isminimumfixversionavailable true minimumfixversion org apache logging core basebranches vulnerabilityidentifier cve vulnerabilitydetails improper validation of certificate with host mismatch in apache smtp appender this could allow an smtps connection to be intercepted by a man in the middle attack which could leak any log messages sent through that appender vulnerabilityurl
0
7,712
10,819,120,003
IssuesEvent
2019-11-08 13:43:38
OI-wiki/OI-wiki
https://api.github.com/repos/OI-wiki/OI-wiki
closed
CodeVS 当前不可用
低优先级 / P3 需要处理 / Need Processing
CodeVS 近日因为备案失效的原因,陷入了不可用的状态。 考虑到 CodeVS 长期几乎无人维护的现状,短期内恢复访问的概率不大。 通过搜索发现 OI-wiki 上有少量的 CodeVS 题目链接,因 vjudge,OI-archive 等网站均未存储 CodeVS 的题目,因此应考虑将这些题目进行更换。 另外,bzoj 近日不可用的原因大概率也与备案失效有关(备案系统里没有查到 bzoj 旧备案号对应的记录,但按域名查询查到了于 10/21 验证的新备案记录)。
1.0
CodeVS 当前不可用 - CodeVS 近日因为备案失效的原因,陷入了不可用的状态。 考虑到 CodeVS 长期几乎无人维护的现状,短期内恢复访问的概率不大。 通过搜索发现 OI-wiki 上有少量的 CodeVS 题目链接,因 vjudge,OI-archive 等网站均未存储 CodeVS 的题目,因此应考虑将这些题目进行更换。 另外,bzoj 近日不可用的原因大概率也与备案失效有关(备案系统里没有查到 bzoj 旧备案号对应的记录,但按域名查询查到了于 10/21 验证的新备案记录)。
process
codevs 当前不可用 codevs 近日因为备案失效的原因,陷入了不可用的状态。 考虑到 codevs 长期几乎无人维护的现状,短期内恢复访问的概率不大。 通过搜索发现 oi wiki 上有少量的 codevs 题目链接,因 vjudge oi archive 等网站均未存储 codevs 的题目,因此应考虑将这些题目进行更换。 另外,bzoj 近日不可用的原因大概率也与备案失效有关(备案系统里没有查到 bzoj 旧备案号对应的记录,但按域名查询查到了于 验证的新备案记录)。
1
49,868
6,275,584,487
IssuesEvent
2017-07-18 07:21:57
GDquest/Godot-2d-game-course
https://api.github.com/repos/GDquest/Godot-2d-game-course
opened
02. Core loop
design prototype
_Is the core loop satisfying?_ Prototype the core gameplay loop, refine its design and test it - [ ] Add the ability to slash through objects, 180 degrees - [ ] Add throwing pebble - [ ] Add destructible “grass” - [ ] Add a dummy enemy, with knockback support
1.0
02. Core loop - _Is the core loop satisfying?_ Prototype the core gameplay loop, refine its design and test it - [ ] Add the ability to slash through objects, 180 degrees - [ ] Add throwing pebble - [ ] Add destructible “grass” - [ ] Add a dummy enemy, with knockback support
non_process
core loop is the core loop satisfying prototype the core gameplay loop refine its design and test it add the ability to slash through objects degrees add throwing pebble add destructible “grass” add a dummy enemy with knockback support
0
41,415
10,449,998,014
IssuesEvent
2019-09-19 09:35:48
line/armeria
https://api.github.com/repos/line/armeria
closed
Spring WebFlux client isn't working with body extraction methods
defect
Since 5.1.3.RELEASE, Spring WebFlux's body handling logic has changed when a response has an error. See below https://github.com/spring-projects/spring-framework/blob/c187cb2fa13af2a6ff6e92d588ba70b458707460/spring-webflux/src/main/java/org/springframework/web/reactive/function/client/DefaultWebClient.java#L443-L446 This changed logic handles the body twice if the status code of the response determines that there is an error. But `ArmeriaHttpClientResponseSubscriber` doesn't remember that the response body is canceled or completed. Therefore an erroneous response using methods like `bodyToMono()` doesn't end.
1.0
Spring WebFlux client isn't working with body extraction methods - Since 5.1.3.RELEASE, Spring WebFlux's body handling logic has changed when a response has an error. See below https://github.com/spring-projects/spring-framework/blob/c187cb2fa13af2a6ff6e92d588ba70b458707460/spring-webflux/src/main/java/org/springframework/web/reactive/function/client/DefaultWebClient.java#L443-L446 This changed logic handles the body twice if the status code of the response determines that there is an error. But `ArmeriaHttpClientResponseSubscriber` doesn't remember that the response body is canceled or completed. Therefore an erroneous response using methods like `bodyToMono()` doesn't end.
non_process
spring webflux client isn t working with body extraction methods release or later spring webflux s body handling logic has changed when response has an error see below this changed logic handles body twice if status code of the response determines that there is an error but armeriahttpclientresponsesubscriber doesn t remember that the response body is canceled or completed therefore erroneous response using methods like bodytomono doesn t end
0