Dataset schema (column name, dtype, and observed value range or class count):

| column | dtype | range / classes |
|---|---|---|
| Unnamed: 0 | int64 | 3 to 832k |
| id | float64 | 2.49B to 32.1B |
| type | stringclasses | 1 value |
| created_at | stringlengths | 19 to 19 |
| repo | stringlengths | 5 to 112 |
| repo_url | stringlengths | 34 to 141 |
| action | stringclasses | 3 values |
| title | stringlengths | 2 to 430 |
| labels | stringlengths | 4 to 347 |
| body | stringlengths | 5 to 237k |
| index | stringclasses | 7 values |
| text_combine | stringlengths | 96 to 237k |
| label | stringclasses | 2 values |
| text | stringlengths | 96 to 219k |
| binary_label | int64 | 0 to 1 |
---
row: 37,692 · id: 8,474,807,754 · type: IssuesEvent · created_at: 2018-10-24 17:09:09
repo: brainvisa/testbidon · repo_url: https://api.github.com/repos/brainvisa/testbidon
action: opened
title: volume: segfault in VolumeRef copy constructor
labels: Category: soma-io Component: Resolution Priority: Normal Status: New Tracker: Defect
body:
--- Author Name: **Souedet, Nicolas** (Souedet, Nicolas) Original Redmine Issue: 15714, https://bioproj.extra.cea.fr/redmine/issues/15714 Original Date: 2016-11-14 ---

Here is a simple example that was working a few days ago, but that generates a segfault now:

```cpp
//--- cartobase --------------------------------------------------------------
#include <cartobase/allocator/allocator.h>
//--- cartodata --------------------------------------------------------------
#include <cartodata/volume/volume.h>
//----------------------------------------------------------------------------
using namespace std;
using namespace carto;

int main(int argc, char** argv)
{
  VolumeRef<float> fullVolume;
  fullVolume = Volume<float>(1, 1, 1, 1, AllocatorContext(), false);
  return 0;
}
```

It seems to happen during assignment, in an implicit conversion. It goes through the all() method, which converts a volume to a single boolean value, but the volume is not allocated in this case, so it tries to access unallocated memory.
index: 1.0 · label: non_comp · binary_label: 0
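The failure mode this report describes, an implicit conversion to bool that reads through storage that was never allocated, is not specific to C++. A minimal Python analogy of the guarded pattern (all names here are hypothetical, not the carto API):

```python
class VolumeRefSketch:
    """Toy stand-in for a volume handle; names are hypothetical."""

    def __init__(self, data=None):
        self.data = data  # None models an unallocated volume

    def all(self):
        # Guard: refuse to reduce over unallocated storage, instead of
        # reading through a dangling pointer as in the reported segfault.
        if self.data is None:
            raise RuntimeError("volume is not allocated")
        return all(self.data)

    def __bool__(self):
        # An implicit truthiness conversion that called all() blindly
        # would crash on unallocated volumes; check allocation first.
        return self.data is not None and self.all()
```

The point of the sketch is only that the conversion path must check allocation before touching the data.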
---
row: 5,807 · id: 8,249,236,255 · type: IssuesEvent · created_at: 2018-09-11 20:55:29
repo: kwaschny/unwanted-twitch · repo_url: https://api.github.com/repos/kwaschny/unwanted-twitch
action: closed
title: Extension not working on Waterfox?
labels: incompatibility wontfix
body:
Hi, I tried this extension on latest Waterfox portable: https://storage-waterfox.netdna-ssl.com/releases/win64/portable/WaterfoxPortable_56.2.2_English.paf.exe But nothing appears on the address bar or on Twitch. Maybe it's just incompatible with this version which is using an old Firefox source?
index: True · label: comp · binary_label: 1
---
row: 589,576 · id: 17,753,645,365 · type: IssuesEvent · created_at: 2021-08-28 09:55:38
repo: Biologer/Biologer · repo_url: https://api.github.com/repos/Biologer/Biologer
action: opened
title: Coordinate should not have more than 6 decimal places
labels: enhancement priority:low
body:
There is no need for more than 6 decimal places in coordinates. I.e., after the user picks a location on the map, the software should round the numbers to 6 decimal places.
index: 1.0 · label: non_comp · binary_label: 0
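The rounding this record asks for is straightforward; a minimal sketch (the function name is hypothetical, not the Biologer codebase):

```python
def round_coordinate(lat, lon, places=6):
    """Round a picked map location to a fixed number of decimal places.

    Six decimal places of latitude/longitude correspond to roughly
    0.1 m at the equator, which is why extra precision adds nothing.
    """
    return round(lat, places), round(lon, places)
```

For example, `round_coordinate(44.81765432101, 20.46123456789)` returns `(44.817654, 20.461235)`.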
---
row: 14,411 · id: 17,383,054,849 · type: IssuesEvent · created_at: 2021-08-01 04:42:20
repo: jbredwards/Fluidlogged-API · repo_url: https://api.github.com/repos/jbredwards/Fluidlogged-API
action: closed
title: completely opaque water (with fluidlogged api + optifine + better foliage + forgelin)
labels: bug incompatibility
body:
Not sure if it's because I'm using the dev build you sent me or not, but my water is now completely opaque. https://i.imgur.com/lwI8Qxn.png
index: True · label: comp · binary_label: 1
---
row: 8,404 · id: 10,426,739,733 · type: IssuesEvent · created_at: 2019-09-16 18:17:48
repo: ClangBuiltLinux/linux · repo_url: https://api.github.com/repos/ClangBuiltLinux/linux
action: closed
title: -Wincompatible-pointer-types in drivers/net/ethernet/mellanox/mlx5/core/steering/dr_icm_pool.c
labels: -Wincompatible-pointer-types [BUG] linux-next [PATCH] Submitted
body:
```
drivers/net/ethernet/mellanox/mlx5/core/steering/dr_icm_pool.c:121:8: error: incompatible pointer types passing 'u64 *' (aka 'unsigned long long *') to parameter of type 'phys_addr_t *' (aka 'unsigned int *') [-Werror,-Wincompatible-pointer-types]
                &icm_mr->dm.addr, &icm_mr->dm.obj_id);
                ^~~~~~~~~~~~~~~~
include/linux/mlx5/driver.h:1092:39: note: passing argument to parameter 'addr' here
                        u64 length, u16 uid, phys_addr_t *addr, u32 *obj_id);
                                                          ^
1 error generated.
```

Patch sent: https://lore.kernel.org/lkml/20190905021415.8936-1-natechancellor@gmail.com/
index: True · label: comp · binary_label: 1
---
row: 461,870 · id: 13,237,479,685 · type: IssuesEvent · created_at: 2020-08-18 21:46:31
repo: Railcraft/Railcraft · repo_url: https://api.github.com/repos/Railcraft/Railcraft
action: closed
title: Cart detectors incorrectly detecting max speed locomotive
labels: bug cannot reproduce low priority
body:
**Description of the Bug**
Cart detector will detect a locomotive travelling at max speed when there is an entire block separating it and the locomotive. Importantly, a locking track may exist there.

**To Reproduce**
1. Place segment of normal tracks
2. (Optional) Place locking track (set to any mode)
3. Place Cart Detector (Any) underneath the rail immediately following the locking track
4. Place steam locomotive on track, fill completely with water and coal blocks
5. Turn locomotive to full speed towards locking track
6. Cart detector will emit redstone even though locking track prevents the locomotive from being adjacent

**Expected behavior**
Cart detector should not emit a redstone signal as the locomotive is never adjacent to it.

**Screenshots & Video**
![image](https://user-images.githubusercontent.com/64187936/80067409-1ce0ee80-8581-11ea-91b0-bbc8a27da8c4.png)
![image](https://user-images.githubusercontent.com/64187936/80067028-70067180-8580-11ea-9377-92c70076849d.png)

**Logs & Environment**
Environment:
Forge version: forge-1.12.2-14.23.5.2847 downloaded from curseforge with SHA-1: 8398503A59F4EBE1EA878498ADF2B32CB9B470C1
Railcraft version: v12.0.0 downloaded from curseforge with SHA-1: EA2085A509B816BB9A3CDD79F2F44175B588737A
Minecraft version: v1.12.2 (singleplayer creative)
Operating System: Windows
Other mods installed: none
Logs: Uploaded in 2 parts since text size exceeded 512M:
https://pastebin.com/evnkjnbM
https://pastebin.com/hXgyAAhX

**Additional context**
Sometimes on your first placement, this will not occur. However, this reliably occurs every time after the first occurrence, which is within 1-3 attempts at replicating. This issue does not in fact depend on locking tracks, but can ruin a token based system relying on them.
index: 1.0 · label: non_comp · binary_label: 0
---
row: 477,405 · id: 13,761,648,087 · type: IssuesEvent · created_at: 2020-10-07 08:01:11
repo: StrangeLoopGames/EcoIssues · repo_url: https://api.github.com/repos/StrangeLoopGames/EcoIssues
action: closed
title: [0.9.0 staging-1706] Steam tractor harvester should be lifted when turned off
labels: Category: Gameplay Priority: High Status: Fixed Status: Reopen
body:
This part:
![image](https://user-images.githubusercontent.com/45708377/89524538-6c27f380-d7ed-11ea-85b8-23f4dbec29f1.png)
Because now I can't move on Ramps:
![image](https://user-images.githubusercontent.com/45708377/89524588-81048700-d7ed-11ea-8bde-29134f54c570.png)
index: 1.0 · label: non_comp · binary_label: 0
---
row: 100,580 · id: 16,489,912,012 · type: IssuesEvent · created_at: 2021-05-25 01:09:29
repo: billmcchesney1/flowgate · repo_url: https://api.github.com/repos/billmcchesney1/flowgate
action: opened
title: CVE-2021-31597 (High) detected in xmlhttprequest-ssl-1.5.5.tgz
labels: security vulnerability
body:
## CVE-2021-31597 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>xmlhttprequest-ssl-1.5.5.tgz</b></p></summary> <p>XMLHttpRequest for Node</p> <p>Library home page: <a href="https://registry.npmjs.org/xmlhttprequest-ssl/-/xmlhttprequest-ssl-1.5.5.tgz">https://registry.npmjs.org/xmlhttprequest-ssl/-/xmlhttprequest-ssl-1.5.5.tgz</a></p> <p>Path to dependency file: flowgate/ui/package.json</p> <p>Path to vulnerable library: flowgate/ui/node_modules/xmlhttprequest-ssl/package.json</p> <p> Dependency Hierarchy: - karma-5.0.9.tgz (Root Library) - socket.io-2.3.0.tgz - socket.io-client-2.3.0.tgz - engine.io-client-3.4.4.tgz - :x: **xmlhttprequest-ssl-1.5.5.tgz** (Vulnerable Library) <p>Found in base branch: <b>master</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> The xmlhttprequest-ssl package before 1.6.1 for Node.js disables SSL certificate validation by default, because rejectUnauthorized (when the property exists but is undefined) is considered to be false within the https.request function of Node.js. In other words, no certificate is ever rejected. 
<p>Publish Date: 2021-04-23 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-31597>CVE-2021-31597</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.4</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: High - Integrity Impact: High - Availability Impact: Low </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-31597">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-31597</a></p> <p>Release Date: 2021-04-23</p> <p>Fix Resolution: xmlhttprequest-ssl - 1.6.1</p> </p> </details> <p></p>
index: True · label: non_comp · binary_label: 0
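The root cause in this CVE, a missing option silently interpreted as "do not verify", is a classic default-handling pitfall. A Python sketch of the safe pattern (names hypothetical; this is an analogy, not the actual xmlhttprequest-ssl code):

```python
def resolve_reject_unauthorized(options):
    """Decide whether certificate validation should be enforced.

    The vulnerable pattern treats a missing/undefined flag as falsy and
    so silently disables validation; the safe pattern defaults to True
    unless the caller explicitly opts out.
    """
    # Vulnerable analogue: bool(options.get("rejectUnauthorized")) -> off when unset.
    # Safe: only an explicit False disables validation.
    return options.get("rejectUnauthorized", True) is not False
```

With this default, an empty options dict still enforces validation, matching the fix shipped in xmlhttprequest-ssl 1.6.1 in spirit.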
---
row: 129,325 · id: 12,404,656,606 · type: IssuesEvent · created_at: 2020-05-21 15:54:15
repo: Bionus/imgbrd-grabber · repo_url: https://api.github.com/repos/Bionus/imgbrd-grabber
action: closed
title: Curious about keyboard shortcuts and image preview adjustments
labels: documentation question
body:
I looked around but couldn't find anything detailing keyboard shortcuts. I especially can't find anything that says how you can deselect all the images you have selected (via Ctrl+click). Additionally, is there any way to make the image previews in the search results area larger? I dug through settings but didn't really see anything for this.
index: 1.0 · label: non_comp · binary_label: 0
---
row: 2,112 · id: 4,839,079,147 · type: IssuesEvent · created_at: 2016-11-09 07:53:34
repo: synfig/synfig · repo_url: https://api.github.com/repos/synfig/synfig
action: opened
title: Gradient: allow editing as list and add control to animating it
labels: Compatibility Core Editing Feature request GUI
body:
It would be nice to be able to edit a gradient directly in the parameters dock, as is currently possible e.g. with curves. Also, the current way gradients interpolate is pretty limiting: it seems to be impossible to animate moving gradient control points, because new points are created on tweens instead.
index: True · label: comp · binary_label: 1
---
row: 132,601 · id: 10,760,532,027 · type: IssuesEvent · created_at: 2019-10-31 18:47:54
repo: PulpQE/pulp-smash · repo_url: https://api.github.com/repos/PulpQE/pulp-smash
action: closed
title: Test file repo with basic auth
labels: Issue Type: Test Case pulp 2 - closed - wontfix
body:
https://pulp.plan.io/issues/2720

1. Create file repo with basic auth
2. Run repo sync
3. Assert: Sync is successful
index: 1.0 · label: non_comp · binary_label: 0
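The three steps of this test case can be sketched as a self-contained check. The client below is a stub with hypothetical method names, not the real pulp-smash or Pulp API:

```python
class StubPulpClient:
    """Stand-in for a Pulp API client; the real test would use pulp-smash."""

    def create_file_repo(self, name, username, password):
        # Basic auth credentials would be attached to the repo's remote here.
        return {"name": name, "auth": (username, password)}

    def sync(self, repo):
        # A real sync contacts the server and polls the task; the stub
        # just reports success so the test shape is visible.
        return {"state": "completed"}


def test_file_repo_sync_with_basic_auth():
    client = StubPulpClient()
    repo = client.create_file_repo("file-repo", "user", "secret")  # step 1
    task = client.sync(repo)                                       # step 2
    assert task["state"] == "completed"                            # step 3
```

The structure (create with credentials, sync, assert on the task state) is the part that carries over to the real test.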
---
row: 3,815 · id: 6,668,759,303 · type: IssuesEvent · created_at: 2017-10-03 16:53:10
repo: jupyterlab/jupyterlab · repo_url: https://api.github.com/repos/jupyterlab/jupyterlab
action: closed
title: can not use codeMirror's search or replace
labels: pkg:fileeditor tag:Browser Compatibility type:Bug
body:
Ctrl + F and Ctrl + Shift + R cannot be used. I cannot input anything; it is like this: ![1505809083 1](https://user-images.githubusercontent.com/9131195/30582519-8ddc55cc-9d56-11e7-82d0-e3b50256068a.png)
index: True · label: comp · binary_label: 1
---
row: 164,409 · id: 25,961,602,418 · type: IssuesEvent · created_at: 2022-12-19 00:01:42
repo: JuliaDynamics/Entropies.jl · repo_url: https://api.github.com/repos/JuliaDynamics/Entropies.jl
action: closed
title: `IndirectEntropy` should not subtype `Entropy`.
labels: discussion-design
body:
While getting CausalityTools ready for Entropies v2, I encountered the following issue. There are now two ways of computing entropies: either

- `entropy(e::Entropy, x, est::ProbabilitiesEstimator)`, or
- `entropy(e::IndirectEntropy, x)`.

I have two problems with this:

### 1. The indirect entropies are not *entropies*.

They are types indicating that a certain type of entropy is being estimated using a certain numerical procedure. A much more natural choice would be to have just one signature `entropy(e::Entropy, x, est::Union{ProbabilitiesEstimator, EntropyEstimator})`, where `EntropyEstimator` is what we now call `IndirectEntropy`. The API should be predictable, and the types should be named in a way that reflects what is actually going on. I argue that `IndirectEntropy` is a bit misleading, because it can be confused with `Entropy`, which is something else entirely.

Actually, now we equate (through the type hierarchy) an entropic quantity with its estimator. But the former is used to *estimate* the latter. They should be different things. We already make this distinction for probabilities: `Probabilities` is the quantity being computed, while `ProbabilitiesEstimator` is the procedure used to estimate it. `Entropy`s should have the corresponding quantity `EntropyEstimator` that computes it. Thus:

- `entropy(::Kraskov, x)` should be `entropy(::Shannon, x, ::Kraskov)`.
- For example, if the `Kraskov` estimator can be used to estimate some other entropy, then dispatch must be implemented for that particular entropy, i.e. `entropy(::SomeOtherEntropy, x, ::Kraskov)`.

### 2. Two distinct signatures lead to code duplication

The way we've designed it now, I have to duplicate source code using `entropy` where I want to be able to input both probability estimators and entropy estimators. I have to make:

- One method for `ProbabilitiesEstimator` (which has one order of arguments), and
- Another method for `IndirectEntropy`.

I *can*, of course, write wrapper functions and circumvent this issue. But that shouldn't be necessary.
1.0
`IndirectEntropy` should not subtype `Entropy`. - While getting CausalityTools ready for Entropies v2, I encountered the following issue: There are now two ways of computing entropies: either - `entropy(e::Entropy, x, est::ProbabilitiesEstimator)` - `entropy(e::IndirectEntropy, x)`. I have two problems with this: ### 1. The indirect entropies are not *entropies*. They are types indicating that a certain type of entropy is being estimated using a certain numerical procedure. A much more natural choice would be to have just one signature `entropy(e::Entropy, x, est::Union{ProbabilitiesEstimator, EntropyEstimator}`, where `EntropyEstimator` is what we now call `IndirectEntropy`. The API should be predictable, and the types should be named in a way that reflects what is actually going on. I argue that `IndirectEntropy` is a bit misleading, because it can be confused with `Entropy`, which is something else entirely. Actually, now we equate (through the type hierarchy) an entropic quantity with its estimator. But the former is used to *estimate* the latter. They should be different things. We already make this distinction for probabilities. `Probabilities` is the quantity being computed, while `ProbabilitiesEstimator` is the procedure used to estimate it. `Entropy`s should have the corresponding quantity `EntropyEstimator` that computes it. Thus: - `entropy(::Kraskov, x)` should be `entropy(::Shannon, x, ::Kraskov)`. - For example, if the `Kraskov` estimator can be used to estimate some other entropy, then dispatch must be implemented for that particular entropy, i.e. `entropy(::SomeOtherEntropy, x, ::Kraskov)`. ### 2. Two distinct signatures lead to code duplication The way we've designed it now, I have to duplicate source code using `entropy` where I want to be able to input both probability estimators and entropy estimators. I have to make: - One method for `ProbabilitiesEstimator` (which has one order of arguments) and, - Aother method for `IndirectEntropy`. 
I *can*, of course, write wrapper functions and circumvent this issue. But that shouldn't be necessary.
non_comp
indirectentropy should not subtype entropy while getting causalitytools ready for entropies i encountered the following issue there are now two ways of computing entropies either entropy e entropy x est probabilitiesestimator entropy e indirectentropy x i have two problems with this the indirect entropies are not entropies they are types indicating that a certain type of entropy is being estimated using a certain numerical procedure a much more natural choice would be to have just one signature entropy e entropy x est union probabilitiesestimator entropyestimator where entropyestimator is what we now call indirectentropy the api should be predictable and the types should be named in a way that reflects what is actually going on i argue that indirectentropy is a bit misleading because it can be confused with entropy which is something else entirely actually now we equate through the type hierarchy an entropic quantity with its estimator but the former is used to estimate the latter they should be different things we already make this distinction for probabilities probabilities is the quantity being computed while probabilitiesestimator is the procedure used to estimate it entropy s should have the corresponding quantity entropyestimator that computes it thus entropy kraskov x should be entropy shannon x kraskov for example if the kraskov estimator can be used to estimate some other entropy then dispatch must be implemented for that particular entropy i e entropy someotherentropy x kraskov two distinct signatures lead to code duplication the way we ve designed it now i have to duplicate source code using entropy where i want to be able to input both probability estimators and entropy estimators i have to make one method for probabilitiesestimator which has one order of arguments and another method for indirectentropy i can of course write wrapper functions and circumvent this issue but that shouldn t be necessary
0
58,448
7,154,765,394
IssuesEvent
2018-01-26 09:55:16
lalrpop/lalrpop
https://api.github.com/repos/lalrpop/lalrpop
closed
larger grammars generate a LOT of code
design-work-needed urgent
Hello, I try to parse php expressions using lalrpop: https://github.com/timglabisch/rustphp/blob/c79060d6495a55174fc2ad5710d5774d8ec94d67/src/calculator1.lalrpop my problem is that cargo run becomes incredibly slow: time cargo run 1234.76s user 19.88s system 99% cpu 20:58.78 total the file is very huge (~50mb): cat src/calculator1.rs | wc -l 1044621 is there something fundamentally wrong or is this expected?
1.0
larger grammars generate a LOT of code - Hello, I try to parse php expressions using lalrpop: https://github.com/timglabisch/rustphp/blob/c79060d6495a55174fc2ad5710d5774d8ec94d67/src/calculator1.lalrpop my problem is that cargo run becomes incredibly slow: time cargo run 1234.76s user 19.88s system 99% cpu 20:58.78 total the file is very huge (~50mb): cat src/calculator1.rs | wc -l 1044621 is there something fundamentally wrong or is this expected?
non_comp
larger grammars generate a lot of code hello i try to parse php expressions using lalrpop my problem is that cargo run becomes incredibly slow time cargo run user system cpu total the file is very huge cat src rs wc l is there something fundamentally wrong or is this expected
0
15,364
19,590,459,886
IssuesEvent
2022-01-05 12:22:04
TycheSoftwares/Print-Invoice-Delivery-Notes-for-WooCommerce
https://api.github.com/repos/TycheSoftwares/Print-Invoice-Delivery-Notes-for-WooCommerce
closed
Quantity column shows incorrect in the print invoice when working with WC Composite products
type :bug client issue type: compatibility
The issue is when we work with the Composite products, the quantity column in the print invoice is showing incorrectly. Steps to reproduce: 1) Create a composite product 2) Add 3 components or child products to it. 3) Place an order 4) Print the invoice for that order and you will see 4 in the quantity column. This means the Original Composite product is also being added to the count. However, the quantity should be 3, i.e. the count of only the child products. The same issue is being faced when we work with the WC Product Bundle plugin. Link: https://wordpress.org/support/topic/wrong-total-quantity-due-to-composite-bundle/#post-13542652
True
Quantity column shows incorrect in the print invoice when working with WC Composite products - The issue is when we work with the Composite products, the quantity column in the print invoice is showing incorrectly. Steps to reproduce: 1) Create a composite product 2) Add 3 components or child products to it. 3) Place an order 4) Print the invoice for that order and you will see 4 in the quantity column. This means the Original Composite product is also being added to the count. However, the quantity should be 3, i.e. the count of only the child products. The same issue is being faced when we work with the WC Product Bundle plugin. Link: https://wordpress.org/support/topic/wrong-total-quantity-due-to-composite-bundle/#post-13542652
comp
quantity column shows incorrect in the print invoice when working with wc composite products the issue is when we work with the composite products the quantity column in the print invoice is showing incorrectly steps to reproduce create a composite product add components or child products to it place an order print the invoice for that order and you will see in the quantity column this means the original composite product is also being added to the count however the quantity should be i e the count of only the child products the same issue is being faced when we work with the wc product bundle plugin link
1
55,425
14,451,298,226
IssuesEvent
2020-12-08 10:46:36
NREL/EnergyPlus
https://api.github.com/repos/NREL/EnergyPlus
opened
Severe error calculated density of air is negative
Defect
Issue overview -------------- user reported on unmet hours that he got the following error: ``` ** Severe ** PsyRhoAirFnPbTdbW: RhoAir (Density of Air) is calculated <= 0 [-9.04677]. ** ~~~ ** pb =[101100.00], tdb=[-312.09], w=[0.0000000]. ** ~~~ ** Routine=CorrectZoneHumRat, During Warmup, Environment=DSDAYTANGIER COOLING 0.4%, at Simulation time=08/21 00:12 - 00:13 ** Fatal ** Program terminates due to preceding condition. ``` user supplied the file, which was at E+ 8.7. I updated the file to v9.4 and instead of getting this severe I am getting an actual crash, which isn't something we want to happen, even if it is later found the defect file had problems. ``` Performing Zone Sizing Simulation ...for Sizing Period: #1 DSDAYTANGIER COOLING 0.4% Warming up Warming up Warming up Performing Zone Sizing Simulation ...for Sizing Period: #2 DSDAYTANGIER HEATING 99.6% Calculating System sizing ...for Sizing Period: #1 DSDAYTANGIER COOLING 0.4% Calculating System sizing ...for Sizing Period: #2 DSDAYTANGIER HEATING 99.6% Adjusting Air System Sizing Adjusting Standard 62.1 Ventilation Sizing Initializing Simulation double free or corruption (out) Aborted (core dumped) ``` ### Details Some additional details for this issue (if relevant): - Platform (Operating system, version) - Version of EnergyPlus (if using an intermediate build, include SHA) - Unmethours link: https://unmethours.com/question/49383/severe-error-calculated-density-of-air-is-negative/ Backtrace ``` (lldb) bt * thread #1, name = 'energyplus', stop reason = hit program assert frame #0: 0x00007ffff11c7fb7 libc.so.6`__GI_raise(sig=<unavailable>) at raise.c:51 frame #1: 0x00007ffff11c9921 libc.so.6`__GI_abort at abort.c:79 frame #2: 0x00007ffff11b948a libc.so.6`__assert_fail_base(fmt="%s%s%s:%u: %s%sAssertion `%s' failed.\n%n", assertion="contains( i )", file="/home/julien/Software/Others/EnergyPlus/third_party/ObjexxFCL/src/ObjexxFCL/Array1.hh", line=1172, function="T& ObjexxFCL::Array1< 
<template-parameter-1-1> >::operator()(int) [with T = EnergyPlus::WaterThermalTanks::StratifiedNodeData]") at assert.c:92 frame #3: 0x00007ffff11b9502 libc.so.6`__GI___assert_fail(assertion=<unavailable>, file=<unavailable>, line=<unavailable>, function=<unavailable>) at assert.c:101 * frame #4: 0x00007ffff4d1a4b1 libenergyplusapi.so.9.4.0`ObjexxFCL::Array1<EnergyPlus::WaterThermalTanks::StratifiedNodeData>::operator(this=0x00005555564ff530, i=0)(int) at Array1.hh:1172 frame #5: 0x00007ffff4cea195 libenergyplusapi.so.9.4.0`EnergyPlus::WaterThermalTanks::WaterThermalTankData::CalcWaterThermalTankStratified(this=0x00005555564ff100, state=0x00007fffffffbdf0) at WaterThermalTanks.cc:7218 frame #6: 0x00007ffff4d11882 libenergyplusapi.so.9.4.0`EnergyPlus::WaterThermalTanks::WaterThermalTankData::CalcStandardRatings(this=0x00005555564ff100, state=0x00007fffffffbdf0) at WaterThermalTanks.cc:11451 frame #7: 0x00007ffff4ce47d4 libenergyplusapi.so.9.4.0`EnergyPlus::WaterThermalTanks::WaterThermalTankData::initialize(this=0x00005555564ff100, state=0x00007fffffffbdf0, FirstHVACIteration=true) at WaterThermalTanks.cc:6183 frame #8: 0x00007ffff4c8c218 libenergyplusapi.so.9.4.0`EnergyPlus::WaterThermalTanks::WaterThermalTankData::onInitLoopEquip(this=0x00005555564ff100, state=0x00007fffffffbdf0, calledFromLocation=0x0000555555e45098) at WaterThermalTanks.cc:164 frame #9: 0x00007ffff5ffa702 libenergyplusapi.so.9.4.0`EnergyPlus::DataPlant::CompData::simulate(this=0x0000555555e44f00, state=0x00007fffffffbdf0, FirstHVACIteration=true, InitLoopEquip=0x00007ffff7d6fbdf, GetCompSizFac=true) at Component.cc:119 frame #10: 0x00007ffff5cf1f1a libenergyplusapi.so.9.4.0`EnergyPlus::PlantManager::InitializeLoops(state=0x00007fffffffbdf0, FirstHVACIteration=true) at PlantManager.cc:2204 frame #11: 0x00007ffff5cdbd1a libenergyplusapi.so.9.4.0`EnergyPlus::PlantManager::ManagePlantLoops(state=0x00007fffffffbdf0, FirstHVACIteration=true, SimAirLoops=0x00007ffff7d77439, 
SimZoneEquipment=0x00007ffff7d7743c, SimNonZoneEquipment=0x00007ffff7d7743d, SimPlantLoops=0x00007ffff7d7743b, SimElecCircuits=0x00007ffff7d7743a) at PlantManager.cc:222 frame #12: 0x00007ffff5827b00 libenergyplusapi.so.9.4.0`EnergyPlus::HVACManager::SimSelectedEquipment(state=0x00007fffffffbdf0, SimAirLoops=0x00007ffff7d77439, SimZoneEquipment=0x00007ffff7d7743c, SimNonZoneEquipment=0x00007ffff7d7743d, SimPlantLoops=0x00007ffff7d7743b, SimElecCircuits=0x00007ffff7d7743a, FirstHVACIteration=0x00007fffffffae2e, LockPlantFlows=false) at HVACManager.cc:1832 frame #13: 0x00007ffff581d054 libenergyplusapi.so.9.4.0`EnergyPlus::HVACManager::SimHVAC(state=0x00007fffffffbdf0) at HVACManager.cc:842 frame #14: 0x00007ffff58197f4 libenergyplusapi.so.9.4.0`EnergyPlus::HVACManager::ManageHVAC(state=0x00007fffffffbdf0) at HVACManager.cc:358 frame #15: 0x00007ffff5a1457a libenergyplusapi.so.9.4.0`EnergyPlus::HeatBalanceAirManager::CalcHeatBalanceAir(state=0x00007fffffffbdf0) at HeatBalanceAirManager.cc:4356 frame #16: 0x00007ffff59b9f7f libenergyplusapi.so.9.4.0`EnergyPlus::HeatBalanceAirManager::ManageAirHeatBalance(state=0x00007fffffffbdf0) at HeatBalanceAirManager.cc:204 frame #17: 0x00007ffff3e451ee libenergyplusapi.so.9.4.0`EnergyPlus::HeatBalanceSurfaceManager::ManageSurfaceHeatBalance(state=0x00007fffffffbdf0) at HeatBalanceSurfaceManager.cc:281 frame #18: 0x00007ffff5a33c72 libenergyplusapi.so.9.4.0`EnergyPlus::HeatBalanceManager::ManageHeatBalance(state=0x00007fffffffbdf0) at HeatBalanceManager.cc:363 frame #19: 0x00007ffff47f9269 libenergyplusapi.so.9.4.0`EnergyPlus::SimulationManager::SetupSimulation(state=0x00007fffffffbdf0, ErrorsFound=0x00007fffffffbc23) at SimulationManager.cc:2111 frame #20: 0x00007ffff47ea196 libenergyplusapi.so.9.4.0`EnergyPlus::SimulationManager::ManageSimulation(state=0x00007fffffffbdf0) at SimulationManager.cc:366 frame #21: 0x00007ffff3a08570 libenergyplusapi.so.9.4.0`RunEnergyPlus(state=0x00007fffffffbdf0, 
filepath="\xe0\xbd\xff\xff\xff\U0000007f"...) at EnergyPlusPgm.cc:400 frame #22: 0x00007ffff3a07a17 libenergyplusapi.so.9.4.0`EnergyPlusPgm(state=0x00007fffffffbdf0, filepath="\xe0\xbd\xff\xff\xff\U0000007f"...) at EnergyPlusPgm.cc:224 frame #23: 0x000055555578ef66 energyplus`main(argc=6, argv=0x00007fffffffccc8) at main.cc:60 frame #24: 0x00007ffff11aabf7 libc.so.6`__libc_start_main(main=(energyplus`main at main.cc:56), argc=6, argv=0x00007fffffffccc8, init=<unavailable>, fini=<unavailable>, rtld_fini=<unavailable>, stack_end=0x00007fffffffccb8) at libc-start.c:310 frame #25: 0x000055555578ec8a energyplus`_start + 42 ``` ### Checklist Add to this list or remove from it as applicable. This is a simple templated set of guidelines. - [x] Defect file added (list location of defect file here) - [ ] Ticket added to Pivotal for defect (development team task) - [ ] Pull request created (the pull request will have additional tasks related to reviewing changes that fix this defect)
1.0
Severe error calculated density of air is negative - Issue overview -------------- user reported on unmet hours that he got the following error: ``` ** Severe ** PsyRhoAirFnPbTdbW: RhoAir (Density of Air) is calculated <= 0 [-9.04677]. ** ~~~ ** pb =[101100.00], tdb=[-312.09], w=[0.0000000]. ** ~~~ ** Routine=CorrectZoneHumRat, During Warmup, Environment=DSDAYTANGIER COOLING 0.4%, at Simulation time=08/21 00:12 - 00:13 ** Fatal ** Program terminates due to preceding condition. ``` user supplied the file, which was at E+ 8.7. I updated the file to v9.4 and instead of getting this severe I am getting an actual crash, which isn't something we want to happen, even if it is later found the defect file had problems. ``` Performing Zone Sizing Simulation ...for Sizing Period: #1 DSDAYTANGIER COOLING 0.4% Warming up Warming up Warming up Performing Zone Sizing Simulation ...for Sizing Period: #2 DSDAYTANGIER HEATING 99.6% Calculating System sizing ...for Sizing Period: #1 DSDAYTANGIER COOLING 0.4% Calculating System sizing ...for Sizing Period: #2 DSDAYTANGIER HEATING 99.6% Adjusting Air System Sizing Adjusting Standard 62.1 Ventilation Sizing Initializing Simulation double free or corruption (out) Aborted (core dumped) ``` ### Details Some additional details for this issue (if relevant): - Platform (Operating system, version) - Version of EnergyPlus (if using an intermediate build, include SHA) - Unmethours link: https://unmethours.com/question/49383/severe-error-calculated-density-of-air-is-negative/ Backtrace ``` (lldb) bt * thread #1, name = 'energyplus', stop reason = hit program assert frame #0: 0x00007ffff11c7fb7 libc.so.6`__GI_raise(sig=<unavailable>) at raise.c:51 frame #1: 0x00007ffff11c9921 libc.so.6`__GI_abort at abort.c:79 frame #2: 0x00007ffff11b948a libc.so.6`__assert_fail_base(fmt="%s%s%s:%u: %s%sAssertion `%s' failed.\n%n", assertion="contains( i )", file="/home/julien/Software/Others/EnergyPlus/third_party/ObjexxFCL/src/ObjexxFCL/Array1.hh", line=1172, 
function="T& ObjexxFCL::Array1< <template-parameter-1-1> >::operator()(int) [with T = EnergyPlus::WaterThermalTanks::StratifiedNodeData]") at assert.c:92 frame #3: 0x00007ffff11b9502 libc.so.6`__GI___assert_fail(assertion=<unavailable>, file=<unavailable>, line=<unavailable>, function=<unavailable>) at assert.c:101 * frame #4: 0x00007ffff4d1a4b1 libenergyplusapi.so.9.4.0`ObjexxFCL::Array1<EnergyPlus::WaterThermalTanks::StratifiedNodeData>::operator(this=0x00005555564ff530, i=0)(int) at Array1.hh:1172 frame #5: 0x00007ffff4cea195 libenergyplusapi.so.9.4.0`EnergyPlus::WaterThermalTanks::WaterThermalTankData::CalcWaterThermalTankStratified(this=0x00005555564ff100, state=0x00007fffffffbdf0) at WaterThermalTanks.cc:7218 frame #6: 0x00007ffff4d11882 libenergyplusapi.so.9.4.0`EnergyPlus::WaterThermalTanks::WaterThermalTankData::CalcStandardRatings(this=0x00005555564ff100, state=0x00007fffffffbdf0) at WaterThermalTanks.cc:11451 frame #7: 0x00007ffff4ce47d4 libenergyplusapi.so.9.4.0`EnergyPlus::WaterThermalTanks::WaterThermalTankData::initialize(this=0x00005555564ff100, state=0x00007fffffffbdf0, FirstHVACIteration=true) at WaterThermalTanks.cc:6183 frame #8: 0x00007ffff4c8c218 libenergyplusapi.so.9.4.0`EnergyPlus::WaterThermalTanks::WaterThermalTankData::onInitLoopEquip(this=0x00005555564ff100, state=0x00007fffffffbdf0, calledFromLocation=0x0000555555e45098) at WaterThermalTanks.cc:164 frame #9: 0x00007ffff5ffa702 libenergyplusapi.so.9.4.0`EnergyPlus::DataPlant::CompData::simulate(this=0x0000555555e44f00, state=0x00007fffffffbdf0, FirstHVACIteration=true, InitLoopEquip=0x00007ffff7d6fbdf, GetCompSizFac=true) at Component.cc:119 frame #10: 0x00007ffff5cf1f1a libenergyplusapi.so.9.4.0`EnergyPlus::PlantManager::InitializeLoops(state=0x00007fffffffbdf0, FirstHVACIteration=true) at PlantManager.cc:2204 frame #11: 0x00007ffff5cdbd1a libenergyplusapi.so.9.4.0`EnergyPlus::PlantManager::ManagePlantLoops(state=0x00007fffffffbdf0, FirstHVACIteration=true, 
SimAirLoops=0x00007ffff7d77439, SimZoneEquipment=0x00007ffff7d7743c, SimNonZoneEquipment=0x00007ffff7d7743d, SimPlantLoops=0x00007ffff7d7743b, SimElecCircuits=0x00007ffff7d7743a) at PlantManager.cc:222 frame #12: 0x00007ffff5827b00 libenergyplusapi.so.9.4.0`EnergyPlus::HVACManager::SimSelectedEquipment(state=0x00007fffffffbdf0, SimAirLoops=0x00007ffff7d77439, SimZoneEquipment=0x00007ffff7d7743c, SimNonZoneEquipment=0x00007ffff7d7743d, SimPlantLoops=0x00007ffff7d7743b, SimElecCircuits=0x00007ffff7d7743a, FirstHVACIteration=0x00007fffffffae2e, LockPlantFlows=false) at HVACManager.cc:1832 frame #13: 0x00007ffff581d054 libenergyplusapi.so.9.4.0`EnergyPlus::HVACManager::SimHVAC(state=0x00007fffffffbdf0) at HVACManager.cc:842 frame #14: 0x00007ffff58197f4 libenergyplusapi.so.9.4.0`EnergyPlus::HVACManager::ManageHVAC(state=0x00007fffffffbdf0) at HVACManager.cc:358 frame #15: 0x00007ffff5a1457a libenergyplusapi.so.9.4.0`EnergyPlus::HeatBalanceAirManager::CalcHeatBalanceAir(state=0x00007fffffffbdf0) at HeatBalanceAirManager.cc:4356 frame #16: 0x00007ffff59b9f7f libenergyplusapi.so.9.4.0`EnergyPlus::HeatBalanceAirManager::ManageAirHeatBalance(state=0x00007fffffffbdf0) at HeatBalanceAirManager.cc:204 frame #17: 0x00007ffff3e451ee libenergyplusapi.so.9.4.0`EnergyPlus::HeatBalanceSurfaceManager::ManageSurfaceHeatBalance(state=0x00007fffffffbdf0) at HeatBalanceSurfaceManager.cc:281 frame #18: 0x00007ffff5a33c72 libenergyplusapi.so.9.4.0`EnergyPlus::HeatBalanceManager::ManageHeatBalance(state=0x00007fffffffbdf0) at HeatBalanceManager.cc:363 frame #19: 0x00007ffff47f9269 libenergyplusapi.so.9.4.0`EnergyPlus::SimulationManager::SetupSimulation(state=0x00007fffffffbdf0, ErrorsFound=0x00007fffffffbc23) at SimulationManager.cc:2111 frame #20: 0x00007ffff47ea196 libenergyplusapi.so.9.4.0`EnergyPlus::SimulationManager::ManageSimulation(state=0x00007fffffffbdf0) at SimulationManager.cc:366 frame #21: 0x00007ffff3a08570 libenergyplusapi.so.9.4.0`RunEnergyPlus(state=0x00007fffffffbdf0, 
filepath="\xe0\xbd\xff\xff\xff\U0000007f"...) at EnergyPlusPgm.cc:400 frame #22: 0x00007ffff3a07a17 libenergyplusapi.so.9.4.0`EnergyPlusPgm(state=0x00007fffffffbdf0, filepath="\xe0\xbd\xff\xff\xff\U0000007f"...) at EnergyPlusPgm.cc:224 frame #23: 0x000055555578ef66 energyplus`main(argc=6, argv=0x00007fffffffccc8) at main.cc:60 frame #24: 0x00007ffff11aabf7 libc.so.6`__libc_start_main(main=(energyplus`main at main.cc:56), argc=6, argv=0x00007fffffffccc8, init=<unavailable>, fini=<unavailable>, rtld_fini=<unavailable>, stack_end=0x00007fffffffccb8) at libc-start.c:310 frame #25: 0x000055555578ec8a energyplus`_start + 42 ``` ### Checklist Add to this list or remove from it as applicable. This is a simple templated set of guidelines. - [x] Defect file added (list location of defect file here) - [ ] Ticket added to Pivotal for defect (development team task) - [ ] Pull request created (the pull request will have additional tasks related to reviewing changes that fix this defect)
non_comp
severe error calculated density of air is negative issue overview user reported on unmet hours that he got the following error severe psyrhoairfnpbtdbw rhoair density of air is calculated pb tdb w routine correctzonehumrat during warmup environment dsdaytangier cooling at simulation time fatal program terminates due to preceding condition user supplied the file which was at e i updated the file to and instead of getting this severe i am getting an actual crash which isn t something we want to happen even if it is later found the defect file had problems performing zone sizing simulation for sizing period dsdaytangier cooling warming up warming up warming up performing zone sizing simulation for sizing period dsdaytangier heating calculating system sizing for sizing period dsdaytangier cooling calculating system sizing for sizing period dsdaytangier heating adjusting air system sizing adjusting standard ventilation sizing initializing simulation double free or corruption out aborted core dumped details some additional details for this issue if relevant platform operating system version version of energyplus if using an intermediate build include sha unmethours link backtrace lldb bt thread name energyplus stop reason hit program assert frame libc so gi raise sig at raise c frame libc so gi abort at abort c frame libc so assert fail base fmt s s s u s sassertion s failed n n assertion contains i file home julien software others energyplus third party objexxfcl src objexxfcl hh line function t objexxfcl operator int at assert c frame libc so gi assert fail assertion file line function at assert c frame libenergyplusapi so objexxfcl operator this i int at hh frame libenergyplusapi so energyplus waterthermaltanks waterthermaltankdata calcwaterthermaltankstratified this state at waterthermaltanks cc frame libenergyplusapi so energyplus waterthermaltanks waterthermaltankdata calcstandardratings this state at waterthermaltanks cc frame libenergyplusapi so energyplus 
waterthermaltanks waterthermaltankdata initialize this state firsthvaciteration true at waterthermaltanks cc frame libenergyplusapi so energyplus waterthermaltanks waterthermaltankdata oninitloopequip this state calledfromlocation at waterthermaltanks cc frame libenergyplusapi so energyplus dataplant compdata simulate this state firsthvaciteration true initloopequip getcompsizfac true at component cc frame libenergyplusapi so energyplus plantmanager initializeloops state firsthvaciteration true at plantmanager cc frame libenergyplusapi so energyplus plantmanager manageplantloops state firsthvaciteration true simairloops simzoneequipment simnonzoneequipment simplantloops simeleccircuits at plantmanager cc frame libenergyplusapi so energyplus hvacmanager simselectedequipment state simairloops simzoneequipment simnonzoneequipment simplantloops simeleccircuits firsthvaciteration lockplantflows false at hvacmanager cc frame libenergyplusapi so energyplus hvacmanager simhvac state at hvacmanager cc frame libenergyplusapi so energyplus hvacmanager managehvac state at hvacmanager cc frame libenergyplusapi so energyplus heatbalanceairmanager calcheatbalanceair state at heatbalanceairmanager cc frame libenergyplusapi so energyplus heatbalanceairmanager manageairheatbalance state at heatbalanceairmanager cc frame libenergyplusapi so energyplus heatbalancesurfacemanager managesurfaceheatbalance state at heatbalancesurfacemanager cc frame libenergyplusapi so energyplus heatbalancemanager manageheatbalance state at heatbalancemanager cc frame libenergyplusapi so energyplus simulationmanager setupsimulation state errorsfound at simulationmanager cc frame libenergyplusapi so energyplus simulationmanager managesimulation state at simulationmanager cc frame libenergyplusapi so runenergyplus state filepath xbd xff xff xff at energypluspgm cc frame libenergyplusapi so energypluspgm state filepath xbd xff xff xff at energypluspgm cc frame energyplus main argc argv at main cc frame libc 
so libc start main main energyplus main at main cc argc argv init fini rtld fini stack end at libc start c frame energyplus start checklist add to this list or remove from it as applicable this is a simple templated set of guidelines defect file added list location of defect file here ticket added to pivotal for defect development team task pull request created the pull request will have additional tasks related to reviewing changes that fix this defect
0
179,128
21,517,305,137
IssuesEvent
2022-04-28 11:08:06
finos/spring-bot
https://api.github.com/repos/finos/spring-bot
closed
WS-2020-0408 (High) detected in netty-handler-4.1.68.Final.jar
security vulnerability
## WS-2020-0408 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>netty-handler-4.1.68.Final.jar</b></p></summary> <p></p> <p>Library home page: <a href="https://netty.io/">https://netty.io/</a></p> <p>Path to dependency file: /demos/todo-bot/pom.xml</p> <p>Path to vulnerable library: /home/wss-scanner/.m2/repository/io/netty/netty-handler/4.1.68.Final/netty-handler-4.1.68.Final.jar,/home/wss-scanner/.m2/repository/io/netty/netty-handler/4.1.68.Final/netty-handler-4.1.68.Final.jar,/home/wss-scanner/.m2/repository/io/netty/netty-handler/4.1.68.Final/netty-handler-4.1.68.Final.jar,/home/wss-scanner/.m2/repository/io/netty/netty-handler/4.1.68.Final/netty-handler-4.1.68.Final.jar,/home/wss-scanner/.m2/repository/io/netty/netty-handler/4.1.68.Final/netty-handler-4.1.68.Final.jar</p> <p> Dependency Hierarchy: - bot-azure-4.14.2.jar (Root Library) - azure-storage-queue-12.8.0.jar - azure-storage-common-12.10.0.jar - azure-core-http-netty-1.7.1.jar - :x: **netty-handler-4.1.68.Final.jar** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/finos/spring-bot/commit/3da9e0f849079934eb92135cec1f523e22bdc1ad">3da9e0f849079934eb92135cec1f523e22bdc1ad</a></p> <p>Found in base branch: <b>spring-bot-master</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> An issue was found in all versions of io.netty:netty-all. Host verification in Netty is disabled by default. This can lead to MITM attack in which an attacker can forge valid SSL/TLS certificates for a different hostname in order to intercept traffic that doesn’t intend for him. This is an issue because the certificate is not matched with the host. 
<p>Publish Date: 2020-06-22 <p>URL: <a href=https://github.com/netty/netty/issues/10362>WS-2020-0408</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.4</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: High - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: High - Integrity Impact: High - Availability Impact: None </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://nvd.nist.gov/vuln/detail/WS-2020-0408">https://nvd.nist.gov/vuln/detail/WS-2020-0408</a></p> <p>Release Date: 2020-06-22</p> <p>Fix Resolution: io.netty:netty-all - 4.1.68.Final-redhat-00001,4.0.0.Final,4.1.67.Final-redhat-00002;io.netty:netty-handler - 4.1.68.Final-redhat-00001,4.1.67.Final-redhat-00001</p> </p> </details> <p></p> <!-- <REMEDIATE>{"isOpenPROnVulnerability":false,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"Java","groupId":"io.netty","packageName":"netty-handler","packageVersion":"4.1.68.Final","packageFilePaths":["/demos/todo-bot/pom.xml"],"isTransitiveDependency":true,"dependencyTree":"com.microsoft.bot:bot-azure:4.14.2;com.azure:azure-storage-queue:12.8.0;com.azure:azure-storage-common:12.10.0;com.azure:azure-core-http-netty:1.7.1;io.netty:netty-handler:4.1.68.Final","isMinimumFixVersionAvailable":true,"minimumFixVersion":"io.netty:netty-all - 4.1.68.Final-redhat-00001,4.0.0.Final,4.1.67.Final-redhat-00002;io.netty:netty-handler - 
4.1.68.Final-redhat-00001,4.1.67.Final-redhat-00001","isBinary":false}],"baseBranches":["spring-bot-master"],"vulnerabilityIdentifier":"WS-2020-0408","vulnerabilityDetails":"An issue was found in all versions of io.netty:netty-all. Host verification in Netty is disabled by default. This can lead to MITM attack in which an attacker can forge valid SSL/TLS certificates for a different hostname in order to intercept traffic that doesn’t intend for him. This is an issue because the certificate is not matched with the host.","vulnerabilityUrl":"https://github.com/netty/netty/issues/10362","cvss3Severity":"high","cvss3Score":"7.4","cvss3Metrics":{"A":"None","AC":"High","PR":"None","S":"Unchanged","C":"High","UI":"None","AV":"Network","I":"High"},"extraData":{}}</REMEDIATE> -->
True
WS-2020-0408 (High) detected in netty-handler-4.1.68.Final.jar - ## WS-2020-0408 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>netty-handler-4.1.68.Final.jar</b></p></summary> <p></p> <p>Library home page: <a href="https://netty.io/">https://netty.io/</a></p> <p>Path to dependency file: /demos/todo-bot/pom.xml</p> <p>Path to vulnerable library: /home/wss-scanner/.m2/repository/io/netty/netty-handler/4.1.68.Final/netty-handler-4.1.68.Final.jar,/home/wss-scanner/.m2/repository/io/netty/netty-handler/4.1.68.Final/netty-handler-4.1.68.Final.jar,/home/wss-scanner/.m2/repository/io/netty/netty-handler/4.1.68.Final/netty-handler-4.1.68.Final.jar,/home/wss-scanner/.m2/repository/io/netty/netty-handler/4.1.68.Final/netty-handler-4.1.68.Final.jar,/home/wss-scanner/.m2/repository/io/netty/netty-handler/4.1.68.Final/netty-handler-4.1.68.Final.jar</p> <p> Dependency Hierarchy: - bot-azure-4.14.2.jar (Root Library) - azure-storage-queue-12.8.0.jar - azure-storage-common-12.10.0.jar - azure-core-http-netty-1.7.1.jar - :x: **netty-handler-4.1.68.Final.jar** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/finos/spring-bot/commit/3da9e0f849079934eb92135cec1f523e22bdc1ad">3da9e0f849079934eb92135cec1f523e22bdc1ad</a></p> <p>Found in base branch: <b>spring-bot-master</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> An issue was found in all versions of io.netty:netty-all. Host verification in Netty is disabled by default. This can lead to MITM attack in which an attacker can forge valid SSL/TLS certificates for a different hostname in order to intercept traffic that doesn’t intend for him. This is an issue because the certificate is not matched with the host. 
<p>Publish Date: 2020-06-22 <p>URL: <a href=https://github.com/netty/netty/issues/10362>WS-2020-0408</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.4</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: High - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: High - Integrity Impact: High - Availability Impact: None </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://nvd.nist.gov/vuln/detail/WS-2020-0408">https://nvd.nist.gov/vuln/detail/WS-2020-0408</a></p> <p>Release Date: 2020-06-22</p> <p>Fix Resolution: io.netty:netty-all - 4.1.68.Final-redhat-00001,4.0.0.Final,4.1.67.Final-redhat-00002;io.netty:netty-handler - 4.1.68.Final-redhat-00001,4.1.67.Final-redhat-00001</p> </p> </details> <p></p> <!-- <REMEDIATE>{"isOpenPROnVulnerability":false,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"Java","groupId":"io.netty","packageName":"netty-handler","packageVersion":"4.1.68.Final","packageFilePaths":["/demos/todo-bot/pom.xml"],"isTransitiveDependency":true,"dependencyTree":"com.microsoft.bot:bot-azure:4.14.2;com.azure:azure-storage-queue:12.8.0;com.azure:azure-storage-common:12.10.0;com.azure:azure-core-http-netty:1.7.1;io.netty:netty-handler:4.1.68.Final","isMinimumFixVersionAvailable":true,"minimumFixVersion":"io.netty:netty-all - 4.1.68.Final-redhat-00001,4.0.0.Final,4.1.67.Final-redhat-00002;io.netty:netty-handler - 
4.1.68.Final-redhat-00001,4.1.67.Final-redhat-00001","isBinary":false}],"baseBranches":["spring-bot-master"],"vulnerabilityIdentifier":"WS-2020-0408","vulnerabilityDetails":"An issue was found in all versions of io.netty:netty-all. Host verification in Netty is disabled by default. This can lead to MITM attack in which an attacker can forge valid SSL/TLS certificates for a different hostname in order to intercept traffic that doesn’t intend for him. This is an issue because the certificate is not matched with the host.","vulnerabilityUrl":"https://github.com/netty/netty/issues/10362","cvss3Severity":"high","cvss3Score":"7.4","cvss3Metrics":{"A":"None","AC":"High","PR":"None","S":"Unchanged","C":"High","UI":"None","AV":"Network","I":"High"},"extraData":{}}</REMEDIATE> -->
non_comp
ws high detected in netty handler final jar ws high severity vulnerability vulnerable library netty handler final jar library home page a href path to dependency file demos todo bot pom xml path to vulnerable library home wss scanner repository io netty netty handler final netty handler final jar home wss scanner repository io netty netty handler final netty handler final jar home wss scanner repository io netty netty handler final netty handler final jar home wss scanner repository io netty netty handler final netty handler final jar home wss scanner repository io netty netty handler final netty handler final jar dependency hierarchy bot azure jar root library azure storage queue jar azure storage common jar azure core http netty jar x netty handler final jar vulnerable library found in head commit a href found in base branch spring bot master vulnerability details an issue was found in all versions of io netty netty all host verification in netty is disabled by default this can lead to mitm attack in which an attacker can forge valid ssl tls certificates for a different hostname in order to intercept traffic that doesn’t intend for him this is an issue because the certificate is not matched with the host publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity high privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution io netty netty all final redhat final final redhat io netty netty handler final redhat final redhat isopenpronvulnerability false ispackagebased true isdefaultbranch true packages istransitivedependency true dependencytree com microsoft bot bot azure com azure azure storage queue com azure azure storage common com azure azure core http netty io netty netty handler 
final isminimumfixversionavailable true minimumfixversion io netty netty all final redhat final final redhat io netty netty handler final redhat final redhat isbinary false basebranches vulnerabilityidentifier ws vulnerabilitydetails an issue was found in all versions of io netty netty all host verification in netty is disabled by default this can lead to mitm attack in which an attacker can forge valid ssl tls certificates for a different hostname in order to intercept traffic that doesn’t intend for him this is an issue because the certificate is not matched with the host vulnerabilityurl
0
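The WS-2020-0408 record above concerns Netty's disabled-by-default hostname verification. Netty's client-side TLS ultimately delegates to the JDK's `SSLEngine`, where the standard mitigation is to set the endpoint-identification algorithm to `"HTTPS"` so the peer certificate is matched against the host. A minimal JDK-only sketch of that setting (the class and method names below are illustrative, not taken from the Netty fix itself):

```java
import javax.net.ssl.SSLContext;
import javax.net.ssl.SSLEngine;
import javax.net.ssl.SSLParameters;

public class HostnameVerificationDemo {

    // Builds a client-mode engine that verifies the peer certificate
    // against the given host, closing the MITM gap described above.
    static SSLEngine clientEngine(String host, int port) throws Exception {
        SSLContext ctx = SSLContext.getDefault();
        SSLEngine engine = ctx.createSSLEngine(host, port);
        engine.setUseClientMode(true);

        SSLParameters params = engine.getSSLParameters();
        // "HTTPS" enables RFC 2818-style hostname matching during the handshake;
        // leaving it null is the insecure default the advisory warns about.
        params.setEndpointIdentificationAlgorithm("HTTPS");
        engine.setSSLParameters(params);
        return engine;
    }

    public static void main(String[] args) throws Exception {
        SSLEngine engine = clientEngine("example.com", 443);
        System.out.println(engine.getSSLParameters().getEndpointIdentificationAlgorithm());
    }
}
```

In Netty this same `SSLParameters` tweak is typically applied to the engine wrapped by the `SslHandler`; upgrading to the fixed versions listed in the record makes the safe behavior available without hand-configuring each engine.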
13,197
15,555,334,291
IssuesEvent
2021-03-16 05:55:29
Lothrazar/Cyclic
https://api.github.com/repos/Lothrazar/Cyclic
closed
Cyclic Bag is stuck in AE2 storage
1.16 compatibility
Minecraft Version: 1.16 Forge Version: 36.0.45 Mod Version: 1.16.5-1.1.8 Single Player or Server: Server Describe problem (what you were doing; what happened; what should have happened): Can not retrieve Cyclic storage bag out of AE2 storage, just stays in storage; Bag does not remove, but a second bag that was recently created is freely able to be moved in and out of storage
True
Cyclic Bag is stuck in AE2 storage - Minecraft Version: 1.16 Forge Version: 36.0.45 Mod Version: 1.16.5-1.1.8 Single Player or Server: Server Describe problem (what you were doing; what happened; what should have happened): Can not retrieve Cyclic storage bag out of AE2 storage, just stays in storage; Bag does not remove, but a second bag that was recently created is freely able to be moved in and out of storage
comp
cyclic bag is stuck in storage minecraft version forge version mod version single player or server server describe problem what you were doing what happened what should have happened can not retrieve cyclic storage bag out of storage just stays in storage bag does not remove but a second bag that was recently created is freely able to be moved in and out of storage
1
58,650
3,090,387,653
IssuesEvent
2015-08-26 06:12:07
NRGI/resourcecontracts.org
https://api.github.com/repos/NRGI/resourcecontracts.org
closed
Update the MTurk HIT page with proper information including language in the title
2 - Working OCR-workflow Priority
It needs to be clear when added to MTurk what language the specific page is written in.
1.0
Update the MTurk HIT page with proper information including language in the title - It needs to be clear when added to MTurk what language the specific page is written in.
non_comp
update the mturk hit page with proper information including language in the title it needs to be clear when added to mturk what language the specific page is written in
0
1,535
4,085,715,062
IssuesEvent
2016-06-01 00:14:58
skyverge/woocommerce-product-sku-generator
https://api.github.com/repos/skyverge/woocommerce-product-sku-generator
closed
WooCommerce 2.6 Compatibility
compatibility
- [x] **Misc** -- Drop WooCommerce 2.2 support * **Completed in** SHA 985d66dbfb8ee9d2e58637c8acd4fee8a6eb9c7b - [x] **Misc** -- Add any needed compat methods * **Completed in** SHA 322bcf7433d50d90a16366b1fdb0ade8448e59a6 - [x] **Feature** -- adds ability to use dashes in attribute names to replace spaces * **Completed in** SHA 2f838a7e8f696b4bf9334bdf71f86379b543342d
True
WooCommerce 2.6 Compatibility - - [x] **Misc** -- Drop WooCommerce 2.2 support * **Completed in** SHA 985d66dbfb8ee9d2e58637c8acd4fee8a6eb9c7b - [x] **Misc** -- Add any needed compat methods * **Completed in** SHA 322bcf7433d50d90a16366b1fdb0ade8448e59a6 - [x] **Feature** -- adds ability to use dashes in attribute names to replace spaces * **Completed in** SHA 2f838a7e8f696b4bf9334bdf71f86379b543342d
comp
woocommerce compatibility misc drop woocommerce support completed in sha misc add any needed compat methods completed in sha feature adds ability to use dashes in attribute names to replace spaces completed in sha
1
177,450
13,724,786,872
IssuesEvent
2020-10-03 15:40:13
coloredcow-admin/glific-website
https://api.github.com/repos/coloredcow-admin/glific-website
closed
Feedback
status : ready to test
- [x] Can we please order the section as per the content order on homepage ![image](https://user-images.githubusercontent.com/16714604/94712432-1b54f700-0367-11eb-8dd3-dea1741833af.png) - [x] Can we remove arrows when there's only one testimonial ![image](https://user-images.githubusercontent.com/16714604/94712532-3d4e7980-0367-11eb-9f56-9f536fdab350.png) - [x] Small m in Know More ![image](https://user-images.githubusercontent.com/16714604/94712693-74248f80-0367-11eb-9864-20c92701ebf1.png) - [x] @SunnaMalhotra Can you please make sure only the correct/relevant content shows on the test site now. such as on demo videos page, blog, webinar etc... - [x] the fonts in this section is not correct ![image](https://user-images.githubusercontent.com/16714604/94713074-f614b880-0367-11eb-9f78-fcd9c552775f.png) - [x] Let's remove the separator lines in the dropdown. ![image](https://user-images.githubusercontent.com/16714604/94713179-1775a480-0368-11eb-95c5-4c54b954070a.png) - [x] Is it possible to play the side videos in one click? right now, it first moves to the center then needs another click again. users would be slightly confused what happened ![image](https://user-images.githubusercontent.com/16714604/94713535-8bb04800-0368-11eb-85fb-05a0ea6f0bb8.png) @abhi1203 , I tried to but I did not get the solution for it. To play the video. It looks like youtube does not allowing to override the feature - [x] Let's use the glific green color here. 
in all the links in faq ![image](https://user-images.githubusercontent.com/16714604/94713643-b8645f80-0368-11eb-9f02-969ec9a5f46f.png) - [x] increase space between the hamburger menu and book a demo button in smaller devices ![image](https://user-images.githubusercontent.com/16714604/94713790-e47fe080-0368-11eb-8e6a-834b808d6a75.png) - [x] Can we make the images and text clickable also here ![image](https://user-images.githubusercontent.com/16714604/94714535-e5654200-0369-11eb-8bad-029e0f51a808.png) Please do the same for recommended reading section also - [ ] Clicking on reply doesn't do anything. should it open a input box to enter the comment or something ![image](https://user-images.githubusercontent.com/16714604/94718854-f2852f80-036f-11eb-994c-71ed19a1ca1b.png) @abhi1203 It will redirect again the Post comment form, where you can add more comment - [x] Clicking on names and tags should filter the blogs by that respective selection - [x] can we open source code github link in a new tab ![image](https://user-images.githubusercontent.com/16714604/94721360-670d9d80-0373-11eb-889b-339ab6fa07ee.png) - [ ] Tides svg isn't looking right on safari ![image](https://user-images.githubusercontent.com/16714604/94721601-c1a6f980-0373-11eb-8206-28faa8110882.png) - [x] The form layout in safari and chrome look different (chrome one is preferred) ![image](https://user-images.githubusercontent.com/16714604/94721639-d1264280-0373-11eb-89e6-2d6e0077035d.png) - [x] can we have the know more link after the text like the first point ![image](https://user-images.githubusercontent.com/16714604/94721922-2c583500-0374-11eb-95c8-9f918194239d.png) - [x] Let's use this link for facebook business verfication https://www.facebook.com/business/help/2058515294227817?id=180505742745347 and this one for gupshup account creation: https://www.gupshup.io/developer/home - [x] Does it day page not found because the complete link is not present? 
![image](https://user-images.githubusercontent.com/16714604/94723023-c371bc80-0375-11eb-9a4c-146ae3ef8ae9.png) - [x] If you play a video, then click on a video in the side bar and play it. then both the videos start playing. there's no way to stop the previous one. this is happening on features page and not webinar one ![image](https://user-images.githubusercontent.com/16714604/94723129-e4d2a880-0375-11eb-8f58-bffdc4146c8d.png) - [ ]
1.0
Feedback - - [x] Can we please order the section as per the content order on homepage ![image](https://user-images.githubusercontent.com/16714604/94712432-1b54f700-0367-11eb-8dd3-dea1741833af.png) - [x] Can we remove arrows when there's only one testimonial ![image](https://user-images.githubusercontent.com/16714604/94712532-3d4e7980-0367-11eb-9f56-9f536fdab350.png) - [x] Small m in Know More ![image](https://user-images.githubusercontent.com/16714604/94712693-74248f80-0367-11eb-9864-20c92701ebf1.png) - [x] @SunnaMalhotra Can you please make sure only the correct/relevant content shows on the test site now. such as on demo videos page, blog, webinar etc... - [x] the fonts in this section is not correct ![image](https://user-images.githubusercontent.com/16714604/94713074-f614b880-0367-11eb-9f78-fcd9c552775f.png) - [x] Let's remove the separator lines in the dropdown. ![image](https://user-images.githubusercontent.com/16714604/94713179-1775a480-0368-11eb-95c5-4c54b954070a.png) - [x] Is it possible to play the side videos in one click? right now, it first moves to the center then needs another click again. users would be slightly confused what happened ![image](https://user-images.githubusercontent.com/16714604/94713535-8bb04800-0368-11eb-85fb-05a0ea6f0bb8.png) @abhi1203 , I tried to but I did not get the solution for it. To play the video. It looks like youtube does not allowing to override the feature - [x] Let's use the glific green color here. 
in all the links in faq ![image](https://user-images.githubusercontent.com/16714604/94713643-b8645f80-0368-11eb-9f02-969ec9a5f46f.png) - [x] increase space between the hamburger menu and book a demo button in smaller devices ![image](https://user-images.githubusercontent.com/16714604/94713790-e47fe080-0368-11eb-8e6a-834b808d6a75.png) - [x] Can we make the images and text clickable also here ![image](https://user-images.githubusercontent.com/16714604/94714535-e5654200-0369-11eb-8bad-029e0f51a808.png) Please do the same for recommended reading section also - [ ] Clicking on reply doesn't do anything. should it open a input box to enter the comment or something ![image](https://user-images.githubusercontent.com/16714604/94718854-f2852f80-036f-11eb-994c-71ed19a1ca1b.png) @abhi1203 It will redirect again the Post comment form, where you can add more comment - [x] Clicking on names and tags should filter the blogs by that respective selection - [x] can we open source code github link in a new tab ![image](https://user-images.githubusercontent.com/16714604/94721360-670d9d80-0373-11eb-889b-339ab6fa07ee.png) - [ ] Tides svg isn't looking right on safari ![image](https://user-images.githubusercontent.com/16714604/94721601-c1a6f980-0373-11eb-8206-28faa8110882.png) - [x] The form layout in safari and chrome look different (chrome one is preferred) ![image](https://user-images.githubusercontent.com/16714604/94721639-d1264280-0373-11eb-89e6-2d6e0077035d.png) - [x] can we have the know more link after the text like the first point ![image](https://user-images.githubusercontent.com/16714604/94721922-2c583500-0374-11eb-95c8-9f918194239d.png) - [x] Let's use this link for facebook business verfication https://www.facebook.com/business/help/2058515294227817?id=180505742745347 and this one for gupshup account creation: https://www.gupshup.io/developer/home - [x] Does it day page not found because the complete link is not present? 
![image](https://user-images.githubusercontent.com/16714604/94723023-c371bc80-0375-11eb-9a4c-146ae3ef8ae9.png) - [x] If you play a video, then click on a video in the side bar and play it. then both the videos start playing. there's no way to stop the previous one. this is happening on features page and not webinar one ![image](https://user-images.githubusercontent.com/16714604/94723129-e4d2a880-0375-11eb-8f58-bffdc4146c8d.png) - [ ]
non_comp
feedback can we please order the section as per the content order on homepage can we remove arrows when there s only one testimonial small m in know more sunnamalhotra can you please make sure only the correct relevant content shows on the test site now such as on demo videos page blog webinar etc the fonts in this section is not correct let s remove the separator lines in the dropdown is it possible to play the side videos in one click right now it first moves to the center then needs another click again users would be slightly confused what happened i tried to but i did not get the solution for it to play the video it looks like youtube does not allowing to override the feature let s use the glific green color here in all the links in faq increase space between the hamburger menu and book a demo button in smaller devices can we make the images and text clickable also here please do the same for recommended reading section also clicking on reply doesn t do anything should it open a input box to enter the comment or something it will redirect again the post comment form where you can add more comment clicking on names and tags should filter the blogs by that respective selection can we open source code github link in a new tab tides svg isn t looking right on safari the form layout in safari and chrome look different chrome one is preferred can we have the know more link after the text like the first point let s use this link for facebook business verfication and this one for gupshup account creation does it day page not found because the complete link is not present if you play a video then click on a video in the side bar and play it then both the videos start playing there s no way to stop the previous one this is happening on features page and not webinar one
0
108,425
13,626,023,217
IssuesEvent
2020-09-24 10:22:01
djgroen/FabSim3
https://api.github.com/repos/djgroen/FabSim3
closed
Support plugin machine-specific setting
backend dev design decision enhancement long term usability
It would be great if we could have a machine-specific setting for each plugin. The current hierarchy for loading `env` variables is: 1. Loading default `env` variables from `machines.yml` 2. Updating (add/override) `env` variables from `machines_user.yml` and with this new support, we have a third level: 3. Updating (add/override) `env` variables, for a specific plugin, from `machines_<plugin name>_user.yml` if the file exists
1.0
Support plugin machine-specific setting - It would be great if we could have a machine-specific setting for each plugin. The current hierarchy for loading `env` variables is: 1. Loading default `env` variables from `machines.yml` 2. Updating (add/override) `env` variables from `machines_user.yml` and with this new support, we have a third level: 3. Updating (add/override) `env` variables, for a specific plugin, from `machines_<plugin name>_user.yml` if the file exists
non_comp
support plugin machine specific setting it would be great if we could have a machine specific setting for each plugin the current hierarchy for loading env variables is loading default env variables from machines yml updating add override env variables from machines user yml and with this new support we have a third level updating add override env variables for a specific plugin from machines user yml if the file exists
0
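The three-level override order described in the FabSim3 record above (defaults from `machines.yml`, then `machines_user.yml`, then a plugin-specific `machines_<plugin>_user.yml`) amounts to an ordered map merge in which later layers win. A minimal sketch of just that layering rule (FabSim3 itself is Python; this Java sketch with made-up keys only illustrates the merge semantics):

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class EnvLayers {

    /** Merge env layers in priority order: later layers override earlier ones. */
    @SafeVarargs
    static Map<String, String> mergeEnv(Map<String, String>... layers) {
        Map<String, String> env = new LinkedHashMap<>();
        for (Map<String, String> layer : layers) {
            if (layer != null) {
                env.putAll(layer); // add new keys, override existing ones
            }
        }
        return env;
    }

    public static void main(String[] args) {
        // 1. defaults, as loaded from machines.yml
        Map<String, String> defaults = Map.of("cores", "1", "queue", "standard");
        // 2. user overrides, as loaded from machines_user.yml
        Map<String, String> user = Map.of("cores", "16");
        // 3. plugin-specific overrides, from machines_<plugin>_user.yml if present
        Map<String, String> pluginUser = Map.of("queue", "premium");

        // "cores" ends up from layer 2, "queue" from layer 3
        System.out.println(mergeEnv(defaults, user, pluginUser));
    }
}
```

A layer that is missing (e.g. no plugin-specific file exists) is simply passed as `null` and skipped, which matches the "if file exists" condition in the record.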
3,097
6,023,373,025
IssuesEvent
2017-06-07 23:53:31
TerraME/terrame
https://api.github.com/repos/TerraME/terrame
closed
order of arguments in forEachConnection
<No Backward Compatibility> Core task
Review the order of arguments in `forEachConnection` in the same way of #1812.
True
order of arguments in forEachConnection - Review the order of arguments in `forEachConnection` in the same way of #1812.
comp
order of arguments in foreachconnection review the order of arguments in foreachconnection in the same way of
1
962
3,423,280,045
IssuesEvent
2015-12-09 05:09:55
angular-ui/ui-sortable
https://api.github.com/repos/angular-ui/ui-sortable
closed
ui-sortable problem when using ui-rangeSlider
incompatibility use case
Hi - I am using ui-sortable with ui-rangeSlider. I have a list of items to sort in a div, and I have the slider on the same page. Initially the sort and drag and drop works fine. But once you play with the slider, the ui-sortable stops working and I can not drag, drop, or sort the lists any more. Looks like there is something connected to Model change, but I tested this with a very simple model and still it does not work. Can you please look into the issue. Is there anything that I must do to refresh the ui-sortable during or after the slider events are called?
True
ui-sortable problem when using ui-rangeSlider - Hi - I am using ui-sortable with ui-rangeSlider. I have a list of items to sort in a div, and I have the slider on the same page. Initially the sort and drag and drop works fine. But once you play with the slider, the ui-sortable stops working and I can not drag, drop, or sort the lists any more. Looks like there is something connected to Model change, but I tested this with a very simple model and still it does not work. Can you please look into the issue. Is there anything that I must do to refresh the ui-sortable during or after the slider events are called?
comp
ui sortable problem when using ui rangeslider hi i am using ui sortable with ui rangeslider i have a list of items to sort in a div and i have the slider on the same page initially the sort and drag and drop works fine but once you play with the slider the ui sortable stops working and i can not drag drop or sort the lists any more looks like there is something connected to model change but i tested this with a very simple model and still it does not work can you please look into the issue is there anything that i must do to refresh the ui sortable during or after the slider events are called
1
10,031
8,328,603,255
IssuesEvent
2018-09-27 01:46:03
dotnet/core-setup
https://api.github.com/repos/dotnet/core-setup
closed
[1.1] Enable Debian9 build
area-Infrastructure enhancement
Much of the plumbing has already been added to enable a Debian9 build of core-setup in https://github.com/dotnet/core-setup/pull/4043, but it was removed in https://github.com/dotnet/core-setup/pull/4132 and https://github.com/dotnet/core-setup/pull/4170 because of an issue in the package publishing step that I didn't get a chance to resolve. We should re-enable Debian9 builds and resolve any issues.
1.0
[1.1] Enable Debian9 build - Much of the plumbing has already been added to enable a Debian9 build of core-setup in https://github.com/dotnet/core-setup/pull/4043, but it was removed in https://github.com/dotnet/core-setup/pull/4132 and https://github.com/dotnet/core-setup/pull/4170 because of an issue in the package publishing step that I didn't get a chance to resolve. We should re-enable Debian9 builds and resolve any issues.
non_comp
enable build much of the plumbing has already been added to enable a build of core setup in but it was removed in and because of an issue in the package publishing step that i didn t get a chance to resolve we should re enable builds and resolve any issues
0
163,371
13,916,231,109
IssuesEvent
2020-10-21 02:47:38
aws/aws-sdk-java-v2
https://api.github.com/repos/aws/aws-sdk-java-v2
closed
Cannot support Chinese
documentation needs-triage
<!--- Provide a general summary of the issue in the Title above --> I am currently working on a speech recognition project, but the enumeration type in the languagecode class is not Chinese. Can I add the enumeration type “zn-CH” to the transcribeStreaming service? ## Describe the issue <!--- A clear and concise description of the issue --> ## Links <!-- Include links to affected documentation page(s) -->
1.0
Cannot support Chinese - <!--- Provide a general summary of the issue in the Title above --> I am currently working on a speech recognition project, but the enumeration type in the languagecode class is not Chinese. Can I add the enumeration type “zn-CH” to the transcribeStreaming service? ## Describe the issue <!--- A clear and concise description of the issue --> ## Links <!-- Include links to affected documentation page(s) -->
non_comp
cannot support chinese i am currently working on a speech recognition project but the enumeration type in the languagecode class is not chinese can i add the enumeration type “zn ch” to the transcribestreaming service describe the issue links
0
177,780
14,644,880,481
IssuesEvent
2020-12-26 03:19:54
rrousselGit/freezed
https://api.github.com/repos/rrousselGit/freezed
opened
Create Utilities for Intellij
documentation needs triage
**Describe what scenario you think is uncovered by the existing examples/articles** A clear and concise description of the problem that you want to be explained. I added live templates for IntelliJ so you can create boilerplate code faster https://github.com/Tinhorn/freezed_intellij_live_templates **Describe why existing examples/articles do not cover this case** Explain which examples/articles you have seen before making this request, and why they did not help you with your problem. The utility section only have something for VSCode, not IntelliJ **Additional context** Add any other context or screenshots about the documentation request here. I have a repo with instructions here: https://github.com/Tinhorn/freezed_intellij_live_templates Let me know what I can do to make it better. Thanks for the package. It has been a great help. Merry Christmas
1.0
Create Utilities for Intellij - **Describe what scenario you think is uncovered by the existing examples/articles** A clear and concise description of the problem that you want to be explained. I added live templates for IntelliJ so you can create boilerplate code faster https://github.com/Tinhorn/freezed_intellij_live_templates **Describe why existing examples/articles do not cover this case** Explain which examples/articles you have seen before making this request, and why they did not help you with your problem. The utility section only have something for VSCode, not IntelliJ **Additional context** Add any other context or screenshots about the documentation request here. I have a repo with instructions here: https://github.com/Tinhorn/freezed_intellij_live_templates Let me know what I can do to make it better. Thanks for the package. It has been a great help. Merry Christmas
non_comp
create utilities for intellij describe what scenario you think is uncovered by the existing examples articles a clear and concise description of the problem that you want to be explained i added live templates for intellij so you can create boilerplate code faster describe why existing examples articles do not cover this case explain which examples articles you have seen before making this request and why they did not help you with your problem the utility section only have something for vscode not intellij additional context add any other context or screenshots about the documentation request here i have a repo with instructions here let me know what i can do to make it better thanks for the package it has been a great help merry christmas
0
343,660
30,681,838,841
IssuesEvent
2023-07-26 09:39:55
pytorch/pytorch
https://api.github.com/repos/pytorch/pytorch
closed
DISABLED test_streaming_backwards_sync_graph_root (__main__.TestCuda)
module: cuda triaged module: flaky-tests skipped
Platforms: rocm This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_streaming_backwards_sync_graph_root&suite=TestCuda) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/11344802621). Over the past 3 hours, it has been determined flaky in 4 workflow(s) with 4 failures and 4 successes. **Debugging instructions (after clicking on the recent samples link):** DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs. To find relevant log snippets: 1. Click on the workflow logs linked above 2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work. 3. Grep for `test_streaming_backwards_sync_graph_root` 4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs. Test file path: `test_cuda.py` cc @ngimel
1.0
DISABLED test_streaming_backwards_sync_graph_root (__main__.TestCuda) - Platforms: rocm This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_streaming_backwards_sync_graph_root&suite=TestCuda) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/11344802621). Over the past 3 hours, it has been determined flaky in 4 workflow(s) with 4 failures and 4 successes. **Debugging instructions (after clicking on the recent samples link):** DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs. To find relevant log snippets: 1. Click on the workflow logs linked above 2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work. 3. Grep for `test_streaming_backwards_sync_graph_root` 4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs. Test file path: `test_cuda.py` cc @ngimel
non_comp
disabled test streaming backwards sync graph root main testcuda platforms rocm this test was disabled because it is failing in ci see and the most recent trunk over the past hours it has been determined flaky in workflow s with failures and successes debugging instructions after clicking on the recent samples link do not assume things are okay if the ci is green we now shield flaky tests from developers so ci will thus be green but it will be harder to parse the logs to find relevant log snippets click on the workflow logs linked above click on the test step of the job so that it is expanded otherwise the grepping will not work grep for test streaming backwards sync graph root there should be several instances run as flaky tests are rerun in ci from which you can study the logs test file path test cuda py cc ngimel
0
10,840
4,103,143,554
IssuesEvent
2016-06-04 13:32:59
sgmap/api-communes
https://api.github.com/repos/sgmap/api-communes
closed
Docker
code
- [x] Add a `Dockerfile` - [x] <del>Publish on [Docker Hub](https://hub.docker.com)</del> Superseded by #21 - [x] Document the usage
1.0
Docker - - [x] Add a `Dockerfile` - [x] <del>Publish on [Docker Hub](https://hub.docker.com)</del> Superseded by #21 - [x] Document the usage
non_comp
docker add a dockerfile publish on superseded by document the usage
0
17,053
23,521,010,266
IssuesEvent
2022-08-19 05:52:47
safing/portmaster
https://api.github.com/repos/safing/portmaster
opened
Where does portmaster place itself relative to distro firewalls (firewalld/ufw)?
in/compatibility
Hello Safing team! ❤️ Thank you so much for creating Portmaster. I haven't been this excited about new software in years. It is fantastic. I am curious how the portmaster rules execute in relation to the OS/distro firewall rules on Linux? For example, this is how firewalld works on Fedora Workstation: - Deny all incoming connections. - Allow incoming on ports 1025-65535. - Allow certain incoming services such as mdns and ssh. What happens when Portmaster is installed on a system that uses `firewalld`? My **guess (or at least hope)** is this: - Portmaster places itself at the top as the highest priority rule. - Unmarked connections are forwarded to the Portmaster user space for classification. - All packets for that connection are then marked as either accept or drop, by the flag that Portmaster decided. - Any packet that is marked as accept then goes to the remaining rules which `firewalld` created, which block all incoming connections but then allow specific service ports (ssh, 1025-65535, etc). If this is what happens, then I would be able to set Portmaster as "allow all incoming" to basically just forward everything to the rules that are already set up by Fedora in their `firewalld`. That would let me use Portmaster for my intended purpose: Per app blocking of outgoing connections. I would also be able to block specific app"s incoming connections to get extra granularity. In short if it works as I hope it does, then it would be thr best of all worlds.
True
Where does portmaster place itself relative to distro firewalls (firewalld/ufw)? - Hello Safing team! ❤️ Thank you so much for creating Portmaster. I haven't been this excited about new software in years. It is fantastic. I am curious how the portmaster rules execute in relation to the OS/distro firewall rules on Linux? For example, this is how firewalld works on Fedora Workstation: - Deny all incoming connections. - Allow incoming on ports 1025-65535. - Allow certain incoming services such as mdns and ssh. What happens when Portmaster is installed on a system that uses `firewalld`? My **guess (or at least hope)** is this: - Portmaster places itself at the top as the highest priority rule. - Unmarked connections are forwarded to the Portmaster user space for classification. - All packets for that connection are then marked as either accept or drop, by the flag that Portmaster decided. - Any packet that is marked as accept then goes to the remaining rules which `firewalld` created, which block all incoming connections but then allow specific service ports (ssh, 1025-65535, etc). If this is what happens, then I would be able to set Portmaster as "allow all incoming" to basically just forward everything to the rules that are already set up by Fedora in their `firewalld`. That would let me use Portmaster for my intended purpose: Per app blocking of outgoing connections. I would also be able to block specific app"s incoming connections to get extra granularity. In short if it works as I hope it does, then it would be thr best of all worlds.
comp
where does portmaster place itself relative to distro firewalls firewalld ufw hello safing team ❤️ thank you so much for creating portmaster i haven t been this excited about new software in years it is fantastic i am curious how the portmaster rules execute in relation to the os distro firewall rules on linux for example this is how firewalld works on fedora workstation deny all incoming connections allow incoming on ports allow certain incoming services such as mdns and ssh what happens when portmaster is installed on a system that uses firewalld my guess or at least hope is this portmaster places itself at the top as the highest priority rule unmarked connections are forwarded to the portmaster user space for classification all packets for that connection are then marked as either accept or drop by the flag that portmaster decided any packet that is marked as accept then goes to the remaining rules which firewalld created which block all incoming connections but then allow specific service ports ssh etc if this is what happens then i would be able to set portmaster as allow all incoming to basically just forward everything to the rules that are already set up by fedora in their firewalld that would let me use portmaster for my intended purpose per app blocking of outgoing connections i would also be able to block specific app s incoming connections to get extra granularity in short if it works as i hope it does then it would be the best of all worlds
1
11,784
13,904,387,083
IssuesEvent
2020-10-20 08:32:10
stm32duino/Arduino_Core_STM32
https://api.github.com/repos/stm32duino/Arduino_Core_STM32
closed
TwoWire requestFrom(uint8_t address, size_t size, bool sendStop) Implementation
Arduino compatibility
I am having a Platformio build error with a library [(ArduinoECCX08)](https://github.com/arduino-libraries/ArduinoECCX08). This is because it calls: ```while (_wire->requestFrom((uint8_t)_address, (size_t)responseSize, (bool)true) != responseSize && retries--);``` Where it's casting the sendStop parameter as a boolean, the compiler fails saying that it is ambiguous. The core does not appear to have the Arduino specific function as found here --> [Link](https://github.com/esp8266/Arduino/blob/master/libraries/Wire/Wire.cpp#L123) **Describe the solution you'd like** Implement a handler that better aligns with the Arduino specification --> [Docs](https://www.arduino.cc/en/Reference/WireRequestFrom) Are there any guides I need to follow to implement a PR for this repo?
True
TwoWire requestFrom(uint8_t address, size_t size, bool sendStop) Implementation - I am having a Platformio build error with a library [(ArduinoECCX08)](https://github.com/arduino-libraries/ArduinoECCX08). This is because it calls: ```while (_wire->requestFrom((uint8_t)_address, (size_t)responseSize, (bool)true) != responseSize && retries--);``` Where it's casting the sendStop parameter as a boolean, the compiler fails saying that it is ambiguous. The core does not appear to have the Arduino specific function as found here --> [Link](https://github.com/esp8266/Arduino/blob/master/libraries/Wire/Wire.cpp#L123) **Describe the solution you'd like** Implement a handler that better aligns with the Arduino specification --> [Docs](https://www.arduino.cc/en/Reference/WireRequestFrom) Are there any guides I need to follow to implement a PR for this repo?
comp
twowire requestfrom t address size t size bool sendstop implementation i am having a platformio build error with a library this is because it calls while wire requestfrom t address size t responsesize bool true responsesize retries where it s casting the sendstop parameter as a boolean the compiler fails saying that it is ambiguous the core does not appear to have the arduino specific function as found here describe the solution you d like implement a handler that better aligns with the arduino specification are there any guides i need to follow to implement a pr for this repo
1
17,780
24,533,071,623
IssuesEvent
2022-10-11 18:09:46
keymanapp/keyman
https://api.github.com/repos/keymanapp/keyman
closed
bug(linux): Backspacing and reordering not working correctly in Anki
bug compatibility linux/
Keyman does not appear to be working correctly in Anki: reordering, e.g. with Khmer Angkor, xEjmr is not producing ខ្មែរ (normally typed xjmEr). This is similar to #1489 for Firefox. For the khmer_angkor keyboard, I'll give baseline results with Anki. It's important to understand that when I type `xEjm`, the khmer_angkor keyboard reorders the output at the time the `m` key is pressed, to give the equivalent of `xjmE`. The `r` is extraneous (but forms part of the complete word)... * Ubuntu 18.04 * Anki 2.1.40 | Correct order key | Correct order result | vs | Alternate keying order | Alternate order result | | --- | --- | --- | --- | --- | | `x` | `U+1781` | | `x` | `U+1781` | | `xj` | `U+1781 U+17D2` | | `xE` | `U+1781 U+17C2` | | `xjm` | `U+1781 U+17D2 U+1798` | | `xEj` | `U+1781 U+17C2 U+17D2` | | `xjmE` | `U+1781 U+17D2 U+1798 U+17C2` | | `xEjm` | `U+1781 U+17c2 U+17d2 U+17d2 U+1798 U+17c2 *` | | `xjmEr` | `U+1781 U+17D2 U+1798 U+17C2 U+179A` | | `xEjmr` | `U+1781 U+17c2 U+17d2 U+17d2 U+1798 U+17c2 U+179a *` | \* These are incorrect results ## Backspacing Starting with ខ្មែរ (correct order key sequence) and pressing backspace: after 1 backspace: ខ្មែ after 2-5 backspaces: ខ្មែ after 6 backspaces: ខ្ម after 7 backspaces: ខ្ after 8-9 backspaces: ខ្ after 10 backspaces: ខ after 11 backspaces: Expected: all codepoints are gone after 5 backspaces; each backspace deletes one codepoint in the reverse order of the correct order key sequence.
True
bug(linux): Backspacing and reordering not working correctly in Anki - Keyman does not appear to be working correctly in Anki: reordering, e.g. with Khmer Angkor, xEjmr is not producing ខ្មែរ (normally typed xjmEr). This is similar to #1489 for Firefox. For the khmer_angkor keyboard, I'll give baseline results with Anki. It's important to understand that when I type `xEjm`, the khmer_angkor keyboard reorders the output at the time the `m` key is pressed, to give the equivalent of `xjmE`. The `r` is extraneous (but forms part of the complete word)... * Ubuntu 18.04 * Anki 2.1.40 | Correct order key | Correct order result | vs | Alternate keying order | Alternate order result | | --- | --- | --- | --- | --- | | `x` | `U+1781` | | `x` | `U+1781` | | `xj` | `U+1781 U+17D2` | | `xE` | `U+1781 U+17C2` | | `xjm` | `U+1781 U+17D2 U+1798` | | `xEj` | `U+1781 U+17C2 U+17D2` | | `xjmE` | `U+1781 U+17D2 U+1798 U+17C2` | | `xEjm` | `U+1781 U+17c2 U+17d2 U+17d2 U+1798 U+17c2 *` | | `xjmEr` | `U+1781 U+17D2 U+1798 U+17C2 U+179A` | | `xEjmr` | `U+1781 U+17c2 U+17d2 U+17d2 U+1798 U+17c2 U+179a *` | \* These are incorrect results ## Backspacing Starting with ខ្មែរ (correct order key sequence) and pressing backspace: after 1 backspace: ខ្មែ after 2-5 backspaces: ខ្មែ after 6 backspaces: ខ្ម after 7 backspaces: ខ្ after 8-9 backspaces: ខ្ after 10 backspaces: ខ after 11 backspaces: Expected: all codepoints are gone after 5 backspaces; each backspace deletes one codepoint in the reverse order of the correct order key sequence.
comp
bug linux backspacing and reordering not working correctly in anki keyman does not appear to be working correctly in anki reordering e g with khmer angkor xejmr is not producing ខ្មែរ normally typed xjmer this is similar to for firefox for the khmer angkor keyboard i ll give baseline results with anki it s important to understand that when i type xejm the khmer angkor keyboard reorders the output at the time the m key is pressed to give the equivalent of xjme the r is extraneous but forms part of the complete word ubuntu anki correct order key correct order result vs alternate keying order alternate order result x u x u xj u u xe u u xjm u u u xej u u u xjme u u u u xejm u u u u u u xjmer u u u u u xejmr u u u u u u u these are incorrect results backspacing starting with ខ្មែរ correct order key sequence and pressing backspace after backspace ខ្មែ after backspaces ខ្មែ after backspaces ខ្ម after backspaces ខ្ after backspaces ខ្ after backspaces ខ after backspaces expected all codepoints are gone after backspace each backspace deletes one codepoint in the reverse order of the correct order key sequence
1
725,508
24,964,256,147
IssuesEvent
2022-11-01 18:00:40
GQDeltex/ft_transcendence
https://api.github.com/repos/GQDeltex/ft_transcendence
closed
Remove console.logs don't select first channel on channel list
frontend priority
When you go to the chat page, the first channel from the list is selected as currentChannel, even if you are not in it.
1.0
Remove console.logs don't select first channel on channel list - When you go to the chat page, the first channel from the list is selected as currentChannel, even if you are not in it.
non_comp
remove console logs don t select first channel on channel list when you go to the chat page the first channel from the list is selected as currentchannel even if you are not in it
0
567,928
16,919,758,768
IssuesEvent
2021-06-25 02:32:53
openmsupply/remote-server
https://api.github.com/repos/openmsupply/remote-server
opened
Epic: stocktake
epic priority: normal
Epic for implementing functionality for basic stocktake workflow. Activity diagram: TODO.
1.0
Epic: stocktake - Epic for implementing functionality for basic stocktake workflow. Activity diagram: TODO.
non_comp
epic stocktake epic for implementing functionality for basic stocktake workflow activity diagram todo
0
1,715
2,570,503,904
IssuesEvent
2015-02-10 10:00:08
ThemeAvenue/Awesome-Support
https://api.github.com/repos/ThemeAvenue/Awesome-Support
closed
Canned Response
needs testing
{agent_name} is not parsing the name of the agent but keeps "admin" (for supervisor, but the supervisor is not an admin?).
1.0
Canned Response - {agent_name} is not parsing the name of the agent but keeps "admin" (for supervisor, but the supervisor is not an admin?).
non_comp
canned response agent name is not parsing the name of the agent but keeps admin for supervisor but the supervisor is not an admin
0
24,206
12,238,257,992
IssuesEvent
2020-05-04 19:29:14
main--/rust-factorio-ic
https://api.github.com/repos/main--/rust-factorio-ic
opened
Optimize mylee
performance
1. `HashSet` for visited is inefficient, use grid storage instead (3d array if direction matters). 2. The original lee algorithm has an important optimization where it doesn't store the walker history but instead re-traces a walker's steps in the grid once it reaches the target. This is possible by tagging the visited set with a cost counter instead of nothing (i.e. using hashmap instead of hashset) and then finding the reverse path by always walking to the tile with the lowest cost. Undergrounds significantly complicate this however. FWIW `leemaze_lib` doesn't implement this optimization either, as it makes the algorithm significantly more complex (and more fragile) to implement.
True
Optimize mylee - 1. `HashSet` for visited is inefficient, use grid storage instead (3d array if direction matters). 2. The original lee algorithm has an important optimization where it doesn't store the walker history but instead re-traces a walker's steps in the grid once it reaches the target. This is possible by tagging the visited set with a cost counter instead of nothing (i.e. using hashmap instead of hashset) and then finding the reverse path by always walking to the tile with the lowest cost. Undergrounds significantly complicate this however. FWIW `leemaze_lib` doesn't implement this optimization either, as it makes the algorithm significantly more complex (and more fragile) to implement.
non_comp
optimize mylee hashset for visited is inefficient use grid storage instead array if direction matters the original lee algorithm has an important optimization where it doesn t store the walker history but instead re traces a walker s steps in the grid once it reaches the target this is possible by tagging the visited set with a cost counter instead of nothing i e using hashmap instead of hashset and then finding the reverse path by always walking to the tile with the lowest cost undergrounds significantly complicate this however fwiw leemaze lib doesn t implement this optimization either as it makes the algorithm significantly more complex and more fragile to implement
0
35,522
4,995,299,477
IssuesEvent
2016-12-09 09:39:57
zetkin/call.zetk.in
https://api.github.com/repos/zetkin/call.zetk.in
opened
LaneOverview too hard to find
user test
Most test subjects understand that they need to navigate to the `LaneOverview` to start a new call with a previous target when tasked with that, but only provided that they've seen the tooltip. One subject did not at all note the tooltip and thus failed to execute the task of starting a new call when someone calls back (instead just taking down notes on paper to hand to an organizer after the call). This is related to #77. _This was observed (noted as signal 19) during the december 2016 user tests._
1.0
LaneOverview too hard to find - Most test subjects understand that they need to navigate to the `LaneOverview` to start a new call with a previous target when tasked with that, but only provided that they've seen the tooltip. One subject did not at all note the tooltip and thus failed to execute the task of starting a new call when someone calls back (instead just taking down notes on paper to hand to an organizer after the call). This is related to #77. _This was observed (noted as signal 19) during the december 2016 user tests._
non_comp
laneoverview too hard to find most test subjects understand that they need to navigate to the laneoverview to start a new call with a previous target when tasked with that but only provided that they ve seen the tooltip one subject did not at all note the tooltip and thus failed to execute the task of starting a new call when someone calls back instead just taking down notes on paper to hand to an organizer after the call this is related to this was observed noted as signal during the december user tests
0
12,532
14,856,643,133
IssuesEvent
2021-01-18 14:25:03
jptrrs/HumanResources
https://api.github.com/repos/jptrrs/HumanResources
closed
Error when used with the unspeakable mod
compatibility
I don't know what your position is on the unspeakable mod (RJW), so if you have some of the qualms a few other people have with it then just ignore this bug report, I guess. Even then there may be other mods that do something similar, so it might be useful to be aware of this. I'm getting this error relatively repeatably. Seems to be caused when a pawn from another faction comes on to the map as I've got it both from raids and trade caravans. https://gist.github.com/HugsLibRecordKeeper/af5bdff0fbe0f0e3ed4ef30975d2d326 The way I repeat it is to load all the bare bones mods for Human Resources, and then add RJW as well. Start a new map, and then I send in 10 trade caravans via dev mode > incident . Usually a number of the caravans have this error and don't spawn.
True
Error when used with the unspeakable mod - I don't know what your position is on the unspeakable mod (RJW), so if you have some of the qualms a few other people have with it then just ignore this bug report, I guess. Even then there may be other mods that do something similar, so it might be useful to be aware of this. I'm getting this error relatively repeatably. Seems to be caused when a pawn from another faction comes on to the map as I've got it both from raids and trade caravans. https://gist.github.com/HugsLibRecordKeeper/af5bdff0fbe0f0e3ed4ef30975d2d326 The way I repeat it is to load all the bare bones mods for Human Resources, and then add RJW as well. Start a new map, and then I send in 10 trade caravans via dev mode > incident . Usually a number of the caravans have this error and don't spawn.
comp
error when used with the unspeakable mod i don t know what your position is on the unspeakable mod rjw so if you have some of the qualms a few other people have with it then just ignore this bug report i guess even then there may be other mods that do something similar so it might be useful to be aware of this i m getting this error relatively repeatably seems to be caused when a pawn from another faction comes on to the map as i ve got it both from raids and trade caravans the way i repeat it is to load all the bare bones mods for human resources and then add rjw as well start a new map and then i send in trade caravans via dev mode incident usually a number of the caravans have this error and don t spawn
1
57,251
11,730,638,782
IssuesEvent
2020-03-10 21:50:01
DataBiosphere/azul
https://api.github.com/repos/DataBiosphere/azul
closed
Ensure that = in access key for DRS does not cause problems
bug code demoed orange
Also the = can probably just be removed and inferred from the length of the key.
1.0
Ensure that = in access key for DRS does not cause problems - Also the = can probably just be removed and inferred from the length of the key.
non_comp
ensure that in access key for drs does not cause problems also the can probably just be removed and inferred from the length of the key
0
2,210
4,957,748,230
IssuesEvent
2016-12-02 06:33:46
tex-xet/xepersian
https://api.github.com/repos/tex-xet/xepersian
opened
Support the advdate package (update \today)
Bug Compatibility Enhancement Feature
The computations in the package `xepersian-persiancal` are done only once and whenever the Gregorian date is changed, `\today` would still use the old Gregorian date. This causes problems with the `advdate` package. ````latex \documentclass{article} \usepackage{advdate} \usepackage{xepersian} \settextfont{Yas} \begin{document} \today\\ \DayAfter[7]\\ \today\\ \end{document} ```` Ideally, this issue should be considered while resolving #5. This question was originally posted [here](http://qa.parsilatex.com/16717).
True
Support the advdate package (update \today) - The computations in the package `xepersian-persiancal` are done only once and whenever the Gregorian date is changed, `\today` would still use the old Gregorian date. This causes problems with the `advdate` package. ````latex \documentclass{article} \usepackage{advdate} \usepackage{xepersian} \settextfont{Yas} \begin{document} \today\\ \DayAfter[7]\\ \today\\ \end{document} ```` Ideally, this issue should be considered while resolving #5. This question was originally posted [here](http://qa.parsilatex.com/16717).
comp
support the advdate package update today the computations in the package xepersian persiancal are done only once and whenever the gregorian date is changed today would still use the old gregorian date this causes problems with the advdate package latex documentclass article usepackage advdate usepackage xepersian settextfont yas begin document today dayafter today end document ideally this issue should be considered while resolving this question was originally posted
1
822,148
30,855,262,124
IssuesEvent
2023-08-02 20:02:44
ut-issl/c2a-aobc
https://api.github.com/repos/ut-issl/c2a-aobc
opened
Add pytest to expand test coverage
:beginner: good first issue :airplane: priority::medium :dolphin: minor update :wrench: tools
## Details Add pytest incrementally to expand the test coverage. ## Close condition For commands, once coverage reaches 100%? It seems best to split this into finer-grained issues and tackle them gradually. ## Notes NA <!-- ## Caution - Assign `Assignees` if someone can be assigned - Set `Projects` to `6U AOCS team (private)` - Set `Status` to `ToDo` etc. - Always attach a `priority` label - high: needs immediate attention because of tests etc. - middle: the usual case - low: can be done slowly - Always attach an `update` label - major: backward-incompatible I/F change - minor: backward-compatible I/F change - patch: no I/F change - Do not leave template text; delete it or replace it with `NA` -->
1.0
Add pytest to expand test coverage - ## Details Add pytest incrementally to expand the test coverage. ## Close condition For commands, once coverage reaches 100%? It seems best to split this into finer-grained issues and tackle them gradually. ## Notes NA <!-- ## Caution - Assign `Assignees` if someone can be assigned - Set `Projects` to `6U AOCS team (private)` - Set `Status` to `ToDo` etc. - Always attach a `priority` label - high: needs immediate attention because of tests etc. - middle: the usual case - low: can be done slowly - Always attach an `update` label - major: backward-incompatible I/F change - minor: backward-compatible I/F change - patch: no I/F change - Do not leave template text; delete it or replace it with `NA` -->
non_comp
pytestを追加してテストカバー範囲を広げる 詳細 pytestを追加していき、テストカバー範囲を広げる。 close条件 コマンドについて、 になったら? より細かなissueに分割して徐々に取り組んでいくのが良さそう。 備考 na 注意 割り当てれるなら assignees を割り当てる projects として aocs team private を設定する status を todo などに設定する 必ず priority ラベルを付けること high 試験などの関係ですぐさま対応してほしい middle 通常これ low ゆっくりで良いもの 必ず update ラベルをつけること major 後方互換性なしのi f変更 minor 後方互換性ありのi f変更 patch i f変更なし テンプレート記述は残さず、削除したり na と書き換えたりする
0
1,472
3,985,444,754
IssuesEvent
2016-05-07 21:51:48
lelandrichardson/react-native-maps
https://api.github.com/repos/lelandrichardson/react-native-maps
closed
Support react-native < 0.17
Compatibility enhancement help wanted
I've seen similar issues, but nothing has helped. I'm receiving this error while debugging in chrome: ``` Warning: Native component for "AIRMapMarker" does not exist Warning: Native component for "AIRMapPolyline" does not exist Warning: Native component for "AIRMapPolygon" does not exist Warning: Native component for "AIRMapCircle" does not exist Warning: Native component for "AIRMapCallout" does not exist Warning: Native component for "AIRMap" does not exist ``` Both ios and android have a blank screen. May it be because I'm using react-native 0.15? I've checked out the release notes and didn't find significant updates which would make it impossible to use this package without updating react-native
True
Support react-native < 0.17 - I've seen similar issues, but nothing has helped. I'm receiving this error while debugging in chrome: ``` Warning: Native component for "AIRMapMarker" does not exist Warning: Native component for "AIRMapPolyline" does not exist Warning: Native component for "AIRMapPolygon" does not exist Warning: Native component for "AIRMapCircle" does not exist Warning: Native component for "AIRMapCallout" does not exist Warning: Native component for "AIRMap" does not exist ``` Both ios and android have a blank screen. May it be because I'm using react-native 0.15? I've checked out the release notes and didn't find significant updates which would make it impossible to use this package without updating react-native
comp
support react native i ve seen similar issues but nothing has helped i m receiving this error while debugging in chrome warning native component for airmapmarker does not exist warning native component for airmappolyline does not exist warning native component for airmappolygon does not exist warning native component for airmapcircle does not exist warning native component for airmapcallout does not exist warning native component for airmap does not exist both ios and android have a blank screen may it be because i m using react native i ve checked out the release notes and didn t find significant updates which would make it impossible to use this package without updating react native
1
115,469
14,768,080,627
IssuesEvent
2021-01-10 10:18:57
CampusAI/CampusAI.github.io
https://api.github.com/repos/CampusAI/CampusAI.github.io
opened
Style improvement ideas
design
Index: - Authors on the left column - Summary & collab on the right column - Put post pictures on the side of the index cards ML: - Use cards with images for the posts, current cards for papers Dim reduction post - Improve interpretations, add CUR decomposition.
1.0
Style improvement ideas - Index: - Authors on the left column - Summary & collab on the right column - Put post pictures on the side of the index cards ML: - Use cards with images for the posts, current cards for papers Dim reduction post - Improve interpretations, add CUR decomposition.
non_comp
style improvement ideas index authors on the left column summary collab on the right column put post pictures on the side of the index cards ml use cards with images for the posts current cards for papers dim reduction post improve interpretations add cur decomposition
0
5,548
8,017,760,891
IssuesEvent
2018-07-25 16:54:12
daniele-salvagni/color-goggles
https://api.github.com/repos/daniele-salvagni/color-goggles
closed
Launching but not working
compatibility external
Intel HD (not UHD) Graphics 630 with GTX1050; ColorGoggles launched but I couldn't see any difference. I tried moving the "Windows Saturation" slider but nothing happened.
True
Launching but not working - Intel HD (not UHD) Graphics 630 with GTX1050; ColorGoggles launched but I couldn't see any difference. I tried moving the "Windows Saturation" slider but nothing happened.
comp
launching but not working intel hd not uhd graphics with colorgoggles launched but i couldn t see any difference i tried moving the windows saturation slider but nothing happened
1
611,235
18,948,964,651
IssuesEvent
2021-11-18 13:20:54
Qiskit/qiskit-terra
https://api.github.com/repos/Qiskit/qiskit-terra
closed
Pulse instructions should be scheduled with delays
type: enhancement priority: high mod: pulse
<!-- ⚠️ If you do not respect this template, your issue will be closed --> <!-- ⚠️ Make sure to browse the opened and closed issues to confirm this idea does not exist. --> ### What is the expected enhancement? Currently, all `pulse.Instructions` are stored in a `Schedule` with a two-tuple containing the `(start_time, Instruction)`. While this makes it clear the explicit starting time of every pulse it leads to convoluted internal data structures such as `TimeslotCollection` for tracking the time of all instructions. It also encourages inefficient operations such as taking the union of two schedules. At first glance this operation should be efficient since all one needs to do is specifying the starting time of the instructions, but this requires verification that no instructions overlap in time in the two schedules resulting in expensive checks. The requirement for such a check to merge two schedules is obvious within the relatively schedules model, as one must look to make sure a delay exists in the first schedule for all instructions of the second schedule to merge. Furthermore, explicit timing of instructions will become confusing with the introduction of control-flow to quantum programs as it would become difficult to program experiments containing loops that are not unrollable. We should move to a model where instructions are explicitly scheduled with `Delay`s (think NOPs) of a given duration. This will: - Make appending two schedules a much simpler operation as the dependency of instructions will be specified by their order on operands - This will allow a simple transformation to a DAG representation of a basic pulse schedule block - Have positive performance impacts. - Have implications for the current pulse API. I believe this should be mostly internal-facing and that not many changes will exist for users. Requirements for closure of this issue include: - [ ] Approved [RFC](https://github.com/Qiskit/rfcs/pull/16) - [ ] Implementation
1.0
Pulse instructions should be scheduled with delays - <!-- ⚠️ If you do not respect this template, your issue will be closed --> <!-- ⚠️ Make sure to browse the opened and closed issues to confirm this idea does not exist. --> ### What is the expected enhancement? Currently, all `pulse.Instructions` are stored in a `Schedule` with a two-tuple containing the `(start_time, Instruction)`. While this makes it clear the explicit starting time of every pulse it leads to convoluted internal data structures such as `TimeslotCollection` for tracking the time of all instructions. It also encourages inefficient operations such as taking the union of two schedules. At first glance this operation should be efficient since all one needs to do is specifying the starting time of the instructions, but this requires verification that no instructions overlap in time in the two schedules resulting in expensive checks. The requirement for such a check to merge two schedules is obvious within the relatively schedules model, as one must look to make sure a delay exists in the first schedule for all instructions of the second schedule to merge. Furthermore, explicit timing of instructions will become confusing with the introduction of control-flow to quantum programs as it would become difficult to program experiments containing loops that are not unrollable. We should move to a model where instructions are explicitly scheduled with `Delay`s (think NOPs) of a given duration. This will: - Make appending two schedules a much simpler operation as the dependency of instructions will be specified by their order on operands - This will allow a simple transformation to a DAG representation of a basic pulse schedule block - Have positive performance impacts. - Have implications for the current pulse API. I believe this should be mostly internal-facing and that not many changes will exist for users. Requirements for closure of this issue include: - [ ] Approved [RFC](https://github.com/Qiskit/rfcs/pull/16) - [ ] Implementation
non_comp
pulse instructions should be scheduled with delays what is the expected enhancement currently all pulse instructions are stored in a schedule with a two tuple containing the start time instruction while this makes it clear the explicit starting time of every pulse it leads to convoluted internal data structures such as timeslotcollection for tracking the time of all instructions it also encourages inefficient operations such as taking the union of two schedules at first glance this operation should be efficient since all one needs to do is specifying the starting time of the instructions but this requires verification that no instructions overlap in time in the two schedules resulting in expensive checks the requirement for such a check to merge two schedules is obvious within the relatively schedules model as one must look to make sure a delay exists in the first schedule for all instructions of the second schedule to merge furthermore explicit timing of instructions will become confusing with the introduction of control flow to quantum programs as it would become difficult to program experiments containing loops that are not unrollable we should move to a model where instructions are explicitly scheduled with delay s think nops of a given duration this will make appending two schedules a much simpler operation as the dependency of instructions will be specified by their order on operands this will allow a simple transformation to a dag representation of a basic pulse schedule block have positive performance impacts have implications for the current pulse api i believe this should be mostly internal facing and that not many changes will exist for users requirements for closure of this issue include approved implementation
0
133,495
29,188,730,021
IssuesEvent
2023-05-19 17:47:54
mozilla/addons-server
https://api.github.com/repos/mozilla/addons-server
closed
Refactor reviewer queues to gather title, permission, url in one place
component:code_quality component:reviewer_tools priority:p4
The various properties of reviewer queues are scattered across multiple files: `urls.py`, `views.py`, `jinja_helpers.py`, `utils.py`, `queue.html`. We should consolidate all that. Maybe add `title`, `permission` and `urlname` to the various `Table` classes, add a registry `dict` gathering all queues, and then use that everywhere. The registry already exists, `reviewer_tables_registry`. So really it's all about putting more stuff in the table classes and using that.
1.0
Refactor reviewer queues to gather title, permission, url in one place - The various properties of reviewer queues are scattered across multiple files: `urls.py`, `views.py`, `jinja_helpers.py`, `utils.py`, `queue.html`. We should consolidate all that. Maybe add `title`, `permission` and `urlname` to the various `Table` classes, add a registry `dict` gathering all queues, and then use that everywhere. The registry already exists, `reviewer_tables_registry`. So really it's all about putting more stuff in the table classes and using that.
non_comp
refactor reviewer queues to gather title permission url in one place the various properties of reviewer queues are scattered across multiple files urls py views py jinja helpers py utils py queue html we should consolidate all that maybe add title permission and urlname to the various table classes add a registry dict gathering all queues and then use that everywhere the registry already exists reviewer tables registry so really it s all about putting more stuff in the table classes and using that
0
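The consolidation the reviewer-queues issue proposes can be sketched as table classes that carry their own metadata, gathered into one registry. Only the attribute names `title`, `permission`, `urlname` and the registry name `reviewer_tables_registry` come from the issue text; the decorator and example class are assumptions:

```python
# Sketch: queue metadata lives on the Table classes themselves, and a single
# registry replaces the properties scattered across urls.py, views.py,
# jinja_helpers.py, utils.py and queue.html.

reviewer_tables_registry = {}

def register(cls):
    """Class decorator: add the table class to the shared registry."""
    reviewer_tables_registry[cls.urlname] = cls
    return cls

class ReviewerQueueTable:
    title = None        # previously hard-coded in queue.html
    permission = None   # previously checked ad hoc in views.py
    urlname = None      # previously duplicated in urls.py / jinja_helpers.py

@register
class PendingQueueTable(ReviewerQueueTable):
    # Hypothetical example values for illustration.
    title = "Pending Review"
    permission = "Addons:Review"
    urlname = "queue_pending"

def queue_links():
    """Any code that used to hard-code queue properties can now iterate."""
    return [(t.urlname, t.title) for t in reviewer_tables_registry.values()]
```

The design choice is that every consumer reads from one source of truth, so adding a queue means defining one class rather than editing five files.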
20,416
30,199,504,376
IssuesEvent
2023-07-05 03:23:51
KiltMC/Kilt
https://api.github.com/repos/KiltMC/Kilt
opened
FerriteCore crashes on startup
C-incompatible T-crash P-high
When loading into the game with FerriteCore, the game will crash with the error: ``` java.lang.ClassCastException: class net.minecraft.client.resources.model.ModelResourceLocation cannot be cast to class malte0811.ferritecore.mixin.mrl.ResourceLocationAccess (net.minecraft.client.resources.model.ModelResourceLocation and malte0811.ferritecore.mixin.mrl.ResourceLocationAccess are in unnamed module of loader net.fabricmc.loader.impl.launch.knot.KnotClassLoader @498d318c) ``` Currently, the only way to fix this is by going into `config/ferritecore.mixin.properties` and setting `modelResourceLocations = true` to `modelResourceLocations = false`.
True
FerriteCore crashes on startup - When loading into the game with FerriteCore, the game will crash with the error: ``` java.lang.ClassCastException: class net.minecraft.client.resources.model.ModelResourceLocation cannot be cast to class malte0811.ferritecore.mixin.mrl.ResourceLocationAccess (net.minecraft.client.resources.model.ModelResourceLocation and malte0811.ferritecore.mixin.mrl.ResourceLocationAccess are in unnamed module of loader net.fabricmc.loader.impl.launch.knot.KnotClassLoader @498d318c) ``` Currently, the only way to fix this is by going into `config/ferritecore.mixin.properties` and setting `modelResourceLocations = true` to `modelResourceLocations = false`.
comp
ferritecore crashes on startup when loading into the game with ferritecore the game will crash with the error java lang classcastexception class net minecraft client resources model modelresourcelocation cannot be cast to class ferritecore mixin mrl resourcelocationaccess net minecraft client resources model modelresourcelocation and ferritecore mixin mrl resourcelocationaccess are in unnamed module of loader net fabricmc loader impl launch knot knotclassloader currently the only way to fix this is by going into config ferritecore mixin properties and setting modelresourcelocations true to modelresourcelocations false
1
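The workaround described in the record above (setting `modelResourceLocations = false` in `config/ferritecore.mixin.properties`) can be applied with a short script. This is a sketch that assumes the file uses simple `key = value` lines; the helper names are hypothetical:

```python
import re
from pathlib import Path

# Matches the one option named in the issue; other keys are left untouched.
MRL_RE = re.compile(r"^(\s*modelResourceLocations\s*=\s*)true[ \t]*$",
                    re.MULTILINE)

def disable_mrl(text):
    """Rewrite the config text so modelResourceLocations is false."""
    return MRL_RE.sub(r"\g<1>false", text)

def patch_config(path="config/ferritecore.mixin.properties"):
    """Apply the workaround in place (hypothetical helper)."""
    p = Path(path)
    p.write_text(disable_mrl(p.read_text()))
```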
19,409
26,934,257,052
IssuesEvent
2023-02-07 19:13:27
ajv-validator/ajv
https://api.github.com/repos/ajv-validator/ajv
opened
Unable to generate ESM-compatible standalone code
compatibility
<!-- Frequently Asked Questions: https://ajv.js.org/faq.html Please provide all info and reduce your schema and data to the smallest possible size. This template is for compatibility issues. For other issues please see https://ajv.js.org/contributing/ --> **The version of Ajv you are using** 8.12 **The environment you have the problem with** node v16.16.0, ESM project (`{type: module}`) **Your code (please make it as small as possible to reproduce the issue)** Code to compile AJV schema: ``` // compile.js import Ajv from "ajv"; import standaloneCode from "ajv/dist/standalone/index.js"; import { writeFile } from "fs/promises"; const ajvOptions = { strict: true, allErrors: true, messages: true, code: { source: true, esm: true, // ESM support }, }; const ajv = new Ajv(ajvOptions); const exampleSchema = { type: "object", properties: { body: { type: "object", properties: { firstName: { type: "string", minLength: 1, }, lastName: { type: "string", }, }, required: ["firstName", "lastName"], additionalProperties: false, }, }, }; compileJsonSchema(exampleSchema); // Note: excluded code that runs this function async function compileJsonSchema(schema) { try { // Compile schema via AJV const compiled = ajv.compile(schema); // Build output module file content via AJV's standalone let moduleCode = standaloneCode(ajv, compiled); // Write file to parent directory of source file await writeFile("./schema.js", moduleCode, "utf-8"); } catch (err) { console.error(err); } } ``` **Results in node.js v8+** Able to successfully compile standalone code; unable to import in ESM project **Results and error messages in your platform** I am using the [standalone validation code](https://ajv.js.org/standalone.html#using-the-defaults-es6-and-cjs-exports) functionality to generate compiled schema files at build time. I am compiling the code with the AJV library in a JS script and specifying `esm: true` to compile the standalone code for ESM. 
The compilation works and I am writing the compiled schema to a JS file, but I am unable to import the standalone code from this schema file in my ESM project because it imports the `ucs2length` utility via a `require` statement (`const func2 = require("ajv/dist/runtime/ucs2length").default`). Here is an example schema compiled via standalone for ESM support (generated with the above script: ``` "use strict";export const validate = validate10;export default validate10;const schema11 = {"type":"object","properties":{"body":{"type":"object","properties":{"firstName":{"type":"string","minLength":1},"lastName":{"type":"string"}},"required":["firstName","lastName"],"additionalProperties":false}}};const func2 = require("ajv/dist/runtime/ucs2length").default;function validate10(data, {instancePath="", parentData, parentDataProperty, rootData=data}={}){let vErrors = null;let errors = 0;if(data && typeof data == "object" && !Array.isArray(data)){if(data.body !== undefined){let data0 = data.body;if(data0 && typeof data0 == "object" && !Array.isArray(data0)){if(data0.firstName === undefined){const err0 = {instancePath:instancePath+"/body",schemaPath:"#/properties/body/required",keyword:"required",params:{missingProperty: "firstName"},message:"must have required property '"+"firstName"+"'"};if(vErrors === null){vErrors = [err0];}else {vErrors.push(err0);}errors++;}if(data0.lastName === undefined){const err1 = {instancePath:instancePath+"/body",schemaPath:"#/properties/body/required",keyword:"required",params:{missingProperty: "lastName"},message:"must have required property '"+"lastName"+"'"};if(vErrors === null){vErrors = [err1];}else {vErrors.push(err1);}errors++;}for(const key0 in data0){if(!((key0 === "firstName") || (key0 === "lastName"))){const err2 = {instancePath:instancePath+"/body",schemaPath:"#/properties/body/additionalProperties",keyword:"additionalProperties",params:{additionalProperty: key0},message:"must NOT have additional properties"};if(vErrors === 
null){vErrors = [err2];}else {vErrors.push(err2);}errors++;}}if(data0.firstName !== undefined){let data1 = data0.firstName;if(typeof data1 === "string"){if(func2(data1) < 1){const err3 = {instancePath:instancePath+"/body/firstName",schemaPath:"#/properties/body/properties/firstName/minLength",keyword:"minLength",params:{limit: 1},message:"must NOT have fewer than 1 characters"};if(vErrors === null){vErrors = [err3];}else {vErrors.push(err3);}errors++;}}else {const err4 = {instancePath:instancePath+"/body/firstName",schemaPath:"#/properties/body/properties/firstName/type",keyword:"type",params:{type: "string"},message:"must be string"};if(vErrors === null){vErrors = [err4];}else {vErrors.push(err4);}errors++;}}if(data0.lastName !== undefined){if(typeof data0.lastName !== "string"){const err5 = {instancePath:instancePath+"/body/lastName",schemaPath:"#/properties/body/properties/lastName/type",keyword:"type",params:{type: "string"},message:"must be string"};if(vErrors === null){vErrors = [err5];}else {vErrors.push(err5);}errors++;}}}else {const err6 = {instancePath:instancePath+"/body",schemaPath:"#/properties/body/type",keyword:"type",params:{type: "object"},message:"must be object"};if(vErrors === null){vErrors = [err6];}else {vErrors.push(err6);}errors++;}}}else {const err7 = {instancePath,schemaPath:"#/type",keyword:"type",params:{type: "object"},message:"must be object"};if(vErrors === null){vErrors = [err7];}else {vErrors.push(err7);}errors++;}validate10.errors = vErrors;return errors === 0;} ``` I'm able to import and use the schema file by manually replacing the `require` with an `import` statement. Am I overlooking something to generate standalone validation code that is fully ESM-compatible?
True
Unable to generate ESM-compatible standalone code - <!-- Frequently Asked Questions: https://ajv.js.org/faq.html Please provide all info and reduce your schema and data to the smallest possible size. This template is for compatibility issues. For other issues please see https://ajv.js.org/contributing/ --> **The version of Ajv you are using** 8.12 **The environment you have the problem with** node v16.16.0, ESM project (`{type: module}`) **Your code (please make it as small as possible to reproduce the issue)** Code to compile AJV schema: ``` // compile.js import Ajv from "ajv"; import standaloneCode from "ajv/dist/standalone/index.js"; import { writeFile } from "fs/promises"; const ajvOptions = { strict: true, allErrors: true, messages: true, code: { source: true, esm: true, // ESM support }, }; const ajv = new Ajv(ajvOptions); const exampleSchema = { type: "object", properties: { body: { type: "object", properties: { firstName: { type: "string", minLength: 1, }, lastName: { type: "string", }, }, required: ["firstName", "lastName"], additionalProperties: false, }, }, }; compileJsonSchema(exampleSchema); // Note: excluded code that runs this function async function compileJsonSchema(schema) { try { // Compile schema via AJV const compiled = ajv.compile(schema); // Build output module file content via AJV's standalone let moduleCode = standaloneCode(ajv, compiled); // Write file to parent directory of source file await writeFile("./schema.js", moduleCode, "utf-8"); } catch (err) { console.error(err); } } ``` **Results in node.js v8+** Able to successfully compile standalone code; unable to import in ESM project **Results and error messages in your platform** I am using the [standalone validation code](https://ajv.js.org/standalone.html#using-the-defaults-es6-and-cjs-exports) functionality to generate compiled schema files at build time. I am compiling the code with the AJV library in a JS script and specifying `esm: true` to compile the standalone code for ESM. 
The compilation works and I am writing the compiled schema to a JS file, but I am unable to import the standalone code from this schema file in my ESM project because it imports the `ucs2length` utility via a `require` statement (`const func2 = require("ajv/dist/runtime/ucs2length").default`). Here is an example schema compiled via standalone for ESM support (generated with the above script: ``` "use strict";export const validate = validate10;export default validate10;const schema11 = {"type":"object","properties":{"body":{"type":"object","properties":{"firstName":{"type":"string","minLength":1},"lastName":{"type":"string"}},"required":["firstName","lastName"],"additionalProperties":false}}};const func2 = require("ajv/dist/runtime/ucs2length").default;function validate10(data, {instancePath="", parentData, parentDataProperty, rootData=data}={}){let vErrors = null;let errors = 0;if(data && typeof data == "object" && !Array.isArray(data)){if(data.body !== undefined){let data0 = data.body;if(data0 && typeof data0 == "object" && !Array.isArray(data0)){if(data0.firstName === undefined){const err0 = {instancePath:instancePath+"/body",schemaPath:"#/properties/body/required",keyword:"required",params:{missingProperty: "firstName"},message:"must have required property '"+"firstName"+"'"};if(vErrors === null){vErrors = [err0];}else {vErrors.push(err0);}errors++;}if(data0.lastName === undefined){const err1 = {instancePath:instancePath+"/body",schemaPath:"#/properties/body/required",keyword:"required",params:{missingProperty: "lastName"},message:"must have required property '"+"lastName"+"'"};if(vErrors === null){vErrors = [err1];}else {vErrors.push(err1);}errors++;}for(const key0 in data0){if(!((key0 === "firstName") || (key0 === "lastName"))){const err2 = {instancePath:instancePath+"/body",schemaPath:"#/properties/body/additionalProperties",keyword:"additionalProperties",params:{additionalProperty: key0},message:"must NOT have additional properties"};if(vErrors === 
null){vErrors = [err2];}else {vErrors.push(err2);}errors++;}}if(data0.firstName !== undefined){let data1 = data0.firstName;if(typeof data1 === "string"){if(func2(data1) < 1){const err3 = {instancePath:instancePath+"/body/firstName",schemaPath:"#/properties/body/properties/firstName/minLength",keyword:"minLength",params:{limit: 1},message:"must NOT have fewer than 1 characters"};if(vErrors === null){vErrors = [err3];}else {vErrors.push(err3);}errors++;}}else {const err4 = {instancePath:instancePath+"/body/firstName",schemaPath:"#/properties/body/properties/firstName/type",keyword:"type",params:{type: "string"},message:"must be string"};if(vErrors === null){vErrors = [err4];}else {vErrors.push(err4);}errors++;}}if(data0.lastName !== undefined){if(typeof data0.lastName !== "string"){const err5 = {instancePath:instancePath+"/body/lastName",schemaPath:"#/properties/body/properties/lastName/type",keyword:"type",params:{type: "string"},message:"must be string"};if(vErrors === null){vErrors = [err5];}else {vErrors.push(err5);}errors++;}}}else {const err6 = {instancePath:instancePath+"/body",schemaPath:"#/properties/body/type",keyword:"type",params:{type: "object"},message:"must be object"};if(vErrors === null){vErrors = [err6];}else {vErrors.push(err6);}errors++;}}}else {const err7 = {instancePath,schemaPath:"#/type",keyword:"type",params:{type: "object"},message:"must be object"};if(vErrors === null){vErrors = [err7];}else {vErrors.push(err7);}errors++;}validate10.errors = vErrors;return errors === 0;} ``` I'm able to import and use the schema file by manually replacing the `require` with an `import` statement. Am I overlooking something to generate standalone validation code that is fully ESM-compatible?
comp
unable to generate esm compatible standalone code frequently asked questions please provide all info and reduce your schema and data to the smallest possible size this template is for compatibility issues for other issues please see the version of ajv you are using the environment you have the problem with node esm project type module your code please make it as small as possible to reproduce the issue code to compile ajv schema compile js import ajv from ajv import standalonecode from ajv dist standalone index js import writefile from fs promises const ajvoptions strict true allerrors true messages true code source true esm true esm support const ajv new ajv ajvoptions const exampleschema type object properties body type object properties firstname type string minlength lastname type string required additionalproperties false compilejsonschema exampleschema note excluded code that runs this function async function compilejsonschema schema try compile schema via ajv const compiled ajv compile schema build output module file content via ajv s standalone let modulecode standalonecode ajv compiled write file to parent directory of source file await writefile schema js modulecode utf catch err console error err results in node js able to successfully compile standalone code unable to import in esm project results and error messages in your platform i am using the functionality to generate compiled schema files at build time i am compiling the code with the ajv library in a js script and specifying esm true to compile the standalone code for esm the compilation works and i am writing the compiled schema to a js file but i am unable to import the standalone code from this schema file in my esm project because it imports the utility via a require statement const require ajv dist runtime default here is an example schema compiled via standalone for esm support generated with the above script use strict export const validate export default const type object properties body 
type object properties firstname type string minlength lastname type string required additionalproperties false const require ajv dist runtime default function data instancepath parentdata parentdataproperty rootdata data let verrors null let errors if data typeof data object array isarray data if data body undefined let data body if typeof object array isarray if firstname undefined const instancepath instancepath body schemapath properties body required keyword required params missingproperty firstname message must have required property firstname if verrors null verrors else verrors push errors if lastname undefined const instancepath instancepath body schemapath properties body required keyword required params missingproperty lastname message must have required property lastname if verrors null verrors else verrors push errors for const in if firstname lastname const instancepath instancepath body schemapath properties body additionalproperties keyword additionalproperties params additionalproperty message must not have additional properties if verrors null verrors else verrors push errors if firstname undefined let firstname if typeof string if const instancepath instancepath body firstname schemapath properties body properties firstname minlength keyword minlength params limit message must not have fewer than characters if verrors null verrors else verrors push errors else const instancepath instancepath body firstname schemapath properties body properties firstname type keyword type params type string message must be string if verrors null verrors else verrors push errors if lastname undefined if typeof lastname string const instancepath instancepath body lastname schemapath properties body properties lastname type keyword type params type string message must be string if verrors null verrors else verrors push errors else const instancepath instancepath body schemapath properties body type keyword type params type object message must be object if verrors 
null verrors else verrors push errors else const instancepath schemapath type keyword type params type object message must be object if verrors null verrors else verrors push errors errors verrors return errors i m able to import and use the schema file by manually replacing the require with an import statement am i overlooking something to generate standalone validation code that is fully esm compatible
1
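Until the generator emits pure ESM, the manual fix the reporter mentions (replacing the `require` with an `import`) can be scripted as a post-processing step on the generated file. A sketch, assuming the generated code only uses the `const X = require("mod").default;` form seen in the report:

```python
import re
from pathlib import Path

# Matches: const func2 = require("ajv/dist/runtime/ucs2length").default;
REQUIRE_RE = re.compile(r'const (\w+) = require\("([^"]+)"\)\.default;')

def cjs_requires_to_esm(source):
    """Hoist `const X = require("mod").default;` statements into
    `import X from "mod";` lines at the top of the module."""
    imports = []

    def repl(m):
        imports.append(f'import {m.group(1)} from "{m.group(2)}";')
        return ""

    body = REQUIRE_RE.sub(repl, source)
    return "\n".join(imports) + "\n" + body if imports else body

def fix_schema_file(path="schema.js"):
    """Post-process a generated standalone schema file in place."""
    p = Path(path)
    p.write_text(cjs_requires_to_esm(p.read_text()))
```

Run after the compile step, this keeps the build output importable from an ESM project without hand-editing each generated schema.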
58,651
7,168,384,686
IssuesEvent
2018-01-30 00:14:58
Microsoft/TypeScript
https://api.github.com/repos/Microsoft/TypeScript
closed
Promise of union types compilation error
Design Limitation
<!-- BUGS: Please use this template. --> <!-- QUESTIONS: This is not a general support forum! Ask Qs at http://stackoverflow.com/questions/tagged/typescript --> <!-- SUGGESTIONS: See https://github.com/Microsoft/TypeScript-wiki/blob/master/Writing-Good-Design-Proposals.md --> <!-- Please try to reproduce the issue with `typescript@next`. It may have already been fixed. --> **TypeScript Version:** 2.7.0-dev.201xxxxx **Code** ```ts const p = Promise.resolve(Promise.resolve('string') || 10) ``` **Expected behavior:** `p` type to be inferred as `Promise<string | number>` **Actual behavior:** Compilation error ``` Argument of type '10 | Promise<string>' is not assignable to parameter of type 'string | PromiseLike<string>'. Type '10' is not assignable to type 'string | PromiseLike<string>'. ``` **Notes:** ``` const p2 = Promise.resolve(Promise.resolve('string') || '10') ``` works correctly. It's only when types are different that compilation breaks.
1.0
Promise of union types compilation error - <!-- BUGS: Please use this template. --> <!-- QUESTIONS: This is not a general support forum! Ask Qs at http://stackoverflow.com/questions/tagged/typescript --> <!-- SUGGESTIONS: See https://github.com/Microsoft/TypeScript-wiki/blob/master/Writing-Good-Design-Proposals.md --> <!-- Please try to reproduce the issue with `typescript@next`. It may have already been fixed. --> **TypeScript Version:** 2.7.0-dev.201xxxxx **Code** ```ts const p = Promise.resolve(Promise.resolve('string') || 10) ``` **Expected behavior:** `p` type to be inferred as `Promise<string | number>` **Actual behavior:** Compilation error ``` Argument of type '10 | Promise<string>' is not assignable to parameter of type 'string | PromiseLike<string>'. Type '10' is not assignable to type 'string | PromiseLike<string>'. ``` **Notes:** ``` const p2 = Promise.resolve(Promise.resolve('string') || '10') ``` works correctly. It's only when types are different that compilation breaks.
non_comp
promise of union types compilation error typescript version dev code ts const p promise resolve promise resolve string expected behavior p type to be inferred as promise actual behavior compilation error argument of type promise is not assignable to parameter of type string promiselike type is not assignable to type string promiselike notes const promise resolve promise resolve string works correctly it s only when types are different that compilation breaks
0
5,866
8,314,914,136
IssuesEvent
2018-09-25 01:56:23
ant-design/ant-design
https://api.github.com/repos/ant-design/ant-design
closed
Tabs component style incompatibility in IE
Bug🐛 Compatibility
- [ ] I have searched the [issues](https://github.com/ant-design/ant-design/issues) of this repository and believe that this is not a duplicate. ### Version 3.9.2 ### Environment IE11 ### Reproduction link [https://ant.design/components/tabs-cn/](https://ant.design/components/tabs-cn/) ### Steps to reproduce Open the link in IE11 and click Tab2, https://ant.design/components/tabs-cn/ ![252cb1553cfc4c2df5b8c17ec3470e8](https://user-images.githubusercontent.com/17638300/45857334-c733d700-bd8a-11e8-952f-124c88f406af.png) ### What is expected? The ink bar below the tabs should move to directly under the active tab ### What is actually happening? The ink bar below the tabs moves with an offset; it falls short by one 32px margin --- Running under the IE browser <!-- generated by ant-design-issue-helper. DO NOT REMOVE -->
True
Tabs component style incompatibility in IE - - [ ] I have searched the [issues](https://github.com/ant-design/ant-design/issues) of this repository and believe that this is not a duplicate. ### Version 3.9.2 ### Environment IE11 ### Reproduction link [https://ant.design/components/tabs-cn/](https://ant.design/components/tabs-cn/) ### Steps to reproduce Open the link in IE11 and click Tab2, https://ant.design/components/tabs-cn/ ![252cb1553cfc4c2df5b8c17ec3470e8](https://user-images.githubusercontent.com/17638300/45857334-c733d700-bd8a-11e8-952f-124c88f406af.png) ### What is expected? The ink bar below the tabs should move to directly under the active tab ### What is actually happening? The ink bar below the tabs moves with an offset; it falls short by one 32px margin --- Running under the IE browser <!-- generated by ant-design-issue-helper. DO NOT REMOVE -->
comp
tabs component style incompatibility in ie i have searched the of this repository and believe that this is not a duplicate version environment reproduction link steps to reproduce , , what is expected the ink bar below the tabs should move to directly under the active tab what is actually happening the ink bar below the tabs moves with an offset, running under the ie browser
1
63,871
8,698,188,736
IssuesEvent
2018-12-04 22:31:47
python-telegram-bot/python-telegram-bot
https://api.github.com/repos/python-telegram-bot/python-telegram-bot
closed
Surely 15 spelling errors, in the DOC
documentation
![image](https://user-images.githubusercontent.com/6206827/49337467-cd2d1a80-f64d-11e8-9f28-7bd76a50c0e2.png) # 15 spelling errors, in the DOC Whenever the name `channel` is mentioned, there is an extra character before it. Exactly 15 times, on the page of [telegram.Bot](https://python-telegram-bot.readthedocs.io/en/stable/telegram.bot.html). Not a big problem. It just looks weird. Fix it at your convenience, thank you.
1.0
Surely 15 spelling errors, in the DOC - ![image](https://user-images.githubusercontent.com/6206827/49337467-cd2d1a80-f64d-11e8-9f28-7bd76a50c0e2.png) # 15 spelling errors, in the DOC Whenever the name `channel` is mentioned, there is an extra character before it. Exactly 15 times, on the page of [telegram.Bot](https://python-telegram-bot.readthedocs.io/en/stable/telegram.bot.html). Not a big problem. It just looks weird. Fix it at your convenience, thank you.
non_comp
surely spelling errors in the doc spelling errors in the doc when mention the name channel an extra character before it exactly times on the page of not a big problem it just looks weird fix it at your convenience thank you
0
4,695
7,310,022,064
IssuesEvent
2018-02-28 13:50:15
vaadin/framework
https://api.github.com/repos/vaadin/framework
closed
Allow configuring content modes for Grid cell tooltips (V8-compatibility)
pick to compatibility package
Port changes from #10396 to 8-compatibility package
True
Allow configuring content modes for Grid cell tooltips (V8-compatibility) - Port changes from #10396 to 8-compatibility package
comp
allow configuring content modes for grid cell tooltips compatibility port changes from to compatibility package
1
14,022
16,833,902,569
IssuesEvent
2021-06-18 09:21:29
ValveSoftware/Proton
https://api.github.com/repos/ValveSoftware/Proton
closed
Collapse (289620)
Game compatibility - Unofficial
# Compatibility Report - Name of the game with compatibility issues: Collapse - Steam AppID of the game: 289620 ## System Information - GPU: RX 560 - Driver/LLVM version: Mesa 19.3.2 - Kernel version: 5.4.14 - Link to full system information report as [Gist](https://gist.github.com/): https://gist.github.com/soredake/ed1fa30f885371dda5f331815920a3ae - Proton version: 4.11-12 ## I confirm: - [X] that I haven't found an existing compatibility report for this game. - [X] that I have checked whether there are updates for my system available. <!-- Please add `PROTON_LOG=1 %command%` to the game's launch options and drag and drop the generated `$HOME/steam-$APPID.log` into this issue report --> [steam-289620.log](https://github.com/ValveSoftware/Proton/files/4128200/steam-289620.log) ## Symptoms <!-- What's the problem? --> Game opens a black window for a second than closes. ## Reproduction <!-- 1. You can find the Steam AppID in the URL of the shop page of the game. e.g. for `The Witcher 3: Wild Hunt` the AppID is `292030`. 2. You can find your driver and Linux version, as well as your graphics processor's name in the system information report of Steam. 3. You can retrieve a full system information report by clicking `Help` > `System Information` in the Steam client on your machine. 4. Please copy it to your clipboard by pressing `Ctrl+A` and then `Ctrl+C`. Then paste it in a [Gist](https://gist.github.com/) and post the link in this issue. 5. Please search for open issues and pull requests by the name of the game and find out whether they are relevant and should be referenced above. -->
True
Collapse (289620) - # Compatibility Report - Name of the game with compatibility issues: Collapse - Steam AppID of the game: 289620 ## System Information - GPU: RX 560 - Driver/LLVM version: Mesa 19.3.2 - Kernel version: 5.4.14 - Link to full system information report as [Gist](https://gist.github.com/): https://gist.github.com/soredake/ed1fa30f885371dda5f331815920a3ae - Proton version: 4.11-12 ## I confirm: - [X] that I haven't found an existing compatibility report for this game. - [X] that I have checked whether there are updates for my system available. <!-- Please add `PROTON_LOG=1 %command%` to the game's launch options and drag and drop the generated `$HOME/steam-$APPID.log` into this issue report --> [steam-289620.log](https://github.com/ValveSoftware/Proton/files/4128200/steam-289620.log) ## Symptoms <!-- What's the problem? --> Game opens a black window for a second than closes. ## Reproduction <!-- 1. You can find the Steam AppID in the URL of the shop page of the game. e.g. for `The Witcher 3: Wild Hunt` the AppID is `292030`. 2. You can find your driver and Linux version, as well as your graphics processor's name in the system information report of Steam. 3. You can retrieve a full system information report by clicking `Help` > `System Information` in the Steam client on your machine. 4. Please copy it to your clipboard by pressing `Ctrl+A` and then `Ctrl+C`. Then paste it in a [Gist](https://gist.github.com/) and post the link in this issue. 5. Please search for open issues and pull requests by the name of the game and find out whether they are relevant and should be referenced above. -->
comp
collapse compatibility report name of the game with compatibility issues collapse steam appid of the game system information gpu rx driver llvm version mesa kernel version link to full system information report as proton version i confirm that i haven t found an existing compatibility report for this game that i have checked whether there are updates for my system available please add proton log command to the game s launch options and drag and drop the generated home steam appid log into this issue report symptoms game opens a black window for a second than closes reproduction you can find the steam appid in the url of the shop page of the game e g for the witcher wild hunt the appid is you can find your driver and linux version as well as your graphics processor s name in the system information report of steam you can retrieve a full system information report by clicking help system information in the steam client on your machine please copy it to your clipboard by pressing ctrl a and then ctrl c then paste it in a and post the link in this issue please search for open issues and pull requests by the name of the game and find out whether they are relevant and should be referenced above
1
13,135
15,422,023,415
IssuesEvent
2021-03-05 13:52:09
docker/compose-cli
https://api.github.com/repos/docker/compose-cli
closed
Support compose run --use-aliases
compatibility compose
run option: `--use-aliases Use the service's network aliases in the network(s) the container connects to.`
True
Support compose run --use-aliases - run option: `--use-aliases Use the service's network aliases in the network(s) the container connects to.`
comp
support compose run use aliases run option use aliases use the service s network aliases in the network s the container connects to
1
14,642
17,866,187,680
IssuesEvent
2021-09-06 09:43:09
gambitph/Stackable
https://api.github.com/repos/gambitph/Stackable
closed
v2 column container background size always set to cover
bug [block] v2 advanced column optimization-settings [version] V3 v2 compatibility
activate optimization setting add a v2 columns block in the v3 build select the first Column. Set layout to Basic open the Container panel in styles add a BG image > Adv Image Settings > Image size to Custom- 19% save. see bug in frontend.
True
v2 column container background size always set to cover - activate optimization setting add a v2 columns block in the v3 build select the first Column. Set layout to Basic open the Container panel in styles add a BG image > Adv Image Settings > Image size to Custom- 19% save. see bug in frontend.
comp
column container background size always set to cover activate optimization setting add a columns block in the build select the first column set layout to basic open the container panel in styles add a bg image adv image settings image size to custom save see bug in frontend
1
14,725
18,076,049,543
IssuesEvent
2021-09-21 10:00:21
gambitph/Stackable
https://api.github.com/repos/gambitph/Stackable
opened
Content Horizontal Align in v2 blocks not working in Frontend
bug [version] V3 v2 compatibility
<!-- Before posting, make sure that: 1. you are running the latest version of Stackable, and 2. you have searched whether your issue has already been reported --> **Describe the bug** The Content Horizontal Align in v2 blocks is not working in v3 build **To Reproduce** Steps to reproduce the behavior: 1. Add a Blockquote block. Layout: Centered quote 2. The following settings for the icon: <img width="302" alt="Screen Shot 2021-09-21 at 5 45 21 PM" src="https://user-images.githubusercontent.com/28699204/134149439-299dd7a9-72d5-40f5-b295-e6c15faa89c1.png"> 2. Turn on block BG, set Gradient. set the colors 3. Turn on Top & Bottom Separator, toggle the separator layers for both 4. Add Container link > add url & toggle open in new tab 5. In advanced tab, set content horizontal align to flex end. 6. see bug in Frontend. **Screenshots**
True
Content Horizontal Align in v2 blocks not working in Frontend - <!-- Before posting, make sure that: 1. you are running the latest version of Stackable, and 2. you have searched whether your issue has already been reported --> **Describe the bug** The Content Horizontal Align in v2 blocks is not working in v3 build **To Reproduce** Steps to reproduce the behavior: 1. Add a Blockquote block. Layout: Centered quote 2. The following settings for the icon: <img width="302" alt="Screen Shot 2021-09-21 at 5 45 21 PM" src="https://user-images.githubusercontent.com/28699204/134149439-299dd7a9-72d5-40f5-b295-e6c15faa89c1.png"> 2. Turn on block BG, set Gradient. set the colors 3. Turn on Top & Bottom Separator, toggle the separator layers for both 4. Add Container link > add url & toggle open in new tab 5. In advanced tab, set content horizontal align to flex end. 6. see bug in Frontend. **Screenshots**
comp
content horizontal align in blocks not working in frontend before posting make sure that you are running the latest version of stackable and you have searched whether your issue has already been reported describe the bug the content horizontal align in blocks is not working in build to reproduce steps to reproduce the behavior add a blockquote block layout centered quote the following settings for the icon img width alt screen shot at pm src turn on block bg set gradient set the colors turn on top bottom separator toggle the separator layers for both add container link add url toggle open in new tab in advanced tab set content horizontal align to flex end see bug in frontend screenshots
1
15,153
19,095,308,566
IssuesEvent
2021-11-29 16:06:15
mjansson/rpmalloc
https://api.github.com/repos/mjansson/rpmalloc
closed
error C2664: 'void *rpaligned_alloc(size_t,size_t)': cannot convert argument 1 from 'std::align_val_t' to 'size_t'
compatibility
rpnew.h - Visual Studio 2022 - C++ / 23. ![image](https://user-images.githubusercontent.com/73304497/143098373-b0590939-18c1-4e3c-9c76-2528f8d3ea64.png)
True
error C2664: 'void *rpaligned_alloc(size_t,size_t)': cannot convert argument 1 from 'std::align_val_t' to 'size_t' - rpnew.h - Visual Studio 2022 - C++ / 23. ![image](https://user-images.githubusercontent.com/73304497/143098373-b0590939-18c1-4e3c-9c76-2528f8d3ea64.png)
comp
error void rpaligned alloc size t size t cannot convert argument from std align val t to size t rpnew h visual studio c
1
451,590
32,035,107,449
IssuesEvent
2023-09-22 14:49:30
brazilian-utils/brutils-python
https://api.github.com/repos/brazilian-utils/brutils-python
closed
Adicionar instruções de como lançar uma nova versão
documentation
Adicionar instruções de como fazer uma release. Esta informação deve estar presente nos arquivos [CONTRIBUTING.md](https://github.com/brazilian-utils/brutils-python/blob/main/CONTRIBUTING.md) e [CONTRIBUTING_EN.md](https://github.com/brazilian-utils/brutils-python/blob/main/CONTRIBUTING_EN.md) O passo a passo é muito semelhante aos descritos aqui https://github.com/scanapi/scanapi/wiki/Deploy, porém desconsiderando tudo que é relacionado ao Docker ou Twitter.
1.0
Adicionar instruções de como lançar uma nova versão - Adicionar instruções de como fazer uma release. Esta informação deve estar presente nos arquivos [CONTRIBUTING.md](https://github.com/brazilian-utils/brutils-python/blob/main/CONTRIBUTING.md) e [CONTRIBUTING_EN.md](https://github.com/brazilian-utils/brutils-python/blob/main/CONTRIBUTING_EN.md) O passo a passo é muito semelhante aos descritos aqui https://github.com/scanapi/scanapi/wiki/Deploy, porém desconsiderando tudo que é relacionado ao Docker ou Twitter.
non_comp
adicionar instruções de como lançar uma nova versão adicionar instruções de como fazer uma release esta informação deve estar presente nos arquivos e o passo a passo é muito semelhante aos descritos aqui porém desconsiderando tudo que é relacionado ao docker ou twitter
0
11,330
13,262,019,153
IssuesEvent
2020-08-20 20:57:28
gudmdharalds-a8c/testing123
https://api.github.com/repos/gudmdharalds-a8c/testing123
closed
PHP Upgrade: Compatibility issues found in dir1/dira
PHP 7.4 Compatibility PHP Compatibility
The following issues were found when scanning branch <code>abranch12</code> for PHP compatibility issues in preparation for upgrade to PHP version 7.4: * <b>Error in dir1/dira/bla-8.php</b>: Extension 'mysql_' is deprecated since PHP 5.5 and removed since PHP 7.0; Use mysqli instead https://github.com/gudmdharalds-a8c/testing123/blob/02e81b5d2cb87a329e1a4fe049c3d097427989e4/dir1/dira/bla-8.php#L5 Note that this is an automated report. We recommend that the issues noted here are looked into, as it will make the transition to the new PHP version easier.
True
PHP Upgrade: Compatibility issues found in dir1/dira - The following issues were found when scanning branch <code>abranch12</code> for PHP compatibility issues in preparation for upgrade to PHP version 7.4: * <b>Error in dir1/dira/bla-8.php</b>: Extension 'mysql_' is deprecated since PHP 5.5 and removed since PHP 7.0; Use mysqli instead https://github.com/gudmdharalds-a8c/testing123/blob/02e81b5d2cb87a329e1a4fe049c3d097427989e4/dir1/dira/bla-8.php#L5 Note that this is an automated report. We recommend that the issues noted here are looked into, as it will make the transition to the new PHP version easier.
comp
php upgrade compatibility issues found in dira the following issues were found when scanning branch for php compatibility issues in preparation for upgrade to php version error in dira bla php extension mysql is deprecated since php and removed since php use mysqli instead note that this is an automated report we recommend that the issues noted here are looked into as it will make the transition to the new php version easier
1
754,159
26,374,174,094
IssuesEvent
2023-01-11 23:58:28
RoboJackets/urc-drone
https://api.github.com/repos/RoboJackets/urc-drone
closed
Aruco Detection
area ➤ navigation priority ➤ medium
Detect Aruco tags from the camera, then compute and publish the distance and pose of the tags.
1.0
Aruco Detection - Detect Aruco tags from the camera, then compute and publish the distance and pose of the tags.
non_comp
aruco detection detect aruco tags from the camera then compute and publish the distance and pose of the tags
0
3,155
6,078,101,284
IssuesEvent
2017-06-16 07:10:21
sass/sass
https://api.github.com/repos/sass/sass
closed
Unitless values should not compare as equal to unitful values
Bug Dart Sass Compatibility Planned Requires Deprecation
*(Edit after the fact by @nex3)* Tasks: - [x] Deprecate existing behavior in `stable`. - [x] Remove behavior from `master`. --- I know the existing behavior was intentional, but it breaks the basic rules of math: if `1px == 1` and `1 == 1em`, then `1px == 1em`. That sounds like a bug to me. In CSS the actual unit or lack of unit is very important. I can't think of any real use-case where I would be equally happy with or without a unit.
True
Unitless values should not compare as equal to unitful values - *(Edit after the fact by @nex3)* Tasks: - [x] Deprecate existing behavior in `stable`. - [x] Remove behavior from `master`. --- I know the existing behavior was intentional, but it breaks the basic rules of math: if `1px == 1` and `1 == 1em`, then `1px == 1em`. That sounds like a bug to me. In CSS the actual unit or lack of unit is very important. I can't think of any real use-case where I would be equally happy with or without a unit.
comp
unitless values should not compare as equal to unitful values edit after the fact by tasks deprecate existing behavior in stable remove behavior from master i know the existing behavior was intentional but it breaks the basic rules of math if and then that sounds like a bug to me in css the actual unit or lack of unit is very important i can t think of any real use case where i would be equally happy with or without a unit
1
9,107
11,150,403,722
IssuesEvent
2019-12-23 22:29:42
Kismuz/btgym
https://api.github.com/repos/Kismuz/btgym
closed
signal.pause() - workers exit, but signal never received -- software issue? (debian linux)
compatibility framework refactoring
Can we make an update to README.md that defines what versions of the software are needed? - matplotlib==2.0.2 - tensorflow>=1.5 - backtrader - anything else The reason for the issue is that with some configurations the backgrader graphs show after training, and with other configurations the Worker threads exit but things stop and hang there. Thanks for any input!
True
signal.pause() - workers exit, but signal never received -- software issue? (debian linux) - Can we make an update to README.md that defines what versions of the software are needed? - matplotlib==2.0.2 - tensorflow>=1.5 - backtrader - anything else The reason for the issue is that with some configurations the backgrader graphs show after training, and with other configurations the Worker threads exit but things stop and hang there. Thanks for any input!
comp
signal pause workers exit but signal never received software issue debian linux can we make an update to readme md that defines what versions of the software are needed matplotlib tensorflow backtrader anything else the reason for the issue is that with some configurations the backgrader graphs show after training and with other configurations the worker threads exit but things stop and hang there thanks for any input
1
3,656
6,537,310,443
IssuesEvent
2017-08-31 21:49:58
TerraME/terrame
https://api.github.com/repos/TerraME/terrame
closed
CellularSpace:geometry = true as default?
<No Backward Compatibility> Core task
It is necessary to load `CellularSpace` with `geometry = true` to compute GPM and also to use `Cell:area()`. What about set its default value as `true` and the user that wants to save memory set it to `false`?
True
CellularSpace:geometry = true as default? - It is necessary to load `CellularSpace` with `geometry = true` to compute GPM and also to use `Cell:area()`. What about set its default value as `true` and the user that wants to save memory set it to `false`?
comp
cellularspace geometry true as default it is necessary to load cellularspace with geometry true to compute gpm and also to use cell area what about set its default value as true and the user that wants to save memory set it to false
1
300,258
22,659,317,764
IssuesEvent
2022-07-02 00:07:41
saltstack/salt
https://api.github.com/repos/saltstack/salt
closed
[salt-cloud] on azure userdata_file: is not used properly for windows minions
Documentation Bug severity-medium Salt-Cloud time-estimate-single-day
**Description** salt-cloud: `userdata_file:` is not used properly if it point's to a local file for `azurearm.py`. I did some debug and and I found out that if `userdata_file:` is a local file, the content of the file will be assigned to the `userdata:`. After this if `userdata:` is not empty, which is not, `settings['commandToExecute']` is set to `userdata:` content, which causes `commandToExecute` to be invalid for Azure VM Extension. I was able to workaround this by modifying the windows-firewall.ps1 proposed content like this: - remove <powershell> and </powershell> tags - merge all cmd's in one line and separate them by ; - prefix the line with `powershell -ExecutionPolicy Unrestricted -File -command` and put the entire command between double quotes. I also had to escape the double quotes for all cmd's Final file will look like this: ``` powershell -command " netsh advfirewall firewall add rule name=\"Salt\" dir=in action=allow protocol=TCP localport=4505-4506; New-NetFirewallRule -Name \"SMB445\" -DisplayName \"SMB445\" -Protocol TCP -LocalPort 445; New-NetFirewallRule -Name \"WINRM5986\" -DisplayName \"WINRM5986\" -Protocol TCP -LocalPort 5986; winrm quickconfig -q; winrm set winrm/config/winrs '@{MaxMemoryPerShellMB=\"300\"}'; winrm set winrm/config '@{MaxTimeoutms=\"1800000\"}'; winrm set winrm/config/service/auth '@{Basic=\"true\"}'; $SourceStoreScope = 'LocalMachine'; $SourceStorename = 'Remote Desktop'; $SourceStore = New-Object -TypeName System.Security.Cryptography.X509Certificates.X509Store -ArgumentList $SourceStorename, $SourceStoreScope; $SourceStore.Open([System.Security.Cryptography.X509Certificates.OpenFlags]::ReadOnly); $cert = $SourceStore.Certificates | Where-Object -FilterScript { $_.subject -like '*' }; $DestStoreScope = 'LocalMachine'; $DestStoreName = 'My'; $DestStore = New-Object -TypeName System.Security.Cryptography.X509Certificates.X509Store -ArgumentList $DestStoreName, $DestStoreScope; $DestStore.Open([System.Security.Cryptography.X509Certificates.OpenFlags]::ReadWrite); $DestStore.Add($cert); $SourceStore.Close(); $DestStore.Close(); winrm create winrm/config/listener?Address=*+Transport=HTTPS `@`{CertificateThumbprint=`\"($cert.Thumbprint)`\"`}; Restart-Service winrm; " ``` There is also another solution, to upload the file to storage account and specify a link. That is working, but the problem is with local file so I'm not considering this a solution, just another way to do it, which is not applicable for me. The problem is in `salt/cloud/clouds/azurearm.py`: first here: ``` if userdata_file: if os.path.exists(userdata_file): with salt.utils.files.fopen(userdata_file, 'r') as fh_: userdata = fh_.read() ``` `userdata` get's the content of userdata_file. second here: ``` settings = {} if userdata: settings['commandToExecute'] = userdata elif userdata_file.startswith('http'): settings['fileUris'] = [userdata_file] settings['commandToExecute'] = command_prefix + './' + userdata_file[userdata_file.rfind('/')+1:] ``` with these 2 combined if `userdata_file` is a local path, this will not work. because what is in the instructions here: https://docs.saltstack.com/en/master/topics/cloud/windows.html#firewall-settings cannot be executed as a cmd. **Setup** cloud profile for win 2016: ``` azure-win2016: provider: azure image: MicrosoftWindowsServer|WindowsServer|2016-Datacenter|latest size: Standard_B2s location: eastus win_installer: /etc/salt/win_utils/Salt-Minion-3000.3-Py3-AMD64-Setup.exe userdata_file: /etc/salt/firewall.ps1 use_winrm: True winrm_port: 5986 winrm_verify_ssl: False ``` **Expected behavior** the file `/etc/salt/firewall.ps1` should be uploaded to windows minion and executed. when I check the vm extension I should see this `"commandToExecute": "powershell -ExecutionPolicy Unrestricted -file firewall.ps1 "` and not this: ``` "settings": { "commandToExecute": "<powershell>\nNew-NetFirewallRule -Name \"SMB445\" -DisplayName \"SMB445\" -Protocol TCP -LocalPort 445\nNew-NetFirewallRule -Name \"WINRM5986\" -DisplayName \"WINRM5986\" -Protocol TCP -LocalPort 5986\n\nwinrm quickconfig -q\nwinrm set winrm/config/winrs '@{MaxMemoryPerShellMB=\"300\"}'\nwinrm set winrm/config '@{MaxTimeoutms=\"1800000\"}'\nwinrm set winrm/config/service/auth '@{Basic=\"true\"}'\n\n$SourceStoreScope = 'LocalMachine'\n$SourceStorename = 'Remote Desktop'\n\n$SourceStore = New-Object -TypeName System.Security.Cryptography.X509Certificates.X509Store -ArgumentList $SourceStorename, $SourceStoreScope\n$SourceStore.Open([System.Security.Cryptography.X509Certificates.OpenFlags]::ReadOnly)\n\n$cert = $SourceStore.Certificates | Where-Object -FilterScript {\n $_.subject -like '*'\n}\n\n$DestStoreScope = 'LocalMachine'\n$DestStoreName = 'My'\n\n$DestStore = New-Object -TypeName System.Security.Cryptography.X509Certificates.X509Store -ArgumentList $DestStoreName, $DestStoreScope\n$DestStore.Open([System.Security.Cryptography.X509Certificates.OpenFlags]::ReadWrite)\n$DestStore.Add($cert)\n\n$SourceStore.Close()\n$DestStore.Close()\n\nwinrm create winrm/config/listener?Address=*+Transport=HTTPS `@`{CertificateThumbprint=`\"($cert.Thumbprint)`\"`}\n\nRestart-Service winrm\n</powershell>" } ``` which is the content of the file. **Versions Report** ``` Salt Version: Salt: 3000.3 Dependency Versions: cffi: 1.12.2 cherrypy: unknown dateutil: 2.8.0 docker-py: Not Installed gitdb: 2.0.6 gitpython: 2.1.15 Jinja2: 2.10.1 libgit2: Not Installed M2Crypto: Not Installed Mako: 1.0.7 msgpack-pure: Not Installed msgpack-python: 0.5.6 mysql-python: Not Installed pycparser: 2.19 pycrypto: 3.8.1 pycryptodome: 3.9.7 pygit2: Not Installed Python: 3.7.7 (default, Mar 10 2020, 15:43:33) python-gnupg: 0.4.4 PyYAML: 5.1.2 PyZMQ: 18.0.1 smmap: 3.0.4 timelib: 0.2.4 Tornado: 4.5.3 ZMQ: 4.3.1 System Versions: dist: locale: UTF-8 machine: x86_64 release: 19.4.0 system: Darwin version: 10.15.4 x86_64 ``` </details> **Additional context** I also had to add this to the firewall rules `netsh advfirewall firewall add rule name="Salt" dir=in action=allow protocol=TCP localport=4505-4506`, otherwise connection to/from master is not possible. With all these fixes I was able to deploy master ( linux) and minions ( windows 2016 and 2019) on azure.
1.0
[salt-cloud] on azure userdata_file: is not used properly for windows minions - **Description** salt-cloud: `userdata_file:` is not used properly if it point's to a local file for `azurearm.py`. I did some debug and and I found out that if `userdata_file:` is a local file, the content of the file will be assigned to the `userdata:`. After this if `userdata:` is not empty, which is not, `settings['commandToExecute']` is set to `userdata:` content, which causes `commandToExecute` to be invalid for Azure VM Extension. I was able to workaround this by modifying the windows-firewall.ps1 proposed content like this: - remove <powershell> and </powershell> tags - merge all cmd's in one line and separate them by ; - prefix the line with `powershell -ExecutionPolicy Unrestricted -File -command` and put the entire command between double quotes. I also had to escape the double quotes for all cmd's Final file will look like this: ``` powershell -command " netsh advfirewall firewall add rule name=\"Salt\" dir=in action=allow protocol=TCP localport=4505-4506; New-NetFirewallRule -Name \"SMB445\" -DisplayName \"SMB445\" -Protocol TCP -LocalPort 445; New-NetFirewallRule -Name \"WINRM5986\" -DisplayName \"WINRM5986\" -Protocol TCP -LocalPort 5986; winrm quickconfig -q; winrm set winrm/config/winrs '@{MaxMemoryPerShellMB=\"300\"}'; winrm set winrm/config '@{MaxTimeoutms=\"1800000\"}'; winrm set winrm/config/service/auth '@{Basic=\"true\"}'; $SourceStoreScope = 'LocalMachine'; $SourceStorename = 'Remote Desktop'; $SourceStore = New-Object -TypeName System.Security.Cryptography.X509Certificates.X509Store -ArgumentList $SourceStorename, $SourceStoreScope; $SourceStore.Open([System.Security.Cryptography.X509Certificates.OpenFlags]::ReadOnly); $cert = $SourceStore.Certificates | Where-Object -FilterScript { $_.subject -like '*' }; $DestStoreScope = 'LocalMachine'; $DestStoreName = 'My'; $DestStore = New-Object -TypeName System.Security.Cryptography.X509Certificates.X509Store -ArgumentList $DestStoreName, $DestStoreScope; $DestStore.Open([System.Security.Cryptography.X509Certificates.OpenFlags]::ReadWrite); $DestStore.Add($cert); $SourceStore.Close(); $DestStore.Close(); winrm create winrm/config/listener?Address=*+Transport=HTTPS `@`{CertificateThumbprint=`\"($cert.Thumbprint)`\"`}; Restart-Service winrm; " ``` There is also another solution, to upload the file to storage account and specify a link. That is working, but the problem is with local file so I'm not considering this a solution, just another way to do it, which is not applicable for me. The problem is in `salt/cloud/clouds/azurearm.py`: first here: ``` if userdata_file: if os.path.exists(userdata_file): with salt.utils.files.fopen(userdata_file, 'r') as fh_: userdata = fh_.read() ``` `userdata` get's the content of userdata_file. second here: ``` settings = {} if userdata: settings['commandToExecute'] = userdata elif userdata_file.startswith('http'): settings['fileUris'] = [userdata_file] settings['commandToExecute'] = command_prefix + './' + userdata_file[userdata_file.rfind('/')+1:] ``` with these 2 combined if `userdata_file` is a local path, this will not work. because what is in the instructions here: https://docs.saltstack.com/en/master/topics/cloud/windows.html#firewall-settings cannot be executed as a cmd. **Setup** cloud profile for win 2016: ``` azure-win2016: provider: azure image: MicrosoftWindowsServer|WindowsServer|2016-Datacenter|latest size: Standard_B2s location: eastus win_installer: /etc/salt/win_utils/Salt-Minion-3000.3-Py3-AMD64-Setup.exe userdata_file: /etc/salt/firewall.ps1 use_winrm: True winrm_port: 5986 winrm_verify_ssl: False ``` **Expected behavior** the file `/etc/salt/firewall.ps1` should be uploaded to windows minion and executed. when I check the vm extension I should see this `"commandToExecute": "powershell -ExecutionPolicy Unrestricted -file firewall.ps1 "` and not this: ``` "settings": { "commandToExecute": "<powershell>\nNew-NetFirewallRule -Name \"SMB445\" -DisplayName \"SMB445\" -Protocol TCP -LocalPort 445\nNew-NetFirewallRule -Name \"WINRM5986\" -DisplayName \"WINRM5986\" -Protocol TCP -LocalPort 5986\n\nwinrm quickconfig -q\nwinrm set winrm/config/winrs '@{MaxMemoryPerShellMB=\"300\"}'\nwinrm set winrm/config '@{MaxTimeoutms=\"1800000\"}'\nwinrm set winrm/config/service/auth '@{Basic=\"true\"}'\n\n$SourceStoreScope = 'LocalMachine'\n$SourceStorename = 'Remote Desktop'\n\n$SourceStore = New-Object -TypeName System.Security.Cryptography.X509Certificates.X509Store -ArgumentList $SourceStorename, $SourceStoreScope\n$SourceStore.Open([System.Security.Cryptography.X509Certificates.OpenFlags]::ReadOnly)\n\n$cert = $SourceStore.Certificates | Where-Object -FilterScript {\n $_.subject -like '*'\n}\n\n$DestStoreScope = 'LocalMachine'\n$DestStoreName = 'My'\n\n$DestStore = New-Object -TypeName System.Security.Cryptography.X509Certificates.X509Store -ArgumentList $DestStoreName, $DestStoreScope\n$DestStore.Open([System.Security.Cryptography.X509Certificates.OpenFlags]::ReadWrite)\n$DestStore.Add($cert)\n\n$SourceStore.Close()\n$DestStore.Close()\n\nwinrm create winrm/config/listener?Address=*+Transport=HTTPS `@`{CertificateThumbprint=`\"($cert.Thumbprint)`\"`}\n\nRestart-Service winrm\n</powershell>" } ``` which is the content of the file. **Versions Report** ``` Salt Version: Salt: 3000.3 Dependency Versions: cffi: 1.12.2 cherrypy: unknown dateutil: 2.8.0 docker-py: Not Installed gitdb: 2.0.6 gitpython: 2.1.15 Jinja2: 2.10.1 libgit2: Not Installed M2Crypto: Not Installed Mako: 1.0.7 msgpack-pure: Not Installed msgpack-python: 0.5.6 mysql-python: Not Installed pycparser: 2.19 pycrypto: 3.8.1 pycryptodome: 3.9.7 pygit2: Not Installed Python: 3.7.7 (default, Mar 10 2020, 15:43:33) python-gnupg: 0.4.4 PyYAML: 5.1.2 PyZMQ: 18.0.1 smmap: 3.0.4 timelib: 0.2.4 Tornado: 4.5.3 ZMQ: 4.3.1 System Versions: dist: locale: UTF-8 machine: x86_64 release: 19.4.0 system: Darwin version: 10.15.4 x86_64 ``` </details> **Additional context** I also had to add this to the firewall rules `netsh advfirewall firewall add rule name="Salt" dir=in action=allow protocol=TCP localport=4505-4506`, otherwise connection to/from master is not possible. With all these fixes I was able to deploy master ( linux) and minions ( windows 2016 and 2019) on azure.
non_comp
on azure userdata file is not used properly for windows minions description salt cloud userdata file is not used properly if it point s to a local file for azurearm py i did some debug and and i found out that if userdata file is a local file the content of the file will be assigned to the userdata after this if userdata is not empty which is not settings is set to userdata content which causes commandtoexecute to be invalid for azure vm extension i was able to workaround this by modifying the windows firewall proposed content like this remove and tags merge all cmd s in one line and separate them by prefix the line with powershell executionpolicy unrestricted file command and put the entire command between double quotes i also had to escape the double quotes for all cmd s final file will look like this powershell command netsh advfirewall firewall add rule name salt dir in action allow protocol tcp localport new netfirewallrule name displayname protocol tcp localport new netfirewallrule name displayname protocol tcp localport winrm quickconfig q winrm set winrm config winrs maxmemorypershellmb winrm set winrm config maxtimeoutms winrm set winrm config service auth basic true sourcestorescope localmachine sourcestorename remote desktop sourcestore new object typename system security cryptography argumentlist sourcestorename sourcestorescope sourcestore open readonly cert sourcestore certificates where object filterscript subject like deststorescope localmachine deststorename my deststore new object typename system security cryptography argumentlist deststorename deststorescope deststore open readwrite deststore add cert sourcestore close deststore close winrm create winrm config listener address transport https certificatethumbprint cert thumbprint restart service winrm there is also another solution to upload the file to storage account and specify a link that is working but the problem is with local file so i m not considering this a solution just another way to do it which is not applicable for me the problem is in salt cloud clouds azurearm py first here if userdata file if os path exists userdata file with salt utils files fopen userdata file r as fh userdata fh read userdata get s the content of userdata file second here settings if userdata settings userdata elif userdata file startswith http settings settings command prefix userdata file with these combined if userdata file is a local path this will not work because what is in the instructions here cannot be executed as a cmd setup cloud profile for win azure provider azure image microsoftwindowsserver windowsserver datacenter latest size standard location eastus win installer etc salt win utils salt minion setup exe userdata file etc salt firewall use winrm true winrm port winrm verify ssl false expected behavior the file etc salt firewall should be uploaded to windows minion and executed when i check the vm extension i should see this commandtoexecute powershell executionpolicy unrestricted file firewall and not this settings commandtoexecute nnew netfirewallrule name displayname protocol tcp localport nnew netfirewallrule name displayname protocol tcp localport n nwinrm quickconfig q nwinrm set winrm config winrs maxmemorypershellmb nwinrm set winrm config maxtimeoutms nwinrm set winrm config service auth basic true n n sourcestorescope localmachine n sourcestorename remote desktop n n sourcestore new object typename system security cryptography argumentlist sourcestorename sourcestorescope n sourcestore open readonly n n cert sourcestore certificates where object filterscript n subject like n n n deststorescope localmachine n deststorename my n n deststore new object typename system security cryptography argumentlist deststorename deststorescope n deststore open readwrite n deststore add cert n n sourcestore close n deststore close n nwinrm create winrm config listener address transport https certificatethumbprint cert thumbprint n nrestart service winrm n which is the content of the file versions report salt version salt dependency versions cffi cherrypy unknown dateutil docker py not installed gitdb gitpython not installed not installed mako msgpack pure not installed msgpack python mysql python not installed pycparser pycrypto pycryptodome not installed python default mar python gnupg pyyaml pyzmq smmap timelib tornado zmq system versions dist locale utf machine release system darwin version additional context i also had to add this to the firewall rules netsh advfirewall firewall add rule name salt dir in action allow protocol tcp localport otherwise connection to from master is not possible with all these fixes i was able to deploy master linux and minions windows and on azure
0
6,932
9,215,173,953
IssuesEvent
2019-03-11 01:42:04
Lothrazar/Cyclic
https://api.github.com/repos/Lothrazar/Cyclic
closed
When Valkyrien Warfare is installed, the Rainbow Cannon's projectile holds still instead of flying.
CausedByOthermod bug: visual mod compatibility
Minecraft version & Mod Version: Minecraft: 1.12.2 Forge: 14.23.4.2708 Cyclic: 1.15.11 The rest of the modlist: Valkyrien Warfare: 0.9_prerelease_5[Optimization update] Single player or Server: Single Player Describe problem (what you were doing / what happened): When Valkyrien Warfare is installed, The Rainbow Cannon (the cool new laser?), when fired, the projectile hangs in the air instead of flying forward. Thank You and the best to ya! I'm going to post this on their issue tracker too and link it here, since I don't know whoms'es "fault" this is. EDIT: https://github.com/ValkyrienWarfare/Valkyrien-Warfare-Revamped/issues/171
True
When Valkyrien Warfare is installed, the Rainbow Cannon's projectile holds still instead of flying. - Minecraft version & Mod Version: Minecraft: 1.12.2 Forge: 14.23.4.2708 Cyclic: 1.15.11 The rest of the modlist: Valkyrien Warfare: 0.9_prerelease_5[Optimization update] Single player or Server: Single Player Describe problem (what you were doing / what happened): When Valkyrien Warfare is installed, The Rainbow Cannon (the cool new laser?), when fired, the projectile hangs in the air instead of flying forward. Thank You and the best to ya! I'm going to post this on their issue tracker too and link it here, since I don't know whoms'es "fault" this is. EDIT: https://github.com/ValkyrienWarfare/Valkyrien-Warfare-Revamped/issues/171
comp
when valkyrien warfare is installed the rainbow cannon s projectile holds still instead of flying minecraft version mod version minecraft forge cyclic the rest of the modlist valkyrien warfare prerelease single player or server single player describe problem what you were doing what happened when valkyrien warfare is installed the rainbow cannon the cool new laser when fired the projectile hangs in the air instead of flying forward thank you and the best to ya i m going to post this on their issue tracker too and link it here since i don t know whoms es fault this is edit
1
7,676
9,932,592,080
IssuesEvent
2019-07-02 10:10:28
bazelbuild/bazel
https://api.github.com/repos/bazelbuild/bazel
opened
incompatible_windows_case_ignoring_glob: enables case-insensitive glob() on Windows
P1 bazel 1.0 incompatible-change team-Windows
# Description The `--[no]incompatible_windows_case_ignoring_glob` shall control whether `glob()` is case-sensitive (false) or case-ignoring (true) on Windows. ## Affected platforms This flag only affects Windows. It is a no-op on other platforms. ## Semantics - When disabled (default): `glob()`'s include and exclude patterns must match exactly the casing of files and directories on the filesystem. This is the same behavior on other platforms. E.g. `glob(["Foo/**"], exclude = ["Foo/Bar"])` will match exactly `Foo/**` and not `foO/**`, and will exclude exactly `Foo/Bar` but not `foO/baR`. - When enabled: on Windows, `glob()` will ignore the casing in the include and exclude patterns. On other platforms, this is a no-op. ## Related issues - https://github.com/bazelbuild/bazel/issues/8705 - https://github.com/bazelbuild/bazel/issues/8759 # Migration recipe <!-- TODO: would be nice to provide a migration tool that evaluates the glob with both semantics and warns if there are differences. --> # Rollout plan
True
incompatible_windows_case_ignoring_glob: enables case-insensitive glob() on Windows - # Description The `--[no]incompatible_windows_case_ignoring_glob` shall control whether `glob()` is case-sensitive (false) or case-ignoring (true) on Windows. ## Affected platforms This flag only affects Windows. It is a no-op on other platforms. ## Semantics - When disabled (default): `glob()`'s include and exclude patterns must match exactly the casing of files and directories on the filesystem. This is the same behavior on other platforms. E.g. `glob(["Foo/**"], exclude = ["Foo/Bar"])` will match exactly `Foo/**` and not `foO/**`, and will exclude exactly `Foo/Bar` but not `foO/baR`. - When enabled: on Windows, `glob()` will ignore the casing in the include and exclude patterns. On other platforms, this is a no-op. ## Related issues - https://github.com/bazelbuild/bazel/issues/8705 - https://github.com/bazelbuild/bazel/issues/8759 # Migration recipe <!-- TODO: would be nice to provide a migration tool that evaluates the glob with both semantics and warns if there are differences. --> # Rollout plan
comp
incompatible windows case ignoring glob enables case insensitive glob on windows description the incompatible windows case ignoring glob shall control whether glob is case sensitive false or case ignoring true on windows affected platforms this flag only affects windows it is a no op on other platforms semantics when disabled default glob s include and exclude patterns must match exactly the casing of files and directories on the filesystem this is the same behavior on other platforms e g glob exclude will match exactly foo and not foo and will exclude exactly foo bar but not foo bar when enabled on windows glob will ignore the casing in the include and exclude patterns on other platforms this is a no op related issues migration recipe rollout plan
1
7,975
10,143,622,101
IssuesEvent
2019-08-04 13:49:32
konsolas/AAC-Issues
https://api.github.com/repos/konsolas/AAC-Issues
closed
Unsupported protocol version: -1
3. compatibility 4. fixed
https://pastebin.com/S0SNrdcV # Core information **Server version**: Paper 1.12.2 **AAC version**: 4.0.7-b1 **ProtocolLib version**: 4.4.0 <!-- Plugin list can be left out if you are sure other plugins don't interfer -->
True
Unsupported protocol version: -1 - https://pastebin.com/S0SNrdcV # Core information **Server version**: Paper 1.12.2 **AAC version**: 4.0.7-b1 **ProtocolLib version**: 4.4.0 <!-- Plugin list can be left out if you are sure other plugins don't interfer -->
comp
unsupported protocol version core information server version paper aac version protocollib version
1
18,173
25,129,351,273
IssuesEvent
2022-11-09 14:07:36
safing/portmaster
https://api.github.com/repos/safing/portmaster
opened
Imcompatible with ExpressVPN
in/compatibility
**Pre-Submit Checklist**: - Check applicable sources for existing issues: - [Linux Compatibility](https://docs.safing.io/portmaster/install/linux#compatibility) - [VPN Compatibility](https://docs.safing.io/portmaster/install/status/vpn-compatibility) - [Github Issues](https://github.com/safing/portmaster/issues?q=is%3Aissue+label%3Ain%2Fcompatibility) **What worked?** Can connect to the internet if ExpressVPN is disconnected. **What did not work?** Connecting ExpressVPN when Portmaster is installed blocks all access to the internet. **Debug Information**: <!-- Paste debug information below if reporting a problem: - General issue: Click on "Copy Debug Information" on the Settings page. - App related issue: Click on "Copy Debug Information" in the dropdown menu of an app in the Monitor view. ⚠ Please remove sensitive/private information from the "Unexpected Logs" and "Network Connections" sections. This is easiest to do in the preview mode. If needed, additional logs can be found here: - Linux: `/opt/safing/portmaster/logs` - Windows: `%PROGRAMDATA%\Safing\Portmaster\logs` -->
True
Imcompatible with ExpressVPN - **Pre-Submit Checklist**: - Check applicable sources for existing issues: - [Linux Compatibility](https://docs.safing.io/portmaster/install/linux#compatibility) - [VPN Compatibility](https://docs.safing.io/portmaster/install/status/vpn-compatibility) - [Github Issues](https://github.com/safing/portmaster/issues?q=is%3Aissue+label%3Ain%2Fcompatibility) **What worked?** Can connect to the internet if ExpressVPN is disconnected. **What did not work?** Connecting ExpressVPN when Portmaster is installed blocks all access to the internet. **Debug Information**: <!-- Paste debug information below if reporting a problem: - General issue: Click on "Copy Debug Information" on the Settings page. - App related issue: Click on "Copy Debug Information" in the dropdown menu of an app in the Monitor view. ⚠ Please remove sensitive/private information from the "Unexpected Logs" and "Network Connections" sections. This is easiest to do in the preview mode. If needed, additional logs can be found here: - Linux: `/opt/safing/portmaster/logs` - Windows: `%PROGRAMDATA%\Safing\Portmaster\logs` -->
comp
imcompatible with expressvpn pre submit checklist check applicable sources for existing issues what worked can connect to the internet if expressvpn is disconnected what did not work connecting expressvpn when portmaster is installed blocks all access to the internet debug information paste debug information below if reporting a problem general issue click on copy debug information on the settings page app related issue click on copy debug information in the dropdown menu of an app in the monitor view ⚠ please remove sensitive private information from the unexpected logs and network connections sections this is easiest to do in the preview mode if needed additional logs can be found here linux opt safing portmaster logs windows programdata safing portmaster logs
1
11,074
13,099,086,018
IssuesEvent
2020-08-03 20:50:06
pingcap/tidb
https://api.github.com/repos/pingcap/tidb
closed
Invalid JSONs are allowed to insert
type/compatibility
## Bug Report MySQL: ``` mysql> create table tx (col json); Query OK, 0 rows affected (0.06 sec) mysql> insert into tx values ('"3""'); ERROR 3140 (22032): Invalid JSON text: "The document root must not be followed by other values." at position 3 in value for column 'tx.col'. ``` TiDB: ``` mysql> create table tx (col json); Query OK, 0 rows affected (0.13 sec) mysql> insert into tx values ('"3""'); Query OK, 1 row affected (0.03 sec) mysql> select * from tx; +------+ | col | +------+ | "3" | +------+ 1 row in set (0.00 sec) ```
True
Invalid JSONs are allowed to insert - ## Bug Report MySQL: ``` mysql> create table tx (col json); Query OK, 0 rows affected (0.06 sec) mysql> insert into tx values ('"3""'); ERROR 3140 (22032): Invalid JSON text: "The document root must not be followed by other values." at position 3 in value for column 'tx.col'. ``` TiDB: ``` mysql> create table tx (col json); Query OK, 0 rows affected (0.13 sec) mysql> insert into tx values ('"3""'); Query OK, 1 row affected (0.03 sec) mysql> select * from tx; +------+ | col | +------+ | "3" | +------+ 1 row in set (0.00 sec) ```
comp
invalid jsons are allowed to insert bug report mysql mysql create table tx col json query ok rows affected sec mysql insert into tx values error invalid json text the document root must not be followed by other values at position in value for column tx col tidb mysql create table tx col json query ok rows affected sec mysql insert into tx values query ok row affected sec mysql select from tx col row in set sec
1
624,416
19,696,930,195
IssuesEvent
2022-01-12 13:08:46
Qiskit-Partners/qiskit-ibm
https://api.github.com/repos/Qiskit-Partners/qiskit-ibm
closed
Add backend requirements to program __str__
type: enhancement priority: high
<!-- ⚠️ If you do not respect this template, your issue will be closed --> <!-- ⚠️ Make sure to browse the opened and closed issues to confirm this idea does not exist. --> ### What is the expected enhancement? `print(program)` doesn't show backend requirements today. This was not a big deal since most programs don't have backend requirements defined. However, this will become more important with `qasm3-runner`, which requires the backend to support qasm3.
1.0
Add backend requirements to program __str__ - <!-- ⚠️ If you do not respect this template, your issue will be closed --> <!-- ⚠️ Make sure to browse the opened and closed issues to confirm this idea does not exist. --> ### What is the expected enhancement? `print(program)` doesn't show backend requirements today. This was not a big deal since most programs don't have backend requirements defined. However, this will become more important with `qasm3-runner`, which requires the backend to support qasm3.
non_comp
add backend requirements to program str what is the expected enhancement print program doesn t show backend requirements today this was not a big deal since most programs don t have backend requirements defined however this will become more important with runner which requires the backend to support
0
132,438
18,719,490,782
IssuesEvent
2021-11-03 10:07:31
mdn/yari
https://api.github.com/repos/mdn/yari
closed
Reconsider/Rethink tables
enhancement p2 needs: design
There are a number of issues related to various styling and display issues with regards to tables on MDN Web Docs: - https://github.com/mdn/yari/issues/1797 - https://github.com/mdn/yari/issues/2101 - https://github.com/mdn/yari/issues/2100 - https://github.com/mdn/yari/issues/2102 - https://github.com/mdn/yari/issues/2103 - https://github.com/mdn/yari/issues/1941 - https://github.com/mdn/yari/issues/1757 - https://github.com/mdn/yari/issues/1761 - https://github.com/mdn/yari/issues/1762 - https://github.com/mdn/yari/issues/1960 - https://github.com/mdn/yari/issues/1818 - https://github.com/mdn/yari/issues/2085 There may well be others. This means tables are problematic and I agree. Time to step back and rethink how we can improve the UX and general UI of tables on MDN Web Docs across platforms and devices.
1.0
Reconsider/Rethink tables - There are a number of issues related to various styling and display issues with regards to tables on MDN Web Docs: - https://github.com/mdn/yari/issues/1797 - https://github.com/mdn/yari/issues/2101 - https://github.com/mdn/yari/issues/2100 - https://github.com/mdn/yari/issues/2102 - https://github.com/mdn/yari/issues/2103 - https://github.com/mdn/yari/issues/1941 - https://github.com/mdn/yari/issues/1757 - https://github.com/mdn/yari/issues/1761 - https://github.com/mdn/yari/issues/1762 - https://github.com/mdn/yari/issues/1960 - https://github.com/mdn/yari/issues/1818 - https://github.com/mdn/yari/issues/2085 There may well be others. This means tables are problematic and I agree. Time to step back and rethink how we can improve the UX and general UI of tables on MDN Web Docs across platforms and devices.
non_comp
reconsider rethink tables there are a number of issues related to various styling and display issues with regards to tables on mdn web docs there may well be others this means tables are problematic and i agree time to step back and rethink how we can improve the ux and general ui of tables on mdn web docs across platforms and devices
0
381,147
11,274,023,726
IssuesEvent
2020-01-14 17:40:47
googleapis/google-cloud-python
https://api.github.com/repos/googleapis/google-cloud-python
closed
Synthesis failed for texttospeech
api: texttospeech autosynth failure priority: p1 type: bug
Hello! Autosynth couldn't regenerate texttospeech. :broken_heart: Here's the output from running `synth.py`: ``` Cloning into 'working_repo'... Switched to branch 'autosynth-texttospeech' Traceback (most recent call last): File "/home/kbuilder/.pyenv/versions/3.6.1/lib/python3.6/runpy.py", line 193, in _run_module_as_main "__main__", mod_spec) File "/home/kbuilder/.pyenv/versions/3.6.1/lib/python3.6/runpy.py", line 85, in _run_code exec(code, run_globals) File "/tmpfs/src/git/autosynth/autosynth/synth.py", line 256, in <module> main() File "/tmpfs/src/git/autosynth/autosynth/synth.py", line 196, in main last_synth_commit_hash = get_last_metadata_commit(args.metadata_path) File "/tmpfs/src/git/autosynth/autosynth/synth.py", line 149, in get_last_metadata_commit text=True, File "/home/kbuilder/.pyenv/versions/3.6.1/lib/python3.6/subprocess.py", line 403, in run with Popen(*popenargs, **kwargs) as process: TypeError: __init__() got an unexpected keyword argument 'text' ``` Google internal developers can see the full log [here](https://sponge/264076a6-41d9-4ca2-9fc2-c9d0a8a92def).
1.0
Synthesis failed for texttospeech - Hello! Autosynth couldn't regenerate texttospeech. :broken_heart: Here's the output from running `synth.py`: ``` Cloning into 'working_repo'... Switched to branch 'autosynth-texttospeech' Traceback (most recent call last): File "/home/kbuilder/.pyenv/versions/3.6.1/lib/python3.6/runpy.py", line 193, in _run_module_as_main "__main__", mod_spec) File "/home/kbuilder/.pyenv/versions/3.6.1/lib/python3.6/runpy.py", line 85, in _run_code exec(code, run_globals) File "/tmpfs/src/git/autosynth/autosynth/synth.py", line 256, in <module> main() File "/tmpfs/src/git/autosynth/autosynth/synth.py", line 196, in main last_synth_commit_hash = get_last_metadata_commit(args.metadata_path) File "/tmpfs/src/git/autosynth/autosynth/synth.py", line 149, in get_last_metadata_commit text=True, File "/home/kbuilder/.pyenv/versions/3.6.1/lib/python3.6/subprocess.py", line 403, in run with Popen(*popenargs, **kwargs) as process: TypeError: __init__() got an unexpected keyword argument 'text' ``` Google internal developers can see the full log [here](https://sponge/264076a6-41d9-4ca2-9fc2-c9d0a8a92def).
non_comp
synthesis failed for texttospeech hello autosynth couldn t regenerate texttospeech broken heart here s the output from running synth py cloning into working repo switched to branch autosynth texttospeech traceback most recent call last file home kbuilder pyenv versions lib runpy py line in run module as main main mod spec file home kbuilder pyenv versions lib runpy py line in run code exec code run globals file tmpfs src git autosynth autosynth synth py line in main file tmpfs src git autosynth autosynth synth py line in main last synth commit hash get last metadata commit args metadata path file tmpfs src git autosynth autosynth synth py line in get last metadata commit text true file home kbuilder pyenv versions lib subprocess py line in run with popen popenargs kwargs as process typeerror init got an unexpected keyword argument text google internal developers can see the full log
0
19,072
26,492,156,817
IssuesEvent
2023-01-18 00:09:07
haubna/PhysicsMod
https://api.github.com/repos/haubna/PhysicsMod
closed
Water has no collision
compatibility
**Describe the bug** Man kann durch Wasser einfach durchlaufen ohne Collision und man kann nicht mehr schwimmen. You can walk through water without collision and you are not able to swim. **To Reproduce** Wasser mit Wassereinmer platzieren und durchlaufen bzw. einfach in einen Fluss etc. springen. Place a water source with a bucket and walk through it or jump into the next nearby river to test it. **Screenshots** https://user-images.githubusercontent.com/76984555/212976105-fff9df99-294c-4d10-8b3f-5c94e772692c.mp4 (In survival mode as well) **Version** Minecraft Version: 1.19.2 Fabric Physics Mod Version: Pro v110 1.19.2 Fabric
True
Water has no collision - **Describe the bug** Man kann durch Wasser einfach durchlaufen ohne Collision und man kann nicht mehr schwimmen. You can walk through water without collision and you are not able to swim. **To Reproduce** Wasser mit Wassereinmer platzieren und durchlaufen bzw. einfach in einen Fluss etc. springen. Place a water source with a bucket and walk through it or jump into the next nearby river to test it. **Screenshots** https://user-images.githubusercontent.com/76984555/212976105-fff9df99-294c-4d10-8b3f-5c94e772692c.mp4 (In survival mode as well) **Version** Minecraft Version: 1.19.2 Fabric Physics Mod Version: Pro v110 1.19.2 Fabric
comp
water has no collision describe the bug man kann durch wasser einfach durchlaufen ohne collision und man kann nicht mehr schwimmen you can walk through water without collision and you are not able to swim to reproduce wasser mit wassereinmer platzieren und durchlaufen bzw einfach in einen fluss etc springen place a water source with a bucket and walk through it or jump into the next nearby river to test it screenshots in survival mode as well version minecraft version fabric physics mod version pro fabric
1
9,400
11,453,170,955
IssuesEvent
2020-02-06 14:57:51
digitalcreations/MaxTo
https://api.github.com/repos/digitalcreations/MaxTo
closed
Incompatible with Windows Terminal
compatibility
**Describe the bug** Incompatible with Windows Terminal **To Reproduce** Steps to reproduce the behavior: when maxto runing Windows Terminal crash **System information:** - Windows version: Win10 1909 18363.592 - MaxTo version: 2.1.0A3 - Windows Terminal version: 0.8.10261.0 from MSStore **Additional context** Application Error Event: > 错误应用程序名称: WindowsTerminal.exe,版本: 0.8.2001.26001,时间戳: 0x5e2e5240 错误模块名称: InputHost.dll,版本: 10.0.18362.387,时间戳: 0x1217c36e 异常代码: 0xc0000409 错误偏移量: 0x0000000000041a73 错误进程 ID: 0x37ac 错误应用程序启动时间: 0x01d5d8ce8abcecf5 错误应用程序路径: C:\Program Files\WindowsApps\Microsoft.WindowsTerminal_0.8.10261.0_x64__8wekyb3d8bbwe\WindowsTerminal.exe 错误模块路径: C:\Windows\System32\InputHost.dll 报告 ID: 1b13bfdf-f337-497d-bbe0-7bb9f7c8819b 错误程序包全名: Microsoft.WindowsTerminal_0.8.10261.0_x64__8wekyb3d8bbwe 错误程序包相对应用程序 ID: App Windows Error Reporting: > 故障存储段 1633211405700529229,类型 5 事件名称: MoBEX 响应: 不可用 Cab ID: 0 问题签名: P1: Microsoft.WindowsTerminal_0.8.10261.0_x64__8wekyb3d8bbwe P2: praid:App P3: 0.8.2001.26001 P4: 5e2e5240 P5: InputHost.dll P6: 10.0.18362.387 P7: 1217c36e P8: 0000000000041a73 P9: c0000409 P10: 0000000000000007
True
Incompatible with Windows Terminal - **Describe the bug** Incompatible with Windows Terminal **To Reproduce** Steps to reproduce the behavior: when maxto runing Windows Terminal crash **System information:** - Windows version: Win10 1909 18363.592 - MaxTo version: 2.1.0A3 - Windows Terminal version: 0.8.10261.0 from MSStore **Additional context** Application Error Event: > 错误应用程序名称: WindowsTerminal.exe,版本: 0.8.2001.26001,时间戳: 0x5e2e5240 错误模块名称: InputHost.dll,版本: 10.0.18362.387,时间戳: 0x1217c36e 异常代码: 0xc0000409 错误偏移量: 0x0000000000041a73 错误进程 ID: 0x37ac 错误应用程序启动时间: 0x01d5d8ce8abcecf5 错误应用程序路径: C:\Program Files\WindowsApps\Microsoft.WindowsTerminal_0.8.10261.0_x64__8wekyb3d8bbwe\WindowsTerminal.exe 错误模块路径: C:\Windows\System32\InputHost.dll 报告 ID: 1b13bfdf-f337-497d-bbe0-7bb9f7c8819b 错误程序包全名: Microsoft.WindowsTerminal_0.8.10261.0_x64__8wekyb3d8bbwe 错误程序包相对应用程序 ID: App Windows Error Reporting: > 故障存储段 1633211405700529229,类型 5 事件名称: MoBEX 响应: 不可用 Cab ID: 0 问题签名: P1: Microsoft.WindowsTerminal_0.8.10261.0_x64__8wekyb3d8bbwe P2: praid:App P3: 0.8.2001.26001 P4: 5e2e5240 P5: InputHost.dll P6: 10.0.18362.387 P7: 1217c36e P8: 0000000000041a73 P9: c0000409 P10: 0000000000000007
comp
incompatible with windows terminal describe the bug incompatible with windows terminal to reproduce steps to reproduce the behavior when maxto runing windows terminal crash system information windows version maxto version windows terminal version from msstore additional context application error event 错误应用程序名称 windowsterminal exe,版本 ,时间戳 错误模块名称 inputhost dll,版本 ,时间戳 异常代码 错误偏移量 错误进程 id 错误应用程序启动时间 错误应用程序路径 c program files windowsapps microsoft windowsterminal windowsterminal exe 错误模块路径 c windows inputhost dll 报告 id 错误程序包全名 microsoft windowsterminal 错误程序包相对应用程序 id app windows error reporting 故障存储段 ,类型 事件名称 mobex 响应 不可用 cab id 问题签名 microsoft windowsterminal praid app inputhost dll
1
39,931
12,728,946,750
IssuesEvent
2020-06-25 04:18:37
emilwareus/thimble.mozilla.org
https://api.github.com/repos/emilwareus/thimble.mozilla.org
opened
WS-2015-0033 (High) detected in uglify-js-2.3.6.tgz
security vulnerability
## WS-2015-0033 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>uglify-js-2.3.6.tgz</b></p></summary> <p>JavaScript parser, mangler/compressor and beautifier toolkit</p> <p>Library home page: <a href="https://registry.npmjs.org/uglify-js/-/uglify-js-2.3.6.tgz">https://registry.npmjs.org/uglify-js/-/uglify-js-2.3.6.tgz</a></p> <p>Path to dependency file: /tmp/ws-scm/thimble.mozilla.org/services/id.webmaker.org/package.json</p> <p>Path to vulnerable library: /tmp/ws-scm/thimble.mozilla.org/services/login.webmaker.org/node_modules/uglify-js/package.json,/tmp/ws-scm/thimble.mozilla.org/services/login.webmaker.org/node_modules/uglify-js/package.json</p> <p> Dependency Hierarchy: - bower-1.3.8.tgz (Root Library) - handlebars-1.3.0.tgz - :x: **uglify-js-2.3.6.tgz** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/emilwareus/thimble.mozilla.org/commit/36866e15be90f8405b84c4af5b95c97afa6f3416">36866e15be90f8405b84c4af5b95c97afa6f3416</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> uglifier incorrectly handles non-boolean comparisons during minification.The upstream library for the Ruby uglifier gem, UglifyJS, is affected by a vulnerability that allows a specially crafted Javascript file to have altered functionality after minification. This bug, found in UglifyJS versions 2.4.23 and earlier, was demonstrated to allow potentially malicious code to be hidden within secure code, and activated by the minification process. 
<p>Publish Date: 2015-07-22 <p>URL: <a href=https://github.com/mishoo/UglifyJS2/issues/751>WS-2015-0033</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.2</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Changed - Impact Metrics: - Confidentiality Impact: Low - Integrity Impact: Low - Availability Impact: None </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://hakiri.io/technologies/uglifier/issues/279911d9720338">https://hakiri.io/technologies/uglifier/issues/279911d9720338</a></p> <p>Release Date: 2020-06-07</p> <p>Fix Resolution: Uglifier - 2.7.2;uglify-js - v2.4.24</p> </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
True
WS-2015-0033 (High) detected in uglify-js-2.3.6.tgz - ## WS-2015-0033 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>uglify-js-2.3.6.tgz</b></p></summary> <p>JavaScript parser, mangler/compressor and beautifier toolkit</p> <p>Library home page: <a href="https://registry.npmjs.org/uglify-js/-/uglify-js-2.3.6.tgz">https://registry.npmjs.org/uglify-js/-/uglify-js-2.3.6.tgz</a></p> <p>Path to dependency file: /tmp/ws-scm/thimble.mozilla.org/services/id.webmaker.org/package.json</p> <p>Path to vulnerable library: /tmp/ws-scm/thimble.mozilla.org/services/login.webmaker.org/node_modules/uglify-js/package.json,/tmp/ws-scm/thimble.mozilla.org/services/login.webmaker.org/node_modules/uglify-js/package.json</p> <p> Dependency Hierarchy: - bower-1.3.8.tgz (Root Library) - handlebars-1.3.0.tgz - :x: **uglify-js-2.3.6.tgz** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/emilwareus/thimble.mozilla.org/commit/36866e15be90f8405b84c4af5b95c97afa6f3416">36866e15be90f8405b84c4af5b95c97afa6f3416</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> uglifier incorrectly handles non-boolean comparisons during minification.The upstream library for the Ruby uglifier gem, UglifyJS, is affected by a vulnerability that allows a specially crafted Javascript file to have altered functionality after minification. This bug, found in UglifyJS versions 2.4.23 and earlier, was demonstrated to allow potentially malicious code to be hidden within secure code, and activated by the minification process. 
<p>Publish Date: 2015-07-22 <p>URL: <a href=https://github.com/mishoo/UglifyJS2/issues/751>WS-2015-0033</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.2</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Changed - Impact Metrics: - Confidentiality Impact: Low - Integrity Impact: Low - Availability Impact: None </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://hakiri.io/technologies/uglifier/issues/279911d9720338">https://hakiri.io/technologies/uglifier/issues/279911d9720338</a></p> <p>Release Date: 2020-06-07</p> <p>Fix Resolution: Uglifier - 2.7.2;uglify-js - v2.4.24</p> </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
non_comp
ws high detected in uglify js tgz ws high severity vulnerability vulnerable library uglify js tgz javascript parser mangler compressor and beautifier toolkit library home page a href path to dependency file tmp ws scm thimble mozilla org services id webmaker org package json path to vulnerable library tmp ws scm thimble mozilla org services login webmaker org node modules uglify js package json tmp ws scm thimble mozilla org services login webmaker org node modules uglify js package json dependency hierarchy bower tgz root library handlebars tgz x uglify js tgz vulnerable library found in head commit a href vulnerability details uglifier incorrectly handles non boolean comparisons during minification the upstream library for the ruby uglifier gem uglifyjs is affected by a vulnerability that allows a specially crafted javascript file to have altered functionality after minification this bug found in uglifyjs versions and earlier was demonstrated to allow potentially malicious code to be hidden within secure code and activated by the minification process publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope changed impact metrics confidentiality impact low integrity impact low availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution uglifier uglify js step up your open source security game with whitesource
0
15,436
19,702,880,245
IssuesEvent
2022-01-12 18:26:43
cseelhoff/RimThreaded
https://api.github.com/repos/cseelhoff/RimThreaded
closed
[Humanoid Alien Races] Body reference incorrect if HAR present
Mod Incompatibility Confirmed Fixed In Preview
Same as #642 (the patch proposed in that issue also fixes this one) https://github.com/cseelhoff/RimThreaded/blob/273033faf108caba82b0818406ed7a7433a9d60c/Source/HediffSet_Patch.cs#L110
True
[Humanoid Alien Races] Body reference incorrect if HAR present - Same as #642 (the patch proposed in that issue also fixes this one) https://github.com/cseelhoff/RimThreaded/blob/273033faf108caba82b0818406ed7a7433a9d60c/Source/HediffSet_Patch.cs#L110
comp
body reference incorrect if har present same as the patch proposed in that issue also fixes this one
1
49,329
13,452,465,341
IssuesEvent
2020-09-08 22:14:58
Watemlifts/JCSprout
https://api.github.com/repos/Watemlifts/JCSprout
opened
CVE-2020-9547 (High) detected in jackson-databind-2.8.9.jar
security vulnerability
## CVE-2020-9547 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jackson-databind-2.8.9.jar</b></p></summary> <p>General data-binding functionality for Jackson: works on core streaming API</p> <p>Library home page: <a href="http://github.com/FasterXML/jackson">http://github.com/FasterXML/jackson</a></p> <p>Path to dependency file: /tmp/ws-scm/JCSprout/pom.xml</p> <p>Path to vulnerable library: /home/wss-scanner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.8.9/jackson-databind-2.8.9.jar</p> <p> Dependency Hierarchy: - kafka_2.11-2.3.1.jar (Root Library) - :x: **jackson-databind-2.8.9.jar** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/Watemlifts/JCSprout/commit/eca1ccf901218440203475fecd8ff5fd7aeaaa88">eca1ccf901218440203475fecd8ff5fd7aeaaa88</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> FasterXML jackson-databind 2.x before 2.9.10.4 mishandles the interaction between serialization gadgets and typing, related to com.ibatis.sqlmap.engine.transaction.jta.JtaTransactionConfig (aka ibatis-sqlmap). 
<p>Publish Date: 2020-03-02 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-9547>CVE-2020-9547</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.8</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: High - Integrity Impact: High - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-9547">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-9547</a></p> <p>Release Date: 2020-03-02</p> <p>Fix Resolution: com.fasterxml.jackson.core:jackson-databind:2.10.3</p> </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
True
CVE-2020-9547 (High) detected in jackson-databind-2.8.9.jar - ## CVE-2020-9547 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jackson-databind-2.8.9.jar</b></p></summary> <p>General data-binding functionality for Jackson: works on core streaming API</p> <p>Library home page: <a href="http://github.com/FasterXML/jackson">http://github.com/FasterXML/jackson</a></p> <p>Path to dependency file: /tmp/ws-scm/JCSprout/pom.xml</p> <p>Path to vulnerable library: /home/wss-scanner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.8.9/jackson-databind-2.8.9.jar</p> <p> Dependency Hierarchy: - kafka_2.11-2.3.1.jar (Root Library) - :x: **jackson-databind-2.8.9.jar** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/Watemlifts/JCSprout/commit/eca1ccf901218440203475fecd8ff5fd7aeaaa88">eca1ccf901218440203475fecd8ff5fd7aeaaa88</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> FasterXML jackson-databind 2.x before 2.9.10.4 mishandles the interaction between serialization gadgets and typing, related to com.ibatis.sqlmap.engine.transaction.jta.JtaTransactionConfig (aka ibatis-sqlmap). 
<p>Publish Date: 2020-03-02 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-9547>CVE-2020-9547</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.8</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: High - Integrity Impact: High - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-9547">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-9547</a></p> <p>Release Date: 2020-03-02</p> <p>Fix Resolution: com.fasterxml.jackson.core:jackson-databind:2.10.3</p> </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
non_comp
cve high detected in jackson databind jar cve high severity vulnerability vulnerable library jackson databind jar general data binding functionality for jackson works on core streaming api library home page a href path to dependency file tmp ws scm jcsprout pom xml path to vulnerable library home wss scanner repository com fasterxml jackson core jackson databind jackson databind jar dependency hierarchy kafka jar root library x jackson databind jar vulnerable library found in head commit a href vulnerability details fasterxml jackson databind x before mishandles the interaction between serialization gadgets and typing related to com ibatis sqlmap engine transaction jta jtatransactionconfig aka ibatis sqlmap publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution com fasterxml jackson core jackson databind step up your open source security game with whitesource
0
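The "Fix Resolution" field in the record above names a plain Maven coordinate (`com.fasterxml.jackson.core:jackson-databind:2.10.3`). A minimal sketch of splitting such a coordinate into its group, artifact, and version parts, as one might when post-processing these records; the helper name is hypothetical, not part of any scanner API:

```python
def parse_maven_coordinate(coord: str):
    """Split a group:artifact:version string into its three parts.

    Assumes the plain three-field form used in the 'Fix Resolution'
    entries in these records; raises ValueError otherwise.
    """
    parts = coord.split(":")
    if len(parts) != 3:
        raise ValueError(f"expected group:artifact:version, got {coord!r}")
    group, artifact, version = parts
    return group, artifact, version

# Coordinate taken verbatim from the record above:
fix = parse_maven_coordinate("com.fasterxml.jackson.core:jackson-databind:2.10.3")
```

Some "Fix Resolution" fields in this dataset pack several coordinates into one comma-separated string, so a real pipeline would need to split on commas first; the sketch covers only the single-coordinate case.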
84,040
10,467,780,366
IssuesEvent
2019-09-22 08:36:19
8bitPit/Niagara-Issues
https://api.github.com/repos/8bitPit/Niagara-Issues
closed
Transparent background on menus
design low priority
the menu when you hold/swipe right (or folder menu, which is in issue 13) should have an option to make it have a transparent/translucent background with an option to add blur
1.0
Transparent background on menus - the menu when you hold/swipe right (or folder menu, which is in issue 13) should have an option to make it have a transparent/translucent background with an option to add blur
non_comp
transparent background on menus the menu when you hold swipe right or folder menu which is in issue should have an option to make it have a transparent translucent background with an option to add blur
0
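Across these rows the textual `label` column (`comp` / `non_comp`) lines up with the `binary_label` column (1 / 0). A hedged sketch of that mapping, assuming those are the only two values that occur, as they do in the records shown here:

```python
def to_binary_label(label: str) -> int:
    """Map the textual label used in these rows to the 0/1 column.

    Assumption: only 'comp' and 'non_comp' appear, as observed in the
    records in this chunk; anything else raises KeyError.
    """
    mapping = {"comp": 1, "non_comp": 0}
    return mapping[label]
```

Using the first record above as a check: its `label` is `non_comp` and its `binary_label` is 0, matching the mapping.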
7,402
9,653,093,775
IssuesEvent
2019-05-18 23:55:07
adventuregamestudio/ags
https://api.github.com/repos/adventuregamestudio/ags
closed
Some old games have their inventory 'empty' after picking up a item.
backwards compatibility bug
I found this in two free games: Heartland Deluxe: manipulate the trash bag twice to get a pill bottle, open inventory, nothing. https://www.adventuregamestudio.co.uk/site/games/game/813/ This city at night: talk to boss then pick up the folder on the IT guy desk, open inventory, nothing https://www.adventuregamestudio.co.uk/site/games/game/2313/
True
Some old games have their inventory 'empty' after picking up a item. - I found this in two free games: Heartland Deluxe: manipulate the trash bag twice to get a pill bottle, open inventory, nothing. https://www.adventuregamestudio.co.uk/site/games/game/813/ This city at night: talk to boss then pick up the folder on the IT guy desk, open inventory, nothing https://www.adventuregamestudio.co.uk/site/games/game/2313/
comp
some old games have their inventory empty after picking up a item i found this in two free games heartland deluxe manipulate the trash bag twice to get a pill bottle open inventory nothing this city at night talk to boss then pick up the folder on the it guy desk open inventory nothing
1
158,488
20,026,371,071
IssuesEvent
2022-02-01 21:49:51
HoangBachLeLe/ConferenceTrackManagement
https://api.github.com/repos/HoangBachLeLe/ConferenceTrackManagement
opened
CVE-2021-23463 (High) detected in h2-1.4.200.jar
security vulnerability
## CVE-2021-23463 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>h2-1.4.200.jar</b></p></summary> <p>H2 Database Engine</p> <p>Library home page: <a href="https://h2database.com">https://h2database.com</a></p> <p>Path to dependency file: /build.gradle</p> <p>Path to vulnerable library: /les-2/files-2.1/com.h2database/h2/1.4.200/f7533fe7cb8e99c87a43d325a77b4b678ad9031a/h2-1.4.200.jar</p> <p> Dependency Hierarchy: - :x: **h2-1.4.200.jar** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/HoangBachLeLe/ConferenceTrackManagement/commit/8dadce50efdc1015a2d12c501ac6f145d4261a14">8dadce50efdc1015a2d12c501ac6f145d4261a14</a></p> <p>Found in base branch: <b>main</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> The package com.h2database:h2 from 1.4.198 and before 2.0.202 are vulnerable to XML External Entity (XXE) Injection via the org.h2.jdbc.JdbcSQLXML class object, when it receives parsed string data from org.h2.jdbc.JdbcResultSet.getSQLXML() method. If it executes the getSource() method when the parameter is DOMSource.class it will trigger the vulnerability. 
<p>Publish Date: 2021-12-10 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-23463>CVE-2021-23463</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.1</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: High - Integrity Impact: None - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://nvd.nist.gov/vuln/detail/CVE-2021-23463">https://nvd.nist.gov/vuln/detail/CVE-2021-23463</a></p> <p>Release Date: 2021-12-10</p> <p>Fix Resolution: com.h2database:h2:2.0.202</p> </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
True
CVE-2021-23463 (High) detected in h2-1.4.200.jar - ## CVE-2021-23463 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>h2-1.4.200.jar</b></p></summary> <p>H2 Database Engine</p> <p>Library home page: <a href="https://h2database.com">https://h2database.com</a></p> <p>Path to dependency file: /build.gradle</p> <p>Path to vulnerable library: /les-2/files-2.1/com.h2database/h2/1.4.200/f7533fe7cb8e99c87a43d325a77b4b678ad9031a/h2-1.4.200.jar</p> <p> Dependency Hierarchy: - :x: **h2-1.4.200.jar** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/HoangBachLeLe/ConferenceTrackManagement/commit/8dadce50efdc1015a2d12c501ac6f145d4261a14">8dadce50efdc1015a2d12c501ac6f145d4261a14</a></p> <p>Found in base branch: <b>main</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> The package com.h2database:h2 from 1.4.198 and before 2.0.202 are vulnerable to XML External Entity (XXE) Injection via the org.h2.jdbc.JdbcSQLXML class object, when it receives parsed string data from org.h2.jdbc.JdbcResultSet.getSQLXML() method. If it executes the getSource() method when the parameter is DOMSource.class it will trigger the vulnerability. 
<p>Publish Date: 2021-12-10 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-23463>CVE-2021-23463</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.1</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: High - Integrity Impact: None - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://nvd.nist.gov/vuln/detail/CVE-2021-23463">https://nvd.nist.gov/vuln/detail/CVE-2021-23463</a></p> <p>Release Date: 2021-12-10</p> <p>Fix Resolution: com.h2database:h2:2.0.202</p> </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
non_comp
cve high detected in jar cve high severity vulnerability vulnerable library jar database engine library home page a href path to dependency file build gradle path to vulnerable library les files com jar dependency hierarchy x jar vulnerable library found in head commit a href found in base branch main vulnerability details the package com from and before are vulnerable to xml external entity xxe injection via the org jdbc jdbcsqlxml class object when it receives parsed string data from org jdbc jdbcresultset getsqlxml method if it executes the getsource method when the parameter is domsource class it will trigger the vulnerability publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution com step up your open source security game with whitesource
0
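The record above states that `com.h2database:h2` "from 1.4.198 and before 2.0.202" is affected by CVE-2021-23463. A minimal sketch of that version-range check, assuming plain dotted numeric versions (which holds for the releases the record names):

```python
def is_h2_vulnerable(version: str) -> bool:
    """True if version falls in the affected range [1.4.198, 2.0.202).

    Range taken from the record's vulnerability details; assumes simple
    dotted numeric versions with no qualifiers.
    """
    v = tuple(int(p) for p in version.split("."))
    # Tuple comparison gives component-wise ordering: compare major,
    # then minor, then patch.
    return (1, 4, 198) <= v < (2, 0, 202)
```

For the library flagged above, `is_h2_vulnerable("1.4.200")` is true, consistent with the suggested fix of upgrading to 2.0.202.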
6,527
8,796,781,928
IssuesEvent
2018-12-23 11:55:51
Voiteh/Codamo
https://api.github.com/repos/Voiteh/Codamo
closed
Create engine module
API Change Compatibility break Enhancement P1
- engine module - contains implementation of Codamo interface and all required logic related stuff like finders configurators etc. Those are not exposed outside of module. It will also contain single implementation of Provider which operates on modules or single transformers, but the engine will know how to handle it withouth abstraction so we have always full implementation. - api.core - all API which can't divided for specific submodules because engine module also uses it, Delegator, transformers (API), transformers configurations (API) - transformer.core - ceylon language (generic), basic and iterable, transformers (no collections) - a) Because of imports and new hierarchy collections/iterable transformer implementation would be required to be changed. As List is avaiable in ceylon.langauge and ArrayList in ceylon.collection. We are not able to tell that default resolvance is done to ArrayList in transformer.core - transformer.collection - collection transformers - transformer.json
True
Create engine module - - engine module - contains implementation of Codamo interface and all required logic related stuff like finders configurators etc. Those are not exposed outside of module. It will also contain single implementation of Provider which operates on modules or single transformers, but the engine will know how to handle it withouth abstraction so we have always full implementation. - api.core - all API which can't divided for specific submodules because engine module also uses it, Delegator, transformers (API), transformers configurations (API) - transformer.core - ceylon language (generic), basic and iterable, transformers (no collections) - a) Because of imports and new hierarchy collections/iterable transformer implementation would be required to be changed. As List is avaiable in ceylon.langauge and ArrayList in ceylon.collection. We are not able to tell that default resolvance is done to ArrayList in transformer.core - transformer.collection - collection transformers - transformer.json
comp
create engine module engine module contains implementation of codamo interface and all required logic related stuff like finders configurators etc those are not exposed outside of module it will also contain single implementation of provider which operates on modules or single transformers but the engine will know how to handle it withouth abstraction so we have always full implementation api core all api which can t divided for specific submodules because engine module also uses it delegator transformers api transformers configurations api transformer core ceylon language generic basic and iterable transformers no collections a because of imports and new hierarchy collections iterable transformer implementation would be required to be changed as list is avaiable in ceylon langauge and arraylist in ceylon collection we are not able to tell that default resolvance is done to arraylist in transformer core transformer collection collection transformers transformer json
1
4,699
11,583,101,022
IssuesEvent
2020-02-22 08:34:18
avdotion/clue
https://api.github.com/repos/avdotion/clue
closed
Widgets System
architecture
Now the widgets will be kept separately from components. Widgets are independent, however every widget has access to common store. Components are uikit. Redux is the storage.
1.0
Widgets System - Now the widgets will be kept separately from components. Widgets are independent, however every widget has access to common store. Components are uikit. Redux is the storage.
non_comp
widgets system now the widgets will be kept separately from components widgets are independent however every widget has access to common store components are uikit redux is the storage
0
7,878
10,092,487,281
IssuesEvent
2019-07-26 16:49:08
storybookjs/storybook
https://api.github.com/repos/storybookjs/storybook
closed
Storybook .eslintrc
compatibility with other tools inactive question / support
Hi, i'm trying to setup a vue storybook with eslint, i believe the problem is related with storybook, because i'm reusing the same components in a different vue project without storybook and i'm getting linter errors in that project as expected. After reading all the documentation i found a possible solution where someone was using `eslint-loader` but i couldn't make it work. Is there someone with the same problem? I'm trying to implement the `recommended` linter version ``` ... "@storybook/vue": "^5.0.6", "@vue/cli-plugin-eslint": "^3.5.1", "babel-eslint": "^10.0.1", "babel-loader": "^8.0.4", "eslint": "^5.15.2", "eslint-plugin-vue": "^5.0.0", ... ``` ``` module.exports = { root: true, env: { browser: true, node: true }, 'extends': [ 'plugin:vue/recommended', '@vue/standard' ], rules: { 'no-console': 'off', 'no-debugger': 'off' }, parserOptions: { parser: 'babel-eslint' } } ``` Example of linter error: **This returns error** ` isItWorking: Boolean ` **This doesn't** ` isItWorking: { type: Boolean, default: false }`
True
Storybook .eslintrc - Hi, i'm trying to setup a vue storybook with eslint, i believe the problem is related with storybook, because i'm reusing the same components in a different vue project without storybook and i'm getting linter errors in that project as expected. After reading all the documentation i found a possible solution where someone was using `eslint-loader` but i couldn't make it work. Is there someone with the same problem? I'm trying to implement the `recommended` linter version ``` ... "@storybook/vue": "^5.0.6", "@vue/cli-plugin-eslint": "^3.5.1", "babel-eslint": "^10.0.1", "babel-loader": "^8.0.4", "eslint": "^5.15.2", "eslint-plugin-vue": "^5.0.0", ... ``` ``` module.exports = { root: true, env: { browser: true, node: true }, 'extends': [ 'plugin:vue/recommended', '@vue/standard' ], rules: { 'no-console': 'off', 'no-debugger': 'off' }, parserOptions: { parser: 'babel-eslint' } } ``` Example of linter error: **This returns error** ` isItWorking: Boolean ` **This doesn't** ` isItWorking: { type: Boolean, default: false }`
comp
storybook eslintrc hi i m trying to setup a vue storybook with eslint i believe the problem is related with storybook because i m reusing the same components in a different vue project without storybook and i m getting linter errors in that project as expected after reading all the documentation i found a possible solution where someone was using eslint loader but i couldn t make it work is there someone with the same problem i m trying to implement the recommended linter version storybook vue vue cli plugin eslint babel eslint babel loader eslint eslint plugin vue module exports root true env browser true node true extends plugin vue recommended vue standard rules no console off no debugger off parseroptions parser babel eslint example of linter error this returns error isitworking boolean this doesn t isitworking type boolean default false
1
95,870
10,895,439,397
IssuesEvent
2019-11-19 10:39:18
loco-3d/crocoddyl
https://api.github.com/repos/loco-3d/crocoddyl
closed
Updated the REAME file + cleaned up references - [merged]
documentation gitlab merge request
In GitLab by @cmastalli on Apr 4, 2019, 10:32 _Merges topic/update-readme -> devel_ This is related to the reported issue #160.
1.0
Updated the REAME file + cleaned up references - [merged] - In GitLab by @cmastalli on Apr 4, 2019, 10:32 _Merges topic/update-readme -> devel_ This is related to the reported issue #160.
non_comp
updated the reame file cleaned up references in gitlab by cmastalli on apr merges topic update readme devel this is related to the reported issue
0
8,373
10,411,012,677
IssuesEvent
2019-09-13 12:55:42
JuliaReach/LazySets.jl
https://api.github.com/repos/JuliaReach/LazySets.jl
closed
Do not redefine rank of sparse matrix in Julia >= v1.2
compatibility fix
We added the workaround in #1468, but Julia v1.2 includes it.
True
Do not redefine rank of sparse matrix in Julia >= v1.2 - We added the workaround in #1468, but Julia v1.2 includes it.
comp
do not redefine rank of sparse matrix in julia we added the workaround in but julia includes it
1
78,669
15,586,061,768
IssuesEvent
2021-03-18 01:05:02
Nehamaefi/fitbit-api-example-java
https://api.github.com/repos/Nehamaefi/fitbit-api-example-java
closed
CVE-2017-5647 (High) detected in tomcat-embed-core-8.5.4.jar - autoclosed
security vulnerability
## CVE-2017-5647 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>tomcat-embed-core-8.5.4.jar</b></p></summary> <p>Core Tomcat implementation</p> <p>Library home page: <a href="http://tomcat.apache.org/">http://tomcat.apache.org/</a></p> <p>Path to dependency file: fitbit-api-example-java/pom.xml</p> <p>Path to vulnerable library: /root/.m2/repository/org/apache/tomcat/embed/tomcat-embed-core/8.5.4/tomcat-embed-core-8.5.4.jar</p> <p> Dependency Hierarchy: - spring-boot-starter-web-1.4.0.RELEASE.jar (Root Library) - spring-boot-starter-tomcat-1.4.0.RELEASE.jar - :x: **tomcat-embed-core-8.5.4.jar** (Vulnerable Library) </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> A bug in the handling of the pipelined requests in Apache Tomcat 9.0.0.M1 to 9.0.0.M18, 8.5.0 to 8.5.12, 8.0.0.RC1 to 8.0.42, 7.0.0 to 7.0.76, and 6.0.0 to 6.0.52, when send file was used, results in the pipelined request being lost when send file processing of the previous request completed. This could result in responses appearing to be sent for the wrong request. For example, a user agent that sent requests A, B and C could see the correct response for request A, the response for request C for request B and no response for request C. 
<p>Publish Date: 2017-04-17 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2017-5647>CVE-2017-5647</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: High - Integrity Impact: None - Availability Impact: None </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2017-5647">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2017-5647</a></p> <p>Release Date: 2017-04-17</p> <p>Fix Resolution: org.apache.tomcat.embed:tomcat-embed-core:9.0.0.M19,8.5.13,8.0.43,7.0.77,org.apache.tomcat:tomcat-coyote:9.0.0.M19,8.5.13,8.0.43,7.0.77,org.apache.tomcat:coyote:6.0.53</p> </p> </details> <p></p> <!-- 
<REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"Java","groupId":"org.apache.tomcat.embed","packageName":"tomcat-embed-core","packageVersion":"8.5.4","isTransitiveDependency":true,"dependencyTree":"org.springframework.boot:spring-boot-starter-web:1.4.0.RELEASE;org.springframework.boot:spring-boot-starter-tomcat:1.4.0.RELEASE;org.apache.tomcat.embed:tomcat-embed-core:8.5.4","isMinimumFixVersionAvailable":true,"minimumFixVersion":"org.apache.tomcat.embed:tomcat-embed-core:9.0.0.M19,8.5.13,8.0.43,7.0.77,org.apache.tomcat:tomcat-coyote:9.0.0.M19,8.5.13,8.0.43,7.0.77,org.apache.tomcat:coyote:6.0.53"}],"vulnerabilityIdentifier":"CVE-2017-5647","vulnerabilityDetails":"A bug in the handling of the pipelined requests in Apache Tomcat 9.0.0.M1 to 9.0.0.M18, 8.5.0 to 8.5.12, 8.0.0.RC1 to 8.0.42, 7.0.0 to 7.0.76, and 6.0.0 to 6.0.52, when send file was used, results in the pipelined request being lost when send file processing of the previous request completed. This could result in responses appearing to be sent for the wrong request. For example, a user agent that sent requests A, B and C could see the correct response for request A, the response for request C for request B and no response for request C.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2017-5647","cvss3Severity":"high","cvss3Score":"7.5","cvss3Metrics":{"A":"None","AC":"Low","PR":"None","S":"Unchanged","C":"High","UI":"None","AV":"Network","I":"None"},"extraData":{}}</REMEDIATE> -->
True
CVE-2017-5647 (High) detected in tomcat-embed-core-8.5.4.jar - autoclosed - ## CVE-2017-5647 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>tomcat-embed-core-8.5.4.jar</b></p></summary> <p>Core Tomcat implementation</p> <p>Library home page: <a href="http://tomcat.apache.org/">http://tomcat.apache.org/</a></p> <p>Path to dependency file: fitbit-api-example-java/pom.xml</p> <p>Path to vulnerable library: /root/.m2/repository/org/apache/tomcat/embed/tomcat-embed-core/8.5.4/tomcat-embed-core-8.5.4.jar</p> <p> Dependency Hierarchy: - spring-boot-starter-web-1.4.0.RELEASE.jar (Root Library) - spring-boot-starter-tomcat-1.4.0.RELEASE.jar - :x: **tomcat-embed-core-8.5.4.jar** (Vulnerable Library) </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> A bug in the handling of the pipelined requests in Apache Tomcat 9.0.0.M1 to 9.0.0.M18, 8.5.0 to 8.5.12, 8.0.0.RC1 to 8.0.42, 7.0.0 to 7.0.76, and 6.0.0 to 6.0.52, when send file was used, results in the pipelined request being lost when send file processing of the previous request completed. This could result in responses appearing to be sent for the wrong request. For example, a user agent that sent requests A, B and C could see the correct response for request A, the response for request C for request B and no response for request C. 
<p>Publish Date: 2017-04-17 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2017-5647>CVE-2017-5647</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: High - Integrity Impact: None - Availability Impact: None </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2017-5647">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2017-5647</a></p> <p>Release Date: 2017-04-17</p> <p>Fix Resolution: org.apache.tomcat.embed:tomcat-embed-core:9.0.0.M19,8.5.13,8.0.43,7.0.77,org.apache.tomcat:tomcat-coyote:9.0.0.M19,8.5.13,8.0.43,7.0.77,org.apache.tomcat:coyote:6.0.53</p> </p> </details> <p></p> <!-- 
<REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"Java","groupId":"org.apache.tomcat.embed","packageName":"tomcat-embed-core","packageVersion":"8.5.4","isTransitiveDependency":true,"dependencyTree":"org.springframework.boot:spring-boot-starter-web:1.4.0.RELEASE;org.springframework.boot:spring-boot-starter-tomcat:1.4.0.RELEASE;org.apache.tomcat.embed:tomcat-embed-core:8.5.4","isMinimumFixVersionAvailable":true,"minimumFixVersion":"org.apache.tomcat.embed:tomcat-embed-core:9.0.0.M19,8.5.13,8.0.43,7.0.77,org.apache.tomcat:tomcat-coyote:9.0.0.M19,8.5.13,8.0.43,7.0.77,org.apache.tomcat:coyote:6.0.53"}],"vulnerabilityIdentifier":"CVE-2017-5647","vulnerabilityDetails":"A bug in the handling of the pipelined requests in Apache Tomcat 9.0.0.M1 to 9.0.0.M18, 8.5.0 to 8.5.12, 8.0.0.RC1 to 8.0.42, 7.0.0 to 7.0.76, and 6.0.0 to 6.0.52, when send file was used, results in the pipelined request being lost when send file processing of the previous request completed. This could result in responses appearing to be sent for the wrong request. For example, a user agent that sent requests A, B and C could see the correct response for request A, the response for request C for request B and no response for request C.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2017-5647","cvss3Severity":"high","cvss3Score":"7.5","cvss3Metrics":{"A":"None","AC":"Low","PR":"None","S":"Unchanged","C":"High","UI":"None","AV":"Network","I":"None"},"extraData":{}}</REMEDIATE> -->
non_comp
cve high detected in tomcat embed core jar autoclosed cve high severity vulnerability vulnerable library tomcat embed core jar core tomcat implementation library home page a href path to dependency file fitbit api example java pom xml path to vulnerable library root repository org apache tomcat embed tomcat embed core tomcat embed core jar dependency hierarchy spring boot starter web release jar root library spring boot starter tomcat release jar x tomcat embed core jar vulnerable library vulnerability details a bug in the handling of the pipelined requests in apache tomcat to to to to and to when send file was used results in the pipelined request being lost when send file processing of the previous request completed this could result in responses appearing to be sent for the wrong request for example a user agent that sent requests a b and c could see the correct response for request a the response for request c for request b and no response for request c publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact none availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution org apache tomcat embed tomcat embed core org apache tomcat tomcat coyote org apache tomcat coyote isopenpronvulnerability true ispackagebased true isdefaultbranch true packages vulnerabilityidentifier cve vulnerabilitydetails a bug in the handling of the pipelined requests in apache tomcat to to to to and to when send file was used results in the pipelined request being lost when send file processing of the previous request completed this could result in responses appearing to be sent for the wrong request for example a user agent that sent requests a b and c could see the correct response for request a the response 
for request c for request b and no response for request c vulnerabilityurl
0
129,220
17,766,486,392
IssuesEvent
2021-08-30 08:11:59
decidim/decidim
https://api.github.com/repos/decidim/decidim
closed
Online meetings iframe: extended UI
module: meetings status: design required contract: online-meetings
Ref: OM15 / DECOM-33 **Is your feature request related to a problem? Please describe.** Some videoconferences providers have lots of elements in their default UI and that brings too much noise to the current Decidim design. **Describe the solution you'd like** To have a bigger default layout wrapper when entering a videoconference. **Describe alternatives you've considered** To have another kind of integration (for instance with Javascript APIs) that would allow us to have much better control, but this is out of the budget of the current development. **Additional context** By default this is the BigBlueButton UI: ![](https://i.imgur.com/HAiAeuH.png) **Does this issue could impact users' private data?** Potentially, as we're talking about videoconferences. **Acceptance criteria** - [x] As a visitor when I enter a videoconference, Decidim UI changes and the wrapper layout is wider
1.0
Online meetings iframe: extended UI - Ref: OM15 / DECOM-33 **Is your feature request related to a problem? Please describe.** Some videoconferences providers have lots of elements in their default UI and that brings too much noise to the current Decidim design. **Describe the solution you'd like** To have a bigger default layout wrapper when entering a videoconference. **Describe alternatives you've considered** To have another kind of integration (for instance with Javascript APIs) that would allow us to have much better control, but this is out of the budget of the current development. **Additional context** By default this is the BigBlueButton UI: ![](https://i.imgur.com/HAiAeuH.png) **Does this issue could impact users' private data?** Potentially, as we're talking about videoconferences. **Acceptance criteria** - [x] As a visitor when I enter a videoconference, Decidim UI changes and the wrapper layout is wider
non_comp
online meetings iframe extended ui ref decom is your feature request related to a problem please describe some videoconferences providers have lots of elements in their default ui and that brings too much noise to the current decidim design describe the solution you d like to have a bigger default layout wrapper when entering a videoconference describe alternatives you ve considered to have another kind of integration for instance with javascript apis that would allow us to have much better control but this is out of the budget of the current development additional context by default this is the bigbluebutton ui does this issue could impact users private data potentially as we re talking about videoconferences acceptance criteria as a visitor when i enter a videoconference decidim ui changes and the wrapper layout is wider
0
10,273
12,268,334,435
IssuesEvent
2020-05-07 12:20:52
dosbox-staging/dosbox-staging
https://api.github.com/repos/dosbox-staging/dosbox-staging
closed
RAYMAN CD: game crashed at start
game compatibility
Hello, I just tried the latest Dosbox-staging git version from today and I have an issue with the RAYMAN DOS CD: the game crashes at start. I have attached the error messages and my conf file. [rayman_dosbox.zip](https://github.com/dosbox-staging/dosbox-staging/files/4589868/rayman_dosbox.zip) ![Capture d’écran de 2020-05-07 00-56-17](https://user-images.githubusercontent.com/325405/81236522-acfd4a00-8ffd-11ea-8cad-da8c786aadbe.png) Thank you for your help.
True
RAYMAN CD: game crashed at start - Hello, I just tried the latest Dosbox-staging git version from today and I have an issue with the RAYMAN DOS CD: the game crashes at start. I have attached the error messages and my conf file. [rayman_dosbox.zip](https://github.com/dosbox-staging/dosbox-staging/files/4589868/rayman_dosbox.zip) ![Capture d’écran de 2020-05-07 00-56-17](https://user-images.githubusercontent.com/325405/81236522-acfd4a00-8ffd-11ea-8cad-da8c786aadbe.png) Thank you for your help.
comp
rayman cd game crashed at start hello i just tried the latest dosbox staging git version from today and i have an issue with the rayman dos cd the game crashes at start i have attached the error messages and my conf file thank you for your help
1
110,829
24,014,831,027
IssuesEvent
2022-09-14 22:52:22
grpc/grpc-java
https://api.github.com/repos/grpc/grpc-java
closed
Intermittent test failure in io.grpc.okhttp.OkHttpClientTransportTest > testClientHandlerFrameLogger
code health
<!-- Please answer these questions before submitting a bug report. --> ### What version of gRPC-Java are you using? 1.47.0-SNAPSHOT ### What is your environment? <!-- operating system (Linux, Windows,...) and version, jdk version, etc. --> GitHub actions linux environment ### What did you expect to see? Successful build ### What did you see instead? grpc-okhttp build failure: https://github.com/grpc/grpc-java/runs/6119593543 io.grpc.okhttp.OkHttpClientTransportTest > testClientHandlerFrameLogger FAILED value of : iterable.size() expected : 1 but was : 2 iterable was: [java.util.logging.LogRecord@617ad1d1, java.util.logging.LogRecord@4a1bad92] ### Steps to reproduce the bug <!-- Make sure you include information that can help us debug (full error message, exception listing, stack trace, logs). --> Happens intermittently during a build.
1.0
Intermittent test failure in io.grpc.okhttp.OkHttpClientTransportTest > testClientHandlerFrameLogger - <!-- Please answer these questions before submitting a bug report. --> ### What version of gRPC-Java are you using? 1.47.0-SNAPSHOT ### What is your environment? <!-- operating system (Linux, Windows,...) and version, jdk version, etc. --> GitHub actions linux environment ### What did you expect to see? Successful build ### What did you see instead? grpc-okhttp build failure: https://github.com/grpc/grpc-java/runs/6119593543 io.grpc.okhttp.OkHttpClientTransportTest > testClientHandlerFrameLogger FAILED value of : iterable.size() expected : 1 but was : 2 iterable was: [java.util.logging.LogRecord@617ad1d1, java.util.logging.LogRecord@4a1bad92] ### Steps to reproduce the bug <!-- Make sure you include information that can help us debug (full error message, exception listing, stack trace, logs). --> Happens intermittently during a build.
non_comp
intermittent test failure in io grpc okhttp okhttpclienttransporttest testclienthandlerframelogger what version of grpc java are you using snapshot what is your environment github actions linux environment what did you expect to see successful build what did you see instead grpc okhttp build failure io grpc okhttp okhttpclienttransporttest testclienthandlerframelogger failed value of iterable size expected but was iterable was steps to reproduce the bug happens intermittently during a build
0
17,243
23,784,296,464
IssuesEvent
2022-09-02 08:38:50
jOOQ/jOOQ
https://api.github.com/repos/jOOQ/jOOQ
opened
Create separate QOM types for JSON / JSONB functions
T: Enhancement C: Functionality P: Medium T: Incompatible change E: All Editions
Most `JSON` and `JSONB` functions are identical in API and functionality. They differ only in terms of data type. In most RDBMS, people will just use `JSON` (in text format, not binary), not `JSONB`. So, this is mostly a PostgreSQL specific thing, where `JSONB` is more popular than `JSON`. For historic reasons, the implementation and API shared types. This is a challenge in the `QOM` model, which was introduced in jOOQ 3.16, after JSON (3.14). The newly added `->` and `->>` operators (see https://github.com/jOOQ/jOOQ/issues/10018) use separate `QOM` types and API for the `JSON` and `JSONB` variants, for now. We'll need to decide, strategically, which way is the preferred way: - [ ] Keep everything separate if there are separate DSL APIs (no shared types, no shared DSL, possibly shared internal base implementation) - [ ] Share types and DSL where it makes "sense" The difficulty is to find a specification where sharing makes "sense." This discussion also applies to temporal functions, e.g. when the same function can work with `DATE`, `TIMESTAMP`, `TIMESTAMPTZ` No matter the decision, if things are changed, the `QOM` API must be changed incompatibly, which is fine, because it's experimental.
True
Create separate QOM types for JSON / JSONB functions - Most `JSON` and `JSONB` functions are identical in API and functionality. They differ only in terms of data type. In most RDBMS, people will just use `JSON` (in text format, not binary), not `JSONB`. So, this is mostly a PostgreSQL specific thing, where `JSONB` is more popular than `JSON`. For historic reasons, the implementation and API shared types. This is a challenge in the `QOM` model, which was introduced in jOOQ 3.16, after JSON (3.14). The newly added `->` and `->>` operators (see https://github.com/jOOQ/jOOQ/issues/10018) use separate `QOM` types and API for the `JSON` and `JSONB` variants, for now. We'll need to decide, strategically, which way is the preferred way: - [ ] Keep everything separate if there are separate DSL APIs (no shared types, no shared DSL, possibly shared internal base implementation) - [ ] Share types and DSL where it makes "sense" The difficulty is to find a specification where sharing makes "sense." This discussion also applies to temporal functions, e.g. when the same function can work with `DATE`, `TIMESTAMP`, `TIMESTAMPTZ` No matter the decision, if things are changed, the `QOM` API must be changed incompatibly, which is fine, because it's experimental.
comp
create separate qom types for json jsonb functions most json and jsonb functions are identical in api and functionality they differ only in terms of data type in most rdbms people will just use json in text format not binary not jsonb so this is mostly a postgresql specific thing where jsonb is more popular than json for historic reasons the implementation and api shared types this is a challenge in the qom model which was introduced in jooq after json the newly added and operators see use separate qom types and api for the json and jsonb variants for now we ll need to decide strategically which way is the preferred way keep everything separate if there are separate dsl apis no shared types no shared dsl possibly shared internal base implementation share types and dsl where it makes sense the difficulty is to find a specification where sharing makes sense this discussion also applies to temporal functions e g when the same function can work with date timestamp timestamptz no matter the decision if things are changed the qom api must be changed incompatibly which is fine because it s experimental
1
108,784
4,350,588,109
IssuesEvent
2016-07-31 10:38:40
marvinlabs/customer-area
https://api.github.com/repos/marvinlabs/customer-area
opened
Allow admin notifications to be sent to author instead
enhancement Premium add-ons Priority - medium
See: http://wp-customerarea.com/support/topic/notifications-administrateur/ It would be nice to be able to choose to send to admin and/or author for some notifications like file download, content viewed, etc.
1.0
Allow admin notifications to be sent to author instead - See: http://wp-customerarea.com/support/topic/notifications-administrateur/ It would be nice to be able to choose to send to admin and/or author for some notifications like file download, content viewed, etc.
non_comp
allow admin notifications to be sent to author instead see it would be nice to be able to choose to send to admin and or author for some notifications like file download content viewed etc
0
626,356
19,808,630,979
IssuesEvent
2022-01-19 09:48:16
kammt/MemeAssembly
https://api.github.com/repos/kammt/MemeAssembly
opened
[Feature] Add DWARF-support
enhancement low priority
Since stabs is only supported on Linux, MacOS and Windows currently do not support the `-g`-Flag
1.0
[Feature] Add DWARF-support - Since stabs is only supported on Linux, MacOS and Windows currently do not support the `-g`-Flag
non_comp
add dwarf support since stabs is only supported on linux macos and windows currently do not support the g flag
0
187,088
14,426,956,607
IssuesEvent
2020-12-06 01:00:40
kalexmills/github-vet-tests-dec2020
https://api.github.com/repos/kalexmills/github-vet-tests-dec2020
closed
giantswarm/kvm-operator-node-controller: vendor/k8s.io/kubernetes/plugin/pkg/scheduler/algorithm/predicates/predicates_test.go; 48 LoC
fresh small test vendored
Found a possible issue in [giantswarm/kvm-operator-node-controller](https://www.github.com/giantswarm/kvm-operator-node-controller) at [vendor/k8s.io/kubernetes/plugin/pkg/scheduler/algorithm/predicates/predicates_test.go](https://github.com/giantswarm/kvm-operator-node-controller/blob/7146561e54142d4f986daee0206336ebee3ceb18/vendor/k8s.io/kubernetes/plugin/pkg/scheduler/algorithm/predicates/predicates_test.go#L3071-L3118) Below is the message reported by the analyzer for this snippet of code. Beware that the analyzer only reports the first issue it finds, so please do not limit your consideration to the contents of the below message. > function call which takes a reference to node at line 3084 may start a goroutine [Click here to see the code in its original context.](https://github.com/giantswarm/kvm-operator-node-controller/blob/7146561e54142d4f986daee0206336ebee3ceb18/vendor/k8s.io/kubernetes/plugin/pkg/scheduler/algorithm/predicates/predicates_test.go#L3071-L3118) <details> <summary>Click here to show the 48 line(s) of Go which triggered the analyzer.</summary> ```go for _, node := range test.nodes { var podsOnNode []*v1.Pod for _, pod := range test.pods { if pod.Spec.NodeName == node.Name { podsOnNode = append(podsOnNode, pod) } } testFit := PodAffinityChecker{ info: nodeListInfo, podLister: schedulertesting.FakePodLister(test.pods), } nodeInfo := schedulercache.NewNodeInfo(podsOnNode...) nodeInfo.SetNode(&node) nodeInfoMap := map[string]*schedulercache.NodeInfo{node.Name: nodeInfo} var meta interface{} = nil if !test.nometa { meta = PredicateMetadata(test.pod, nodeInfoMap) } fits, reasons, err := testFit.InterPodAffinityMatches(test.pod, meta, nodeInfo) if err != nil { t.Errorf("%s: unexpected error %v", test.test, err) } if !fits && !reflect.DeepEqual(reasons, affinityExpectedFailureReasons) { t.Errorf("%s: unexpected failure reasons: %v", test.test, reasons) } affinity := test.pod.Spec.Affinity if affinity != nil && affinity.NodeAffinity != nil { nodeInfo := schedulercache.NewNodeInfo() nodeInfo.SetNode(&node) nodeInfoMap := map[string]*schedulercache.NodeInfo{node.Name: nodeInfo} fits2, reasons, err := PodMatchNodeSelector(test.pod, PredicateMetadata(test.pod, nodeInfoMap), nodeInfo) if err != nil { t.Errorf("%s: unexpected error: %v", test.test, err) } if !fits2 && !reflect.DeepEqual(reasons, selectorExpectedFailureReasons) { t.Errorf("%s: unexpected failure reasons: %v, want: %v", test.test, reasons, selectorExpectedFailureReasons) } fits = fits && fits2 } if fits != test.fits[node.Name] { t.Errorf("%s: expected %v for %s got %v", test.test, test.fits[node.Name], node.Name, fits) } } ``` </details> Leave a reaction on this issue to contribute to the project by classifying this instance as a **Bug** :-1:, **Mitigated** :+1:, or **Desirable Behavior** :rocket: See the descriptions of the classifications [here](https://github.com/github-vet/rangeclosure-findings#how-can-i-help) for more information. commit ID: 7146561e54142d4f986daee0206336ebee3ceb18
1.0
giantswarm/kvm-operator-node-controller: vendor/k8s.io/kubernetes/plugin/pkg/scheduler/algorithm/predicates/predicates_test.go; 48 LoC - Found a possible issue in [giantswarm/kvm-operator-node-controller](https://www.github.com/giantswarm/kvm-operator-node-controller) at [vendor/k8s.io/kubernetes/plugin/pkg/scheduler/algorithm/predicates/predicates_test.go](https://github.com/giantswarm/kvm-operator-node-controller/blob/7146561e54142d4f986daee0206336ebee3ceb18/vendor/k8s.io/kubernetes/plugin/pkg/scheduler/algorithm/predicates/predicates_test.go#L3071-L3118) Below is the message reported by the analyzer for this snippet of code. Beware that the analyzer only reports the first issue it finds, so please do not limit your consideration to the contents of the below message. > function call which takes a reference to node at line 3084 may start a goroutine [Click here to see the code in its original context.](https://github.com/giantswarm/kvm-operator-node-controller/blob/7146561e54142d4f986daee0206336ebee3ceb18/vendor/k8s.io/kubernetes/plugin/pkg/scheduler/algorithm/predicates/predicates_test.go#L3071-L3118) <details> <summary>Click here to show the 48 line(s) of Go which triggered the analyzer.</summary> ```go for _, node := range test.nodes { var podsOnNode []*v1.Pod for _, pod := range test.pods { if pod.Spec.NodeName == node.Name { podsOnNode = append(podsOnNode, pod) } } testFit := PodAffinityChecker{ info: nodeListInfo, podLister: schedulertesting.FakePodLister(test.pods), } nodeInfo := schedulercache.NewNodeInfo(podsOnNode...) nodeInfo.SetNode(&node) nodeInfoMap := map[string]*schedulercache.NodeInfo{node.Name: nodeInfo} var meta interface{} = nil if !test.nometa { meta = PredicateMetadata(test.pod, nodeInfoMap) } fits, reasons, err := testFit.InterPodAffinityMatches(test.pod, meta, nodeInfo) if err != nil { t.Errorf("%s: unexpected error %v", test.test, err) } if !fits && !reflect.DeepEqual(reasons, affinityExpectedFailureReasons) { t.Errorf("%s: unexpected failure reasons: %v", test.test, reasons) } affinity := test.pod.Spec.Affinity if affinity != nil && affinity.NodeAffinity != nil { nodeInfo := schedulercache.NewNodeInfo() nodeInfo.SetNode(&node) nodeInfoMap := map[string]*schedulercache.NodeInfo{node.Name: nodeInfo} fits2, reasons, err := PodMatchNodeSelector(test.pod, PredicateMetadata(test.pod, nodeInfoMap), nodeInfo) if err != nil { t.Errorf("%s: unexpected error: %v", test.test, err) } if !fits2 && !reflect.DeepEqual(reasons, selectorExpectedFailureReasons) { t.Errorf("%s: unexpected failure reasons: %v, want: %v", test.test, reasons, selectorExpectedFailureReasons) } fits = fits && fits2 } if fits != test.fits[node.Name] { t.Errorf("%s: expected %v for %s got %v", test.test, test.fits[node.Name], node.Name, fits) } } ``` </details> Leave a reaction on this issue to contribute to the project by classifying this instance as a **Bug** :-1:, **Mitigated** :+1:, or **Desirable Behavior** :rocket: See the descriptions of the classifications [here](https://github.com/github-vet/rangeclosure-findings#how-can-i-help) for more information. commit ID: 7146561e54142d4f986daee0206336ebee3ceb18
non_comp
giantswarm kvm operator node controller vendor io kubernetes plugin pkg scheduler algorithm predicates predicates test go loc found a possible issue in at below is the message reported by the analyzer for this snippet of code beware that the analyzer only reports the first issue it finds so please do not limit your consideration to the contents of the below message function call which takes a reference to node at line may start a goroutine click here to show the line s of go which triggered the analyzer go for node range test nodes var podsonnode pod for pod range test pods if pod spec nodename node name podsonnode append podsonnode pod testfit podaffinitychecker info nodelistinfo podlister schedulertesting fakepodlister test pods nodeinfo schedulercache newnodeinfo podsonnode nodeinfo setnode node nodeinfomap map schedulercache nodeinfo node name nodeinfo var meta interface nil if test nometa meta predicatemetadata test pod nodeinfomap fits reasons err testfit interpodaffinitymatches test pod meta nodeinfo if err nil t errorf s unexpected error v test test err if fits reflect deepequal reasons affinityexpectedfailurereasons t errorf s unexpected failure reasons v test test reasons affinity test pod spec affinity if affinity nil affinity nodeaffinity nil nodeinfo schedulercache newnodeinfo nodeinfo setnode node nodeinfomap map schedulercache nodeinfo node name nodeinfo reasons err podmatchnodeselector test pod predicatemetadata test pod nodeinfomap nodeinfo if err nil t errorf s unexpected error v test test err if reflect deepequal reasons selectorexpectedfailurereasons t errorf s unexpected failure reasons v want v test test reasons selectorexpectedfailurereasons fits fits if fits test fits t errorf s expected v for s got v test test test fits node name fits leave a reaction on this issue to contribute to the project by classifying this instance as a bug mitigated or desirable behavior rocket see the descriptions of the classifications for more information commit id
0
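The analyzer finding in the record above is an instance of Go's classic range-variable aliasing hazard. A minimal, self-contained sketch (the `collectPtrs` helper is hypothetical, not code from the flagged repository): before Go 1.22, a `range` loop reuses a single loop variable, so taking its address on every iteration yields pointers that all alias the same memory, and a goroutine or retained pointer then observes whatever element the loop reached last.

```go
package main

import "fmt"

// collectPtrs takes the address of the range variable once per iteration.
// The explicit per-iteration copy makes this safe on all Go versions;
// without it, on Go versions before 1.22 every appended pointer would
// alias one variable and end up pointing at the last element -- the
// pattern the vet tool warns about when such a pointer escapes into a
// goroutine.
func collectPtrs(nodes []string) []*string {
	var ptrs []*string
	for _, node := range nodes {
		node := node // per-iteration copy; redundant on Go 1.22+, harmless
		ptrs = append(ptrs, &node)
	}
	return ptrs
}

func main() {
	for _, p := range collectPtrs([]string{"a", "b", "c"}) {
		fmt.Println(*p)
	}
}
```

Since Go 1.22 the loop variable is scoped per iteration by default, so many findings of this class are version-dependent.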
628,043
19,960,303,489
IssuesEvent
2022-01-28 07:32:01
ut-issl/c2a-core
https://api.github.com/repos/ut-issl/c2a-core
closed
Make TLC (Time Line Command) executable at absolute times such as UTC
enhancement priority::high WINGS
## Overview Make TLC (Time Line Command) executable at absolute times such as UTC ## Details - In OBC_TL.Cmd_HOGE, allow the time argument to be given as an absolute time such as UTC instead of ti. - The existing OBC_TL.Cmd_HOGE will be kept, so this will be added as a new command. - Naming? OBC_ATL (absolute tl)?, OBC_UTL (utc tl)? - Note that this is strictly an alias-like feature - Currently, OBC_TL.Cmd_HOGE is received as the TCP_EXEC_TYPE TCP_CMD_EXEC_TYPE_TL, which corresponds to the CCP_EXEC_TYPE CCP_EXEC_TYPE_TL0 and is expanded into TL0. - What is built here is accepted as a different TCP_EXEC_TYPE at the command receiving section (https://github.com/ut-issl/c2a-core/blob/31dbe6e8c347252b55b6fc63ef7c11016b75e9b5/CmdTlm/packet_handler.c#L77), but should ultimately be converted to CCP_EXEC_TYPE_TL0 (that is, routed to https://github.com/ut-issl/c2a-core/blob/31dbe6e8c347252b55b6fc63ef7c11016b75e9b5/CmdTlm/packet_handler.c#L94). - In other words, the argument is an absolute time such as UTC, but after it is converted to TI in the PacketHandler (done with an ObcTime function), it is converted into a TL0 command and registered in the TL0 queue. - Accordingly, a type for this Cmd needs to be added to TCP_CMD_EXEC_TYPE. - For TCP_CMD_EXEC_TYPE and related items, see the separate documentation - Other notes - Supporting the new TCP_CMD_EXEC_TYPE, command files, and so on requires modifying WINGS. - This should be coordinated with WINGS early. The WINGS side should be fine to release even before the C2A implementation is finished; the earlier it is released, the easier verification with SILS and the like becomes. - ObcTime (https://github.com/ut-issl/c2a-core/blob/31dbe6e8c347252b55b6fc63ef7c11016b75e9b5/System/TimeManager/obc_time.h) could also be reorganized at some point - since fairly dirty code remains there - its types and public functions could be tidied up as well ## Close condition Once the implementation is done ## Remarks - Child issue of https://github.com/ut-issl/c2a-core/issues/63
1.0
Make TLC (Time Line Command) executable at absolute times such as UTC - ## Overview Make TLC (Time Line Command) executable at absolute times such as UTC ## Details - In OBC_TL.Cmd_HOGE, allow the time argument to be given as an absolute time such as UTC instead of ti. - The existing OBC_TL.Cmd_HOGE will be kept, so this will be added as a new command. - Naming? OBC_ATL (absolute tl)?, OBC_UTL (utc tl)? - Note that this is strictly an alias-like feature - Currently, OBC_TL.Cmd_HOGE is received as the TCP_EXEC_TYPE TCP_CMD_EXEC_TYPE_TL, which corresponds to the CCP_EXEC_TYPE CCP_EXEC_TYPE_TL0 and is expanded into TL0. - What is built here is accepted as a different TCP_EXEC_TYPE at the command receiving section (https://github.com/ut-issl/c2a-core/blob/31dbe6e8c347252b55b6fc63ef7c11016b75e9b5/CmdTlm/packet_handler.c#L77), but should ultimately be converted to CCP_EXEC_TYPE_TL0 (that is, routed to https://github.com/ut-issl/c2a-core/blob/31dbe6e8c347252b55b6fc63ef7c11016b75e9b5/CmdTlm/packet_handler.c#L94). - In other words, the argument is an absolute time such as UTC, but after it is converted to TI in the PacketHandler (done with an ObcTime function), it is converted into a TL0 command and registered in the TL0 queue. - Accordingly, a type for this Cmd needs to be added to TCP_CMD_EXEC_TYPE. - For TCP_CMD_EXEC_TYPE and related items, see the separate documentation - Other notes - Supporting the new TCP_CMD_EXEC_TYPE, command files, and so on requires modifying WINGS. - This should be coordinated with WINGS early. The WINGS side should be fine to release even before the C2A implementation is finished; the earlier it is released, the easier verification with SILS and the like becomes. - ObcTime (https://github.com/ut-issl/c2a-core/blob/31dbe6e8c347252b55b6fc63ef7c11016b75e9b5/System/TimeManager/obc_time.h) could also be reorganized at some point - since fairly dirty code remains there - its types and public functions could be tidied up as well ## Close condition Once the implementation is done ## Remarks - Child issue of https://github.com/ut-issl/c2a-core/issues/63
non_comp
make tlc time line command executable at absolute times such as utc overview make tlc time line command executable at absolute times such as utc details in obc tl cmd hoge allow the time argument to be given as an absolute time such as utc instead of ti the existing obc tl cmd hoge will be kept so this will be added as a new command naming obc atl absolute tl obc utl utc tl note that this is strictly an alias like feature currently obc tl cmd hoge is received as the tcp exec type tcp cmd exec type tl which corresponds to the ccp exec type and is expanded what is built here is accepted as a different tcp exec type at the command receiving section but should ultimately be converted to ccp exec type that is routed in other words the argument is an absolute time such as utc but after it is converted to ti in the packethandler done with an obctime function it is converted into a command and registered in the queue accordingly a type for this cmd needs to be added to tcp cmd exec type for tcp cmd exec type and related items see the separate documentation other notes supporting the new tcp cmd exec type command files and so on requires modifying wings this should be coordinated with wings early the wings side should be fine to release even before the implementation is finished the earlier it is released the easier verification with sils and the like becomes obctime could also be reorganized at some point since fairly dirty code remains there its types and public functions could be tidied up as well close condition once the implementation is done remarks child issue of
0
164,974
13,963,585,528
IssuesEvent
2020-10-25 14:45:51
olifolkerd/tabulator
https://api.github.com/repos/olifolkerd/tabulator
closed
Ambiguous explanation of .selectRow arguments
Documentation
**Website Page** http://tabulator.info/docs/4.8/select#manage **Describe the issue** The documentation linked above says: > To select a specific row you can pass any of the standard row component look up options into the first argument of the function. Phrase "row component look up" is a link to http://tabulator.info/docs/4.8/components#lookup, where it is explained that values used for looking up a `Row` are: 1. A `RowComponent` 2. The index value of the row 3. DOM node of the row While I did not check 1. and 3., I was badly misled by 2. I understood this as a regular `Array` index, which it obviously is not. It was only after I read the selection module source code that I discovered that for this to work, my data MUST have a field called `id`. Alternatively I can configure in the Tabulator config the name of the index field, by adding a string type property `index` to the configuration object. None of this information can be found at either of the two links I pasted above. I was also further misled by the example found at http://tabulator.info/docs/4.8/select#manage, which showed selecting rows by feeding the `.selectRow` method with an `Array` of numbers. I think this information should be added to the docs, along with a correct example showing that the index field can be configured. In order to promote good practices, the index field can contain UUIDs (the only reasonable substitute for numbers, when it comes to indexing table data). If the above information is already included somewhere else in the docs, there should be some kind of a pointer to that "somewhere" either in docs about select or in docs regarding component lookup.
1.0
Ambiguous explanation of .selectRow arguments - **Website Page** http://tabulator.info/docs/4.8/select#manage **Describe the issue** The documentation linked above says: > To select a specific row you can pass any of the standard row component look up options into the first argument of the function. Phrase "row component look up" is a link to http://tabulator.info/docs/4.8/components#lookup, where it is explained that values used for looking up a `Row` are: 1. A `RowComponent` 2. The index value of the row 3. DOM node of the row While I did not check 1. and 3., I was badly misled by 2. I understood this as a regular `Array` index, which it obviously is not. It was only after I read the selection module source code that I discovered that for this to work, my data MUST have a field called `id`. Alternatively I can configure in the Tabulator config the name of the index field, by adding a string type property `index` to the configuration object. None of this information can be found at either of the two links I pasted above. I was also further misled by the example found at http://tabulator.info/docs/4.8/select#manage, which showed selecting rows by feeding the `.selectRow` method with an `Array` of numbers. I think this information should be added to the docs, along with a correct example showing that the index field can be configured. In order to promote good practices, the index field can contain UUIDs (the only reasonable substitute for numbers, when it comes to indexing table data). If the above information is already included somewhere else in the docs, there should be some kind of a pointer to that "somewhere" either in docs about select or in docs regarding component lookup.
non_comp
ambiguous explanation of selectrow arguments website page describe the issue the documentation linked above says to select a specific row you can pass any of the standard row component look up options into the first argument of the function phrase row component look up is a link to where it is explained that values used for looking up a row are a rowcomponent the index value of the row dom node of the row while i did not check and i was badly misled by i understood this as a regular array index which it obviously is not it was only after i read the selection module source code that i discovered that for this to work my data must have a field called id alternatively i can configure in the tabulator config the name of the index field by adding a string type property index to the configuration object none of this information can be found at either of the two links i pasted above i was also further misled by the example found at which showed selecting rows by feeding the selectrow method with an array of numbers i think this information should be added to the docs along with a correct example showing that the index field can be configured in order to promote good practices the index field can contain uuids the only reasonable substitute for numbers when it comes to indexing table data if the above information is already included somewhere else in the docs there should be some kind of a pointer to that somewhere either in docs about select or in docs regarding component lookup
0
105,668
4,240,142,341
IssuesEvent
2016-07-06 12:23:59
kubernetes/kubernetes
https://api.github.com/repos/kubernetes/kubernetes
closed
[k8s.io] Kubectl client [k8s.io] Kubectl run --rm job should create a job from an image, then delete the job [Conformance] {Kubernetes e2e suite}
kind/flake priority/P2
https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/kubernetes-e2e-gce/19436/ Failed: [k8s.io] Kubectl client [k8s.io] Kubectl run --rm job should create a job from an image, then delete the job [Conformance] {Kubernetes e2e suite} ``` /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:1066 Expected error: <*errors.errorString | 0xc820552470>: { s: "Error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://104.197.156.152 --kubeconfig=/workspace/.kube/config --namespace= run e2e-test-rm-busybox-job --image=gcr.io/google_containers/busybox:1.24 --rm=true --generator=job/v1 --restart=OnFailure --attach=true --stdin -- sh -c cat && echo 'stdin closed'] [] 0xc82081eca0 failed to find client for version v1: Unable to connect to the server: net/http: TLS handshake timeout\n [] <nil> 0xc8208b8380 exit status 1 <nil> true [0xc820030660 0xc820030688 0xc820030698] [0xc820030660 0xc820030688 0xc820030698] [0xc820030668 0xc820030680 0xc820030690] [0xa9ae40 0xa9afa0 0xa9afa0] 0xc820f48300}:\nCommand stdout:\n\nstderr:\nfailed to find client for version v1: Unable to connect to the server: net/http: TLS handshake timeout\n\nerror:\nexit status 1\n", } Error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://104.197.156.152 --kubeconfig=/workspace/.kube/config --namespace= run e2e-test-rm-busybox-job --image=gcr.io/google_containers/busybox:1.24 --rm=true --generator=job/v1 --restart=OnFailure --attach=true --stdin -- sh -c cat && echo 'stdin closed'] [] 0xc82081eca0 failed to find client for version v1: Unable to connect to the server: net/http: TLS handshake timeout [] <nil> 0xc8208b8380 exit status 1 <nil> true [0xc820030660 0xc820030688 0xc820030698] [0xc820030660 0xc820030688 0xc820030698] [0xc820030668 0xc820030680 0xc820030690] [0xa9ae40 0xa9afa0 0xa9afa0] 0xc820f48300}: Command stdout: stderr: failed to find client for version v1: Unable to connect to the server: net/http: TLS handshake timeout error: exit status 1 not to have occurred ``` Previous issues for this test: #26728
1.0
[k8s.io] Kubectl client [k8s.io] Kubectl run --rm job should create a job from an image, then delete the job [Conformance] {Kubernetes e2e suite} - https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/kubernetes-e2e-gce/19436/ Failed: [k8s.io] Kubectl client [k8s.io] Kubectl run --rm job should create a job from an image, then delete the job [Conformance] {Kubernetes e2e suite} ``` /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl.go:1066 Expected error: <*errors.errorString | 0xc820552470>: { s: "Error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://104.197.156.152 --kubeconfig=/workspace/.kube/config --namespace= run e2e-test-rm-busybox-job --image=gcr.io/google_containers/busybox:1.24 --rm=true --generator=job/v1 --restart=OnFailure --attach=true --stdin -- sh -c cat && echo 'stdin closed'] [] 0xc82081eca0 failed to find client for version v1: Unable to connect to the server: net/http: TLS handshake timeout\n [] <nil> 0xc8208b8380 exit status 1 <nil> true [0xc820030660 0xc820030688 0xc820030698] [0xc820030660 0xc820030688 0xc820030698] [0xc820030668 0xc820030680 0xc820030690] [0xa9ae40 0xa9afa0 0xa9afa0] 0xc820f48300}:\nCommand stdout:\n\nstderr:\nfailed to find client for version v1: Unable to connect to the server: net/http: TLS handshake timeout\n\nerror:\nexit status 1\n", } Error running &{/workspace/kubernetes/platforms/linux/amd64/kubectl [kubectl --server=https://104.197.156.152 --kubeconfig=/workspace/.kube/config --namespace= run e2e-test-rm-busybox-job --image=gcr.io/google_containers/busybox:1.24 --rm=true --generator=job/v1 --restart=OnFailure --attach=true --stdin -- sh -c cat && echo 'stdin closed'] [] 0xc82081eca0 failed to find client for version v1: Unable to connect to the server: net/http: TLS handshake timeout [] <nil> 0xc8208b8380 exit status 1 <nil> true [0xc820030660 0xc820030688 0xc820030698] [0xc820030660 0xc820030688 0xc820030698] [0xc820030668 0xc820030680 0xc820030690] [0xa9ae40 0xa9afa0 0xa9afa0] 0xc820f48300}: Command stdout: stderr: failed to find client for version v1: Unable to connect to the server: net/http: TLS handshake timeout error: exit status 1 not to have occurred ``` Previous issues for this test: #26728
non_comp
kubectl client kubectl run rm job should create a job from an image then delete the job kubernetes suite failed kubectl client kubectl run rm job should create a job from an image then delete the job kubernetes suite go src io kubernetes output dockerized go src io kubernetes test kubectl go expected error s error running workspace kubernetes platforms linux kubectl failed to find client for version unable to connect to the server net http tls handshake timeout n exit status true ncommand stdout n nstderr nfailed to find client for version unable to connect to the server net http tls handshake timeout n nerror nexit status n error running workspace kubernetes platforms linux kubectl failed to find client for version unable to connect to the server net http tls handshake timeout exit status true command stdout stderr failed to find client for version unable to connect to the server net http tls handshake timeout error exit status not to have occurred previous issues for this test
0
11,193
13,199,987,615
IssuesEvent
2020-08-14 07:17:10
jenkinsci/dark-theme-plugin
https://api.github.com/repos/jenkinsci/dark-theme-plugin
closed
Pipeline stage view difficult to read
plugin-compatibility
The pipeline stage is difficult to read when the dark theme is applied as some of the text is fuzzy and it looks like the dark theme hasn't been applied to the date and time; <img width="904" alt="Screen Shot 2020-06-06 at 12 24 11" src="https://user-images.githubusercontent.com/11042532/83949317-0ae5a300-a7f1-11ea-8533-8ccfa774209c.png"> I am unsure if this is an issue with the pipeline stage plugin or the dark theme.
True
Pipeline stage view difficult to read - The pipeline stage is difficult to read when the dark theme is applied as some of the text is fuzzy and it looks like the dark theme hasn't been applied to the date and time; <img width="904" alt="Screen Shot 2020-06-06 at 12 24 11" src="https://user-images.githubusercontent.com/11042532/83949317-0ae5a300-a7f1-11ea-8533-8ccfa774209c.png"> I am unsure if this is an issue with the pipeline stage plugin or the dark theme.
comp
pipeline stage view difficult to read the pipeline stage is difficult to read when the dark theme is applied as some of the text is fuzzy and it looks like the dark theme hasn t been applied to the date and time img width alt screen shot at src i am unsure if this is an issue with the pipeline stage plugin or the dark theme
1
11,767
13,882,336,645
IssuesEvent
2020-10-18 06:21:25
TerraformersMC/Terrestria
https://api.github.com/repos/TerraformersMC/Terrestria
opened
Conflict with Wild Explorer
mod compatibility
It looks like Wild Explorer is patching TreeFeature to enable growing palm trees on sand: https://github.com/DawnTeamMC/WildExplorer/blob/1.16/src/main/java/com/hugman/wild_explorer/mixin/TreeFeatureMixin.java Which, as it happens, is exactly what we do as well: https://github.com/TerraformersMC/Terrestria/blob/1.16.3/src/main/java/com/terraformersmc/terrestria/mixin/MixinTreeFeature.java It would probably be best to create a tiny little compatibility module that handles the injection, eliminating the redirect conflict. Perhaps a Terraform module?
True
Conflict with Wild Explorer - It looks like Wild Explorer is patching TreeFeature to enable growing palm trees on sand: https://github.com/DawnTeamMC/WildExplorer/blob/1.16/src/main/java/com/hugman/wild_explorer/mixin/TreeFeatureMixin.java Which, as it happens, is exactly what we do as well: https://github.com/TerraformersMC/Terrestria/blob/1.16.3/src/main/java/com/terraformersmc/terrestria/mixin/MixinTreeFeature.java It would probably be best to create a tiny little compatibility module that handles the injection, eliminating the redirect conflict. Perhaps a Terraform module?
comp
conflict with wild explorer it looks like wild explorer is patching treefeature to enable growing palm trees on sand which as it happens is exactly what we do as well it would probably be best to create a tiny little compatibility module that handles the injection eliminating the redirect conflict perhaps a terraform module
1
21,998
3,587,952,557
IssuesEvent
2016-01-30 17:55:05
bigbluebutton/bigbluebutton
https://api.github.com/repos/bigbluebutton/bigbluebutton
closed
Error in bigbluebutton.log: Conference NNNNN not found on clean install
Accepted Apps Defect Low Priority
Originally reported on Google Code with ID 661 ``` What steps will reproduce the problem? 1. Install BigBlueButton 0.71-beta 2. sudo bbb-conf --clean 3. Join the Demo Meeting What is the expected output? What do you see instead? The output from sudo bbb-conf --debug shows firstuser@clean-vm-20101017-00:~$ sudo bbb-conf --debug -- ERRORS found in /usr/share/red5/log/* -- /usr/share/red5/log/bigbluebutton.log:2010-10-17 01:15:08,320 [NioProcessor-1] ERROR o.b.w.v.f.a.PopulateRoomCommand - Not XML: [Conference 78989 not found] Everthing seems to work, so not sure if this error was a real error or not. Please use labels and text to provide additional information. ``` Reported by `ffdixon` on 2010-10-17 01:19:17
1.0
Error in bigbluebutton.log: Conference NNNNN not found on clean install - Originally reported on Google Code with ID 661 ``` What steps will reproduce the problem? 1. Install BigBlueButton 0.71-beta 2. sudo bbb-conf --clean 3. Join the Demo Meeting What is the expected output? What do you see instead? The output from sudo bbb-conf --debug shows firstuser@clean-vm-20101017-00:~$ sudo bbb-conf --debug -- ERRORS found in /usr/share/red5/log/* -- /usr/share/red5/log/bigbluebutton.log:2010-10-17 01:15:08,320 [NioProcessor-1] ERROR o.b.w.v.f.a.PopulateRoomCommand - Not XML: [Conference 78989 not found] Everthing seems to work, so not sure if this error was a real error or not. Please use labels and text to provide additional information. ``` Reported by `ffdixon` on 2010-10-17 01:19:17
non_comp
error in bigbluebutton log conference nnnnn not found on clean install originally reported on google code with id what steps will reproduce the problem install bigbluebutton beta sudo bbb conf clean join the demo meeting what is the expected output what do you see instead the output from sudo bbb conf debug shows firstuser clean vm sudo bbb conf debug errors found in usr share log usr share log bigbluebutton log error o b w v f a populateroomcommand not xml everthing seems to work so not sure if this error was a real error or not please use labels and text to provide additional information reported by ffdixon on
0
37,248
8,243,896,991
IssuesEvent
2018-09-11 03:00:20
triplea-game/triplea
https://api.github.com/repos/triplea-game/triplea
reopened
Illegal reflective access in Java 9 build
category: code improvement ice box - close and revisit later
The Java 9 build on Travis is reporting a few illegal reflective access violations when running the tests. I may have missed some, but the ones I saw are reproduced below: ``` WARNING: An illegal reflective access operation has occurred WARNING: Illegal reflective access by org.mockito.internal.util.reflection.AccessibilityChanger (file:/home/travis/.gradle/caches/modules-2/files-2.1/org.mockito/mockito-core/2.13.0/8e372943974e4a121fb8617baced8ebfe46d54f0/mockito-core-2.13.0.jar) to field java.util.logging.LogManager.props WARNING: Please consider reporting this to the maintainers of org.mockito.internal.util.reflection.AccessibilityChanger WARNING: Use --illegal-access=warn to enable warnings of further illegal reflective access operations WARNING: All illegal access operations will be denied in a future release ``` ``` WARNING: An illegal reflective access operation has occurred WARNING: Illegal reflective access by org.postgresql.jdbc.TimestampUtils (file:/home/travis/.gradle/caches/modules-2/files-2.1/org.postgresql/postgresql/42.1.4/1c7788d16b67d51f2f38ae99e474ece968bf715a/postgresql-42.1.4.jar) to field java.util.TimeZone.defaultTimeZone WARNING: Please consider reporting this to the maintainers of org.postgresql.jdbc.TimestampUtils WARNING: Use --illegal-access=warn to enable warnings of further illegal reflective access operations WARNING: All illegal access operations will be denied in a future release ``` We should investigate if these issues have already been reported to the Mockito and Postgres projects, and, if not, open new issues, as necessary. The suggestion to use `--illegal-access=warn` may mean that only the first such violation is reported in each process. (The Postgres warning is from the integration tests, while the Mockito warning is from the unit tests, so they are indeed run in separate processes.) We may want to enable that option to see if there are any others. <hr> External dependencies with illegal reflective access: * [ ] Gradle * [ ] LoggingConfigurationTest * [x] Postgres JDBC driver (#2866)
1.0
Illegal reflective access in Java 9 build - The Java 9 build on Travis is reporting a few illegal reflective access violations when running the tests. I may have missed some, but the ones I saw are reproduced below: ``` WARNING: An illegal reflective access operation has occurred WARNING: Illegal reflective access by org.mockito.internal.util.reflection.AccessibilityChanger (file:/home/travis/.gradle/caches/modules-2/files-2.1/org.mockito/mockito-core/2.13.0/8e372943974e4a121fb8617baced8ebfe46d54f0/mockito-core-2.13.0.jar) to field java.util.logging.LogManager.props WARNING: Please consider reporting this to the maintainers of org.mockito.internal.util.reflection.AccessibilityChanger WARNING: Use --illegal-access=warn to enable warnings of further illegal reflective access operations WARNING: All illegal access operations will be denied in a future release ``` ``` WARNING: An illegal reflective access operation has occurred WARNING: Illegal reflective access by org.postgresql.jdbc.TimestampUtils (file:/home/travis/.gradle/caches/modules-2/files-2.1/org.postgresql/postgresql/42.1.4/1c7788d16b67d51f2f38ae99e474ece968bf715a/postgresql-42.1.4.jar) to field java.util.TimeZone.defaultTimeZone WARNING: Please consider reporting this to the maintainers of org.postgresql.jdbc.TimestampUtils WARNING: Use --illegal-access=warn to enable warnings of further illegal reflective access operations WARNING: All illegal access operations will be denied in a future release ``` We should investigate if these issues have already been reported to the Mockito and Postgres projects, and, if not, open new issues, as necessary. The suggestion to use `--illegal-access=warn` may mean that only the first such violation is reported in each process. (The Postgres warning is from the integration tests, while the Mockito warning is from the unit tests, so they are indeed run in separate processes.) We may want to enable that option to see if there are any others. <hr> External dependencies with illegal reflective access: * [ ] Gradle * [ ] LoggingConfigurationTest * [x] Postgres JDBC driver (#2866)
non_comp
illegal reflective access in java build the java build on travis is reporting a few illegal reflective access violations when running the tests i may have missed some but the ones i saw are reproduced below warning an illegal reflective access operation has occurred warning illegal reflective access by org mockito internal util reflection accessibilitychanger file home travis gradle caches modules files org mockito mockito core mockito core jar to field java util logging logmanager props warning please consider reporting this to the maintainers of org mockito internal util reflection accessibilitychanger warning use illegal access warn to enable warnings of further illegal reflective access operations warning all illegal access operations will be denied in a future release warning an illegal reflective access operation has occurred warning illegal reflective access by org postgresql jdbc timestamputils file home travis gradle caches modules files org postgresql postgresql postgresql jar to field java util timezone defaulttimezone warning please consider reporting this to the maintainers of org postgresql jdbc timestamputils warning use illegal access warn to enable warnings of further illegal reflective access operations warning all illegal access operations will be denied in a future release we should investigate if these issues have already been reported to the mockito and postgres projects and if not open new issues as necessary the suggestion to use illegal access warn may mean that only the first such violation is reported in each process the postgres warning is from the integration tests while the mockito warning is from the unit tests so they are indeed run in separate processes we may want to enable that option to see if there are any others external dependencies with illegal reflective access gradle loggingconfigurationtest postgres jdbc driver
0