Dataset schema (column name, dtype, value range or class count):

Unnamed: 0     int64          0 .. 832k
id             float64        2.49B .. 32.1B
type           stringclasses  1 value
created_at     stringlengths  19 .. 19
repo           stringlengths  5 .. 112
repo_url       stringlengths  34 .. 141
action         stringclasses  3 values
title          stringlengths  1 .. 757
labels         stringlengths  4 .. 664
body           stringlengths  3 .. 261k
index          stringclasses  10 values
text_combine   stringlengths  96 .. 261k
label          stringclasses  2 values
text           stringlengths  96 .. 232k
binary_label   int64          0 .. 1
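The flattened rows below follow this schema in column order, so they can be zipped back into structured records. A minimal stdlib-only sketch, where the field order mirrors the schema above and the sample values are copied from the first record below (long text fields elided with "..."):

```python
# Column order exactly as in the schema table above.
FIELDS = [
    "Unnamed: 0", "id", "type", "created_at", "repo", "repo_url", "action",
    "title", "labels", "body", "index", "text_combine", "label", "text",
    "binary_label",
]

def to_record(values):
    """Zip one flattened row of field values into a dict keyed by column name."""
    if len(values) != len(FIELDS):
        raise ValueError(f"expected {len(FIELDS)} fields, got {len(values)}")
    return dict(zip(FIELDS, values))

# Sample values copied from record 1 below; "..." stands in for the long
# body/text fields, which are quoted in full in the record itself.
sample = to_record([
    "35,103", "7,893,191,917", "IssuesEvent", "2018-06-28 17:13:51",
    "dotnet/coreclr", "https://api.github.com/repos/dotnet/coreclr",
    "opened", "Bounds checks on array/span not eliminated after length check",
    "area-CodeGen", "...", "1.0", "...", "non_defect", "...", "0",
])
```

Note that `label` ("defect"/"non_defect") and `binary_label` (1/0) encode the same target, matching the 2-class / {0, 1} pairing in the schema.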
Record 1:
Unnamed: 0: 35,103
id: 7,893,191,917
type: IssuesEvent
created_at: 2018-06-28 17:13:51
repo: dotnet/coreclr
repo_url: https://api.github.com/repos/dotnet/coreclr
action: opened
title: Bounds checks on array/span not eliminated after length check
labels: area-CodeGen
body:
I've got code similar to the following repro: ```C# using System; using System.Runtime.InteropServices; using System.Runtime.CompilerServices; public class C { public static void Main() => new C().TryFormat(new char[4], out _); public bool TryFormat(Span<char> dst, out int charsWritten) { if (dst.Length >= 4) { dst[0] = 't'; dst[1] = 'r'; dst[2] = 'u'; dst[3] = 'e'; charsWritten = 4; return true; } charsWritten = 0; return false; } } ``` I was hoping/expecting the bounds checks on each of those four writes to dst to be eliminated, but they’re not: ``` G_M404_IG02: 488B02 mov rax, bword ptr [rdx] 8B5208 mov edx, dword ptr [rdx+8] 83FA04 cmp edx, 4 7C3C jl SHORT G_M404_IG04 83FA00 cmp edx, 0 7641 jbe SHORT G_M404_IG06 66C7007400 mov word ptr [rax], 116 83FA01 cmp edx, 1 7637 jbe SHORT G_M404_IG06 66C740027200 mov word ptr [rax+2], 114 83FA02 cmp edx, 2 762C jbe SHORT G_M404_IG06 66C740047500 mov word ptr [rax+4], 117 83FA03 cmp edx, 3 7621 jbe SHORT G_M404_IG06 66C740066500 mov word ptr [rax+6], 101 41C70004000000 mov dword ptr [r8], 4 B801000000 mov eax, 1 ``` To work around that, I can use Unsafe.Add and MemoryMarshal.GetReference, e.g. ```C# public bool TryFormat(Span<char> dst, out int charsWritten) { if (dst.Length >= 4) { ref char c = ref MemoryMarshal.GetReference(dst); c = 't'; Unsafe.Add(ref c, 1) = 'r'; Unsafe.Add(ref c, 2) = 'u'; Unsafe.Add(ref c, 3) = 'e'; charsWritten = 4; return true; } charsWritten = 0; return false; } ``` in which case I get the better: ``` G_M408_IG02: 837A0804 cmp dword ptr [rdx+8], 4 7C27 jl SHORT G_M408_IG04 488B02 mov rax, bword ptr [rdx] 66C7007400 mov word ptr [rax], 116 66C740027200 mov word ptr [rax+2], 114 66C740047500 mov word ptr [rax+4], 117 66C740066500 mov word ptr [rax+6], 101 41C70004000000 mov dword ptr [r8], 4 B801000000 mov eax, 1 ``` but it’d be nice not to have to use Unsafe for cases like this. cc: @AndyAyersMS Related: https://github.com/dotnet/coreclr/issues/12639
index: 1.0
text_combine:
Bounds checks on array/span not eliminated after length check - I've got code similar to the following repro: ```C# using System; using System.Runtime.InteropServices; using System.Runtime.CompilerServices; public class C { public static void Main() => new C().TryFormat(new char[4], out _); public bool TryFormat(Span<char> dst, out int charsWritten) { if (dst.Length >= 4) { dst[0] = 't'; dst[1] = 'r'; dst[2] = 'u'; dst[3] = 'e'; charsWritten = 4; return true; } charsWritten = 0; return false; } } ``` I was hoping/expecting the bounds checks on each of those four writes to dst to be eliminated, but they’re not: ``` G_M404_IG02: 488B02 mov rax, bword ptr [rdx] 8B5208 mov edx, dword ptr [rdx+8] 83FA04 cmp edx, 4 7C3C jl SHORT G_M404_IG04 83FA00 cmp edx, 0 7641 jbe SHORT G_M404_IG06 66C7007400 mov word ptr [rax], 116 83FA01 cmp edx, 1 7637 jbe SHORT G_M404_IG06 66C740027200 mov word ptr [rax+2], 114 83FA02 cmp edx, 2 762C jbe SHORT G_M404_IG06 66C740047500 mov word ptr [rax+4], 117 83FA03 cmp edx, 3 7621 jbe SHORT G_M404_IG06 66C740066500 mov word ptr [rax+6], 101 41C70004000000 mov dword ptr [r8], 4 B801000000 mov eax, 1 ``` To work around that, I can use Unsafe.Add and MemoryMarshal.GetReference, e.g. ```C# public bool TryFormat(Span<char> dst, out int charsWritten) { if (dst.Length >= 4) { ref char c = ref MemoryMarshal.GetReference(dst); c = 't'; Unsafe.Add(ref c, 1) = 'r'; Unsafe.Add(ref c, 2) = 'u'; Unsafe.Add(ref c, 3) = 'e'; charsWritten = 4; return true; } charsWritten = 0; return false; } ``` in which case I get the better: ``` G_M408_IG02: 837A0804 cmp dword ptr [rdx+8], 4 7C27 jl SHORT G_M408_IG04 488B02 mov rax, bword ptr [rdx] 66C7007400 mov word ptr [rax], 116 66C740027200 mov word ptr [rax+2], 114 66C740047500 mov word ptr [rax+4], 117 66C740066500 mov word ptr [rax+6], 101 41C70004000000 mov dword ptr [r8], 4 B801000000 mov eax, 1 ``` but it’d be nice not to have to use Unsafe for cases like this. 
cc: @AndyAyersMS Related: https://github.com/dotnet/coreclr/issues/12639
label: non_defect
text:
bounds checks on array span not eliminated after length check i ve got code similar to the following repro c using system using system runtime interopservices using system runtime compilerservices public class c public static void main new c tryformat new char out public bool tryformat span dst out int charswritten if dst length dst t dst r dst u dst e charswritten return true charswritten return false i was hoping expecting the bounds checks on each of those four writes to dst to be eliminated but they’re not g mov rax bword ptr mov edx dword ptr cmp edx jl short g cmp edx jbe short g mov word ptr cmp edx jbe short g mov word ptr cmp edx jbe short g mov word ptr cmp edx jbe short g mov word ptr mov dword ptr mov eax to work around that i can use unsafe add and memorymarshal getreference e g c public bool tryformat span dst out int charswritten if dst length ref char c ref memorymarshal getreference dst c t unsafe add ref c r unsafe add ref c u unsafe add ref c e charswritten return true charswritten return false in which case i get the better g cmp dword ptr jl short g mov rax bword ptr mov word ptr mov word ptr mov word ptr mov word ptr mov dword ptr mov eax but it’d be nice not to have to use unsafe for cases like this cc andyayersms related
binary_label: 0
Record 2:
Unnamed: 0: 8,102
id: 2,611,452,468
type: IssuesEvent
created_at: 2015-02-27 05:00:12
repo: chrsmith/hedgewars
repo_url: https://api.github.com/repos/chrsmith/hedgewars
action: closed
title: When you choose to disable land objects, the girders aren't generated too.
labels: auto-migrated Priority-Medium Type-Defect
body:
``` What steps will reproduce the problem? 1. Enter game mode options and make a custom set. 2. Check "disable land objects" and make sure that "disable girders" is unchecked. 3. Start the game. What is the expected output? What do you see instead? The girders aren't generated. What version of the product are you using? On what operating system? 0.9.13 on Windows XP SP2 Please provide any additional information below. - ``` Original issue reported on code.google.com by `adibiaz...@gmail.com` on 7 Oct 2010 at 4:15
index: 1.0
text_combine:
When you choose to disable land objects, the girders aren't generated too. - ``` What steps will reproduce the problem? 1. Enter game mode options and make a custom set. 2. Check "disable land objects" and make sure that "disable girders" is unchecked. 3. Start the game. What is the expected output? What do you see instead? The girders aren't generated. What version of the product are you using? On what operating system? 0.9.13 on Windows XP SP2 Please provide any additional information below. - ``` Original issue reported on code.google.com by `adibiaz...@gmail.com` on 7 Oct 2010 at 4:15
label: defect
text:
when you choose to disable land objects the girders aren t generated too what steps will reproduce the problem enter game mode options and make a custom set check disable land objects and make sure that disable girders is unchecked start the game what is the expected output what do you see instead the girders aren t generated what version of the product are you using on what operating system on windows xp please provide any additional information below original issue reported on code google com by adibiaz gmail com on oct at
binary_label: 1
Record 3:
Unnamed: 0: 55,760
id: 11,463,295,033
type: IssuesEvent
created_at: 2020-02-07 15:44:50
repo: canonical-web-and-design/tutorials.ubuntu.com
repo_url: https://api.github.com/repos/canonical-web-and-design/tutorials.ubuntu.com
action: closed
title: Tutorial Wanted - install and configure LDAP
labels: Google Code In Tutorials Content Type: Tutorial Request
body:
This tutorial will cover the installation and configuration of an OpenLDAP server. As this will be a fairly advanced tutorial, it's safe to assume the reader will have good Linux server and systems experience and will understand the security and authentication requirements of running OpenLDAP. The following documentation may be helpful: https://help.ubuntu.com/lts/serverguide/openldap-server.html
index: 1.0
text_combine:
Tutorial Wanted - install and configure LDAP - This tutorial will cover the installation and configuration of an OpenLDAP server. As this will be a fairly advanced tutorial, it's safe to assume the reader will have good Linux server and systems experience and will understand the security and authentication requirements of running OpenLDAP. The following documentation may be helpful: https://help.ubuntu.com/lts/serverguide/openldap-server.html
label: non_defect
text:
tutorial wanted install and configure ldap this tutorial will cover the installation and configuration of an openldap server as this will be a fairly advanced tutorial it s safe to assume the reader will have good linux server and systems experience and will understand the security and authentication requirements of running openldap the following documentation may be helpful
binary_label: 0
Record 4:
Unnamed: 0: 119,145
id: 17,604,047,280
type: IssuesEvent
created_at: 2021-08-17 14:59:07
repo: Pio1006/ui
repo_url: https://api.github.com/repos/Pio1006/ui
action: opened
title: CVE-2018-11499 (High) detected in multiple libraries
labels: security vulnerability
body:
## CVE-2018-11499 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>opennmsopennms-source-26.0.0-1</b>, <b>opennmsopennms-source-26.0.0-1</b>, <b>opennmsopennms-source-26.0.0-1</b>, <b>opennmsopennms-source-26.0.0-1</b></p></summary> <p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> A use-after-free vulnerability exists in handle_error() in sass_context.cpp in LibSass 3.4.x and 3.5.x through 3.5.4 that could be leveraged to cause a denial of service (application crash) or possibly unspecified other impact. <p>Publish Date: 2018-05-26 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2018-11499>CVE-2018-11499</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.8</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: High - Integrity Impact: High - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. 
</p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Change files</p> <p>Origin: <a href="https://github.com/sass/libsass/commit/930857ce4938f64ce1c31463dbd19b1aa781a5f7">https://github.com/sass/libsass/commit/930857ce4938f64ce1c31463dbd19b1aa781a5f7</a></p> <p>Release Date: 2018-11-23</p> <p>Fix Resolution: Replace or update the following files: error_handling.cpp, error_handling.hpp, parser.cpp</p> </p> </details> <p></p> <!-- <REMEDIATE>{"isOpenPROnVulnerability":false,"isPackageBased":true,"isDefaultBranch":true,"packages":[],"baseBranches":["master"],"vulnerabilityIdentifier":"CVE-2018-11499","vulnerabilityDetails":"A use-after-free vulnerability exists in handle_error() in sass_context.cpp in LibSass 3.4.x and 3.5.x through 3.5.4 that could be leveraged to cause a denial of service (application crash) or possibly unspecified other impact.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2018-11499","cvss3Severity":"high","cvss3Score":"9.8","cvss3Metrics":{"A":"High","AC":"Low","PR":"None","S":"Unchanged","C":"High","UI":"None","AV":"Network","I":"High"},"extraData":{}}</REMEDIATE> -->
index: True
text_combine:
CVE-2018-11499 (High) detected in multiple libraries - ## CVE-2018-11499 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>opennmsopennms-source-26.0.0-1</b>, <b>opennmsopennms-source-26.0.0-1</b>, <b>opennmsopennms-source-26.0.0-1</b>, <b>opennmsopennms-source-26.0.0-1</b></p></summary> <p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> A use-after-free vulnerability exists in handle_error() in sass_context.cpp in LibSass 3.4.x and 3.5.x through 3.5.4 that could be leveraged to cause a denial of service (application crash) or possibly unspecified other impact. <p>Publish Date: 2018-05-26 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2018-11499>CVE-2018-11499</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.8</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: High - Integrity Impact: High - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. 
</p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Change files</p> <p>Origin: <a href="https://github.com/sass/libsass/commit/930857ce4938f64ce1c31463dbd19b1aa781a5f7">https://github.com/sass/libsass/commit/930857ce4938f64ce1c31463dbd19b1aa781a5f7</a></p> <p>Release Date: 2018-11-23</p> <p>Fix Resolution: Replace or update the following files: error_handling.cpp, error_handling.hpp, parser.cpp</p> </p> </details> <p></p> <!-- <REMEDIATE>{"isOpenPROnVulnerability":false,"isPackageBased":true,"isDefaultBranch":true,"packages":[],"baseBranches":["master"],"vulnerabilityIdentifier":"CVE-2018-11499","vulnerabilityDetails":"A use-after-free vulnerability exists in handle_error() in sass_context.cpp in LibSass 3.4.x and 3.5.x through 3.5.4 that could be leveraged to cause a denial of service (application crash) or possibly unspecified other impact.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2018-11499","cvss3Severity":"high","cvss3Score":"9.8","cvss3Metrics":{"A":"High","AC":"Low","PR":"None","S":"Unchanged","C":"High","UI":"None","AV":"Network","I":"High"},"extraData":{}}</REMEDIATE> -->
label: non_defect
text:
cve high detected in multiple libraries cve high severity vulnerability vulnerable libraries opennmsopennms source opennmsopennms source opennmsopennms source opennmsopennms source vulnerability details a use after free vulnerability exists in handle error in sass context cpp in libsass x and x through that could be leveraged to cause a denial of service application crash or possibly unspecified other impact publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type change files origin a href release date fix resolution replace or update the following files error handling cpp error handling hpp parser cpp isopenpronvulnerability false ispackagebased true isdefaultbranch true packages basebranches vulnerabilityidentifier cve vulnerabilitydetails a use after free vulnerability exists in handle error in sass context cpp in libsass x and x through that could be leveraged to cause a denial of service application crash or possibly unspecified other impact vulnerabilityurl
binary_label: 0
Record 5:
Unnamed: 0: 178,467
id: 13,780,823,109
type: IssuesEvent
created_at: 2020-10-08 15:23:21
repo: elastic/elasticsearch
repo_url: https://api.github.com/repos/elastic/elasticsearch
action: opened
title: [CI] ConcurrentSnapshotsIT.testDeletesAreBatched fails
labels: :Distributed/Snapshot/Restore >test-failure
body:
**Build scan**: https://gradle-enterprise.elastic.co/s/ddwl7fzpvxuck **Repro line**: ./gradlew ':server:internalClusterTest' --tests "org.elasticsearch.snapshots.ConcurrentSnapshotsIT.testDeletesAreBatched" \ -Dtests.seed=B05CAAB2F86ED160 \ -Dtests.security.manager=true \ -Dbuild.snapshot=false \ -Dtests.jvm.argline="-Dbuild.snapshot=false" \ -Dtests.locale=sq-AL \ -Dtests.timezone=Indian/Mauritius \ -Druntime.java=8 **Reproduces locally?**: no **Applicable branches**: 7.x **Failure history**: Another one on 7.9 on Oct 8th:https://gradle-enterprise.elastic.co/s/q5pxw3i4jidi2 Two more back in August: https://gradle-enterprise.elastic.co/s/4onwuh6duap5g https://gradle-enterprise.elastic.co/s/vx64ubpuwjwbc **Failure excerpt**: java.lang.AssertionError: Expected: is <SUCCESS> but: was <PARTIAL> at __randomizedtesting.SeedInfo.seed([B05CAAB2F86ED160:94FBBD6B29151D05]:0) at org.hamcrest.MatcherAssert.assertThat(MatcherAssert.java:18) at org.junit.Assert.assertThat(Assert.java:956) at org.junit.Assert.assertThat(Assert.java:923) at org.elasticsearch.snapshots.ConcurrentSnapshotsIT.testDeletesAreBatched(ConcurrentSnapshotsIT.java:141)
index: 1.0
text_combine:
[CI] ConcurrentSnapshotsIT.testDeletesAreBatched fails - **Build scan**: https://gradle-enterprise.elastic.co/s/ddwl7fzpvxuck **Repro line**: ./gradlew ':server:internalClusterTest' --tests "org.elasticsearch.snapshots.ConcurrentSnapshotsIT.testDeletesAreBatched" \ -Dtests.seed=B05CAAB2F86ED160 \ -Dtests.security.manager=true \ -Dbuild.snapshot=false \ -Dtests.jvm.argline="-Dbuild.snapshot=false" \ -Dtests.locale=sq-AL \ -Dtests.timezone=Indian/Mauritius \ -Druntime.java=8 **Reproduces locally?**: no **Applicable branches**: 7.x **Failure history**: Another one on 7.9 on Oct 8th:https://gradle-enterprise.elastic.co/s/q5pxw3i4jidi2 Two more back in August: https://gradle-enterprise.elastic.co/s/4onwuh6duap5g https://gradle-enterprise.elastic.co/s/vx64ubpuwjwbc **Failure excerpt**: java.lang.AssertionError: Expected: is <SUCCESS> but: was <PARTIAL> at __randomizedtesting.SeedInfo.seed([B05CAAB2F86ED160:94FBBD6B29151D05]:0) at org.hamcrest.MatcherAssert.assertThat(MatcherAssert.java:18) at org.junit.Assert.assertThat(Assert.java:956) at org.junit.Assert.assertThat(Assert.java:923) at org.elasticsearch.snapshots.ConcurrentSnapshotsIT.testDeletesAreBatched(ConcurrentSnapshotsIT.java:141)
label: non_defect
text:
concurrentsnapshotsit testdeletesarebatched fails build scan repro line gradlew server internalclustertest tests org elasticsearch snapshots concurrentsnapshotsit testdeletesarebatched dtests seed dtests security manager true dbuild snapshot false dtests jvm argline dbuild snapshot false dtests locale sq al dtests timezone indian mauritius druntime java reproduces locally no applicable branches x failure history another one on on oct two more back in august failure excerpt java lang assertionerror expected is but was at randomizedtesting seedinfo seed at org hamcrest matcherassert assertthat matcherassert java at org junit assert assertthat assert java at org junit assert assertthat assert java at org elasticsearch snapshots concurrentsnapshotsit testdeletesarebatched concurrentsnapshotsit java
binary_label: 0
Record 6:
Unnamed: 0: 568,141
id: 16,960,266,562
type: IssuesEvent
created_at: 2021-06-29 02:07:14
repo: cjs8487/SS-Randomizer-Tracker
repo_url: https://api.github.com/repos/cjs8487/SS-Randomizer-Tracker
action: closed
title: Refactor Components
labels: Low Priority Refactor
body:
There are several ways to reduce the number of components we have, both by combining them into files (rather than one per file) as well as changing components to take prop values to determine state, removing the need for individual components for everything. These changes will allow logic refactors to be applied more easily, as well as consolidate state, rather than track the same state in multiple locations, just in different ways
index: 1.0
text_combine:
Refactor Components - There are several ways to reduce the number of components we have, both by combining them into files (rather than one per file) as well as changing components to take prop values to determine state, removing the need for individual components for everything. These changes will allow logic refactors to be applied more easily, as well as consolidate state, rather than track the same state in multiple locations, just in different ways
label: non_defect
text:
refactor components there are several ways to reduce the number of components we have both by combining them into files rather than one per file as well as changing components to take prop values to determine state removing the need for individual components for everything these changes will allow logic refactors to be applied more easily as well as consolidate state rather than track the same state in multiple locations just in different ways
binary_label: 0
Record 7:
Unnamed: 0: 128,233
id: 17,465,994,670
type: IssuesEvent
created_at: 2021-08-06 16:54:49
repo: Esri/calcite-components
repo_url: https://api.github.com/repos/Esri/calcite-components
action: closed
title: refactor(Tile and Tile Select): match Figma and utilize Tailwind
labels: help wanted design refactor 3 - installed
body:
Description This component needs to match Figma designs. Acceptance Criteria The component appears as it does in Figma. Related issues: - #1230 Related prs: - pr #1769 - pr #1770 - pr #1771 Design: - https://www.figma.com/file/E3SB0i5wPy7AagB7KcjF05/?node-id=1183%3A19780
index: 1.0
text_combine:
refactor(Tile and Tile Select): match Figma and utilize Tailwind - Description This component needs to match Figma designs. Acceptance Criteria The component appears as it does in Figma. Related issues: - #1230 Related prs: - pr #1769 - pr #1770 - pr #1771 Design: - https://www.figma.com/file/E3SB0i5wPy7AagB7KcjF05/?node-id=1183%3A19780
label: non_defect
text:
refactor tile and tile select match figma and utilize tailwind description this component needs to match figma designs acceptance criteria the component appears as it does in figma related issues related prs pr pr pr design
binary_label: 0
Record 8:
Unnamed: 0: 325,835
id: 27,965,082,430
type: IssuesEvent
created_at: 2023-03-24 18:45:26
repo: cockroachdb/cockroach
repo_url: https://api.github.com/repos/cockroachdb/cockroach
action: closed
title: pkg/sql/logictest/tests/fakedist-vec-off/fakedist-vec-off_test: TestLogic_timestamp failed
labels: C-test-failure O-robot T-sql-queries branch-release-23.1
body:
pkg/sql/logictest/tests/fakedist-vec-off/fakedist-vec-off_test.TestLogic_timestamp [failed](https://teamcity.cockroachdb.com/buildConfiguration/Cockroach_Nightlies_SqlLogicTestHighVModuleNightlyBazel/9225821?buildTab=log) with [artifacts](https://teamcity.cockroachdb.com/buildConfiguration/Cockroach_Nightlies_SqlLogicTestHighVModuleNightlyBazel/9225821?buildTab=artifacts#/) on release-23.1 @ [52e55d2ef172b7cfec14e8a0a954f8864b2be779](https://github.com/cockroachdb/cockroach/commits/52e55d2ef172b7cfec14e8a0a954f8864b2be779): Fatal error: ``` panic: test timed out after 59m55s ``` Stack: ``` goroutine 372298 [running]: testing.(*M).startAlarm.func1() GOROOT/src/testing/testing.go:2036 +0x8e created by time.goFunc GOROOT/src/time/sleep.go:176 +0x32 ``` <details><summary>Log preceding fatal error</summary> <p> ``` * github.com/cockroachdb/pebble/vfs.(*diskHealthCheckingFS).startTickerLocked.func1() * github.com/cockroachdb/pebble/vfs/external/com_github_cockroachdb_pebble/vfs/disk_health.go:474 +0x10e * created by github.com/cockroachdb/pebble/vfs.(*diskHealthCheckingFS).startTickerLocked * github.com/cockroachdb/pebble/vfs/external/com_github_cockroachdb_pebble/vfs/disk_health.go:468 +0x7a * * goroutine 355567 [select, 2 minutes]: * github.com/cockroachdb/cockroach/pkg/util/admission.(*WorkQueue).startClosingEpochs.func1() * github.com/cockroachdb/cockroach/pkg/util/admission/work_queue.go:462 +0x1d6 * created by github.com/cockroachdb/cockroach/pkg/util/admission.(*WorkQueue).startClosingEpochs * github.com/cockroachdb/cockroach/pkg/util/admission/work_queue.go:435 +0x56 * * goroutine 355496 [select, 2 minutes]: * github.com/cockroachdb/cockroach/pkg/util/admission.(*WorkQueue).startClosingEpochs.func1() * github.com/cockroachdb/cockroach/pkg/util/admission/work_queue.go:462 +0x1d6 * created by github.com/cockroachdb/cockroach/pkg/util/admission.(*WorkQueue).startClosingEpochs * github.com/cockroachdb/cockroach/pkg/util/admission/work_queue.go:435 +0x56 * * goroutine 
355706 [select, 2 minutes]: * github.com/cockroachdb/pebble/vfs.(*diskHealthCheckingFile).startTicker.func1() * github.com/cockroachdb/pebble/vfs/external/com_github_cockroachdb_pebble/vfs/disk_health.go:148 +0xdc * created by github.com/cockroachdb/pebble/vfs.(*diskHealthCheckingFile).startTicker * github.com/cockroachdb/pebble/vfs/external/com_github_cockroachdb_pebble/vfs/disk_health.go:143 +0x5d * * goroutine 355491 [select, 2 minutes]: * github.com/cockroachdb/cockroach/pkg/util/admission.initWorkQueue.func2() * github.com/cockroachdb/cockroach/pkg/util/admission/work_queue.go:388 +0x86 * created by github.com/cockroachdb/cockroach/pkg/util/admission.initWorkQueue * github.com/cockroachdb/cockroach/pkg/util/admission/work_queue.go:385 +0x33f * * goroutine 355635 [chan receive, 2 minutes]: * github.com/cockroachdb/pebble.(*tableCacheShard).releaseLoop.func1({0x64ca000, 0xc00a3061e0}) * github.com/cockroachdb/pebble/external/com_github_cockroachdb_pebble/table_cache.go:324 +0x9f * runtime/pprof.Do({0x64c9f90?, 0xc000082038?}, {{0xc000137200?, 0x1000000000001?, 0xc007613fc0?}}, 0xc004e477a8) * GOROOT/src/runtime/pprof/runtime.go:40 +0xa3 * github.com/cockroachdb/pebble.(*tableCacheShard).releaseLoop(0x0?) * github.com/cockroachdb/pebble/external/com_github_cockroachdb_pebble/table_cache.go:322 +0x58 * created by github.com/cockroachdb/pebble.(*tableCacheShard).init * github.com/cockroachdb/pebble/external/com_github_cockroachdb_pebble/table_cache.go:314 +0xef * * goroutine 355742 [chan receive, 2 minutes]: * github.com/cockroachdb/pebble.(*tableCacheShard).releaseLoop.func1({0x64ca000, 0xc00ae42ae0}) * github.com/cockroachdb/pebble/external/com_github_cockroachdb_pebble/table_cache.go:324 +0x9f * runtime/pprof.Do({0x64c9f90?, 0xc000082038?}, {{0xc000137200?, 0x100000000000000?, 0x6512460?}}, 0xc0000c1fa8) * GOROOT/src/runtime/pprof/runtime.go:40 +0xa3 * github.com/cockroachdb/pebble.(*tableCacheShard).releaseLoop(0xc0000c1fb8?) 
* github.com/cockroachdb/pebble/external/com_github_cockroachdb_pebble/table_cache.go:322 +0x58 * created by github.com/cockroachdb/pebble.(*tableCacheShard).init * github.com/cockroachdb/pebble/external/com_github_cockroachdb_pebble/table_cache.go:314 +0xef * * ``` </p> </details> <details><summary>Help</summary> <p> See also: [How To Investigate a Go Test Failure \(internal\)](https://cockroachlabs.atlassian.net/l/c/HgfXfJgM) </p> </details> /cc @cockroachdb/sql-queries <sub> [This test on roachdash](https://roachdash.crdb.dev/?filter=status:open%20t:.*TestLogic_timestamp.*&sort=title+created&display=lastcommented+project) | [Improve this report!](https://github.com/cockroachdb/cockroach/tree/master/pkg/cmd/internal/issues) </sub> Jira issue: CRDB-25887
index: 1.0
text_combine:
pkg/sql/logictest/tests/fakedist-vec-off/fakedist-vec-off_test: TestLogic_timestamp failed - pkg/sql/logictest/tests/fakedist-vec-off/fakedist-vec-off_test.TestLogic_timestamp [failed](https://teamcity.cockroachdb.com/buildConfiguration/Cockroach_Nightlies_SqlLogicTestHighVModuleNightlyBazel/9225821?buildTab=log) with [artifacts](https://teamcity.cockroachdb.com/buildConfiguration/Cockroach_Nightlies_SqlLogicTestHighVModuleNightlyBazel/9225821?buildTab=artifacts#/) on release-23.1 @ [52e55d2ef172b7cfec14e8a0a954f8864b2be779](https://github.com/cockroachdb/cockroach/commits/52e55d2ef172b7cfec14e8a0a954f8864b2be779): Fatal error: ``` panic: test timed out after 59m55s ``` Stack: ``` goroutine 372298 [running]: testing.(*M).startAlarm.func1() GOROOT/src/testing/testing.go:2036 +0x8e created by time.goFunc GOROOT/src/time/sleep.go:176 +0x32 ``` <details><summary>Log preceding fatal error</summary> <p> ``` * github.com/cockroachdb/pebble/vfs.(*diskHealthCheckingFS).startTickerLocked.func1() * github.com/cockroachdb/pebble/vfs/external/com_github_cockroachdb_pebble/vfs/disk_health.go:474 +0x10e * created by github.com/cockroachdb/pebble/vfs.(*diskHealthCheckingFS).startTickerLocked * github.com/cockroachdb/pebble/vfs/external/com_github_cockroachdb_pebble/vfs/disk_health.go:468 +0x7a * * goroutine 355567 [select, 2 minutes]: * github.com/cockroachdb/cockroach/pkg/util/admission.(*WorkQueue).startClosingEpochs.func1() * github.com/cockroachdb/cockroach/pkg/util/admission/work_queue.go:462 +0x1d6 * created by github.com/cockroachdb/cockroach/pkg/util/admission.(*WorkQueue).startClosingEpochs * github.com/cockroachdb/cockroach/pkg/util/admission/work_queue.go:435 +0x56 * * goroutine 355496 [select, 2 minutes]: * github.com/cockroachdb/cockroach/pkg/util/admission.(*WorkQueue).startClosingEpochs.func1() * github.com/cockroachdb/cockroach/pkg/util/admission/work_queue.go:462 +0x1d6 * created by 
github.com/cockroachdb/cockroach/pkg/util/admission.(*WorkQueue).startClosingEpochs * github.com/cockroachdb/cockroach/pkg/util/admission/work_queue.go:435 +0x56 * * goroutine 355706 [select, 2 minutes]: * github.com/cockroachdb/pebble/vfs.(*diskHealthCheckingFile).startTicker.func1() * github.com/cockroachdb/pebble/vfs/external/com_github_cockroachdb_pebble/vfs/disk_health.go:148 +0xdc * created by github.com/cockroachdb/pebble/vfs.(*diskHealthCheckingFile).startTicker * github.com/cockroachdb/pebble/vfs/external/com_github_cockroachdb_pebble/vfs/disk_health.go:143 +0x5d * * goroutine 355491 [select, 2 minutes]: * github.com/cockroachdb/cockroach/pkg/util/admission.initWorkQueue.func2() * github.com/cockroachdb/cockroach/pkg/util/admission/work_queue.go:388 +0x86 * created by github.com/cockroachdb/cockroach/pkg/util/admission.initWorkQueue * github.com/cockroachdb/cockroach/pkg/util/admission/work_queue.go:385 +0x33f * * goroutine 355635 [chan receive, 2 minutes]: * github.com/cockroachdb/pebble.(*tableCacheShard).releaseLoop.func1({0x64ca000, 0xc00a3061e0}) * github.com/cockroachdb/pebble/external/com_github_cockroachdb_pebble/table_cache.go:324 +0x9f * runtime/pprof.Do({0x64c9f90?, 0xc000082038?}, {{0xc000137200?, 0x1000000000001?, 0xc007613fc0?}}, 0xc004e477a8) * GOROOT/src/runtime/pprof/runtime.go:40 +0xa3 * github.com/cockroachdb/pebble.(*tableCacheShard).releaseLoop(0x0?) 
* github.com/cockroachdb/pebble/external/com_github_cockroachdb_pebble/table_cache.go:322 +0x58 * created by github.com/cockroachdb/pebble.(*tableCacheShard).init * github.com/cockroachdb/pebble/external/com_github_cockroachdb_pebble/table_cache.go:314 +0xef * * goroutine 355742 [chan receive, 2 minutes]: * github.com/cockroachdb/pebble.(*tableCacheShard).releaseLoop.func1({0x64ca000, 0xc00ae42ae0}) * github.com/cockroachdb/pebble/external/com_github_cockroachdb_pebble/table_cache.go:324 +0x9f * runtime/pprof.Do({0x64c9f90?, 0xc000082038?}, {{0xc000137200?, 0x100000000000000?, 0x6512460?}}, 0xc0000c1fa8) * GOROOT/src/runtime/pprof/runtime.go:40 +0xa3 * github.com/cockroachdb/pebble.(*tableCacheShard).releaseLoop(0xc0000c1fb8?) * github.com/cockroachdb/pebble/external/com_github_cockroachdb_pebble/table_cache.go:322 +0x58 * created by github.com/cockroachdb/pebble.(*tableCacheShard).init * github.com/cockroachdb/pebble/external/com_github_cockroachdb_pebble/table_cache.go:314 +0xef * * ``` </p> </details> <details><summary>Help</summary> <p> See also: [How To Investigate a Go Test Failure \(internal\)](https://cockroachlabs.atlassian.net/l/c/HgfXfJgM) </p> </details> /cc @cockroachdb/sql-queries <sub> [This test on roachdash](https://roachdash.crdb.dev/?filter=status:open%20t:.*TestLogic_timestamp.*&sort=title+created&display=lastcommented+project) | [Improve this report!](https://github.com/cockroachdb/cockroach/tree/master/pkg/cmd/internal/issues) </sub> Jira issue: CRDB-25887
non_defect
pkg sql logictest tests fakedist vec off fakedist vec off test testlogic timestamp failed pkg sql logictest tests fakedist vec off fakedist vec off test testlogic timestamp with on release fatal error panic test timed out after stack goroutine testing m startalarm goroot src testing testing go created by time gofunc goroot src time sleep go log preceding fatal error github com cockroachdb pebble vfs diskhealthcheckingfs starttickerlocked github com cockroachdb pebble vfs external com github cockroachdb pebble vfs disk health go created by github com cockroachdb pebble vfs diskhealthcheckingfs starttickerlocked github com cockroachdb pebble vfs external com github cockroachdb pebble vfs disk health go goroutine github com cockroachdb cockroach pkg util admission workqueue startclosingepochs github com cockroachdb cockroach pkg util admission work queue go created by github com cockroachdb cockroach pkg util admission workqueue startclosingepochs github com cockroachdb cockroach pkg util admission work queue go goroutine github com cockroachdb cockroach pkg util admission workqueue startclosingepochs github com cockroachdb cockroach pkg util admission work queue go created by github com cockroachdb cockroach pkg util admission workqueue startclosingepochs github com cockroachdb cockroach pkg util admission work queue go goroutine github com cockroachdb pebble vfs diskhealthcheckingfile startticker github com cockroachdb pebble vfs external com github cockroachdb pebble vfs disk health go created by github com cockroachdb pebble vfs diskhealthcheckingfile startticker github com cockroachdb pebble vfs external com github cockroachdb pebble vfs disk health go goroutine github com cockroachdb cockroach pkg util admission initworkqueue github com cockroachdb cockroach pkg util admission work queue go created by github com cockroachdb cockroach pkg util admission initworkqueue github com cockroachdb cockroach pkg util admission work queue go goroutine github com 
cockroachdb pebble tablecacheshard releaseloop github com cockroachdb pebble external com github cockroachdb pebble table cache go runtime pprof do goroot src runtime pprof runtime go github com cockroachdb pebble tablecacheshard releaseloop github com cockroachdb pebble external com github cockroachdb pebble table cache go created by github com cockroachdb pebble tablecacheshard init github com cockroachdb pebble external com github cockroachdb pebble table cache go goroutine github com cockroachdb pebble tablecacheshard releaseloop github com cockroachdb pebble external com github cockroachdb pebble table cache go runtime pprof do goroot src runtime pprof runtime go github com cockroachdb pebble tablecacheshard releaseloop github com cockroachdb pebble external com github cockroachdb pebble table cache go created by github com cockroachdb pebble tablecacheshard init github com cockroachdb pebble external com github cockroachdb pebble table cache go help see also cc cockroachdb sql queries jira issue crdb
0
195,965
6,922,581,238
IssuesEvent
2017-11-30 04:07:10
webcompat/web-bugs
https://api.github.com/repos/webcompat/web-bugs
closed
accounts.craigslist.org - see bug description
browser-firefox-mobile priority-critical
<!-- @browser: Firefox Mobile 59.0 --> <!-- @ua_header: Mozilla/5.0 (Android 7.0; Mobile; rv:59.0) Gecko/59.0 Firefox/59.0 --> <!-- @reported_with: web --> **URL**: https://accounts.craigslist.org/eaf?postingID=6383247401 **Browser / Version**: Firefox Mobile 59.0 **Operating System**: Android 7.0 **Tested Another Browser**: No **Problem type**: Something else **Description**: email form glitches **Steps to Reproduce**: when trying to select an email address that has been used before for the email form om craigslit, the sugestion box with previously used emails glitches open and closed two or three times before stabalizing so a selection can be made. thank you for all of your work guys! fight for net nutrality! [![Screenshot Description](https://webcompat.com/uploads/2017/11/aa18ce51-f48c-47de-9cc8-8d61f0ba3e86-thumb.jpg)](https://webcompat.com/uploads/2017/11/aa18ce51-f48c-47de-9cc8-8d61f0ba3e86.jpg) _From [webcompat.com](https://webcompat.com/) with ❤️_
1.0
accounts.craigslist.org - see bug description - <!-- @browser: Firefox Mobile 59.0 --> <!-- @ua_header: Mozilla/5.0 (Android 7.0; Mobile; rv:59.0) Gecko/59.0 Firefox/59.0 --> <!-- @reported_with: web --> **URL**: https://accounts.craigslist.org/eaf?postingID=6383247401 **Browser / Version**: Firefox Mobile 59.0 **Operating System**: Android 7.0 **Tested Another Browser**: No **Problem type**: Something else **Description**: email form glitches **Steps to Reproduce**: when trying to select an email address that has been used before for the email form om craigslit, the sugestion box with previously used emails glitches open and closed two or three times before stabalizing so a selection can be made. thank you for all of your work guys! fight for net nutrality! [![Screenshot Description](https://webcompat.com/uploads/2017/11/aa18ce51-f48c-47de-9cc8-8d61f0ba3e86-thumb.jpg)](https://webcompat.com/uploads/2017/11/aa18ce51-f48c-47de-9cc8-8d61f0ba3e86.jpg) _From [webcompat.com](https://webcompat.com/) with ❤️_
non_defect
accounts craigslist org see bug description url browser version firefox mobile operating system android tested another browser no problem type something else description email form glitches steps to reproduce when trying to select an email address that has been used before for the email form om craigslit the sugestion box with previously used emails glitches open and closed two or three times before stabalizing so a selection can be made thank you for all of your work guys fight for net nutrality from with ❤️
0
61,378
17,023,679,648
IssuesEvent
2021-07-03 03:15:52
tomhughes/trac-tickets
https://api.github.com/repos/tomhughes/trac-tickets
closed
postcode should be a string?
Component: nominatim Priority: major Resolution: fixed Type: defect
**[Submitted to the original trac issue database at 10.03pm, Sunday, 6th February 2011]** Request in question: http://nominatim.openstreetmap.org/reverse?format=json&osm_type=N&osm_id=524588214&addressdetails=0 JSON does not support integers with leading zeros. I'm not sure how complex postcodes could be. Perhaps the best way to fix the problem is to return a string. This could be done in javascript_renderData by removing the if statement with is_int. If we would like to parse ints and floats we should check for is_int and return (int) $xVal or (float) $xVal I guess. Thx for your help.
1.0
postcode should be a string? - **[Submitted to the original trac issue database at 10.03pm, Sunday, 6th February 2011]** Request in question: http://nominatim.openstreetmap.org/reverse?format=json&osm_type=N&osm_id=524588214&addressdetails=0 JSON does not support integers with leading zeros. I'm not sure how complex postcodes could be. Perhaps the best way to fix the problem is to return a string. This could be done in javascript_renderData by removing the if statement with is_int. If we would like to parse ints and floats we should check for is_int and return (int) $xVal or (float) $xVal I guess. Thx for your help.
defect
postcode should be a string request in question json does not support integers with leading zeros i m not sure how complex postcodes could be perhaps the best way to fix the problem is to return a string this could be done in javascript renderdata by removing the if statement with is int if we would like to parse ints and floats we should check for is int and return int xval or float xval i guess thx for your help
1
142,985
5,487,428,799
IssuesEvent
2017-03-14 04:32:25
DavidAylaian/CarbonOS
https://api.github.com/repos/DavidAylaian/CarbonOS
opened
Printfln() is broken
bug medium priority
```c printfln("%s", "this will not work"); ``` should print `this will not work ` but instead it prints ![printfln](https://cloud.githubusercontent.com/assets/22228595/23885951/6664ef20-084d-11e7-917f-35aa6a0eb1ab.png) There are no warnings or errors left by the compiler.
1.0
Printfln() is broken - ```c printfln("%s", "this will not work"); ``` should print `this will not work ` but instead it prints ![printfln](https://cloud.githubusercontent.com/assets/22228595/23885951/6664ef20-084d-11e7-917f-35aa6a0eb1ab.png) There are no warnings or errors left by the compiler.
non_defect
printfln is broken c printfln s this will not work should print this will not work but instead it prints there are no warnings or errors left by the compiler
0
304,742
26,329,320,982
IssuesEvent
2023-01-10 09:35:39
harvester/harvester
https://api.github.com/repos/harvester/harvester
closed
[BUG] Filter the network created by storage-network in the node drive (RKE1)
kind/bug area/ui priority/1 severity/3 area/dashboard-related reproduce/always require-ui/small not-require/test-plan
**Describe the bug** <!-- A clear and concise description of what the bug is. --> storage-network will automatically create a network, and we should filter it. ![image.png](https://images.zenhubusercontent.com/60345555ec1db310c78aa2b8/319c7ae0-8e91-4103-8b2c-4f94acd33a81) **To Reproduce** Steps to reproduce the behavior: 1. go to setting page, enable storage-network 2. Importing harveste into rancher 3. create harvester node driver (RKE1) **Expected behavior** <!-- A clear and concise description of what you expected to happen. --> we should filter the network created by storage-network in the node drive. **Support bundle** <!-- You can generate a support bundle in the bottom of Harvester UI (https://docs.harvesterhci.io/v1.0/troubleshooting/harvester/#generate-a-support-bundle). It includes logs and configurations that help diagnose the issue. Tokens, passwords, and secrets are automatically removed from support bundles. If you feel it's not appropriate to share the bundle files publicly, please consider: - Wait for a developer to reach you and provide the bundle file by any secure methods. - Join our Slack community (https://rancher-users.slack.com/archives/C01GKHKAG0K) to provide the bundle. - Send the bundle to harvester-support-bundle@suse.com with the correct issue ID. --> **Environment** - Harvester ISO version: - Underlying Infrastructure (e.g. Baremetal with Dell PowerEdge R630): **Additional context** Add any other context about the problem here.
1.0
[BUG] Filter the network created by storage-network in the node drive (RKE1) - **Describe the bug** <!-- A clear and concise description of what the bug is. --> storage-network will automatically create a network, and we should filter it. ![image.png](https://images.zenhubusercontent.com/60345555ec1db310c78aa2b8/319c7ae0-8e91-4103-8b2c-4f94acd33a81) **To Reproduce** Steps to reproduce the behavior: 1. go to setting page, enable storage-network 2. Importing harveste into rancher 3. create harvester node driver (RKE1) **Expected behavior** <!-- A clear and concise description of what you expected to happen. --> we should filter the network created by storage-network in the node drive. **Support bundle** <!-- You can generate a support bundle in the bottom of Harvester UI (https://docs.harvesterhci.io/v1.0/troubleshooting/harvester/#generate-a-support-bundle). It includes logs and configurations that help diagnose the issue. Tokens, passwords, and secrets are automatically removed from support bundles. If you feel it's not appropriate to share the bundle files publicly, please consider: - Wait for a developer to reach you and provide the bundle file by any secure methods. - Join our Slack community (https://rancher-users.slack.com/archives/C01GKHKAG0K) to provide the bundle. - Send the bundle to harvester-support-bundle@suse.com with the correct issue ID. --> **Environment** - Harvester ISO version: - Underlying Infrastructure (e.g. Baremetal with Dell PowerEdge R630): **Additional context** Add any other context about the problem here.
non_defect
filter the network created by storage network in the node drive describe the bug storage network will automatically create a network and we should filter it to reproduce steps to reproduce the behavior go to setting page enable storage network importing harveste into rancher create harvester node driver expected behavior we should filter the network created by storage network in the node drive support bundle you can generate a support bundle in the bottom of harvester ui it includes logs and configurations that help diagnose the issue tokens passwords and secrets are automatically removed from support bundles if you feel it s not appropriate to share the bundle files publicly please consider wait for a developer to reach you and provide the bundle file by any secure methods join our slack community to provide the bundle send the bundle to harvester support bundle suse com with the correct issue id environment harvester iso version underlying infrastructure e g baremetal with dell poweredge additional context add any other context about the problem here
0
21,939
30,446,798,942
IssuesEvent
2023-07-15 19:28:36
h4sh5/pypi-auto-scanner
https://api.github.com/repos/h4sh5/pypi-auto-scanner
opened
pyutils 0.0.1b2 has 2 GuardDog issues
guarddog typosquatting silent-process-execution
https://pypi.org/project/pyutils https://inspector.pypi.io/project/pyutils ```{ "dependency": "pyutils", "version": "0.0.1b2", "result": { "issues": 2, "errors": {}, "results": { "typosquatting": "This package closely ressembles the following package names, and might be a typosquatting attempt: python-utils, pytils", "silent-process-execution": [ { "location": "pyutils-0.0.1b2/src/pyutils/exec_utils.py:200", "code": " subproc = subprocess.Popen(\n args,\n stdin=subprocess.DEVNULL,\n stdout=subprocess.DEVNULL,\n stderr=subprocess.DEVNULL,\n )", "message": "This package is silently executing an external binary, redirecting stdout, stderr and stdin to /dev/null" } ] }, "path": "/tmp/tmp_uwg8ssv/pyutils" } }```
1.0
pyutils 0.0.1b2 has 2 GuardDog issues - https://pypi.org/project/pyutils https://inspector.pypi.io/project/pyutils ```{ "dependency": "pyutils", "version": "0.0.1b2", "result": { "issues": 2, "errors": {}, "results": { "typosquatting": "This package closely ressembles the following package names, and might be a typosquatting attempt: python-utils, pytils", "silent-process-execution": [ { "location": "pyutils-0.0.1b2/src/pyutils/exec_utils.py:200", "code": " subproc = subprocess.Popen(\n args,\n stdin=subprocess.DEVNULL,\n stdout=subprocess.DEVNULL,\n stderr=subprocess.DEVNULL,\n )", "message": "This package is silently executing an external binary, redirecting stdout, stderr and stdin to /dev/null" } ] }, "path": "/tmp/tmp_uwg8ssv/pyutils" } }```
non_defect
pyutils has guarddog issues dependency pyutils version result issues errors results typosquatting this package closely ressembles the following package names and might be a typosquatting attempt python utils pytils silent process execution location pyutils src pyutils exec utils py code subproc subprocess popen n args n stdin subprocess devnull n stdout subprocess devnull n stderr subprocess devnull n message this package is silently executing an external binary redirecting stdout stderr and stdin to dev null path tmp tmp pyutils
0
16,126
2,872,987,048
IssuesEvent
2015-06-08 14:54:09
msimpson/pixelcity
https://api.github.com/repos/msimpson/pixelcity
closed
glass building act strange
auto-migrated Priority-Medium Type-Defect
``` What steps will reproduce the problem? 1. turn on glass buildings What is the expected output? What do you see instead? the textures rotate really fast around the buildings. What version of the product are you using? On what operating system? 1.011 vista 64 radeon 4870 Q6600 Please provide any additional information below. wow that was a lot of bug reporting :) ``` Original issue reported on code.google.com by `zivpe...@gmail.com` on 14 May 2009 at 11:19
1.0
glass building act strange - ``` What steps will reproduce the problem? 1. turn on glass buildings What is the expected output? What do you see instead? the textures rotate really fast around the buildings. What version of the product are you using? On what operating system? 1.011 vista 64 radeon 4870 Q6600 Please provide any additional information below. wow that was a lot of bug reporting :) ``` Original issue reported on code.google.com by `zivpe...@gmail.com` on 14 May 2009 at 11:19
defect
glass building act strange what steps will reproduce the problem turn on glass buildings what is the expected output what do you see instead the textures rotate really fast around the buildings what version of the product are you using on what operating system vista radeon please provide any additional information below wow that was a lot of bug reporting original issue reported on code google com by zivpe gmail com on may at
1
50,371
13,187,464,870
IssuesEvent
2020-08-13 03:30:01
icecube-trac/tix3
https://api.github.com/repos/icecube-trac/tix3
closed
ports dies if you don't specify --prefix (Trac #597)
Migrated from Trac defect tools/ports
you get: troy@zinc:~/Icecube/DarwinPorts/t2$ /opt/local/bin/port sync can't find package darwinports while executing "package require darwinports" (file "/opt/local/bin/port" line 36) <details> <summary><em>Migrated from https://code.icecube.wisc.edu/ticket/597 , reported by troy and owned by nega</em></summary> <p> ```json { "status": "closed", "changetime": "2011-07-29T20:08:27", "description": "you get:\n\ntroy@zinc:~/Icecube/DarwinPorts/t2$ /opt/local/bin/port sync\ncan't find package darwinports\n while executing\n\"package require darwinports\"\n (file \"/opt/local/bin/port\" line 36)\n", "reporter": "troy", "cc": "", "resolution": "wont or cant fix", "_ts": "1311970107000000", "component": "tools/ports", "summary": "ports dies if you don't specify --prefix", "priority": "normal", "keywords": "", "time": "2010-02-22T01:19:13", "milestone": "", "owner": "nega", "type": "defect" } ``` </p> </details>
1.0
ports dies if you don't specify --prefix (Trac #597) - you get: troy@zinc:~/Icecube/DarwinPorts/t2$ /opt/local/bin/port sync can't find package darwinports while executing "package require darwinports" (file "/opt/local/bin/port" line 36) <details> <summary><em>Migrated from https://code.icecube.wisc.edu/ticket/597 , reported by troy and owned by nega</em></summary> <p> ```json { "status": "closed", "changetime": "2011-07-29T20:08:27", "description": "you get:\n\ntroy@zinc:~/Icecube/DarwinPorts/t2$ /opt/local/bin/port sync\ncan't find package darwinports\n while executing\n\"package require darwinports\"\n (file \"/opt/local/bin/port\" line 36)\n", "reporter": "troy", "cc": "", "resolution": "wont or cant fix", "_ts": "1311970107000000", "component": "tools/ports", "summary": "ports dies if you don't specify --prefix", "priority": "normal", "keywords": "", "time": "2010-02-22T01:19:13", "milestone": "", "owner": "nega", "type": "defect" } ``` </p> </details>
defect
ports dies if you don t specify prefix trac you get troy zinc icecube darwinports opt local bin port sync can t find package darwinports while executing package require darwinports file opt local bin port line migrated from reported by troy and owned by nega json status closed changetime description you get n ntroy zinc icecube darwinports opt local bin port sync ncan t find package darwinports n while executing n package require darwinports n file opt local bin port line n reporter troy cc resolution wont or cant fix ts component tools ports summary ports dies if you don t specify prefix priority normal keywords time milestone owner nega type defect
1
67,871
21,211,810,077
IssuesEvent
2022-04-11 00:16:06
scipy/scipy
https://api.github.com/repos/scipy/scipy
closed
DOC: stats tutorials Gamma vs gamma
defect scipy.stats Documentation
Many of the stats tutorials report the distribution's CDF using `\Gamma(s, x)` and I'm wondering if `\gamma(s,x)` is in fact what was meant? [The lowercase gamma is the "lower incomplete gamma function" `\gamma(s, x) = \int_0^x t^{s-1} exp(-t) dt`, the uppercase is the "upper incomplete gamma function" `\Gamma(s, x) = \int_x^\infty t^{s-1} exp(-t) dt`and they are related by `\gamma(s, x) + \Gamma(s, x) = \Gamma(s) = (s-1)!`. As `x -> \infty`, `\Gamma(s, x) -> 0`, and `\gamma(s, x) -> \Gamma(s)`. E.g. https://en.wikipedia.org/wiki/Incomplete_gamma_function] Question: The tutorials for chi-squared, gamma, dgamma, gengamma, loggamma, nakagami all use `\Gamma(s, x)` when I would have expected `\gamma(s, x)`. Is this just due to a different definition for `\Gamma(s, x)`? Or should they all be the lower incomplete gamma function `\gamma(s, x)`? Examples: The Gamma distribution https://docs.scipy.org/doc/scipy/reference/tutorial/stats/continuous_gamma.html lists the CDF as `\Gamma(a, x)`. Except that this function goes to 0 as `x -> \infty`. [It also hasn't been normalized - it needs to be divided by `\Gamma(a)`.] The Generalized Gamma distribution https://docs.scipy.org/doc/scipy/reference/tutorial/stats/continuous_gengamma.html lists the CDF as `\Gamma(a, x^c)` for c>0. Again this function goes to 0 as `x -> \infty`. For c<0, the listed CDF is `1-\Gamma(a, x^c)/\Gamma(a)`, which goes to 0 as `x -> \infty`. [This also disagrees with the CDF for the Inverted Gamma distribution https://docs.scipy.org/doc/scipy/reference/tutorial/stats/continuous_invgamma.html, which is the special case c=-1 of gen gamma. There the CDF is listed as `\Gamma(a, x^c)/\Gamma(a)` and this actually has the correct behavior as `x -> \infty`.] I did find some usage of `\gamma`, the gennorm distribution uses it (correctly), so both \gamma and \Gamma do appear in the tutorials. 
[One side note, the tutorials often use symbols for functions without defining them, and the meaning is not always obvious from the context.] ### Scipy/Numpy/Python version information: ``` 1.0.0.dev0+e5a0cd7 1.14.0.dev0+d93a5dd sys.version_info(major=3, minor=6, micro=1, releaselevel='final', serial=0) ``` <img width="528" alt="gammagamma" src="https://user-images.githubusercontent.com/23403152/27695136-0fdbae3c-5cbc-11e7-8a88-25d78de03f95.png">
1.0
DOC: stats tutorials Gamma vs gamma - Many of the stats tutorials report the distribution's CDF using `\Gamma(s, x)` and I'm wondering if `\gamma(s,x)` is in fact what was meant? [The lowercase gamma is the "lower incomplete gamma function" `\gamma(s, x) = \int_0^x t^{s-1} exp(-t) dt`, the uppercase is the "upper incomplete gamma function" `\Gamma(s, x) = \int_x^\infty t^{s-1} exp(-t) dt`and they are related by `\gamma(s, x) + \Gamma(s, x) = \Gamma(s) = (s-1)!`. As `x -> \infty`, `\Gamma(s, x) -> 0`, and `\gamma(s, x) -> \Gamma(s)`. E.g. https://en.wikipedia.org/wiki/Incomplete_gamma_function] Question: The tutorials for chi-squared, gamma, dgamma, gengamma, loggamma, nakagami all use `\Gamma(s, x)` when I would have expected `\gamma(s, x)`. Is this just due to a different definition for `\Gamma(s, x)`? Or should they all be the lower incomplete gamma function `\gamma(s, x)`? Examples: The Gamma distribution https://docs.scipy.org/doc/scipy/reference/tutorial/stats/continuous_gamma.html lists the CDF as `\Gamma(a, x)`. Except that this function goes to 0 as `x -> \infty`. [It also hasn't been normalized - it needs to be divided by `\Gamma(a)`.] The Generalized Gamma distribution https://docs.scipy.org/doc/scipy/reference/tutorial/stats/continuous_gengamma.html lists the CDF as `\Gamma(a, x^c)` for c>0. Again this function goes to 0 as `x -> \infty`. For c<0, the listed CDF is `1-\Gamma(a, x^c)/\Gamma(a)`, which goes to 0 as `x -> \infty`. [This also disagrees with the CDF for the Inverted Gamma distribution https://docs.scipy.org/doc/scipy/reference/tutorial/stats/continuous_invgamma.html, which is the special case c=-1 of gen gamma. There the CDF is listed as `\Gamma(a, x^c)/\Gamma(a)` and this actually has the correct behavior as `x -> \infty`.] I did find some usage of `\gamma`, the gennorm distribution uses it (correctly), so both \gamma and \Gamma do appear in the tutorials. 
[One side note, the tutorials often use symbols for functions without defining them, and the meaning is not always obvious from the context.] ### Scipy/Numpy/Python version information: ``` 1.0.0.dev0+e5a0cd7 1.14.0.dev0+d93a5dd sys.version_info(major=3, minor=6, micro=1, releaselevel='final', serial=0) ``` <img width="528" alt="gammagamma" src="https://user-images.githubusercontent.com/23403152/27695136-0fdbae3c-5cbc-11e7-8a88-25d78de03f95.png">
defect
doc stats tutorials gamma vs gamma many of the stats tutorials report the distribution s cdf using gamma s x and i m wondering if gamma s x is in fact what was meant the lowercase gamma is the lower incomplete gamma function gamma s x int x t s exp t dt the uppercase is the upper incomplete gamma function gamma s x int x infty t s exp t dt and they are related by gamma s x gamma s x gamma s s as x infty gamma s x and gamma s x gamma s e g question the tutorials for chi squared gamma dgamma gengamma loggamma nakagami all use gamma s x when i would have expected gamma s x is this just due to a different definition for gamma s x or should they all be the lower incomplete gamma function gamma s x examples the gamma distribution lists the cdf as gamma a x except that this function goes to as x infty the generalized gamma distribution lists the cdf as gamma a x c for c again this function goes to as x infty for c infty this also disagrees with the cdf for the inverted gamma distribution which is the special case c of gen gamma there the cdf is listed as gamma a x c gamma a and this actually has the correct behavior as x infty i did find some usage of gamma the gennorm distribution uses it correctly so both gamma and gamma do appear in the tutorials scipy numpy python version information sys version info major minor micro releaselevel final serial img width alt gammagamma src
1
391,941
26,915,527,193
IssuesEvent
2023-02-07 05:58:09
john-waczak/MLJGaussianProcesses.jl
https://api.github.com/repos/john-waczak/MLJGaussianProcesses.jl
opened
Update the Documentation
documentation good first issue
We should update the documentation and `README.md` using the quarto document generating during the development phase. This should include demonstrations of the GP concept as well as use of this package.
1.0
Update the Documentation - We should update the documentation and `README.md` using the quarto document generating during the development phase. This should include demonstrations of the GP concept as well as use of this package.
non_defect
update the documentation we should update the documentation and readme md using the quarto document generating during the development phase this should include demonstrations of the gp concept as well as use of this package
0
21,546
3,518,269,870
IssuesEvent
2016-01-12 12:01:27
Virtual-Labs/problem-solving-iiith
https://api.github.com/repos/Virtual-Labs/problem-solving-iiith
reopened
QA_More on Numbers_UI
Category :UI Defect raised on: 24-11-2015 Developed by:IIIT Hyd Release Number Severity :S3 Status :Open Version Number :1.1
Defect Description: In the Landing page of "More on Numbers" experiment, the 'Home' &'Problem Solving Lab' links are present outside of the page width instead the links should be placed within the page limit inorder to maintain the page utility. Actual Result: In the Landing page of "More on Numbers" experiment,the 'Home' &'Problem Solving Lab' links are placed outside of the page width. Environment : OS: Windows 7, Ubuntu-16.04,Centos-6 Browsers: Firefox-42.0,Chrome-47.0,chromium-45.0 Bandwidth : 100Mbps Hardware Configuration:8GBRAM , Processor:i5 Test Step Link: https://github.com/Virtual-Labs/problem-solving-iiith/blob/master/test-cases/integration_test-cases/More%20on%20Numbers/More%20on%20Numbers_01_Usability_smk.org ![1](https://cloud.githubusercontent.com/assets/14869397/11362560/6ac72b00-92ba-11e5-95d8-37a6d76a9777.png)
1.0
QA_More on Numbers_UI - Defect Description: In the Landing page of "More on Numbers" experiment, the 'Home' &'Problem Solving Lab' links are present outside of the page width instead the links should be placed within the page limit inorder to maintain the page utility. Actual Result: In the Landing page of "More on Numbers" experiment,the 'Home' &'Problem Solving Lab' links are placed outside of the page width. Environment : OS: Windows 7, Ubuntu-16.04,Centos-6 Browsers: Firefox-42.0,Chrome-47.0,chromium-45.0 Bandwidth : 100Mbps Hardware Configuration:8GBRAM , Processor:i5 Test Step Link: https://github.com/Virtual-Labs/problem-solving-iiith/blob/master/test-cases/integration_test-cases/More%20on%20Numbers/More%20on%20Numbers_01_Usability_smk.org ![1](https://cloud.githubusercontent.com/assets/14869397/11362560/6ac72b00-92ba-11e5-95d8-37a6d76a9777.png)
defect
qa more on numbers ui defect description in the landing page of more on numbers experiment the home problem solving lab links are present outside of the page width instead the links should be placed within the page limit inorder to maintain the page utility actual result in the landing page of more on numbers experiment the home problem solving lab links are placed outside of the page width environment os windows ubuntu centos browsers firefox chrome chromium bandwidth hardware configuration processor test step link
1
346,366
10,411,415,486
IssuesEvent
2019-09-13 13:49:27
getkirby/kirby
https://api.github.com/repos/getkirby/kirby
closed
Custom panel css/js without timestamp
difficulty: easy 🍓 priority: medium 🔜 type: enhancement ✨
**Describe the bug** It is not possible to cachebust [custom panel css/js](https://getkirby.com/docs/reference/system/options/panel#custom-panel-css). **To Reproduce** ```php 'panel' => [ 'css' => 'assets/css/custom-panel.css' ] ``` Go to yourdomain.com/panel and find in the source code: ```html <link rel="stylesheet" href="https://yourdomain.com/assets/css/custom-panel.css"> ``` All other css/js files are copied to a media/fingerprint folder. **Expected behavior** Copy custom panel assets to the media/fingerprint folder. **Kirby Version** 3.1.3
1.0
Custom panel css/js without timestamp - **Describe the bug** It is not possible to cachebust [custom panel css/js](https://getkirby.com/docs/reference/system/options/panel#custom-panel-css). **To Reproduce** ```php 'panel' => [ 'css' => 'assets/css/custom-panel.css' ] ``` Go to yourdomain.com/panel and find in the source code: ```html <link rel="stylesheet" href="https://yourdomain.com/assets/css/custom-panel.css"> ``` All other css/js files are copied to a media/fingerprint folder. **Expected behavior** Copy custom panel assets to the media/fingerprint folder. **Kirby Version** 3.1.3
non_defect
custom panel css js without timestamp describe the bug it is not possible to cachebust to reproduce php panel css assets css custom panel css go to yourdomain com panel and find in the source code html link rel stylesheet href all other css js files are copied to a media fingerprint folder expected behavior copy custom panel assets to the media fingerprint folder kirby version
0
55,118
7,961,902,249
IssuesEvent
2018-07-13 12:35:38
zulip/zulip
https://api.github.com/repos/zulip/zulip
closed
Scrolling problems on /api and /help
area: documentation (api and integrations) area: documentation (user)
The use of perfect-scrollbar on the /api and /help pages is associated with a number of problems. * If I search the page with Ctrl+F for a word that’s out of view, the word isn’t scrolled into view. * perfect-scrollbar doesn’t feel like a native scrollbar. On my laptop, it’s entirely missing the momentum I expect when making flicking motions on the touchpad, and the smooth animation I expect when scrolling with arrow keys or PgUp/PgDn. On my phone, it seems to be emulating momentum but with subtly different physics than expected. * If the browser is less than 500px tall, [this rule in portico-signin.scss](https://github.com/zulip/zulip/blob/1.8.1/static/styles/portico-signin.css#L2) forces the browser to draw its own scrollbar over the perfect-scrollbar. The browser’s scrollbar ends up scrolling the logo in the header, rather than anything you might actually want to scroll. Can we rethink the use of an emulated JavaScript scrollbar? What was wrong with a real scrollbar?
2.0
Scrolling problems on /api and /help - The use of perfect-scrollbar on the /api and /help pages is associated with a number of problems. * If I search the page with Ctrl+F for a word that’s out of view, the word isn’t scrolled into view. * perfect-scrollbar doesn’t feel like a native scrollbar. On my laptop, it’s entirely missing the momentum I expect when making flicking motions on the touchpad, and the smooth animation I expect when scrolling with arrow keys or PgUp/PgDn. On my phone, it seems to be emulating momentum but with subtly different physics than expected. * If the browser is less than 500px tall, [this rule in portico-signin.scss](https://github.com/zulip/zulip/blob/1.8.1/static/styles/portico-signin.css#L2) forces the browser to draw its own scrollbar over the perfect-scrollbar. The browser’s scrollbar ends up scrolling the logo in the header, rather than anything you might actually want to scroll. Can we rethink the use of an emulated JavaScript scrollbar? What was wrong with a real scrollbar?
non_defect
scrolling problems on api and help the use of perfect scrollbar on the api and help pages is associated with a number of problems if i search the page with ctrl f for a word that’s out of view the word isn’t scrolled into view perfect scrollbar doesn’t feel like a native scrollbar on my laptop it’s entirely missing the momentum i expect when making flicking motions on the touchpad and the smooth animation i expect when scrolling with arrow keys or pgup pgdn on my phone it seems to be emulating momentum but with subtly different physics than expected if the browser is less than tall forces the browser to draw its own scrollbar over the perfect scrollbar the browser’s scrollbar ends up scrolling the logo in the header rather than anything you might actually want to scroll can we rethink the use of an emulated javascript scrollbar what was wrong with a real scrollbar
0
178,663
29,952,584,220
IssuesEvent
2023-06-23 03:29:06
microsoft/microsoft-ui-xaml
https://api.github.com/repos/microsoft/microsoft-ui-xaml
closed
UI-XAML files used outdated code for icons
product-winui3 product-winui2 team-Design
There are files that still use outdated codes for icons. ![image](https://user-images.githubusercontent.com/65828559/111816539-3f889e00-88dd-11eb-85f6-e9b0f0f65256.png) https://docs.microsoft.com/windows/apps/design/style/segoe-fluent-icons-font For example - but not limited to: [microsoft-ui-xaml](https://github.com/microsoft/microsoft-ui-xaml/tree/6684208ed977a1490bce9a6d0a8722e21d6d77ec)/[dev](https://github.com/microsoft/microsoft-ui-xaml/tree/6684208ed977a1490bce9a6d0a8722e21d6d77ec/dev)/[NavigationView](https://github.com/microsoft/microsoft-ui-xaml/tree/6684208ed977a1490bce9a6d0a8722e21d6d77ec/dev/NavigationView)/NavigationView_rs1_themeresources.xaml ```xaml 397 <Setter Property="Content" Value="&#xE11A;"/> 464 Glyph="&#xE10C;" 532 Glyph="&#xE10C;" ``` Please check these files and update according to your own recommendation
1.0
UI-XAML files used outdated code for icons - There are files that still use outdated codes for icons. ![image](https://user-images.githubusercontent.com/65828559/111816539-3f889e00-88dd-11eb-85f6-e9b0f0f65256.png) https://docs.microsoft.com/windows/apps/design/style/segoe-fluent-icons-font For example - but not limited to: [microsoft-ui-xaml](https://github.com/microsoft/microsoft-ui-xaml/tree/6684208ed977a1490bce9a6d0a8722e21d6d77ec)/[dev](https://github.com/microsoft/microsoft-ui-xaml/tree/6684208ed977a1490bce9a6d0a8722e21d6d77ec/dev)/[NavigationView](https://github.com/microsoft/microsoft-ui-xaml/tree/6684208ed977a1490bce9a6d0a8722e21d6d77ec/dev/NavigationView)/NavigationView_rs1_themeresources.xaml ```xaml 397 <Setter Property="Content" Value="&#xE11A;"/> 464 Glyph="&#xE10C;" 532 Glyph="&#xE10C;" ``` Please check these files and update according to your own recommendation
non_defect
ui xaml files used outdated code for icons there are files that still use outdated codes for icons for example but not limited to xaml glyph glyph please check these files and update according to your own recommendation
0
33,833
9,206,030,457
IssuesEvent
2019-03-08 12:29:12
qissue-bot/QGIS
https://api.github.com/repos/qissue-bot/QGIS
closed
make install fails: missing file
Category: Build/Install Component: Affected QGIS version Component: Crashes QGIS or corrupts data Component: Easy fix? Component: Operating System Component: Pull Request or Patch supplied Component: Regression? Component: Resolution Priority: Low Project: QGIS Application Status: Closed Tracker: Bug report
--- Author Name: **gjm -** (gjm -) Original Redmine Issue: 1332, https://issues.qgis.org/issues/1332 Original Assignee: nobody - --- From head, when doing _make install_, I get this error: ``` CMake Error at images/themes/default/cmake_install.cmake:36 (FILE): file INSTALL cannot find file "/home/gavin/work/qgis/qgis/images/themes/default/mActionPan.png" to install. Call Stack (most recent call first): images/themes/cmake_install.cmake:37 (INCLUDE) images/cmake_install.cmake:41 (INCLUDE) cmake_install.cmake:56 (INCLUDE) make: *** [install] Error 1 ``` Perhaps someone forgot to add that file?
1.0
make install fails: missing file - --- Author Name: **gjm -** (gjm -) Original Redmine Issue: 1332, https://issues.qgis.org/issues/1332 Original Assignee: nobody - --- From head, when doing _make install_, I get this error: ``` CMake Error at images/themes/default/cmake_install.cmake:36 (FILE): file INSTALL cannot find file "/home/gavin/work/qgis/qgis/images/themes/default/mActionPan.png" to install. Call Stack (most recent call first): images/themes/cmake_install.cmake:37 (INCLUDE) images/cmake_install.cmake:41 (INCLUDE) cmake_install.cmake:56 (INCLUDE) make: *** [install] Error 1 ``` Perhaps someone forgot to add that file?
non_defect
make install fails missing file author name gjm gjm original redmine issue original assignee nobody from head when doing make install i get this error cmake error at images themes default cmake install cmake file file install cannot find file home gavin work qgis qgis images themes default mactionpan png to install call stack most recent call first images themes cmake install cmake include images cmake install cmake include cmake install cmake include make error perhaps someone forgot to add that file
0
30,557
6,155,590,735
IssuesEvent
2017-06-28 15:02:27
primefaces/primeng
https://api.github.com/repos/primefaces/primeng
closed
Sort is not working for columns when sortable is placed on columns in p-headerColumnGroup tag
defect
<!-- - IF YOU DON'T FILL OUT THE FOLLOWING INFORMATION WE MIGHT CLOSE YOUR ISSUE WITHOUT INVESTIGATING. - IF YOU'D LIKE TO SECURE OUR RESPONSE, YOU MAY CONSIDER PRIMENG PRO SUPPORT WHERE SUPPORT IS PROVIDED WITHIN 4 hours. --> **I'm submitting a ...** (check one with "x") ``` [x] bug report => Search github for a similar issue or PR before submitting [ ] feature request => Please check if request is not on the roadmap already https://github.com/primefaces/primeng/wiki/Roadmap [ ] support request => Please do not submit support request here, instead see http://forum.primefaces.org/viewforum.php?f=35 ``` **Plunkr Case (Bug Reports)** Please fork the plunkr below and create a case demonstrating your bug report. Issues without a plunkr have much less possibility to be reviewed. http://plnkr.co/edit/NtWWnN You can download my demo at https://github.com/yuvalbl/ng2-grids-demo then do (also in the repo readme.md): npm install angular-cli -g npm install ng serve Now you can see the app on http http://localhost:4200/ issue can be seen on http://localhost:4200/grid1 **Current behavior** <!-- Describe how the bug manifests. --> Table when sortable tag is placed on columns in p-headerColumnGroup do not sort when header is pressed (only first few entries are sorted) **Expected behavior** <!-- Describe what the behavior would be without the bug. --> Should be sort all entries **Minimal reproduction of the problem with instructions** Download my demo from above and follow the install steps to view the issue. OR create a table (with at least 10 lines) with p-headerColumnGroup tag, and place a sortable="true" on one of the columns Click on the column header to sort and see the issue arise. <!-- If the current behavior is a bug or you can illustrate your feature request better with an example, please provide the *STEPS TO REPRODUCE* and if possible a *MINIMAL DEMO* of the problem via https://plnkr.co or similar (you can use this template as a starting point: http://plnkr.co/edit/tpl:AvJOMERrnz94ekVua0u5). --> **What is the motivation / use case for changing the behavior?** <!-- Describe the motivation or the concrete use case --> **Please tell us about your environment:** <!-- Operating system, IDE, package manager, HTTP server, ... --> * **Angular version:** 2.0.X <!-- Check whether this is still an issue in the most recent Angular version --> 2.3.1 * **PrimeNG version:** 2.0.X <!-- Check whether this is still an issue in the most recent Angular version --> 2.0.0-rc.2 * **Browser:** [all | Chrome XX | Firefox XX | IE XX | Safari XX | Mobile Chrome XX | Android X.X Web Browser | iOS XX Safari | iOS XX UIWebView | iOS XX WKWebView ] <!-- All browsers where this could be reproduced --> chrome 56 * **Language:** [all | TypeScript X.X | ES6/7 | ES5] TypeScript * **Node (for AoT issues):** `node --version` =
1.0
Sort is not working for columns when sortable is placed on columns in p-headerColumnGroup tag - <!-- - IF YOU DON'T FILL OUT THE FOLLOWING INFORMATION WE MIGHT CLOSE YOUR ISSUE WITHOUT INVESTIGATING. - IF YOU'D LIKE TO SECURE OUR RESPONSE, YOU MAY CONSIDER PRIMENG PRO SUPPORT WHERE SUPPORT IS PROVIDED WITHIN 4 hours. --> **I'm submitting a ...** (check one with "x") ``` [x] bug report => Search github for a similar issue or PR before submitting [ ] feature request => Please check if request is not on the roadmap already https://github.com/primefaces/primeng/wiki/Roadmap [ ] support request => Please do not submit support request here, instead see http://forum.primefaces.org/viewforum.php?f=35 ``` **Plunkr Case (Bug Reports)** Please fork the plunkr below and create a case demonstrating your bug report. Issues without a plunkr have much less possibility to be reviewed. http://plnkr.co/edit/NtWWnN You can download my demo at https://github.com/yuvalbl/ng2-grids-demo then do (also in the repo readme.md): npm install angular-cli -g npm install ng serve Now you can see the app on http http://localhost:4200/ issue can be seen on http://localhost:4200/grid1 **Current behavior** <!-- Describe how the bug manifests. --> Table when sortable tag is placed on columns in p-headerColumnGroup do not sort when header is pressed (only first few entries are sorted) **Expected behavior** <!-- Describe what the behavior would be without the bug. --> Should be sort all entries **Minimal reproduction of the problem with instructions** Download my demo from above and follow the install steps to view the issue. OR create a table (with at least 10 lines) with p-headerColumnGroup tag, and place a sortable="true" on one of the columns Click on the column header to sort and see the issue arise. <!-- If the current behavior is a bug or you can illustrate your feature request better with an example, please provide the *STEPS TO REPRODUCE* and if possible a *MINIMAL DEMO* of the problem via https://plnkr.co or similar (you can use this template as a starting point: http://plnkr.co/edit/tpl:AvJOMERrnz94ekVua0u5). --> **What is the motivation / use case for changing the behavior?** <!-- Describe the motivation or the concrete use case --> **Please tell us about your environment:** <!-- Operating system, IDE, package manager, HTTP server, ... --> * **Angular version:** 2.0.X <!-- Check whether this is still an issue in the most recent Angular version --> 2.3.1 * **PrimeNG version:** 2.0.X <!-- Check whether this is still an issue in the most recent Angular version --> 2.0.0-rc.2 * **Browser:** [all | Chrome XX | Firefox XX | IE XX | Safari XX | Mobile Chrome XX | Android X.X Web Browser | iOS XX Safari | iOS XX UIWebView | iOS XX WKWebView ] <!-- All browsers where this could be reproduced --> chrome 56 * **Language:** [all | TypeScript X.X | ES6/7 | ES5] TypeScript * **Node (for AoT issues):** `node --version` =
defect
sort is not working for columns when sortable is placed on columns in p headercolumngroup tag if you don t fill out the following information we might close your issue without investigating if you d like to secure our response you may consider primeng pro support where support is provided within hours i m submitting a check one with x bug report search github for a similar issue or pr before submitting feature request please check if request is not on the roadmap already support request please do not submit support request here instead see plunkr case bug reports please fork the plunkr below and create a case demonstrating your bug report issues without a plunkr have much less possibility to be reviewed you can download my demo at then do also in the repo readme md npm install angular cli g npm install ng serve now you can see the app on http issue can be seen on current behavior table when sortable tag is placed on columns in p headercolumngroup do not sort when header is pressed only first few entries are sorted expected behavior should be sort all entries minimal reproduction of the problem with instructions download my demo from above and follow the install steps to view the issue or create a table with at least lines with p headercolumngroup tag and place a sortable true on one of the columns click on the column header to sort and see the issue arise if the current behavior is a bug or you can illustrate your feature request better with an example please provide the steps to reproduce and if possible a minimal demo of the problem via or similar you can use this template as a starting point what is the motivation use case for changing the behavior please tell us about your environment angular version x primeng version x rc browser chrome language typescript node for aot issues node version
1
79,684
28,496,378,541
IssuesEvent
2023-04-18 14:31:03
vector-im/element-desktop
https://api.github.com/repos/vector-im/element-desktop
opened
MacOS app is much slower than the webapp on the same device
T-Defect
### Steps to reproduce 1. Where are you starting? What can you see? The MacOS app is clearly slower at decoding messages than the webapp ### Outcome #### What did you expect? The same speed or even faster speed on the app than on the webapp #### What happened instead? All the contrary ^^ ### Operating system macOS ### Application version Element version: 1.9.7 Olm version: 3.2.8 ### How did you install the app? from the official website ### Homeserver _No response_ ### Will you send logs? No
1.0
MacOS app is much slower than the webapp on the same device - ### Steps to reproduce 1. Where are you starting? What can you see? The MacOS app is clearly slower at decoding messages than the webapp ### Outcome #### What did you expect? The same speed or even faster speed on the app than on the webapp #### What happened instead? All the contrary ^^ ### Operating system macOS ### Application version Element version: 1.9.7 Olm version: 3.2.8 ### How did you install the app? from the official website ### Homeserver _No response_ ### Will you send logs? No
defect
macos app is much slower than the webapp on the same device steps to reproduce where are you starting what can you see the macos app is clearly slower at decoding messages than the webapp outcome what did you expect the same speed or even faster speed on the app than on the webapp what happened instead all the contrary operating system macos application version element version olm version how did you install the app from the official website homeserver no response will you send logs no
1
163,477
12,731,642,357
IssuesEvent
2020-06-25 09:09:40
ucl-candi/DeepReg
https://api.github.com/repos/ucl-candi/DeepReg
closed
Modify output tests to pytest style and parallelise to make CI more efficient.
CI help wanted: tests
# Issue description CI times out at 30 minutes which causes CI to fail for h5 data loaders. The total expected time it should take to finish all tests is around 40-50 minutes so an extension to the timeout is likely to fix the problem
1.0
Modify output tests to pytest style and parallelise to make CI more efficient. - # Issue description CI times out at 30 minutes which causes CI to fail for h5 data loaders. The total expected time it should take to finish all tests is around 40-50 minutes so an extension to the timeout is likely to fix the problem
non_defect
modify output tests to pytest style and parallelise to make ci more efficient issue description ci times out at minutes which causes ci to fail for data loaders the total expected time it should take to finish all tests is around minutes so an extension to the timeout is likely to fix the problem
0
21,062
3,455,074,980
IssuesEvent
2015-12-17 18:32:15
cakephp/migrations
https://api.github.com/repos/cakephp/migrations
closed
Migration Snapshot isn't exporting decimal fields correctly
Defect
We started to use migrations with an existing database, so used the `bake migration_snapshot` command as per the [docs][1]. However using this migration to build our test database on our CI platform, broke the builds. This is due to decimal fields being exported incorrectly To reproduce: Create a table as follows ``` CREATE TABLE `projects` ( `id` int(11) NOT NULL AUTO_INCREMENT, `fee_percentage` decimal(7,6) DEFAULT '0.150000' PRIMARY KEY (`id`), ) ENGINE=InnoDB AUTO_INCREMENT=1 DEFAULT CHARSET=utf8; ``` Export it using the `bin/cake bake migration_snapshot Initial` command Review the code and see the field definitions are incorrect ``` $table = $this->table('projects'); $table ->addColumn('fee_percentage', 'decimal', [ 'default' => 0, // this is not the default value 'limit' => 7, // this will not create a 7,6 decimal field 'null' => true, ]) ->create(); ``` I think part of the problem is that precision isn't handled in the bake template in migrations plugin in the `wantedOptions` array. [here][2] Plus as you are casting default values to int's in the Migration helper, you can't have decimal values as defaults in these cases. [here][3] [1]: http://book.cakephp.org/3.0/en/migrations.html#generating-migrations-from-existing-databases [2]: https://github.com/cakephp/migrations/blob/master/src/Template/Bake/config/snapshot.ctp#L16 [3]: https://github.com/cakephp/migrations/blob/master/src/View/Helper/MigrationHelper.php#L264
1.0
Migration Snapshot isn't exporting decimal fields correctly - We started to use migrations with an existing database, so used the `bake migration_snapshot` command as per the [docs][1]. However using this migration to build our test database on our CI platform, broke the builds. This is due to decimal fields being exported incorrectly To reproduce: Create a table as follows ``` CREATE TABLE `projects` ( `id` int(11) NOT NULL AUTO_INCREMENT, `fee_percentage` decimal(7,6) DEFAULT '0.150000' PRIMARY KEY (`id`), ) ENGINE=InnoDB AUTO_INCREMENT=1 DEFAULT CHARSET=utf8; ``` Export it using the `bin/cake bake migration_snapshot Initial` command Review the code and see the field definitions are incorrect ``` $table = $this->table('projects'); $table ->addColumn('fee_percentage', 'decimal', [ 'default' => 0, // this is not the default value 'limit' => 7, // this will not create a 7,6 decimal field 'null' => true, ]) ->create(); ``` I think part of the problem is that precision isn't handled in the bake template in migrations plugin in the `wantedOptions` array. [here][2] Plus as you are casting default values to int's in the Migration helper, you can't have decimal values as defaults in these cases. [here][3] [1]: http://book.cakephp.org/3.0/en/migrations.html#generating-migrations-from-existing-databases [2]: https://github.com/cakephp/migrations/blob/master/src/Template/Bake/config/snapshot.ctp#L16 [3]: https://github.com/cakephp/migrations/blob/master/src/View/Helper/MigrationHelper.php#L264
defect
migration snapshot isn t exporting decimal fields correctly we started to use migrations with an existing database so used the bake migration snapshot command as per the however using this migration to build our test database on our ci platform broke the builds this is due to decimal fields being exported incorrectly to reproduce create a table as follows create table projects id int not null auto increment fee percentage decimal default primary key id engine innodb auto increment default charset export it using the bin cake bake migration snapshot initial command review the code and see the field definitions are incorrect table this table projects table addcolumn fee percentage decimal default this is not the default value limit this will not create a decimal field null true create i think part of the problem is that precision isn t handled in the bake template in migrations plugin in the wantedoptions array plus as you are casting default values to int s in the migration helper you can t have decimal values as defaults in these cases
1
215,060
16,589,509,216
IssuesEvent
2021-06-01 05:32:19
LEFT-BEE/SWP1
https://api.github.com/repos/LEFT-BEE/SWP1
opened
Create src folder and rename files
documentation
Changes to files: 1. rename environ.py -> enviroment.py 2. after creating the src folder, move the *.py files into the src folder
1.0
Create src folder and rename files - Changes to files: 1. rename environ.py -> enviroment.py 2. after creating the src folder, move the *.py files into the src folder
non_defect
create src folder and rename files changes to files rename environ py enviroment py after creating the src folder move the py files into the src folder
0
68,124
7,088,194,107
IssuesEvent
2018-01-11 20:34:57
brave/browser-laptop
https://api.github.com/repos/brave/browser-laptop
reopened
Update Muon to 4.5.37
QA/checked-Linux QA/checked-Win64 QA/checked-macOS QA/test-plan-specified muon release-notes/include
## Test plan 1. Launch Brave and open `about:brave` 2. `Muon` version should be `4.5.37` ## Description Muon changes were required for: - Brave fills in some password on sites, even though I explicitly turned off password management. #10566 - Trezor passphrase being saved to password list and offered by auto fill without permission. #12563
1.0
Update Muon to 4.5.37 - ## Test plan 1. Launch Brave and open `about:brave` 2. `Muon` version should be `4.5.37` ## Description Muon changes were required for: - Brave fills in some password on sites, even though I explicitly turned off password management. #10566 - Trezor passphrase being saved to password list and offered by auto fill without permission. #12563
non_defect
update muon to test plan launch brave and open about brave muon version should be description muon changes were required for brave fills in some password on sites even though i explicitly turned off password management trezor passphrase being saved to password list and offered by auto fill without permission
0
63,791
8,694,550,084
IssuesEvent
2018-12-04 12:59:06
weaveworks/ui-components
https://api.github.com/repos/weaveworks/ui-components
closed
Add commit message convention to README
chore documentation
There is currently no description of how to format commit messages in this repo, even though improper messages are already being caught by husky's linter.
1.0
Add commit message convention to README - There is currently no description of how to format commit messages in this repo, even though improper messages are already being caught by husky's linter.
non_defect
add commit message convention to readme there is currently no description of how to format commit messages in this repo even though improper messages are already being caught by husky s linter
0
202,216
7,045,392,099
IssuesEvent
2018-01-01 19:04:56
benbaptist/minecraft-wrapper
https://api.github.com/repos/benbaptist/minecraft-wrapper
closed
IRC connection
high priority needs investigation OP reply needed
So this is my first time running this wrapper. And I'm getting this error, according to the traceback log. [2016-01-05 06:02:04] [Wrapper.py/INFO] Disconnected from IRC [2016-01-05 06:02:09] [Wrapper.py/INFO] Connecting to IRC... [2016-01-05 06:03:12] [Wrapper.py/ERROR] Traceback (most recent call last): [2016-01-05 06:03:12] [Wrapper.py/ERROR] File "Wrapper.py/irc.py", line 39, in init [2016-01-05 06:03:12] [Wrapper.py/ERROR] self.connect() [2016-01-05 06:03:12] [Wrapper.py/ERROR] File "Wrapper.py/irc.py", line 53, in connect [2016-01-05 06:03:12] [Wrapper.py/ERROR] self.socket.connect((self.address, self.port)) [2016-01-05 06:03:12] [Wrapper.py/ERROR] File "/usr/lib/python2.7/socket.py", line 224, in meth [2016-01-05 06:03:12] [Wrapper.py/ERROR] return getattr(self._sock,name)(*args) [2016-01-05 06:03:12] [Wrapper.py/ERROR] error: [Errno 110] Connection timed out [2016-01-05 06:03:12] [Wrapper.py/ERROR] [2016-01-05 06:03:12] [Wrapper.py/INFO] Disconnected from IRC Any ideas on what's going wrong?
1.0
IRC connection - So this is my first time running this wrapper. And I'm getting this error, according to the traceback log. [2016-01-05 06:02:04] [Wrapper.py/INFO] Disconnected from IRC [2016-01-05 06:02:09] [Wrapper.py/INFO] Connecting to IRC... [2016-01-05 06:03:12] [Wrapper.py/ERROR] Traceback (most recent call last): [2016-01-05 06:03:12] [Wrapper.py/ERROR] File "Wrapper.py/irc.py", line 39, in init [2016-01-05 06:03:12] [Wrapper.py/ERROR] self.connect() [2016-01-05 06:03:12] [Wrapper.py/ERROR] File "Wrapper.py/irc.py", line 53, in connect [2016-01-05 06:03:12] [Wrapper.py/ERROR] self.socket.connect((self.address, self.port)) [2016-01-05 06:03:12] [Wrapper.py/ERROR] File "/usr/lib/python2.7/socket.py", line 224, in meth [2016-01-05 06:03:12] [Wrapper.py/ERROR] return getattr(self._sock,name)(*args) [2016-01-05 06:03:12] [Wrapper.py/ERROR] error: [Errno 110] Connection timed out [2016-01-05 06:03:12] [Wrapper.py/ERROR] [2016-01-05 06:03:12] [Wrapper.py/INFO] Disconnected from IRC Any ideas on what's going wrong?
non_defect
irc connection so this is my first time running this wrapper and i m getting this error according to the traceback log disconnected from irc connecting to irc traceback most recent call last file wrapper py irc py line in init self connect file wrapper py irc py line in connect self socket connect self address self port file usr lib socket py line in meth return getattr self sock name args error connection timed out disconnected from irc any ideas on what s going wrong
0
53,118
13,260,933,308
IssuesEvent
2020-08-20 19:00:29
icecube-trac/tix4
https://api.github.com/repos/icecube-trac/tix4
closed
gulliver-modules::fancyfit test fails on SL5 32bit (Trac #746)
Migrated from Trac combo reconstruction defect
Python traceback: ```text ERROR (I3FortyTwo): PLEASE INVESTIGATE (fortytwo.py:185 in Finish) FATAL (I3FortyTwo): SOME CHECKS FAILED (fortytwo.py:186 in Finish) ERROR (I3Module): <class 'fortytwo.I3FortyTwo'>_0000: Exception thrown (I3Module.cxx:113 in void I3Module::Do(void (I3Module::*)())) Traceback (most recent call last): File "/build/buildslave/foraii/quick_icerec_SL5/source/gulliver-modules/resources/scripts/fancyfit.py", line 142, in ? tray.Finish() File "/build/buildslave/foraii/quick_icerec_SL5/source/gulliver-modules/resources/scripts/fortytwo.py", line 186, in Finish icetray.logging.log_fatal("SOME CHECKS FAILED",unit=u42) File "/build/buildslave/foraii/quick_icerec_SL5/build/lib/icecube/icetray/i3logging.py", line 150, in log_fatal raise RuntimeError(message + " (in " + tb[2] + ")") RuntimeError: SOME CHECKS FAILED (in Finish) ``` More output at: http://builds.icecube.wisc.edu/builders/quick_icerec_SL5/builds/1103/steps/test/logs/stdio <details> <summary><em>Migrated from <a href="https://code.icecube.wisc.edu/projects/icecube/ticket/746">https://code.icecube.wisc.edu/projects/icecube/ticket/746</a>, reported by nega and owned by boersma</em></summary> <p> ```json { "status": "closed", "changetime": "2015-02-11T17:23:17", "_ts": "1423675397463977", "description": "Python traceback:\n\n{{{\nERROR (I3FortyTwo): PLEASE INVESTIGATE (fortytwo.py:185 in Finish)\nFATAL (I3FortyTwo): SOME CHECKS FAILED (fortytwo.py:186 in Finish)\nERROR (I3Module): <class 'fortytwo.I3FortyTwo'>_0000: Exception thrown (I3Module.cxx:113 in void I3Module::Do(void (I3Module::*)()))\nTraceback (most recent call last):\n File \"/build/buildslave/foraii/quick_icerec_SL5/source/gulliver-modules/resources/scripts/fancyfit.py\", line 142, in ?\n tray.Finish()\n File \"/build/buildslave/foraii/quick_icerec_SL5/source/gulliver-modules/resources/scripts/fortytwo.py\", line 186, in Finish\n icetray.logging.log_fatal(\"SOME CHECKS FAILED\",unit=u42)\n File \"/build/buildslave/foraii/quick_icerec_SL5/build/lib/icecube/icetray/i3logging.py\", line 150, in log_fatal\n raise RuntimeError(message + \" (in \" + tb[2] + \")\")\nRuntimeError: SOME CHECKS FAILED (in Finish)\n}}}\n\nMore output at: http://builds.icecube.wisc.edu/builders/quick_icerec_SL5/builds/1103/steps/test/logs/stdio\n", "reporter": "nega", "cc": "dataclass@icecube.wisc.edu", "resolution": "wontfix", "time": "2014-09-05T21:19:03", "component": "combo reconstruction", "summary": "gulliver-modules::fancyfit test fails on SL5 32bit", "priority": "normal", "keywords": "gulliver-modules tests", "milestone": "", "owner": "boersma", "type": "defect" } ``` </p> </details>
1.0
gulliver-modules::fancyfit test fails on SL5 32bit (Trac #746) - Python traceback: ```text ERROR (I3FortyTwo): PLEASE INVESTIGATE (fortytwo.py:185 in Finish) FATAL (I3FortyTwo): SOME CHECKS FAILED (fortytwo.py:186 in Finish) ERROR (I3Module): <class 'fortytwo.I3FortyTwo'>_0000: Exception thrown (I3Module.cxx:113 in void I3Module::Do(void (I3Module::*)())) Traceback (most recent call last): File "/build/buildslave/foraii/quick_icerec_SL5/source/gulliver-modules/resources/scripts/fancyfit.py", line 142, in ? tray.Finish() File "/build/buildslave/foraii/quick_icerec_SL5/source/gulliver-modules/resources/scripts/fortytwo.py", line 186, in Finish icetray.logging.log_fatal("SOME CHECKS FAILED",unit=u42) File "/build/buildslave/foraii/quick_icerec_SL5/build/lib/icecube/icetray/i3logging.py", line 150, in log_fatal raise RuntimeError(message + " (in " + tb[2] + ")") RuntimeError: SOME CHECKS FAILED (in Finish) ``` More output at: http://builds.icecube.wisc.edu/builders/quick_icerec_SL5/builds/1103/steps/test/logs/stdio <details> <summary><em>Migrated from <a href="https://code.icecube.wisc.edu/projects/icecube/ticket/746">https://code.icecube.wisc.edu/projects/icecube/ticket/746</a>, reported by nega and owned by boersma</em></summary> <p> ```json { "status": "closed", "changetime": "2015-02-11T17:23:17", "_ts": "1423675397463977", "description": "Python traceback:\n\n{{{\nERROR (I3FortyTwo): PLEASE INVESTIGATE (fortytwo.py:185 in Finish)\nFATAL (I3FortyTwo): SOME CHECKS FAILED (fortytwo.py:186 in Finish)\nERROR (I3Module): <class 'fortytwo.I3FortyTwo'>_0000: Exception thrown (I3Module.cxx:113 in void I3Module::Do(void (I3Module::*)()))\nTraceback (most recent call last):\n File \"/build/buildslave/foraii/quick_icerec_SL5/source/gulliver-modules/resources/scripts/fancyfit.py\", line 142, in ?\n tray.Finish()\n File \"/build/buildslave/foraii/quick_icerec_SL5/source/gulliver-modules/resources/scripts/fortytwo.py\", line 186, in Finish\n icetray.logging.log_fatal(\"SOME CHECKS FAILED\",unit=u42)\n File \"/build/buildslave/foraii/quick_icerec_SL5/build/lib/icecube/icetray/i3logging.py\", line 150, in log_fatal\n raise RuntimeError(message + \" (in \" + tb[2] + \")\")\nRuntimeError: SOME CHECKS FAILED (in Finish)\n}}}\n\nMore output at: http://builds.icecube.wisc.edu/builders/quick_icerec_SL5/builds/1103/steps/test/logs/stdio\n", "reporter": "nega", "cc": "dataclass@icecube.wisc.edu", "resolution": "wontfix", "time": "2014-09-05T21:19:03", "component": "combo reconstruction", "summary": "gulliver-modules::fancyfit test fails on SL5 32bit", "priority": "normal", "keywords": "gulliver-modules tests", "milestone": "", "owner": "boersma", "type": "defect" } ``` </p> </details>
defect
gulliver modules fancyfit test fails on trac python traceback text error please investigate fortytwo py in finish fatal some checks failed fortytwo py in finish error exception thrown cxx in void do void traceback most recent call last file build buildslave foraii quick icerec source gulliver modules resources scripts fancyfit py line in tray finish file build buildslave foraii quick icerec source gulliver modules resources scripts fortytwo py line in finish icetray logging log fatal some checks failed unit file build buildslave foraii quick icerec build lib icecube icetray py line in log fatal raise runtimeerror message in tb runtimeerror some checks failed in finish more output at migrated from json status closed changetime ts description python traceback n n nerror please investigate fortytwo py in finish nfatal some checks failed fortytwo py in finish nerror exception thrown cxx in void do void ntraceback most recent call last n file build buildslave foraii quick icerec source gulliver modules resources scripts fancyfit py line in n tray finish n file build buildslave foraii quick icerec source gulliver modules resources scripts fortytwo py line in finish n icetray logging log fatal some checks failed unit n file build buildslave foraii quick icerec build lib icecube icetray py line in log fatal n raise runtimeerror message in tb nruntimeerror some checks failed in finish n n nmore output at reporter nega cc dataclass icecube wisc edu resolution wontfix time component combo reconstruction summary gulliver modules fancyfit test fails on priority normal keywords gulliver modules tests milestone owner boersma type defect
1
78,891
27,811,134,807
IssuesEvent
2023-03-18 05:39:56
openzfs/zfs
https://api.github.com/repos/openzfs/zfs
closed
ztest fails in l2arc_apply_transforms comparing calculated MAC
Type: Defect Component: Encryption Status: Stale
### System information <!-- add version after "|" character --> Type | Version/Name --- | --- Distribution Name | Ubuntu Distribution Version | 20.04 Kernel Version | 5.4.0-90 Architecture | amd64 OpenZFS Version | 2.1.99 + delphix changes ### Describe the problem you're observing The specific assertion that failed here is ASSERT0(bcmp(mac, hdr->b_crypt_hdr.b_mac, ZIO_DATA_MAC_LEN));. This assertion verifies that the calculated MAC is the same as the one stored in the header. Failure here means that when redoing the encryption process in order to store the data in the L2ARC, we somehow got a result that differs from the one we got when retrieving the data as it was stored on disk (I think, anyway). This seems like it might be possible if the data was somehow transformed between how it was stored on disk and its current in-memory state. Examining the ARC header, it looks like the ARC_FLAG_COMPRESSED_ARC flag is set. When examining the compression bits, we see that it looks like the compression mode is ZIO_COMPRESS_OFF. This tracks with the b_psize being equal to the b_lsize (both 32, representing a 16 KiB block). 
### Describe how to reproduce the problem Occasional ztest failure ### Include any warning/errors/backtraces from the system logs ``` #0 __GI_raise (sig=sig@entry=6) at ../sysdeps/unix/sysv/linux/raise.c:51 #1 0x00007ff3ef1b5921 in __GI_abort () at abort.c:79 #2 0x00007ff3efd3f805 in libspl_assertf (file=0x7ff3f01ea275 "../../module/zfs/arc.c", func=0x7ff3f01ec910 <__FUNCTION__.20744> "l2arc_apply_transforms", line=9014, format=<optimized out>) at assert.c:45 #3 0x00007ff3effa5f98 in l2arc_apply_transforms (spa=spa@entry=0x555e32d69e90, hdr=hdr@entry=0x7ff3b940d250, asize=asize@entry=16384, abd_out=abd_out@entry=0x7ff3ec399e20) at ../../module/zfs/arc.c:9014 #4 0x00007ff3effb8516 in l2arc_write_buffers (spa=spa@entry=0x555e32d69e90, dev=dev@entry=0x7ff3d444b490, target_sz=target_sz@entry=8388608) at ../../module/zfs/arc.c:9207 #5 0x00007ff3effb9c45 in l2arc_feed_thread (unused=<optimized out>) at ../../module/zfs/arc.c:9425 #6 0x00007ff3ef56d6db in start_thread (arg=0x7ff3ec39b700) at pthread_create.c:463 #7 0x00007ff3ef29671f in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:95 ``` I have a core file that I can do some basic analysis on if needed.
1.0
ztest fails in l2arc_apply_transforms comparing calculated MAC - ### System information <!-- add version after "|" character --> Type | Version/Name --- | --- Distribution Name | Ubuntu Distribution Version | 20.04 Kernel Version | 5.4.0-90 Architecture | amd64 OpenZFS Version | 2.1.99 + delphix changes ### Describe the problem you're observing The specific assertion that failed here is ASSERT0(bcmp(mac, hdr->b_crypt_hdr.b_mac, ZIO_DATA_MAC_LEN));. This assertion verifies that the calculated MAC is the same as the one stored in the header. Failure here means that when redoing the encryption process in order to store the data in the L2ARC, we somehow got a result that differs from the one we got when retrieving the data as it was stored on disk (I think, anyway). This seems like it might be possible if the data was somehow transformed between how it was stored on disk and its current in-memory state. Examining the ARC header, it looks like the ARC_FLAG_COMPRESSED_ARC flag is set. When examining the compression bits, we see that it looks like the compression mode is ZIO_COMPRESS_OFF. This tracks with the b_psize being equal to the b_lsize (both 32, representing a 16 KiB block). 
### Describe how to reproduce the problem Occasional ztest failure ### Include any warning/errors/backtraces from the system logs ``` #0 __GI_raise (sig=sig@entry=6) at ../sysdeps/unix/sysv/linux/raise.c:51 #1 0x00007ff3ef1b5921 in __GI_abort () at abort.c:79 #2 0x00007ff3efd3f805 in libspl_assertf (file=0x7ff3f01ea275 "../../module/zfs/arc.c", func=0x7ff3f01ec910 <__FUNCTION__.20744> "l2arc_apply_transforms", line=9014, format=<optimized out>) at assert.c:45 #3 0x00007ff3effa5f98 in l2arc_apply_transforms (spa=spa@entry=0x555e32d69e90, hdr=hdr@entry=0x7ff3b940d250, asize=asize@entry=16384, abd_out=abd_out@entry=0x7ff3ec399e20) at ../../module/zfs/arc.c:9014 #4 0x00007ff3effb8516 in l2arc_write_buffers (spa=spa@entry=0x555e32d69e90, dev=dev@entry=0x7ff3d444b490, target_sz=target_sz@entry=8388608) at ../../module/zfs/arc.c:9207 #5 0x00007ff3effb9c45 in l2arc_feed_thread (unused=<optimized out>) at ../../module/zfs/arc.c:9425 #6 0x00007ff3ef56d6db in start_thread (arg=0x7ff3ec39b700) at pthread_create.c:463 #7 0x00007ff3ef29671f in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:95 ``` I have a core file that I can do some basic analysis on if needed.
defect
ztest fails in apply transforms comparing calculated mac system information type version name distribution name ubuntu distribution version kernel version architecture openzfs version delphix changes describe the problem you re observing the specific assertion that failed here is bcmp mac hdr b crypt hdr b mac zio data mac len this assertion verifies that the calculated mac is the same as the one stored in the header failure here means that when redoing the encryption process in order to store the data in the we somehow got a result that differs from the one we got when retrieving the data as it was stored on disk i think anyway this seems like it might be possible if the data was somehow transformed between how it was stored on disk and its current in memory state examining the arc header it looks like the arc flag compressed arc flag is set when examining the compression bits we see that it looks like the compression mode is zio compress off this tracks with the b psize being equal to the b lsize both representing a kib block describe how to reproduce the problem occasional ztest failure include any warning errors backtraces from the system logs gi raise sig sig entry at sysdeps unix sysv linux raise c in gi abort at abort c in libspl assertf file module zfs arc c func apply transforms line format at assert c in apply transforms spa spa entry hdr hdr entry asize asize entry abd out abd out entry at module zfs arc c in write buffers spa spa entry dev dev entry target sz target sz entry at module zfs arc c in feed thread unused at module zfs arc c in start thread arg at pthread create c in clone at sysdeps unix sysv linux clone s i have a core file that i can do some basic analysis on if needed
1
46,564
13,055,935,034
IssuesEvent
2020-07-30 03:10:00
icecube-trac/tix2
https://api.github.com/repos/icecube-trac/tix2
opened
[cvmfs] opencl environment issues (Trac #1450)
Incomplete Migration Migrated from Trac cvmfs defect
Migrated from https://code.icecube.wisc.edu/ticket/1450 ```json { "status": "closed", "changetime": "2016-03-18T21:14:09", "description": "As seen on buildbot \"Ubuntu 14.04 (cvmfs)\", there is some sort of conflict where `libOpenCL.so` is not found, but `/etc/OpenCL/vendors` exists. Make sure that if we're using CVMFS OpenCL that we have it in the list of vendors.\n\nSolution? :\n\n1. If `/etc/OpenCL/vendors` does not exist, use CVMFS copy\n2. If `/etc/OpenCL/vendors` exists, make temp hybrid dir with entries from `/etc/OpenCL/vendors` and an entry for CVMFS.", "reporter": "david.schultz", "cc": "nega, claudio.kopper", "resolution": "fixed", "_ts": "1458335649133028", "component": "cvmfs", "summary": "[cvmfs] opencl environment issues", "priority": "major", "keywords": "", "time": "2015-11-25T16:51:22", "milestone": "", "owner": "david.schultz", "type": "defect" } ```
1.0
[cvmfs] opencl environment issues (Trac #1450) - Migrated from https://code.icecube.wisc.edu/ticket/1450 ```json { "status": "closed", "changetime": "2016-03-18T21:14:09", "description": "As seen on buildbot \"Ubuntu 14.04 (cvmfs)\", there is some sort of conflict where `libOpenCL.so` is not found, but `/etc/OpenCL/vendors` exists. Make sure that if we're using CVMFS OpenCL that we have it in the list of vendors.\n\nSolution? :\n\n1. If `/etc/OpenCL/vendors` does not exist, use CVMFS copy\n2. If `/etc/OpenCL/vendors` exists, make temp hybrid dir with entries from `/etc/OpenCL/vendors` and an entry for CVMFS.", "reporter": "david.schultz", "cc": "nega, claudio.kopper", "resolution": "fixed", "_ts": "1458335649133028", "component": "cvmfs", "summary": "[cvmfs] opencl environment issues", "priority": "major", "keywords": "", "time": "2015-11-25T16:51:22", "milestone": "", "owner": "david.schultz", "type": "defect" } ```
defect
opencl environment issues trac migrated from json status closed changetime description as seen on buildbot ubuntu cvmfs there is some sort of conflict where libopencl so is not found but etc opencl vendors exists make sure that if we re using cvmfs opencl that we have it in the list of vendors n nsolution n if etc opencl vendors does not exist use cvmfs copy if etc opencl vendors exists make temp hybrid dir with entries from etc opencl vendors and an entry for cvmfs reporter david schultz cc nega claudio kopper resolution fixed ts component cvmfs summary opencl environment issues priority major keywords time milestone owner david schultz type defect
1
37,740
15,365,029,820
IssuesEvent
2021-03-01 22:50:52
AutoPacker-OSS/autopacker
https://api.github.com/repos/AutoPacker-OSS/autopacker
closed
Make Selenium for the new organization form
Priority: Low Service: Web App Type: Test
Make a test relating to other issues that was recently added: https://github.com/AutoPacker-OSS/autopacker/issues/79 Description: - The point with the test is to get into testing Selenium in practice while building the test while the new orginization part of the website gets developed. This test will be quite simple, but more of a exercise other than serve a bigger purpose at the start. Requriement for the test: - 1. The test need to identify the button appear on the screen. 2. Being able to navigate by clicking the button and appearing on the "New Orginization" page 3. Identifying on the next page all the elements that is tied to the "New Orginization" form (The test will end here since backend isn't implemented yet. This will be a test only for front end)
1.0
Make Selenium for the new organization form - Make a test relating to other issues that was recently added: https://github.com/AutoPacker-OSS/autopacker/issues/79 Description: - The point with the test is to get into testing Selenium in practice while building the test while the new orginization part of the website gets developed. This test will be quite simple, but more of a exercise other than serve a bigger purpose at the start. Requriement for the test: - 1. The test need to identify the button appear on the screen. 2. Being able to navigate by clicking the button and appearing on the "New Orginization" page 3. Identifying on the next page all the elements that is tied to the "New Orginization" form (The test will end here since backend isn't implemented yet. This will be a test only for front end)
non_defect
make selenium for the new organization form make a test relating to other issues that was recently added description the point with the test is to get into testing selenium in practice while building the test while the new orginization part of the website gets developed this test will be quite simple but more of a exercise other than serve a bigger purpose at the start requriement for the test the test need to identify the button appear on the screen being able to navigate by clicking the button and appearing on the new orginization page identifying on the next page all the elements that is tied to the new orginization form the test will end here since backend isn t implemented yet this will be a test only for front end
0
20,700
3,408,194,760
IssuesEvent
2015-12-04 09:21:02
jOOQ/jOOQ
https://api.github.com/repos/jOOQ/jOOQ
opened
Extremely slow mapping Record.into(ProxyableInterface.class)
C: Functionality P: Medium T: Defect
The following is currently rather slow in benchmarks: ```java interface Proxyable { } for (int i = 0; i < 1000000; i++) record.into(Proxyable.class); ``` ---- See also: http://www.jooq.org/doc/latest/manual/sql-execution/performance-considerations/#comment-2390864218
1.0
Extremely slow mapping Record.into(ProxyableInterface.class) - The following is currently rather slow in benchmarks: ```java interface Proxyable { } for (int i = 0; i < 1000000; i++) record.into(Proxyable.class); ``` ---- See also: http://www.jooq.org/doc/latest/manual/sql-execution/performance-considerations/#comment-2390864218
defect
extremely slow mapping record into proxyableinterface class the following is currently rather slow in benchmarks java interface proxyable for int i i i record into proxyable class see also
1
41,996
10,737,987,466
IssuesEvent
2019-10-29 14:04:40
hazelcast/hazelcast
https://api.github.com/repos/hazelcast/hazelcast
closed
NPE in MultiMapService.getStats()
Module: MultiMap Team: Core Type: Defect
Probably `getStats()` called before `partitionContainers` was initialized that shut down MC service. ``` 12:51:54,821 WARN |testWithOneMapTwoSyncs[consistencyCheckStrategy:NONE, maxConcurrentInvocations:-1]| - [ManagementCenterService] hz.ClusterA-194ce8ff-a50d-44ff-a160-fbeb3a8de1580.MC.State.Sender - [127.0.0.1]:5701 [A] [4.0-SNAPSHOT] Hazelcast Management Center Service will be shutdown due to exception. java.lang.NullPointerException at com.hazelcast.multimap.impl.MultiMapService.getStats(MultiMapService.java:459) ~[hazelcast-4.0-SNAPSHOT.jar:4.0-SNAPSHOT] at com.hazelcast.internal.management.TimedMemberStateFactory.createMemState(TimedMemberStateFactory.java:251) ~[hazelcast-4.0-SNAPSHOT.jar:4.0-SNAPSHOT] at com.hazelcast.internal.management.TimedMemberStateFactory.createMemberState(TimedMemberStateFactory.java:201) ~[hazelcast-4.0-SNAPSHOT.jar:4.0-SNAPSHOT] at com.hazelcast.internal.management.TimedMemberStateFactory.createTimedMemberState(TimedMemberStateFactory.java:126) ~[hazelcast-4.0-SNAPSHOT.jar:4.0-SNAPSHOT] at com.hazelcast.internal.management.ManagementCenterService$PrepareStateThread.run(ManagementCenterService.java:450) [hazelcast-4.0-SNAPSHOT.jar:4.0-SNAPSHOT] ``` Caught in https://console.aws.amazon.com/s3/object/j-artifacts/Hazelcast-EE-pr-builder/1214 accidentally, in `WanSyncTrackingTest-output.txt`. The NPE occurred multiple times in different test cases.
1.0
NPE in MultiMapService.getStats() - Probably `getStats()` called before `partitionContainers` was initialized that shut down MC service. ``` 12:51:54,821 WARN |testWithOneMapTwoSyncs[consistencyCheckStrategy:NONE, maxConcurrentInvocations:-1]| - [ManagementCenterService] hz.ClusterA-194ce8ff-a50d-44ff-a160-fbeb3a8de1580.MC.State.Sender - [127.0.0.1]:5701 [A] [4.0-SNAPSHOT] Hazelcast Management Center Service will be shutdown due to exception. java.lang.NullPointerException at com.hazelcast.multimap.impl.MultiMapService.getStats(MultiMapService.java:459) ~[hazelcast-4.0-SNAPSHOT.jar:4.0-SNAPSHOT] at com.hazelcast.internal.management.TimedMemberStateFactory.createMemState(TimedMemberStateFactory.java:251) ~[hazelcast-4.0-SNAPSHOT.jar:4.0-SNAPSHOT] at com.hazelcast.internal.management.TimedMemberStateFactory.createMemberState(TimedMemberStateFactory.java:201) ~[hazelcast-4.0-SNAPSHOT.jar:4.0-SNAPSHOT] at com.hazelcast.internal.management.TimedMemberStateFactory.createTimedMemberState(TimedMemberStateFactory.java:126) ~[hazelcast-4.0-SNAPSHOT.jar:4.0-SNAPSHOT] at com.hazelcast.internal.management.ManagementCenterService$PrepareStateThread.run(ManagementCenterService.java:450) [hazelcast-4.0-SNAPSHOT.jar:4.0-SNAPSHOT] ``` Caught in https://console.aws.amazon.com/s3/object/j-artifacts/Hazelcast-EE-pr-builder/1214 accidentally, in `WanSyncTrackingTest-output.txt`. The NPE occurred multiple times in different test cases.
defect
npe in multimapservice getstats probably getstats called before partitioncontainers was initialized that shut down mc service warn testwithonemaptwosyncs hz clustera mc state sender hazelcast management center service will be shutdown due to exception java lang nullpointerexception at com hazelcast multimap impl multimapservice getstats multimapservice java at com hazelcast internal management timedmemberstatefactory creatememstate timedmemberstatefactory java at com hazelcast internal management timedmemberstatefactory creatememberstate timedmemberstatefactory java at com hazelcast internal management timedmemberstatefactory createtimedmemberstate timedmemberstatefactory java at com hazelcast internal management managementcenterservice preparestatethread run managementcenterservice java caught in accidentally in wansynctrackingtest output txt the npe occurred multiple times in different test cases
1
193,049
6,877,806,776
IssuesEvent
2017-11-20 09:33:43
OpenNebula/one
https://api.github.com/repos/OpenNebula/one
opened
(live) reschedule running VMs with migrate(live)
Category: Core & System Priority: Normal Status: Pending Tracker: Backlog
--- Author Name: **Anton Todorov** (Anton Todorov) Original Redmine Issue: 4133, https://dev.opennebula.org/issues/4133 Original Date: 2015-11-04 --- Add option to Reschedule (live) that migrates the running VMs with migrate(live) I think that this way it will be faster and with less downtime. Kind Regards Anton Todorov
1.0
(live) reschedule running VMs with migrate(live) - --- Author Name: **Anton Todorov** (Anton Todorov) Original Redmine Issue: 4133, https://dev.opennebula.org/issues/4133 Original Date: 2015-11-04 --- Add option to Reschedule (live) that migrates the running VMs with migrate(live) I think that this way it will be faster and with less downtime. Kind Regards Anton Todorov
non_defect
live reschedule running vms with migrate live author name anton todorov anton todorov original redmine issue original date add option to reschedule live that migrates the running vms with migrate live i think that this way it will be faster and with less downtime kind regards anton todorov
0
202,580
15,287,023,915
IssuesEvent
2021-02-23 15:19:52
cockroachdb/cockroach
https://api.github.com/repos/cockroachdb/cockroach
closed
roachtest: splits/largerange/size=10GiB,nodes=3 failed
C-test-failure O-roachtest O-robot branch-release-20.2 release-blocker
[(roachtest).splits/largerange/size=10GiB,nodes=3 failed](https://teamcity.cockroachdb.com/viewLog.html?buildId=2657161&tab=buildLog) on [release-20.2@8c79e2bc4b35d36c8527f4c40c974f03d9034f46](https://github.com/cockroachdb/cockroach/commits/8c79e2bc4b35d36c8527f4c40c974f03d9034f46): ``` | runtime.goexit | /usr/local/go/src/runtime/asm_amd64.s:1374 Wraps: (2) output in run_100114.143_n1_workload_init_bank Wraps: (3) /home/agent/work/.go/src/github.com/cockroachdb/cockroach/bin/roachprod run teamcity-2657161-1612856692-116-n3cpu4:1 -- ./workload init bank --rows=41297762 --payload-bytes=100 --ranges=1 {pgurl:1-3} returned | stderr: | ./workload: /lib/x86_64-linux-gnu/libm.so.6: version `GLIBC_2.29' not found (required by ./workload) | Error: COMMAND_PROBLEM: exit status 1 | (1) COMMAND_PROBLEM | Wraps: (2) Node 1. Command with error: | | ``` | | ./workload init bank --rows=41297762 --payload-bytes=100 --ranges=1 {pgurl:1-3} | | ``` | Wraps: (3) exit status 1 | Error types: (1) errors.Cmd (2) *hintdetail.withDetail (3) *exec.ExitError | | stdout: Wraps: (4) exit status 20 Error types: (1) *withstack.withStack (2) *errutil.withPrefix (3) *main.withCommandDetails (4) *exec.ExitError cluster.go:2654,split.go:335,split.go:227,test_runner.go:755: monitor failure: monitor task failed: t.Fatal() was called (1) attached stack trace -- stack trace: | main.(*monitor).WaitE | /home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachtest/cluster.go:2642 | main.(*monitor).Wait | /home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachtest/cluster.go:2650 | main.runLargeRangeSplits | /home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachtest/split.go:335 | main.registerLargeRange.func1 | /home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachtest/split.go:227 | main.(*testRunner).runTest.func2 | /home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachtest/test_runner.go:755 Wraps: (2) monitor failure 
Wraps: (3) attached stack trace -- stack trace: | main.(*monitor).wait.func2 | /home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachtest/cluster.go:2698 Wraps: (4) monitor task failed Wraps: (5) attached stack trace -- stack trace: | main.init | /home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachtest/cluster.go:2612 | runtime.doInit | /usr/local/go/src/runtime/proc.go:5652 | runtime.main | /usr/local/go/src/runtime/proc.go:191 | runtime.goexit | /usr/local/go/src/runtime/asm_amd64.s:1374 Wraps: (6) t.Fatal() was called Error types: (1) *withstack.withStack (2) *errutil.withPrefix (3) *withstack.withStack (4) *errutil.withPrefix (5) *withstack.withStack (6) *errutil.leafError ``` <details><summary>More</summary><p> Artifacts: [/splits/largerange/size=10GiB,nodes=3](https://teamcity.cockroachdb.com/viewLog.html?buildId=2657161&tab=artifacts#/splits/largerange/size=10GiB,nodes=3) Related: - #59958 roachtest: splits/largerange/size=10GiB,nodes=3 failed [C-test-failure](https://api.github.com/repos/cockroachdb/cockroach/labels/C-test-failure) [O-roachtest](https://api.github.com/repos/cockroachdb/cockroach/labels/O-roachtest) [O-robot](https://api.github.com/repos/cockroachdb/cockroach/labels/O-robot) [branch-master](https://api.github.com/repos/cockroachdb/cockroach/labels/branch-master) [release-blocker](https://api.github.com/repos/cockroachdb/cockroach/labels/release-blocker) [See this test on roachdash](https://roachdash.crdb.dev/?filter=status%3Aopen+t%3A.%2Asplits%2Flargerange%2Fsize%3D10GiB%2Cnodes%3D3.%2A&sort=title&restgroup=false&display=lastcommented+project) <sub>powered by [pkg/cmd/internal/issues](https://github.com/cockroachdb/cockroach/tree/master/pkg/cmd/internal/issues)</sub></p></details>
2.0
roachtest: splits/largerange/size=10GiB,nodes=3 failed - [(roachtest).splits/largerange/size=10GiB,nodes=3 failed](https://teamcity.cockroachdb.com/viewLog.html?buildId=2657161&tab=buildLog) on [release-20.2@8c79e2bc4b35d36c8527f4c40c974f03d9034f46](https://github.com/cockroachdb/cockroach/commits/8c79e2bc4b35d36c8527f4c40c974f03d9034f46): ``` | runtime.goexit | /usr/local/go/src/runtime/asm_amd64.s:1374 Wraps: (2) output in run_100114.143_n1_workload_init_bank Wraps: (3) /home/agent/work/.go/src/github.com/cockroachdb/cockroach/bin/roachprod run teamcity-2657161-1612856692-116-n3cpu4:1 -- ./workload init bank --rows=41297762 --payload-bytes=100 --ranges=1 {pgurl:1-3} returned | stderr: | ./workload: /lib/x86_64-linux-gnu/libm.so.6: version `GLIBC_2.29' not found (required by ./workload) | Error: COMMAND_PROBLEM: exit status 1 | (1) COMMAND_PROBLEM | Wraps: (2) Node 1. Command with error: | | ``` | | ./workload init bank --rows=41297762 --payload-bytes=100 --ranges=1 {pgurl:1-3} | | ``` | Wraps: (3) exit status 1 | Error types: (1) errors.Cmd (2) *hintdetail.withDetail (3) *exec.ExitError | | stdout: Wraps: (4) exit status 20 Error types: (1) *withstack.withStack (2) *errutil.withPrefix (3) *main.withCommandDetails (4) *exec.ExitError cluster.go:2654,split.go:335,split.go:227,test_runner.go:755: monitor failure: monitor task failed: t.Fatal() was called (1) attached stack trace -- stack trace: | main.(*monitor).WaitE | /home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachtest/cluster.go:2642 | main.(*monitor).Wait | /home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachtest/cluster.go:2650 | main.runLargeRangeSplits | /home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachtest/split.go:335 | main.registerLargeRange.func1 | /home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachtest/split.go:227 | main.(*testRunner).runTest.func2 | 
/home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachtest/test_runner.go:755 Wraps: (2) monitor failure Wraps: (3) attached stack trace -- stack trace: | main.(*monitor).wait.func2 | /home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachtest/cluster.go:2698 Wraps: (4) monitor task failed Wraps: (5) attached stack trace -- stack trace: | main.init | /home/agent/work/.go/src/github.com/cockroachdb/cockroach/pkg/cmd/roachtest/cluster.go:2612 | runtime.doInit | /usr/local/go/src/runtime/proc.go:5652 | runtime.main | /usr/local/go/src/runtime/proc.go:191 | runtime.goexit | /usr/local/go/src/runtime/asm_amd64.s:1374 Wraps: (6) t.Fatal() was called Error types: (1) *withstack.withStack (2) *errutil.withPrefix (3) *withstack.withStack (4) *errutil.withPrefix (5) *withstack.withStack (6) *errutil.leafError ``` <details><summary>More</summary><p> Artifacts: [/splits/largerange/size=10GiB,nodes=3](https://teamcity.cockroachdb.com/viewLog.html?buildId=2657161&tab=artifacts#/splits/largerange/size=10GiB,nodes=3) Related: - #59958 roachtest: splits/largerange/size=10GiB,nodes=3 failed [C-test-failure](https://api.github.com/repos/cockroachdb/cockroach/labels/C-test-failure) [O-roachtest](https://api.github.com/repos/cockroachdb/cockroach/labels/O-roachtest) [O-robot](https://api.github.com/repos/cockroachdb/cockroach/labels/O-robot) [branch-master](https://api.github.com/repos/cockroachdb/cockroach/labels/branch-master) [release-blocker](https://api.github.com/repos/cockroachdb/cockroach/labels/release-blocker) [See this test on roachdash](https://roachdash.crdb.dev/?filter=status%3Aopen+t%3A.%2Asplits%2Flargerange%2Fsize%3D10GiB%2Cnodes%3D3.%2A&sort=title&restgroup=false&display=lastcommented+project) <sub>powered by [pkg/cmd/internal/issues](https://github.com/cockroachdb/cockroach/tree/master/pkg/cmd/internal/issues)</sub></p></details>
non_defect
roachtest splits largerange size nodes failed on runtime goexit usr local go src runtime asm s wraps output in run workload init bank wraps home agent work go src github com cockroachdb cockroach bin roachprod run teamcity workload init bank rows payload bytes ranges pgurl returned stderr workload lib linux gnu libm so version glibc not found required by workload error command problem exit status command problem wraps node command with error workload init bank rows payload bytes ranges pgurl wraps exit status error types errors cmd hintdetail withdetail exec exiterror stdout wraps exit status error types withstack withstack errutil withprefix main withcommanddetails exec exiterror cluster go split go split go test runner go monitor failure monitor task failed t fatal was called attached stack trace stack trace main monitor waite home agent work go src github com cockroachdb cockroach pkg cmd roachtest cluster go main monitor wait home agent work go src github com cockroachdb cockroach pkg cmd roachtest cluster go main runlargerangesplits home agent work go src github com cockroachdb cockroach pkg cmd roachtest split go main registerlargerange home agent work go src github com cockroachdb cockroach pkg cmd roachtest split go main testrunner runtest home agent work go src github com cockroachdb cockroach pkg cmd roachtest test runner go wraps monitor failure wraps attached stack trace stack trace main monitor wait home agent work go src github com cockroachdb cockroach pkg cmd roachtest cluster go wraps monitor task failed wraps attached stack trace stack trace main init home agent work go src github com cockroachdb cockroach pkg cmd roachtest cluster go runtime doinit usr local go src runtime proc go runtime main usr local go src runtime proc go runtime goexit usr local go src runtime asm s wraps t fatal was called error types withstack withstack errutil withprefix withstack withstack errutil withprefix withstack withstack errutil leaferror more artifacts related 
roachtest splits largerange size nodes failed powered by
0
713,451
24,528,604,214
IssuesEvent
2022-10-11 14:47:21
LLK/scratch-www
https://api.github.com/repos/LLK/scratch-www
closed
Mute pop up on studio pages is too big
priority 2 Medium Impact High Severity
This is what the mute pop up looks like when you are on a studio page: <img width="1432" alt="Screen Shot 2021-07-14 at 11 41 22 AM" src="https://user-images.githubusercontent.com/68165163/125651148-c1320144-738a-46fe-afd6-141ded0a403d.png"> You can't scroll down to get the close/next button. ### Steps to Reproduce Get muted and open the pop up on a studio page ### Operating System and Browser macOS 11.4 Firefox 89 By the way, I got muted for sharing a link to the password reset page...
1.0
Mute pop up on studio pages is too big - This is what the mute pop up looks like when you are on a studio page: <img width="1432" alt="Screen Shot 2021-07-14 at 11 41 22 AM" src="https://user-images.githubusercontent.com/68165163/125651148-c1320144-738a-46fe-afd6-141ded0a403d.png"> You can't scroll down to get the close/next button. ### Steps to Reproduce Get muted and open the pop up on a studio page ### Operating System and Browser macOS 11.4 Firefox 89 By the way, I got muted for sharing a link to the password reset page...
non_defect
mute pop up on studio pages is too big this is what the mute pop up looks like when you are on a studio page img width alt screen shot at am src you can t scroll down to get the close next button steps to reproduce get muted and open the pop up on a studio page operating system and browser macos firefox by the way i got muted for sharing a link to the password reset page
0
67,944
21,329,952,604
IssuesEvent
2022-04-18 06:54:18
klubcoin/lcn-mobile
https://api.github.com/repos/klubcoin/lcn-mobile
opened
[Account Maintenance][Security & Privacy] Fix should change "MetaMetrics" to "KlubcoinMetrics".
Defect Should Have Minor Account Maintenance Services
### **Description:** Should change "MetaMetrics" to "KlubcoinMetrics". **Build Environment:** Prod Candidate Environment **Affects Version:** 1.0.0.prod.4 **Device Platform:** Android **Device OS:** 11 **Test Device:** OnePlus 7T Pro ### **Pre-condition:** 1. User successfully installed Klubcoin App 2. User has an existing Klubcoin Wallet Account 3. User is currently at Klubcoin Dashboard ### **Steps to Reproduce:** 1. Tap Hamburger Button 2. Tap Settings 3. Tap Security & Privacy 4. Navigate to Metrics Section ### **Expected Result:** Display KlubcoinMetrics ### **Actual Result:** Displaying MetaMetrics ### **Attachment/s:** ![Screenshot_20220418-132808__03](https://user-images.githubusercontent.com/100281200/163768130-f43be62b-296c-4cab-a579-a77fbc268afa.jpg)
1.0
[Account Maintenance][Security & Privacy] Fix should change "MetaMetrics" to "KlubcoinMetrics". - ### **Description:** Should change "MetaMetrics" to "KlubcoinMetrics". **Build Environment:** Prod Candidate Environment **Affects Version:** 1.0.0.prod.4 **Device Platform:** Android **Device OS:** 11 **Test Device:** OnePlus 7T Pro ### **Pre-condition:** 1. User successfully installed Klubcoin App 2. User has an existing Klubcoin Wallet Account 3. User is currently at Klubcoin Dashboard ### **Steps to Reproduce:** 1. Tap Hamburger Button 2. Tap Settings 3. Tap Security & Privacy 4. Navigate to Metrics Section ### **Expected Result:** Display KlubcoinMetrics ### **Actual Result:** Displaying MetaMetrics ### **Attachment/s:** ![Screenshot_20220418-132808__03](https://user-images.githubusercontent.com/100281200/163768130-f43be62b-296c-4cab-a579-a77fbc268afa.jpg)
defect
fix should change metametrics to klubcoinmetrics description should change metametrics to klubcoinmetrics build environment prod candidate environment affects version prod device platform android device os test device oneplus pro pre condition user successfully installed klubcoin app user has an existing klubcoin wallet account user is currently at klubcoin dashboard steps to reproduce tap hamburger button tap settings tap security privacy navigate to metrics section expected result display klubcoinmetrics actual result displaying metametrics attachment s
1
18,385
3,052,131,025
IssuesEvent
2015-08-12 13:13:33
bigbluebutton/bigbluebutton
https://api.github.com/repos/bigbluebutton/bigbluebutton
closed
Simplfy the steps to broadcast video
Priority-Low Status-Verified Type-Defect
Originally reported on Google Code with ID 18 ``` Reduce the steps for broadcasting a video to the following: 1. Click "Broadcast Video" button Note: If the user does not have any video camera, this button should not be shown. After clicking, the video chat module appears. If there is = 1 one valid video source, automatically select, else if there is > 1 video source, then 2a. Display the settings dialog to let the user choose. 2b. Users chooses and clicks 'OK' Flash prompts to let the user connect to the video. 3. User clicks OK in flash prompt. At this point, the video should automatically broadcast. There shouldn't be any stop button. To stop broadcasting the video, the user needs only close the dialog. What steps will reproduce the problem? 1. 2. 3. What is the expected output? What do you see instead? What version of the product are you using? On what operating system? Please provide any additional information below. ``` Reported by `ffdixon` on 2008-09-26 20:49:49
1.0
Simplfy the steps to broadcast video - Originally reported on Google Code with ID 18 ``` Reduce the steps for broadcasting a video to the following: 1. Click "Broadcast Video" button Note: If the user does not have any video camera, this button should not be shown. After clicking, the video chat module appears. If there is = 1 one valid video source, automatically select, else if there is > 1 video source, then 2a. Display the settings dialog to let the user choose. 2b. Users chooses and clicks 'OK' Flash prompts to let the user connect to the video. 3. User clicks OK in flash prompt. At this point, the video should automatically broadcast. There shouldn't be any stop button. To stop broadcasting the video, the user needs only close the dialog. What steps will reproduce the problem? 1. 2. 3. What is the expected output? What do you see instead? What version of the product are you using? On what operating system? Please provide any additional information below. ``` Reported by `ffdixon` on 2008-09-26 20:49:49
defect
simplfy the steps to broadcast video originally reported on google code with id reduce the steps for broadcasting a video to the following click broadcast video button note if the user does not have any video camera this button should not be shown after clicking the video chat module appears if there is one valid video source automatically select else if there is video source then display the settings dialog to let the user choose users chooses and clicks ok flash prompts to let the user connect to the video user clicks ok in flash prompt at this point the video should automatically broadcast there shouldn t be any stop button to stop broadcasting the video the user needs only close the dialog what steps will reproduce the problem what is the expected output what do you see instead what version of the product are you using on what operating system please provide any additional information below reported by ffdixon on
1
751,017
26,227,921,054
IssuesEvent
2023-01-04 20:35:57
yugabyte/yugabyte-db
https://api.github.com/repos/yugabyte/yugabyte-db
closed
[YSQL] Support backspace in terminal
kind/bug area/ysql priority/medium community/request
Jira Link: [DB-4830](https://yugabyte.atlassian.net/browse/DB-4830) Currently, backspace is not supported. This makes it impossible to correct typos when typing commands. Supporting backspace will improve user experience a lot.
1.0
[YSQL] Support backspace in terminal - Jira Link: [DB-4830](https://yugabyte.atlassian.net/browse/DB-4830) Currently, backspace is not supported. This makes it impossible to correct typos when typing commands. Supporting backspace will improve user experience a lot.
non_defect
support backspace in terminal jira link currently backspace is not supported this makes it impossible to correct typos when typing commands supporting backspace will improve user experience a lot
0
456,709
13,150,972,727
IssuesEvent
2020-08-09 14:28:06
chrisjsewell/docutils
https://api.github.com/repos/chrisjsewell/docutils
closed
Suggestion to change the URLs for :RFC: role [SF:bugs:233]
bugs closed-fixed priority-5
author: pchampin created: 2013-04-30 22:15:05.865000 assigned: None SF_url: https://sourceforge.net/p/docutils/bugs/233 the :RFC: role redirects to URLs of the form: http://www.faqs.org/rfcs/rfc3986.html I prefer URLs of the form: http://tools.ietf.org/html/rfc3986 and have the following arguments for that: * they are hosted by IETF, which is the original source * they contain much more hyperlinks (page references, section references, * they contain links to the PDF and TXT versions --- commenter: milde posted: 2015-02-17 14:55:13.967000 title: #233 Suggestion to change the URLs for :RFC: role - **status**: open --> closed-fixed --- commenter: milde posted: 2015-02-17 14:55:14.865000 title: #233 Suggestion to change the URLs for :RFC: role Fixed. Thank you for the hint. --- commenter: milde posted: 2017-10-25 07:16:44.797000 title: #233 Suggestion to change the URLs for :RFC: role - **Group**: repository --> Default
1.0
Suggestion to change the URLs for :RFC: role [SF:bugs:233] - author: pchampin created: 2013-04-30 22:15:05.865000 assigned: None SF_url: https://sourceforge.net/p/docutils/bugs/233 the :RFC: role redirects to URLs of the form: http://www.faqs.org/rfcs/rfc3986.html I prefer URLs of the form: http://tools.ietf.org/html/rfc3986 and have the following arguments for that: * they are hosted by IETF, which is the original source * they contain much more hyperlinks (page references, section references, * they contain links to the PDF and TXT versions --- commenter: milde posted: 2015-02-17 14:55:13.967000 title: #233 Suggestion to change the URLs for :RFC: role - **status**: open --> closed-fixed --- commenter: milde posted: 2015-02-17 14:55:14.865000 title: #233 Suggestion to change the URLs for :RFC: role Fixed. Thank you for the hint. --- commenter: milde posted: 2017-10-25 07:16:44.797000 title: #233 Suggestion to change the URLs for :RFC: role - **Group**: repository --> Default
non_defect
suggestion to change the urls for rfc role author pchampin created assigned none sf url the rfc role redirects to urls of the form i prefer urls of the form and have the following arguments for that they are hosted by ietf which is the original source they contain much more hyperlinks page references section references they contain links to the pdf and txt versions commenter milde posted title suggestion to change the urls for rfc role status open closed fixed commenter milde posted title suggestion to change the urls for rfc role fixed thank you for the hint commenter milde posted title suggestion to change the urls for rfc role group repository default
0
76,354
26,391,213,705
IssuesEvent
2023-01-12 15:48:59
primefaces/primefaces
https://api.github.com/repos/primefaces/primefaces
opened
Datatable: with columnToggler displays wrong headerText when header has a Link/Button
:lady_beetle: defect :bangbang: needs-triage
### Describe the bug I have some datatable columns with links/buttons specified like: ``` <p:column > <f:facet name="header"> <h:outputText value="myHeader" /> <p:commandLink action="#{myBean.xpto}" > <i class=" fa fa-pie-chart icon-black"/> </p:commandLink> </f:facet> <h:outputText value="#{item.test}" /> </p:column> ``` And a component columnToggler to select which columns need to be showed: `<p:columnToggler datasource="tabelaItens" trigger="togglerColTab" />` Each column that has <p:commandLink /> or <p:commandButton /> (this one is also worse because it displays the title="" value as well) displays a code like: ``` myHeader $(function(){PrimeFaces.cw("CommandLink","widget_formDetalhe_tabelaItens_j_idt371", {id:"formDetalhe:tabelaItens:j_idt371"});} ``` on columnToggler near the column name. Even if insert `<p:column headerText="myHeader" ...>` error persist. ### Reproducer _No response_ ### Expected behavior Only column header (text) should be displayed on <p:columnToggler />, especially when headerText is specified. ### PrimeFaces edition Elite ### PrimeFaces version 12.0.2 ### Theme _No response_ ### JSF implementation All ### JSF version 2.2 ### Java version 8 ### Browser(s) _No response_
1.0
Datatable: with columnToggler displays wrong headerText when header has a Link/Button - ### Describe the bug I have some datatable columns with links/buttons specified like: ``` <p:column > <f:facet name="header"> <h:outputText value="myHeader" /> <p:commandLink action="#{myBean.xpto}" > <i class=" fa fa-pie-chart icon-black"/> </p:commandLink> </f:facet> <h:outputText value="#{item.test}" /> </p:column> ``` And a component columnToggler to select which columns need to be showed: `<p:columnToggler datasource="tabelaItens" trigger="togglerColTab" />` Each column that has <p:commandLink /> or <p:commandButton /> (this one is also worse because it displays the title="" value as well) displays a code like: ``` myHeader $(function(){PrimeFaces.cw("CommandLink","widget_formDetalhe_tabelaItens_j_idt371", {id:"formDetalhe:tabelaItens:j_idt371"});} ``` on columnToggler near the column name. Even if insert `<p:column headerText="myHeader" ...>` error persist. ### Reproducer _No response_ ### Expected behavior Only column header (text) should be displayed on <p:columnToggler />, especially when headerText is specified. ### PrimeFaces edition Elite ### PrimeFaces version 12.0.2 ### Theme _No response_ ### JSF implementation All ### JSF version 2.2 ### Java version 8 ### Browser(s) _No response_
defect
datatable with columntoggler displays wrong headertext when header has a link button describe the bug i have some datatable columns with links buttons specified like and a component columntoggler to select which columns need to be showed each column that has or this one is also worse because it displays the title value as well displays a code like myheader function primefaces cw commandlink widget formdetalhe tabelaitens j id formdetalhe tabelaitens j on columntoggler near the column name even if insert error persist reproducer no response expected behavior only column header text should be displayed on especially when headertext is specified primefaces edition elite primefaces version theme no response jsf implementation all jsf version java version browser s no response
1
173,596
27,495,422,868
IssuesEvent
2023-03-05 04:13:02
Every-Time-Clone/every-time-iOS
https://api.github.com/repos/Every-Time-Clone/every-time-iOS
closed
[Design] Implement the login first screen
Design
## 📌 Issue <!-- Briefly describe the issue --> Implement the login first screen ## 📝 To-do <!-- List the tasks to work on --> - [x] Login first screen layout - [ ] Switch to the home screen when the Everytime login button is tapped (temporary)
1.0
[Design] Implement the login first screen - ## 📌 Issue <!-- Briefly describe the issue --> Implement the login first screen ## 📝 To-do <!-- List the tasks to work on --> - [x] Login first screen layout - [ ] Switch to the home screen when the Everytime login button is tapped (temporary)
non_defect
implement the login first screen 📌 issue implement the login first screen 📝 to do login first screen layout switch to the home screen when the everytime login button is tapped temporary
0
71,416
23,616,573,713
IssuesEvent
2022-08-24 16:22:22
vector-im/element-web
https://api.github.com/repos/vector-im/element-web
opened
Unable to decrypt: The sender's device has not sent us the keys for this message.
T-Defect
### Steps to reproduce I simply dmed one of the people on the server I'm in. ### Outcome #### What did you expect? For the DM to go through normally #### What happened instead? The person on the other end can't see my messages and it turns out like "Unable to decrypt: The sender's device has not sent us the keys for this message." ![image](https://user-images.githubusercontent.com/112004453/186470814-aa95f6d6-6f2c-4d49-94bb-a216777cb63e.png) ### Operating system Windows ### Application version Element version: 1.11.3 Olm version: 3.2.12 ### How did you install the app? Element.io/get-started ### Homeserver _No response_ ### Will you send logs? No
1.0
Unable to decrypt: The sender's device has not sent us the keys for this message. - ### Steps to reproduce I simply dmed one of the people on the server I'm in. ### Outcome #### What did you expect? For the DM to go through normally #### What happened instead? The person on the other end can't see my messages and it turns out like "Unable to decrypt: The sender's device has not sent us the keys for this message." ![image](https://user-images.githubusercontent.com/112004453/186470814-aa95f6d6-6f2c-4d49-94bb-a216777cb63e.png) ### Operating system Windows ### Application version Element version: 1.11.3 Olm version: 3.2.12 ### How did you install the app? Element.io/get-started ### Homeserver _No response_ ### Will you send logs? No
defect
unable to decrypt the sender s device has not sent us the keys for this message steps to reproduce i simply dmed one of the people on the server i m in outcome what did you expect for the dm to go through normally what happened instead the person on the other end can t see my messages and it turns out like unable to decrypt the sender s device has not sent us the keys for this message operating system windows application version element version olm version how did you install the app element io get started homeserver no response will you send logs no
1
16,562
2,918,327,745
IssuesEvent
2015-06-24 07:18:55
scipy/scipy
https://api.github.com/repos/scipy/scipy
closed
stats.spearmanr - wrong input arguments description
defect Documentation scipy.stats
In the axis part: axis : int or None, optional If axis=0 (default), then each column represents a variable, with observations in the rows. If axis=0, the relationship is transposed: each row represents a variable, while the columns contain observations. If axis=None, then both arrays will be raveled. version 0.15.0 Issue: Twice axis=0, plus based on my observation it works the other way round (or correct me please) I think it should be: axis : int or None, optional If axis=0 (default), then each ROW represents a variable, with observations in the COLUMNS. If axis=1, the relationship is transposed: each COLUMN represents a variable, while the ROWS contain observations. If axis=None, then both arrays will be raveled. My code: var1 = [1, 1, 1, 1, 1, 2] var2 = [5, 5, 5, 5, 5, 10] from scipy.stats import spearmanr cor, p = spearmanr(var1,var2) print(cor) OUTPUT: 1.0 _____ My code2: var1 = [1, 1, 1, 1, 1, 2] var2 = [5, 5, 5, 5, 5, 10] from scipy.stats import spearmanr cor, p = spearmanr(var1,var2,0) print(cor) OUTPUT: 1.0 _____ My code3: var1 = [1, 1, 1, 1, 1, 2] var2 = [5, 5, 5, 5, 5, 10] from scipy.stats import spearmanr cor, p = spearmanr(var1,var2,1) print(cor) OUTPUT: ValueError: axis must be less than arr.ndim; axis=1, rank=1.
1.0
stats.spearmanr - wrong input arguments description - In the axis part: axis : int or None, optional If axis=0 (default), then each column represents a variable, with observations in the rows. If axis=0, the relationship is transposed: each row represents a variable, while the columns contain observations. If axis=None, then both arrays will be raveled. version 0.15.0 Issue: Twice axis=0, plus based on my observation it works the other way round (or correct me please) I think it should be: axis : int or None, optional If axis=0 (default), then each ROW represents a variable, with observations in the COLUMNS. If axis=1, the relationship is transposed: each COLUMN represents a variable, while the ROWS contain observations. If axis=None, then both arrays will be raveled. My code: var1 = [1, 1, 1, 1, 1, 2] var2 = [5, 5, 5, 5, 5, 10] from scipy.stats import spearmanr cor, p = spearmanr(var1,var2) print(cor) OUTPUT: 1.0 _____ My code2: var1 = [1, 1, 1, 1, 1, 2] var2 = [5, 5, 5, 5, 5, 10] from scipy.stats import spearmanr cor, p = spearmanr(var1,var2,0) print(cor) OUTPUT: 1.0 _____ My code3: var1 = [1, 1, 1, 1, 1, 2] var2 = [5, 5, 5, 5, 5, 10] from scipy.stats import spearmanr cor, p = spearmanr(var1,var2,1) print(cor) OUTPUT: ValueError: axis must be less than arr.ndim; axis=1, rank=1.
defect
stats spearmanr wrong input arguments description in the axis part axis int or none optional if axis default then each column represents a variable with observations in the rows if axis the relationship is transposed each row represents a variable while the columns contain observations if axis none then both arrays will be raveled version issue twice axis plus based on my observation it works the other way round or correct me please i think it should be axis int or none optional if axis default then each row represents a variable with observations in the columns if axis the relationship is transposed each column represents a variable while the rows contain observations if axis none then both arrays will be raveled my code from scipy stats import spearmanr cor p spearmanr print cor output my from scipy stats import spearmanr cor p spearmanr print cor output my from scipy stats import spearmanr cor p spearmanr print cor output valueerror axis must be less than arr ndim axis rank
1
56,504
15,116,019,081
IssuesEvent
2021-02-09 05:53:19
openzfs/zfs
https://api.github.com/repos/openzfs/zfs
closed
mmap of immutable file behaves differently on zfs vs. ext4/btrfs
Status: Stale Type: Defect
I use an application called Geeqie to view photos. Geeqie cannot open files from zfs (0.6.3) when those files are marked immutable. However, it _can_ open them when marked immutable on ext4 or btrfs. I'm not a programmer but I believe I traced it to this line in the Geeqie source: ``` il->mapped_file = mmap(0, il->bytes_total, PROT_READ|PROT_WRITE, MAP_PRIVATE, load_fd, 0); ``` This returns `MAP_FAILED` on zfs, but works on the other filesystems. The offending part appears to be the `PROT_WRITE` flag. Should zfs behave as the others do here?
1.0
mmap of immutable file behaves differently on zfs vs. ext4/btrfs - I use an application called Geeqie to view photos. Geeqie cannot open files from zfs (0.6.3) when those files are marked immutable. However, it _can_ open them when marked immutable on ext4 or btrfs. I'm not a programmer but I believe I traced it to this line in the Geeqie source: ``` il->mapped_file = mmap(0, il->bytes_total, PROT_READ|PROT_WRITE, MAP_PRIVATE, load_fd, 0); ``` This returns `MAP_FAILED` on zfs, but works on the other filesystems. The offending part appears to be the `PROT_WRITE` flag. Should zfs behave as the others do here?
defect
mmap of immutable file behaves differently on zfs vs btrfs i use an application called geeqie to view photos geeqie cannot open files from zfs when those files are marked immutable however it can open them when marked immutable on or btrfs i m not a programmer but i believe i traced it to this line in the geeqie source il mapped file mmap il bytes total prot read prot write map private load fd this returns map failed on zfs but works on the other filesystems the offending part appears to be the prot write flag should zfs behave as the others do here
1
67,529
20,978,025,439
IssuesEvent
2022-03-28 16:58:58
vector-im/element-web
https://api.github.com/repos/vector-im/element-web
closed
Fix react error on share dialog
T-Defect S-Tolerable A-Share O-Uncommon good first issue
### Steps to reproduce 1. Hover over an event 2. Select "share" in the message action bar to open the share dialog ### Outcome #### What did you expect? no error #### What happened instead? In the console I can see ``` Warning: You provided a `checked` prop to a form field without an `onChange` handler. This will render a read-only field. If the field should be mutable use `defaultChecked`. Otherwise, set either `onChange` or `readOnly`. ``` ### Operating system _No response_ ### Browser information _No response_ ### URL for webapp _No response_ ### Application version _No response_ ### Homeserver _No response_ ### Will you send logs? No
1.0
Fix react error on share dialog - ### Steps to reproduce 1. Hover over an event 2. Select "share" in the message action bar to open the share dialog ### Outcome #### What did you expect? no error #### What happened instead? In the console I can see ``` Warning: You provided a `checked` prop to a form field without an `onChange` handler. This will render a read-only field. If the field should be mutable use `defaultChecked`. Otherwise, set either `onChange` or `readOnly`. ``` ### Operating system _No response_ ### Browser information _No response_ ### URL for webapp _No response_ ### Application version _No response_ ### Homeserver _No response_ ### Will you send logs? No
defect
fix react error on share dialog steps to reproduce hover over an event select share in the message action bar to open the share dialog outcome what did you expect no error what happened instead in the console i can see warning you provided a checked prop to a form field without an onchange handler this will render a read only field if the field should be mutable use defaultchecked otherwise set either onchange or readonly operating system no response browser information no response url for webapp no response application version no response homeserver no response will you send logs no
1
45,282
12,701,077,740
IssuesEvent
2020-06-22 17:28:38
scipy/scipy
https://api.github.com/repos/scipy/scipy
closed
[Doc] search result links don't work
Documentation defect
When using the documentation's search box, one gets 404 errors when clicking the result links. Example at https://docs.scipy.org/doc/scipy/reference/search.html?q=sum_duplicates&check_keywords=yes&area=default
1.0
[Doc] search result links don't work - When using the documentation's search box, one gets 404 errors when clicking the result links. Example at https://docs.scipy.org/doc/scipy/reference/search.html?q=sum_duplicates&check_keywords=yes&area=default
defect
search result links don t work when using the documentation s search box one gets errors when clicking the result links example at
1
102,216
11,276,981,125
IssuesEvent
2020-01-15 01:06:30
streamnative/bookkeeper
https://api.github.com/repos/streamnative/bookkeeper
opened
ISSUE-1867: BP-37: Improve configuration management for better documentation
area/documentation triage/week-8 type/proposal
Original Issue: apache/bookkeeper#1867 --- **BP** This is the master ticket for tracking BP-37 : One common task in developing bookkeeper is to make sure all the configuration settings are well documented, and the configuration file we ship in each release is in-sync with the code itself. However maintaining things in-sync is non-trivial. This proposal is exploring a new way to manage configuration settings for better documentation. Proposal PR - #1868
1.0
ISSUE-1867: BP-37: Improve configuration management for better documentation - Original Issue: apache/bookkeeper#1867 --- **BP** This is the master ticket for tracking BP-37 : One common task in developing bookkeeper is to make sure all the configuration settings are well documented, and the configuration file we ship in each release is in-sync with the code itself. However maintaining things in-sync is non-trivial. This proposal is exploring a new way to manage configuration settings for better documentation. Proposal PR - #1868
non_defect
issue bp improve configuration management for better documentation original issue apache bookkeeper bp this is the master ticket for tracking bp one common task in developing bookkeeper is to make sure all the configuration settings are well documented and the configuration file we ship in each release is in sync with the code itself however maintaining things in sync is non trivial this proposal is exploring a new way to manage configuration settings for better documentation proposal pr
0
31,450
6,527,984,154
IssuesEvent
2017-08-30 04:45:50
bridgedotnet/Bridge
https://api.github.com/repos/bridgedotnet/Bridge
opened
DateTime equals does not compare Ticks
defect
When two DateTime objects are compared, the Ticks property is not used for the comparison. ### Steps To Reproduce https://dev.deck.net/d32b32f7cec18cd12c95ddd3f98513ac ```c# public class Program { public static void Main() { DateTime now = DateTime.Now; DateTime utcNow = now.ToUniversalTime(); Console.WriteLine(now); Console.WriteLine(utcNow); if (now == utcNow) Console.WriteLine("Equal"); else Console.WriteLine("Not equal"); } } ``` ### Expected Result ``` Not equal ``` ### Actual Result ``` Equal ``` ## See Also * https://forums.bridge.net/forum/bridge-net-pro/bugs/4697-datetime-comparison-and-utc
1.0
DateTime equals does not compare Ticks - When two DateTime objects are compared, the Ticks property is not used for the comparison. ### Steps To Reproduce https://dev.deck.net/d32b32f7cec18cd12c95ddd3f98513ac ```c# public class Program { public static void Main() { DateTime now = DateTime.Now; DateTime utcNow = now.ToUniversalTime(); Console.WriteLine(now); Console.WriteLine(utcNow); if (now == utcNow) Console.WriteLine("Equal"); else Console.WriteLine("Not equal"); } } ``` ### Expected Result ``` Not equal ``` ### Actual Result ``` Equal ``` ## See Also * https://forums.bridge.net/forum/bridge-net-pro/bugs/4697-datetime-comparison-and-utc
defect
datetime equals does not compare ticks when two datetime objects are compared the ticks property is not used for the comparison steps to reproduce c public class program public static void main datetime now datetime now datetime utcnow now touniversaltime console writeline now console writeline utcnow if now utcnow console writeline equal else console writeline not equal expected result not equal actual result equal see also
1
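Each record above follows the column schema listed in the file header, ending in a `binary_label` (1 for `defect`, 0 for `non_defect`). As a minimal sketch of how such records could be handled — the column names come from the header, but the inline sample values are illustrative assumptions, not rows copied from this dump — a frame can be built and split by label like this:

```python
import pandas as pd

# Illustrative in-memory records mirroring a subset of the schema above
# (repo, action, label, text, binary_label). Values are hypothetical,
# not taken verbatim from the dataset.
records = [
    {"repo": "scipy/scipy", "action": "closed",
     "label": "defect", "binary_label": 1,
     "text": "wrong input arguments description"},
    {"repo": "yugabyte/yugabyte-db", "action": "closed",
     "label": "non_defect", "binary_label": 0,
     "text": "support backspace in terminal"},
]
df = pd.DataFrame(records)

# Split defect vs. non-defect rows on the binary label column
defects = df[df["binary_label"] == 1]
non_defects = df[df["binary_label"] == 0]
print(len(defects), len(non_defects))
```

With a real export of this data (e.g. via `pd.read_csv`), the same boolean-indexing pattern separates the two classes for downstream use.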
38,850
8,971,402,938
IssuesEvent
2019-01-29 15:50:44
hazelcast/hazelcast
https://api.github.com/repos/hazelcast/hazelcast
opened
UndefinedErrorCodeException: Class name: com.hazelcast.cp.exception.CannotReplicateException
Priority: High Team: Client Team: Core Type: Critical Type: Defect
``` com.hazelcast.client.UndefinedErrorCodeException: Class name: com.hazelcast.cp.exception.CannotReplicateException, Message: Cannot replicate new operations for now at com.hazelcast.cp.internal.raft.impl.task.ReplicateTask.run(ReplicateTask.java:72) at com.hazelcast.cp.internal.NodeEngineRaftIntegration.execute(NodeEngineRaftIntegration.java:95) at com.hazelcast.cp.internal.raft.impl.RaftNodeImpl.replicate(RaftNodeImpl.java:239) at com.hazelcast.cp.internal.operation.RaftReplicateOp.replicate(RaftReplicateOp.java:76) at com.hazelcast.cp.internal.operation.RaftReplicateOp.run(RaftReplicateOp.java:67) at com.hazelcast.spi.Operation.call(Operation.java:170) at com.hazelcast.spi.impl.operationservice.impl.OperationRunnerImpl.call(OperationRunnerImpl.java:210) at com.hazelcast.spi.impl.operationservice.impl.OperationRunnerImpl.run(OperationRunnerImpl.java:199) at com.hazelcast.spi.impl.operationservice.impl.OperationRunnerImpl.run(OperationRunnerImpl.java:416) at com.hazelcast.spi.impl.operationexecutor.impl.OperationThread.process(OperationThread.java:153) at com.hazelcast.spi.impl.operationexecutor.impl.OperationThread.process(OperationThread.java:123) at com.hazelcast.spi.impl.operationexecutor.impl.OperationThread.run(OperationThread.java:110) at ------ submitted from ------.(Unknown Source) at com.hazelcast.spi.impl.operationservice.impl.InvocationFuture.resolve(InvocationFuture.java:126) at com.hazelcast.spi.impl.AbstractInvocationFuture$1.run(AbstractInvocationFuture.java:250) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1152) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:622) at java.lang.Thread.run(Thread.java:748) at com.hazelcast.util.executor.HazelcastManagedThread.executeRun(HazelcastManagedThread.java:64) at com.hazelcast.util.executor.HazelcastManagedThread.run(HazelcastManagedThread.java:80) at ------ submitted from ------.(Unknown Source) at 
com.hazelcast.client.spi.impl.ClientInvocationFuture.resolveAndThrowIfException(ClientInvocationFuture.java:96) at com.hazelcast.client.spi.impl.ClientInvocationFuture.resolveAndThrowIfException(ClientInvocationFuture.java:33) at com.hazelcast.spi.impl.AbstractInvocationFuture.get(AbstractInvocationFuture.java:190) at com.hazelcast.client.util.ClientDelegatingFuture.get(ClientDelegatingFuture.java:125) at com.hazelcast.client.util.ClientDelegatingFuture.get(ClientDelegatingFuture.java:116) at com.hazelcast.client.util.ClientDelegatingFuture.join(ClientDelegatingFuture.java:132) at com.hazelcast.client.cp.internal.datastructures.atomiclong.RaftAtomicLongProxy.get(RaftAtomicLongProxy.java:127) ``` http://54.234.90.98/~jenkins/workspace/kill-x/3.12-SNAPSHOT/2019_01_29-15_07_00/long/output/HZ/HzClient1HZ/exception.txt http://54.234.90.98/~jenkins/workspace/kill-x/3.12-SNAPSHOT/2019_01_29-15_07_00/long/output/HZ/HzMember1HZ/out.txt
1.0
UndefinedErrorCodeException: Class name: com.hazelcast.cp.exception.CannotReplicateException - ``` com.hazelcast.client.UndefinedErrorCodeException: Class name: com.hazelcast.cp.exception.CannotReplicateException, Message: Cannot replicate new operations for now at com.hazelcast.cp.internal.raft.impl.task.ReplicateTask.run(ReplicateTask.java:72) at com.hazelcast.cp.internal.NodeEngineRaftIntegration.execute(NodeEngineRaftIntegration.java:95) at com.hazelcast.cp.internal.raft.impl.RaftNodeImpl.replicate(RaftNodeImpl.java:239) at com.hazelcast.cp.internal.operation.RaftReplicateOp.replicate(RaftReplicateOp.java:76) at com.hazelcast.cp.internal.operation.RaftReplicateOp.run(RaftReplicateOp.java:67) at com.hazelcast.spi.Operation.call(Operation.java:170) at com.hazelcast.spi.impl.operationservice.impl.OperationRunnerImpl.call(OperationRunnerImpl.java:210) at com.hazelcast.spi.impl.operationservice.impl.OperationRunnerImpl.run(OperationRunnerImpl.java:199) at com.hazelcast.spi.impl.operationservice.impl.OperationRunnerImpl.run(OperationRunnerImpl.java:416) at com.hazelcast.spi.impl.operationexecutor.impl.OperationThread.process(OperationThread.java:153) at com.hazelcast.spi.impl.operationexecutor.impl.OperationThread.process(OperationThread.java:123) at com.hazelcast.spi.impl.operationexecutor.impl.OperationThread.run(OperationThread.java:110) at ------ submitted from ------.(Unknown Source) at com.hazelcast.spi.impl.operationservice.impl.InvocationFuture.resolve(InvocationFuture.java:126) at com.hazelcast.spi.impl.AbstractInvocationFuture$1.run(AbstractInvocationFuture.java:250) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1152) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:622) at java.lang.Thread.run(Thread.java:748) at com.hazelcast.util.executor.HazelcastManagedThread.executeRun(HazelcastManagedThread.java:64) at com.hazelcast.util.executor.HazelcastManagedThread.run(HazelcastManagedThread.java:80) at 
------ submitted from ------.(Unknown Source) at com.hazelcast.client.spi.impl.ClientInvocationFuture.resolveAndThrowIfException(ClientInvocationFuture.java:96) at com.hazelcast.client.spi.impl.ClientInvocationFuture.resolveAndThrowIfException(ClientInvocationFuture.java:33) at com.hazelcast.spi.impl.AbstractInvocationFuture.get(AbstractInvocationFuture.java:190) at com.hazelcast.client.util.ClientDelegatingFuture.get(ClientDelegatingFuture.java:125) at com.hazelcast.client.util.ClientDelegatingFuture.get(ClientDelegatingFuture.java:116) at com.hazelcast.client.util.ClientDelegatingFuture.join(ClientDelegatingFuture.java:132) at com.hazelcast.client.cp.internal.datastructures.atomiclong.RaftAtomicLongProxy.get(RaftAtomicLongProxy.java:127) ``` http://54.234.90.98/~jenkins/workspace/kill-x/3.12-SNAPSHOT/2019_01_29-15_07_00/long/output/HZ/HzClient1HZ/exception.txt http://54.234.90.98/~jenkins/workspace/kill-x/3.12-SNAPSHOT/2019_01_29-15_07_00/long/output/HZ/HzMember1HZ/out.txt
defect
undefinederrorcodeexception class name com hazelcast cp exception cannotreplicateexception com hazelcast client undefinederrorcodeexception class name com hazelcast cp exception cannotreplicateexception message cannot replicate new operations for now at com hazelcast cp internal raft impl task replicatetask run replicatetask java at com hazelcast cp internal nodeengineraftintegration execute nodeengineraftintegration java at com hazelcast cp internal raft impl raftnodeimpl replicate raftnodeimpl java at com hazelcast cp internal operation raftreplicateop replicate raftreplicateop java at com hazelcast cp internal operation raftreplicateop run raftreplicateop java at com hazelcast spi operation call operation java at com hazelcast spi impl operationservice impl operationrunnerimpl call operationrunnerimpl java at com hazelcast spi impl operationservice impl operationrunnerimpl run operationrunnerimpl java at com hazelcast spi impl operationservice impl operationrunnerimpl run operationrunnerimpl java at com hazelcast spi impl operationexecutor impl operationthread process operationthread java at com hazelcast spi impl operationexecutor impl operationthread process operationthread java at com hazelcast spi impl operationexecutor impl operationthread run operationthread java at submitted from unknown source at com hazelcast spi impl operationservice impl invocationfuture resolve invocationfuture java at com hazelcast spi impl abstractinvocationfuture run abstractinvocationfuture java at java util concurrent threadpoolexecutor runworker threadpoolexecutor java at java util concurrent threadpoolexecutor worker run threadpoolexecutor java at java lang thread run thread java at com hazelcast util executor hazelcastmanagedthread executerun hazelcastmanagedthread java at com hazelcast util executor hazelcastmanagedthread run hazelcastmanagedthread java at submitted from unknown source at com hazelcast client spi impl clientinvocationfuture resolveandthrowifexception 
clientinvocationfuture java at com hazelcast client spi impl clientinvocationfuture resolveandthrowifexception clientinvocationfuture java at com hazelcast spi impl abstractinvocationfuture get abstractinvocationfuture java at com hazelcast client util clientdelegatingfuture get clientdelegatingfuture java at com hazelcast client util clientdelegatingfuture get clientdelegatingfuture java at com hazelcast client util clientdelegatingfuture join clientdelegatingfuture java at com hazelcast client cp internal datastructures atomiclong raftatomiclongproxy get raftatomiclongproxy java
1
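The `CannotReplicateException` in the record above ("Cannot replicate new operations for now") is a transient condition raised while Hazelcast's Raft layer cannot accept new operations; callers generally retry after a short delay. A minimal retry-with-backoff sketch in Python — `CannotReplicateError`, `retry_transient`, and `flaky` are all hypothetical stand-ins for illustration, not the Hazelcast client API:

```python
import time

class CannotReplicateError(Exception):
    """Stand-in for a transient 'cannot replicate now' failure."""

def retry_transient(op, attempts=5, base_delay=0.05):
    """Call op(); on a transient error, back off exponentially and retry."""
    for attempt in range(attempts):
        try:
            return op()
        except CannotReplicateError:
            if attempt == attempts - 1:
                raise  # exhausted all attempts; surface the error
            time.sleep(base_delay * (2 ** attempt))

# Example: an operation that fails twice, then succeeds on the third call.
calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise CannotReplicateError()
    return "ok"

print(retry_transient(flaky))  # → ok
```

The same shape applies to any CP-subsystem call that can fail transiently during leader changes; only the caught exception type changes.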
290,057
21,801,211,552
IssuesEvent
2022-05-16 05:34:47
jmbannon/ytdl-sub
https://api.github.com/repos/jmbannon/ytdl-sub
closed
Add documentation for plugins
documentation
Would be ideal for sphinx to automatically scrape all `Plugin` classes and display their docstrings
1.0
Add documentation for plugins - Would be ideal for sphinx to automatically scrape all `Plugin` classes and display their docstrings
non_defect
add documentation for plugins would be ideal for sphinx to automatically scrape all plugin classes and display their docstrings
0
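The suggestion in the record above — having sphinx scrape all `Plugin` classes and display their docstrings — is commonly built in two halves: walk the subclass tree at build time, then emit `autoclass` directives for what was found. A sketch of the subclass-walking half; the `Plugin` hierarchy here is a made-up stand-in, not ytdl-sub's actual class layout:

```python
class Plugin:
    """Base class; the real project defines plugin behavior here."""

class AudioPlugin(Plugin):
    """Extracts audio streams."""

class SubtitlePlugin(Plugin):
    """Downloads and embeds subtitles."""

def plugin_docs(base=Plugin):
    """Collect (name, docstring) pairs for every subclass of base, recursively."""
    docs = []
    for cls in sorted(base.__subclasses__(), key=lambda c: c.__name__):
        docs.append((cls.__name__, (cls.__doc__ or "").strip()))
        docs.extend(plugin_docs(cls))  # descend into deeper subclasses
    return docs

for name, doc in plugin_docs():
    print(f"{name}: {doc}")
```

In a real sphinx setup, a small `conf.py` hook or generator script would write one `.. autoclass::` line per collected name into a generated `.rst` file, so new plugins appear in the docs without manual edits.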
77,818
27,178,791,089
IssuesEvent
2023-02-18 10:52:40
openzfs/zfs
https://api.github.com/repos/openzfs/zfs
closed
zpool hangs
Type: Defect
### System information <!-- add version after "|" character --> Type | Version/Name Distribution Name | Ubuntu Distribution Version | 22.04.1 LTS Kernel Version | 5.15.0-50 Architecture | x86_64 OpenZFS Version | 2.1.4 ### Describe the problem you're observing The command zpool status hangs. What additional info should I provide for debugging? How can I reboot the system safely? Below you can find the strace of `zpool status` where the command is sitting for several days now. From server logs I deduced that other invocations must have been waiting for more than 30 days. ### Describe how to reproduce the problem ### Include any warning/errors/backtraces from the system logs ``` ~> strace zpool status execve("/usr/sbin/zpool", ["zpool", "status"], 0x7ffc4bb70ab8 /* 20 vars */) = 0 brk(NULL) = 0x55d8cc9c5000 arch_prctl(0x3001 /* ARCH_??? */, 0x7ffd987b8e60) = -1 EINVAL (Invalid argument) mmap(NULL, 8192, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7f495156a000 access("/etc/ld.so.preload", R_OK) = -1 ENOENT (No such file or directory) openat(AT_FDCWD, "/usr/local/lib/R/site-library/ospsuite/lib/glibc-hwcaps/x86-64-v3/libzfs.so.4", O_RDONLY|O_CLOEXEC) = -1 ENOENT (No such file or directory) newfstatat(AT_FDCWD, "/usr/local/lib/R/site-library/ospsuite/lib/glibc-hwcaps/x86-64-v3", 0x7ffd987b8080, 0) = -1 ENOENT (No such file or directory) openat(AT_FDCWD, "/usr/local/lib/R/site-library/ospsuite/lib/glibc-hwcaps/x86-64-v2/libzfs.so.4", O_RDONLY|O_CLOEXEC) = -1 ENOENT (No such file or directory) newfstatat(AT_FDCWD, "/usr/local/lib/R/site-library/ospsuite/lib/glibc-hwcaps/x86-64-v2", 0x7ffd987b8080, 0) = -1 ENOENT (No such file or directory) openat(AT_FDCWD, "/usr/local/lib/R/site-library/ospsuite/lib/tls/haswell/x86_64/libzfs.so.4", O_RDONLY|O_CLOEXEC) = -1 ENOENT (No such file or directory) newfstatat(AT_FDCWD, "/usr/local/lib/R/site-library/ospsuite/lib/tls/haswell/x86_64", 0x7ffd987b8080, 0) = -1 ENOENT (No such file or directory) openat(AT_FDCWD, 
"/usr/local/lib/R/site-library/ospsuite/lib/tls/haswell/libzfs.so.4", O_RDONLY|O_CLOEXEC) = -1 ENOENT (No such file or directory) newfstatat(AT_FDCWD, "/usr/local/lib/R/site-library/ospsuite/lib/tls/haswell", 0x7ffd987b8080, 0) = -1 ENOENT (No such file or directory) openat(AT_FDCWD, "/usr/local/lib/R/site-library/ospsuite/lib/tls/x86_64/libzfs.so.4", O_RDONLY|O_CLOEXEC) = -1 ENOENT (No such file or directory) newfstatat(AT_FDCWD, "/usr/local/lib/R/site-library/ospsuite/lib/tls/x86_64", 0x7ffd987b8080, 0) = -1 ENOENT (No such file or directory) openat(AT_FDCWD, "/usr/local/lib/R/site-library/ospsuite/lib/tls/libzfs.so.4", O_RDONLY|O_CLOEXEC) = -1 ENOENT (No such file or directory) newfstatat(AT_FDCWD, "/usr/local/lib/R/site-library/ospsuite/lib/tls", 0x7ffd987b8080, 0) = -1 ENOENT (No such file or directory) openat(AT_FDCWD, "/usr/local/lib/R/site-library/ospsuite/lib/haswell/x86_64/libzfs.so.4", O_RDONLY|O_CLOEXEC) = -1 ENOENT (No such file or directory) newfstatat(AT_FDCWD, "/usr/local/lib/R/site-library/ospsuite/lib/haswell/x86_64", 0x7ffd987b8080, 0) = -1 ENOENT (No such file or directory) openat(AT_FDCWD, "/usr/local/lib/R/site-library/ospsuite/lib/haswell/libzfs.so.4", O_RDONLY|O_CLOEXEC) = -1 ENOENT (No such file or directory) newfstatat(AT_FDCWD, "/usr/local/lib/R/site-library/ospsuite/lib/haswell", 0x7ffd987b8080, 0) = -1 ENOENT (No such file or directory) openat(AT_FDCWD, "/usr/local/lib/R/site-library/ospsuite/lib/x86_64/libzfs.so.4", O_RDONLY|O_CLOEXEC) = -1 ENOENT (No such file or directory) newfstatat(AT_FDCWD, "/usr/local/lib/R/site-library/ospsuite/lib/x86_64", 0x7ffd987b8080, 0) = -1 ENOENT (No such file or directory) openat(AT_FDCWD, "/usr/local/lib/R/site-library/ospsuite/lib/libzfs.so.4", O_RDONLY|O_CLOEXEC) = -1 ENOENT (No such file or directory) newfstatat(AT_FDCWD, "/usr/local/lib/R/site-library/ospsuite/lib", 0x7ffd987b8080, 0) = -1 ENOENT (No such file or directory) openat(AT_FDCWD, "/etc/ld.so.cache", O_RDONLY|O_CLOEXEC) = 3 newfstatat(3, 
"", {st_mode=S_IFREG|0644, st_size=40143, ...}, AT_EMPTY_PATH) = 0 mmap(NULL, 40143, PROT_READ, MAP_PRIVATE, 3, 0) = 0x7f4951560000 close(3) = 0 openat(AT_FDCWD, "/lib/x86_64-linux-gnu/libzfs.so.4", O_RDONLY|O_CLOEXEC) = 3 read(3, "\177ELF\2\1\1\0\0\0\0\0\0\0\0\0\3\0>\0\1\0\0\0\0\0\0\0\0\0\0\0"..., 832) = 832 newfstatat(3, "", {st_mode=S_IFREG|0644, st_size=420624, ...}, AT_EMPTY_PATH) = 0 mmap(NULL, 435944, PROT_READ, MAP_PRIVATE|MAP_DENYWRITE, 3, 0) = 0x7f49514f5000 mmap(0x7f4951502000, 274432, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0xd000) = 0x7f4951502000 mmap(0x7f4951545000, 81920, PROT_READ, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x50000) = 0x7f4951545000 mmap(0x7f4951559000, 16384, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x63000) = 0x7f4951559000 mmap(0x7f495155d000, 9960, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_ANONYMOUS, -1, 0) = 0x7f495155d000 close(3) = 0 openat(AT_FDCWD, "/lib/x86_64-linux-gnu/libzfs_core.so.3", O_RDONLY|O_CLOEXEC) = 3 read(3, "\177ELF\2\1\1\0\0\0\0\0\0\0\0\0\3\0>\0\1\0\0\0\0\0\0\0\0\0\0\0"..., 832) = 832 newfstatat(3, "", {st_mode=S_IFREG|0644, st_size=118560, ...}, AT_EMPTY_PATH) = 0 mmap(NULL, 124776, PROT_READ, MAP_PRIVATE|MAP_DENYWRITE, 3, 0) = 0x7f49514d6000 mmap(0x7f49514dd000, 65536, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x7000) = 0x7f49514dd000 mmap(0x7f49514ed000, 20480, PROT_READ, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x17000) = 0x7f49514ed000 mmap(0x7f49514f2000, 8192, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x1b000) = 0x7f49514f2000 mmap(0x7f49514f4000, 1896, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_ANONYMOUS, -1, 0) = 0x7f49514f4000 close(3) = 0 openat(AT_FDCWD, "/lib/x86_64-linux-gnu/libuutil.so.3", O_RDONLY|O_CLOEXEC) = 3 read(3, "\177ELF\2\1\1\0\0\0\0\0\0\0\0\0\3\0>\0\1\0\0\0\0\0\0\0\0\0\0\0"..., 832) = 832 newfstatat(3, "", {st_mode=S_IFREG|0644, st_size=64136, ...}, AT_EMPTY_PATH) = 0 mmap(NULL, 70664, PROT_READ, 
MAP_PRIVATE|MAP_DENYWRITE, 3, 0) = 0x7f49514c4000 mmap(0x7f49514c9000, 24576, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x5000) = 0x7f49514c9000 mmap(0x7f49514cf000, 16384, PROT_READ, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0xb000) = 0x7f49514cf000 mmap(0x7f49514d3000, 8192, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0xe000) = 0x7f49514d3000 mmap(0x7f49514d5000, 1032, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_ANONYMOUS, -1, 0) = 0x7f49514d5000 close(3) = 0 openat(AT_FDCWD, "/lib/x86_64-linux-gnu/libnvpair.so.3", O_RDONLY|O_CLOEXEC) = 3 read(3, "\177ELF\2\1\1\0\0\0\0\0\0\0\0\0\3\0>\0\1\0\0\0\0\0\0\0\0\0\0\0"..., 832) = 832 newfstatat(3, "", {st_mode=S_IFREG|0644, st_size=100496, ...}, AT_EMPTY_PATH) = 0 mmap(NULL, 102488, PROT_READ, MAP_PRIVATE|MAP_DENYWRITE, 3, 0) = 0x7f49514aa000 mmap(0x7f49514af000, 53248, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x5000) = 0x7f49514af000 mmap(0x7f49514bc000, 24576, PROT_READ, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x12000) = 0x7f49514bc000 mmap(0x7f49514c2000, 8192, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x17000) = 0x7f49514c2000 close(3) = 0 openat(AT_FDCWD, "/lib/x86_64-linux-gnu/libm.so.6", O_RDONLY|O_CLOEXEC) = 3 read(3, "\177ELF\2\1\1\3\0\0\0\0\0\0\0\0\3\0>\0\1\0\0\0\0\0\0\0\0\0\0\0"..., 832) = 832 newfstatat(3, "", {st_mode=S_IFREG|0644, st_size=940560, ...}, AT_EMPTY_PATH) = 0 mmap(NULL, 942344, PROT_READ, MAP_PRIVATE|MAP_DENYWRITE, 3, 0) = 0x7f49513c3000 mmap(0x7f49513d1000, 507904, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0xe000) = 0x7f49513d1000 mmap(0x7f495144d000, 372736, PROT_READ, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x8a000) = 0x7f495144d000 mmap(0x7f49514a8000, 8192, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0xe4000) = 0x7f49514a8000 close(3) = 0 openat(AT_FDCWD, "/lib/x86_64-linux-gnu/libblkid.so.1", O_RDONLY|O_CLOEXEC) = 3 read(3, 
"\177ELF\2\1\1\0\0\0\0\0\0\0\0\0\3\0>\0\1\0\0\0\0\0\0\0\0\0\0\0"..., 832) = 832 newfstatat(3, "", {st_mode=S_IFREG|0644, st_size=220192, ...}, AT_EMPTY_PATH) = 0 mmap(NULL, 8192, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7f49513c1000 mmap(NULL, 222136, PROT_READ, MAP_PRIVATE|MAP_DENYWRITE, 3, 0) = 0x7f495138a000 mprotect(0x7f4951391000, 172032, PROT_NONE) = 0 mmap(0x7f4951391000, 131072, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x7000) = 0x7f4951391000 mmap(0x7f49513b1000, 36864, PROT_READ, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x27000) = 0x7f49513b1000 mmap(0x7f49513bb000, 24576, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x30000) = 0x7f49513bb000 close(3) = 0 openat(AT_FDCWD, "/lib/x86_64-linux-gnu/libuuid.so.1", O_RDONLY|O_CLOEXEC) = 3 read(3, "\177ELF\2\1\1\0\0\0\0\0\0\0\0\0\3\0>\0\1\0\0\0\0\0\0\0\0\0\0\0"..., 832) = 832 newfstatat(3, "", {st_mode=S_IFREG|0644, st_size=30920, ...}, AT_EMPTY_PATH) = 0 mmap(NULL, 32808, PROT_READ, MAP_PRIVATE|MAP_DENYWRITE, 3, 0) = 0x7f4951381000 mmap(0x7f4951383000, 16384, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x2000) = 0x7f4951383000 mmap(0x7f4951387000, 4096, PROT_READ, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x6000) = 0x7f4951387000 mmap(0x7f4951388000, 8192, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x6000) = 0x7f4951388000 close(3) = 0 openat(AT_FDCWD, "/lib/x86_64-linux-gnu/libc.so.6", O_RDONLY|O_CLOEXEC) = 3 read(3, "\177ELF\2\1\1\3\0\0\0\0\0\0\0\0\3\0>\0\1\0\0\0P\237\2\0\0\0\0\0"..., 832) = 832 pread64(3, "\6\0\0\0\4\0\0\0@\0\0\0\0\0\0\0@\0\0\0\0\0\0\0@\0\0\0\0\0\0\0"..., 784, 64) = 784 pread64(3, "\4\0\0\0 \0\0\0\5\0\0\0GNU\0\2\0\0\300\4\0\0\0\3\0\0\0\0\0\0\0"..., 48, 848) = 48 pread64(3, "\4\0\0\0\24\0\0\0\3\0\0\0GNU\0i8\235HZ\227\223\333\350s\360\352,\223\340."..., 68, 896) = 68 newfstatat(3, "", {st_mode=S_IFREG|0644, st_size=2216304, ...}, AT_EMPTY_PATH) = 0 pread64(3, 
"\6\0\0\0\4\0\0\0@\0\0\0\0\0\0\0@\0\0\0\0\0\0\0@\0\0\0\0\0\0\0"..., 784, 64) = 784 mmap(NULL, 2260560, PROT_READ, MAP_PRIVATE|MAP_DENYWRITE, 3, 0) = 0x7f4951159000 mmap(0x7f4951181000, 1658880, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x28000) = 0x7f4951181000 mmap(0x7f4951316000, 360448, PROT_READ, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x1bd000) = 0x7f4951316000 mmap(0x7f495136e000, 24576, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x214000) = 0x7f495136e000 mmap(0x7f4951374000, 52816, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_ANONYMOUS, -1, 0) = 0x7f4951374000 close(3) = 0 openat(AT_FDCWD, "/lib/x86_64-linux-gnu/libcrypto.so.3", O_RDONLY|O_CLOEXEC) = 3 read(3, "\177ELF\2\1\1\0\0\0\0\0\0\0\0\0\3\0>\0\1\0\0\0\0\0\0\0\0\0\0\0"..., 832) = 832 newfstatat(3, "", {st_mode=S_IFREG|0644, st_size=4447536, ...}, AT_EMPTY_PATH) = 0 mmap(NULL, 4461760, PROT_READ, MAP_PRIVATE|MAP_DENYWRITE, 3, 0) = 0x7f4950d17000 mmap(0x7f4950dc9000, 2478080, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0xb2000) = 0x7f4950dc9000 mmap(0x7f4951026000, 860160, PROT_READ, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x30f000) = 0x7f4951026000 mmap(0x7f49510f8000, 385024, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x3e0000) = 0x7f49510f8000 mmap(0x7f4951156000, 9408, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_ANONYMOUS, -1, 0) = 0x7f4951156000 close(3) = 0 openat(AT_FDCWD, "/lib/x86_64-linux-gnu/libz.so.1", O_RDONLY|O_CLOEXEC) = 3 read(3, "\177ELF\2\1\1\3\0\0\0\0\0\0\0\0\3\0>\0\1\0\0\0\0\0\0\0\0\0\0\0"..., 832) = 832 newfstatat(3, "", {st_mode=S_IFREG|0644, st_size=108936, ...}, AT_EMPTY_PATH) = 0 mmap(NULL, 110776, PROT_READ, MAP_PRIVATE|MAP_DENYWRITE, 3, 0) = 0x7f4950cfb000 mprotect(0x7f4950cfd000, 98304, PROT_NONE) = 0 mmap(0x7f4950cfd000, 69632, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x2000) = 0x7f4950cfd000 mmap(0x7f4950d0e000, 24576, PROT_READ, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 
0x13000) = 0x7f4950d0e000 mmap(0x7f4950d15000, 8192, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x19000) = 0x7f4950d15000 close(3) = 0 openat(AT_FDCWD, "/lib/x86_64-linux-gnu/libudev.so.1", O_RDONLY|O_CLOEXEC) = 3 read(3, "\177ELF\2\1\1\0\0\0\0\0\0\0\0\0\3\0>\0\1\0\0\0\0\0\0\0\0\0\0\0"..., 832) = 832 newfstatat(3, "", {st_mode=S_IFREG|0644, st_size=166240, ...}, AT_EMPTY_PATH) = 0 mmap(NULL, 170272, PROT_READ, MAP_PRIVATE|MAP_DENYWRITE, 3, 0) = 0x7f4950cd1000 mprotect(0x7f4950cd5000, 147456, PROT_NONE) = 0 mmap(0x7f4950cd5000, 106496, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x4000) = 0x7f4950cd5000 mmap(0x7f4950cef000, 36864, PROT_READ, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x1e000) = 0x7f4950cef000 mmap(0x7f4950cf9000, 8192, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x27000) = 0x7f4950cf9000 close(3) = 0 openat(AT_FDCWD, "/lib/x86_64-linux-gnu/libtirpc.so.3", O_RDONLY|O_CLOEXEC) = 3 read(3, "\177ELF\2\1\1\0\0\0\0\0\0\0\0\0\3\0>\0\1\0\0\0\0\0\0\0\0\0\0\0"..., 832) = 832 newfstatat(3, "", {st_mode=S_IFREG|0644, st_size=182912, ...}, AT_EMPTY_PATH) = 0 mmap(NULL, 8192, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7f4950ccf000 mmap(NULL, 187256, PROT_READ, MAP_PRIVATE|MAP_DENYWRITE, 3, 0) = 0x7f4950ca1000 mprotect(0x7f4950ca8000, 151552, PROT_NONE) = 0 mmap(0x7f4950ca8000, 110592, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x7000) = 0x7f4950ca8000 mmap(0x7f4950cc3000, 36864, PROT_READ, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x22000) = 0x7f4950cc3000 mmap(0x7f4950ccd000, 8192, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x2b000) = 0x7f4950ccd000 close(3) = 0 openat(AT_FDCWD, "/lib/x86_64-linux-gnu/libgssapi_krb5.so.2", O_RDONLY|O_CLOEXEC) = 3 read(3, "\177ELF\2\1\1\0\0\0\0\0\0\0\0\0\3\0>\0\1\0\0\0\0\0\0\0\0\0\0\0"..., 832) = 832 newfstatat(3, "", {st_mode=S_IFREG|0644, st_size=338712, ...}, AT_EMPTY_PATH) = 0 mmap(NULL, 340960, PROT_READ, 
MAP_PRIVATE|MAP_DENYWRITE, 3, 0) = 0x7f4950c4d000 mprotect(0x7f4950c58000, 282624, PROT_NONE) = 0 mmap(0x7f4950c58000, 229376, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0xb000) = 0x7f4950c58000 mmap(0x7f4950c90000, 49152, PROT_READ, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x43000) = 0x7f4950c90000 mmap(0x7f4950c9d000, 16384, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x4f000) = 0x7f4950c9d000 close(3) = 0 openat(AT_FDCWD, "/lib/x86_64-linux-gnu/libkrb5.so.3", O_RDONLY|O_CLOEXEC) = 3 read(3, "\177ELF\2\1\1\0\0\0\0\0\0\0\0\0\3\0>\0\1\0\0\0\0\0\0\0\0\0\0\0"..., 832) = 832 newfstatat(3, "", {st_mode=S_IFREG|0644, st_size=828000, ...}, AT_EMPTY_PATH) = 0 mmap(NULL, 830576, PROT_READ, MAP_PRIVATE|MAP_DENYWRITE, 3, 0) = 0x7f4950b82000 mprotect(0x7f4950ba3000, 634880, PROT_NONE) = 0 mmap(0x7f4950ba3000, 380928, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x21000) = 0x7f4950ba3000 mmap(0x7f4950c00000, 249856, PROT_READ, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x7e000) = 0x7f4950c00000 mmap(0x7f4950c3e000, 61440, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0xbb000) = 0x7f4950c3e000 close(3) = 0 openat(AT_FDCWD, "/lib/x86_64-linux-gnu/libk5crypto.so.3", O_RDONLY|O_CLOEXEC) = 3 read(3, "\177ELF\2\1\1\0\0\0\0\0\0\0\0\0\3\0>\0\1\0\0\0\0\0\0\0\0\0\0\0"..., 832) = 832 newfstatat(3, "", {st_mode=S_IFREG|0644, st_size=182928, ...}, AT_EMPTY_PATH) = 0 mmap(NULL, 188472, PROT_READ, MAP_PRIVATE|MAP_DENYWRITE, 3, 0) = 0x7f4950b53000 mprotect(0x7f4950b57000, 163840, PROT_NONE) = 0 mmap(0x7f4950b57000, 110592, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x4000) = 0x7f4950b57000 mmap(0x7f4950b72000, 49152, PROT_READ, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x1f000) = 0x7f4950b72000 mmap(0x7f4950b7f000, 8192, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x2b000) = 0x7f4950b7f000 mmap(0x7f4950b81000, 56, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_ANONYMOUS, -1, 0) = 0x7f4950b81000 
close(3) = 0 openat(AT_FDCWD, "/lib/x86_64-linux-gnu/libcom_err.so.2", O_RDONLY|O_CLOEXEC) = 3 read(3, "\177ELF\2\1\1\0\0\0\0\0\0\0\0\0\3\0>\0\1\0\0\0\0\0\0\0\0\0\0\0"..., 832) = 832 newfstatat(3, "", {st_mode=S_IFREG|0644, st_size=18504, ...}, AT_EMPTY_PATH) = 0 mmap(NULL, 20552, PROT_READ, MAP_PRIVATE|MAP_DENYWRITE, 3, 0) = 0x7f4950b4d000 mmap(0x7f4950b4f000, 4096, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x2000) = 0x7f4950b4f000 mmap(0x7f4950b50000, 4096, PROT_READ, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x3000) = 0x7f4950b50000 mmap(0x7f4950b51000, 8192, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x3000) = 0x7f4950b51000 close(3) = 0 openat(AT_FDCWD, "/lib/x86_64-linux-gnu/libkrb5support.so.0", O_RDONLY|O_CLOEXEC) = 3 read(3, "\177ELF\2\1\1\0\0\0\0\0\0\0\0\0\3\0>\0\1\0\0\0\0\0\0\0\0\0\0\0"..., 832) = 832 newfstatat(3, "", {st_mode=S_IFREG|0644, st_size=52080, ...}, AT_EMPTY_PATH) = 0 mmap(NULL, 8192, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7f4950b4b000 mmap(NULL, 54224, PROT_READ, MAP_PRIVATE|MAP_DENYWRITE, 3, 0) = 0x7f4950b3d000 mprotect(0x7f4950b40000, 36864, PROT_NONE) = 0 mmap(0x7f4950b40000, 24576, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x3000) = 0x7f4950b40000 mmap(0x7f4950b46000, 8192, PROT_READ, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x9000) = 0x7f4950b46000 mmap(0x7f4950b49000, 8192, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0xb000) = 0x7f4950b49000 close(3) = 0 openat(AT_FDCWD, "/lib/x86_64-linux-gnu/libkeyutils.so.1", O_RDONLY|O_CLOEXEC) = 3 read(3, "\177ELF\2\1\1\0\0\0\0\0\0\0\0\0\3\0>\0\1\0\0\0\0\0\0\0\0\0\0\0"..., 832) = 832 newfstatat(3, "", {st_mode=S_IFREG|0644, st_size=22600, ...}, AT_EMPTY_PATH) = 0 mmap(NULL, 24592, PROT_READ, MAP_PRIVATE|MAP_DENYWRITE, 3, 0) = 0x7f4950b36000 mmap(0x7f4950b38000, 8192, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x2000) = 0x7f4950b38000 mmap(0x7f4950b3a000, 4096, PROT_READ, 
MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x4000) = 0x7f4950b3a000 mmap(0x7f4950b3b000, 8192, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x4000) = 0x7f4950b3b000 close(3) = 0 openat(AT_FDCWD, "/lib/x86_64-linux-gnu/libresolv.so.2", O_RDONLY|O_CLOEXEC) = 3 read(3, "\177ELF\2\1\1\0\0\0\0\0\0\0\0\0\3\0>\0\1\0\0\0\0\0\0\0\0\0\0\0"..., 832) = 832 newfstatat(3, "", {st_mode=S_IFREG|0644, st_size=68552, ...}, AT_EMPTY_PATH) = 0 mmap(NULL, 80456, PROT_READ, MAP_PRIVATE|MAP_DENYWRITE, 3, 0) = 0x7f4950b22000 mmap(0x7f4950b25000, 40960, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x3000) = 0x7f4950b25000 mmap(0x7f4950b2f000, 12288, PROT_READ, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0xd000) = 0x7f4950b2f000 mmap(0x7f4950b32000, 8192, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0xf000) = 0x7f4950b32000 mmap(0x7f4950b34000, 6728, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_ANONYMOUS, -1, 0) = 0x7f4950b34000 close(3) = 0 mmap(NULL, 8192, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7f4950b20000 mmap(NULL, 20480, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7f4950b1b000 arch_prctl(ARCH_SET_FS, 0x7f4950b1d7c0) = 0 set_tid_address(0x7f4950b1da90) = 2179784 set_robust_list(0x7f4950b1daa0, 24) = 0 rseq(0x7f4950b1e160, 0x20, 0, 0x53053053) = 0 mprotect(0x7f495136e000, 16384, PROT_READ) = 0 mprotect(0x7f4950b32000, 4096, PROT_READ) = 0 mprotect(0x7f4950b3b000, 4096, PROT_READ) = 0 mprotect(0x7f4950b49000, 4096, PROT_READ) = 0 mprotect(0x7f4950b51000, 4096, PROT_READ) = 0 mprotect(0x7f4950b7f000, 4096, PROT_READ) = 0 mprotect(0x7f4950c3e000, 53248, PROT_READ) = 0 mprotect(0x7f4950c9d000, 8192, PROT_READ) = 0 mprotect(0x7f4950ccd000, 4096, PROT_READ) = 0 mprotect(0x7f4950cf9000, 4096, PROT_READ) = 0 mprotect(0x7f4950d15000, 4096, PROT_READ) = 0 mprotect(0x7f49510f8000, 372736, PROT_READ) = 0 mprotect(0x7f4951388000, 4096, PROT_READ) = 0 mprotect(0x7f49513bb000, 20480, PROT_READ) = 0 
mprotect(0x7f49514a8000, 4096, PROT_READ) = 0 mprotect(0x7f49514c2000, 4096, PROT_READ) = 0 mprotect(0x7f49514d3000, 4096, PROT_READ) = 0 mprotect(0x7f49514f2000, 4096, PROT_READ) = 0 mprotect(0x7f4951559000, 8192, PROT_READ) = 0 mprotect(0x55d8cbcfc000, 8192, PROT_READ) = 0 mprotect(0x7f49515a4000, 8192, PROT_READ) = 0 prlimit64(0, RLIMIT_STACK, NULL, {rlim_cur=8192*1024, rlim_max=RLIM64_INFINITY}) = 0 munmap(0x7f4951560000, 40143) = 0 getrandom("\x5f\xb7\x70\xe0\x8f\xd8\xdc\x45", 8, GRND_NONBLOCK) = 8 brk(NULL) = 0x55d8cc9c5000 brk(0x55d8cc9e6000) = 0x55d8cc9e6000 openat(AT_FDCWD, "/usr/lib/locale/locale-archive", O_RDONLY|O_CLOEXEC) = 3 newfstatat(3, "", {st_mode=S_IFREG|0644, st_size=3048928, ...}, AT_EMPTY_PATH) = 0 mmap(NULL, 3048928, PROT_READ, MAP_PRIVATE, 3, 0) = 0x7f4950832000 close(3) = 0 access("/run/systemd/container", R_OK) = -1 ENOENT (No such file or directory) access("/sys/module/zfs", F_OK) = 0 openat(AT_FDCWD, "/dev/zfs", O_RDWR|O_CLOEXEC) = 3 close(3) = 0 openat(AT_FDCWD, "/usr/lib/x86_64-linux-gnu/gconv/gconv-modules.cache", O_RDONLY) = 3 newfstatat(3, "", {st_mode=S_IFREG|0644, st_size=27002, ...}, AT_EMPTY_PATH) = 0 mmap(NULL, 27002, PROT_READ, MAP_SHARED, 3, 0) = 0x7f4951563000 close(3) = 0 futex(0x7f4951373a6c, FUTEX_WAKE_PRIVATE, 2147483647) = 0 openat(AT_FDCWD, "/dev/zfs", O_RDWR|O_EXCL|O_CLOEXEC) = 3 openat(AT_FDCWD, "/proc/self/mounts", O_RDONLY|O_CLOEXEC) = 4 openat(AT_FDCWD, "/dev/zfs", O_RDWR|O_CLOEXEC) = 5 newfstatat(AT_FDCWD, "/sys/module/zfs/properties.dataset/redundant_metadata", {st_mode=S_IFDIR|0755, st_size=0, ...}, 0) = 0 newfstatat(AT_FDCWD, "/sys/module/zfs/properties.dataset/sync", {st_mode=S_IFDIR|0755, st_size=0, ...}, 0) = 0 newfstatat(AT_FDCWD, "/sys/module/zfs/properties.dataset/checksum", {st_mode=S_IFDIR|0755, st_size=0, ...}, 0) = 0 newfstatat(AT_FDCWD, "/sys/module/zfs/properties.dataset/dedup", {st_mode=S_IFDIR|0755, st_size=0, ...}, 0) = 0 newfstatat(AT_FDCWD, "/sys/module/zfs/properties.dataset/compression", 
{st_mode=S_IFDIR|0755, st_size=0, ...}, 0) = 0 newfstatat(AT_FDCWD, "/sys/module/zfs/properties.dataset/snapdir", {st_mode=S_IFDIR|0755, st_size=0, ...}, 0) = 0 newfstatat(AT_FDCWD, "/sys/module/zfs/properties.dataset/snapdev", {st_mode=S_IFDIR|0755, st_size=0, ...}, 0) = 0 newfstatat(AT_FDCWD, "/sys/module/zfs/properties.dataset/aclmode", {st_mode=S_IFDIR|0755, st_size=0, ...}, 0) = 0 newfstatat(AT_FDCWD, "/sys/module/zfs/properties.dataset/acltype", {st_mode=S_IFDIR|0755, st_size=0, ...}, 0) = 0 newfstatat(AT_FDCWD, "/sys/module/zfs/properties.dataset/aclinherit", {st_mode=S_IFDIR|0755, st_size=0, ...}, 0) = 0 newfstatat(AT_FDCWD, "/sys/module/zfs/properties.dataset/copies", {st_mode=S_IFDIR|0755, st_size=0, ...}, 0) = 0 newfstatat(AT_FDCWD, "/sys/module/zfs/properties.dataset/primarycache", {st_mode=S_IFDIR|0755, st_size=0, ...}, 0) = 0 newfstatat(AT_FDCWD, "/sys/module/zfs/properties.dataset/secondarycache", {st_mode=S_IFDIR|0755, st_size=0, ...}, 0) = 0 newfstatat(AT_FDCWD, "/sys/module/zfs/properties.dataset/logbias", {st_mode=S_IFDIR|0755, st_size=0, ...}, 0) = 0 newfstatat(AT_FDCWD, "/sys/module/zfs/properties.dataset/xattr", {st_mode=S_IFDIR|0755, st_size=0, ...}, 0) = 0 newfstatat(AT_FDCWD, "/sys/module/zfs/properties.dataset/dnodesize", {st_mode=S_IFDIR|0755, st_size=0, ...}, 0) = 0 newfstatat(AT_FDCWD, "/sys/module/zfs/properties.dataset/volmode", {st_mode=S_IFDIR|0755, st_size=0, ...}, 0) = 0 newfstatat(AT_FDCWD, "/sys/module/zfs/properties.dataset/atime", {st_mode=S_IFDIR|0755, st_size=0, ...}, 0) = 0 newfstatat(AT_FDCWD, "/sys/module/zfs/properties.dataset/relatime", {st_mode=S_IFDIR|0755, st_size=0, ...}, 0) = 0 newfstatat(AT_FDCWD, "/sys/module/zfs/properties.dataset/devices", {st_mode=S_IFDIR|0755, st_size=0, ...}, 0) = 0 newfstatat(AT_FDCWD, "/sys/module/zfs/properties.dataset/exec", {st_mode=S_IFDIR|0755, st_size=0, ...}, 0) = 0 newfstatat(AT_FDCWD, "/sys/module/zfs/properties.dataset/setuid", {st_mode=S_IFDIR|0755, st_size=0, ...}, 0) = 0 
newfstatat(AT_FDCWD, "/sys/module/zfs/properties.dataset/readonly", {st_mode=S_IFDIR|0755, st_size=0, ...}, 0) = 0 newfstatat(AT_FDCWD, "/sys/module/zfs/properties.dataset/zoned", {st_mode=S_IFDIR|0755, st_size=0, ...}, 0) = 0 newfstatat(AT_FDCWD, "/sys/module/zfs/properties.dataset/vscan", {st_mode=S_IFDIR|0755, st_size=0, ...}, 0) = 0 newfstatat(AT_FDCWD, "/sys/module/zfs/properties.dataset/nbmand", {st_mode=S_IFDIR|0755, st_size=0, ...}, 0) = 0 newfstatat(AT_FDCWD, "/sys/module/zfs/properties.dataset/overlay", {st_mode=S_IFDIR|0755, st_size=0, ...}, 0) = 0 newfstatat(AT_FDCWD, "/sys/module/zfs/properties.dataset/version", {st_mode=S_IFDIR|0755, st_size=0, ...}, 0) = 0 newfstatat(AT_FDCWD, "/sys/module/zfs/properties.dataset/canmount", {st_mode=S_IFDIR|0755, st_size=0, ...}, 0) = 0 newfstatat(AT_FDCWD, "/sys/module/zfs/properties.dataset/mounted", {st_mode=S_IFDIR|0755, st_size=0, ...}, 0) = 0 newfstatat(AT_FDCWD, "/sys/module/zfs/properties.dataset/defer_destroy", {st_mode=S_IFDIR|0755, st_size=0, ...}, 0) = 0 newfstatat(AT_FDCWD, "/sys/module/zfs/properties.dataset/keystatus", {st_mode=S_IFDIR|0755, st_size=0, ...}, 0) = 0 newfstatat(AT_FDCWD, "/sys/module/zfs/properties.dataset/normalization", {st_mode=S_IFDIR|0755, st_size=0, ...}, 0) = 0 newfstatat(AT_FDCWD, "/sys/module/zfs/properties.dataset/casesensitivity", {st_mode=S_IFDIR|0755, st_size=0, ...}, 0) = 0 newfstatat(AT_FDCWD, "/sys/module/zfs/properties.dataset/keyformat", {st_mode=S_IFDIR|0755, st_size=0, ...}, 0) = 0 newfstatat(AT_FDCWD, "/sys/module/zfs/properties.dataset/encryption", {st_mode=S_IFDIR|0755, st_size=0, ...}, 0) = 0 newfstatat(AT_FDCWD, "/sys/module/zfs/properties.dataset/utf8only", {st_mode=S_IFDIR|0755, st_size=0, ...}, 0) = 0 newfstatat(AT_FDCWD, "/sys/module/zfs/properties.dataset/origin", {st_mode=S_IFDIR|0755, st_size=0, ...}, 0) = 0 newfstatat(AT_FDCWD, "/sys/module/zfs/properties.dataset/clones", {st_mode=S_IFDIR|0755, st_size=0, ...}, 0) = 0 newfstatat(AT_FDCWD, 
"/sys/module/zfs/properties.dataset/mountpoint", {st_mode=S_IFDIR|0755, st_size=0, ...}, 0) = 0 newfstatat(AT_FDCWD, "/sys/module/zfs/properties.dataset/sharenfs", {st_mode=S_IFDIR|0755, st_size=0, ...}, 0) = 0 newfstatat(AT_FDCWD, "/sys/module/zfs/properties.dataset/type", {st_mode=S_IFDIR|0755, st_size=0, ...}, 0) = 0 newfstatat(AT_FDCWD, "/sys/module/zfs/properties.dataset/sharesmb", {st_mode=S_IFDIR|0755, st_size=0, ...}, 0) = 0 newfstatat(AT_FDCWD, "/sys/module/zfs/properties.dataset/mlslabel", {st_mode=S_IFDIR|0755, st_size=0, ...}, 0) = 0 newfstatat(AT_FDCWD, "/sys/module/zfs/properties.dataset/context", {st_mode=S_IFDIR|0755, st_size=0, ...}, 0) = 0 newfstatat(AT_FDCWD, "/sys/module/zfs/properties.dataset/fscontext", {st_mode=S_IFDIR|0755, st_size=0, ...}, 0) = 0 newfstatat(AT_FDCWD, "/sys/module/zfs/properties.dataset/defcontext", {st_mode=S_IFDIR|0755, st_size=0, ...}, 0) = 0 newfstatat(AT_FDCWD, "/sys/module/zfs/properties.dataset/rootcontext", {st_mode=S_IFDIR|0755, st_size=0, ...}, 0) = 0 newfstatat(AT_FDCWD, "/sys/module/zfs/properties.dataset/receive_resume_token", {st_mode=S_IFDIR|0755, st_size=0, ...}, 0) = 0 newfstatat(AT_FDCWD, "/sys/module/zfs/properties.dataset/encryptionroot", {st_mode=S_IFDIR|0755, st_size=0, ...}, 0) = 0 newfstatat(AT_FDCWD, "/sys/module/zfs/properties.dataset/keylocation", {st_mode=S_IFDIR|0755, st_size=0, ...}, 0) = 0 newfstatat(AT_FDCWD, "/sys/module/zfs/properties.dataset/redact_snaps", {st_mode=S_IFDIR|0755, st_size=0, ...}, 0) = 0 newfstatat(AT_FDCWD, "/sys/module/zfs/properties.dataset/used", {st_mode=S_IFDIR|0755, st_size=0, ...}, 0) = 0 newfstatat(AT_FDCWD, "/sys/module/zfs/properties.dataset/available", {st_mode=S_IFDIR|0755, st_size=0, ...}, 0) = 0 newfstatat(AT_FDCWD, "/sys/module/zfs/properties.dataset/referenced", {st_mode=S_IFDIR|0755, st_size=0, ...}, 0) = 0 newfstatat(AT_FDCWD, "/sys/module/zfs/properties.dataset/compressratio", {st_mode=S_IFDIR|0755, st_size=0, ...}, 0) = 0 newfstatat(AT_FDCWD, 
"/sys/module/zfs/properties.dataset/refcompressratio", {st_mode=S_IFDIR|0755, st_size=0, ...}, 0) = 0 newfstatat(AT_FDCWD, "/sys/module/zfs/properties.dataset/volblocksize", {st_mode=S_IFDIR|0755, st_size=0, ...}, 0) = 0 newfstatat(AT_FDCWD, "/sys/module/zfs/properties.dataset/usedbysnapshots", {st_mode=S_IFDIR|0755, st_size=0, ...}, 0) = 0 newfstatat(AT_FDCWD, "/sys/module/zfs/properties.dataset/usedbydataset", {st_mode=S_IFDIR|0755, st_size=0, ...}, 0) = 0 newfstatat(AT_FDCWD, "/sys/module/zfs/properties.dataset/usedbychildren", {st_mode=S_IFDIR|0755, st_size=0, ...}, 0) = 0 newfstatat(AT_FDCWD, "/sys/module/zfs/properties.dataset/usedbyrefreservation", {st_mode=S_IFDIR|0755, st_size=0, ...}, 0) = 0 newfstatat(AT_FDCWD, "/sys/module/zfs/properties.dataset/userrefs", {st_mode=S_IFDIR|0755, st_size=0, ...}, 0) = 0 newfstatat(AT_FDCWD, "/sys/module/zfs/properties.dataset/written", {st_mode=S_IFDIR|0755, st_size=0, ...}, 0) = 0 newfstatat(AT_FDCWD, "/sys/module/zfs/properties.dataset/logicalused", {st_mode=S_IFDIR|0755, st_size=0, ...}, 0) = 0 newfstatat(AT_FDCWD, "/sys/module/zfs/properties.dataset/logicalreferenced", {st_mode=S_IFDIR|0755, st_size=0, ...}, 0) = 0 newfstatat(AT_FDCWD, "/sys/module/zfs/properties.dataset/filesystem_count", {st_mode=S_IFDIR|0755, st_size=0, ...}, 0) = 0 newfstatat(AT_FDCWD, "/sys/module/zfs/properties.dataset/snapshot_count", {st_mode=S_IFDIR|0755, st_size=0, ...}, 0) = 0 newfstatat(AT_FDCWD, "/sys/module/zfs/properties.dataset/guid", {st_mode=S_IFDIR|0755, st_size=0, ...}, 0) = 0 newfstatat(AT_FDCWD, "/sys/module/zfs/properties.dataset/createtxg", {st_mode=S_IFDIR|0755, st_size=0, ...}, 0) = 0 newfstatat(AT_FDCWD, "/sys/module/zfs/properties.dataset/pbkdf2iters", {st_mode=S_IFDIR|0755, st_size=0, ...}, 0) = 0 newfstatat(AT_FDCWD, "/sys/module/zfs/properties.dataset/objsetid", {st_mode=S_IFDIR|0755, st_size=0, ...}, 0) = 0 newfstatat(AT_FDCWD, "/sys/module/zfs/properties.dataset/quota", {st_mode=S_IFDIR|0755, st_size=0, ...}, 0) = 0 
newfstatat(AT_FDCWD, "/sys/module/zfs/properties.dataset/reservation", {st_mode=S_IFDIR|0755, st_size=0, ...}, 0) = 0 newfstatat(AT_FDCWD, "/sys/module/zfs/properties.dataset/volsize", {st_mode=S_IFDIR|0755, st_size=0, ...}, 0) = 0 newfstatat(AT_FDCWD, "/sys/module/zfs/properties.dataset/refquota", {st_mode=S_IFDIR|0755, st_size=0, ...}, 0) = 0 newfstatat(AT_FDCWD, "/sys/module/zfs/properties.dataset/refreservation", {st_mode=S_IFDIR|0755, st_size=0, ...}, 0) = 0 newfstatat(AT_FDCWD, "/sys/module/zfs/properties.dataset/filesystem_limit", {st_mode=S_IFDIR|0755, st_size=0, ...}, 0) = 0 newfstatat(AT_FDCWD, "/sys/module/zfs/properties.dataset/snapshot_limit", {st_mode=S_IFDIR|0755, st_size=0, ...}, 0) = 0 newfstatat(AT_FDCWD, "/sys/module/zfs/properties.dataset/recordsize", {st_mode=S_IFDIR|0755, st_size=0, ...}, 0) = 0 newfstatat(AT_FDCWD, "/sys/module/zfs/properties.dataset/special_small_blocks", {st_mode=S_IFDIR|0755, st_size=0, ...}, 0) = 0 newfstatat(AT_FDCWD, "/sys/module/zfs/properties.dataset/numclones", {st_mode=S_IFDIR|0755, st_size=0, ...}, 0) = 0 newfstatat(AT_FDCWD, "/sys/module/zfs/properties.dataset/name", {st_mode=S_IFDIR|0755, st_size=0, ...}, 0) = 0 newfstatat(AT_FDCWD, "/sys/module/zfs/properties.dataset/iscsioptions", {st_mode=S_IFDIR|0755, st_size=0, ...}, 0) = 0 newfstatat(AT_FDCWD, "/sys/module/zfs/properties.dataset/stmf_sbd_lu", {st_mode=S_IFDIR|0755, st_size=0, ...}, 0) = 0 newfstatat(AT_FDCWD, "/sys/module/zfs/properties.dataset/useraccounting", {st_mode=S_IFDIR|0755, st_size=0, ...}, 0) = 0 newfstatat(AT_FDCWD, "/sys/module/zfs/properties.dataset/unique", {st_mode=S_IFDIR|0755, st_size=0, ...}, 0) = 0 newfstatat(AT_FDCWD, "/sys/module/zfs/properties.dataset/inconsistent", {st_mode=S_IFDIR|0755, st_size=0, ...}, 0) = 0 newfstatat(AT_FDCWD, "/sys/module/zfs/properties.dataset/ivsetguid", {st_mode=S_IFDIR|0755, st_size=0, ...}, 0) = 0 newfstatat(AT_FDCWD, "/sys/module/zfs/properties.dataset/prevsnap", {st_mode=S_IFDIR|0755, st_size=0, ...}, 0) 
= 0 newfstatat(AT_FDCWD, "/sys/module/zfs/properties.dataset/pbkdf2salt", {st_mode=S_IFDIR|0755, st_size=0, ...}, 0) = 0 newfstatat(AT_FDCWD, "/sys/module/zfs/properties.dataset/keyguid", {st_mode=S_IFDIR|0755, st_size=0, ...}, 0) = 0 newfstatat(AT_FDCWD, "/sys/module/zfs/properties.dataset/redacted", {st_mode=S_IFDIR|0755, st_size=0, ...}, 0) = 0 newfstatat(AT_FDCWD, "/sys/module/zfs/properties.dataset/remaptxg", {st_mode=S_IFDIR|0755, st_size=0, ...}, 0) = 0 newfstatat(AT_FDCWD, "/sys/module/zfs/properties.dataset/creation", {st_mode=S_IFDIR|0755, st_size=0, ...}, 0) = 0 newfstatat(AT_FDCWD, "/sys/module/zfs/properties.pool/altroot", {st_mode=S_IFDIR|0755, st_size=0, ...}, 0) = 0 newfstatat(AT_FDCWD, "/sys/module/zfs/properties.pool/bootfs", {st_mode=S_IFDIR|0755, st_size=0, ...}, 0) = 0 newfstatat(AT_FDCWD, "/sys/module/zfs/properties.pool/cachefile", {st_mode=S_IFDIR|0755, st_size=0, ...}, 0) = 0 newfstatat(AT_FDCWD, "/sys/module/zfs/properties.pool/comment", {st_mode=S_IFDIR|0755, st_size=0, ...}, 0) = 0 newfstatat(AT_FDCWD, "/sys/module/zfs/properties.pool/compatibility", {st_mode=S_IFDIR|0755, st_size=0, ...}, 0) = 0 newfstatat(AT_FDCWD, "/sys/module/zfs/properties.pool/size", {st_mode=S_IFDIR|0755, st_size=0, ...}, 0) = 0 newfstatat(AT_FDCWD, "/sys/module/zfs/properties.pool/free", {st_mode=S_IFDIR|0755, st_size=0, ...}, 0) = 0 newfstatat(AT_FDCWD, "/sys/module/zfs/properties.pool/freeing", {st_mode=S_IFDIR|0755, st_size=0, ...}, 0) = 0 newfstatat(AT_FDCWD, "/sys/module/zfs/properties.pool/checkpoint", {st_mode=S_IFDIR|0755, st_size=0, ...}, 0) = 0 newfstatat(AT_FDCWD, "/sys/module/zfs/properties.pool/leaked", {st_mode=S_IFDIR|0755, st_size=0, ...}, 0) = 0 newfstatat(AT_FDCWD, "/sys/module/zfs/properties.pool/allocated", {st_mode=S_IFDIR|0755, st_size=0, ...}, 0) = 0 newfstatat(AT_FDCWD, "/sys/module/zfs/properties.pool/expandsize", {st_mode=S_IFDIR|0755, st_size=0, ...}, 0) = 0 newfstatat(AT_FDCWD, "/sys/module/zfs/properties.pool/fragmentation", 
{st_mode=S_IFDIR|0755, st_size=0, ...}, 0) = 0 newfstatat(AT_FDCWD, "/sys/module/zfs/properties.pool/capacity", {st_mode=S_IFDIR|0755, st_size=0, ...}, 0) = 0 newfstatat(AT_FDCWD, "/sys/module/zfs/properties.pool/guid", {st_mode=S_IFDIR|0755, st_size=0, ...}, 0) = 0 newfstatat(AT_FDCWD, "/sys/module/zfs/properties.pool/load_guid", {st_mode=S_IFDIR|0755, st_size=0, ...}, 0) = 0 newfstatat(AT_FDCWD, "/sys/module/zfs/properties.pool/health", {st_mode=S_IFDIR|0755, st_size=0, ...}, 0) = 0 newfstatat(AT_FDCWD, "/sys/module/zfs/properties.pool/dedupratio", {st_mode=S_IFDIR|0755, st_size=0, ...}, 0) = 0 newfstatat(AT_FDCWD, "/sys/module/zfs/properties.pool/version", {st_mode=S_IFDIR|0755, st_size=0, ...}, 0) = 0 newfstatat(AT_FDCWD, "/sys/module/zfs/properties.pool/ashift", {st_mode=S_IFDIR|0755, st_size=0, ...}, 0) = 0 newfstatat(AT_FDCWD, "/sys/module/zfs/properties.pool/delegation", {st_mode=S_IFDIR|0755, st_size=0, ...}, 0) = 0 newfstatat(AT_FDCWD, "/sys/module/zfs/properties.pool/autoreplace", {st_mode=S_IFDIR|0755, st_size=0, ...}, 0) = 0 newfstatat(AT_FDCWD, "/sys/module/zfs/properties.pool/listsnapshots", {st_mode=S_IFDIR|0755, st_size=0, ...}, 0) = 0 newfstatat(AT_FDCWD, "/sys/module/zfs/properties.pool/autoexpand", {st_mode=S_IFDIR|0755, st_size=0, ...}, 0) = 0 newfstatat(AT_FDCWD, "/sys/module/zfs/properties.pool/readonly", {st_mode=S_IFDIR|0755, st_size=0, ...}, 0) = 0 newfstatat(AT_FDCWD, "/sys/module/zfs/properties.pool/multihost", {st_mode=S_IFDIR|0755, st_size=0, ...}, 0) = 0 newfstatat(AT_FDCWD, "/sys/module/zfs/properties.pool/failmode", {st_mode=S_IFDIR|0755, st_size=0, ...}, 0) = 0 newfstatat(AT_FDCWD, "/sys/module/zfs/properties.pool/autotrim", {st_mode=S_IFDIR|0755, st_size=0, ...}, 0) = 0 newfstatat(AT_FDCWD, "/sys/module/zfs/properties.pool/name", {st_mode=S_IFDIR|0755, st_size=0, ...}, 0) = 0 newfstatat(AT_FDCWD, "/sys/module/zfs/properties.pool/maxblocksize", {st_mode=S_IFDIR|0755, st_size=0, ...}, 0) = 0 newfstatat(AT_FDCWD, 
"/sys/module/zfs/properties.pool/tname", {st_mode=S_IFDIR|0755, st_size=0, ...}, 0) = 0 newfstatat(AT_FDCWD, "/sys/module/zfs/properties.pool/maxdnodesize", {st_mode=S_IFDIR|0755, st_size=0, ...}, 0) = 0 newfstatat(AT_FDCWD, "/sys/module/zfs/properties.pool/dedupditto", {st_mode=S_IFDIR|0755, st_size=0, ...}, 0) = 0 newfstatat(AT_FDCWD, "/sys/module/zfs/features.pool/com.delphix:async_destroy", {st_mode=S_IFDIR|0755, st_size=0, ...}, 0) = 0 newfstatat(AT_FDCWD, "/sys/module/zfs/features.pool/com.delphix:empty_bpobj", {st_mode=S_IFDIR|0755, st_size=0, ...}, 0) = 0 newfstatat(AT_FDCWD, "/sys/module/zfs/features.pool/org.illumos:lz4_compress", {st_mode=S_IFDIR|0755, st_size=0, ...}, 0) = 0 newfstatat(AT_FDCWD, "/sys/module/zfs/features.pool/com.joyent:multi_vdev_crash_dump", {st_mode=S_IFDIR|0755, st_size=0, ...}, 0) = 0 newfstatat(AT_FDCWD, "/sys/module/zfs/features.pool/com.delphix:spacemap_histogram", {st_mode=S_IFDIR|0755, st_size=0, ...}, 0) = 0 newfstatat(AT_FDCWD, "/sys/module/zfs/features.pool/com.delphix:enabled_txg", {st_mode=S_IFDIR|0755, st_size=0, ...}, 0) = 0 newfstatat(AT_FDCWD, "/sys/module/zfs/features.pool/com.delphix:hole_birth", {st_mode=S_IFDIR|0755, st_size=0, ...}, 0) = 0 newfstatat(AT_FDCWD, "/sys/module/zfs/features.pool/com.delphix:zpool_checkpoint", {st_mode=S_IFDIR|0755, st_size=0, ...}, 0) = 0 newfstatat(AT_FDCWD, "/sys/module/zfs/features.pool/com.delphix:spacemap_v2", {st_mode=S_IFDIR|0755, st_size=0, ...}, 0) = 0 newfstatat(AT_FDCWD, "/sys/module/zfs/features.pool/com.delphix:extensible_dataset", {st_mode=S_IFDIR|0755, st_size=0, ...}, 0) = 0 newfstatat(AT_FDCWD, "/sys/module/zfs/features.pool/com.delphix:bookmarks", {st_mode=S_IFDIR|0755, st_size=0, ...}, 0) = 0 newfstatat(AT_FDCWD, "/sys/module/zfs/features.pool/com.joyent:filesystem_limits", {st_mode=S_IFDIR|0755, st_size=0, ...}, 0) = 0 newfstatat(AT_FDCWD, "/sys/module/zfs/features.pool/com.delphix:embedded_data", {st_mode=S_IFDIR|0755, st_size=0, ...}, 0) = 0 newfstatat(AT_FDCWD, 
"/sys/module/zfs/features.pool/com.delphix:livelist", {st_mode=S_IFDIR|0755, st_size=0, ...}, 0) = 0 newfstatat(AT_FDCWD, "/sys/module/zfs/features.pool/com.delphix:log_spacemap", {st_mode=S_IFDIR|0755, st_size=0, ...}, 0) = 0 newfstatat(AT_FDCWD, "/sys/module/zfs/features.pool/org.open-zfs:large_blocks", {st_mode=S_IFDIR|0755, st_size=0, ...}, 0) = 0 newfstatat(AT_FDCWD, "/sys/module/zfs/features.pool/org.zfsonlinux:large_dnode", {st_mode=S_IFDIR|0755, st_size=0, ...}, 0) = 0 newfstatat(AT_FDCWD, "/sys/module/zfs/features.pool/org.illumos:sha512", {st_mode=S_IFDIR|0755, st_size=0, ...}, 0) = 0 newfstatat(AT_FDCWD, "/sys/module/zfs/features.pool/org.illumos:skein", {st_mode=S_IFDIR|0755, st_size=0, ...}, 0) = 0 newfstatat(AT_FDCWD, "/sys/module/zfs/features.pool/org.illumos:edonr", {st_mode=S_IFDIR|0755, st_size=0, ...}, 0) = 0 newfstatat(AT_FDCWD, "/sys/module/zfs/features.pool/com.delphix:redaction_bookmarks", {st_mode=S_IFDIR|0755, st_size=0, ...}, 0) = 0 newfstatat(AT_FDCWD, "/sys/module/zfs/features.pool/com.delphix:redacted_datasets", {st_mode=S_IFDIR|0755, st_size=0, ...}, 0) = 0 newfstatat(AT_FDCWD, "/sys/module/zfs/features.pool/com.delphix:bookmark_written", {st_mode=S_IFDIR|0755, st_size=0, ...}, 0) = 0 newfstatat(AT_FDCWD, "/sys/module/zfs/features.pool/com.delphix:device_removal", {st_mode=S_IFDIR|0755, st_size=0, ...}, 0) = 0 newfstatat(AT_FDCWD, "/sys/module/zfs/features.pool/com.delphix:obsolete_counts", {st_mode=S_IFDIR|0755, st_size=0, ...}, 0) = 0 newfstatat(AT_FDCWD, "/sys/module/zfs/features.pool/org.zfsonlinux:userobj_accounting", {st_mode=S_IFDIR|0755, st_size=0, ...}, 0) = 0 newfstatat(AT_FDCWD, "/sys/module/zfs/features.pool/com.datto:bookmark_v2", {st_mode=S_IFDIR|0755, st_size=0, ...}, 0) = 0 newfstatat(AT_FDCWD, "/sys/module/zfs/features.pool/com.datto:encryption", {st_mode=S_IFDIR|0755, st_size=0, ...}, 0) = 0 newfstatat(AT_FDCWD, "/sys/module/zfs/features.pool/org.zfsonlinux:project_quota", {st_mode=S_IFDIR|0755, st_size=0, ...}, 0) = 
0 newfstatat(AT_FDCWD, "/sys/module/zfs/features.pool/org.zfsonlinux:allocation_classes", {st_mode=S_IFDIR|0755, st_size=0, ...}, 0) = 0 newfstatat(AT_FDCWD, "/sys/module/zfs/features.pool/com.datto:resilver_defer", {st_mode=S_IFDIR|0755, st_size=0, ...}, 0) = 0 newfstatat(AT_FDCWD, "/sys/module/zfs/features.pool/org.openzfs:device_rebuild", {st_mode=S_IFDIR|0755, st_size=0, ...}, 0) = 0 newfstatat(AT_FDCWD, "/sys/module/zfs/features.pool/org.freebsd:zstd_compress", {st_mode=S_IFDIR|0755, st_size=0, ...}, 0) = 0 newfstatat(AT_FDCWD, "/sys/module/zfs/features.pool/org.openzfs:draid", {st_mode=S_IFDIR|0755, st_size=0, ...}, 0) = 0 mmap(NULL, 266240, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7f49507f1000 ioctl(3, ZFS_IOC_POOL_CONFIGS, 0x7ffd987b2000) = 0 munmap(0x7f49507f1000, 266240) = 0 ioctl(3, ZFS_IOC_POOL_STATS ```
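The strace ends inside the `ZFS_IOC_POOL_STATS` ioctl, i.e. the process is blocked in the kernel, not in userspace. The following is a hedged sketch of commands commonly used to collect more state for a hang report like this one; it assumes Linux with root access, and the `/proc/spl` path assumes the OpenZFS SPL module exposes its kstat interface (not confirmed by this report):

```shell
# Sketch: collect kernel-side state of a hung zpool/zfs command.
# Assumptions: Linux, root, OpenZFS modules loaded.

# 1) Kernel stack of the hung process -- shows which kernel function it sleeps in.
pid=$(pgrep -o -x zpool 2>/dev/null || true)
if [ -n "$pid" ]; then
  cat "/proc/$pid/stack" 2>/dev/null || true
fi

# 2) Dump stacks of all blocked (D-state) tasks into the kernel log (needs root):
#      echo w > /proc/sysrq-trigger && dmesg | tail -n 200

# 3) OpenZFS internal debug buffer, if exposed by the loaded modules:
#      cat /proc/spl/kstat/zfs/dbgmsg

status=collected
echo "$status"
```

Attaching the output of (1) and (2) to a report usually lets developers see whether the hang is in a txg sync, a stuck vdev, or an ioctl waiting on a pool lock.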
# zpool hangs

### System information

Type | Version/Name
--- | ---
Distribution Name | Ubuntu
Distribution Version | 22.04.1 LTS
Kernel Version | 5.15.0-50
Architecture | x86_64
OpenZFS Version | 2.1.4

### Describe the problem you're observing

The command `zpool status` hangs. What additional info should I provide for debugging? How can I reboot the system safely?

Below you can find the strace of `zpool status`, where the command has been sitting for several days now. From server logs I deduced that other invocations must have been waiting for more than 30 days.

### Describe how to reproduce the problem

### Include any warning/errors/backtraces from the system logs

```
~> strace zpool status
execve("/usr/sbin/zpool", ["zpool", "status"], 0x7ffc4bb70ab8 /* 20 vars */) = 0 brk(NULL) = 0x55d8cc9c5000 arch_prctl(0x3001 /* ARCH_??? */, 0x7ffd987b8e60) = -1 EINVAL (Invalid argument) mmap(NULL, 8192, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7f495156a000 access("/etc/ld.so.preload", R_OK) = -1 ENOENT (No such file or directory) openat(AT_FDCWD, "/usr/local/lib/R/site-library/ospsuite/lib/glibc-hwcaps/x86-64-v3/libzfs.so.4", O_RDONLY|O_CLOEXEC) = -1 ENOENT (No such file or directory) newfstatat(AT_FDCWD, "/usr/local/lib/R/site-library/ospsuite/lib/glibc-hwcaps/x86-64-v3", 0x7ffd987b8080, 0) = -1 ENOENT (No such file or directory) openat(AT_FDCWD, "/usr/local/lib/R/site-library/ospsuite/lib/glibc-hwcaps/x86-64-v2/libzfs.so.4", O_RDONLY|O_CLOEXEC) = -1 ENOENT (No such file or directory) newfstatat(AT_FDCWD, "/usr/local/lib/R/site-library/ospsuite/lib/glibc-hwcaps/x86-64-v2", 0x7ffd987b8080, 0) = -1 ENOENT (No such file or directory) openat(AT_FDCWD, "/usr/local/lib/R/site-library/ospsuite/lib/tls/haswell/x86_64/libzfs.so.4", O_RDONLY|O_CLOEXEC) = -1 ENOENT (No such file or directory) newfstatat(AT_FDCWD, "/usr/local/lib/R/site-library/ospsuite/lib/tls/haswell/x86_64", 0x7ffd987b8080, 0) = -1 ENOENT (No such file or directory)
openat(AT_FDCWD, "/usr/local/lib/R/site-library/ospsuite/lib/tls/haswell/libzfs.so.4", O_RDONLY|O_CLOEXEC) = -1 ENOENT (No such file or directory) newfstatat(AT_FDCWD, "/usr/local/lib/R/site-library/ospsuite/lib/tls/haswell", 0x7ffd987b8080, 0) = -1 ENOENT (No such file or directory) openat(AT_FDCWD, "/usr/local/lib/R/site-library/ospsuite/lib/tls/x86_64/libzfs.so.4", O_RDONLY|O_CLOEXEC) = -1 ENOENT (No such file or directory) newfstatat(AT_FDCWD, "/usr/local/lib/R/site-library/ospsuite/lib/tls/x86_64", 0x7ffd987b8080, 0) = -1 ENOENT (No such file or directory) openat(AT_FDCWD, "/usr/local/lib/R/site-library/ospsuite/lib/tls/libzfs.so.4", O_RDONLY|O_CLOEXEC) = -1 ENOENT (No such file or directory) newfstatat(AT_FDCWD, "/usr/local/lib/R/site-library/ospsuite/lib/tls", 0x7ffd987b8080, 0) = -1 ENOENT (No such file or directory) openat(AT_FDCWD, "/usr/local/lib/R/site-library/ospsuite/lib/haswell/x86_64/libzfs.so.4", O_RDONLY|O_CLOEXEC) = -1 ENOENT (No such file or directory) newfstatat(AT_FDCWD, "/usr/local/lib/R/site-library/ospsuite/lib/haswell/x86_64", 0x7ffd987b8080, 0) = -1 ENOENT (No such file or directory) openat(AT_FDCWD, "/usr/local/lib/R/site-library/ospsuite/lib/haswell/libzfs.so.4", O_RDONLY|O_CLOEXEC) = -1 ENOENT (No such file or directory) newfstatat(AT_FDCWD, "/usr/local/lib/R/site-library/ospsuite/lib/haswell", 0x7ffd987b8080, 0) = -1 ENOENT (No such file or directory) openat(AT_FDCWD, "/usr/local/lib/R/site-library/ospsuite/lib/x86_64/libzfs.so.4", O_RDONLY|O_CLOEXEC) = -1 ENOENT (No such file or directory) newfstatat(AT_FDCWD, "/usr/local/lib/R/site-library/ospsuite/lib/x86_64", 0x7ffd987b8080, 0) = -1 ENOENT (No such file or directory) openat(AT_FDCWD, "/usr/local/lib/R/site-library/ospsuite/lib/libzfs.so.4", O_RDONLY|O_CLOEXEC) = -1 ENOENT (No such file or directory) newfstatat(AT_FDCWD, "/usr/local/lib/R/site-library/ospsuite/lib", 0x7ffd987b8080, 0) = -1 ENOENT (No such file or directory) openat(AT_FDCWD, "/etc/ld.so.cache", O_RDONLY|O_CLOEXEC) = 
3 newfstatat(3, "", {st_mode=S_IFREG|0644, st_size=40143, ...}, AT_EMPTY_PATH) = 0 mmap(NULL, 40143, PROT_READ, MAP_PRIVATE, 3, 0) = 0x7f4951560000 close(3) = 0 openat(AT_FDCWD, "/lib/x86_64-linux-gnu/libzfs.so.4", O_RDONLY|O_CLOEXEC) = 3 read(3, "\177ELF\2\1\1\0\0\0\0\0\0\0\0\0\3\0>\0\1\0\0\0\0\0\0\0\0\0\0\0"..., 832) = 832 newfstatat(3, "", {st_mode=S_IFREG|0644, st_size=420624, ...}, AT_EMPTY_PATH) = 0 mmap(NULL, 435944, PROT_READ, MAP_PRIVATE|MAP_DENYWRITE, 3, 0) = 0x7f49514f5000 mmap(0x7f4951502000, 274432, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0xd000) = 0x7f4951502000 mmap(0x7f4951545000, 81920, PROT_READ, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x50000) = 0x7f4951545000 mmap(0x7f4951559000, 16384, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x63000) = 0x7f4951559000 mmap(0x7f495155d000, 9960, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_ANONYMOUS, -1, 0) = 0x7f495155d000 close(3) = 0 openat(AT_FDCWD, "/lib/x86_64-linux-gnu/libzfs_core.so.3", O_RDONLY|O_CLOEXEC) = 3 read(3, "\177ELF\2\1\1\0\0\0\0\0\0\0\0\0\3\0>\0\1\0\0\0\0\0\0\0\0\0\0\0"..., 832) = 832 newfstatat(3, "", {st_mode=S_IFREG|0644, st_size=118560, ...}, AT_EMPTY_PATH) = 0 mmap(NULL, 124776, PROT_READ, MAP_PRIVATE|MAP_DENYWRITE, 3, 0) = 0x7f49514d6000 mmap(0x7f49514dd000, 65536, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x7000) = 0x7f49514dd000 mmap(0x7f49514ed000, 20480, PROT_READ, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x17000) = 0x7f49514ed000 mmap(0x7f49514f2000, 8192, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x1b000) = 0x7f49514f2000 mmap(0x7f49514f4000, 1896, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_ANONYMOUS, -1, 0) = 0x7f49514f4000 close(3) = 0 openat(AT_FDCWD, "/lib/x86_64-linux-gnu/libuutil.so.3", O_RDONLY|O_CLOEXEC) = 3 read(3, "\177ELF\2\1\1\0\0\0\0\0\0\0\0\0\3\0>\0\1\0\0\0\0\0\0\0\0\0\0\0"..., 832) = 832 newfstatat(3, "", {st_mode=S_IFREG|0644, st_size=64136, ...}, AT_EMPTY_PATH) = 0 mmap(NULL, 
70664, PROT_READ, MAP_PRIVATE|MAP_DENYWRITE, 3, 0) = 0x7f49514c4000 mmap(0x7f49514c9000, 24576, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x5000) = 0x7f49514c9000 mmap(0x7f49514cf000, 16384, PROT_READ, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0xb000) = 0x7f49514cf000 mmap(0x7f49514d3000, 8192, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0xe000) = 0x7f49514d3000 mmap(0x7f49514d5000, 1032, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_ANONYMOUS, -1, 0) = 0x7f49514d5000 close(3) = 0 openat(AT_FDCWD, "/lib/x86_64-linux-gnu/libnvpair.so.3", O_RDONLY|O_CLOEXEC) = 3 read(3, "\177ELF\2\1\1\0\0\0\0\0\0\0\0\0\3\0>\0\1\0\0\0\0\0\0\0\0\0\0\0"..., 832) = 832 newfstatat(3, "", {st_mode=S_IFREG|0644, st_size=100496, ...}, AT_EMPTY_PATH) = 0 mmap(NULL, 102488, PROT_READ, MAP_PRIVATE|MAP_DENYWRITE, 3, 0) = 0x7f49514aa000 mmap(0x7f49514af000, 53248, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x5000) = 0x7f49514af000 mmap(0x7f49514bc000, 24576, PROT_READ, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x12000) = 0x7f49514bc000 mmap(0x7f49514c2000, 8192, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x17000) = 0x7f49514c2000 close(3) = 0 openat(AT_FDCWD, "/lib/x86_64-linux-gnu/libm.so.6", O_RDONLY|O_CLOEXEC) = 3 read(3, "\177ELF\2\1\1\3\0\0\0\0\0\0\0\0\3\0>\0\1\0\0\0\0\0\0\0\0\0\0\0"..., 832) = 832 newfstatat(3, "", {st_mode=S_IFREG|0644, st_size=940560, ...}, AT_EMPTY_PATH) = 0 mmap(NULL, 942344, PROT_READ, MAP_PRIVATE|MAP_DENYWRITE, 3, 0) = 0x7f49513c3000 mmap(0x7f49513d1000, 507904, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0xe000) = 0x7f49513d1000 mmap(0x7f495144d000, 372736, PROT_READ, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x8a000) = 0x7f495144d000 mmap(0x7f49514a8000, 8192, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0xe4000) = 0x7f49514a8000 close(3) = 0 openat(AT_FDCWD, "/lib/x86_64-linux-gnu/libblkid.so.1", O_RDONLY|O_CLOEXEC) = 3 read(3, 
"\177ELF\2\1\1\0\0\0\0\0\0\0\0\0\3\0>\0\1\0\0\0\0\0\0\0\0\0\0\0"..., 832) = 832 newfstatat(3, "", {st_mode=S_IFREG|0644, st_size=220192, ...}, AT_EMPTY_PATH) = 0 mmap(NULL, 8192, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7f49513c1000 mmap(NULL, 222136, PROT_READ, MAP_PRIVATE|MAP_DENYWRITE, 3, 0) = 0x7f495138a000 mprotect(0x7f4951391000, 172032, PROT_NONE) = 0 mmap(0x7f4951391000, 131072, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x7000) = 0x7f4951391000 mmap(0x7f49513b1000, 36864, PROT_READ, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x27000) = 0x7f49513b1000 mmap(0x7f49513bb000, 24576, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x30000) = 0x7f49513bb000 close(3) = 0 openat(AT_FDCWD, "/lib/x86_64-linux-gnu/libuuid.so.1", O_RDONLY|O_CLOEXEC) = 3 read(3, "\177ELF\2\1\1\0\0\0\0\0\0\0\0\0\3\0>\0\1\0\0\0\0\0\0\0\0\0\0\0"..., 832) = 832 newfstatat(3, "", {st_mode=S_IFREG|0644, st_size=30920, ...}, AT_EMPTY_PATH) = 0 mmap(NULL, 32808, PROT_READ, MAP_PRIVATE|MAP_DENYWRITE, 3, 0) = 0x7f4951381000 mmap(0x7f4951383000, 16384, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x2000) = 0x7f4951383000 mmap(0x7f4951387000, 4096, PROT_READ, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x6000) = 0x7f4951387000 mmap(0x7f4951388000, 8192, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x6000) = 0x7f4951388000 close(3) = 0 openat(AT_FDCWD, "/lib/x86_64-linux-gnu/libc.so.6", O_RDONLY|O_CLOEXEC) = 3 read(3, "\177ELF\2\1\1\3\0\0\0\0\0\0\0\0\3\0>\0\1\0\0\0P\237\2\0\0\0\0\0"..., 832) = 832 pread64(3, "\6\0\0\0\4\0\0\0@\0\0\0\0\0\0\0@\0\0\0\0\0\0\0@\0\0\0\0\0\0\0"..., 784, 64) = 784 pread64(3, "\4\0\0\0 \0\0\0\5\0\0\0GNU\0\2\0\0\300\4\0\0\0\3\0\0\0\0\0\0\0"..., 48, 848) = 48 pread64(3, "\4\0\0\0\24\0\0\0\3\0\0\0GNU\0i8\235HZ\227\223\333\350s\360\352,\223\340."..., 68, 896) = 68 newfstatat(3, "", {st_mode=S_IFREG|0644, st_size=2216304, ...}, AT_EMPTY_PATH) = 0 pread64(3, 
"\6\0\0\0\4\0\0\0@\0\0\0\0\0\0\0@\0\0\0\0\0\0\0@\0\0\0\0\0\0\0"..., 784, 64) = 784 mmap(NULL, 2260560, PROT_READ, MAP_PRIVATE|MAP_DENYWRITE, 3, 0) = 0x7f4951159000 mmap(0x7f4951181000, 1658880, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x28000) = 0x7f4951181000 mmap(0x7f4951316000, 360448, PROT_READ, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x1bd000) = 0x7f4951316000 mmap(0x7f495136e000, 24576, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x214000) = 0x7f495136e000 mmap(0x7f4951374000, 52816, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_ANONYMOUS, -1, 0) = 0x7f4951374000 close(3) = 0 openat(AT_FDCWD, "/lib/x86_64-linux-gnu/libcrypto.so.3", O_RDONLY|O_CLOEXEC) = 3 read(3, "\177ELF\2\1\1\0\0\0\0\0\0\0\0\0\3\0>\0\1\0\0\0\0\0\0\0\0\0\0\0"..., 832) = 832 newfstatat(3, "", {st_mode=S_IFREG|0644, st_size=4447536, ...}, AT_EMPTY_PATH) = 0 mmap(NULL, 4461760, PROT_READ, MAP_PRIVATE|MAP_DENYWRITE, 3, 0) = 0x7f4950d17000 mmap(0x7f4950dc9000, 2478080, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0xb2000) = 0x7f4950dc9000 mmap(0x7f4951026000, 860160, PROT_READ, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x30f000) = 0x7f4951026000 mmap(0x7f49510f8000, 385024, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x3e0000) = 0x7f49510f8000 mmap(0x7f4951156000, 9408, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_ANONYMOUS, -1, 0) = 0x7f4951156000 close(3) = 0 openat(AT_FDCWD, "/lib/x86_64-linux-gnu/libz.so.1", O_RDONLY|O_CLOEXEC) = 3 read(3, "\177ELF\2\1\1\3\0\0\0\0\0\0\0\0\3\0>\0\1\0\0\0\0\0\0\0\0\0\0\0"..., 832) = 832 newfstatat(3, "", {st_mode=S_IFREG|0644, st_size=108936, ...}, AT_EMPTY_PATH) = 0 mmap(NULL, 110776, PROT_READ, MAP_PRIVATE|MAP_DENYWRITE, 3, 0) = 0x7f4950cfb000 mprotect(0x7f4950cfd000, 98304, PROT_NONE) = 0 mmap(0x7f4950cfd000, 69632, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x2000) = 0x7f4950cfd000 mmap(0x7f4950d0e000, 24576, PROT_READ, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 
0x13000) = 0x7f4950d0e000 mmap(0x7f4950d15000, 8192, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x19000) = 0x7f4950d15000 close(3) = 0 openat(AT_FDCWD, "/lib/x86_64-linux-gnu/libudev.so.1", O_RDONLY|O_CLOEXEC) = 3 read(3, "\177ELF\2\1\1\0\0\0\0\0\0\0\0\0\3\0>\0\1\0\0\0\0\0\0\0\0\0\0\0"..., 832) = 832 newfstatat(3, "", {st_mode=S_IFREG|0644, st_size=166240, ...}, AT_EMPTY_PATH) = 0 mmap(NULL, 170272, PROT_READ, MAP_PRIVATE|MAP_DENYWRITE, 3, 0) = 0x7f4950cd1000 mprotect(0x7f4950cd5000, 147456, PROT_NONE) = 0 mmap(0x7f4950cd5000, 106496, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x4000) = 0x7f4950cd5000 mmap(0x7f4950cef000, 36864, PROT_READ, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x1e000) = 0x7f4950cef000 mmap(0x7f4950cf9000, 8192, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x27000) = 0x7f4950cf9000 close(3) = 0 openat(AT_FDCWD, "/lib/x86_64-linux-gnu/libtirpc.so.3", O_RDONLY|O_CLOEXEC) = 3 read(3, "\177ELF\2\1\1\0\0\0\0\0\0\0\0\0\3\0>\0\1\0\0\0\0\0\0\0\0\0\0\0"..., 832) = 832 newfstatat(3, "", {st_mode=S_IFREG|0644, st_size=182912, ...}, AT_EMPTY_PATH) = 0 mmap(NULL, 8192, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7f4950ccf000 mmap(NULL, 187256, PROT_READ, MAP_PRIVATE|MAP_DENYWRITE, 3, 0) = 0x7f4950ca1000 mprotect(0x7f4950ca8000, 151552, PROT_NONE) = 0 mmap(0x7f4950ca8000, 110592, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x7000) = 0x7f4950ca8000 mmap(0x7f4950cc3000, 36864, PROT_READ, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x22000) = 0x7f4950cc3000 mmap(0x7f4950ccd000, 8192, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x2b000) = 0x7f4950ccd000 close(3) = 0 openat(AT_FDCWD, "/lib/x86_64-linux-gnu/libgssapi_krb5.so.2", O_RDONLY|O_CLOEXEC) = 3 read(3, "\177ELF\2\1\1\0\0\0\0\0\0\0\0\0\3\0>\0\1\0\0\0\0\0\0\0\0\0\0\0"..., 832) = 832 newfstatat(3, "", {st_mode=S_IFREG|0644, st_size=338712, ...}, AT_EMPTY_PATH) = 0 mmap(NULL, 340960, PROT_READ, 
MAP_PRIVATE|MAP_DENYWRITE, 3, 0) = 0x7f4950c4d000 mprotect(0x7f4950c58000, 282624, PROT_NONE) = 0 mmap(0x7f4950c58000, 229376, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0xb000) = 0x7f4950c58000 mmap(0x7f4950c90000, 49152, PROT_READ, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x43000) = 0x7f4950c90000 mmap(0x7f4950c9d000, 16384, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x4f000) = 0x7f4950c9d000 close(3) = 0 openat(AT_FDCWD, "/lib/x86_64-linux-gnu/libkrb5.so.3", O_RDONLY|O_CLOEXEC) = 3 read(3, "\177ELF\2\1\1\0\0\0\0\0\0\0\0\0\3\0>\0\1\0\0\0\0\0\0\0\0\0\0\0"..., 832) = 832 newfstatat(3, "", {st_mode=S_IFREG|0644, st_size=828000, ...}, AT_EMPTY_PATH) = 0 mmap(NULL, 830576, PROT_READ, MAP_PRIVATE|MAP_DENYWRITE, 3, 0) = 0x7f4950b82000 mprotect(0x7f4950ba3000, 634880, PROT_NONE) = 0 mmap(0x7f4950ba3000, 380928, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x21000) = 0x7f4950ba3000 mmap(0x7f4950c00000, 249856, PROT_READ, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x7e000) = 0x7f4950c00000 mmap(0x7f4950c3e000, 61440, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0xbb000) = 0x7f4950c3e000 close(3) = 0 openat(AT_FDCWD, "/lib/x86_64-linux-gnu/libk5crypto.so.3", O_RDONLY|O_CLOEXEC) = 3 read(3, "\177ELF\2\1\1\0\0\0\0\0\0\0\0\0\3\0>\0\1\0\0\0\0\0\0\0\0\0\0\0"..., 832) = 832 newfstatat(3, "", {st_mode=S_IFREG|0644, st_size=182928, ...}, AT_EMPTY_PATH) = 0 mmap(NULL, 188472, PROT_READ, MAP_PRIVATE|MAP_DENYWRITE, 3, 0) = 0x7f4950b53000 mprotect(0x7f4950b57000, 163840, PROT_NONE) = 0 mmap(0x7f4950b57000, 110592, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x4000) = 0x7f4950b57000 mmap(0x7f4950b72000, 49152, PROT_READ, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x1f000) = 0x7f4950b72000 mmap(0x7f4950b7f000, 8192, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x2b000) = 0x7f4950b7f000 mmap(0x7f4950b81000, 56, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_ANONYMOUS, -1, 0) = 0x7f4950b81000 
close(3) = 0
openat(AT_FDCWD, "/lib/x86_64-linux-gnu/libcom_err.so.2", O_RDONLY|O_CLOEXEC) = 3
read(3, "\177ELF\2\1\1\0\0\0\0\0\0\0\0\0\3\0>\0\1\0\0\0\0\0\0\0\0\0\0\0"..., 832) = 832
newfstatat(3, "", {st_mode=S_IFREG|0644, st_size=18504, ...}, AT_EMPTY_PATH) = 0
mmap(NULL, 20552, PROT_READ, MAP_PRIVATE|MAP_DENYWRITE, 3, 0) = 0x7f4950b4d000
mmap(0x7f4950b4f000, 4096, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x2000) = 0x7f4950b4f000
mmap(0x7f4950b50000, 4096, PROT_READ, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x3000) = 0x7f4950b50000
mmap(0x7f4950b51000, 8192, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x3000) = 0x7f4950b51000
close(3) = 0
openat(AT_FDCWD, "/lib/x86_64-linux-gnu/libkrb5support.so.0", O_RDONLY|O_CLOEXEC) = 3
read(3, "\177ELF\2\1\1\0\0\0\0\0\0\0\0\0\3\0>\0\1\0\0\0\0\0\0\0\0\0\0\0"..., 832) = 832
newfstatat(3, "", {st_mode=S_IFREG|0644, st_size=52080, ...}, AT_EMPTY_PATH) = 0
mmap(NULL, 8192, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7f4950b4b000
mmap(NULL, 54224, PROT_READ, MAP_PRIVATE|MAP_DENYWRITE, 3, 0) = 0x7f4950b3d000
mprotect(0x7f4950b40000, 36864, PROT_NONE) = 0
mmap(0x7f4950b40000, 24576, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x3000) = 0x7f4950b40000
mmap(0x7f4950b46000, 8192, PROT_READ, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x9000) = 0x7f4950b46000
mmap(0x7f4950b49000, 8192, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0xb000) = 0x7f4950b49000
close(3) = 0
openat(AT_FDCWD, "/lib/x86_64-linux-gnu/libkeyutils.so.1", O_RDONLY|O_CLOEXEC) = 3
read(3, "\177ELF\2\1\1\0\0\0\0\0\0\0\0\0\3\0>\0\1\0\0\0\0\0\0\0\0\0\0\0"..., 832) = 832
newfstatat(3, "", {st_mode=S_IFREG|0644, st_size=22600, ...}, AT_EMPTY_PATH) = 0
mmap(NULL, 24592, PROT_READ, MAP_PRIVATE|MAP_DENYWRITE, 3, 0) = 0x7f4950b36000
mmap(0x7f4950b38000, 8192, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x2000) = 0x7f4950b38000
mmap(0x7f4950b3a000, 4096, PROT_READ, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x4000) = 0x7f4950b3a000
mmap(0x7f4950b3b000, 8192, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x4000) = 0x7f4950b3b000
close(3) = 0
openat(AT_FDCWD, "/lib/x86_64-linux-gnu/libresolv.so.2", O_RDONLY|O_CLOEXEC) = 3
read(3, "\177ELF\2\1\1\0\0\0\0\0\0\0\0\0\3\0>\0\1\0\0\0\0\0\0\0\0\0\0\0"..., 832) = 832
newfstatat(3, "", {st_mode=S_IFREG|0644, st_size=68552, ...}, AT_EMPTY_PATH) = 0
mmap(NULL, 80456, PROT_READ, MAP_PRIVATE|MAP_DENYWRITE, 3, 0) = 0x7f4950b22000
mmap(0x7f4950b25000, 40960, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x3000) = 0x7f4950b25000
mmap(0x7f4950b2f000, 12288, PROT_READ, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0xd000) = 0x7f4950b2f000
mmap(0x7f4950b32000, 8192, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0xf000) = 0x7f4950b32000
mmap(0x7f4950b34000, 6728, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_ANONYMOUS, -1, 0) = 0x7f4950b34000
close(3) = 0
mmap(NULL, 8192, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7f4950b20000
mmap(NULL, 20480, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7f4950b1b000
arch_prctl(ARCH_SET_FS, 0x7f4950b1d7c0) = 0
set_tid_address(0x7f4950b1da90) = 2179784
set_robust_list(0x7f4950b1daa0, 24) = 0
rseq(0x7f4950b1e160, 0x20, 0, 0x53053053) = 0
mprotect(0x7f495136e000, 16384, PROT_READ) = 0
mprotect(0x7f4950b32000, 4096, PROT_READ) = 0
mprotect(0x7f4950b3b000, 4096, PROT_READ) = 0
mprotect(0x7f4950b49000, 4096, PROT_READ) = 0
mprotect(0x7f4950b51000, 4096, PROT_READ) = 0
mprotect(0x7f4950b7f000, 4096, PROT_READ) = 0
mprotect(0x7f4950c3e000, 53248, PROT_READ) = 0
mprotect(0x7f4950c9d000, 8192, PROT_READ) = 0
mprotect(0x7f4950ccd000, 4096, PROT_READ) = 0
mprotect(0x7f4950cf9000, 4096, PROT_READ) = 0
mprotect(0x7f4950d15000, 4096, PROT_READ) = 0
mprotect(0x7f49510f8000, 372736, PROT_READ) = 0
mprotect(0x7f4951388000, 4096, PROT_READ) = 0
mprotect(0x7f49513bb000, 20480, PROT_READ) = 0
mprotect(0x7f49514a8000, 4096, PROT_READ) = 0
mprotect(0x7f49514c2000, 4096, PROT_READ) = 0
mprotect(0x7f49514d3000, 4096, PROT_READ) = 0
mprotect(0x7f49514f2000, 4096, PROT_READ) = 0
mprotect(0x7f4951559000, 8192, PROT_READ) = 0
mprotect(0x55d8cbcfc000, 8192, PROT_READ) = 0
mprotect(0x7f49515a4000, 8192, PROT_READ) = 0
prlimit64(0, RLIMIT_STACK, NULL, {rlim_cur=8192*1024, rlim_max=RLIM64_INFINITY}) = 0
munmap(0x7f4951560000, 40143) = 0
getrandom("\x5f\xb7\x70\xe0\x8f\xd8\xdc\x45", 8, GRND_NONBLOCK) = 8
brk(NULL) = 0x55d8cc9c5000
brk(0x55d8cc9e6000) = 0x55d8cc9e6000
openat(AT_FDCWD, "/usr/lib/locale/locale-archive", O_RDONLY|O_CLOEXEC) = 3
newfstatat(3, "", {st_mode=S_IFREG|0644, st_size=3048928, ...}, AT_EMPTY_PATH) = 0
mmap(NULL, 3048928, PROT_READ, MAP_PRIVATE, 3, 0) = 0x7f4950832000
close(3) = 0
access("/run/systemd/container", R_OK) = -1 ENOENT (No such file or directory)
access("/sys/module/zfs", F_OK) = 0
openat(AT_FDCWD, "/dev/zfs", O_RDWR|O_CLOEXEC) = 3
close(3) = 0
openat(AT_FDCWD, "/usr/lib/x86_64-linux-gnu/gconv/gconv-modules.cache", O_RDONLY) = 3
newfstatat(3, "", {st_mode=S_IFREG|0644, st_size=27002, ...}, AT_EMPTY_PATH) = 0
mmap(NULL, 27002, PROT_READ, MAP_SHARED, 3, 0) = 0x7f4951563000
close(3) = 0
futex(0x7f4951373a6c, FUTEX_WAKE_PRIVATE, 2147483647) = 0
openat(AT_FDCWD, "/dev/zfs", O_RDWR|O_EXCL|O_CLOEXEC) = 3
openat(AT_FDCWD, "/proc/self/mounts", O_RDONLY|O_CLOEXEC) = 4
openat(AT_FDCWD, "/dev/zfs", O_RDWR|O_CLOEXEC) = 5
newfstatat(AT_FDCWD, "/sys/module/zfs/properties.dataset/redundant_metadata", {st_mode=S_IFDIR|0755, st_size=0, ...}, 0) = 0
newfstatat(AT_FDCWD, "/sys/module/zfs/properties.dataset/sync", {st_mode=S_IFDIR|0755, st_size=0, ...}, 0) = 0
newfstatat(AT_FDCWD, "/sys/module/zfs/properties.dataset/checksum", {st_mode=S_IFDIR|0755, st_size=0, ...}, 0) = 0
newfstatat(AT_FDCWD, "/sys/module/zfs/properties.dataset/dedup", {st_mode=S_IFDIR|0755, st_size=0, ...}, 0) = 0
newfstatat(AT_FDCWD, "/sys/module/zfs/properties.dataset/compression", {st_mode=S_IFDIR|0755, st_size=0, ...}, 0) = 0
newfstatat(AT_FDCWD, "/sys/module/zfs/properties.dataset/snapdir", {st_mode=S_IFDIR|0755, st_size=0, ...}, 0) = 0
newfstatat(AT_FDCWD, "/sys/module/zfs/properties.dataset/snapdev", {st_mode=S_IFDIR|0755, st_size=0, ...}, 0) = 0
newfstatat(AT_FDCWD, "/sys/module/zfs/properties.dataset/aclmode", {st_mode=S_IFDIR|0755, st_size=0, ...}, 0) = 0
newfstatat(AT_FDCWD, "/sys/module/zfs/properties.dataset/acltype", {st_mode=S_IFDIR|0755, st_size=0, ...}, 0) = 0
newfstatat(AT_FDCWD, "/sys/module/zfs/properties.dataset/aclinherit", {st_mode=S_IFDIR|0755, st_size=0, ...}, 0) = 0
newfstatat(AT_FDCWD, "/sys/module/zfs/properties.dataset/copies", {st_mode=S_IFDIR|0755, st_size=0, ...}, 0) = 0
newfstatat(AT_FDCWD, "/sys/module/zfs/properties.dataset/primarycache", {st_mode=S_IFDIR|0755, st_size=0, ...}, 0) = 0
newfstatat(AT_FDCWD, "/sys/module/zfs/properties.dataset/secondarycache", {st_mode=S_IFDIR|0755, st_size=0, ...}, 0) = 0
newfstatat(AT_FDCWD, "/sys/module/zfs/properties.dataset/logbias", {st_mode=S_IFDIR|0755, st_size=0, ...}, 0) = 0
newfstatat(AT_FDCWD, "/sys/module/zfs/properties.dataset/xattr", {st_mode=S_IFDIR|0755, st_size=0, ...}, 0) = 0
newfstatat(AT_FDCWD, "/sys/module/zfs/properties.dataset/dnodesize", {st_mode=S_IFDIR|0755, st_size=0, ...}, 0) = 0
newfstatat(AT_FDCWD, "/sys/module/zfs/properties.dataset/volmode", {st_mode=S_IFDIR|0755, st_size=0, ...}, 0) = 0
newfstatat(AT_FDCWD, "/sys/module/zfs/properties.dataset/atime", {st_mode=S_IFDIR|0755, st_size=0, ...}, 0) = 0
newfstatat(AT_FDCWD, "/sys/module/zfs/properties.dataset/relatime", {st_mode=S_IFDIR|0755, st_size=0, ...}, 0) = 0
newfstatat(AT_FDCWD, "/sys/module/zfs/properties.dataset/devices", {st_mode=S_IFDIR|0755, st_size=0, ...}, 0) = 0
newfstatat(AT_FDCWD, "/sys/module/zfs/properties.dataset/exec", {st_mode=S_IFDIR|0755, st_size=0, ...}, 0) = 0
newfstatat(AT_FDCWD, "/sys/module/zfs/properties.dataset/setuid", {st_mode=S_IFDIR|0755, st_size=0, ...}, 0) = 0
newfstatat(AT_FDCWD, "/sys/module/zfs/properties.dataset/readonly", {st_mode=S_IFDIR|0755, st_size=0, ...}, 0) = 0
newfstatat(AT_FDCWD, "/sys/module/zfs/properties.dataset/zoned", {st_mode=S_IFDIR|0755, st_size=0, ...}, 0) = 0
newfstatat(AT_FDCWD, "/sys/module/zfs/properties.dataset/vscan", {st_mode=S_IFDIR|0755, st_size=0, ...}, 0) = 0
newfstatat(AT_FDCWD, "/sys/module/zfs/properties.dataset/nbmand", {st_mode=S_IFDIR|0755, st_size=0, ...}, 0) = 0
newfstatat(AT_FDCWD, "/sys/module/zfs/properties.dataset/overlay", {st_mode=S_IFDIR|0755, st_size=0, ...}, 0) = 0
newfstatat(AT_FDCWD, "/sys/module/zfs/properties.dataset/version", {st_mode=S_IFDIR|0755, st_size=0, ...}, 0) = 0
newfstatat(AT_FDCWD, "/sys/module/zfs/properties.dataset/canmount", {st_mode=S_IFDIR|0755, st_size=0, ...}, 0) = 0
newfstatat(AT_FDCWD, "/sys/module/zfs/properties.dataset/mounted", {st_mode=S_IFDIR|0755, st_size=0, ...}, 0) = 0
newfstatat(AT_FDCWD, "/sys/module/zfs/properties.dataset/defer_destroy", {st_mode=S_IFDIR|0755, st_size=0, ...}, 0) = 0
newfstatat(AT_FDCWD, "/sys/module/zfs/properties.dataset/keystatus", {st_mode=S_IFDIR|0755, st_size=0, ...}, 0) = 0
newfstatat(AT_FDCWD, "/sys/module/zfs/properties.dataset/normalization", {st_mode=S_IFDIR|0755, st_size=0, ...}, 0) = 0
newfstatat(AT_FDCWD, "/sys/module/zfs/properties.dataset/casesensitivity", {st_mode=S_IFDIR|0755, st_size=0, ...}, 0) = 0
newfstatat(AT_FDCWD, "/sys/module/zfs/properties.dataset/keyformat", {st_mode=S_IFDIR|0755, st_size=0, ...}, 0) = 0
newfstatat(AT_FDCWD, "/sys/module/zfs/properties.dataset/encryption", {st_mode=S_IFDIR|0755, st_size=0, ...}, 0) = 0
newfstatat(AT_FDCWD, "/sys/module/zfs/properties.dataset/utf8only", {st_mode=S_IFDIR|0755, st_size=0, ...}, 0) = 0
newfstatat(AT_FDCWD, "/sys/module/zfs/properties.dataset/origin", {st_mode=S_IFDIR|0755, st_size=0, ...}, 0) = 0
newfstatat(AT_FDCWD, "/sys/module/zfs/properties.dataset/clones", {st_mode=S_IFDIR|0755, st_size=0, ...}, 0) = 0
newfstatat(AT_FDCWD, "/sys/module/zfs/properties.dataset/mountpoint", {st_mode=S_IFDIR|0755, st_size=0, ...}, 0) = 0
newfstatat(AT_FDCWD, "/sys/module/zfs/properties.dataset/sharenfs", {st_mode=S_IFDIR|0755, st_size=0, ...}, 0) = 0
newfstatat(AT_FDCWD, "/sys/module/zfs/properties.dataset/type", {st_mode=S_IFDIR|0755, st_size=0, ...}, 0) = 0
newfstatat(AT_FDCWD, "/sys/module/zfs/properties.dataset/sharesmb", {st_mode=S_IFDIR|0755, st_size=0, ...}, 0) = 0
newfstatat(AT_FDCWD, "/sys/module/zfs/properties.dataset/mlslabel", {st_mode=S_IFDIR|0755, st_size=0, ...}, 0) = 0
newfstatat(AT_FDCWD, "/sys/module/zfs/properties.dataset/context", {st_mode=S_IFDIR|0755, st_size=0, ...}, 0) = 0
newfstatat(AT_FDCWD, "/sys/module/zfs/properties.dataset/fscontext", {st_mode=S_IFDIR|0755, st_size=0, ...}, 0) = 0
newfstatat(AT_FDCWD, "/sys/module/zfs/properties.dataset/defcontext", {st_mode=S_IFDIR|0755, st_size=0, ...}, 0) = 0
newfstatat(AT_FDCWD, "/sys/module/zfs/properties.dataset/rootcontext", {st_mode=S_IFDIR|0755, st_size=0, ...}, 0) = 0
newfstatat(AT_FDCWD, "/sys/module/zfs/properties.dataset/receive_resume_token", {st_mode=S_IFDIR|0755, st_size=0, ...}, 0) = 0
newfstatat(AT_FDCWD, "/sys/module/zfs/properties.dataset/encryptionroot", {st_mode=S_IFDIR|0755, st_size=0, ...}, 0) = 0
newfstatat(AT_FDCWD, "/sys/module/zfs/properties.dataset/keylocation", {st_mode=S_IFDIR|0755, st_size=0, ...}, 0) = 0
newfstatat(AT_FDCWD, "/sys/module/zfs/properties.dataset/redact_snaps", {st_mode=S_IFDIR|0755, st_size=0, ...}, 0) = 0
newfstatat(AT_FDCWD, "/sys/module/zfs/properties.dataset/used", {st_mode=S_IFDIR|0755, st_size=0, ...}, 0) = 0
newfstatat(AT_FDCWD, "/sys/module/zfs/properties.dataset/available", {st_mode=S_IFDIR|0755, st_size=0, ...}, 0) = 0
newfstatat(AT_FDCWD, "/sys/module/zfs/properties.dataset/referenced", {st_mode=S_IFDIR|0755, st_size=0, ...}, 0) = 0
newfstatat(AT_FDCWD, "/sys/module/zfs/properties.dataset/compressratio", {st_mode=S_IFDIR|0755, st_size=0, ...}, 0) = 0
newfstatat(AT_FDCWD, "/sys/module/zfs/properties.dataset/refcompressratio", {st_mode=S_IFDIR|0755, st_size=0, ...}, 0) = 0
newfstatat(AT_FDCWD, "/sys/module/zfs/properties.dataset/volblocksize", {st_mode=S_IFDIR|0755, st_size=0, ...}, 0) = 0
newfstatat(AT_FDCWD, "/sys/module/zfs/properties.dataset/usedbysnapshots", {st_mode=S_IFDIR|0755, st_size=0, ...}, 0) = 0
newfstatat(AT_FDCWD, "/sys/module/zfs/properties.dataset/usedbydataset", {st_mode=S_IFDIR|0755, st_size=0, ...}, 0) = 0
newfstatat(AT_FDCWD, "/sys/module/zfs/properties.dataset/usedbychildren", {st_mode=S_IFDIR|0755, st_size=0, ...}, 0) = 0
newfstatat(AT_FDCWD, "/sys/module/zfs/properties.dataset/usedbyrefreservation", {st_mode=S_IFDIR|0755, st_size=0, ...}, 0) = 0
newfstatat(AT_FDCWD, "/sys/module/zfs/properties.dataset/userrefs", {st_mode=S_IFDIR|0755, st_size=0, ...}, 0) = 0
newfstatat(AT_FDCWD, "/sys/module/zfs/properties.dataset/written", {st_mode=S_IFDIR|0755, st_size=0, ...}, 0) = 0
newfstatat(AT_FDCWD, "/sys/module/zfs/properties.dataset/logicalused", {st_mode=S_IFDIR|0755, st_size=0, ...}, 0) = 0
newfstatat(AT_FDCWD, "/sys/module/zfs/properties.dataset/logicalreferenced", {st_mode=S_IFDIR|0755, st_size=0, ...}, 0) = 0
newfstatat(AT_FDCWD, "/sys/module/zfs/properties.dataset/filesystem_count", {st_mode=S_IFDIR|0755, st_size=0, ...}, 0) = 0
newfstatat(AT_FDCWD, "/sys/module/zfs/properties.dataset/snapshot_count", {st_mode=S_IFDIR|0755, st_size=0, ...}, 0) = 0
newfstatat(AT_FDCWD, "/sys/module/zfs/properties.dataset/guid", {st_mode=S_IFDIR|0755, st_size=0, ...}, 0) = 0
newfstatat(AT_FDCWD, "/sys/module/zfs/properties.dataset/createtxg", {st_mode=S_IFDIR|0755, st_size=0, ...}, 0) = 0
newfstatat(AT_FDCWD, "/sys/module/zfs/properties.dataset/pbkdf2iters", {st_mode=S_IFDIR|0755, st_size=0, ...}, 0) = 0
newfstatat(AT_FDCWD, "/sys/module/zfs/properties.dataset/objsetid", {st_mode=S_IFDIR|0755, st_size=0, ...}, 0) = 0
newfstatat(AT_FDCWD, "/sys/module/zfs/properties.dataset/quota", {st_mode=S_IFDIR|0755, st_size=0, ...}, 0) = 0
newfstatat(AT_FDCWD, "/sys/module/zfs/properties.dataset/reservation", {st_mode=S_IFDIR|0755, st_size=0, ...}, 0) = 0
newfstatat(AT_FDCWD, "/sys/module/zfs/properties.dataset/volsize", {st_mode=S_IFDIR|0755, st_size=0, ...}, 0) = 0
newfstatat(AT_FDCWD, "/sys/module/zfs/properties.dataset/refquota", {st_mode=S_IFDIR|0755, st_size=0, ...}, 0) = 0
newfstatat(AT_FDCWD, "/sys/module/zfs/properties.dataset/refreservation", {st_mode=S_IFDIR|0755, st_size=0, ...}, 0) = 0
newfstatat(AT_FDCWD, "/sys/module/zfs/properties.dataset/filesystem_limit", {st_mode=S_IFDIR|0755, st_size=0, ...}, 0) = 0
newfstatat(AT_FDCWD, "/sys/module/zfs/properties.dataset/snapshot_limit", {st_mode=S_IFDIR|0755, st_size=0, ...}, 0) = 0
newfstatat(AT_FDCWD, "/sys/module/zfs/properties.dataset/recordsize", {st_mode=S_IFDIR|0755, st_size=0, ...}, 0) = 0
newfstatat(AT_FDCWD, "/sys/module/zfs/properties.dataset/special_small_blocks", {st_mode=S_IFDIR|0755, st_size=0, ...}, 0) = 0
newfstatat(AT_FDCWD, "/sys/module/zfs/properties.dataset/numclones", {st_mode=S_IFDIR|0755, st_size=0, ...}, 0) = 0
newfstatat(AT_FDCWD, "/sys/module/zfs/properties.dataset/name", {st_mode=S_IFDIR|0755, st_size=0, ...}, 0) = 0
newfstatat(AT_FDCWD, "/sys/module/zfs/properties.dataset/iscsioptions", {st_mode=S_IFDIR|0755, st_size=0, ...}, 0) = 0
newfstatat(AT_FDCWD, "/sys/module/zfs/properties.dataset/stmf_sbd_lu", {st_mode=S_IFDIR|0755, st_size=0, ...}, 0) = 0
newfstatat(AT_FDCWD, "/sys/module/zfs/properties.dataset/useraccounting", {st_mode=S_IFDIR|0755, st_size=0, ...}, 0) = 0
newfstatat(AT_FDCWD, "/sys/module/zfs/properties.dataset/unique", {st_mode=S_IFDIR|0755, st_size=0, ...}, 0) = 0
newfstatat(AT_FDCWD, "/sys/module/zfs/properties.dataset/inconsistent", {st_mode=S_IFDIR|0755, st_size=0, ...}, 0) = 0
newfstatat(AT_FDCWD, "/sys/module/zfs/properties.dataset/ivsetguid", {st_mode=S_IFDIR|0755, st_size=0, ...}, 0) = 0
newfstatat(AT_FDCWD, "/sys/module/zfs/properties.dataset/prevsnap", {st_mode=S_IFDIR|0755, st_size=0, ...}, 0) = 0
newfstatat(AT_FDCWD, "/sys/module/zfs/properties.dataset/pbkdf2salt", {st_mode=S_IFDIR|0755, st_size=0, ...}, 0) = 0
newfstatat(AT_FDCWD, "/sys/module/zfs/properties.dataset/keyguid", {st_mode=S_IFDIR|0755, st_size=0, ...}, 0) = 0
newfstatat(AT_FDCWD, "/sys/module/zfs/properties.dataset/redacted", {st_mode=S_IFDIR|0755, st_size=0, ...}, 0) = 0
newfstatat(AT_FDCWD, "/sys/module/zfs/properties.dataset/remaptxg", {st_mode=S_IFDIR|0755, st_size=0, ...}, 0) = 0
newfstatat(AT_FDCWD, "/sys/module/zfs/properties.dataset/creation", {st_mode=S_IFDIR|0755, st_size=0, ...}, 0) = 0
newfstatat(AT_FDCWD, "/sys/module/zfs/properties.pool/altroot", {st_mode=S_IFDIR|0755, st_size=0, ...}, 0) = 0
newfstatat(AT_FDCWD, "/sys/module/zfs/properties.pool/bootfs", {st_mode=S_IFDIR|0755, st_size=0, ...}, 0) = 0
newfstatat(AT_FDCWD, "/sys/module/zfs/properties.pool/cachefile", {st_mode=S_IFDIR|0755, st_size=0, ...}, 0) = 0
newfstatat(AT_FDCWD, "/sys/module/zfs/properties.pool/comment", {st_mode=S_IFDIR|0755, st_size=0, ...}, 0) = 0
newfstatat(AT_FDCWD, "/sys/module/zfs/properties.pool/compatibility", {st_mode=S_IFDIR|0755, st_size=0, ...}, 0) = 0
newfstatat(AT_FDCWD, "/sys/module/zfs/properties.pool/size", {st_mode=S_IFDIR|0755, st_size=0, ...}, 0) = 0
newfstatat(AT_FDCWD, "/sys/module/zfs/properties.pool/free", {st_mode=S_IFDIR|0755, st_size=0, ...}, 0) = 0
newfstatat(AT_FDCWD, "/sys/module/zfs/properties.pool/freeing", {st_mode=S_IFDIR|0755, st_size=0, ...}, 0) = 0
newfstatat(AT_FDCWD, "/sys/module/zfs/properties.pool/checkpoint", {st_mode=S_IFDIR|0755, st_size=0, ...}, 0) = 0
newfstatat(AT_FDCWD, "/sys/module/zfs/properties.pool/leaked", {st_mode=S_IFDIR|0755, st_size=0, ...}, 0) = 0
newfstatat(AT_FDCWD, "/sys/module/zfs/properties.pool/allocated", {st_mode=S_IFDIR|0755, st_size=0, ...}, 0) = 0
newfstatat(AT_FDCWD, "/sys/module/zfs/properties.pool/expandsize", {st_mode=S_IFDIR|0755, st_size=0, ...}, 0) = 0
newfstatat(AT_FDCWD, "/sys/module/zfs/properties.pool/fragmentation", {st_mode=S_IFDIR|0755, st_size=0, ...}, 0) = 0
newfstatat(AT_FDCWD, "/sys/module/zfs/properties.pool/capacity", {st_mode=S_IFDIR|0755, st_size=0, ...}, 0) = 0
newfstatat(AT_FDCWD, "/sys/module/zfs/properties.pool/guid", {st_mode=S_IFDIR|0755, st_size=0, ...}, 0) = 0
newfstatat(AT_FDCWD, "/sys/module/zfs/properties.pool/load_guid", {st_mode=S_IFDIR|0755, st_size=0, ...}, 0) = 0
newfstatat(AT_FDCWD, "/sys/module/zfs/properties.pool/health", {st_mode=S_IFDIR|0755, st_size=0, ...}, 0) = 0
newfstatat(AT_FDCWD, "/sys/module/zfs/properties.pool/dedupratio", {st_mode=S_IFDIR|0755, st_size=0, ...}, 0) = 0
newfstatat(AT_FDCWD, "/sys/module/zfs/properties.pool/version", {st_mode=S_IFDIR|0755, st_size=0, ...}, 0) = 0
newfstatat(AT_FDCWD, "/sys/module/zfs/properties.pool/ashift", {st_mode=S_IFDIR|0755, st_size=0, ...}, 0) = 0
newfstatat(AT_FDCWD, "/sys/module/zfs/properties.pool/delegation", {st_mode=S_IFDIR|0755, st_size=0, ...}, 0) = 0
newfstatat(AT_FDCWD, "/sys/module/zfs/properties.pool/autoreplace", {st_mode=S_IFDIR|0755, st_size=0, ...}, 0) = 0
newfstatat(AT_FDCWD, "/sys/module/zfs/properties.pool/listsnapshots", {st_mode=S_IFDIR|0755, st_size=0, ...}, 0) = 0
newfstatat(AT_FDCWD, "/sys/module/zfs/properties.pool/autoexpand", {st_mode=S_IFDIR|0755, st_size=0, ...}, 0) = 0
newfstatat(AT_FDCWD, "/sys/module/zfs/properties.pool/readonly", {st_mode=S_IFDIR|0755, st_size=0, ...}, 0) = 0
newfstatat(AT_FDCWD, "/sys/module/zfs/properties.pool/multihost", {st_mode=S_IFDIR|0755, st_size=0, ...}, 0) = 0
newfstatat(AT_FDCWD, "/sys/module/zfs/properties.pool/failmode", {st_mode=S_IFDIR|0755, st_size=0, ...}, 0) = 0
newfstatat(AT_FDCWD, "/sys/module/zfs/properties.pool/autotrim", {st_mode=S_IFDIR|0755, st_size=0, ...}, 0) = 0
newfstatat(AT_FDCWD, "/sys/module/zfs/properties.pool/name", {st_mode=S_IFDIR|0755, st_size=0, ...}, 0) = 0
newfstatat(AT_FDCWD, "/sys/module/zfs/properties.pool/maxblocksize", {st_mode=S_IFDIR|0755, st_size=0, ...}, 0) = 0
newfstatat(AT_FDCWD, "/sys/module/zfs/properties.pool/tname", {st_mode=S_IFDIR|0755, st_size=0, ...}, 0) = 0
newfstatat(AT_FDCWD, "/sys/module/zfs/properties.pool/maxdnodesize", {st_mode=S_IFDIR|0755, st_size=0, ...}, 0) = 0
newfstatat(AT_FDCWD, "/sys/module/zfs/properties.pool/dedupditto", {st_mode=S_IFDIR|0755, st_size=0, ...}, 0) = 0
newfstatat(AT_FDCWD, "/sys/module/zfs/features.pool/com.delphix:async_destroy", {st_mode=S_IFDIR|0755, st_size=0, ...}, 0) = 0
newfstatat(AT_FDCWD, "/sys/module/zfs/features.pool/com.delphix:empty_bpobj", {st_mode=S_IFDIR|0755, st_size=0, ...}, 0) = 0
newfstatat(AT_FDCWD, "/sys/module/zfs/features.pool/org.illumos:lz4_compress", {st_mode=S_IFDIR|0755, st_size=0, ...}, 0) = 0
newfstatat(AT_FDCWD, "/sys/module/zfs/features.pool/com.joyent:multi_vdev_crash_dump", {st_mode=S_IFDIR|0755, st_size=0, ...}, 0) = 0
newfstatat(AT_FDCWD, "/sys/module/zfs/features.pool/com.delphix:spacemap_histogram", {st_mode=S_IFDIR|0755, st_size=0, ...}, 0) = 0
newfstatat(AT_FDCWD, "/sys/module/zfs/features.pool/com.delphix:enabled_txg", {st_mode=S_IFDIR|0755, st_size=0, ...}, 0) = 0
newfstatat(AT_FDCWD, "/sys/module/zfs/features.pool/com.delphix:hole_birth", {st_mode=S_IFDIR|0755, st_size=0, ...}, 0) = 0
newfstatat(AT_FDCWD, "/sys/module/zfs/features.pool/com.delphix:zpool_checkpoint", {st_mode=S_IFDIR|0755, st_size=0, ...}, 0) = 0
newfstatat(AT_FDCWD, "/sys/module/zfs/features.pool/com.delphix:spacemap_v2", {st_mode=S_IFDIR|0755, st_size=0, ...}, 0) = 0
newfstatat(AT_FDCWD, "/sys/module/zfs/features.pool/com.delphix:extensible_dataset", {st_mode=S_IFDIR|0755, st_size=0, ...}, 0) = 0
newfstatat(AT_FDCWD, "/sys/module/zfs/features.pool/com.delphix:bookmarks", {st_mode=S_IFDIR|0755, st_size=0, ...}, 0) = 0
newfstatat(AT_FDCWD, "/sys/module/zfs/features.pool/com.joyent:filesystem_limits", {st_mode=S_IFDIR|0755, st_size=0, ...}, 0) = 0
newfstatat(AT_FDCWD, "/sys/module/zfs/features.pool/com.delphix:embedded_data", {st_mode=S_IFDIR|0755, st_size=0, ...}, 0) = 0
newfstatat(AT_FDCWD, "/sys/module/zfs/features.pool/com.delphix:livelist", {st_mode=S_IFDIR|0755, st_size=0, ...}, 0) = 0
newfstatat(AT_FDCWD, "/sys/module/zfs/features.pool/com.delphix:log_spacemap", {st_mode=S_IFDIR|0755, st_size=0, ...}, 0) = 0
newfstatat(AT_FDCWD, "/sys/module/zfs/features.pool/org.open-zfs:large_blocks", {st_mode=S_IFDIR|0755, st_size=0, ...}, 0) = 0
newfstatat(AT_FDCWD, "/sys/module/zfs/features.pool/org.zfsonlinux:large_dnode", {st_mode=S_IFDIR|0755, st_size=0, ...}, 0) = 0
newfstatat(AT_FDCWD, "/sys/module/zfs/features.pool/org.illumos:sha512", {st_mode=S_IFDIR|0755, st_size=0, ...}, 0) = 0
newfstatat(AT_FDCWD, "/sys/module/zfs/features.pool/org.illumos:skein", {st_mode=S_IFDIR|0755, st_size=0, ...}, 0) = 0
newfstatat(AT_FDCWD, "/sys/module/zfs/features.pool/org.illumos:edonr", {st_mode=S_IFDIR|0755, st_size=0, ...}, 0) = 0
newfstatat(AT_FDCWD, "/sys/module/zfs/features.pool/com.delphix:redaction_bookmarks", {st_mode=S_IFDIR|0755, st_size=0, ...}, 0) = 0
newfstatat(AT_FDCWD, "/sys/module/zfs/features.pool/com.delphix:redacted_datasets", {st_mode=S_IFDIR|0755, st_size=0, ...}, 0) = 0
newfstatat(AT_FDCWD, "/sys/module/zfs/features.pool/com.delphix:bookmark_written", {st_mode=S_IFDIR|0755, st_size=0, ...}, 0) = 0
newfstatat(AT_FDCWD, "/sys/module/zfs/features.pool/com.delphix:device_removal", {st_mode=S_IFDIR|0755, st_size=0, ...}, 0) = 0
newfstatat(AT_FDCWD, "/sys/module/zfs/features.pool/com.delphix:obsolete_counts", {st_mode=S_IFDIR|0755, st_size=0, ...}, 0) = 0
newfstatat(AT_FDCWD, "/sys/module/zfs/features.pool/org.zfsonlinux:userobj_accounting", {st_mode=S_IFDIR|0755, st_size=0, ...}, 0) = 0
newfstatat(AT_FDCWD, "/sys/module/zfs/features.pool/com.datto:bookmark_v2", {st_mode=S_IFDIR|0755, st_size=0, ...}, 0) = 0
newfstatat(AT_FDCWD, "/sys/module/zfs/features.pool/com.datto:encryption", {st_mode=S_IFDIR|0755, st_size=0, ...}, 0) = 0
newfstatat(AT_FDCWD, "/sys/module/zfs/features.pool/org.zfsonlinux:project_quota", {st_mode=S_IFDIR|0755, st_size=0, ...}, 0) = 0
newfstatat(AT_FDCWD, "/sys/module/zfs/features.pool/org.zfsonlinux:allocation_classes", {st_mode=S_IFDIR|0755, st_size=0, ...}, 0) = 0
newfstatat(AT_FDCWD, "/sys/module/zfs/features.pool/com.datto:resilver_defer", {st_mode=S_IFDIR|0755, st_size=0, ...}, 0) = 0
newfstatat(AT_FDCWD, "/sys/module/zfs/features.pool/org.openzfs:device_rebuild", {st_mode=S_IFDIR|0755, st_size=0, ...}, 0) = 0
newfstatat(AT_FDCWD, "/sys/module/zfs/features.pool/org.freebsd:zstd_compress", {st_mode=S_IFDIR|0755, st_size=0, ...}, 0) = 0
newfstatat(AT_FDCWD, "/sys/module/zfs/features.pool/org.openzfs:draid", {st_mode=S_IFDIR|0755, st_size=0, ...}, 0) = 0
mmap(NULL, 266240, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7f49507f1000
ioctl(3, ZFS_IOC_POOL_CONFIGS, 0x7ffd987b2000) = 0
munmap(0x7f49507f1000, 266240) = 0
ioctl(3, ZFS_IOC_POOL_STATS
```
Label: defect
fdcwd sys module zfs properties dataset overlay st mode s ifdir st size newfstatat at fdcwd sys module zfs properties dataset version st mode s ifdir st size newfstatat at fdcwd sys module zfs properties dataset canmount st mode s ifdir st size newfstatat at fdcwd sys module zfs properties dataset mounted st mode s ifdir st size newfstatat at fdcwd sys module zfs properties dataset defer destroy st mode s ifdir st size newfstatat at fdcwd sys module zfs properties dataset keystatus st mode s ifdir st size newfstatat at fdcwd sys module zfs properties dataset normalization st mode s ifdir st size newfstatat at fdcwd sys module zfs properties dataset casesensitivity st mode s ifdir st size newfstatat at fdcwd sys module zfs properties dataset keyformat st mode s ifdir st size newfstatat at fdcwd sys module zfs properties dataset encryption st mode s ifdir st size newfstatat at fdcwd sys module zfs properties dataset st mode s ifdir st size newfstatat at fdcwd sys module zfs properties dataset origin st mode s ifdir st size newfstatat at fdcwd sys module zfs properties dataset clones st mode s ifdir st size newfstatat at fdcwd sys module zfs properties dataset mountpoint st mode s ifdir st size newfstatat at fdcwd sys module zfs properties dataset sharenfs st mode s ifdir st size newfstatat at fdcwd sys module zfs properties dataset type st mode s ifdir st size newfstatat at fdcwd sys module zfs properties dataset sharesmb st mode s ifdir st size newfstatat at fdcwd sys module zfs properties dataset mlslabel st mode s ifdir st size newfstatat at fdcwd sys module zfs properties dataset context st mode s ifdir st size newfstatat at fdcwd sys module zfs properties dataset fscontext st mode s ifdir st size newfstatat at fdcwd sys module zfs properties dataset defcontext st mode s ifdir st size newfstatat at fdcwd sys module zfs properties dataset rootcontext st mode s ifdir st size newfstatat at fdcwd sys module zfs properties dataset receive resume token st mode s ifdir 
st size newfstatat at fdcwd sys module zfs properties dataset encryptionroot st mode s ifdir st size newfstatat at fdcwd sys module zfs properties dataset keylocation st mode s ifdir st size newfstatat at fdcwd sys module zfs properties dataset redact snaps st mode s ifdir st size newfstatat at fdcwd sys module zfs properties dataset used st mode s ifdir st size newfstatat at fdcwd sys module zfs properties dataset available st mode s ifdir st size newfstatat at fdcwd sys module zfs properties dataset referenced st mode s ifdir st size newfstatat at fdcwd sys module zfs properties dataset compressratio st mode s ifdir st size newfstatat at fdcwd sys module zfs properties dataset refcompressratio st mode s ifdir st size newfstatat at fdcwd sys module zfs properties dataset volblocksize st mode s ifdir st size newfstatat at fdcwd sys module zfs properties dataset usedbysnapshots st mode s ifdir st size newfstatat at fdcwd sys module zfs properties dataset usedbydataset st mode s ifdir st size newfstatat at fdcwd sys module zfs properties dataset usedbychildren st mode s ifdir st size newfstatat at fdcwd sys module zfs properties dataset usedbyrefreservation st mode s ifdir st size newfstatat at fdcwd sys module zfs properties dataset userrefs st mode s ifdir st size newfstatat at fdcwd sys module zfs properties dataset written st mode s ifdir st size newfstatat at fdcwd sys module zfs properties dataset logicalused st mode s ifdir st size newfstatat at fdcwd sys module zfs properties dataset logicalreferenced st mode s ifdir st size newfstatat at fdcwd sys module zfs properties dataset filesystem count st mode s ifdir st size newfstatat at fdcwd sys module zfs properties dataset snapshot count st mode s ifdir st size newfstatat at fdcwd sys module zfs properties dataset guid st mode s ifdir st size newfstatat at fdcwd sys module zfs properties dataset createtxg st mode s ifdir st size newfstatat at fdcwd sys module zfs properties dataset st mode s ifdir st size 
newfstatat at fdcwd sys module zfs properties dataset objsetid st mode s ifdir st size newfstatat at fdcwd sys module zfs properties dataset quota st mode s ifdir st size newfstatat at fdcwd sys module zfs properties dataset reservation st mode s ifdir st size newfstatat at fdcwd sys module zfs properties dataset volsize st mode s ifdir st size newfstatat at fdcwd sys module zfs properties dataset refquota st mode s ifdir st size newfstatat at fdcwd sys module zfs properties dataset refreservation st mode s ifdir st size newfstatat at fdcwd sys module zfs properties dataset filesystem limit st mode s ifdir st size newfstatat at fdcwd sys module zfs properties dataset snapshot limit st mode s ifdir st size newfstatat at fdcwd sys module zfs properties dataset recordsize st mode s ifdir st size newfstatat at fdcwd sys module zfs properties dataset special small blocks st mode s ifdir st size newfstatat at fdcwd sys module zfs properties dataset numclones st mode s ifdir st size newfstatat at fdcwd sys module zfs properties dataset name st mode s ifdir st size newfstatat at fdcwd sys module zfs properties dataset iscsioptions st mode s ifdir st size newfstatat at fdcwd sys module zfs properties dataset stmf sbd lu st mode s ifdir st size newfstatat at fdcwd sys module zfs properties dataset useraccounting st mode s ifdir st size newfstatat at fdcwd sys module zfs properties dataset unique st mode s ifdir st size newfstatat at fdcwd sys module zfs properties dataset inconsistent st mode s ifdir st size newfstatat at fdcwd sys module zfs properties dataset ivsetguid st mode s ifdir st size newfstatat at fdcwd sys module zfs properties dataset prevsnap st mode s ifdir st size newfstatat at fdcwd sys module zfs properties dataset st mode s ifdir st size newfstatat at fdcwd sys module zfs properties dataset keyguid st mode s ifdir st size newfstatat at fdcwd sys module zfs properties dataset redacted st mode s ifdir st size newfstatat at fdcwd sys module zfs properties 
dataset remaptxg st mode s ifdir st size newfstatat at fdcwd sys module zfs properties dataset creation st mode s ifdir st size newfstatat at fdcwd sys module zfs properties pool altroot st mode s ifdir st size newfstatat at fdcwd sys module zfs properties pool bootfs st mode s ifdir st size newfstatat at fdcwd sys module zfs properties pool cachefile st mode s ifdir st size newfstatat at fdcwd sys module zfs properties pool comment st mode s ifdir st size newfstatat at fdcwd sys module zfs properties pool compatibility st mode s ifdir st size newfstatat at fdcwd sys module zfs properties pool size st mode s ifdir st size newfstatat at fdcwd sys module zfs properties pool free st mode s ifdir st size newfstatat at fdcwd sys module zfs properties pool freeing st mode s ifdir st size newfstatat at fdcwd sys module zfs properties pool checkpoint st mode s ifdir st size newfstatat at fdcwd sys module zfs properties pool leaked st mode s ifdir st size newfstatat at fdcwd sys module zfs properties pool allocated st mode s ifdir st size newfstatat at fdcwd sys module zfs properties pool expandsize st mode s ifdir st size newfstatat at fdcwd sys module zfs properties pool fragmentation st mode s ifdir st size newfstatat at fdcwd sys module zfs properties pool capacity st mode s ifdir st size newfstatat at fdcwd sys module zfs properties pool guid st mode s ifdir st size newfstatat at fdcwd sys module zfs properties pool load guid st mode s ifdir st size newfstatat at fdcwd sys module zfs properties pool health st mode s ifdir st size newfstatat at fdcwd sys module zfs properties pool dedupratio st mode s ifdir st size newfstatat at fdcwd sys module zfs properties pool version st mode s ifdir st size newfstatat at fdcwd sys module zfs properties pool ashift st mode s ifdir st size newfstatat at fdcwd sys module zfs properties pool delegation st mode s ifdir st size newfstatat at fdcwd sys module zfs properties pool autoreplace st mode s ifdir st size newfstatat at fdcwd sys 
module zfs properties pool listsnapshots st mode s ifdir st size newfstatat at fdcwd sys module zfs properties pool autoexpand st mode s ifdir st size newfstatat at fdcwd sys module zfs properties pool readonly st mode s ifdir st size newfstatat at fdcwd sys module zfs properties pool multihost st mode s ifdir st size newfstatat at fdcwd sys module zfs properties pool failmode st mode s ifdir st size newfstatat at fdcwd sys module zfs properties pool autotrim st mode s ifdir st size newfstatat at fdcwd sys module zfs properties pool name st mode s ifdir st size newfstatat at fdcwd sys module zfs properties pool maxblocksize st mode s ifdir st size newfstatat at fdcwd sys module zfs properties pool tname st mode s ifdir st size newfstatat at fdcwd sys module zfs properties pool maxdnodesize st mode s ifdir st size newfstatat at fdcwd sys module zfs properties pool dedupditto st mode s ifdir st size newfstatat at fdcwd sys module zfs features pool com delphix async destroy st mode s ifdir st size newfstatat at fdcwd sys module zfs features pool com delphix empty bpobj st mode s ifdir st size newfstatat at fdcwd sys module zfs features pool org illumos compress st mode s ifdir st size newfstatat at fdcwd sys module zfs features pool com joyent multi vdev crash dump st mode s ifdir st size newfstatat at fdcwd sys module zfs features pool com delphix spacemap histogram st mode s ifdir st size newfstatat at fdcwd sys module zfs features pool com delphix enabled txg st mode s ifdir st size newfstatat at fdcwd sys module zfs features pool com delphix hole birth st mode s ifdir st size newfstatat at fdcwd sys module zfs features pool com delphix zpool checkpoint st mode s ifdir st size newfstatat at fdcwd sys module zfs features pool com delphix spacemap st mode s ifdir st size newfstatat at fdcwd sys module zfs features pool com delphix extensible dataset st mode s ifdir st size newfstatat at fdcwd sys module zfs features pool com delphix bookmarks st mode s ifdir st size 
newfstatat at fdcwd sys module zfs features pool com joyent filesystem limits st mode s ifdir st size newfstatat at fdcwd sys module zfs features pool com delphix embedded data st mode s ifdir st size newfstatat at fdcwd sys module zfs features pool com delphix livelist st mode s ifdir st size newfstatat at fdcwd sys module zfs features pool com delphix log spacemap st mode s ifdir st size newfstatat at fdcwd sys module zfs features pool org open zfs large blocks st mode s ifdir st size newfstatat at fdcwd sys module zfs features pool org zfsonlinux large dnode st mode s ifdir st size newfstatat at fdcwd sys module zfs features pool org illumos st mode s ifdir st size newfstatat at fdcwd sys module zfs features pool org illumos skein st mode s ifdir st size newfstatat at fdcwd sys module zfs features pool org illumos edonr st mode s ifdir st size newfstatat at fdcwd sys module zfs features pool com delphix redaction bookmarks st mode s ifdir st size newfstatat at fdcwd sys module zfs features pool com delphix redacted datasets st mode s ifdir st size newfstatat at fdcwd sys module zfs features pool com delphix bookmark written st mode s ifdir st size newfstatat at fdcwd sys module zfs features pool com delphix device removal st mode s ifdir st size newfstatat at fdcwd sys module zfs features pool com delphix obsolete counts st mode s ifdir st size newfstatat at fdcwd sys module zfs features pool org zfsonlinux userobj accounting st mode s ifdir st size newfstatat at fdcwd sys module zfs features pool com datto bookmark st mode s ifdir st size newfstatat at fdcwd sys module zfs features pool com datto encryption st mode s ifdir st size newfstatat at fdcwd sys module zfs features pool org zfsonlinux project quota st mode s ifdir st size newfstatat at fdcwd sys module zfs features pool org zfsonlinux allocation classes st mode s ifdir st size newfstatat at fdcwd sys module zfs features pool com datto resilver defer st mode s ifdir st size newfstatat at fdcwd sys 
module zfs features pool org openzfs device rebuild st mode s ifdir st size newfstatat at fdcwd sys module zfs features pool org freebsd zstd compress st mode s ifdir st size newfstatat at fdcwd sys module zfs features pool org openzfs draid st mode s ifdir st size mmap null prot read prot write map private map anonymous ioctl zfs ioc pool configs munmap ioctl zfs ioc pool stats
1
49,081
20,573,869,407
IssuesEvent
2022-03-04 00:55:55
BCDevOps/developer-experience
https://api.github.com/repos/BCDevOps/developer-experience
closed
Alert when Postgres Has no synchronous standby
artifactory patroni ops medium priority ops and shared services
Patroni HA DB has occasionally lost it's synchronous standby and did not automatically re-initializing one. A restart of a replication member will reset the Sync Standby flag for a member. - [x] Alerting for this situation to be added - [x] Possible monitoring and auto-heal?
1.0
Alert when Postgres Has no synchronous standby - Patroni HA DB has occasionally lost it's synchronous standby and did not automatically re-initializing one. A restart of a replication member will reset the Sync Standby flag for a member. - [x] Alerting for this situation to be added - [x] Possible monitoring and auto-heal?
non_defect
alert when postgres has no synchronous standby patroni ha db has occasionally lost it s synchronous standby and did not automatically re initializing one a restart of a replication member will reset the sync standby flag for a member alerting for this situation to be added possible monitoring and auto heal
0
132,301
5,176,312,013
IssuesEvent
2017-01-19 00:10:50
SemanticWebBuilder/SWBPortal
https://api.github.com/repos/SemanticWebBuilder/SWBPortal
closed
El componente RecursoLastUpdatesJSP tarda 141818ms en cargar los registros
bug component:SWBPortal priority:Major resolution:Fixed
<p>El componente RecursoLastUpdatesJSP tarda 141818ms en cargar los registros por primera vez (Optimizarlo)</p> ## Affects versions * 4.2.0.5 ## Fix versions * 4.2.0.5 ## Environment <p>Tomcat 7<br/> Hipersonic<br/> Win 7</p> ## Kenai Metadata |created|updated|reporter|assignee|due|resolved|link| |---|---|---|---|---|---|---| |Tue, 25 Sep 2012 18:48:17 +0000|Wed, 26 Sep 2012 22:49:55 +0000|jordi|francisco_jimenez|Tue, 25 Sep 2012 00:00:00 +0000|Wed, 26 Sep 2012 22:49:55 +0000|[https://kenai.com/jira/browse/SEMANTICWEBBUILDER-60](https://kenai.com/jira/browse/SEMANTICWEBBUILDER-60)|
1.0
El componente RecursoLastUpdatesJSP tarda 141818ms en cargar los registros - <p>El componente RecursoLastUpdatesJSP tarda 141818ms en cargar los registros por primera vez (Optimizarlo)</p> ## Affects versions * 4.2.0.5 ## Fix versions * 4.2.0.5 ## Environment <p>Tomcat 7<br/> Hipersonic<br/> Win 7</p> ## Kenai Metadata |created|updated|reporter|assignee|due|resolved|link| |---|---|---|---|---|---|---| |Tue, 25 Sep 2012 18:48:17 +0000|Wed, 26 Sep 2012 22:49:55 +0000|jordi|francisco_jimenez|Tue, 25 Sep 2012 00:00:00 +0000|Wed, 26 Sep 2012 22:49:55 +0000|[https://kenai.com/jira/browse/SEMANTICWEBBUILDER-60](https://kenai.com/jira/browse/SEMANTICWEBBUILDER-60)|
non_defect
el componente recursolastupdatesjsp tarda en cargar los registros el componente recursolastupdatesjsp tarda en cargar los registros por primera vez optimizarlo affects versions fix versions environment tomcat hipersonic win kenai metadata created updated reporter assignee due resolved link tue sep wed sep jordi francisco jimenez tue sep wed sep
0
63,920
18,058,033,918
IssuesEvent
2021-09-20 10:45:47
vector-im/element-web
https://api.github.com/repos/vector-im/element-web
closed
Slack bridge to a channel in someone else's workspace
T-Defect A-Scalar
### Steps to reproduce 1. Where are you starting? What can you see? - A partner organisation wishes to collaborate with us and has created a shared (open) slack channel, inviting the relevant people from our team into it. I invited the Element Bridge app in. I know the bridge functionality is working because it is already operational to other shared slack channels, both channels owned by our slack workspace, and channels owned by others' slack workspaces. 2. What do you click? I go to the Home space, then click on the explore rooms compass icon. I then search for the bridged room (both full name and fragments of the title). The room does not appear. ### What happened? #### What did you expect? I expected to be able to see the slack channel as a bridged room in the list of available rooms to join, as has happened when bridging to other slack channels. #### What happened? I couldn't find the room. ### Operating system Windows 10 Home, 21H1 ### Application version Element version: 1.8.5 Olm version: 3.2.3 ### How did you install the app? From the Element site ### Homeserver allied-partners.element.io ### Have you submitted a rageshake? No
1.0
Slack bridge to a channel in someone else's workspace - ### Steps to reproduce 1. Where are you starting? What can you see? - A partner organisation wishes to collaborate with us and has created a shared (open) slack channel, inviting the relevant people from our team into it. I invited the Element Bridge app in. I know the bridge functionality is working because it is already operational to other shared slack channels, both channels owned by our slack workspace, and channels owned by others' slack workspaces. 2. What do you click? I go to the Home space, then click on the explore rooms compass icon. I then search for the bridged room (both full name and fragments of the title). The room does not appear. ### What happened? #### What did you expect? I expected to be able to see the slack channel as a bridged room in the list of available rooms to join, as has happened when bridging to other slack channels. #### What happened? I couldn't find the room. ### Operating system Windows 10 Home, 21H1 ### Application version Element version: 1.8.5 Olm version: 3.2.3 ### How did you install the app? From the Element site ### Homeserver allied-partners.element.io ### Have you submitted a rageshake? No
defect
slack bridge to a channel in someone else s workspace steps to reproduce where are you starting what can you see a partner organisation wishes to collaborate with us and has created a shared open slack channel inviting the relevant people from our team into it i invited the element bridge app in i know the bridge functionality is working because it is already operational to other shared slack channels both channels owned by our slack workspace and channels owned by others slack workspaces what do you click i go to the home space then click on the explore rooms compass icon i then search for the bridged room both full name and fragments of the title the room does not appear what happened what did you expect i expected to be able to see the slack channel as a bridged room in the list of available rooms to join as has happened when bridging to other slack channels what happened i couldn t find the room operating system windows home application version element version olm version how did you install the app from the element site homeserver allied partners element io have you submitted a rageshake no
1
52,103
13,211,389,070
IssuesEvent
2020-08-15 22:47:18
icecube-trac/tix4
https://api.github.com/repos/icecube-trac/tix4
opened
IceRec.IC2011-L2_V12-08-00_IceSim4compat_V5 compatible with version 10 of I3DOMCalibration (Trac #1691)
Incomplete Migration Migrated from Trac combo reconstruction defect
<details> <summary><em>Migrated from <a href="https://code.icecube.wisc.edu/projects/icecube/ticket/1691">https://code.icecube.wisc.edu/projects/icecube/ticket/1691</a>, reported by juancarlosand owned by olivas</em></summary> <p> ```json { "status": "closed", "changetime": "2019-02-13T14:12:58", "_ts": "1550067178841456", "description": "The latest IceRec for 2011 (IceSim4 compat) is not compatible with version I3DOMCalibration. We need a new release with a newer version of dataclasses that will support the official GCD.", "reporter": "juancarlos", "cc": "javierg@udel.edu", "resolution": "wontfix", "time": "2016-05-04T19:04:57", "component": "combo reconstruction", "summary": "IceRec.IC2011-L2_V12-08-00_IceSim4compat_V5 compatible with version 10 of I3DOMCalibration", "priority": "critical", "keywords": "IceRec", "milestone": "", "owner": "olivas", "type": "defect" } ``` </p> </details>
1.0
IceRec.IC2011-L2_V12-08-00_IceSim4compat_V5 compatible with version 10 of I3DOMCalibration (Trac #1691) - <details> <summary><em>Migrated from <a href="https://code.icecube.wisc.edu/projects/icecube/ticket/1691">https://code.icecube.wisc.edu/projects/icecube/ticket/1691</a>, reported by juancarlosand owned by olivas</em></summary> <p> ```json { "status": "closed", "changetime": "2019-02-13T14:12:58", "_ts": "1550067178841456", "description": "The latest IceRec for 2011 (IceSim4 compat) is not compatible with version I3DOMCalibration. We need a new release with a newer version of dataclasses that will support the official GCD.", "reporter": "juancarlos", "cc": "javierg@udel.edu", "resolution": "wontfix", "time": "2016-05-04T19:04:57", "component": "combo reconstruction", "summary": "IceRec.IC2011-L2_V12-08-00_IceSim4compat_V5 compatible with version 10 of I3DOMCalibration", "priority": "critical", "keywords": "IceRec", "milestone": "", "owner": "olivas", "type": "defect" } ``` </p> </details>
defect
icerec compatible with version of trac migrated from json status closed changetime ts description the latest icerec for compat is not compatible with version we need a new release with a newer version of dataclasses that will support the official gcd reporter juancarlos cc javierg udel edu resolution wontfix time component combo reconstruction summary icerec compatible with version of priority critical keywords icerec milestone owner olivas type defect
1
58,615
16,636,610,582
IssuesEvent
2021-06-04 00:07:36
openzfs/zfs
https://api.github.com/repos/openzfs/zfs
opened
Error building osd-zfs during Lustre installation
Status: Triage Needed Type: Defect
<!-- Please fill out the following template, which will help other contributors address your issue. --> <!-- Thank you for reporting an issue. *IMPORTANT* - Please check our issue tracker before opening a new issue. Additional valuable information can be found in the OpenZFS documentation and mailing list archives. Please fill in as much of the template as possible. --> ### System information <!-- add version after "|" character --> Type | Version/Name --- | --- Distribution Name | CentOS Distribution Version | CentOS 8 Linux Kernel | 4.18.0-240.1.1.el8_3.x86_64 Architecture | x86_64 ZFS Version | 2.0.0-1 SPL Version | <!-- Commands to find ZFS/SPL versions: modinfo zfs | grep -iw version modinfo spl | grep -iw version --> ### Describe the problem you're observing I ran into this error when trying to install Lustre server packages. I'm reporting this here because the error message is a missing zfs header file. Let me know if this a Lustre bug and I should report it there instead. Installing zfs-dkms throws the following error: ``` Error! Bad return status for module build on kernel: 4.18.0-240.1.1.el8_3.x86_64 (x86_64) Consult /var/lib/dkms/lustre-zfs/2.14.0/build/make.log for more information. warning: %post(lustre-zfs-dkms-2.14.0-1.el8.noarch) scriptlet failed, exit status 10 Error in POSTIN scriptlet in rpm package lustre-zfs-dkms ``` ### Describe how to reproduce the problem ``` DIST="el8_3" yum -y install epel-release yum -y install http://download.zfsonlinux.org/epel/zfs-release.$DIST.noarch.rpm # Add the power tools repo, if isn't already there yum -y install https://dl.fedoraproject.org/pub/epel/epel-release-latest-8.noarch.rpm yum config-manager --set-enabled powertools # Download lustre PACKAGE="lustre" ARCH="x86_64" [ ! 
-d $PACKAGE ] && mkdir $PACKAGE cd $PACKAGE wget -r -nc -nd -level=0 -R "*index.html*,*debuginfo*" -np https://downloads.whamcloud.com/public/lustre/latest-feature-release/el8.3.2011/server/RPMS/x86_64/ # Install lustre packages yum -y install --nogpg ./lustre-zfs-dkms-2.14.0-1.el8.noarch.rpm ./lustre-osd-zfs-mount-2.14.0-1.el8.x86_64.rpm ./kmod-lustre-2.14.0-1.el8.x86_64.rpm ./kmod-lustre-osd-zfs-2.14.0-1.el8.x86_64.rpm ./libzfs4-2.0.0-1.el8.x86_64.rpm ./kmod-zfs-4.18.0-240.1.1.el8_lustre.x86_64-2.0.0-1.el8.x86_64.rpm ./zfs-kmod-debugsource-2.0.0-1.el8.x86_64.rpm ./kmod-zfs-4.18.0-240.1.1.el8_lustre.x86_64-2.0.0-1.el8.x86_64.rpm ./zfs-2.0.0-1.el8.x86_64.rpm ./libzpool4-2.0.0-1.el8.x86_64.rpm ./libnvpair3-2.0.0-1.el8.x86_64.rpm ./libuutil3-2.0.0-1.el8.x86_64.rpm ./zfs-dkms-2.0.0-1.el8.noarch.rpm yum -y install ./lustre-2.14.0-1.el8.x86_64.rpm lustre-devel-2.14.0-1.el8.x86_64.rpm ./lustre-tests-2.14.0-1.el8.x86_64.rpm yum install ./python3-pyzfs-2.0.0-1.el8.noarch.rpm ./zfs-dracut-2.0.0-1.el8.noarch.rpm ./zfs-test-2.0.0-1.el8.x86_64.rpm # This package has dependencies I don't know how to resolve # yum install lustre-resource-agents-2.14.0-1.el8.x86_64.rpm cd ../ ``` To replicate this issue, I rebuilt all lustre zfs kernel modules ``` dkms remove -m lustre-zfs -v 2.14.0 -k $(uname -r) dkms build -m lustre-zfs -v 2.14.0 ``` Error message: ``` <snip> config.status: executing depfiles commands config.status: executing libtool commands CC: gcc LD: /bin/ld -m elf_x86_64 CPPFLAGS: -include /var/lib/dkms/lustre-zfs/2.14.0/build/undef.h -include /var/lib/dkms/lustre-zfs/2.14.0/build/config.h -I/var/lib/dkms/lustre-zfs/2.14.0/build/lnet/include/uapi -I/var/lib/dkms/lustre-zfs/2.14.0/build/lustre/include/uapi -I/var/lib/dkms/lustre-zfs/2.14.0/build/libcfs/include -I/var/lib/dkms/lustre-zfs/2.14.0/build/lnet/utils -I/var/lib/dkms/lustre-zfs/2.14.0/build/lustre/include CFLAGS: -g -O2 -Wall -Werror EXTRA_KCFLAGS: -include /var/lib/dkms/lustre-zfs/2.14.0/build/undef.h -include 
/var/lib/dkms/lustre-zfs/2.14.0/build/config.h -g -I/var/lib/dkms/lustre-zfs/2.14.0/build/libcfs/include -I/var/lib/dkms/lustre-zfs/2.14.0/build/libcfs/include/libcfs -I/var/lib/dkms/lustre-zfs/2.14.0/build/lnet/include/uapi -I/var/lib/dkms/lustre-zfs/2.14.0/build/lnet/include -I/var/lib/dkms/lustre-zfs/2.14.0/build/lustre/include/uapi -I/var/lib/dkms/lustre-zfs/2.14.0/build/lustre/include -Wno-format-truncation -Wno-stringop-truncation -Wno-stringop-overflow Type 'make' to build Lustre. Building module: cleaning build area... make -j8 KERNELRELEASE=4.18.0-240.1.1.el8_3.x86_64................(bad exit status: 2) Error! Bad return status for module build on kernel: 4.18.0-240.1.1.el8_3.x86_64 (x86_64) Consult /var/lib/dkms/lustre-zfs/2.14.0/build/make.log for more information. ``` Relevant messages from make.log: ``` <snip> CC [M] /var/lib/dkms/lustre-zfs/2.14.0/build/lustre/ofd/ofd_fs.o CC [M] /var/lib/dkms/lustre-zfs/2.14.0/build/lustre/llite/crypto.o CC [M] /var/lib/dkms/lustre-zfs/2.14.0/build/lustre/osd-zfs/osd_handler.o CC [M] /var/lib/dkms/lustre-zfs/2.14.0/build/lustre/mgs/mgs_barrier.o In file included from /usr/src/zfs-2.0.0/include/sys/arc.h:32, from /var/lib/dkms/lustre-zfs/2.14.0/build/lustre/osd-zfs/osd_internal.h:51, from /var/lib/dkms/lustre-zfs/2.14.0/build/lustre/osd-zfs/osd_handler.c:52: /usr/src/zfs-2.0.0/include/sys/zfs_context.h:45:10: fatal error: sys/types.h: No such file or directory #include <sys/types.h> ^~~~~~~~~~~~~ compilation terminated. make[6]: *** [scripts/Makefile.build:315: /var/lib/dkms/lustre-zfs/2.14.0/build/lustre/osd-zfs/osd_handler.o] Error 1 make[5]: *** [scripts/Makefile.build:556: /var/lib/dkms/lustre-zfs/2.14.0/build/lustre/osd-zfs] Error 2 make[5]: *** Waiting for unfinished jobs.... 
CC [M] /var/lib/dkms/lustre-zfs/2.14.0/build/lustre/osc/osc_dev.o CC [M] /var/lib/dkms/lustre-zfs/2.14.0/build/lustre/obdclass/llog_osd.o CC [M] /var/lib/dkms/lustre-zfs/2.14.0/build/lustre/ofd/ofd_trans.o <snip> ``` ### Include any warning/errors/backtraces from the system logs N/A <!-- *IMPORTANT* - Please mark logs and text output from terminal commands or else Github will not display them correctly. An example is provided below. Example: ``` this is an example how log text should be marked (wrap it with ```) ``` -->
1.0
Error building osd-zfs during Lustre installation - <!-- Please fill out the following template, which will help other contributors address your issue. --> <!-- Thank you for reporting an issue. *IMPORTANT* - Please check our issue tracker before opening a new issue. Additional valuable information can be found in the OpenZFS documentation and mailing list archives. Please fill in as much of the template as possible. --> ### System information <!-- add version after "|" character --> Type | Version/Name --- | --- Distribution Name | CentOS Distribution Version | CentOS 8 Linux Kernel | 4.18.0-240.1.1.el8_3.x86_64 Architecture | x86_64 ZFS Version | 2.0.0-1 SPL Version | <!-- Commands to find ZFS/SPL versions: modinfo zfs | grep -iw version modinfo spl | grep -iw version --> ### Describe the problem you're observing I ran into this error when trying to install Lustre server packages. I'm reporting this here because the error message is a missing zfs header file. Let me know if this a Lustre bug and I should report it there instead. Installing zfs-dkms throws the following error: ``` Error! Bad return status for module build on kernel: 4.18.0-240.1.1.el8_3.x86_64 (x86_64) Consult /var/lib/dkms/lustre-zfs/2.14.0/build/make.log for more information. warning: %post(lustre-zfs-dkms-2.14.0-1.el8.noarch) scriptlet failed, exit status 10 Error in POSTIN scriptlet in rpm package lustre-zfs-dkms ``` ### Describe how to reproduce the problem ``` DIST="el8_3" yum -y install epel-release yum -y install http://download.zfsonlinux.org/epel/zfs-release.$DIST.noarch.rpm # Add the power tools repo, if isn't already there yum -y install https://dl.fedoraproject.org/pub/epel/epel-release-latest-8.noarch.rpm yum config-manager --set-enabled powertools # Download lustre PACKAGE="lustre" ARCH="x86_64" [ ! -d $PACKAGE ] && mkdir $PACKAGE cd $PACKAGE wget -r -nc -nd -level=0 -R "*index.html*,*debuginfo*" -np https://downloads.whamcloud.com/public/lustre/latest-feature-release/el8.3.2011/server/RPMS/x86_64/ # Install lustre packages yum -y install --nogpg ./lustre-zfs-dkms-2.14.0-1.el8.noarch.rpm ./lustre-osd-zfs-mount-2.14.0-1.el8.x86_64.rpm ./kmod-lustre-2.14.0-1.el8.x86_64.rpm ./kmod-lustre-osd-zfs-2.14.0-1.el8.x86_64.rpm ./libzfs4-2.0.0-1.el8.x86_64.rpm ./kmod-zfs-4.18.0-240.1.1.el8_lustre.x86_64-2.0.0-1.el8.x86_64.rpm ./zfs-kmod-debugsource-2.0.0-1.el8.x86_64.rpm ./kmod-zfs-4.18.0-240.1.1.el8_lustre.x86_64-2.0.0-1.el8.x86_64.rpm ./zfs-2.0.0-1.el8.x86_64.rpm ./libzpool4-2.0.0-1.el8.x86_64.rpm ./libnvpair3-2.0.0-1.el8.x86_64.rpm ./libuutil3-2.0.0-1.el8.x86_64.rpm ./zfs-dkms-2.0.0-1.el8.noarch.rpm yum -y install ./lustre-2.14.0-1.el8.x86_64.rpm lustre-devel-2.14.0-1.el8.x86_64.rpm ./lustre-tests-2.14.0-1.el8.x86_64.rpm yum install ./python3-pyzfs-2.0.0-1.el8.noarch.rpm ./zfs-dracut-2.0.0-1.el8.noarch.rpm ./zfs-test-2.0.0-1.el8.x86_64.rpm # This package has dependencies I don't know how to resolve # yum install lustre-resource-agents-2.14.0-1.el8.x86_64.rpm cd ../ ``` To replicate this issue, I rebuilt all lustre zfs kernel modules ``` dkms remove -m lustre-zfs -v 2.14.0 -k $(uname -r) dkms build -m lustre-zfs -v 2.14.0 ``` Error message: ``` <snip> config.status: executing depfiles commands config.status: executing libtool commands CC: gcc LD: /bin/ld -m elf_x86_64 CPPFLAGS: -include /var/lib/dkms/lustre-zfs/2.14.0/build/undef.h -include /var/lib/dkms/lustre-zfs/2.14.0/build/config.h -I/var/lib/dkms/lustre-zfs/2.14.0/build/lnet/include/uapi -I/var/lib/dkms/lustre-zfs/2.14.0/build/lustre/include/uapi -I/var/lib/dkms/lustre-zfs/2.14.0/build/libcfs/include -I/var/lib/dkms/lustre-zfs/2.14.0/build/lnet/utils -I/var/lib/dkms/lustre-zfs/2.14.0/build/lustre/include CFLAGS: -g -O2 -Wall -Werror EXTRA_KCFLAGS: -include /var/lib/dkms/lustre-zfs/2.14.0/build/undef.h -include /var/lib/dkms/lustre-zfs/2.14.0/build/config.h -g -I/var/lib/dkms/lustre-zfs/2.14.0/build/libcfs/include -I/var/lib/dkms/lustre-zfs/2.14.0/build/libcfs/include/libcfs -I/var/lib/dkms/lustre-zfs/2.14.0/build/lnet/include/uapi -I/var/lib/dkms/lustre-zfs/2.14.0/build/lnet/include -I/var/lib/dkms/lustre-zfs/2.14.0/build/lustre/include/uapi -I/var/lib/dkms/lustre-zfs/2.14.0/build/lustre/include -Wno-format-truncation -Wno-stringop-truncation -Wno-stringop-overflow Type 'make' to build Lustre. Building module: cleaning build area... make -j8 KERNELRELEASE=4.18.0-240.1.1.el8_3.x86_64................(bad exit status: 2) Error! Bad return status for module build on kernel: 4.18.0-240.1.1.el8_3.x86_64 (x86_64) Consult /var/lib/dkms/lustre-zfs/2.14.0/build/make.log for more information. ``` Relevant messages from make.log: ``` <snip> CC [M] /var/lib/dkms/lustre-zfs/2.14.0/build/lustre/ofd/ofd_fs.o CC [M] /var/lib/dkms/lustre-zfs/2.14.0/build/lustre/llite/crypto.o CC [M] /var/lib/dkms/lustre-zfs/2.14.0/build/lustre/osd-zfs/osd_handler.o CC [M] /var/lib/dkms/lustre-zfs/2.14.0/build/lustre/mgs/mgs_barrier.o In file included from /usr/src/zfs-2.0.0/include/sys/arc.h:32, from /var/lib/dkms/lustre-zfs/2.14.0/build/lustre/osd-zfs/osd_internal.h:51, from /var/lib/dkms/lustre-zfs/2.14.0/build/lustre/osd-zfs/osd_handler.c:52: /usr/src/zfs-2.0.0/include/sys/zfs_context.h:45:10: fatal error: sys/types.h: No such file or directory #include <sys/types.h> ^~~~~~~~~~~~~ compilation terminated. make[6]: *** [scripts/Makefile.build:315: /var/lib/dkms/lustre-zfs/2.14.0/build/lustre/osd-zfs/osd_handler.o] Error 1 make[5]: *** [scripts/Makefile.build:556: /var/lib/dkms/lustre-zfs/2.14.0/build/lustre/osd-zfs] Error 2 make[5]: *** Waiting for unfinished jobs.... CC [M] /var/lib/dkms/lustre-zfs/2.14.0/build/lustre/osc/osc_dev.o CC [M] /var/lib/dkms/lustre-zfs/2.14.0/build/lustre/obdclass/llog_osd.o CC [M] /var/lib/dkms/lustre-zfs/2.14.0/build/lustre/ofd/ofd_trans.o <snip> ``` ### Include any warning/errors/backtraces from the system logs N/A <!-- *IMPORTANT* - Please mark logs and text output from terminal commands or else Github will not display them correctly. An example is provided below. Example: ``` this is an example how log text should be marked (wrap it with ```) ``` -->
defect
error building osd zfs during lustre installation thank you for reporting an issue important please check our issue tracker before opening a new issue additional valuable information can be found in the openzfs documentation and mailing list archives please fill in as much of the template as possible system information type version name distribution name centos distribution version centos linux kernel architecture zfs version spl version commands to find zfs spl versions modinfo zfs grep iw version modinfo spl grep iw version describe the problem you re observing i ran into this error when trying to install lustre server packages i m reporting this here because the error message is a missing zfs header file let me know if this a lustre bug and i should report it there instead installing zfs dkms throws the following error error bad return status for module build on kernel consult var lib dkms lustre zfs build make log for more information warning post lustre zfs dkms noarch scriptlet failed exit status error in postin scriptlet in rpm package lustre zfs dkms describe how to reproduce the problem dist yum y install epel release yum y install add the power tools repo if isn t already there yum y install yum config manager set enabled powertools download lustre package lustre arch mkdir package cd package wget r nc nd level r index html debuginfo np install lustre packages yum y install nogpg lustre zfs dkms noarch rpm lustre osd zfs mount rpm kmod lustre rpm kmod lustre osd zfs rpm rpm kmod zfs lustre rpm zfs kmod debugsource rpm kmod zfs lustre rpm zfs rpm rpm rpm rpm zfs dkms noarch rpm yum y install lustre rpm lustre devel rpm lustre tests rpm yum install pyzfs noarch rpm zfs dracut noarch rpm zfs test rpm this package has dependencies i don t know how to resolve yum install lustre resource agents rpm cd to replicate this issue i rebuilt all lustre zfs kernel modules dkms remove m lustre zfs v k uname r dkms build m lustre zfs v error message config status executing depfiles commands config status executing libtool commands cc gcc ld bin ld m elf cppflags include var lib dkms lustre zfs build undef h include var lib dkms lustre zfs build config h i var lib dkms lustre zfs build lnet include uapi i var lib dkms lustre zfs build lustre include uapi i var lib dkms lustre zfs build libcfs include i var lib dkms lustre zfs build lnet utils i var lib dkms lustre zfs build lustre include cflags g wall werror extra kcflags include var lib dkms lustre zfs build undef h include var lib dkms lustre zfs build config h g i var lib dkms lustre zfs build libcfs include i var lib dkms lustre zfs build libcfs include libcfs i var lib dkms lustre zfs build lnet include uapi i var lib dkms lustre zfs build lnet include i var lib dkms lustre zfs build lustre include uapi i var lib dkms lustre zfs build lustre include wno format truncation wno stringop truncation wno stringop overflow type make to build lustre building module cleaning build area make kernelrelease bad exit status error bad return status for module build on kernel consult var lib dkms lustre zfs build make log for more information relevant messages from make log cc var lib dkms lustre zfs build lustre ofd ofd fs o cc var lib dkms lustre zfs build lustre llite crypto o cc var lib dkms lustre zfs build lustre osd zfs osd handler o cc var lib dkms lustre zfs build lustre mgs mgs barrier o in file included from usr src zfs include sys arc h from var lib dkms lustre zfs build lustre osd zfs osd internal h from var lib dkms lustre zfs build lustre osd zfs osd handler c usr src zfs include sys zfs context h fatal error sys types h no such file or directory include compilation terminated make error make error make waiting for unfinished jobs cc var lib dkms lustre zfs build lustre osc osc dev o cc var lib dkms lustre zfs build lustre obdclass llog osd o cc var lib dkms lustre zfs build lustre ofd ofd trans o include any warning errors backtraces from the system logs n a important please mark logs and text output from terminal commands or else github will not display them correctly an example is provided below example this is an example how log text should be marked wrap it with
1
67,443
8,133,866,424
IssuesEvent
2018-08-19 08:44:06
NG-ZORRO/ng-zorro-antd
https://api.github.com/repos/NG-ZORRO/ng-zorro-antd
closed
[nz-timeline-item] Custom color on circle's
Ant Design Enhancement
## What problem does this feature solve? In the DOC's [nz-timeline-item](https://ng.ant.design/components/timeline/en#nz-timeline-item), the description says: "Set the circle's color to blue, red, green or other custom colors", but what "custom colors"? I tried hex color, named color and none of this worked. ## What does the proposed API look like? That property "nzColor" should accept any type of color, e.g.: '#ced12e', 'rgb(255, 255, 255)', 'rgba(255, 255, 255, .65)', 'gray'.. etc. And, instead of using "blue, red or green" to default value, use "primary or default, error, success and warning". <!-- generated by ng-zorro-issue-helper. DO NOT REMOVE -->
1.0
[nz-timeline-item] Custom color on circle's - ## What problem does this feature solve? In the DOC's [nz-timeline-item](https://ng.ant.design/components/timeline/en#nz-timeline-item), the description says: "Set the circle's color to blue, red, green or other custom colors", but what "custom colors"? I tried hex color, named color and none of this worked. ## What does the proposed API look like? That property "nzColor" should accept any type of color, e.g.: '#ced12e', 'rgb(255, 255, 255)', 'rgba(255, 255, 255, .65)', 'gray'.. etc. And, instead of using "blue, red or green" to default value, use "primary or default, error, success and warning". <!-- generated by ng-zorro-issue-helper. DO NOT REMOVE -->
non_defect
custom color on circle s what problem does this feature solve in the doc s the description says set the circle s color to blue red green or other custom colors but what custom colors i tried hex color named color and none of this worked what does the proposed api look like that property nzcolor should accept any type of color e g rgb rgba gray etc and instead of using blue red or green to default value use primary or default error success and warning
0
43,710
11,800,355,095
IssuesEvent
2020-03-18 17:23:58
NREL/EnergyPlus
https://api.github.com/repos/NREL/EnergyPlus
closed
AirloopHVAC:UnitarySystem RH control broken with V9.2
Defect PriorityHigh
Issue overview -------------- Previous code refactoring broke RH control by converting a global MoistureLoad variable to an unused local variable. In other words the global MoisureLoad variable is always 0. ### Details Some additional details for this issue (if relevant): - Platform (Operating system, version) - Version of EnergyPlus (if using an intermediate build, include SHA) - Unmethours link or helpdesk ticket number ### Checklist Add to this list or remove from it as applicable. This is a simple templated set of guidelines. - [ ] Defect file added (list location of defect file here) - [x] Ticket added to Pivotal for defect (development team task) - [ ] Pull request created (the pull request will have additional tasks related to reviewing changes that fix this defect)
1.0
AirloopHVAC:UnitarySystem RH control broken with V9.2 - Issue overview -------------- Previous code refactoring broke RH control by converting a global MoistureLoad variable to an unused local variable. In other words the global MoisureLoad variable is always 0. ### Details Some additional details for this issue (if relevant): - Platform (Operating system, version) - Version of EnergyPlus (if using an intermediate build, include SHA) - Unmethours link or helpdesk ticket number ### Checklist Add to this list or remove from it as applicable. This is a simple templated set of guidelines. - [ ] Defect file added (list location of defect file here) - [x] Ticket added to Pivotal for defect (development team task) - [ ] Pull request created (the pull request will have additional tasks related to reviewing changes that fix this defect)
defect
airloophvac unitarysystem rh control broken with issue overview previous code refactoring broke rh control by converting a global moistureload variable to an unused local variable in other words the global moisureload variable is always details some additional details for this issue if relevant platform operating system version version of energyplus if using an intermediate build include sha unmethours link or helpdesk ticket number checklist add to this list or remove from it as applicable this is a simple templated set of guidelines defect file added list location of defect file here ticket added to pivotal for defect development team task pull request created the pull request will have additional tasks related to reviewing changes that fix this defect
1
220,116
24,562,374,455
IssuesEvent
2022-10-12 21:39:58
TreyM-WSS/concord
https://api.github.com/repos/TreyM-WSS/concord
closed
CVE-2021-33502 (High) detected in normalize-url-3.3.0.tgz, normalize-url-1.9.1.tgz - autoclosed
security vulnerability
## CVE-2021-33502 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>normalize-url-3.3.0.tgz</b>, <b>normalize-url-1.9.1.tgz</b></p></summary> <p> <details><summary><b>normalize-url-3.3.0.tgz</b></p></summary> <p>Normalize a URL</p> <p>Library home page: <a href="https://registry.npmjs.org/normalize-url/-/normalize-url-3.3.0.tgz">https://registry.npmjs.org/normalize-url/-/normalize-url-3.3.0.tgz</a></p> <p>Path to dependency file: /console2/package.json</p> <p>Path to vulnerable library: /console2/node_modules/postcss-normalize-url/node_modules/normalize-url/package.json</p> <p> Dependency Hierarchy: - react-scripts-3.4.1.tgz (Root Library) - optimize-css-assets-webpack-plugin-5.0.3.tgz - cssnano-4.1.10.tgz - cssnano-preset-default-4.0.7.tgz - postcss-normalize-url-4.0.1.tgz - :x: **normalize-url-3.3.0.tgz** (Vulnerable Library) </details> <details><summary><b>normalize-url-1.9.1.tgz</b></p></summary> <p>Normalize a URL</p> <p>Library home page: <a href="https://registry.npmjs.org/normalize-url/-/normalize-url-1.9.1.tgz">https://registry.npmjs.org/normalize-url/-/normalize-url-1.9.1.tgz</a></p> <p>Path to dependency file: /console2/package.json</p> <p>Path to vulnerable library: /console2/node_modules/normalize-url/package.json</p> <p> Dependency Hierarchy: - react-scripts-3.4.1.tgz (Root Library) - mini-css-extract-plugin-0.9.0.tgz - :x: **normalize-url-1.9.1.tgz** (Vulnerable Library) </details> <p>Found in HEAD commit: <a href="https://github.com/TreyM-WSS/concord/commit/a0a888b3b97fbcfb092cc1f80f98558cfea2d71f">a0a888b3b97fbcfb092cc1f80f98558cfea2d71f</a></p> <p>Found in base branch: <b>master</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> The normalize-url package before 4.5.1, 5.x before 5.3.1, and 6.x before 6.0.1 for Node.js has a ReDoS (regular expression denial of service) issue because it has exponential performance for data: URLs. <p>Publish Date: 2021-05-24 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-33502>CVE-2021-33502</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-33502">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-33502</a></p> <p>Release Date: 2021-05-24</p> <p>Fix Resolution (normalize-url): 4.5.1</p> <p>Direct dependency fix Resolution (react-scripts): 5.0.0</p><p>Fix Resolution (normalize-url): 4.5.1</p> <p>Direct dependency fix Resolution (react-scripts): 5.0.0</p> </p> </details> <p></p> *** :rescue_worker_helmet: Automatic Remediation is available for this issue
True
CVE-2021-33502 (High) detected in normalize-url-3.3.0.tgz, normalize-url-1.9.1.tgz - autoclosed - ## CVE-2021-33502 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>normalize-url-3.3.0.tgz</b>, <b>normalize-url-1.9.1.tgz</b></p></summary> <p> <details><summary><b>normalize-url-3.3.0.tgz</b></p></summary> <p>Normalize a URL</p> <p>Library home page: <a href="https://registry.npmjs.org/normalize-url/-/normalize-url-3.3.0.tgz">https://registry.npmjs.org/normalize-url/-/normalize-url-3.3.0.tgz</a></p> <p>Path to dependency file: /console2/package.json</p> <p>Path to vulnerable library: /console2/node_modules/postcss-normalize-url/node_modules/normalize-url/package.json</p> <p> Dependency Hierarchy: - react-scripts-3.4.1.tgz (Root Library) - optimize-css-assets-webpack-plugin-5.0.3.tgz - cssnano-4.1.10.tgz - cssnano-preset-default-4.0.7.tgz - postcss-normalize-url-4.0.1.tgz - :x: **normalize-url-3.3.0.tgz** (Vulnerable Library) </details> <details><summary><b>normalize-url-1.9.1.tgz</b></p></summary> <p>Normalize a URL</p> <p>Library home page: <a href="https://registry.npmjs.org/normalize-url/-/normalize-url-1.9.1.tgz">https://registry.npmjs.org/normalize-url/-/normalize-url-1.9.1.tgz</a></p> <p>Path to dependency file: /console2/package.json</p> <p>Path to vulnerable library: /console2/node_modules/normalize-url/package.json</p> <p> Dependency Hierarchy: - react-scripts-3.4.1.tgz (Root Library) - mini-css-extract-plugin-0.9.0.tgz - :x: **normalize-url-1.9.1.tgz** (Vulnerable Library) </details> <p>Found in HEAD commit: <a href="https://github.com/TreyM-WSS/concord/commit/a0a888b3b97fbcfb092cc1f80f98558cfea2d71f">a0a888b3b97fbcfb092cc1f80f98558cfea2d71f</a></p> <p>Found in base branch: <b>master</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> The normalize-url package before 4.5.1, 5.x before 5.3.1, and 6.x before 6.0.1 for Node.js has a ReDoS (regular expression denial of service) issue because it has exponential performance for data: URLs. <p>Publish Date: 2021-05-24 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-33502>CVE-2021-33502</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-33502">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-33502</a></p> <p>Release Date: 2021-05-24</p> <p>Fix Resolution (normalize-url): 4.5.1</p> <p>Direct dependency fix Resolution (react-scripts): 5.0.0</p><p>Fix Resolution (normalize-url): 4.5.1</p> <p>Direct dependency fix Resolution (react-scripts): 5.0.0</p> </p> </details> <p></p> *** :rescue_worker_helmet: Automatic Remediation is available for this issue
non_defect
cve high detected in normalize url tgz normalize url tgz autoclosed cve high severity vulnerability vulnerable libraries normalize url tgz normalize url tgz normalize url tgz normalize a url library home page a href path to dependency file package json path to vulnerable library node modules postcss normalize url node modules normalize url package json dependency hierarchy react scripts tgz root library optimize css assets webpack plugin tgz cssnano tgz cssnano preset default tgz postcss normalize url tgz x normalize url tgz vulnerable library normalize url tgz normalize a url library home page a href path to dependency file package json path to vulnerable library node modules normalize url package json dependency hierarchy react scripts tgz root library mini css extract plugin tgz x normalize url tgz vulnerable library found in head commit a href found in base branch master vulnerability details the normalize url package before x before and x before for node js has a redos regular expression denial of service issue because it has exponential performance for data urls publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution normalize url direct dependency fix resolution react scripts fix resolution normalize url direct dependency fix resolution react scripts rescue worker helmet automatic remediation is available for this issue
0
37,450
8,403,904,457
IssuesEvent
2018-10-11 11:12:14
PowerDNS/pdns
https://api.github.com/repos/PowerDNS/pdns
closed
pdns auth 3.4.8 crashes on API zone update with non-object JSON
auth defect rest-api
Hello! Here's a stacktrace too: ``` Apr 15 09:56:26 manage pdns[20452]: Polled security status of version 3.4.8 at startup, no known issues reported: OK Apr 15 09:56:26 manage pdns[20452]: Listening for HTTP requests on 127.0.0.1:8081 Apr 15 09:56:26 manage pdns[20452]: Creating backend connection for TCP Apr 15 09:56:26 manage pdns[20452]: About to create 3 backend threads for UDP Apr 15 09:56:26 manage pdns[20452]: Done launching threads, ready to distribute questions Apr 15 09:58:00 manage pdns[20452]: Got a signal 6, attempting to print trace: Apr 15 09:58:00 manage pdns[20452]: /usr/sbin/pdns_server-instance() [0x663b60] Apr 15 09:58:00 manage pdns[20452]: /lib/x86_64-linux-gnu/libc.so.6(+0x36d40) [0x7fd2f06aad40] Apr 15 09:58:00 manage pdns[20452]: /lib/x86_64-linux-gnu/libc.so.6(gsignal+0x39) [0x7fd2f06aacc9] Apr 15 09:58:00 manage pdns[20452]: /lib/x86_64-linux-gnu/libc.so.6(abort+0x148) [0x7fd2f06ae0d8] Apr 15 09:58:00 manage pdns[20452]: /lib/x86_64-linux-gnu/libc.so.6(+0x2fb86) [0x7fd2f06a3b86] Apr 15 09:58:00 manage pdns[20452]: /lib/x86_64-linux-gnu/libc.so.6(+0x2fc32) [0x7fd2f06a3c32] Apr 15 09:58:00 manage pdns[20452]: /usr/sbin/pdns_server-instance() [0x5ca1b3] Apr 15 09:58:00 manage pdns[20452]: /usr/sbin/pdns_server-instance(_ZN9rapidjson12GenericValueINS_4UTF8IcEENS_19MemoryPoolAllocatorINS_12CrtAllocatorEEEEixEPKc+0x9) [0x5ca1c9] Apr 15 09:58:00 manage pdns[20452]: /usr/sbin/pdns_server-instance() [0x644229] Apr 15 09:58:00 manage pdns[20452]: /usr/sbin/pdns_server-instance(_ZNK5boost9function2IvP11HttpRequestP12HttpResponseEclES2_S4_+0x18) [0x659ac8] Apr 15 09:58:00 manage pdns[20452]: /usr/sbin/pdns_server-instance() [0x6550cf] Apr 15 09:58:00 manage pdns[20452]: /usr/sbin/pdns_server-instance(_ZN5boost6detail8function26void_function_obj_invoker2INS_3_bi6bind_tIvPFvNS_8functionIFvP11HttpRequestP12HttpResponseEEES7_S9_ENS3_5list3INS3_5valueISB_EENS_3argILi1EEENSH_ILi2EEEEEEEvS7_S9_E6invokeERNS1_15function_bufferES7_S9_+0x67) [0x658a47] Apr 15 09:58:00 manage pdns[20452]: /usr/sbin/pdns_server-instance() [0x654698] Apr 15 09:58:00 manage pdns[20452]: /usr/sbin/pdns_server-instance(_ZN5boost6detail8function26void_function_obj_invoker2INS_3_bi6bind_tIvPFvNS_8functionIFvP11HttpRequestP12HttpResponseEEEPN6YaHTTP7RequestEPNSC_8ResponseEENS3_5list3INS3_5valueISB_EENS_3argILi1EEENSM_ILi2EEEEEEEvSE_SG_E6invokeERNS1_15function_bufferESE_SG_+0x67) [0x658b27] Apr 15 09:58:00 manage pdns[20452]: /usr/sbin/pdns_server-instance(_ZN9WebServer13handleRequestE11HttpRequest+0x223) [0x6559c3] Apr 15 09:58:00 manage pdns[20452]: /usr/sbin/pdns_server-instance(_ZN9WebServer15serveConnectionEP6Socket+0x1dc) [0x656cdc] Apr 15 09:58:00 manage pdns[20452]: /usr/sbin/pdns_server-instance() [0x6573b2] Apr 15 09:58:00 manage pdns[20452]: /lib/x86_64-linux-gnu/libpthread.so.0(+0x8182) [0x7fd2f0a41182] Apr 15 09:58:00 manage pdns[20452]: /lib/x86_64-linux-gnu/libc.so.6(clone+0x6d) [0x7fd2f076e47d] Apr 15 09:58:01 manage pdns[13885]: Our pdns instance (20452) exited after signal 6 Apr 15 09:58:01 manage pdns[13885]: Dumped core Apr 15 09:58:01 manage pdns[13885]: Respawning Apr 15 09:58:02 manage pdns[20498]: Guardian is launching an instance ```
1.0
pdns auth 3.4.8 crashes on API zone update with non-object JSON - Hello! Here's a stacktrace too: ``` Apr 15 09:56:26 manage pdns[20452]: Polled security status of version 3.4.8 at startup, no known issues reported: OK Apr 15 09:56:26 manage pdns[20452]: Listening for HTTP requests on 127.0.0.1:8081 Apr 15 09:56:26 manage pdns[20452]: Creating backend connection for TCP Apr 15 09:56:26 manage pdns[20452]: About to create 3 backend threads for UDP Apr 15 09:56:26 manage pdns[20452]: Done launching threads, ready to distribute questions Apr 15 09:58:00 manage pdns[20452]: Got a signal 6, attempting to print trace: Apr 15 09:58:00 manage pdns[20452]: /usr/sbin/pdns_server-instance() [0x663b60] Apr 15 09:58:00 manage pdns[20452]: /lib/x86_64-linux-gnu/libc.so.6(+0x36d40) [0x7fd2f06aad40] Apr 15 09:58:00 manage pdns[20452]: /lib/x86_64-linux-gnu/libc.so.6(gsignal+0x39) [0x7fd2f06aacc9] Apr 15 09:58:00 manage pdns[20452]: /lib/x86_64-linux-gnu/libc.so.6(abort+0x148) [0x7fd2f06ae0d8] Apr 15 09:58:00 manage pdns[20452]: /lib/x86_64-linux-gnu/libc.so.6(+0x2fb86) [0x7fd2f06a3b86] Apr 15 09:58:00 manage pdns[20452]: /lib/x86_64-linux-gnu/libc.so.6(+0x2fc32) [0x7fd2f06a3c32] Apr 15 09:58:00 manage pdns[20452]: /usr/sbin/pdns_server-instance() [0x5ca1b3] Apr 15 09:58:00 manage pdns[20452]: /usr/sbin/pdns_server-instance(_ZN9rapidjson12GenericValueINS_4UTF8IcEENS_19MemoryPoolAllocatorINS_12CrtAllocatorEEEEixEPKc+0x9) [0x5ca1c9] Apr 15 09:58:00 manage pdns[20452]: /usr/sbin/pdns_server-instance() [0x644229] Apr 15 09:58:00 manage pdns[20452]: /usr/sbin/pdns_server-instance(_ZNK5boost9function2IvP11HttpRequestP12HttpResponseEclES2_S4_+0x18) [0x659ac8] Apr 15 09:58:00 manage pdns[20452]: /usr/sbin/pdns_server-instance() [0x6550cf] Apr 15 09:58:00 manage pdns[20452]: /usr/sbin/pdns_server-instance(_ZN5boost6detail8function26void_function_obj_invoker2INS_3_bi6bind_tIvPFvNS_8functionIFvP11HttpRequestP12HttpResponseEEES7_S9_ENS3_5list3INS3_5valueISB_EENS_3argILi1EEENSH_ILi2EEEEEEEvS7_S9_E6invokeERNS1_15function_bufferES7_S9_+0x67) [0x658a47] Apr 15 09:58:00 manage pdns[20452]: /usr/sbin/pdns_server-instance() [0x654698] Apr 15 09:58:00 manage pdns[20452]: /usr/sbin/pdns_server-instance(_ZN5boost6detail8function26void_function_obj_invoker2INS_3_bi6bind_tIvPFvNS_8functionIFvP11HttpRequestP12HttpResponseEEEPN6YaHTTP7RequestEPNSC_8ResponseEENS3_5list3INS3_5valueISB_EENS_3argILi1EEENSM_ILi2EEEEEEEvSE_SG_E6invokeERNS1_15function_bufferESE_SG_+0x67) [0x658b27] Apr 15 09:58:00 manage pdns[20452]: /usr/sbin/pdns_server-instance(_ZN9WebServer13handleRequestE11HttpRequest+0x223) [0x6559c3] Apr 15 09:58:00 manage pdns[20452]: /usr/sbin/pdns_server-instance(_ZN9WebServer15serveConnectionEP6Socket+0x1dc) [0x656cdc] Apr 15 09:58:00 manage pdns[20452]: /usr/sbin/pdns_server-instance() [0x6573b2] Apr 15 09:58:00 manage pdns[20452]: /lib/x86_64-linux-gnu/libpthread.so.0(+0x8182) [0x7fd2f0a41182] Apr 15 09:58:00 manage pdns[20452]: /lib/x86_64-linux-gnu/libc.so.6(clone+0x6d) [0x7fd2f076e47d] Apr 15 09:58:01 manage pdns[13885]: Our pdns instance (20452) exited after signal 6 Apr 15 09:58:01 manage pdns[13885]: Dumped core Apr 15 09:58:01 manage pdns[13885]: Respawning Apr 15 09:58:02 manage pdns[20498]: Guardian is launching an instance ```
defect
pdns auth crashes on api zone update with non object json hello here s a stacktrace too apr manage pdns polled security status of version at startup no known issues reported ok apr manage pdns listening for http requests on apr manage pdns creating backend connection for tcp apr manage pdns about to create backend threads for udp apr manage pdns done launching threads ready to distribute questions apr manage pdns got a signal attempting to print trace apr manage pdns usr sbin pdns server instance apr manage pdns lib linux gnu libc so apr manage pdns lib linux gnu libc so gsignal apr manage pdns lib linux gnu libc so abort apr manage pdns lib linux gnu libc so apr manage pdns lib linux gnu libc so apr manage pdns usr sbin pdns server instance apr manage pdns usr sbin pdns server instance apr manage pdns usr sbin pdns server instance apr manage pdns usr sbin pdns server instance apr manage pdns usr sbin pdns server instance apr manage pdns usr sbin pdns server instance function obj tivpfvns eens apr manage pdns usr sbin pdns server instance apr manage pdns usr sbin pdns server instance function obj tivpfvns eens sg bufferese sg apr manage pdns usr sbin pdns server instance apr manage pdns usr sbin pdns server instance apr manage pdns usr sbin pdns server instance apr manage pdns lib linux gnu libpthread so apr manage pdns lib linux gnu libc so clone apr manage pdns our pdns instance exited after signal apr manage pdns dumped core apr manage pdns respawning apr manage pdns guardian is launching an instance
1
63,971
18,094,941,748
IssuesEvent
2021-09-22 08:00:53
Guake/guake
https://api.github.com/repos/Guake/guake
closed
Background image renders above the search box
Type: Defect
I missed something when checking the background image feature (in #1604), background image renders above the search box, so when you hit CTRL+SHIFT+F, you can't see the search box. Search still works fine, just can't see it. Pulling in @mlouielu because they'll know most about how to remedy this having written the background image thing themself **Describe the bug** Can't see search box with a background image **Expected behavior** Can see the search box **Actual behavior** Can't see the box **To Reproduce** Set a background image and hit CTRL+SHIFT+F. Must be on git latest to encounter this bug. ------------------------------------------------------------------------------- Please run `$ guake --support`, and paste the results here. Don't put backticks (`` ` ``) around it! The output already contains Markdown formatting. And make sure you run the command **OUTSIDE** the Guake. <details><summary>$ guake --support</summary> Guake Version: 3.7.1.dev95 Vte Version: 0.60.3 Vte Runtime Version: 0.60.3 -------------------------------------------------- GTK+ Version: 3.24.20 GDK Backend: <GdkX11.X11Display -------------------------------------------------- Desktop Session: cinnamon -------------------------------------------------- Display: :0 RGBA visual: True Composited: True * Monitor: 0 - CMN eDP-1-1 * Geometry: 1920 x 1080 at 0, 0 * Size: 344 x 194 mm² * Primary: True * Refresh rate: 60.05 Hz * Subpixel layout: unknown
1.0
Background image renders above the search box - I missed something when checking the background image feature (in #1604), background image renders above the search box, so when you hit CTRL+SHIFT+F, you can't see the search box. Search still works fine, just can't see it. Pulling in @mlouielu because they'll know most about how to remedy this having written the background image thing themself **Describe the bug** Can't see search box with a background image **Expected behavior** Can see the search box **Actual behavior** Can't see the box **To Reproduce** Set a background image and hit CTRL+SHIFT+F. Must be on git latest to encounter this bug. ------------------------------------------------------------------------------- Please run `$ guake --support`, and paste the results here. Don't put backticks (`` ` ``) around it! The output already contains Markdown formatting. And make sure you run the command **OUTSIDE** the Guake. <details><summary>$ guake --support</summary> Guake Version: 3.7.1.dev95 Vte Version: 0.60.3 Vte Runtime Version: 0.60.3 -------------------------------------------------- GTK+ Version: 3.24.20 GDK Backend: <GdkX11.X11Display -------------------------------------------------- Desktop Session: cinnamon -------------------------------------------------- Display: :0 RGBA visual: True Composited: True * Monitor: 0 - CMN eDP-1-1 * Geometry: 1920 x 1080 at 0, 0 * Size: 344 x 194 mm² * Primary: True * Refresh rate: 60.05 Hz * Subpixel layout: unknown
defect
background image renders above the search box i missed something when checking the background image feature in background image renders above the search box so when you hit ctrl shift f you can t see the search box search still works fine just can t see it pulling in mlouielu because they ll know most about how to remedy this having written the background image thing themself describe the bug can t see search box with a background image expected behavior can see the search box actual behavior can t see the box to reproduce set a background image and hit ctrl shift f must be on git latest to encounter this bug please run guake support and paste the results here don t put backticks around it the output already contains markdown formatting and make sure you run the command outside the guake guake support guake version vte version vte runtime version gtk version gdk backend desktop session cinnamon display rgba visual true composited true monitor cmn edp geometry x at size x mm² primary true refresh rate hz subpixel layout unknown
1
2,930
2,607,966,667
IssuesEvent
2015-02-26 00:42:39
chrsmithdemos/leveldb
https://api.github.com/repos/chrsmithdemos/leveldb
opened
It would be more usable to merge the android and windows version in trunk.
auto-migrated Priority-Medium Type-Defect
``` RT. ``` ----- Original issue reported on code.google.com by `hume...@gmail.com` on 28 Jul 2012 at 2:16
1.0
It would be more usable to merge the android and windows version in trunk. - ``` RT. ``` ----- Original issue reported on code.google.com by `hume...@gmail.com` on 28 Jul 2012 at 2:16
defect
it would be more usable to merge the android and windows version in trunk rt original issue reported on code google com by hume gmail com on jul at
1
27,749
5,094,046,380
IssuesEvent
2017-01-03 09:49:05
GoldenSoftwareLtd/gedemin
https://api.github.com/repos/GoldenSoftwareLtd/gedemin
closed
НДС по НМА
FA Priority-Medium Type-Defect
Originally reported on Google Code with ID 1043 ``` Почему когда я приходую НМА, на входной НДС при вводе в эксплуатацию ставится проводка Дт 01 Кт 18.01.03? НДС вроде должен приниматься к зачету равными частями 1/12 как и по основным. ``` Reported by `gs1994` on 2008-08-07 09:21:43
1.0
НДС по НМА - Originally reported on Google Code with ID 1043 ``` Почему когда я приходую НМА, на входной НДС при вводе в эксплуатацию ставится проводка Дт 01 Кт 18.01.03? НДС вроде должен приниматься к зачету равными частями 1/12 как и по основным. ``` Reported by `gs1994` on 2008-08-07 09:21:43
defect
ндс по нма originally reported on google code with id почему когда я приходую нма на входной ндс при вводе в эксплуатацию ставится проводка дт кт ндс вроде должен приниматься к зачету равными частями как и по основным reported by on
1
62,098
17,023,851,044
IssuesEvent
2021-07-03 04:10:30
tomhughes/trac-tickets
https://api.github.com/repos/tomhughes/trac-tickets
closed
libapache2-mod-tile cannot download 10m-populated-places.zip and 110m-admin-0-boundary-lines.zip causing install crash
Component: mod_tile Priority: minor Resolution: fixed Type: defect
**[Submitted to the original trac issue database at 10.58pm, Friday, 25th January 2013]** OS: Ubuntu 12.10 Following directions from: http://switch2osm.org/serving-tiles/building-a-tile-server-from-packages/ libapache2-mod-tile produces a ERROR 404 when trying to download from: http://www.naturalearthdata.com/http//www.naturalearthdata.com/download/10m/cultural/10m-populated-places.zip and http://www.naturalearthdata.com/http//www.naturalearthdata.com/download/110m/cultural/110m-admin-0-boundary-lines.zip the files are not longer available on the server causing the install to crash later on.
1.0
libapache2-mod-tile cannot download 10m-populated-places.zip and 110m-admin-0-boundary-lines.zip causing install crash - **[Submitted to the original trac issue database at 10.58pm, Friday, 25th January 2013]** OS: Ubuntu 12.10 Following directions from: http://switch2osm.org/serving-tiles/building-a-tile-server-from-packages/ libapache2-mod-tile produces a ERROR 404 when trying to download from: http://www.naturalearthdata.com/http//www.naturalearthdata.com/download/10m/cultural/10m-populated-places.zip and http://www.naturalearthdata.com/http//www.naturalearthdata.com/download/110m/cultural/110m-admin-0-boundary-lines.zip the files are not longer available on the server causing the install to crash later on.
defect
mod tile cannot download populated places zip and admin boundary lines zip causing install crash os ubuntu following directions from mod tile produces a error when trying to download from and the files are not longer available on the server causing the install to crash later on
1
535,306
15,686,281,064
IssuesEvent
2021-03-25 12:19:32
sodafoundation/delfin
https://api.github.com/repos/sodafoundation/delfin
closed
[Task manager ] Collect Storage Pool Information Periodically
Feature Medium Priority
*@NajmudheenCT commented on May 13, 2020, 5:23 AM UTC:* schedule Pool collection in periodic scheduler *This issue was moved by [kumarashit](https://github.com/kumarashit) from [sodafoundation/SIM-TempIssues#28](https://github.com/sodafoundation/SIM-TempIssues/issues/28).*
1.0
[Task manager ] Collect Storage Pool Information Periodically - *@NajmudheenCT commented on May 13, 2020, 5:23 AM UTC:* schedule Pool collection in periodic scheduler *This issue was moved by [kumarashit](https://github.com/kumarashit) from [sodafoundation/SIM-TempIssues#28](https://github.com/sodafoundation/SIM-TempIssues/issues/28).*
non_defect
collect storage pool information periodically najmudheenct commented on may am utc schedule pool collection in periodic scheduler this issue was moved by from
0
30,966
6,378,814,571
IssuesEvent
2017-08-02 13:33:55
bridgedotnet/Bridge
https://api.github.com/repos/bridgedotnet/Bridge
closed
Equality checking does not work properly for values returned from indexer of IList<T>
defect
Equality checking doesn't seem to work properly for values returned from the indexer of an IList<T> object. This seems to occur with value types, such as int, bool, etc. I think this might be related to issue #2909. ### Steps To Reproduce https://deck.net/ad157a75721efe6c3a242cb8cc12aff4 ```c# public class Program { public static void Main() { IList<int> list = new List<int> {0}; int num = list[0]; Console.WriteLine(num); if (num == 0) Console.WriteLine("Good"); else Console.WriteLine("Bad"); } } ``` ### Expected Result The first item in the list is equal to 0, so "Good" should be printed out to the console. ### Actual Result The equality checking doesn't seem to work properly, so "Bad" is printed instead.
1.0
Equality checking does not work properly for values returned from indexer of IList<T> - Equality checking doesn't seem to work properly for values returned from the indexer of an IList<T> object. This seems to occur with value types, such as int, bool, etc. I think this might be related to issue #2909. ### Steps To Reproduce https://deck.net/ad157a75721efe6c3a242cb8cc12aff4 ```c# public class Program { public static void Main() { IList<int> list = new List<int> {0}; int num = list[0]; Console.WriteLine(num); if (num == 0) Console.WriteLine("Good"); else Console.WriteLine("Bad"); } } ``` ### Expected Result The first item in the list is equal to 0, so "Good" should be printed out to the console. ### Actual Result The equality checking doesn't seem to work properly, so "Bad" is printed instead.
defect
equality checking does not work properly for values returned from indexer of ilist equality checking doesn t seem to work properly for values returned from the indexer of an ilist object this seems to occur with value types such as int bool etc i think this might be related to issue steps to reproduce c public class program public static void main ilist list new list int num list console writeline num if num console writeline good else console writeline bad expected result the first item in the list is equal to so good should be printed out to the console actual result the equality checking doesn t seem to work properly so bad is printed instead
1
160,428
12,511,109,267
IssuesEvent
2020-06-02 19:55:53
NixOS/nixpkgs
https://api.github.com/repos/NixOS/nixpkgs
closed
QEMU segfault when a test VM is interactively started
0.kind: bug 6.topic: testing
**Describe the bug** On the branch 19.09, when i start a test interactively, the QEMU process segfaults. It works well when executed non interactively. **To Reproduce** ``` $ git checkout 4fd551ee2f1 $ nix-build nixos/tests/simple.nix -A driver $ ./result/bin/nixos-run-vms starting VDE switch for network 1 running the VM test script starting all VMs machine: starting vm machine: QEMU running (pid 18179) (0.07 seconds) waiting for all VMs to finish machine: waiting for the VM to power off (0.10 seconds) (0.10 seconds) (0.17 seconds) collecting coverage data (0.00 seconds) syncing (0.00 seconds) test script finished in 0.17s vde_switch: EOF on stdin, cleaning up and exiting cleaning up (0.00 seconds) ``` In the kernel logs, i can see: ``` [ 2110.889976] qemu-system-x86[18179]: segfault at 0 ip 00007fabf5618ae0 sp 00007ffca850dc68 error 6 in libc-2.27.so[7fabf54e8000+13d000] [ 2110.889988] Code: fe 6f 06 c5 fe 6f 4e 20 c5 fe 6f 56 40 c5 fe 6f 5e 60 c5 fe 6f 64 16 e0 c5 fe 6f 6c 16 c0 c5 fe 6f 74 16 a0 c5 fe 6f 7c 16 80 <c5> fe 7f 07 c5 fe 7f 4f 20 c5 fe 7f 57 40 c5 fe 7f 5f 60 c5 fe 7f ``` **Additional context** The host kernel is `4.19.74` and the nixpkgs commit of the NixOS deployment is `51bc28fd29d689b6a1e8c663aa7113f9cb6a26ba` (release 19.03). It seems to be working fine for a friend of mine, so it may be related to the execution environment.
1.0
QEMU segfault when a test VM is interactively started - **Describe the bug** On the branch 19.09, when i start a test interactively, the QEMU process segfaults. It works well when executed non interactively. **To Reproduce** ``` $ git checkout 4fd551ee2f1 $ nix-build nixos/tests/simple.nix -A driver $ ./result/bin/nixos-run-vms starting VDE switch for network 1 running the VM test script starting all VMs machine: starting vm machine: QEMU running (pid 18179) (0.07 seconds) waiting for all VMs to finish machine: waiting for the VM to power off (0.10 seconds) (0.10 seconds) (0.17 seconds) collecting coverage data (0.00 seconds) syncing (0.00 seconds) test script finished in 0.17s vde_switch: EOF on stdin, cleaning up and exiting cleaning up (0.00 seconds) ``` In the kernel logs, i can see: ``` [ 2110.889976] qemu-system-x86[18179]: segfault at 0 ip 00007fabf5618ae0 sp 00007ffca850dc68 error 6 in libc-2.27.so[7fabf54e8000+13d000] [ 2110.889988] Code: fe 6f 06 c5 fe 6f 4e 20 c5 fe 6f 56 40 c5 fe 6f 5e 60 c5 fe 6f 64 16 e0 c5 fe 6f 6c 16 c0 c5 fe 6f 74 16 a0 c5 fe 6f 7c 16 80 <c5> fe 7f 07 c5 fe 7f 4f 20 c5 fe 7f 57 40 c5 fe 7f 5f 60 c5 fe 7f ``` **Additional context** The host kernel is `4.19.74` and the nixpkgs commit of the NixOS deployment is `51bc28fd29d689b6a1e8c663aa7113f9cb6a26ba` (release 19.03). It seems to be working fine for a friend of mine, so it may be related to the execution environment.
non_defect
qemu segfault when a test vm is interactively started describe the bug on the branch when i start a test interactively the qemu process segfaults it works well when executed non interactively to reproduce git checkout nix build nixos tests simple nix a driver result bin nixos run vms starting vde switch for network running the vm test script starting all vms machine starting vm machine qemu running pid seconds waiting for all vms to finish machine waiting for the vm to power off seconds seconds seconds collecting coverage data seconds syncing seconds test script finished in vde switch eof on stdin cleaning up and exiting cleaning up seconds in the kernel logs i can see qemu system segfault at ip sp error in libc so code fe fe fe fe fe fe fe fe fe fe fe fe fe additional context the host kernel is and the nixpkgs commit of the nixos deployment is release it seems to be working fine for a friend of mine so it may be related to the execution environment
0
73,933
24,871,905,531
IssuesEvent
2022-10-27 15:48:38
fecgov/fecfile-web-app
https://api.github.com/repos/fecgov/fecfile-web-app
closed
Defect - When selecting "Save and Add More" for additional contacts the Telephone (Optional) field not clearing
defect
This defect is to correct when selecting the "Save and Add More" for additional contacts the Telephone (Optional) field not clearing. (Note: This error occurs when entering any of the contacts). 1. Enter an "Individual" contact including the Telephone number (Optional): ![image.png](https://images.zenhubusercontent.com/61ba01e428a658b4eb0ca758/2cb36ece-2d46-489c-b060-b3cf061d608a) 2. Select "Save and Add More" button blank contact page is displayed, but the Telephone number field IS NOT cleared. ![image.png](https://images.zenhubusercontent.com/61ba01e428a658b4eb0ca758/c8581f65-7f0b-4556-8c1e-508362891900) 3. Upon manually clearing the Telephone number field, the Telephone number field then becomes "Required" and a phone number has to be entered to "Save" and/or "Save and Add More". ![image.png](https://images.zenhubusercontent.com/61ba01e428a658b4eb0ca758/dcaf1be1-1862-4e81-b628-ed60b878d4db) Additional Error - when tabbing between contact fields the Telephone number field then becomes "Required" and a phone number has to be entered. ![image.png](https://images.zenhubusercontent.com/61ba01e428a658b4eb0ca758/36e69b89-3379-4203-aba6-c92ecd555fa1) ## QA NOTES ## Note the Telephone Number (Optional) field error became evident after adding the International dropdown for telephone numbers.
1.0
Defect - When selecting "Save and Add More" for additional contacts the Telephone (Optional) field not clearing - This defect is to correct when selecting the "Save and Add More" for additional contacts the Telephone (Optional) field not clearing. (Note: This error occurs when entering any of the contacts). 1. Enter an "Individual" contact including the Telephone number (Optional): ![image.png](https://images.zenhubusercontent.com/61ba01e428a658b4eb0ca758/2cb36ece-2d46-489c-b060-b3cf061d608a) 2. Select "Save and Add More" button blank contact page is displayed, but the Telephone number field IS NOT cleared. ![image.png](https://images.zenhubusercontent.com/61ba01e428a658b4eb0ca758/c8581f65-7f0b-4556-8c1e-508362891900) 3. Upon manually clearing the Telephone number field, the Telephone number field then becomes "Required" and a phone number has to be entered to "Save" and/or "Save and Add More". ![image.png](https://images.zenhubusercontent.com/61ba01e428a658b4eb0ca758/dcaf1be1-1862-4e81-b628-ed60b878d4db) Additional Error - when tabbing between contact fields the Telephone number field then becomes "Required" and a phone number has to be entered. ![image.png](https://images.zenhubusercontent.com/61ba01e428a658b4eb0ca758/36e69b89-3379-4203-aba6-c92ecd555fa1) ## QA NOTES ## Note the Telephone Number (Optional) field error became evident after adding the International dropdown for telephone numbers.
defect
defect when selecting save and add more for additional contacts the telephone optional field not clearing this defect is to correct when selecting the save and add more for additional contacts the telephone optional field not clearing note this error occurs when entering any of the contacts enter an individual contact including the telephone number optional select save and add more button blank contact page is displayed but the telephone number field is not cleared upon manually clearing the telephone number field the telephone number field then becomes required and a phone number has to be entered to save and or save and add more additional error when tabbing between contact fields the telephone number field then becomes required and a phone number has to be entered qa notes note the telephone number optional field error became evident after adding the international dropdown for telephone numbers
1
74,886
25,385,715,077
IssuesEvent
2022-11-21 21:39:45
department-of-veterans-affairs/va.gov-cms
https://api.github.com/repos/department-of-veterans-affairs/va.gov-cms
closed
AWOL VBA facility investigation
Defect ⭐️ Facilities VBA
## Description During the data migration from the VBA database we uncovered a missing VBA Integrated Disability Evaluation System (IDES) Site at Robins Air Force Base is no longer in our system api id: vba_316m former node id: 4027 We know we had it because it is still in the migrate map table and we know it existed as node 4027 and we know that node no longer exists. It can be restored by deleting the entry from the migration map. It will re-appear with the next morning migration. Initial thoughts. If it was deleted by hand, it should have removed itself from the migrate map table. Which would have caused it to re-appear on its own with the next migration. The fact that it still has an entry in the map table makes me think it may have been deleted by code rather than the delete button. This hypothesis needs testing We may also be able to find its removal in the logs. ## Acceptance Criteria - [ ] Cause is understood... or at least theories narrowed down - [ ] Facility is restored. ### CMS Team Please check the team(s) that will do this work. - [ ] `Program` - [ ] `Platform CMS Team` - [ ] `Sitewide Crew` - [ ] `⭐️ Sitewide CMS` - [ ] `⭐️ Public Websites` - [x] `⭐️ Facilities` - [ ] `⭐️ User support`
1.0
AWOL VBA facility investigation - ## Description During the data migration from the VBA database we uncovered a missing VBA Integrated Disability Evaluation System (IDES) Site at Robins Air Force Base is no longer in our system api id: vba_316m former node id: 4027 We know we had it because it is still in the migrate map table and we know it existed as node 4027 and we know that node no longer exists. It can be restored by deleting the entry from the migration map. It will re-appear with the next morning migration. Initial thoughts. If it was deleted by hand, it should have removed itself from the migrate map table. Which would have caused it to re-appear on its own with the next migration. The fact that it still has an entry in the map table makes me think it may have been deleted by code rather than the delete button. This hypothesis needs testing We may also be able to find its removal in the logs. ## Acceptance Criteria - [ ] Cause is understood... or at least theories narrowed down - [ ] Facility is restored. ### CMS Team Please check the team(s) that will do this work. - [ ] `Program` - [ ] `Platform CMS Team` - [ ] `Sitewide Crew` - [ ] `⭐️ Sitewide CMS` - [ ] `⭐️ Public Websites` - [x] `⭐️ Facilities` - [ ] `⭐️ User support`
defect
awol vba facility investigation description during the data migration from the vba database we uncovered a missing vba integrated disability evaluation system ides site at robins air force base is no longer in our system api id vba former node id we know we had it because it is still in the migrate map table and we know it existed as node and we know that node no longer exists it can be restored by deleting the entry from the migration map it will re appear with the next morning migration initial thoughts if it was deleted by hand it should have removed itself from the migrate map table which would have caused it to re appear on its own with the next migration the fact that it still has an entry in the map table makes me think it may have been deleted by code rather than the delete button this hypothesis needs testing we may also be able to find its removal in the logs acceptance criteria cause is understood or at least theories narrowed down facility is restored cms team please check the team s that will do this work program platform cms team sitewide crew ⭐️ sitewide cms ⭐️ public websites ⭐️ facilities ⭐️ user support
1
25,285
4,281,621,508
IssuesEvent
2016-07-15 04:23:15
eczarny/spectacle
https://api.github.com/repos/eczarny/spectacle
closed
Spectacle in VMWare Unity mode causes continuous window flashing
defect investigating ★★
To reproduce: 1. Open a Windows 7 virtual machine in VMWare Fusion. Enable unity mode with view->Unity. The windows desktop is hidden. 2. Open Excel by clicking VMWare->Excel in the mac status bar at the top of the screen. 3. Click Alt->Cmd->Right in the Mac. This should move the new Excel window to the right edge of the desktop (using Spectacle's functionality). Expected: The window moves to the right. Actual: The window flashes between the right and the left. The only way to break the cycle is to either (1) turn off Spectacle before issuing this key combination or (2) switching the VM out of Unity mode.
1.0
Spectacle in VMWare Unity mode causes continuous window flashing - To reproduce: 1. Open a Windows 7 virtual machine in VMWare Fusion. Enable unity mode with view->Unity. The windows desktop is hidden. 2. Open Excel by clicking VMWare->Excel in the mac status bar at the top of the screen. 3. Click Alt->Cmd->Right in the Mac. This should move the new Excel window to the right edge of the desktop (using Spectacle's functionality). Expected: The window moves to the right. Actual: The window flashes between the right and the left. The only way to break the cycle is to either (1) turn off Spectacle before issuing this key combination or (2) switching the VM out of Unity mode.
defect
spectacle in vmware unity mode causes continuous window flashing to reproduce open a windows virtual machine in vmware fusion enable unity mode with view unity the windows desktop is hidden open excel by clicking vmware excel in the mac status bar at the top of the screen click alt cmd right in the mac this should move the new excel window to the right edge of the desktop using spectacle s functionality expected the window moves to the right actual the window flashes between the right and the left the only way to break the cycle is to either turn off spectacle before issuing this key combination or switching the vm out of unity mode
1
58,693
16,701,594,894
IssuesEvent
2021-06-09 03:47:12
microsoft/STL
https://api.github.com/repos/microsoft/STL
opened
P2231R1 Completing `constexpr` In `optional` And `variant`
cxx20 defect report
[P2231R1](https://wg21.link/P2231R1) Completing `constexpr` In `optional` And `variant` Feature-test macros: ```cpp #define __cpp_lib_optional INCREASED_VALUE_TO_BE_DETERMINED #define __cpp_lib_variant INCREASED_VALUE_TO_BE_DETERMINED ``` At the June 2021 virtual plenary meeting, this was accepted as a defect report for C++20, which means that this paper applies retroactively to the C++20 Standard.
1.0
P2231R1 Completing `constexpr` In `optional` And `variant` - [P2231R1](https://wg21.link/P2231R1) Completing `constexpr` In `optional` And `variant` Feature-test macros: ```cpp #define __cpp_lib_optional INCREASED_VALUE_TO_BE_DETERMINED #define __cpp_lib_variant INCREASED_VALUE_TO_BE_DETERMINED ``` At the June 2021 virtual plenary meeting, this was accepted as a defect report for C++20, which means that this paper applies retroactively to the C++20 Standard.
defect
completing constexpr in optional and variant completing constexpr in optional and variant feature test macros cpp define cpp lib optional increased value to be determined define cpp lib variant increased value to be determined at the june virtual plenary meeting this was accepted as a defect report for c which means that this paper applies retroactively to the c standard
1
145,424
5,575,391,757
IssuesEvent
2017-03-28 01:45:17
architecture-building-systems/CEAforArcGIS
https://api.github.com/repos/architecture-building-systems/CEAforArcGIS
opened
Code the program that can run the CEA and send back the result in Rhino/Grasshopper
Priority 1
![workflow status quo ubg](https://cloud.githubusercontent.com/assets/16640327/24385073/87ea9d40-139a-11e7-8518-34f6cd5ae9fa.png) Hi Darren, this issue is as we discussed when you were in Singapore. I also attached a diagram to explain the workflow. Let me know if you have any questions~
1.0
Code the program that can run the CEA and send back the result in Rhino/Grasshopper - ![workflow status quo ubg](https://cloud.githubusercontent.com/assets/16640327/24385073/87ea9d40-139a-11e7-8518-34f6cd5ae9fa.png) Hi Darren, this issue is as we discussed when you were in Singapore. I also attached a diagram to explain the workflow. Let me know if you have any questions~
non_defect
code the program that can run the cea and send back the result in rhino grasshopper hi darren this issue is as we discussed when you were in singapore i also attached a diagram to explain the workflow let me know if you have any questions
0
138,452
18,793,930,879
IssuesEvent
2021-11-08 19:53:30
Dima2022/hygieia-workflow-github-collector
https://api.github.com/repos/Dima2022/hygieia-workflow-github-collector
opened
CVE-2019-17531 (High) detected in jackson-databind-2.5.0.jar
security vulnerability
## CVE-2019-17531 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jackson-databind-2.5.0.jar</b></p></summary> <p>General data-binding functionality for Jackson: works on core streaming API</p> <p>Library home page: <a href="http://github.com/FasterXML/jackson">http://github.com/FasterXML/jackson</a></p> <p>Path to dependency file: hygieia-workflow-github-collector/pom.xml</p> <p>Path to vulnerable library: /home/wss-scanner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.5.0/jackson-databind-2.5.0.jar</p> <p> Dependency Hierarchy: - core-3.9.7.jar (Root Library) - spring-boot-starter-web-1.3.0.RELEASE.jar - :x: **jackson-databind-2.5.0.jar** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/Dima2022/hygieia-workflow-github-collector/commit/236baaa856b74774f7b43ecb1eeade5a8d1d0496">236baaa856b74774f7b43ecb1eeade5a8d1d0496</a></p> <p>Found in base branch: <b>main</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> A Polymorphic Typing issue was discovered in FasterXML jackson-databind 2.0.0 through 2.9.10. When Default Typing is enabled (either globally or for a specific property) for an externally exposed JSON endpoint and the service has the apache-log4j-extra (version 1.2.x) jar in the classpath, and an attacker can provide a JNDI service to access, it is possible to make the service execute a malicious payload. 
<p>Publish Date: 2019-10-12 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-17531>CVE-2019-17531</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.8</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: High - Integrity Impact: High - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-17531">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-17531</a></p> <p>Release Date: 2019-10-12</p> <p>Fix Resolution: 2.10</p> </p> </details> <p></p> <!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"Java","groupId":"com.fasterxml.jackson.core","packageName":"jackson-databind","packageVersion":"2.5.0","packageFilePaths":["/pom.xml"],"isTransitiveDependency":true,"dependencyTree":"com.capitalone.dashboard:core:3.9.7;org.springframework.boot:spring-boot-starter-web:1.3.0.RELEASE;com.fasterxml.jackson.core:jackson-databind:2.5.0","isMinimumFixVersionAvailable":true,"minimumFixVersion":"2.10"}],"baseBranches":["main"],"vulnerabilityIdentifier":"CVE-2019-17531","vulnerabilityDetails":"A Polymorphic Typing issue was discovered in FasterXML jackson-databind 2.0.0 through 2.9.10. 
When Default Typing is enabled (either globally or for a specific property) for an externally exposed JSON endpoint and the service has the apache-log4j-extra (version 1.2.x) jar in the classpath, and an attacker can provide a JNDI service to access, it is possible to make the service execute a malicious payload.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-17531","cvss3Severity":"high","cvss3Score":"9.8","cvss3Metrics":{"A":"High","AC":"Low","PR":"None","S":"Unchanged","C":"High","UI":"None","AV":"Network","I":"High"},"extraData":{}}</REMEDIATE> -->
True
CVE-2019-17531 (High) detected in jackson-databind-2.5.0.jar - ## CVE-2019-17531 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jackson-databind-2.5.0.jar</b></p></summary> <p>General data-binding functionality for Jackson: works on core streaming API</p> <p>Library home page: <a href="http://github.com/FasterXML/jackson">http://github.com/FasterXML/jackson</a></p> <p>Path to dependency file: hygieia-workflow-github-collector/pom.xml</p> <p>Path to vulnerable library: /home/wss-scanner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.5.0/jackson-databind-2.5.0.jar</p> <p> Dependency Hierarchy: - core-3.9.7.jar (Root Library) - spring-boot-starter-web-1.3.0.RELEASE.jar - :x: **jackson-databind-2.5.0.jar** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/Dima2022/hygieia-workflow-github-collector/commit/236baaa856b74774f7b43ecb1eeade5a8d1d0496">236baaa856b74774f7b43ecb1eeade5a8d1d0496</a></p> <p>Found in base branch: <b>main</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> A Polymorphic Typing issue was discovered in FasterXML jackson-databind 2.0.0 through 2.9.10. When Default Typing is enabled (either globally or for a specific property) for an externally exposed JSON endpoint and the service has the apache-log4j-extra (version 1.2.x) jar in the classpath, and an attacker can provide a JNDI service to access, it is possible to make the service execute a malicious payload. 
<p>Publish Date: 2019-10-12 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-17531>CVE-2019-17531</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.8</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: High - Integrity Impact: High - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-17531">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-17531</a></p> <p>Release Date: 2019-10-12</p> <p>Fix Resolution: 2.10</p> </p> </details> <p></p> <!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"Java","groupId":"com.fasterxml.jackson.core","packageName":"jackson-databind","packageVersion":"2.5.0","packageFilePaths":["/pom.xml"],"isTransitiveDependency":true,"dependencyTree":"com.capitalone.dashboard:core:3.9.7;org.springframework.boot:spring-boot-starter-web:1.3.0.RELEASE;com.fasterxml.jackson.core:jackson-databind:2.5.0","isMinimumFixVersionAvailable":true,"minimumFixVersion":"2.10"}],"baseBranches":["main"],"vulnerabilityIdentifier":"CVE-2019-17531","vulnerabilityDetails":"A Polymorphic Typing issue was discovered in FasterXML jackson-databind 2.0.0 through 2.9.10. 
When Default Typing is enabled (either globally or for a specific property) for an externally exposed JSON endpoint and the service has the apache-log4j-extra (version 1.2.x) jar in the classpath, and an attacker can provide a JNDI service to access, it is possible to make the service execute a malicious payload.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-17531","cvss3Severity":"high","cvss3Score":"9.8","cvss3Metrics":{"A":"High","AC":"Low","PR":"None","S":"Unchanged","C":"High","UI":"None","AV":"Network","I":"High"},"extraData":{}}</REMEDIATE> -->
non_defect
cve high detected in jackson databind jar cve high severity vulnerability vulnerable library jackson databind jar general data binding functionality for jackson works on core streaming api library home page a href path to dependency file hygieia workflow github collector pom xml path to vulnerable library home wss scanner repository com fasterxml jackson core jackson databind jackson databind jar dependency hierarchy core jar root library spring boot starter web release jar x jackson databind jar vulnerable library found in head commit a href found in base branch main vulnerability details a polymorphic typing issue was discovered in fasterxml jackson databind through when default typing is enabled either globally or for a specific property for an externally exposed json endpoint and the service has the apache extra version x jar in the classpath and an attacker can provide a jndi service to access it is possible to make the service execute a malicious payload publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution isopenpronvulnerability true ispackagebased true isdefaultbranch true packages istransitivedependency true dependencytree com capitalone dashboard core org springframework boot spring boot starter web release com fasterxml jackson core jackson databind isminimumfixversionavailable true minimumfixversion basebranches vulnerabilityidentifier cve vulnerabilitydetails a polymorphic typing issue was discovered in fasterxml jackson databind through when default typing is enabled either globally or for a specific property for an externally exposed json endpoint and the service has the apache extra version x jar in the 
classpath and an attacker can provide a jndi service to access it is possible to make the service execute a malicious payload vulnerabilityurl
0
30,430
5,796,174,928
IssuesEvent
2017-05-02 18:49:39
devops-alpha-s17/customers
https://api.github.com/repos/devops-alpha-s17/customers
closed
Create Swagger Documentation for searching a customer
documentation
**As a** Developer **I need** to create swagger documentation for searching a customer **So that** the method is properly documented **Assumptions:** * flasgger/swagger is imported **Acceptance Criteria:** ``` Given a running service When the documentation URL is visited , Then the searching customer's documentation should be available ```
1.0
Create Swagger Documentation for searching a customer - **As a** Developer **I need** to create swagger documentation for searching a customer **So that** the method is properly documented **Assumptions:** * flasgger/swagger is imported **Acceptance Criteria:** ``` Given a running service When the documentation URL is visited , Then the searching customer's documentation should be available ```
non_defect
create swagger documentation for searching a customer as a developer i need to create swagger documentation for searching a customer so that the method is properly documented assumptions flasgger swagger is imported acceptance criteria given a running service when the documentation url is visited then the searching customer s documentation should be available
0
33,380
7,108,038,984
IssuesEvent
2018-01-16 22:13:35
zealdocs/zeal
https://api.github.com/repos/zealdocs/zeal
closed
Oracle JDK causes Zeal to crash on Arch Linux
Component: UI/Web View Platform: Linux Resolution: Upstream Problem Type: Defect
My zeal is runing on manjaro linux, when I try to open a docset ,it exited.The log in terminal is as follows: *** Error in `zeal': free(): invalid pointer: 0x00007fcb78fd33c0 *** [1] 1887 abort (core dumped) zeal
1.0
Oracle JDK causes Zeal to crash on Arch Linux - My zeal is runing on manjaro linux, when I try to open a docset ,it exited.The log in terminal is as follows: *** Error in `zeal': free(): invalid pointer: 0x00007fcb78fd33c0 *** [1] 1887 abort (core dumped) zeal
defect
oracle jdk causes zeal to crash on arch linux my zeal is runing on manjaro linux when i try to open a docset it exited the log in terminal is as follows error in zeal free invalid pointer abort core dumped zeal
1
279,775
24,254,156,040
IssuesEvent
2022-09-27 16:18:31
xamarin/xamarin-macios
https://api.github.com/repos/xamarin/xamarin-macios
closed
[meta][xharness] azdo / ddfun specific issues
test-only-issue
These issues seems related to xharness and not the bots/devices from: https://dev.azure.com/devdiv/DevDiv/_build/results?buildId=3575009&view=ms.vss-test-web.build-test-results-tab&runId=11983954&paneView=attachments&resultId=118020 # azdo duration (table view) always shows "0:00:00.000" Makes it hard to see if it's a real timeout and, if so, what was the allotted time for the test to run. # Timeout too short ``` 09:14:45.7485710 Test launch timed out after 3 minute(s). ``` Tests were still executing. On some devices 3 minutes is likely not enough - even less for something like BCL tests group 1. # Download crash reports has issues ``` 09:14:56.5404540 /Library/Frameworks/Xamarin.iOS.framework/Versions/Current/bin/mlaunch --download-crash-report=./com.xamarin.bcltests.BCL tests group 1-2020-03-20-091453.ips --download-crash-report-to=/Users/xamarinqa/azdo/_work/49/s/xamarin-macios/jenkins-results/tests/[NUnit] Mono BCL tests group 1/1389/com.xamarin.bcltests.BCL tests group 1-2020-03-20-091453.ips --sdkroot=/Applications/Xcode113.app --devname=XQAiPadAira 09:14:57.0921820 error MT0010: Could not parse the command line arguments: System.AggregateException: One or more errors occurred. (Unknown command line argument: 'tests') (Unknown command line argument: 'group') (Unknown command line argument: '1-2020-03-20-091453.ips') (Unknown command line argument: 'Mono') (Unknown command line argument: 'BCL') (Unknown command line argument: 'tests') (Unknown command line argument: 'group') (Unknown command line argument: '1/1389/com.xamarin.bcltests.BCL') (Unknown command line argument: 'tests') (Unknown command line argument: 'group') (Unknown command line argument: '1-2020-03-20-091453.ips') ---> Xamarin.Launcher.LauncherException: Unknown command line argument: 'tests' ``` ## Double logs (already filed)
1.0
[meta][xharness] azdo / ddfun specific issues - These issues seems related to xharness and not the bots/devices from: https://dev.azure.com/devdiv/DevDiv/_build/results?buildId=3575009&view=ms.vss-test-web.build-test-results-tab&runId=11983954&paneView=attachments&resultId=118020 # azdo duration (table view) always shows "0:00:00.000" Makes it hard to see if it's a real timeout and, if so, what was the allotted time for the test to run. # Timeout too short ``` 09:14:45.7485710 Test launch timed out after 3 minute(s). ``` Tests were still executing. On some devices 3 minutes is likely not enough - even less for something like BCL tests group 1. # Download crash reports has issues ``` 09:14:56.5404540 /Library/Frameworks/Xamarin.iOS.framework/Versions/Current/bin/mlaunch --download-crash-report=./com.xamarin.bcltests.BCL tests group 1-2020-03-20-091453.ips --download-crash-report-to=/Users/xamarinqa/azdo/_work/49/s/xamarin-macios/jenkins-results/tests/[NUnit] Mono BCL tests group 1/1389/com.xamarin.bcltests.BCL tests group 1-2020-03-20-091453.ips --sdkroot=/Applications/Xcode113.app --devname=XQAiPadAira 09:14:57.0921820 error MT0010: Could not parse the command line arguments: System.AggregateException: One or more errors occurred. (Unknown command line argument: 'tests') (Unknown command line argument: 'group') (Unknown command line argument: '1-2020-03-20-091453.ips') (Unknown command line argument: 'Mono') (Unknown command line argument: 'BCL') (Unknown command line argument: 'tests') (Unknown command line argument: 'group') (Unknown command line argument: '1/1389/com.xamarin.bcltests.BCL') (Unknown command line argument: 'tests') (Unknown command line argument: 'group') (Unknown command line argument: '1-2020-03-20-091453.ips') ---> Xamarin.Launcher.LauncherException: Unknown command line argument: 'tests' ``` ## Double logs (already filed)
non_defect
azdo ddfun specific issues these issues seems related to xharness and not the bots devices from azdo duration table view always shows makes it hard to see if it s a real timeout and if so what was the allotted time for the test to run timeout too short test launch timed out after minute s tests were still executing on some devices minutes is likely not enough even less for something like bcl tests group download crash reports has issues library frameworks xamarin ios framework versions current bin mlaunch download crash report com xamarin bcltests bcl tests group ips download crash report to users xamarinqa azdo work s xamarin macios jenkins results tests mono bcl tests group com xamarin bcltests bcl tests group ips sdkroot applications app devname xqaipadaira error could not parse the command line arguments system aggregateexception one or more errors occurred unknown command line argument tests unknown command line argument group unknown command line argument ips unknown command line argument mono unknown command line argument bcl unknown command line argument tests unknown command line argument group unknown command line argument com xamarin bcltests bcl unknown command line argument tests unknown command line argument group unknown command line argument ips xamarin launcher launcherexception unknown command line argument tests double logs already filed
0
53,491
13,261,749,134
IssuesEvent
2020-08-20 20:27:52
icecube-trac/tix4
https://api.github.com/repos/icecube-trac/tix4
closed
cmake trunk sets CMAKE_INSTALL_PREFIX:PATH=/usr/local which cannot be written to (Trac #1524)
Migrated from Trac cmake defect
In trying `make tarball` using trunk of simulation meta-project it fails with the following error: CMake Error at cmake_install.cmake:36 (FILE): file cannot create directory: /usr/local/lib/icecube. Maybe need administrative privileges. In looking in the CMakeCache.txt file the CMAKE_INSTALL_PREFIX:PATH is incorrectly set to /usr/local <details> <summary><em>Migrated from <a href="https://code.icecube.wisc.edu/projects/icecube/ticket/1524">https://code.icecube.wisc.edu/projects/icecube/ticket/1524</a>, reported by melanie.dayand owned by nega</em></summary> <p> ```json { "status": "closed", "changetime": "2019-01-12T00:12:46", "_ts": "1547251966149934", "description": "In trying `make tarball` using trunk of simulation meta-project it fails with the following error:\n\nCMake Error at cmake_install.cmake:36 (FILE):\nfile cannot create directory: /usr/local/lib/icecube. Maybe need\nadministrative privileges.\n\nIn looking in the CMakeCache.txt file the CMAKE_INSTALL_PREFIX:PATH is incorrectly set to /usr/local ", "reporter": "melanie.day", "cc": "", "resolution": "fixed", "time": "2016-01-22T17:22:22", "component": "cmake", "summary": "cmake trunk sets CMAKE_INSTALL_PREFIX:PATH=/usr/local which cannot be written to", "priority": "normal", "keywords": "", "milestone": "", "owner": "nega", "type": "defect" } ``` </p> </details>
1.0
cmake trunk sets CMAKE_INSTALL_PREFIX:PATH=/usr/local which cannot be written to (Trac #1524) - In trying `make tarball` using trunk of simulation meta-project it fails with the following error: CMake Error at cmake_install.cmake:36 (FILE): file cannot create directory: /usr/local/lib/icecube. Maybe need administrative privileges. In looking in the CMakeCache.txt file the CMAKE_INSTALL_PREFIX:PATH is incorrectly set to /usr/local <details> <summary><em>Migrated from <a href="https://code.icecube.wisc.edu/projects/icecube/ticket/1524">https://code.icecube.wisc.edu/projects/icecube/ticket/1524</a>, reported by melanie.dayand owned by nega</em></summary> <p> ```json { "status": "closed", "changetime": "2019-01-12T00:12:46", "_ts": "1547251966149934", "description": "In trying `make tarball` using trunk of simulation meta-project it fails with the following error:\n\nCMake Error at cmake_install.cmake:36 (FILE):\nfile cannot create directory: /usr/local/lib/icecube. Maybe need\nadministrative privileges.\n\nIn looking in the CMakeCache.txt file the CMAKE_INSTALL_PREFIX:PATH is incorrectly set to /usr/local ", "reporter": "melanie.day", "cc": "", "resolution": "fixed", "time": "2016-01-22T17:22:22", "component": "cmake", "summary": "cmake trunk sets CMAKE_INSTALL_PREFIX:PATH=/usr/local which cannot be written to", "priority": "normal", "keywords": "", "milestone": "", "owner": "nega", "type": "defect" } ``` </p> </details>
defect
cmake trunk sets cmake install prefix path usr local which cannot be written to trac in trying make tarball using trunk of simulation meta project it fails with the following error cmake error at cmake install cmake file file cannot create directory usr local lib icecube maybe need administrative privileges in looking in the cmakecache txt file the cmake install prefix path is incorrectly set to usr local migrated from json status closed changetime ts description in trying make tarball using trunk of simulation meta project it fails with the following error n ncmake error at cmake install cmake file nfile cannot create directory usr local lib icecube maybe need nadministrative privileges n nin looking in the cmakecache txt file the cmake install prefix path is incorrectly set to usr local reporter melanie day cc resolution fixed time component cmake summary cmake trunk sets cmake install prefix path usr local which cannot be written to priority normal keywords milestone owner nega type defect
1
429,850
12,428,834,438
IssuesEvent
2020-05-25 07:13:20
SeldonIO/seldon-core
https://api.github.com/repos/SeldonIO/seldon-core
closed
RedHat version of core servers
priority/p1
Provide RedHat certified images for: * sklearnserver * mlflow server * xgboost server * tfserving
1.0
RedHat version of core servers - Provide RedHat certified images for: * sklearnserver * mlflow server * xgboost server * tfserving
non_defect
redhat version of core servers provide redhat certified images for sklearnserver mlflow server xgboost server tfserving
0
159,944
25,082,212,304
IssuesEvent
2022-11-07 20:21:27
carbon-design-system/carbon-platform
https://api.github.com/repos/carbon-design-system/carbon-platform
closed
[tech design] RMDX
role: dev 🤖 type: tech design 🏗️
# Feature technical design RMDX (Remote MDX) ## Summary This feature describes the usage of remote mdx in a secure way that is not vulnerable to code injection and arbitrary code execution (ACE) attacks. This will supersede existing MDX processing and provide stricter parsing and rendering. This means less overall customization, but a significantly more secure implementation. RMDX will be used in two places: 1. A microservice (called `rmdx-processing`) which can translate source MDX into a sanitized abstract syntax tree (AST), similar to that which would be retrieved from a CMS API such as Contentful. 2. A set of utility React components and functions that can be used to render a sanitized AST as a set of react components. The set of components rendered via the RMDX utilities is not defined as part of this tech design, and is instead expected to be provided as a "map" to the utility (additional details below). Having the interface act as a mapping will allow any arbitrary set of components to be used during translation. The goal is to have the RMDX utilities generate an AST that is as close to the Contentful data model as possible. This will make migration between the two as easy as possible. The maximum input size of MDX will be 1 MB. Output size may end up larger than this, but will remain under the RabbitMQ message threshold of 128 MB. ## Research - [x] Approved? https://app.mural.co/t/ibm14/m/ibm14/1667230506318/3c007d2b56bfc0b1c820e15d7d946285da5ae4a2?sender=jdharvey8136 https://github.com/contentful/rich-text/tree/master/packages/rich-text-types #1073 **Unanswered questions** None **New technologies** None **Proofs of concept** - [x] Go from mdx -> mdast -> JSON -> react components - This works as expected (see mural for details) ## UI/UX design - [x] Approved? None ## APIs - [x] Approved? **Programmatic APIs** New package: `rmdx` This will export the utilities for converting to and working with the MDX-based AST. 
`process(srcMdx: string): AST` - Returns an RMDX AST given an input string `<RmdxNode components={...} ast={...} />` - React component which takes an RMDX AST as input along with a `components` map, which maps AST node types to React components for rendering. The mapped components are given `children` to render as well as any relevant scalar props from the source MDX. **Data graph** There will eventually be an `rmdx` resolver for asset doc pages, however since there is not yet an asset resolver, this will probably be deferred until later. **Messages** **query**: `rmdx` A request/response based message to get a processed RMDX result, given an input string of raw MDX source ```ts // query message interface RmdxMessage { srcMdx: string // Max size = 1 MB } interface RmdxResponse { ast: Node<Data> // Either a unist tree or a custom AST similar to Contentful's model errors: Array<?> // List of errors encountered during processing } ``` **Future**: Should eventually respond to an `asset_discovered` message by pre-caching processed RMDX in an LRU cache. ## Security - [x] Approved? **MDX things that will not work under RMDX:** - Inline JSX blocks (outside of components) - Imports/exports/variable assignments - Properties on JSX elements which are not a number, boolean, or string (i.e. no functions, arrays, or objects) ## Error handling - [x] Approved? Error handling should have feature parity with existing mdx processing. TODO: Need to figure out the best approach for transmitting errors back to the caller. Tentative approach: Errors in the returned list of errors are numbered, and there are AST nodes in the returned RMDX which call out particular error numbers (and types), so knowing what to render is accomplished via a "lookup map". 
example: ```json { "ast": [ { "nodeType": "h1", "value": "this is a header" }, { "nodeType": "Error", "errorIndex": 0 } ], "errors": [ { "exception": "ImportFoundException", "line": 123, "text": "import thing from 'thing'" } ] } ``` `const error = theErrorrmdx.errors[0]` ## Test strategy - [x] Approved? > How will the new feature be tested? (e.g. unit tests, manual verification, automated e2e testing, > etc.) What interesting edge cases should be considered and tested? 76+% unit test coverage of all new code. Test existing known MDX exploits to ensure they can't be performed against RMDX ## Logging - [x] Approved? - Log incoming requests to process MDX - Log processing failures - Warn log when encountering portions of MDX that need to be removed for security reasons ## File and code layout - [x] Approved? Rough file layout: - packages - api - rmdx-processing - RmdxMessage - RmdxResponse - query_rmdx - rmdx - `process` - `RmdxNode` - services - rmdx-processing - rmdx-controller - rmdx-service ## Issue and work breakdown - [x] Approved? **Epics** - #1491 **Issues** - #1492 - #1493 - #1494 - #1495 - #1496
1.0
[tech design] RMDX - # Feature technical design RMDX (Remote MDX) ## Summary This feature describes the usage of remote mdx in a secure way that is not vulnerable to code injection and arbitrary code execution (ACE) attacks. This will supersede existing MDX processing and provide stricter parsing and rendering. This means less overall customization, but a significantly more secure implementation. RMDX will be used in two places: 1. A microservice (called `rmdx-processing`) which can translate source MDX into a sanitized abstract syntax tree (AST), similar to that which would be retrieved from a CMS API such as Contentful. 2. A set of utility React components and functions that can be used to render a sanitized AST as a set of react components. The set of components rendered via the RMDX utilities is not defined as part of this tech design, and is instead expected to be provided as a "map" to the utility (additional details below). Having the interface act as a mapping will allow any arbitrary set of components to be used during translation. The goal is to have the RMDX utilities generate an AST that is as close to the Contentful data model as possible. This will make migration between the two as easy as possible. The maximum input size of MDX will be 1 MB. Output size may end up larger than this, but will remain under the RabbitMQ message threshold of 128 MB. ## Research - [x] Approved? https://app.mural.co/t/ibm14/m/ibm14/1667230506318/3c007d2b56bfc0b1c820e15d7d946285da5ae4a2?sender=jdharvey8136 https://github.com/contentful/rich-text/tree/master/packages/rich-text-types #1073 **Unanswered questions** None **New technologies** None **Proofs of concept** - [x] Go from mdx -> mdast -> JSON -> react components - This works as expected (see mural for details) ## UI/UX design - [x] Approved? None ## APIs - [x] Approved? **Programmatic APIs** New package: `rmdx` This will export the utilities for converting to and working with the MDX-based AST. 
`process(srcMdx: string): AST` - Returns an RMDX AST given an input string `<RmdxNode components={...} ast={...} />` - React component which takes an RMDX AST as input along with a `components` map, which maps AST node types to React components for rendering. The mapped components are given `children` to render as well as any relevant scalar props from the source MDX. **Data graph** There will eventually be an `rmdx` resolver for asset doc pages, however since there is not yet an asset resolver, this will probably be deferred until later. **Messages** **query**: `rmdx` A request/response based message to get a processed RMDX result, given an input string of raw MDX source ```ts // query message interface RmdxMessage { srcMdx: string // Max size = 1 MB } interface RmdxResponse { ast: Node<Data> // Either a unist tree or a custom AST similar to Contentful's model errors: Array<?> // List of errors encountered during processing } ``` **Future**: Should eventually respond to an `asset_discovered` message by pre-caching processed RMDX in an LRU cache. ## Security - [x] Approved? **MDX things that will not work under RMDX:** - Inline JSX blocks (outside of components) - Imports/exports/variable assignments - Properties on JSX elements which are not a number, boolean, or string (i.e. no functions, arrays, or objects) ## Error handling - [x] Approved? Error handling should have feature parity with existing mdx processing. TODO: Need to figure out the best approach for transmitting errors back to the caller. Tentative approach: Errors in the returned list of errors are numbered, and there are AST nodes in the returned RMDX which call out particular error numbers (and types), so knowing what to render is accomplished via a "lookup map". 
example: ```json { "ast": [ { "nodeType": "h1", "value": "this is a header" }, { "nodeType": "Error", "errorIndex": 0 } ], "errors": [ { "exception": "ImportFoundException", "line": 123, "text": "import thing from 'thing'" } ] } ``` `const error = theErrorrmdx.errors[0]` ## Test strategy - [x] Approved? > How will the new feature be tested? (e.g. unit tests, manual verification, automated e2e testing, > etc.) What interesting edge cases should be considered and tested? 76+% unit test coverage of all new code. Test existing known MDX exploits to ensure they can't be performed against RMDX ## Logging - [x] Approved? - Log incoming requests to process MDX - Log processing failures - Warn log when encountering portions of MDX that need to be removed for security reasons ## File and code layout - [x] Approved? Rough file layout: - packages - api - rmdx-processing - RmdxMessage - RmdxResponse - query_rmdx - rmdx - `process` - `RmdxNode` - services - rmdx-processing - rmdx-controller - rmdx-service ## Issue and work breakdown - [x] Approved? **Epics** - #1491 **Issues** - #1492 - #1493 - #1494 - #1495 - #1496
non_defect
rmdx feature technical design rmdx remote mdx summary this feature describes the usage of remote mdx in a secure way that is not vulnerable to code injection and arbitrary code execution ace attacks this will supersede existing mdx processing and provide stricter parsing and rendering this means less overall customization but a significantly more secure implementation rmdx will be used in two places a microservice called rmdx processing which can translate source mdx into a sanitized abstract syntax tree ast similar to that which would be retrieved from a cms api such as contentful a set of utility react components and functions that can be used to render a sanitized ast as a set of react components the set of components rendered via the rmdx utilities is not defined as part of this tech design and is instead expected to be provided as a map to the utility additional details below having the interface act as a mapping will allow any arbitrary set of components to be used during translation the goal is to have the rmdx utilities generate an ast that is as close to the contentful data model as possible this will make migration between the two as easy as possible the maximum input size of mdx will be mb output size may end up larger than this but will remain under the rabbitmq message threshold of mb research approved unanswered questions none new technologies none proofs of concept go from mdx mdast json react components this works as expected see mural for details ui ux design approved none apis approved programmatic apis new package rmdx this will export the utilities for converting to and working with the mdx based ast process srcmdx string ast returns an rmdx ast given an input string react component which takes an rmdx ast as input along with a components map which maps ast node types to react components for rendering the mapped components are given children to render as well as any relevant scalar props from the source mdx data graph there will eventually be an 
rmdx resolver for asset doc pages however since there is not yet an asset resolver this will probably be deferred until later messages query rmdx a request response based message to get a processed rmdx result given an input string of raw mdx source ts query message interface rmdxmessage srcmdx string max size mb interface rmdxresponse ast node either a unist tree or a custom ast similar to contentful s model errors array list of errors encountered during processing future should eventually respond to an asset discovered message by pre caching processed rmdx in an lru cache security approved mdx things that will not work under rmdx inline jsx blocks outside of components imports exports variable assignments properties on jsx elements which are not a number boolean or string i e no functions arrays or objects error handling approved error handling should have feature parity with existing mdx processing todo need to figure out the best approach for transmitting errors back to the caller tentative approach errors in the returned list of errors are numbered and there are ast nodes in the returned rmdx which call out particular error numbers and types so knowing what to render is accomplished via a lookup map example json ast nodetype value this is a header nodetype error errorindex errors exception importfoundexception line text import thing from thing const error theerrorrmdx errors test strategy approved how will the new feature be tested e g unit tests manual verification automated testing etc what interesting edge cases should be considered and tested unit test coverage of all new code test existing known mdx exploits to ensure they can t be performed against rmdx logging approved log incoming requests to process mdx log processing failures warn log when encountering portions of mdx that need to be removed for security reasons file and code layout approved rough file layout packages api rmdx processing rmdxmessage rmdxresponse query rmdx rmdx process rmdxnode 
services rmdx processing rmdx controller rmdx service issue and work breakdown approved epics issues
0
20,656
3,392,137,244
IssuesEvent
2015-11-30 18:17:48
bridgedotnet/Bridge
https://api.github.com/repos/bridgedotnet/Bridge
closed
Nested class name should include parent class name even if [Namespace(false)]
defect
If parent class has [Namespace(false)] then nested class has no parent class name in own name ``` [Namespace(false)] public static class A { public class B { public B() { } } } var a = new A.B(); // should be var a = new A.B(); in javascript ```
1.0
Nested class name should include parent class name even if [Namespace(false)] - If parent class has [Namespace(false)] then nested class has no parent class name in own name ``` [Namespace(false)] public static class A { public class B { public B() { } } } var a = new A.B(); // should be var a = new A.B(); in javascript ```
defect
nested class name should include parent class name even if if parent class has then nested class has no parent class name in own name public static class a public class b public b var a new a b should be var a new a b in javascript
1
13,458
2,757,826,567
IssuesEvent
2015-04-27 16:50:39
primefaces/primefaces
https://api.github.com/repos/primefaces/primefaces
closed
OrderList fires multiple reorder events
5.1.17 5.2.2 defect
The OrderList component supports to reorder multiple items by pressing Ctrl and using the control buttons. This is a single reorder action but it produces a reorder event per item that was reorder in that action. In my case this causes a bug. I've a p:ajax attached to the OrderList which handles the reorder event in a listener and validates, that the reorder action produced a valid result. If the result is invalid, the previous item list is recovered and the OrderList is updated. Additionally I create a faces message that notifies the user about that recovery. The first reorder event produces the message and resets the list, while the second one is valid and won't update the list. The user gets no message (p:growl is used here which removes all messages, as the second message set is empty) but also a reset list. Is it possible to change that behaviour, so that a single reorder action only produces a single reorder event? It should be enough to move $this.fireReorderEvent(); out of the nested functions and only fire it if really some items have been reordered.
1.0
OrderList fires multiple reorder events - The OrderList component supports to reorder multiple items by pressing Ctrl and using the control buttons. This is a single reorder action but it produces a reorder event per item that was reorder in that action. In my case this causes a bug. I've a p:ajax attached to the OrderList which handles the reorder event in a listener and validates, that the reorder action produced a valid result. If the result is invalid, the previous item list is recovered and the OrderList is updated. Additionally I create a faces message that notifies the user about that recovery. The first reorder event produces the message and resets the list, while the second one is valid and won't update the list. The user gets no message (p:growl is used here which removes all messages, as the second message set is empty) but also a reset list. Is it possible to change that behaviour, so that a single reorder action only produces a single reorder event? It should be enough to move $this.fireReorderEvent(); out of the nested functions and only fire it if really some items have been reordered.
defect
orderlist fires multiple reorder events the orderlist component supports to reorder multiple items by pressing ctrl and using the control buttons this is a single reorder action but it produces a reorder event per item that was reorder in that action in my case this causes a bug i ve a p ajax attached to the orderlist which handles the reorder event in a listener and validates that the reorder action produced a valid result if the result is invalid the previous item list is recovered and the orderlist is updated additionally i create a faces message that notifies the user about that recovery the first reorder event produces the message and resets the list while the second one is valid and won t update the list the user gets no message p growl is used here which removes all messages as the second message set is empty but also a reset list is it possible to change that behaviour so that a single reorder action only produces a single reorder event it should be enough to move this firereorderevent out of the nested functions and only fire it if really some items have been reordered
1
16,132
2,872,987,063
IssuesEvent
2015-06-08 14:54:09
msimpson/pixelcity
https://api.github.com/repos/msimpson/pixelcity
closed
Car headlight sprites are not always facing the camera
auto-migrated Priority-Medium Type-Defect
``` side views of the sprites walking on the street ``` Original issue reported on code.google.com by `ahfat...@gmail.com` on 17 Jun 2009 at 2:00
1.0
Car headlight sprites are not always facing the camera - ``` side views of the sprites walking on the street ``` Original issue reported on code.google.com by `ahfat...@gmail.com` on 17 Jun 2009 at 2:00
defect
car headlight sprites are not always facing the camera side views of the sprites walking on the street original issue reported on code google com by ahfat gmail com on jun at
1
127,931
12,343,434,628
IssuesEvent
2020-05-15 04:00:57
swimlane/PSAttck
https://api.github.com/repos/swimlane/PSAttck
opened
Update documentation to focus on contextual data
documentation
Per comment on Reddit, main documentation should focus more on the contextual data aspect instead of strictly MITRE ATT&CK access/data.
1.0
Update documentation to focus on contextual data - Per comment on Reddit, main documentation should focus more on the contextual data aspect instead of strictly MITRE ATT&CK access/data.
non_defect
update documentation to focus on contextual data per comment on reddit main documentation should focus more on the contextual data aspect instead of strictly mitre att ck access data
0
43,568
11,758,135,526
IssuesEvent
2020-03-13 14:54:06
lagom/lagom
https://api.github.com/repos/lagom/lagom
closed
custom config resource no longer respected in 1.6.x devMode
type:defect
### Lagom Version (1.2.x / 1.3.x / etc) 1.6.0 ### API (Scala / Java / Neither / Both) Scala ### Operating System (Ubuntu 15.10 / MacOS 10.10 / Windows 10) ``` console a1kemist@system:~$ uname -a Linux system76-pc 5.3.0-7625-generic #27~1576774585~18.04~c7868f8~dev-Ubuntu SMP Thu Dec 19 20:37:32 x86_64 x86_64 x86_64 GNU/Linux ``` ### JDK (Oracle 1.8.0_112, OpenJDK 1.8.x, Azul Zing) ``` console a1kemist@system:~$ java -version openjdk version "1.8.0_232" OpenJDK Runtime Environment (AdoptOpenJDK)(build 1.8.0_232-b09) OpenJDK 64-Bit Server VM (AdoptOpenJDK)(build 25.232-b09, mixed mode) ``` ### Library Dependencies N/A ### Expected Behavior Setting a custom config resource via `lagomDevSettings` will be respected in devMode's `runAll` task. eg. ``` scala lazy val `hello-impl` = (project in file("hello-impl")) .enablePlugins(LagomScala) .settings( libraryDependencies ++= Seq( lagomScaladslPersistenceCassandra, lagomScaladslPersistenceJdbc, lagomScaladslTestKit, h2, macwire, scalaTest ), lagomDevSettings := Seq( "config.resource" -> "dev.conf" ) ) .settings(lagomForkedTestSettings: _*) .dependsOn(`hello-api`) ``` would result in `dev.conf` being loaded in devMode via `runAll`. ### Actual Behavior #### Lagom 1.5.5: The custom config resource specified via `lagomDevSettings` **is** loaded and those values are respected. #### Lagom 1.6.0: The default config resource (`application.conf`) is loaded and values from the custom config resource are **not** respected. ### Reproducible Test Case I have created a fork of the lagom/lagom-samples repo with a working example from the `1.5.x` branch and a broken example from the `1.6.x` branch. 
#### Lagom 1.5.5 (works) [a1kemist/lagom-samples:devSettings-repro-1.5.x](https://github.com/a1kemist/lagom-samples/tree/devSettings-repro-1.5.x/mixed-persistence/mixed-persistence-scala-sbt) In this branch the custom config resource **is** loaded and respected and the correct value `42` is returned from the `ServiceCall` I added for demonstration: ``` console a1kemist@system:~$ curl http://localhost:65499/api/config 42 ``` #### Lagom 1.6.0 (does not work) [a1kemist/lagom-samples:devSettings-repro-1.6.x](https://github.com/a1kemist/lagom-samples/tree/devSettings-repro-1.6.x/mixed-persistence/mixed-persistence-scala-sbt) In this branch the custom config resource is **not** loaded and the default value from `application.conf` is returned from the `ServiceCall` I added for demonstration: ``` console a1kemist@system:~$ curl http://localhost:65499/api/config The meaning of life. ``` ### References #702
1.0
custom config resource no longer respected in 1.6.x devMode - ### Lagom Version (1.2.x / 1.3.x / etc) 1.6.0 ### API (Scala / Java / Neither / Both) Scala ### Operating System (Ubuntu 15.10 / MacOS 10.10 / Windows 10) ``` console a1kemist@system:~$ uname -a Linux system76-pc 5.3.0-7625-generic #27~1576774585~18.04~c7868f8~dev-Ubuntu SMP Thu Dec 19 20:37:32 x86_64 x86_64 x86_64 GNU/Linux ``` ### JDK (Oracle 1.8.0_112, OpenJDK 1.8.x, Azul Zing) ``` console a1kemist@system:~$ java -version openjdk version "1.8.0_232" OpenJDK Runtime Environment (AdoptOpenJDK)(build 1.8.0_232-b09) OpenJDK 64-Bit Server VM (AdoptOpenJDK)(build 25.232-b09, mixed mode) ``` ### Library Dependencies N/A ### Expected Behavior Setting a custom config resource via `lagomDevSettings` will be respected in devMode's `runAll` task. eg. ``` scala lazy val `hello-impl` = (project in file("hello-impl")) .enablePlugins(LagomScala) .settings( libraryDependencies ++= Seq( lagomScaladslPersistenceCassandra, lagomScaladslPersistenceJdbc, lagomScaladslTestKit, h2, macwire, scalaTest ), lagomDevSettings := Seq( "config.resource" -> "dev.conf" ) ) .settings(lagomForkedTestSettings: _*) .dependsOn(`hello-api`) ``` would result in `dev.conf` being loaded in devMode via `runAll`. ### Actual Behavior #### Lagom 1.5.5: The custom config resource specified via `lagomDevSettings` **is** loaded and those values are respected. #### Lagom 1.6.0: The default config resource (`application.conf`) is loaded and values from the custom config resource are **not** respected. ### Reproducible Test Case I have created a fork of the lagom/lagom-samples repo with a working example from the `1.5.x` branch and a broken example from the `1.6.x` branch. 
#### Lagom 1.5.5 (works) [a1kemist/lagom-samples:devSettings-repro-1.5.x](https://github.com/a1kemist/lagom-samples/tree/devSettings-repro-1.5.x/mixed-persistence/mixed-persistence-scala-sbt) In this branch the custom config resource **is** loaded and respected and the correct value `42` is returned from the `ServiceCall` I added for demonstration: ``` console a1kemist@system:~$ curl http://localhost:65499/api/config 42 ``` #### Lagom 1.6.0 (does not work) [a1kemist/lagom-samples:devSettings-repro-1.6.x](https://github.com/a1kemist/lagom-samples/tree/devSettings-repro-1.6.x/mixed-persistence/mixed-persistence-scala-sbt) In this branch the custom config resource is **not** loaded and the default value from `application.conf` is returned from the `ServiceCall` I added for demonstration: ``` console a1kemist@system:~$ curl http://localhost:65499/api/config The meaning of life. ``` ### References #702
defect
custom config resource no longer respected in x devmode lagom version x x etc api scala java neither both scala operating system ubuntu macos windows console system uname a linux pc generic dev ubuntu smp thu dec gnu linux jdk oracle openjdk x azul zing console system java version openjdk version openjdk runtime environment adoptopenjdk build openjdk bit server vm adoptopenjdk build mixed mode library dependencies n a expected behavior setting a custom config resource via lagomdevsettings will be respected in devmode s runall task eg scala lazy val hello impl project in file hello impl enableplugins lagomscala settings librarydependencies seq lagomscaladslpersistencecassandra lagomscaladslpersistencejdbc lagomscaladsltestkit macwire scalatest lagomdevsettings seq config resource dev conf settings lagomforkedtestsettings dependson hello api would result in dev conf being loaded in devmode via runall actual behavior lagom the custom config resource specified via lagomdevsettings is loaded and those values are respected lagom the default config resource application conf is loaded and values from the custom config resource are not respected reproducible test case i have created a fork of the lagom lagom samples repo with a working example from the x branch and a broken example from the x branch lagom works in this branch the custom config resource is loaded and respected and the correct value is returned from the servicecall i added for demonstration console system curl lagom does not work in this branch the custom config resource is not loaded and the default value from application conf is returned from the servicecall i added for demonstration console system curl the meaning of life references
1
47,012
13,056,015,132
IssuesEvent
2020-07-30 03:23:49
icecube-trac/tix2
https://api.github.com/repos/icecube-trac/tix2
opened
signal injection via sni3_sn_sim is not possible for runs from 2018 (Trac #2271)
Incomplete Migration Migrated from Trac defect supernova
Migrated from https://code.icecube.wisc.edu/ticket/2271 ```json { "status": "new", "changetime": "2019-06-21T20:27:24", "description": "It is not possible to inject a supernova signal into the root files from 2018 and into some from 2017 with sni3_sn_sim. ", "reporter": "afritz", "cc": "", "resolution": "", "_ts": "1561148844263901", "component": "supernova", "summary": "signal injection via sni3_sn_sim is not possible for runs from 2018", "priority": "normal", "keywords": "sndaq", "time": "2019-04-16T15:07:29", "milestone": "", "owner": "sybenzvi", "type": "defect" } ```
1.0
signal injection via sni3_sn_sim is not possible for runs from 2018 (Trac #2271) - Migrated from https://code.icecube.wisc.edu/ticket/2271 ```json { "status": "new", "changetime": "2019-06-21T20:27:24", "description": "It is not possible to inject a supernova signal into the root files from 2018 and into some from 2017 with sni3_sn_sim. ", "reporter": "afritz", "cc": "", "resolution": "", "_ts": "1561148844263901", "component": "supernova", "summary": "signal injection via sni3_sn_sim is not possible for runs from 2018", "priority": "normal", "keywords": "sndaq", "time": "2019-04-16T15:07:29", "milestone": "", "owner": "sybenzvi", "type": "defect" } ```
defect
signal injection via sn sim is not possible for runs from trac migrated from json status new changetime description it is not possible to inject a supernova signal into the root files from and into some from with sn sim reporter afritz cc resolution ts component supernova summary signal injection via sn sim is not possible for runs from priority normal keywords sndaq time milestone owner sybenzvi type defect
1
5,946
2,610,218,392
IssuesEvent
2015-02-26 19:09:28
chrsmith/somefinders
https://api.github.com/repos/chrsmith/somefinders
opened
хлебопечка delfa dbm 938 инструкция.pdf
auto-migrated Priority-Medium Type-Defect
``` '''Antip Isakov''' Hi everyone, could anyone tell me where to find хлебопечка delfa dbm 938 инструкция.pdf? It was posted here before '''Bayan Konovalov''' Here is the link http://bit.ly/1b4qdkW '''Aleksandr Chernov''' Thanks, that looks like it, but it asks me to enter a phone number '''Vilii Zakharov''' Nah, everything is fine, nothing was charged to me '''Vartan Artemyev''' Nah, everything is fine, nothing was charged to me File information: хлебопечка delfa dbm 938 инструкция.pdf Uploaded: This month Downloaded: 1308 times Rating: 132 Average download speed: 1479 Similar files: 10 ``` ----- Original issue reported on code.google.com by `kondense...@gmail.com` on 17 Dec 2013 at 1:43
1.0
хлебопечка delfa dbm 938 инструкция.pdf - ``` '''Antip Isakov''' Hi everyone, could anyone tell me where to find хлебопечка delfa dbm 938 инструкция.pdf? It was posted here before '''Bayan Konovalov''' Here is the link http://bit.ly/1b4qdkW '''Aleksandr Chernov''' Thanks, that looks like it, but it asks me to enter a phone number '''Vilii Zakharov''' Nah, everything is fine, nothing was charged to me '''Vartan Artemyev''' Nah, everything is fine, nothing was charged to me File information: хлебопечка delfa dbm 938 инструкция.pdf Uploaded: This month Downloaded: 1308 times Rating: 132 Average download speed: 1479 Similar files: 10 ``` ----- Original issue reported on code.google.com by `kondense...@gmail.com` on 17 Dec 2013 at 1:43
defect
хлебопечка delfa dbm инструкция pdf антип исаков привет всем не подскажите где можно найти хлебопечка delfa dbm инструкция pdf как то выкладывали уже баян коновалов вот держи линк александр чернов спасибо вроде то но просит телефон вводить вилий захаров неа все ок у меня ничего не списало вартан артемьев неа все ок у меня ничего не списало информация о файле хлебопечка delfa dbm инструкция pdf загружен в этом месяце скачан раз рейтинг средняя скорость скачивания похожих файлов original issue reported on code google com by kondense gmail com on dec at
1
655
2,507,105,885
IssuesEvent
2015-01-12 16:09:57
deis/deis
https://api.github.com/repos/deis/deis
opened
Add "all buildpacks" integration tests
testing
The nightly-* jobs at https://ci.deis.io/ were too resource-intensive for what they tested: we don't need to provision a new cluster for each buildpack test. Instead we should add a buildpacks_test.go, maybe with a build flag so it's not run by default.
1.0
Add "all buildpacks" integration tests - The nightly-* jobs at https://ci.deis.io/ were too resource-intensive for what they tested: we don't need to provision a new cluster for each buildpack test. Instead we should add a buildpacks_test.go, maybe with a build flag so it's not run by default.
non_defect
add all buildpacks integration tests the nightly jobs at were too resource intensive for what they tested we don t need to provision a new cluster for each buildpack test instead we should add a buildpacks test go maybe with a build flag so it s not run by default
0
12,100
2,685,003,788
IssuesEvent
2015-03-29 16:08:37
IssueMigrationTest/Test5
https://api.github.com/repos/IssueMigrationTest/Test5
closed
int/double conversion when initializing an array
auto-migrated Priority-Medium Type-Defect
**Issue by markdew...@gmail.com** _7 Feb 2008 at 1:18 GMT_ _Originally opened on Google Code_ ---- ``` The attached program gives different output when run from python and when run under shedskin. An array of doubles is initialized with an integer, and the type conversion doesn't work correctly. It appears to be a problem with variadic functions and type conversions (in the list constructor). See the attached vatest.cpp file. One possible solution is to add explicit casts in the constructor call, eg charges = (new list<double>(1,(double)6)); ``` Attachments: * [charges.py](https://storage.googleapis.com/google-code-attachments/shedskin/issue-5/comment-0/charges.py) * [vatest.cpp](https://storage.googleapis.com/google-code-attachments/shedskin/issue-5/comment-0/vatest.cpp)
1.0
int/double conversion when initializing an array - **Issue by markdew...@gmail.com** _7 Feb 2008 at 1:18 GMT_ _Originally opened on Google Code_ ---- ``` The attached program gives different output when run from python and when run under shedskin. An array of doubles is initialized with an integer, and the type conversion doesn't work correctly. It appears to be a problem with variadic functions and type conversions (in the list constructor). See the attached vatest.cpp file. One possible solution is to add explicit casts in the constructor call, eg charges = (new list<double>(1,(double)6)); ``` Attachments: * [charges.py](https://storage.googleapis.com/google-code-attachments/shedskin/issue-5/comment-0/charges.py) * [vatest.cpp](https://storage.googleapis.com/google-code-attachments/shedskin/issue-5/comment-0/vatest.cpp)
defect
int double conversion when initializing an array issue by markdew gmail com feb at gmt originally opened on google code the attached program gives different output when run from python and when run under shedskin an array of doubles is initialized with an integer and the type conversion doesn t work correctly it appears to be a problem with variadic functions and type conversions in the list constructor see the attached vatest cpp file one possible solution is to add explicit casts in the constructor call eg charges new list double attachments
1
34,652
7,458,443,618
IssuesEvent
2018-03-30 10:19:58
kerdokullamae/test_koik_issued
https://api.github.com/repos/kerdokullamae/test_koik_issued
closed
Set the task list as the front page of the crowdsourcing domain
C: AVAR P: highest R: fixed T: defect
**Reported by sven syld on 18 Dec 2014 07:55 UTC** If someone lands on the crowdsourcing.ra front page, they should be redirected to http://crowdsourcing.ra/et/task/list/.
1.0
Set the task list as the front page of the crowdsourcing domain - **Reported by sven syld on 18 Dec 2014 07:55 UTC** If someone lands on the crowdsourcing.ra front page, they should be redirected to http://crowdsourcing.ra/et/task/list/.
defect
määrata crowdsourcing domeeni avaleheks ülesannete nimekiri reported by sven syld on dec utc juhul kui keegi satub crowdsourcing ra avalehele tuleb ta suunata lehele
1
32,460
6,798,952,340
IssuesEvent
2017-11-02 08:31:41
Microsoft/testfx
https://api.github.com/repos/Microsoft/testfx
closed
[Test] Desktop CLI Tests do not verify stack trace information.
bug test defect wontfix
## Description Desktop CLI Tests do not show stacktrace information correctly for x64 scenarios and hence cannot be validated. ## Steps to reproduce The code at test/E2ETests/Automation.CLI/CLITestBase.ValidateFailedTests() skips verification of stack trace information in x64. This needs to be resolved.
1.0
[Test] Desktop CLI Tests do not verify stack trace information. - ## Description Desktop CLI Tests do not show stacktrace information correctly for x64 scenarios and hence cannot be validated. ## Steps to reproduce The code at test/E2ETests/Automation.CLI/CLITestBase.ValidateFailedTests() skips verification of stack trace information in x64. This needs to be resolved.
defect
desktop cli tests do not verify stack trace information description desktop cli tests do not show stacktrace information correctly for scenarios and hence cannot be validated steps to reproduce the code at test automation cli clitestbase validatefailedtests skips verification of stack trace information in this needs to be resolved
1
57,304
15,729,928,003
IssuesEvent
2021-03-29 15:21:47
danmar/testissues
https://api.github.com/repos/danmar/testissues
opened
segfault in src/token.cpp:455: (this=0x0) (Trac #287)
Incomplete Migration Migrated from Trac Other defect hyd_danmar
Migrated from https://trac.cppcheck.net/ticket/287 ```json { "status": "closed", "changetime": "2009-05-08T07:58:31", "description": "version 80fe293c197e3c3688b07fdc288f64bf509f02a7 (git HEAD)\n\nThe test case is radically stripped down from a real-world example, and doesn't make much sense, c++ wise. It compiles, though:\n\n\n{{{\ntemplate<typename T> class A;\ntemplate<typename T> class B;\n\ntypedef A<int> x1;\ntypedef A<long> x2;\ntypedef B<int> x3;\n\ntemplate<typename T>\nclass C {\n C() {\n T a = 0;\n B<T> b = B<T>::foo();\n\n T c = 0;\n if (c)\n c = 0;\n }\n};\n}}}\n\n\n{{{\n#0 0x0808454c in Token::previous (this=0x0) at src/token.cpp:455\n#1 0x0808a33c in Tokenizer::simplifyKnownVariables (this=0xbfffe418) at src/tokenize.cpp:2171\n#2 0x0808dcc4 in Tokenizer::simplifyTokenList (this=0xbfffe418) at src/tokenize.cpp:1358\n#3 0x08070bfb in CppCheck::checkFile (this=0xbfffe7c0, code=@0xbfffe6c0, FileName=0x80a735c \"tokenizer.cxx\") at src/cppcheck.cpp:397\n#4 0x08071a17 in CppCheck::check (this=0xbfffe7c0) at src/cppcheck.cpp:340\n#5 0x08077f21 in CppCheckExecutor::check (this=0xbfffe9b8, argc=2, argv=0xbfffea84) at src/cppcheckexecutor.cpp:54\n#6 0x0807b49c in main (argc=2, argv=0xbfffea84) at src/main.cpp:32\n}}}", "reporter": "mfranz", "cc": "", "resolution": "fixed", "_ts": "1241769511000000", "component": "Other", "summary": "segfault in src/token.cpp:455: (this=0x0)", "priority": "", "keywords": "token previous segfault", "time": "2009-05-06T20:13:00", "milestone": "1.32", "owner": "hyd_danmar", "type": "defect" } ```
1.0
segfault in src/token.cpp:455: (this=0x0) (Trac #287) - Migrated from https://trac.cppcheck.net/ticket/287 ```json { "status": "closed", "changetime": "2009-05-08T07:58:31", "description": "version 80fe293c197e3c3688b07fdc288f64bf509f02a7 (git HEAD)\n\nThe test case is radically stripped down from a real-world example, and doesn't make much sense, c++ wise. It compiles, though:\n\n\n{{{\ntemplate<typename T> class A;\ntemplate<typename T> class B;\n\ntypedef A<int> x1;\ntypedef A<long> x2;\ntypedef B<int> x3;\n\ntemplate<typename T>\nclass C {\n C() {\n T a = 0;\n B<T> b = B<T>::foo();\n\n T c = 0;\n if (c)\n c = 0;\n }\n};\n}}}\n\n\n{{{\n#0 0x0808454c in Token::previous (this=0x0) at src/token.cpp:455\n#1 0x0808a33c in Tokenizer::simplifyKnownVariables (this=0xbfffe418) at src/tokenize.cpp:2171\n#2 0x0808dcc4 in Tokenizer::simplifyTokenList (this=0xbfffe418) at src/tokenize.cpp:1358\n#3 0x08070bfb in CppCheck::checkFile (this=0xbfffe7c0, code=@0xbfffe6c0, FileName=0x80a735c \"tokenizer.cxx\") at src/cppcheck.cpp:397\n#4 0x08071a17 in CppCheck::check (this=0xbfffe7c0) at src/cppcheck.cpp:340\n#5 0x08077f21 in CppCheckExecutor::check (this=0xbfffe9b8, argc=2, argv=0xbfffea84) at src/cppcheckexecutor.cpp:54\n#6 0x0807b49c in main (argc=2, argv=0xbfffea84) at src/main.cpp:32\n}}}", "reporter": "mfranz", "cc": "", "resolution": "fixed", "_ts": "1241769511000000", "component": "Other", "summary": "segfault in src/token.cpp:455: (this=0x0)", "priority": "", "keywords": "token previous segfault", "time": "2009-05-06T20:13:00", "milestone": "1.32", "owner": "hyd_danmar", "type": "defect" } ```
defect
segfault in src token cpp this trac migrated from json status closed changetime description version git head n nthe test case is radically stripped down from a real world example and doesn t make much sense c wise it compiles though n n n ntemplate class a ntemplate class b n ntypedef a ntypedef a ntypedef b n ntemplate nclass c n c n t a n b b b foo n n t c n if c n c n n n n n n n in token previous this at src token cpp n in tokenizer simplifyknownvariables this at src tokenize cpp n in tokenizer simplifytokenlist this at src tokenize cpp n in cppcheck checkfile this code filename tokenizer cxx at src cppcheck cpp n in cppcheck check this at src cppcheck cpp n in cppcheckexecutor check this argc argv at src cppcheckexecutor cpp n in main argc argv at src main cpp n reporter mfranz cc resolution fixed ts component other summary segfault in src token cpp this priority keywords token previous segfault time milestone owner hyd danmar type defect
1
79,882
29,497,218,856
IssuesEvent
2023-06-02 18:05:11
department-of-veterans-affairs/va.gov-team
https://api.github.com/repos/department-of-veterans-affairs/va.gov-team
reopened
[ISSUE] Medical records disclosure authorization check defaults to "authorized"
526ez-Defects
## Issue Description On Step 3 of the 526EZ form, the "Supporting evidence" step, the user is prompted to either upload their private medical records or authorize a doctor to disclose them. However, it appears selecting the "No, please get my records from a doctor." option **automatically checks the box indicating the user is consenting to authorize those records are released.** This seems contrary to most legal permission-based prompts in these types of web forms, where the user has to manually check the box acknowledging they consent. This box should be unchecked and the user should not be able to proceed with the medical release option unless they check it. This prevents a user accidentally consenting. ![Screenshot 2023-06-02 at 12.50.29 PM.png](https://images.zenhubusercontent.com/647627f92b5e617343beedd1/192fee46-8e16-4c0b-9c08-eb51955b6630) <!-- Please provide as many responses as able --> ### Information needed to assess the severity of the issue - Users Impacted: Anyone selecting the release records option intentionally or by mistake. - Issue Impact: At worst, a user authorizing a medical records release when they don't want to. This seems very unlikely in this case but it's still arguably unethical as folks don't always read forms carefully. - Workaround: <!-- Is there a workaround? If so, what is it?--> - Legal Requirement: It arguably means the VA is not keeping user consent at top of mind. - Loss of Service: <!-- Can people lose service as result of the issue? --> - Permanent Impact: <!-- Does the problem have a permanent impact?  For instance can you resubmit the form for a veteran, or do you lose the submission entirely?  Can the VSR reopen the case later and process it, or does the issue send out an incorrect message to a veteran? --> - Vulnerability: Certainly a privacy vulnerability if the user consents to something by mistake. 
### Information that aids in the research and troubleshooting of the issue - Date and Time of Issue: <!-- The date and time (including timezone) the issue was observed.--> - Form/Page of Issue: http://localhost:3001/disability/file-disability-claim-form-21-526ez/supporting-evidence/private-medical-records - Error Message: <!-- Copy/Paste the error message, including a screenshot if able --> - How to reproduce: Select the "No please, get my records from my doctor" - Unique IDs: <!-- UID / ICN / FormID --> - Application ID: <!-- Specific example with application ID --> - Hardware Issue Observed On: <!-- Is it hardware specific? e.g older computer or mobile phone --> - Browser type / version: - Case#, if available: - Reporter name in va.gov: ### Any Additional Information: <!-- Examples: - Where does the user think they are in the process? - Does it impact a specific flow or population? - Is this issue seen consistently or has it just started? - When did we started seeing this issue? - What UX/UI components are the users interacting with before the error is produced? -->
1.0
[ISSUE] Medical records disclosure authorization check defaults to "authorized" - ## Issue Description On Step 3 of the 526EZ form, the "Supporting evidence" step, the user is prompted to either upload their private medical records or authorize a doctor to disclose them. However, it appears selecting the "No, please get my records from a doctor." option **automatically checks the box indicating the user is consenting to authorize those records are released.** This seems contrary to most legal permission-based prompts in these types of web forms, where the user has to manually check the box acknowledging they consent. This box should be unchecked and the user should not be able to proceed with the medical release option unless they check it. This prevents a user accidentally consenting. ![Screenshot 2023-06-02 at 12.50.29 PM.png](https://images.zenhubusercontent.com/647627f92b5e617343beedd1/192fee46-8e16-4c0b-9c08-eb51955b6630) <!-- Please provide as many responses as able --> ### Information needed to assess the severity of the issue - Users Impacted: Anyone selecting the release records option intentionally or by mistake. - Issue Impact: At worst, a user authorizing a medical records release when they don't want to. This seems very unlikely in this case but it's still arguably unethical as folks don't always read forms carefully. - Workaround: <!-- Is there a workaround? If so, what is it?--> - Legal Requirement: It arguably means the VA is not keeping user consent at top of mind. - Loss of Service: <!-- Can people lose service as result of the issue? --> - Permanent Impact: <!-- Does the problem have a permanent impact?  For instance can you resubmit the form for a veteran, or do you lose the submission entirely?  Can the VSR reopen the case later and process it, or does the issue send out an incorrect message to a veteran? --> - Vulnerability: Certainly a privacy vulnerability if the user consents to something by mistake. 
### Information that aids in the research and troubleshooting of the issue - Date and Time of Issue: <!-- The date and time (including timezone) the issue was observed.--> - Form/Page of Issue: http://localhost:3001/disability/file-disability-claim-form-21-526ez/supporting-evidence/private-medical-records - Error Message: <!-- Copy/Paste the error message, including a screenshot if able --> - How to reproduce: Select the "No please, get my records from my doctor" - Unique IDs: <!-- UID / ICN / FormID --> - Application ID: <!-- Specific example with application ID --> - Hardware Issue Observed On: <!-- Is it hardware specific? e.g older computer or mobile phone --> - Browser type / version: - Case#, if available: - Reporter name in va.gov: ### Any Additional Information: <!-- Examples: - Where does the user think they are in the process? - Does it impact a specific flow or population? - Is this issue seen consistently or has it just started? - When did we started seeing this issue? - What UX/UI components are the users interacting with before the error is produced? -->
defect
medical records disclosure authorization check defaults to authorized issue description on step of the form the supporting evidence step the user is prompted to either upload their private medical records or authorize a doctor to disclose them however it appears selecting the no please get my records from a doctor option automatically checks the box indicating the user is consenting to authorize those records are released this seems contrary to most legal permission based prompts in these types of web forms where the user has to manually check the box acknowledging they consent this box should be unchecked and the user should not be able to proceed with the medical release option unless they check it this prevents a user accidentally consenting information needed to assess the severity of the issue users impacted anyone selecting the release records option intentionally or by mistake issue impact at worst a user authorizing a medical records release when they don t want to this seems very unlikely in this case but it s still arguably unethical as folks don t always read forms carefully workaround legal requirement it arguably means the va is not keeping user consent at top of mind loss of service permanent impact vulnerability certainly a privacy vulnerability if the user consents to something by mistake information that aids in the research and troubleshooting of the issue date and time of issue form page of issue error message how to reproduce select the no please get my records from my doctor unique ids application id hardware issue observed on browser type version case if available reporter name in va gov any additional information examples where does the user think they are in the process does it impact a specific flow or population is this issue seen consistently or has it just started when did we started seeing this issue what ux ui components are the users interacting with before the error is produced
1
526,268
15,284,948,479
IssuesEvent
2021-02-23 12:56:01
googleapis/doc-pipeline
https://api.github.com/repos/googleapis/doc-pipeline
closed
Only fetch the exact xrefmap files needed for the current build
priority: p1 priority: p1 type: feature request
Tarballs can specify the xrefmaps they need using the `xrefs` field in `docs.metadata`. Let's use that field to specify the exact xrefmap files needed for the current build, rather than downloading _every_ xrefmap for _every_ build. @jskeet came up with: ``` devsite://dotnet/Google.Api.Gax/2.5.0 ``` We can convert that to an xrefmap by removing `devsite://`, replacing the first and last `/` with `-`, and adding `.tar.gz.yml` at the end. It will be an error if that xrefmap does not exist. Another benefit of this is that one library can have multiple versions. Each version will have its own xrefmap. If _every_ xrefmap is pulled in, there will be multiple `xrefmap` files that register the same UIDs. Plus, when we support multiple versions, we'll need to use just the right version of the xrefmap as URLs will be different. Finally, this will benefit libraries without xrefs because they won't need to download anything. @jskeet will implement the change to the dotnet libraries. I will implement the change to doc-pipeline.
2.0
Only fetch the exact xrefmap files needed for the current build - Tarballs can specify the xrefmaps they need using the `xrefs` field in `docs.metadata`. Let's use that field to specify the exact xrefmap files needed for the current build, rather than downloading _every_ xrefmap for _every_ build. @jskeet came up with: ``` devsite://dotnet/Google.Api.Gax/2.5.0 ``` We can convert that to an xrefmap by removing `devsite://`, replacing the first and last `/` with `-`, and adding `.tar.gz.yml` at the end. It will be an error if that xrefmap does not exist. Another benefit of this is that one library can have multiple versions. Each version will have its own xrefmap. If _every_ xrefmap is pulled in, there will be multiple `xrefmap` files that register the same UIDs. Plus, when we support multiple versions, we'll need to use just the right version of the xrefmap as URLs will be different. Finally, this will benefit libraries without xrefs because they won't need to download anything. @jskeet will implement the change to the dotnet libraries. I will implement the change to doc-pipeline.
non_defect
only fetch the exact xrefmap files needed for the current build tarballs can specify the xrefmaps they need using the xrefs field in docs metadata let s use that field to specify the exact xrefmap files needed for the current build rather than downloading every xrefmap for every build jskeet came up with devsite dotnet google api gax we can convert that to an xrefmap by removing devsite replacing the first and last with and adding tar gz yml at the end it will be an error if that xrefmap does not exist another benefit of this is that one library can have multiple versions each version will have its own xrefmap if every xrefmap is pulled in there will be multiple xrefmap files that register the same uids plus when we support multiple versions we ll need to use just the right version of the xrefmap as urls will be different finally this will benefit libraries without xrefs because they won t need to download anything jskeet will implement the change to the dotnet libraries i will implement the change to doc pipeline
0
420,038
12,232,043,534
IssuesEvent
2020-05-04 08:58:07
TTT-2/TTT2
https://api.github.com/repos/TTT-2/TTT2
closed
Add categories to the shop
accepted enhancement priority/low
I think the traitor/detective shop can get pretty confusing when a lot of Traitor/Detective Item addons are installed. To make the shop easier to handle. I'd like to have categories which either the server admin or any player can create to sort the items. The only alternative I could find was the favorites system, which is already implemented. However I think that the favorites system is very limited and doesn't solve the aforementioned problems. I created a quick mockup to show how this could look: ![Desktop - 1](https://user-images.githubusercontent.com/17912127/78123709-e0512580-740e-11ea-88b5-0bfcb45623f4.png)
1.0
Add categories to the shop - I think the traitor/detective shop can get pretty confusing when a lot of Traitor/Detective Item addons are installed. To make the shop easier to handle. I'd like to have categories which either the server admin or any player can create to sort the items. The only alternative I could find was the favorites system, which is already implemented. However I think that the favorites system is very limited and doesn't solve the aforementioned problems. I created a quick mockup to show how this could look: ![Desktop - 1](https://user-images.githubusercontent.com/17912127/78123709-e0512580-740e-11ea-88b5-0bfcb45623f4.png)
non_defect
add categories to the shop i think the traitor detective shop can get pretty confusing when a lot of traitor detective item addons are installed to make the shop easier to handle i d like to have categories which either the server admin or any player can create to sort the items the only alternative i could find was the favorites system which is already implemented however i think that the favorites system is very limited and doesn t solve the aforementioned problems i created a quick mockup to show how this could look
0
438,047
12,610,279,832
IssuesEvent
2020-06-12 04:26:30
AtlasOfLivingAustralia/bie-plugin
https://api.github.com/repos/AtlasOfLivingAustralia/bie-plugin
closed
Expert distribution maps not showing on species pages
Medium-Priority
E.g. Major Mitchell's Cockatoo https://bie.ala.org.au/species/urn:lsid:biodiversity.org.au:afd.taxon:0217f06f-664c-4c64-bc59-1b54650fa23d Looks like there are two issues: - [x] spatial-service is not allowing CORS requests - [ ] bie-plugin is still requesting JSONP but spatial-service does not support it, so needs to be changed to plain JSON and CORS working (see above). See #116 .
1.0
Expert distribution maps not showing on species pages - E.g. Major Mitchell's Cockatoo https://bie.ala.org.au/species/urn:lsid:biodiversity.org.au:afd.taxon:0217f06f-664c-4c64-bc59-1b54650fa23d Looks like there are two issues: - [x] spatial-service is not allowing CORS requests - [ ] bie-plugin is still requesting JSONP but spatial-service does not support it, so needs to be changed to plain JSON and CORS working (see above). See #116 .
non_defect
expert distribution maps not showing on species pages e g major mitchell s cockatoo looks like there are two issues spatial service is not allowing cors requests bie plugin is still requesting jsonp but spatial service does not support it so needs to be changed to plain json and cors working see above see
0
153,226
19,703,149,270
IssuesEvent
2022-01-12 18:44:27
scriptex/atanas.info
https://api.github.com/repos/scriptex/atanas.info
closed
CVE-2022-0155 (High) detected in follow-redirects-1.14.6.tgz
security vulnerability
## CVE-2022-0155 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>follow-redirects-1.14.6.tgz</b></p></summary> <p>HTTP and HTTPS modules that follow redirects.</p> <p>Library home page: <a href="https://registry.npmjs.org/follow-redirects/-/follow-redirects-1.14.6.tgz">https://registry.npmjs.org/follow-redirects/-/follow-redirects-1.14.6.tgz</a></p> <p>Path to dependency file: /package.json</p> <p>Path to vulnerable library: /node_modules/follow-redirects/package.json</p> <p> Dependency Hierarchy: - vuepress-1.9.5.tgz (Root Library) - core-1.9.5.tgz - webpack-dev-server-3.11.3.tgz - http-proxy-middleware-0.19.1.tgz - http-proxy-1.18.1.tgz - :x: **follow-redirects-1.14.6.tgz** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/scriptex/atanas.info/commit/3729c85d3ce174adc09cec54957091282b614ff7">3729c85d3ce174adc09cec54957091282b614ff7</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> follow-redirects is vulnerable to Exposure of Private Personal Information to an Unauthorized Actor <p>Publish Date: 2022-01-10 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-0155>CVE-2022-0155</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>8.0</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: Low - User Interaction: Required - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: High - Integrity Impact: High - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://huntr.dev/bounties/fc524e4b-ebb6-427d-ab67-a64181020406/">https://huntr.dev/bounties/fc524e4b-ebb6-427d-ab67-a64181020406/</a></p> <p>Release Date: 2022-01-10</p> <p>Fix Resolution: follow-redirects - v1.14.7</p> </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
True
CVE-2022-0155 (High) detected in follow-redirects-1.14.6.tgz - ## CVE-2022-0155 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>follow-redirects-1.14.6.tgz</b></p></summary> <p>HTTP and HTTPS modules that follow redirects.</p> <p>Library home page: <a href="https://registry.npmjs.org/follow-redirects/-/follow-redirects-1.14.6.tgz">https://registry.npmjs.org/follow-redirects/-/follow-redirects-1.14.6.tgz</a></p> <p>Path to dependency file: /package.json</p> <p>Path to vulnerable library: /node_modules/follow-redirects/package.json</p> <p> Dependency Hierarchy: - vuepress-1.9.5.tgz (Root Library) - core-1.9.5.tgz - webpack-dev-server-3.11.3.tgz - http-proxy-middleware-0.19.1.tgz - http-proxy-1.18.1.tgz - :x: **follow-redirects-1.14.6.tgz** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/scriptex/atanas.info/commit/3729c85d3ce174adc09cec54957091282b614ff7">3729c85d3ce174adc09cec54957091282b614ff7</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> follow-redirects is vulnerable to Exposure of Private Personal Information to an Unauthorized Actor <p>Publish Date: 2022-01-10 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-0155>CVE-2022-0155</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>8.0</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: Low - User Interaction: Required - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: High - Integrity Impact: High - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://huntr.dev/bounties/fc524e4b-ebb6-427d-ab67-a64181020406/">https://huntr.dev/bounties/fc524e4b-ebb6-427d-ab67-a64181020406/</a></p> <p>Release Date: 2022-01-10</p> <p>Fix Resolution: follow-redirects - v1.14.7</p> </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
non_defect
cve high detected in follow redirects tgz cve high severity vulnerability vulnerable library follow redirects tgz http and https modules that follow redirects library home page a href path to dependency file package json path to vulnerable library node modules follow redirects package json dependency hierarchy vuepress tgz root library core tgz webpack dev server tgz http proxy middleware tgz http proxy tgz x follow redirects tgz vulnerable library found in head commit a href vulnerability details follow redirects is vulnerable to exposure of private personal information to an unauthorized actor publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required low user interaction required scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution follow redirects step up your open source security game with whitesource
0