Dataset schema (15 columns; string columns report either a min/max length or a distinct-value count):

| Column | Dtype | Observed values |
|---|---|---|
| Unnamed: 0 | int64 | 0 – 832k |
| id | float64 | 2.49B – 32.1B |
| type | string | 1 distinct value |
| created_at | string | length 19 – 19 |
| repo | string | length 5 – 112 |
| repo_url | string | length 34 – 141 |
| action | string | 3 distinct values |
| title | string | length 1 – 757 |
| labels | string | length 4 – 664 |
| body | string | length 3 – 261k |
| index | string | 10 distinct values |
| text_combine | string | length 96 – 261k |
| label | string | 2 distinct values |
| text | string | length 96 – 232k |
| binary_label | int64 | 0 – 1 |
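The label/binary_label pairing in the schema above can be checked programmatically; a minimal sketch using pandas on two rows condensed from the sample records below (the fields are trimmed for brevity, and the defect = 1 / non_defect = 0 mapping is inferred from the samples, not stated by the dataset itself):

```python
import pandas as pd

# Two rows condensed from the sample records in this dump.
rows = [
    {"type": "IssuesEvent", "repo": "contao/contao", "action": "closed",
     "label": "defect", "binary_label": 1},
    {"type": "IssuesEvent", "repo": "RevivalEngine/WebClient", "action": "closed",
     "label": "non_defect", "binary_label": 0},
]
df = pd.DataFrame(rows)

# binary_label appears to be a 0/1 encoding of the two-class label column.
mapping = {"defect": 1, "non_defect": 0}
assert (df["label"].map(mapping) == df["binary_label"]).all()
```

On the full dataset, the same check would flag any row where the string label and its integer encoding disagree.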
---
Unnamed: 0: 38,544
id: 8,882,058,377
type: IssuesEvent
created_at: 2019-01-14 12:03:44
repo: contao/contao
repo_url: https://api.github.com/repos/contao/contao
action: closed
title: Sorting of root nodes in Backend-TreeView is ignored with filters
labels: defect
body:
**Affected version(s)** 4.x **Description** If filters have been activated in the Backend-TreeView the sorting of the root nodes is ignored. Without filters everything is fine! **How to reproduce** For example if you activate the published-filter in pageTree the root nodes won't be shown sorted anymore. **Possible Solution** https://github.com/contao/contao/blob/4b73fdd80c7c2c82906893a1ffb132d5c974bea1/core-bundle/src/Resources/contao/drivers/DC_Table.php#L3497 The line above could be replaced by: ``$objRoot = $this->Database->prepare("SELECT DISTINCT " . \Database::quoteIdentifier($fld) . ($blnHasSorting ? ", sorting" : '') . " FROM " . $this->strTable . " WHERE " . implode(' AND ', $this->procedure) . ($blnHasSorting ? " ORDER BY sorting" : ''))``
index: 1.0
text_combine:
Sorting of root nodes in Backend-TreeView is ignored with filters - **Affected version(s)** 4.x **Description** If filters have been activated in the Backend-TreeView the sorting of the root nodes is ignored. Without filters everything is fine! **How to reproduce** For example if you activate the published-filter in pageTree the root nodes won't be shown sorted anymore. **Possible Solution** https://github.com/contao/contao/blob/4b73fdd80c7c2c82906893a1ffb132d5c974bea1/core-bundle/src/Resources/contao/drivers/DC_Table.php#L3497 The line above could be replaced by: ``$objRoot = $this->Database->prepare("SELECT DISTINCT " . \Database::quoteIdentifier($fld) . ($blnHasSorting ? ", sorting" : '') . " FROM " . $this->strTable . " WHERE " . implode(' AND ', $this->procedure) . ($blnHasSorting ? " ORDER BY sorting" : ''))``
label: defect
text:
sorting of root nodes in backend treeview is ignored with filters affected version s x description if filters have been activated in the backend treeview the sorting of the root nodes is ignored without filters everything is fine how to reproduce for example if you activate the published filter in pagetree the root nodes won t be shown sorted anymore possible solution the line above could be replaced by objroot this database prepare select distinct database quoteidentifier fld blnhassorting sorting from this strtable where implode and this procedure blnhassorting order by sorting
binary_label: 1
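Comparing the text_combine and text fields of the record above, text looks like a normalized copy: lowercased, with URLs, code fragments, digits and punctuation stripped and whitespace collapsed. A rough sketch of such a cleaner (this is an assumption about the preprocessing, not the dataset authors' actual pipeline; `clean_text` is a hypothetical helper):

```python
import re

def clean_text(raw: str) -> str:
    """Lowercase, drop URLs, digits and punctuation, collapse whitespace."""
    s = raw.lower()
    s = re.sub(r"https?://\S+", " ", s)    # drop URLs
    s = re.sub(r"[^a-z\s]", " ", s)        # drop digits and punctuation
    return re.sub(r"\s+", " ", s).strip()  # collapse whitespace

print(clean_text("Sorting of root nodes in Backend-TreeView is ignored with filters"))
# → sorting of root nodes in backend treeview is ignored with filters
```

Applied to the title of the record above, this reproduces the opening of its text field, which is what suggests the transformation.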
---
Unnamed: 0: 52,148
id: 13,211,393,032
type: IssuesEvent
created_at: 2020-08-15 22:49:06
repo: icecube-trac/tix4
repo_url: https://api.github.com/repos/icecube-trac/tix4
action: opened
title: Problem while running detector simulation to test new IceTop Low Energy (IceTop_Volume) trigger. (Trac #1747)
labels: Incomplete Migration Migrated from Trac combo simulation defect
body:
<details> <summary><em>Migrated from <a href="https://code.icecube.wisc.edu/projects/icecube/ticket/1747">https://code.icecube.wisc.edu/projects/icecube/ticket/1747</a>, reported by rkoiralaand owned by olivas</em></summary> <p> ```json { "status": "closed", "changetime": "2019-02-13T14:12:38", "_ts": "1550067158057333", "description": "I tried to run an IceTop detector simulation for proton shower 10410 using the latest trunk version of simulation metaproject. The code uses TriggerSim segment with gcdfile=\"/data/exp/IceCube/2016/filtered/level2/VerifiedGCD/Level2_IC86.2016_data_Run00127991_0601_23_254_GCD.i3.gz\", and run_id=10410.\n\nThis is how the TriggerSim segment is called:\ntray.AddSegment(trigger_sim.TriggerSim, \"trigger\", gcd_file=dataio.I3File(gcdfile), run_id=10410)\n\nLooks like IceTop_Volume trigger is defined as 'CylinderTrigger' with name \u2018CylinderTrigger_0001\u2019. But it was not fired even once when 1 PeV proton shower was resampled 100 times within 300m from the origin. \nInput binary corsika shower is: /data/user/rkoirala/CORSIKA/DAT000011\nOutput i3files after detector simulation: /data/user/rkoirala/CORSIKA/DAT000011_00.i3", "reporter": "rkoirala", "cc": "", "resolution": "fixed", "time": "2016-06-16T00:09:47", "component": "combo simulation", "summary": "Problem while running detector simulation to test new IceTop Low Energy (IceTop_Volume) trigger.", "priority": "major", "keywords": "", "milestone": "", "owner": "olivas", "type": "defect" } ``` </p> </details>
index: 1.0
text_combine:
Problem while running detector simulation to test new IceTop Low Energy (IceTop_Volume) trigger. (Trac #1747) - <details> <summary><em>Migrated from <a href="https://code.icecube.wisc.edu/projects/icecube/ticket/1747">https://code.icecube.wisc.edu/projects/icecube/ticket/1747</a>, reported by rkoiralaand owned by olivas</em></summary> <p> ```json { "status": "closed", "changetime": "2019-02-13T14:12:38", "_ts": "1550067158057333", "description": "I tried to run an IceTop detector simulation for proton shower 10410 using the latest trunk version of simulation metaproject. The code uses TriggerSim segment with gcdfile=\"/data/exp/IceCube/2016/filtered/level2/VerifiedGCD/Level2_IC86.2016_data_Run00127991_0601_23_254_GCD.i3.gz\", and run_id=10410.\n\nThis is how the TriggerSim segment is called:\ntray.AddSegment(trigger_sim.TriggerSim, \"trigger\", gcd_file=dataio.I3File(gcdfile), run_id=10410)\n\nLooks like IceTop_Volume trigger is defined as 'CylinderTrigger' with name \u2018CylinderTrigger_0001\u2019. But it was not fired even once when 1 PeV proton shower was resampled 100 times within 300m from the origin. \nInput binary corsika shower is: /data/user/rkoirala/CORSIKA/DAT000011\nOutput i3files after detector simulation: /data/user/rkoirala/CORSIKA/DAT000011_00.i3", "reporter": "rkoirala", "cc": "", "resolution": "fixed", "time": "2016-06-16T00:09:47", "component": "combo simulation", "summary": "Problem while running detector simulation to test new IceTop Low Energy (IceTop_Volume) trigger.", "priority": "major", "keywords": "", "milestone": "", "owner": "olivas", "type": "defect" } ``` </p> </details>
label: defect
text:
problem while running detector simulation to test new icetop low energy icetop volume trigger trac migrated from json status closed changetime ts description i tried to run an icetop detector simulation for proton shower using the latest trunk version of simulation metaproject the code uses triggersim segment with gcdfile data exp icecube filtered verifiedgcd data gcd gz and run id n nthis is how the triggersim segment is called ntray addsegment trigger sim triggersim trigger gcd file dataio gcdfile run id n nlooks like icetop volume trigger is defined as cylindertrigger with name but it was not fired even once when pev proton shower was resampled times within from the origin ninput binary corsika shower is data user rkoirala corsika noutput after detector simulation data user rkoirala corsika reporter rkoirala cc resolution fixed time component combo simulation summary problem while running detector simulation to test new icetop low energy icetop volume trigger priority major keywords milestone owner olivas type defect
binary_label: 1
---
Unnamed: 0: 581,185
id: 17,287,666,140
type: IssuesEvent
created_at: 2021-07-24 03:20:04
repo: RevivalEngine/WebClient
repo_url: https://api.github.com/repos/RevivalEngine/WebClient
action: closed
title: Set up a new GitHub Actions workflow to run a static analysis tool on pull requests
labels: Complexity: TBD Meta: Repository Priority: High Status: In Progress Type: Task
body:
Goals: - [ ] The workflow should execute ``eslint`` or similar (TBD) in the exact same configuration as ``npm test`` (see https://github.com/RevivalEngine/WebClient/issues/49) - [ ] When the workflow fails, this fact should become visible in the PR itself somehow (ideally via GitHub and not a bot posting a comment to reduce clutter) - [ ] Merges should be blocked while it is failing - [ ] Releases should be unaffected (separate workflow) as otherwise the build workflow will become even more complex
index: 1.0
text_combine:
Set up a new GitHub Actions workflow to run a static analysis tool on pull requests - Goals: - [ ] The workflow should execute ``eslint`` or similar (TBD) in the exact same configuration as ``npm test`` (see https://github.com/RevivalEngine/WebClient/issues/49) - [ ] When the workflow fails, this fact should become visible in the PR itself somehow (ideally via GitHub and not a bot posting a comment to reduce clutter) - [ ] Merges should be blocked while it is failing - [ ] Releases should be unaffected (separate workflow) as otherwise the build workflow will become even more complex
label: non_defect
text:
set up a new github actions workflow to run a static analysis tool on pull requests goals the workflow should execute eslint or similar tbd in the exact same configuration as npm test see when the workflow fails this fact should become visible in the pr itself somehow ideally via github and not a bot posting a comment to reduce clutter merges should be blocked while it is failing releases should be unaffected separate workflow as otherwise the build workflow will become even more complex
binary_label: 0
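Records like the ones above pair the cleaned text field with binary_label (defect = 1, non_defect = 0, inferred from the samples). Purely to illustrate how those fields could be consumed, a toy keyword heuristic over the cleaned text (a hypothetical baseline, not a real classifier and not anything the dataset ships with):

```python
# Toy heuristic: flag an issue as a defect if its cleaned text contains
# a failure-like word. Illustrative only; real use would train a model.
DEFECT_HINTS = {"defect", "error", "bug", "crash", "broken", "fails", "ignored"}

def predict_binary_label(text: str) -> int:
    return int(any(word in DEFECT_HINTS for word in text.split()))

print(predict_binary_label("sorting of root nodes is ignored with filters"))  # → 1
print(predict_binary_label("set up a new github actions workflow"))           # → 0
```

The two example inputs are the cleaned titles of a defect and a non_defect record from this dump, so the heuristic happens to agree with their binary_label values.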
---
Unnamed: 0: 109,335
id: 16,843,679,136
type: IssuesEvent
created_at: 2021-06-19 02:48:49
repo: bharathirajatut/fitbit-api-example-java2
repo_url: https://api.github.com/repos/bharathirajatut/fitbit-api-example-java2
action: opened
title: WS-2020-0293 (Medium) detected in spring-security-web-4.1.1.RELEASE.jar
labels: security vulnerability
body:
## WS-2020-0293 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>spring-security-web-4.1.1.RELEASE.jar</b></p></summary> <p>spring-security-web</p> <p>Library home page: <a href="http://spring.io/spring-security">http://spring.io/spring-security</a></p> <p>Path to dependency file: fitbit-api-example-java2/pom.xml</p> <p>Path to vulnerable library: /home/wss-scanner/.m2/repository/org/springframework/security/spring-security-web/4.1.1.RELEASE/spring-security-web-4.1.1.RELEASE.jar</p> <p> Dependency Hierarchy: - spring-boot-starter-security-1.4.0.RELEASE.jar (Root Library) - :x: **spring-security-web-4.1.1.RELEASE.jar** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://api.github.com/repos/bharathirajatut/fitbit-api-example-java2/commits/8c153ad064e8f07a4ddade35ac13a9b485ca3dac">8c153ad064e8f07a4ddade35ac13a9b485ca3dac</a></p> <p>Found in base branch: <b>master</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary> <p> Spring Security before 5.2.9, 5.3.7, and 5.4.3 vulnerable to side-channel attacks. Vulnerable versions of Spring Security don't use constant time comparisons for CSRF tokens. 
<p>Publish Date: 2020-12-17 <p>URL: <a href=https://github.com/spring-projects/spring-security/commit/40e027c56d11b9b4c5071360bfc718165c937784>WS-2020-0293</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.9</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: High - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: High - Integrity Impact: None - Availability Impact: None </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://github.com/spring-projects/spring-security/issues/9291">https://github.com/spring-projects/spring-security/issues/9291</a></p> <p>Release Date: 2020-12-17</p> <p>Fix Resolution: org.springframework.security:spring-security-web:5.2.9,5.3.7,5.4.3</p> </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
index: True
text_combine:
WS-2020-0293 (Medium) detected in spring-security-web-4.1.1.RELEASE.jar - ## WS-2020-0293 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>spring-security-web-4.1.1.RELEASE.jar</b></p></summary> <p>spring-security-web</p> <p>Library home page: <a href="http://spring.io/spring-security">http://spring.io/spring-security</a></p> <p>Path to dependency file: fitbit-api-example-java2/pom.xml</p> <p>Path to vulnerable library: /home/wss-scanner/.m2/repository/org/springframework/security/spring-security-web/4.1.1.RELEASE/spring-security-web-4.1.1.RELEASE.jar</p> <p> Dependency Hierarchy: - spring-boot-starter-security-1.4.0.RELEASE.jar (Root Library) - :x: **spring-security-web-4.1.1.RELEASE.jar** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://api.github.com/repos/bharathirajatut/fitbit-api-example-java2/commits/8c153ad064e8f07a4ddade35ac13a9b485ca3dac">8c153ad064e8f07a4ddade35ac13a9b485ca3dac</a></p> <p>Found in base branch: <b>master</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary> <p> Spring Security before 5.2.9, 5.3.7, and 5.4.3 vulnerable to side-channel attacks. Vulnerable versions of Spring Security don't use constant time comparisons for CSRF tokens. 
<p>Publish Date: 2020-12-17 <p>URL: <a href=https://github.com/spring-projects/spring-security/commit/40e027c56d11b9b4c5071360bfc718165c937784>WS-2020-0293</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.9</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: High - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: High - Integrity Impact: None - Availability Impact: None </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://github.com/spring-projects/spring-security/issues/9291">https://github.com/spring-projects/spring-security/issues/9291</a></p> <p>Release Date: 2020-12-17</p> <p>Fix Resolution: org.springframework.security:spring-security-web:5.2.9,5.3.7,5.4.3</p> </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
label: non_defect
text:
ws medium detected in spring security web release jar ws medium severity vulnerability vulnerable library spring security web release jar spring security web library home page a href path to dependency file fitbit api example pom xml path to vulnerable library home wss scanner repository org springframework security spring security web release spring security web release jar dependency hierarchy spring boot starter security release jar root library x spring security web release jar vulnerable library found in head commit a href found in base branch master vulnerability details spring security before and vulnerable to side channel attacks vulnerable versions of spring security don t use constant time comparisons for csrf tokens publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity high privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact none availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution org springframework security spring security web step up your open source security game with whitesource
binary_label: 0
---
Unnamed: 0: 578,148
id: 17,145,382,133
type: IssuesEvent
created_at: 2021-07-13 14:07:24
repo: geosolutions-it/MapStore2-C027
repo_url: https://api.github.com/repos/geosolutions-it/MapStore2-C027
action: opened
title: Migration and update of GN2 cluster
labels: Priority: High
body:
The current cluster of GeoServer available in [sitgn2](https://docs.google.com/document/d/18ZmLLW3eaa_1SfkDwy77uo938gbddO3ulLLFFXBgKfI/edit#heading=h.hawqmkhlguu5) must be migrated to the [new VM](https://docs.google.com/drawings/d/1WD6zANu_jkGFat9zfOXnCyQWQE-bYE4YRAi12bzP6A0/edit) (new name is sitgs2) and the version updated to the latest stable 2.19.1 (current version 2.13.2). The spreadsheet with references to all new VMs is available in drive [here](https://drive.google.com/file/d/1GE7osiWixp_NOF22vUbZJ7Yk-X63c5mw/view?usp=sharing)
index: 1.0
text_combine:
Migration and update of GN2 cluster - The current cluster of GeoServer available in [sitgn2](https://docs.google.com/document/d/18ZmLLW3eaa_1SfkDwy77uo938gbddO3ulLLFFXBgKfI/edit#heading=h.hawqmkhlguu5) must be migrated to the [new VM](https://docs.google.com/drawings/d/1WD6zANu_jkGFat9zfOXnCyQWQE-bYE4YRAi12bzP6A0/edit) (new name is sitgs2) and the version updated to the latest stable 2.19.1 (current version 2.13.2). The spreadsheet with references to all new VMs is available in drive [here](https://drive.google.com/file/d/1GE7osiWixp_NOF22vUbZJ7Yk-X63c5mw/view?usp=sharing)
label: non_defect
text:
migration and update of cluster the current cluster of geoserver available in must be migrated to the new name is and the version updated to the latest stable current version the spreadsheet with references to all new vms is available in drive
binary_label: 0
---
Unnamed: 0: 32,662
id: 6,888,650,254
type: IssuesEvent
created_at: 2017-11-22 07:11:42
repo: STEllAR-GROUP/hpx
repo_url: https://api.github.com/repos/STEllAR-GROUP/hpx
action: reopened
title: parallel merge is not stable
labels: category: algorithms type: defect
body:
[merge_testcase.zip](https://github.com/STEllAR-GROUP/hpx/files/1392241/merge_testcase.zip) The documentation for the merge algorithm states _For equivalent elements in the original two ranges, the elements from the first range precede the elements from the second range_. I find that this is sometimes not the case when the parallel execution policy is chosen. The attached testcase merges two ranges of <int, char> pairs, ordered by the first value. For my compiler (gcc 6.3.0) I find that the resulting merged sequence interleaves equivalent elements from the input ranges, instead of using all of the equivalent elements from the first range first. Specifically, when merging `(3,a), (3,b)` with `(3,c)` the result is `(3,a), (3,c), (3,b)`. Due to the high threshold (65536) for using the parallel version of the algorithm, reproducing this requires modifying [this line](https://github.com/STEllAR-GROUP/hpx/blob/master/hpx/parallel/algorithms/merge.hpp#L173) so the threshold is something low (I used 10).
index: 1.0
text_combine:
parallel merge is not stable - [merge_testcase.zip](https://github.com/STEllAR-GROUP/hpx/files/1392241/merge_testcase.zip) The documentation for the merge algorithm states _For equivalent elements in the original two ranges, the elements from the first range precede the elements from the second range_. I find that this is sometimes not the case when the parallel execution policy is chosen. The attached testcase merges two ranges of <int, char> pairs, ordered by the first value. For my compiler (gcc 6.3.0) I find that the resulting merged sequence interleaves equivalent elements from the input ranges, instead of using all of the equivalent elements from the first range first. Specifically, when merging `(3,a), (3,b)` with `(3,c)` the result is `(3,a), (3,c), (3,b)`. Due to the high threshold (65536) for using the parallel version of the algorithm, reproducing this requires modifying [this line](https://github.com/STEllAR-GROUP/hpx/blob/master/hpx/parallel/algorithms/merge.hpp#L173) so the threshold is something low (I used 10).
label: defect
text:
parallel merge is not stable the documentation for the merge algorithm states for equivalent elements in the original two ranges the elements from the first range precede the elements from the second range i find that this is sometimes not the case when the parallel execution policy is chosen the attached testcase merges two ranges of pairs ordered by the first value for my compiler gcc i find that the resulting merged sequence interleaves equivalent elements from the input ranges instead of using all of the equivalent elements from the first range first specifically when merging a b with c the result is a c b due to the high threshold for using the parallel version of the algorithm reproducing this requires modifying so the threshold is something low i used
binary_label: 1
---
Unnamed: 0: 67,055
id: 20,825,766,785
type: IssuesEvent
created_at: 2022-03-18 20:39:15
repo: vector-im/element-web
repo_url: https://api.github.com/repos/vector-im/element-web
action: opened
title: Preview of uploaded BMP image doesn't work (image opens fine)
labels: T-Defect
body:
### Steps to reproduce 1. Upload an image to Element-Web in the BMP image format. 2. Look at the timeline after the upload is done. 3. Open the image. 4. Go back to the timeline. ### Outcome #### What did you expect? To see a preview of the image in the timeline. #### What happened instead? There is just a grey-filled box. Opening the image works fine. I can see what I've uploaded. But the timeline doesn't show a preview/thumbnail of the image. ### Operating system _No response_ ### Browser information Firefox 98 ### URL for webapp _No response_ ### Application version 1.10.7 ### Homeserver _No response_ ### Will you send logs? No
index: 1.0
text_combine:
Preview of uploaded BMP image doesn't work (image opens fine) - ### Steps to reproduce 1. Upload an image to Element-Web in the BMP image format. 2. Look at the timeline after the upload is done. 3. Open the image. 4. Go back to the timeline. ### Outcome #### What did you expect? To see a preview of the image in the timeline. #### What happened instead? There is just a grey-filled box. Opening the image works fine. I can see what I've uploaded. But the timeline doesn't show a preview/thumbnail of the image. ### Operating system _No response_ ### Browser information Firefox 98 ### URL for webapp _No response_ ### Application version 1.10.7 ### Homeserver _No response_ ### Will you send logs? No
label: defect
text:
preview of uploaded bmp image doesn t work image opens fine steps to reproduce upload an image to element web in the bmp image format look at the timeline after the upload is done open the image go back to the timeline outcome what did you expect to see a preview of the image in the timeline what happened instead there is just a grey filled box opening the image works fine i can see what i ve uploaded but the timeline doesn t show a preview thumbnail of the image operating system no response browser information firefox url for webapp no response application version homeserver no response will you send logs no
binary_label: 1
---
Unnamed: 0: 49,189
id: 13,185,284,968
type: IssuesEvent
created_at: 2020-08-12 21:05:22
repo: icecube-trac/tix3
repo_url: https://api.github.com/repos/icecube-trac/tix3
action: opened
title: genie - not found in I3_PORTS when SYSTEM_PACKAGES=ON (Trac #931)
labels: Incomplete Migration Migrated from Trac cmake defect
body:
<details> <summary><em>Migrated from https://code.icecube.wisc.edu/ticket/931 , reported by nega and owned by nega</em></summary> <p> ```json { "status": "closed", "changetime": "2019-02-13T14:11:45", "description": "", "reporter": "nega", "cc": "", "resolution": "fixed", "_ts": "1550067105393059", "component": "cmake", "summary": "genie - not found in I3_PORTS when SYSTEM_PACKAGES=ON", "priority": "normal", "keywords": "", "time": "2015-04-14T19:54:06", "milestone": "", "owner": "nega", "type": "defect" } ``` </p> </details>
index: 1.0
text_combine:
genie - not found in I3_PORTS when SYSTEM_PACKAGES=ON (Trac #931) - <details> <summary><em>Migrated from https://code.icecube.wisc.edu/ticket/931 , reported by nega and owned by nega</em></summary> <p> ```json { "status": "closed", "changetime": "2019-02-13T14:11:45", "description": "", "reporter": "nega", "cc": "", "resolution": "fixed", "_ts": "1550067105393059", "component": "cmake", "summary": "genie - not found in I3_PORTS when SYSTEM_PACKAGES=ON", "priority": "normal", "keywords": "", "time": "2015-04-14T19:54:06", "milestone": "", "owner": "nega", "type": "defect" } ``` </p> </details>
label: defect
text:
genie not found in ports when system packages on trac migrated from reported by nega and owned by nega json status closed changetime description reporter nega cc resolution fixed ts component cmake summary genie not found in ports when system packages on priority normal keywords time milestone owner nega type defect
binary_label: 1
---
Unnamed: 0: 64,862
id: 3,219,042,002
type: IssuesEvent
created_at: 2015-10-08 07:21:11
repo: cs2103aug2015-t16-2j/main
repo_url: https://api.github.com/repos/cs2103aug2015-t16-2j/main
action: closed
title: As a user, I want to edit/ update my entries.
labels: priority.high type.story
body:
So I do not have to delete and retype when I want to change details of my scheduled entry.
index: 1.0
text_combine:
As a user, I want to edit/ update my entries. - So I do not have to delete and retype when I want to change details of my scheduled entry.
label: non_defect
text:
as a user i want to edit update my entries so i do not have to delete and retype when i want to change details of my scheduled entry
binary_label: 0
---
Unnamed: 0: 74,809
id: 25,339,040,991
type: IssuesEvent
created_at: 2022-11-18 19:36:20
repo: openzfs/zfs
repo_url: https://api.github.com/repos/openzfs/zfs
action: closed
title: systemd race condition at boot between zfs-import-cache and multipathd
labels: Type: Defect Bot: Not Stale
body:
### System information Type | Version/Name --- | --- Distribution Name | Rocky Linux 8 Distribution Version | 8.4 Kernel Version | 4.18.0-305.19.1.el8_4 Architecture | x86_64 OpenZFS Version | 2.1.1 ### Describe the problem you're observing multipath error creating map for devices in imported pool. ### Describe how to reproduce the problem Reboot system with zfs-import-cache.service enabled. And problem fixed by editing the systemd unit file for that service. ### Include any warning/errors/backtraces from the system logs ``` [root@zfs2 ~]# multipath |& head Oct 30 12:05:38 | sdbi: No SAS end device for 'end_device-0:4' Oct 30 12:05:38 | sdfy: No SAS end device for 'end_device-9:4' Oct 30 12:05:38 | 35000cca2531dd8b4: ignoring map Oct 30 12:05:38 | sdbj: No SAS end device for 'end_device-0:4' Oct 30 12:05:38 | sdfz: No SAS end device for 'end_device-9:4' Oct 30 12:05:38 | 35000cca25315aac4: ignoring map Oct 30 12:05:38 | sdbs: No SAS end device for 'end_device-0:4' Oct 30 12:05:38 | sdgi: No SAS end device for 'end_device-9:4' Oct 30 12:05:38 | 35000cca25316e118: ignoring map Oct 30 12:05:38 | sdbt: No SAS end device for 'end_device-0:4' ``` which I believe is due to those devices being part of an imported zpool, e.g., ``` [root@zfs2 ~]# zpool status | grep cca2531dd8b4 wwn-0x5000cca2531dd8b4 ONLINE 0 0 0 ``` This appears to be a race condition at boot time if `zfs-import-cache.service` completes before `multipathd.service`. For example, node the earlier timestamp for `zfs-import-cache.service` relative to `multipathd.service`. 
``` [root@zfs2 ~]# systemctl status zfs-import-cache.service ● zfs-import-cache.service - Import ZFS pools by cache file Loaded: loaded (/usr/lib/systemd/system/zfs-import-cache.service; enabled; vendor preset: enabled) Active: active (exited) since Sat 2021-10-30 11:56:16 PDT; 6min ago Docs: man:zpool(8) Process: 9580 ExecStart=/sbin/zpool import -c /etc/zfs/zpool.cache -aN $ZPOOL_IMPORT_OPTS (code=exited, st> Main PID: 9580 (code=exited, status=0/SUCCESS) Tasks: 0 (limit: 3355442) Memory: 0B CGroup: /system.slice/zfs-import-cache.service Oct 30 11:56:13 zfs2 systemd[1]: Starting Import ZFS pools by cache file... Oct 30 11:56:16 zfs2 systemd[1]: Started Import ZFS pools by cache file. ``` ``` [root@zfs2 ~]# multipath -ll | grep -A5 cca253077224 [root@zfs2 ~]# systemctl status multipathd ● multipathd.service - Device-Mapper Multipath Device Controller Loaded: loaded (/usr/lib/systemd/system/multipathd.service; enabled; vendor preset: enabled) Active: active (running) since Sat 2021-10-30 11:56:48 PDT; 1min 14s ago Process: 9715 ExecStartPre=/sbin/multipath -A (code=exited, status=0/SUCCESS) Process: 9581 ExecStartPre=/sbin/modprobe -a scsi_dh_alua scsi_dh_emc scsi_dh_rdac dm-multipath (code=exit> Main PID: 9717 (multipathd) Status: "up" Tasks: 7 Memory: 154.7M CGroup: /system.slice/multipathd.service └─9717 /sbin/multipathd -d -s Oct 30 11:56:48 zfs2 multipathd[9717]: sddn: No SAS end device for 'end_device-0:5' Oct 30 11:56:48 zfs2 multipathd[9717]: sdid: No SAS end device for 'end_device-9:5' Oct 30 11:56:48 zfs2 multipathd[9717]: 35000cca2b003dc68: ignoring map Oct 30 11:56:48 zfs2 multipathd[9717]: sddo: No SAS end device for 'end_device-0:5' Oct 30 11:56:48 zfs2 multipathd[9717]: sdie: No SAS end device for 'end_device-9:5' Oct 30 11:56:48 zfs2 multipathd[9717]: 35000cca25318c0cc: ignoring map Oct 30 11:56:48 zfs2 multipathd[9717]: sddp: No SAS end device for 'end_device-0:5' Oct 30 11:56:48 zfs2 multipathd[9717]: sdif: No SAS end device for 'end_device-9:5' 
Oct 30 11:56:48 zfs2 multipathd[9717]: 35000cca2530e04d8: ignoring map Oct 30 11:56:48 zfs2 systemd[1]: Started Device-Mapper Multipath Device Controller. ``` If I manually export the pool and run `multipath` before re-importing the pool then the zpool devices all have multipath maps, though there is a separate issue with "cannot mount...Cannot allocate memory" , which is very strange on this 1TByte server (and a subsequent mount succeeds). ``` [root@zfs2 ~]# time zpool export home4 real 0m36.943s user 0m0.008s sys 0m34.737s [root@zfs2 ~]# multipath Oct 30 12:11:39 | sdbi: No SAS end device for 'end_device-0:4' Oct 30 12:11:39 | sdfy: No SAS end device for 'end_device-9:4' create: 35000cca2531dd8b4 undef HGST,HUH721212AL5200 size=11T features='0' hwhandler='0' wp=undef `-+- policy='service-time 0' prio=1 status=undef |- 0:0:62:0 sdbi 67:192 undef ready running `- 9:0:62:0 sdfy 131:64 undef ready running Oct 30 12:11:39 | sdbj: No SAS end device for 'end_device-0:4' Oct 30 12:11:39 | sdfz: No SAS end device for 'end_device-9:4' create: 35000cca25315aac4 undef HGST,HUH721212AL5200 size=11T features='0' hwhandler='0' wp=undef `-+- policy='service-time 0' prio=1 status=undef |- 0:0:63:0 sdbj 67:208 undef ready running `- 9:0:63:0 sdfz 131:80 undef ready running ... 
[root@zfs2 ~]# time zpool import home4 cannot mount 'home4/rolland': Cannot allocate memory real 0m51.849s user 0m0.494s sys 0m4.070s [root@zfs2 ~]# df -h | grep rolland [root@zfs2 ~]# zfs mount -a [root@zfs2 ~]# df -h | grep rolland home4/rolland 293T 692M 293T 1% /home4/rolland [root@zfs2 ~]# multipath [root@zfs2 ~]# zpool list home4 NAME SIZE ALLOC FREE CKPOINT EXPANDSZ FRAG CAP DEDUP HEALTH ALTROOT home4 662T 252T 410T - - 0% 38% 1.00x ONLINE - [root@zfs2 ~]# zpool status | head pool: home4 state: ONLINE scan: resilvered 182M in 00:56:59 with 0 errors on Sat Oct 30 10:02:30 2021 config: NAME STATE READ WRITE CKSUM home4 ONLINE 0 0 0 raidz3-0 ONLINE 0 0 0 wwn-0x5000cca253077224 ONLINE 0 0 0 wwn-0x5000cca253077640 ONLINE 0 0 0 [root@zfs2 ~]# multipath -ll | grep -A5 cca253077224 35000cca253077224 dm-92 HGST,HUH721212AL5200 size=11T features='0' hwhandler='0' wp=rw `-+- policy='service-time 0' prio=1 status=active |- 0:0:98:0 sdcs 70:0 active ready running `- 9:0:98:0 sdhi 133:128 active ready running 35000cca2531e89dc dm-78 HGST,HUH721212AL5200 ``` I believe this is due to a typo in, ``` [root@zfs2 ~]# rpm -qf /usr/lib/systemd/system/zfs-import-cache.service zfs-2.1.1-1.el8.x86_64 [root@zfs2 ~]# cat /usr/lib/systemd/system/zfs-import-cache.service [Unit] Description=Import ZFS pools by cache file Documentation=man:zpool(8) DefaultDependencies=no Requires=systemd-udev-settle.service After=systemd-udev-settle.service After=cryptsetup.target After=multipathd.target After=systemd-remount-fs.service Before=zfs-import.target ConditionFileNotEmpty=/etc/zfs/zpool.cache ConditionPathIsDirectory=/sys/module/zfs [Service] Type=oneshot RemainAfterExit=yes ExecStart=/sbin/zpool import -c /etc/zfs/zpool.cache -aN $ZPOOL_IMPORT_OPTS [Install] WantedBy=zfs-import.target ``` If I change `After=multipathd.target` to `After=multipathd.service` there was no problem after the next reboot. 
In particular, I see multipath maps for all of the zpool devices imported at boot, and expected relative service timestamps, ``` [root@zfs2 ~]# systemctl status multipathd ● multipathd.service - Device-Mapper Multipath Device Controller Loaded: loaded (/usr/lib/systemd/system/multipathd.service; enabled; vendor preset: enabled) Active: active (running) since Sat 2021-10-30 12:30:12 PDT; 35s ago Process: 9578 ExecStartPre=/sbin/multipath -A (code=exited, status=0/SUCCESS) Process: 9571 ExecStartPre=/sbin/modprobe -a scsi_dh_alua scsi_dh_emc scsi_dh_rdac dm-multipath (code=exit> Main PID: 9580 (multipathd) Status: "up" Tasks: 7 Memory: 208.8M CGroup: /system.slice/multipathd.service └─9580 /sbin/multipathd -d -s Oct 30 12:30:12 zfs2 multipathd[9580]: sddn: No SAS end device for 'end_device-0:5' Oct 30 12:30:12 zfs2 multipathd[9580]: sdid: No SAS end device for 'end_device-9:5' Oct 30 12:30:12 zfs2 multipathd[9580]: 35000cca2b003dc68: load table [0 23437770752 multipath 0 0 1 1 servic> Oct 30 12:30:12 zfs2 multipathd[9580]: sddo: No SAS end device for 'end_device-0:5' Oct 30 12:30:12 zfs2 multipathd[9580]: sdie: No SAS end device for 'end_device-9:5' Oct 30 12:30:12 zfs2 multipathd[9580]: 35000cca25318c0cc: load table [0 23437770752 multipath 0 0 1 1 servic> Oct 30 12:30:12 zfs2 multipathd[9580]: sddp: No SAS end device for 'end_device-0:5' Oct 30 12:30:12 zfs2 multipathd[9580]: sdif: No SAS end device for 'end_device-9:5' Oct 30 12:30:12 zfs2 multipathd[9580]: 35000cca2530e04d8: load table [0 23437770752 multipath 0 0 1 1 servic> Oct 30 12:30:12 zfs2 systemd[1]: Started Device-Mapper Multipath Device Controller. 
[root@zfs2 ~]# systemctl status zfs-import-cache.service ● zfs-import-cache.service - Import ZFS pools by cache file Loaded: loaded (/usr/lib/systemd/system/zfs-import-cache.service; enabled; vendor preset: enabled) Active: active (exited) since Sat 2021-10-30 12:30:15 PDT; 51s ago Docs: man:zpool(8) Process: 11224 ExecStart=/sbin/zpool import -c /etc/zfs/zpool.cache -aN $ZPOOL_IMPORT_OPTS (code=exited, s> Main PID: 11224 (code=exited, status=0/SUCCESS) Tasks: 0 (limit: 3355442) Memory: 0B CGroup: /system.slice/zfs-import-cache.service Oct 30 12:30:12 zfs2 systemd[1]: Starting Import ZFS pools by cache file... Oct 30 12:30:15 zfs2 systemd[1]: Started Import ZFS pools by cache file. ```
1.0
systemd race condition at boot between zfs-import-cache and multipathd - ### System information Type | Version/Name --- | --- Distribution Name | Rocky Linux 8 Distribution Version | 8.4 Kernel Version | 4.18.0-305.19.1.el8_4 Architecture | x86_64 OpenZFS Version | 2.1.1 ### Describe the problem you're observing multipath error creating map for devices in imported pool. ### Describe how to reproduce the problem Reboot system with zfs-import-cache.service enabled. And problem fixed by editing the systemd unit file for that service. ### Include any warning/errors/backtraces from the system logs ``` [root@zfs2 ~]# multipath |& head Oct 30 12:05:38 | sdbi: No SAS end device for 'end_device-0:4' Oct 30 12:05:38 | sdfy: No SAS end device for 'end_device-9:4' Oct 30 12:05:38 | 35000cca2531dd8b4: ignoring map Oct 30 12:05:38 | sdbj: No SAS end device for 'end_device-0:4' Oct 30 12:05:38 | sdfz: No SAS end device for 'end_device-9:4' Oct 30 12:05:38 | 35000cca25315aac4: ignoring map Oct 30 12:05:38 | sdbs: No SAS end device for 'end_device-0:4' Oct 30 12:05:38 | sdgi: No SAS end device for 'end_device-9:4' Oct 30 12:05:38 | 35000cca25316e118: ignoring map Oct 30 12:05:38 | sdbt: No SAS end device for 'end_device-0:4' ``` which I believe is due to those devices being part of an imported zpool, e.g., ``` [root@zfs2 ~]# zpool status | grep cca2531dd8b4 wwn-0x5000cca2531dd8b4 ONLINE 0 0 0 ``` This appears to be a race condition at boot time if `zfs-import-cache.service` completes before `multipathd.service`. For example, node the earlier timestamp for `zfs-import-cache.service` relative to `multipathd.service`. 
``` [root@zfs2 ~]# systemctl status zfs-import-cache.service ● zfs-import-cache.service - Import ZFS pools by cache file Loaded: loaded (/usr/lib/systemd/system/zfs-import-cache.service; enabled; vendor preset: enabled) Active: active (exited) since Sat 2021-10-30 11:56:16 PDT; 6min ago Docs: man:zpool(8) Process: 9580 ExecStart=/sbin/zpool import -c /etc/zfs/zpool.cache -aN $ZPOOL_IMPORT_OPTS (code=exited, st> Main PID: 9580 (code=exited, status=0/SUCCESS) Tasks: 0 (limit: 3355442) Memory: 0B CGroup: /system.slice/zfs-import-cache.service Oct 30 11:56:13 zfs2 systemd[1]: Starting Import ZFS pools by cache file... Oct 30 11:56:16 zfs2 systemd[1]: Started Import ZFS pools by cache file. ``` ``` [root@zfs2 ~]# multipath -ll | grep -A5 cca253077224 [root@zfs2 ~]# systemctl status multipathd ● multipathd.service - Device-Mapper Multipath Device Controller Loaded: loaded (/usr/lib/systemd/system/multipathd.service; enabled; vendor preset: enabled) Active: active (running) since Sat 2021-10-30 11:56:48 PDT; 1min 14s ago Process: 9715 ExecStartPre=/sbin/multipath -A (code=exited, status=0/SUCCESS) Process: 9581 ExecStartPre=/sbin/modprobe -a scsi_dh_alua scsi_dh_emc scsi_dh_rdac dm-multipath (code=exit> Main PID: 9717 (multipathd) Status: "up" Tasks: 7 Memory: 154.7M CGroup: /system.slice/multipathd.service └─9717 /sbin/multipathd -d -s Oct 30 11:56:48 zfs2 multipathd[9717]: sddn: No SAS end device for 'end_device-0:5' Oct 30 11:56:48 zfs2 multipathd[9717]: sdid: No SAS end device for 'end_device-9:5' Oct 30 11:56:48 zfs2 multipathd[9717]: 35000cca2b003dc68: ignoring map Oct 30 11:56:48 zfs2 multipathd[9717]: sddo: No SAS end device for 'end_device-0:5' Oct 30 11:56:48 zfs2 multipathd[9717]: sdie: No SAS end device for 'end_device-9:5' Oct 30 11:56:48 zfs2 multipathd[9717]: 35000cca25318c0cc: ignoring map Oct 30 11:56:48 zfs2 multipathd[9717]: sddp: No SAS end device for 'end_device-0:5' Oct 30 11:56:48 zfs2 multipathd[9717]: sdif: No SAS end device for 'end_device-9:5' 
Oct 30 11:56:48 zfs2 multipathd[9717]: 35000cca2530e04d8: ignoring map Oct 30 11:56:48 zfs2 systemd[1]: Started Device-Mapper Multipath Device Controller. ``` If I manually export the pool and run `multipath` before re-importing the pool then the zpool devices all have multipath maps, though there is a separate issue with "cannot mount...Cannot allocate memory" , which is very strange on this 1TByte server (and a subsequent mount succeeds). ``` [root@zfs2 ~]# time zpool export home4 real 0m36.943s user 0m0.008s sys 0m34.737s [root@zfs2 ~]# multipath Oct 30 12:11:39 | sdbi: No SAS end device for 'end_device-0:4' Oct 30 12:11:39 | sdfy: No SAS end device for 'end_device-9:4' create: 35000cca2531dd8b4 undef HGST,HUH721212AL5200 size=11T features='0' hwhandler='0' wp=undef `-+- policy='service-time 0' prio=1 status=undef |- 0:0:62:0 sdbi 67:192 undef ready running `- 9:0:62:0 sdfy 131:64 undef ready running Oct 30 12:11:39 | sdbj: No SAS end device for 'end_device-0:4' Oct 30 12:11:39 | sdfz: No SAS end device for 'end_device-9:4' create: 35000cca25315aac4 undef HGST,HUH721212AL5200 size=11T features='0' hwhandler='0' wp=undef `-+- policy='service-time 0' prio=1 status=undef |- 0:0:63:0 sdbj 67:208 undef ready running `- 9:0:63:0 sdfz 131:80 undef ready running ... 
[root@zfs2 ~]# time zpool import home4 cannot mount 'home4/rolland': Cannot allocate memory real 0m51.849s user 0m0.494s sys 0m4.070s [root@zfs2 ~]# df -h | grep rolland [root@zfs2 ~]# zfs mount -a [root@zfs2 ~]# df -h | grep rolland home4/rolland 293T 692M 293T 1% /home4/rolland [root@zfs2 ~]# multipath [root@zfs2 ~]# zpool list home4 NAME SIZE ALLOC FREE CKPOINT EXPANDSZ FRAG CAP DEDUP HEALTH ALTROOT home4 662T 252T 410T - - 0% 38% 1.00x ONLINE - [root@zfs2 ~]# zpool status | head pool: home4 state: ONLINE scan: resilvered 182M in 00:56:59 with 0 errors on Sat Oct 30 10:02:30 2021 config: NAME STATE READ WRITE CKSUM home4 ONLINE 0 0 0 raidz3-0 ONLINE 0 0 0 wwn-0x5000cca253077224 ONLINE 0 0 0 wwn-0x5000cca253077640 ONLINE 0 0 0 [root@zfs2 ~]# multipath -ll | grep -A5 cca253077224 35000cca253077224 dm-92 HGST,HUH721212AL5200 size=11T features='0' hwhandler='0' wp=rw `-+- policy='service-time 0' prio=1 status=active |- 0:0:98:0 sdcs 70:0 active ready running `- 9:0:98:0 sdhi 133:128 active ready running 35000cca2531e89dc dm-78 HGST,HUH721212AL5200 ``` I believe this is due to a typo in, ``` [root@zfs2 ~]# rpm -qf /usr/lib/systemd/system/zfs-import-cache.service zfs-2.1.1-1.el8.x86_64 [root@zfs2 ~]# cat /usr/lib/systemd/system/zfs-import-cache.service [Unit] Description=Import ZFS pools by cache file Documentation=man:zpool(8) DefaultDependencies=no Requires=systemd-udev-settle.service After=systemd-udev-settle.service After=cryptsetup.target After=multipathd.target After=systemd-remount-fs.service Before=zfs-import.target ConditionFileNotEmpty=/etc/zfs/zpool.cache ConditionPathIsDirectory=/sys/module/zfs [Service] Type=oneshot RemainAfterExit=yes ExecStart=/sbin/zpool import -c /etc/zfs/zpool.cache -aN $ZPOOL_IMPORT_OPTS [Install] WantedBy=zfs-import.target ``` If I change `After=multipathd.target` to `After=multipathd.service` there was no problem after the next reboot. 
In particular, I see multipath maps for all of the zpool devices imported at boot, and expected relative service timestamps, ``` [root@zfs2 ~]# systemctl status multipathd ● multipathd.service - Device-Mapper Multipath Device Controller Loaded: loaded (/usr/lib/systemd/system/multipathd.service; enabled; vendor preset: enabled) Active: active (running) since Sat 2021-10-30 12:30:12 PDT; 35s ago Process: 9578 ExecStartPre=/sbin/multipath -A (code=exited, status=0/SUCCESS) Process: 9571 ExecStartPre=/sbin/modprobe -a scsi_dh_alua scsi_dh_emc scsi_dh_rdac dm-multipath (code=exit> Main PID: 9580 (multipathd) Status: "up" Tasks: 7 Memory: 208.8M CGroup: /system.slice/multipathd.service └─9580 /sbin/multipathd -d -s Oct 30 12:30:12 zfs2 multipathd[9580]: sddn: No SAS end device for 'end_device-0:5' Oct 30 12:30:12 zfs2 multipathd[9580]: sdid: No SAS end device for 'end_device-9:5' Oct 30 12:30:12 zfs2 multipathd[9580]: 35000cca2b003dc68: load table [0 23437770752 multipath 0 0 1 1 servic> Oct 30 12:30:12 zfs2 multipathd[9580]: sddo: No SAS end device for 'end_device-0:5' Oct 30 12:30:12 zfs2 multipathd[9580]: sdie: No SAS end device for 'end_device-9:5' Oct 30 12:30:12 zfs2 multipathd[9580]: 35000cca25318c0cc: load table [0 23437770752 multipath 0 0 1 1 servic> Oct 30 12:30:12 zfs2 multipathd[9580]: sddp: No SAS end device for 'end_device-0:5' Oct 30 12:30:12 zfs2 multipathd[9580]: sdif: No SAS end device for 'end_device-9:5' Oct 30 12:30:12 zfs2 multipathd[9580]: 35000cca2530e04d8: load table [0 23437770752 multipath 0 0 1 1 servic> Oct 30 12:30:12 zfs2 systemd[1]: Started Device-Mapper Multipath Device Controller. 
[root@zfs2 ~]# systemctl status zfs-import-cache.service ● zfs-import-cache.service - Import ZFS pools by cache file Loaded: loaded (/usr/lib/systemd/system/zfs-import-cache.service; enabled; vendor preset: enabled) Active: active (exited) since Sat 2021-10-30 12:30:15 PDT; 51s ago Docs: man:zpool(8) Process: 11224 ExecStart=/sbin/zpool import -c /etc/zfs/zpool.cache -aN $ZPOOL_IMPORT_OPTS (code=exited, s> Main PID: 11224 (code=exited, status=0/SUCCESS) Tasks: 0 (limit: 3355442) Memory: 0B CGroup: /system.slice/zfs-import-cache.service Oct 30 12:30:12 zfs2 systemd[1]: Starting Import ZFS pools by cache file... Oct 30 12:30:15 zfs2 systemd[1]: Started Import ZFS pools by cache file. ```
defect
systemd race condition at boot between zfs import cache and multipathd system information type version name distribution name rocky linux distribution version kernel version architecture openzfs version describe the problem you re observing multipath error creating map for devices in imported pool describe how to reproduce the problem reboot system with zfs import cache service enabled and problem fixed by editing the systemd unit file for that service include any warning errors backtraces from the system logs multipath head oct sdbi no sas end device for end device oct sdfy no sas end device for end device oct ignoring map oct sdbj no sas end device for end device oct sdfz no sas end device for end device oct ignoring map oct sdbs no sas end device for end device oct sdgi no sas end device for end device oct ignoring map oct sdbt no sas end device for end device which i believe is due to those devices being part of an imported zpool e g zpool status grep wwn online this appears to be a race condition at boot time if zfs import cache service completes before multipathd service for example node the earlier timestamp for zfs import cache service relative to multipathd service systemctl status zfs import cache service ● zfs import cache service import zfs pools by cache file loaded loaded usr lib systemd system zfs import cache service enabled vendor preset enabled active active exited since sat pdt ago docs man zpool process execstart sbin zpool import c etc zfs zpool cache an zpool import opts code exited st main pid code exited status success tasks limit memory cgroup system slice zfs import cache service oct systemd starting import zfs pools by cache file oct systemd started import zfs pools by cache file multipath ll grep systemctl status multipathd ● multipathd service device mapper multipath device controller loaded loaded usr lib systemd system multipathd service enabled vendor preset enabled active active running since sat pdt ago process execstartpre sbin 
multipath a code exited status success process execstartpre sbin modprobe a scsi dh alua scsi dh emc scsi dh rdac dm multipath code exit main pid multipathd status up tasks memory cgroup system slice multipathd service └─ sbin multipathd d s oct multipathd sddn no sas end device for end device oct multipathd sdid no sas end device for end device oct multipathd ignoring map oct multipathd sddo no sas end device for end device oct multipathd sdie no sas end device for end device oct multipathd ignoring map oct multipathd sddp no sas end device for end device oct multipathd sdif no sas end device for end device oct multipathd ignoring map oct systemd started device mapper multipath device controller if i manually export the pool and run multipath before re importing the pool then the zpool devices all have multipath maps though there is a separate issue with cannot mount cannot allocate memory which is very strange on this server and a subsequent mount succeeds time zpool export real user sys multipath oct sdbi no sas end device for end device oct sdfy no sas end device for end device create undef hgst size features hwhandler wp undef policy service time prio status undef sdbi undef ready running sdfy undef ready running oct sdbj no sas end device for end device oct sdfz no sas end device for end device create undef hgst size features hwhandler wp undef policy service time prio status undef sdbj undef ready running sdfz undef ready running time zpool import cannot mount rolland cannot allocate memory real user sys df h grep rolland zfs mount a df h grep rolland rolland rolland multipath zpool list name size alloc free ckpoint expandsz frag cap dedup health altroot online zpool status head pool state online scan resilvered in with errors on sat oct config name state read write cksum online online wwn online wwn online multipath ll grep dm hgst size features hwhandler wp rw policy service time prio status active sdcs active ready running sdhi active ready running dm 
hgst i believe this is due to a typo in rpm qf usr lib systemd system zfs import cache service zfs cat usr lib systemd system zfs import cache service description import zfs pools by cache file documentation man zpool defaultdependencies no requires systemd udev settle service after systemd udev settle service after cryptsetup target after multipathd target after systemd remount fs service before zfs import target conditionfilenotempty etc zfs zpool cache conditionpathisdirectory sys module zfs type oneshot remainafterexit yes execstart sbin zpool import c etc zfs zpool cache an zpool import opts wantedby zfs import target if i change after multipathd target to after multipathd service there was no problem after the next reboot in particular i see multipath maps for all of the zpool devices imported at boot and expected relative service timestamps systemctl status multipathd ● multipathd service device mapper multipath device controller loaded loaded usr lib systemd system multipathd service enabled vendor preset enabled active active running since sat pdt ago process execstartpre sbin multipath a code exited status success process execstartpre sbin modprobe a scsi dh alua scsi dh emc scsi dh rdac dm multipath code exit main pid multipathd status up tasks memory cgroup system slice multipathd service └─ sbin multipathd d s oct multipathd sddn no sas end device for end device oct multipathd sdid no sas end device for end device oct multipathd load table multipath servic oct multipathd sddo no sas end device for end device oct multipathd sdie no sas end device for end device oct multipathd load table multipath servic oct multipathd sddp no sas end device for end device oct multipathd sdif no sas end device for end device oct multipathd load table multipath servic oct systemd started device mapper multipath device controller systemctl status zfs import cache service ● zfs import cache service import zfs pools by cache file loaded loaded usr lib systemd system zfs 
import cache service enabled vendor preset enabled active active exited since sat pdt ago docs man zpool process execstart sbin zpool import c etc zfs zpool cache an zpool import opts code exited s main pid code exited status success tasks limit memory cgroup system slice zfs import cache service oct systemd starting import zfs pools by cache file oct systemd started import zfs pools by cache file
1
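The unit-ordering fix described in the record above (`After=multipathd.target` → `After=multipathd.service`) can also be applied without editing the packaged unit file, via a systemd drop-in that survives package upgrades. The drop-in path below is the standard override location; the `Wants=` line is an optional addition of this sketch, not part of the original report:

```ini
# /etc/systemd/system/zfs-import-cache.service.d/10-multipathd.conf
# Drop-in override: order the cache import after the multipathd *service*
# rather than the (apparently non-existent) multipathd.target, so multipath
# maps are assembled before `zpool import` claims the member devices.
[Unit]
After=multipathd.service
Wants=multipathd.service
```

After creating the drop-in, reload with `systemctl daemon-reload` and verify the ordering with `systemctl list-dependencies --after zfs-import-cache.service`.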
34,427
7,451,338,464
IssuesEvent
2018-03-29 02:20:51
kerdokullamae/test_koik_issued
https://api.github.com/repos/kerdokullamae/test_koik_issued
closed
Akti salvestamine ebaõnnestub
P: highest R: fixed T: defect
**Reported by sven syld on 25 Mar 2013 13:31 UTC** '''Object''' [Akti haldus](http://dev.dira.teepub/et/act/edit/4/) '''Description''' Peale salvestamist tekib viga: Ups, midagi läks valesti! Unable to execute INSERT statement [INTO "dira"."search_index_sync_queue" ("id", "object_type_id", "object_id", "action_id") VALUES (:p0, :p1, :p2, :p3)](INSERT) [SQLSTATE[23505](wrapped:): Unique violation: 7 ERROR: duplicate key value violates unique constraint "search_index_sync_queue_uq" DETAIL: Key (object_type_id, object_id)=(ACT, 4) already exists.] Kui esimese salvestamisega ei teki, siis proovi uuesti. Probleem on selles, et search_index_sync_queue ei tehta tühjaks. '''Todo''' Parandada
1.0
Akti salvestamine ebaõnnestub - **Reported by sven syld on 25 Mar 2013 13:31 UTC** '''Object''' [Akti haldus](http://dev.dira.teepub/et/act/edit/4/) '''Description''' Peale salvestamist tekib viga: Ups, midagi läks valesti! Unable to execute INSERT statement [INTO "dira"."search_index_sync_queue" ("id", "object_type_id", "object_id", "action_id") VALUES (:p0, :p1, :p2, :p3)](INSERT) [SQLSTATE[23505](wrapped:): Unique violation: 7 ERROR: duplicate key value violates unique constraint "search_index_sync_queue_uq" DETAIL: Key (object_type_id, object_id)=(ACT, 4) already exists.] Kui esimese salvestamisega ei teki, siis proovi uuesti. Probleem on selles, et search_index_sync_queue ei tehta tühjaks. '''Todo''' Parandada
defect
akti salvestamine ebaõnnestub reported by sven syld on mar utc object description peale salvestamist tekib viga ups midagi läks valesti unable to execute insert statement insert wrapped unique violation error duplicate key value violates unique constraint search index sync queue uq detail key object type id object id act already exists kui esimese salvestamisega ei teki siis proovi uuesti probleem on selles et search index sync queue ei tehta tühjaks todo parandada
1
4,024
18,779,387,068
IssuesEvent
2021-11-08 03:18:22
MDAnalysis/mdanalysis
https://api.github.com/repos/MDAnalysis/mdanalysis
opened
MAINT: updating Cython imports
maintainability
Related to this for newer versions of Cython/NumPy: https://github.com/scipy/scipy/pull/14813 There are a few places we could probably add in `np.import_array()` just to be safe later on. Compare the output of `git grep -E -i 'cimport numpy'` and `git grep -E -i 'import_array'` to see the spots where we might add it in. I don't think it is urgent though, and to be fair MDA already does the right thing in a few places based on those greps. Probably just a useful guard to add at some point.
True
MAINT: updating Cython imports - Related to this for newer versions of Cython/NumPy: https://github.com/scipy/scipy/pull/14813 There are a few places we could probably add in `np.import_array()` just to be safe later on. Compare the output of `git grep -E -i 'cimport numpy'` and `git grep -E -i 'import_array'` to see the spots where we might add it in. I don't think it is urgent though, and to be fair MDA already does the right thing in a few places based on those greps. Probably just a useful guard to add at some point.
non_defect
maint updating cython imports related to this for newer versions of cython numpy there are a few places we could probably add in np import array just to be safe later on compare the output of git grep e i cimport numpy and git grep e i import array to see the spots where we might add it in i don t think it is urgent though and to be fair mda already does the right thing in a few places based on those greps probably just a useful guard to add at some point
0
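The `git grep` comparison suggested in the MDAnalysis record above can be automated. The sketch below is a standalone helper (not part of MDAnalysis) that flags Cython sources which `cimport numpy` but never call `import_array()` — the files where the guard might be added:

```python
import re
from pathlib import Path

def missing_import_array(root):
    """Return .pyx files under `root` that `cimport numpy` but never
    call `import_array()`.

    Mirrors the two `git grep` commands from the issue: a file matching
    the first pattern but not the second is a candidate for the guard.
    """
    cimport_re = re.compile(r"cimport\s+numpy", re.IGNORECASE)
    guard_re = re.compile(r"import_array\s*\(", re.IGNORECASE)
    flagged = []
    for path in Path(root).rglob("*.pyx"):
        text = path.read_text(errors="ignore")
        if cimport_re.search(text) and not guard_re.search(text):
            flagged.append(path)
    return sorted(flagged)
```

This only scans `.pyx` files; `.pxd` headers could be added to the glob if the audit should cover them too.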
10,641
2,622,178,415
IssuesEvent
2015-03-04 00:17:45
byzhang/leveldb
https://api.github.com/repos/byzhang/leveldb
opened
Error with android port
auto-migrated Priority-Medium Type-Defect
``` What steps will reproduce the problem? 1. compile for armV6 2. In application.mk set "APP_ABI := armeabi-v7a armeabi" 3. launch build-ndk What is the expected output? What do you see instead? In file included from ./port/port.h:18, from ./db/filename.h:14, from jni/.././db/builder.cc:7: ./port/port_android.h:81: error: expected initializer before 'ATTRIBUTE_WEAK' ./port/port_android.h: In member function 'void leveldb::port::AtomicPointer::MemoryBarrier() const': ./port/port_android.h:95: error: 'pLinuxKernelMemoryBarrier' was not declared in this scope make: *** [obj/local/armeabi/objs-debug/leveldb/__/./db/builder.o] Error 1 What version of the product are you using? On what operating system? I'm on windows 7 with the last NDK Please provide any additional information below. It's works with just "APP_ABI := armeabi-v7a" ``` Original issue reported on code.google.com by `bruno.ro...@gmail.com` on 6 Jul 2012 at 8:22
1.0
Error with android port - ``` What steps will reproduce the problem? 1. compile for armV6 2. In application.mk set "APP_ABI := armeabi-v7a armeabi" 3. launch build-ndk What is the expected output? What do you see instead? In file included from ./port/port.h:18, from ./db/filename.h:14, from jni/.././db/builder.cc:7: ./port/port_android.h:81: error: expected initializer before 'ATTRIBUTE_WEAK' ./port/port_android.h: In member function 'void leveldb::port::AtomicPointer::MemoryBarrier() const': ./port/port_android.h:95: error: 'pLinuxKernelMemoryBarrier' was not declared in this scope make: *** [obj/local/armeabi/objs-debug/leveldb/__/./db/builder.o] Error 1 What version of the product are you using? On what operating system? I'm on windows 7 with the last NDK Please provide any additional information below. It's works with just "APP_ABI := armeabi-v7a" ``` Original issue reported on code.google.com by `bruno.ro...@gmail.com` on 6 Jul 2012 at 8:22
defect
error with android port what steps will reproduce the problem compile for in application mk set app abi armeabi armeabi launch build ndk what is the expected output what do you see instead in file included from port port h from db filename h from jni db builder cc port port android h error expected initializer before attribute weak port port android h in member function void leveldb port atomicpointer memorybarrier const port port android h error plinuxkernelmemorybarrier was not declared in this scope make error what version of the product are you using on what operating system i m on windows with the last ndk please provide any additional information below it s works with just app abi armeabi original issue reported on code google com by bruno ro gmail com on jul at
1
3,755
2,610,068,241
IssuesEvent
2015-02-26 18:20:02
chrsmith/jsjsj122
https://api.github.com/repos/chrsmith/jsjsj122
opened
路桥看前列腺炎哪家效果最好
auto-migrated Priority-Medium Type-Defect
``` 路桥看前列腺炎哪家效果最好【台州五洲生殖医院】24小时健 康咨询热线:0576-88066933-(扣扣800080609)-(微信号tzwzszyy)医院地址: 台州市椒江区枫南路229号(枫南大转盘旁)乘车线路:乘坐104� ��108、118、198及椒江一金清公交车直达枫南小区,乘坐107、105 、109、112、901、 902公交车到星星广场下车,步行即可到院。 诊疗项目:阳痿,早泄,前列腺炎,前列腺增生,龟头炎,�� �精,无精。包皮包茎,精索静脉曲张,淋病等。 台州五洲生殖医院是台州最大的男科医院,权威专家在线免�� �咨询,拥有专业完善的男科检查治疗设备,严格按照国家标� ��收费。尖端医疗设备,与世界同步。权威专家,成就专业典 范。人性化服务,一切以患者为中心。 看男科就选台州五洲生殖医院,专业男科为男人。 ``` ----- Original issue reported on code.google.com by `poweragr...@gmail.com` on 30 May 2014 at 9:01
1.0
路桥看前列腺炎哪家效果最好 - ``` 路桥看前列腺炎哪家效果最好【台州五洲生殖医院】24小时健 康咨询热线:0576-88066933-(扣扣800080609)-(微信号tzwzszyy)医院地址: 台州市椒江区枫南路229号(枫南大转盘旁)乘车线路:乘坐104� ��108、118、198及椒江一金清公交车直达枫南小区,乘坐107、105 、109、112、901、 902公交车到星星广场下车,步行即可到院。 诊疗项目:阳痿,早泄,前列腺炎,前列腺增生,龟头炎,�� �精,无精。包皮包茎,精索静脉曲张,淋病等。 台州五洲生殖医院是台州最大的男科医院,权威专家在线免�� �咨询,拥有专业完善的男科检查治疗设备,严格按照国家标� ��收费。尖端医疗设备,与世界同步。权威专家,成就专业典 范。人性化服务,一切以患者为中心。 看男科就选台州五洲生殖医院,专业男科为男人。 ``` ----- Original issue reported on code.google.com by `poweragr...@gmail.com` on 30 May 2014 at 9:01
defect
路桥看前列腺炎哪家效果最好 路桥看前列腺炎哪家效果最好【台州五洲生殖医院】 康咨询热线 微信号tzwzszyy 医院地址 (枫南大转盘旁)乘车线路 � �� 、 、 , 、 、 、 、 、 ,步行即可到院。 诊疗项目:阳痿,早泄,前列腺炎,前列腺增生,龟头炎,�� �精,无精。包皮包茎,精索静脉曲张,淋病等。 台州五洲生殖医院是台州最大的男科医院,权威专家在线免�� �咨询,拥有专业完善的男科检查治疗设备,严格按照国家标� ��收费。尖端医疗设备,与世界同步。权威专家,成就专业典 范。人性化服务,一切以患者为中心。 看男科就选台州五洲生殖医院,专业男科为男人。 original issue reported on code google com by poweragr gmail com on may at
1
39,872
9,703,217,863
IssuesEvent
2019-05-27 10:45:50
jOOQ/jOOQ
https://api.github.com/repos/jOOQ/jOOQ
closed
Serialization fails when table contains SQLDataType.INSTANT columns or when using Converter.ofNullable() converters
C: Functionality E: All Editions P: Medium R: Fixed T: Defect
The `DefaultInstantBinding` class contains a lambda converter: ```java private static final Converter<OffsetDateTime, Instant> CONVERTER = Converter.ofNullable( OffsetDateTime.class, Instant.class, o -> o.toInstant(), i -> OffsetDateTime.ofInstant(i, ZoneOffset.UTC) ); ``` Lambdas by default are not serializable. When serializing a column that has such a data type, we get an exception: ``` java.io.NotSerializableException: org.jooq.Converter$$Lambda$111/0x000000080029ec40 at java.base/java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1185) at java.base/java.io.ObjectOutputStream.defaultWriteFields(ObjectOutputStream.java:1553) at java.base/java.io.ObjectOutputStream.writeSerialData(ObjectOutputStream.java:1510) at java.base/java.io.ObjectOutputStream.writeOrdinaryObject(ObjectOutputStream.java:1433) at java.base/java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1179) at java.base/java.io.ObjectOutputStream.writeArray(ObjectOutputStream.java:1379) at java.base/java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1175) at java.base/java.io.ObjectOutputStream.defaultWriteFields(ObjectOutputStream.java:1553) at java.base/java.io.ObjectOutputStream.writeSerialData(ObjectOutputStream.java:1510) at java.base/java.io.ObjectOutputStream.writeOrdinaryObject(ObjectOutputStream.java:1433) at java.base/java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1179) at java.base/java.io.ObjectOutputStream.defaultWriteFields(ObjectOutputStream.java:1553) at java.base/java.io.ObjectOutputStream.writeSerialData(ObjectOutputStream.java:1510) at java.base/java.io.ObjectOutputStream.writeOrdinaryObject(ObjectOutputStream.java:1433) at java.base/java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1179) at java.base/java.io.ObjectOutputStream.defaultWriteFields(ObjectOutputStream.java:1553) at java.base/java.io.ObjectOutputStream.writeSerialData(ObjectOutputStream.java:1510) at 
java.base/java.io.ObjectOutputStream.writeOrdinaryObject(ObjectOutputStream.java:1433) at java.base/java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1179) at java.base/java.io.ObjectOutputStream.defaultWriteFields(ObjectOutputStream.java:1553) at java.base/java.io.ObjectOutputStream.writeSerialData(ObjectOutputStream.java:1510) at java.base/java.io.ObjectOutputStream.writeOrdinaryObject(ObjectOutputStream.java:1433) at java.base/java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1179) at java.base/java.io.ObjectOutputStream.defaultWriteFields(ObjectOutputStream.java:1553) at java.base/java.io.ObjectOutputStream.writeSerialData(ObjectOutputStream.java:1510) at java.base/java.io.ObjectOutputStream.writeOrdinaryObject(ObjectOutputStream.java:1433) at java.base/java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1179) at java.base/java.io.ObjectOutputStream.writeArray(ObjectOutputStream.java:1379) at java.base/java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1175) at java.base/java.io.ObjectOutputStream.defaultWriteFields(ObjectOutputStream.java:1553) at java.base/java.io.ObjectOutputStream.writeSerialData(ObjectOutputStream.java:1510) at java.base/java.io.ObjectOutputStream.writeOrdinaryObject(ObjectOutputStream.java:1433) at java.base/java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1179) at java.base/java.io.ObjectOutputStream.defaultWriteFields(ObjectOutputStream.java:1553) at java.base/java.io.ObjectOutputStream.writeSerialData(ObjectOutputStream.java:1510) at java.base/java.io.ObjectOutputStream.writeOrdinaryObject(ObjectOutputStream.java:1433) at java.base/java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1179) at java.base/java.io.ObjectOutputStream.defaultWriteFields(ObjectOutputStream.java:1553) at java.base/java.io.ObjectOutputStream.writeSerialData(ObjectOutputStream.java:1510) at java.base/java.io.ObjectOutputStream.writeOrdinaryObject(ObjectOutputStream.java:1433) at 
java.base/java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1179) at java.base/java.io.ObjectOutputStream.defaultWriteFields(ObjectOutputStream.java:1553) at java.base/java.io.ObjectOutputStream.writeSerialData(ObjectOutputStream.java:1510) at java.base/java.io.ObjectOutputStream.writeOrdinaryObject(ObjectOutputStream.java:1433) at java.base/java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1179) at java.base/java.io.ObjectOutputStream.defaultWriteFields(ObjectOutputStream.java:1553) at java.base/java.io.ObjectOutputStream.writeSerialData(ObjectOutputStream.java:1510) at java.base/java.io.ObjectOutputStream.writeOrdinaryObject(ObjectOutputStream.java:1433) at java.base/java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1179) at java.base/java.io.ObjectOutputStream.writeObject(ObjectOutputStream.java:349) at java.base/java.util.ArrayList.writeObject(ArrayList.java:896) at java.base/jdk.internal.reflect.GeneratedMethodAccessor1.invoke(Unknown Source) at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.base/java.lang.reflect.Method.invoke(Method.java:566) at java.base/java.io.ObjectStreamClass.invokeWriteObject(ObjectStreamClass.java:1130) at java.base/java.io.ObjectOutputStream.writeSerialData(ObjectOutputStream.java:1497) at java.base/java.io.ObjectOutputStream.writeOrdinaryObject(ObjectOutputStream.java:1433) at java.base/java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1179) at java.base/java.io.ObjectOutputStream.defaultWriteFields(ObjectOutputStream.java:1553) at java.base/java.io.ObjectOutputStream.writeSerialData(ObjectOutputStream.java:1510) at java.base/java.io.ObjectOutputStream.writeOrdinaryObject(ObjectOutputStream.java:1433) at java.base/java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1179) at java.base/java.io.ObjectOutputStream.defaultWriteFields(ObjectOutputStream.java:1553) at 
java.base/java.io.ObjectOutputStream.writeSerialData(ObjectOutputStream.java:1510) at java.base/java.io.ObjectOutputStream.writeOrdinaryObject(ObjectOutputStream.java:1433) at java.base/java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1179) at java.base/java.io.ObjectOutputStream.defaultWriteFields(ObjectOutputStream.java:1553) at java.base/java.io.ObjectOutputStream.writeSerialData(ObjectOutputStream.java:1510) at java.base/java.io.ObjectOutputStream.writeOrdinaryObject(ObjectOutputStream.java:1433) at java.base/java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1179) at java.base/java.io.ObjectOutputStream.writeObject(ObjectOutputStream.java:349) at org.jooq.test.all.testcases.SerializationTests.runSerialisation(SerializationTests.java:176) at org.jooq.test.all.testcases.SerializationTests.testSerialisation(SerializationTests.java:148) at org.jooq.test.jOOQAbstractTest.testSerialisation(jOOQAbstractTest.java:4345) ``` It should be easy to make these lambdas serializable
1.0
Serialization fails when table contains SQLDataType.INSTANT columns or when using Converter.ofNullable() converters - The `DefaultInstantBinding` class contains a lambda converter: ```java private static final Converter<OffsetDateTime, Instant> CONVERTER = Converter.ofNullable( OffsetDateTime.class, Instant.class, o -> o.toInstant(), i -> OffsetDateTime.ofInstant(i, ZoneOffset.UTC) ); ``` Lambdas by default are not serializable. When serializing a column that has such a data type, we get an exception: ``` java.io.NotSerializableException: org.jooq.Converter$$Lambda$111/0x000000080029ec40 at java.base/java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1185) at java.base/java.io.ObjectOutputStream.defaultWriteFields(ObjectOutputStream.java:1553) at java.base/java.io.ObjectOutputStream.writeSerialData(ObjectOutputStream.java:1510) at java.base/java.io.ObjectOutputStream.writeOrdinaryObject(ObjectOutputStream.java:1433) at java.base/java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1179) at java.base/java.io.ObjectOutputStream.writeArray(ObjectOutputStream.java:1379) at java.base/java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1175) at java.base/java.io.ObjectOutputStream.defaultWriteFields(ObjectOutputStream.java:1553) at java.base/java.io.ObjectOutputStream.writeSerialData(ObjectOutputStream.java:1510) at java.base/java.io.ObjectOutputStream.writeOrdinaryObject(ObjectOutputStream.java:1433) at java.base/java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1179) at java.base/java.io.ObjectOutputStream.defaultWriteFields(ObjectOutputStream.java:1553) at java.base/java.io.ObjectOutputStream.writeSerialData(ObjectOutputStream.java:1510) at java.base/java.io.ObjectOutputStream.writeOrdinaryObject(ObjectOutputStream.java:1433) at java.base/java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1179) at java.base/java.io.ObjectOutputStream.defaultWriteFields(ObjectOutputStream.java:1553) at 
java.base/java.io.ObjectOutputStream.writeSerialData(ObjectOutputStream.java:1510) at java.base/java.io.ObjectOutputStream.writeOrdinaryObject(ObjectOutputStream.java:1433) at java.base/java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1179) at java.base/java.io.ObjectOutputStream.defaultWriteFields(ObjectOutputStream.java:1553) at java.base/java.io.ObjectOutputStream.writeSerialData(ObjectOutputStream.java:1510) at java.base/java.io.ObjectOutputStream.writeOrdinaryObject(ObjectOutputStream.java:1433) at java.base/java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1179) at java.base/java.io.ObjectOutputStream.defaultWriteFields(ObjectOutputStream.java:1553) at java.base/java.io.ObjectOutputStream.writeSerialData(ObjectOutputStream.java:1510) at java.base/java.io.ObjectOutputStream.writeOrdinaryObject(ObjectOutputStream.java:1433) at java.base/java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1179) at java.base/java.io.ObjectOutputStream.writeArray(ObjectOutputStream.java:1379) at java.base/java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1175) at java.base/java.io.ObjectOutputStream.defaultWriteFields(ObjectOutputStream.java:1553) at java.base/java.io.ObjectOutputStream.writeSerialData(ObjectOutputStream.java:1510) at java.base/java.io.ObjectOutputStream.writeOrdinaryObject(ObjectOutputStream.java:1433) at java.base/java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1179) at java.base/java.io.ObjectOutputStream.defaultWriteFields(ObjectOutputStream.java:1553) at java.base/java.io.ObjectOutputStream.writeSerialData(ObjectOutputStream.java:1510) at java.base/java.io.ObjectOutputStream.writeOrdinaryObject(ObjectOutputStream.java:1433) at java.base/java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1179) at java.base/java.io.ObjectOutputStream.defaultWriteFields(ObjectOutputStream.java:1553) at java.base/java.io.ObjectOutputStream.writeSerialData(ObjectOutputStream.java:1510) at 
java.base/java.io.ObjectOutputStream.writeOrdinaryObject(ObjectOutputStream.java:1433) at java.base/java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1179) at java.base/java.io.ObjectOutputStream.defaultWriteFields(ObjectOutputStream.java:1553) at java.base/java.io.ObjectOutputStream.writeSerialData(ObjectOutputStream.java:1510) at java.base/java.io.ObjectOutputStream.writeOrdinaryObject(ObjectOutputStream.java:1433) at java.base/java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1179) at java.base/java.io.ObjectOutputStream.defaultWriteFields(ObjectOutputStream.java:1553) at java.base/java.io.ObjectOutputStream.writeSerialData(ObjectOutputStream.java:1510) at java.base/java.io.ObjectOutputStream.writeOrdinaryObject(ObjectOutputStream.java:1433) at java.base/java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1179) at java.base/java.io.ObjectOutputStream.writeObject(ObjectOutputStream.java:349) at java.base/java.util.ArrayList.writeObject(ArrayList.java:896) at java.base/jdk.internal.reflect.GeneratedMethodAccessor1.invoke(Unknown Source) at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.base/java.lang.reflect.Method.invoke(Method.java:566) at java.base/java.io.ObjectStreamClass.invokeWriteObject(ObjectStreamClass.java:1130) at java.base/java.io.ObjectOutputStream.writeSerialData(ObjectOutputStream.java:1497) at java.base/java.io.ObjectOutputStream.writeOrdinaryObject(ObjectOutputStream.java:1433) at java.base/java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1179) at java.base/java.io.ObjectOutputStream.defaultWriteFields(ObjectOutputStream.java:1553) at java.base/java.io.ObjectOutputStream.writeSerialData(ObjectOutputStream.java:1510) at java.base/java.io.ObjectOutputStream.writeOrdinaryObject(ObjectOutputStream.java:1433) at java.base/java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1179) at 
java.base/java.io.ObjectOutputStream.defaultWriteFields(ObjectOutputStream.java:1553) at java.base/java.io.ObjectOutputStream.writeSerialData(ObjectOutputStream.java:1510) at java.base/java.io.ObjectOutputStream.writeOrdinaryObject(ObjectOutputStream.java:1433) at java.base/java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1179) at java.base/java.io.ObjectOutputStream.defaultWriteFields(ObjectOutputStream.java:1553) at java.base/java.io.ObjectOutputStream.writeSerialData(ObjectOutputStream.java:1510) at java.base/java.io.ObjectOutputStream.writeOrdinaryObject(ObjectOutputStream.java:1433) at java.base/java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1179) at java.base/java.io.ObjectOutputStream.writeObject(ObjectOutputStream.java:349) at org.jooq.test.all.testcases.SerializationTests.runSerialisation(SerializationTests.java:176) at org.jooq.test.all.testcases.SerializationTests.testSerialisation(SerializationTests.java:148) at org.jooq.test.jOOQAbstractTest.testSerialisation(jOOQAbstractTest.java:4345) ``` It should be easy to make these lambdas serializable
defect
serialization fails when table contains sqldatatype instant columns or when using converter ofnullable converters the defaultinstantbinding class contains a lambda converter java private static final converter converter converter ofnullable offsetdatetime class instant class o o toinstant i offsetdatetime ofinstant i zoneoffset utc lambdas by default are not serializable when serializing a column that has such a data type we get an exception java io notserializableexception org jooq converter lambda at java base java io objectoutputstream objectoutputstream java at java base java io objectoutputstream defaultwritefields objectoutputstream java at java base java io objectoutputstream writeserialdata objectoutputstream java at java base java io objectoutputstream writeordinaryobject objectoutputstream java at java base java io objectoutputstream objectoutputstream java at java base java io objectoutputstream writearray objectoutputstream java at java base java io objectoutputstream objectoutputstream java at java base java io objectoutputstream defaultwritefields objectoutputstream java at java base java io objectoutputstream writeserialdata objectoutputstream java at java base java io objectoutputstream writeordinaryobject objectoutputstream java at java base java io objectoutputstream objectoutputstream java at java base java io objectoutputstream defaultwritefields objectoutputstream java at java base java io objectoutputstream writeserialdata objectoutputstream java at java base java io objectoutputstream writeordinaryobject objectoutputstream java at java base java io objectoutputstream objectoutputstream java at java base java io objectoutputstream defaultwritefields objectoutputstream java at java base java io objectoutputstream writeserialdata objectoutputstream java at java base java io objectoutputstream writeordinaryobject objectoutputstream java at java base java io objectoutputstream objectoutputstream java at java base java io objectoutputstream 
defaultwritefields objectoutputstream java at java base java io objectoutputstream writeserialdata objectoutputstream java at java base java io objectoutputstream writeordinaryobject objectoutputstream java at java base java io objectoutputstream objectoutputstream java at java base java io objectoutputstream defaultwritefields objectoutputstream java at java base java io objectoutputstream writeserialdata objectoutputstream java at java base java io objectoutputstream writeordinaryobject objectoutputstream java at java base java io objectoutputstream objectoutputstream java at java base java io objectoutputstream writearray objectoutputstream java at java base java io objectoutputstream objectoutputstream java at java base java io objectoutputstream defaultwritefields objectoutputstream java at java base java io objectoutputstream writeserialdata objectoutputstream java at java base java io objectoutputstream writeordinaryobject objectoutputstream java at java base java io objectoutputstream objectoutputstream java at java base java io objectoutputstream defaultwritefields objectoutputstream java at java base java io objectoutputstream writeserialdata objectoutputstream java at java base java io objectoutputstream writeordinaryobject objectoutputstream java at java base java io objectoutputstream objectoutputstream java at java base java io objectoutputstream defaultwritefields objectoutputstream java at java base java io objectoutputstream writeserialdata objectoutputstream java at java base java io objectoutputstream writeordinaryobject objectoutputstream java at java base java io objectoutputstream objectoutputstream java at java base java io objectoutputstream defaultwritefields objectoutputstream java at java base java io objectoutputstream writeserialdata objectoutputstream java at java base java io objectoutputstream writeordinaryobject objectoutputstream java at java base java io objectoutputstream objectoutputstream java at java base java io 
objectoutputstream defaultwritefields objectoutputstream java at java base java io objectoutputstream writeserialdata objectoutputstream java at java base java io objectoutputstream writeordinaryobject objectoutputstream java at java base java io objectoutputstream objectoutputstream java at java base java io objectoutputstream writeobject objectoutputstream java at java base java util arraylist writeobject arraylist java at java base jdk internal reflect invoke unknown source at java base jdk internal reflect delegatingmethodaccessorimpl invoke delegatingmethodaccessorimpl java at java base java lang reflect method invoke method java at java base java io objectstreamclass invokewriteobject objectstreamclass java at java base java io objectoutputstream writeserialdata objectoutputstream java at java base java io objectoutputstream writeordinaryobject objectoutputstream java at java base java io objectoutputstream objectoutputstream java at java base java io objectoutputstream defaultwritefields objectoutputstream java at java base java io objectoutputstream writeserialdata objectoutputstream java at java base java io objectoutputstream writeordinaryobject objectoutputstream java at java base java io objectoutputstream objectoutputstream java at java base java io objectoutputstream defaultwritefields objectoutputstream java at java base java io objectoutputstream writeserialdata objectoutputstream java at java base java io objectoutputstream writeordinaryobject objectoutputstream java at java base java io objectoutputstream objectoutputstream java at java base java io objectoutputstream defaultwritefields objectoutputstream java at java base java io objectoutputstream writeserialdata objectoutputstream java at java base java io objectoutputstream writeordinaryobject objectoutputstream java at java base java io objectoutputstream objectoutputstream java at java base java io objectoutputstream writeobject objectoutputstream java at org jooq test all testcases 
serializationtests runserialisation serializationtests java at org jooq test all testcases serializationtests testserialisation serializationtests java at org jooq test jooqabstracttest testserialisation jooqabstracttest java it should be easy to make these lambdas serializable
1
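The jOOQ record above ends with the suggestion that converter lambdas should be made serializable. One standard way to do that in Java is an intersection-type cast, which makes the compiler emit a serializable lambda; the sketch below (illustrative only, not jOOQ's actual fix) round-trips such a lambda through Java serialization, where a plain lambda would throw `NotSerializableException`:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;
import java.util.function.Function;

public class SerializableLambdaDemo {
    public static void main(String[] args) throws Exception {
        // The intersection cast (Function & Serializable) tells the compiler
        // to generate a lambda that supports writeReplace()/SerializedLambda.
        Function<Integer, Integer> f =
            (Function<Integer, Integer> & Serializable) x -> x + 1;

        // Serialize the lambda; a plain lambda would fail here with
        // java.io.NotSerializableException, as in the stack trace above.
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (ObjectOutputStream oos = new ObjectOutputStream(bos)) {
            oos.writeObject(f);
        }

        // Deserialize and apply the restored lambda.
        try (ObjectInputStream ois = new ObjectInputStream(
                new ByteArrayInputStream(bos.toByteArray()))) {
            @SuppressWarnings("unchecked")
            Function<Integer, Integer> g =
                (Function<Integer, Integer>) ois.readObject();
            System.out.println(g.apply(41));
        }
    }
}
```

The same pattern applies to `Converter.ofNullable(...)` arguments: if the functional interface involved extends `Serializable` (or the lambdas are cast as above), the generated converter survives serialization of records that reference it.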
31,315
6,497,657,896
IssuesEvent
2017-08-22 14:42:17
STEllAR-GROUP/hpx
https://api.github.com/repos/STEllAR-GROUP/hpx
closed
Components have to be default constructible
category: components type: defect
Here is another problem related to types without an accessible default constructor. `tests/unit/component/new_.cpp` does not compile after it's changed like this, ```c++ diff --git a/tests/unit/component/new_.cpp b/tests/unit/component/new_.cpp index 16185e1fa3..341e858dd4 100644 --- a/tests/unit/component/new_.cpp +++ b/tests/unit/component/new_.cpp @@ -15,8 +15,12 @@ /////////////////////////////////////////////////////////////////////////////// struct test_server : hpx::components::simple_component_base<test_server> { + test_server() = delete; + test_server(int i) : i(i) {} hpx::id_type call() const { return hpx::find_here(); } + int i; + HPX_DEFINE_COMPONENT_ACTION(test_server, call); }; @@ -49,7 +53,7 @@ void test_create_single_instance() // make sure created objects live on locality they are supposed to be for (hpx::id_type const& loc: hpx::find_all_localities()) { - hpx::id_type id = hpx::new_<test_server>(loc).get(); + hpx::id_type id = hpx::new_<test_server>(loc, 42).get(); HPX_TEST(hpx::async<call_action>(id).get() == loc); } @@ -60,7 +64,7 @@ void test_create_single_instance() } // make sure distribution policy is properly used - hpx::id_type id = hpx::new_<test_server>(hpx::default_layout).get(); + hpx::id_type id = hpx::new_<test_server>(hpx::default_layout, 42).get(); HPX_TEST(hpx::async<call_action>(id).get() == hpx::find_here()); test_client t2 = hpx::new_<test_client>(hpx::default_layout); @@ -68,7 +72,7 @@ void test_create_single_instance() for (hpx::id_type const& loc: hpx::find_all_localities()) { - hpx::id_type id = hpx::new_<test_server>(hpx::default_layout(loc)).get(); + hpx::id_type id = hpx::new_<test_server>(hpx::default_layout(loc), 42).get(); HPX_TEST(hpx::async<call_action>(id).get() == loc); } @@ -85,7 +89,7 @@ void test_create_multiple_instances() // make sure created objects live on locality they are supposed to be for (hpx::id_type const& loc: hpx::find_all_localities()) { - std::vector<hpx::id_type> ids = 
hpx::new_<test_server[]>(loc, 10).get(); + std::vector<hpx::id_type> ids = hpx::new_<test_server[]>(loc, 10, 42).get(); HPX_TEST_EQ(ids.size(), std::size_t(10)); for (hpx::id_type const& id: ids) @@ -96,7 +100,7 @@ void test_create_multiple_instances() for (hpx::id_type const& loc: hpx::find_all_localities()) { - std::vector<test_client> ids = hpx::new_<test_client[]>(loc, 10).get(); + std::vector<test_client> ids = hpx::new_<test_client[]>(loc, 10, 42).get(); HPX_TEST_EQ(ids.size(), std::size_t(10)); for (test_client const& c: ids) @@ -107,7 +111,7 @@ void test_create_multiple_instances() // make sure distribution policy is properly used std::vector<hpx::id_type> ids = - hpx::new_<test_server[]>(hpx::default_layout, 10).get(); + hpx::new_<test_server[]>(hpx::default_layout, 10, 42).get(); HPX_TEST_EQ(ids.size(), std::size_t(10)); for (hpx::id_type const& id: ids) { @@ -115,7 +119,7 @@ void test_create_multiple_instances() } std::vector<test_client> clients = - hpx::new_<test_client[]>(hpx::default_layout, 10).get(); + hpx::new_<test_client[]>(hpx::default_layout, 10, 42).get(); HPX_TEST_EQ(clients.size(), std::size_t(10)); for (test_client const& c: clients) { @@ -125,7 +129,7 @@ void test_create_multiple_instances() for (hpx::id_type const& loc: hpx::find_all_localities()) { std::vector<hpx::id_type> ids = - hpx::new_<test_server[]>(hpx::default_layout(loc), 10).get(); + hpx::new_<test_server[]>(hpx::default_layout(loc), 10, 42).get(); HPX_TEST_EQ(ids.size(), std::size_t(10)); for (hpx::id_type const& id: ids) @@ -137,7 +141,7 @@ void test_create_multiple_instances() for (hpx::id_type const& loc: hpx::find_all_localities()) { std::vector<test_client> ids = - hpx::new_<test_client[]>(hpx::default_layout(loc), 10).get(); + hpx::new_<test_client[]>(hpx::default_layout(loc), 10, 42).get(); HPX_TEST_EQ(ids.size(), std::size_t(10)); for (test_client const& c: ids) ``` ``` hpx.git/hpx/runtime/components/server/simple_component_base.hpp:304:64: error: use of deleted 
function ‘test_server::test_server()’ return static_cast<component_type*>(new Component()); //-V572 ^ hpx.git/tests/unit/component/new_.cpp:18:5: note: declared here test_server() = delete; ```
1.0
Components have to be default constructible - Here is another problem related to types without an accessible default constructor. `tests/unit/component/new_.cpp` does not compile after it's changed like this, ```c++ diff --git a/tests/unit/component/new_.cpp b/tests/unit/component/new_.cpp index 16185e1fa3..341e858dd4 100644 --- a/tests/unit/component/new_.cpp +++ b/tests/unit/component/new_.cpp @@ -15,8 +15,12 @@ /////////////////////////////////////////////////////////////////////////////// struct test_server : hpx::components::simple_component_base<test_server> { + test_server() = delete; + test_server(int i) : i(i) {} hpx::id_type call() const { return hpx::find_here(); } + int i; + HPX_DEFINE_COMPONENT_ACTION(test_server, call); }; @@ -49,7 +53,7 @@ void test_create_single_instance() // make sure created objects live on locality they are supposed to be for (hpx::id_type const& loc: hpx::find_all_localities()) { - hpx::id_type id = hpx::new_<test_server>(loc).get(); + hpx::id_type id = hpx::new_<test_server>(loc, 42).get(); HPX_TEST(hpx::async<call_action>(id).get() == loc); } @@ -60,7 +64,7 @@ void test_create_single_instance() } // make sure distribution policy is properly used - hpx::id_type id = hpx::new_<test_server>(hpx::default_layout).get(); + hpx::id_type id = hpx::new_<test_server>(hpx::default_layout, 42).get(); HPX_TEST(hpx::async<call_action>(id).get() == hpx::find_here()); test_client t2 = hpx::new_<test_client>(hpx::default_layout); @@ -68,7 +72,7 @@ void test_create_single_instance() for (hpx::id_type const& loc: hpx::find_all_localities()) { - hpx::id_type id = hpx::new_<test_server>(hpx::default_layout(loc)).get(); + hpx::id_type id = hpx::new_<test_server>(hpx::default_layout(loc), 42).get(); HPX_TEST(hpx::async<call_action>(id).get() == loc); } @@ -85,7 +89,7 @@ void test_create_multiple_instances() // make sure created objects live on locality they are supposed to be for (hpx::id_type const& loc: hpx::find_all_localities()) { - 
std::vector<hpx::id_type> ids = hpx::new_<test_server[]>(loc, 10).get(); + std::vector<hpx::id_type> ids = hpx::new_<test_server[]>(loc, 10, 42).get(); HPX_TEST_EQ(ids.size(), std::size_t(10)); for (hpx::id_type const& id: ids) @@ -96,7 +100,7 @@ void test_create_multiple_instances() for (hpx::id_type const& loc: hpx::find_all_localities()) { - std::vector<test_client> ids = hpx::new_<test_client[]>(loc, 10).get(); + std::vector<test_client> ids = hpx::new_<test_client[]>(loc, 10, 42).get(); HPX_TEST_EQ(ids.size(), std::size_t(10)); for (test_client const& c: ids) @@ -107,7 +111,7 @@ void test_create_multiple_instances() // make sure distribution policy is properly used std::vector<hpx::id_type> ids = - hpx::new_<test_server[]>(hpx::default_layout, 10).get(); + hpx::new_<test_server[]>(hpx::default_layout, 10, 42).get(); HPX_TEST_EQ(ids.size(), std::size_t(10)); for (hpx::id_type const& id: ids) { @@ -115,7 +119,7 @@ void test_create_multiple_instances() } std::vector<test_client> clients = - hpx::new_<test_client[]>(hpx::default_layout, 10).get(); + hpx::new_<test_client[]>(hpx::default_layout, 10, 42).get(); HPX_TEST_EQ(clients.size(), std::size_t(10)); for (test_client const& c: clients) { @@ -125,7 +129,7 @@ void test_create_multiple_instances() for (hpx::id_type const& loc: hpx::find_all_localities()) { std::vector<hpx::id_type> ids = - hpx::new_<test_server[]>(hpx::default_layout(loc), 10).get(); + hpx::new_<test_server[]>(hpx::default_layout(loc), 10, 42).get(); HPX_TEST_EQ(ids.size(), std::size_t(10)); for (hpx::id_type const& id: ids) @@ -137,7 +141,7 @@ void test_create_multiple_instances() for (hpx::id_type const& loc: hpx::find_all_localities()) { std::vector<test_client> ids = - hpx::new_<test_client[]>(hpx::default_layout(loc), 10).get(); + hpx::new_<test_client[]>(hpx::default_layout(loc), 10, 42).get(); HPX_TEST_EQ(ids.size(), std::size_t(10)); for (test_client const& c: ids) ``` ``` 
hpx.git/hpx/runtime/components/server/simple_component_base.hpp:304:64: error: use of deleted function ‘test_server::test_server()’ return static_cast<component_type*>(new Component()); //-V572 ^ hpx.git/tests/unit/component/new_.cpp:18:5: note: declared here test_server() = delete; ```
defect
components have to be default constructible here is another problem related to types without an accessible default constructor tests unit component new cpp does not compile after it s changed like this c diff git a tests unit component new cpp b tests unit component new cpp index a tests unit component new cpp b tests unit component new cpp struct test server hpx components simple component base test server delete test server int i i i hpx id type call const return hpx find here int i hpx define component action test server call void test create single instance make sure created objects live on locality they are supposed to be for hpx id type const loc hpx find all localities hpx id type id hpx new loc get hpx id type id hpx new loc get hpx test hpx async id get loc void test create single instance make sure distribution policy is properly used hpx id type id hpx new hpx default layout get hpx id type id hpx new hpx default layout get hpx test hpx async id get hpx find here test client hpx new hpx default layout void test create single instance for hpx id type const loc hpx find all localities hpx id type id hpx new hpx default layout loc get hpx id type id hpx new hpx default layout loc get hpx test hpx async id get loc void test create multiple instances make sure created objects live on locality they are supposed to be for hpx id type const loc hpx find all localities std vector ids hpx new loc get std vector ids hpx new loc get hpx test eq ids size std size t for hpx id type const id ids void test create multiple instances for hpx id type const loc hpx find all localities std vector ids hpx new loc get std vector ids hpx new loc get hpx test eq ids size std size t for test client const c ids void test create multiple instances make sure distribution policy is properly used std vector ids hpx new hpx default layout get hpx new hpx default layout get hpx test eq ids size std size t for hpx id type const id ids void test create multiple instances std vector 
clients hpx new hpx default layout get hpx new hpx default layout get hpx test eq clients size std size t for test client const c clients void test create multiple instances for hpx id type const loc hpx find all localities std vector ids hpx new hpx default layout loc get hpx new hpx default layout loc get hpx test eq ids size std size t for hpx id type const id ids void test create multiple instances for hpx id type const loc hpx find all localities std vector ids hpx new hpx default layout loc get hpx new hpx default layout loc get hpx test eq ids size std size t for test client const c ids hpx git hpx runtime components server simple component base hpp error use of deleted function ‘test server test server ’ return static cast new component hpx git tests unit component new cpp note declared here test server delete
1
267,569
28,509,106,764
IssuesEvent
2023-04-19 01:35:49
dpteam/RK3188_TABLET
https://api.github.com/repos/dpteam/RK3188_TABLET
closed
CVE-2018-17182 (High) detected in linuxv3.0.70 - autoclosed
Mend: dependency security vulnerability
## CVE-2018-17182 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linuxv3.0.70</b></p></summary> <p> <p>Development tree</p> <p>Library home page: <a href=https://git.kernel.org/pub/scm/linux/kernel/git/keithp/linux.git>https://git.kernel.org/pub/scm/linux/kernel/git/keithp/linux.git</a></p> <p>Found in HEAD commit: <a href="https://github.com/dpteam/RK3188_TABLET/commit/0c501f5a0fd72c7b2ac82904235363bd44fd8f9e">0c501f5a0fd72c7b2ac82904235363bd44fd8f9e</a></p> <p>Found in base branch: <b>master</b></p></p> </details> </p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (3)</summary> <p></p> <p> <img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/include/linux/vm_event_item.h</b> <img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/include/linux/vm_event_item.h</b> <img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/include/linux/vm_event_item.h</b> </p> </details> <p></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> An issue was discovered in the Linux kernel through 4.18.8. The vmacache_flush_all function in mm/vmacache.c mishandles sequence number overflows. An attacker can trigger a use-after-free (and possibly gain privileges) via certain thread creation, map, unmap, invalidation, and dereference operations. 
<p>Publish Date: 2018-09-19 <p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2018-17182>CVE-2018-17182</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.8</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Local - Attack Complexity: Low - Privileges Required: Low - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: High - Integrity Impact: High - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="http://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2018-17182">http://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2018-17182</a></p> <p>Release Date: 2018-09-19</p> <p>Fix Resolution: v4.19-rc4</p> </p> </details> <p></p> *** Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
True
CVE-2018-17182 (High) detected in linuxv3.0.70 - autoclosed - ## CVE-2018-17182 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linuxv3.0.70</b></p></summary> <p> <p>Development tree</p> <p>Library home page: <a href=https://git.kernel.org/pub/scm/linux/kernel/git/keithp/linux.git>https://git.kernel.org/pub/scm/linux/kernel/git/keithp/linux.git</a></p> <p>Found in HEAD commit: <a href="https://github.com/dpteam/RK3188_TABLET/commit/0c501f5a0fd72c7b2ac82904235363bd44fd8f9e">0c501f5a0fd72c7b2ac82904235363bd44fd8f9e</a></p> <p>Found in base branch: <b>master</b></p></p> </details> </p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (3)</summary> <p></p> <p> <img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/include/linux/vm_event_item.h</b> <img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/include/linux/vm_event_item.h</b> <img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/include/linux/vm_event_item.h</b> </p> </details> <p></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> An issue was discovered in the Linux kernel through 4.18.8. The vmacache_flush_all function in mm/vmacache.c mishandles sequence number overflows. An attacker can trigger a use-after-free (and possibly gain privileges) via certain thread creation, map, unmap, invalidation, and dereference operations. 
<p>Publish Date: 2018-09-19 <p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2018-17182>CVE-2018-17182</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.8</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Local - Attack Complexity: Low - Privileges Required: Low - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: High - Integrity Impact: High - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="http://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2018-17182">http://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2018-17182</a></p> <p>Release Date: 2018-09-19</p> <p>Fix Resolution: v4.19-rc4</p> </p> </details> <p></p> *** Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
non_defect
cve high detected in autoclosed cve high severity vulnerability vulnerable library development tree library home page a href found in head commit a href found in base branch master vulnerable source files include linux vm event item h include linux vm event item h include linux vm event item h vulnerability details an issue was discovered in the linux kernel through the vmacache flush all function in mm vmacache c mishandles sequence number overflows an attacker can trigger a use after free and possibly gain privileges via certain thread creation map unmap invalidation and dereference operations publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity low privileges required low user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with mend
0
66,128
20,014,207,064
IssuesEvent
2022-02-01 10:20:38
primefaces/primefaces
https://api.github.com/repos/primefaces/primefaces
closed
SelectOneRadio: HTML validation error when using required attribute
defect accessibility
**Describe the defect** When using selectOneRadio with attributes layout="grid", columns="3" and required="true", the selectOneRadio is rendered as a \<table>-element in the HTML DOM with the attribute aria-required="true". This HTML syntax causes a validation error when analyzing the site with the [Nu HTML Checker](https://validator.w3.org/nu/) as the attribute aria-required is not allowed on \<table>-elements. **Environment:** - PF Version: _10.0_ **To Reproduce** Steps to reproduce the behavior: 1. Add the example code below to your application. 2. Copy your site's HTML code into the [Nu HTML Checker](https://validator.w3.org/nu/) and start validation. 3. See validation error from Screenshot below. **Expected behavior** The attribute "aria-required" should not be added to the \<table>-element within selectOneRadios as it causes an HTML validation error and thus violates [WCAG success criterion 4.1.1](https://www.w3.org/TR/WCAG21/#parsing) for accessibility. **Example XHTML** ```html <h:form id="frmTest"> <div class="p-grid ui-fluid"> <div class="p-col-12"> <div class="card"> <h5>SelectOneRadio</h5> <p:selectOneRadio id="testRadio" columns="3" required="true" layout="grid"> <f:selectItem itemLabel="Option1" itemValue="Option1"/> <f:selectItem itemLabel="Option2" itemValue="Option2"/> <f:selectItem itemLabel="Option3" itemValue="Option3"/> <f:selectItem itemLabel="Option4" itemValue="Option4"/> <f:selectItem itemLabel="Option5" itemValue="Option5"/> <f:selectItem itemLabel="Option6" itemValue="Option6"/> <f:selectItem itemLabel="Option7" itemValue="Option7"/> <f:selectItem itemLabel="Option8" itemValue="Option8"/> <f:selectItem itemLabel="Option9" itemValue="Option9"/> </p:selectOneRadio> </div> </div> </div> </h:form> ``` **Screenshot rendered HTML** ![Screenshot_HTML_SelectOneRadio](https://user-images.githubusercontent.com/77431692/150933107-ac86e016-7ea5-4fce-b175-807dc49ec5f6.png) **Screenshot HTML validation error** 
![Screenshot_HTML_Validation_SelectOneRadio](https://user-images.githubusercontent.com/77431692/150933165-3821d656-6c91-4538-8167-b0b1d603abea.png)
1.0
SelectOneRadio: HTML validation error when using required attribute - **Describe the defect** When using selectOneRadio with attributes layout="grid", columns="3" and required="true", the selectOneRadio is rendered as a \<table>-element in the HTML DOM with the attribute aria-required="true". This HTML syntax causes a validation error when analyzing the site with the [Nu HTML Checker](https://validator.w3.org/nu/) as the attribute aria-required is not allowed on \<table>-elements. **Environment:** - PF Version: _10.0_ **To Reproduce** Steps to reproduce the behavior: 1. Add the example code below to your application. 2. Copy your site's HTML code into the [Nu HTML Checker](https://validator.w3.org/nu/) and start validation. 3. See validation error from Screenshot below. **Expected behavior** The attribute "aria-required" should not be added to the \<table>-element within selectOneRadios as it causes an HTML validation error and thus violates [WCAG success criterion 4.1.1](https://www.w3.org/TR/WCAG21/#parsing) for accessibility. 
**Example XHTML** ```html <h:form id="frmTest"> <div class="p-grid ui-fluid"> <div class="p-col-12"> <div class="card"> <h5>SelectOneRadio</h5> <p:selectOneRadio id="testRadio" columns="3" required="true" layout="grid"> <f:selectItem itemLabel="Option1" itemValue="Option1"/> <f:selectItem itemLabel="Option2" itemValue="Option2"/> <f:selectItem itemLabel="Option3" itemValue="Option3"/> <f:selectItem itemLabel="Option4" itemValue="Option4"/> <f:selectItem itemLabel="Option5" itemValue="Option5"/> <f:selectItem itemLabel="Option6" itemValue="Option6"/> <f:selectItem itemLabel="Option7" itemValue="Option7"/> <f:selectItem itemLabel="Option8" itemValue="Option8"/> <f:selectItem itemLabel="Option9" itemValue="Option9"/> </p:selectOneRadio> </div> </div> </div> </h:form> ``` **Screenshot rendered HTML** ![Screenshot_HTML_SelectOneRadio](https://user-images.githubusercontent.com/77431692/150933107-ac86e016-7ea5-4fce-b175-807dc49ec5f6.png) **Screenshot HTML validation error** ![Screenshot_HTML_Validation_SelectOneRadio](https://user-images.githubusercontent.com/77431692/150933165-3821d656-6c91-4538-8167-b0b1d603abea.png)
defect
selectoneradio html validation error when using required attribute describe the defect when using selectoneradio with attributes layout grid columns and required true the selectoneradio is rendered as a element in the html dom with the attribute aria required true this html syntax causes a validation error when analyzing the site with the as the attribute aria required is not allowed on elements environment pf version to reproduce steps to reproduce the behavior add the example code below to your application copy your site s html code into the and start validation see validation error from screenshot below expected behavior the attribute aria required should not be added to the element within selectoneradios as it causes an html validation error and thus violates for accessibility example xhtml html selectoneradio screenshot rendered html screenshot html validation error
1
51,822
10,729,727,036
IssuesEvent
2019-10-28 16:05:06
elastic/kibana
https://api.github.com/repos/elastic/kibana
closed
Failing test: X-Pack Jest Tests.x-pack/plugins/code/server/lsp - passive launcher can start and end a process
Team:Code failed-test
A test failed on a tracked branch ``` Error: expect(received).toBe(expected) // Object.is equality Expected: "process started" Received: "socket connected" at toBe (/var/lib/jenkins/workspace/elastic+kibana+master/JOB/x-pack-intake/node/immutable/kibana/x-pack/plugins/code/server/lsp/abstract_launcher.test.ts:197:42) at testFn (/var/lib/jenkins/workspace/elastic+kibana+master/JOB/x-pack-intake/node/immutable/kibana/x-pack/plugins/code/server/lsp/abstract_launcher.test.ts:132:5) at retryUtil (/var/lib/jenkins/workspace/elastic+kibana+master/JOB/x-pack-intake/node/immutable/kibana/x-pack/plugins/code/server/lsp/abstract_launcher.test.ts:136:13) ``` First failure: [Jenkins Build](https://kibana-ci.elastic.co/job/elastic+kibana+master/JOB=x-pack-intake,node=immutable/163/) <!-- kibanaCiData = {"failed-test":{"test.class":"X-Pack Jest Tests.x-pack/plugins/code/server/lsp","test.name":"passive launcher can start and end a process","test.failCount":1}} -->
1.0
Failing test: X-Pack Jest Tests.x-pack/plugins/code/server/lsp - passive launcher can start and end a process - A test failed on a tracked branch ``` Error: expect(received).toBe(expected) // Object.is equality Expected: "process started" Received: "socket connected" at toBe (/var/lib/jenkins/workspace/elastic+kibana+master/JOB/x-pack-intake/node/immutable/kibana/x-pack/plugins/code/server/lsp/abstract_launcher.test.ts:197:42) at testFn (/var/lib/jenkins/workspace/elastic+kibana+master/JOB/x-pack-intake/node/immutable/kibana/x-pack/plugins/code/server/lsp/abstract_launcher.test.ts:132:5) at retryUtil (/var/lib/jenkins/workspace/elastic+kibana+master/JOB/x-pack-intake/node/immutable/kibana/x-pack/plugins/code/server/lsp/abstract_launcher.test.ts:136:13) ``` First failure: [Jenkins Build](https://kibana-ci.elastic.co/job/elastic+kibana+master/JOB=x-pack-intake,node=immutable/163/) <!-- kibanaCiData = {"failed-test":{"test.class":"X-Pack Jest Tests.x-pack/plugins/code/server/lsp","test.name":"passive launcher can start and end a process","test.failCount":1}} -->
non_defect
failing test x pack jest tests x pack plugins code server lsp passive launcher can start and end a process a test failed on a tracked branch error expect received tobe expected object is equality expected process started received socket connected at tobe var lib jenkins workspace elastic kibana master job x pack intake node immutable kibana x pack plugins code server lsp abstract launcher test ts at testfn var lib jenkins workspace elastic kibana master job x pack intake node immutable kibana x pack plugins code server lsp abstract launcher test ts at retryutil var lib jenkins workspace elastic kibana master job x pack intake node immutable kibana x pack plugins code server lsp abstract launcher test ts first failure
0
49,207
13,185,293,007
IssuesEvent
2020-08-12 21:06:17
icecube-trac/tix3
https://api.github.com/repos/icecube-trac/tix3
opened
[dataclasses] I3MCTreePhysicsLibrary.get_most_energetic_primary fails if particles in tree-head are not marked properly as "Primary" (Trac #958)
Incomplete Migration Migrated from Trac combo core defect
<details> <summary><em>Migrated from https://code.icecube.wisc.edu/ticket/958 , reported by hdembinski and owned by olivas</em></summary> <p> ```json { "status": "closed", "changetime": "2016-03-18T21:13:59", "description": "I describe the Python side of this issue, but the same holds for C++.\n\nget_most_energetic_primary filters the tree for I3Particles with is_primary is true and picks the most energetic. It returns none if there are no particles with the property is_primary is true.\n\nOn the other hand, I3MCTree.primaries yields all particles the head of the tree, independent of whether is_primary is true or not. It is based solely on the tree topology.\n\nI found a real-life example of tree where I3MCTree.primaries yields some primary particles, while get_most_energetic_primary yields nothing. The reason is that the primary particles are not properly marked as is_primary. I expect this conflict to happen often.\n\nWe should fix either I3MCTree.primaries or get_most_energetic_primary, since they yield conflicting information for such a case. I suggest to fix get_most_energetic_primary, by making it also taking particles into account, which are primaries by topology of the tree.", "reporter": "hdembinski", "cc": "", "resolution": "fixed", "_ts": "1458335639558230", "component": "combo core", "summary": "[dataclasses] I3MCTreePhysicsLibrary.get_most_energetic_primary fails if particles in tree-head are not marked properly as \"Primary\"", "priority": "normal", "keywords": "dataio I3MCTreePhysicsLibrary dataclasses", "time": "2015-05-02T22:55:49", "milestone": "", "owner": "olivas", "type": "defect" } ``` </p> </details>
1.0
[dataclasses] I3MCTreePhysicsLibrary.get_most_energetic_primary fails if particles in tree-head are not marked properly as "Primary" (Trac #958) - <details> <summary><em>Migrated from https://code.icecube.wisc.edu/ticket/958 , reported by hdembinski and owned by olivas</em></summary> <p> ```json { "status": "closed", "changetime": "2016-03-18T21:13:59", "description": "I describe the Python side of this issue, but the same holds for C++.\n\nget_most_energetic_primary filters the tree for I3Particles with is_primary is true and picks the most energetic. It returns none if there are no particles with the property is_primary is true.\n\nOn the other hand, I3MCTree.primaries yields all particles the head of the tree, independent of whether is_primary is true or not. It is based solely on the tree topology.\n\nI found a real-life example of tree where I3MCTree.primaries yields some primary particles, while get_most_energetic_primary yields nothing. The reason is that the primary particles are not properly marked as is_primary. I expect this conflict to happen often.\n\nWe should fix either I3MCTree.primaries or get_most_energetic_primary, since they yield conflicting information for such a case. I suggest to fix get_most_energetic_primary, by making it also taking particles into account, which are primaries by topology of the tree.", "reporter": "hdembinski", "cc": "", "resolution": "fixed", "_ts": "1458335639558230", "component": "combo core", "summary": "[dataclasses] I3MCTreePhysicsLibrary.get_most_energetic_primary fails if particles in tree-head are not marked properly as \"Primary\"", "priority": "normal", "keywords": "dataio I3MCTreePhysicsLibrary dataclasses", "time": "2015-05-02T22:55:49", "milestone": "", "owner": "olivas", "type": "defect" } ``` </p> </details>
defect
get most energetic primary fails if particles in tree head are not marked properly as primary trac migrated from reported by hdembinski and owned by olivas json status closed changetime description i describe the python side of this issue but the same holds for c n nget most energetic primary filters the tree for with is primary is true and picks the most energetic it returns none if there are no particles with the property is primary is true n non the other hand primaries yields all particles the head of the tree independent of whether is primary is true or not it is based solely on the tree topology n ni found a real life example of tree where primaries yields some primary particles while get most energetic primary yields nothing the reason is that the primary particles are not properly marked as is primary i expect this conflict to happen often n nwe should fix either primaries or get most energetic primary since they yield conflicting information for such a case i suggest to fix get most energetic primary by making it also taking particles into account which are primaries by topology of the tree reporter hdembinski cc resolution fixed ts component combo core summary get most energetic primary fails if particles in tree head are not marked properly as primary priority normal keywords dataio dataclasses time milestone owner olivas type defect
1
164,255
13,938,900,699
IssuesEvent
2020-10-22 15:48:39
yugabyte/yugabyte-db
https://api.github.com/repos/yugabyte/yugabyte-db
opened
[DOCS] How to reset YSQL password when can't connect
area/documentation area/ysql
Using hba_conf to allow login without password "trust".
1.0
[DOCS] How to reset YSQL password when can't connect - Using hba_conf to allow login without password "trust".
non_defect
how to reset ysql password when can t connect using hba conf to allow login without password trust
0
63,191
17,439,068,326
IssuesEvent
2021-08-05 00:33:23
vector-im/element-web
https://api.github.com/repos/vector-im/element-web
closed
Upload image does not obfuscate original filename and does not strip EXIF
T-Defect
Maybe someone wants that to be the case, and that's fine but it should not be this is a huge security risk and I can't recommend this service until it's fixed
1.0
Upload image does not obfuscate original filename and does not strip EXIF - Maybe someone wants that to be the case, and that's fine but it should not be this is a huge security risk and I can't recommend this service until it's fixed
defect
upload image does not obfuscate original filename and does not strip exif maybe someone wants that to be the case and that s fine but it should not be this is a huge security risk and i can t recommend this service until it s fixed
1
403,993
11,850,505,383
IssuesEvent
2020-03-24 16:43:56
gbif/vocabulary
https://api.github.com/repos/gbif/vocabulary
closed
Running tests in parallel
enhancement priority:low
Originally, all the tests developed were intended to be run in parallel but it breaks in Jenkins, so the parallel execution of vocabulary-rest-ws tests are now turned off (https://github.com/gbif/vocabulary/blob/master/vocabulary-rest-ws/src/test/resources/junit-platform.properties)
1.0
Running tests in parallel - Originally, all the tests developed were intended to be run in parallel but it breaks in Jenkins, so the parallel execution of vocabulary-rest-ws tests are now turned off (https://github.com/gbif/vocabulary/blob/master/vocabulary-rest-ws/src/test/resources/junit-platform.properties)
non_defect
running tests in parallel originally all the tests developed were intended to be run in parallel but it breaks in jenkins so the parallel execution of vocabulary rest ws tests are now turned off
0
58,704
16,704,370,976
IssuesEvent
2021-06-09 08:13:59
vector-im/element-web
https://api.github.com/repos/vector-im/element-web
closed
Rendering large MELS is very slow
A-Performance T-Defect Z-Mozilla
If you have tens/hundreds of membership changes in a row, MELS takes a surprisingly long time to render when viewing a timeline for the first time. It's as if every new event added to scrollback calls the MELS (or the whole timeline?) to re-render from scratch. Instead, we should only update the MELS atomically for every batch of scrollback loaded via /messages, rather than on a message by message basis.
1.0
Rendering large MELS is very slow - If you have tens/hundreds of membership changes in a row, MELS takes a surprisingly long time to render when viewing a timeline for the first time. It's as if every new event added to scrollback calls the MELS (or the whole timeline?) to re-render from scratch. Instead, we should only update the MELS atomically for every batch of scrollback loaded via /messages, rather than on a message by message basis.
defect
rendering large mels is very slow if you have tens hundreds of membership changes in a row mels takes a surprisingly long time to render when viewing a timeline for the first time it s as if every new event added to scrollback calls the mels or the whole timeline to re render from scratch instead we should only update the mels atomically for every batch of scrollback loaded via messages rather than on a message by message basis
1
441,004
12,707,015,007
IssuesEvent
2020-06-23 08:13:06
luna/luna
https://api.github.com/repos/luna/luna
closed
Add Parsing of Modifer Operators (e.g. `+=`) to Luna
Category: Compiler Category: Syntax Change: Non-Breaking Difficulty: Core Contributor Priority: Highest Type: Bug
### Summary Currently CI is failing due to the inability of Luna to parse operators used in StdTest. This task exists to track the implementation of modifier operators into the parser and IR. ### Value Modifier operators are potentially useful functionality for our users, but this is also keeping CI in the red, reducing the availability of valuable feedback to team members. ### Specification - Design IR for such operators (either raw or as a desugaring). - Implement IR for these operators. - Implement parsing for such operators in the [parser](https://github.com/luna/luna/blob/master/syntax/text/parser3/src/Luna/Pass/Parsing/Parser.hs#L251). ### Acceptance Criteria & Test Cases - Luna successfully parses modifier operators into valid IR. - StdTest passes on CI with the `FieldModifications` test enabled.
1.0
Add Parsing of Modifer Operators (e.g. `+=`) to Luna - ### Summary Currently CI is failing due to the inability of Luna to parse operators used in StdTest. This task exists to track the implementation of modifier operators into the parser and IR. ### Value Modifier operators are potentially useful functionality for our users, but this is also keeping CI in the red, reducing the availability of valuable feedback to team members. ### Specification - Design IR for such operators (either raw or as a desugaring). - Implement IR for these operators. - Implement parsing for such operators in the [parser](https://github.com/luna/luna/blob/master/syntax/text/parser3/src/Luna/Pass/Parsing/Parser.hs#L251). ### Acceptance Criteria & Test Cases - Luna successfully parses modifier operators into valid IR. - StdTest passes on CI with the `FieldModifications` test enabled.
non_defect
add parsing of modifer operators e g to luna summary currently ci is failing due to the inability of luna to parse operators used in stdtest this task exists to track the implementation of modifier operators into the parser and ir value modifier operators are potentially useful functionality for our users but this is also keeping ci in the red reducing the availability of valuable feedback to team members specification design ir for such operators either raw or as a desugaring implement ir for these operators implement parsing for such operators in the acceptance criteria test cases luna successfully parses modifier operators into valid ir stdtest passes on ci with the fieldmodifications test enabled
0
48,182
13,067,499,522
IssuesEvent
2020-07-31 00:39:28
icecube-trac/tix2
https://api.github.com/repos/icecube-trac/tix2
closed
[simprod] Missing Time Range for SplitInIcePulses in 2013 simulation (Trac #1916)
Migrated from Trac combo simulation defect
2013 data has SplitInIcePulsesTimeRange in the frame, but 2013 simulation doesn't. Migrated from https://code.icecube.wisc.edu/ticket/1916 ```json { "status": "closed", "changetime": "2019-02-12T20:28:04", "description": "2013 data has SplitInIcePulsesTimeRange in the frame, but 2013 simulation doesn't. ", "reporter": "yiqian.xu", "cc": "david.schultz, olivas", "resolution": "fixed", "_ts": "1550003284179803", "component": "combo simulation", "summary": "[simprod] Missing Time Range for SplitInIcePulses in 2013 simulation", "priority": "normal", "keywords": "", "time": "2016-12-01T17:17:20", "milestone": "Vernal Equinox 2019", "owner": "juancarlos", "type": "defect" } ```
1.0
[simprod] Missing Time Range for SplitInIcePulses in 2013 simulation (Trac #1916) - 2013 data has SplitInIcePulsesTimeRange in the frame, but 2013 simulation doesn't. Migrated from https://code.icecube.wisc.edu/ticket/1916 ```json { "status": "closed", "changetime": "2019-02-12T20:28:04", "description": "2013 data has SplitInIcePulsesTimeRange in the frame, but 2013 simulation doesn't. ", "reporter": "yiqian.xu", "cc": "david.schultz, olivas", "resolution": "fixed", "_ts": "1550003284179803", "component": "combo simulation", "summary": "[simprod] Missing Time Range for SplitInIcePulses in 2013 simulation", "priority": "normal", "keywords": "", "time": "2016-12-01T17:17:20", "milestone": "Vernal Equinox 2019", "owner": "juancarlos", "type": "defect" } ```
defect
missing time range for splitinicepulses in simulation trac data has splitinicepulsestimerange in the frame but simulation doesn t migrated from json status closed changetime description data has splitinicepulsestimerange in the frame but simulation doesn t reporter yiqian xu cc david schultz olivas resolution fixed ts component combo simulation summary missing time range for splitinicepulses in simulation priority normal keywords time milestone vernal equinox owner juancarlos type defect
1
65,255
19,301,421,866
IssuesEvent
2021-12-13 06:18:06
vector-im/element-ios
https://api.github.com/repos/vector-im/element-ios
closed
Voice Messages from FluffyChat are not displayed correctly
T-Defect
#### Describe the bug. When receiving a voice message from FluffyChat IOS, an AAC file, it is not displayed correctly. #### Steps to reproduce: Steps to reproduce the behavior: 1. Receive a voice message from FluffyChat IOS 2. Open the chat #### Expected behavior The voice message is rendered as one and therefore I can play and pause it inside of Element. #### Screenshots ![517F0D5A-6544-46B9-9FFD-A33ECE03FECC](https://user-images.githubusercontent.com/39308834/129793824-ce2e63e0-58ca-4be8-94e3-ee38054502ee.jpeg) At first you can see a correctly displayed voice message sent by Element IOS, followed by one sent by FluffyChat. #### Contextual information: - Device: iPhone 8 - OS: 14.7.1 - App Version: 1.5.1
1.0
Voice Messages from FluffyChat are not displayed correctly - #### Describe the bug. When receiving a voice message from FluffyChat IOS, an AAC file, it is not displayed correctly. #### Steps to reproduce: Steps to reproduce the behavior: 1. Receive a voice message from FluffyChat IOS 2. Open the chat #### Expected behavior The voice message is rendered as one and therefore I can play and pause it inside of Element. #### Screenshots ![517F0D5A-6544-46B9-9FFD-A33ECE03FECC](https://user-images.githubusercontent.com/39308834/129793824-ce2e63e0-58ca-4be8-94e3-ee38054502ee.jpeg) At first you can see a correctly displayed voice message sent by Element IOS, followed by one sent by FluffyChat. #### Contextual information: - Device: iPhone 8 - OS: 14.7.1 - App Version: 1.5.1
defect
voice messages from fluffychat are not displayed correctly describe the bug when receiving a voice message from fluffychat ios an aac file it is not displayed correctly steps to reproduce steps to reproduce the behavior receive a voice message from fluffychat ios open the chat expected behavior the voice message is rendered as one and therefore i can play and pause it inside of element screenshots at first you can see a correctly displayed voice message sent by element ios followed by one sent by fluffychat contextual information device iphone os app version
1
69,004
22,049,852,823
IssuesEvent
2022-05-30 07:39:02
cython/cython
https://api.github.com/repos/cython/cython
closed
[ENH] Make function pointer exception specification matching more lenient
defect P: blocker Type Analysis
**Is your feature request related to a problem? Please describe.** This relates to https://github.com/cython/cython/issues/4280 (where it's unclear what exception specification should be assumed for function pointers). In my mind a `noexcept` function is compatible with a pointer with an `except *` or an `except ?value` pointer specification (but not an `except value` or `except +` specification). This is because it simply adds an unnecessary check that always passes. Similarly an `except value` function should be compatible with `except *` and an `except ?value` specification. (And a few other similar combinations) Code to demonstrate (that doesn't currently work): ```cython cdef extern from *: cdef int extern_f() # noexcept cdef int takes_func(int (*f)() except *): return f() def call(): takes_func(extern_f) # fails, but shouldn't ``` **Additional context** This is consistent with what C++ does (which doesn't necessarily say anything about what Cython should do...) ```c++ int noexcept_func() noexcept; int regular_func(); int (*f)() = noexcept_func; // OK int (*g)() = regular_func; // OK int (*h)() noexcept = noexcept_func; // OK int (*i)() noexcept = regular_func; // fails ``` The advantage to this is that we could assume `except *` for function pointers by default and the vast majority of code would keep working. I think it's probably worth doing independently whatever we decide on https://github.com/cython/cython/issues/4280 (and thus I create a separate issue)
1.0
[ENH] Make function pointer exception specification matching more lenient - **Is your feature request related to a problem? Please describe.** This relates to https://github.com/cython/cython/issues/4280 (where it's unclear what exception specification should be assumed for function pointers). In my mind a `noexcept` function is compatible with a pointer with an `except *` or an `except ?value` pointer specification (but not an `except value` or `except +` specification). This is because it simply adds an unnecessary check that always passes. Similarly an `except value` function should be compatible with `except *` and an `except ?value` specification. (And a few other similar combinations) Code to demonstrate (that doesn't currently work): ```cython cdef extern from *: cdef int extern_f() # noexcept cdef int takes_func(int (*f)() except *): return f() def call(): takes_func(extern_f) # fails, but shouldn't ``` **Additional context** This is consistent with what C++ does (which doesn't necessarily say anything about what Cython should do...) ```c++ int noexcept_func() noexcept; int regular_func(); int (*f)() = noexcept_func; // OK int (*g)() = regular_func; // OK int (*h)() noexcept = noexcept_func; // OK int (*i)() noexcept = regular_func; // fails ``` The advantage to this is that we could assume `except *` for function pointers by default and the vast majority of code would keep working. I think it's probably worth doing independently whatever we decide on https://github.com/cython/cython/issues/4280 (and thus I create a separate issue)
defect
make function pointer exception specification matching more lenient is your feature request related to a problem please describe this relates to where it s unclear what exception specification should be assumed for function pointers in my mind a noexcept function is compatible with a pointer with an except or an except value pointer specification but not an except value or except specification this is because it simply adds an unnecessary check that always passes similarly an except value function should be compatible with except and an except value specification and a few other similar combinations code to demonstrate that doesn t currently work cython cdef extern from cdef int extern f noexcept cdef int takes func int f except return f def call takes func extern f fails but shouldn t additional context this is consistent with what c does which doesn t necessarily say anything about what cython should do c int noexcept func noexcept int regular func int f noexcept func ok int g regular func ok int h noexcept noexcept func ok int i noexcept regular func fails the advantage to this is that we could assume except for function pointers by default and the vast majority of code would keep working i think it s probably worth doing independently whatever we decide on and thus i create a separate issue
1
80,073
29,992,715,977
IssuesEvent
2023-06-26 00:55:17
zed-industries/community
https://api.github.com/repos/zed-industries/community
closed
buffers scroll too far down
defect
### Check for existing issues - [X] Completed ### Describe the bug / provide steps to reproduce it when scrolling down in a buffer it scrolls until the last line is at the top. that seems rather useless to me. I often wanna quickly go to the bottom of the file, but with thed after flicking down I always have to move back up a bit to actually see anything ### Environment Zed: v0.90.2 (stable) OS: macOS 13.2.1 Memory: 16 GiB Architecture: aarch64 ### If applicable, add mockups / screenshots to help explain present your vision of the feature _No response_ ### If applicable, attach your `~/Library/Logs/Zed/Zed.log` file to this issue. If you only need the most recent lines, you can run the `zed: open log` command palette action to see the last 1000. _No response_
1.0
buffers scroll too far down - ### Check for existing issues - [X] Completed ### Describe the bug / provide steps to reproduce it when scrolling down in a buffer it scrolls until the last line is at the top. that seems rather useless to me. I often wanna quickly go to the bottom of the file, but with thed after flicking down I always have to move back up a bit to actually see anything ### Environment Zed: v0.90.2 (stable) OS: macOS 13.2.1 Memory: 16 GiB Architecture: aarch64 ### If applicable, add mockups / screenshots to help explain present your vision of the feature _No response_ ### If applicable, attach your `~/Library/Logs/Zed/Zed.log` file to this issue. If you only need the most recent lines, you can run the `zed: open log` command palette action to see the last 1000. _No response_
defect
buffers scroll too far down check for existing issues completed describe the bug provide steps to reproduce it when scrolling down in a buffer it scrolls until the last line is at the top that seems rather useless to me i often wanna quickly go to the bottom of the file but with thed after flicking down i always have to move back up a bit to actually see anything environment zed stable os macos memory gib architecture if applicable add mockups screenshots to help explain present your vision of the feature no response if applicable attach your library logs zed zed log file to this issue if you only need the most recent lines you can run the zed open log command palette action to see the last no response
1
410,846
12,002,761,645
IssuesEvent
2020-04-09 08:23:03
StrangeLoopGames/EcoIssues
https://api.github.com/repos/StrangeLoopGames/EcoIssues
closed
[0.9.0 staging-1503] Labor requirements for specialty in some tooltips can confuse players
Priority: Low Status: Fixed
Step to reproduce: - take masonry specialty. - so I have Masonry 1. Look at labor tooltip in available recipe, for example, Mortared stone: ![image](https://user-images.githubusercontent.com/45708377/78755160-64f8f200-7981-11ea-92c3-5b90c9d92419.png) - seems all right, but now look at labor tooltip in locked recipe, for example, Mortared stone bench. It requires Masonry 3: ![image](https://user-images.githubusercontent.com/45708377/78755385-b903d680-7981-11ea-9847-7fc6e928a888.png) - So this tooltip said that I need Masonry to add labor, and that I don't have Masonry. But it's wrong. We need to slightly change this tooltip: ![image](https://user-images.githubusercontent.com/45708377/78755744-57903780-7982-11ea-8051-338591c27084.png) We also need to change recipe tooltip with same way: ![image](https://user-images.githubusercontent.com/45708377/78756310-4136ab80-7983-11ea-8c8a-4229814f1b31.png)
1.0
[0.9.0 staging-1503] Labor requirements for specialty in some tooltips can confuse players - Step to reproduce: - take masonry specialty. - so I have Masonry 1. Look at labor tooltip in available recipe, for example, Mortared stone: ![image](https://user-images.githubusercontent.com/45708377/78755160-64f8f200-7981-11ea-92c3-5b90c9d92419.png) - seems all right, but now look at labor tooltip in locked recipe, for example, Mortared stone bench. It requires Masonry 3: ![image](https://user-images.githubusercontent.com/45708377/78755385-b903d680-7981-11ea-9847-7fc6e928a888.png) - So this tooltip said that I need Masonry to add labor, and that I don't have Masonry. But it's wrong. We need to slightly change this tooltip: ![image](https://user-images.githubusercontent.com/45708377/78755744-57903780-7982-11ea-8051-338591c27084.png) We also need to change recipe tooltip with same way: ![image](https://user-images.githubusercontent.com/45708377/78756310-4136ab80-7983-11ea-8c8a-4229814f1b31.png)
non_defect
labor requirements for specialty in some tooltips can confuse players step to reproduce take masonry specialty so i have masonry look at labor tooltip in available recipe for example mortared stone seems all right but now look at labor tooltip in locked recipe for example mortared stone bench it requires masonry so this tooltip said that i need masonry to add labor and that i don t have masonry but it s wrong we need to slightly change this tooltip we also need to change recipe tooltip with same way
0
42,630
11,188,218,432
IssuesEvent
2020-01-02 03:29:41
hazendaz/htmlcompressor
https://api.github.com/repos/hazendaz/htmlcompressor
closed
Velocity Version
Priority-Medium Type-Defect auto-migrated
``` What Velocity version does this support? #compressHtml() & #end is not getting recognised ``` Original issue reported on code.google.com by `kalims...@gmail.com` on 27 Feb 2015 at 11:19
1.0
Velocity Version - ``` What Velocity version does this support? #compressHtml() & #end is not getting recognised ``` Original issue reported on code.google.com by `kalims...@gmail.com` on 27 Feb 2015 at 11:19
defect
velocity version what velocity version does this support compresshtml end is not getting recognised original issue reported on code google com by kalims gmail com on feb at
1
168,212
14,142,274,965
IssuesEvent
2020-11-10 13:51:55
dotnet/dotnet-docker
https://api.github.com/repos/dotnet/dotnet-docker
closed
Windows images should be grouped by .NET Version in the Tag Listing of the readme
area-documentation bug triaged
The 5.0 Windows Server Core images are not grouped with the 5.0 Nano Server images in the Tag Listing section of the readme. Example from https://hub.docker.com/_/microsoft-dotnet-nightly-sdk/: ![image.png](https://images.zenhubusercontent.com/583dda7ab9dc3c622022afdb/dde9479d-456d-4634-8314-c8af8bc36613) Just like the Linux sections, all images should be grouped by .NET version.
1.0
Windows images should be grouped by .NET Version in the Tag Listing of the readme - The 5.0 Windows Server Core images are not grouped with the 5.0 Nano Server images in the Tag Listing section of the readme. Example from https://hub.docker.com/_/microsoft-dotnet-nightly-sdk/: ![image.png](https://images.zenhubusercontent.com/583dda7ab9dc3c622022afdb/dde9479d-456d-4634-8314-c8af8bc36613) Just like the Linux sections, all images should be grouped by .NET version.
non_defect
windows images should be grouped by net version in the tag listing of the readme the windows server core images are not grouped with the nano server images in the tag listing section of the readme example from just like the linux sections all images should be grouped by net version
0
84,814
10,418,942,268
IssuesEvent
2019-09-15 12:58:06
matplotlib/matplotlib
https://api.github.com/repos/matplotlib/matplotlib
closed
Matplotlib NavigationToolbar2Tk disappears when reducing window size
Documentation GUI/tk Good first issue
Using the example in the link below, the toolbar will disappear when reducing the height of the window. I have read about this bug in here and it was stated as resolved but I still have that problem. https://matplotlib.org/gallery/user_interfaces/embedding_in_tk_sgskip.html#sphx-glr-gallery-user-interfaces-embedding-in-tk-sgskip-py One solution is to use grid and one frame for FigureCanvasTkAgg and one frame for NavigationToolbar2Tk but then the cursor dont change appearance depending on if zoom, pan etc is selected. Im using Python 3.7.1 (v3.7.1:260ec2c36a, Oct 20 2018, 14:57:15) [MSC v.1915 64 bit (AMD64)] on win32 and matplotlib v 3.0.2 Kind regards, Daniel
1.0
Matplotlib NavigationToolbar2Tk disappears when reducing window size - Using the example in the link below, the toolbar will disappear when reducing the height of the window. I have read about this bug in here and it was stated as resolved but I still have that problem. https://matplotlib.org/gallery/user_interfaces/embedding_in_tk_sgskip.html#sphx-glr-gallery-user-interfaces-embedding-in-tk-sgskip-py One solution is to use grid and one frame for FigureCanvasTkAgg and one frame for NavigationToolbar2Tk but then the cursor dont change appearance depending on if zoom, pan etc is selected. Im using Python 3.7.1 (v3.7.1:260ec2c36a, Oct 20 2018, 14:57:15) [MSC v.1915 64 bit (AMD64)] on win32 and matplotlib v 3.0.2 Kind regards, Daniel
non_defect
matplotlib disappears when reducing window size using the example in the link below the toolbar will disappear when reducing the height of the window i have read about this bug in here and it was stated as resolved but i still have that problem one solution is to use grid and one frame for figurecanvastkagg and one frame for but then the cursor dont change appearance depending on if zoom pan etc is selected im using python oct on and matplotlib v kind regards daniel
0
69,975
22,773,172,483
IssuesEvent
2022-07-08 12:04:54
SeleniumHQ/selenium
https://api.github.com/repos/SeleniumHQ/selenium
opened
[🐛 Bug]: org.asynchttpclient.exception.RemotelyClosedException when starting 3 tests at once with Sauce Labs
I-defect needs-triaging
### What happened? Not sure if this belongs here or in Sauce Labs but: When starting three simple tests at the same time with Selenium 4.3.0 the end result is always (at least on my M1 Mac) that test 1 and test 3 run and work fine (although in Saucelabs they show up as 'errored') but test 2 fails to start with ``` org.openqa.selenium.SessionNotCreatedException: Could not start a new session. Possible causes are invalid address of the remote server or browser start-up failur e. Build info: version: '4.3.0', revision: 'a4995e2c09*' System info: host: 'myhostname', ip: 'myip', os.name: 'Mac OS X', os.arch: 'aarch64', os.version: '12.4', java.ver sion: '17.0.1' Driver info: org.openqa.selenium.remote.RemoteWebDriver Command: [null, newSession {capabilities=[Capabilities {browserName: chrome, goog:chromeOptions: {args: [], extens ions: []}, sauce:options: {access_key: <key removed>, name: test2, tunnelIdentifier: my-tunnel-id, username: my-sa uce-username}}], desiredCapabilities=Capabilities {browserName: chrome, goog:chromeOptions: {args: [], extensions: []}, sauce:options: {access_key: <key removed>, name: test2, tunnelIdentifier: my-tunnel-id, username: my-sauce-u sername}}}] Capabilities {} at org.openqa.selenium.remote.RemoteWebDriver.execute(RemoteWebDriver.java:587) at org.openqa.selenium.remote.RemoteWebDriver.startSession(RemoteWebDriver.java:264) at org.openqa.selenium.remote.RemoteWebDriver.<init>(RemoteWebDriver.java:179) at org.openqa.selenium.remote.RemoteWebDriver.<init>(RemoteWebDriver.java:161) at com.example.application.MyIT.test(MyIT.java:52) at com.example.application.MyIT.test2(MyIT.java:30) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:77) at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43 ) at java.base/java.lang.reflect.Method.invoke(Method.java:568) at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.ParentRunner.run(ParentRunner.java:413) at org.junit.runner.JUnitCore.run(JUnitCore.java:137) at org.junit.runner.JUnitCore.run(JUnitCore.java:115) at org.junit.vintage.engine.execution.RunnerExecutor.execute(RunnerExecutor.java:42) at org.junit.vintage.engine.VintageTestEngine.executeAllChildren(VintageTestEngine.java:80) at org.junit.vintage.engine.VintageTestEngine.execute(VintageTestEngine.java:72) at org.junit.platform.launcher.core.DefaultLauncher.execute(DefaultLauncher.java:220) at org.junit.platform.launcher.core.DefaultLauncher.lambda$execute$6(DefaultLauncher.java:188) at org.junit.platform.launcher.core.DefaultLauncher.withInterceptedStreams(DefaultLauncher.java:202) at org.junit.platform.launcher.core.DefaultLauncher.execute(DefaultLauncher.java:181) at 
org.junit.platform.launcher.core.DefaultLauncher.execute(DefaultLauncher.java:128) at org.apache.maven.surefire.junitplatform.JUnitPlatformProvider.invokeAllTests(JUnitPlatformProvider.java:150) at org.apache.maven.surefire.junitplatform.JUnitPlatformProvider.invoke(JUnitPlatformProvider.java:124) at org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:384) at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:345) at org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:126) at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:418) Caused by: java.io.UncheckedIOException: org.asynchttpclient.exception.RemotelyClosedException: Remotely closed at org.openqa.selenium.remote.http.netty.NettyHttpHandler.makeCall(NettyHttpHandler.java:73) at org.openqa.selenium.remote.http.AddSeleniumUserAgent.lambda$apply$0(AddSeleniumUserAgent.java:42) at org.openqa.selenium.remote.http.Filter.lambda$andFinally$1(Filter.java:56) at org.openqa.selenium.remote.http.netty.NettyHttpHandler.execute(NettyHttpHandler.java:49) at org.openqa.selenium.remote.http.AddSeleniumUserAgent.lambda$apply$0(AddSeleniumUserAgent.java:42) at org.openqa.selenium.remote.http.Filter.lambda$andFinally$1(Filter.java:56) at org.openqa.selenium.remote.http.netty.NettyClient.execute(NettyClient.java:98) at org.openqa.selenium.remote.tracing.TracedHttpClient.execute(TracedHttpClient.java:55) at org.openqa.selenium.remote.ProtocolHandshake.createSession(ProtocolHandshake.java:120) at org.openqa.selenium.remote.ProtocolHandshake.createSession(ProtocolHandshake.java:102) at org.openqa.selenium.remote.ProtocolHandshake.createSession(ProtocolHandshake.java:67) at org.openqa.selenium.remote.HttpCommandExecutor.execute(HttpCommandExecutor.java:156) at org.openqa.selenium.remote.TracedCommandExecutor.execute(TracedCommandExecutor.java:51) at 
org.openqa.selenium.remote.RemoteWebDriver.execute(RemoteWebDriver.java:569) ... 42 more Caused by: org.asynchttpclient.exception.RemotelyClosedException: Remotely closed at org.asynchttpclient.exception.RemotelyClosedException.INSTANCE(Unknown Source) ``` ### How can we reproduce the issue? ```shell https://github.com/Artur-/saucetest ``` ### Relevant log output ```shell The full log of one run can be found here https://gist.github.com/Artur-/5e5ec1b70949560b1e75361ba27145a4 ``` ### Operating System macOs Big Sur ### Selenium version 4.3.0 ### What are the browser(s) and version(s) where you see this issue? Chrome ### What are the browser driver(s) and version(s) where you see this issue? Remotedriver + Saucelabs ### Are you using Selenium Grid? No
1.0
[🐛 Bug]: org.asynchttpclient.exception.RemotelyClosedException when starting 3 tests at once with Sauce Labs - ### What happened? Not sure if this belongs here or in Sauce Labs but: When starting three simple tests at the same time with Selenium 4.3.0 the end result is always (at least on my M1 Mac) that test 1 and test 3 run and work fine (although in Saucelabs they show up as 'errored') but test 2 fails to start with ``` org.openqa.selenium.SessionNotCreatedException: Could not start a new session. Possible causes are invalid address of the remote server or browser start-up failur e. Build info: version: '4.3.0', revision: 'a4995e2c09*' System info: host: 'myhostname', ip: 'myip', os.name: 'Mac OS X', os.arch: 'aarch64', os.version: '12.4', java.ver sion: '17.0.1' Driver info: org.openqa.selenium.remote.RemoteWebDriver Command: [null, newSession {capabilities=[Capabilities {browserName: chrome, goog:chromeOptions: {args: [], extens ions: []}, sauce:options: {access_key: <key removed>, name: test2, tunnelIdentifier: my-tunnel-id, username: my-sa uce-username}}], desiredCapabilities=Capabilities {browserName: chrome, goog:chromeOptions: {args: [], extensions: []}, sauce:options: {access_key: <key removed>, name: test2, tunnelIdentifier: my-tunnel-id, username: my-sauce-u sername}}}] Capabilities {} at org.openqa.selenium.remote.RemoteWebDriver.execute(RemoteWebDriver.java:587) at org.openqa.selenium.remote.RemoteWebDriver.startSession(RemoteWebDriver.java:264) at org.openqa.selenium.remote.RemoteWebDriver.<init>(RemoteWebDriver.java:179) at org.openqa.selenium.remote.RemoteWebDriver.<init>(RemoteWebDriver.java:161) at com.example.application.MyIT.test(MyIT.java:52) at com.example.application.MyIT.test2(MyIT.java:30) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:77) at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43 ) at java.base/java.lang.reflect.Method.invoke(Method.java:568) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56) at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63) at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at org.junit.runners.ParentRunner.run(ParentRunner.java:413) at org.junit.runner.JUnitCore.run(JUnitCore.java:137) at org.junit.runner.JUnitCore.run(JUnitCore.java:115) at org.junit.vintage.engine.execution.RunnerExecutor.execute(RunnerExecutor.java:42) at org.junit.vintage.engine.VintageTestEngine.executeAllChildren(VintageTestEngine.java:80) at org.junit.vintage.engine.VintageTestEngine.execute(VintageTestEngine.java:72) at org.junit.platform.launcher.core.DefaultLauncher.execute(DefaultLauncher.java:220) at org.junit.platform.launcher.core.DefaultLauncher.lambda$execute$6(DefaultLauncher.java:188) at 
org.junit.platform.launcher.core.DefaultLauncher.withInterceptedStreams(DefaultLauncher.java:202) at org.junit.platform.launcher.core.DefaultLauncher.execute(DefaultLauncher.java:181) at org.junit.platform.launcher.core.DefaultLauncher.execute(DefaultLauncher.java:128) at org.apache.maven.surefire.junitplatform.JUnitPlatformProvider.invokeAllTests(JUnitPlatformProvider.java:150) at org.apache.maven.surefire.junitplatform.JUnitPlatformProvider.invoke(JUnitPlatformProvider.java:124) at org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:384) at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:345) at org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:126) at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:418) Caused by: java.io.UncheckedIOException: org.asynchttpclient.exception.RemotelyClosedException: Remotely closed at org.openqa.selenium.remote.http.netty.NettyHttpHandler.makeCall(NettyHttpHandler.java:73) at org.openqa.selenium.remote.http.AddSeleniumUserAgent.lambda$apply$0(AddSeleniumUserAgent.java:42) at org.openqa.selenium.remote.http.Filter.lambda$andFinally$1(Filter.java:56) at org.openqa.selenium.remote.http.netty.NettyHttpHandler.execute(NettyHttpHandler.java:49) at org.openqa.selenium.remote.http.AddSeleniumUserAgent.lambda$apply$0(AddSeleniumUserAgent.java:42) at org.openqa.selenium.remote.http.Filter.lambda$andFinally$1(Filter.java:56) at org.openqa.selenium.remote.http.netty.NettyClient.execute(NettyClient.java:98) at org.openqa.selenium.remote.tracing.TracedHttpClient.execute(TracedHttpClient.java:55) at org.openqa.selenium.remote.ProtocolHandshake.createSession(ProtocolHandshake.java:120) at org.openqa.selenium.remote.ProtocolHandshake.createSession(ProtocolHandshake.java:102) at org.openqa.selenium.remote.ProtocolHandshake.createSession(ProtocolHandshake.java:67) at 
org.openqa.selenium.remote.HttpCommandExecutor.execute(HttpCommandExecutor.java:156) at org.openqa.selenium.remote.TracedCommandExecutor.execute(TracedCommandExecutor.java:51) at org.openqa.selenium.remote.RemoteWebDriver.execute(RemoteWebDriver.java:569) ... 42 more Caused by: org.asynchttpclient.exception.RemotelyClosedException: Remotely closed at org.asynchttpclient.exception.RemotelyClosedException.INSTANCE(Unknown Source) ``` ### How can we reproduce the issue? ```shell https://github.com/Artur-/saucetest ``` ### Relevant log output ```shell The full log of one run can be found here https://gist.github.com/Artur-/5e5ec1b70949560b1e75361ba27145a4 ``` ### Operating System macOs Big Sur ### Selenium version 4.3.0 ### What are the browser(s) and version(s) where you see this issue? Chrome ### What are the browser driver(s) and version(s) where you see this issue? Remotedriver + Saucelabs ### Are you using Selenium Grid? No
defect
org asynchttpclient exception remotelyclosedexception when starting tests at once with sauce labs what happened not sure if this belongs here or in sauce labs but when starting three simple tests at the same time with selenium the end result is always at least on my mac that test and test run and work fine although in saucelabs they show up as errored but test fails to start with org openqa selenium sessionnotcreatedexception could not start a new session possible causes are invalid address of the remote server or browser start up failur e build info version revision system info host myhostname ip myip os name mac os x os arch os version java ver sion driver info org openqa selenium remote remotewebdriver command extens ions sauce options access key name tunnelidentifier my tunnel id username my sa uce username desiredcapabilities capabilities browsername chrome goog chromeoptions args extensions sauce options access key name tunnelidentifier my tunnel id username my sauce u sername capabilities at org openqa selenium remote remotewebdriver execute remotewebdriver java at org openqa selenium remote remotewebdriver startsession remotewebdriver java at org openqa selenium remote remotewebdriver remotewebdriver java at org openqa selenium remote remotewebdriver remotewebdriver java at com example application myit test myit java at com example application myit myit java at java base jdk internal reflect nativemethodaccessorimpl native method at java base jdk internal reflect nativemethodaccessorimpl invoke nativemethodaccessorimpl java at java base jdk internal reflect delegatingmethodaccessorimpl invoke delegatingmethodaccessorimpl java at java base java lang reflect method invoke method java at org junit runners model frameworkmethod runreflectivecall frameworkmethod java at org junit internal runners model reflectivecallable run reflectivecallable java at org junit runners model frameworkmethod invokeexplosively frameworkmethod java at org junit internal runners 
statements invokemethod evaluate invokemethod java at org junit rules testwatcher evaluate testwatcher java at org junit runners parentrunner evaluate parentrunner java at org junit runners evaluate java at org junit runners parentrunner runleaf parentrunner java at org junit runners runchild java at org junit runners runchild java at org junit runners parentrunner run parentrunner java at org junit runners parentrunner schedule parentrunner java at org junit runners parentrunner runchildren parentrunner java at org junit runners parentrunner access parentrunner java at org junit runners parentrunner evaluate parentrunner java at org junit runners parentrunner evaluate parentrunner java at org junit runners parentrunner run parentrunner java at org junit runner junitcore run junitcore java at org junit runner junitcore run junitcore java at org junit vintage engine execution runnerexecutor execute runnerexecutor java at org junit vintage engine vintagetestengine executeallchildren vintagetestengine java at org junit vintage engine vintagetestengine execute vintagetestengine java at org junit platform launcher core defaultlauncher execute defaultlauncher java at org junit platform launcher core defaultlauncher lambda execute defaultlauncher java at org junit platform launcher core defaultlauncher withinterceptedstreams defaultlauncher java at org junit platform launcher core defaultlauncher execute defaultlauncher java at org junit platform launcher core defaultlauncher execute defaultlauncher java at org apache maven surefire junitplatform junitplatformprovider invokealltests junitplatformprovider java at org apache maven surefire junitplatform junitplatformprovider invoke junitplatformprovider java at org apache maven surefire booter forkedbooter invokeproviderinsameclassloader forkedbooter java at org apache maven surefire booter forkedbooter runsuitesinprocess forkedbooter java at org apache maven surefire booter forkedbooter execute forkedbooter java at org 
apache maven surefire booter forkedbooter main forkedbooter java caused by java io uncheckedioexception org asynchttpclient exception remotelyclosedexception remotely closed at org openqa selenium remote http netty nettyhttphandler makecall nettyhttphandler java at org openqa selenium remote http addseleniumuseragent lambda apply addseleniumuseragent java at org openqa selenium remote http filter lambda andfinally filter java at org openqa selenium remote http netty nettyhttphandler execute nettyhttphandler java at org openqa selenium remote http addseleniumuseragent lambda apply addseleniumuseragent java at org openqa selenium remote http filter lambda andfinally filter java at org openqa selenium remote http netty nettyclient execute nettyclient java at org openqa selenium remote tracing tracedhttpclient execute tracedhttpclient java at org openqa selenium remote protocolhandshake createsession protocolhandshake java at org openqa selenium remote protocolhandshake createsession protocolhandshake java at org openqa selenium remote protocolhandshake createsession protocolhandshake java at org openqa selenium remote httpcommandexecutor execute httpcommandexecutor java at org openqa selenium remote tracedcommandexecutor execute tracedcommandexecutor java at org openqa selenium remote remotewebdriver execute remotewebdriver java more caused by org asynchttpclient exception remotelyclosedexception remotely closed at org asynchttpclient exception remotelyclosedexception instance unknown source how can we reproduce the issue shell relevant log output shell the full log of one run can be found here operating system macos big sur selenium version what are the browser s and version s where you see this issue chrome what are the browser driver s and version s where you see this issue remotedriver saucelabs are you using selenium grid no
1
651,135
21,466,376,164
IssuesEvent
2022-04-26 04:30:32
wso2/product-is
https://api.github.com/repos/wso2/product-is
closed
My account login gives 404 error
ui Priority/Highest Severity/Blocker bug myaccount Affected-5.12.0 QA-Reported
**Describe the issue:** When a user logs in to the my account for the first time it gives a page not found 404 error. <img width="741" alt="Screenshot 2022-04-24 at 22 30 37" src="https://user-images.githubusercontent.com/39077751/164987783-441e4f8f-ec0d-434e-9c5a-c2ec47e0f462.png"> https://user-images.githubusercontent.com/39077751/164987798-0c5d6312-59fc-4452-b016-97d7a79a0d85.mov **How to reproduce:** 1. Login to console 2. Create user 3. Navigate to my account login page 4. Login to the myaccount with created new user **Expected behavior:** The logged-in user should be directed to the myaccount overview page **Environment information** (_Please complete the following information; remove any unnecessary fields_) **:** - Product Version: IS 5.12.0-alpha20 - OS: Mac - Database: MSSQL - Userstore: JDBC - Browser: chrome/chromium --- ### Optional Fields **Related issues:** <!-- Any related issues from this/other repositories--> **Suggested labels:** <!-- Only to be used by non-members -->
1.0
My account login gives 404 error - **Describe the issue:** When a user logs in to the my account for the first time it gives a page not found 404 error. <img width="741" alt="Screenshot 2022-04-24 at 22 30 37" src="https://user-images.githubusercontent.com/39077751/164987783-441e4f8f-ec0d-434e-9c5a-c2ec47e0f462.png"> https://user-images.githubusercontent.com/39077751/164987798-0c5d6312-59fc-4452-b016-97d7a79a0d85.mov **How to reproduce:** 1. Login to console 2. Create user 3. Navigate to my account login page 4. Login to the myaccount with created new user **Expected behavior:** The logged-in user should be directed to the myaccount overview page **Environment information** (_Please complete the following information; remove any unnecessary fields_) **:** - Product Version: IS 5.12.0-alpha20 - OS: Mac - Database: MSSQL - Userstore: JDBC - Browser: chrome/chromium --- ### Optional Fields **Related issues:** <!-- Any related issues from this/other repositories--> **Suggested labels:** <!-- Only to be used by non-members -->
non_defect
my account login gives error describe the issue when a user logs in to the my account for the first time it gives a page not found error img width alt screenshot at src how to reproduce login to console create user navigate to my account login page login to the myaccount with created new user expected behavior the logged in user should be directed to the myaccount overview page environment information please complete the following information remove any unnecessary fields product version is os mac database mssql userstore jdbc browser chrome chromium optional fields related issues suggested labels
0
185,151
14,344,146,756
IssuesEvent
2020-11-28 13:06:27
sugarlabs/musicblocks
https://api.github.com/repos/sugarlabs/musicblocks
closed
Store in Block Issue
Issue-Regression WF6-Needs testing
I seem to be encountering some issue that is causing new `box` blocks created via `store in box` block to fail. I don't yet really have a grasp of what needs to be done to reproduce the issue, but below is the project, logs, and a screenshot. [Store-in-BUG.html.zip](https://github.com/sugarlabs/musicblocks/files/5576404/Store-in-BUG.html.zip) ``` Uncaught (in promise) TypeError: Cannot read property '0' of null at Block.postProcess (blocks.js:2894) at Block._finishImageLoad (block.js:1277) at __callback (block.js:895) at checkBounds (block.js:201) at block.js:208 at new Promise (<anonymous>) at Block._createCache (block.js:176) at __processHighlightBitmap (block.js:920) at Image.img.onload (block.js:8382) logo.js:1603 Uncaught TypeError: logo.blocks.blockList[blk].protoblock.flow is not a function at Logo.runFromBlockNow (logo.js:1603) at logo.js:1477 ``` ![Screenshot at 2020-11-20 22:41:35](https://user-images.githubusercontent.com/13454579/99856755-92498780-2b81-11eb-8ba3-ea7e66f13bda.png)
1.0
Store in Block Issue - I seem to be encountering some issue that is causing new `box` blocks created via `store in box` block to fail. I don't yet really have a grasp of what needs to be done to reproduce the issue, but below is the project, logs, and a screenshot. [Store-in-BUG.html.zip](https://github.com/sugarlabs/musicblocks/files/5576404/Store-in-BUG.html.zip) ``` Uncaught (in promise) TypeError: Cannot read property '0' of null at Block.postProcess (blocks.js:2894) at Block._finishImageLoad (block.js:1277) at __callback (block.js:895) at checkBounds (block.js:201) at block.js:208 at new Promise (<anonymous>) at Block._createCache (block.js:176) at __processHighlightBitmap (block.js:920) at Image.img.onload (block.js:8382) logo.js:1603 Uncaught TypeError: logo.blocks.blockList[blk].protoblock.flow is not a function at Logo.runFromBlockNow (logo.js:1603) at logo.js:1477 ``` ![Screenshot at 2020-11-20 22:41:35](https://user-images.githubusercontent.com/13454579/99856755-92498780-2b81-11eb-8ba3-ea7e66f13bda.png)
non_defect
store in block issue i seem to be encountering some issue that is causing new box blocks created via store in box block to fail i don t yet really have a grasp of what needs to be done to reproduce the issue but below is the project logs and a screenshot uncaught in promise typeerror cannot read property of null at block postprocess blocks js at block finishimageload block js at callback block js at checkbounds block js at block js at new promise at block createcache block js at processhighlightbitmap block js at image img onload block js logo js uncaught typeerror logo blocks blocklist protoblock flow is not a function at logo runfromblocknow logo js at logo js
0
654,661
21,658,900,521
IssuesEvent
2022-05-06 16:53:46
IBMa/equal-access
https://api.github.com/repos/IBMa/equal-access
closed
warn for aria-readonly="true" on any element that also has a readonly attribute
Bug Engine priority-2 (med) Ready for QA
Steps: - https://w3c.github.io/html-aria/tests/readonly-test.html - run accessibility checker - note that for Test 1 (aria-readonly="true" with readonly) input type=password, date, month, week, time are flagged as "The WAI-ARIA role or attribute 'aria-readonly' is not valid for the element `<input>`" violations - instead, should flag all 13 inputs in the test with a warning saying authors shouldn't use aria-readonly="true" with readonly
1.0
warn for aria-readonly="true" on any element that also has a readonly attribute - Steps: - https://w3c.github.io/html-aria/tests/readonly-test.html - run accessibility checker - note that for Test 1 (aria-readonly="true" with readonly) input type=password, date, month, week, time are flagged as "The WAI-ARIA role or attribute 'aria-readonly' is not valid for the element `<input>`" violations - instead, should flag all 13 inputs in the test with a warning saying authors shouldn't use aria-readonly="true" with readonly
non_defect
warn for aria readonly true on any element that also has a readonly attribute steps run accessibility checker note that for test aria readonly true with readonly input type password date month week time are flagged as the wai aria role or attribute aria readonly is not valid for the element violations instead should flag all inputs in the test with a warning saying authors shouldn t use aria readonly true with readonly
0
657,608
21,797,966,258
IssuesEvent
2022-05-15 22:23:36
jerboa88/Mergist
https://api.github.com/repos/jerboa88/Mergist
closed
Reset button state upon merge error
⏰ Status: In Progress 🗃 Type: Bug 🛡 Priority: High
**Helpful info** - Browser & version: Chrome 101.0.4951.54, probably others **Describe the bug** When a serious error is encountered in the merging process (ex. corrupted PDF file), an error alert is shown and we exit the createMergedFile method but the action button is never reset and still shows the progress percent. **Steps to reproduce** Steps to reproduce the behavior: 1. Upload a corrupted file (ex. [Mergist Test Document 5 (corrupt).pdf](https://github.com/jerboa88/Mergist/blob/main/src/tests/samples/Mergist%20Test%20Document%205%20(corrupt).pdf)) 2. Click the merge button 3. See action button in indeterminate state **Expected behavior** We should either reset the button to its initial state (Merge), or change it to a disabled error button so that the user can remove the problematic file before trying again. **Screenshots** ![image](https://user-images.githubusercontent.com/9030780/167963133-0d55a737-cb70-414a-b189-411bf6ba4c82.png)
1.0
Reset button state upon merge error - **Helpful info** - Browser & version: Chrome 101.0.4951.54, probably others **Describe the bug** When a serious error is encountered in the merging process (ex. corrupted PDF file), an error alert is shown and we exit the createMergedFile method but the action button is never reset and still shows the progress percent. **Steps to reproduce** Steps to reproduce the behavior: 1. Upload a corrupted file (ex. [Mergist Test Document 5 (corrupt).pdf](https://github.com/jerboa88/Mergist/blob/main/src/tests/samples/Mergist%20Test%20Document%205%20(corrupt).pdf)) 2. Click the merge button 3. See action button in indeterminate state **Expected behavior** We should either reset the button to its initial state (Merge), or change it to a disabled error button so that the user can remove the problematic file before trying again. **Screenshots** ![image](https://user-images.githubusercontent.com/9030780/167963133-0d55a737-cb70-414a-b189-411bf6ba4c82.png)
non_defect
reset button state upon merge error helpful info browser version chrome probably others describe the bug when a serious error is encountered in the merging process ex corrupted pdf file an error alert is shown and we exit the createmergedfile method but the action button is never reset and still shows the progress percent steps to reproduce steps to reproduce the behavior upload a corrupted file ex click the merge button see action button in indeterminate state expected behavior we should either reset the button to its initial state merge or change it to a disabled error button so that the user can remove the problematic file before trying again screenshots
0
37,163
8,272,289,495
IssuesEvent
2018-09-16 18:34:11
slacka/WoeUSB
https://api.github.com/repos/slacka/WoeUSB
closed
Unbound local true? Feel free to move & edit post
defect duplicate
Installation failed! Exit code: 256 Log: WoeUSB v@@WOEUSB_VERSION@@ ============================== Mounting source filesystem... Wiping all existing partition table and filesystem signatures in /dev/sdb... /dev/sdb: 2 bytes were erased at offset 0x000001fe (dos): 55 aa /dev/sdb: calling ioctl to re-read partition table: Success Ensure that /dev/sdb is really wiped... Creating new partition table on /dev/sdb... Creating target partition... Making system realize that partition table has changed... Wait 3 seconds for block device nodes to populate... mkfs.fat: warning - lowercase labels might not work properly with DOS or Windows mkfs.fat 4.1 (2017-01-24) Mounting target filesystem... Applying workaround to prevent 64-bit systems with big primary memory from being unresponsive during copying files. /usr/bin/woeusb: line 1348: true: unbound variable Resetting workaround to prevent 64-bit systems with big primary memory from being unresponsive during copying files. Unmounting and removing "/media/woeusb_source_1537116872_20564"... Unmounting and removing "/media/woeusb_target_1537116872_20564"... You may now safely detach the target device
1.0
Unbound local true? Feel free to move & edit post - Installation failed! Exit code: 256 Log: WoeUSB v@@WOEUSB_VERSION@@ ============================== Mounting source filesystem... Wiping all existing partition table and filesystem signatures in /dev/sdb... /dev/sdb: 2 bytes were erased at offset 0x000001fe (dos): 55 aa /dev/sdb: calling ioctl to re-read partition table: Success Ensure that /dev/sdb is really wiped... Creating new partition table on /dev/sdb... Creating target partition... Making system realize that partition table has changed... Wait 3 seconds for block device nodes to populate... mkfs.fat: warning - lowercase labels might not work properly with DOS or Windows mkfs.fat 4.1 (2017-01-24) Mounting target filesystem... Applying workaround to prevent 64-bit systems with big primary memory from being unresponsive during copying files. /usr/bin/woeusb: line 1348: true: unbound variable Resetting workaround to prevent 64-bit systems with big primary memory from being unresponsive during copying files. Unmounting and removing "/media/woeusb_source_1537116872_20564"... Unmounting and removing "/media/woeusb_target_1537116872_20564"... You may now safely detach the target device
defect
unbound local true feel free to move edit post installation failed exit code log woeusb v woeusb version mounting source filesystem wiping all existing partition table and filesystem signatures in dev sdb dev sdb bytes were erased at offset dos aa dev sdb calling ioctl to re read partition table success ensure that dev sdb is really wiped creating new partition table on dev sdb creating target partition making system realize that partition table has changed wait seconds for block device nodes to populate mkfs fat warning lowercase labels might not work properly with dos or windows mkfs fat mounting target filesystem applying workaround to prevent bit systems with big primary memory from being unresponsive during copying files usr bin woeusb line true unbound variable resetting workaround to prevent bit systems with big primary memory from being unresponsive during copying files unmounting and removing media woeusb source unmounting and removing media woeusb target you may now safely detach the target device
1
32,768
6,925,499,033
IssuesEvent
2017-11-30 16:06:41
omniti-labs/mungo
https://api.github.com/repos/omniti-labs/mungo
closed
Large file upload broken
P: Major R: fixed T: defect
**Reported by jesus on 1 Jan 1970 00:19 UTC** Large file uploads result in zero bytes available on the file handle.
1.0
Large file upload broken - **Reported by jesus on 1 Jan 1970 00:19 UTC** Large file uploads result in zero bytes available on the file handle.
defect
large file upload broken reported by jesus on jan utc large file uploads result in zero bytes available on the file handle
1
34,298
2,776,730,101
IssuesEvent
2015-05-04 23:49:57
GoogleCloudPlatform/kubernetes
https://api.github.com/repos/GoogleCloudPlatform/kubernetes
closed
LivenessProbe of type exec appears completely broken
priority/P1 team/node
As far as I can tell, container liveness probes of type exec flat out don't work against clusters built at head. I had the kubelet always log the output of the exec'ed command, and it's always empty no matter what I exec, causing the probe to fail since it requires the output to be equal to the string "ok". If you want to test it, try adding a liveness probe of this form to any container: ``` livenessProbe: exec: command: - "/bin/sh" - "-c" - "echo ok" ``` I also tried these lovely forms, to more closely match the liveness example under examples/: ``` livenessProbe: exec: command: - "/bin/sh" - "-c" - "echo ok > temp; cat temp" ``` ``` livenessProbe: exec: command: - "echo" - "ok" ``` On the plus side, this may be an opportunistic time to switch over to #7587
1.0
LivenessProbe of type exec appears completely broken - As far as I can tell, container liveness probes of type exec flat out don't work against clusters built at head. I had the kubelet always log the output of the exec'ed command, and it's always empty no matter what I exec, causing the probe to fail since it requires the output to be equal to the string "ok". If you want to test it, try adding a liveness probe of this form to any container: ``` livenessProbe: exec: command: - "/bin/sh" - "-c" - "echo ok" ``` I also tried these lovely forms, to more closely match the liveness example under examples/: ``` livenessProbe: exec: command: - "/bin/sh" - "-c" - "echo ok > temp; cat temp" ``` ``` livenessProbe: exec: command: - "echo" - "ok" ``` On the plus side, this may be an opportunistic time to switch over to #7587
non_defect
livenessprobe of type exec appears completely broken as far as i can tell container liveness probes of type exec flat out don t work against clusters built at head i had the kubelet always log the output of the exec ed command and it s always empty no matter what i exec causing the probe to fail since it requires the output to be equal to the string ok if you want to test it try adding a liveness probe of this form to any container livenessprobe exec command bin sh c echo ok i also tried these lovely forms to more closely match the liveness example under examples livenessprobe exec command bin sh c echo ok temp cat temp livenessprobe exec command echo ok on the plus side this may be an opportunistic time to switch over to
0
44,857
11,520,948,621
IssuesEvent
2020-02-14 15:44:40
BatchDrake/suscan
https://api.github.com/repos/BatchDrake/suscan
closed
SoapySDR headers not found on macOS
build-issue
Hey As you know I test on 2 MacBooks. On one MacBook, I have compiled SoapySDR from source, so there's no issues. On the other one, I have installed it via brew. On this second machine, I get error: `/Users/mehdi/Documents/source/suscan/analyzer/source.h:30:10: fatal error: 'SoapySDR/Device.h' file not found` Well I fixed it temporarily by setting the CPATH variable in command line. Couldn't find any strange things in your CMakeLists
1.0
SoapySDR headers not found on macOS - Hey As you know I test on 2 MacBooks. On one MacBook, I have compiled SoapySDR from source, so there's no issues. On the other one, I have installed it via brew. On this second machine, I get error: `/Users/mehdi/Documents/source/suscan/analyzer/source.h:30:10: fatal error: 'SoapySDR/Device.h' file not found` Well I fixed it temporarily by setting the CPATH variable in command line. Couldn't find any strange things in your CMakeLists
non_defect
soapysdr headers not found on macos hey as you know i test on macbooks on one macbook i have compiled soapysdr from source so there s no issues on the other one i have installed it via brew on this second machine i get error users mehdi documents source suscan analyzer source h fatal error soapysdr device h file not found well i fixed it temporarily by setting the cpath variable in command line couldn t find any strange things in your cmakelists
0
283,644
30,913,509,956
IssuesEvent
2023-08-05 02:06:19
hshivhare67/kernel_v4.19.72
https://api.github.com/repos/hshivhare67/kernel_v4.19.72
reopened
CVE-2023-0590 (Medium) detected in linuxlinux-4.19.282
Mend: dependency security vulnerability
## CVE-2023-0590 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linuxlinux-4.19.282</b></p></summary> <p> <p>The Linux Kernel</p> <p>Library home page: <a href=https://mirrors.edge.kernel.org/pub/linux/kernel/v4.x/?wsslib=linux>https://mirrors.edge.kernel.org/pub/linux/kernel/v4.x/?wsslib=linux</a></p> <p>Found in HEAD commit: <a href="https://github.com/hshivhare67/kernel_v4.19.72/commit/139c4e073703974ca0b05255c4cff6dcd52a8e31">139c4e073703974ca0b05255c4cff6dcd52a8e31</a></p> <p>Found in base branch: <b>master</b></p></p> </details> </p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (2)</summary> <p></p> <p> <img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/net/sched/sch_api.c</b> <img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/net/sched/sch_api.c</b> </p> </details> <p></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png?' width=19 height=20> Vulnerability Details</summary> <p> A use-after-free flaw was found in qdisc_graft in net/sched/sch_api.c in the Linux Kernel due to a race problem. This flaw leads to a denial of service issue. If patch ebda44da44f6 ("net: sched: fix race condition in qdisc_graft()") not applied yet, then kernel could be affected. 
<p>Publish Date: 2023-03-23 <p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2023-0590>CVE-2023-0590</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>4.7</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Local - Attack Complexity: High - Privileges Required: Low - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://www.linuxkernelcves.com/cves/CVE-2023-0590">https://www.linuxkernelcves.com/cves/CVE-2023-0590</a></p> <p>Release Date: 2023-03-23</p> <p>Fix Resolution: v5.10.152,v5.15.76,v6.0.6,v6.1-rc2</p> </p> </details> <p></p> *** Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
True
CVE-2023-0590 (Medium) detected in linuxlinux-4.19.282 - ## CVE-2023-0590 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linuxlinux-4.19.282</b></p></summary> <p> <p>The Linux Kernel</p> <p>Library home page: <a href=https://mirrors.edge.kernel.org/pub/linux/kernel/v4.x/?wsslib=linux>https://mirrors.edge.kernel.org/pub/linux/kernel/v4.x/?wsslib=linux</a></p> <p>Found in HEAD commit: <a href="https://github.com/hshivhare67/kernel_v4.19.72/commit/139c4e073703974ca0b05255c4cff6dcd52a8e31">139c4e073703974ca0b05255c4cff6dcd52a8e31</a></p> <p>Found in base branch: <b>master</b></p></p> </details> </p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (2)</summary> <p></p> <p> <img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/net/sched/sch_api.c</b> <img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/net/sched/sch_api.c</b> </p> </details> <p></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png?' width=19 height=20> Vulnerability Details</summary> <p> A use-after-free flaw was found in qdisc_graft in net/sched/sch_api.c in the Linux Kernel due to a race problem. This flaw leads to a denial of service issue. If patch ebda44da44f6 ("net: sched: fix race condition in qdisc_graft()") not applied yet, then kernel could be affected. 
<p>Publish Date: 2023-03-23 <p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2023-0590>CVE-2023-0590</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>4.7</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Local - Attack Complexity: High - Privileges Required: Low - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://www.linuxkernelcves.com/cves/CVE-2023-0590">https://www.linuxkernelcves.com/cves/CVE-2023-0590</a></p> <p>Release Date: 2023-03-23</p> <p>Fix Resolution: v5.10.152,v5.15.76,v6.0.6,v6.1-rc2</p> </p> </details> <p></p> *** Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
non_defect
cve medium detected in linuxlinux cve medium severity vulnerability vulnerable library linuxlinux the linux kernel library home page a href found in head commit a href found in base branch master vulnerable source files net sched sch api c net sched sch api c vulnerability details a use after free flaw was found in qdisc graft in net sched sch api c in the linux kernel due to a race problem this flaw leads to a denial of service issue if patch net sched fix race condition in qdisc graft not applied yet then kernel could be affected publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity high privileges required low user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with mend
0
54,971
14,106,441,860
IssuesEvent
2020-11-06 14:56:15
jOOQ/jOOQ
https://api.github.com/repos/jOOQ/jOOQ
opened
Misleading code examples ALTER TABLE
T: Defect
### Expected behavior In the [ALTER TABLE](https://www.jooq.org/doc/latest/manual/sql-building/ddl-statements/alter-statement/alter-table-statement/) documentation, I can find an example of how to add multiple columns to a table in one go ```java create.alterTable("table").add("column1", INTEGER).add("column2", INTEGER).execute(); ``` According to doc, this code should work starting from `3.11.x` version. I expect this code example to compile and execute successfully. ### Actual behavior Compilation error ```bash java: cannot find symbol symbol: method add(java.lang.String,org.jooq.DataType<java.lang.Integer>) location: interface org.jooq.AlterTableFinalStep ``` ### Steps to reproduce the problem - Use the aforementioned code snippet ### Versions - jOOQ: 3.11.x - Java: 8, 11, 15 - Database (include vendor): N/A - OS: N/A - JDBC Driver (include name if inofficial driver): N/A
1.0
Misleading code examples ALTER TABLE - ### Expected behavior In the [ALTER TABLE](https://www.jooq.org/doc/latest/manual/sql-building/ddl-statements/alter-statement/alter-table-statement/) documentation, I can find an example of how to add multiple columns to a table in one go ```java create.alterTable("table").add("column1", INTEGER).add("column2", INTEGER).execute(); ``` According to doc, this code should work starting from `3.11.x` version. I expect this code example to compile and execute successfully. ### Actual behavior Compilation error ```bash java: cannot find symbol symbol: method add(java.lang.String,org.jooq.DataType<java.lang.Integer>) location: interface org.jooq.AlterTableFinalStep ``` ### Steps to reproduce the problem - Use the aforementioned code snippet ### Versions - jOOQ: 3.11.x - Java: 8, 11, 15 - Database (include vendor): N/A - OS: N/A - JDBC Driver (include name if inofficial driver): N/A
defect
misleading code examples alter table expected behavior in the documentation i can find an example of how to add multiple columns to a table in one go java create altertable table add integer add integer execute according to doc this code should work starting from x version i expect this code example to compile and execute successfully actual behavior compilation error bash java cannot find symbol symbol method add java lang string org jooq datatype location interface org jooq altertablefinalstep steps to reproduce the problem use the aforementioned code snippet versions jooq x java database include vendor n a os n a jdbc driver include name if inofficial driver n a
1
61,083
17,023,597,456
IssuesEvent
2021-07-03 02:50:39
tomhughes/trac-tickets
https://api.github.com/repos/tomhughes/trac-tickets
closed
IP blocked
Component: nominatim Priority: minor Resolution: fixed Type: defect
**[Submitted to the original trac issue database at 2.30pm, Tuesday, 25th May 2010]** Hi, I am working with the nominatim OSM but now my IP is blocked. I haven't noticed that I should include my emailadress to the request. How is it possible to continue my work? Thanks for your help Miriam
1.0
IP blocked - **[Submitted to the original trac issue database at 2.30pm, Tuesday, 25th May 2010]** Hi, I am working with the nominatim OSM but now my IP is blocked. I haven't noticed that I should include my emailadress to the request. How is it possible to continue my work? Thanks for your help Miriam
defect
ip blocked hi i am working with the nominatim osm but now my ip is blocked i haven t noticed that i should include my emailadress to the request how is it possible to continue my work thanks for your help miriam
1
13,782
2,784,101,447
IssuesEvent
2015-05-07 07:13:58
sylingd/phpsocks5
https://api.github.com/repos/sylingd/phpsocks5
closed
请问“修改为PHP虚拟主机提供的数据库配置”是什么
auto-migrated Priority-Medium Type-Defect
``` 您好, 请问使用方法中 1、修改socks5.php前5行代码的数据库配置,修改为PHP虚拟主机提供的数据库配置。这里的“PHP虚拟主机提供的数据库配置”是指什么,怎么得到这些数据,是自己新建一个MySQL数据库供phpsocks5代理使用吗? 谢谢 ``` Original issue reported on code.google.com by `yong...@gmail.com` on 27 Feb 2011 at 12:19
1.0
请问“修改为PHP虚拟主机提供的数据库配置”是什么 - ``` 您好, 请问使用方法中 1、修改socks5.php前5行代码的数据库配置,修改为PHP虚拟主机提供的数据库配置。这里的“PHP虚拟主机提供的数据库配置”是指什么,怎么得到这些数据,是自己新建一个MySQL数据库供phpsocks5代理使用吗? 谢谢 ``` Original issue reported on code.google.com by `yong...@gmail.com` on 27 Feb 2011 at 12:19
defect
请问“修改为php虚拟主机提供的数据库配置”是什么 您好, 请问使用方法中 、 ,修改为php虚拟主机�� �供的数据库配置。这里的“php虚拟主机提供的数据库配置”� ��指什么,怎么得到这些数据,是自己新建一个mysql数据库供p ? 谢谢 original issue reported on code google com by yong gmail com on feb at
1
318,850
27,326,347,167
IssuesEvent
2023-02-25 03:51:21
nrwl/nx
https://api.github.com/repos/nrwl/nx
closed
support "BUILD ONLY" mode for cypress builder
type: feature scope: testing tools
_[Please make sure you have read the submission guidelines before posting an issue](https://github.com/nrwl/nx/blob/master/CONTRIBUTING.md#-submitting-issue)_ # Prerequisites Please answer the following questions for yourself before submitting an issue. **YOU MAY DELETE THE PREREQUISITES SECTION.** - [x] I am running the latest version - [x] I checked the documentation and found no answer - [x] I checked to make sure that this issue has not already been filed - [x] I'm reporting the issue to the correct repository (not related to Angular, AngularCLI or any dependency) ## Expected Behavior cypress builder should allow an opt-in option to skip the runner and only perform the build task (compile TS specs, copy fixtures, etc...) This option should also support watch mode. ## Current Behavior Couldn't find a way to skip the runner, the builder will build TS specs and then run the dev server using the target defined. ## Other Perhaps I get it all wrong, but with the current behaviour local development is great. When working in the CI, the first action is to perform our build and then I would want to run cypress on the production build (using a standalone http server). To do this I need the compiled files (TS spec, fixtures etc...) Once I have them I can run cypress manually using the same `cypress.json` This is a whole-lot faster then running cypress under the angular cli... > If it's possible to perform a build of the app + build of the e2e app in one command it would be great (I don't know if multi-target builders are supported by the cli) Thanks
1.0
support "BUILD ONLY" mode for cypress builder - _[Please make sure you have read the submission guidelines before posting an issue](https://github.com/nrwl/nx/blob/master/CONTRIBUTING.md#-submitting-issue)_ # Prerequisites Please answer the following questions for yourself before submitting an issue. **YOU MAY DELETE THE PREREQUISITES SECTION.** - [x] I am running the latest version - [x] I checked the documentation and found no answer - [x] I checked to make sure that this issue has not already been filed - [x] I'm reporting the issue to the correct repository (not related to Angular, AngularCLI or any dependency) ## Expected Behavior cypress builder should allow an opt-in option to skip the runner and only perform the build task (compile TS specs, copy fixtures, etc...) This option should also support watch mode. ## Current Behavior Couldn't find a way to skip the runner, the builder will build TS specs and then run the dev server using the target defined. ## Other Perhaps I get it all wrong, but with the current behaviour local development is great. When working in the CI, the first action is to perform our build and then I would want to run cypress on the production build (using a standalone http server). To do this I need the compiled files (TS spec, fixtures etc...) Once I have them I can run cypress manually using the same `cypress.json` This is a whole-lot faster then running cypress under the angular cli... > If it's possible to perform a build of the app + build of the e2e app in one command it would be great (I don't know if multi-target builders are supported by the cli) Thanks
non_defect
support build only mode for cypress builder prerequisites please answer the following questions for yourself before submitting an issue you may delete the prerequisites section i am running the latest version i checked the documentation and found no answer i checked to make sure that this issue has not already been filed i m reporting the issue to the correct repository not related to angular angularcli or any dependency expected behavior cypress builder should allow an opt in option to skip the runner and only perform the build task compile ts specs copy fixtures etc this option should also support watch mode current behavior couldn t find a way to skip the runner the builder will build ts specs and then run the dev server using the target defined other perhaps i get it all wrong but with the current behaviour local development is great when working in the ci the first action is to perform our build and then i would want to run cypress on the production build using a standalone http server to do this i need the compiled files ts spec fixtures etc once i have them i can run cypress manually using the same cypress json this is a whole lot faster then running cypress under the angular cli if it s possible to perform a build of the app build of the app in one command it would be great i don t know if multi target builders are supported by the cli thanks
0
106,974
13,402,233,940
IssuesEvent
2020-09-03 18:37:20
EduRAIN/edurain-client
https://api.github.com/repos/EduRAIN/edurain-client
opened
Display user profile info
need designs new feature
Display user profile info in editable format so users can review and update in a single page.
1.0
Display user profile info - Display user profile info in editable format so users can review and update in a single page.
non_defect
display user profile info display user profile info in editable format so users can review and update in a single page
0
240,110
20,011,945,796
IssuesEvent
2022-02-01 07:55:21
kyma-project/busola
https://api.github.com/repos/kyma-project/busola
closed
Fail the whole suite if the first test fails
estimate: 3 test-enhancement
<!-- Thank you for your contribution. Before you submit the issue: 1. Search open and closed issues for duplicates. 2. Read the contributing guidelines. --> **Description** In one `test.spec.js` we have steps and after failing one step the whole test should be broken and failed. Because if the first or second step fails we don't want to and don't have to wait for all failing steps. <!-- Provide a clear and concise description of the feature. --> **Acceptance criteria** - [ ] `test.spec.js` breaks after fail in one inside step - [ ] the rest of tests are running <!-- Explain why we should add this feature. Provide use cases to illustrate its benefits. --> <!-- Attach any files, links, code samples, or screenshots that will convince us to your idea. -->
1.0
Fail the whole suite if the first test fails - <!-- Thank you for your contribution. Before you submit the issue: 1. Search open and closed issues for duplicates. 2. Read the contributing guidelines. --> **Description** In one `test.spec.js` we have steps and after failing one step the whole test should be broken and failed. Because if the first or second step fails we don't want to and don't have to wait for all failing steps. <!-- Provide a clear and concise description of the feature. --> **Acceptance criteria** - [ ] `test.spec.js` breaks after fail in one inside step - [ ] the rest of tests are running <!-- Explain why we should add this feature. Provide use cases to illustrate its benefits. --> <!-- Attach any files, links, code samples, or screenshots that will convince us to your idea. -->
non_defect
fail the whole suite if the first test fails thank you for your contribution before you submit the issue search open and closed issues for duplicates read the contributing guidelines description in one test spec js we have steps and after failing one step the whole test should be broken and failed because if the first or second step fails we don t want to and don t have to wait for all failing steps acceptance criteria test spec js breaks after fail in one inside step the rest of tests are running
0
21,594
3,521,683,505
IssuesEvent
2016-01-13 03:48:59
Javier-DarthPalpatine/firefox-hide-caption-titlebar-plus
https://api.github.com/repos/Javier-DarthPalpatine/firefox-hide-caption-titlebar-plus
closed
[SmallTabs pref.] HCTP 2.9.5 Conflicts with Tab Utilities Multirow Tabs
auto-migrated Priority-Medium Type-Defect
``` What steps will reproduce the problem? 1.Enabling HCTP 2.9.5 with Tab Utilities 1.5.28.1.1-signed, 1.6pre21, Tab Utilities Fixed 1.5.2015.04.05 2.Disabling HCTP or rolling back to 2.9.4.1 solves the issue. 3. What is the expected output? Multirow Tabs. Two in my case. What do you see instead? Only top row of two tabs is displayed. What version of "Hide Caption Titlebar Plus" addon are you using? Rolled back to Version 2.9.4.1-signed so it works. What version of Firefox? 32.0.3 On what operating system? Windows 7 ``` Original issue reported on code.google.com by `vab...@gmail.com` on 29 May 2015 at 12:02
1.0
[SmallTabs pref.] HCTP 2.9.5 Conflicts with Tab Utilities Multirow Tabs - ``` What steps will reproduce the problem? 1.Enabling HCTP 2.9.5 with Tab Utilities 1.5.28.1.1-signed, 1.6pre21, Tab Utilities Fixed 1.5.2015.04.05 2.Disabling HCTP or rolling back to 2.9.4.1 solves the issue. 3. What is the expected output? Multirow Tabs. Two in my case. What do you see instead? Only top row of two tabs is displayed. What version of "Hide Caption Titlebar Plus" addon are you using? Rolled back to Version 2.9.4.1-signed so it works. What version of Firefox? 32.0.3 On what operating system? Windows 7 ``` Original issue reported on code.google.com by `vab...@gmail.com` on 29 May 2015 at 12:02
defect
hctp conflicts with tab utilities multirow tabs what steps will reproduce the problem enabling hctp with tab utilities signed tab utilities fixed disabling hctp or rolling back to solves the issue what is the expected output multirow tabs two in my case what do you see instead only top row of two tabs is displayed what version of hide caption titlebar plus addon are you using rolled back to version signed so it works what version of firefox on what operating system windows original issue reported on code google com by vab gmail com on may at
1
8,599
2,611,532,287
IssuesEvent
2015-02-27 06:03:35
chrsmith/hedgewars
https://api.github.com/repos/chrsmith/hedgewars
opened
Stupid check for DLC/Stupid asterisk
auto-migrated Priority-Medium Type-Defect
``` What steps will reproduce the problem? 1. Pack game resources into .hwp file 2. Try network play 3. All your resources are prepended with asterisk effectively preventing other from seeing the match to your settings on their side ("*Highlander" for example). Also potentially there could be other sources of triggering DLC flag from this code: QString scriptPath = PHYSFS_getRealDir(QString("Scripts/Multiplayer/%1.lua").arg(script).toLocal8Bit( ).data()); bool isDLC = !scriptPath.startsWith(datadir->absolutePath()); "/" vs "\" mismatch for example ``` Original issue reported on code.google.com by `unC0Rr` on 2 Jun 2013 at 7:06
1.0
Stupid check for DLC/Stupid asterisk - ``` What steps will reproduce the problem? 1. Pack game resources into .hwp file 2. Try network play 3. All your resources are prepended with asterisk effectively preventing other from seeing the match to your settings on their side ("*Highlander" for example). Also potentially there could be other sources of triggering DLC flag from this code: QString scriptPath = PHYSFS_getRealDir(QString("Scripts/Multiplayer/%1.lua").arg(script).toLocal8Bit( ).data()); bool isDLC = !scriptPath.startsWith(datadir->absolutePath()); "/" vs "\" mismatch for example ``` Original issue reported on code.google.com by `unC0Rr` on 2 Jun 2013 at 7:06
defect
stupid check for dlc stupid asterisk what steps will reproduce the problem pack game resources into hwp file try network play all your resources are prepended with asterisk effectively preventing other from seeing the match to your settings on their side highlander for example also potentially there could be other sources of triggering dlc flag from this code qstring scriptpath physfs getrealdir qstring scripts multiplayer lua arg script data bool isdlc scriptpath startswith datadir absolutepath vs mismatch for example original issue reported on code google com by on jun at
1
108,374
11,590,525,425
IssuesEvent
2020-02-24 07:02:03
milvus-io/docs
https://api.github.com/repos/milvus-io/docs
closed
[Suggestion] Add a question to the FAQ to explain how data is stored
documentation
> Note: This repository is ONLY used to solve issues related to DOCS. > For other issues, please move to [other repositories](https://github.com/milvus-io/). **Is there anything that's missing or inappropriate in the docs? Please describe.** Add a question to the FAQ to explain how data is stored. Metadata is stored in the database, while search data is stored as files.
1.0
[Suggestion] Add a question to the FAQ to explain how data is stored - > Note: This repository is ONLY used to solve issues related to DOCS. > For other issues, please move to [other repositories](https://github.com/milvus-io/). **Is there anything that's missing or inappropriate in the docs? Please describe.** Add a question to the FAQ to explain how data is stored. Metadata is stored in the database, while search data is stored as files.
non_defect
add a question to the faq to explain how data is stored note this repository is only used to solve issues related to docs for other issues please move to is there anything that s missing or inappropriate in the docs please describe add a question to the faq to explain how data is stored metadata is stored in the database while search data is stored as files
0
21,006
3,441,894,231
IssuesEvent
2015-12-14 20:20:07
wdg/blacktree-secrets
https://api.github.com/repos/wdg/blacktree-secrets
closed
Secrets crashes when I try to update it
auto-migrated Priority-Medium Type-Defect
``` What steps will reproduce the problem? 1. Clicking on the update button 2. Turns into spinning beaching 3. then quits What is the expected output? What do you see instead? As I said it will then quit and cause the sys prefs to fail What version of the product are you using? On what operating system? My version 10.5.8 and secrets is PrefPane 1.0.6 Please provide any additional information below. There is not more to say, if I do not press the update button, it run fine. Also would please clarify how the blue & black, bold black & blue differ in functionality and if they are meant for other systems other than 10.5.8. ``` Original issue reported on code.google.com by `gameboybobalpha@gmail.com` on 13 Oct 2012 at 7:55
1.0
Secrets crashes when I try to update it - ``` What steps will reproduce the problem? 1. Clicking on the update button 2. Turns into spinning beaching 3. then quits What is the expected output? What do you see instead? As I said it will then quit and cause the sys prefs to fail What version of the product are you using? On what operating system? My version 10.5.8 and secrets is PrefPane 1.0.6 Please provide any additional information below. There is not more to say, if I do not press the update button, it run fine. Also would please clarify how the blue & black, bold black & blue differ in functionality and if they are meant for other systems other than 10.5.8. ``` Original issue reported on code.google.com by `gameboybobalpha@gmail.com` on 13 Oct 2012 at 7:55
defect
secrets crashes when i try to update it what steps will reproduce the problem clicking on the update button turns into spinning beaching then quits what is the expected output what do you see instead as i said it will then quit and cause the sys prefs to fail what version of the product are you using on what operating system my version and secrets is prefpane please provide any additional information below there is not more to say if i do not press the update button it run fine also would please clarify how the blue black bold black blue differ in functionality and if they are meant for other systems other than original issue reported on code google com by gameboybobalpha gmail com on oct at
1
6,503
2,610,255,881
IssuesEvent
2015-02-26 19:21:45
chrsmith/dsdsdaadf
https://api.github.com/repos/chrsmith/dsdsdaadf
opened
深圳激光祛除痘疤
auto-migrated Priority-Medium Type-Defect
``` 深圳激光祛除痘疤【深圳韩方科颜全国热线400-869-1818,24小时 QQ4008691818】深圳韩方科颜专业祛痘连锁机构,机构以韩国秘�� �——韩方科颜这一国妆准字号治疗型权威,祛痘佳品,韩方� ��颜专业祛痘连锁机构,采用韩国秘方配合专业“不反弹”健 康祛痘技术并结合先进“先进豪华彩光”仪,开创国内专业�� �疗粉刺、痤疮签约包治先河,成功消除了许多顾客脸上的痘� ��。 ``` ----- Original issue reported on code.google.com by `szft...@163.com` on 14 May 2014 at 8:28
1.0
深圳激光祛除痘疤 - ``` 深圳激光祛除痘疤【深圳韩方科颜全国热线400-869-1818,24小时 QQ4008691818】深圳韩方科颜专业祛痘连锁机构,机构以韩国秘�� �——韩方科颜这一国妆准字号治疗型权威,祛痘佳品,韩方� ��颜专业祛痘连锁机构,采用韩国秘方配合专业“不反弹”健 康祛痘技术并结合先进“先进豪华彩光”仪,开创国内专业�� �疗粉刺、痤疮签约包治先河,成功消除了许多顾客脸上的痘� ��。 ``` ----- Original issue reported on code.google.com by `szft...@163.com` on 14 May 2014 at 8:28
defect
深圳激光祛除痘疤 深圳激光祛除痘疤【 , 】深圳韩方科颜专业祛痘连锁机构,机构以韩国秘�� �——韩方科颜这一国妆准字号治疗型权威,祛痘佳品,韩方� ��颜专业祛痘连锁机构,采用韩国秘方配合专业“不反弹”健 康祛痘技术并结合先进“先进豪华彩光”仪,开创国内专业�� �疗粉刺、痤疮签约包治先河,成功消除了许多顾客脸上的痘� ��。 original issue reported on code google com by szft com on may at
1
9,872
2,616,005,177
IssuesEvent
2015-03-02 00:49:41
jasonhall/bwapi
https://api.github.com/repos/jasonhall/bwapi
closed
Current player and enemy race bug
auto-migrated Maintainability Priority-Critical Type-Defect Usability
``` Current player and enemy race seem to be detected based on previous game? Additionally, Team Melee no longer works properly. Will fix. ``` Original issue reported on code.google.com by `AHeinerm` on 16 Dec 2008 at 6:25
1.0
Current player and enemy race bug - ``` Current player and enemy race seem to be detected based on previous game? Additionally, Team Melee no longer works properly. Will fix. ``` Original issue reported on code.google.com by `AHeinerm` on 16 Dec 2008 at 6:25
defect
current player and enemy race bug current player and enemy race seem to be detected based on previous game additionally team melee no longer works properly will fix original issue reported on code google com by aheinerm on dec at
1
15,027
2,838,913,381
IssuesEvent
2015-05-27 10:36:32
hazelcast/hazelcast
https://api.github.com/repos/hazelcast/hazelcast
closed
WrongTargetException WARN in HZ 3.5-EA
Team: Core Type: Defect
when my server starts it logs a bunch of the following warnings: <pre><code> 2015-05-18 14:55:00.248 [hz.dataInstance.async.thread-1] WARN com.hazelcast.spi.impl.operationservice.impl.Invocation - [localhost]:5771 [sysData] [3.5-EA] Retrying invocation: Invocation{ serviceName='hz:impl:mapService', op=com.hazelcast.map.impl.operation.LoadAllOperation{serviceName='null', partitionId=23, callId=0, invocationTime=-1, waitTimeout=-1, callTimeout=60000}, partitionId=23, replicaIndex=1, tryCount=250, tryPauseMillis=500, invokeCount=100, callTimeout=60000, target=null, backupsExpected=0, backupsCompleted=0}, Reason: com.hazelcast.spi.exception.WrongTargetException: WrongTarget! this:Address[localhost]:5771, target:null, partitionId: 23, replicaIndex: 1, operation: com.hazelcast.map.impl.operation.LoadAllOperation, service: hz:impl:mapService 2015-05-18 14:55:01.058 [hz.dataInstance.async.thread-2] WARN com.hazelcast.spi.impl.operationservice.impl.Invocation - [localhost]:5771 [sysData] [3.5-EA] Retrying invocation: Invocation{ serviceName='hz:impl:mapService', op=com.hazelcast.map.impl.operation.LoadAllOperation{serviceName='null', partitionId=172, callId=0, invocationTime=-1, waitTimeout=-1, callTimeout=60000}, partitionId=172, replicaIndex=1, tryCount=250, tryPauseMillis=500, invokeCount=110, callTimeout=60000, target=null, backupsExpected=0, backupsCompleted=0}, Reason: com.hazelcast.spi.exception.WrongTargetException: WrongTarget! 
this:Address[localhost]:5771, target:null, partitionId: 172, replicaIndex: 1, operation: com.hazelcast.map.impl.operation.LoadAllOperation, service: hz:impl:mapService 2015-05-18 14:55:02.288 [hz.dataInstance.async.thread-2] WARN com.hazelcast.spi.impl.operationservice.impl.Invocation - [localhost]:5771 [sysData] [3.5-EA] Retrying invocation: Invocation{ serviceName='hz:impl:mapService', op=com.hazelcast.map.impl.operation.LoadAllOperation{serviceName='null', partitionId=255, callId=0, invocationTime=-1, waitTimeout=-1, callTimeout=60000}, partitionId=255, replicaIndex=1, tryCount=250, tryPauseMillis=500, invokeCount=100, callTimeout=60000, target=null, backupsExpected=0, backupsCompleted=0}, Reason: com.hazelcast.spi.exception.WrongTargetException: WrongTarget! this:Address[localhost]:5771, target:null, partitionId: 255, replicaIndex: 1, operation: com.hazelcast.map.impl.operation.LoadAllOperation, service: hz:impl:mapService 2015-05-18 14:55:04.108 [hz.dataInstance.async.thread-1] WARN com.hazelcast.spi.impl.operationservice.impl.Invocation - [localhost]:5771 [sysData] [3.5-EA] Retrying invocation: Invocation{ serviceName='hz:impl:mapService', op=com.hazelcast.map.impl.operation.LoadAllOperation{serviceName='null', partitionId=262, callId=0, invocationTime=-1, waitTimeout=-1, callTimeout=60000}, partitionId=262, replicaIndex=1, tryCount=250, tryPauseMillis=500, invokeCount=120, callTimeout=60000, target=null, backupsExpected=0, backupsCompleted=0}, Reason: com.hazelcast.spi.exception.WrongTargetException: WrongTarget! 
this:Address[localhost]:5771, target:null, partitionId: 262, replicaIndex: 1, operation: com.hazelcast.map.impl.operation.LoadAllOperation, service: hz:impl:mapService 2015-05-18 14:55:04.326 [hz.dataInstance.async.thread-4] WARN com.hazelcast.spi.impl.operationservice.impl.Invocation - [localhost]:5771 [sysData] [3.5-EA] Retrying invocation: Invocation{ serviceName='hz:impl:mapService', op=com.hazelcast.map.impl.operation.LoadAllOperation{serviceName='null', partitionId=201, callId=0, invocationTime=-1, waitTimeout=-1, callTimeout=60000}, partitionId=201, replicaIndex=1, tryCount=250, tryPauseMillis=500, invokeCount=100, callTimeout=60000, target=null, backupsExpected=0, backupsCompleted=0}, Reason: com.hazelcast.spi.exception.WrongTargetException: WrongTarget! this:Address[localhost]:5771, target:null, partitionId: 201, replicaIndex: 1, operation: com.hazelcast.map.impl.operation.LoadAllOperation, service: hz:impl:mapService </pre></code> the corresponding HZ config is: <hz:hazelcast id="dataInstance"> <hz:config> <hz:instance-name>dataInstance</hz:instance-name> <hz:group name="sysData" password="${xdm.data.pwd}"/> <hz:properties> <hz:property name="hazelcast.jmx">true</hz:property> <hz:property name="hazelcast.jmx.detailed">true</hz:property> <hz:property name="hazelcast.logging.type">slf4j</hz:property> </hz:properties> <hz:network port="${xdm.data.port}" port-auto-increment="true"> <hz:join> <hz:multicast enabled="false"/> <hz:tcp-ip enabled="true" connection-timeout-seconds="5"> <hz:members>${xdm.cluster.members}</hz:members> </hz:tcp-ip> </hz:join> </hz:network> <hz:map name="modules"> <hz:map-store enabled="true" write-delay-seconds="10" initial-mode="EAGER" implementation="moduleCacheStore"/> </hz:map> <hz:map name="nodes"> <hz:map-store enabled="true" write-delay-seconds="10" initial-mode="EAGER" implementation="nodeCacheStore"/> </hz:map> <hz:map name="schemas"> <hz:map-store enabled="true" write-delay-seconds="10" initial-mode="EAGER" 
implementation="schemaCacheStore"/> </hz:map> <hz:map name="roles"> <hz:map-store enabled="true" write-delay-seconds="10" initial-mode="EAGER" implementation="roleCacheStore"/> </hz:map> <hz:map name="users"> <hz:map-store enabled="true" write-delay-seconds="10" initial-mode="EAGER" implementation="userCacheStore"/> </hz:map> <hz:serialization> <hz:serializers>...</hz:serializers> </hz:serialization> </hz:config> </hz:hazelcast> the system starts to log warnings after ~30 sec after cache start, it does it 15 times for the same 5 partitions, then stops. All configured caches were properly populated. The warnings should belong to schemas cache which has 5 entries (all other caches has less). Population logs for this cache is: <pre><code>2015-05-18 14:54:15.586 [cached8] TRACE com.bagri.xdm.cache.hazelcast.store.system.SchemaCacheStore - loadAllKeys.enter; 2015-05-18 14:54:15.586 [cached8] TRACE com.bagri.xdm.cache.hazelcast.store.system.SchemaCacheStore - loadAllKeys.exit; returning 5 keys 2015-05-18 14:54:15.602 [cached19] TRACE com.bagri.xdm.cache.hazelcast.store.system.SchemaCacheStore - loadAll.enter; keys: [TPoX] 2015-05-18 14:54:15.602 [cached19] TRACE com.bagri.xdm.cache.hazelcast.store.system.SchemaCacheStore - loadAll.exit; returning 1 entities 2015-05-18 14:54:15.602 [cached7] TRACE com.bagri.xdm.cache.hazelcast.store.system.SchemaCacheStore - loadAll.enter; keys: [XDM] 2015-05-18 14:54:15.602 [cached7] TRACE com.bagri.xdm.cache.hazelcast.store.system.SchemaCacheStore - loadAll.exit; returning 1 entities 2015-05-18 14:54:15.602 [cached11] TRACE com.bagri.xdm.cache.hazelcast.store.system.SchemaCacheStore - loadAll.enter; keys: [XMark] 2015-05-18 14:54:15.602 [cached11] TRACE com.bagri.xdm.cache.hazelcast.store.system.SchemaCacheStore - loadAll.exit; returning 1 entities 2015-05-18 14:54:15.602 [cached18] TRACE com.bagri.xdm.cache.hazelcast.store.system.SchemaCacheStore - loadAll.enter; keys: [TPoX2] 2015-05-18 14:54:15.602 [cached18] TRACE 
com.bagri.xdm.cache.hazelcast.store.system.SchemaCacheStore - loadAll.exit; returning 1 entities 2015-05-18 14:54:15.618 [cached23] TRACE com.bagri.xdm.cache.hazelcast.store.system.SchemaCacheStore - loadAll.enter; keys: [TPoX-J] 2015-05-18 14:54:15.618 [cached23] TRACE com.bagri.xdm.cache.hazelcast.store.system.SchemaCacheStore - loadAll.exit; returning 1 entities </pre></code> I got this issue in HZ 3.5-EA. in HZ 3.4.2 everything works fine. Thanks, Denis.
1.0
WrongTargetException WARN in HZ 3.5-EA - when my server starts it logs a bunch of the following warnings: <pre><code> 2015-05-18 14:55:00.248 [hz.dataInstance.async.thread-1] WARN com.hazelcast.spi.impl.operationservice.impl.Invocation - [localhost]:5771 [sysData] [3.5-EA] Retrying invocation: Invocation{ serviceName='hz:impl:mapService', op=com.hazelcast.map.impl.operation.LoadAllOperation{serviceName='null', partitionId=23, callId=0, invocationTime=-1, waitTimeout=-1, callTimeout=60000}, partitionId=23, replicaIndex=1, tryCount=250, tryPauseMillis=500, invokeCount=100, callTimeout=60000, target=null, backupsExpected=0, backupsCompleted=0}, Reason: com.hazelcast.spi.exception.WrongTargetException: WrongTarget! this:Address[localhost]:5771, target:null, partitionId: 23, replicaIndex: 1, operation: com.hazelcast.map.impl.operation.LoadAllOperation, service: hz:impl:mapService 2015-05-18 14:55:01.058 [hz.dataInstance.async.thread-2] WARN com.hazelcast.spi.impl.operationservice.impl.Invocation - [localhost]:5771 [sysData] [3.5-EA] Retrying invocation: Invocation{ serviceName='hz:impl:mapService', op=com.hazelcast.map.impl.operation.LoadAllOperation{serviceName='null', partitionId=172, callId=0, invocationTime=-1, waitTimeout=-1, callTimeout=60000}, partitionId=172, replicaIndex=1, tryCount=250, tryPauseMillis=500, invokeCount=110, callTimeout=60000, target=null, backupsExpected=0, backupsCompleted=0}, Reason: com.hazelcast.spi.exception.WrongTargetException: WrongTarget! 
this:Address[localhost]:5771, target:null, partitionId: 172, replicaIndex: 1, operation: com.hazelcast.map.impl.operation.LoadAllOperation, service: hz:impl:mapService 2015-05-18 14:55:02.288 [hz.dataInstance.async.thread-2] WARN com.hazelcast.spi.impl.operationservice.impl.Invocation - [localhost]:5771 [sysData] [3.5-EA] Retrying invocation: Invocation{ serviceName='hz:impl:mapService', op=com.hazelcast.map.impl.operation.LoadAllOperation{serviceName='null', partitionId=255, callId=0, invocationTime=-1, waitTimeout=-1, callTimeout=60000}, partitionId=255, replicaIndex=1, tryCount=250, tryPauseMillis=500, invokeCount=100, callTimeout=60000, target=null, backupsExpected=0, backupsCompleted=0}, Reason: com.hazelcast.spi.exception.WrongTargetException: WrongTarget! this:Address[localhost]:5771, target:null, partitionId: 255, replicaIndex: 1, operation: com.hazelcast.map.impl.operation.LoadAllOperation, service: hz:impl:mapService 2015-05-18 14:55:04.108 [hz.dataInstance.async.thread-1] WARN com.hazelcast.spi.impl.operationservice.impl.Invocation - [localhost]:5771 [sysData] [3.5-EA] Retrying invocation: Invocation{ serviceName='hz:impl:mapService', op=com.hazelcast.map.impl.operation.LoadAllOperation{serviceName='null', partitionId=262, callId=0, invocationTime=-1, waitTimeout=-1, callTimeout=60000}, partitionId=262, replicaIndex=1, tryCount=250, tryPauseMillis=500, invokeCount=120, callTimeout=60000, target=null, backupsExpected=0, backupsCompleted=0}, Reason: com.hazelcast.spi.exception.WrongTargetException: WrongTarget! 
this:Address[localhost]:5771, target:null, partitionId: 262, replicaIndex: 1, operation: com.hazelcast.map.impl.operation.LoadAllOperation, service: hz:impl:mapService 2015-05-18 14:55:04.326 [hz.dataInstance.async.thread-4] WARN com.hazelcast.spi.impl.operationservice.impl.Invocation - [localhost]:5771 [sysData] [3.5-EA] Retrying invocation: Invocation{ serviceName='hz:impl:mapService', op=com.hazelcast.map.impl.operation.LoadAllOperation{serviceName='null', partitionId=201, callId=0, invocationTime=-1, waitTimeout=-1, callTimeout=60000}, partitionId=201, replicaIndex=1, tryCount=250, tryPauseMillis=500, invokeCount=100, callTimeout=60000, target=null, backupsExpected=0, backupsCompleted=0}, Reason: com.hazelcast.spi.exception.WrongTargetException: WrongTarget! this:Address[localhost]:5771, target:null, partitionId: 201, replicaIndex: 1, operation: com.hazelcast.map.impl.operation.LoadAllOperation, service: hz:impl:mapService </pre></code> the corresponding HZ config is: <hz:hazelcast id="dataInstance"> <hz:config> <hz:instance-name>dataInstance</hz:instance-name> <hz:group name="sysData" password="${xdm.data.pwd}"/> <hz:properties> <hz:property name="hazelcast.jmx">true</hz:property> <hz:property name="hazelcast.jmx.detailed">true</hz:property> <hz:property name="hazelcast.logging.type">slf4j</hz:property> </hz:properties> <hz:network port="${xdm.data.port}" port-auto-increment="true"> <hz:join> <hz:multicast enabled="false"/> <hz:tcp-ip enabled="true" connection-timeout-seconds="5"> <hz:members>${xdm.cluster.members}</hz:members> </hz:tcp-ip> </hz:join> </hz:network> <hz:map name="modules"> <hz:map-store enabled="true" write-delay-seconds="10" initial-mode="EAGER" implementation="moduleCacheStore"/> </hz:map> <hz:map name="nodes"> <hz:map-store enabled="true" write-delay-seconds="10" initial-mode="EAGER" implementation="nodeCacheStore"/> </hz:map> <hz:map name="schemas"> <hz:map-store enabled="true" write-delay-seconds="10" initial-mode="EAGER" 
implementation="schemaCacheStore"/> </hz:map> <hz:map name="roles"> <hz:map-store enabled="true" write-delay-seconds="10" initial-mode="EAGER" implementation="roleCacheStore"/> </hz:map> <hz:map name="users"> <hz:map-store enabled="true" write-delay-seconds="10" initial-mode="EAGER" implementation="userCacheStore"/> </hz:map> <hz:serialization> <hz:serializers>...</hz:serializers> </hz:serialization> </hz:config> </hz:hazelcast> the system starts to log warnings after ~30 sec after cache start, it does it 15 times for the same 5 partitions, then stops. All configured caches were properly populated. The warnings should belong to schemas cache which has 5 entries (all other caches has less). Population logs for this cache is: <pre><code>2015-05-18 14:54:15.586 [cached8] TRACE com.bagri.xdm.cache.hazelcast.store.system.SchemaCacheStore - loadAllKeys.enter; 2015-05-18 14:54:15.586 [cached8] TRACE com.bagri.xdm.cache.hazelcast.store.system.SchemaCacheStore - loadAllKeys.exit; returning 5 keys 2015-05-18 14:54:15.602 [cached19] TRACE com.bagri.xdm.cache.hazelcast.store.system.SchemaCacheStore - loadAll.enter; keys: [TPoX] 2015-05-18 14:54:15.602 [cached19] TRACE com.bagri.xdm.cache.hazelcast.store.system.SchemaCacheStore - loadAll.exit; returning 1 entities 2015-05-18 14:54:15.602 [cached7] TRACE com.bagri.xdm.cache.hazelcast.store.system.SchemaCacheStore - loadAll.enter; keys: [XDM] 2015-05-18 14:54:15.602 [cached7] TRACE com.bagri.xdm.cache.hazelcast.store.system.SchemaCacheStore - loadAll.exit; returning 1 entities 2015-05-18 14:54:15.602 [cached11] TRACE com.bagri.xdm.cache.hazelcast.store.system.SchemaCacheStore - loadAll.enter; keys: [XMark] 2015-05-18 14:54:15.602 [cached11] TRACE com.bagri.xdm.cache.hazelcast.store.system.SchemaCacheStore - loadAll.exit; returning 1 entities 2015-05-18 14:54:15.602 [cached18] TRACE com.bagri.xdm.cache.hazelcast.store.system.SchemaCacheStore - loadAll.enter; keys: [TPoX2] 2015-05-18 14:54:15.602 [cached18] TRACE 
com.bagri.xdm.cache.hazelcast.store.system.SchemaCacheStore - loadAll.exit; returning 1 entities 2015-05-18 14:54:15.618 [cached23] TRACE com.bagri.xdm.cache.hazelcast.store.system.SchemaCacheStore - loadAll.enter; keys: [TPoX-J] 2015-05-18 14:54:15.618 [cached23] TRACE com.bagri.xdm.cache.hazelcast.store.system.SchemaCacheStore - loadAll.exit; returning 1 entities </pre></code> I got this issue in HZ 3.5-EA. in HZ 3.4.2 everything works fine. Thanks, Denis.
defect
wrongtargetexception warn in hz ea when my server starts it logs a bunch of the following warnings warn com hazelcast spi impl operationservice impl invocation retrying invocation invocation servicename hz impl mapservice op com hazelcast map impl operation loadalloperation servicename null partitionid callid invocationtime waittimeout calltimeout partitionid replicaindex trycount trypausemillis invokecount calltimeout target null backupsexpected backupscompleted reason com hazelcast spi exception wrongtargetexception wrongtarget this address target null partitionid replicaindex operation com hazelcast map impl operation loadalloperation service hz impl mapservice warn com hazelcast spi impl operationservice impl invocation retrying invocation invocation servicename hz impl mapservice op com hazelcast map impl operation loadalloperation servicename null partitionid callid invocationtime waittimeout calltimeout partitionid replicaindex trycount trypausemillis invokecount calltimeout target null backupsexpected backupscompleted reason com hazelcast spi exception wrongtargetexception wrongtarget this address target null partitionid replicaindex operation com hazelcast map impl operation loadalloperation service hz impl mapservice warn com hazelcast spi impl operationservice impl invocation retrying invocation invocation servicename hz impl mapservice op com hazelcast map impl operation loadalloperation servicename null partitionid callid invocationtime waittimeout calltimeout partitionid replicaindex trycount trypausemillis invokecount calltimeout target null backupsexpected backupscompleted reason com hazelcast spi exception wrongtargetexception wrongtarget this address target null partitionid replicaindex operation com hazelcast map impl operation loadalloperation service hz impl mapservice warn com hazelcast spi impl operationservice impl invocation retrying invocation invocation servicename hz impl mapservice op com hazelcast map impl operation loadalloperation 
servicename null partitionid callid invocationtime waittimeout calltimeout partitionid replicaindex trycount trypausemillis invokecount calltimeout target null backupsexpected backupscompleted reason com hazelcast spi exception wrongtargetexception wrongtarget this address target null partitionid replicaindex operation com hazelcast map impl operation loadalloperation service hz impl mapservice warn com hazelcast spi impl operationservice impl invocation retrying invocation invocation servicename hz impl mapservice op com hazelcast map impl operation loadalloperation servicename null partitionid callid invocationtime waittimeout calltimeout partitionid replicaindex trycount trypausemillis invokecount calltimeout target null backupsexpected backupscompleted reason com hazelcast spi exception wrongtargetexception wrongtarget this address target null partitionid replicaindex operation com hazelcast map impl operation loadalloperation service hz impl mapservice the corresponding hz config is datainstance true true xdm cluster members the system starts to log warnings after sec after cache start it does it times for the same partitions then stops all configured caches were properly populated the warnings should belong to schemas cache which has entries all other caches has less population logs for this cache is trace com bagri xdm cache hazelcast store system schemacachestore loadallkeys enter trace com bagri xdm cache hazelcast store system schemacachestore loadallkeys exit returning keys trace com bagri xdm cache hazelcast store system schemacachestore loadall enter keys trace com bagri xdm cache hazelcast store system schemacachestore loadall exit returning entities trace com bagri xdm cache hazelcast store system schemacachestore loadall enter keys trace com bagri xdm cache hazelcast store system schemacachestore loadall exit returning entities trace com bagri xdm cache hazelcast store system schemacachestore loadall enter keys trace com bagri xdm cache hazelcast 
store system schemacachestore loadall exit returning entities trace com bagri xdm cache hazelcast store system schemacachestore loadall enter keys trace com bagri xdm cache hazelcast store system schemacachestore loadall exit returning entities trace com bagri xdm cache hazelcast store system schemacachestore loadall enter keys trace com bagri xdm cache hazelcast store system schemacachestore loadall exit returning entities i got this issue in hz ea in hz everything works fine thanks denis
1
50,024
13,187,310,204
IssuesEvent
2020-08-13 03:00:29
icecube-trac/tix3
https://api.github.com/repos/icecube-trac/tix3
closed
kill root_oarchive (Trac #23)
IceTray Migrated from Trac defect
<details> <summary>_Migrated from https://code.icecube.wisc.edu/ticket/23 , reported by troy and owned by troy_</summary> <p> ```json { "status": "closed", "changetime": "2007-11-11T03:51:18", "description": "", "reporter": "troy", "cc": "", "resolution": "fixed", "_ts": "1194753078000000", "component": "IceTray", "summary": "kill root_oarchive", "priority": "normal", "keywords": "", "time": "2007-06-03T16:39:33", "milestone": "", "owner": "troy", "type": "defect" } ``` </p> </details>
1.0
kill root_oarchive (Trac #23) - <details> <summary>_Migrated from https://code.icecube.wisc.edu/ticket/23 , reported by troy and owned by troy_</summary> <p> ```json { "status": "closed", "changetime": "2007-11-11T03:51:18", "description": "", "reporter": "troy", "cc": "", "resolution": "fixed", "_ts": "1194753078000000", "component": "IceTray", "summary": "kill root_oarchive", "priority": "normal", "keywords": "", "time": "2007-06-03T16:39:33", "milestone": "", "owner": "troy", "type": "defect" } ``` </p> </details>
defect
kill root oarchive trac migrated from reported by troy and owned by troy json status closed changetime description reporter troy cc resolution fixed ts component icetray summary kill root oarchive priority normal keywords time milestone owner troy type defect
1
677,580
23,166,488,455
IssuesEvent
2022-07-30 03:07:03
DynamoRIO/dynamorio
https://api.github.com/repos/DynamoRIO/dynamorio
closed
Regression in tool.drcacheoff.burst* tests on Github Actions
Priority-High Bug-Assert Bug-DRCrash
The following tests have recently started failing on the Github Actions workflow for x86-64: ``` code_api|tool.drcacheoff.burst_static code_api|tool.drcacheoff.burst_replace code_api|tool.drcacheoff.burst_replaceall code_api|tool.drcacheoff.burst_noreach code_api|tool.drcacheoff.burst_maps code_api|tool.drcacheoff.burst_traceopts code_api|tool.drcacheoff.burst_threads code_api|tool.drcacheoff.burst_malloc code_api|tool.drcacheoff.burst_reattach code_api|tool.drcacheoff.burst_threadL0filter code_api|tool.drcacheoff.burst_client ``` This is blocking some PRs, including #5569, #5568 and #5562. Logs: https://github.com/DynamoRIO/dynamorio/runs/7430951767?check_suite_focus=true ``` 2022-07-20T14:16:23.9265812Z 349: <Application 2022-07-20T14:16:23.9266263Z 349: /home/runner/work/dynamorio/dynamorio/build_debug-internal-64/clients/bin64/tool.drcacheoff.burst_threads 2022-07-20T14:16:23.9266655Z 349: (24791). Internal Error: DynamoRIO debug check failure: 2022-07-20T14:16:23.9266991Z 349: /home/runner/work/dynamorio/dynamorio/core/unix/signal.c:5861 2022-07-20T14:16:23.9267422Z 349: syscall_signal || safe_is_in_fcache(dcontext, pc, (byte *)sc->SC_XSP) ```
1.0
Regression in tool.drcacheoff.burst* tests on Github Actions - The following tests have recently started failing on the Github Actions workflow for x86-64: ``` code_api|tool.drcacheoff.burst_static code_api|tool.drcacheoff.burst_replace code_api|tool.drcacheoff.burst_replaceall code_api|tool.drcacheoff.burst_noreach code_api|tool.drcacheoff.burst_maps code_api|tool.drcacheoff.burst_traceopts code_api|tool.drcacheoff.burst_threads code_api|tool.drcacheoff.burst_malloc code_api|tool.drcacheoff.burst_reattach code_api|tool.drcacheoff.burst_threadL0filter code_api|tool.drcacheoff.burst_client ``` This is blocking some PRs, including #5569, #5568 and #5562. Logs: https://github.com/DynamoRIO/dynamorio/runs/7430951767?check_suite_focus=true ``` 2022-07-20T14:16:23.9265812Z 349: <Application 2022-07-20T14:16:23.9266263Z 349: /home/runner/work/dynamorio/dynamorio/build_debug-internal-64/clients/bin64/tool.drcacheoff.burst_threads 2022-07-20T14:16:23.9266655Z 349: (24791). Internal Error: DynamoRIO debug check failure: 2022-07-20T14:16:23.9266991Z 349: /home/runner/work/dynamorio/dynamorio/core/unix/signal.c:5861 2022-07-20T14:16:23.9267422Z 349: syscall_signal || safe_is_in_fcache(dcontext, pc, (byte *)sc->SC_XSP) ```
non_defect
regression in tool drcacheoff burst tests on github actions the following tests have recently started failing on the github actions workflow for code api tool drcacheoff burst static code api tool drcacheoff burst replace code api tool drcacheoff burst replaceall code api tool drcacheoff burst noreach code api tool drcacheoff burst maps code api tool drcacheoff burst traceopts code api tool drcacheoff burst threads code api tool drcacheoff burst malloc code api tool drcacheoff burst reattach code api tool drcacheoff burst code api tool drcacheoff burst client this is blocking some prs including and logs application home runner work dynamorio dynamorio build debug internal clients tool drcacheoff burst threads internal error dynamorio debug check failure home runner work dynamorio dynamorio core unix signal c syscall signal safe is in fcache dcontext pc byte sc sc xsp
0
21,853
3,573,323,097
IssuesEvent
2016-01-27 05:27:16
ariya/phantomjs
https://api.github.com/repos/ariya/phantomjs
closed
Expose tcp connection count
old.Priority-Medium old.Status-New old.Type-Defect
_**[mark.spi...@gmail.com](http://code.google.com/u/100150714566636599732/) commented:**_ > This is an enhancement to expose the number of new tcp connections used to complete the page load. > > Diffed against 1.6 > > Patch attached. **Disclaimer:** This issue was migrated on 2013-03-15 from the project's former issue tracker on Google Code, [Issue #642](http://code.google.com/p/phantomjs/issues/detail?id=642). :star2: &nbsp; **3** people had starred this issue at the time of migration.
1.0
Expose tcp connection count - _**[mark.spi...@gmail.com](http://code.google.com/u/100150714566636599732/) commented:**_ > This is an enhancement to expose the number of new tcp connections used to complete the page load. > > Diffed against 1.6 > > Patch attached. **Disclaimer:** This issue was migrated on 2013-03-15 from the project's former issue tracker on Google Code, [Issue #642](http://code.google.com/p/phantomjs/issues/detail?id=642). :star2: &nbsp; **3** people had starred this issue at the time of migration.
defect
expose tcp connection count commented this is an enhancement to expose the number of new tcp connections used to complete the page load diffed against patch attached disclaimer this issue was migrated on from the project s former issue tracker on google code nbsp people had starred this issue at the time of migration
1
87,322
8,071,910,580
IssuesEvent
2018-08-06 14:31:25
openSUSE/open-build-service
https://api.github.com/repos/openSUSE/open-build-service
closed
Periodically run rspec without VCR
Feature 💡 Test Suite :syringe:
Travis has an option to run ['cron' like jobs](https://docs.travis-ci.com/user/cron-jobs/) now. We should make use of this feature to run our rspec test suite without VCR against a realy backend.
1.0
Periodically run rspec without VCR - Travis has an option to run ['cron' like jobs](https://docs.travis-ci.com/user/cron-jobs/) now. We should make use of this feature to run our rspec test suite without VCR against a realy backend.
non_defect
periodically run rspec without vcr travis has an option to run now we should make use of this feature to run our rspec test suite without vcr against a realy backend
0
52,739
13,224,992,440
IssuesEvent
2020-08-17 20:16:14
icecube-trac/tix4
https://api.github.com/repos/icecube-trac/tix4
closed
PFWriter/SuperDST combination causing missing I3EventHeader (Trac #262)
Migrated from Trac defect jeb + pnf
2011-05-13 10:26:02 [GMT] FATAL JEBWriter : /scratch/blaufuss/pnf/V11-05-00/src/jebserve r/private/jebserver/JEBWriter.cxx:153 no event header in frame 2011-05-13 10:27:59 [GMT] WARN I3Broker : /scratch/blaufuss/pnf/V11-05-00/src/pfcommuni cation/private/pfcommunication/I3Broker.cxx:233 "PFWriter" already registered at Name Se rvice ... register it anyway 2011-05-13 10:27:59 [GMT] WARN PFContinuity : /scratch/blaufuss/pnf/V11-05-00/src/pfaux iliary/private/pfauxiliary/PFContinuity.cxx:150 missing events at run 118175, event 2 2011-05-13 10:27:59 [GMT] FATAL JEBWriter : /scratch/blaufuss/pnf/V11-05-00/src/jebserve r/private/jebserver/JEBWriter.cxx:153 no event header in frame 2011-05-13 10:38:10 [GMT] WARN I3Broker : /scratch/blaufuss/pnf/V11-05-00/src/pfcommuni cation/private/pfcommunication/I3Broker.cxx:233 "PFWriter" already registered at Name Se rvice ... register it anyway 2011-05-13 10:38:10 [GMT] WARN PFContinuity : /scratch/blaufuss/pnf/V11-05-00/src/pfaux iliary/private/pfauxiliary/PFContinuity.cxx:150 missing events at run 118175, event 19 2011-05-13 10:38:10 [GMT] INFO JEBFile : /scratch/blaufuss/pnf/V11-05-00/src/jebserver/ private/jebserver/JEBFile.cxx:196 opened file "/mnt/data/pnflocal/PFFilt_PhysicsTrig_Phy sicsFiltering_Run00118175_Subrun00000000_00000000.i3" This is understood...in the handling of SuperDST-only filters, we clean the frame of non-SuperDST entries, including the I3EventHeader. So if the first event is a SuperDST only event, the I3EventHeader is removed. This FIRST I3EventHeader is used by the PFWriter to detect run transitions and make run transitions neat. The PFWriter expects this to be there. A super-dst only event would happen first in the file roughly 1/6 of the time, so this explains why this wasn't a problem in the 24 hr test run or at SPTS. 
<details> <summary><em>Migrated from <a href="https://code.icecube.wisc.edu/projects/icecube/ticket/262">https://code.icecube.wisc.edu/projects/icecube/ticket/262</a>, reported by blaufussand owned by tschmidt</em></summary> <p> ```json { "status": "closed", "changetime": "2012-05-25T13:56:04", "_ts": "1337954164000000", "description": "2011-05-13 10:26:02 [GMT] FATAL JEBWriter : /scratch/blaufuss/pnf/V11-05-00/src/jebserve\nr/private/jebserver/JEBWriter.cxx:153 no event header in frame\n2011-05-13 10:27:59 [GMT] WARN I3Broker : /scratch/blaufuss/pnf/V11-05-00/src/pfcommuni\ncation/private/pfcommunication/I3Broker.cxx:233 \"PFWriter\" already registered at Name Se\nrvice ... register it anyway\n2011-05-13 10:27:59 [GMT] WARN PFContinuity : /scratch/blaufuss/pnf/V11-05-00/src/pfaux\niliary/private/pfauxiliary/PFContinuity.cxx:150 missing events at run 118175, event 2\n2011-05-13 10:27:59 [GMT] FATAL JEBWriter : /scratch/blaufuss/pnf/V11-05-00/src/jebserve\nr/private/jebserver/JEBWriter.cxx:153 no event header in frame\n2011-05-13 10:38:10 [GMT] WARN I3Broker : /scratch/blaufuss/pnf/V11-05-00/src/pfcommuni\ncation/private/pfcommunication/I3Broker.cxx:233 \"PFWriter\" already registered at Name Se\nrvice ... register it anyway\n2011-05-13 10:38:10 [GMT] WARN PFContinuity : /scratch/blaufuss/pnf/V11-05-00/src/pfaux\niliary/private/pfauxiliary/PFContinuity.cxx:150 missing events at run 118175, event 19\n2011-05-13 10:38:10 [GMT] INFO JEBFile : /scratch/blaufuss/pnf/V11-05-00/src/jebserver/\nprivate/jebserver/JEBFile.cxx:196 opened file \"/mnt/data/pnflocal/PFFilt_PhysicsTrig_Phy\nsicsFiltering_Run00118175_Subrun00000000_00000000.i3\"\n\nThis is understood...in the handling of SuperDST-only filters, we clean the frame\nof non-SuperDST entries, including the I3EventHeader. So if the first event\nis a SuperDST only event, the I3EventHeader is removed.\n\nThis FIRST I3EventHeader is used by the PFWriter to detect run transitions and\nmake run transitions neat. 
The PFWriter expects this to be there.\nA super-dst only event would happen first in the file roughly 1/6 of the time,\nso this explains why this wasn't a problem in the 24 hr test run or at SPTS.\n", "reporter": "blaufuss", "cc": "", "resolution": "fixed", "time": "2011-05-13T13:58:35", "component": "jeb + pnf", "summary": "PFWriter/SuperDST combination causing missing I3EventHeader", "priority": "normal", "keywords": "", "milestone": "", "owner": "tschmidt", "type": "defect" } ``` </p> </details>
1.0
PFWriter/SuperDST combination causing missing I3EventHeader (Trac #262) - 2011-05-13 10:26:02 [GMT] FATAL JEBWriter : /scratch/blaufuss/pnf/V11-05-00/src/jebserve r/private/jebserver/JEBWriter.cxx:153 no event header in frame 2011-05-13 10:27:59 [GMT] WARN I3Broker : /scratch/blaufuss/pnf/V11-05-00/src/pfcommuni cation/private/pfcommunication/I3Broker.cxx:233 "PFWriter" already registered at Name Se rvice ... register it anyway 2011-05-13 10:27:59 [GMT] WARN PFContinuity : /scratch/blaufuss/pnf/V11-05-00/src/pfaux iliary/private/pfauxiliary/PFContinuity.cxx:150 missing events at run 118175, event 2 2011-05-13 10:27:59 [GMT] FATAL JEBWriter : /scratch/blaufuss/pnf/V11-05-00/src/jebserve r/private/jebserver/JEBWriter.cxx:153 no event header in frame 2011-05-13 10:38:10 [GMT] WARN I3Broker : /scratch/blaufuss/pnf/V11-05-00/src/pfcommuni cation/private/pfcommunication/I3Broker.cxx:233 "PFWriter" already registered at Name Se rvice ... register it anyway 2011-05-13 10:38:10 [GMT] WARN PFContinuity : /scratch/blaufuss/pnf/V11-05-00/src/pfaux iliary/private/pfauxiliary/PFContinuity.cxx:150 missing events at run 118175, event 19 2011-05-13 10:38:10 [GMT] INFO JEBFile : /scratch/blaufuss/pnf/V11-05-00/src/jebserver/ private/jebserver/JEBFile.cxx:196 opened file "/mnt/data/pnflocal/PFFilt_PhysicsTrig_Phy sicsFiltering_Run00118175_Subrun00000000_00000000.i3" This is understood...in the handling of SuperDST-only filters, we clean the frame of non-SuperDST entries, including the I3EventHeader. So if the first event is a SuperDST only event, the I3EventHeader is removed. This FIRST I3EventHeader is used by the PFWriter to detect run transitions and make run transitions neat. The PFWriter expects this to be there. A super-dst only event would happen first in the file roughly 1/6 of the time, so this explains why this wasn't a problem in the 24 hr test run or at SPTS. 
<details> <summary><em>Migrated from <a href="https://code.icecube.wisc.edu/projects/icecube/ticket/262">https://code.icecube.wisc.edu/projects/icecube/ticket/262</a>, reported by blaufussand owned by tschmidt</em></summary> <p> ```json { "status": "closed", "changetime": "2012-05-25T13:56:04", "_ts": "1337954164000000", "description": "2011-05-13 10:26:02 [GMT] FATAL JEBWriter : /scratch/blaufuss/pnf/V11-05-00/src/jebserve\nr/private/jebserver/JEBWriter.cxx:153 no event header in frame\n2011-05-13 10:27:59 [GMT] WARN I3Broker : /scratch/blaufuss/pnf/V11-05-00/src/pfcommuni\ncation/private/pfcommunication/I3Broker.cxx:233 \"PFWriter\" already registered at Name Se\nrvice ... register it anyway\n2011-05-13 10:27:59 [GMT] WARN PFContinuity : /scratch/blaufuss/pnf/V11-05-00/src/pfaux\niliary/private/pfauxiliary/PFContinuity.cxx:150 missing events at run 118175, event 2\n2011-05-13 10:27:59 [GMT] FATAL JEBWriter : /scratch/blaufuss/pnf/V11-05-00/src/jebserve\nr/private/jebserver/JEBWriter.cxx:153 no event header in frame\n2011-05-13 10:38:10 [GMT] WARN I3Broker : /scratch/blaufuss/pnf/V11-05-00/src/pfcommuni\ncation/private/pfcommunication/I3Broker.cxx:233 \"PFWriter\" already registered at Name Se\nrvice ... register it anyway\n2011-05-13 10:38:10 [GMT] WARN PFContinuity : /scratch/blaufuss/pnf/V11-05-00/src/pfaux\niliary/private/pfauxiliary/PFContinuity.cxx:150 missing events at run 118175, event 19\n2011-05-13 10:38:10 [GMT] INFO JEBFile : /scratch/blaufuss/pnf/V11-05-00/src/jebserver/\nprivate/jebserver/JEBFile.cxx:196 opened file \"/mnt/data/pnflocal/PFFilt_PhysicsTrig_Phy\nsicsFiltering_Run00118175_Subrun00000000_00000000.i3\"\n\nThis is understood...in the handling of SuperDST-only filters, we clean the frame\nof non-SuperDST entries, including the I3EventHeader. So if the first event\nis a SuperDST only event, the I3EventHeader is removed.\n\nThis FIRST I3EventHeader is used by the PFWriter to detect run transitions and\nmake run transitions neat. 
The PFWriter expects this to be there.\nA super-dst only event would happen first in the file roughly 1/6 of the time,\nso this explains why this wasn't a problem in the 24 hr test run or at SPTS.\n", "reporter": "blaufuss", "cc": "", "resolution": "fixed", "time": "2011-05-13T13:58:35", "component": "jeb + pnf", "summary": "PFWriter/SuperDST combination causing missing I3EventHeader", "priority": "normal", "keywords": "", "milestone": "", "owner": "tschmidt", "type": "defect" } ``` </p> </details>
defect
pfwriter superdst combination causing missing trac fatal jebwriter scratch blaufuss pnf src jebserve r private jebserver jebwriter cxx no event header in frame warn scratch blaufuss pnf src pfcommuni cation private pfcommunication cxx pfwriter already registered at name se rvice register it anyway warn pfcontinuity scratch blaufuss pnf src pfaux iliary private pfauxiliary pfcontinuity cxx missing events at run event fatal jebwriter scratch blaufuss pnf src jebserve r private jebserver jebwriter cxx no event header in frame warn scratch blaufuss pnf src pfcommuni cation private pfcommunication cxx pfwriter already registered at name se rvice register it anyway warn pfcontinuity scratch blaufuss pnf src pfaux iliary private pfauxiliary pfcontinuity cxx missing events at run event info jebfile scratch blaufuss pnf src jebserver private jebserver jebfile cxx opened file mnt data pnflocal pffilt physicstrig phy sicsfiltering this is understood in the handling of superdst only filters we clean the frame of non superdst entries including the so if the first event is a superdst only event the is removed this first is used by the pfwriter to detect run transitions and make run transitions neat the pfwriter expects this to be there a super dst only event would happen first in the file roughly of the time so this explains why this wasn t a problem in the hr test run or at spts migrated from json status closed changetime ts description fatal jebwriter scratch blaufuss pnf src jebserve nr private jebserver jebwriter cxx no event header in frame warn scratch blaufuss pnf src pfcommuni ncation private pfcommunication cxx pfwriter already registered at name se nrvice register it anyway warn pfcontinuity scratch blaufuss pnf src pfaux niliary private pfauxiliary pfcontinuity cxx missing events at run event fatal jebwriter scratch blaufuss pnf src jebserve nr private jebserver jebwriter cxx no event header in frame warn scratch blaufuss pnf src pfcommuni ncation private 
pfcommunication cxx pfwriter already registered at name se nrvice register it anyway warn pfcontinuity scratch blaufuss pnf src pfaux niliary private pfauxiliary pfcontinuity cxx missing events at run event info jebfile scratch blaufuss pnf src jebserver nprivate jebserver jebfile cxx opened file mnt data pnflocal pffilt physicstrig phy nsicsfiltering n nthis is understood in the handling of superdst only filters we clean the frame nof non superdst entries including the so if the first event nis a superdst only event the is removed n nthis first is used by the pfwriter to detect run transitions and nmake run transitions neat the pfwriter expects this to be there na super dst only event would happen first in the file roughly of the time nso this explains why this wasn t a problem in the hr test run or at spts n reporter blaufuss cc resolution fixed time component jeb pnf summary pfwriter superdst combination causing missing priority normal keywords milestone owner tschmidt type defect
1
15
2,492,097,405
IssuesEvent
2015-01-04 12:25:26
scipy/scipy
https://api.github.com/repos/scipy/scipy
closed
scipy.optimize.anneal does not respect lower/upper bounds (Trac #1126)
defect Migrated from Trac prio-normal scipy.optimize
_Original ticket http://projects.scipy.org/scipy/ticket/1126 on 2010-03-04 by trac user lboussouf, assigned to unknown._ Running the code below raises an exception as anneal try to evaluate f at 1.97065187015 which is forbidden as specified in bounds. If we look at line 106 in the code : [http://projects.scipy.org/scipy/browser/trunk/scipy/optimize/anneal.py] It's obvious why it does not respect bounds : xc is designed to respect them, but as we do xnew = x0 + xc the following line, there is no chance to respect bounds. And the same problem appears in all update_guess subroutines. Is anybody supporting that code or would it be better to handle it myself ? I can't stand thinking people are using this code with such mistakes. My optimizer soul is hurt ... import scipy.optimize # Define function def f(x): print x if x < -1 or x > 1: raise Exception else: return x**2 # Solve minimization problem using SA x = scipy.optimize.anneal(func=f,x0=0,lower=-1,upper=1)
1.0
scipy.optimize.anneal does not respect lower/upper bounds (Trac #1126) - _Original ticket http://projects.scipy.org/scipy/ticket/1126 on 2010-03-04 by trac user lboussouf, assigned to unknown._ Running the code below raises an exception as anneal try to evaluate f at 1.97065187015 which is forbidden as specified in bounds. If we look at line 106 in the code : [http://projects.scipy.org/scipy/browser/trunk/scipy/optimize/anneal.py] It's obvious why it does not respect bounds : xc is designed to respect them, but as we do xnew = x0 + xc the following line, there is no chance to respect bounds. And the same problem appears in all update_guess subroutines. Is anybody supporting that code or would it be better to handle it myself ? I can't stand thinking people are using this code with such mistakes. My optimizer soul is hurt ... import scipy.optimize # Define function def f(x): print x if x < -1 or x > 1: raise Exception else: return x**2 # Solve minimization problem using SA x = scipy.optimize.anneal(func=f,x0=0,lower=-1,upper=1)
defect
scipy optimize anneal does not respect lower upper bounds trac original ticket on by trac user lboussouf assigned to unknown running the code below raises an exception as anneal try to evaluate f at which is forbidden as specified in bounds if we look at line in the code it s obvious why it does not respect bounds xc is designed to respect them but as we do xnew xc the following line there is no chance to respect bounds and the same problem appears in all update guess subroutines is anybody supporting that code or would it be better to handle it myself i can t stand thinking people are using this code with such mistakes my optimizer soul is hurt import scipy optimize define function def f x print x if x raise exception else return x solve minimization problem using sa x scipy optimize anneal func f lower upper
1
45,407
12,795,852,860
IssuesEvent
2020-07-02 09:26:01
jOOQ/jOOQ
https://api.github.com/repos/jOOQ/jOOQ
closed
DataAccessException.sqlStateClass() always returns OTHER for SQLite
C: DB: SQLite C: Functionality E: All Editions P: Medium T: Defect
The xerial JDBC driver doesn't translate SQLite's vendor-specific error codes to standard SQL states. We should do that, instead. Error codes are here: https://sqlite.org/c3ref/c_abort.html
1.0
DataAccessException.sqlStateClass() always returns OTHER for SQLite - The xerial JDBC driver doesn't translate SQLite's vendor-specific error codes to standard SQL states. We should do that, instead. Error codes are here: https://sqlite.org/c3ref/c_abort.html
defect
dataaccessexception sqlstateclass always returns other for sqlite the xerial jdbc driver doesn t translate sqlite s vendor specific error codes to standard sql states we should do that instead error codes are here
1
166,295
6,302,229,806
IssuesEvent
2017-07-21 10:15:53
oSoc17/oasis-frontend
https://api.github.com/repos/oSoc17/oasis-frontend
closed
Remove iRail API calls
duplicate Priority 2
Remove iRail API service as well as the fake data from stations within that service to increase initial loading time
1.0
Remove iRail API calls - Remove iRail API service as well as the fake data from stations within that service to increase initial loading time
non_defect
remove irail api calls remove irail api service as well as the fake data from stations within that service to increase initial loading time
0
42,241
12,883,944,397
IssuesEvent
2020-07-13 01:02:34
jgeraigery/azure-iot-platform-dotnet
https://api.github.com/repos/jgeraigery/azure-iot-platform-dotnet
opened
CVE-2020-7693 (Medium) detected in sockjs-0.3.19.tgz
security vulnerability
## CVE-2020-7693 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>sockjs-0.3.19.tgz</b></p></summary> <p>SockJS-node is a server counterpart of SockJS-client a JavaScript library that provides a WebSocket-like object in the browser. SockJS gives you a coherent, cross-browser, Javascript API which creates a low latency, full duplex, cross-domain communication</p> <p>Library home page: <a href="https://registry.npmjs.org/sockjs/-/sockjs-0.3.19.tgz">https://registry.npmjs.org/sockjs/-/sockjs-0.3.19.tgz</a></p> <p>Path to dependency file: /tmp/ws-scm/azure-iot-platform-dotnet/src/webui/package.json</p> <p>Path to vulnerable library: /tmp/ws-scm/azure-iot-platform-dotnet/src/webui/node_modules/sockjs/package.json,/tmp/ws-scm/azure-iot-platform-dotnet/src/webui/node_modules/sockjs/package.json</p> <p> Dependency Hierarchy: - react-styleguidist-7.3.11.tgz (Root Library) - webpack-dev-server-2.11.5.tgz - :x: **sockjs-0.3.19.tgz** (Vulnerable Library) </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary> <p> Incorrect handling of Upgrade header with the value websocket leads in crashing of containers hosting sockjs apps. This affects the package sockjs before 0.3.20. 
<p>Publish Date: 2020-07-09 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-7693>CVE-2020-7693</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.3</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: N/A - Attack Complexity: N/A - Privileges Required: N/A - User Interaction: N/A - Scope: N/A - Impact Metrics: - Confidentiality Impact: N/A - Integrity Impact: N/A - Availability Impact: N/A </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://github.com/sockjs/sockjs-node/pull/265">https://github.com/sockjs/sockjs-node/pull/265</a></p> <p>Release Date: 2020-07-09</p> <p>Fix Resolution: sockjs - 0.3.20</p> </p> </details> <p></p> <!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"javascript/Node.js","packageName":"sockjs","packageVersion":"0.3.19","isTransitiveDependency":true,"dependencyTree":"react-styleguidist:7.3.11;webpack-dev-server:2.11.5;sockjs:0.3.19","isMinimumFixVersionAvailable":true,"minimumFixVersion":"sockjs - 0.3.20"}],"vulnerabilityIdentifier":"CVE-2020-7693","vulnerabilityDetails":"Incorrect handling of Upgrade header with the value websocket leads in crashing of containers hosting sockjs apps. This affects the package sockjs before 0.3.20.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-7693","cvss3Severity":"medium","cvss3Score":"5.3","cvss3Metrics":{"A":"N/A","AC":"N/A","PR":"N/A","S":"N/A","C":"N/A","UI":"N/A","AV":"N/A","I":"N/A"},"extraData":{}}</REMEDIATE> -->
True
CVE-2020-7693 (Medium) detected in sockjs-0.3.19.tgz - ## CVE-2020-7693 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>sockjs-0.3.19.tgz</b></p></summary> <p>SockJS-node is a server counterpart of SockJS-client a JavaScript library that provides a WebSocket-like object in the browser. SockJS gives you a coherent, cross-browser, Javascript API which creates a low latency, full duplex, cross-domain communication</p> <p>Library home page: <a href="https://registry.npmjs.org/sockjs/-/sockjs-0.3.19.tgz">https://registry.npmjs.org/sockjs/-/sockjs-0.3.19.tgz</a></p> <p>Path to dependency file: /tmp/ws-scm/azure-iot-platform-dotnet/src/webui/package.json</p> <p>Path to vulnerable library: /tmp/ws-scm/azure-iot-platform-dotnet/src/webui/node_modules/sockjs/package.json,/tmp/ws-scm/azure-iot-platform-dotnet/src/webui/node_modules/sockjs/package.json</p> <p> Dependency Hierarchy: - react-styleguidist-7.3.11.tgz (Root Library) - webpack-dev-server-2.11.5.tgz - :x: **sockjs-0.3.19.tgz** (Vulnerable Library) </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary> <p> Incorrect handling of Upgrade header with the value websocket leads in crashing of containers hosting sockjs apps. This affects the package sockjs before 0.3.20. 
<p>Publish Date: 2020-07-09 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-7693>CVE-2020-7693</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.3</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: N/A - Attack Complexity: N/A - Privileges Required: N/A - User Interaction: N/A - Scope: N/A - Impact Metrics: - Confidentiality Impact: N/A - Integrity Impact: N/A - Availability Impact: N/A </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://github.com/sockjs/sockjs-node/pull/265">https://github.com/sockjs/sockjs-node/pull/265</a></p> <p>Release Date: 2020-07-09</p> <p>Fix Resolution: sockjs - 0.3.20</p> </p> </details> <p></p> <!-- <REMEDIATE>{"isOpenPROnVulnerability":true,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"javascript/Node.js","packageName":"sockjs","packageVersion":"0.3.19","isTransitiveDependency":true,"dependencyTree":"react-styleguidist:7.3.11;webpack-dev-server:2.11.5;sockjs:0.3.19","isMinimumFixVersionAvailable":true,"minimumFixVersion":"sockjs - 0.3.20"}],"vulnerabilityIdentifier":"CVE-2020-7693","vulnerabilityDetails":"Incorrect handling of Upgrade header with the value websocket leads in crashing of containers hosting sockjs apps. This affects the package sockjs before 0.3.20.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-7693","cvss3Severity":"medium","cvss3Score":"5.3","cvss3Metrics":{"A":"N/A","AC":"N/A","PR":"N/A","S":"N/A","C":"N/A","UI":"N/A","AV":"N/A","I":"N/A"},"extraData":{}}</REMEDIATE> -->
non_defect
cve medium detected in sockjs tgz cve medium severity vulnerability vulnerable library sockjs tgz sockjs node is a server counterpart of sockjs client a javascript library that provides a websocket like object in the browser sockjs gives you a coherent cross browser javascript api which creates a low latency full duplex cross domain communication library home page a href path to dependency file tmp ws scm azure iot platform dotnet src webui package json path to vulnerable library tmp ws scm azure iot platform dotnet src webui node modules sockjs package json tmp ws scm azure iot platform dotnet src webui node modules sockjs package json dependency hierarchy react styleguidist tgz root library webpack dev server tgz x sockjs tgz vulnerable library vulnerability details incorrect handling of upgrade header with the value websocket leads in crashing of containers hosting sockjs apps this affects the package sockjs before publish date url a href cvss score details base score metrics exploitability metrics attack vector n a attack complexity n a privileges required n a user interaction n a scope n a impact metrics confidentiality impact n a integrity impact n a availability impact n a for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution sockjs isopenpronvulnerability true ispackagebased true isdefaultbranch true packages vulnerabilityidentifier cve vulnerabilitydetails incorrect handling of upgrade header with the value websocket leads in crashing of containers hosting sockjs apps this affects the package sockjs before vulnerabilityurl
0
7,300
5,965,685,063
IssuesEvent
2017-05-30 12:18:10
AccessiDys/AccessiDys
https://api.github.com/repos/AccessiDys/AccessiDys
closed
Adapted document: Loading takes too long
bug Must (Urgent/Important) Performance
NB: Performance bug. After analysis, please propose one or more solutions that meet the following criterion: the user's wait time to display a document must not exceed 3 seconds. Scope: documents in the text editor and document viewing (preview, display, via bookmarklet or via the shared link). Possible solutions: introduce a progressive loading mode that displays the first visible lines, then displays the remaining lines based on the cursor or scroll position. Warning: in the text editor, take into account that the user could modify the document ==> What about saving (wait time, guarantee that the whole document is saved, ..). Based on your analysis, we will validate its implementation. Environment: staging/prod
True
Adapted document: Loading takes too long - NB: Performance bug. After analysis, please propose one or more solutions that meet the following criterion: the user's wait time to display a document must not exceed 3 seconds. Scope: documents in the text editor and document viewing (preview, display, via bookmarklet or via the shared link). Possible solutions: introduce a progressive loading mode that displays the first visible lines, then displays the remaining lines based on the cursor or scroll position. Warning: in the text editor, take into account that the user could modify the document ==> What about saving (wait time, guarantee that the whole document is saved, ..). Based on your analysis, we will validate its implementation. Environment: staging/prod
non_defect
adapted document loading takes too long nb performance bug after analysis please propose one or more solutions that meet the following criterion the user s wait time to display a document must not exceed seconds scope documents in the text editor and document viewing preview display via bookmarklet or via the shared link possible solutions introduce a progressive loading mode that displays the first visible lines then displays the remaining lines based on the cursor or scroll position warning in the text editor take into account that the user could modify the document what about saving wait time guarantee that the whole document is saved based on your analysis we will validate its implementation environment staging prod
0
507,722
14,680,163,796
IssuesEvent
2020-12-31 09:14:37
k8smeetup/website-tasks
https://api.github.com/repos/k8smeetup/website-tasks
opened
/docs/tasks/run-application/horizontal-pod-autoscale.md
lang/zh priority/P0 sync/update version/master welcome
Source File: [/docs/tasks/run-application/horizontal-pod-autoscale.md](https://github.com/kubernetes/website/blob/master/content/en/docs/tasks/run-application/horizontal-pod-autoscale.md) Diff command reference: ```bash # View the update diff between the source document and the translated document git diff --no-index -- content/en/docs/tasks/run-application/horizontal-pod-autoscale.md content/zh/docs/tasks/run-application/horizontal-pod-autoscale.md # View the source document's update diff across branches git diff release-1.19 master -- content/en/docs/tasks/run-application/horizontal-pod-autoscale.md ```
1.0
/docs/tasks/run-application/horizontal-pod-autoscale.md - Source File: [/docs/tasks/run-application/horizontal-pod-autoscale.md](https://github.com/kubernetes/website/blob/master/content/en/docs/tasks/run-application/horizontal-pod-autoscale.md) Diff command reference: ```bash # View the update diff between the source document and the translated document git diff --no-index -- content/en/docs/tasks/run-application/horizontal-pod-autoscale.md content/zh/docs/tasks/run-application/horizontal-pod-autoscale.md # View the source document's update diff across branches git diff release-1.19 master -- content/en/docs/tasks/run-application/horizontal-pod-autoscale.md ```
non_defect
docs tasks run application horizontal pod autoscale md source file diff command reference bash view the update diff between the source document and the translated document git diff no index content en docs tasks run application horizontal pod autoscale md content zh docs tasks run application horizontal pod autoscale md view the source document update diff across branches git diff release master content en docs tasks run application horizontal pod autoscale md
0
446,468
31,478,491,916
IssuesEvent
2023-08-30 12:26:32
trilinos/Trilinos
https://api.github.com/repos/trilinos/Trilinos
closed
Wrong recommendation in the documentation: -D Trilinos_INSTALL_INCLUDE_DIR="/usr/Trilinos_include"
type: question impacting: documentation MARKED_FOR_CLOSURE CLOSED_DUE_TO_INACTIVITY
The documentation page [here](https://docs.trilinos.org/files/TrilinosBuildReference.html#installing) recommends ```-D Trilinos_INSTALL_INCLUDE_DIR="/usr/Trilinos_include"``` which causes headers installed into ```/usr/usr/Trilinos_include```. This is probably not what is desired. It should read: ```-D Trilinos_INSTALL_INCLUDE_DIR="Trilinos_include"``` or ```-D Trilinos_INSTALL_INCLUDE_DIR="include/trilinos"```.
1.0
Wrong recommendation in the documentation: -D Trilinos_INSTALL_INCLUDE_DIR="/usr/Trilinos_include" - The documentation page [here](https://docs.trilinos.org/files/TrilinosBuildReference.html#installing) recommends ```-D Trilinos_INSTALL_INCLUDE_DIR="/usr/Trilinos_include"``` which causes headers installed into ```/usr/usr/Trilinos_include```. This is probably not what is desired. It should read: ```-D Trilinos_INSTALL_INCLUDE_DIR="Trilinos_include"``` or ```-D Trilinos_INSTALL_INCLUDE_DIR="include/trilinos"```.
non_defect
wrong recommendation in the documentation d trilinos install include dir usr trilinos include the documentation page recommends d trilinos install include dir usr trilinos include which causes headers installed into usr usr trilinos include this is probably not what is desired it should read d trilinos install include dir trilinos include or d trilinos install include dir include trilinos
0
69,111
22,162,126,748
IssuesEvent
2022-06-04 16:57:08
vector-im/element-android
https://api.github.com/repos/vector-im/element-android
opened
The screen cannot be turned on when there is an incoming call.
T-Defect
### Steps to reproduce Make a voice or video call to this device. ### Outcome #### What did you expect? Lights up the screen within the time frame set according to the sync interval; displays the incoming call screen, and plays a ringtone. #### What happened instead? If the screen is off (when the screen is locked) for a certain period of time (such as at least 1 minute), there is a high chance that the incoming call will not be displayed normally, and there will be no ringtone. After the screen is manually turned on, the screen will switch from the lock screen screen to the incoming call screen. and play a ringtone. This problem persists even if the sync interval is set to 1 second. Element is allowed to run in the background, and notification popups are allowed. ### Your phone model Mi 9T (Redmi K20) ### Operating system version MIUI 12.5.2 (based on Android 11) ### Application version and app store Element 1.4.14 ### Homeserver Synapse 1.60.0 ### Will you send logs? No ### Are you willing to provide a PR? No
1.0
The screen cannot be turned on when there is an incoming call. - ### Steps to reproduce Make a voice or video call to this device. ### Outcome #### What did you expect? Lights up the screen within the time frame set according to the sync interval; displays the incoming call screen, and plays a ringtone. #### What happened instead? If the screen is off (when the screen is locked) for a certain period of time (such as at least 1 minute), there is a high chance that the incoming call will not be displayed normally, and there will be no ringtone. After the screen is manually turned on, the screen will switch from the lock screen screen to the incoming call screen. and play a ringtone. This problem persists even if the sync interval is set to 1 second. Element is allowed to run in the background, and notification popups are allowed. ### Your phone model Mi 9T (Redmi K20) ### Operating system version MIUI 12.5.2 (based on Android 11) ### Application version and app store Element 1.4.14 ### Homeserver Synapse 1.60.0 ### Will you send logs? No ### Are you willing to provide a PR? No
defect
the screen cannot be turned on when there is an incoming call steps to reproduce make a voice or video call to this device outcome what did you expect lights up the screen within the time frame set according to the sync interval displays the incoming call screen and plays a ringtone what happened instead if the screen is off when the screen is locked for a certain period of time such as at least minute there is a high chance that the incoming call will not be displayed normally and there will be no ringtone after the screen is manually turned on the screen will switch from the lock screen screen to the incoming call screen and play a ringtone this problem persists even if the sync interval is set to second element is allowed to run in the background and notification popups are allowed your phone model mi redmi operating system version miui based on android application version and app store element homeserver synapse will you send logs no are you willing to provide a pr no
1
129,473
18,102,526,635
IssuesEvent
2021-09-22 15:32:49
gms-ws-demo/JS-Demo-Sep2021
https://api.github.com/repos/gms-ws-demo/JS-Demo-Sep2021
opened
CVE-2021-32803 (High) detected in tar-4.4.8.tgz
security vulnerability
## CVE-2021-32803 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>tar-4.4.8.tgz</b></p></summary> <p>tar for node</p> <p>Library home page: <a href="https://registry.npmjs.org/tar/-/tar-4.4.8.tgz">https://registry.npmjs.org/tar/-/tar-4.4.8.tgz</a></p> <p> Dependency Hierarchy: - nodemon-1.19.1.tgz (Root Library) - chokidar-2.1.6.tgz - fsevents-1.2.9.tgz - node-pre-gyp-0.12.0.tgz - :x: **tar-4.4.8.tgz** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/gms-ws-demo/JS-Demo-Sep2021/commit/e8cd219daa23fb09c60a7e7095b13c9e8372f529">e8cd219daa23fb09c60a7e7095b13c9e8372f529</a></p> <p>Found in base branch: <b>master</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> The npm package "tar" (aka node-tar) before versions 6.1.2, 5.0.7, 4.4.15, and 3.2.3 has an arbitrary File Creation/Overwrite vulnerability via insufficient symlink protection. `node-tar` aims to guarantee that any file whose location would be modified by a symbolic link is not extracted. This is, in part, achieved by ensuring that extracted directories are not symlinks. Additionally, in order to prevent unnecessary `stat` calls to determine whether a given path is a directory, paths are cached when directories are created. This logic was insufficient when extracting tar files that contained both a directory and a symlink with the same name as the directory. This order of operations resulted in the directory being created and added to the `node-tar` directory cache. When a directory is present in the directory cache, subsequent calls to mkdir for that directory are skipped. However, this is also where `node-tar` checks for symlinks occur. 
By first creating a directory, and then replacing that directory with a symlink, it was thus possible to bypass `node-tar` symlink checks on directories, essentially allowing an untrusted tar file to symlink into an arbitrary location and subsequently extracting arbitrary files into that location, thus allowing arbitrary file creation and overwrite. This issue was addressed in releases 3.2.3, 4.4.15, 5.0.7 and 6.1.2. <p>Publish Date: 2021-08-03 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-32803>CVE-2021-32803</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>8.1</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: Required - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: High - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. 
</p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://github.com/npm/node-tar/security/advisories/GHSA-r628-mhmh-qjhw">https://github.com/npm/node-tar/security/advisories/GHSA-r628-mhmh-qjhw</a></p> <p>Release Date: 2021-08-03</p> <p>Fix Resolution: tar - 3.2.3, 4.4.15, 5.0.7, 6.1.2</p> </p> </details> <p></p> <!-- <REMEDIATE>{"isOpenPROnVulnerability":false,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"javascript/Node.js","packageName":"tar","packageVersion":"4.4.8","packageFilePaths":[],"isTransitiveDependency":true,"dependencyTree":"nodemon:1.19.1;chokidar:2.1.6;fsevents:1.2.9;node-pre-gyp:0.12.0;tar:4.4.8","isMinimumFixVersionAvailable":true,"minimumFixVersion":"tar - 3.2.3, 4.4.15, 5.0.7, 6.1.2"}],"baseBranches":["master"],"vulnerabilityIdentifier":"CVE-2021-32803","vulnerabilityDetails":"The npm package \"tar\" (aka node-tar) before versions 6.1.2, 5.0.7, 4.4.15, and 3.2.3 has an arbitrary File Creation/Overwrite vulnerability via insufficient symlink protection. `node-tar` aims to guarantee that any file whose location would be modified by a symbolic link is not extracted. This is, in part, achieved by ensuring that extracted directories are not symlinks. Additionally, in order to prevent unnecessary `stat` calls to determine whether a given path is a directory, paths are cached when directories are created. This logic was insufficient when extracting tar files that contained both a directory and a symlink with the same name as the directory. This order of operations resulted in the directory being created and added to the `node-tar` directory cache. When a directory is present in the directory cache, subsequent calls to mkdir for that directory are skipped. However, this is also where `node-tar` checks for symlinks occur. 
By first creating a directory, and then replacing that directory with a symlink, it was thus possible to bypass `node-tar` symlink checks on directories, essentially allowing an untrusted tar file to symlink into an arbitrary location and subsequently extracting arbitrary files into that location, thus allowing arbitrary file creation and overwrite. This issue was addressed in releases 3.2.3, 4.4.15, 5.0.7 and 6.1.2.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-32803","cvss3Severity":"high","cvss3Score":"8.1","cvss3Metrics":{"A":"High","AC":"Low","PR":"None","S":"Unchanged","C":"None","UI":"Required","AV":"Network","I":"High"},"extraData":{}}</REMEDIATE> -->
True
CVE-2021-32803 (High) detected in tar-4.4.8.tgz - ## CVE-2021-32803 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>tar-4.4.8.tgz</b></p></summary> <p>tar for node</p> <p>Library home page: <a href="https://registry.npmjs.org/tar/-/tar-4.4.8.tgz">https://registry.npmjs.org/tar/-/tar-4.4.8.tgz</a></p> <p> Dependency Hierarchy: - nodemon-1.19.1.tgz (Root Library) - chokidar-2.1.6.tgz - fsevents-1.2.9.tgz - node-pre-gyp-0.12.0.tgz - :x: **tar-4.4.8.tgz** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/gms-ws-demo/JS-Demo-Sep2021/commit/e8cd219daa23fb09c60a7e7095b13c9e8372f529">e8cd219daa23fb09c60a7e7095b13c9e8372f529</a></p> <p>Found in base branch: <b>master</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> The npm package "tar" (aka node-tar) before versions 6.1.2, 5.0.7, 4.4.15, and 3.2.3 has an arbitrary File Creation/Overwrite vulnerability via insufficient symlink protection. `node-tar` aims to guarantee that any file whose location would be modified by a symbolic link is not extracted. This is, in part, achieved by ensuring that extracted directories are not symlinks. Additionally, in order to prevent unnecessary `stat` calls to determine whether a given path is a directory, paths are cached when directories are created. This logic was insufficient when extracting tar files that contained both a directory and a symlink with the same name as the directory. This order of operations resulted in the directory being created and added to the `node-tar` directory cache. When a directory is present in the directory cache, subsequent calls to mkdir for that directory are skipped. However, this is also where `node-tar` checks for symlinks occur. 
By first creating a directory, and then replacing that directory with a symlink, it was thus possible to bypass `node-tar` symlink checks on directories, essentially allowing an untrusted tar file to symlink into an arbitrary location and subsequently extracting arbitrary files into that location, thus allowing arbitrary file creation and overwrite. This issue was addressed in releases 3.2.3, 4.4.15, 5.0.7 and 6.1.2. <p>Publish Date: 2021-08-03 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-32803>CVE-2021-32803</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>8.1</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: Required - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: High - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. 
</p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://github.com/npm/node-tar/security/advisories/GHSA-r628-mhmh-qjhw">https://github.com/npm/node-tar/security/advisories/GHSA-r628-mhmh-qjhw</a></p> <p>Release Date: 2021-08-03</p> <p>Fix Resolution: tar - 3.2.3, 4.4.15, 5.0.7, 6.1.2</p> </p> </details> <p></p> <!-- <REMEDIATE>{"isOpenPROnVulnerability":false,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"javascript/Node.js","packageName":"tar","packageVersion":"4.4.8","packageFilePaths":[],"isTransitiveDependency":true,"dependencyTree":"nodemon:1.19.1;chokidar:2.1.6;fsevents:1.2.9;node-pre-gyp:0.12.0;tar:4.4.8","isMinimumFixVersionAvailable":true,"minimumFixVersion":"tar - 3.2.3, 4.4.15, 5.0.7, 6.1.2"}],"baseBranches":["master"],"vulnerabilityIdentifier":"CVE-2021-32803","vulnerabilityDetails":"The npm package \"tar\" (aka node-tar) before versions 6.1.2, 5.0.7, 4.4.15, and 3.2.3 has an arbitrary File Creation/Overwrite vulnerability via insufficient symlink protection. `node-tar` aims to guarantee that any file whose location would be modified by a symbolic link is not extracted. This is, in part, achieved by ensuring that extracted directories are not symlinks. Additionally, in order to prevent unnecessary `stat` calls to determine whether a given path is a directory, paths are cached when directories are created. This logic was insufficient when extracting tar files that contained both a directory and a symlink with the same name as the directory. This order of operations resulted in the directory being created and added to the `node-tar` directory cache. When a directory is present in the directory cache, subsequent calls to mkdir for that directory are skipped. However, this is also where `node-tar` checks for symlinks occur. 
By first creating a directory, and then replacing that directory with a symlink, it was thus possible to bypass `node-tar` symlink checks on directories, essentially allowing an untrusted tar file to symlink into an arbitrary location and subsequently extracting arbitrary files into that location, thus allowing arbitrary file creation and overwrite. This issue was addressed in releases 3.2.3, 4.4.15, 5.0.7 and 6.1.2.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-32803","cvss3Severity":"high","cvss3Score":"8.1","cvss3Metrics":{"A":"High","AC":"Low","PR":"None","S":"Unchanged","C":"None","UI":"Required","AV":"Network","I":"High"},"extraData":{}}</REMEDIATE> -->
non_defect
cve high detected in tar tgz cve high severity vulnerability vulnerable library tar tgz tar for node library home page a href dependency hierarchy nodemon tgz root library chokidar tgz fsevents tgz node pre gyp tgz x tar tgz vulnerable library found in head commit a href found in base branch master vulnerability details the npm package tar aka node tar before versions and has an arbitrary file creation overwrite vulnerability via insufficient symlink protection node tar aims to guarantee that any file whose location would be modified by a symbolic link is not extracted this is in part achieved by ensuring that extracted directories are not symlinks additionally in order to prevent unnecessary stat calls to determine whether a given path is a directory paths are cached when directories are created this logic was insufficient when extracting tar files that contained both a directory and a symlink with the same name as the directory this order of operations resulted in the directory being created and added to the node tar directory cache when a directory is present in the directory cache subsequent calls to mkdir for that directory are skipped however this is also where node tar checks for symlinks occur by first creating a directory and then replacing that directory with a symlink it was thus possible to bypass node tar symlink checks on directories essentially allowing an untrusted tar file to symlink into an arbitrary location and subsequently extracting arbitrary files into that location thus allowing arbitrary file creation and overwrite this issue was addressed in releases and publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction required scope unchanged impact metrics confidentiality impact none integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date 
fix resolution tar isopenpronvulnerability false ispackagebased true isdefaultbranch true packages istransitivedependency true dependencytree nodemon chokidar fsevents node pre gyp tar isminimumfixversionavailable true minimumfixversion tar basebranches vulnerabilityidentifier cve vulnerabilitydetails the npm package tar aka node tar before versions and has an arbitrary file creation overwrite vulnerability via insufficient symlink protection node tar aims to guarantee that any file whose location would be modified by a symbolic link is not extracted this is in part achieved by ensuring that extracted directories are not symlinks additionally in order to prevent unnecessary stat calls to determine whether a given path is a directory paths are cached when directories are created this logic was insufficient when extracting tar files that contained both a directory and a symlink with the same name as the directory this order of operations resulted in the directory being created and added to the node tar directory cache when a directory is present in the directory cache subsequent calls to mkdir for that directory are skipped however this is also where node tar checks for symlinks occur by first creating a directory and then replacing that directory with a symlink it was thus possible to bypass node tar symlink checks on directories essentially allowing an untrusted tar file to symlink into an arbitrary location and subsequently extracting arbitrary files into that location thus allowing arbitrary file creation and overwrite this issue was addressed in releases and vulnerabilityurl
0
45,390
12,758,813,484
IssuesEvent
2020-06-29 03:40:35
SasView/sasview
https://api.github.com/repos/SasView/sasview
opened
Invariant does not report the total invariant
critical defect
In 5.x series the invariant appears to only always report the invariant under the actual data on the front panel (under `Invariant Total [Q]` regardless of whether or not extrapolation is chosen. The only way to get the total would be to click the status button and sum the contributions. Given the value being reported is called "Total" (and would be expected to be the total) I am labeling this as critical.
1.0
Invariant does not report the total invariant - In 5.x series the invariant appears to only always report the invariant under the actual data on the front panel (under `Invariant Total [Q]` regardless of whether or not extrapolation is chosen. The only way to get the total would be to click the status button and sum the contributions. Given the value being reported is called "Total" (and would be expected to be the total) I am labeling this as critical.
defect
invariant does not report the total invariant in x series the invariant appears to only always report the invariant under the actual data on the front panel under invariant total regardless of whether or not extrapolation is chosen the only way to get the total would be to click the status button and sum the contributions given the value being reported is called total and would be expected to be the total i am labeling this as critical
1
54,016
13,308,609,239
IssuesEvent
2020-08-26 01:31:28
department-of-veterans-affairs/va.gov-cms
https://api.github.com/repos/department-of-veterans-affairs/va.gov-cms
opened
Ordered list is broken in the WYSIWYG
Content forms Defect Drupal engineering Stretch goal
**Describe the defect** Ordered list appears as unordered in the WYSIWYG **To Reproduce** Steps to reproduce the behavior: 1. Go to /node/add/page 2. Add a WYSIWYG content block 3. Add an ordered list 4. Note that it appears as an unordered list in the WYSIWYG **Expected behavior** It should appear as an ordered list. **Screenshots** ![Edit_CMS_Help_Page_Access_Training___VA_gov_CMS](https://user-images.githubusercontent.com/643678/91244419-4de15380-e71a-11ea-88fb-6101e4070c01.jpg)
1.0
Ordered list is broken in the WYSIWYG - **Describe the defect** Ordered list appears as unordered in the WYSIWYG **To Reproduce** Steps to reproduce the behavior: 1. Go to /node/add/page 2. Add a WYSIWYG content block 3. Add an ordered list 4. Note that it appears as an unordered list in the WYSIWYG **Expected behavior** It should appear as an ordered list. **Screenshots** ![Edit_CMS_Help_Page_Access_Training___VA_gov_CMS](https://user-images.githubusercontent.com/643678/91244419-4de15380-e71a-11ea-88fb-6101e4070c01.jpg)
defect
ordered list is broken in the wysiwyg describe the defect ordered list appears as unordered in the wysiwyg to reproduce steps to reproduce the behavior go to node add page add a wysiwyg content block add an ordered list note that it appears as an unordered list in the wysiwyg expected behavior it should appear as an ordered list screenshots
1
49,925
13,187,295,508
IssuesEvent
2020-08-13 02:57:44
icecube-trac/tix3
https://api.github.com/repos/icecube-trac/tix3
opened
Multi-Core template jobs request lots of RAM (Trac #2324)
Incomplete Migration Migrated from Trac csky defect
<details> <summary><em>Migrated from <a href="https://code.icecube.wisc.edu/ticket/2324">https://code.icecube.wisc.edu/ticket/2324</a>, reported by steve.sclafani and owned by steve.sclafani</em></summary> <p> ```json { "status": "closed", "changetime": "2019-06-11T16:07:37", "description": "Calculating template sensitivity causes jobs to request ~9GB per cpu instead of 9 total, this immediatly eats up all the RAM available on a cobalt cluster. This happens when the sensitivity trials start, not creating the PDFs or anything. So far have only tested on cobalts", "reporter": "steve.sclafani", "cc": "", "resolution": "fixed", "_ts": "1560269257226754", "component": "csky", "summary": "Multi-Core template jobs request lots of RAM", "priority": "normal", "keywords": "", "time": "2019-06-10T18:37:17", "milestone": "", "owner": "steve.sclafani", "type": "defect" } ``` </p> </details>
1.0
Multi-Core template jobs request lots of RAM (Trac #2324) - <details> <summary><em>Migrated from <a href="https://code.icecube.wisc.edu/ticket/2324">https://code.icecube.wisc.edu/ticket/2324</a>, reported by steve.sclafani and owned by steve.sclafani</em></summary> <p> ```json { "status": "closed", "changetime": "2019-06-11T16:07:37", "description": "Calculating template sensitivity causes jobs to request ~9GB per cpu instead of 9 total, this immediatly eats up all the RAM available on a cobalt cluster. This happens when the sensitivity trials start, not creating the PDFs or anything. So far have only tested on cobalts", "reporter": "steve.sclafani", "cc": "", "resolution": "fixed", "_ts": "1560269257226754", "component": "csky", "summary": "Multi-Core template jobs request lots of RAM", "priority": "normal", "keywords": "", "time": "2019-06-10T18:37:17", "milestone": "", "owner": "steve.sclafani", "type": "defect" } ``` </p> </details>
defect
multi core template jobs request lots of ram trac migrated from json status closed changetime description calculating template sensitivity causes jobs to request per cpu instead of total this immediatly eats up all the ram available on a cobalt cluster this happens when the sensitivity trials start not creating the pdfs or anything so far have only tested on cobalts reporter steve sclafani cc resolution fixed ts component csky summary multi core template jobs request lots of ram priority normal keywords time milestone owner steve sclafani type defect
1
49,974
13,187,301,918
IssuesEvent
2020-08-13 02:58:58
icecube-trac/tix3
https://api.github.com/repos/icecube-trac/tix3
opened
PROPOSAL x-sections error (Trac #2392)
Incomplete Migration Migrated from Trac combo simulation defect
<details> <summary><em>Migrated from <a href="https://code.icecube.wisc.edu/ticket/2392">https://code.icecube.wisc.edu/ticket/2392</a>, reported by juancarlos and owned by jsoedingrekso</em></summary> <p> ```json { "status": "closed", "changetime": "2019-12-20T13:05:24", "description": "In testing the simulation chain I ran into the following exception:\n\nFATAL (PROPOSAL): No Cross Section was found!!! (Propagator.cxx:740 in void PROPOSAL::Propagator::ChooseCurrentCollection(const PROPOSAL::Vector3D&, const PROPOSAL::Vector3D&))\nERROR (I3Module): propagator_propagator: Exception thrown (I3Module.cxx:123 in void I3Module::Do(void (I3Module::*)()))\n\nThis happens both with the I3_TESTDATA and when generating tables by hand.\n\nThis error was produced while processing a corsika 5-component simulation. Similar test with NuGen did not reproduce the error.\n\nThe input file used for this test can be found in cobalt:/data/user/juancarlos/PROPOSAL-test/cors.i3\n\nExecution script from simprod-scripts:\n\npython simprod-scripts/resources/scripts/clsim.py --gcdfile /cvmfs/icecube.opensciencegrid.org/data/GCD/GeoCalibDetectorStatus_AVG_55697-57531_PASS2_SPE_withStdNoise.i3.gz --inputfile nugen.i3.zst --outputfile photons.i3 --UseGPUs --UseGSLRNG", "reporter": "juancarlos", "cc": "", "resolution": "fixed", "_ts": "1576847124392693", "component": "combo simulation", "summary": "PROPOSAL x-sections error", "priority": "blocker", "keywords": "PROPOSAL", "time": "2019-12-19T17:00:09", "milestone": "", "owner": "jsoedingrekso", "type": "defect" } ``` </p> </details>
1.0
PROPOSAL x-sections error (Trac #2392) - <details> <summary><em>Migrated from <a href="https://code.icecube.wisc.edu/ticket/2392">https://code.icecube.wisc.edu/ticket/2392</a>, reported by juancarlos and owned by jsoedingrekso</em></summary> <p> ```json { "status": "closed", "changetime": "2019-12-20T13:05:24", "description": "In testing the simulation chain I ran into the following exception:\n\nFATAL (PROPOSAL): No Cross Section was found!!! (Propagator.cxx:740 in void PROPOSAL::Propagator::ChooseCurrentCollection(const PROPOSAL::Vector3D&, const PROPOSAL::Vector3D&))\nERROR (I3Module): propagator_propagator: Exception thrown (I3Module.cxx:123 in void I3Module::Do(void (I3Module::*)()))\n\nThis happens both with the I3_TESTDATA and when generating tables by hand.\n\nThis error was produced while processing a corsika 5-component simulation. Similar test with NuGen did not reproduce the error.\n\nThe input file used for this test can be found in cobalt:/data/user/juancarlos/PROPOSAL-test/cors.i3\n\nExecution script from simprod-scripts:\n\npython simprod-scripts/resources/scripts/clsim.py --gcdfile /cvmfs/icecube.opensciencegrid.org/data/GCD/GeoCalibDetectorStatus_AVG_55697-57531_PASS2_SPE_withStdNoise.i3.gz --inputfile nugen.i3.zst --outputfile photons.i3 --UseGPUs --UseGSLRNG", "reporter": "juancarlos", "cc": "", "resolution": "fixed", "_ts": "1576847124392693", "component": "combo simulation", "summary": "PROPOSAL x-sections error", "priority": "blocker", "keywords": "PROPOSAL", "time": "2019-12-19T17:00:09", "milestone": "", "owner": "jsoedingrekso", "type": "defect" } ``` </p> </details>
defect
proposal x sections error trac migrated from json status closed changetime description in testing the simulation chain i ran into the following exception n nfatal proposal no cross section was found propagator cxx in void proposal propagator choosecurrentcollection const proposal const proposal nerror propagator propagator exception thrown cxx in void do void n nthis happens both with the testdata and when generating tables by hand n nthis error was produced while processing a corsika component simulation similar test with nugen did not reproduce the error n nthe input file used for this test can be found in cobalt data user juancarlos proposal test cors n nexecution script from simprod scripts n npython simprod scripts resources scripts clsim py gcdfile cvmfs icecube opensciencegrid org data gcd geocalibdetectorstatus avg spe withstdnoise gz inputfile nugen zst outputfile photons usegpus usegslrng reporter juancarlos cc resolution fixed ts component combo simulation summary proposal x sections error priority blocker keywords proposal time milestone owner jsoedingrekso type defect
1
37,298
8,343,517,833
IssuesEvent
2018-09-30 05:44:03
liuxuewei/bluebee-accounting-system
https://api.github.com/repos/liuxuewei/bluebee-accounting-system
closed
反馈
Priority-Medium Type-Defect auto-migrated
``` 软件还是比较好的,试用了一下午,发现了不少问题,有时候会响应比较慢,添加的商品要重启软件之后才能出来,进货单保存无返回值,继续点保存会保存多条数据,总之还是有不少bug的,希望开发者再接再励!! ``` Original issue reported on code.google.com by `wsjta...@gmail.com` on 18 Aug 2013 at 5:06
1.0
反馈 - ``` 软件还是比较好的,试用了一下午,发现了不少问题,有时候会响应比较慢,添加的商品要重启软件之后才能出来,进货单保存无返回值,继续点保存会保存多条数据,总之还是有不少bug的,希望开发者再接再励!! ``` Original issue reported on code.google.com by `wsjta...@gmail.com` on 18 Aug 2013 at 5:06
defect
反馈 软件还是比较好的,试用了一下午,发现了不少问题,有时候会响应比较慢,添加的商品要重启软件之后才能出来,进货单保存无返回值,继续点保存会保存多条数据,总之还是有不少bug的,希望开发者再接再励!! original issue reported on code google com by wsjta gmail com on aug at
1
26,655
6,782,268,339
IssuesEvent
2017-10-30 07:09:36
w3c/aria-practices
https://api.github.com/repos/w3c/aria-practices
closed
Review scrollable listbox example
code example Needs Review
The [scrollable listbox example](http://w3c.github.io/aria-practices/examples/listbox/listbox-scrollable.html) developed for issue #123 is ready for task force review. #### Reviews Requested as of October 6, 2017 While the practices task force encourages feedback from anyone in the web and accessibility engineering communities, peer review is requested from the following task force members: - [x] Review by Ann (@annabbott) - [x] Review by James (@jnurthen) - [x] Review by Siri (@shirsha)
1.0
Review scrollable listbox example - The [scrollable listbox example](http://w3c.github.io/aria-practices/examples/listbox/listbox-scrollable.html) developed for issue #123 is ready for task force review. #### Reviews Requested as of October 6, 2017 While the practices task force encourages feedback from anyone in the web and accessibility engineering communities, peer review is requested from the following task force members: - [x] Review by Ann (@annabbott) - [x] Review by James (@jnurthen) - [x] Review by Siri (@shirsha)
non_defect
review scrollable listbox example the developed for issue is ready for task force review reviews requested as of october while the practices task force encourages feedback from anyone in the web and accessibility engineering communities peer review is requested from the following task force members review by ann annabbott review by james jnurthen review by siri shirsha
0
79,641
7,722,026,580
IssuesEvent
2018-05-24 07:58:58
MajkiIT/polish-ads-filter
https://api.github.com/repos/MajkiIT/polish-ads-filter
closed
Grupa Polska Press
cookies reguły gotowe/testowanie
``` http://www.polskatimes.pl https://plus.polskatimes.pl http://indeks.polskatimes.pl http://www.gs24.pl http://indeks.gs24.pl https://plus.gs24.pl https://www.telemagazyn.pl https://gratka.pl http://www.gp24.pl http://indeks.gp24.pl/ https://plus.gp24.pl http://www.dziennikpolski24.pl http://indeks.dziennikpolski24.pl http://www.dziennikbaltycki.pl http://indeks.dziennikbaltycki.pl https://plus.dziennikbaltycki.pl http://www.dzienniklodzki.pl http://indeks.dzienniklodzki.pl https://plus.dzienniklodzki.pl http://www.dziennikzachodni.pl http://indeks.dziennikzachodni.pl https://plus.dziennikzachodni.pl http://www.echodnia.eu http://indeks.echodnia.eu http://www.expressbydgoski.pl http://indeks.expressbydgoski.pl http://www.expressilustrowany.pl http://indeks.expressilustrowany.pl/ http://www.gol24.pl https://www.motofakty.pl http://www.nowiny24.pl http://indeks.nowiny24.pl/ https://plus.nowiny24.pl http://www.gazetakrakowska.pl http://indeks.gazetakrakowska.pl/ https://plus.gazetakrakowska.pl http://www.gazetalubuska.pl http://indeks.gazetalubuska.pl/ https://plus.gazetalubuska.pl http://www.pomorska.pl http://indeks.pomorska.pl/ https://plus.pomorska.pl http://www.gazetawroclawska.pl http://indeks.gazetawroclawska.pl/ https://plus.gazetawroclawska.pl http://www.wspolczesna.pl http://indeks.wspolczesna.pl/ https://plus.wspolczesna.pl http://regiodom.pl http://www.regiopraca.pl http://www.gloswielkopolski.pl http://indeks.gloswielkopolski.pl/ https://plus.gloswielkopolski.pl http://www.kurierlubelski.pl http://indeks.kurierlubelski.pl/ https://plus.kurierlubelski.pl http://www.poranny.pl http://indeks.poranny.pl/ https://plus.poranny.pl/ https://www.naszahistoria.pl http://naszemiasto.pl http://www.nto.pl http://indeks.nto.pl/ https://plus.nto.pl http://www.to.com.pl http://indeks.to.com.pl/ https://plus.to.com.pl/ http://www.strefaagro.pl http://www.strefabiznesu.pl ``` ![opera zdjecie_2018-05-24_054720_www gs24 
pl](https://user-images.githubusercontent.com/36385327/40464424-838c6ba2-5f1b-11e8-9738-ced156a45d04.png)
1.0
Grupa Polska Press - ``` http://www.polskatimes.pl https://plus.polskatimes.pl http://indeks.polskatimes.pl http://www.gs24.pl http://indeks.gs24.pl https://plus.gs24.pl https://www.telemagazyn.pl https://gratka.pl http://www.gp24.pl http://indeks.gp24.pl/ https://plus.gp24.pl http://www.dziennikpolski24.pl http://indeks.dziennikpolski24.pl http://www.dziennikbaltycki.pl http://indeks.dziennikbaltycki.pl https://plus.dziennikbaltycki.pl http://www.dzienniklodzki.pl http://indeks.dzienniklodzki.pl https://plus.dzienniklodzki.pl http://www.dziennikzachodni.pl http://indeks.dziennikzachodni.pl https://plus.dziennikzachodni.pl http://www.echodnia.eu http://indeks.echodnia.eu http://www.expressbydgoski.pl http://indeks.expressbydgoski.pl http://www.expressilustrowany.pl http://indeks.expressilustrowany.pl/ http://www.gol24.pl https://www.motofakty.pl http://www.nowiny24.pl http://indeks.nowiny24.pl/ https://plus.nowiny24.pl http://www.gazetakrakowska.pl http://indeks.gazetakrakowska.pl/ https://plus.gazetakrakowska.pl http://www.gazetalubuska.pl http://indeks.gazetalubuska.pl/ https://plus.gazetalubuska.pl http://www.pomorska.pl http://indeks.pomorska.pl/ https://plus.pomorska.pl http://www.gazetawroclawska.pl http://indeks.gazetawroclawska.pl/ https://plus.gazetawroclawska.pl http://www.wspolczesna.pl http://indeks.wspolczesna.pl/ https://plus.wspolczesna.pl http://regiodom.pl http://www.regiopraca.pl http://www.gloswielkopolski.pl http://indeks.gloswielkopolski.pl/ https://plus.gloswielkopolski.pl http://www.kurierlubelski.pl http://indeks.kurierlubelski.pl/ https://plus.kurierlubelski.pl http://www.poranny.pl http://indeks.poranny.pl/ https://plus.poranny.pl/ https://www.naszahistoria.pl http://naszemiasto.pl http://www.nto.pl http://indeks.nto.pl/ https://plus.nto.pl http://www.to.com.pl http://indeks.to.com.pl/ https://plus.to.com.pl/ http://www.strefaagro.pl http://www.strefabiznesu.pl ``` ![opera zdjecie_2018-05-24_054720_www gs24 
pl](https://user-images.githubusercontent.com/36385327/40464424-838c6ba2-5f1b-11e8-9738-ced156a45d04.png)
non_defect
grupa polska press
0
71,374
23,593,674,090
IssuesEvent
2022-08-23 17:16:56
openzfs/zfs
https://api.github.com/repos/openzfs/zfs
closed
Kernel panic VERIFY3(sa.sa_magic == SA_MAGIC) failed
Type: Defect
<!-- Please fill out the following template, which will help other contributors address your issue. --> <!-- Thank you for reporting an issue. *IMPORTANT* - Please check our issue tracker before opening a new issue. Additional valuable information can be found in the OpenZFS documentation and mailing list archives. Please fill in as much of the template as possible. --> ### System information <!-- add version after "|" character --> Type | Version/Name --- | --- Distribution Name | Ubuntu Distribution Version | 21.10 Kernel Version | 5.13.0.19 Architecture | x64 OpenZFS Version | 2.0.6 <!-- Command to find OpenZFS version: zfs version Commands to find kernel version: uname -r # Linux freebsd-version -r # FreeBSD --> ### Describe the problem you're observing I get kernel panic on target during initial synchronization between source and ssh remote machine using syncoid. Once i get this error the only way to fix it is to reboot the target. The zfs recv process stays in "D" state on target machine. Please suggest how i can at least kill the process. Both machines have same version of zfs,kernel and ubuntu I tested with any types of compression or switch it off on both sides with no luck. The only difference is that target has encrypted parent dataset and source doesn't. ### Describe how to reproduce the problem (source) sudo zfs create -o compression=lz4 ssd/private (target) sudo zfs create -o compression=lz4 -o keyformat=passphrase -o keylocation=file:///home/xxxx/backuppass -o canmount=noauto -o encryption=on backup/encrypted crash happens during syncoid run. sudo syncoid --no-stream --debug --create-bookmark --no-sync-snap ssd/private root@hostname:backup/encrypted/private Same happens with zfs-replicate which calls this command internally zfs send -p -c -L -v -R ssd/private@snap1| ssh root@host zfs recv -F -v -s -x encryption backup/encrypted/private The process succeeds if i send to unencrypted base dataset. 
### Include any warning/errors/backtraces from the system logs <!-- *IMPORTANT* - Please mark logs and text output from terminal commands or else Github will not display them correctly. An example is provided below. Example: ``` this is an example how log text should be marked (wrap it with ```) ``` --> kernel: [53264.927615] VERIFY3(sa.sa_magic == SA_MAGIC) failed (3511224769 == 3100762) kernel: [53264.927624] PANIC at zfs_quota.c:89:zpl_get_file_info() kernel: [53264.927628] Showing stack for process 1211 kernel: [53264.927631] CPU: 2 PID: 1211 Comm: z_upgrade Tainted: P O 5.13.0-19-generic #19-Ubuntu kernel: [53264.927635] Hardware name: To Be Filled By O.E.M. To Be Filled By O.E.M./H81M-DGS, BIOS P2.00 03/10/2016 kernel: [53264.927638] Call Trace: kernel: [53264.927644] show_stack+0x52/0x58 kernel: [53264.927652] dump_stack+0x7d/0x9c kernel: [53264.927663] spl_dumpstack+0x29/0x2b [spl] kernel: [53264.927685] spl_panic+0xd4/0xfc [spl] kernel: [53264.927701] ? __cond_resched+0x1a/0x50 kernel: [53264.927707] ? __mutex_lock.constprop.0+0x35/0x4f0 kernel: [53264.927712] ? do_raw_spin_unlock+0x9/0x10 [zfs] kernel: [53264.927881] ? __raw_spin_unlock+0x9/0x10 [zfs] kernel: [53264.928010] ? __raw_callee_save___native_queued_spin_unlock+0x15/0x23 kernel: [53264.928019] ? __raw_callee_save___native_queued_spin_unlock+0x15/0x23 kernel: [53264.928024] ? __cond_resched+0x1a/0x50 kernel: [53264.928028] ? 
slab_pre_alloc_hook.constprop.0+0x96/0xe0 kernel: [53264.928036] zpl_get_file_info+0xa0/0x230 [zfs] kernel: [53264.928236] dmu_objset_userquota_get_ids+0x161/0x440 [zfs] kernel: [53264.928384] dnode_setdirty+0x38/0xf0 [zfs] kernel: [53264.928540] dbuf_dirty+0x44b/0x6d0 [zfs] kernel: [53264.928681] dmu_buf_will_dirty_impl+0xb7/0x110 [zfs] kernel: [53264.928821] dmu_buf_will_dirty+0x16/0x20 [zfs] kernel: [53264.928959] dmu_objset_space_upgrade+0xca/0x1c0 [zfs] kernel: [53264.929107] dmu_objset_id_quota_upgrade_cb+0xae/0x190 [zfs] kernel: [53264.929205] dmu_objset_upgrade_task_cb+0xd2/0x100 [zfs] kernel: [53264.929293] taskq_thread+0x235/0x430 [spl] kernel: [53264.929309] ? wake_up_q+0xa0/0xa0 kernel: [53264.929314] kthread+0x11f/0x140 kernel: [53264.929318] ? param_set_taskq_kick+0xf0/0xf0 [spl] kernel: [53264.929329] ? set_kthread_struct+0x50/0x50 kernel: [53264.929332] ret_from_fork+0x22/0x30 kernel: [53282.420220] VERIFY(0 == sa_handle_get_from_db(zfsvfs->z_os, db, zp, SA_HDL_SHARED, &zp->z_sa_hdl)) failed kernel: [53282.420225] PANIC at zfs_znode.c:339:zfs_znode_sa_init() kernel: [53282.420228] Showing stack for process 9492 kernel: [53282.420230] CPU: 1 PID: 9492 Comm: ls Tainted: P O 5.13.0-19-generic #19-Ubuntu kernel: [53282.420232] Hardware name: To Be Filled By O.E.M. To Be Filled By O.E.M./H81M-DGS, BIOS P2.00 03/10/2016 kernel: [53282.420234] Call Trace: kernel: [53282.420237] show_stack+0x52/0x58 kernel: [53282.420244] dump_stack+0x7d/0x9c kernel: [53282.420251] spl_dumpstack+0x29/0x2b [spl] kernel: [53282.420267] spl_panic+0xd4/0xfc [spl] kernel: [53282.420278] ? queued_spin_unlock+0x9/0x10 [zfs] kernel: [53282.420405] ? do_raw_spin_unlock+0x9/0x10 [zfs] kernel: [53282.420499] ? __raw_spin_unlock+0x9/0x10 [zfs] kernel: [53282.420592] ? dmu_buf_replace_user+0x65/0x80 [zfs] kernel: [53282.420688] ? dmu_buf_set_user+0x13/0x20 [zfs] kernel: [53282.420783] ? 
dmu_buf_set_user_ie+0x15/0x20 [zfs] kernel: [53282.420878] zfs_znode_sa_init+0xd9/0xe0 [zfs] kernel: [53282.421023] zfs_znode_alloc+0x101/0x560 [zfs] kernel: [53282.421168] ? queued_spin_unlock+0x9/0x10 [zfs] kernel: [53282.421262] ? do_raw_spin_unlock+0x9/0x10 [zfs] kernel: [53282.421354] ? __raw_spin_unlock+0x9/0x10 [zfs] kernel: [53282.421447] ? dbuf_rele_and_unlock+0x13b/0x520 [zfs] kernel: [53282.421540] ? queued_spin_unlock+0x9/0x10 [zfs] kernel: [53282.421632] ? __raw_callee_save___native_queued_spin_unlock+0x15/0x23 kernel: [53282.421639] ? __raw_callee_save___native_queued_spin_unlock+0x15/0x23 kernel: [53282.421643] ? queued_spin_unlock+0x9/0x10 [zfs] kernel: [53282.421748] ? do_raw_spin_unlock+0x9/0x10 [zfs] kernel: [53282.421853] ? __raw_callee_save___native_queued_spin_unlock+0x15/0x23 kernel: [53282.421857] ? dmu_object_info_from_dnode+0x8e/0xa0 [zfs] kernel: [53282.421956] zfs_zget+0x235/0x280 [zfs] kernel: [53282.422099] zfs_dirent_lock+0x420/0x560 [zfs] kernel: [53282.422244] zfs_dirlook+0x91/0x2a0 [zfs] kernel: [53282.422388] zfs_lookup+0x1f8/0x3f0 [zfs] kernel: [53282.422537] zpl_lookup+0xcb/0x220 [zfs] kernel: [53282.422684] __lookup_slow+0x84/0x150 kernel: [53282.422687] walk_component+0x141/0x1b0 kernel: [53282.422689] path_lookupat+0x6e/0x1c0 kernel: [53282.422692] ? __raw_spin_unlock+0x9/0x10 [zfs] kernel: [53282.422824] filename_lookup+0xbf/0x1c0 kernel: [53282.422827] ? __virt_addr_valid+0x49/0x70 kernel: [53282.422832] ? __check_object_size.part.0+0x128/0x150 kernel: [53282.422835] ? __check_object_size+0x1c/0x20 kernel: [53282.422837] ? strncpy_from_user+0x44/0x140 kernel: [53282.422843] ? getname_flags.part.0+0x4c/0x1b0 kernel: [53282.422845] user_path_at_empty+0x59/0x90 kernel: [53282.422848] vfs_statx+0x7a/0x120 kernel: [53282.422851] ? __mark_inode_dirty+0x2b6/0x2f0 kernel: [53282.422856] do_statx+0x45/0x80 kernel: [53282.422860] ? iterate_dir+0x121/0x1c0 kernel: [53282.422864] ? 
__x64_sys_getdents64+0xd5/0x120 kernel: [53282.422867] ? __ia32_sys_getdents+0x120/0x120 kernel: [53282.422870] __x64_sys_statx+0x1f/0x30 kernel: [53282.422873] do_syscall_64+0x61/0xb0 kernel: [53282.422878] ? do_syscall_64+0x6e/0xb0 kernel: [53282.422880] ? exc_page_fault+0x8f/0x170 kernel: [53282.422884] ? asm_exc_page_fault+0x8/0x30 kernel: [53282.422887] entry_SYSCALL_64_after_hwframe+0x44/0xae kernel: [53282.422892] RIP: 0033:0x7fd04b27a16e
1.0
Kernel panic VERIFY3(sa.sa_magic == SA_MAGIC) failed - <!-- Please fill out the following template, which will help other contributors address your issue. --> <!-- Thank you for reporting an issue. *IMPORTANT* - Please check our issue tracker before opening a new issue. Additional valuable information can be found in the OpenZFS documentation and mailing list archives. Please fill in as much of the template as possible. --> ### System information <!-- add version after "|" character --> Type | Version/Name --- | --- Distribution Name | Ubuntu Distribution Version | 21.10 Kernel Version | 5.13.0.19 Architecture | x64 OpenZFS Version | 2.0.6 <!-- Command to find OpenZFS version: zfs version Commands to find kernel version: uname -r # Linux freebsd-version -r # FreeBSD --> ### Describe the problem you're observing I get kernel panic on target during initial synchronization between source and ssh remote machine using syncoid. Once i get this error the only way to fix it is to reboot the target. The zfs recv process stays in "D" state on target machine. Please suggest how i can at least kill the process. Both machines have same version of zfs,kernel and ubuntu I tested with any types of compression or switch it off on both sides with no luck. The only difference is that target has encrypted parent dataset and source doesn't. ### Describe how to reproduce the problem (source) sudo zfs create -o compression=lz4 ssd/private (target) sudo zfs create -o compression=lz4 -o keyformat=passphrase -o keylocation=file:///home/xxxx/backuppass -o canmount=noauto -o encryption=on backup/encrypted crash happens during syncoid run. sudo syncoid --no-stream --debug --create-bookmark --no-sync-snap ssd/private root@hostname:backup/encrypted/private Same happens with zfs-replicate which calls this command internally zfs send -p -c -L -v -R ssd/private@snap1| ssh root@host zfs recv -F -v -s -x encryption backup/encrypted/private The process succeeds if i send to unencrypted base dataset. 
### Include any warning/errors/backtraces from the system logs <!-- *IMPORTANT* - Please mark logs and text output from terminal commands or else Github will not display them correctly. An example is provided below. Example: ``` this is an example how log text should be marked (wrap it with ```) ``` --> kernel: [53264.927615] VERIFY3(sa.sa_magic == SA_MAGIC) failed (3511224769 == 3100762) kernel: [53264.927624] PANIC at zfs_quota.c:89:zpl_get_file_info() kernel: [53264.927628] Showing stack for process 1211 kernel: [53264.927631] CPU: 2 PID: 1211 Comm: z_upgrade Tainted: P O 5.13.0-19-generic #19-Ubuntu kernel: [53264.927635] Hardware name: To Be Filled By O.E.M. To Be Filled By O.E.M./H81M-DGS, BIOS P2.00 03/10/2016 kernel: [53264.927638] Call Trace: kernel: [53264.927644] show_stack+0x52/0x58 kernel: [53264.927652] dump_stack+0x7d/0x9c kernel: [53264.927663] spl_dumpstack+0x29/0x2b [spl] kernel: [53264.927685] spl_panic+0xd4/0xfc [spl] kernel: [53264.927701] ? __cond_resched+0x1a/0x50 kernel: [53264.927707] ? __mutex_lock.constprop.0+0x35/0x4f0 kernel: [53264.927712] ? do_raw_spin_unlock+0x9/0x10 [zfs] kernel: [53264.927881] ? __raw_spin_unlock+0x9/0x10 [zfs] kernel: [53264.928010] ? __raw_callee_save___native_queued_spin_unlock+0x15/0x23 kernel: [53264.928019] ? __raw_callee_save___native_queued_spin_unlock+0x15/0x23 kernel: [53264.928024] ? __cond_resched+0x1a/0x50 kernel: [53264.928028] ? 
slab_pre_alloc_hook.constprop.0+0x96/0xe0 kernel: [53264.928036] zpl_get_file_info+0xa0/0x230 [zfs] kernel: [53264.928236] dmu_objset_userquota_get_ids+0x161/0x440 [zfs] kernel: [53264.928384] dnode_setdirty+0x38/0xf0 [zfs] kernel: [53264.928540] dbuf_dirty+0x44b/0x6d0 [zfs] kernel: [53264.928681] dmu_buf_will_dirty_impl+0xb7/0x110 [zfs] kernel: [53264.928821] dmu_buf_will_dirty+0x16/0x20 [zfs] kernel: [53264.928959] dmu_objset_space_upgrade+0xca/0x1c0 [zfs] kernel: [53264.929107] dmu_objset_id_quota_upgrade_cb+0xae/0x190 [zfs] kernel: [53264.929205] dmu_objset_upgrade_task_cb+0xd2/0x100 [zfs] kernel: [53264.929293] taskq_thread+0x235/0x430 [spl] kernel: [53264.929309] ? wake_up_q+0xa0/0xa0 kernel: [53264.929314] kthread+0x11f/0x140 kernel: [53264.929318] ? param_set_taskq_kick+0xf0/0xf0 [spl] kernel: [53264.929329] ? set_kthread_struct+0x50/0x50 kernel: [53264.929332] ret_from_fork+0x22/0x30 kernel: [53282.420220] VERIFY(0 == sa_handle_get_from_db(zfsvfs->z_os, db, zp, SA_HDL_SHARED, &zp->z_sa_hdl)) failed kernel: [53282.420225] PANIC at zfs_znode.c:339:zfs_znode_sa_init() kernel: [53282.420228] Showing stack for process 9492 kernel: [53282.420230] CPU: 1 PID: 9492 Comm: ls Tainted: P O 5.13.0-19-generic #19-Ubuntu kernel: [53282.420232] Hardware name: To Be Filled By O.E.M. To Be Filled By O.E.M./H81M-DGS, BIOS P2.00 03/10/2016 kernel: [53282.420234] Call Trace: kernel: [53282.420237] show_stack+0x52/0x58 kernel: [53282.420244] dump_stack+0x7d/0x9c kernel: [53282.420251] spl_dumpstack+0x29/0x2b [spl] kernel: [53282.420267] spl_panic+0xd4/0xfc [spl] kernel: [53282.420278] ? queued_spin_unlock+0x9/0x10 [zfs] kernel: [53282.420405] ? do_raw_spin_unlock+0x9/0x10 [zfs] kernel: [53282.420499] ? __raw_spin_unlock+0x9/0x10 [zfs] kernel: [53282.420592] ? dmu_buf_replace_user+0x65/0x80 [zfs] kernel: [53282.420688] ? dmu_buf_set_user+0x13/0x20 [zfs] kernel: [53282.420783] ? 
dmu_buf_set_user_ie+0x15/0x20 [zfs] kernel: [53282.420878] zfs_znode_sa_init+0xd9/0xe0 [zfs] kernel: [53282.421023] zfs_znode_alloc+0x101/0x560 [zfs] kernel: [53282.421168] ? queued_spin_unlock+0x9/0x10 [zfs] kernel: [53282.421262] ? do_raw_spin_unlock+0x9/0x10 [zfs] kernel: [53282.421354] ? __raw_spin_unlock+0x9/0x10 [zfs] kernel: [53282.421447] ? dbuf_rele_and_unlock+0x13b/0x520 [zfs] kernel: [53282.421540] ? queued_spin_unlock+0x9/0x10 [zfs] kernel: [53282.421632] ? __raw_callee_save___native_queued_spin_unlock+0x15/0x23 kernel: [53282.421639] ? __raw_callee_save___native_queued_spin_unlock+0x15/0x23 kernel: [53282.421643] ? queued_spin_unlock+0x9/0x10 [zfs] kernel: [53282.421748] ? do_raw_spin_unlock+0x9/0x10 [zfs] kernel: [53282.421853] ? __raw_callee_save___native_queued_spin_unlock+0x15/0x23 kernel: [53282.421857] ? dmu_object_info_from_dnode+0x8e/0xa0 [zfs] kernel: [53282.421956] zfs_zget+0x235/0x280 [zfs] kernel: [53282.422099] zfs_dirent_lock+0x420/0x560 [zfs] kernel: [53282.422244] zfs_dirlook+0x91/0x2a0 [zfs] kernel: [53282.422388] zfs_lookup+0x1f8/0x3f0 [zfs] kernel: [53282.422537] zpl_lookup+0xcb/0x220 [zfs] kernel: [53282.422684] __lookup_slow+0x84/0x150 kernel: [53282.422687] walk_component+0x141/0x1b0 kernel: [53282.422689] path_lookupat+0x6e/0x1c0 kernel: [53282.422692] ? __raw_spin_unlock+0x9/0x10 [zfs] kernel: [53282.422824] filename_lookup+0xbf/0x1c0 kernel: [53282.422827] ? __virt_addr_valid+0x49/0x70 kernel: [53282.422832] ? __check_object_size.part.0+0x128/0x150 kernel: [53282.422835] ? __check_object_size+0x1c/0x20 kernel: [53282.422837] ? strncpy_from_user+0x44/0x140 kernel: [53282.422843] ? getname_flags.part.0+0x4c/0x1b0 kernel: [53282.422845] user_path_at_empty+0x59/0x90 kernel: [53282.422848] vfs_statx+0x7a/0x120 kernel: [53282.422851] ? __mark_inode_dirty+0x2b6/0x2f0 kernel: [53282.422856] do_statx+0x45/0x80 kernel: [53282.422860] ? iterate_dir+0x121/0x1c0 kernel: [53282.422864] ? 
__x64_sys_getdents64+0xd5/0x120 kernel: [53282.422867] ? __ia32_sys_getdents+0x120/0x120 kernel: [53282.422870] __x64_sys_statx+0x1f/0x30 kernel: [53282.422873] do_syscall_64+0x61/0xb0 kernel: [53282.422878] ? do_syscall_64+0x6e/0xb0 kernel: [53282.422880] ? exc_page_fault+0x8f/0x170 kernel: [53282.422884] ? asm_exc_page_fault+0x8/0x30 kernel: [53282.422887] entry_SYSCALL_64_after_hwframe+0x44/0xae kernel: [53282.422892] RIP: 0033:0x7fd04b27a16e
defect
kernel panic sa sa magic sa magic failed thank you for reporting an issue important please check our issue tracker before opening a new issue additional valuable information can be found in the openzfs documentation and mailing list archives please fill in as much of the template as possible system information type version name distribution name ubuntu distribution version kernel version architecture openzfs version command to find openzfs version zfs version commands to find kernel version uname r linux freebsd version r freebsd describe the problem you re observing i get kernel panic on target during initial synchronization between source and ssh remote machine using syncoid once i get this error the only way to fix it is to reboot the target the zfs recv process stays in d state on target machine please suggest how i can at least kill the process both machines have same version of zfs kernel and ubuntu i tested with any types of compression or switch it off on both sides with no luck the only difference is that target has encrypted parent dataset and source doesn t describe how to reproduce the problem source sudo zfs create o compression ssd private target sudo zfs create o compression o keyformat passphrase o keylocation file home xxxx backuppass o canmount noauto o encryption on backup encrypted crash happens during syncoid run sudo syncoid no stream debug create bookmark no sync snap ssd private root hostname backup encrypted private same happens with zfs replicate which calls this command internally zfs send p c l v r ssd private ssh root host zfs recv f v s x encryption backup encrypted private the process succeeds if i send to unencrypted base dataset include any warning errors backtraces from the system logs important please mark logs and text output from terminal commands or else github will not display them correctly an example is provided below example this is an example how log text should be marked wrap it with kernel sa sa magic sa magic failed 
kernel panic at zfs quota c zpl get file info kernel showing stack for process kernel cpu pid comm z upgrade tainted p o generic ubuntu kernel hardware name to be filled by o e m to be filled by o e m dgs bios kernel call trace kernel show stack kernel dump stack kernel spl dumpstack kernel spl panic kernel cond resched kernel mutex lock constprop kernel do raw spin unlock kernel raw spin unlock kernel raw callee save native queued spin unlock kernel raw callee save native queued spin unlock kernel cond resched kernel slab pre alloc hook constprop kernel zpl get file info kernel dmu objset userquota get ids kernel dnode setdirty kernel dbuf dirty kernel dmu buf will dirty impl kernel dmu buf will dirty kernel dmu objset space upgrade kernel dmu objset id quota upgrade cb kernel dmu objset upgrade task cb kernel taskq thread kernel wake up q kernel kthread kernel param set taskq kick kernel set kthread struct kernel ret from fork kernel verify sa handle get from db zfsvfs z os db zp sa hdl shared zp z sa hdl failed kernel panic at zfs znode c zfs znode sa init kernel showing stack for process kernel cpu pid comm ls tainted p o generic ubuntu kernel hardware name to be filled by o e m to be filled by o e m dgs bios kernel call trace kernel show stack kernel dump stack kernel spl dumpstack kernel spl panic kernel queued spin unlock kernel do raw spin unlock kernel raw spin unlock kernel dmu buf replace user kernel dmu buf set user kernel dmu buf set user ie kernel zfs znode sa init kernel zfs znode alloc kernel queued spin unlock kernel do raw spin unlock kernel raw spin unlock kernel dbuf rele and unlock kernel queued spin unlock kernel raw callee save native queued spin unlock kernel raw callee save native queued spin unlock kernel queued spin unlock kernel do raw spin unlock kernel raw callee save native queued spin unlock kernel dmu object info from dnode kernel zfs zget kernel zfs dirent lock kernel zfs dirlook kernel zfs lookup kernel zpl lookup kernel lookup 
slow kernel walk component kernel path lookupat kernel raw spin unlock kernel filename lookup kernel virt addr valid kernel check object size part kernel check object size kernel strncpy from user kernel getname flags part kernel user path at empty kernel vfs statx kernel mark inode dirty kernel do statx kernel iterate dir kernel sys kernel sys getdents kernel sys statx kernel do syscall kernel do syscall kernel exc page fault kernel asm exc page fault kernel entry syscall after hwframe kernel rip
1
79,682
28,496,218,796
IssuesEvent
2023-04-18 14:25:03
vector-im/element-desktop
https://api.github.com/repos/vector-im/element-desktop
opened
Updater bug
T-Defect
### Description Riot does not run after running updater. ### Steps to reproduce -Ran updater -opened riot.exe -does not run -ran updater again -updater does not run Logs being sent: yes/no [Riot bug.txt](https://github.com/vector-im/riot-web/files/4808486/Riot.bug.txt) ### Version information <!-- IMPORTANT: please answer the following questions, to help us narrow down the problem --> - **Platform**: desktop - **OS**: Windows 8.1 - **Version**: 1.6.4 updating to 1.6.5
1.0
Updater bug - ### Description Riot does not run after running updater. ### Steps to reproduce -Ran updater -opened riot.exe -does not run -ran updater again -updater does not run Logs being sent: yes/no [Riot bug.txt](https://github.com/vector-im/riot-web/files/4808486/Riot.bug.txt) ### Version information <!-- IMPORTANT: please answer the following questions, to help us narrow down the problem --> - **Platform**: desktop - **OS**: Windows 8.1 - **Version**: 1.6.4 updating to 1.6.5
defect
updater bug description riot does not run after running updater steps to reproduce ran updater opened riot exe does not run ran updater again updater does not run logs being sent yes no version information platform desktop os windows version updating to
1
130,198
18,050,270,484
IssuesEvent
2021-09-19 16:17:17
towhee-io/towhee
https://api.github.com/repos/towhee-io/towhee
closed
The key abstraction of pipeline framework and component interface design
type/design
# A Hello World Example --- ```python class Inc(Operator): @create_op_in_pipeline def __call__(self, x:int) -> int: return x+1 class Add(Operator): @create_op_in_pipeline def __call__(self, x1:int, x2:int) -> int: return x1+x2 @create_pipeline def my_pipeline(x1:int, x2:int): inc1 = Inc() inc2 = Inc() add = Add() return add(inc1(x1), inc2(x2)) ``` In this example, we have three operators in a pipeline, where *add* depends on the results of *inc1* and *inc2*. Behind the scenes, Towhee's compiler will create a DAG during the execution of *my_pipeline*. The DAG construction is driven by the operator decorator and pipeline decorator. When the program executes the line *add(inc1(x1), inc2(x2))*, the decorator @create_op_in_pipeline will be called before the Operators' __call__ method. It will put *inc1*, *inc2*, *add* into the pipeline's context, link the pipeline's inputs *x1*, *x2* to *inc1*, *inc2* respectively, and set *inc1*, *inc2*'s outputs as *add*'s inputs. The operators' dependencies are also settled during the DAG construction. # Main Components --- To support the example mentioned above, Towhee needs five major components. * **Operator** An operator is a set of code that performs one step in the pipeline, such as preprocessing, model inference, postprocessing, etc. An operator is analogous to a function, in that it has a name, parameters, return values, and a body. Once involved in a pipeline, an operator will be constructed as a node in the DAG. * **Compiler** Compilation happens in two phases. The first step is to convert a pipeline description (e.g. *my_pipeline*) and operators (e.g. *Inc, Add*) into an intermediate graph, where the variables, operators, and dependencies are precisely described. The second step is to convert the intermediate graph to a backend executable, such as a local python-driven DAG, or a kubeflow pipeline. * **Variable** The data abstraction of pipeline and operator inputs and outputs. It is partitionable and iterable. 
Differences in device memory, cross-device memory copies, and data transfers are also handled by Variable. * **Pipeline** The runtime scheduling context of a pipeline, including variables, tasks, and execution flow. All the stateful parts during task execution are maintained in *Pipeline*. * **Engine** The engine driving the task execution on a pipeline. It takes a *Pipeline* as its scheduling context, and performs all the necessary data partitioning, operator parallelization, resource management, etc. # Works Remaining --- #### Operator - [x] interface design - [x] internal key classes design - [x] docstrings - [ ] design proposal: operator abstraction #### Pipeline - [x] interface design - [x] internal key classes design - [x] docstrings - [ ] design proposals #### Engine - [x] interface design - [x] internal key classes design - [x] docstrings - [ ] design proposals #### Variable - [x] interface design - [x] internal key classes design - [x] docstrings - [ ] design proposal: variable abstraction #### Compiler - [x] interface design - [x] internal key classes design - [x] docstrings - [ ] design proposal: intermediate graph representation - [ ] design proposal: graph construction driven mechanism - [ ] design proposal: convert intermediate graph to initial pipeline context
1.0
The key abstraction of pipeline framework and component interface design - # A Hello World Example --- ```python class Inc(Operator): @create_op_in_pipeline def __call__(self, x:int) -> int: return x+1 class Add(Operator): @create_op_in_pipeline def __call__(self, x1:int, x2:int) -> int: return x1+x2 @create_pipeline def my_pipeline(x1:int, x2:int): inc1 = Inc() inc2 = Inc() add = Add() return add(inc1(x1), inc2(x2)) ``` In this example, we have three operators in a pipeline, where *add* depends on the results of *inc1* and *inc2*. Behind the scene, Towhee's compiler will create a DAG during the execution of *my_pipeline*. The DAG construction is driven by the operator decorator and pipeline decorator. When the program executes the line *add(inc1(x1), inc2(x2))*, the decorator @create_op_in_pipeline will be called before the Operators' __call__ method. It will put *inc1*, *inc2*, *add* into the pipeline's context, link the pipeline's input *x1*, *x2* to *inc1*, *inc2* respectively, and set *inc1*, *inc2*'s output as *add*'s input. The operators' dependencies are also settled during the DAG construction. # Main Components --- To support the example mentioned above, Towhee needs five major components. * **Operator** An operator is a set of code that performs one step in the pipeline, such as preprocessing, model inference, postprocessing, etc. An operator is analogous to a function, in that it has a name, parameters, return values, and a body. Once involved in a pipeline, an operator will be construct as a node in the DAG. * **Compiler** There are two phase compilation. The first step is to convert a pipeline description (eg. *my_pipeline*) and operators (eg. *Inc, Add*) into a intermediate graph, where the variables, operators, dependencies are precisely described. The second step is to convert the intermediate graph to a backend executable, such as a local python-driven DAG, or a kubeflow pipeline. 
* **Variable** The data abstraction of pipeline, operator's inputs and outputs. It is partitionable and iterable. Differences in device memory, cross-device memory copies, and data transfers are also handled by Variable. * **Pipeline** The runtime scheduling context of a pipeline, including variables, tasks, and execution flow. All the stateful parts of task execution are maintained in *Pipeline*. * **Engine** The engine driving the task execution on a pipeline. It takes a *Pipeline* as its scheduling context, and performs all the necessary data partitioning, operator parallelization, resource management, etc. # Works Remaining --- #### Operator - [x] interface design - [x] internal key classes design - [x] docstrings - [ ] design proposal: operator abstraction #### Pipeline - [x] interface design - [x] internal key classes design - [x] docstrings - [ ] design proposals #### Engine - [x] interface design - [x] internal key classes design - [x] docstrings - [ ] design proposals #### Variable - [x] interface design - [x] internal key classes design - [x] docstrings - [ ] design proposal: variable abstraction #### Compiler - [x] interface design - [x] internal key classes design - [x] docstrings - [ ] design proposal: intermediate graph representation - [ ] design proposal: graph construction driven mechanism - [ ] design proposal: convert intermediate graph to initial pipeline context
non_defect
the key abstraction of pipeline framework and component interface design a hello world example python class inc operator create op in pipeline def call self x int int return x class add operator create op in pipeline def call self int int int return create pipeline def my pipeline int int inc inc add add return add in this example we have three operators in a pipeline where add depends on the results of and behind the scenes towhee s compiler will create a dag during the execution of my pipeline the dag construction is driven by the operator decorator and pipeline decorator when the program executes the line add the decorator create op in pipeline will be called before the operators call method it will put add into the pipeline s context link the pipeline s inputs to respectively and set s outputs as add s inputs the operators dependencies are also settled during the dag construction main components to support the example mentioned above towhee needs five major components operator an operator is a set of code that performs one step in the pipeline such as preprocessing model inference postprocessing etc an operator is analogous to a function in that it has a name parameters return values and a body once involved in a pipeline an operator will be constructed as a node in the dag compiler compilation happens in two phases the first is to convert a pipeline description eg my pipeline and operators eg inc add into an intermediate graph where the variables operators and dependencies are precisely described the second is to convert the intermediate graph to a backend executable such as a local python driven dag or a kubeflow pipeline variable the data abstraction of pipeline operator s inputs and outputs it is partitionable and iterable differences in device memory cross device memory copies and data transfers are also handled by variable pipeline the runtime scheduling context of a pipeline including variables tasks and execution flow all the stateful parts of task
execution are maintained in pipeline engine the engine driving the task execution on a pipeline it takes a pipeline as its scheduling context and performs all the necessary data partitioning operator parallelization resource management etc works remaining operator interface design internal key classes design docstrings design proposal operator abstraction pipeline interface design internal key classes design docstrings design proposals engine interface design internal key classes design docstrings design proposals variable interface design internal key classes design docstrings design proposal variable abstraction compiler interface design internal key classes design docstrings design proposal intermediate graph representation design proposal graph construction driven mechanism design proposal convert intermediate graph to initial pipeline context
0
103,878
12,977,715,929
IssuesEvent
2020-07-21 21:12:00
alice-i-cecile/Fonts-of-Power
https://api.github.com/repos/alice-i-cecile/Fonts-of-Power
opened
Merge innate and enchantment affixes?
design
Common affixes can be modified with Craftsmanship, shared pool
1.0
Merge innate and enchantment affixes? - Common affixes can be modified with Craftsmanship, shared pool
non_defect
merge innate and enchantment affixes common affixes can be modified with craftsmanship shared pool
0
38,526
8,872,882,209
IssuesEvent
2019-01-11 16:31:39
jOOQ/jOOQ
https://api.github.com/repos/jOOQ/jOOQ
closed
Fix a variety of compiler warnings when building jOOQ
C: Functionality E: All Editions P: Low R: Fixed T: Defect
There are a few warnings when compiling jOOQ, which will not be listed here. These could be fixed relatively easily.
1.0
Fix a variety of compiler warnings when building jOOQ - There are a few warnings when compiling jOOQ, which will not be listed here. These could be fixed relatively easily.
defect
fix a variety of compiler warnings when building jooq there are a few warnings when compiling jooq which will not be listed here these could be fixed relatively easily
1
17,657
3,012,799,036
IssuesEvent
2015-07-29 02:43:32
yawlfoundation/yawl
https://api.github.com/repos/yawlfoundation/yawl
closed
Unnecessary to report this curly bracket issue
auto-migrated Component-Editor Priority-Medium Type-Defect
``` Please watch the Spielberg movie https://fileshare.qut.edu.au/public/clemens/test_0000.mpeg ``` Original issue reported on code.google.com by `stephan....@googlemail.com` on 26 Sep 2008 at 1:06
1.0
Unnecessary to report this curly bracket issue - ``` Please watch the Spielberg movie https://fileshare.qut.edu.au/public/clemens/test_0000.mpeg ``` Original issue reported on code.google.com by `stephan....@googlemail.com` on 26 Sep 2008 at 1:06
defect
unnecessary to report this curly bracket issue please watch the spielberg movie original issue reported on code google com by stephan googlemail com on sep at
1
797,912
28,209,069,564
IssuesEvent
2023-04-05 01:27:39
janus-idp/software-templates
https://api.github.com/repos/janus-idp/software-templates
closed
Add a documentation template
kind/feature priority/medium
The template will ask which component to link with and create a new repo. ``` EntityPicker: type: string ui:field: EntityPicker ui:options: catalogFilter: - kind: component ``` For example a component with the name `node-website` will have a new repo `node-website-techdocs` The template skeleton will contain the `mkdocs.yaml` with the `/docs` folder contains a sample index.md It will also create the GitHub Action pipeline. For example the one from Showcase: https://github.com/janus-idp/backstage-showcase/blob/main/.github/workflows/techdocs.yaml No need to do Tekton pipeline yet.
1.0
Add a documentation template - The template will ask which component to link with and create a new repo. ``` EntityPicker: type: string ui:field: EntityPicker ui:options: catalogFilter: - kind: component ``` For example a component with the name `node-website` will have a new repo `node-website-techdocs` The template skeleton will contain the `mkdocs.yaml` with the `/docs` folder contains a sample index.md It will also create the GitHub Action pipeline. For example the one from Showcase: https://github.com/janus-idp/backstage-showcase/blob/main/.github/workflows/techdocs.yaml No need to do Tekton pipeline yet.
non_defect
add a documentation template the template will ask which component to link with and create a new repo entitypicker type string ui field entitypicker ui options catalogfilter kind component for example a component with the name node website will have a new repo node website techdocs the template skeleton will contain the mkdocs yaml with the docs folder contains a sample index md it will also create the github action pipeline for example the one from showcase no need to do tekton pipeline yet
0
8,984
2,615,116,801
IssuesEvent
2015-03-01 05:41:57
chrsmith/google-api-java-client
https://api.github.com/repos/chrsmith/google-api-java-client
closed
Dependency for xpp3
auto-migrated Component-Release Milestone-Version1.2.0 Priority-Medium Type-Defect
``` Version of google-api-java-client (e.g. 1.1.0-alpha)? 1.1.1-alpha Java environment (e.g. Java 6, Android 2.2, App Engine 1.3.7)? Java 6 Describe the problem. When I call new AtomParser(), I get java.lang.NoClassDefFoundError: org/xmlpull/v1/XmlPullParserException How would you expect it to be fixed? Add dependency on kxml2 (or xpp3) from the pom.xml. Note: this was reported by a user on the Google Group ``` Original issue reported on code.google.com by `yan...@google.com` on 29 Sep 2010 at 7:35
1.0
Dependency for xpp3 - ``` Version of google-api-java-client (e.g. 1.1.0-alpha)? 1.1.1-alpha Java environment (e.g. Java 6, Android 2.2, App Engine 1.3.7)? Java 6 Describe the problem. When I call new AtomParser(), I get java.lang.NoClassDefFoundError: org/xmlpull/v1/XmlPullParserException How would you expect it to be fixed? Add dependency on kxml2 (or xpp3) from the pom.xml. Note: this was reported by a user on the Google Group ``` Original issue reported on code.google.com by `yan...@google.com` on 29 Sep 2010 at 7:35
defect
dependency for version of google api java client e g alpha alpha java environment e g java android app engine java describe the problem when i call new atomparser i get java lang noclassdeffounderror org xmlpull xmlpullparserexception how would you expect it to be fixed add dependency on or from the pom xml note this was reported by a user on the google group original issue reported on code google com by yan google com on sep at
1
56,669
11,624,582,173
IssuesEvent
2020-02-27 11:01:28
microsoft/vscode-python
https://api.github.com/repos/microsoft/vscode-python
closed
Update CI to run tests against Python 3.8
cause-CI/CD needs PR type-code health
Python 3.8.0 is now available on Azure DevOps (see #8296 )
1.0
Update CI to run tests against Python 3.8 - Python 3.8.0 is now available on Azure DevOps (see #8296 )
non_defect
update ci to run tests against python python is now available on azure devops see
0
808,972
30,119,173,908
IssuesEvent
2023-06-30 13:55:12
AmplifyCreations/AmplifyShaderEditor-Feedback
https://api.github.com/repos/AmplifyCreations/AmplifyShaderEditor-Feedback
opened
Update Node – IndirectDiffuseLighting
high priority Update Function
URP Check for changes in ase IndirectDiffuseLighting Node Review for changes per api version in Unity Lighting.hlsl api 10.2.2 -- 10.5.0 api 10.5.1 -- 10.10.1 api 11.0.0 api 12.1.0 -- 12.1.6 api 12.1.7 -- 12.1.12 api 13.1.8 -- 13.1.9 api 14.0.4 -- 14.0.7 api 14.0.8 api 15.0.6 -- 16.0.2 in URP 15x check for changes in OUTPUT_SH
1.0
Update Node – IndirectDiffuseLighting - URP Check for changes in ase IndirectDiffuseLighting Node Review for changes per api version in Unity Lighting.hlsl api 10.2.2 -- 10.5.0 api 10.5.1 -- 10.10.1 api 11.0.0 api 12.1.0 -- 12.1.6 api 12.1.7 -- 12.1.12 api 13.1.8 -- 13.1.9 api 14.0.4 -- 14.0.7 api 14.0.8 api 15.0.6 -- 16.0.2 in URP 15x check for changes in OUTPUT_SH
non_defect
update node – indirectdiffuselighting urp check for changes in ase indirectdiffuselighting node review for changes per api version in unity lighting hlsl api api api api api api api api api in urp check for changes in output sh
0
18,511
10,132,482,902
IssuesEvent
2019-08-01 22:38:19
flutter/flutter
https://api.github.com/repos/flutter/flutter
opened
Improve performance dashboard
severe: performance team
Our current [performance dashboard](https://flutter-dashboard.appspot.com/benchmarks.html) is very slow to load and does not support - zoom in or zoom out in time - move the time window - automatic regression alerts It would be nice to migrate all our performance benchmarks data to some well maintained dashboard such as https://chromeperf.appspot.com or https://perf.skia.org to support those functions. It would be even better if we can port all our performance metrics in various dashboards to a single place for easier discovery and analysis. We can also share the alert mechanism among all of them.
True
Improve performance dashboard - Our current [performance dashboard](https://flutter-dashboard.appspot.com/benchmarks.html) is very slow to load and does not support - zoom in or zoom out in time - move the time window - automatic regression alerts It would be nice to migrate all our performance benchmarks data to some well maintained dashboard such as https://chromeperf.appspot.com or https://perf.skia.org to support those functions. It would be even better if we can port all our performance metrics in various dashboards to a single place for easier discovery and analysis. We can also share the alert mechanism among all of them.
non_defect
improve performance dashboard our current is very slow to load and does not support zoom in or zoom out in time move the time window automatic regression alerts it would be nice to migrate all our performance benchmarks data to some well maintained dashboard such as or to support those functions it would be even better if we can port all our performance metrics in various dashboards to a single place for easier discovery and analysis we can also share the alert mechanism among all of them
0
71,477
23,644,691,936
IssuesEvent
2022-08-25 20:41:40
vector-im/element-web
https://api.github.com/repos/vector-im/element-web
closed
riot silently swallows M_FORBIDDEN errors from invites
T-Defect P1 S-Minor A-Invite A-Error-Message O-Uncommon
so you have no feedback about the failure
1.0
riot silently swallows M_FORBIDDEN errors from invites - so you have no feedback about the failure
defect
riot silently swallows m forbidden errors from invites so you have no feedback about the failure
1
42,877
11,349,478,755
IssuesEvent
2020-01-24 05:07:36
idaholab/raven
https://api.github.com/repos/idaholab/raven
closed
Unexpected exit in optimizer
defect priority_critical
-------- Issue Description -------- ##### What did you expect to see happen? When running with `debug` verbosity, the optimizer should always have an report a good reason for finishing at the end of the multirun. ##### What did you see instead? In the special event that: - There are multiple trajectories, - The first trajectory is killed to follow the second trajectory, - There are outstanding runs for the first trajectory that are collected after it is killed, - The second trajectory is waiting to collect new opt points, it seems that the optimizer can get caught in a loop and claim it is not ready to provide new points (since it's waiting to collect the new opt point) but also not submit any new points. ##### Do you have a suggested fix for the development team? On inspection, this is because in GradientBasedOptimizer.localFinalizeActualSampling, if any trajectory does not have an update to solutionExport pending, then the "else" uses a "break" instead of a "continue", causing other trajectories not to be updated. Changing this "else" seems to fix the niche issue. Note this fix has been tested on a branch, talbpaul/optimizer_sprint_fixes, but needs a regression test to cover the bug in the future before merging onto the main development branch. ---------------- For Change Control Board: Issue Review ---------------- This review should occur before any development is performed as a response to this issue. - [x] 1. Is it tagged with a type: defect or improvement? - [x] 2. Is it tagged with a priority: critical, normal or minor? - [x] 3. If it will impact requirements or requirements tests, is it tagged with requirements? - [x] 4. If it is a defect, can it cause wrong results for users? If so an email needs to be sent to the users. __No wrong results, instead a failure to produce results__ - [x] 5. Is a rationale provided? (Such as explaining why the improvement is needed or why current code is wrong.) 
------- For Change Control Board: Issue Closure ------- This review should occur when the issue is imminently going to be closed. - [ ] 1. If the issue is a defect, is the defect fixed? - [ ] 2. If the issue is a defect, is the defect tested for in the regression test system? (If not explain why not.) - [ ] 3. If the issue can impact users, has an email to the users group been written (the email should specify if the defect impacts stable or master)? - [ ] 4. If the issue is a defect, does it impact the latest stable branch? If yes, is there any issue tagged with stable (create if needed)? - [x] 5. If the issue is being closed without a merge request, has an explanation of why it is being closed been provided?
1.0
Unexpected exit in optimizer - -------- Issue Description -------- ##### What did you expect to see happen? When running with `debug` verbosity, the optimizer should always have an report a good reason for finishing at the end of the multirun. ##### What did you see instead? In the special event that: - There are multiple trajectories, - The first trajectory is killed to follow the second trajectory, - There are outstanding runs for the first trajectory that are collected after it is killed, - The second trajectory is waiting to collect new opt points, it seems that the optimizer can get caught in a loop and claim it is not ready to provide new points (since it's waiting to collect the new opt point) but also not submit any new points. ##### Do you have a suggested fix for the development team? On inspection, this is because in GradientBasedOptimizer.localFinalizeActualSampling, if any trajectory does not have an update to solutionExport pending, then the "else" uses a "break" instead of a "continue", causing other trajectories not to be updated. Changing this "else" seems to fix the niche issue. Note this fix has been tested on a branch, talbpaul/optimizer_sprint_fixes, but needs a regression test to cover the bug in the future before merging onto the main development branch. ---------------- For Change Control Board: Issue Review ---------------- This review should occur before any development is performed as a response to this issue. - [x] 1. Is it tagged with a type: defect or improvement? - [x] 2. Is it tagged with a priority: critical, normal or minor? - [x] 3. If it will impact requirements or requirements tests, is it tagged with requirements? - [x] 4. If it is a defect, can it cause wrong results for users? If so an email needs to be sent to the users. __No wrong results, instead a failure to produce results__ - [x] 5. Is a rationale provided? (Such as explaining why the improvement is needed or why current code is wrong.) 
------- For Change Control Board: Issue Closure ------- This review should occur when the issue is imminently going to be closed. - [ ] 1. If the issue is a defect, is the defect fixed? - [ ] 2. If the issue is a defect, is the defect tested for in the regression test system? (If not explain why not.) - [ ] 3. If the issue can impact users, has an email to the users group been written (the email should specify if the defect impacts stable or master)? - [ ] 4. If the issue is a defect, does it impact the latest stable branch? If yes, is there any issue tagged with stable (create if needed)? - [x] 5. If the issue is being closed without a merge request, has an explanation of why it is being closed been provided?
defect
unexpected exit in optimizer issue description what did you expect to see happen when running with debug verbosity the optimizer should always have an report a good reason for finishing at the end of the multirun what did you see instead in the special event that there are multiple trajectories the first trajectory is killed to follow the second trajectory there are outstanding runs for the first trajectory that are collected after it is killed the second trajectory is waiting to collect new opt points it seems that the optimizer can get caught in a loop and claim it is not ready to provide new points since it s waiting to collect the new opt point but also not submit any new points do you have a suggested fix for the development team on inspection this is because in gradientbasedoptimizer localfinalizeactualsampling if any trajectory does not have an update to solutionexport pending then the else uses a break instead of a continue causing other trajectories not to be updated changing this else seems to fix the niche issue note this fix has been tested on a branch talbpaul optimizer sprint fixes but needs a regression test to cover the bug in the future before merging onto the main development branch for change control board issue review this review should occur before any development is performed as a response to this issue is it tagged with a type defect or improvement is it tagged with a priority critical normal or minor if it will impact requirements or requirements tests is it tagged with requirements if it is a defect can it cause wrong results for users if so an email needs to be sent to the users no wrong results instead a failure to produce results is a rationale provided such as explaining why the improvement is needed or why current code is wrong for change control board issue closure this review should occur when the issue is imminently going to be closed if the issue is a defect is the defect fixed if the issue is a defect is the defect tested for in 
the regression test system if not explain why not if the issue can impact users has an email to the users group been written the email should specify if the defect impacts stable or master if the issue is a defect does it impact the latest stable branch if yes is there any issue tagged with stable create if needed if the issue is being closed without a merge request has an explanation of why it is being closed been provided
1
49,000
13,185,189,231
IssuesEvent
2020-08-12 20:54:04
icecube-trac/tix3
https://api.github.com/repos/icecube-trac/tix3
opened
Get a continus build system back in action (Trac #586)
Incomplete Migration Migrated from Trac defect tools/ports
<details> <summary><em>Migrated from https://code.icecube.wisc.edu/ticket/586 , reported by blaufuss and owned by nega</em></summary> <p> ```json { "status": "closed", "changetime": "2011-03-24T18:46:46", "description": "Currently non are running. Previous incarnations included:\n-snowblower\n-dart nodes\n\nOn potential candidate is the \ncmake/dash stuff. \n\nMaybe discuss rolling our own python foo.", "reporter": "blaufuss", "cc": "", "resolution": "fixed", "_ts": "1300992406000000", "component": "tools/ports", "summary": "Get a continus build system back in action", "priority": "normal", "keywords": "", "time": "2010-01-19T20:50:46", "milestone": "", "owner": "nega", "type": "defect" } ``` </p> </details>
1.0
Get a continus build system back in action (Trac #586) - <details> <summary><em>Migrated from https://code.icecube.wisc.edu/ticket/586 , reported by blaufuss and owned by nega</em></summary> <p> ```json { "status": "closed", "changetime": "2011-03-24T18:46:46", "description": "Currently non are running. Previous incarnations included:\n-snowblower\n-dart nodes\n\nOn potential candidate is the \ncmake/dash stuff. \n\nMaybe discuss rolling our own python foo.", "reporter": "blaufuss", "cc": "", "resolution": "fixed", "_ts": "1300992406000000", "component": "tools/ports", "summary": "Get a continus build system back in action", "priority": "normal", "keywords": "", "time": "2010-01-19T20:50:46", "milestone": "", "owner": "nega", "type": "defect" } ``` </p> </details>
defect
get a continus build system back in action trac migrated from reported by blaufuss and owned by nega json status closed changetime description currently non are running previous incarnations included n snowblower n dart nodes n non potential candidate is the ncmake dash stuff n nmaybe discuss rolling our own python foo reporter blaufuss cc resolution fixed ts component tools ports summary get a continus build system back in action priority normal keywords time milestone owner nega type defect
1
334,723
24,433,378,329
IssuesEvent
2022-10-06 09:42:41
SAP/luigi
https://api.github.com/repos/SAP/luigi
opened
Document option to ignore events from inactive iframes
documentation
Document option to ignore events from inactive iframes See: https://github.com/SAP/luigi/pull/2908/files
1.0
Document option to ignore events from inactive iframes - Document option to ignore events from inactive iframes See: https://github.com/SAP/luigi/pull/2908/files
non_defect
document option to ignore events from inactive iframes document option to ignore events from inactive iframes see
0
22,587
3,670,756,307
IssuesEvent
2016-02-22 01:01:01
networkx/networkx
https://api.github.com/repos/networkx/networkx
closed
copying a graph without data fails for non-Graph classes
Defect
See #1876. The unit test there was insufficient for other graph classes, including directed and multigraphs.
1.0
copying a graph without data fails for non-Graph classes - See #1876. The unit test there was insufficient for other graph classes, including directed and multigraphs.
defect
copying a graph without data fails for non graph classes see the unit test there was insufficient for other graph classes including directed and multigraphs
1
57,796
16,065,355,334
IssuesEvent
2021-04-23 18:10:47
openzfs/zfs
https://api.github.com/repos/openzfs/zfs
opened
zed.d/pool_import-led.sh fallback path terminally broken
Status: Triage Needed Type: Defect
### Describe the problem you're observing See `cmd/zed/zed.d/zed.d/pool_import-led.sh`: ``` cmd='echo led_token=$(cat "$VDEV_ENC_SYSFS_PATH/fault"),"$VDEV_ENC_SYSFS_PATH",' out=$($ZPOOL status -vc "$cmd" "$pool" | grep 'led_token=') ``` This will simply never work: `-c` takes script names nowadays, and silently ignores this argument (if only because it has a slash in it). ### Describe how to reproduce the problem This appears to date back to the original implementation by @tonyhutter in b291029e8661dfc2f03118921e854eec4e5bbb75 (Fri Feb 10 16:09:45 2017 -0800), and the commands used to be run using popen(3), which passes them through `/bin/sh -c`. This has obviously changed, but the ZEDLET hasn't, and, according to the comment at the top, is now broken for `pool_import` events, which are the only ones that hit this fallback.
1.0
zed.d/pool_import-led.sh fallback path terminally broken - ### Describe the problem you're observing See `cmd/zed/zed.d/zed.d/pool_import-led.sh`: ``` cmd='echo led_token=$(cat "$VDEV_ENC_SYSFS_PATH/fault"),"$VDEV_ENC_SYSFS_PATH",' out=$($ZPOOL status -vc "$cmd" "$pool" | grep 'led_token=') ``` This will simply never work: `-c` takes script names nowadays, and silently ignores this argument (if only because it has a slash in it). ### Describe how to reproduce the problem This appears to date back to the original implementation by @tonyhutter in b291029e8661dfc2f03118921e854eec4e5bbb75 (Fri Feb 10 16:09:45 2017 -0800), and the commands used to be run using popen(3), which passes them through `/bin/sh -c`. This has obviously changed, but the ZEDLET hasn't, and, according to the comment at the top, is now broken for `pool_import` events, which are the only ones that hit this fallback.
defect
zed d pool import led sh fallback path terminally broken describe the problem you re observing see cmd zed zed d zed d pool import led sh cmd echo led token cat vdev enc sysfs path fault vdev enc sysfs path out zpool status vc cmd pool grep led token this will simply never work c takes script names nowadays and silently ignores this argument if only because it has a slash in it describe how to reproduce the problem this appears to date back to the original implementation by tonyhutter in fri feb and the commands used to be run using popen which passes them through bin sh c this has obviously changed but the zedlet hasn t and according to the comment at the top is now broken for pool import events which are the only ones that hit this fallback
1
43,437
11,717,046,893
IssuesEvent
2020-03-09 16:36:19
department-of-veterans-affairs/va.gov-team
https://api.github.com/repos/department-of-veterans-affairs/va.gov-team
opened
[COGNITION]: Content SHOULD not be cut off; IE11 accordion content is not flowing into columns
508-defect-3 508-issue-cognition 508/Accessibility vsa vsa-public-websites
**Feedback framework** - **❗️ Must** for if the feedback must be applied - **⚠️Should** if the feedback is best practice - **✔️ Consider** for suggestions/enhancements ## Description Please see screenshot below. Content **SHOULD** not be cut off. In IE11, under Compensation heading, when the accordion for "Find addresses for other benefit types" is opened the content shown displays in a single row, resulting in a horizontal scroll that may not be readily obvious, especially for older users and those with cognitive considerations. Checking the same component in Chrome, the content appears in columns and does not exceed the width of the show/hide accordion heading. ## Point of Contact **VFS Point of Contact:** Jennifer ## Acceptance Criteria As an IE11 user, I want to open "Find addresses for other benefits types" accordion and see the content without a horizontal scroll. ## Environment * Operating System: Windows 10 * Browser: IE11 * Server destination: production ## Steps to Recreate 1. Enter `https://www.va.gov/decision-reviews/supplemental-claim/` in browser 2. Navigate to Compensation heading 3. Open the accordion for "Find addresses for other benefits types" 4. Verify that the content currently results in a horizontal scroll ## Possible Fixes (optional) `flex-wrap` has partial support in IE11, and may be the culprit here. It may be necessary to swap to a `display: inline-block` approach. ## WCAG or Vendor Guidance (optional) * [Can I use flex-wrap?](https://caniuse.com/#search=flex-wrap) ## Screenshots or Trace Logs ![vsa-pw-ie-hz-scroll-bug](https://user-images.githubusercontent.com/57469/76234229-ce110c80-61ff-11ea-84a5-f0d9d872197f.png)
1.0
[COGNITION]: Content SHOULD not be cut off; IE11 accordion content is not flowing into columns - **Feedback framework** - **❗️ Must** for if the feedback must be applied - **⚠️Should** if the feedback is best practice - **✔️ Consider** for suggestions/enhancements ## Description Please see screenshot below. Content **SHOULD** not be cut off. In IE11, under Compensation heading, when the accordion for "Find addresses for other benefit types" is opened the content shown displays in a single row, resulting in a horizontal scroll that may not be readily obvious, especially for older users and those with cognitive considerations. Checking the same component in Chrome, the content appears in columns and does not exceed the width of the show/hide accordion heading. ## Point of Contact **VFS Point of Contact:** Jennifer ## Acceptance Criteria As an IE11 user, I want to open "Find addresses for other benefits types" accordion and see the content without a horizontal scroll. ## Environment * Operating System: Windows 10 * Browser: IE11 * Server destination: production ## Steps to Recreate 1. Enter `https://www.va.gov/decision-reviews/supplemental-claim/` in browser 2. Navigate to Compensation heading 3. Open the accordion for "Find addresses for other benefits types" 4. Verify that the content currently results in a horizontal scroll ## Possible Fixes (optional) `flex-wrap` has partial support in IE11, and may be the culprit here. It may be necessary to swap to a `display: inline-block` approach. ## WCAG or Vendor Guidance (optional) * [Can I use flex-wrap?](https://caniuse.com/#search=flex-wrap) ## Screenshots or Trace Logs ![vsa-pw-ie-hz-scroll-bug](https://user-images.githubusercontent.com/57469/76234229-ce110c80-61ff-11ea-84a5-f0d9d872197f.png)
defect
content should not be cut off accordion content is not flowing into columns feedback framework ❗️ must for if the feedback must be applied ⚠️should if the feedback is best practice ✔️ consider for suggestions enhancements description please see screenshot below content should not be cut off in under compensation heading when the accordion for find addresses for other benefit types is opened the content shown displays in a single row resulting in a horizontal scroll that may not be readily obvious especially for older users and those with cognitive considerations checking the same component in chrome the content appears in columns and does not exceed the width of the show hide accordion heading point of contact vfs point of contact jennifer acceptance criteria as an user i want to open find addresses for other benefits types accordion and see the content without a horizontal scroll environment operating system windows browser server destination production steps to recreate enter in browser navigate to compensation heading open the accordion for find addresses for other benefits types verify that the content currently results in a horizontal scroll possible fixes optional flex wrap has partial support in and may be the culprit here it may be necessary to swap to a display inline block approach wcag or vendor guidance optional screenshots or trace logs
1
224,909
24,795,424,833
IssuesEvent
2022-10-24 16:49:39
Jacksole/Learning-JavaScript
https://api.github.com/repos/Jacksole/Learning-JavaScript
closed
CVE-2021-29059 (High) detected in is-svg-3.0.0.tgz - autoclosed
security vulnerability
## CVE-2021-29059 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>is-svg-3.0.0.tgz</b></p></summary> <p>Check if a string or buffer is SVG</p> <p>Library home page: <a href="https://registry.npmjs.org/is-svg/-/is-svg-3.0.0.tgz">https://registry.npmjs.org/is-svg/-/is-svg-3.0.0.tgz</a></p> <p>Path to dependency file: /React/mern-todo-app/package.json</p> <p>Path to vulnerable library: /React/mern-todo-app/node_modules/is-svg/package.json,/React/react-form-validation-demo/node_modules/is-svg/package.json,/React/fullstack_app/client/node_modules/is-svg/package.json</p> <p> Dependency Hierarchy: - react-scripts-3.2.0.tgz (Root Library) - optimize-css-assets-webpack-plugin-5.0.3.tgz - cssnano-4.1.10.tgz - cssnano-preset-default-4.0.7.tgz - postcss-svgo-4.0.2.tgz - :x: **is-svg-3.0.0.tgz** (Vulnerable Library) <p>Found in base branch: <b>master</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> A vulnerability was discovered in IS-SVG version 2.1.0 to 4.2.2 and below where a Regular Expression Denial of Service (ReDOS) occurs if the application is provided and checks a crafted invalid SVG string. 
<p>Publish Date: Jun 21, 2021 4:15:00 PM <p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2021-29059>CVE-2021-29059</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Release Date: Jun 21, 2021 4:15:00 PM</p> <p>Fix Resolution (is-svg): 4.3.0</p> <p>Direct dependency fix Resolution (react-scripts): 3.3.0</p> </p> </details> <p></p> *** Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
True
CVE-2021-29059 (High) detected in is-svg-3.0.0.tgz - autoclosed - ## CVE-2021-29059 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>is-svg-3.0.0.tgz</b></p></summary> <p>Check if a string or buffer is SVG</p> <p>Library home page: <a href="https://registry.npmjs.org/is-svg/-/is-svg-3.0.0.tgz">https://registry.npmjs.org/is-svg/-/is-svg-3.0.0.tgz</a></p> <p>Path to dependency file: /React/mern-todo-app/package.json</p> <p>Path to vulnerable library: /React/mern-todo-app/node_modules/is-svg/package.json,/React/react-form-validation-demo/node_modules/is-svg/package.json,/React/fullstack_app/client/node_modules/is-svg/package.json</p> <p> Dependency Hierarchy: - react-scripts-3.2.0.tgz (Root Library) - optimize-css-assets-webpack-plugin-5.0.3.tgz - cssnano-4.1.10.tgz - cssnano-preset-default-4.0.7.tgz - postcss-svgo-4.0.2.tgz - :x: **is-svg-3.0.0.tgz** (Vulnerable Library) <p>Found in base branch: <b>master</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> A vulnerability was discovered in IS-SVG version 2.1.0 to 4.2.2 and below where a Regular Expression Denial of Service (ReDOS) occurs if the application is provided and checks a crafted invalid SVG string. 
<p>Publish Date: Jun 21, 2021 4:15:00 PM <p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2021-29059>CVE-2021-29059</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Release Date: Jun 21, 2021 4:15:00 PM</p> <p>Fix Resolution (is-svg): 4.3.0</p> <p>Direct dependency fix Resolution (react-scripts): 3.3.0</p> </p> </details> <p></p> *** Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
non_defect
cve high detected in is svg tgz autoclosed cve high severity vulnerability vulnerable library is svg tgz check if a string or buffer is svg library home page a href path to dependency file react mern todo app package json path to vulnerable library react mern todo app node modules is svg package json react react form validation demo node modules is svg package json react fullstack app client node modules is svg package json dependency hierarchy react scripts tgz root library optimize css assets webpack plugin tgz cssnano tgz cssnano preset default tgz postcss svgo tgz x is svg tgz vulnerable library found in base branch master vulnerability details a vulnerability was discovered in is svg version to and below where a regular expression denial of service redos occurs if the application is provided and checks a crafted invalid svg string publish date jun pm url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version release date jun pm fix resolution is svg direct dependency fix resolution react scripts step up your open source security game with mend
0
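The CVE record above describes a regular-expression denial of service: `is-svg` ran a backtracking-prone regex over attacker-controlled input, so a crafted non-SVG string could pin the CPU. As a hedged illustration only (this is not is-svg's actual code nor the 4.3.0 fix, and `MAX_LEN` is an invented bound), a minimal Python sketch of the usual mitigation — cheap, length-bounded pre-checks before any expensive matching:

```python
# Sketch of the standard ReDoS mitigation: bound and pre-filter input
# before running any pattern matching. Hypothetical stand-in for the
# kind of guard the is-svg fix introduced, not its real implementation.

MAX_LEN = 10_000  # invented bound, for illustration only

def looks_like_svg(data: str) -> bool:
    """Cheap, bounded checks; never scans unbounded garbage."""
    data = data.strip()
    if not data or len(data) > MAX_LEN:
        return False  # the length guard defeats crafted mega-inputs
    # Require the literal '<svg' token instead of letting a
    # nested-quantifier regex backtrack across arbitrary text.
    return "<svg" in data.lower() and data.endswith(">")

print(looks_like_svg("<svg xmlns='http://www.w3.org/2000/svg'></svg>"))  # True
print(looks_like_svg("not svg at all"))                                  # False
print(looks_like_svg("<" * 100_000))                                     # False
```

The key design choice is that rejection happens in O(n) with a hard size cap, so a malicious input cannot trigger super-linear work.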
51,093
13,188,107,378
IssuesEvent
2020-08-13 05:34:31
icecube-trac/tix3
https://api.github.com/repos/icecube-trac/tix3
closed
common_variables.direct_hits.default_definitions is dangerous (Trac #1967)
Migrated from Trac combo reconstruction defect
The existence of this variable leads users to write code like the following, which has been found in the wild: ```text from icecube.common_variables import direct_hits DirectHitsDefs=direct_hits.default_definitions DirectHitsDefs.append(direct_hits.I3DirectHitsDefinition("E",-15.,250.)) tray.AddSegment(direct_hits.I3DirectHitsCalculatorSegment, DirectHitsDefinitionSeries=DirectHitsDefs,...) ``` This code works as intended as long as it is invoked exactly once. If called a second time, however, it fails with > FATAL (CommonVariables): common_variables::direct_hits::CalculateDirectHits: Could not insert direct hits values into I3DirectHitsValuesMap using key "E"! The reason is that `direct_hits.default_definitions` is being modified, rather than copied, so it ends up with two copies of the 'E' entry, and this affects not only the second `I3DirectHitsCalculatorSegment`, but the first as well. This leads to the extremely confusing situation that the second instance appears to the user to a-temporally interfere with the first. Indeed, this will happen even if the two instances are in different trays. I think that there are two problems here: First, `I3DirectHitsCalculator` does not sanitize its input, so the error is caught later than it could be, and with a less clear message to the user. 
That can be addressed with this patch: ```text Index: python/direct_hits/I3DirectHitsCalculator.py =================================================================== --- python/direct_hits/I3DirectHitsCalculator.py (revision 153929) +++ python/direct_hits/I3DirectHitsCalculator.py (working copy) @@ -190,6 +190,16 @@ 'not specified!'% (self.name) ) + # check for redundancy in definitions + definitions={} + for definition in self.dh_definitions: + if definition.name in definitions: + from icecube.icetray import logging + logging.log_fatal( + "Conflicting/duplicate DirectHitsDefinitions:\n " \ + +str(definitions[definition.name])+"\n " \ + +str(definition)) + definitions[definition.name]=definition ``` Second, and more seriously, `direct_hits.default_definitions` is global state which users tend to modify accidentally. The best solution I can see is to simply eliminate it, and have users instead call the underlying `get_default_definitions`, which has the correct behavior of returning a new object on each call. This is, unfortunately, a breaking change, so I do not want to make it without feedback. <details> <summary><em>Migrated from <a href="https://code.icecube.wisc.edu/ticket/1967">https://code.icecube.wisc.edu/ticket/1967</a>, reported by cweaver and owned by mwolf</em></summary> <p> ```json { "status": "closed", "changetime": "2019-02-13T14:14:55", "description": "The existence of this variable leads users to write code like the following, which has been found in the wild:\n\n{{{\n from icecube.common_variables import direct_hits\n DirectHitsDefs=direct_hits.default_definitions\n DirectHitsDefs.append(direct_hits.I3DirectHitsDefinition(\"E\",-15.,250.))\n tray.AddSegment(direct_hits.I3DirectHitsCalculatorSegment,\n DirectHitsDefinitionSeries=DirectHitsDefs,...)\n}}}\n\nThis code works as intended as long as it is invoked exactly once. 
If called a second time, however, it fails with\n\n> FATAL (CommonVariables): common_variables::direct_hits::CalculateDirectHits: Could not insert direct hits values into I3DirectHitsValuesMap using key \"E\"!\n\nThe reason is that `direct_hits.default_definitions` is being modified, rather than copied, so it ends up with two copies of the 'E' entry, and this affects not only the second `I3DirectHitsCalculatorSegment`, but the first as well. This leads to the extremely confusing situation that the second instance appears to the user to a-temporally interfere with the first. Indeed, this will happen even if the two instances are in different trays.\n\nI think that there are two problems here: First, `I3DirectHitsCalculator` does not sanitize its input, so the error is caught later than it could be, and with a less clear message to the user. That can be addressed with this patch:\n\n{{{\nIndex: python/direct_hits/I3DirectHitsCalculator.py\n===================================================================\n--- python/direct_hits/I3DirectHitsCalculator.py\t(revision 153929)\n+++ python/direct_hits/I3DirectHitsCalculator.py\t(working copy)\n@@ -190,6 +190,16 @@\n 'not specified!'%\n (self.name)\n )\n+ # check for redundancy in definitions\n+ definitions={}\n+ for definition in self.dh_definitions:\n+ if definition.name in definitions:\n+ from icecube.icetray import logging\n+ logging.log_fatal(\n+ \"Conflicting/duplicate DirectHitsDefinitions:\\n \" \\\n+ +str(definitions[definition.name])+\"\\n \" \\\n+ +str(definition))\n+ definitions[definition.name]=definition\n}}}\n\nSecond, and more seriously, `direct_hits.default_definitions` is global state which users tend to modify accidentally. The best solution I can see is to simply eliminate it, and have users instead call the underlying `get_default_definitions`, which has the correct behavior of returning a new object on each call. This is, unfortunately, a breaking change, so I do not want to make it without feedback. 
", "reporter": "cweaver", "cc": "", "resolution": "fixed", "_ts": "1550067295757382", "component": "combo reconstruction", "summary": "common_variables.direct_hits.default_definitions is dangerous", "priority": "normal", "keywords": "", "time": "2017-03-18T16:56:17", "milestone": "", "owner": "mwolf", "type": "defect" } ``` </p> </details>
1.0
common_variables.direct_hits.default_definitions is dangerous (Trac #1967) - The existence of this variable leads users to write code like the following, which has been found in the wild: ```text from icecube.common_variables import direct_hits DirectHitsDefs=direct_hits.default_definitions DirectHitsDefs.append(direct_hits.I3DirectHitsDefinition("E",-15.,250.)) tray.AddSegment(direct_hits.I3DirectHitsCalculatorSegment, DirectHitsDefinitionSeries=DirectHitsDefs,...) ``` This code works as intended as long as it is invoked exactly once. If called a second time, however, it fails with > FATAL (CommonVariables): common_variables::direct_hits::CalculateDirectHits: Could not insert direct hits values into I3DirectHitsValuesMap using key "E"! The reason is that `direct_hits.default_definitions` is being modified, rather than copied, so it ends up with two copies of the 'E' entry, and this affects not only the second `I3DirectHitsCalculatorSegment`, but the first as well. This leads to the extremely confusing situation that the second instance appears to the user to a-temporally interfere with the first. Indeed, this will happen even if the two instances are in different trays. I think that there are two problems here: First, `I3DirectHitsCalculator` does not sanitize its input, so the error is caught later than it could be, and with a less clear message to the user. 
That can be addressed with this patch: ```text Index: python/direct_hits/I3DirectHitsCalculator.py =================================================================== --- python/direct_hits/I3DirectHitsCalculator.py (revision 153929) +++ python/direct_hits/I3DirectHitsCalculator.py (working copy) @@ -190,6 +190,16 @@ 'not specified!'% (self.name) ) + # check for redundancy in definitions + definitions={} + for definition in self.dh_definitions: + if definition.name in definitions: + from icecube.icetray import logging + logging.log_fatal( + "Conflicting/duplicate DirectHitsDefinitions:\n " \ + +str(definitions[definition.name])+"\n " \ + +str(definition)) + definitions[definition.name]=definition ``` Second, and more seriously, `direct_hits.default_definitions` is global state which users tend to modify accidentally. The best solution I can see is to simply eliminate it, and have users instead call the underlying `get_default_definitions`, which has the correct behavior of returning a new object on each call. This is, unfortunately, a breaking change, so I do not want to make it without feedback. <details> <summary><em>Migrated from <a href="https://code.icecube.wisc.edu/ticket/1967">https://code.icecube.wisc.edu/ticket/1967</a>, reported by cweaver and owned by mwolf</em></summary> <p> ```json { "status": "closed", "changetime": "2019-02-13T14:14:55", "description": "The existence of this variable leads users to write code like the following, which has been found in the wild:\n\n{{{\n from icecube.common_variables import direct_hits\n DirectHitsDefs=direct_hits.default_definitions\n DirectHitsDefs.append(direct_hits.I3DirectHitsDefinition(\"E\",-15.,250.))\n tray.AddSegment(direct_hits.I3DirectHitsCalculatorSegment,\n DirectHitsDefinitionSeries=DirectHitsDefs,...)\n}}}\n\nThis code works as intended as long as it is invoked exactly once. 
If called a second time, however, it fails with\n\n> FATAL (CommonVariables): common_variables::direct_hits::CalculateDirectHits: Could not insert direct hits values into I3DirectHitsValuesMap using key \"E\"!\n\nThe reason is that `direct_hits.default_definitions` is being modified, rather than copied, so it ends up with two copies of the 'E' entry, and this affects not only the second `I3DirectHitsCalculatorSegment`, but the first as well. This leads to the extremely confusing situation that the second instance appears to the user to a-temporally interfere with the first. Indeed, this will happen even if the two instances are in different trays.\n\nI think that there are two problems here: First, `I3DirectHitsCalculator` does not sanitize its input, so the error is caught later than it could be, and with a less clear message to the user. That can be addressed with this patch:\n\n{{{\nIndex: python/direct_hits/I3DirectHitsCalculator.py\n===================================================================\n--- python/direct_hits/I3DirectHitsCalculator.py\t(revision 153929)\n+++ python/direct_hits/I3DirectHitsCalculator.py\t(working copy)\n@@ -190,6 +190,16 @@\n 'not specified!'%\n (self.name)\n )\n+ # check for redundancy in definitions\n+ definitions={}\n+ for definition in self.dh_definitions:\n+ if definition.name in definitions:\n+ from icecube.icetray import logging\n+ logging.log_fatal(\n+ \"Conflicting/duplicate DirectHitsDefinitions:\\n \" \\\n+ +str(definitions[definition.name])+\"\\n \" \\\n+ +str(definition))\n+ definitions[definition.name]=definition\n}}}\n\nSecond, and more seriously, `direct_hits.default_definitions` is global state which users tend to modify accidentally. The best solution I can see is to simply eliminate it, and have users instead call the underlying `get_default_definitions`, which has the correct behavior of returning a new object on each call. This is, unfortunately, a breaking change, so I do not want to make it without feedback. 
", "reporter": "cweaver", "cc": "", "resolution": "fixed", "_ts": "1550067295757382", "component": "combo reconstruction", "summary": "common_variables.direct_hits.default_definitions is dangerous", "priority": "normal", "keywords": "", "time": "2017-03-18T16:56:17", "milestone": "", "owner": "mwolf", "type": "defect" } ``` </p> </details>
defect
common variables direct hits default definitions is dangerous trac the existence of this variable leads users to write code like the following which has been found in the wild text from icecube common variables import direct hits directhitsdefs direct hits default definitions directhitsdefs append direct hits e tray addsegment direct hits directhitsdefinitionseries directhitsdefs this code works as intended as long as it is invoked exactly once if called a second time however it fails with fatal commonvariables common variables direct hits calculatedirecthits could not insert direct hits values into using key e the reason is that direct hits default definitions is being modified rather than copied so it ends up with two copies of the e entry and this affects not only the second but the first as well this leads to the extremely confusing situation that the second instance appears to the user to a temporally interfere with the first indeed this will happen even if the two instances are in different trays i think that there are two problems here first does not sanitize its input so the error is caught later than it could be and with a less clear message to the user that can be addressed with this patch text index python direct hits py python direct hits py revision python direct hits py working copy not specified self name check for redundancy in definitions definitions for definition in self dh definitions if definition name in definitions from icecube icetray import logging logging log fatal conflicting duplicate directhitsdefinitions n str definitions n str definition definitions definition second and more seriously direct hits default definitions is global state which users tend to modify accidentally the best solution i can see is to simply eliminate it and have users instead call the underlying get default definitions which has the correct behavior of returning a new object on each call this is unfortunately a breaking change so i do not want to make it without 
feedback migrated from json status closed changetime description the existence of this variable leads users to write code like the following which has been found in the wild n n n from icecube common variables import direct hits n directhitsdefs direct hits default definitions n directhitsdefs append direct hits e n tray addsegment direct hits n directhitsdefinitionseries directhitsdefs n n nthis code works as intended as long as it is invoked exactly once if called a second time however it fails with n n fatal commonvariables common variables direct hits calculatedirecthits could not insert direct hits values into using key e n nthe reason is that direct hits default definitions is being modified rather than copied so it ends up with two copies of the e entry and this affects not only the second but the first as well this leads to the extremely confusing situation that the second instance appears to the user to a temporally interfere with the first indeed this will happen even if the two instances are in different trays n ni think that there are two problems here first does not sanitize its input so the error is caught later than it could be and with a less clear message to the user that can be addressed with this patch n n nindex python direct hits py n n python direct hits py t revision n python direct hits py t working copy n n not specified n self name n n check for redundancy in definitions n definitions n for definition in self dh definitions n if definition name in definitions n from icecube icetray import logging n logging log fatal n conflicting duplicate directhitsdefinitions n n str definitions n n str definition n definitions definition n n nsecond and more seriously direct hits default definitions is global state which users tend to modify accidentally the best solution i can see is to simply eliminate it and have users instead call the underlying get default definitions which has the correct behavior of returning a new object on each call this is 
unfortunately a breaking change so i do not want to make it without feedback reporter cweaver cc resolution fixed ts component combo reconstruction summary common variables direct hits default definitions is dangerous priority normal keywords time milestone owner mwolf type defect
1
443,132
12,760,191,008
IssuesEvent
2020-06-29 07:34:22
GoogleContainerTools/skaffold
https://api.github.com/repos/GoogleContainerTools/skaffold
closed
[BUG] nil pointer dereference from k8s oidc provider
area/auth kind/bug needs-actionable-error needs-reproduction priority/p2
<!-- Issues without logs and details are more complicated to fix. Please help us by filling the template below! --> ### Expected behavior When invoking `skaffold run` against a target cluster protected by the OIDC protocol I would expect a clearer error message when the id token in my KUBECONFIG for that cluster has expired. ### Actual behavior `skaffold run` consistently returns this stack trace: ``` panic: runtime error: invalid memory address or nil pointer dereference [signal SIGSEGV: segmentation violation code=0x1 addr=0x18 pc=0x50e0e1a] goroutine 1 [running]: k8s.io/client-go/plugin/pkg/client/auth/oidc.(*oidcAuthProvider).idToken(0xc000b55140, 0x0, 0x0, 0x0, 0x0) /skaffold/vendor/k8s.io/client-go/plugin/pkg/client/auth/oidc/oidc.go:281 +0x6fa k8s.io/client-go/plugin/pkg/client/auth/oidc.(*roundTripper).RoundTrip(0xc0009ec9c0, 0xc000832700, 0x574f15b, 0xa, 0xc000a09c20) /skaffold/vendor/k8s.io/client-go/plugin/pkg/client/auth/oidc/oidc.go:199 +0x7c k8s.io/client-go/transport.(*userAgentRoundTripper).RoundTrip(0xc0009ec9e0, 0xc000832600, 0xc0009ec9e0, 0xbfafbdd9d45f2a08, 0x7daf5a6da) /skaffold/vendor/k8s.io/client-go/transport/round_trippers.go:159 +0x1c0 net/http.send(0xc000832500, 0x5a71fe0, 0xc0009ec9e0, 0xbfafbdd9d45f2a08, 0x7daf5a6da, 0x68f44a0, 0xc000560688, 0xbfafbdd9d45f2a08, 0x1, 0x0) /usr/local/go/src/net/http/client.go:252 +0x43e net/http.(*Client).send(0xc0008147e0, 0xc000832500, 0xbfafbdd9d45f2a08, 0x7daf5a6da, 0x68f44a0, 0xc000560688, 0x0, 0x1, 0x0) /usr/local/go/src/net/http/client.go:176 +0xfa net/http.(*Client).do(0xc0008147e0, 0xc000832500, 0x0, 0x0, 0x0) /usr/local/go/src/net/http/client.go:699 +0x44a net/http.(*Client).Do(...) /usr/local/go/src/net/http/client.go:567 k8s.io/client-go/rest.(*Request).request(0xc000a76a50, 0xc0007292b0, 0x0, 0x0) /skaffold/vendor/k8s.io/client-go/rest/request.go:801 +0x3e9 k8s.io/client-go/rest.(*Request).Do(0xc000a76a50, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...) 
/skaffold/vendor/k8s.io/client-go/rest/request.go:873 +0xd8 k8s.io/client-go/discovery.(*DiscoveryClient).ServerVersion(0xc0009eca40, 0x5ad0080, 0xc0009eca40, 0x0) /skaffold/vendor/k8s.io/client-go/discovery/discovery_client.go:408 +0xa9 github.com/GoogleContainerTools/skaffold/pkg/skaffold/runner.failIfClusterIsNotReachable(0x5a72580, 0xc0001a4008) /skaffold/pkg/skaffold/runner/deploy.go:95 +0x7a github.com/GoogleContainerTools/skaffold/pkg/skaffold/runner.(*SkaffoldRunner).Deploy(0xc00082c000, 0x5ab5440, 0xc0001b67c0, 0x5a72580, 0xc0001a4008, 0xc000d2a780, 0x2, 0x2, 0x2, 0x0) /skaffold/pkg/skaffold/runner/deploy.go:55 +0x26b github.com/GoogleContainerTools/skaffold/pkg/skaffold/runner.(*SkaffoldRunner).DeployAndLog(0xc00082c000, 0x5ab5440, 0xc0001b67c0, 0x5a72580, 0xc0001a4008, 0xc000d2a780, 0x2, 0x2, 0x0, 0x0) /skaffold/pkg/skaffold/runner/build_deploy.go:104 +0x1f5 github.com/GoogleContainerTools/skaffold/cmd/skaffold/app/cmd.doRun.func1(0x5ad4da0, 0xc00082c000, 0xc00016be60, 0x0, 0x100000001) /skaffold/cmd/skaffold/app/cmd/run.go:49 +0x1b5 github.com/GoogleContainerTools/skaffold/cmd/skaffold/app/cmd.withRunner(0x5ab5440, 0xc0001b67c0, 0xc000c0fb28, 0xc00093fb50, 0x41e9022) /skaffold/cmd/skaffold/app/cmd/runner.go:47 +0xb4 github.com/GoogleContainerTools/skaffold/cmd/skaffold/app/cmd.doRun(0x5ab5440, 0xc0001b67c0, 0x5a72580, 0xc0001a4008, 0xc0001a4008, 0xc0001af980) /skaffold/cmd/skaffold/app/cmd/run.go:43 +0x83 github.com/GoogleContainerTools/skaffold/cmd/skaffold/app/cmd.(*builder).NoArgs.func1(0xc00035a2c0, 0xc00000e700, 0x0, 0x2, 0x0, 0x0) /skaffold/cmd/skaffold/app/cmd/commands.go:99 +0xbb github.com/spf13/cobra.(*Command).execute(0xc00035a2c0, 0xc00000e6e0, 0x2, 0x2, 0xc00035a2c0, 0xc00000e6e0) /skaffold/vendor/github.com/spf13/cobra/command.go:842 +0x453 github.com/spf13/cobra.(*Command).ExecuteC(0xc00035a000, 0xc0001a4008, 0x5a72580, 0xc0001a4010) /skaffold/vendor/github.com/spf13/cobra/command.go:950 +0x349 
github.com/spf13/cobra.(*Command).Execute(...) /skaffold/vendor/github.com/spf13/cobra/command.go:887 github.com/spf13/cobra.(*Command).ExecuteContext(...) /skaffold/vendor/github.com/spf13/cobra/command.go:880 github.com/GoogleContainerTools/skaffold/cmd/skaffold/app.Run(0x5a72580, 0xc0001a4008, 0x5a72580, 0xc0001a4010, 0x0, 0x0) /skaffold/cmd/skaffold/app/skaffold.go:33 +0xde main.main() /skaffold/cmd/skaffold/skaffold.go:31 +0x4e ``` ### Information - Skaffold version: v1.10.1 - Operating system: macOS - Contents of skaffold.yaml: Not disclosable ### Steps to reproduce the behavior 1. Set context to a cluster that uses oidc for authentication 2. Modify KUBECONFIG such that the `id-token` for that cluster's user definition is invalid. E.g. ``` yaml users: - name: user@domain.com user: auth-provider: config: id-token: imabadidtoken ... name: oidc ``` 3. Execute `skaffold run` on any deployable profile that uses helm ### Additional Notes For the sake of others running into a similar stack trace, try refreshing your cluster credentials; that's how I got around this problem. It does seem like the client here tries to check if the token is expired, so I am unsure if that is the root cause. In case that isn't the root cause, you can inspect a scrubbed version of the JWT payload below if that provides any insight. FWIW the k8s cluster I'm using is hosted on IBM Cloud, which offers a Free plan if it comes to that for debugging.
1.0
[BUG] nil pointer dereference from k8s oidc provider - <!-- Issues without logs and details are more complicated to fix. Please help us by filling the template below! --> ### Expected behavior When invoking `skaffold run` against a target cluster protected by the OIDC protocol I would expect a clearer error message when the id token in my KUBECONFIG for that cluster has expired. ### Actual behavior `skaffold run` consistently returns this stack trace: ``` panic: runtime error: invalid memory address or nil pointer dereference [signal SIGSEGV: segmentation violation code=0x1 addr=0x18 pc=0x50e0e1a] goroutine 1 [running]: k8s.io/client-go/plugin/pkg/client/auth/oidc.(*oidcAuthProvider).idToken(0xc000b55140, 0x0, 0x0, 0x0, 0x0) /skaffold/vendor/k8s.io/client-go/plugin/pkg/client/auth/oidc/oidc.go:281 +0x6fa k8s.io/client-go/plugin/pkg/client/auth/oidc.(*roundTripper).RoundTrip(0xc0009ec9c0, 0xc000832700, 0x574f15b, 0xa, 0xc000a09c20) /skaffold/vendor/k8s.io/client-go/plugin/pkg/client/auth/oidc/oidc.go:199 +0x7c k8s.io/client-go/transport.(*userAgentRoundTripper).RoundTrip(0xc0009ec9e0, 0xc000832600, 0xc0009ec9e0, 0xbfafbdd9d45f2a08, 0x7daf5a6da) /skaffold/vendor/k8s.io/client-go/transport/round_trippers.go:159 +0x1c0 net/http.send(0xc000832500, 0x5a71fe0, 0xc0009ec9e0, 0xbfafbdd9d45f2a08, 0x7daf5a6da, 0x68f44a0, 0xc000560688, 0xbfafbdd9d45f2a08, 0x1, 0x0) /usr/local/go/src/net/http/client.go:252 +0x43e net/http.(*Client).send(0xc0008147e0, 0xc000832500, 0xbfafbdd9d45f2a08, 0x7daf5a6da, 0x68f44a0, 0xc000560688, 0x0, 0x1, 0x0) /usr/local/go/src/net/http/client.go:176 +0xfa net/http.(*Client).do(0xc0008147e0, 0xc000832500, 0x0, 0x0, 0x0) /usr/local/go/src/net/http/client.go:699 +0x44a net/http.(*Client).Do(...) 
/usr/local/go/src/net/http/client.go:567 k8s.io/client-go/rest.(*Request).request(0xc000a76a50, 0xc0007292b0, 0x0, 0x0) /skaffold/vendor/k8s.io/client-go/rest/request.go:801 +0x3e9 k8s.io/client-go/rest.(*Request).Do(0xc000a76a50, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...) /skaffold/vendor/k8s.io/client-go/rest/request.go:873 +0xd8 k8s.io/client-go/discovery.(*DiscoveryClient).ServerVersion(0xc0009eca40, 0x5ad0080, 0xc0009eca40, 0x0) /skaffold/vendor/k8s.io/client-go/discovery/discovery_client.go:408 +0xa9 github.com/GoogleContainerTools/skaffold/pkg/skaffold/runner.failIfClusterIsNotReachable(0x5a72580, 0xc0001a4008) /skaffold/pkg/skaffold/runner/deploy.go:95 +0x7a github.com/GoogleContainerTools/skaffold/pkg/skaffold/runner.(*SkaffoldRunner).Deploy(0xc00082c000, 0x5ab5440, 0xc0001b67c0, 0x5a72580, 0xc0001a4008, 0xc000d2a780, 0x2, 0x2, 0x2, 0x0) /skaffold/pkg/skaffold/runner/deploy.go:55 +0x26b github.com/GoogleContainerTools/skaffold/pkg/skaffold/runner.(*SkaffoldRunner).DeployAndLog(0xc00082c000, 0x5ab5440, 0xc0001b67c0, 0x5a72580, 0xc0001a4008, 0xc000d2a780, 0x2, 0x2, 0x0, 0x0) /skaffold/pkg/skaffold/runner/build_deploy.go:104 +0x1f5 github.com/GoogleContainerTools/skaffold/cmd/skaffold/app/cmd.doRun.func1(0x5ad4da0, 0xc00082c000, 0xc00016be60, 0x0, 0x100000001) /skaffold/cmd/skaffold/app/cmd/run.go:49 +0x1b5 github.com/GoogleContainerTools/skaffold/cmd/skaffold/app/cmd.withRunner(0x5ab5440, 0xc0001b67c0, 0xc000c0fb28, 0xc00093fb50, 0x41e9022) /skaffold/cmd/skaffold/app/cmd/runner.go:47 +0xb4 github.com/GoogleContainerTools/skaffold/cmd/skaffold/app/cmd.doRun(0x5ab5440, 0xc0001b67c0, 0x5a72580, 0xc0001a4008, 0xc0001a4008, 0xc0001af980) /skaffold/cmd/skaffold/app/cmd/run.go:43 +0x83 github.com/GoogleContainerTools/skaffold/cmd/skaffold/app/cmd.(*builder).NoArgs.func1(0xc00035a2c0, 0xc00000e700, 0x0, 0x2, 0x0, 0x0) /skaffold/cmd/skaffold/app/cmd/commands.go:99 +0xbb github.com/spf13/cobra.(*Command).execute(0xc00035a2c0, 0xc00000e6e0, 0x2, 0x2, 
0xc00035a2c0, 0xc00000e6e0) /skaffold/vendor/github.com/spf13/cobra/command.go:842 +0x453 github.com/spf13/cobra.(*Command).ExecuteC(0xc00035a000, 0xc0001a4008, 0x5a72580, 0xc0001a4010) /skaffold/vendor/github.com/spf13/cobra/command.go:950 +0x349 github.com/spf13/cobra.(*Command).Execute(...) /skaffold/vendor/github.com/spf13/cobra/command.go:887 github.com/spf13/cobra.(*Command).ExecuteContext(...) /skaffold/vendor/github.com/spf13/cobra/command.go:880 github.com/GoogleContainerTools/skaffold/cmd/skaffold/app.Run(0x5a72580, 0xc0001a4008, 0x5a72580, 0xc0001a4010, 0x0, 0x0) /skaffold/cmd/skaffold/app/skaffold.go:33 +0xde main.main() /skaffold/cmd/skaffold/skaffold.go:31 +0x4e ``` ### Information - Skaffold version: v1.10.1 - Operating system: macOS - Contents of skaffold.yaml: Not disclosable ### Steps to reproduce the behavior 1. Set context to a cluster that uses oidc for authentication 2. Modify KUBECONFIG such that the `id-token` for that cluster's user definition is invalid. E.g. ``` yaml users: - name: user@domain.com user: auth-provider: config: id-token: imabadidtoken ... name: oidc ``` 3. Execute `skaffold run` on any deployable profile that uses helm ### Additional Notes For the sake of others running into a similar stack trace, try refreshing your cluster credentials; that's how I got around this problem. It does seem like the client here tries to check if the token is expired so I am unsure if that is the root cause. In case that isn't the root cause you can inspect a scrubbed version of the JWT payload below if that provides any insight. FWIW the k8s cluster I'm using is hosted on IBM Cloud which offers a Free plan if it comes to that for debugging.
``` json { "iam_id": "", "iss": "", "sub": "", "aud": "", "given_name": "", "family_name": "", "name": "", "email": "", "exp": 1591643667, "scope": "ibm openid containers-kubernetes", "iat": 1591640067, "sub_60d74e319e72c2c58c02b8c2d18c41bc": "", "iam_id_60d74e319e72c2c58c02b8c2d18c41bc": "", "realmed_sub_60d74e319e72c2c58c02b8c2d18c41bc": "", "groups_60d74e319e72c2c58c02b8c2d18c41bc": [ ] } ```
non_defect
nil pointer dereference from oidc provider issues without logs and details are more complicated to fix please help us by filling the template below expected behavior when invoking skaffold run against a target cluster protected by the oidc protocol i would expect a clearer error message when the id token in my kubeconfig for that cluster has expired actual behavior skaffold run consistently returns this stack trace panic runtime error invalid memory address or nil pointer dereference goroutine io client go plugin pkg client auth oidc oidcauthprovider idtoken skaffold vendor io client go plugin pkg client auth oidc oidc go io client go plugin pkg client auth oidc roundtripper roundtrip skaffold vendor io client go plugin pkg client auth oidc oidc go io client go transport useragentroundtripper roundtrip skaffold vendor io client go transport round trippers go net http send usr local go src net http client go net http client send usr local go src net http client go net http client do usr local go src net http client go net http client do usr local go src net http client go io client go rest request request skaffold vendor io client go rest request go io client go rest request do skaffold vendor io client go rest request go io client go discovery discoveryclient serverversion skaffold vendor io client go discovery discovery client go github com googlecontainertools skaffold pkg skaffold runner failifclusterisnotreachable skaffold pkg skaffold runner deploy go github com googlecontainertools skaffold pkg skaffold runner skaffoldrunner deploy skaffold pkg skaffold runner deploy go github com googlecontainertools skaffold pkg skaffold runner skaffoldrunner deployandlog skaffold pkg skaffold runner build deploy go github com googlecontainertools skaffold cmd skaffold app cmd dorun skaffold cmd skaffold app cmd run go github com googlecontainertools skaffold cmd skaffold app cmd withrunner skaffold cmd skaffold app cmd runner go github com googlecontainertools skaffold cmd 
skaffold app cmd dorun skaffold cmd skaffold app cmd run go github com googlecontainertools skaffold cmd skaffold app cmd builder noargs skaffold cmd skaffold app cmd commands go github com cobra command execute skaffold vendor github com cobra command go github com cobra command executec skaffold vendor github com cobra command go github com cobra command execute skaffold vendor github com cobra command go github com cobra command executecontext skaffold vendor github com cobra command go github com googlecontainertools skaffold cmd skaffold app run skaffold cmd skaffold app skaffold go main main skaffold cmd skaffold skaffold go information skaffold version operating system macos contents of skaffold yaml not disclosable steps to reproduce the behavior set context to a cluster that uses oidc for authentication modify kubeconfig such that the id token or for that cluster s user defintion is invalid e g yaml users name user domain com user auth provider config id token imabadidtoken name oidc execute skaffold run on any deployable profile that uses helm additional notes for the sake of others running into a similar stack trace try refreshing your cluster credentials that s how i got around this problem it does seem like the client here tries to check if the token is expired so i am unsure if that is the root cause in case that isn t the root cause you can inspect a scrubbed version of the jwt payload below if that provides any insight fwiw the cluster i m using is hosted on ibm cloud which offers a free plan if it comes to that for debugging json iam id iss sub aud given name family name name email exp scope ibm openid containers kubernetes iat sub iam id realmed sub groups
0
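The record above traces the panic to `oidc.go:281`, where a cached id-token that fails to parse leaves a nil value that is then dereferenced. The sketch below is a hypothetical simplification, not the actual client-go code: `parseJWT`, `idToken`, and the `map[string]string` config shape are all stand-ins invented for illustration. It only shows the guard pattern the reporter asks for — return a descriptive error when the cached token is malformed or missing, instead of dereferencing nil.

```go
package main

import (
	"encoding/base64"
	"encoding/json"
	"errors"
	"fmt"
	"strings"
)

// parseJWT is a stand-in for the real JWT handling: it splits the compact
// serialization and decodes the payload, returning a nil map plus an error
// on malformed input. Callers must check the error before using the claims.
func parseJWT(raw string) (map[string]interface{}, error) {
	parts := strings.Split(raw, ".")
	if len(parts) != 3 {
		return nil, fmt.Errorf("token must have 3 segments, got %d", len(parts))
	}
	payload, err := base64.RawURLEncoding.DecodeString(parts[1])
	if err != nil {
		return nil, err
	}
	var claims map[string]interface{}
	if err := json.Unmarshal(payload, &claims); err != nil {
		return nil, err
	}
	return claims, nil
}

// idToken guards the path that panicked in the report: an unparseable or
// absent cached token yields a clear error instead of a nil dereference.
func idToken(cfg map[string]string) (string, error) {
	raw := cfg["id-token"]
	if raw == "" {
		return "", errors.New("oidc: no id-token in kubeconfig; refresh your cluster credentials")
	}
	claims, err := parseJWT(raw)
	if err != nil || claims == nil {
		return "", fmt.Errorf("oidc: cached id-token is invalid, refresh your cluster credentials: %v", err)
	}
	return raw, nil
}

func main() {
	// The value from the reproduction steps ("imabadidtoken") has no dot
	// separators, so it fails the 3-segment check and we get an error, not
	// a SIGSEGV.
	if _, err := idToken(map[string]string{"id-token": "imabadidtoken"}); err != nil {
		fmt.Println("error:", err)
	}
}
```

This matches the reporter's workaround: the failure mode is stale credentials, so surfacing "refresh your cluster credentials" is more useful than a stack trace.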
77,322
26,921,073,660
IssuesEvent
2023-02-07 10:28:57
openzfs/zfs
https://api.github.com/repos/openzfs/zfs
opened
/etc/SuSE-release is no longer present
Type: Defect
### System information Type | Version/Name --- | --- Distribution Name | SUSE Linux Enterprise Server, openSUSE Leap, openSUSE Tumbleweed Distribution Version | 15 Kernel Version | Any Architecture | Any OpenZFS Version | 2.1.9 ### Describe the problem you're observing The configure script does not set the VENDOR to "sles" on SUSE Linux Enterprise Server 15, openSUSE Leap 15 and openSUSE Tumbleweed. The script looks for the file `/etc/SuSE-release`, which was [deprecated several years ago](https://en.opensuse.org/Etc_SuSE-release) and is not present in current SUSE Linux distributions. SUSE recommends to read `/etc/os-release` and `/usr/lib/os-release` instead. The configure script sets the `initconfdir` to `/etc/default` when VENDOR is undefined. If the vendor is "sles", the directory is set to `/etc/sysconfig`: ```sh case "$VENDOR" in sles) initconfdir=/etc/sysconfig ;; esac ``` **The SUSE developers, on the other hand, use `%_sysconfdir/default/zfs` in their [zfs.spec](https://build.opensuse.org/package/view_file/filesystems/zfs/zfs.spec).** I've got a patch that reads `/etc/os-release` and `/usr/lib/os-release`, which causes VENDOR to be set to "sles". Should `initconfdir=/etc/sysconfig` be kept or should this setting be changed to `initconfdir=/etc/default` if VENDOR is "sles"? ### Describe how to reproduce the problem See issue #14467 for instructions on how to set up a virtual machine with openSUSE Leap 15. The command `rpmbuild -bb rpmbuild/SPECS/zfs.spec` fails if the file `/etc/SuSE-release` does not exist. The command `rpmbuild -D'_initconfdir /etc/default' -bb rpmbuild/SPECS/zfs.spec` succeeds. 
The zfs.spec file has these statements: ``` %if %{undefined _initconfdir} %global _initconfdir /etc/sysconfig %endif ``` This could be changed to: ``` %if %{undefined _initconfdir} %if 0%{?suse_version} %global _initconfdir /etc/default %else %global _initconfdir /etc/sysconfig %endif %endif ``` ### Include any warning/errors/backtraces from the system logs Output from rpmbuild: ``` RPM build errors: File not found: /home/vagrant/rpmbuild/BUILDROOT/zfs-2.1.99-1724_geb823cbc7.x86_64/etc/sysconfig/zfs Installed (but unpackaged) file(s) found: /etc/default/zfs ```
1.0
/etc/SuSE-release is no longer present - ### System information Type | Version/Name --- | --- Distribution Name | SUSE Linux Enterprise Server, openSUSE Leap, openSUSE Tumbleweed Distribution Version | 15 Kernel Version | Any Architecture | Any OpenZFS Version | 2.1.9 ### Describe the problem you're observing The configure script does not set the VENDOR to "sles" on SUSE Linux Enterprise Server 15, openSUSE Leap 15 and openSUSE Tumbleweed. The script looks for the file `/etc/SuSE-release`, which was [deprecated several years ago](https://en.opensuse.org/Etc_SuSE-release) and is not present in current SUSE Linux distributions. SUSE recommends to read `/etc/os-release` and `/usr/lib/os-release` instead. The configure script sets the `initconfdir` to `/etc/default` when VENDOR is undefined. If the vendor is "sles", the directory is set to `/etc/sysconfig`: ```sh case "$VENDOR" in sles) initconfdir=/etc/sysconfig ;; esac ``` **The SUSE developers, on the other hand, use `%_sysconfdir/default/zfs` in their [zfs.spec](https://build.opensuse.org/package/view_file/filesystems/zfs/zfs.spec).** I've got a patch that reads `/etc/os-release` and `/usr/lib/os-release`, which causes VENDOR to be set to "sles". Should `initconfdir=/etc/sysconfig` be kept or should this setting be changed to `initconfdir=/etc/default` if VENDOR is "sles"? ### Describe how to reproduce the problem See issue #14467 for instructions on how to set up a virtual machine with openSUSE Leap 15. The command `rpmbuild -bb rpmbuild/SPECS/zfs.spec` fails if the file `/etc/SuSE-release` does not exist. The command `rpmbuild -D'_initconfdir /etc/default' -bb rpmbuild/SPECS/zfs.spec` succeeds. 
The zfs.spec file has these statements: ``` %if %{undefined _initconfdir} %global _initconfdir /etc/sysconfig %endif ``` This could be changed to: ``` %if %{undefined _initconfdir} %if 0%{?suse_version} %global _initconfdir /etc/default %else %global _initconfdir /etc/sysconfig %endif %endif ``` ### Include any warning/errors/backtraces from the system logs Output from rpmbuild: ``` RPM build errors: File not found: /home/vagrant/rpmbuild/BUILDROOT/zfs-2.1.99-1724_geb823cbc7.x86_64/etc/sysconfig/zfs Installed (but unpackaged) file(s) found: /etc/default/zfs ```
defect
etc suse release is no longer present system information type version name distribution name suse linux enterprise server opensuse leap opensuse tumbleweed distribution version kernel version any architecture any openzfs version describe the problem you re observing the configure script does not set the vendor to sles on suse linux enterprise server opensuse leap and opensuse tumbleweed the script looks for the file etc suse release which was and is not present in current suse linux distributions suse recommends to read etc os release and usr lib os release instead the configure script sets the initconfdir to etc default when vendor is undefined if the vendor is sles the directory is set to etc sysconfig sh case vendor in sles initconfdir etc sysconfig esac the suse developers on the other hand use sysconfdir default zfs in their i ve got a patch that reads etc os release and usr lib os release which causes vendor to be set to sles should initconfdir etc sysconfig be kept or should this setting be changed to initconfdir etc default if vendor is sles describe how to reproduce the problem see issue for instructions on how to set up a virtual machine with opensuse leap the command rpmbuild bb rpmbuild specs zfs spec fails if the file etc suse release does not exist the command rpmbuild d initconfdir etc default bb rpmbuild specs zfs spec succeeds the zfs spec file has these statements if undefined initconfdir global initconfdir etc sysconfig endif this could be changed to if undefined initconfdir if suse version global initconfdir etc default else global initconfdir etc sysconfig endif endif include any warning errors backtraces from the system logs output from rpmbuild rpm build errors file not found home vagrant rpmbuild buildroot zfs etc sysconfig zfs installed but unpackaged file s found etc default zfs
1
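The zfs record above says the configure script should read `/etc/os-release` and `/usr/lib/os-release` instead of the removed `/etc/SuSE-release`. The sketch below is a minimal illustration of that detection, not the actual configure.ac change: the `detect_vendor` function name is hypothetical, and the file list is passed as arguments so the logic can be exercised against a test file; real use would pass the two standard paths.

```shell
#!/bin/sh
# Sketch: derive VENDOR from os-release. os-release is shell-sourceable;
# ID is e.g. "sles", "opensuse-leap", or "opensuse-tumbleweed".
detect_vendor() {
    for f in "$@"; do
        [ -f "$f" ] || continue
        ID=
        . "$f"
        case "$ID" in
            # Map all SUSE variants to the single "sles" vendor the
            # configure script already knows about.
            sles|opensuse*) echo sles ;;
            *) echo "${ID:-unknown}" ;;
        esac
        return
    done
    echo unknown
}

VENDOR=$(detect_vendor /etc/os-release /usr/lib/os-release)
```

With `VENDOR=sles` set this way, the existing `case "$VENDOR" in sles) ... esac` block quoted in the report fires again; whether it should then choose `/etc/sysconfig` or `/etc/default` is the open question the issue raises.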
69,688
15,026,197,647
IssuesEvent
2021-02-01 22:16:24
angular/angular
https://api.github.com/repos/angular/angular
opened
Deprecate `SafeStyle` and `DomSanitizer.bypassSecurityTrustStyle`
P3 comp: core comp: security core: styling bindings refactoring security
_(copied from internal tracker)_ The `SafeStyle` type is returned from the `DomSanitizer.bypassSecurityTrustStyle` method. Both of these APIs are a part of style sanitization in Angular. With the style sanitization refactor work, there will no longer be any need for these APIs in the framework, therefore, both the `SafeStyle` and `DomSanitizer.bypassSecurityTrustStyle` APIs will need to be deprecated. The following steps need to be completed: * Deprecate `SafeStyle` in the API docs (@deprecated) * Deprecate `DomSanitizer.bypassSecurityTrustStyle` in the API docs (@deprecated) * Emit a warning message when `DomSanitizer.bypassSecurityTrustStyle` is used * Add entries to deprecation guide Related design doc: https://hackmd.io/_sF2TzXLSiuuYbY88MpdQA?view
True
Deprecate `SafeStyle` and `DomSanitizer.bypassSecurityTrustStyle` - _(copied from internal tracker)_ The `SafeStyle` type is returned from the `DomSanitizer.bypassSecurityTrustStyle` method. Both of these APIs are a part of style sanitization in Angular. With the style sanitization refactor work, there will no longer be any need for these APIs in the framework, therefore, both the `SafeStyle` and `DomSanitizer.bypassSecurityTrustStyle` APIs will need to be deprecated. The following steps need to be completed: * Deprecate `SafeStyle` in the API docs (@deprecated) * Deprecate `DomSanitizer.bypassSecurityTrustStyle` in the API docs (@deprecated) * Emit a warning message when `DomSanitizer.bypassSecurityTrustStyle` is used * Add entries to deprecation guide Related design doc: https://hackmd.io/_sF2TzXLSiuuYbY88MpdQA?view
non_defect
deprecate safestyle and domsanitizer bypasssecuritytruststyle copied from internal tracker the safestyle type is returned from the domsanitizer bypasssecuritytruststyle method both of these apis are apart of style sanitization in angular with the style sanitization refactor work there will no longer be any need for these apis in the framework therefore both the safestyle and domsanitizer bypasssecuritytruststyle apis will need to be deprecated the following steps need to be completed deprecate safestyle in the api docs deprecated deprecate domsanitizer bypasssecuritytruststyle in the api docs deprecated emit a warning message when domsanitizer bypasssecuritytruststyle is used add entries to deprecation guide related design doc
0