| Column | Dtype / kind | Stats |
|---|---|---|
| Unnamed: 0 | int64 | min 0, max 832k |
| id | float64 | min 2.49B, max 32.1B |
| type | stringclasses | 1 value |
| created_at | stringlengths | min 19, max 19 |
| repo | stringlengths | min 5, max 112 |
| repo_url | stringlengths | min 34, max 141 |
| action | stringclasses | 3 values |
| title | stringlengths | min 1, max 757 |
| labels | stringlengths | min 4, max 664 |
| body | stringlengths | min 3, max 261k |
| index | stringclasses | 10 values |
| text_combine | stringlengths | min 96, max 261k |
| label | stringclasses | 2 values |
| text | stringlengths | min 96, max 232k |
| binary_label | int64 | min 0, max 1 |
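The column summary above can be sanity-checked with pandas. A minimal sketch, using two abbreviated rows transcribed from the records below (long text fields omitted — this is illustrative, not the full dataset):

```python
import pandas as pd

# Two abbreviated rows from the dump, just enough to show the column layout.
rows = [
    {"id": 2_688_616_823, "type": "IssuesEvent", "created_at": "2015-03-31 01:58:59",
     "repo": "gios-asu/text-geolocator", "action": "opened",
     "title": "NLP Documentation - developer's guide",
     "label": "non_defect", "binary_label": 0},
    {"id": 13_191_541_923, "type": "IssuesEvent", "created_at": "2020-08-13 12:19:27",
     "repo": "OpenMS/OpenMS", "action": "closed",
     "title": "TOPPView: Layers display not updated",
     "label": "defect", "binary_label": 1},
]
df = pd.DataFrame(rows)

# "stringlengths 19 19" for created_at corresponds to fixed-width
# "YYYY-MM-DD HH:MM:SS" timestamps.
assert df["created_at"].str.len().eq(19).all()

# label is the 2-class string column mirrored by the 0/1 binary_label column.
mapping = {"non_defect": 0, "defect": 1}
assert (df["label"].map(mapping) == df["binary_label"]).all()
```

The `mapping` dict is inferred from the label/binary_label pairs visible in the records, not stated anywhere in the dump itself.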
1,962
2,603,974,140
IssuesEvent
2015-02-24 19:01:04
chrsmith/nishazi6
https://api.github.com/repos/chrsmith/nishazi6
opened
沈阳疱疹男科医院
auto-migrated Priority-Medium Type-Defect
``` 沈阳疱疹男科医院〓沈陽軍區政治部醫院性病〓TEL:024-3102330 8〓成立于1946年,68年專注于性傳播疾病的研究和治療。位于� ��陽市沈河區二緯路32號。是一所與新中國同建立共輝煌的歷� ��悠久、設備精良、技術權威、專家云集,是預防、保健、醫 療、科研康復為一體的綜合性醫院。是國家首批公立甲等部�� �醫院、全國首批醫療規范定點單位,是第四軍醫大學、東南� ��學等知名高等院校的教學醫院。曾被中國人民解放軍空軍后 勤部衛生部評為衛生工作先進單位,先后兩次榮立集體二等�� �。 ``` ----- Original issue reported on code.google.com by `q964105...@gmail.com` on 4 Jun 2014 at 8:09
1.0
沈阳疱疹男科医院 - ``` 沈阳疱疹男科医院〓沈陽軍區政治部醫院性病〓TEL:024-3102330 8〓成立于1946年,68年專注于性傳播疾病的研究和治療。位于� ��陽市沈河區二緯路32號。是一所與新中國同建立共輝煌的歷� ��悠久、設備精良、技術權威、專家云集,是預防、保健、醫 療、科研康復為一體的綜合性醫院。是國家首批公立甲等部�� �醫院、全國首批醫療規范定點單位,是第四軍醫大學、東南� ��學等知名高等院校的教學醫院。曾被中國人民解放軍空軍后 勤部衛生部評為衛生工作先進單位,先后兩次榮立集體二等�� �。 ``` ----- Original issue reported on code.google.com by `q964105...@gmail.com` on 4 Jun 2014 at 8:09
defect
沈阳疱疹男科医院 沈阳疱疹男科医院〓沈陽軍區政治部醫院性病〓tel: 〓 , 。位于� �� 。是一所與新中國同建立共輝煌的歷� ��悠久、設備精良、技術權威、專家云集,是預防、保健、醫 療、科研康復為一體的綜合性醫院。是國家首批公立甲等部�� �醫院、全國首批醫療規范定點單位,是第四軍醫大學、東南� ��學等知名高等院校的教學醫院。曾被中國人民解放軍空軍后 勤部衛生部評為衛生工作先進單位,先后兩次榮立集體二等�� �。 original issue reported on code google com by gmail com on jun at
1
40,611
10,065,198,617
IssuesEvent
2019-07-23 10:19:00
hazelcast/hazelcast
https://api.github.com/repos/hazelcast/hazelcast
opened
split brain executors partition.count=1999 OOME
Module: Cluster Module: IExecutor Team: Core Type: Defect
http://jenkins.hazelcast.com/view/split/job/split-executors/31/console http://54.147.27.51/~jenkins/workspace/split-executors/3.12.1/2019_07_20-09_52_54/executors/ hz-root/HzMember4HZAA/HzMember4HZAA.hprof hz-root/HzMember5HZAA/HzMember5HZAA.hprof http://54.147.27.51/~jenkins/workspace/split-executors/3.12.1/2019_07_20-09_52_54/executors/go ops="${ops} -Dhazelcast.partition.count=1999" 1GB meamber heap http://54.147.27.51/~jenkins/workspace/split-executors/3.12.1/2019_07_20-09_52_54/executors/gc.html higher partition.count=1999 gives a larger overhead, maybe the OOME hprof is not a leak but the hprof could help find inefficiency 2 members encountered a OOME however rerunning the test with a 2GB heap passed http://jenkins.hazelcast.com/view/split/job/split-executors/32/console however the GC charts still look leaky http://54.147.27.51/~jenkins/workspace/split-executors/3.12.1/2019_07_20-14_46_12/executors/gc.html
1.0
split brain executors partition.count=1999 OOME - http://jenkins.hazelcast.com/view/split/job/split-executors/31/console http://54.147.27.51/~jenkins/workspace/split-executors/3.12.1/2019_07_20-09_52_54/executors/ hz-root/HzMember4HZAA/HzMember4HZAA.hprof hz-root/HzMember5HZAA/HzMember5HZAA.hprof http://54.147.27.51/~jenkins/workspace/split-executors/3.12.1/2019_07_20-09_52_54/executors/go ops="${ops} -Dhazelcast.partition.count=1999" 1GB meamber heap http://54.147.27.51/~jenkins/workspace/split-executors/3.12.1/2019_07_20-09_52_54/executors/gc.html higher partition.count=1999 gives a larger overhead, maybe the OOME hprof is not a leak but the hprof could help find inefficiency 2 members encountered a OOME however rerunning the test with a 2GB heap passed http://jenkins.hazelcast.com/view/split/job/split-executors/32/console however the GC charts still look leaky http://54.147.27.51/~jenkins/workspace/split-executors/3.12.1/2019_07_20-14_46_12/executors/gc.html
defect
split brain executors partition count oome hz root hprof hz root hprof ops ops dhazelcast partition count meamber heap higher partition count gives a larger overhead maybe the oome hprof is not a leak but the hprof could help find inefficiency members encountered a oome however rerunning the test with a heap passed however the gc charts still look leaky
1
2,916
2,607,966,064
IssuesEvent
2015-02-26 00:42:24
chrsmithdemos/leveldb
https://api.github.com/repos/chrsmithdemos/leveldb
closed
Create the form
auto-migrated Priority-Medium Type-Defect
``` Hello. Help please, I need to create the form for editing table data from the database leveldb ``` ----- Original issue reported on code.google.com by `faN...@rambler.ru` on 31 May 2012 at 4:55
1.0
Create the form - ``` Hello. Help please, I need to create the form for editing table data from the database leveldb ``` ----- Original issue reported on code.google.com by `faN...@rambler.ru` on 31 May 2012 at 4:55
defect
create the form hello help please i need to create the form for editing table data from the database leveldb original issue reported on code google com by fan rambler ru on may at
1
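Comparing `text_combine` with `text` in the record above suggests a normalization step: lowercase, URLs, digits, and punctuation stripped, whitespace collapsed. A minimal sketch of that kind of cleaning — an assumption about the pipeline, not its actual code (in particular, it keeps only ASCII letters, whereas the dump's `text` column also retains CJK characters):

```python
import re

def clean(text: str) -> str:
    """Approximate the text-column normalization seen in the dump:
    lowercase, drop URLs, digits, and punctuation, collapse whitespace."""
    text = text.lower()
    text = re.sub(r"https?://\S+", " ", text)  # strip URLs
    text = re.sub(r"[^a-z\s]", " ", text)      # keep ASCII letters only
    return re.sub(r"\s+", " ", text).strip()   # collapse whitespace

print(clean("Create the form - ``` Hello. Help please ``` http://example.com"))
# -> create the form hello help please
```

Applied to the record above, this reproduces the general shape of its `text` field ("create the form hello help please ...") from its `text_combine` field.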
2,432
2,688,616,823
IssuesEvent
2015-03-31 01:58:59
gios-asu/text-geolocator
https://api.github.com/repos/gios-asu/text-geolocator
opened
NLP Documentation - developer's guide
documentation nlp
Ensure that the proper comments are in place in the NLP module to ensure a quality developer's guide is produced
1.0
NLP Documentation - developer's guide - Ensure that the proper comments are in place in the NLP module to ensure a quality developer's guide is produced
non_defect
nlp documentation developer s guide ensure that the proper comments are in place in the nlp module to ensure a quality developer s guide is produced
0
124,497
17,772,588,170
IssuesEvent
2021-08-30 15:13:26
kapseliboi/energy-futures-vis-avenir-energetique
https://api.github.com/repos/kapseliboi/energy-futures-vis-avenir-energetique
opened
CVE-2020-28477 (High) detected in immer-1.10.0.tgz
security vulnerability
## CVE-2020-28477 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>immer-1.10.0.tgz</b></p></summary> <p>Create your next immutable state by mutating the current one</p> <p>Library home page: <a href="https://registry.npmjs.org/immer/-/immer-1.10.0.tgz">https://registry.npmjs.org/immer/-/immer-1.10.0.tgz</a></p> <p>Path to dependency file: energy-futures-vis-avenir-energetique/package.json</p> <p>Path to vulnerable library: energy-futures-vis-avenir-energetique/node_modules/immer/package.json</p> <p> Dependency Hierarchy: - react-5.3.19.tgz (Root Library) - react-dev-utils-9.1.0.tgz - :x: **immer-1.10.0.tgz** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/kapseliboi/energy-futures-vis-avenir-energetique/commit/907b3c15edb7159764857453edc4f32b2432cdd4">907b3c15edb7159764857453edc4f32b2432cdd4</a></p> <p>Found in base branch: <b>master</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> This affects all versions of package immer. <p>Publish Date: 2021-01-19 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-28477>CVE-2020-28477</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. 
</p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://github.com/immerjs/immer/releases/tag/v8.0.1">https://github.com/immerjs/immer/releases/tag/v8.0.1</a></p> <p>Release Date: 2021-01-19</p> <p>Fix Resolution: v8.0.1</p> </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
True
CVE-2020-28477 (High) detected in immer-1.10.0.tgz - ## CVE-2020-28477 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>immer-1.10.0.tgz</b></p></summary> <p>Create your next immutable state by mutating the current one</p> <p>Library home page: <a href="https://registry.npmjs.org/immer/-/immer-1.10.0.tgz">https://registry.npmjs.org/immer/-/immer-1.10.0.tgz</a></p> <p>Path to dependency file: energy-futures-vis-avenir-energetique/package.json</p> <p>Path to vulnerable library: energy-futures-vis-avenir-energetique/node_modules/immer/package.json</p> <p> Dependency Hierarchy: - react-5.3.19.tgz (Root Library) - react-dev-utils-9.1.0.tgz - :x: **immer-1.10.0.tgz** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/kapseliboi/energy-futures-vis-avenir-energetique/commit/907b3c15edb7159764857453edc4f32b2432cdd4">907b3c15edb7159764857453edc4f32b2432cdd4</a></p> <p>Found in base branch: <b>master</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> This affects all versions of package immer. <p>Publish Date: 2021-01-19 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-28477>CVE-2020-28477</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. 
</p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://github.com/immerjs/immer/releases/tag/v8.0.1">https://github.com/immerjs/immer/releases/tag/v8.0.1</a></p> <p>Release Date: 2021-01-19</p> <p>Fix Resolution: v8.0.1</p> </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
non_defect
cve high detected in immer tgz cve high severity vulnerability vulnerable library immer tgz create your next immutable state by mutating the current one library home page a href path to dependency file energy futures vis avenir energetique package json path to vulnerable library energy futures vis avenir energetique node modules immer package json dependency hierarchy react tgz root library react dev utils tgz x immer tgz vulnerable library found in head commit a href found in base branch master vulnerability details this affects all versions of package immer publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with whitesource
0
49,690
13,187,251,736
IssuesEvent
2020-08-13 02:49:39
icecube-trac/tix3
https://api.github.com/repos/icecube-trac/tix3
opened
[clsim] fix hole ice params for all segments (Trac #1877)
Incomplete Migration Migrated from Trac combo simulation defect
<details> <summary><em>Migrated from <a href="https://code.icecube.wisc.edu/ticket/1877">https://code.icecube.wisc.edu/ticket/1877</a>, reported by david.schultz and owned by sebastian.sanchez</em></summary> <p> ```json { "status": "closed", "changetime": "2019-02-13T14:13:24", "description": "Commit r149878/IceCube broke simprod-scripts because it didn't update `python/traysegments/I3CLSimMakeHits.py`.\n\n", "reporter": "david.schultz", "cc": "olivas, claudio.kopper", "resolution": "fixed", "_ts": "1550067204154158", "component": "combo simulation", "summary": "[clsim] fix hole ice params for all segments", "priority": "blocker", "keywords": "", "time": "2016-10-01T15:27:26", "milestone": "", "owner": "sebastian.sanchez", "type": "defect" } ``` </p> </details>
1.0
[clsim] fix hole ice params for all segments (Trac #1877) - <details> <summary><em>Migrated from <a href="https://code.icecube.wisc.edu/ticket/1877">https://code.icecube.wisc.edu/ticket/1877</a>, reported by david.schultz and owned by sebastian.sanchez</em></summary> <p> ```json { "status": "closed", "changetime": "2019-02-13T14:13:24", "description": "Commit r149878/IceCube broke simprod-scripts because it didn't update `python/traysegments/I3CLSimMakeHits.py`.\n\n", "reporter": "david.schultz", "cc": "olivas, claudio.kopper", "resolution": "fixed", "_ts": "1550067204154158", "component": "combo simulation", "summary": "[clsim] fix hole ice params for all segments", "priority": "blocker", "keywords": "", "time": "2016-10-01T15:27:26", "milestone": "", "owner": "sebastian.sanchez", "type": "defect" } ``` </p> </details>
defect
fix hole ice params for all segments trac migrated from json status closed changetime description commit icecube broke simprod scripts because it didn t update python traysegments py n n reporter david schultz cc olivas claudio kopper resolution fixed ts component combo simulation summary fix hole ice params for all segments priority blocker keywords time milestone owner sebastian sanchez type defect
1
26,431
4,707,582,762
IssuesEvent
2016-10-13 20:34:39
literat/srazvs
https://api.github.com/repos/literat/srazvs
closed
Fatal Error: Call to a member function getPresenterName() on a non-object
1-defect 2-public 3-presenter 3-registration testing
Fatal Error: Call to a member function getPresenterName() on a non-object in /var/www/virtual/vodni/web/www/srazvs/app/bootstrap.php:137 source: http://vodni.skauting.cz/srazvs/registration/www.qrka.cz
1.0
Fatal Error: Call to a member function getPresenterName() on a non-object - Fatal Error: Call to a member function getPresenterName() on a non-object in /var/www/virtual/vodni/web/www/srazvs/app/bootstrap.php:137 source: http://vodni.skauting.cz/srazvs/registration/www.qrka.cz
defect
fatal error call to a member function getpresentername on a non object fatal error call to a member function getpresentername on a non object in var www virtual vodni web www srazvs app bootstrap php source
1
38,714
8,952,575,982
IssuesEvent
2019-01-25 16:53:50
svigerske/ipopt-donotuse
https://api.github.com/repos/svigerske/ipopt-donotuse
closed
Segmentation Fault using IPOPT
Ipopt defect
Issue created by migration from Trac. Original creator: ascrelot Original creation time: 2011-03-01 03:37:05 Assignee: ipopt-team Version: 3.9 Hi, I'm using IPOPT to solve the subproblem of a Trust Region Algorithm. So my objective function is the model. I have to evaluate it by calling a method implemented in another class than the one where the eval_f, eval_g (and so on) are implemented. I want to give some objects as arguments of the constructor which builds the TNLP to be able to call my method. I've read in the documentation that we have to use SmartPtr to avoid problem of erased reference. Is that only for object defined in IPOPt or for all objects? Cause I can't define such SmartPtr for my own objects (cause there are undefined methods) and using "raw" pointers causes Segmentation fault when I'm leaving the procedure where the call status = app->OptimizeTNLP(mynlp) is done. Do you have some code example where objects are given as arguments to the constructor of the TNLP? Thanks Anne-Sophie Crelot
1.0
Segmentation Fault using IPOPT - Issue created by migration from Trac. Original creator: ascrelot Original creation time: 2011-03-01 03:37:05 Assignee: ipopt-team Version: 3.9 Hi, I'm using IPOPT to solve the subproblem of a Trust Region Algorithm. So my objective function is the model. I have to evaluate it by calling a method implemented in another class than the one where the eval_f, eval_g (and so on) are implemented. I want to give some objects as arguments of the constructor which builds the TNLP to be able to call my method. I've read in the documentation that we have to use SmartPtr to avoid problem of erased reference. Is that only for object defined in IPOPt or for all objects? Cause I can't define such SmartPtr for my own objects (cause there are undefined methods) and using "raw" pointers causes Segmentation fault when I'm leaving the procedure where the call status = app->OptimizeTNLP(mynlp) is done. Do you have some code example where objects are given as arguments to the constructor of the TNLP? Thanks Anne-Sophie Crelot
defect
segmentation fault using ipopt issue created by migration from trac original creator ascrelot original creation time assignee ipopt team version hi i m using ipopt to solve the subproblem of a trust region algorithm so my objective function is the model i have to evaluate it by calling a method implemented in another class than the one where the eval f eval g and so on are implemented i want to give some objects as arguments of the constructor which builds the tnlp to be able to call my method i ve read in the documentation that we have to use smartptr to avoid problem of erased reference is that only for object defined in ipopt or for all objects cause i can t define such smartptr for my own objects cause there are undefined methods and using raw pointers causes segmentation fault when i m leaving the procedure where the call status app optimizetnlp mynlp is done do you have some code example where objects are given as arguments to the constructor of the tnlp thanks anne sophie crelot
1
71,980
23,879,709,763
IssuesEvent
2022-09-07 23:21:29
department-of-veterans-affairs/vets-design-system-documentation
https://api.github.com/repos/department-of-veterans-affairs/vets-design-system-documentation
closed
[FUNCTIONALITY]: Sortable Table - SHOULD use our base Table component
pattern-new 508-defect-4 accessibility 508-issue-semantic-markup
@1Copenut commented on [Mon May 18 2020](https://github.com/department-of-veterans-affairs/va.gov-team/issues/9197) **Feedback framework** - **❗️ Must** for if the feedback must be applied - **⚠️ Should** if the feedback is best practice - **✔️ Consider** for suggestions/enhancements ## Description <!-- This is a detailed description of the issue. It should include a restatement of the title, and provide more background information. --> Our [sortable table component](https://department-of-veterans-affairs.github.io/veteran-facing-services-tools/visual-design/components/sortabletable/) ~~would benefit greatly from a basic `<Table />` component~~ should be a prop-driven extension on top of our [Table component](https://github.com/department-of-veterans-affairs/component-library/blob/master/src/components/Table/Table.jsx). This way we could build out more complex tables as extensions than completely new components. Thinking: * Tables with multiple heading rows * Tables with rows or columns that span multiple * Tables with sortable columns ## Related Issues * #9193 * #9194 ## Point of Contact <!-- If this issue is being opened by a VFS team member, please add a point of contact. Usually this is the same person who enters the issue ticket. --> **VFS Point of Contact:** _Trevor_ ## Acceptance Criteria - [ ] HTML validates with W3C validator or other HTML5 checker - [ ] No axe errors - [ ] [Table component](https://github.com/department-of-veterans-affairs/component-library/blob/master/src/components/Table/Table.jsx) is used to create extended or enhanced table components - [ ] Visual styling does not change, except where noted by issue tickets <!-- As a keyboard user, I want to open the Level of Coverage widget by pressing Spacebar or pressing Enter. These keypress actions should not interfere with the mouse click event also opening the widget. 
--> ## Environment * https://department-of-veterans-affairs.github.io/veteran-facing-services-tools/visual-design/components/sortabletable/
1.0
[FUNCTIONALITY]: Sortable Table - SHOULD use our base Table component - @1Copenut commented on [Mon May 18 2020](https://github.com/department-of-veterans-affairs/va.gov-team/issues/9197) **Feedback framework** - **❗️ Must** for if the feedback must be applied - **⚠️ Should** if the feedback is best practice - **✔️ Consider** for suggestions/enhancements ## Description <!-- This is a detailed description of the issue. It should include a restatement of the title, and provide more background information. --> Our [sortable table component](https://department-of-veterans-affairs.github.io/veteran-facing-services-tools/visual-design/components/sortabletable/) ~~would benefit greatly from a basic `<Table />` component~~ should be a prop-driven extension on top of our [Table component](https://github.com/department-of-veterans-affairs/component-library/blob/master/src/components/Table/Table.jsx). This way we could build out more complex tables as extensions than completely new components. Thinking: * Tables with multiple heading rows * Tables with rows or columns that span multiple * Tables with sortable columns ## Related Issues * #9193 * #9194 ## Point of Contact <!-- If this issue is being opened by a VFS team member, please add a point of contact. Usually this is the same person who enters the issue ticket. --> **VFS Point of Contact:** _Trevor_ ## Acceptance Criteria - [ ] HTML validates with W3C validator or other HTML5 checker - [ ] No axe errors - [ ] [Table component](https://github.com/department-of-veterans-affairs/component-library/blob/master/src/components/Table/Table.jsx) is used to create extended or enhanced table components - [ ] Visual styling does not change, except where noted by issue tickets <!-- As a keyboard user, I want to open the Level of Coverage widget by pressing Spacebar or pressing Enter. These keypress actions should not interfere with the mouse click event also opening the widget. 
--> ## Environment * https://department-of-veterans-affairs.github.io/veteran-facing-services-tools/visual-design/components/sortabletable/
defect
sortable table should use our base table component commented on feedback framework ❗️ must for if the feedback must be applied ⚠️ should if the feedback is best practice ✔️ consider for suggestions enhancements description our would benefit greatly from a basic component should be a prop driven extension on top of our this way we could build out more complex tables as extensions than completely new components thinking tables with multiple heading rows tables with rows or columns that span multiple tables with sortable columns related issues point of contact if this issue is being opened by a vfs team member please add a point of contact usually this is the same person who enters the issue ticket vfs point of contact trevor acceptance criteria html validates with validator or other checker no axe errors is used to create extended or enhanced table components visual styling does not change except where noted by issue tickets environment
1
51,147
13,191,541,923
IssuesEvent
2020-08-13 12:19:27
OpenMS/OpenMS
https://api.github.com/repos/OpenMS/OpenMS
closed
TOPPView: Layers display not updated
TOPPView critical defect
In TOPPView, when a file is opened as a "new layer", it should appear in the "Layers" window. Due to a recent regression, this does not happen (immediately) any more. The layer is drawn in the main window, but the "Layers" window is not updated - that is, until I open another file, or minimise and un-minimise the TOPPView window. (There may be other triggers as well.) Without knowing these workarounds, it is very confusing that the "Layer" entry is missing. This also prevents users from interacting with the layer (disabling it, setting it to active, removing it...). This occurs with the current development version on Ubuntu 18.04.4.
1.0
TOPPView: Layers display not updated - In TOPPView, when a file is opened as a "new layer", it should appear in the "Layers" window. Due to a recent regression, this does not happen (immediately) any more. The layer is drawn in the main window, but the "Layers" window is not updated - that is, until I open another file, or minimise and un-minimise the TOPPView window. (There may be other triggers as well.) Without knowing these workarounds, it is very confusing that the "Layer" entry is missing. This also prevents users from interacting with the layer (disabling it, setting it to active, removing it...). This occurs with the current development version on Ubuntu 18.04.4.
defect
toppview layers display not updated in toppview when a file is opened as a new layer it should appear in the layers window due to a recent regression this does not happen immediately any more the layer is drawn in the main window but the layers window is not updated that is until i open another file or minimise and un minimise the toppview window there may be other triggers as well without knowing these workarounds it is very confusing that the layer entry is missing this also prevents users from interacting with the layer disabling it setting it to active removing it this occurs with the current development version on ubuntu
1
266,486
20,154,514,497
IssuesEvent
2022-02-09 15:20:11
Legal-and-General/canopy
https://api.github.com/repos/Legal-and-General/canopy
closed
Deprecation of SCSS only documentation
documentation question
We are tentatively deciding to deprecate documentation for those applications only using the SCSS/CSS. This refers to the manual documentation that can be found in the storybook notes. If any projects are using SCSS/CSS only, things will still work as expected, but in the future you will have to inspect the code to determine what classes to provide. - The manual documentation was providing quite a time overhead in terms of contributing new components. - The manual documentation was difficult to keep up to date - The manual documentation could draw a team into using only the scss/css where it would have been preferable to use the Angular components. Initially the intention of this documentation was to help with teams not using Angular, it is however quite cumbersome and received little interest for the amount of effort put into it. If you are a team who use this approach, or would be interested in using Canopy not with Angular we do have some advanced experiments with Angular Elements. Please respond on this issue. In time we will start to remove the `scss only` documentation, and will keep this issue up to date.
1.0
Deprecation of SCSS only documentation - We are tentatively deciding to deprecate documentation for those applications only using the SCSS/CSS. This refers to the manual documentation that can be found in the storybook notes. If any projects are using SCSS/CSS only, things will still work as expected, but in the future you will have to inspect the code to determine what classes to provide. - The manual documentation was providing quite a time overhead in terms of contributing new components. - The manual documentation was difficult to keep up to date - The manual documentation could draw a team into using only the scss/css where it would have been preferable to use the Angular components. Initially the intention of this documentation was to help with teams not using Angular, it is however quite cumbersome and received little interest for the amount of effort put into it. If you are a team who use this approach, or would be interested in using Canopy not with Angular we do have some advanced experiments with Angular Elements. Please respond on this issue. In time we will start to remove the `scss only` documentation, and will keep this issue up to date.
non_defect
deprecation of scss only documentation we are tentatively deciding to deprecate documentation for those applications only using the scss css this refers to the manual documentation that can be found in the storybook notes if any projects are using scss css only things will still work as expected but in the future you will have to inspect the code to determine what classes to provide the manual documentation was providing quite a time overhead in terms of contributing new components the manual documentation was difficult to keep up to date the manual documentation could draw a team into using only the scss css where it would have been preferable to use the angular components initially the intention of this documentation was to help with teams not using angular it is however quite cumbersome and received little interest for the amount of effort put into it if you are a team who use this approach or would be interested in using canopy not with angular we do have some advanced experiments with angular elements please respond on this issue in time we will start to remove the scss only documentation and will keep this issue up to date
0
191,156
6,826,366,247
IssuesEvent
2017-11-08 13:55:41
openshift/origin
https://api.github.com/repos/openshift/origin
opened
Metrics not working for oc cluster up --latest --metrics
component/cluster-up component/metrics kind/bug priority/P1
I ran the following command: ``` $ oc cluster up --version=latest --service-catalog --metrics ``` I see the following error in the openshift-ansible-metrics-job pod: > fatal: [127.0.0.1]: FAILED! => {"failed": true, "msg": "The conditional check 'lookupip.stdout not in ansible_all_ipv4_addresses' failed. The error was: error while evaluating conditional (lookupip.stdout not in ansible_all_ipv4_addresses): Unable to look up a name or access an attribute in template string ({% if lookupip.stdout not in ansible_all_ipv4_addresses %} True {% else %} False {% endif %}).\nMake sure your variable name does not contain invalid characters like '-': argument of type 'StrictUndefined' is not iterable\n\nThe error appears to have been in '/usr/share/ansible/openshift-ansible/playbooks/common/openshift-cluster/validate_hostnames.yml': line 11, column 5, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n failed_when: false\n - name: Warn user about bad openshift_hostname values\n ^ here\n"} Full log: [openshift-ansible-metrics-job-c8v7d.log](https://github.com/openshift/origin/files/1454095/openshift-ansible-metrics-job-c8v7d.log) All of my metrics pods are failed: ``` $ oc get pods -n openshift-infra NAME READY STATUS RESTARTS AGE openshift-ansible-metrics-job-c8v7d 0/1 Error 0 46m openshift-ansible-metrics-job-k4qbx 0/1 Error 0 47m openshift-ansible-metrics-job-kqz2z 0/1 Error 0 51m openshift-ansible-metrics-job-nn4bb 0/1 Error 0 49m ``` ##### Version oc v3.7.0-alpha.1+b953213-1499 kubernetes v1.7.6+a08f5eeb62 features: Basic-Auth Server https://127.0.0.1:8443 openshift v3.7.0-rc.0+b953213-158 kubernetes v1.7.6+a08f5eeb62 cc @jwforres @csrwng @rhamilto
1.0
Metrics not working for oc cluster up --latest --metrics - I ran the following command: ``` $ oc cluster up --version=latest --service-catalog --metrics ``` I see the following error in the openshift-ansible-metrics-job pod: > fatal: [127.0.0.1]: FAILED! => {"failed": true, "msg": "The conditional check 'lookupip.stdout not in ansible_all_ipv4_addresses' failed. The error was: error while evaluating conditional (lookupip.stdout not in ansible_all_ipv4_addresses): Unable to look up a name or access an attribute in template string ({% if lookupip.stdout not in ansible_all_ipv4_addresses %} True {% else %} False {% endif %}).\nMake sure your variable name does not contain invalid characters like '-': argument of type 'StrictUndefined' is not iterable\n\nThe error appears to have been in '/usr/share/ansible/openshift-ansible/playbooks/common/openshift-cluster/validate_hostnames.yml': line 11, column 5, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n failed_when: false\n - name: Warn user about bad openshift_hostname values\n ^ here\n"} Full log: [openshift-ansible-metrics-job-c8v7d.log](https://github.com/openshift/origin/files/1454095/openshift-ansible-metrics-job-c8v7d.log) All of my metrics pods are failed: ``` $ oc get pods -n openshift-infra NAME READY STATUS RESTARTS AGE openshift-ansible-metrics-job-c8v7d 0/1 Error 0 46m openshift-ansible-metrics-job-k4qbx 0/1 Error 0 47m openshift-ansible-metrics-job-kqz2z 0/1 Error 0 51m openshift-ansible-metrics-job-nn4bb 0/1 Error 0 49m ``` ##### Version oc v3.7.0-alpha.1+b953213-1499 kubernetes v1.7.6+a08f5eeb62 features: Basic-Auth Server https://127.0.0.1:8443 openshift v3.7.0-rc.0+b953213-158 kubernetes v1.7.6+a08f5eeb62 cc @jwforres @csrwng @rhamilto
non_defect
metrics not working for oc cluster up latest metrics i ran the following command oc cluster up version latest service catalog metrics i see the following error in the openshift ansible metrics job pod fatal failed failed true msg the conditional check lookupip stdout not in ansible all addresses failed the error was error while evaluating conditional lookupip stdout not in ansible all addresses unable to look up a name or access an attribute in template string if lookupip stdout not in ansible all addresses true else false endif nmake sure your variable name does not contain invalid characters like argument of type strictundefined is not iterable n nthe error appears to have been in usr share ansible openshift ansible playbooks common openshift cluster validate hostnames yml line column but may nbe elsewhere in the file depending on the exact syntax problem n nthe offending line appears to be n n failed when false n name warn user about bad openshift hostname values n here n full log all of my metrics pods are failed oc get pods n openshift infra name ready status restarts age openshift ansible metrics job error openshift ansible metrics job error openshift ansible metrics job error openshift ansible metrics job error version oc alpha kubernetes features basic auth server openshift rc kubernetes cc jwforres csrwng rhamilto
0
76,077
26,226,742,287
IssuesEvent
2023-01-04 19:26:43
vector-im/element-android
https://api.github.com/repos/vector-im/element-android
opened
Video playback is being cropped
T-Defect
### Steps to reproduce 1. I send a screen recording video from my android phone to a matrix group chat. 2. The video is successfully sent and a thumbnail is generated 3. I click the video thumbnail in the timeline and the top and bottom of the video are now unviewable. ### Outcome #### What did you expect? I expected to play the video from element exactly as It was recorded on my phone without any cropping. https://user-images.githubusercontent.com/16907963/210633152-cb645478-e5d3-4a2e-bb3c-f606e55d1d07.mp4 #### What happened instead? The video is cropped and I cannot see the entire content of the recording (top and bottom). Although the thumbnail is correct in the video, the actual playback is not. https://user-images.githubusercontent.com/16907963/210633338-09352726-89ca-4288-a61c-8624be4a7e72.mp4 ### Your phone model Pixel 6 ### Operating system version 13 ### Application version and app store _No response_ ### Homeserver matrix.org ### Will you send logs? No ### Are you willing to provide a PR? No
1.0
Video playback is being cropped - ### Steps to reproduce 1. I send a screen recording video from my android phone to a matrix group chat. 2. The video is successfully sent and a thumbnail is generated 3. I click the video thumbnail in the timeline and the top and bottom of the video are now unviewable. ### Outcome #### What did you expect? I expected to play the video from element exactly as It was recorded on my phone without any cropping. https://user-images.githubusercontent.com/16907963/210633152-cb645478-e5d3-4a2e-bb3c-f606e55d1d07.mp4 #### What happened instead? The video is cropped and I cannot see the entire content of the recording (top and bottom). Although the thumbnail is correct in the video, the actual playback is not. https://user-images.githubusercontent.com/16907963/210633338-09352726-89ca-4288-a61c-8624be4a7e72.mp4 ### Your phone model Pixel 6 ### Operating system version 13 ### Application version and app store _No response_ ### Homeserver matrix.org ### Will you send logs? No ### Are you willing to provide a PR? No
defect
video playback is being cropped steps to reproduce i send a screen recording video from my android phone to a matrix group chat the video is successfully sent and a thumbnail is generated i click the video thumbnail in the timeline and the top and bottom of the video are now unviewable outcome what did you expect i expected to play the video from element exactly as it was recorded on my phone without any cropping what happened instead the video is cropped and i cannot see the entire content of the recording top and bottom although the thumbnail is correct in the video the actual playback is not your phone model pixel operating system version application version and app store no response homeserver matrix org will you send logs no are you willing to provide a pr no
1
77,139
3,506,264,806
IssuesEvent
2016-01-08 05:06:12
OregonCore/OregonCore
https://api.github.com/repos/OregonCore/OregonCore
opened
[Teron Gorefiend] Shadow of Death (BB #200)
migrated Priority: Medium Type: Bug
This issue was migrated from bitbucket. **Original Reporter:** **Original Date:** 15.06.2010 13:48:38 GMT+0000 **Original Priority:** major **Original Type:** bug **Original State:** new **Direct Link:** https://bitbucket.org/oregon/oregoncore/issues/200 <hr> He don't cast shadow of death.
1.0
[Teron Gorefiend] Shadow of Death (BB #200) - This issue was migrated from bitbucket. **Original Reporter:** **Original Date:** 15.06.2010 13:48:38 GMT+0000 **Original Priority:** major **Original Type:** bug **Original State:** new **Direct Link:** https://bitbucket.org/oregon/oregoncore/issues/200 <hr> He don't cast shadow of death.
non_defect
shadow of death bb this issue was migrated from bitbucket original reporter original date gmt original priority major original type bug original state new direct link he don t cast shadow of death
0
721,580
24,831,859,064
IssuesEvent
2022-10-26 04:45:49
MelchiorDahrk/MMM2022
https://api.github.com/repos/MelchiorDahrk/MMM2022
closed
AA22_i20: Abandoned House
priority-3
Typical abandoned house. Plenty of cobwebs, ruins, maybe even an ancestral ghost.
1.0
AA22_i20: Abandoned House - Typical abandoned house. Plenty of cobwebs, ruins, maybe even an ancestral ghost.
non_defect
abandoned house typical abandoned house plenty of cobwebs ruins maybe even an ancestral ghost
0
24,076
3,881,245,168
IssuesEvent
2016-04-13 02:57:37
department-of-veterans-affairs/veterans-employment-center
https://api.github.com/repos/department-of-veterans-affairs/veterans-employment-center
opened
text wrap looks off on C&E page
design
@gnakm is the text on this page supposed to wrap like this? it's only going partially across the page. ![screen shot 2016-04-12 at 10 56 26 pm](https://cloud.githubusercontent.com/assets/13770771/14481537/f10fc504-0101-11e6-9b61-6afeafafdf3d.png)
1.0
text wrap looks off on C&E page - @gnakm is the text on this page supposed to wrap like this? it's only going partially across the page. ![screen shot 2016-04-12 at 10 56 26 pm](https://cloud.githubusercontent.com/assets/13770771/14481537/f10fc504-0101-11e6-9b61-6afeafafdf3d.png)
non_defect
text wrap looks off on c e page gnakm is the text on this page supposed to wrap like this it s only going partially across the page
0
53,872
13,262,409,212
IssuesEvent
2020-08-20 21:44:02
icecube-trac/tix4
https://api.github.com/repos/icecube-trac/tix4
closed
[phys-services] I3GeometryDecomposer doesn't know about all OMTypes (Trac #2222)
Migrated from Trac analysis defect
I3GeometryDecomposer issues errors when given a relativly modern GCD file with OMTypes Scintillator and IceAct. I am guessing this is not actually an error. Either the logging should be downgraded to debug or IceAct and Scintillator should be added to the switch statement. https://code.icecube.wisc.edu/projects/icecube/browser/IceCube/projects/phys-services/trunk/private/phys-services/I3GeometryDecomposer.cxx?rev=149765#L215 <details> <summary><em>Migrated from <a href="https://code.icecube.wisc.edu/projects/icecube/ticket/2222">https://code.icecube.wisc.edu/projects/icecube/ticket/2222</a>, reported by kjmeagherand owned by mjl5147</em></summary> <p> ```json { "status": "closed", "changetime": "2019-05-07T14:57:57", "_ts": "1557241077906143", "description": "I3GeometryDecomposer issues errors when given a relativly modern GCD file with OMTypes Scintillator and IceAct. I am guessing this is not actually an error. Either the logging should be downgraded to debug or IceAct and Scintillator should be added to the switch statement.\n\n\nhttps://code.icecube.wisc.edu/projects/icecube/browser/IceCube/projects/phys-services/trunk/private/phys-services/I3GeometryDecomposer.cxx?rev=149765#L215", "reporter": "kjmeagher", "cc": "", "resolution": "fixed", "time": "2018-12-06T19:49:44", "component": "analysis", "summary": "[phys-services] I3GeometryDecomposer doesn't know about all OMTypes", "priority": "normal", "keywords": "", "milestone": "Vernal Equinox 2019", "owner": "mjl5147", "type": "defect" } ``` </p> </details>
1.0
[phys-services] I3GeometryDecomposer doesn't know about all OMTypes (Trac #2222) - I3GeometryDecomposer issues errors when given a relativly modern GCD file with OMTypes Scintillator and IceAct. I am guessing this is not actually an error. Either the logging should be downgraded to debug or IceAct and Scintillator should be added to the switch statement. https://code.icecube.wisc.edu/projects/icecube/browser/IceCube/projects/phys-services/trunk/private/phys-services/I3GeometryDecomposer.cxx?rev=149765#L215 <details> <summary><em>Migrated from <a href="https://code.icecube.wisc.edu/projects/icecube/ticket/2222">https://code.icecube.wisc.edu/projects/icecube/ticket/2222</a>, reported by kjmeagherand owned by mjl5147</em></summary> <p> ```json { "status": "closed", "changetime": "2019-05-07T14:57:57", "_ts": "1557241077906143", "description": "I3GeometryDecomposer issues errors when given a relativly modern GCD file with OMTypes Scintillator and IceAct. I am guessing this is not actually an error. Either the logging should be downgraded to debug or IceAct and Scintillator should be added to the switch statement.\n\n\nhttps://code.icecube.wisc.edu/projects/icecube/browser/IceCube/projects/phys-services/trunk/private/phys-services/I3GeometryDecomposer.cxx?rev=149765#L215", "reporter": "kjmeagher", "cc": "", "resolution": "fixed", "time": "2018-12-06T19:49:44", "component": "analysis", "summary": "[phys-services] I3GeometryDecomposer doesn't know about all OMTypes", "priority": "normal", "keywords": "", "milestone": "Vernal Equinox 2019", "owner": "mjl5147", "type": "defect" } ``` </p> </details>
defect
doesn t know about all omtypes trac issues errors when given a relativly modern gcd file with omtypes scintillator and iceact i am guessing this is not actually an error either the logging should be downgraded to debug or iceact and scintillator should be added to the switch statement migrated from json status closed changetime ts description issues errors when given a relativly modern gcd file with omtypes scintillator and iceact i am guessing this is not actually an error either the logging should be downgraded to debug or iceact and scintillator should be added to the switch statement n n n reporter kjmeagher cc resolution fixed time component analysis summary doesn t know about all omtypes priority normal keywords milestone vernal equinox owner type defect
1
203,904
15,394,591,044
IssuesEvent
2021-03-03 18:06:12
openservicemesh/osm
https://api.github.com/repos/openservicemesh/osm
opened
test: pkg/service ClusterName.String() method
tests
In `/home/de/src/osm/pkg/service/types.go` stringer does not have good unit test coverage. It would be great to write a small test for this function. ![image](https://user-images.githubusercontent.com/49918230/109851029-075c3c80-7c08-11eb-9752-8950c6621846.png)
1.0
test: pkg/service ClusterName.String() method - In `/home/de/src/osm/pkg/service/types.go` stringer does not have good unit test coverage. It would be great to write a small test for this function. ![image](https://user-images.githubusercontent.com/49918230/109851029-075c3c80-7c08-11eb-9752-8950c6621846.png)
non_defect
test pkg service clustername string method in home de src osm pkg service types go stringer does not have good unit test coverage it would be great to write a small test for this function
0
149,188
11,885,238,877
IssuesEvent
2020-03-27 19:11:59
bcgov/range-web
https://api.github.com/repos/bcgov/range-web
closed
RUP back button navigating to wrong page
bug ready to test
Refresh doesn’t fix it Browser back arrow eventually gets me back to the list
1.0
RUP back button navigating to wrong page - Refresh doesn’t fix it Browser back arrow eventually gets me back to the list
non_defect
rup back button navigating to wrong page refresh doesn’t fix it browser back arrow eventually gets me back to the list
0
774,484
27,199,262,592
IssuesEvent
2023-02-20 08:31:05
ballerina-platform/ballerina-dev-website
https://api.github.com/repos/ballerina-platform/ballerina-dev-website
closed
Need to Update the Observability Guide on Prometheus Tags
Priority/Highest Type/Improvement Points/0.5 Area/LearnPages Category/Content
**Description:** Need to update the Observability guide on the list of tags, which need to be published when accessing Prometheus. This should be added under the "Creating your own dashboard" section. **Suggested Labels:** <!-- Optional comma separated list of suggested labels. Non committers can’t assign labels to issues, so this will help issue creators who are not a committer to suggest possible labels--> **Suggested Assignees:** <!--Optional comma separated list of suggested team members who should attend the issue. Non committers can’t assign issues to assignees, so this will help issue creators who are not a committer to suggest possible assignees--> **Affected Product Version:** **OS, Browser, other environment details and versions:** **Steps to reproduce:** **Related Issues:** <!-- Any related issues such as sub tasks, issues reported in other repositories (e.g component repositories), similar problems, etc. -->
1.0
Need to Update the Observability Guide on Prometheus Tags - **Description:** Need to update the Observability guide on the list of tags, which need to be published when accessing Prometheus. This should be added under the "Creating your own dashboard" section. **Suggested Labels:** <!-- Optional comma separated list of suggested labels. Non committers can’t assign labels to issues, so this will help issue creators who are not a committer to suggest possible labels--> **Suggested Assignees:** <!--Optional comma separated list of suggested team members who should attend the issue. Non committers can’t assign issues to assignees, so this will help issue creators who are not a committer to suggest possible assignees--> **Affected Product Version:** **OS, Browser, other environment details and versions:** **Steps to reproduce:** **Related Issues:** <!-- Any related issues such as sub tasks, issues reported in other repositories (e.g component repositories), similar problems, etc. -->
non_defect
need to update the observability guide on prometheus tags description need to update the observability guide on the list of tags which need to be published when accessing prometheus this should be added under the creating your own dashboard section suggested labels suggested assignees affected product version os browser other environment details and versions steps to reproduce related issues
0
68,744
21,876,153,639
IssuesEvent
2022-05-19 10:19:03
vector-im/element-android
https://api.github.com/repos/vector-im/element-android
opened
F-Droid can't build - org.maplibre.gl:android-sdk pulls in non-FOSS Google Services
T-Defect
### Steps to reproduce Try to build for fdroid... gradle dependencies for `fdroidRelease*` says: ``` +--- org.maplibre.gl:android-sdk:9.5.2 | +--- org.maplibre.gl:android-sdk-geojson:5.9.0 | | \--- com.google.code.gson:gson:2.8.6 | +--- com.mapbox.mapboxsdk:mapbox-android-gestures:0.7.0 | | +--- androidx.core:core:1.0.0 -> 1.7.0 (*) | | \--- androidx.annotation:annotation:1.0.0 -> 1.3.0 | +--- org.maplibre.gl:android-sdk-turf:5.9.0 | | \--- org.maplibre.gl:android-sdk-geojson:5.9.0 (*) | +--- androidx.annotation:annotation:1.0.0 -> 1.3.0 | +--- androidx.fragment:fragment:1.0.0 -> 1.4.1 (*) | +--- com.squareup.okhttp3:okhttp:3.12.3 -> 4.9.3 (*) | \--- com.google.android.gms:play-services-location:16.0.0 ``` Since: https://github.com/vector-im/element-android/commit/824e713c51c5aa5b89a85ed5e4c105f5e76a4ba8 ### Outcome Can't build from FOSS deps ### Your phone model _No response_ ### Operating system version _No response_ ### Application version and app store _No response_ ### Homeserver _No response_ ### Will you send logs? No
1.0
F-Droid can't build - org.maplibre.gl:android-sdk pulls in non-FOSS Google Services - ### Steps to reproduce Try to build for fdroid... gradle dependencies for `fdroidRelease*` says: ``` +--- org.maplibre.gl:android-sdk:9.5.2 | +--- org.maplibre.gl:android-sdk-geojson:5.9.0 | | \--- com.google.code.gson:gson:2.8.6 | +--- com.mapbox.mapboxsdk:mapbox-android-gestures:0.7.0 | | +--- androidx.core:core:1.0.0 -> 1.7.0 (*) | | \--- androidx.annotation:annotation:1.0.0 -> 1.3.0 | +--- org.maplibre.gl:android-sdk-turf:5.9.0 | | \--- org.maplibre.gl:android-sdk-geojson:5.9.0 (*) | +--- androidx.annotation:annotation:1.0.0 -> 1.3.0 | +--- androidx.fragment:fragment:1.0.0 -> 1.4.1 (*) | +--- com.squareup.okhttp3:okhttp:3.12.3 -> 4.9.3 (*) | \--- com.google.android.gms:play-services-location:16.0.0 ``` Since: https://github.com/vector-im/element-android/commit/824e713c51c5aa5b89a85ed5e4c105f5e76a4ba8 ### Outcome Can't build from FOSS deps ### Your phone model _No response_ ### Operating system version _No response_ ### Application version and app store _No response_ ### Homeserver _No response_ ### Will you send logs? No
defect
f droid can t build org maplibre gl android sdk pulls in non foss google services steps to reproduce try to build for fdroid gradle dependencies for fdroidrelease says org maplibre gl android sdk org maplibre gl android sdk geojson com google code gson gson com mapbox mapboxsdk mapbox android gestures androidx core core androidx annotation annotation org maplibre gl android sdk turf org maplibre gl android sdk geojson androidx annotation annotation androidx fragment fragment com squareup okhttp com google android gms play services location since outcome can t build from foss deps your phone model no response operating system version no response application version and app store no response homeserver no response will you send logs no
1
11,516
7,583,149,638
IssuesEvent
2018-04-25 07:54:15
maowerner/sLapH-contractions
https://api.github.com/repos/maowerner/sLapH-contractions
closed
Phase factor in momentum uses exp too often
performance
**Branch**: phase-factor - [x] Compare performance to old version - [x] Do some correctness test with a larger lattice. --- The momentum phase factor in the VdaggerV calls the `exp` function for each site on the lattice. Similar to the FFT one might be able to rewrite `exp(-i p x) = exp(-i p_x)^x` and then only call the `exp` function once. To go through the lattice, one just to multiply the phase factor with the cached value. The `exp` function seems to use around 50 cycles ([source](https://streamhpc.com/blog/2012-07-16/how-expensive-is-an-operation-on-a-cpu/)).
True
Phase factor in momentum uses exp too often - **Branch**: phase-factor - [x] Compare performance to old version - [x] Do some correctness test with a larger lattice. --- The momentum phase factor in the VdaggerV calls the `exp` function for each site on the lattice. Similar to the FFT one might be able to rewrite `exp(-i p x) = exp(-i p_x)^x` and then only call the `exp` function once. To go through the lattice, one just to multiply the phase factor with the cached value. The `exp` function seems to use around 50 cycles ([source](https://streamhpc.com/blog/2012-07-16/how-expensive-is-an-operation-on-a-cpu/)).
non_defect
phase factor in momentum uses exp too often branch phase factor compare performance to old version do some correctness test with a larger lattice the momentum phase factor in the vdaggerv calls the exp function for each site on the lattice similar to the fft one might be able to rewrite exp i p x exp i p x x and then only call the exp function once to go through the lattice one just to multiply the phase factor with the cached value the exp function seems to use around cycles
0
19,617
3,228,437,930
IssuesEvent
2015-10-12 02:06:09
essandess/etv-comskip
https://api.github.com/repos/essandess/etv-comskip
closed
installer sets comskip directory to be mode 700 for user 501
auto-migrated Priority-Medium Type-Defect
``` i don't have an eyetv (just want to run comskip directly). I suppose that might have confused the installer for some reason, but the comskip directory is improperly permitted. thanks for going to the trouble to put this all together. version 2.0.2-10.6 sh-3.2# pwd /Library/Application Support/ETVComskip sh-3.2# ls -al total 112 drwxr-xr-x@ 14 danno staff 476 Apr 3 11:29 . drwxrwxr-x 27 root admin 918 Apr 3 11:28 .. d-wx-wx-wt@ 2 501 staff 68 Jun 1 2010 .Trashes -rw-r--r--@ 1 501 staff 975 Jun 1 2010 AUTHORS -rw-r--r--@ 1 501 staff 1310 Jun 1 2010 CHANGELOG drwxr-xr-x@ 3 501 staff 102 Jun 1 2010 ComSkipper.app drwxr-xr-x@ 3 501 staff 102 Jun 1 2010 Install ETVComskip.app -rw-r--r--@ 1 501 staff 17987 Jun 1 2010 LICENSE -rw-r--r--@ 1 501 staff 18300 Jun 1 2010 LICENSE.rtf drwxr-xr-x@ 3 501 staff 102 Jun 1 2010 MarkCommercials.app -rw-r--r--@ 1 501 staff 4315 Jun 1 2010 README-EyeTV3 drwxr-xr-x@ 3 501 staff 102 Jun 1 2010 UnInstall ETVComskip.app drwxr-xr-x@ 3 501 staff 102 Jun 1 2010 Wine.app drwx------@ 13 501 staff 442 Jun 1 2010 comskip ``` Original issue reported on code.google.com by `danpri...@gmail.com` on 3 Apr 2011 at 3:50
1.0
installer sets comskip directory to be mode 700 for user 501 - ``` i don't have an eyetv (just want to run comskip directly). I suppose that might have confused the installer for some reason, but the comskip directory is improperly permitted. thanks for going to the trouble to put this all together. version 2.0.2-10.6 sh-3.2# pwd /Library/Application Support/ETVComskip sh-3.2# ls -al total 112 drwxr-xr-x@ 14 danno staff 476 Apr 3 11:29 . drwxrwxr-x 27 root admin 918 Apr 3 11:28 .. d-wx-wx-wt@ 2 501 staff 68 Jun 1 2010 .Trashes -rw-r--r--@ 1 501 staff 975 Jun 1 2010 AUTHORS -rw-r--r--@ 1 501 staff 1310 Jun 1 2010 CHANGELOG drwxr-xr-x@ 3 501 staff 102 Jun 1 2010 ComSkipper.app drwxr-xr-x@ 3 501 staff 102 Jun 1 2010 Install ETVComskip.app -rw-r--r--@ 1 501 staff 17987 Jun 1 2010 LICENSE -rw-r--r--@ 1 501 staff 18300 Jun 1 2010 LICENSE.rtf drwxr-xr-x@ 3 501 staff 102 Jun 1 2010 MarkCommercials.app -rw-r--r--@ 1 501 staff 4315 Jun 1 2010 README-EyeTV3 drwxr-xr-x@ 3 501 staff 102 Jun 1 2010 UnInstall ETVComskip.app drwxr-xr-x@ 3 501 staff 102 Jun 1 2010 Wine.app drwx------@ 13 501 staff 442 Jun 1 2010 comskip ``` Original issue reported on code.google.com by `danpri...@gmail.com` on 3 Apr 2011 at 3:50
defect
installer sets comskip directory to be mode for user i don t have an eyetv just want to run comskip directly i suppose that might have confused the installer for some reason but the comskip directory is improperly permitted thanks for going to the trouble to put this all together version sh pwd library application support etvcomskip sh ls al total drwxr xr x danno staff apr drwxrwxr x root admin apr d wx wx wt staff jun trashes rw r r staff jun authors rw r r staff jun changelog drwxr xr x staff jun comskipper app drwxr xr x staff jun install etvcomskip app rw r r staff jun license rw r r staff jun license rtf drwxr xr x staff jun markcommercials app rw r r staff jun readme drwxr xr x staff jun uninstall etvcomskip app drwxr xr x staff jun wine app drwx staff jun comskip original issue reported on code google com by danpri gmail com on apr at
1
30,006
13,194,186,617
IssuesEvent
2020-08-13 16:23:35
terraform-providers/terraform-provider-aws
https://api.github.com/repos/terraform-providers/terraform-provider-aws
closed
aws s3 lifecycle bucket wants to set expiration to null when abort multipart upload is present
service/s3
_This issue was originally opened by @mpodber1971 as hashicorp/terraform#25788. It was migrated here as a result of the [provider split](https://www.hashicorp.com/blog/upcoming-provider-changes-in-terraform-0-10/). The original body of the issue is below._ <hr> when i add a lifecycle policy to delete abort multipart uploads and do NOT include an expiration on an AWS s3 bucket, terraform wants to set the expiration to null. It is misleading and a little concerning as to whether I accidentally implemented a lifecycle policy to delete our production data. If the multi part upload isn't present, then terraform acts as expected. I have attached the plan before apply, aws cli showing lifecycle post apply and a second terraform plan showing that it wants to apply these changes. [test-bucket-post-apply-new-plan.log](https://github.com/hashicorp/terraform/files/5052441/test-bucket-post-apply-new-plan.log) [test-bucket-lifecycle-post-apply.log](https://github.com/hashicorp/terraform/files/5052443/test-bucket-lifecycle-post-apply.log) [test-bucket-pre-apply.log](https://github.com/hashicorp/terraform/files/5052444/test-bucket-pre-apply.log)
1.0
aws s3 lifecycle bucket wants to set expiration to null when abort multipart upload is present - _This issue was originally opened by @mpodber1971 as hashicorp/terraform#25788. It was migrated here as a result of the [provider split](https://www.hashicorp.com/blog/upcoming-provider-changes-in-terraform-0-10/). The original body of the issue is below._ <hr> when i add a lifecycle policy to delete abort multipart uploads and do NOT include an expiration on an AWS s3 bucket, terraform wants to set the expiration to null. It is misleading and a little concerning as to whether I accidentally implemented a lifecycle policy to delete our production data. If the multi part upload isn't present, then terraform acts as expected. I have attached the plan before apply, aws cli showing lifecycle post apply and a second terraform plan showing that it wants to apply these changes. [test-bucket-post-apply-new-plan.log](https://github.com/hashicorp/terraform/files/5052441/test-bucket-post-apply-new-plan.log) [test-bucket-lifecycle-post-apply.log](https://github.com/hashicorp/terraform/files/5052443/test-bucket-lifecycle-post-apply.log) [test-bucket-pre-apply.log](https://github.com/hashicorp/terraform/files/5052444/test-bucket-pre-apply.log)
non_defect
aws lifecycle bucket wants to set expiration to null when abort multipart upload is present this issue was originally opened by as hashicorp terraform it was migrated here as a result of the the original body of the issue is below when i add a lifecycle policy to delete abort multipart uploads and do not include an expiration on an aws bucket terraform wants to set the expiration to null it is misleading and a little concerning as to whether i accidentally implemented a lifecycle policy to delete our production data if the multi part upload isn t present then terraform acts as expected i have attached the plan before apply aws cli showing lifecycle post apply and a second terraform plan showing that it wants to apply these changes
0
29,677
5,814,009,261
IssuesEvent
2017-05-05 01:09:05
cakephp/cakephp
https://api.github.com/repos/cakephp/cakephp
closed
Default value for __x() with context not returned if non default language translation is empty
Defect i18n On hold
This is a (multiple allowed): * [x] bug * [ ] enhancement * [ ] feature-discussion (RFC) * CakePHP Version: 3.4.3 I am not sure if this is desired behavior or not. I have pot and po files generated by Cake's i18n shell. I have following translation in non default language `pl` ``` msgctxt "Navigation menu" msgid "Search" msgstr "" ``` And now, behavior is different depending on language beeing default or not. `__x("Navigation menu", "Search")` gives me an empty string in language `pl`. If i switch to any other language, so the default lang would be picked up (default.pot in my case) default value "Search" is returned as I would expect it to be. Defining translation for example like this ``` msgctxt "Navigation menu" msgid "Search" msgstr "test test" ``` will result in returning "test test" for `pl` and "Search" for default language.
1.0
Default value for __x() with context not returned if non default language translation is empty - This is a (multiple allowed): * [x] bug * [ ] enhancement * [ ] feature-discussion (RFC) * CakePHP Version: 3.4.3 I am not sure if this is desired behavior or not. I have pot and po files generated by Cake's i18n shell. I have following translation in non default language `pl` ``` msgctxt "Navigation menu" msgid "Search" msgstr "" ``` And now, behavior is different depending on language beeing default or not. `__x("Navigation menu", "Search")` gives me an empty string in language `pl`. If i switch to any other language, so the default lang would be picked up (default.pot in my case) default value "Search" is returned as I would expect it to be. Defining translation for example like this ``` msgctxt "Navigation menu" msgid "Search" msgstr "test test" ``` will result in returning "test test" for `pl` and "Search" for default language.
defect
default value for x with context not returned if non default language translation is empty this is a multiple allowed bug enhancement feature discussion rfc cakephp version i am not sure if this is desired behavior or not i have pot and po files generated by cake s shell i have following translation in non default language pl msgctxt navigation menu msgid search msgstr and now behavior is different depending on language beeing default or not x navigation menu search gives me an empty string in language pl if i switch to any other language so the default lang would be picked up default pot in my case default value search is returned as i would expect it to be defining translation for example like this msgctxt navigation menu msgid search msgstr test test will result in returning test test for pl and search for default language
1
244,858
20,725,254,533
IssuesEvent
2022-03-14 00:23:51
DnD-Montreal/session-tome
https://api.github.com/repos/DnD-Montreal/session-tome
opened
Accept: League Admin Event Control
acceptance test
## Description Acceptance Test for #363 <!-- Provide a general summary of the test in the title above --> [UAT Environment](https://session-tome.triassi.ca) for executing the acceptance flow <!-- See #439 for automation of this flow --> ## Acceptance Flow <!-- Describe the step by step procedure of the acceptance test --> 1. Log in to the admin account 2. navigate to the admin page via /admin 3. Navigate to the events page via the panel on the left 4. Attempt to create a league event 5. Verify that the event is created 6. Attempt to edit a league event 7. Verify that the changes you have made are saved to the event 8. Attempt to delete a league event. 9. Verify that the event is deleted.
1.0
Accept: League Admin Event Control - ## Description Acceptance Test for #363 <!-- Provide a general summary of the test in the title above --> [UAT Environment](https://session-tome.triassi.ca) for executing the acceptance flow <!-- See #439 for automation of this flow --> ## Acceptance Flow <!-- Describe the step by step procedure of the acceptance test --> 1. Log in to the admin account 2. navigate to the admin page via /admin 3. Navigate to the events page via the panel on the left 4. Attempt to create a league event 5. Verify that the event is created 6. Attempt to edit a league event 7. Verify that the changes you have made are saved to the event 8. Attempt to delete a league event. 9. Verify that the event is deleted.
non_defect
accept league admin event control description acceptance test for for executing the acceptance flow acceptance flow log in to the admin account navigate to the admin page via admin navigate to the events page via the panel on the left attempt to create a league event verify that the event is created attempt to edit a league event verify that the changes you have made are saved to the event attempt to delete a league event verify that the event is deleted
0
6,516
2,610,256,033
IssuesEvent
2015-02-26 19:21:49
chrsmith/dsdsdaadf
https://api.github.com/repos/chrsmith/dsdsdaadf
opened
深圳激光祛除痘坑多少钱
auto-migrated Priority-Medium Type-Defect
``` 深圳激光祛除痘坑多少钱【深圳韩方科颜全国热线400-869-1818 24小时QQ4008691818】深圳韩方科颜专业祛痘连锁机构,机构以韩国秘方——韩方科颜这一国妆准字号治疗型权威,祛痘佳品——韩方科颜专业祛痘连锁机构,采用韩国秘方配合专业“不反弹”健康祛痘技术并结合先进“先进豪华彩光”仪,开创国内专业治疗粉刺、痤疮签约包治先河,成功消除了许多顾客脸上的痘痘。 ``` ----- Original issue reported on code.google.com by `szft...@163.com` on 14 May 2014 at 8:30
1.0
深圳激光祛除痘坑多少钱 - ``` 深圳激光祛除痘坑多少钱【深圳韩方科颜全国热线400-869-1818 24小时QQ4008691818】深圳韩方科颜专业祛痘连锁机构,机构以韩国秘方——韩方科颜这一国妆准字号治疗型权威,祛痘佳品——韩方科颜专业祛痘连锁机构,采用韩国秘方配合专业“不反弹”健康祛痘技术并结合先进“先进豪华彩光”仪,开创国内专业治疗粉刺、痤疮签约包治先河,成功消除了许多顾客脸上的痘痘。 ``` ----- Original issue reported on code.google.com by `szft...@163.com` on 14 May 2014 at 8:30
defect
深圳激光祛除痘坑多少钱 深圳激光祛除痘坑多少钱【 】深圳韩方科颜专业祛痘连锁机构,机构以韩国秘方——韩方科颜这一国妆准字号治疗型权威,祛痘佳品——韩方科颜专业祛痘连锁机构,采用韩国秘方配合专业“不反弹”健康祛痘技术并结合先进“先进豪华彩光”仪,开创国内专业治疗粉刺、痤疮签约包治先河,成功消除了许多顾客脸上的痘痘。 original issue reported on code google com by szft com on may at
1
270,276
20,596,669,333
IssuesEvent
2022-03-05 16:06:16
42-AI/SentimentalBB
https://api.github.com/repos/42-AI/SentimentalBB
closed
docs (contrib): add agilmet as contributor
documentation
# 📖 Add a contributor Add my name and 42 login to the README.md # ✔️ Definition of done Name and login added to README.md
1.0
docs (contrib): add agilmet as contributor - # 📖 Add a contributor Add my name and 42 login to the README.md # ✔️ Definition of done Name and login added to README.md
non_defect
docs contrib add agilmet as contributor 📖 add a contributor add my name and login to the readme md ✔️ definition of done name and login added to readme md
0
56,213
14,981,573,301
IssuesEvent
2021-01-28 15:00:01
department-of-veterans-affairs/va.gov-team
https://api.github.com/repos/department-of-veterans-affairs/va.gov-team
closed
508-defect-3 [COGNITION, SEMANTIC MARKUP]: individual search items SHOULD read semantically
508-defect-3 508-issue-cognition 508-issue-semantic-markup 508/Accessibility dt-yellow-ribbon-schools-search vsa vsa-decision-tools
# [508-defect-3](https://github.com/department-of-veterans-affairs/va.gov-team/blob/master/platform/accessibility/guidance/defect-severity-rubric.md#508-defect-3) **Feedback framework** - **❗️ Must** for if the feedback must be applied - **⚠️Should** if the feedback is best practice - **✔️ Consider** for suggestions/enhancements ## Description Each search result **should** have semantic structure, associating labels with values. For the current code, the `p` elements are not associated with their values, and break out as if a separate sentence for screen readers. Non-sighted users do not get the same semantic content that sighted users get from the visual design. Due to limitations of Formation, the design may need to be modified. Recording of screen reader experience in Slack comment: https://dsva.slack.com/archives/C52CL1PKQ/p1588718480342600?thread_ts=1588710896.325800&cid=C52CL1PKQ Consider adjusting the code for "(per student, per year)" so that the CSS has it inline, rather than breaking at some viewports, as shown in this screenshot. ![Screen Shot 2020-05-05 at 6 38 05 PM](https://user-images.githubusercontent.com/57469/81122748-996bb980-8eff-11ea-8cd8-62322cfcab1e.png) ## Point of Contact **VFS Point of Contact:** Jennifer ## Acceptance Criteria As a screen reader user, I want to understand the context of the search results content so that I may locate the search result that best meets my needs. ## Acceptance Criteria - [ ] Defect is remediated - [ ] Any changes that impact the design intent are reviewed/approved by Design team - [ ] Changes are peer reviewed by Engineering - [ ] Changes are reviewed on Staging by Product Owner and Product Manager - [ ] Changes are merged and deployed to Production ## Potential fix > **Please note:** If the `display` property is changed in the CSS it changes the semantics. For example, if a `ul` gets `display:block`, it removes the "list" semantics. 
[Source: "Fixing" Lists](https://www.scottohara.me/blog/2019/01/12/lists-and-safari.html) ### ~~Current~~ Previous code ```html <div class="search-results vads-u-margin-top--2"> <div class="medium-screen:vads-l-col vads-l-col vads-u-margin-bottom--2 vads-u-padding-x--2 vads-u-padding-y--2 vads-u-background-color--gray-light-alt vads-u-border--3px vads-u-border-color--transparent"> <h3 class="vads-u-margin--0">Abilene Christian University</h3> <p class="vads-u-margin-bottom--1 vads-u-margin-top--0">Abilene, TX</p> <div class="vads-l-row vads-u-margin-top--2"> <div class="vads-l-col--6 vads-u-display--flex vads-u-flex-direction--column vads-u-justify-content--space-between"> <div class="vads-u-col"> <h4 class="vads-u-font-family--sans vads-u-font-size--h5 vads-u-margin--0">Maximum Yellow Ribbon funding amount<br>(per student, per year)</h4> <p class="vads-u-margin--0">All tuition and fees not covered by Post-9/11 GI Bill benefits</p> </div> <h4 class="vads-u-font-family--sans vads-u-font-size--h5 vads-u-margin-top--2 vads-u-margin-bottom--0">Funding available for</h4> <p class="vads-u-margin-top--0 vads-u-margin-bottom--0">All eligible students</p> <h4 class="vads-u-font-family--sans vads-u-font-size--h5 vads-u-margin-top--2 vads-u-margin-bottom--0">School website</h4> <p class="vads-u-margin-top--0 vads-u-margin-bottom--0"><a href="www.acu.edu" rel="noreferrer noopener">www.acu.edu</a></p> </div> <div class="vads-l-col--6 vads-u-padding-left--2"> <h4 class="vads-u-font-family--sans vads-u-font-size--h5 vads-u-margin--0">Degree type</h4> <p class="vads-u-margin--0">All</p> <h4 class="vads-u-font-family--sans vads-u-font-size--h5 vads-u-margin-top--7 vads-u-margin-bottom--0">School or program</h4> <p class="vads-u-margin--0">All</p> </div> </div> </div> <div class="medium-screen:vads-l-col vads-l-col vads-u-margin-bottom--2 vads-u-padding-x--2 vads-u-padding-y--2 vads-u-background-color--gray-light-alt vads-u-border--3px vads-u-border-color--transparent"> <h3 
class="vads-u-margin--0">Abraham Baldwin Agricultural College</h3> <p class="vads-u-margin-bottom--1 vads-u-margin-top--0">Tifton, GA</p> <div class="vads-l-row vads-u-margin-top--2"> <!-- etcetera --> </div> ``` ### Current code The highlighted text below is an example of an issue with the code structure. ```diff <ul class="search-results vads-u-margin-top--2 vads-u-padding--0" data-e2e-id="search-results"> <li class="usa-unstyled-list vads-l-col vads-u-margin-bottom--2 vads-u-padding-x--2 vads-u-padding-y--2 vads-u-background-color--gray-light-alt"> <p class="vads-u-font-size--h3 vads-u-font-weight--bold vads-u-font-family--serif vads-u-margin--0" data-e2e-id="result-title"> <span class="sr-only">School name</span> Berkeley College Of New York </p> <p class="vads-u-margin-bottom--1 vads-u-margin-top--0"> <span class="sr-only">School location</span> New York, NY</p> <div class="vads-l-row vads-u-margin-top--2"> <div class="vads-l-col--12 vads-u-display--flex vads-u-flex-direction--column vads-u-justify-content--space-between medium-screen:vads-l-col--6"> <div class="vads-u-col"> <p class="vads-u-font-weight--bold vads-u-font-family--sans vads-u-font-size--h5 vads-u-margin--0"> Maximum Yellow Ribbon funding amount<br> (per student, per year)<span class="sr-only">:</span> </p> <p class="vads-u-margin--0"> Pays remaining tuition that Post-9/11 GI Bill doesn't cover </p> </div> <p class="vads-u-font-weight--bold vads-u-font-family--sans vads-u-font-size--h5 vads-u-margin-top--2 vads-u-margin-bottom--0"> Funding available for<span class="sr-only">:</span> </p> <p class="vads-u-margin-top--0 vads-u-margin-bottom--0"> All eligible students </p> ! <p class="vads-u-font-weight--bold vads-u-font-family--sans vads-u-font-size--h5 vads-u-margin-top--2 vads-u-margin-bottom--0"> ! School website<span class="sr-only">:</span> ! </p> ! <p class="vads-u-margin-top--0 vads-u-margin-bottom--0"> ! 
<a href="https://www.BerkeleyCollege.edu" rel="noreferrer noopener" target="_blank">www.berkeleycollege.edu</a> </p> </div> <div class="vads-l-col--12 medium-screen:vads-l-col--6 medium-screen:vads-u-padding-left--2"> <p class="vads-u-font-weight--bold vads-u-margin-top--2 vads-u-margin-bottom--0 vads-u-font-family--sans vads-u-font-size--h5 medium-screen:vads-u-margin--0"> Degree type<span class="sr-only">:</span> </p> <p class="vads-u-margin-top--0 vads-u-margin-bottom--0 medium-screen:vads-u-margin--0"> All </p> <p class="school-program vads-u-font-weight--bold vads-u-margin-top--2 vads-u-margin-bottom--0 vads-u-font-family--sans vads-u-font-size--h5 medium-screen:vads-u-margin-bottom--0"> School or program<span class="sr-only">:</span> </p> <p class="vads-u-margin-top--0 vads-u-margin-bottom--0 medium-screen:vads-u-margin--0"> All </p> </div> </div> </li> <li>Next result item</li> </ul> ``` ### Potential fix One I believe this preferred solution is not possible, given the limitation of working with Formation. ```html <dl> <dt> <dfn class="sr-only">School name</dfn> Abilene Christian University </dt> <dd> <dfn class="sr-only">City, State</dfn> Abilene, <abbr title="Texas">TX</abbr> </dd> <dd> <dfn>Maximum Yellow Ribbon funding amount<br>(per student, per year)</dfn> All tuition and fees not covered by Post-9/11 GI Bill benefits </dd> <dd> <dfn>Funding available for</dfn> All eligible students </dd> <dd> <dfn>School website</dfn> <a href="www.acu.edu" rel="noreferrer noopener">www.acu.edu</a> </dd> <dd> <dfn>Degree type</dfn> All </dd> <dd> <dfn>School or program</dfn> All </dd> <dt><dfn class="sr-only">School name</dfn> Abraham Baldwin Agricultural College</dt> <dd><dfn class="sr-only">City, State</dfn> Tifton, <abbr title="Georgia">GA</abbr></dd> <!-- etcetera --> </dl> ``` ### Potential fix Two This alternate may also not be possible with Formation. 
```html <!-- full search results list --> <ul> <!-- search results list item --> <li> <!-- search results list item name --> <div role="heading" aria-level="3" id="abilene_christian_university"> <dfn class="sr-only">School name</dfn> Abilene Christian University </div> <!-- search results list name, list of details --> <ul aria-labelledby="abilene_christian_university"> <li> <dfn class="sr-only">City, State</dfn> Abilene, <abbr title="Texas">TX</abbr> </li> <li> <dfn>Maximum Yellow Ribbon funding amount<br>(per student, per year)</dfn> All tuition and fees not covered by Post-9/11 GI Bill benefits </li> <li> <dfn>Funding available for</dfn> All eligible students </li> <li> <dfn>School website</dfn> <a href="www.acu.edu" rel="noreferrer noopener">www.acu.edu</a> </li> <li> <dfn>Degree type</dfn> All </li> <li> <dfn>School or program</dfn> All </li> </ul><!-- end labelledby for abilene_christian_university --> </li> <li> <div role="heading" aria-level="3" id="abraham_baldwin_agricultural_college"> <dfn class="sr-only">School name</dfn> Abraham Baldwin Agricultural College </div> <!-- search results list name, list of details --> <ul aria-labelledby="abraham_baldwin_agricultural_college"> <li> <dfn class="sr-only">City, State</dfn> Tifton, <abbr title="Georgia">GA</abbr> </li> <!-- etcetera --> </dl> ``` ### Potential fix Three Using divs in order to meet Formation requirements, and provide semantic context using ARIA. 
```html <!-- full search results list --> <div role="list"> <!-- search results list item --> <div role="listitem"> <!-- search results list item name --> <div role="heading" aria-level="3" id="abilene_christian_university"> <dfn class="sr-only">School name</dfn> Abilene Christian University </div> <!-- search results list name, list of details --> <div role="list" aria-labelledby="abilene_christian_university"> <div role="listitem"> <dfn class="sr-only">City, State</dfn> Abilene, <abbr title="Texas">TX</abbr> </div> <div role="listitem"> <dfn>Maximum Yellow Ribbon funding amount<br>(per student, per year)</dfn> All tuition and fees not covered by Post-9/11 GI Bill benefits </div> <div role="listitem"> <dfn>Funding available for</dfn> All eligible students </div> <div role="listitem"> <dfn>School website</dfn> <a href="www.acu.edu" rel="noreferrer noopener">www.acu.edu</a> </div> <div role="listitem"> <dfn>Degree type</dfn> All </div> <div role="listitem"> <dfn>School or program</dfn> All </div> </div><!-- end labelledby for abilene_christian_university --> </div><!-- end search result list item --> <!-- search results list item --> <div role="listitem"> <!-- search results list item name --> <div role="heading" aria-level="3" id="abraham_baldwin_agricultural_college"> <dfn class="sr-only">School name</dfn> Abraham Baldwin Agricultural College </div> <div role="list" aria-labelledby="abraham_baldwin_agricultural_college"> <div role="listitem"> <dfn class="sr-only">City, State</dfn> Tifton, <abbr title="Georgia">GA</abbr> </div> <!-- etcetera --> </dl> ```
1.0
508-defect-3 [COGNITION, SEMANTIC MARKUP]: individual search items SHOULD read semantically - # [508-defect-3](https://github.com/department-of-veterans-affairs/va.gov-team/blob/master/platform/accessibility/guidance/defect-severity-rubric.md#508-defect-3) **Feedback framework** - **❗️ Must** for if the feedback must be applied - **⚠️Should** if the feedback is best practice - **✔️ Consider** for suggestions/enhancements ## Description Each search result **should** have semantic structure, associating labels with values. For the current code, the `p` elements are not associated with their values, and break out as if a separate sentence for screen readers. Non-sighted users do not get the same semantic content that sighted users get from the visual design. Due to limitations of Formation, the design may need to be modified. Recording of screen reader experience in Slack comment: https://dsva.slack.com/archives/C52CL1PKQ/p1588718480342600?thread_ts=1588710896.325800&cid=C52CL1PKQ Consider adjusting the code for "(per student, per year)" so that the CSS has it inline, rather than breaking at some viewports, as shown in this screenshot. ![Screen Shot 2020-05-05 at 6 38 05 PM](https://user-images.githubusercontent.com/57469/81122748-996bb980-8eff-11ea-8cd8-62322cfcab1e.png) ## Point of Contact **VFS Point of Contact:** Jennifer ## Acceptance Criteria As a screen reader user, I want to understand the context of the search results content so that I may locate the search result that best meets my needs. ## Acceptance Criteria - [ ] Defect is remediated - [ ] Any changes that impact the design intent are reviewed/approved by Design team - [ ] Changes are peer reviewed by Engineering - [ ] Changes are reviewed on Staging by Product Owner and Product Manager - [ ] Changes are merged and deployed to Production ## Potential fix > **Please note:** If the `display` property is changed in the CSS it changes the semantics. 
For example, if a `ul` gets `display:block`, it removes the "list" semantics. [Source: "Fixing" Lists](https://www.scottohara.me/blog/2019/01/12/lists-and-safari.html) ### ~~Current~~ Previous code ```html <div class="search-results vads-u-margin-top--2"> <div class="medium-screen:vads-l-col vads-l-col vads-u-margin-bottom--2 vads-u-padding-x--2 vads-u-padding-y--2 vads-u-background-color--gray-light-alt vads-u-border--3px vads-u-border-color--transparent"> <h3 class="vads-u-margin--0">Abilene Christian University</h3> <p class="vads-u-margin-bottom--1 vads-u-margin-top--0">Abilene, TX</p> <div class="vads-l-row vads-u-margin-top--2"> <div class="vads-l-col--6 vads-u-display--flex vads-u-flex-direction--column vads-u-justify-content--space-between"> <div class="vads-u-col"> <h4 class="vads-u-font-family--sans vads-u-font-size--h5 vads-u-margin--0">Maximum Yellow Ribbon funding amount<br>(per student, per year)</h4> <p class="vads-u-margin--0">All tuition and fees not covered by Post-9/11 GI Bill benefits</p> </div> <h4 class="vads-u-font-family--sans vads-u-font-size--h5 vads-u-margin-top--2 vads-u-margin-bottom--0">Funding available for</h4> <p class="vads-u-margin-top--0 vads-u-margin-bottom--0">All eligible students</p> <h4 class="vads-u-font-family--sans vads-u-font-size--h5 vads-u-margin-top--2 vads-u-margin-bottom--0">School website</h4> <p class="vads-u-margin-top--0 vads-u-margin-bottom--0"><a href="www.acu.edu" rel="noreferrer noopener">www.acu.edu</a></p> </div> <div class="vads-l-col--6 vads-u-padding-left--2"> <h4 class="vads-u-font-family--sans vads-u-font-size--h5 vads-u-margin--0">Degree type</h4> <p class="vads-u-margin--0">All</p> <h4 class="vads-u-font-family--sans vads-u-font-size--h5 vads-u-margin-top--7 vads-u-margin-bottom--0">School or program</h4> <p class="vads-u-margin--0">All</p> </div> </div> </div> <div class="medium-screen:vads-l-col vads-l-col vads-u-margin-bottom--2 vads-u-padding-x--2 vads-u-padding-y--2 
vads-u-background-color--gray-light-alt vads-u-border--3px vads-u-border-color--transparent"> <h3 class="vads-u-margin--0">Abraham Baldwin Agricultural College</h3> <p class="vads-u-margin-bottom--1 vads-u-margin-top--0">Tifton, GA</p> <div class="vads-l-row vads-u-margin-top--2"> <!-- etcetera --> </div> ``` ### Current code The highlighted text below is an example of an issue with the code structure. ```diff <ul class="search-results vads-u-margin-top--2 vads-u-padding--0" data-e2e-id="search-results"> <li class="usa-unstyled-list vads-l-col vads-u-margin-bottom--2 vads-u-padding-x--2 vads-u-padding-y--2 vads-u-background-color--gray-light-alt"> <p class="vads-u-font-size--h3 vads-u-font-weight--bold vads-u-font-family--serif vads-u-margin--0" data-e2e-id="result-title"> <span class="sr-only">School name</span> Berkeley College Of New York </p> <p class="vads-u-margin-bottom--1 vads-u-margin-top--0"> <span class="sr-only">School location</span> New York, NY</p> <div class="vads-l-row vads-u-margin-top--2"> <div class="vads-l-col--12 vads-u-display--flex vads-u-flex-direction--column vads-u-justify-content--space-between medium-screen:vads-l-col--6"> <div class="vads-u-col"> <p class="vads-u-font-weight--bold vads-u-font-family--sans vads-u-font-size--h5 vads-u-margin--0"> Maximum Yellow Ribbon funding amount<br> (per student, per year)<span class="sr-only">:</span> </p> <p class="vads-u-margin--0"> Pays remaining tuition that Post-9/11 GI Bill doesn't cover </p> </div> <p class="vads-u-font-weight--bold vads-u-font-family--sans vads-u-font-size--h5 vads-u-margin-top--2 vads-u-margin-bottom--0"> Funding available for<span class="sr-only">:</span> </p> <p class="vads-u-margin-top--0 vads-u-margin-bottom--0"> All eligible students </p> ! <p class="vads-u-font-weight--bold vads-u-font-family--sans vads-u-font-size--h5 vads-u-margin-top--2 vads-u-margin-bottom--0"> ! School website<span class="sr-only">:</span> ! </p> ! 
<p class="vads-u-margin-top--0 vads-u-margin-bottom--0"> ! <a href="https://www.BerkeleyCollege.edu" rel="noreferrer noopener" target="_blank">www.berkeleycollege.edu</a> </p> </div> <div class="vads-l-col--12 medium-screen:vads-l-col--6 medium-screen:vads-u-padding-left--2"> <p class="vads-u-font-weight--bold vads-u-margin-top--2 vads-u-margin-bottom--0 vads-u-font-family--sans vads-u-font-size--h5 medium-screen:vads-u-margin--0"> Degree type<span class="sr-only">:</span> </p> <p class="vads-u-margin-top--0 vads-u-margin-bottom--0 medium-screen:vads-u-margin--0"> All </p> <p class="school-program vads-u-font-weight--bold vads-u-margin-top--2 vads-u-margin-bottom--0 vads-u-font-family--sans vads-u-font-size--h5 medium-screen:vads-u-margin-bottom--0"> School or program<span class="sr-only">:</span> </p> <p class="vads-u-margin-top--0 vads-u-margin-bottom--0 medium-screen:vads-u-margin--0"> All </p> </div> </div> </li> <li>Next result item</li> </ul> ``` ### Potential fix One I believe this preferred solution is not possible, given the limitation of working with Formation. ```html <dl> <dt> <dfn class="sr-only">School name</dfn> Abilene Christian University </dt> <dd> <dfn class="sr-only">City, State</dfn> Abilene, <abbr title="Texas">TX</abbr> </dd> <dd> <dfn>Maximum Yellow Ribbon funding amount<br>(per student, per year)</dfn> All tuition and fees not covered by Post-9/11 GI Bill benefits </dd> <dd> <dfn>Funding available for</dfn> All eligible students </dd> <dd> <dfn>School website</dfn> <a href="www.acu.edu" rel="noreferrer noopener">www.acu.edu</a> </dd> <dd> <dfn>Degree type</dfn> All </dd> <dd> <dfn>School or program</dfn> All </dd> <dt><dfn class="sr-only">School name</dfn> Abraham Baldwin Agricultural College</dt> <dd><dfn class="sr-only">City, State</dfn> Tifton, <abbr title="Georgia">GA</abbr></dd> <!-- etcetera --> </dl> ``` ### Potential fix Two This alternate may also not be possible with Formation. 
```html <!-- full search results list --> <ul> <!-- search results list item --> <li> <!-- search results list item name --> <div role="heading" aria-level="3" id="abilene_christian_university"> <dfn class="sr-only">School name</dfn> Abilene Christian University </div> <!-- search results list name, list of details --> <ul aria-labelledby="abilene_christian_university"> <li> <dfn class="sr-only">City, State</dfn> Abilene, <abbr title="Texas">TX</abbr> </li> <li> <dfn>Maximum Yellow Ribbon funding amount<br>(per student, per year)</dfn> All tuition and fees not covered by Post-9/11 GI Bill benefits </li> <li> <dfn>Funding available for</dfn> All eligible students </li> <li> <dfn>School website</dfn> <a href="www.acu.edu" rel="noreferrer noopener">www.acu.edu</a> </li> <li> <dfn>Degree type</dfn> All </li> <li> <dfn>School or program</dfn> All </li> </ul><!-- end labelledby for abilene_christian_university --> </li> <li> <div role="heading" aria-level="3" id="abraham_baldwin_agricultural_college"> <dfn class="sr-only">School name</dfn> Abraham Baldwin Agricultural College </div> <!-- search results list name, list of details --> <ul aria-labelledby="abraham_baldwin_agricultural_college"> <li> <dfn class="sr-only">City, State</dfn> Tifton, <abbr title="Georgia">GA</abbr> </li> <!-- etcetera --> </dl> ``` ### Potential fix Three Using divs in order to meet Formation requirements, and provide semantic context using ARIA. 
```html <!-- full search results list --> <div role="list"> <!-- search results list item --> <div role="listitem"> <!-- search results list item name --> <div role="heading" aria-level="3" id="abilene_christian_university"> <dfn class="sr-only">School name</dfn> Abilene Christian University </div> <!-- search results list name, list of details --> <div role="list" aria-labelledby="abilene_christian_university"> <div role="listitem"> <dfn class="sr-only">City, State</dfn> Abilene, <abbr title="Texas">TX</abbr> </div> <div role="listitem"> <dfn>Maximum Yellow Ribbon funding amount<br>(per student, per year)</dfn> All tuition and fees not covered by Post-9/11 GI Bill benefits </div> <div role="listitem"> <dfn>Funding available for</dfn> All eligible students </div> <div role="listitem"> <dfn>School website</dfn> <a href="www.acu.edu" rel="noreferrer noopener">www.acu.edu</a> </div> <div role="listitem"> <dfn>Degree type</dfn> All </div> <div role="listitem"> <dfn>School or program</dfn> All </div> </div><!-- end labelledby for abilene_christian_university --> </div><!-- end search result list item --> <!-- search results list item --> <div role="listitem"> <!-- search results list item name --> <div role="heading" aria-level="3" id="abraham_baldwin_agricultural_college"> <dfn class="sr-only">School name</dfn> Abraham Baldwin Agricultural College </div> <div role="list" aria-labelledby="abraham_baldwin_agricultural_college"> <div role="listitem"> <dfn class="sr-only">City, State</dfn> Tifton, <abbr title="Georgia">GA</abbr> </div> <!-- etcetera --> </dl> ```
defect
defect individual search items should read semantically feedback framework ❗️ must for if the feedback must be applied ⚠️should if the feedback is best practice ✔️ consider for suggestions enhancements description each search result should have semantic structure associating labels with values for the current code the p elements are not associated with their values and break out as if a separate sentence for screen readers non sighted users do not get the same semantic content that sighted users get from the visual design due to limitations of formation the design may need to be modified recording of screen reader experience in slack comment consider adjusting the code for per student per year so that the css has it inline rather than breaking at some viewports as shown in this screenshot point of contact vfs point of contact jennifer acceptance criteria as a screen reader user i want to understand the context of the search results content so that i may locate the search result that best meets my needs acceptance criteria defect is remediated any changes that impact the design intent are reviewed approved by design team changes are peer reviewed by engineering changes are reviewed on staging by product owner and product manager changes are merged and deployed to production potential fix please note if the display property is changed in the css it changes the semantics for example if a ul gets display block it removes the list semantics current previous code html abilene christian university abilene tx maximum yellow ribbon funding amount per student per year all tuition and fees not covered by post gi bill benefits funding available for all eligible students school website degree type all school or program all abraham baldwin agricultural college tifton ga current code the highlighted text below is an example of an issue with the code structure diff school name berkeley college of new york school location new york ny maximum yellow ribbon funding amount per student 
per year pays remaining tuition that post gi bill doesn t cover funding available for all eligible students school website degree type all school or program all next result item potential fix one i believe this preferred solution is not possible given the limitation of working with formation html school name abilene christian university city state abilene tx maximum yellow ribbon funding amount per student per year all tuition and fees not covered by post gi bill benefits funding available for all eligible students school website degree type all school or program all school name abraham baldwin agricultural college city state tifton ga potential fix two this alternate may also not be possible with formation html school name abilene christian university city state abilene tx maximum yellow ribbon funding amount per student per year all tuition and fees not covered by post gi bill benefits funding available for all eligible students school website degree type all school or program all school name abraham baldwin agricultural college city state tifton ga potential fix three using divs in order to meet formation requirements and provide semantic context using aria html school name abilene christian university city state abilene tx maximum yellow ribbon funding amount per student per year all tuition and fees not covered by post gi bill benefits funding available for all eligible students school website degree type all school or program all school name abraham baldwin agricultural college city state tifton ga
1
347,830
31,279,110,429
IssuesEvent
2023-08-22 08:27:27
telstra/open-kilda
https://api.github.com/repos/telstra/open-kilda
closed
EnvCleanupExtension must verify absence of abandoned traffgen addresses
area/testing improvement
Currently EnvCleanupExtension doesn't check if active traffgens have stored addresses, which can be left from previous tests or manual runs (especially in HW environments). So it would be nice to have this check (and, eventually, remove them).
1.0
EnvCleanupExtension must verify absence of abandoned traffgen addresses - Currently EnvCleanupExtension doesn't check if active traffgens have stored addresses, which can be left from previous tests or manual runs (especially in HW environments). So it would be nice to have this check (and, eventually, remove them).
non_defect
envcleanupextension must verify absence of abandoned traffgen addresses currently envcleanupextension doesn t check if active traffgens have stored addresses which can be left from previous tests or manual runs especially in hw environments so it would be nice to have this check and eventually remove them
0
20,840
3,422,062,716
IssuesEvent
2015-12-08 21:24:23
dart-lang/sdk
https://api.github.com/repos/dart-lang/sdk
closed
VM should interpret path arguments in the console like any other unix command
area-vm priority-low triaged Type-Defect
I'm trying to use the fancy new package root stuff: ~/tmp/dart-sha > dart --package-root=~/Code/Dart-git/dart/lib/ crypto.dart Unable to open file: ~/Code/Dart-git/dart/lib/crypto/crypto.dart'file:///Users/sethladd/tmp/dart-sha/crypto.dart': Error: line 3 pos 1: library handler failed #import('package:crypto/crypto.dart'); ^ The file ~/Code/Dart-git/dart/lib/crypto/crypto.dart certainly exists. The contents of crypto.dart in my work dir is: #import('package:crypto/crypto.dart'); main() {     var sha = new SHA256(); } The solution was to fully qualify my path for the package-root command line arg. The ~ did not work.
1.0
VM should interpret path arguments in the console like any other unix command - I'm trying to use the fancy new package root stuff: ~/tmp/dart-sha > dart --package-root=~/Code/Dart-git/dart/lib/ crypto.dart Unable to open file: ~/Code/Dart-git/dart/lib/crypto/crypto.dart'file:///Users/sethladd/tmp/dart-sha/crypto.dart': Error: line 3 pos 1: library handler failed #import('package:crypto/crypto.dart'); ^ The file ~/Code/Dart-git/dart/lib/crypto/crypto.dart certainly exists. The contents of crypto.dart in my work dir is: #import('package:crypto/crypto.dart'); main() {     var sha = new SHA256(); } The solution was to fully qualify my path for the package-root command line arg. The ~ did not work.
defect
vm should interpret path arguments in the console like any other unix command i m trying to use the fancy new package root stuff tmp dart sha dart package root code dart git dart lib crypto dart unable to open file code dart git dart lib crypto crypto dart file users sethladd tmp dart sha crypto dart error line pos library handler failed import package crypto crypto dart the file code dart git dart lib crypto crypto dart certainly exists the contents of crypto dart in my work dir is import package crypto crypto dart main var sha new the solution was to fully qualify my path for the package root command line arg the did not work
1
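The Dart VM record above comes down to shell behavior: a shell expands `~` only at the start of an unquoted word (or in `name=value` assignment-like words), so an argument such as `--package-root=~/Code/...` reaches the program with a literal `~` that the file system cannot resolve. A minimal sketch of the fix the issue title asks for — expanding the tilde inside the program — shown here in Python purely for illustration (the VM itself is not Python):

```python
import os.path

def resolve_path_argument(raw: str) -> str:
    """Expand a leading ``~`` or ``~user`` the way a shell would for a bare word.

    Shells leave ``~`` untouched inside ``--flag=~/...`` arguments, so a
    command-line tool that accepts paths can apply the expansion itself
    before opening files.
    """
    return os.path.expanduser(raw)

# The literal "~" that survives shell parsing of "--package-root=~/...":
raw = "~/Code/Dart-git/dart/lib/"
expanded = resolve_path_argument(raw)
assert not expanded.startswith("~")  # now rooted at the user's home directory
```

`os.path.expanduser` falls back to the password database when `HOME` is unset on POSIX systems, which roughly matches what shells do.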
7,857
3,106,082,853
IssuesEvent
2015-09-01 01:23:29
california-civic-data-coalition/django-calaccess-raw-data
https://api.github.com/repos/california-civic-data-coalition/django-calaccess-raw-data
closed
Add documentation for the ``bus_city`` field on the ``CvrCampaignDisclosureCd`` database model
documentation enhancement small
## Your mission Add documentation for the ``bus_city`` field on the ``CvrCampaignDisclosureCd`` database model. ## Here's how **Step 1**: Claim this ticket by leaving a comment below. Tell everyone you're ON IT! **Step 2**: Open up the file that contains this model. It should be in <a href="https://github.com/california-civic-data-coalition/django-calaccess-raw-data/blob/master/calaccess_raw/models/campaign.py">calaccess_raw.models.campaign.py</a>. **Step 3**: Hit the little pencil button in the upper-right corner of the code box to begin editing the file. ![Edit](https://dl.dropboxusercontent.com/u/3640647/ScreenCloud/1440367320.67.png) **Step 4**: Find this model and field in the file. (Clicking into the box and searching with CTRL-F can help you here.) Once you find it, we expect the field to lack the ``help_text`` field typically used in Django to explain what a field contains. ```python effect_dt = fields.DateField( null=True, db_column="EFFECT_DT" ) ``` **Step 5**: In a separate tab, open up the <a href="Quilmes">official state documentation</a> and find the page that defines all the fields in this model. ![The docs](https://dl.dropboxusercontent.com/u/3640647/ScreenCloud/1440367001.08.png) **Step 6**: Find the row in that table's definition table that spells out what this field contains. If it lacks documentation, note that in the ticket and close it now. ![The definition](https://dl.dropboxusercontent.com/u/3640647/ScreenCloud/1440367068.59.png) **Step 7**: Return to the GitHub tab. **Step 8**: Add the state's label explaining what's in the field, to our field definition by inserting it as a ``help_text`` argument. That should look something like this: ```python effect_dt = fields.DateField( null=True, db_column="EFFECT_DT", # Add a help_text argument like the one here, but put your string in instead. 
help_text="The other values in record were effective as of this date" ) ``` **Step 9**: Scroll down below the code box and describe the change you've made in the commit message. Press the button below. ![Commit](https://dl.dropboxusercontent.com/u/3640647/ScreenCloud/1440367511.66.png) **Step 10**: Review your changes and create a pull request submitting them to the core team for inclusion. ![Pull request](https://dl.dropboxusercontent.com/u/3640647/ScreenCloud/1440368058.52.png) That's it! Mission accomplished!
1.0
Add documentation for the ``bus_city`` field on the ``CvrCampaignDisclosureCd`` database model - ## Your mission Add documentation for the ``bus_city`` field on the ``CvrCampaignDisclosureCd`` database model. ## Here's how **Step 1**: Claim this ticket by leaving a comment below. Tell everyone you're ON IT! **Step 2**: Open up the file that contains this model. It should be in <a href="https://github.com/california-civic-data-coalition/django-calaccess-raw-data/blob/master/calaccess_raw/models/campaign.py">calaccess_raw.models.campaign.py</a>. **Step 3**: Hit the little pencil button in the upper-right corner of the code box to begin editing the file. ![Edit](https://dl.dropboxusercontent.com/u/3640647/ScreenCloud/1440367320.67.png) **Step 4**: Find this model and field in the file. (Clicking into the box and searching with CTRL-F can help you here.) Once you find it, we expect the field to lack the ``help_text`` field typically used in Django to explain what a field contains. ```python effect_dt = fields.DateField( null=True, db_column="EFFECT_DT" ) ``` **Step 5**: In a separate tab, open up the <a href="Quilmes">official state documentation</a> and find the page that defines all the fields in this model. ![The docs](https://dl.dropboxusercontent.com/u/3640647/ScreenCloud/1440367001.08.png) **Step 6**: Find the row in that table's definition table that spells out what this field contains. If it lacks documentation, note that in the ticket and close it now. ![The definition](https://dl.dropboxusercontent.com/u/3640647/ScreenCloud/1440367068.59.png) **Step 7**: Return to the GitHub tab. **Step 8**: Add the state's label explaining what's in the field to our field definition by inserting a ``help_text`` argument. That should look something like this: ```python effect_dt = fields.DateField( null=True, db_column="EFFECT_DT", # Add a help_text argument like the one here, but put your string in instead. 
help_text="The other values in record were effective as of this date" ) ``` **Step 9**: Scroll down below the code box and describe the change you've made in the commit message. Press the button below. ![Commit](https://dl.dropboxusercontent.com/u/3640647/ScreenCloud/1440367511.66.png) **Step 10**: Review your changes and create a pull request submitting them to the core team for inclusion. ![Pull request](https://dl.dropboxusercontent.com/u/3640647/ScreenCloud/1440368058.52.png) That's it! Mission accomplished!
non_defect
add documentation for the bus city field on the cvrcampaigndisclosurecd database model your mission add documentation for the bus city field on the cvrcampaigndisclosurecd database model here s how step claim this ticket by leaving a comment below tell everyone you re on it step open up the file that contains this model it should be in a href step hit the little pencil button in the upper right corner of the code box to begin editing the file step find this model and field in the file clicking into the box and searching with ctrl f can help you here once you find it we expect the field to lack the help text field typically used in django to explain what a field contains python effect dt fields datefield null true db column effect dt step in a separate tab open up the official state documentation and find the page that defines all the fields in this model step find the row in that table s definition table that spells out what this field contains if it lacks documentation note that in the ticket and close it now step return to the github tab step add the state s label explaining what s in the field to our field definition by inserting a help text argument that should look something like this python effect dt fields datefield null true db column effect dt add a help text argument like the one here but put your string in instead help text the other values in record were effective as of this date step scroll down below the code box and describe the change you ve made in the commit message press the button below step review your changes and create a pull request submitting them to the core team for inclusion that s it mission accomplished
0
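The lowercased `text` column in each record above appears to be derived from `text_combine`. A minimal sketch of that preprocessing, inferred from the rows in this dump — the actual pipeline is not included here, so the URL-stripping and token rules are assumptions, not the dataset authors' code:

```python
import re

URL_RE = re.compile(r"https?://\S+")

def clean(text_combine: str) -> str:
    # Inferred rules: strip URLs, lowercase, split on non-alphanumeric
    # characters, then drop empty tokens and any token containing a digit
    # (which would explain why "python3", "level3", and "#496" never
    # appear in the cleaned rows).
    no_urls = URL_RE.sub(" ", text_combine)
    tokens = re.split(r"[^a-z0-9]+", no_urls.lower())
    return " ".join(t for t in tokens if t and not any(c.isdigit() for c in t))
```

For example, `clean("python3 -m tables.tests.test_all")` yields `"m tables tests test all"`, matching the fragment seen in the cleaned PyTables record below.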
220,758
16,984,899,440
IssuesEvent
2021-06-30 13:24:21
NetworkGradeLinux/mion-docs
https://api.github.com/repos/NetworkGradeLinux/mion-docs
closed
mion: Why base off of ONL and ONLP?
documentation
The initial use case of Mion is to provide ONLP functionality to network switches, without relying on the base ONL operating system. We want to be clear about why we chose to approach it this way, referencing current feeling about switch OS's (and more generally NOS's). This is also necessary to serve as a lede into the documentation we are writing with regards to building ONLP separately and the issues we encountered along the way (if we decide to include these).
1.0
mion: Why base off of ONL and ONLP? - The initial use case of Mion is to provide ONLP functionality to network switches, without relying on the base ONL operating system. We want to be clear about why we chose to approach it this way, referencing current feeling about switch OS's (and more generally NOS's). This is also necessary to serve as a lede into the documentation we are writing with regards to building ONLP separately and the issues we encountered along the way (if we decide to include these).
non_defect
mion why base off of onl and onlp the initial use case of mion is to provide onlp functionality to network switches without relying on the base onl operating system we want to be clear about why we chose to approach it this way referencing current feeling about switch os s and more generally nos s this is also necessary to serve as a lede into the documentation we are writing with regards to building onlp separately and the issues we encountered along the way if we decide to include these
0
53,890
13,262,425,087
IssuesEvent
2020-08-20 21:46:05
icecube-trac/tix4
https://api.github.com/repos/icecube-trac/tix4
closed
Millipede reconstruction fails with new wavedeform + SPE correction (Trac #2244)
Migrated from Trac combo reconstruction defect
Processing of level3 muon filter data using level3-filter-muon/python/level3_Master.py fails with assertion error i3_assert(p->GetWidth() > 0) inside UpdateData() function from MillipedeDOMCacheMap.cxx when using level2pass3 data created with the new wavedeform + SPE correction. A fix is critical because a large number of point source analyses in the Nu Sources WG depend on millipede reconstructions. <details> <summary><em>Migrated from <a href="https://code.icecube.wisc.edu/projects/icecube/ticket/2244">https://code.icecube.wisc.edu/projects/icecube/ticket/2244</a>, reported by jwoodand owned by jbraun</em></summary> <p> ```json { "status": "closed", "changetime": "2019-06-21T23:36:50", "_ts": "1561160210298267", "description": "Processing of level3 muon filter data using level3-filter-muon/python/level3_Master.py fails with assertion error i3_assert(p->GetWidth() > 0) inside UpdateData() function from MillipedeDOMCacheMap.cxx when using level2pass3 data created with the new wavedeform + SPE correction. A fix is critical because a large number of point source analyses in the Nu Sources WG depend on millipede reconstructions.", "reporter": "jwood", "cc": "", "resolution": "fixed", "time": "2019-02-28T19:15:58", "component": "combo reconstruction", "summary": "Millipede reconstruction fails with new wavedeform + SPE correction", "priority": "blocker", "keywords": "millipede", "milestone": "Autumnal Equinox 2019", "owner": "jbraun", "type": "defect" } ``` </p> </details>
1.0
Millipede reconstruction fails with new wavedeform + SPE correction (Trac #2244) - Processing of level3 muon filter data using level3-filter-muon/python/level3_Master.py fails with assertion error i3_assert(p->GetWidth() > 0) inside UpdateData() function from MillipedeDOMCacheMap.cxx when using level2pass3 data created with the new wavedeform + SPE correction. A fix is critical because a large number of point source analyses in the Nu Sources WG depend on millipede reconstructions. <details> <summary><em>Migrated from <a href="https://code.icecube.wisc.edu/projects/icecube/ticket/2244">https://code.icecube.wisc.edu/projects/icecube/ticket/2244</a>, reported by jwoodand owned by jbraun</em></summary> <p> ```json { "status": "closed", "changetime": "2019-06-21T23:36:50", "_ts": "1561160210298267", "description": "Processing of level3 muon filter data using level3-filter-muon/python/level3_Master.py fails with assertion error i3_assert(p->GetWidth() > 0) inside UpdateData() function from MillipedeDOMCacheMap.cxx when using level2pass3 data created with the new wavedeform + SPE correction. A fix is critical because a large number of point source analyses in the Nu Sources WG depend on millipede reconstructions.", "reporter": "jwood", "cc": "", "resolution": "fixed", "time": "2019-02-28T19:15:58", "component": "combo reconstruction", "summary": "Millipede reconstruction fails with new wavedeform + SPE correction", "priority": "blocker", "keywords": "millipede", "milestone": "Autumnal Equinox 2019", "owner": "jbraun", "type": "defect" } ``` </p> </details>
defect
millipede reconstruction fails with new wavedeform spe correction trac processing of muon filter data using filter muon python master py fails with assertion error assert p getwidth inside updatedata function from millipededomcachemap cxx when using data created with the new wavedeform spe correction a fix is critical because a large number of point source analyses in the nu sources wg depend on millipede reconstructions migrated from json status closed changetime ts description processing of muon filter data using filter muon python master py fails with assertion error assert p getwidth inside updatedata function from millipededomcachemap cxx when using data created with the new wavedeform spe correction a fix is critical because a large number of point source analyses in the nu sources wg depend on millipede reconstructions reporter jwood cc resolution fixed time component combo reconstruction summary millipede reconstruction fails with new wavedeform spe correction priority blocker keywords millipede milestone autumnal equinox owner jbraun type defect
1
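Each record also ends with a `binary_label` that tracks the `label` column. A tiny encoder, assuming (purely from the rows in this dump) that "defect" maps to 1 and everything else to 0:

```python
def encode_label(label: str) -> int:
    # Observed in the records: "defect" -> 1, "non_defect" -> 0.
    return 1 if label == "defect" else 0
```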
65,387
19,473,700,999
IssuesEvent
2021-12-24 08:00:22
PyTables/PyTables
https://api.github.com/repos/PyTables/PyTables
closed
Library not loaded: @rpath/libblosc.1.dylib
defect good first issues help wanted
Hello, I am working in a virtual environment on a Jupyter notebook. tables installs without error, but when I run `import tables` or `python3 -m tables.tests.test_all`, I receive this error: Traceback (most recent call last): File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/runpy.py", line 188, in _run_module_as_main mod_name, mod_spec, code = _get_module_details(mod_name, _Error) File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/runpy.py", line 111, in _get_module_details __import__(pkg_name) File "/Users/gv/Desktop/Appyter/my-first-appyter-2/venv/lib/python3.9/site-packages/tables/__init__.py", line 99, in <module> from .utilsextension import ( ImportError: dlopen(/Users/gv/Desktop/Appyter/my-first-appyter-2/venv/lib/python3.9/site-packages/tables/utilsextension.cpython-39-darwin.so, 2): Library not loaded: @rpath/libblosc.1.dylib Referenced from: /Users/gv/Desktop/Appyter/my-first-appyter-2/venv/lib/python3.9/site-packages/tables/utilsextension.cpython-39-darwin.so Reason: image not found I tried `python3 -m pip install blosc` and moved blosc into the **tables** directory, but that did not fix the issue. I also tried `pip install -U --force-reinstall --no-binary tables tables`. I would greatly appreciate any help! Thanks!
1.0
Library not loaded: @rpath/libblosc.1.dylib - Hello, I am working in a virtual environment on a Jupyter notebook. tables installs without error, but when I run `import tables` or `python3 -m tables.tests.test_all`, I receive this error: Traceback (most recent call last): File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/runpy.py", line 188, in _run_module_as_main mod_name, mod_spec, code = _get_module_details(mod_name, _Error) File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/runpy.py", line 111, in _get_module_details __import__(pkg_name) File "/Users/gv/Desktop/Appyter/my-first-appyter-2/venv/lib/python3.9/site-packages/tables/__init__.py", line 99, in <module> from .utilsextension import ( ImportError: dlopen(/Users/gv/Desktop/Appyter/my-first-appyter-2/venv/lib/python3.9/site-packages/tables/utilsextension.cpython-39-darwin.so, 2): Library not loaded: @rpath/libblosc.1.dylib Referenced from: /Users/gv/Desktop/Appyter/my-first-appyter-2/venv/lib/python3.9/site-packages/tables/utilsextension.cpython-39-darwin.so Reason: image not found I tried `python3 -m pip install blosc` and moved blosc into the **tables** directory, but that did not fix the issue. I also tried `pip install -U --force-reinstall --no-binary tables tables`. I would greatly appreciate any help! Thanks!
defect
library not loaded rpath libblosc dylib hello i am working in a virtual environment on a jupyter notebook tables installs without error but when i run import tables or m tables tests test all i receive this error traceback most recent call last file library frameworks python framework versions lib runpy py line in run module as main mod name mod spec code get module details mod name error file library frameworks python framework versions lib runpy py line in get module details import pkg name file users gv desktop appyter my first appyter venv lib site packages tables init py line in from utilsextension import importerror dlopen users gv desktop appyter my first appyter venv lib site packages tables utilsextension cpython darwin so library not loaded rpath libblosc dylib referenced from users gv desktop appyter my first appyter venv lib site packages tables utilsextension cpython darwin so reason image not found i tried m pip install blosc and moved blosc into the tables directory but that did not fix the issue i also tried pip install u force reinstall no binary tables tables i would greatly appreciate any help thanks
1
257,381
22,157,292,783
IssuesEvent
2022-06-04 01:54:56
apache/beam
https://api.github.com/repos/apache/beam
opened
Test all Coder structuralValue implementations
P3 sdk-java-core clarified beam-fixit test
Here is a test helper that checks that structuralValue is consistent with equals: https://github.com/apache/beam/blob/master/sdks/java/core/src/main/java/org/apache/beam/sdk/testing/CoderProperties.java#L200 And here is one that tests it another way: https://github.com/apache/beam/blob/master/sdks/java/core/src/main/java/org/apache/beam/sdk/testing/CoderProperties.java#L226 With the deprecation of consistentWithEquals and implementing all the structuralValue methods, we should add these tests to every coder. Imported from Jira [BEAM-6904](https://issues.apache.org/jira/browse/BEAM-6904). Original Jira may contain additional context. Reported by: kenn.
1.0
Test all Coder structuralValue implementations - Here is a test helper that checks that structuralValue is consistent with equals: https://github.com/apache/beam/blob/master/sdks/java/core/src/main/java/org/apache/beam/sdk/testing/CoderProperties.java#L200 And here is one that tests it another way: https://github.com/apache/beam/blob/master/sdks/java/core/src/main/java/org/apache/beam/sdk/testing/CoderProperties.java#L226 With the deprecation of consistentWithEquals and implementing all the structuralValue methods, we should add these tests to every coder. Imported from Jira [BEAM-6904](https://issues.apache.org/jira/browse/BEAM-6904). Original Jira may contain additional context. Reported by: kenn.
non_defect
test all coder structuralvalue implementations here is a test helper that checks that structuralvalue is consistent with equals and here is one that tests it another way with the deprecation of consistentwithequals and implementing all the structuralvalue methods we should add these tests to every coder imported from jira original jira may contain additional context reported by kenn
0
47,647
13,066,043,253
IssuesEvent
2020-07-30 20:53:17
icecube-trac/tix2
https://api.github.com/repos/icecube-trac/tix2
closed
[gotoblas] add checksuming to make files for lapack (Trac #917)
Migrated from Trac defect infrastructure
The GotoBLAS makefiles are a bit fragile and will happily try to patch and build a bad download of LAPACK. Add a checksum test and fail FABULOUSLY if the LAPACK d/l fails. Migrated from https://code.icecube.wisc.edu/ticket/917 ```json { "status": "closed", "changetime": "2015-04-17T01:11:52", "description": "The GotoBLAS makefiles are a bit fragile and will happily try to patch and build a bad download of LAPACK. Add a checksum test and fail FABULOUSLY if the LAPACK d/l fails.", "reporter": "nega", "cc": "", "resolution": "wontfix", "_ts": "1429233112808376", "component": "infrastructure", "summary": "[gotoblas] add checksuming to make files for lapack", "priority": "normal", "keywords": "", "time": "2015-04-10T03:02:48", "milestone": "", "owner": "nega", "type": "defect" } ```
1.0
[gotoblas] add checksuming to make files for lapack (Trac #917) - The GotoBLAS makefiles are a bit fragile and will happily try to patch and build a bad download of LAPACK. Add a checksum test and fail FABULOUSLY if the LAPACK d/l fails. Migrated from https://code.icecube.wisc.edu/ticket/917 ```json { "status": "closed", "changetime": "2015-04-17T01:11:52", "description": "The GotoBLAS makefiles are a bit fragile and will happily try to patch and build a bad download of LAPACK. Add a checksum test and fail FABULOUSLY if the LAPACK d/l fails.", "reporter": "nega", "cc": "", "resolution": "wontfix", "_ts": "1429233112808376", "component": "infrastructure", "summary": "[gotoblas] add checksuming to make files for lapack", "priority": "normal", "keywords": "", "time": "2015-04-10T03:02:48", "milestone": "", "owner": "nega", "type": "defect" } ```
defect
add checksuming to make files for lapack trac the gotoblas makefiles are a bit fragile and will happily try to patch and build a bad download of lapack add a checksum test and fail fabulously if the lapack d l fails migrated from json status closed changetime description the gotoblas makefiles are a bit fragile and will happily try to patch and build a bad download of lapack add a checksum test and fail fabulously if the lapack d l fails reporter nega cc resolution wontfix ts component infrastructure summary add checksuming to make files for lapack priority normal keywords time milestone owner nega type defect
1
501,866
14,535,489,804
IssuesEvent
2020-12-15 05:43:44
ChrisNZL/Tallowmere2
https://api.github.com/repos/ChrisNZL/Tallowmere2
opened
IOException: T2.LocalSettings.WriteIniFileToDisk / System.IO.FileStream.Dispose
⚠ priority 📄 filesystem 🦟 bug
Auto report. 0.2.1. Feedback ID: 20201211-S4X99 Error when writing LocalSettings.ini. ``` 8:53:57, Frame 3320, EXCEPTION » IOException: Win32 IO returned 112. Path: C:\Users\XXXXXXXXXX\AppData\LocalLow\Chris McFarland\Tallowmere 2\SteamAccounts\XXXXXXXXXXXXXXX\LocalSettings.ini >>>>> CRITICAL ERROR >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> System.IO.FileStream.Dispose (System.Boolean disposing) System.IO.Stream.Close () System.IO.StreamWriter.Dispose (System.Boolean disposing) System.IO.TextWriter.Dispose () System.IO.File.WriteAllText (System.String path, System.String contents, System.Text.Encoding encoding) System.IO.File.WriteAllText (System.String path, System.String contents) T2.LocalSettings.WriteIniFileToDisk () T2.LocalSettings.ClearRejoinGameStuff () T2.RelayClient._OnConnectedToServer () T2.NetworkClient.OnPeerConnected (LiteNetLib.NetPeer newServerPeer) LiteNetLib.NetManager.ProcessEvent (LiteNetLib.NetManager+NetEvent evt) LiteNetLib.NetManager.PollEvents () T2.NetworkClient.Update () GameStates: ShowingTitleScreen, UsingMainMenu, InitialisingAsNetworkHost, ViewingAlertBox, HandshakingWithServer SystemPlayer InputDevice: XInput Controller ```
1.0
IOException: T2.LocalSettings.WriteIniFileToDisk / System.IO.FileStream.Dispose - Auto report. 0.2.1. Feedback ID: 20201211-S4X99 Error when writing LocalSettings.ini. ``` 8:53:57, Frame 3320, EXCEPTION » IOException: Win32 IO returned 112. Path: C:\Users\XXXXXXXXXX\AppData\LocalLow\Chris McFarland\Tallowmere 2\SteamAccounts\XXXXXXXXXXXXXXX\LocalSettings.ini >>>>> CRITICAL ERROR >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> System.IO.FileStream.Dispose (System.Boolean disposing) System.IO.Stream.Close () System.IO.StreamWriter.Dispose (System.Boolean disposing) System.IO.TextWriter.Dispose () System.IO.File.WriteAllText (System.String path, System.String contents, System.Text.Encoding encoding) System.IO.File.WriteAllText (System.String path, System.String contents) T2.LocalSettings.WriteIniFileToDisk () T2.LocalSettings.ClearRejoinGameStuff () T2.RelayClient._OnConnectedToServer () T2.NetworkClient.OnPeerConnected (LiteNetLib.NetPeer newServerPeer) LiteNetLib.NetManager.ProcessEvent (LiteNetLib.NetManager+NetEvent evt) LiteNetLib.NetManager.PollEvents () T2.NetworkClient.Update () GameStates: ShowingTitleScreen, UsingMainMenu, InitialisingAsNetworkHost, ViewingAlertBox, HandshakingWithServer SystemPlayer InputDevice: XInput Controller ```
non_defect
ioexception localsettings writeinifiletodisk system io filestream dispose auto report feedback id error when writing localsettings ini frame exception » ioexception io returned path c users xxxxxxxxxx appdata locallow chris mcfarland tallowmere steamaccounts xxxxxxxxxxxxxxx localsettings ini critical error system io filestream dispose system boolean disposing system io stream close system io streamwriter dispose system boolean disposing system io textwriter dispose system io file writealltext system string path system string contents system text encoding encoding system io file writealltext system string path system string contents localsettings writeinifiletodisk localsettings clearrejoingamestuff relayclient onconnectedtoserver networkclient onpeerconnected litenetlib netpeer newserverpeer litenetlib netmanager processevent litenetlib netmanager netevent evt litenetlib netmanager pollevents networkclient update gamestates showingtitlescreen usingmainmenu initialisingasnetworkhost viewingalertbox handshakingwithserver systemplayer inputdevice xinput controller
0
84,397
3,664,365,380
IssuesEvent
2016-02-19 11:19:24
QualiMaster/QM-IConf
https://api.github.com/repos/QualiMaster/QM-IConf
closed
Turn Configuration model into regression test case (upon release)
-> RELEASE (High Priority) QM configuration model
Turn the actual version of the configuration model into a regression test case and ensure functionality of instantiation.
1.0
Turn Configuration model into regression test case (upon release) - Turn the actual version of the configuration model into a regression test case and ensure functionality of instantiation.
non_defect
turn configuration model into regression test case upon release turn the actual version of the configuration model into a regression test case and ensure functionality of instantiation
0
397,429
11,728,304,362
IssuesEvent
2020-03-10 17:19:28
Thorium-Sim/thorium
https://api.github.com/repos/Thorium-Sim/thorium
opened
Scripted sensor contacts disappear if moved
priority/high type/bug
### Requested By: ryananderson@telosu.com ### Priority: High ### Version: 2.7.0 If you use mission scripting to create a pre-layout of the sensor grid, then load it, then try to move the items on the sensor grid, some or all of them disappear. ### Steps to Reproduce 1. In mission scripting, create a pre-determined sensor layout. 2. Load that frame. 3. Select one of the pre-scripted contacts. 4. Watch things disappear.
1.0
Scripted sensor contacts disappear if moved - ### Requested By: ryananderson@telosu.com ### Priority: High ### Version: 2.7.0 If you use mission scripting to create a pre-layout of the sensor grid, then load it, then try to move the items on the sensor grid, some or all of them disappear. ### Steps to Reproduce 1. In mission scripting, create a pre-determined sensor layout. 2. Load that frame. 3. Select one of the pre-scripted contacts. 4. Watch things disappear.
non_defect
scripted sensor contacts disappear if moved requested by ryananderson telosu com priority high version if you use mission scripting to create a pre layout of the sensor grid then load it then try to move the items on the sensor grid some or all of them disappear steps to reproduce in mission scripting create a pre determined sensor layout load that frame select one of the pre scripted contacts watch things disappear
0
22,054
3,591,015,019
IssuesEvent
2016-02-01 09:44:26
Stripeberry/unlabeled
https://api.github.com/repos/Stripeberry/unlabeled
closed
Switch EditArea to Ace (https://github.com/ajaxorg/ace/)
auto-migrated Priority-Medium Type-Defect
``` Switch EditArea to Ace (https://github.com/ajaxorg/ace/) ``` Original issue reported on code.google.com by `stripeberry@gmail.com` on 30 Jan 2012 at 9:46
1.0
Switch EditArea to Ace (https://github.com/ajaxorg/ace/) - ``` Switch EditArea to Ace (https://github.com/ajaxorg/ace/) ``` Original issue reported on code.google.com by `stripeberry@gmail.com` on 30 Jan 2012 at 9:46
defect
switch editarea to ace switch editarea to ace original issue reported on code google com by stripeberry gmail com on jan at
1
62,308
12,199,112,236
IssuesEvent
2020-04-30 00:39:58
kwk/test-llvm-bz-import-5
https://api.github.com/repos/kwk/test-llvm-bz-import-5
closed
SlotIndexes::getInstructionIndex(const llvm::MachineInstr*) const: Assertion `itr != mi2iMap.end() && "Instruction not found in maps."' failed.
BZ-BUG-STATUS: RESOLVED BZ-RESOLUTION: FIXED dummy import from bugzilla libraries/Common Code Generator Code
This issue was imported from Bugzilla https://bugs.llvm.org/show_bug.cgi?id=15838.
2.0
SlotIndexes::getInstructionIndex(const llvm::MachineInstr*) const: Assertion `itr != mi2iMap.end() && "Instruction not found in maps."' failed. - This issue was imported from Bugzilla https://bugs.llvm.org/show_bug.cgi?id=15838.
non_defect
slotindexes getinstructionindex const llvm machineinstr const assertion itr end instruction not found in maps failed this issue was imported from bugzilla
0
27,239
4,939,809,678
IssuesEvent
2016-11-29 15:20:28
noctarius/chromebackspaceonlinux
https://api.github.com/repos/noctarius/chromebackspaceonlinux
closed
Steals backspace in editable PDFs on Mac
auto-migrated Priority-Medium Type-Defect
``` What steps will reproduce the problem? 1. Open http://mountainview.gov/documents/CM09Fillable.pdf 2. Try to fill in one of the fields 3. Press backspace What is the expected output? What do you see instead? Expected last typed character to disappear. Instead nothing happened. What version of the product are you using? On what operating system? Chrome 37 on OSX 10.9.4 Please provide any additional information below. ``` Original issue reported on code.google.com by `animeki...@gmail.com` on 5 Sep 2014 at 9:15
1.0
Steals backspace in editable PDFs on Mac - ``` What steps will reproduce the problem? 1. Open http://mountainview.gov/documents/CM09Fillable.pdf 2. Try to fill in one of the fields 3. Press backspace What is the expected output? What do you see instead? Expected last typed character to disappear. Instead nothing happened. What version of the product are you using? On what operating system? Chrome 37 on OSX 10.9.4 Please provide any additional information below. ``` Original issue reported on code.google.com by `animeki...@gmail.com` on 5 Sep 2014 at 9:15
defect
steals backspace in editable pdfs on mac what steps will reproduce the problem open try to fill in one of the fields press backspace what is the expected output what do you see instead expected last typed character to disappear instead nothing happened what version of the product are you using on what operating system chrome on osx please provide any additional information below original issue reported on code google com by animeki gmail com on sep at
1
8,184
2,611,469,914
IssuesEvent
2015-02-27 05:15:04
chrsmith/hedgewars
https://api.github.com/repos/chrsmith/hedgewars
closed
Ammo menu key assignment doesn't work on another player's turn
auto-migrated OpSys-All Priority-Medium Type-Defect
``` What steps will reproduce the problem? 1. Assign ammo menu to Alt key 2. When it's another player turn, you can call ammo menu only with right mouse click What is the expected output? What do you see instead? Alt key should work Please use labels and text to provide additional information. ``` Original issue reported on code.google.com by `unC0Rr` on 11 Feb 2011 at 1:10 * Blocking: #496
1.0
Ammo menu key assignment doesn't work on another player's turn - ``` What steps will reproduce the problem? 1. Assign ammo menu to Alt key 2. When it's another player turn, you can call ammo menu only with right mouse click What is the expected output? What do you see instead? Alt key should work Please use labels and text to provide additional information. ``` Original issue reported on code.google.com by `unC0Rr` on 11 Feb 2011 at 1:10 * Blocking: #496
defect
ammo menu key assignment doesn t work on another player s turn what steps will reproduce the problem assign ammo menu to alt key when it s another player turn you can call ammo menu only with right mouse click what is the expected output what do you see instead alt key should work please use labels and text to provide additional information original issue reported on code google com by on feb at blocking
1
822,626
30,880,049,483
IssuesEvent
2023-08-03 16:52:10
CodeYourFuture/Module-JS3
https://api.github.com/repos/CodeYourFuture/Module-JS3
closed
[PD] Resilience learning points and suggestions
🏕 Priority Mandatory 🐇 Size Small 📅 Week 3 Topic Confidence
### Coursework content Watch the following video and read the articles. How do you think CYF can improve the PD session on resilience? Share 5 of your own learning points and 3 new suggestions for us to make the session even better. - [Listening to shame](https://www.ted.com/talks/brene_brown_listening_to_shame/comments) - [Growth Mindset + Vulnerability](https://medium.com/teachers-on-fire/growth-mindset-vulnerability-c956512286) - [Failure, vulnerability, and the true nature of change](https://www.impactinternational.com/blog/2022/09/failure-vulnerability-and-true-nature-change) ### Estimated time in hours (PD has max 4 per week total) 1 ### What is the purpose of this assignment? This assignment will help you deepen your understanding of resilience, strengthen your skills, and support future cohorts that will attend CYF. ### How to submit Attach the link of your Google doc to this ticket on your board. ### Anything else? Optional video list: [The benefits of failure](https://www.ted.com/playlists/418/the_benefits_of_failure)
1.0
[PD] Resilience learning points and suggestions - ### Coursework content Watch the following video and read the articles. How do you think CYF can improve the PD session on resilience? Share 5 of your own learning points and 3 new suggestions for us to make the session even better. - [Listening to shame](https://www.ted.com/talks/brene_brown_listening_to_shame/comments) - [Growth Mindset + Vulnerability](https://medium.com/teachers-on-fire/growth-mindset-vulnerability-c956512286) - [Failure, vulnerability, and the true nature of change](https://www.impactinternational.com/blog/2022/09/failure-vulnerability-and-true-nature-change) ### Estimated time in hours (PD has max 4 per week total) 1 ### What is the purpose of this assignment? This assignment will help you deepen your understanding of resilience, strengthen your skills, and support future cohorts that will attend CYF. ### How to submit Attach the link of your Google doc to this ticket on your board. ### Anything else? Optional video list: [The benefits of failure](https://www.ted.com/playlists/418/the_benefits_of_failure)
non_defect
resilience learning points and suggestions coursework content watch the following video and read the articles how do you think cyf can improve the pd session on resilience share of your own learning points and new suggestions for us to make the session even better estimated time in hours pd has max per week total what is the purpose of this assignment this assignment will help you deepen your understanding of resilience strengthen your skills and support future cohorts that will attend cyf how to submit attach the link of your google doc to this ticket on your board anything else optional video list
0
66,258
20,105,754,907
IssuesEvent
2022-02-07 10:19:15
vector-im/element-web
https://api.github.com/repos/vector-im/element-web
closed
"Add existing room" opens "Create a room" modal
T-Defect X-Regression S-Major A-Spaces Z-IA Z-Labs
### Steps to reproduce 1. Create a private Space 2. Open the Space 3. Expand the left room list on the left (this is important, the collapsed version works fine) 4. Click on the Plus button next to your Space's name 5. Click on "Add existing room" ![Screenshot 2022-02-04 at 19-50-34 Element #matrix-backstage-devroom fosdem org](https://user-images.githubusercontent.com/10872136/152586629-a1f70bfb-2f41-451f-a88d-967d78e42471.png) ### Outcome #### What did you expect? A modal opens letting me pick one or many rooms that I'm joined to. #### What happened instead? The modal for creating a room opened. ![Peek 2022-02-04 19-49](https://user-images.githubusercontent.com/10872136/152586813-073615ab-8e62-4bbf-b4f7-a1896eacedce.gif) ### Operating system _No response_ ### Browser information Firefox Developer 97 ### URL for webapp https://app.element.io/ ### Application version Element version: 1.10.1 Olm version: 3.2.8 ### Homeserver matrix.org ### Will you send logs? No
1.0
"Add existing room" opens "Create a room" modal - ### Steps to reproduce 1. Create a private Space 2. Open the Space 3. Expand the left room list on the left (this is important, the collapsed version works fine) 4. Click on the Plus button next to your Space's name 5. Click on "Add existing room" ![Screenshot 2022-02-04 at 19-50-34 Element #matrix-backstage-devroom fosdem org](https://user-images.githubusercontent.com/10872136/152586629-a1f70bfb-2f41-451f-a88d-967d78e42471.png) ### Outcome #### What did you expect? A modal opens letting me pick one or many rooms that I'm joined to. #### What happened instead? The modal for creating a room opened. ![Peek 2022-02-04 19-49](https://user-images.githubusercontent.com/10872136/152586813-073615ab-8e62-4bbf-b4f7-a1896eacedce.gif) ### Operating system _No response_ ### Browser information Firefox Developer 97 ### URL for webapp https://app.element.io/ ### Application version Element version: 1.10.1 Olm version: 3.2.8 ### Homeserver matrix.org ### Will you send logs? No
defect
add existing room opens create a room modal steps to reproduce create a private space open the space expand the left room list on the left this is important the collapsed version works fine click on the plus button next to your space s name click on add existing room outcome what did you expect a modal opens letting me pick one or many rooms that i m joined to what happened instead the modal for creating a room opened operating system no response browser information firefox developer url for webapp application version element version olm version homeserver matrix org will you send logs no
1
52,362
13,224,700,552
IssuesEvent
2020-08-17 19:40:07
icecube-trac/tix4
https://api.github.com/repos/icecube-trac/tix4
opened
Steamshovel no longer working with simple python server (Trac #2110)
Incomplete Migration Migrated from Trac combo core defect
<details> <summary><em>Migrated from <a href="https://code.icecube.wisc.edu/projects/icecube/ticket/2110">https://code.icecube.wisc.edu/projects/icecube/ticket/2110</a>, reported by blaufussand owned by nega</em></summary> <p> ```json { "status": "closed", "changetime": "2019-03-28T13:52:47", "_ts": "1553781167065708", "description": "Here at UMD, we've had a TV running steamshovel, taking data from a cache directory\nand serving it out to any connected steamshovel. We currently still have a version of steamshovel from\nserveral years ago. An attempt to use with trunk/steamshovel fails to get any events into the \nsteamshovel for display.\n\nServer is a simple python tornado server:\nhttp://code.icecube.wisc.edu/svn/sandbox/blaufuss/i3tv_server/umd\nIncluded instructions are there for running and connecting via steamshovel\n\nI have a few example input files at UMD:\n~blaufuss/cache/\n\nConnection is reported, but no frames are available for display, or to play.\n\nIt seems some of the stubs of the socket: interface are there, but some pieces are missing. 
Seems\nto trace back to Hans's redo of shovelio\n\nOur current working system has a very old source:\nWorking Copy Root Path: /opt/i3display/ofsw/src/steamshovel\nURL: http://code.icecube.wisc.edu/svn/projects/steamshovel/trunk\nRelative URL: ^/projects/steamshovel/trunk\nRepository Root: http://code.icecube.wisc.edu/svn\nRepository UUID: 16731396-06f5-0310-8873-f7f720988828\nRevision: 129911\nNode Kind: directory\nSchedule: normal\nLast Changed Author: david.schultz\nLast Changed Rev: 129822\nLast Changed Date: 2015-03-05 00:14:01 -0500 (Thu, 05 Mar 2015)\n\n", "reporter": "blaufuss", "cc": "nega, david.schultz, blaufuss", "resolution": "wontfix", "time": "2017-11-03T20:04:01", "component": "combo core", "summary": "Steamshovel no longer working with simple python server", "priority": "normal", "keywords": "steamshovel sockets", "milestone": "Summer Solstice 2019", "owner": "nega", "type": "defect" } ``` </p> </details>
1.0
Steamshovel no longer working with simple python server (Trac #2110) - <details> <summary><em>Migrated from <a href="https://code.icecube.wisc.edu/projects/icecube/ticket/2110">https://code.icecube.wisc.edu/projects/icecube/ticket/2110</a>, reported by blaufussand owned by nega</em></summary> <p> ```json { "status": "closed", "changetime": "2019-03-28T13:52:47", "_ts": "1553781167065708", "description": "Here at UMD, we've had a TV running steamshovel, taking data from a cache directory\nand serving it out to any connected steamshovel. We currently still have a version of steamshovel from\nserveral years ago. An attempt to use with trunk/steamshovel fails to get any events into the \nsteamshovel for display.\n\nServer is a simple python tornado server:\nhttp://code.icecube.wisc.edu/svn/sandbox/blaufuss/i3tv_server/umd\nIncluded instructions are there for running and connecting via steamshovel\n\nI have a few example input files at UMD:\n~blaufuss/cache/\n\nConnection is reported, but no frames are available for display, or to play.\n\nIt seems some of the stubs of the socket: interface are there, but some pieces are missing. 
Seems\nto trace back to Hans's redo of shovelio\n\nOur current working system has a very old source:\nWorking Copy Root Path: /opt/i3display/ofsw/src/steamshovel\nURL: http://code.icecube.wisc.edu/svn/projects/steamshovel/trunk\nRelative URL: ^/projects/steamshovel/trunk\nRepository Root: http://code.icecube.wisc.edu/svn\nRepository UUID: 16731396-06f5-0310-8873-f7f720988828\nRevision: 129911\nNode Kind: directory\nSchedule: normal\nLast Changed Author: david.schultz\nLast Changed Rev: 129822\nLast Changed Date: 2015-03-05 00:14:01 -0500 (Thu, 05 Mar 2015)\n\n", "reporter": "blaufuss", "cc": "nega, david.schultz, blaufuss", "resolution": "wontfix", "time": "2017-11-03T20:04:01", "component": "combo core", "summary": "Steamshovel no longer working with simple python server", "priority": "normal", "keywords": "steamshovel sockets", "milestone": "Summer Solstice 2019", "owner": "nega", "type": "defect" } ``` </p> </details>
defect
steamshovel no longer working with simple python server trac migrated from json status closed changetime ts description here at umd we ve had a tv running steamshovel taking data from a cache directory nand serving it out to any connected steamshovel we currently still have a version of steamshovel from nserveral years ago an attempt to use with trunk steamshovel fails to get any events into the nsteamshovel for display n nserver is a simple python tornado server n instructions are there for running and connecting via steamshovel n ni have a few example input files at umd n blaufuss cache n nconnection is reported but no frames are available for display or to play n nit seems some of the stubs of the socket interface are there but some pieces are missing seems nto trace back to hans s redo of shovelio n nour current working system has a very old source nworking copy root path opt ofsw src steamshovel nurl url projects steamshovel trunk nrepository root uuid nrevision nnode kind directory nschedule normal nlast changed author david schultz nlast changed rev nlast changed date thu mar n n reporter blaufuss cc nega david schultz blaufuss resolution wontfix time component combo core summary steamshovel no longer working with simple python server priority normal keywords steamshovel sockets milestone summer solstice owner nega type defect
1
65,784
16,479,623,384
IssuesEvent
2021-05-24 09:53:23
Crocoblock/suggestions
https://api.github.com/repos/Crocoblock/suggestions
closed
Please add the previous and next mode to the back strap images of the single product image widget of the jet woobuilder plugin.
JetWooBuilder
It would be great if we could display individual images as sliders. Now, if the number of images is large, they are placed below each other, which creates a very bad user interface. If this feature is added to this great plugin, our needs will be largely met and there will be no need to install the jet product gallery plugin. Because the less the plugin is installed, the better. I will give you a similar image of this feature, I hope it will be added in future updates. https://prnt.sc/12zy8d6 please check. thanks
1.0
Please add the previous and next mode to the back strap images of the single product image widget of the jet woobuilder plugin. - It would be great if we could display individual images as sliders. Now, if the number of images is large, they are placed below each other, which creates a very bad user interface. If this feature is added to this great plugin, our needs will be largely met and there will be no need to install the jet product gallery plugin. Because the less the plugin is installed, the better. I will give you a similar image of this feature, I hope it will be added in future updates. https://prnt.sc/12zy8d6 please check. thanks
non_defect
please add the previous and next mode to the back strap images of the single product image widget of the jet woobuilder plugin it would be great if we could display individual images as sliders now if the number of images is large they are placed below each other which creates a very bad user interface if this feature is added to this great plugin our needs will be largely met and there will be no need to install the jet product gallery plugin because the less the plugin is installed the better i will give you a similar image of this feature i hope it will be added in future updates please check thanks
0
24,837
4,108,326,060
IssuesEvent
2016-06-06 15:46:42
Guake/guake
https://api.github.com/repos/Guake/guake
closed
No copy/paste with ctrl+shift+c/v in 0.7.2 or 0.8.3
Priority:Low Type: Defect
I cannot copy/paste anymore since upgrading from 0.4.4 (Ubuntu 14.04 latest package) to 0.8.3 and then back down to 0.7.2. Any ideas? Might be related to - #697
1.0
No copy/paste with ctrl+shift+c/v in 0.7.2 or 0.8.3 - I cannot copy/paste anymore since upgrading from 0.4.4 (Ubuntu 14.04 latest package) to 0.8.3 and then back down to 0.7.2. Any ideas? Might be related to - #697
defect
no copy paste with ctrl shift c v in or i cannot copy paste anymore since upgrading from ubuntu latest package to and then back down to any ideas might be related to
1
109,153
23,728,553,097
IssuesEvent
2022-08-30 22:20:51
EmaApps/emanote
https://api.github.com/repos/EmaApps/emanote
closed
Encoding issues in Nix install unless `LC_ALL` is set
documentation question unicode
**Describe the bug** I have a markdown file with file name in CJK characters. The generated html has name `������.html`. Also in terminal, ``` [Debug#WS.Client.01] ~~> Identity (Identity (Identity R[LMLType Md]:������)) ``` **To Reproduce** Steps to reproduce the behavior: 1. newly created project folder with only one file: `测试.md` 2. run `emanote` **Expected behavior** In terminal and browser, Chinese characters should be properly displayed. **Desktop (please complete the following information):** - Browser Chrome - Version latest
1.0
Encoding issues in Nix install unless `LC_ALL` is set - **Describe the bug** I have a markdown file with file name in CJK characters. The generated html has name `������.html`. Also in terminal, ``` [Debug#WS.Client.01] ~~> Identity (Identity (Identity R[LMLType Md]:������)) ``` **To Reproduce** Steps to reproduce the behavior: 1. newly created project folder with only one file: `测试.md` 2. run `emanote` **Expected behavior** In terminal and browser, Chinese characters should be properly displayed. **Desktop (please complete the following information):** - Browser Chrome - Version latest
non_defect
encoding issues in nix install unless lc all is set describe the bug i have a markdown file with file name in cjk characters the generated html has name ������ html also in terminal identity identity identity r ������ to reproduce steps to reproduce the behavior newly created project folder with only one file 测试 md run emanote expected behavior in terminal and browser chinese characters should be properly displayed desktop please complete the following information browser chrome version latest
0
286,992
21,631,792,874
IssuesEvent
2022-05-05 10:28:44
jOOQ/jOOQ
https://api.github.com/repos/jOOQ/jOOQ
closed
Document <implicitJoinPathsToOne/> flag
T: Enhancement C: Documentation P: Medium E: All Editions
The `<implicitJoinPathsToOne/>` flag hasn't been documented yet in the code generation section. Related flags include: - `<implicitJoinPathsToOne/>` (jOOQ 3.11+) - `<implicitJoinPathsUseTableNameForUnambiguousFKs/>` (jOOQ 3.17+) - `<implicitJoinPathsAsKotlinProperties/>` (jOOQ 3.17+)
1.0
Document <implicitJoinPathsToOne/> flag - The `<implicitJoinPathsToOne/>` flag hasn't been documented yet in the code generation section. Related flags include: - `<implicitJoinPathsToOne/>` (jOOQ 3.11+) - `<implicitJoinPathsUseTableNameForUnambiguousFKs/>` (jOOQ 3.17+) - `<implicitJoinPathsAsKotlinProperties/>` (jOOQ 3.17+)
non_defect
document flag the flag hasn t been documented yet in the code generation section related flags include jooq jooq jooq
0
73,608
24,716,545,465
IssuesEvent
2022-10-20 07:27:15
hyperledger/iroha
https://api.github.com/repos/hyperledger/iroha
closed
[BUG] Any information hasn't been propagated to the client when a client tries to register the account in the non-existent domain
Bug iroha2 Dev defect QA-confirmed
### GIT commit hash b783f10f ### Minimum working example 1. Install `scripts/test_env.sh setup` 2. Try to register the account in the non-existent domain `./iroha_client_cli account register --id="mad_hatter@looking_glass" --key="ed0120a753146e75b910ae5e2994dc8adea9e7d87e5d53024cfa310ce992f17106f92c"` ### Expected behaviour Error: This domain non-exist ### Actual behaviour Any information hasn't been propagated ```bash alexstrokelive@Aleksandrs-MacBook-Pro-9 test % ./iroha_client_cli account register --id="mad_hatter@looking_glass" --key="ed0120a753146e75b910ae5e2994dc8adea9e7d87e5d53024cfa310ce992f17106f92c" User: alice@wonderland {"PUBLIC_KEY":"ed01207233bfc89dcbd68c19fde6ce6158225298ec1131b6a130d1aeb454c1ab5183c0","PRIVATE_KEY":{"digest_function":"ed25519","payload":"9ac47abf59b356e0bd7dcbbbb4dec080e302156a48ca907e47cb6aea1d32719e7233bfc89dcbd68c19fde6ce6158225298ec1131b6a130d1aeb454c1ab5183c0"},"ACCOUNT_ID":"alice@wonderland","BASIC_AUTH":{"web_login":"mad_hatter","password":"ilovetea"},"TORII_API_URL":"http://127.0.0.1:8080","TORII_TELEMETRY_URL":"http://127.0.0.1:8180","TRANSACTION_TIME_TO_LIVE_MS":100000,"TRANSACTION_STATUS_TIMEOUT_MS":15000,"TRANSACTION_LIMITS":{"max_instruction_number":4096,"max_wasm_size_bytes":4194304},"ADD_TRANSACTION_NONCE":false} ``` ### Operating system MacOS ### Current environment Source code build ### Logs in JSON format <details> <summary>Log contents</summary> ```bash alexstrokelive@Aleksandrs-MacBook-Pro-9 test % ./iroha_client_cli account register --id="mad_hatter@looking_glass" --key="ed0120a753146e75b910ae5e2994dc8adea9e7d87e5d53024cfa310ce992f17106f92c" User: alice@wonderland 
{"PUBLIC_KEY":"ed01207233bfc89dcbd68c19fde6ce6158225298ec1131b6a130d1aeb454c1ab5183c0","PRIVATE_KEY":{"digest_function":"ed25519","payload":"9ac47abf59b356e0bd7dcbbbb4dec080e302156a48ca907e47cb6aea1d32719e7233bfc89dcbd68c19fde6ce6158225298ec1131b6a130d1aeb454c1ab5183c0"},"ACCOUNT_ID":"alice@wonderland","BASIC_AUTH":{"web_login":"mad_hatter","password":"ilovetea"},"TORII_API_URL":"http://127.0.0.1:8080","TORII_TELEMETRY_URL":"http://127.0.0.1:8180","TRANSACTION_TIME_TO_LIVE_MS":100000,"TRANSACTION_STATUS_TIMEOUT_MS":15000,"TRANSACTION_LIMITS":{"max_instruction_number":4096,"max_wasm_size_bytes":4194304},"ADD_TRANSACTION_NONCE":false} alexstrokelive@Aleksandrs-MacBook-Pro-9 test % cd .. alexstrokelive@Aleksandrs-MacBook-Pro-9 iroha % git rev-parse --short HEAD b783f10f alexstrokelive@Aleksandrs-MacBook-Pro-9 iroha % cat test/peers/iroha0/.log 2022-10-13T17:44:45.488734Z INFO iroha: Hyperledgerいろは2にようこそ! 2022-10-13T17:44:45.489024Z INFO iroha: (translation) Welcome to Hyperledger Iroha 2! 2022-10-13T17:44:45.503896Z INFO iroha: Starting peer listen_addr=127.0.0.1:1337 2022-10-13T17:44:45.503943Z INFO iroha_p2p::network: Binding listener listen_addr=127.0.0.1:1337 2022-10-13T17:44:45.504094Z INFO iroha_p2p::network: Starting network actor listen_addr=127.0.0.1:1337 2022-10-13T17:44:45.512890Z INFO iroha_core::kura: Loaded 0 blocks at init. 
2022-10-13T17:44:45.516972Z ERROR iroha: Telemetry did not start 2022-10-13T17:44:45.517047Z INFO handle{self=Network { peers: 0 } msg=Connected(Id { address: "127.0.0.1:56756", public_key: { digest: ed25519, payload: FACA9E8AA83225CB4D16D67F27DD4F93FC30FFA11ADC1F5C88FD5495ECC91020 } }, 15580931942437498738)}: iroha_p2p::network: Peer connected listen_addr=127.0.0.1:1337 count.new_peers=2 count.peers=1 2022-10-13T17:44:45.517561Z INFO iroha: Starting Iroha 2022-10-13T17:44:45.517811Z INFO handle{self=Network { peers: 1 } msg=Connected(Id { address: "127.0.0.1:56761", public_key: { digest: ed25519, payload: CC25624D62896D3A0BFD8940F928DC2ABF27CC57CEFEB442AA96D9081AAE58A1 } }, 16598413621405376918)}: iroha_p2p::network: Peer connected listen_addr=127.0.0.1:1337 count.new_peers=1 count.peers=2 2022-10-13T17:44:45.517917Z INFO handle{self=Network { peers: 2 } msg=Connected(Id { address: "127.0.0.1:56762", public_key: { digest: ed25519, payload: 8E351A70B6A603ED285D666B8D689B680865913BA03CE29FB7D13A166C4E7F1F } }, 2317764054872731552)}: iroha_p2p::network: Peer connected listen_addr=127.0.0.1:1337 count.new_peers=0 count.peers=3 2022-10-13T17:44:45.773034Z INFO run{shutdown_receiver=Receiver { inner: Some(Inner { state: State { is_complete: false, is_closed: false, is_rx_task_set: false, is_tx_task_set: false } }) }}: iroha_core::sumeragi::main_loop: Initializing iroha using the genesis block. 2022-10-13T17:44:45.774266Z INFO run{shutdown_receiver=Receiver { inner: Some(Inner { state: State { is_complete: false, is_closed: false, is_rx_task_set: false, is_tx_task_set: false } }) }}: iroha_core::sumeragi::main_loop: Publishing genesis block. block_hash=982529a65e89f1461fa366d09c914cfdbac3791944d3ebb51ddc208877144451 2022-10-13T17:44:45.783013Z INFO run{shutdown_receiver=Receiver { inner: Some(Inner { state: State { is_complete: false, is_closed: false, is_rx_task_set: false, is_tx_task_set: false } }) }}: iroha_core::sumeragi::main_loop: Created a block to commit. 
peer_role=ValidatingPeer block_hash=facf3307ad54f16f6c53610f0fcf7d34871ba07d5dfe113e4c4f5d5fe2a641a3 2022-10-13T17:44:45.790807Z ERROR run{shutdown_receiver=Receiver { inner: Some(Inner { state: State { is_complete: false, is_closed: false, is_rx_task_set: false, is_tx_task_set: false } }) }}: iroha_core::sumeragi::main_loop: Some events failed to be sent e=channel closed 2022-10-13T17:44:45.790879Z ERROR run{shutdown_receiver=Receiver { inner: Some(Inner { state: State { is_complete: false, is_closed: false, is_rx_task_set: false, is_tx_task_set: false } }) }}: iroha_core::sumeragi::main_loop: Some events failed to be sent e=channel closed 2022-10-13T17:44:45.791233Z INFO run{shutdown_receiver=Receiver { inner: Some(Inner { state: State { is_complete: false, is_closed: false, is_rx_task_set: false, is_tx_task_set: false } }) }}: iroha_core::sumeragi::main_loop: Committing block prev_peer_role=ValidatingPeer new_peer_role=ValidatingPeer new_block_height=1 block_hash=facf3307ad54f16f6c53610f0fcf7d34871ba07d5dfe113e4c4f5d5fe2a641a3 2022-10-13T17:45:36.656546Z INFO request{method=GET path=/pending_transactions version=HTTP/1.1 remote.addr=127.0.0.1:56764}: warp::filters::trace: processing request 2022-10-13T17:45:36.657142Z INFO request{method=GET path=/pending_transactions version=HTTP/1.1 remote.addr=127.0.0.1:56764}: warp::filters::trace: finished processing with success status=200 2022-10-13T17:45:36.763841Z INFO request{method=POST path=/transaction version=HTTP/1.1 remote.addr=127.0.0.1:56765}: warp::filters::trace: processing request 2022-10-13T17:45:36.765145Z INFO request{method=POST path=/transaction version=HTTP/1.1 remote.addr=127.0.0.1:56765}: warp::filters::trace: finished processing with success status=200 2022-10-13T17:45:36.767958Z INFO run{shutdown_receiver=Receiver { inner: Some(Inner { state: State { is_complete: false, is_closed: false, is_rx_task_set: false, is_tx_task_set: false } }) }}: iroha_core::sumeragi::main_loop: Forwarding tx to leader 
peer_addr=127.0.0.1:1337 peer_role=ValidatingPeer leader_addr=127.0.0.1:1339 tx_hash=80ec0156c463fa62559fa628a6215513197c68d7eca52a9a2024f51eee1332dd 2022-10-13T17:45:37.799717Z WARN run{shutdown_receiver=Receiver { inner: Some(Inner { state: State { is_complete: false, is_closed: false, is_rx_task_set: false, is_tx_task_set: false } }) }}: iroha_core::block: Transaction validation failed reason=Failed to execute instruction of type register: Failed to find. Failed to find domain: `looking_glass` caused_by=Some(InstructionExecutionFail { instruction: Register(RegisterBox { object: EvaluatesTo { expression: Raw(Identifiable(NewAccount(NewAccount { id: Id { name: "mad_hatter", domain_id: Id { name: "looking_glass" } }, signatories: {{ digest: ed25519, payload: A753146E75B910AE5E2994DC8ADEA9E7D87E5D53024CFA310CE992F17106F92C }}, metadata: Metadata { map: {} } }))), _value_type: PhantomData } }), reason: "Failed to find. Failed to find domain: `looking_glass`" }) 2022-10-13T17:45:37.801812Z INFO run{shutdown_receiver=Receiver { inner: Some(Inner { state: State { is_complete: false, is_closed: false, is_rx_task_set: false, is_tx_task_set: false } }) }}: iroha_core::sumeragi::main_loop: Signed block candidate peer_role=ValidatingPeer block_hash=e4965075b994239f5252de93c77fd4b514255e693e9249a44af6b59350576ad6 2022-10-13T17:45:37.812016Z ERROR run{shutdown_receiver=Receiver { inner: Some(Inner { state: State { is_complete: false, is_closed: false, is_rx_task_set: false, is_tx_task_set: false } }) }}: iroha_core::sumeragi::main_loop: Some events failed to be sent e=channel closed 2022-10-13T17:45:37.812100Z ERROR run{shutdown_receiver=Receiver { inner: Some(Inner { state: State { is_complete: false, is_closed: false, is_rx_task_set: false, is_tx_task_set: false } }) }}: iroha_core::sumeragi::main_loop: Some events failed to be sent e=channel closed 2022-10-13T17:45:37.812292Z INFO run{shutdown_receiver=Receiver { inner: Some(Inner { state: State { is_complete: false, 
is_closed: false, is_rx_task_set: false, is_tx_task_set: false } }) }}: iroha_core::sumeragi::main_loop: Committing block prev_peer_role=ValidatingPeer new_peer_role=Leader new_block_height=2 block_hash=e4965075b994239f5252de93c77fd4b514255e693e9249a44af6b59350576ad6 2022-10-13T17:46:10.543670Z INFO request{method=POST path=/query version=HTTP/1.1 remote.addr=127.0.0.1:56766}: warp::filters::trace: processing request 2022-10-13T17:46:10.545303Z INFO request{method=POST path=/query version=HTTP/1.1 remote.addr=127.0.0.1:56766}: warp::filters::trace: finished processing with success status=200 2022-10-13T17:46:36.515852Z INFO request{method=GET path=/pending_transactions version=HTTP/1.1 remote.addr=127.0.0.1:56767}: warp::filters::trace: processing request 2022-10-13T17:46:36.516645Z INFO request{method=GET path=/pending_transactions version=HTTP/1.1 remote.addr=127.0.0.1:56767}: warp::filters::trace: finished processing with success status=200 2022-10-13T17:46:36.622871Z INFO request{method=POST path=/transaction version=HTTP/1.1 remote.addr=127.0.0.1:56768}: warp::filters::trace: processing request 2022-10-13T17:46:36.623901Z INFO request{method=POST path=/transaction version=HTTP/1.1 remote.addr=127.0.0.1:56768}: warp::filters::trace: finished processing with success status=200 2022-10-13T17:46:37.627533Z INFO run{shutdown_receiver=Receiver { inner: Some(Inner { state: State { is_complete: false, is_closed: false, is_rx_task_set: false, is_tx_task_set: false } }) }}: iroha_core::sumeragi::main_loop: sumeragi Doing block with 1 txs. 2022-10-13T17:46:37.631277Z WARN run{shutdown_receiver=Receiver { inner: Some(Inner { state: State { is_complete: false, is_closed: false, is_rx_task_set: false, is_tx_task_set: false } }) }}: iroha_core::block: Transaction validation failed reason=Failed to execute instruction of type register: Failed to find. 
Failed to find domain: `looking_glass` caused_by=Some(InstructionExecutionFail { instruction: Register(RegisterBox { object: EvaluatesTo { expression: Raw(Identifiable(NewAccount(NewAccount { id: Id { name: "mad_hatter", domain_id: Id { name: "looking_glass" } }, signatories: {{ digest: ed25519, payload: A753146E75B910AE5E2994DC8ADEA9E7D87E5D53024CFA310CE992F17106F92C }}, metadata: Metadata { map: {} } }))), _value_type: PhantomData } }), reason: "Failed to find. Failed to find domain: `looking_glass`" }) 2022-10-13T17:46:37.667490Z ERROR run{shutdown_receiver=Receiver { inner: Some(Inner { state: State { is_complete: false, is_closed: false, is_rx_task_set: false, is_tx_task_set: false } }) }}: iroha_core::sumeragi::main_loop: Some events failed to be sent e=channel closed 2022-10-13T17:46:37.667586Z ERROR run{shutdown_receiver=Receiver { inner: Some(Inner { state: State { is_complete: false, is_closed: false, is_rx_task_set: false, is_tx_task_set: false } }) }}: iroha_core::sumeragi::main_loop: Some events failed to be sent e=channel closed 2022-10-13T17:46:37.667814Z INFO run{shutdown_receiver=Receiver { inner: Some(Inner { state: State { is_complete: false, is_closed: false, is_rx_task_set: false, is_tx_task_set: false } }) }}: iroha_core::sumeragi::main_loop: Committing block prev_peer_role=Leader new_peer_role=ValidatingPeer new_block_height=3 block_hash=64741f63fd8c58f4736f70f58b7bc85c204b0394162a634550723dcdee4f0025 ``` </details> ### Who can help? @astrokov7
1.0
[BUG] Any information hasn't been propagated to the client when a client tries to register the account in the non-existent domain - ### GIT commit hash b783f10f ### Minimum working example 1. Install `scripts/test_env.sh setup` 2. Try to register the account in the non-existent domain `./iroha_client_cli account register --id="mad_hatter@looking_glass" --key="ed0120a753146e75b910ae5e2994dc8adea9e7d87e5d53024cfa310ce992f17106f92c"` ### Expected behaviour Error: This domain non-exist ### Actual behaviour Any information hasn't been propagated ```bash alexstrokelive@Aleksandrs-MacBook-Pro-9 test % ./iroha_client_cli account register --id="mad_hatter@looking_glass" --key="ed0120a753146e75b910ae5e2994dc8adea9e7d87e5d53024cfa310ce992f17106f92c" User: alice@wonderland {"PUBLIC_KEY":"ed01207233bfc89dcbd68c19fde6ce6158225298ec1131b6a130d1aeb454c1ab5183c0","PRIVATE_KEY":{"digest_function":"ed25519","payload":"9ac47abf59b356e0bd7dcbbbb4dec080e302156a48ca907e47cb6aea1d32719e7233bfc89dcbd68c19fde6ce6158225298ec1131b6a130d1aeb454c1ab5183c0"},"ACCOUNT_ID":"alice@wonderland","BASIC_AUTH":{"web_login":"mad_hatter","password":"ilovetea"},"TORII_API_URL":"http://127.0.0.1:8080","TORII_TELEMETRY_URL":"http://127.0.0.1:8180","TRANSACTION_TIME_TO_LIVE_MS":100000,"TRANSACTION_STATUS_TIMEOUT_MS":15000,"TRANSACTION_LIMITS":{"max_instruction_number":4096,"max_wasm_size_bytes":4194304},"ADD_TRANSACTION_NONCE":false} ``` ### Operating system MacOS ### Current environment Source code build ### Logs in JSON format <details> <summary>Log contents</summary> ```bash alexstrokelive@Aleksandrs-MacBook-Pro-9 test % ./iroha_client_cli account register --id="mad_hatter@looking_glass" --key="ed0120a753146e75b910ae5e2994dc8adea9e7d87e5d53024cfa310ce992f17106f92c" User: alice@wonderland 
{"PUBLIC_KEY":"ed01207233bfc89dcbd68c19fde6ce6158225298ec1131b6a130d1aeb454c1ab5183c0","PRIVATE_KEY":{"digest_function":"ed25519","payload":"9ac47abf59b356e0bd7dcbbbb4dec080e302156a48ca907e47cb6aea1d32719e7233bfc89dcbd68c19fde6ce6158225298ec1131b6a130d1aeb454c1ab5183c0"},"ACCOUNT_ID":"alice@wonderland","BASIC_AUTH":{"web_login":"mad_hatter","password":"ilovetea"},"TORII_API_URL":"http://127.0.0.1:8080","TORII_TELEMETRY_URL":"http://127.0.0.1:8180","TRANSACTION_TIME_TO_LIVE_MS":100000,"TRANSACTION_STATUS_TIMEOUT_MS":15000,"TRANSACTION_LIMITS":{"max_instruction_number":4096,"max_wasm_size_bytes":4194304},"ADD_TRANSACTION_NONCE":false} alexstrokelive@Aleksandrs-MacBook-Pro-9 test % cd .. alexstrokelive@Aleksandrs-MacBook-Pro-9 iroha % git rev-parse --short HEAD b783f10f alexstrokelive@Aleksandrs-MacBook-Pro-9 iroha % cat test/peers/iroha0/.log 2022-10-13T17:44:45.488734Z INFO iroha: Hyperledgerいろは2にようこそ! 2022-10-13T17:44:45.489024Z INFO iroha: (translation) Welcome to Hyperledger Iroha 2! 2022-10-13T17:44:45.503896Z INFO iroha: Starting peer listen_addr=127.0.0.1:1337 2022-10-13T17:44:45.503943Z INFO iroha_p2p::network: Binding listener listen_addr=127.0.0.1:1337 2022-10-13T17:44:45.504094Z INFO iroha_p2p::network: Starting network actor listen_addr=127.0.0.1:1337 2022-10-13T17:44:45.512890Z INFO iroha_core::kura: Loaded 0 blocks at init. 
2022-10-13T17:44:45.516972Z ERROR iroha: Telemetry did not start 2022-10-13T17:44:45.517047Z INFO handle{self=Network { peers: 0 } msg=Connected(Id { address: "127.0.0.1:56756", public_key: { digest: ed25519, payload: FACA9E8AA83225CB4D16D67F27DD4F93FC30FFA11ADC1F5C88FD5495ECC91020 } }, 15580931942437498738)}: iroha_p2p::network: Peer connected listen_addr=127.0.0.1:1337 count.new_peers=2 count.peers=1 2022-10-13T17:44:45.517561Z INFO iroha: Starting Iroha 2022-10-13T17:44:45.517811Z INFO handle{self=Network { peers: 1 } msg=Connected(Id { address: "127.0.0.1:56761", public_key: { digest: ed25519, payload: CC25624D62896D3A0BFD8940F928DC2ABF27CC57CEFEB442AA96D9081AAE58A1 } }, 16598413621405376918)}: iroha_p2p::network: Peer connected listen_addr=127.0.0.1:1337 count.new_peers=1 count.peers=2 2022-10-13T17:44:45.517917Z INFO handle{self=Network { peers: 2 } msg=Connected(Id { address: "127.0.0.1:56762", public_key: { digest: ed25519, payload: 8E351A70B6A603ED285D666B8D689B680865913BA03CE29FB7D13A166C4E7F1F } }, 2317764054872731552)}: iroha_p2p::network: Peer connected listen_addr=127.0.0.1:1337 count.new_peers=0 count.peers=3 2022-10-13T17:44:45.773034Z INFO run{shutdown_receiver=Receiver { inner: Some(Inner { state: State { is_complete: false, is_closed: false, is_rx_task_set: false, is_tx_task_set: false } }) }}: iroha_core::sumeragi::main_loop: Initializing iroha using the genesis block. 2022-10-13T17:44:45.774266Z INFO run{shutdown_receiver=Receiver { inner: Some(Inner { state: State { is_complete: false, is_closed: false, is_rx_task_set: false, is_tx_task_set: false } }) }}: iroha_core::sumeragi::main_loop: Publishing genesis block. block_hash=982529a65e89f1461fa366d09c914cfdbac3791944d3ebb51ddc208877144451 2022-10-13T17:44:45.783013Z INFO run{shutdown_receiver=Receiver { inner: Some(Inner { state: State { is_complete: false, is_closed: false, is_rx_task_set: false, is_tx_task_set: false } }) }}: iroha_core::sumeragi::main_loop: Created a block to commit. 
peer_role=ValidatingPeer block_hash=facf3307ad54f16f6c53610f0fcf7d34871ba07d5dfe113e4c4f5d5fe2a641a3 2022-10-13T17:44:45.790807Z ERROR run{shutdown_receiver=Receiver { inner: Some(Inner { state: State { is_complete: false, is_closed: false, is_rx_task_set: false, is_tx_task_set: false } }) }}: iroha_core::sumeragi::main_loop: Some events failed to be sent e=channel closed 2022-10-13T17:44:45.790879Z ERROR run{shutdown_receiver=Receiver { inner: Some(Inner { state: State { is_complete: false, is_closed: false, is_rx_task_set: false, is_tx_task_set: false } }) }}: iroha_core::sumeragi::main_loop: Some events failed to be sent e=channel closed 2022-10-13T17:44:45.791233Z INFO run{shutdown_receiver=Receiver { inner: Some(Inner { state: State { is_complete: false, is_closed: false, is_rx_task_set: false, is_tx_task_set: false } }) }}: iroha_core::sumeragi::main_loop: Committing block prev_peer_role=ValidatingPeer new_peer_role=ValidatingPeer new_block_height=1 block_hash=facf3307ad54f16f6c53610f0fcf7d34871ba07d5dfe113e4c4f5d5fe2a641a3 2022-10-13T17:45:36.656546Z INFO request{method=GET path=/pending_transactions version=HTTP/1.1 remote.addr=127.0.0.1:56764}: warp::filters::trace: processing request 2022-10-13T17:45:36.657142Z INFO request{method=GET path=/pending_transactions version=HTTP/1.1 remote.addr=127.0.0.1:56764}: warp::filters::trace: finished processing with success status=200 2022-10-13T17:45:36.763841Z INFO request{method=POST path=/transaction version=HTTP/1.1 remote.addr=127.0.0.1:56765}: warp::filters::trace: processing request 2022-10-13T17:45:36.765145Z INFO request{method=POST path=/transaction version=HTTP/1.1 remote.addr=127.0.0.1:56765}: warp::filters::trace: finished processing with success status=200 2022-10-13T17:45:36.767958Z INFO run{shutdown_receiver=Receiver { inner: Some(Inner { state: State { is_complete: false, is_closed: false, is_rx_task_set: false, is_tx_task_set: false } }) }}: iroha_core::sumeragi::main_loop: Forwarding tx to leader 
peer_addr=127.0.0.1:1337 peer_role=ValidatingPeer leader_addr=127.0.0.1:1339 tx_hash=80ec0156c463fa62559fa628a6215513197c68d7eca52a9a2024f51eee1332dd 2022-10-13T17:45:37.799717Z WARN run{shutdown_receiver=Receiver { inner: Some(Inner { state: State { is_complete: false, is_closed: false, is_rx_task_set: false, is_tx_task_set: false } }) }}: iroha_core::block: Transaction validation failed reason=Failed to execute instruction of type register: Failed to find. Failed to find domain: `looking_glass` caused_by=Some(InstructionExecutionFail { instruction: Register(RegisterBox { object: EvaluatesTo { expression: Raw(Identifiable(NewAccount(NewAccount { id: Id { name: "mad_hatter", domain_id: Id { name: "looking_glass" } }, signatories: {{ digest: ed25519, payload: A753146E75B910AE5E2994DC8ADEA9E7D87E5D53024CFA310CE992F17106F92C }}, metadata: Metadata { map: {} } }))), _value_type: PhantomData } }), reason: "Failed to find. Failed to find domain: `looking_glass`" }) 2022-10-13T17:45:37.801812Z INFO run{shutdown_receiver=Receiver { inner: Some(Inner { state: State { is_complete: false, is_closed: false, is_rx_task_set: false, is_tx_task_set: false } }) }}: iroha_core::sumeragi::main_loop: Signed block candidate peer_role=ValidatingPeer block_hash=e4965075b994239f5252de93c77fd4b514255e693e9249a44af6b59350576ad6 2022-10-13T17:45:37.812016Z ERROR run{shutdown_receiver=Receiver { inner: Some(Inner { state: State { is_complete: false, is_closed: false, is_rx_task_set: false, is_tx_task_set: false } }) }}: iroha_core::sumeragi::main_loop: Some events failed to be sent e=channel closed 2022-10-13T17:45:37.812100Z ERROR run{shutdown_receiver=Receiver { inner: Some(Inner { state: State { is_complete: false, is_closed: false, is_rx_task_set: false, is_tx_task_set: false } }) }}: iroha_core::sumeragi::main_loop: Some events failed to be sent e=channel closed 2022-10-13T17:45:37.812292Z INFO run{shutdown_receiver=Receiver { inner: Some(Inner { state: State { is_complete: false, 
is_closed: false, is_rx_task_set: false, is_tx_task_set: false } }) }}: iroha_core::sumeragi::main_loop: Committing block prev_peer_role=ValidatingPeer new_peer_role=Leader new_block_height=2 block_hash=e4965075b994239f5252de93c77fd4b514255e693e9249a44af6b59350576ad6 2022-10-13T17:46:10.543670Z INFO request{method=POST path=/query version=HTTP/1.1 remote.addr=127.0.0.1:56766}: warp::filters::trace: processing request 2022-10-13T17:46:10.545303Z INFO request{method=POST path=/query version=HTTP/1.1 remote.addr=127.0.0.1:56766}: warp::filters::trace: finished processing with success status=200 2022-10-13T17:46:36.515852Z INFO request{method=GET path=/pending_transactions version=HTTP/1.1 remote.addr=127.0.0.1:56767}: warp::filters::trace: processing request 2022-10-13T17:46:36.516645Z INFO request{method=GET path=/pending_transactions version=HTTP/1.1 remote.addr=127.0.0.1:56767}: warp::filters::trace: finished processing with success status=200 2022-10-13T17:46:36.622871Z INFO request{method=POST path=/transaction version=HTTP/1.1 remote.addr=127.0.0.1:56768}: warp::filters::trace: processing request 2022-10-13T17:46:36.623901Z INFO request{method=POST path=/transaction version=HTTP/1.1 remote.addr=127.0.0.1:56768}: warp::filters::trace: finished processing with success status=200 2022-10-13T17:46:37.627533Z INFO run{shutdown_receiver=Receiver { inner: Some(Inner { state: State { is_complete: false, is_closed: false, is_rx_task_set: false, is_tx_task_set: false } }) }}: iroha_core::sumeragi::main_loop: sumeragi Doing block with 1 txs. 2022-10-13T17:46:37.631277Z WARN run{shutdown_receiver=Receiver { inner: Some(Inner { state: State { is_complete: false, is_closed: false, is_rx_task_set: false, is_tx_task_set: false } }) }}: iroha_core::block: Transaction validation failed reason=Failed to execute instruction of type register: Failed to find. 
Failed to find domain: `looking_glass` caused_by=Some(InstructionExecutionFail { instruction: Register(RegisterBox { object: EvaluatesTo { expression: Raw(Identifiable(NewAccount(NewAccount { id: Id { name: "mad_hatter", domain_id: Id { name: "looking_glass" } }, signatories: {{ digest: ed25519, payload: A753146E75B910AE5E2994DC8ADEA9E7D87E5D53024CFA310CE992F17106F92C }}, metadata: Metadata { map: {} } }))), _value_type: PhantomData } }), reason: "Failed to find. Failed to find domain: `looking_glass`" }) 2022-10-13T17:46:37.667490Z ERROR run{shutdown_receiver=Receiver { inner: Some(Inner { state: State { is_complete: false, is_closed: false, is_rx_task_set: false, is_tx_task_set: false } }) }}: iroha_core::sumeragi::main_loop: Some events failed to be sent e=channel closed 2022-10-13T17:46:37.667586Z ERROR run{shutdown_receiver=Receiver { inner: Some(Inner { state: State { is_complete: false, is_closed: false, is_rx_task_set: false, is_tx_task_set: false } }) }}: iroha_core::sumeragi::main_loop: Some events failed to be sent e=channel closed 2022-10-13T17:46:37.667814Z INFO run{shutdown_receiver=Receiver { inner: Some(Inner { state: State { is_complete: false, is_closed: false, is_rx_task_set: false, is_tx_task_set: false } }) }}: iroha_core::sumeragi::main_loop: Committing block prev_peer_role=Leader new_peer_role=ValidatingPeer new_block_height=3 block_hash=64741f63fd8c58f4736f70f58b7bc85c204b0394162a634550723dcdee4f0025 ``` </details> ### Who can help? @astrokov7
defect
any information hasn t been propagated to the client when a client tries to register the account in the non existent domain git commit hash minimum working example install scripts test env sh setup try to register the account in the non existent domain iroha client cli account register id mad hatter looking glass key expected behaviour error this domain non exist actual behaviour any information hasn t been propagated bash alexstrokelive aleksandrs macbook pro test iroha client cli account register id mad hatter looking glass key user alice wonderland public key private key digest function payload account id alice wonderland basic auth web login mad hatter password ilovetea torii api url operating system macos current environment source code build logs in json format log contents bash alexstrokelive aleksandrs macbook pro test iroha client cli account register id mad hatter looking glass key user alice wonderland public key private key digest function payload account id alice wonderland basic auth web login mad hatter password ilovetea torii api url alexstrokelive aleksandrs macbook pro test cd alexstrokelive aleksandrs macbook pro iroha git rev parse short head alexstrokelive aleksandrs macbook pro iroha cat test peers log info iroha ! 
info iroha translation welcome to hyperledger iroha info iroha starting peer listen addr info iroha network binding listener listen addr info iroha network starting network actor listen addr info iroha core kura loaded blocks at init error iroha telemetry did not start info handle self network peers msg connected id address public key digest payload iroha network peer connected listen addr count new peers count peers info iroha starting iroha info handle self network peers msg connected id address public key digest payload iroha network peer connected listen addr count new peers count peers info handle self network peers msg connected id address public key digest payload iroha network peer connected listen addr count new peers count peers info run shutdown receiver receiver inner some inner state state is complete false is closed false is rx task set false is tx task set false iroha core sumeragi main loop initializing iroha using the genesis block info run shutdown receiver receiver inner some inner state state is complete false is closed false is rx task set false is tx task set false iroha core sumeragi main loop publishing genesis block block hash info run shutdown receiver receiver inner some inner state state is complete false is closed false is rx task set false is tx task set false iroha core sumeragi main loop created a block to commit peer role validatingpeer block hash error run shutdown receiver receiver inner some inner state state is complete false is closed false is rx task set false is tx task set false iroha core sumeragi main loop some events failed to be sent e channel closed error run shutdown receiver receiver inner some inner state state is complete false is closed false is rx task set false is tx task set false iroha core sumeragi main loop some events failed to be sent e channel closed info run shutdown receiver receiver inner some inner state state is complete false is closed false is rx task set false is tx task set false iroha core 
sumeragi main loop committing block prev peer role validatingpeer new peer role validatingpeer new block height block hash info request method get path pending transactions version http remote addr warp filters trace processing request info request method get path pending transactions version http remote addr warp filters trace finished processing with success status info request method post path transaction version http remote addr warp filters trace processing request info request method post path transaction version http remote addr warp filters trace finished processing with success status info run shutdown receiver receiver inner some inner state state is complete false is closed false is rx task set false is tx task set false iroha core sumeragi main loop forwarding tx to leader peer addr peer role validatingpeer leader addr tx hash warn run shutdown receiver receiver inner some inner state state is complete false is closed false is rx task set false is tx task set false iroha core block transaction validation failed reason failed to execute instruction of type register failed to find failed to find domain looking glass caused by some instructionexecutionfail instruction register registerbox object evaluatesto expression raw identifiable newaccount newaccount id id name mad hatter domain id id name looking glass signatories digest payload metadata metadata map value type phantomdata reason failed to find failed to find domain looking glass info run shutdown receiver receiver inner some inner state state is complete false is closed false is rx task set false is tx task set false iroha core sumeragi main loop signed block candidate peer role validatingpeer block hash error run shutdown receiver receiver inner some inner state state is complete false is closed false is rx task set false is tx task set false iroha core sumeragi main loop some events failed to be sent e channel closed error run shutdown receiver receiver inner some inner state state is complete 
false is closed false is rx task set false is tx task set false iroha core sumeragi main loop some events failed to be sent e channel closed info run shutdown receiver receiver inner some inner state state is complete false is closed false is rx task set false is tx task set false iroha core sumeragi main loop committing block prev peer role validatingpeer new peer role leader new block height block hash info request method post path query version http remote addr warp filters trace processing request info request method post path query version http remote addr warp filters trace finished processing with success status info request method get path pending transactions version http remote addr warp filters trace processing request info request method get path pending transactions version http remote addr warp filters trace finished processing with success status info request method post path transaction version http remote addr warp filters trace processing request info request method post path transaction version http remote addr warp filters trace finished processing with success status info run shutdown receiver receiver inner some inner state state is complete false is closed false is rx task set false is tx task set false iroha core sumeragi main loop sumeragi doing block with txs warn run shutdown receiver receiver inner some inner state state is complete false is closed false is rx task set false is tx task set false iroha core block transaction validation failed reason failed to execute instruction of type register failed to find failed to find domain looking glass caused by some instructionexecutionfail instruction register registerbox object evaluatesto expression raw identifiable newaccount newaccount id id name mad hatter domain id id name looking glass signatories digest payload metadata metadata map value type phantomdata reason failed to find failed to find domain looking glass error run shutdown receiver receiver inner some inner state state is 
complete false is closed false is rx task set false is tx task set false iroha core sumeragi main loop some events failed to be sent e channel closed error run shutdown receiver receiver inner some inner state state is complete false is closed false is rx task set false is tx task set false iroha core sumeragi main loop some events failed to be sent e channel closed info run shutdown receiver receiver inner some inner state state is complete false is closed false is rx task set false is tx task set false iroha core sumeragi main loop committing block prev peer role leader new peer role validatingpeer new block height block hash who can help
1
69,115
22,164,494,077
IssuesEvent
2022-06-05 01:56:42
naev/naev
https://api.github.com/repos/naev/naev
closed
Normal news should not appear in Thurion / Proteron Space
Type-Defect Priority-Low
Probably need to have completely custom Thurion / Proteron news for immersion reasons.
1.0
Normal news should not appear in Thurion / Proteron Space - Probably need to have completely custom Thurion / Proteron news for immersion reasons.
defect
normal news should not appear in thurion proteron space probably need to have completely custom thurion proteron news for immersion reasons
1
180,666
21,625,810,295
IssuesEvent
2022-05-05 01:52:36
faizulho/sanity-gatsby-blog
https://api.github.com/repos/faizulho/sanity-gatsby-blog
closed
CVE-2019-16769 (Medium) detected in serialize-javascript-1.9.1.tgz - autoclosed
security vulnerability
## CVE-2019-16769 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>serialize-javascript-1.9.1.tgz</b></p></summary> <p>Serialize JavaScript to a superset of JSON that includes regular expressions and functions.</p> <p>Library home page: <a href="https://registry.npmjs.org/serialize-javascript/-/serialize-javascript-1.9.1.tgz">https://registry.npmjs.org/serialize-javascript/-/serialize-javascript-1.9.1.tgz</a></p> <p>Path to dependency file: /tmp/ws-scm/sanity-gatsby-blog/web/package.json</p> <p>Path to vulnerable library: /tmp/ws-scm/sanity-gatsby-blog/web/node_modules/serialize-javascript/package.json</p> <p> Dependency Hierarchy: - gatsby-2.15.9.tgz (Root Library) - terser-webpack-plugin-1.4.1.tgz - :x: **serialize-javascript-1.9.1.tgz** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/faizulho/sanity-gatsby-blog/commit/79771ad30d8e68c487b064fabb669d2057bec868">79771ad30d8e68c487b064fabb669d2057bec868</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary> <p> The serialize-javascript npm package before version 2.1.1 is vulnerable to Cross-site Scripting (XSS). It does not properly mitigate against unsafe characters in serialized regular expressions. This vulnerability is not affected on Node.js environment since Node.js's implementation of RegExp.prototype.toString() backslash-escapes all forward slashes in regular expressions. If serialized data of regular expression objects are used in an environment other than Node.js, it is affected by this vulnerability. 
<p>Publish Date: 2019-12-05 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-16769>CVE-2019-16769</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.4</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: Low - User Interaction: Required - Scope: Changed - Impact Metrics: - Confidentiality Impact: Low - Integrity Impact: Low - Availability Impact: None </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-16769">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-16769</a></p> <p>Release Date: 2019-12-05</p> <p>Fix Resolution: v2.1.1</p> </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
True
CVE-2019-16769 (Medium) detected in serialize-javascript-1.9.1.tgz - autoclosed - ## CVE-2019-16769 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>serialize-javascript-1.9.1.tgz</b></p></summary> <p>Serialize JavaScript to a superset of JSON that includes regular expressions and functions.</p> <p>Library home page: <a href="https://registry.npmjs.org/serialize-javascript/-/serialize-javascript-1.9.1.tgz">https://registry.npmjs.org/serialize-javascript/-/serialize-javascript-1.9.1.tgz</a></p> <p>Path to dependency file: /tmp/ws-scm/sanity-gatsby-blog/web/package.json</p> <p>Path to vulnerable library: /tmp/ws-scm/sanity-gatsby-blog/web/node_modules/serialize-javascript/package.json</p> <p> Dependency Hierarchy: - gatsby-2.15.9.tgz (Root Library) - terser-webpack-plugin-1.4.1.tgz - :x: **serialize-javascript-1.9.1.tgz** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/faizulho/sanity-gatsby-blog/commit/79771ad30d8e68c487b064fabb669d2057bec868">79771ad30d8e68c487b064fabb669d2057bec868</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary> <p> The serialize-javascript npm package before version 2.1.1 is vulnerable to Cross-site Scripting (XSS). It does not properly mitigate against unsafe characters in serialized regular expressions. This vulnerability is not affected on Node.js environment since Node.js's implementation of RegExp.prototype.toString() backslash-escapes all forward slashes in regular expressions. If serialized data of regular expression objects are used in an environment other than Node.js, it is affected by this vulnerability. 
<p>Publish Date: 2019-12-05 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-16769>CVE-2019-16769</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.4</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: Low - User Interaction: Required - Scope: Changed - Impact Metrics: - Confidentiality Impact: Low - Integrity Impact: Low - Availability Impact: None </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-16769">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-16769</a></p> <p>Release Date: 2019-12-05</p> <p>Fix Resolution: v2.1.1</p> </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
non_defect
cve medium detected in serialize javascript tgz autoclosed cve medium severity vulnerability vulnerable library serialize javascript tgz serialize javascript to a superset of json that includes regular expressions and functions library home page a href path to dependency file tmp ws scm sanity gatsby blog web package json path to vulnerable library tmp ws scm sanity gatsby blog web node modules serialize javascript package json dependency hierarchy gatsby tgz root library terser webpack plugin tgz x serialize javascript tgz vulnerable library found in head commit a href vulnerability details the serialize javascript npm package before version is vulnerable to cross site scripting xss it does not properly mitigate against unsafe characters in serialized regular expressions this vulnerability is not affected on node js environment since node js s implementation of regexp prototype tostring backslash escapes all forward slashes in regular expressions if serialized data of regular expression objects are used in an environment other than node js it is affected by this vulnerability publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required low user interaction required scope changed impact metrics confidentiality impact low integrity impact low availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with whitesource
0
10,302
2,622,141,417
IssuesEvent
2015-03-04 00:02:11
byzhang/spserver
https://api.github.com/repos/byzhang/spserver
opened
testecho killed by ubuntu
auto-migrated Priority-Medium Type-Defect
``` If you are not sure whether this is a bug, please go to http://groups.google.com/group/spserver to request help. What steps will reproduce the problem? 1. run testecho(change sp_session uint16_t to unsigned int). 2. use a multi-thread client in other client to do connect, send,recv (don't close fd until program exit. 3. about 100,000 connections, ubuntu looks very slowly, and killed testecho. I use top. there is 700M memory free(Total 2G), and CPU only takes 30%. What is the expected output? What do you see instead? work normally. testecho was killed by system. What version of the product are you using? On what operating system? 0.9.5 under ubuntu 10.10 Please provide any additional information below. Jul 18 21:00:24 opensips kernel: [10762.781159] Mem-Info: Jul 18 21:00:24 opensips kernel: [10762.781160] DMA per-cpu: Jul 18 21:00:24 opensips kernel: [10762.781162] CPU 0: hi: 0, btch: 1 usd: 0 Jul 18 21:00:24 opensips kernel: [10762.781164] CPU 1: hi: 0, btch: 1 usd: 0 Jul 18 21:00:24 opensips kernel: [10762.781166] Normal per-cpu: Jul 18 21:00:24 opensips kernel: [10762.781168] CPU 0: hi: 186, btch: 31 usd: 40 Jul 18 21:00:24 opensips kernel: [10762.781170] CPU 1: hi: 186, btch: 31 usd: 30 Jul 18 21:00:24 opensips kernel: [10762.781171] HighMem per-cpu: Jul 18 21:00:24 opensips kernel: [10762.781173] CPU 0: hi: 186, btch: 31 usd: 155 Jul 18 21:00:24 opensips kernel: [10762.781175] CPU 1: hi: 186, btch: 31 usd: 21 Jul 18 21:00:24 opensips kernel: [10762.781178] active_anon:46948 inactive_anon:16673 isolated_anon:0 Jul 18 21:00:24 opensips kernel: [10762.781180] active_file:11359 inactive_file:20878 isolated_file:0 Jul 18 21:00:24 opensips kernel: [10762.781181] unevictable:0 dirty:14 writeback:0 unstable:0 Jul 18 21:00:24 opensips kernel: [10762.781182] free:190739 slab_reclaimable:19024 slab_unreclaimable:57150 Jul 18 21:00:24 opensips kernel: [10762.781183] mapped:13250 shmem:9981 pagetables:1152 bounce:0 Jul 18 21:00:24 opensips kernel: [10762.781188] DMA 
free:3524kB min:64kB low:80kB high:96kB active_anon:0kB inactive_anon:0kB active_file:0kB inactive_file:0kB unevictable:0kB isolated(anon):0kB isolated(file):0kB present:15808kB mlocked:0kB dirty:0kB writeback:0kB mapped:0kB shmem:0kB slab_reclaimable:356kB slab_unreclaimable:1084kB kernel_stack:0kB pagetables:0kB unstable:0kB bounce:0kB writeback_tmp:0kB pages_scanned:0 all_unreclaimable? yes Jul 18 21:00:24 opensips kernel: [10762.781194] lowmem_reserve[]: 0 865 1980 1980 Jul 18 21:00:24 opensips kernel: [10762.781201] Normal free:3664kB min:3728kB low:4660kB high:5592kB active_anon:0kB inactive_anon:0kB active_file:220kB inactive_file:40kB unevictable:0kB isolated(anon):0kB isolated(file):0kB present:885944kB mlocked:0kB dirty:0kB writeback:0kB mapped:4kB shmem:0kB slab_reclaimable:75740kB slab_unreclaimable:227516kB kernel_stack:2456kB pagetables:0kB unstable:0kB bounce:0kB writeback_tmp:0kB pages_scanned:450 all_unreclaimable? yes Jul 18 21:00:24 opensips kernel: [10762.781207] lowmem_reserve[]: 0 0 8922 8922 Jul 18 21:00:24 opensips kernel: [10762.781214] HighMem free:755768kB min:512kB low:1712kB high:2912kB active_anon:187792kB inactive_anon:66692kB active_file:45216kB inactive_file:83472kB unevictable:0kB isolated(anon):0kB isolated(file):0kB present:1142020kB mlocked:0kB dirty:56kB writeback:0kB mapped:52996kB shmem:39924kB slab_reclaimable:0kB slab_unreclaimable:0kB kernel_stack:0kB pagetables:4608kB unstable:0kB bounce:0kB writeback_tmp:0kB pages_scanned:0 all_unreclaimable? 
no Jul 18 21:00:24 opensips kernel: [10762.781220] lowmem_reserve[]: 0 0 0 0 Jul 18 21:00:24 opensips kernel: [10762.781223] DMA: 5*4kB 4*8kB 5*16kB 10*32kB 14*64kB 17*128kB 0*256kB 0*512kB 0*1024kB 0*2048kB 0*4096kB = 3524kB Jul 18 21:00:24 opensips kernel: [10762.781232] Normal: 228*4kB 66*8kB 5*16kB 1*32kB 1*64kB 0*128kB 2*256kB 1*512kB 1*1024kB 0*2048kB 0*4096kB = 3664kB Jul 18 21:00:24 opensips kernel: [10762.781239] HighMem: 564*4kB 637*8kB 784*16kB 488*32kB 364*64kB 101*128kB 40*256kB 14*512kB 5*1024kB 3*2048kB 160*4096kB = 755768kB Jul 18 21:00:24 opensips kernel: [10762.781248] 42223 total pagecache pages Jul 18 21:00:24 opensips kernel: [10762.781249] 0 pages in swap cache Jul 18 21:00:24 opensips kernel: [10762.781251] Swap cache stats: add 0, delete 0, find 0/0 Jul 18 21:00:24 opensips kernel: [10762.781253] Free swap = 6141948kB Jul 18 21:00:24 opensips kernel: [10762.781254] Total swap = 6141948kB Jul 18 21:00:24 opensips kernel: [10762.781254] Total swap = 6141948kB Jul 18 21:00:24 opensips kernel: [10762.784591] 515079 pages RAM Jul 18 21:00:24 opensips kernel: [10762.784593] 287754 pages HighMem Jul 18 21:00:24 opensips kernel: [10762.784595] 9031 pages reserved Jul 18 21:00:24 opensips kernel: [10762.784596] 82449 pages shared Jul 18 21:00:24 opensips kernel: [10762.784597] 294015 pages non-shared Jul 18 21:00:24 opensips kernel: [10762.784600] Out of memory: kill process 1369 (gnome-session) score 65119 or a child Jul 18 21:00:24 opensips kernel: [10762.784604] Killed process 1521 (gnome-panel) vsz:86088kB, anon-rss:3644kB, file-rss:13840kB Jul 18 21:00:24 opensips kernel: [10762.855130] Xorg invoked oom-killer: gfp_mask=0xd0, order=0, oom_adj=0 Jul 18 21:00:24 opensips kernel: [10762.855134] Xorg cpuset=/ mems_allowed=0 Jul 18 21:00:24 opensips kernel: [10762.855137] Pid: 1049, comm: Xorg Not tainted 2.6.35-30-generic #54-Ubuntu Jul 18 21:00:24 opensips kernel: [10762.855139] Call Trace: Jul 18 21:00:24 opensips kernel: [10762.855146] 
[<c01dd51a>] dump_header+0x7a/0xb0 Jul 18 21:00:24 opensips kernel: [10762.855149] [<c01dd5ac>] oom_kill_process+0x5c/0x160 Jul 18 21:00:24 opensips kernel: [10762.855151] [<c01ddb19>] ? select_bad_process+0xa9/0xe0 Jul 18 21:00:24 opensips kernel: [10762.855154] [<c01ddba1>] __out_of_memory+0x51/0xb0 Jul 18 21:00:24 opensips kernel: [10762.855156] [<c01ddc58>] out_of_memory+0x58/0xd0 Jul 18 21:00:24 opensips kernel: [10762.855159] [<c01e0b86>] __alloc_pages_slowpath+0x496/0x4b0 Jul 18 21:00:24 opensips kernel: [10762.855162] [<c01e0d0f>] __alloc_pages_nodemask+0x16f/0x1c0 Jul 18 21:00:24 opensips kernel: [10762.855164] [<c01e0d7c>] __get_free_pages+0x1c/0x30 Jul 18 21:00:24 opensips kernel: [10762.855167] [<c02299c1>] __pollwait+0xa1/0xe0 Jul 18 21:00:24 opensips kernel: [10762.855169] [<c0229985>] ? __pollwait+0x65/0xe0 Jul 18 21:00:24 opensips kernel: [10762.855173] [<c0571b1c>] unix_poll+0x1c/0xa0 Jul 18 21:00:24 opensips kernel: [10762.855176] [<c04e72f4>] sock_poll+0x14/0x20 Jul 18 21:00:24 opensips kernel: [10762.855178] [<c022962b>] do_select+0x37b/0x670 Jul 18 21:00:24 opensips kernel: [10762.855181] [<c03535dd>] ? kobject_put+0x1d/0x50 Jul 18 21:00:24 opensips kernel: [10762.855183] [<c0229920>] ? __pollwait+0x0/0xe0 Jul 18 21:00:24 opensips kernel: [10762.855185] [<c0229a00>] ? pollwake+0x0/0x60 Jul 18 21:00:24 opensips kernel: [10762.855188] [<c0229a00>] ? pollwake+0x0/0x60 Jul 18 21:00:24 opensips kernel: [10762.855190] [<c0229a00>] ? pollwake+0x0/0x60 Jul 18 21:00:24 opensips kernel: [10762.855192] [<c0229a00>] ? pollwake+0x0/0x60 Jul 18 21:00:24 opensips kernel: [10762.855194] [<c0229a00>] ? pollwake+0x0/0x60 Jul 18 21:00:24 opensips kernel: [10762.855196] [<c0229a00>] ? pollwake+0x0/0x60 Jul 18 21:00:24 opensips kernel: [10762.855198] [<c0229a00>] ? pollwake+0x0/0x60 Jul 18 21:00:24 opensips kernel: [10762.855200] [<c0229a00>] ? pollwake+0x0/0x60 Jul 18 21:00:24 opensips kernel: [10762.855202] [<c0229a00>] ? 
pollwake+0x0/0x60 Jul 18 21:00:24 opensips kernel: [10762.855223] [<c0229fb0>] core_sys_select+0x140/0x240 Jul 18 21:00:24 opensips kernel: [10762.855226] [<c016b2da>] ? hrtimer_try_to_cancel+0x3a/0xc0 Jul 18 21:00:24 opensips kernel: [10762.855229] [<c0227bb2>] ? vfs_ioctl+0x32/0xb0 Jul 18 21:00:24 opensips kernel: [10762.855231] [<c022a2b1>] sys_select+0x31/0xc0 Jul 18 21:00:24 opensips kernel: [10762.855233] [<c0165660>] ? sys_clock_gettime+0x50/0xa0 Jul 18 21:00:24 opensips kernel: [10762.855236] [<c05cc254>] syscall_call+0x7/0xb Jul 18 21:00:24 opensips kernel: [10762.855239] [<c05c0000>] ? pcibios_setup+0x93/0x3ac Jul 18 21:00:24 opensips kernel: [10762.855241] Mem-Info: Jul 18 21:00:24 opensips kernel: [10762.855242] DMA per-cpu: Jul 18 21:00:24 opensips kernel: [10762.855244] CPU 0: hi: 0, btch: 1 usd: 0 Jul 18 21:00:24 opensips kernel: [10762.855245] CPU 1: hi: 0, btch: 1 usd: 0 Jul 18 21:00:24 opensips kernel: [10762.855247] Normal per-cpu: Jul 18 21:00:24 opensips kernel: [10762.855248] CPU 0: hi: 186, btch: 31 usd: 31 Jul 18 21:00:24 opensips kernel: [10762.855250] CPU 1: hi: 186, btch: 31 usd: 30 Jul 18 21:00:24 opensips kernel: [10762.855251] HighMem per-cpu: Jul 18 21:00:24 opensips kernel: [10762.855252] CPU 0: hi: 186, btch: 31 usd: 175 Jul 18 21:00:24 opensips kernel: [10762.855254] CPU 1: hi: 186, btch: 31 usd: 0 Jul 18 21:00:24 opensips kernel: [10762.855257] active_anon:42267 inactive_anon:16487 isolated_anon:0 Jul 18 21:00:24 opensips kernel: [10762.855258] active_file:12010 inactive_file:18540 isolated_file:33 Jul 18 21:00:24 opensips kernel: [10762.855259] unevictable:0 dirty:14 writeback:0 unstable:0 Jul 18 21:00:24 opensips kernel: [10762.855259] free:197553 slab_reclaimable:19004 slab_unreclaimable:57150 Jul 18 21:00:24 opensips kernel: [10762.855260] mapped:12134 shmem:9764 pagetables:935 bounce:0 Jul 18 21:00:24 opensips kernel: [10762.855265] DMA free:3524kB min:64kB low:80kB high:96kB active_anon:0kB inactive_anon:0kB active_file:0kB 
inactive_file:0kB unevictable:0kB isolated(anon):0kB isolated(file):0kB present:15808kB mlocked:0kB dirty:0kB writeback:0kB mapped:0kB shmem:0kB slab_reclaimable:356kB slab_unreclaimable:1084kB kernel_stack:0kB pagetables:0kB unstable:0kB bounce:0kB writeback_tmp:0kB pages_scanned:0 all_unreclaimable? no Jul 18 21:00:24 opensips kernel: [10762.855268] lowmem_reserve[]: 0 865 1980 1980 Jul 18 21:00:24 opensips kernel: [10762.855274] Normal free:3684kB min:3728kB low:4660kB high:5592kB active_anon:0kB inactive_anon:0kB active_file:96kB inactive_file:236kB unevictable:0kB isolated(anon):0kB isolated(file):132kB present:885944kB mlocked:0kB dirty:0kB writeback:0kB mapped:4kB shmem:0kB slab_reclaimable:75660kB slab_unreclaimable:227516kB kernel_stack:2456kB pagetables:0kB unstable:0kB bounce:0kB writeback_tmp:0kB pages_scanned:559 all_unreclaimable? no Jul 18 21:00:24 opensips kernel: [10762.855278] lowmem_reserve[]: 0 0 8922 8922 Jul 18 21:00:24 opensips kernel: [10762.855284] HighMem free:783004kB min:512kB low:1712kB high:2912kB active_anon:169068kB inactive_anon:65948kB active_file:47944kB inactive_file:73924kB unevictable:0kB isolated(anon):0kB isolated(file):0kB present:1142020kB mlocked:0kB dirty:56kB writeback:0kB mapped:48532kB shmem:39056kB slab_reclaimable:0kB slab_unreclaimable:0kB kernel_stack:0kB pagetables:3740kB unstable:0kB bounce:0kB writeback_tmp:0kB pages_scanned:0 all_unreclaimable? no Jul 18 21:00:24 opensips kernel: [10762.855287] lowmem_reserve[]: 0 0 0 0 Jul 18 21:00:24 opensips kernel: [10762.855290] DMA: 5*4kB 4*8kB 5*16kB 10*32kB 14*64kB 17*128kB 0*256kB 0*512kB 0*1024kB 0*2048kB 0*4096kB = 3524kB Jul 18 21:00:24 opensips kernel: [10762.855298] Normal: 207*4kB 67*8kB 11*16kB 1*32kB 1*64kB 0*128kB 2*256kB 1*512kB 1*1024kB 0*2048kB 0*4096kB = 3684kB ``` Original issue reported on code.google.com by `vivid333...@gmail.com` on 19 Jul 2011 at 2:13
1.0
testecho killed by ubuntu - ``` If you are not sure whether this is a bug, please go to http://groups.google.com/group/spserver to request help. What steps will reproduce the problem? 1. run testecho(change sp_session uint16_t to unsigned int). 2. use a multi-thread client in other client to do connect, send,recv (don't close fd until program exit. 3. about 100,000 connections, ubuntu looks very slowly, and killed testecho. I use top. there is 700M memory free(Total 2G), and CPU only takes 30%. What is the expected output? What do you see instead? work normally. testecho was killed by system. What version of the product are you using? On what operating system? 0.9.5 under ubuntu 10.10 Please provide any additional information below. Jul 18 21:00:24 opensips kernel: [10762.781159] Mem-Info: Jul 18 21:00:24 opensips kernel: [10762.781160] DMA per-cpu: Jul 18 21:00:24 opensips kernel: [10762.781162] CPU 0: hi: 0, btch: 1 usd: 0 Jul 18 21:00:24 opensips kernel: [10762.781164] CPU 1: hi: 0, btch: 1 usd: 0 Jul 18 21:00:24 opensips kernel: [10762.781166] Normal per-cpu: Jul 18 21:00:24 opensips kernel: [10762.781168] CPU 0: hi: 186, btch: 31 usd: 40 Jul 18 21:00:24 opensips kernel: [10762.781170] CPU 1: hi: 186, btch: 31 usd: 30 Jul 18 21:00:24 opensips kernel: [10762.781171] HighMem per-cpu: Jul 18 21:00:24 opensips kernel: [10762.781173] CPU 0: hi: 186, btch: 31 usd: 155 Jul 18 21:00:24 opensips kernel: [10762.781175] CPU 1: hi: 186, btch: 31 usd: 21 Jul 18 21:00:24 opensips kernel: [10762.781178] active_anon:46948 inactive_anon:16673 isolated_anon:0 Jul 18 21:00:24 opensips kernel: [10762.781180] active_file:11359 inactive_file:20878 isolated_file:0 Jul 18 21:00:24 opensips kernel: [10762.781181] unevictable:0 dirty:14 writeback:0 unstable:0 Jul 18 21:00:24 opensips kernel: [10762.781182] free:190739 slab_reclaimable:19024 slab_unreclaimable:57150 Jul 18 21:00:24 opensips kernel: [10762.781183] mapped:13250 shmem:9981 pagetables:1152 bounce:0 Jul 18 21:00:24 opensips 
kernel: [10762.781188] DMA free:3524kB min:64kB low:80kB high:96kB active_anon:0kB inactive_anon:0kB active_file:0kB inactive_file:0kB unevictable:0kB isolated(anon):0kB isolated(file):0kB present:15808kB mlocked:0kB dirty:0kB writeback:0kB mapped:0kB shmem:0kB slab_reclaimable:356kB slab_unreclaimable:1084kB kernel_stack:0kB pagetables:0kB unstable:0kB bounce:0kB writeback_tmp:0kB pages_scanned:0 all_unreclaimable? yes Jul 18 21:00:24 opensips kernel: [10762.781194] lowmem_reserve[]: 0 865 1980 1980 Jul 18 21:00:24 opensips kernel: [10762.781201] Normal free:3664kB min:3728kB low:4660kB high:5592kB active_anon:0kB inactive_anon:0kB active_file:220kB inactive_file:40kB unevictable:0kB isolated(anon):0kB isolated(file):0kB present:885944kB mlocked:0kB dirty:0kB writeback:0kB mapped:4kB shmem:0kB slab_reclaimable:75740kB slab_unreclaimable:227516kB kernel_stack:2456kB pagetables:0kB unstable:0kB bounce:0kB writeback_tmp:0kB pages_scanned:450 all_unreclaimable? yes Jul 18 21:00:24 opensips kernel: [10762.781207] lowmem_reserve[]: 0 0 8922 8922 Jul 18 21:00:24 opensips kernel: [10762.781214] HighMem free:755768kB min:512kB low:1712kB high:2912kB active_anon:187792kB inactive_anon:66692kB active_file:45216kB inactive_file:83472kB unevictable:0kB isolated(anon):0kB isolated(file):0kB present:1142020kB mlocked:0kB dirty:56kB writeback:0kB mapped:52996kB shmem:39924kB slab_reclaimable:0kB slab_unreclaimable:0kB kernel_stack:0kB pagetables:4608kB unstable:0kB bounce:0kB writeback_tmp:0kB pages_scanned:0 all_unreclaimable? 
no Jul 18 21:00:24 opensips kernel: [10762.781220] lowmem_reserve[]: 0 0 0 0 Jul 18 21:00:24 opensips kernel: [10762.781223] DMA: 5*4kB 4*8kB 5*16kB 10*32kB 14*64kB 17*128kB 0*256kB 0*512kB 0*1024kB 0*2048kB 0*4096kB = 3524kB Jul 18 21:00:24 opensips kernel: [10762.781232] Normal: 228*4kB 66*8kB 5*16kB 1*32kB 1*64kB 0*128kB 2*256kB 1*512kB 1*1024kB 0*2048kB 0*4096kB = 3664kB Jul 18 21:00:24 opensips kernel: [10762.781239] HighMem: 564*4kB 637*8kB 784*16kB 488*32kB 364*64kB 101*128kB 40*256kB 14*512kB 5*1024kB 3*2048kB 160*4096kB = 755768kB Jul 18 21:00:24 opensips kernel: [10762.781248] 42223 total pagecache pages Jul 18 21:00:24 opensips kernel: [10762.781249] 0 pages in swap cache Jul 18 21:00:24 opensips kernel: [10762.781251] Swap cache stats: add 0, delete 0, find 0/0 Jul 18 21:00:24 opensips kernel: [10762.781253] Free swap = 6141948kB Jul 18 21:00:24 opensips kernel: [10762.781254] Total swap = 6141948kB Jul 18 21:00:24 opensips kernel: [10762.781254] Total swap = 6141948kB Jul 18 21:00:24 opensips kernel: [10762.784591] 515079 pages RAM Jul 18 21:00:24 opensips kernel: [10762.784593] 287754 pages HighMem Jul 18 21:00:24 opensips kernel: [10762.784595] 9031 pages reserved Jul 18 21:00:24 opensips kernel: [10762.784596] 82449 pages shared Jul 18 21:00:24 opensips kernel: [10762.784597] 294015 pages non-shared Jul 18 21:00:24 opensips kernel: [10762.784600] Out of memory: kill process 1369 (gnome-session) score 65119 or a child Jul 18 21:00:24 opensips kernel: [10762.784604] Killed process 1521 (gnome-panel) vsz:86088kB, anon-rss:3644kB, file-rss:13840kB Jul 18 21:00:24 opensips kernel: [10762.855130] Xorg invoked oom-killer: gfp_mask=0xd0, order=0, oom_adj=0 Jul 18 21:00:24 opensips kernel: [10762.855134] Xorg cpuset=/ mems_allowed=0 Jul 18 21:00:24 opensips kernel: [10762.855137] Pid: 1049, comm: Xorg Not tainted 2.6.35-30-generic #54-Ubuntu Jul 18 21:00:24 opensips kernel: [10762.855139] Call Trace: Jul 18 21:00:24 opensips kernel: [10762.855146] 
[<c01dd51a>] dump_header+0x7a/0xb0 Jul 18 21:00:24 opensips kernel: [10762.855149] [<c01dd5ac>] oom_kill_process+0x5c/0x160 Jul 18 21:00:24 opensips kernel: [10762.855151] [<c01ddb19>] ? select_bad_process+0xa9/0xe0 Jul 18 21:00:24 opensips kernel: [10762.855154] [<c01ddba1>] __out_of_memory+0x51/0xb0 Jul 18 21:00:24 opensips kernel: [10762.855156] [<c01ddc58>] out_of_memory+0x58/0xd0 Jul 18 21:00:24 opensips kernel: [10762.855159] [<c01e0b86>] __alloc_pages_slowpath+0x496/0x4b0 Jul 18 21:00:24 opensips kernel: [10762.855162] [<c01e0d0f>] __alloc_pages_nodemask+0x16f/0x1c0 Jul 18 21:00:24 opensips kernel: [10762.855164] [<c01e0d7c>] __get_free_pages+0x1c/0x30 Jul 18 21:00:24 opensips kernel: [10762.855167] [<c02299c1>] __pollwait+0xa1/0xe0 Jul 18 21:00:24 opensips kernel: [10762.855169] [<c0229985>] ? __pollwait+0x65/0xe0 Jul 18 21:00:24 opensips kernel: [10762.855173] [<c0571b1c>] unix_poll+0x1c/0xa0 Jul 18 21:00:24 opensips kernel: [10762.855176] [<c04e72f4>] sock_poll+0x14/0x20 Jul 18 21:00:24 opensips kernel: [10762.855178] [<c022962b>] do_select+0x37b/0x670 Jul 18 21:00:24 opensips kernel: [10762.855181] [<c03535dd>] ? kobject_put+0x1d/0x50 Jul 18 21:00:24 opensips kernel: [10762.855183] [<c0229920>] ? __pollwait+0x0/0xe0 Jul 18 21:00:24 opensips kernel: [10762.855185] [<c0229a00>] ? pollwake+0x0/0x60 Jul 18 21:00:24 opensips kernel: [10762.855188] [<c0229a00>] ? pollwake+0x0/0x60 Jul 18 21:00:24 opensips kernel: [10762.855190] [<c0229a00>] ? pollwake+0x0/0x60 Jul 18 21:00:24 opensips kernel: [10762.855192] [<c0229a00>] ? pollwake+0x0/0x60 Jul 18 21:00:24 opensips kernel: [10762.855194] [<c0229a00>] ? pollwake+0x0/0x60 Jul 18 21:00:24 opensips kernel: [10762.855196] [<c0229a00>] ? pollwake+0x0/0x60 Jul 18 21:00:24 opensips kernel: [10762.855198] [<c0229a00>] ? pollwake+0x0/0x60 Jul 18 21:00:24 opensips kernel: [10762.855200] [<c0229a00>] ? pollwake+0x0/0x60 Jul 18 21:00:24 opensips kernel: [10762.855202] [<c0229a00>] ? 
pollwake+0x0/0x60 Jul 18 21:00:24 opensips kernel: [10762.855223] [<c0229fb0>] core_sys_select+0x140/0x240 Jul 18 21:00:24 opensips kernel: [10762.855226] [<c016b2da>] ? hrtimer_try_to_cancel+0x3a/0xc0 Jul 18 21:00:24 opensips kernel: [10762.855229] [<c0227bb2>] ? vfs_ioctl+0x32/0xb0 Jul 18 21:00:24 opensips kernel: [10762.855231] [<c022a2b1>] sys_select+0x31/0xc0 Jul 18 21:00:24 opensips kernel: [10762.855233] [<c0165660>] ? sys_clock_gettime+0x50/0xa0 Jul 18 21:00:24 opensips kernel: [10762.855236] [<c05cc254>] syscall_call+0x7/0xb Jul 18 21:00:24 opensips kernel: [10762.855239] [<c05c0000>] ? pcibios_setup+0x93/0x3ac Jul 18 21:00:24 opensips kernel: [10762.855241] Mem-Info: Jul 18 21:00:24 opensips kernel: [10762.855242] DMA per-cpu: Jul 18 21:00:24 opensips kernel: [10762.855244] CPU 0: hi: 0, btch: 1 usd: 0 Jul 18 21:00:24 opensips kernel: [10762.855245] CPU 1: hi: 0, btch: 1 usd: 0 Jul 18 21:00:24 opensips kernel: [10762.855247] Normal per-cpu: Jul 18 21:00:24 opensips kernel: [10762.855248] CPU 0: hi: 186, btch: 31 usd: 31 Jul 18 21:00:24 opensips kernel: [10762.855250] CPU 1: hi: 186, btch: 31 usd: 30 Jul 18 21:00:24 opensips kernel: [10762.855251] HighMem per-cpu: Jul 18 21:00:24 opensips kernel: [10762.855252] CPU 0: hi: 186, btch: 31 usd: 175 Jul 18 21:00:24 opensips kernel: [10762.855254] CPU 1: hi: 186, btch: 31 usd: 0 Jul 18 21:00:24 opensips kernel: [10762.855257] active_anon:42267 inactive_anon:16487 isolated_anon:0 Jul 18 21:00:24 opensips kernel: [10762.855258] active_file:12010 inactive_file:18540 isolated_file:33 Jul 18 21:00:24 opensips kernel: [10762.855259] unevictable:0 dirty:14 writeback:0 unstable:0 Jul 18 21:00:24 opensips kernel: [10762.855259] free:197553 slab_reclaimable:19004 slab_unreclaimable:57150 Jul 18 21:00:24 opensips kernel: [10762.855260] mapped:12134 shmem:9764 pagetables:935 bounce:0 Jul 18 21:00:24 opensips kernel: [10762.855265] DMA free:3524kB min:64kB low:80kB high:96kB active_anon:0kB inactive_anon:0kB active_file:0kB 
inactive_file:0kB unevictable:0kB isolated(anon):0kB isolated(file):0kB present:15808kB mlocked:0kB dirty:0kB writeback:0kB mapped:0kB shmem:0kB slab_reclaimable:356kB slab_unreclaimable:1084kB kernel_stack:0kB pagetables:0kB unstable:0kB bounce:0kB writeback_tmp:0kB pages_scanned:0 all_unreclaimable? no Jul 18 21:00:24 opensips kernel: [10762.855268] lowmem_reserve[]: 0 865 1980 1980 Jul 18 21:00:24 opensips kernel: [10762.855274] Normal free:3684kB min:3728kB low:4660kB high:5592kB active_anon:0kB inactive_anon:0kB active_file:96kB inactive_file:236kB unevictable:0kB isolated(anon):0kB isolated(file):132kB present:885944kB mlocked:0kB dirty:0kB writeback:0kB mapped:4kB shmem:0kB slab_reclaimable:75660kB slab_unreclaimable:227516kB kernel_stack:2456kB pagetables:0kB unstable:0kB bounce:0kB writeback_tmp:0kB pages_scanned:559 all_unreclaimable? no Jul 18 21:00:24 opensips kernel: [10762.855278] lowmem_reserve[]: 0 0 8922 8922 Jul 18 21:00:24 opensips kernel: [10762.855284] HighMem free:783004kB min:512kB low:1712kB high:2912kB active_anon:169068kB inactive_anon:65948kB active_file:47944kB inactive_file:73924kB unevictable:0kB isolated(anon):0kB isolated(file):0kB present:1142020kB mlocked:0kB dirty:56kB writeback:0kB mapped:48532kB shmem:39056kB slab_reclaimable:0kB slab_unreclaimable:0kB kernel_stack:0kB pagetables:3740kB unstable:0kB bounce:0kB writeback_tmp:0kB pages_scanned:0 all_unreclaimable? no Jul 18 21:00:24 opensips kernel: [10762.855287] lowmem_reserve[]: 0 0 0 0 Jul 18 21:00:24 opensips kernel: [10762.855290] DMA: 5*4kB 4*8kB 5*16kB 10*32kB 14*64kB 17*128kB 0*256kB 0*512kB 0*1024kB 0*2048kB 0*4096kB = 3524kB Jul 18 21:00:24 opensips kernel: [10762.855298] Normal: 207*4kB 67*8kB 11*16kB 1*32kB 1*64kB 0*128kB 2*256kB 1*512kB 1*1024kB 0*2048kB 0*4096kB = 3684kB ``` Original issue reported on code.google.com by `vivid333...@gmail.com` on 19 Jul 2011 at 2:13
defect
testecho killed by ubuntu if you are not sure whether this is a bug please go to to request help what steps will reproduce the problem run testecho change sp session t to unsigned int use a multi thread client in other client to do connect send recv don t close fd until program exit about connections ubuntu looks very slowly and killed testecho i use top there is memory free total and cpu only takes what is the expected output what do you see instead work normally testecho was killed by system what version of the product are you using on what operating system under ubuntu please provide any additional information below jul opensips kernel mem info jul opensips kernel dma per cpu jul opensips kernel cpu hi btch usd jul opensips kernel cpu hi btch usd jul opensips kernel normal per cpu jul opensips kernel cpu hi btch usd jul opensips kernel cpu hi btch usd jul opensips kernel highmem per cpu jul opensips kernel cpu hi btch usd jul opensips kernel cpu hi btch usd jul opensips kernel active anon inactive anon isolated anon jul opensips kernel active file inactive file isolated file jul opensips kernel unevictable dirty writeback unstable jul opensips kernel free slab reclaimable slab unreclaimable jul opensips kernel mapped shmem pagetables bounce jul opensips kernel dma free min low high active anon inactive anon active file inactive file unevictable isolated anon isolated file present mlocked dirty writeback mapped shmem slab reclaimable slab unreclaimable kernel stack pagetables unstable bounce writeback tmp pages scanned all unreclaimable yes jul opensips kernel lowmem reserve jul opensips kernel normal free min low high active anon inactive anon active file inactive file unevictable isolated anon isolated file present mlocked dirty writeback mapped shmem slab reclaimable slab unreclaimable kernel stack pagetables unstable bounce writeback tmp pages scanned all unreclaimable yes jul opensips kernel lowmem reserve jul opensips kernel highmem free min low high active 
anon inactive anon active file inactive file unevictable isolated anon isolated file present mlocked dirty writeback mapped shmem slab reclaimable slab unreclaimable kernel stack pagetables unstable bounce writeback tmp pages scanned all unreclaimable no jul opensips kernel lowmem reserve jul opensips kernel dma jul opensips kernel normal jul opensips kernel highmem jul opensips kernel total pagecache pages jul opensips kernel pages in swap cache jul opensips kernel swap cache stats add delete find jul opensips kernel free swap jul opensips kernel total swap jul opensips kernel total swap jul opensips kernel pages ram jul opensips kernel pages highmem jul opensips kernel pages reserved jul opensips kernel pages shared jul opensips kernel pages non shared jul opensips kernel out of memory kill process gnome session score or a child jul opensips kernel killed process gnome panel vsz anon rss file rss jul opensips kernel xorg invoked oom killer gfp mask order oom adj jul opensips kernel xorg cpuset mems allowed jul opensips kernel pid comm xorg not tainted generic ubuntu jul opensips kernel call trace jul opensips kernel dump header jul opensips kernel oom kill process jul opensips kernel select bad process jul opensips kernel out of memory jul opensips kernel out of memory jul opensips kernel alloc pages slowpath jul opensips kernel alloc pages nodemask jul opensips kernel get free pages jul opensips kernel pollwait jul opensips kernel pollwait jul opensips kernel unix poll jul opensips kernel sock poll jul opensips kernel do select jul opensips kernel kobject put jul opensips kernel pollwait jul opensips kernel pollwake jul opensips kernel pollwake jul opensips kernel pollwake jul opensips kernel pollwake jul opensips kernel pollwake jul opensips kernel pollwake jul opensips kernel pollwake jul opensips kernel pollwake jul opensips kernel pollwake jul opensips kernel core sys select jul opensips kernel hrtimer try to cancel jul opensips kernel vfs ioctl jul opensips 
kernel sys select jul opensips kernel sys clock gettime jul opensips kernel syscall call jul opensips kernel pcibios setup jul opensips kernel mem info jul opensips kernel dma per cpu jul opensips kernel cpu hi btch usd jul opensips kernel cpu hi btch usd jul opensips kernel normal per cpu jul opensips kernel cpu hi btch usd jul opensips kernel cpu hi btch usd jul opensips kernel highmem per cpu jul opensips kernel cpu hi btch usd jul opensips kernel cpu hi btch usd jul opensips kernel active anon inactive anon isolated anon jul opensips kernel active file inactive file isolated file jul opensips kernel unevictable dirty writeback unstable jul opensips kernel free slab reclaimable slab unreclaimable jul opensips kernel mapped shmem pagetables bounce jul opensips kernel dma free min low high active anon inactive anon active file inactive file unevictable isolated anon isolated file present mlocked dirty writeback mapped shmem slab reclaimable slab unreclaimable kernel stack pagetables unstable bounce writeback tmp pages scanned all unreclaimable no jul opensips kernel lowmem reserve jul opensips kernel normal free min low high active anon inactive anon active file inactive file unevictable isolated anon isolated file present mlocked dirty writeback mapped shmem slab reclaimable slab unreclaimable kernel stack pagetables unstable bounce writeback tmp pages scanned all unreclaimable no jul opensips kernel lowmem reserve jul opensips kernel highmem free min low high active anon inactive anon active file inactive file unevictable isolated anon isolated file present mlocked dirty writeback mapped shmem slab reclaimable slab unreclaimable kernel stack pagetables unstable bounce writeback tmp pages scanned all unreclaimable no jul opensips kernel lowmem reserve jul opensips kernel dma jul opensips kernel normal original issue reported on code google com by gmail com on jul at
1
22,642
3,670,989,518
IssuesEvent
2016-02-22 03:15:03
gperftools/gperftools
https://api.github.com/repos/gperftools/gperftools
closed
Hanging in ARCH_FORK with CPUPROFILE
Priority-Medium Status-Accepted Type-Defect
Originally reported on Google Code with ID 701 ``` There are two ways I have been able to reproduce the problem. The first method occurs at random, and in spans of time (running in release mode). The second seems to occur every time I run internal tools linked against libprofiler with gdb/cgdb. I have been unable to generate a simplified reproducer that can be shared. What steps will reproduce the problem? 1. compile code in debug mode, linked against libprofiler.so 2. run executable in cgdb 3. wait 4. interrupt execution and observe that: a. all but one thread are waiting in poll, or epoll, or pthread_cond_wait, or etc. b. one thread is stuck in a fork system call, on the ARCH_FORK line c. CPU is at 100% What is the expected output? What do you see instead? The program is expected to finish normally. The program hangs 'forever' in a call to fork(). On the ARCH_FORK() macro with $rax = -ERESTARTNOINTR What version of the product are you using? On what operating system? 2.2.1 / 2.4 RHEL6 Please provide any additional information below. I have a quick (non-complete) fix (attached) for this using pthread_atfork and pthread_sigmask to block SIGPROF before a fork and then re-enable it afterwards. From my testing, this always prevents the hanging issue. I have communicated my fix with Developer Services at my job and they have indicated that it would be preferred if this solution could be patched into the gperftools source code. While this is probably sufficient for the usecase at my job, it feels incomplete for the purposes of patching into the gperftools codebase. ``` Reported by `Sam.J.Jaffe` on 2015-07-20 17:18:35 <hr> * *Attachment: hang in ARCH_FORK.png<br>![hang in ARCH_FORK.png](https://storage.googleapis.com/google-code-attachments/gperftools/issue-701/comment-0/hang in ARCH_FORK.png)* * *Attachment: [cpu_profiler_nohang.cpp](https://storage.googleapis.com/google-code-attachments/gperftools/issue-701/comment-0/cpu_profiler_nohang.cpp)*
1.0
Hanging in ARCH_FORK with CPUPROFILE - Originally reported on Google Code with ID 701 ``` There are two ways I have been able to reproduce the problem. The first method occurs at random, and in spans of time (running in release mode). The second seems to occur every time I run internal tools linked against libprofiler with gdb/cgdb. I have been unable to generate a simplified reproducer that can be shared. What steps will reproduce the problem? 1. compile code in debug mode, linked against libprofiler.so 2. run executable in cgdb 3. wait 4. interrupt execution and observe that: a. all but one thread are waiting in poll, or epoll, or pthread_cond_wait, or etc. b. one thread is stuck in a fork system call, on the ARCH_FORK line c. CPU is at 100% What is the expected output? What do you see instead? The program is expected to finish normally. The program hangs 'forever' in a call to fork(). On the ARCH_FORK() macro with $rax = -ERESTARTNOINTR What version of the product are you using? On what operating system? 2.2.1 / 2.4 RHEL6 Please provide any additional information below. I have a quick (non-complete) fix (attached) for this using pthread_atfork and pthread_sigmask to block SIGPROF before a fork and then re-enable it afterwards. From my testing, this always prevents the hanging issue. I have communicated my fix with Developer Services at my job and they have indicated that it would be preferred if this solution could be patched into the gperftools source code. While this is probably sufficient for the usecase at my job, it feels incomplete for the purposes of patching into the gperftools codebase. 
``` Reported by `Sam.J.Jaffe` on 2015-07-20 17:18:35 <hr> * *Attachment: hang in ARCH_FORK.png<br>![hang in ARCH_FORK.png](https://storage.googleapis.com/google-code-attachments/gperftools/issue-701/comment-0/hang in ARCH_FORK.png)* * *Attachment: [cpu_profiler_nohang.cpp](https://storage.googleapis.com/google-code-attachments/gperftools/issue-701/comment-0/cpu_profiler_nohang.cpp)*
defect
hanging in arch fork with cpuprofile originally reported on google code with id there are two ways i have been able to reproduce the problem the first method occurs at random and in spans of time running in release mode the second seems to occur every time i run internal tools linked against libprofiler with gdb cgdb i have been unable to generate a simplified reproducer that can be shared what steps will reproduce the problem compile code in debug mode linked against libprofiler so run executable in cgdb wait interrupt execution and observe that a all but one thread are waiting in poll or epoll or pthread cond wait or etc b one thread is stuck in a fork system call on the arch fork line c cpu is at what is the expected output what do you see instead the program is expected to finish normally the program hangs forever in a call to fork on the arch fork macro with rax erestartnointr what version of the product are you using on what operating system please provide any additional information below i have a quick non complete fix attached for this using pthread atfork and pthread sigmask to block sigprof before a fork and then re enable it afterwards from my testing this always prevents the hanging issue i have communicated my fix with developer services at my job and they have indicated that it would be preferred if this solution could be patched into the gperftools source code while this is probably sufficient for the usecase at my job it feels incomplete for the purposes of patching into the gperftools codebase reported by sam j jaffe on attachment hang in arch fork png in arch fork png attachment
1
6,497
14,674,727,025
IssuesEvent
2020-12-30 15:57:22
RRZE-HPC/likwid
https://api.github.com/repos/RRZE-HPC/likwid
opened
Intel Comet Lake support
new architecture
**Why do you need support for this specific architecture?** Comet Lake is another generation of Intel x86-64 processor. I would like to use likwid-perfctr to study x86-64 programs on a Comet Lake processor such as i9-10900K. **Which architecture model, family and further information? CPU or accelerator? ** ``` $ lscpu Architecture: x86_64 CPU op-mode(s): 32-bit, 64-bit Byte Order: Little Endian CPU(s): 10 On-line CPU(s) list: 0-9 Thread(s) per core: 1 Core(s) per socket: 10 Socket(s): 1 NUMA node(s): 1 Vendor ID: GenuineIntel CPU family: 6 Model: 165 Model name: Intel(R) Core(TM) i9-10900K CPU @ 3.70GHz Stepping: 5 CPU MHz: 3700.357 BogoMIPS: 7400.00 Virtualization: VT-x L1d cache: 32K L1i cache: 32K L2 cache: 256K L3 cache: 20480K NUMA node0 CPU(s): 0-9 Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault invpcid_single ssbd ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid mpx rdseed adx smap clflushopt intel_pt xsaveopt xsavec xgetbv1 xsaves dtherm arat pln pts pku ospke md_clear flush_l1d arch_capabilities ``` **Is the documentation of the hardware counters publicly available?** I don't know. **Are there already any usable tools (commercial or open-source)?** I don't know.
1.0
Intel Comet Lake support - **Why do you need support for this specific architecture?** Comet Lake is another generation of Intel x86-64 processor. I would like to use likwid-perfctr to study x86-64 programs on a Comet Lake processor such as i9-10900K. **Which architecture model, family and further information? CPU or accelerator? ** ``` $ lscpu Architecture: x86_64 CPU op-mode(s): 32-bit, 64-bit Byte Order: Little Endian CPU(s): 10 On-line CPU(s) list: 0-9 Thread(s) per core: 1 Core(s) per socket: 10 Socket(s): 1 NUMA node(s): 1 Vendor ID: GenuineIntel CPU family: 6 Model: 165 Model name: Intel(R) Core(TM) i9-10900K CPU @ 3.70GHz Stepping: 5 CPU MHz: 3700.357 BogoMIPS: 7400.00 Virtualization: VT-x L1d cache: 32K L1i cache: 32K L2 cache: 256K L3 cache: 20480K NUMA node0 CPU(s): 0-9 Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault invpcid_single ssbd ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid mpx rdseed adx smap clflushopt intel_pt xsaveopt xsavec xgetbv1 xsaves dtherm arat pln pts pku ospke md_clear flush_l1d arch_capabilities ``` **Is the documentation of the hardware counters publicly available?** I don't know. **Are there already any usable tools (commercial or open-source)?** I don't know.
non_defect
intel comet lake support why do you need support for this specific architecture comet lake is another generation of intel processor i would like to use likwid perfctr to study programs on a comet lake processor such as which architecture model family and further information cpu or accelerator lscpu architecture cpu op mode s bit bit byte order little endian cpu s on line cpu s list thread s per core core s per socket socket s numa node s vendor id genuineintel cpu family model model name intel r core tm cpu stepping cpu mhz bogomips virtualization vt x cache cache cache cache numa cpu s flags fpu vme de pse tsc msr pae mce apic sep mtrr pge mca cmov pat clflush dts acpi mmx fxsr sse ss ht tm pbe syscall nx rdtscp lm constant tsc art arch perfmon pebs bts rep good nopl xtopology nonstop tsc cpuid aperfmperf pni pclmulqdq monitor ds cpl vmx smx est sdbg fma xtpr pdcm pcid movbe popcnt tsc deadline timer aes xsave avx rdrand lahf lm abm cpuid fault invpcid single ssbd ibrs ibpb stibp ibrs enhanced tpr shadow vnmi flexpriority ept vpid fsgsbase tsc adjust smep erms invpcid mpx rdseed adx smap clflushopt intel pt xsaveopt xsavec xsaves dtherm arat pln pts pku ospke md clear flush arch capabilities is the documentation of the hardware counters publicly available i don t know are there already any usable tools commercial or open source i don t know
0
23,817
3,851,868,059
IssuesEvent
2016-04-06 05:29:04
GPF/imame4all
https://api.github.com/repos/GPF/imame4all
closed
crashes
auto-migrated Priority-Medium Type-Defect
``` What steps will reproduce the problem? 1. Opening imame4all version 1.9 or 1.10 on iphone 3GS 3.1.3 firmware What is the expected output? What do you see instead? Gives "Loading.... please wait" message, then crashes back to home. What version of the product are you using? On what operating system? imame4all version 1.9 or 1.10 on jailbroken firmware 3.1.3 Please provide any additional information below. v1.8 and below are fine, but 1.9 and 1.10 crash as described above. Downgrading back to 1.8 works, but I'd like to use the newer versions. Would have raised it with 1.9's release but thought it would be fixed with next release so didn't - unfortunately now 1.10 doesn't work either so I'm stuck. ``` Original issue reported on code.google.com by `ktfros...@gmail.com` on 27 Sep 2011 at 11:16
1.0
crashes - ``` What steps will reproduce the problem? 1. Opening imame4all version 1.9 or 1.10 on iphone 3GS 3.1.3 firmware What is the expected output? What do you see instead? Gives "Loading.... please wait" message, then crashes back to home. What version of the product are you using? On what operating system? imame4all version 1.9 or 1.10 on jailbroken firmware 3.1.3 Please provide any additional information below. v1.8 and below are fine, but 1.9 and 1.10 crash as described above. Downgrading back to 1.8 works, but I'd like to use the newer versions. Would have raised it with 1.9's release but thought it would be fixed with next release so didn't - unfortunately now 1.10 doesn't work either so I'm stuck. ``` Original issue reported on code.google.com by `ktfros...@gmail.com` on 27 Sep 2011 at 11:16
defect
crashes what steps will reproduce the problem opening version or on iphone firmware what is the expected output what do you see instead gives loading please wait message then crashes back to home what version of the product are you using on what operating system version or on jailbroken firmware please provide any additional information below and below are fine but and crash as described above downgrading back to works but i d like to use the newer versions would have raised it with s release but thought it would be fixed with next release so didn t unfortunately now doesn t work either so i m stuck original issue reported on code google com by ktfros gmail com on sep at
1
31,170
6,443,905,014
IssuesEvent
2017-08-12 02:23:09
opendatakit/opendatakit
https://api.github.com/repos/opendatakit/opendatakit
closed
Enketo webform integration not working
Aggregate Priority-Medium Type-Defect
Originally reported on Google Code with ID 975 ``` What steps will reproduce the problem? 1. Go to Site Admin -> Preferences 2. Configure Enketo Webform Integration 3. By entering API URL and API token What is the expected output? What do you see instead? It should show me the web form instead of it raise error "There was an error obtaining the webform. (message:no account exists for this OpenRosa server)" What version of the product are you using? On what operating system? ODK Aggregate 1.4.1 Please provide any additional information below. I had created account in Enketo with different gmail address than I had used to access ODK Aggregate for first time. Kindly guide me what wrong I am doing in configuration. ``` Reported by `piyushsh18` on 2014-02-15 07:38:17
1.0
Enketo webform integration not working - Originally reported on Google Code with ID 975 ``` What steps will reproduce the problem? 1. Go to Site Admin -> Preferences 2. Configure Enketo Webform Integration 3. By entering API URL and API token What is the expected output? What do you see instead? It should show me the web form instead of it raise error "There was an error obtaining the webform. (message:no account exists for this OpenRosa server)" What version of the product are you using? On what operating system? ODK Aggregate 1.4.1 Please provide any additional information below. I had created account in Enketo with different gmail address than I had used to access ODK Aggregate for first time. Kindly guide me what wrong I am doing in configuration. ``` Reported by `piyushsh18` on 2014-02-15 07:38:17
defect
enketo webform integration not working originally reported on google code with id what steps will reproduce the problem go to site admin preferences configure enketo webform integration by entering api url and api token what is the expected output what do you see instead it should show me the web form instead of it raise error there was an error obtaining the webform message no account exists for this openrosa server what version of the product are you using on what operating system odk aggregate please provide any additional information below i had created account in enketo with different gmail address than i had used to access odk aggregate for first time kindly guide me what wrong i am doing in configuration reported by on
1
15,797
2,869,071,938
IssuesEvent
2015-06-05 23:06:19
dart-lang/sdk
https://api.github.com/repos/dart-lang/sdk
closed
Pub serve fails on binded assets since Polymer Dart 0.10.0-pre.13
Area-Pkg Pkg-Polymer PolymerMilestone-Next Priority-Medium Triaged Type-Defect
*This issue was originally filed by kurotensh...&#064;autistici.org* _____ Summary of the issue : When trying to dynamically bind images from an observable Dart collection with a template, it works in Dart but cause an issue with Pub Serve (in Javascript so). -------------------------------------------------------------------------- How To reproduce the problem : 1) Create an observable Map&lt;String, List&lt;String&gt;&gt; (in .dart) filled with this pattern : ('Toto' : {'img1', 'img2', 'img3',...}, 'Tata': {'img4', 'img5', 'img6',...},...) The key can be an id for a &lt;div&gt; and the values an image file name, without extension and path. 2) In a polymer template, bind the Map like this (I named it 'elts'). &lt;template repeat=&quot;{{key in elts.keys}}&quot;&gt; &nbsp;&lt;div id={{key}}&gt; &nbsp;&nbsp;&nbsp;&lt;img src=&quot;./{{elts[key\][0]}}.png&quot; id=elts[key\][3]&gt; &nbsp;&nbsp;&lt;/div&gt; &lt;/template&gt; You can embed a second template repeat to display every images. Just put the assets named like in the List (+ png extension) in the right directory. -------------------------------------------------------------------------- What is the expected output? What is expected is several &lt;div&gt; filled with several pictures, dynamically, in Dartium (Dart) and with Dart2JD (Firefox for example). -------------------------------------------------------------------------- What I see instead : Expected result with Dartium (Dart). The Dart2JS build has no error. But when running with pub serve, blank page and this error : [web] GET /%7B%7B%20%27/resources/images/%27%20+%20elts%5Bkey%5D%5B3%5D%7D%7D =&gt; Could not find asset CVWebkit|web/%7B%7B%20%27/resources/images/%27%20+%20elts%5Bkey%5D%5B3%5D%7D%7D. Note that my img tag was : &lt;img src=&quot;./ressources/images/{{elts[key\][3]}}.png&quot; id=elts[key\][3] title={{key}}&gt; -------------------------------------------------------------------------- **What version of the product are you using? 
On what operating system?** I am using Dart 1.4.0 stable (and tested Dart 1.5.0 dev) with Polymer Dart 0.10.0-pre.13, in Windows 7 SP1 x64 and Windows 8.1 x64. -------------------------------------------------------------------------- Additional information : - No problem with Polymer Dart 0.10.0-pre.12. - Same behavior with this kind of thing : &lt;img src=&quot;{{ '.' + elts[key\][3]}}&quot;&gt; ______ **Attachments:** [expected.jpg](https://storage.googleapis.com/google-code-attachments/dart/issue-19068/comment-0/expected.jpg) (104.19 KB) [obtained.jpg](https://storage.googleapis.com/google-code-attachments/dart/issue-19068/comment-0/obtained.jpg) (102.27 KB) [complete message.txt](https://storage.googleapis.com/google-code-attachments/dart/issue-19068/comment-0/complete message.txt) (1.96 KB)
1.0
Pub serve fails on binded assets since Polymer Dart 0.10.0-pre.13 - *This issue was originally filed by kurotensh...&#064;autistici.org* _____ Summary of the issue : When trying to dynamically bind images from an observable Dart collection with a template, it works in Dart but cause an issue with Pub Serve (in Javascript so). -------------------------------------------------------------------------- How To reproduce the problem : 1) Create an observable Map&lt;String, List&lt;String&gt;&gt; (in .dart) filled with this pattern : ('Toto' : {'img1', 'img2', 'img3',...}, 'Tata': {'img4', 'img5', 'img6',...},...) The key can be an id for a &lt;div&gt; and the values an image file name, without extension and path. 2) In a polymer template, bind the Map like this (I named it 'elts'). &lt;template repeat=&quot;{{key in elts.keys}}&quot;&gt; &nbsp;&lt;div id={{key}}&gt; &nbsp;&nbsp;&nbsp;&lt;img src=&quot;./{{elts[key\][0]}}.png&quot; id=elts[key\][3]&gt; &nbsp;&nbsp;&lt;/div&gt; &lt;/template&gt; You can embed a second template repeat to display every images. Just put the assets named like in the List (+ png extension) in the right directory. -------------------------------------------------------------------------- What is the expected output? What is expected is several &lt;div&gt; filled with several pictures, dynamically, in Dartium (Dart) and with Dart2JD (Firefox for example). -------------------------------------------------------------------------- What I see instead : Expected result with Dartium (Dart). The Dart2JS build has no error. But when running with pub serve, blank page and this error : [web] GET /%7B%7B%20%27/resources/images/%27%20+%20elts%5Bkey%5D%5B3%5D%7D%7D =&gt; Could not find asset CVWebkit|web/%7B%7B%20%27/resources/images/%27%20+%20elts%5Bkey%5D%5B3%5D%7D%7D. 
Note that my img tag was : &lt;img src=&quot;./ressources/images/{{elts[key\][3]}}.png&quot; id=elts[key\][3] title={{key}}&gt; -------------------------------------------------------------------------- **What version of the product are you using? On what operating system?** I am using Dart 1.4.0 stable (and tested Dart 1.5.0 dev) with Polymer Dart 0.10.0-pre.13, in Windows 7 SP1 x64 and Windows 8.1 x64. -------------------------------------------------------------------------- Additional information : - No problem with Polymer Dart 0.10.0-pre.12. - Same behavior with this kind of thing : &lt;img src=&quot;{{ '.' + elts[key\][3]}}&quot;&gt; ______ **Attachments:** [expected.jpg](https://storage.googleapis.com/google-code-attachments/dart/issue-19068/comment-0/expected.jpg) (104.19 KB) [obtained.jpg](https://storage.googleapis.com/google-code-attachments/dart/issue-19068/comment-0/obtained.jpg) (102.27 KB) [complete message.txt](https://storage.googleapis.com/google-code-attachments/dart/issue-19068/comment-0/complete message.txt) (1.96 KB)
defect
pub serve fails on binded assets since polymer dart pre this issue was originally filed by kurotensh autistici org summary of the issue when trying to dynamically bind images from an observable dart collection with a template it works in dart but cause an issue with pub serve in javascript so how to reproduce the problem create an observable map lt string list lt string gt gt in dart filled with this pattern toto tata the key can be an id for a lt div gt and the values an image file name without extension and path in a polymer template bind the map like this i named it elts lt template repeat quot key in elts keys quot gt nbsp lt div id key gt nbsp nbsp nbsp lt img src quot elts png quot id elts gt nbsp nbsp lt div gt lt template gt you can embed a second template repeat to display every images just put the assets named like in the list png extension in the right directory what is the expected output what is expected is several lt div gt filled with several pictures dynamically in dartium dart and with firefox for example what i see instead expected result with dartium dart the build has no error but when running with pub serve blank page and this error get resources images gt could not find asset cvwebkit web resources images note that my img tag was lt img src quot ressources images elts png quot id elts title key gt what version of the product are you using on what operating system i am using dart stable and tested dart dev with polymer dart pre in windows and windows additional information no problem with polymer dart pre same behavior with this kind of thing lt img src quot elts quot gt attachments kb kb message txt kb
1
3,691
2,610,067,063
IssuesEvent
2015-02-26 18:19:42
chrsmith/jsjsj122
https://api.github.com/repos/chrsmith/jsjsj122
opened
路桥治前列腺炎哪家好
auto-migrated Priority-Medium Type-Defect
``` 路桥治前列腺炎哪家好【台州五洲生殖医院】24小时健康咨询 热线:0576-88066933-(扣扣800080609)-(微信号tzwzszyy)医院地址:台州市 椒江区枫南路229号(枫南大转盘旁)乘车线路:乘坐104、108、1 18、198及椒江一金清公交车直达枫南小区,乘坐107、105、109、 112、901、 902公交车到星星广场下车,步行即可到院。 诊疗项目:阳痿,早泄,前列腺炎,前列腺增生,龟头炎,�� �精,无精。包皮包茎,精索静脉曲张,淋病等。 台州五洲生殖医院是台州最大的男科医院,权威专家在线免�� �咨询,拥有专业完善的男科检查治疗设备,严格按照国家标� ��收费。尖端医疗设备,与世界同步。权威专家,成就专业典 范。人性化服务,一切以患者为中心。 看男科就选台州五洲生殖医院,专业男科为男人。 ``` ----- Original issue reported on code.google.com by `poweragr...@gmail.com` on 30 May 2014 at 8:45
1.0
路桥治前列腺炎哪家好 - ``` 路桥治前列腺炎哪家好【台州五洲生殖医院】24小时健康咨询 热线:0576-88066933-(扣扣800080609)-(微信号tzwzszyy)医院地址:台州市 椒江区枫南路229号(枫南大转盘旁)乘车线路:乘坐104、108、1 18、198及椒江一金清公交车直达枫南小区,乘坐107、105、109、 112、901、 902公交车到星星广场下车,步行即可到院。 诊疗项目:阳痿,早泄,前列腺炎,前列腺增生,龟头炎,�� �精,无精。包皮包茎,精索静脉曲张,淋病等。 台州五洲生殖医院是台州最大的男科医院,权威专家在线免�� �咨询,拥有专业完善的男科检查治疗设备,严格按照国家标� ��收费。尖端医疗设备,与世界同步。权威专家,成就专业典 范。人性化服务,一切以患者为中心。 看男科就选台州五洲生殖医院,专业男科为男人。 ``` ----- Original issue reported on code.google.com by `poweragr...@gmail.com` on 30 May 2014 at 8:45
defect
路桥治前列腺炎哪家好 路桥治前列腺炎哪家好【台州五洲生殖医院】 热线 微信号tzwzszyy 医院地址 台州市 (枫南大转盘旁)乘车线路 、 、 、 , 、 、 、 、 、 ,步行即可到院。 诊疗项目:阳痿,早泄,前列腺炎,前列腺增生,龟头炎,�� �精,无精。包皮包茎,精索静脉曲张,淋病等。 台州五洲生殖医院是台州最大的男科医院,权威专家在线免�� �咨询,拥有专业完善的男科检查治疗设备,严格按照国家标� ��收费。尖端医疗设备,与世界同步。权威专家,成就专业典 范。人性化服务,一切以患者为中心。 看男科就选台州五洲生殖医院,专业男科为男人。 original issue reported on code google com by poweragr gmail com on may at
1
311,163
26,772,904,646
IssuesEvent
2023-01-31 15:15:11
ices-tools-dev/RDBEScore
https://api.github.com/repos/ices-tools-dev/RDBEScore
opened
Make a list of text book examples available in RDBES
6_test_data
a brief document linking the RDBES data to original books and the scripts used to upload will facilitate present work (finding and modifying stuff) and help in building help files and vignettes based on the datasets
1.0
Make a list of text book examples available in RDBES - a brief document linking the RDBES data to original books and the scripts used to upload will facilitate present work (finding and modifying stuff) and help in building help files and vignettes based on the datasets
non_defect
make a list of text book examples available in rdbes a brief document linking the rdbes data to original books and the scripts used to upload will facilitate present work finding and modifying stuff and help in building help files and vignettes based on the datasets
0
78,939
27,827,386,662
IssuesEvent
2023-03-19 22:33:57
zed-industries/community
https://api.github.com/repos/zed-industries/community
closed
Copy on select doesn't work
defect terminal
### Check for existing issues - [X] Completed ### Describe the bug / provide steps to reproduce it text is not being automatically copied into the clipboard after selecting it ### Environment ``` cat .config/zed/settings.json // Zed settings // // For information on how to configure Zed, see the Zed // documentation: https://zed.dev/docs/configuring-zed // // To see all of Zed's default settings without changing your // custom settings, run the `open default settings` command // from the command palette or from `Zed` application menu. { "buffer_font_size": 15, "soft_wrap": "editor_width", "copy_on_select": true } ``` ### If applicable, add mockups / screenshots to help explain present your vision of the feature _No response_ ### If applicable, attach your `~/Library/Logs/Zed/Zed.log` file to this issue. If you only need the most recent lines, you can run the `zed: open log` command palette action to see the last 1000. _No response_
1.0
Copy on select doesn't work - ### Check for existing issues - [X] Completed ### Describe the bug / provide steps to reproduce it text is not being automatically copied into the clipboard after selecting it ### Environment ``` cat .config/zed/settings.json // Zed settings // // For information on how to configure Zed, see the Zed // documentation: https://zed.dev/docs/configuring-zed // // To see all of Zed's default settings without changing your // custom settings, run the `open default settings` command // from the command palette or from `Zed` application menu. { "buffer_font_size": 15, "soft_wrap": "editor_width", "copy_on_select": true } ``` ### If applicable, add mockups / screenshots to help explain present your vision of the feature _No response_ ### If applicable, attach your `~/Library/Logs/Zed/Zed.log` file to this issue. If you only need the most recent lines, you can run the `zed: open log` command palette action to see the last 1000. _No response_
defect
copy on select doesn t work check for existing issues completed describe the bug provide steps to reproduce it text is not being automatically copied into the clipboard after selecting it environment cat config zed settings json zed settings for information on how to configure zed see the zed documentation to see all of zed s default settings without changing your custom settings run the open default settings command from the command palette or from zed application menu buffer font size soft wrap editor width copy on select true if applicable add mockups screenshots to help explain present your vision of the feature no response if applicable attach your library logs zed zed log file to this issue if you only need the most recent lines you can run the zed open log command palette action to see the last no response
1
218,835
7,332,512,470
IssuesEvent
2018-03-05 16:31:46
NCEAS/metacat
https://api.github.com/repos/NCEAS/metacat
closed
Use the spring application-context xml file from d1_cn_index_processor.
Component: Bugzilla-Id Priority: Normal Status: Resolved Tracker: Feature
--- Author Name: **Jing Tao** (Jing Tao) Original Redmine Issue: 6050, https://projects.ecoinformatics.org/ecoinfo/issues/6050 Original Date: 2013-08-08 Original Assignee: Jing Tao --- We copied the application-context-systemmeta100.xml, application-context-systemmeta064.xml, eml and fgdc files from d1_cn_index_processor svn server to the metacat-index svn server. So if there are some changes in those files in the d1_cn, we need to commit the change to the metacat-index as well. The code were duplicated. We changed to use the ClassPathXmlApplicationContext rather than FileXmlApplicationContext and the metacat-index can use those files in the d1_cn_index_processor.jar. The only files are kept in the metacat-index is the application-context-resource-map.xml and index-processor-context.xml. They are different than the ones in d1_cn_index_processor.
1.0
Use the spring application-context xml file from d1_cn_index_processor. - --- Author Name: **Jing Tao** (Jing Tao) Original Redmine Issue: 6050, https://projects.ecoinformatics.org/ecoinfo/issues/6050 Original Date: 2013-08-08 Original Assignee: Jing Tao --- We copied the application-context-systemmeta100.xml, application-context-systemmeta064.xml, eml and fgdc files from d1_cn_index_processor svn server to the metacat-index svn server. So if there are some changes in those files in the d1_cn, we need to commit the change to the metacat-index as well. The code were duplicated. We changed to use the ClassPathXmlApplicationContext rather than FileXmlApplicationContext and the metacat-index can use those files in the d1_cn_index_processor.jar. The only files are kept in the metacat-index is the application-context-resource-map.xml and index-processor-context.xml. They are different than the ones in d1_cn_index_processor.
non_defect
use the spring application context xml file from cn index processor author name jing tao jing tao original redmine issue original date original assignee jing tao we copied the application context xml application context xml eml and fgdc files from cn index processor svn server to the metacat index svn server so if there are some changes in those files in the cn we need to commit the change to the metacat index as well the code were duplicated we changed to use the classpathxmlapplicationcontext rather than filexmlapplicationcontext and the metacat index can use those files in the cn index processor jar the only files are kept in the metacat index is the application context resource map xml and index processor context xml they are different than the ones in cn index processor
0
606,078
18,754,338,210
IssuesEvent
2021-11-05 08:46:00
matrixorigin/matrixone
https://api.github.com/repos/matrixorigin/matrixone
closed
[Factorisation]:F-Tree construction
priority/high kind/feature
matrixone will use a query plan completely different from the traditional plan, which is based on the theory of factorization.
1.0
[Factorisation]:F-Tree construction - matrixone will use a query plan completely different from the traditional plan, which is based on the theory of factorization.
non_defect
f tree construction matrixone will use a query plan completely different from the traditional plan which is based on the theory of factorization
0
48,181
13,067,498,900
IssuesEvent
2020-07-31 00:39:20
icecube-trac/tix2
https://api.github.com/repos/icecube-trac/tix2
closed
[dst/filterscripts] DSTExtractor setting SubEventID to 0 in L2 (Trac #1915)
Migrated from Trac combo reconstruction defect
Apparently DSTExtractor resets the event header, including the SubEventID. So all the SubEventIDs in L2 are 0. How it happens: in the segment dst.extractor.ExtractDST: extract_to_frame = True this causes the header to be overwritten. Maybe we should set this to False in filterscripts? Not sure what else it controls though. Migrated from https://code.icecube.wisc.edu/ticket/1915 ```json { "status": "closed", "changetime": "2019-02-13T14:13:30", "description": "Apparently DSTExtractor resets the event header, including the SubEventID. So all the SubEventIDs in L2 are 0.\n\nHow it happens:\nin the segment dst.extractor.ExtractDST:\n extract_to_frame = True\nthis causes the header to be overwritten.\n\nMaybe we should set this to False in filterscripts? Not sure what else it controls though.", "reporter": "david.schultz", "cc": "claudio.kopper", "resolution": "fixed", "_ts": "1550067210114669", "component": "combo reconstruction", "summary": "[dst/filterscripts] DSTExtractor setting SubEventID to 0 in L2", "priority": "critical", "keywords": "", "time": "2016-11-29T22:22:46", "milestone": "", "owner": "juancarlos", "type": "defect" } ```
1.0
[dst/filterscripts] DSTExtractor setting SubEventID to 0 in L2 (Trac #1915) - Apparently DSTExtractor resets the event header, including the SubEventID. So all the SubEventIDs in L2 are 0. How it happens: in the segment dst.extractor.ExtractDST: extract_to_frame = True this causes the header to be overwritten. Maybe we should set this to False in filterscripts? Not sure what else it controls though. Migrated from https://code.icecube.wisc.edu/ticket/1915 ```json { "status": "closed", "changetime": "2019-02-13T14:13:30", "description": "Apparently DSTExtractor resets the event header, including the SubEventID. So all the SubEventIDs in L2 are 0.\n\nHow it happens:\nin the segment dst.extractor.ExtractDST:\n extract_to_frame = True\nthis causes the header to be overwritten.\n\nMaybe we should set this to False in filterscripts? Not sure what else it controls though.", "reporter": "david.schultz", "cc": "claudio.kopper", "resolution": "fixed", "_ts": "1550067210114669", "component": "combo reconstruction", "summary": "[dst/filterscripts] DSTExtractor setting SubEventID to 0 in L2", "priority": "critical", "keywords": "", "time": "2016-11-29T22:22:46", "milestone": "", "owner": "juancarlos", "type": "defect" } ```
defect
dstextractor setting subeventid to in trac apparently dstextractor resets the event header including the subeventid so all the subeventids in are how it happens in the segment dst extractor extractdst extract to frame true this causes the header to be overwritten maybe we should set this to false in filterscripts not sure what else it controls though migrated from json status closed changetime description apparently dstextractor resets the event header including the subeventid so all the subeventids in are n nhow it happens nin the segment dst extractor extractdst n extract to frame true nthis causes the header to be overwritten n nmaybe we should set this to false in filterscripts not sure what else it controls though reporter david schultz cc claudio kopper resolution fixed ts component combo reconstruction summary dstextractor setting subeventid to in priority critical keywords time milestone owner juancarlos type defect
1
18,765
3,086,888,914
IssuesEvent
2015-08-25 07:58:42
NamPNQ/html5slides
https://api.github.com/repos/NamPNQ/html5slides
closed
How to export slides to other formats, e.g. pdf?
auto-migrated Priority-Medium Type-Defect
``` I was wondering if the slides produced can be exported to pdf or imported to Google Docs. Thanks, Cuneyt ``` Original issue reported on code.google.com by `ctaski...@gmail.com` on 19 Sep 2011 at 9:30
1.0
How to export slides to other formats, e.g. pdf? - ``` I was wondering if the slides produced can be exported to pdf or imported to Google Docs. Thanks, Cuneyt ``` Original issue reported on code.google.com by `ctaski...@gmail.com` on 19 Sep 2011 at 9:30
defect
how to export slides to other formats e g pdf i was wondering if the slides produced can be exported to pdf or imported to google docs thanks cuneyt original issue reported on code google com by ctaski gmail com on sep at
1
440,837
12,704,773,803
IssuesEvent
2020-06-23 02:30:19
Edu4rdSHL/findomain
https://api.github.com/repos/Edu4rdSHL/findomain
closed
Track CNAME for the subdomains
Medium Priority implemented-in-plus-version request-for-enhancement solved
Is it possible to include the track of cname information for the discovered subdomains?
1.0
Track CNAME for the subdomains - Is it possible to include the track of cname information for the discovered subdomains?
non_defect
track cname for the subdomains is it possible to include the track of cname information for the discovered subdomains
0
21,681
17,462,242,921
IssuesEvent
2021-08-06 12:13:33
SCALE-MS/scale-ms
https://api.github.com/repos/SCALE-MS/scale-ms
closed
Consider pinning required package versions for RCT stack
scalems.radical usability
Some recent issues have been complicated by bugs or differences in behavior across package versions in one or more of `radical.pilot`, `radical.utils`, `radical.pilot`, and/or `radical.saga`. I recently backed down the scalems dependency from `git+https://github.com/radical-cybertools/radical.pilot.git@project/scalems#egg=radical.pilot` to `radical.pilot>=1.6.5`, leaving the reponsibility to the user to install a specific VCS ref if necessary. However, we still effectively `pip install --upgrade git+https://github.com/radical-cybertools/radical.pilot.git@project/scalems#egg=radical.pilot` in the testing environment without controlling the versions of the dependencies. We may want to have two testing paths: * "stable": using a venv with strict exact version tags or VCS refs * "dev": using a venv freshly created with the latest version of all dependencies We would need to consider how to mark pytest tests appropriately. This will also further complicate the resource definitions in use. Relates to #90 and #100
True
Consider pinning required package versions for RCT stack - Some recent issues have been complicated by bugs or differences in behavior across package versions in one or more of `radical.pilot`, `radical.utils`, `radical.pilot`, and/or `radical.saga`. I recently backed down the scalems dependency from `git+https://github.com/radical-cybertools/radical.pilot.git@project/scalems#egg=radical.pilot` to `radical.pilot>=1.6.5`, leaving the reponsibility to the user to install a specific VCS ref if necessary. However, we still effectively `pip install --upgrade git+https://github.com/radical-cybertools/radical.pilot.git@project/scalems#egg=radical.pilot` in the testing environment without controlling the versions of the dependencies. We may want to have two testing paths: * "stable": using a venv with strict exact version tags or VCS refs * "dev": using a venv freshly created with the latest version of all dependencies We would need to consider how to mark pytest tests appropriately. This will also further complicate the resource definitions in use. Relates to #90 and #100
non_defect
consider pinning required package versions for rct stack some recent issues have been complicated by bugs or differences in behavior across package versions in one or more of radical pilot radical utils radical pilot and or radical saga i recently backed down the scalems dependency from git to radical pilot leaving the reponsibility to the user to install a specific vcs ref if necessary however we still effectively pip install upgrade git in the testing environment without controlling the versions of the dependencies we may want to have two testing paths stable using a venv with strict exact version tags or vcs refs dev using a venv freshly created with the latest version of all dependencies we would need to consider how to mark pytest tests appropriately this will also further complicate the resource definitions in use relates to and
0
14,059
2,789,879,857
IssuesEvent
2015-05-08 22:07:39
google/google-visualization-api-issues
https://api.github.com/repos/google/google-visualization-api-issues
closed
corechart package BarChart height/starting y position problem
Priority-Medium Type-Defect
Original [issue 402](https://code.google.com/p/google-visualization-api-issues/issues/detail?id=402) created by orwant on 2010-08-31T18:19:58.000Z: <b>What steps will reproduce the problem? Please provide a link to a</b> <b>demonstration page if at all possible, or attach code.</b> 1. http://code.google.com/apis/ajax/playground/?type=visualization#bar_chart You'll notice the title's y position is set to 54; 2. Change the height to 800. And the title's y position is 128. You would think the initial y position might only change because of the font size of the title. <b>What component is this issue related to (PieChart, LineChart, DataTable,</b> <b>Query, etc)?</b> BarChart in the corechart package <b>Are you using the test environment (version 1.1)?</b> <b>(If you are not sure, answer NO)</b> <b>What operating system and browser are you using?</b> Firefox <b>*********************************************************</b> <b>For developers viewing this issue: please click the 'star' icon to be</b> <b>notified of future changes, and to let us know how many of you are</b> <b>interested in seeing it resolved.</b> <b>*********************************************************</b>
1.0
corechart package BarChart height/starting y position problem - Original [issue 402](https://code.google.com/p/google-visualization-api-issues/issues/detail?id=402) created by orwant on 2010-08-31T18:19:58.000Z: <b>What steps will reproduce the problem? Please provide a link to a</b> <b>demonstration page if at all possible, or attach code.</b> 1. http://code.google.com/apis/ajax/playground/?type=visualization#bar_chart You'll notice the title's y position is set to 54; 2. Change the height to 800. And the title's y position is 128. You would think the initial y position might only change because of the font size of the title. <b>What component is this issue related to (PieChart, LineChart, DataTable,</b> <b>Query, etc)?</b> BarChart in the corechart package <b>Are you using the test environment (version 1.1)?</b> <b>(If you are not sure, answer NO)</b> <b>What operating system and browser are you using?</b> Firefox <b>*********************************************************</b> <b>For developers viewing this issue: please click the 'star' icon to be</b> <b>notified of future changes, and to let us know how many of you are</b> <b>interested in seeing it resolved.</b> <b>*********************************************************</b>
defect
corechart package barchart height starting y position problem original created by orwant on what steps will reproduce the problem please provide a link to a demonstration page if at all possible or attach code you ll notice the title s y position is set to change the height to and the title s y position is you would think the initial y position might only change because of the font size of the title what component is this issue related to piechart linechart datatable query etc barchart in the corechart package are you using the test environment version if you are not sure answer no what operating system and browser are you using firefox for developers viewing this issue please click the star icon to be notified of future changes and to let us know how many of you are interested in seeing it resolved
1
77,397
26,965,825,634
IssuesEvent
2023-02-08 22:16:00
SeleniumHQ/selenium
https://api.github.com/repos/SeleniumHQ/selenium
closed
[🐛 Bug]: "unable to discover open pages" with headless and user-data-dir
I-defect needs-triaging G-chromedriver
### What happened? I recently started getting an error when trying to create a webdriver: `selenium.common.exceptions.WebDriverException: Message: unknown error: unable to discover open pages` It seems like this happens if the two following settings are set: * `headless = True` * adding an argument with a `--user-data-dir` If I turn off either one of those options then it works, but if both are set I get the error specified above. Sample code snippet which is consistently reproducing on my machine attached below as well as full log output. Any ideas on how to debug this would be appreciated. Thanks! ### How can we reproduce the issue? ```shell #!/usr/bin/python3 from selenium import webdriver from selenium.webdriver.chrome.service import Service as ChromeService service = ChromeService(executable_path='/tmp/chromedriver') options = webdriver.ChromeOptions() options.headless = True options.add_argument('--user-data-dir=/tmp/data') driver = webdriver.Chrome(service=service, options=options) driver.close() ``` ### Relevant log output ```shell 14:02 drheld ~ python test.py Traceback (most recent call last): File "/home/drheld/test.py", line 10, in <module> driver = webdriver.Chrome(service=service, options=options) File "//home/drheld/.local/lib/python3.10/site-packages/selenium/webdriver/chrome/webdriver.py", line 69, in __init__ super().__init__(DesiredCapabilities.CHROME['browserName'], "goog", File "/home/drheld/.local/lib/python3.10/site-packages/selenium/webdriver/chromium/webdriver.py", line 92, in __init__ super().__init__( File "/home/drheld/.local/lib/python3.10/site-packages/selenium/webdriver/remote/webdriver.py", line 277, in __init__ self.start_session(capabilities, browser_profile) File "/home/drheld/.local/lib/python3.10/site-packages/selenium/webdriver/remote/webdriver.py", line 370, in start_session response = self.execute(Command.NEW_SESSION, parameters) File "/home/drheld/.local/lib/python3.10/site-packages/selenium/webdriver/remote/webdriver.py", 
line 435, in execute self.error_handler.check_response(response) File "/home/drheld/.local/lib/python3.10/site-packages/selenium/webdriver/remote/errorhandler.py", line 247, in check_response raise exception_class(message, screen, stacktrace) selenium.common.exceptions.WebDriverException: Message: unknown error: unable to discover open pages Stacktrace: #0 0x560d60699d93 <unknown> #1 0x560d604682d7 <unknown> #2 0x560d604952f4 <unknown> #3 0x560d6049017b <unknown> #4 0x560d6048ca3d <unknown> #5 0x560d604d14f4 <unknown> #6 0x560d604c8353 <unknown> #7 0x560d60497e40 <unknown> #8 0x560d60499038 <unknown> #9 0x560d606ed8be <unknown> #10 0x560d606f18f0 <unknown> #11 0x560d606d1f90 <unknown> #12 0x560d606f2b7d <unknown> #13 0x560d606c3578 <unknown> #14 0x560d60717348 <unknown> #15 0x560d607174d6 <unknown> #16 0x560d60731341 <unknown> #17 0x7f59d5a07fd4 <unknown> ``` ### Operating System Debian rodete 20230126.02.03RD ### Selenium version Python 3.10.9 ### What are the browser(s) and version(s) where you see this issue? Chrome 110.0.5481 ### What are the browser driver(s) and version(s) where you see this issue? Chromedriver 110.0.5481 google-chrome ### Are you using Selenium Grid? _No response_
1.0
[🐛 Bug]: "unable to discover open pages" with headless and user-data-dir - ### What happened? I recently started getting an error when trying to create a webdriver: `selenium.common.exceptions.WebDriverException: Message: unknown error: unable to discover open pages` It seems like this happens if the two following settings are set: * `headless = True` * adding an argument with a `--user-data-dir` If I turn off either one of those options then it works, but if both are set I get the error specified above. Sample code snippet which is consistently reproducing on my machine attached below as well as full log output. Any ideas on how to debug this would be appreciated. Thanks! ### How can we reproduce the issue? ```shell #!/usr/bin/python3 from selenium import webdriver from selenium.webdriver.chrome.service import Service as ChromeService service = ChromeService(executable_path='/tmp/chromedriver') options = webdriver.ChromeOptions() options.headless = True options.add_argument('--user-data-dir=/tmp/data') driver = webdriver.Chrome(service=service, options=options) driver.close() ``` ### Relevant log output ```shell 14:02 drheld ~ python test.py Traceback (most recent call last): File "/home/drheld/test.py", line 10, in <module> driver = webdriver.Chrome(service=service, options=options) File "//home/drheld/.local/lib/python3.10/site-packages/selenium/webdriver/chrome/webdriver.py", line 69, in __init__ super().__init__(DesiredCapabilities.CHROME['browserName'], "goog", File "/home/drheld/.local/lib/python3.10/site-packages/selenium/webdriver/chromium/webdriver.py", line 92, in __init__ super().__init__( File "/home/drheld/.local/lib/python3.10/site-packages/selenium/webdriver/remote/webdriver.py", line 277, in __init__ self.start_session(capabilities, browser_profile) File "/home/drheld/.local/lib/python3.10/site-packages/selenium/webdriver/remote/webdriver.py", line 370, in start_session response = self.execute(Command.NEW_SESSION, parameters) File 
"/home/drheld/.local/lib/python3.10/site-packages/selenium/webdriver/remote/webdriver.py", line 435, in execute self.error_handler.check_response(response) File "/home/drheld/.local/lib/python3.10/site-packages/selenium/webdriver/remote/errorhandler.py", line 247, in check_response raise exception_class(message, screen, stacktrace) selenium.common.exceptions.WebDriverException: Message: unknown error: unable to discover open pages Stacktrace: #0 0x560d60699d93 <unknown> #1 0x560d604682d7 <unknown> #2 0x560d604952f4 <unknown> #3 0x560d6049017b <unknown> #4 0x560d6048ca3d <unknown> #5 0x560d604d14f4 <unknown> #6 0x560d604c8353 <unknown> #7 0x560d60497e40 <unknown> #8 0x560d60499038 <unknown> #9 0x560d606ed8be <unknown> #10 0x560d606f18f0 <unknown> #11 0x560d606d1f90 <unknown> #12 0x560d606f2b7d <unknown> #13 0x560d606c3578 <unknown> #14 0x560d60717348 <unknown> #15 0x560d607174d6 <unknown> #16 0x560d60731341 <unknown> #17 0x7f59d5a07fd4 <unknown> ``` ### Operating System Debian rodete 20230126.02.03RD ### Selenium version Python 3.10.9 ### What are the browser(s) and version(s) where you see this issue? Chrome 110.0.5481 ### What are the browser driver(s) and version(s) where you see this issue? Chromedriver 110.0.5481 google-chrome ### Are you using Selenium Grid? _No response_
defect
unable to discover open pages with headless and user data dir what happened i recently started getting an error when trying to create a webdriver selenium common exceptions webdriverexception message unknown error unable to discover open pages it seems like this happens if the two following settings are set headless true adding an argument with a user data dir if i turn off either one of those options then it works but if both are set i get the error specified above sample code snippet which is consistently reproducing on my machine attached below as well as full log output any ideas on how to debug this would be appreciated thanks how can we reproduce the issue shell usr bin from selenium import webdriver from selenium webdriver chrome service import service as chromeservice service chromeservice executable path tmp chromedriver options webdriver chromeoptions options headless true options add argument user data dir tmp data driver webdriver chrome service service options options driver close relevant log output shell drheld python test py traceback most recent call last file home drheld test py line in driver webdriver chrome service service options options file home drheld local lib site packages selenium webdriver chrome webdriver py line in init super init desiredcapabilities chrome goog file home drheld local lib site packages selenium webdriver chromium webdriver py line in init super init file home drheld local lib site packages selenium webdriver remote webdriver py line in init self start session capabilities browser profile file home drheld local lib site packages selenium webdriver remote webdriver py line in start session response self execute command new session parameters file home drheld local lib site packages selenium webdriver remote webdriver py line in execute self error handler check response response file home drheld local lib site packages selenium webdriver remote errorhandler py line in check response raise exception class message screen 
stacktrace selenium common exceptions webdriverexception message unknown error unable to discover open pages stacktrace operating system debian rodete selenium version python what are the browser s and version s where you see this issue chrome what are the browser driver s and version s where you see this issue chromedriver google chrome are you using selenium grid no response
1
59,655
17,023,193,038
IssuesEvent
2021-07-03 00:47:49
tomhughes/trac-tickets
https://api.github.com/repos/tomhughes/trac-tickets
closed
Plazas not rendering
Component: mapnik Priority: major Resolution: fixed Type: defect
**[Submitted to the original trac issue database at 2.50pm, Friday, 21st December 2007]** The tagging FAQ seems to indicate that a "plaza" is marked up by drawing a closed way of type "highway=pedestrian" and "area=yes". Osmarender gets this right, Mapnik doesn't. Mapnik seems to ignore the "area=yes" tag and render a closed-loop or pedestrianised road with the name written on that roadway (would be better splashed across the middle of the area). See http://www.openstreetmap.org/?lat=51.61966&lon=-3.94169&zoom=17&layers=B0F The centre of the image is Castle Square, Swansea showing as loop of road. Similarly, there's a circular plaza area ("Kingsway Plaza") about 150m to the northwest doing the same thing. Possibly linked to this is shopping centres not rendering. 150m to the southwest of Castle Square is the Quadrant Shopping Centre. This is rendering as invisible all except for the name which follows part of the periphery-line. Again, Osmarender does this right.
1.0
Plazas not rendering - **[Submitted to the original trac issue database at 2.50pm, Friday, 21st December 2007]** The tagging FAQ seems to indicate that a "plaza" is marked up by drawing a closed way of type "highway=pedestrian" and "area=yes". Osmarender gets this right, Mapnik doesn't. Mapnik seems to ignore the "area=yes" tag and render a closed-loop or pedestrianised road with the name written on that roadway (would be better splashed across the middle of the area). See http://www.openstreetmap.org/?lat=51.61966&lon=-3.94169&zoom=17&layers=B0F The centre of the image is Castle Square, Swansea showing as loop of road. Similarly, there's a circular plaza area ("Kingsway Plaza") about 150m to the northwest doing the same thing. Possibly linked to this is shopping centres not rendering. 150m to the southwest of Castle Square is the Quadrant Shopping Centre. This is rendering as invisible all except for the name which follows part of the periphery-line. Again, Osmarender does this right.
defect
plazas not rendering the tagging faq seems to indicate that a plaza is marked up by drawing a closed way of type highway pedestrian and area yes osmarender gets this right mapnik doesn t mapnik seems to ignore the area yes tag and render a closed loop or pedestrianised road with the name written on that roadway would be better splashed across the middle of the area see the centre of the image is castle square swansea showing as loop of road similarly there s a circular plaza area kingsway plaza about to the northwest doing the same thing possibly linked to this is shopping centres not rendering to the southwest of castle square is the quadrant shopping centre this is rendering as invisible all except for the name which follows part of the periphery line again osmarender does this right
1
4,234
2,610,089,763
IssuesEvent
2015-02-26 18:27:10
chrsmith/dsdsdaadf
https://api.github.com/repos/chrsmith/dsdsdaadf
opened
深圳痘痘如何治疗比较好
auto-migrated Priority-Medium Type-Defect
``` 深圳痘痘如何治疗比较好【深圳韩方科颜全国热线400-869-1818�� �24小时QQ4008691818】深圳韩方科颜专业祛痘连锁机构,机构以�� �国秘方——韩方科颜这一国妆准字号治疗型权威,祛痘佳品� ��韩方科颜专业祛痘连锁机构,采用韩国秘方配合专业“不反 弹”健康祛痘技术并结合先进“先进豪华彩光”仪,开创国�� �专业治疗粉刺、痤疮签约包治先河,成功消除了许多顾客脸� ��的痘痘。 ``` ----- Original issue reported on code.google.com by `szft...@163.com` on 14 May 2014 at 7:37
1.0
深圳痘痘如何治疗比较好 - ``` 深圳痘痘如何治疗比较好【深圳韩方科颜全国热线400-869-1818�� �24小时QQ4008691818】深圳韩方科颜专业祛痘连锁机构,机构以�� �国秘方——韩方科颜这一国妆准字号治疗型权威,祛痘佳品� ��韩方科颜专业祛痘连锁机构,采用韩国秘方配合专业“不反 弹”健康祛痘技术并结合先进“先进豪华彩光”仪,开创国�� �专业治疗粉刺、痤疮签约包治先河,成功消除了许多顾客脸� ��的痘痘。 ``` ----- Original issue reported on code.google.com by `szft...@163.com` on 14 May 2014 at 7:37
defect
深圳痘痘如何治疗比较好 深圳痘痘如何治疗比较好【 �� � 】深圳韩方科颜专业祛痘连锁机构,机构以�� �国秘方——韩方科颜这一国妆准字号治疗型权威,祛痘佳品� ��韩方科颜专业祛痘连锁机构,采用韩国秘方配合专业“不反 弹”健康祛痘技术并结合先进“先进豪华彩光”仪,开创国�� �专业治疗粉刺、痤疮签约包治先河,成功消除了许多顾客脸� ��的痘痘。 original issue reported on code google com by szft com on may at
1
11,198
2,641,737,828
IssuesEvent
2015-03-11 19:27:23
chrsmith/html5rocks
https://api.github.com/repos/chrsmith/html5rocks
closed
slides: Reusing the framework (licencing issue)
Priority-Medium Slides Type-Defect
Original [issue 101](https://code.google.com/p/html5rocks/issues/detail?id=101) created by chrsmith on 2010-07-29T00:46:18.000Z: Reported by yorirou, Jun 06, 2010 May I reuse the framework (only the HTML5 part, css, js, so no content or media) for a GPLv2 project? Comment 1 by andreas.kuckartz, Jun 17, 2010 APL2 and GPL2 are considered to be incompatible. See http://www.gnu.org/licenses/license-list.html and http://www.apache.org/licenses/GPL-compatibility.html The answer to your question therfore depends on a decision by the copyright owners. Comment 2 by yorirou, Jun 17, 2010 Is it possible to dual license the code? I would like to use it for Drupal.
1.0
slides: Reusing the framework (licencing issue) - Original [issue 101](https://code.google.com/p/html5rocks/issues/detail?id=101) created by chrsmith on 2010-07-29T00:46:18.000Z: Reported by yorirou, Jun 06, 2010 May I reuse the framework (only the HTML5 part, css, js, so no content or media) for a GPLv2 project? Comment 1 by andreas.kuckartz, Jun 17, 2010 APL2 and GPL2 are considered to be incompatible. See http://www.gnu.org/licenses/license-list.html and http://www.apache.org/licenses/GPL-compatibility.html The answer to your question therfore depends on a decision by the copyright owners. Comment 2 by yorirou, Jun 17, 2010 Is it possible to dual license the code? I would like to use it for Drupal.
defect
slides reusing the framework licencing issue original created by chrsmith on reported by yorirou jun may i reuse the framework only the part css js so no content or media for a project comment by andreas kuckartz jun and are considered to be incompatible see and the answer to your question therfore depends on a decision by the copyright owners comment by yorirou jun is it possible to dual license the code i would like to use it for drupal
1
257,332
8,136,024,266
IssuesEvent
2018-08-20 06:49:23
bbottema/simple-java-mail
https://api.github.com/repos/bbottema/simple-java-mail
closed
When reading (chinese) .msg files, HTML converted from RTF is completely garbled (encoding issue)
Priority-Medium bug
The problem is that the RTF's included codepage is ignored and all the hex bytes for text are converted one at the time. However, codepage 936 (chinese charset) requires two bytes per character (double byte character set, DBCS). Moreover, any code page defined in the RTF header should be honored when parsing user text. Solved by https://github.com/bbottema/outlook-message-parser/issues/3.
1.0
When reading (chinese) .msg files, HTML converted from RTF is completely garbled (encoding issue) - The problem is that the RTF's included codepage is ignored and all the hex bytes for text are converted one at the time. However, codepage 936 (chinese charset) requires two bytes per character (double byte character set, DBCS). Moreover, any code page defined in the RTF header should be honored when parsing user text. Solved by https://github.com/bbottema/outlook-message-parser/issues/3.
non_defect
when reading chinese msg files html converted from rtf is completely garbled encoding issue the problem is that the rtf s included codepage is ignored and all the hex bytes for text are converted one at the time however codepage chinese charset requires two bytes per character double byte character set dbcs moreover any code page defined in the rtf header should be honored when parsing user text solved by
0
33,680
7,196,556,191
IssuesEvent
2018-02-05 03:50:34
DivinumOfficium/divinum-officium
https://api.github.com/repos/DivinumOfficium/divinum-officium
closed
Accents and Ligature Consistency
Component-Text Priority-Medium Type-Defect auto-migrated
``` Make all the text have Latin accents where required and ligatures (æ œ á é í ó ú) as required. This is a big job. It would probably be possible to slurp a big liturgical Latin text from somewhere, and also use our Psalms repository, to get a dictionary of unaccented->accented transformations word by word, and then apply the substitutions automatically. ``` Original issue reported on code.google.com by `APMarcel...@gmail.com` on 24 Sep 2011 at 9:18
1.0
Accents and Ligature Consistency - ``` Make all the text have Latin accents where required and ligatures (æ œ á é í ó ú) as required. This is a big job. It would probably be possible to slurp a big liturgical Latin text from somewhere, and also use our Psalms repository, to get a dictionary of unaccented->accented transformations word by word, and then apply the substitutions automatically. ``` Original issue reported on code.google.com by `APMarcel...@gmail.com` on 24 Sep 2011 at 9:18
defect
accents and ligature consistency make all the text have latin accents where required and ligatures æ œ á é í ó ú as required this is a big job it would probably be possible to slurp a big liturgical latin text from somewhere and also use our psalms repository to get a dictionary of unaccented accented transformations word by word and then apply the substitutions automatically original issue reported on code google com by apmarcel gmail com on sep at
1
24,680
4,074,119,104
IssuesEvent
2016-05-28 07:08:09
jOOQ/jOOQ
https://api.github.com/repos/jOOQ/jOOQ
opened
Don't throw SQLDialectNotSupportedException when an unsupported data type is encountered as a bind variable
C: Functionality P: Medium T: Defect
We're currently throwing a `SQLDialectNotSupportedException` when an unsupported data type (e.g. `java.util.List` is encountered as a bind variable in `DSL.val()`. This is misleading for users, we should throw a more specific exception. ---- See: http://stackoverflow.com/q/37485419/521799
1.0
Don't throw SQLDialectNotSupportedException when an unsupported data type is encountered as a bind variable - We're currently throwing a `SQLDialectNotSupportedException` when an unsupported data type (e.g. `java.util.List` is encountered as a bind variable in `DSL.val()`. This is misleading for users, we should throw a more specific exception. ---- See: http://stackoverflow.com/q/37485419/521799
defect
don t throw sqldialectnotsupportedexception when an unsupported data type is encountered as a bind variable we re currently throwing a sqldialectnotsupportedexception when an unsupported data type e g java util list is encountered as a bind variable in dsl val this is misleading for users we should throw a more specific exception see
1
76,278
26,342,507,260
IssuesEvent
2023-01-10 18:54:47
apache/jmeter
https://api.github.com/repos/apache/jmeter
opened
Test execution creates /src/dist-check/temp/ which is out of /build/
defect to-triage
### Expected behavior Test execution should create files in `/build/` folders only, so the temp files are not accidentally committed under source control ### Actual behavior One of the tests generates `/src/dist-check/temp/` ### Steps to reproduce the problem `./gradlew :src:dist-check:test` ### JMeter Version 5.5 ### Java Version _No response_ ### OS Version _No response_
1.0
Test execution creates /src/dist-check/temp/ which is out of /build/ - ### Expected behavior Test execution should create files in `/build/` folders only, so the temp files are not accidentally committed under source control ### Actual behavior One of the tests generates `/src/dist-check/temp/` ### Steps to reproduce the problem `./gradlew :src:dist-check:test` ### JMeter Version 5.5 ### Java Version _No response_ ### OS Version _No response_
defect
test execution creates src dist check temp which is out of build expected behavior test execution should create files in build folders only so the temp files are not accidentally committed under source control actual behavior one of the tests generates src dist check temp steps to reproduce the problem gradlew src dist check test jmeter version java version no response os version no response
1
62,798
12,244,961,507
IssuesEvent
2020-05-05 12:09:13
HGustavs/LenaSYS
https://api.github.com/repos/HGustavs/LenaSYS
closed
Update functions "hideDrop" and "switchDrop" to better fit the code standard
CodeViewer Group-1-2020
As a part of dividing issue #8232, this issue focuses on making the functions "hideDrop" and "switchDrop" within codeviewer.js to better fit the code-standard. The primary focus is on updating functionality rather than fixing local changes (e.g indentation and parenthesis placement). Changes made should be relatively small with few updated rows.
1.0
Update functions "hideDrop" and "switchDrop" to better fit the code standard - As a part of dividing issue #8232, this issue focuses on making the functions "hideDrop" and "switchDrop" within codeviewer.js to better fit the code-standard. The primary focus is on updating functionality rather than fixing local changes (e.g indentation and parenthesis placement). Changes made should be relatively small with few updated rows.
non_defect
update functions hidedrop and switchdrop to better fit the code standard as a part of dividing issue this issue focuses on making the functions hidedrop and switchdrop within codeviewer js to better fit the code standard the primary focus is on updating functionality rather than fixing local changes e g indentation and parenthesis placement changes made should be relatively small with few updated rows
0
73,776
24,796,570,386
IssuesEvent
2022-10-24 17:48:35
zed-industries/feedback
https://api.github.com/repos/zed-industries/feedback
opened
Typescript autocomplete menu performance
defect typescript language
### Check for existing issues - [X] Completed ### Describe the bug / provide steps to reproduce it A user on Twitter wrote: > I'm currently using Zed as my main editor, but the TS autocomplete performance is not there yet. It's slow and by the time I decide to choose an option from the autocomplete, it changes and I select the wrong thing. > > Why is the TS performance so slow? This is something I've experience as well. Not sure if its on our end of the language server end, but results appear super slow and you can see them being slowly filtered down, after you have finished typing. ### Expected behavior - ### Environment Zed 0.61.0 – /Applications/Zed.app macOS 12.6 architecture x86_64 ### If applicable, add mockups / screenshots to help explain present your vision of the feature _No response_ ### If applicable, attach your `~/Library/Logs/Zed/Zed.log` file to this issue _No response_
1.0
Typescript autocomplete menu performance - ### Check for existing issues - [X] Completed ### Describe the bug / provide steps to reproduce it A user on Twitter wrote: > I'm currently using Zed as my main editor, but the TS autocomplete performance is not there yet. It's slow and by the time I decide to choose an option from the autocomplete, it changes and I select the wrong thing. > > Why is the TS performance so slow? This is something I've experience as well. Not sure if its on our end of the language server end, but results appear super slow and you can see them being slowly filtered down, after you have finished typing. ### Expected behavior - ### Environment Zed 0.61.0 – /Applications/Zed.app macOS 12.6 architecture x86_64 ### If applicable, add mockups / screenshots to help explain present your vision of the feature _No response_ ### If applicable, attach your `~/Library/Logs/Zed/Zed.log` file to this issue _No response_
defect
typescript autocomplete menu performance check for existing issues completed describe the bug provide steps to reproduce it a user on twitter wrote i m currently using zed as my main editor but the ts autocomplete performance is not there yet it s slow and by the time i decide to choose an option from the autocomplete it changes and i select the wrong thing why is the ts performance so slow this is something i ve experience as well not sure if its on our end of the language server end but results appear super slow and you can see them being slowly filtered down after you have finished typing expected behavior environment zed – applications zed app macos architecture if applicable add mockups screenshots to help explain present your vision of the feature no response if applicable attach your library logs zed zed log file to this issue no response
1
329,162
28,179,734,042
IssuesEvent
2023-04-04 00:48:59
elastic/elasticsearch
https://api.github.com/repos/elastic/elasticsearch
closed
[CI] RemoteClusterSecurityRestIT and RemoteClusterSecuritySpecialUserIT and RemoteClusterSecurityWithSameModelRemotesRestIT classMethods failing
>test-failure :Security/Security Team:Security
**Build scan:** https://gradle-enterprise.elastic.co/s/3wwnehg5i63ow/tests/:x-pack:plugin:security:qa:multi-cluster:javaRestTest/org.elasticsearch.xpack.remotecluster.RemoteClusterSecurityRestIT **Reproduction line:** ``` null ``` **Applicable branches:** main **Reproduces locally?:** Didn't try **Failure history:** https://gradle-enterprise.elastic.co/scans/tests?tests.container=org.elasticsearch.xpack.remotecluster.RemoteClusterSecurityRestIT&tests.test=classMethod **Failure excerpt:** ``` java.lang.RuntimeException: An error occurred orchestrating test cluster. at __randomizedtesting.SeedInfo.seed([8D2E6C76D2C38EC3]:0) at org.elasticsearch.test.cluster.local.LocalClusterHandle.execute(LocalClusterHandle.java:236) at org.elasticsearch.test.cluster.local.LocalClusterHandle.execute(LocalClusterHandle.java:241) at org.elasticsearch.test.cluster.local.LocalClusterHandle.start(LocalClusterHandle.java:68) at org.elasticsearch.test.cluster.local.LocalElasticsearchCluster$1.evaluate(LocalElasticsearchCluster.java:38) at org.elasticsearch.test.cluster.local.LocalElasticsearchCluster$1.evaluate(LocalElasticsearchCluster.java:39) at org.apache.lucene.tests.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:43) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.tests.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:38) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at 
org.apache.lucene.tests.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53) at org.apache.lucene.tests.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:43) at org.apache.lucene.tests.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:44) at org.apache.lucene.tests.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:60) at org.apache.lucene.tests.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:47) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:390) at com.carrotsearch.randomizedtesting.ThreadLeakControl.lambda$forkTimeoutingTask$0(ThreadLeakControl.java:850) at java.lang.Thread.run(Thread.java:1623) Caused by: java.util.concurrent.CancellationException: Request execution cancelled at org.apache.http.impl.nio.client.CloseableHttpAsyncClientBase.execute(CloseableHttpAsyncClientBase.java:114) at org.apache.http.impl.nio.client.InternalHttpAsyncClient.execute(InternalHttpAsyncClient.java:138) at org.elasticsearch.client.RestClient.performRequest(RestClient.java:296) at org.elasticsearch.client.RestClient.performRequest(RestClient.java:288) at org.elasticsearch.xpack.remotecluster.AbstractRemoteClusterSecurityTestCase.performRequestAgainstFulfillingCluster(AbstractRemoteClusterSecurityTestCase.java:173) at org.elasticsearch.xpack.remotecluster.AbstractRemoteClusterSecurityTestCase.createCrossClusterAccessApiKey(AbstractRemoteClusterSecurityTestCase.java:127) at org.elasticsearch.xpack.remotecluster.RemoteClusterSecurityRestIT.lambda$static$0(RemoteClusterSecurityRestIT.java:69) at org.elasticsearch.test.cluster.local.AbstractLocalSpecBuilder.lambda$keystore$12(AbstractLocalSpecBuilder.java:165) at org.elasticsearch.test.cluster.local.LocalClusterSpec$LocalNodeSpec.lambda$resolveKeystore$1(LocalClusterSpec.java:244) 
at java.lang.Iterable.forEach(Iterable.java:75) at org.elasticsearch.test.cluster.local.LocalClusterSpec$LocalNodeSpec.resolveKeystore(LocalClusterSpec.java:244) at org.elasticsearch.test.cluster.local.LocalClusterSpec$LocalNodeSpec.getSetting(LocalClusterSpec.java:208) at org.elasticsearch.test.cluster.local.LocalClusterSpec$LocalNodeSpec.isMasterEligible(LocalClusterSpec.java:194) at java.util.stream.ReferencePipeline$2$1.accept(ReferencePipeline.java:178) at java.util.AbstractList$RandomAccessSpliterator.forEachRemaining(AbstractList.java:722) at java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:509) at java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:499) at java.util.stream.ReduceOps$ReduceOp.evaluateSequential(ReduceOps.java:921) at java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234) at java.util.stream.ReferencePipeline.collect(ReferencePipeline.java:682) at org.elasticsearch.test.cluster.local.DefaultSettingsProvider.get(DefaultSettingsProvider.java:73) at org.elasticsearch.test.cluster.local.LocalClusterSpec$LocalNodeSpec.lambda$resolveSettings$0(LocalClusterSpec.java:226) at java.util.ArrayList.forEach(ArrayList.java:1511) at org.elasticsearch.test.cluster.local.LocalClusterSpec$LocalNodeSpec.resolveSettings(LocalClusterSpec.java:226) at org.elasticsearch.test.cluster.local.LocalClusterFactory$Node.writeConfiguration(LocalClusterFactory.java:331) at org.elasticsearch.test.cluster.local.LocalClusterFactory$Node.start(LocalClusterFactory.java:139) at org.elasticsearch.test.cluster.local.LocalClusterHandle.lambda$start$0(LocalClusterHandle.java:68) at java.util.stream.ForEachOps$ForEachOp$OfRef.accept(ForEachOps.java:183) at java.util.AbstractList$RandomAccessSpliterator.forEachRemaining(AbstractList.java:722) at java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:509) at java.util.stream.ForEachOps$ForEachTask.compute(ForEachOps.java:290) at 
java.util.concurrent.CountedCompleter.exec(CountedCompleter.java:754) at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) at java.util.concurrent.ForkJoinTask.invoke(ForkJoinTask.java:667) at java.util.stream.ForEachOps$ForEachOp.evaluateParallel(ForEachOps.java:159) at java.util.stream.ForEachOps$ForEachOp$OfRef.evaluateParallel(ForEachOps.java:173) at java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:233) at java.util.stream.ReferencePipeline.forEach(ReferencePipeline.java:596) at java.util.stream.ReferencePipeline$Head.forEach(ReferencePipeline.java:765) at org.elasticsearch.test.cluster.local.LocalClusterHandle.lambda$start$1(LocalClusterHandle.java:68) at org.elasticsearch.test.cluster.local.LocalClusterHandle.lambda$execute$14(LocalClusterHandle.java:242) at java.util.concurrent.ForkJoinTask$AdaptedCallable.exec(ForkJoinTask.java:1456) at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) ```
1.0
[CI] RemoteClusterSecurityRestIT and RemoteClusterSecuritySpecialUserIT and RemoteClusterSecurityWithSameModelRemotesRestIT classMethods failing - **Build scan:** https://gradle-enterprise.elastic.co/s/3wwnehg5i63ow/tests/:x-pack:plugin:security:qa:multi-cluster:javaRestTest/org.elasticsearch.xpack.remotecluster.RemoteClusterSecurityRestIT **Reproduction line:** ``` null ``` **Applicable branches:** main **Reproduces locally?:** Didn't try **Failure history:** https://gradle-enterprise.elastic.co/scans/tests?tests.container=org.elasticsearch.xpack.remotecluster.RemoteClusterSecurityRestIT&tests.test=classMethod **Failure excerpt:** ``` java.lang.RuntimeException: An error occurred orchestrating test cluster. at __randomizedtesting.SeedInfo.seed([8D2E6C76D2C38EC3]:0) at org.elasticsearch.test.cluster.local.LocalClusterHandle.execute(LocalClusterHandle.java:236) at org.elasticsearch.test.cluster.local.LocalClusterHandle.execute(LocalClusterHandle.java:241) at org.elasticsearch.test.cluster.local.LocalClusterHandle.start(LocalClusterHandle.java:68) at org.elasticsearch.test.cluster.local.LocalElasticsearchCluster$1.evaluate(LocalElasticsearchCluster.java:38) at org.elasticsearch.test.cluster.local.LocalElasticsearchCluster$1.evaluate(LocalElasticsearchCluster.java:39) at org.apache.lucene.tests.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:43) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.tests.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:38) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.tests.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53) at org.apache.lucene.tests.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:43) at org.apache.lucene.tests.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:44) at org.apache.lucene.tests.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:60) at org.apache.lucene.tests.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:47) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:390) at com.carrotsearch.randomizedtesting.ThreadLeakControl.lambda$forkTimeoutingTask$0(ThreadLeakControl.java:850) at java.lang.Thread.run(Thread.java:1623) Caused by: java.util.concurrent.CancellationException: Request execution cancelled at org.apache.http.impl.nio.client.CloseableHttpAsyncClientBase.execute(CloseableHttpAsyncClientBase.java:114) at org.apache.http.impl.nio.client.InternalHttpAsyncClient.execute(InternalHttpAsyncClient.java:138) at org.elasticsearch.client.RestClient.performRequest(RestClient.java:296) at org.elasticsearch.client.RestClient.performRequest(RestClient.java:288) at org.elasticsearch.xpack.remotecluster.AbstractRemoteClusterSecurityTestCase.performRequestAgainstFulfillingCluster(AbstractRemoteClusterSecurityTestCase.java:173) at org.elasticsearch.xpack.remotecluster.AbstractRemoteClusterSecurityTestCase.createCrossClusterAccessApiKey(AbstractRemoteClusterSecurityTestCase.java:127) at org.elasticsearch.xpack.remotecluster.RemoteClusterSecurityRestIT.lambda$static$0(RemoteClusterSecurityRestIT.java:69) at org.elasticsearch.test.cluster.local.AbstractLocalSpecBuilder.lambda$keystore$12(AbstractLocalSpecBuilder.java:165) at 
org.elasticsearch.test.cluster.local.LocalClusterSpec$LocalNodeSpec.lambda$resolveKeystore$1(LocalClusterSpec.java:244) at java.lang.Iterable.forEach(Iterable.java:75) at org.elasticsearch.test.cluster.local.LocalClusterSpec$LocalNodeSpec.resolveKeystore(LocalClusterSpec.java:244) at org.elasticsearch.test.cluster.local.LocalClusterSpec$LocalNodeSpec.getSetting(LocalClusterSpec.java:208) at org.elasticsearch.test.cluster.local.LocalClusterSpec$LocalNodeSpec.isMasterEligible(LocalClusterSpec.java:194) at java.util.stream.ReferencePipeline$2$1.accept(ReferencePipeline.java:178) at java.util.AbstractList$RandomAccessSpliterator.forEachRemaining(AbstractList.java:722) at java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:509) at java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:499) at java.util.stream.ReduceOps$ReduceOp.evaluateSequential(ReduceOps.java:921) at java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234) at java.util.stream.ReferencePipeline.collect(ReferencePipeline.java:682) at org.elasticsearch.test.cluster.local.DefaultSettingsProvider.get(DefaultSettingsProvider.java:73) at org.elasticsearch.test.cluster.local.LocalClusterSpec$LocalNodeSpec.lambda$resolveSettings$0(LocalClusterSpec.java:226) at java.util.ArrayList.forEach(ArrayList.java:1511) at org.elasticsearch.test.cluster.local.LocalClusterSpec$LocalNodeSpec.resolveSettings(LocalClusterSpec.java:226) at org.elasticsearch.test.cluster.local.LocalClusterFactory$Node.writeConfiguration(LocalClusterFactory.java:331) at org.elasticsearch.test.cluster.local.LocalClusterFactory$Node.start(LocalClusterFactory.java:139) at org.elasticsearch.test.cluster.local.LocalClusterHandle.lambda$start$0(LocalClusterHandle.java:68) at java.util.stream.ForEachOps$ForEachOp$OfRef.accept(ForEachOps.java:183) at java.util.AbstractList$RandomAccessSpliterator.forEachRemaining(AbstractList.java:722) at java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:509) at 
java.util.stream.ForEachOps$ForEachTask.compute(ForEachOps.java:290) at java.util.concurrent.CountedCompleter.exec(CountedCompleter.java:754) at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) at java.util.concurrent.ForkJoinTask.invoke(ForkJoinTask.java:667) at java.util.stream.ForEachOps$ForEachOp.evaluateParallel(ForEachOps.java:159) at java.util.stream.ForEachOps$ForEachOp$OfRef.evaluateParallel(ForEachOps.java:173) at java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:233) at java.util.stream.ReferencePipeline.forEach(ReferencePipeline.java:596) at java.util.stream.ReferencePipeline$Head.forEach(ReferencePipeline.java:765) at org.elasticsearch.test.cluster.local.LocalClusterHandle.lambda$start$1(LocalClusterHandle.java:68) at org.elasticsearch.test.cluster.local.LocalClusterHandle.lambda$execute$14(LocalClusterHandle.java:242) at java.util.concurrent.ForkJoinTask$AdaptedCallable.exec(ForkJoinTask.java:1456) at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387) at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312) at java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843) at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808) at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188) ```
non_defect
remoteclustersecurityrestit and remoteclustersecurityspecialuserit and remoteclustersecuritywithsamemodelremotesrestit classmethods failing build scan reproduction line null applicable branches main reproduces locally didn t try failure history failure excerpt java lang runtimeexception an error occurred orchestrating test cluster at randomizedtesting seedinfo seed at org elasticsearch test cluster local localclusterhandle execute localclusterhandle java at org elasticsearch test cluster local localclusterhandle execute localclusterhandle java at org elasticsearch test cluster local localclusterhandle start localclusterhandle java at org elasticsearch test cluster local localelasticsearchcluster evaluate localelasticsearchcluster java at org elasticsearch test cluster local localelasticsearchcluster evaluate localelasticsearchcluster java at org apache lucene tests util abstractbeforeafterrule evaluate abstractbeforeafterrule java at com carrotsearch randomizedtesting rules statementadapter evaluate statementadapter java at org apache lucene tests util testrulestoreclassname evaluate testrulestoreclassname java at com carrotsearch randomizedtesting rules noshadowingoroverridesonmethodsrule evaluate noshadowingoroverridesonmethodsrule java at com carrotsearch randomizedtesting rules noshadowingoroverridesonmethodsrule evaluate noshadowingoroverridesonmethodsrule java at com carrotsearch randomizedtesting rules statementadapter evaluate statementadapter java at com carrotsearch randomizedtesting rules statementadapter evaluate statementadapter java at org apache lucene tests util testruleassertionsrequired evaluate testruleassertionsrequired java at org apache lucene tests util abstractbeforeafterrule evaluate abstractbeforeafterrule java at org apache lucene tests util testrulemarkfailure evaluate testrulemarkfailure java at org apache lucene tests util testruleignoreaftermaxfailures evaluate testruleignoreaftermaxfailures java at org apache lucene tests util 
testruleignoretestsuites evaluate testruleignoretestsuites java at com carrotsearch randomizedtesting rules statementadapter evaluate statementadapter java at com carrotsearch randomizedtesting threadleakcontrol statementrunner run threadleakcontrol java at com carrotsearch randomizedtesting threadleakcontrol lambda forktimeoutingtask threadleakcontrol java at java lang thread run thread java caused by java util concurrent cancellationexception request execution cancelled at org apache http impl nio client closeablehttpasyncclientbase execute closeablehttpasyncclientbase java at org apache http impl nio client internalhttpasyncclient execute internalhttpasyncclient java at org elasticsearch client restclient performrequest restclient java at org elasticsearch client restclient performrequest restclient java at org elasticsearch xpack remotecluster abstractremoteclustersecuritytestcase performrequestagainstfulfillingcluster abstractremoteclustersecuritytestcase java at org elasticsearch xpack remotecluster abstractremoteclustersecuritytestcase createcrossclusteraccessapikey abstractremoteclustersecuritytestcase java at org elasticsearch xpack remotecluster remoteclustersecurityrestit lambda static remoteclustersecurityrestit java at org elasticsearch test cluster local abstractlocalspecbuilder lambda keystore abstractlocalspecbuilder java at org elasticsearch test cluster local localclusterspec localnodespec lambda resolvekeystore localclusterspec java at java lang iterable foreach iterable java at org elasticsearch test cluster local localclusterspec localnodespec resolvekeystore localclusterspec java at org elasticsearch test cluster local localclusterspec localnodespec getsetting localclusterspec java at org elasticsearch test cluster local localclusterspec localnodespec ismastereligible localclusterspec java at java util stream referencepipeline accept referencepipeline java at java util abstractlist randomaccessspliterator foreachremaining abstractlist java at 
java util stream abstractpipeline copyinto abstractpipeline java at java util stream abstractpipeline wrapandcopyinto abstractpipeline java at java util stream reduceops reduceop evaluatesequential reduceops java at java util stream abstractpipeline evaluate abstractpipeline java at java util stream referencepipeline collect referencepipeline java at org elasticsearch test cluster local defaultsettingsprovider get defaultsettingsprovider java at org elasticsearch test cluster local localclusterspec localnodespec lambda resolvesettings localclusterspec java at java util arraylist foreach arraylist java at org elasticsearch test cluster local localclusterspec localnodespec resolvesettings localclusterspec java at org elasticsearch test cluster local localclusterfactory node writeconfiguration localclusterfactory java at org elasticsearch test cluster local localclusterfactory node start localclusterfactory java at org elasticsearch test cluster local localclusterhandle lambda start localclusterhandle java at java util stream foreachops foreachop ofref accept foreachops java at java util abstractlist randomaccessspliterator foreachremaining abstractlist java at java util stream abstractpipeline copyinto abstractpipeline java at java util stream foreachops foreachtask compute foreachops java at java util concurrent countedcompleter exec countedcompleter java at java util concurrent forkjointask doexec forkjointask java at java util concurrent forkjointask invoke forkjointask java at java util stream foreachops foreachop evaluateparallel foreachops java at java util stream foreachops foreachop ofref evaluateparallel foreachops java at java util stream abstractpipeline evaluate abstractpipeline java at java util stream referencepipeline foreach referencepipeline java at java util stream referencepipeline head foreach referencepipeline java at org elasticsearch test cluster local localclusterhandle lambda start localclusterhandle java at org elasticsearch test cluster 
local localclusterhandle lambda execute localclusterhandle java at java util concurrent forkjointask adaptedcallable exec forkjointask java at java util concurrent forkjointask doexec forkjointask java at java util concurrent forkjoinpool workqueue toplevelexec forkjoinpool java at java util concurrent forkjoinpool scan forkjoinpool java at java util concurrent forkjoinpool runworker forkjoinpool java at java util concurrent forkjoinworkerthread run forkjoinworkerthread java
0
110,646
16,985,723,445
IssuesEvent
2021-06-30 14:11:34
turkdevops/prettier
https://api.github.com/repos/turkdevops/prettier
opened
CVE-2019-20149 (High) detected in kind-of-6.0.2.tgz
security vulnerability
## CVE-2019-20149 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>kind-of-6.0.2.tgz</b></p></summary> <p>Get the native type of a value.</p> <p>Library home page: <a href="https://registry.npmjs.org/kind-of/-/kind-of-6.0.2.tgz">https://registry.npmjs.org/kind-of/-/kind-of-6.0.2.tgz</a></p> <p>Path to dependency file: prettier/website/package.json</p> <p>Path to vulnerable library: prettier/website/node_modules/kind-of</p> <p> Dependency Hierarchy: - webpack-cli-4.6.0.tgz (Root Library) - webpack-merge-5.7.3.tgz - clone-deep-4.0.1.tgz - :x: **kind-of-6.0.2.tgz** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/turkdevops/prettier/commit/809a19cb68976d3c1564f9a9770b0deeb5c4e158">809a19cb68976d3c1564f9a9770b0deeb5c4e158</a></p> <p>Found in base branch: <b>patch-release</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> ctorName in index.js in kind-of v6.0.2 allows external user input to overwrite certain internal attributes via a conflicting name, as demonstrated by 'constructor': {'name':'Symbol'}. Hence, a crafted payload can overwrite this builtin attribute to manipulate the type detection result. 
<p>Publish Date: 2019-12-30 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-20149>CVE-2019-20149</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: High - Availability Impact: None </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="http://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2019-20149">http://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2019-20149</a></p> <p>Release Date: 2019-12-30</p> <p>Fix Resolution: 6.0.3</p> </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
True
CVE-2019-20149 (High) detected in kind-of-6.0.2.tgz - ## CVE-2019-20149 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>kind-of-6.0.2.tgz</b></p></summary> <p>Get the native type of a value.</p> <p>Library home page: <a href="https://registry.npmjs.org/kind-of/-/kind-of-6.0.2.tgz">https://registry.npmjs.org/kind-of/-/kind-of-6.0.2.tgz</a></p> <p>Path to dependency file: prettier/website/package.json</p> <p>Path to vulnerable library: prettier/website/node_modules/kind-of</p> <p> Dependency Hierarchy: - webpack-cli-4.6.0.tgz (Root Library) - webpack-merge-5.7.3.tgz - clone-deep-4.0.1.tgz - :x: **kind-of-6.0.2.tgz** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/turkdevops/prettier/commit/809a19cb68976d3c1564f9a9770b0deeb5c4e158">809a19cb68976d3c1564f9a9770b0deeb5c4e158</a></p> <p>Found in base branch: <b>patch-release</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> ctorName in index.js in kind-of v6.0.2 allows external user input to overwrite certain internal attributes via a conflicting name, as demonstrated by 'constructor': {'name':'Symbol'}. Hence, a crafted payload can overwrite this builtin attribute to manipulate the type detection result. 
<p>Publish Date: 2019-12-30 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-20149>CVE-2019-20149</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: High - Availability Impact: None </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="http://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2019-20149">http://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2019-20149</a></p> <p>Release Date: 2019-12-30</p> <p>Fix Resolution: 6.0.3</p> </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
non_defect
cve high detected in kind of tgz cve high severity vulnerability vulnerable library kind of tgz get the native type of a value library home page a href path to dependency file prettier website package json path to vulnerable library prettier website node modules kind of dependency hierarchy webpack cli tgz root library webpack merge tgz clone deep tgz x kind of tgz vulnerable library found in head commit a href found in base branch patch release vulnerability details ctorname in index js in kind of allows external user input to overwrite certain internal attributes via a conflicting name as demonstrated by constructor name symbol hence a crafted payload can overwrite this builtin attribute to manipulate the type detection result publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact high availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with whitesource
0
33,317
7,088,044,631
IssuesEvent
2018-01-11 20:00:56
energicryptocurrency/energi
https://api.github.com/repos/energicryptocurrency/energi
closed
Wallframe background image (drkblue_walletFrame_bg.png) is not loaded in traditional style
Defect User Interface
In other styles this image is loading correctly but in "trad" style no. ![untitled](https://user-images.githubusercontent.com/4750797/34816353-2146de06-f6ce-11e7-87ce-5a0093acda1d.png)
1.0
Wallframe background image (drkblue_walletFrame_bg.png) is not loaded in traditional style - In other styles this image is loading correctly but in "trad" style no. ![untitled](https://user-images.githubusercontent.com/4750797/34816353-2146de06-f6ce-11e7-87ce-5a0093acda1d.png)
defect
wallframe background image drkblue walletframe bg png is not loaded in traditional style in other styles this image is loading correctly but in trad style no
1
21,719
3,917,148,987
IssuesEvent
2016-04-21 06:52:01
Microsoft/vscode
https://api.github.com/repos/Microsoft/vscode
closed
C# does not autocomplete closing quotation
c# v-test
- VSCode Version: 1.0.1-alpha - OS Version: Windows 10 Steps to Reproduce: 1. Open a c# file and start typing a method that accepts a string. 2. Console.WriteLine( Type a " here Actual: Does not autocomplete the ending ". This is not the behavior of Visual Studio 2015.
1.0
C# does not autocomplete closing quotation - - VSCode Version: 1.0.1-alpha - OS Version: Windows 10 Steps to Reproduce: 1. Open a c# file and start typing a method that accepts a string. 2. Console.WriteLine( Type a " here Actual: Does not autocomplete the ending ". This is not the behavior of Visual Studio 2015.
non_defect
c does not autocomplete closing quotation vscode version alpha os version windows steps to reproduce open a c file and start typing a method that accepts a string console writeline type a here actual does not autocomplete the ending this is not the behavior of visual studio
0
10,118
2,618,937,317
IssuesEvent
2015-03-03 00:02:36
chrsmith/open-ig
https://api.github.com/repos/chrsmith/open-ig
closed
Crash in diplomacy screen
auto-migrated Priority-Medium Type-Defect
``` Game version: 0.95.152 Operating System: Linux x64 Java runtime version: 1.7.0_51 Installed using the Launcher? yes Game language (en, hu, de): hu What steps will reproduce the problem? 1. Load the attached save. 2. Go to diplomacy. 3. Phone the Dargslan. 4. Offer an alliance aganist the 'SZNSZ'. 5. Offer 500 000 ¢. What is the expected output? What do you see instead? Should offer half million to the Dargslan. Instead i see a crash window. (Log included.) Please provide any additional information below. Please upload any save before and/or after the problem happened. Please attach the open-ig.log file found in the application's directory. Log and savegame attached. Note: Sometimes it work. Cca. 1 times out of 5. ``` Original issue reported on code.google.com by `kli...@gmail.com` on 18 Jan 2014 at 1:33 Attachments: * [open-ig.log](https://storage.googleapis.com/google-code-attachments/open-ig/issue-812/comment-0/open-ig.log) * [info-2014-01-18-14-14-54-939.xml](https://storage.googleapis.com/google-code-attachments/open-ig/issue-812/comment-0/info-2014-01-18-14-14-54-939.xml) * [save-2014-01-18-14-14-54-939.xml.gz](https://storage.googleapis.com/google-code-attachments/open-ig/issue-812/comment-0/save-2014-01-18-14-14-54-939.xml.gz)
1.0
Crash in diplomacy screen - ``` Game version: 0.95.152 Operating System: Linux x64 Java runtime version: 1.7.0_51 Installed using the Launcher? yes Game language (en, hu, de): hu What steps will reproduce the problem? 1. Load the attached save. 2. Go to diplomacy. 3. Phone the Dargslan. 4. Offer an alliance aganist the 'SZNSZ'. 5. Offer 500 000 ¢. What is the expected output? What do you see instead? Should offer half million to the Dargslan. Instead i see a crash window. (Log included.) Please provide any additional information below. Please upload any save before and/or after the problem happened. Please attach the open-ig.log file found in the application's directory. Log and savegame attached. Note: Sometimes it work. Cca. 1 times out of 5. ``` Original issue reported on code.google.com by `kli...@gmail.com` on 18 Jan 2014 at 1:33 Attachments: * [open-ig.log](https://storage.googleapis.com/google-code-attachments/open-ig/issue-812/comment-0/open-ig.log) * [info-2014-01-18-14-14-54-939.xml](https://storage.googleapis.com/google-code-attachments/open-ig/issue-812/comment-0/info-2014-01-18-14-14-54-939.xml) * [save-2014-01-18-14-14-54-939.xml.gz](https://storage.googleapis.com/google-code-attachments/open-ig/issue-812/comment-0/save-2014-01-18-14-14-54-939.xml.gz)
defect
crash in diplomacy screen game version operating system linux java runtime version installed using the launcher yes game language en hu de hu what steps will reproduce the problem load the attached save go to diplomacy phone the dargslan offer an alliance aganist the sznsz offer ¢ what is the expected output what do you see instead should offer half million to the dargslan instead i see a crash window log included please provide any additional information below please upload any save before and or after the problem happened please attach the open ig log file found in the application s directory log and savegame attached note sometimes it work cca times out of original issue reported on code google com by kli gmail com on jan at attachments
1
26,997
4,848,497,729
IssuesEvent
2016-11-10 17:40:38
cakephp/cakephp
https://api.github.com/repos/cakephp/cakephp
closed
template random position the divs
Defect
This is a (multiple allowed): * [x] bug * [ ] enhancement * [ ] feature-discussion (RFC) * CakePHP Version: 3.3.7 * Platform and Target: Chrome-Firefox, MySQL,Wamp64 3.0.6. ### What you did on default baked add.ctp template I add 2 div at the end, out of the form divs. my divs looks like this: ``` <div1> <div2 float left width:20%> </div> <div3 float left width:20%> </div> </div> ``` ### What happened the added divs not appear on the position where I write it 1 to 5 is the tried positions. ``` <nav> </nav> 1 <div baked> 2 <form> <input> 3 <input> </form> 4 </div> 5 ``` ### What you expected to happen the div to appear where I position them position 1: ``` <nav> </nav> conteiner clearfix <div1> <div 2></div> <div baked></div> <div 3></div> </div> ``` the baked div(form) become 20% like mine divs position 2 like 1 position 3 like 1 position 4 correct where it should be position 5 correct it goes under the nav and become invisible
1.0
template random position the divs - This is a (multiple allowed): * [x] bug * [ ] enhancement * [ ] feature-discussion (RFC) * CakePHP Version: 3.3.7 * Platform and Target: Chrome-Firefox, MySQL,Wamp64 3.0.6. ### What you did on default baked add.ctp template I add 2 div at the end, out of the form divs. my divs looks like this: ``` <div1> <div2 float left width:20%> </div> <div3 float left width:20%> </div> </div> ``` ### What happened the added divs not appear on the position where I write it 1 to 5 is the tried positions. ``` <nav> </nav> 1 <div baked> 2 <form> <input> 3 <input> </form> 4 </div> 5 ``` ### What you expected to happen the div to appear where I position them position 1: ``` <nav> </nav> conteiner clearfix <div1> <div 2></div> <div baked></div> <div 3></div> </div> ``` the baked div(form) become 20% like mine divs position 2 like 1 position 3 like 1 position 4 correct where it should be position 5 correct it goes under the nav and become invisible
defect
template random position the divs this is a multiple allowed bug enhancement feature discussion rfc cakephp version platform and target chrome firefox mysql what you did on default baked add ctp template i add div at the end out of the form divs my divs looks like this what happened the added divs not appear on the position where i write it to is the tried positions what you expected to happen the div to appear where i position them position conteiner clearfix the baked div form become like mine divs position like position like position correct where it should be position correct it goes under the nav and become invisible
1
60,712
17,023,501,224
IssuesEvent
2021-07-03 02:20:58
tomhughes/trac-tickets
https://api.github.com/repos/tomhughes/trac-tickets
closed
uploaded traces information out-of-date
Component: website Priority: minor Resolution: fixed Type: defect
**[Submitted to the original trac issue database at 3.11pm, Sunday, 1st November 2009]** according to this wiki page: http://wiki.openstreetmap.org/wiki/Visibility_of_GPS_traces (linked e.g. from here: http://www.openstreetmap.org/traces/mine) there is now (for some time) 4 categories of traces: Identifiable Public Trackable Private but the list (traces/mine) shows still just 2 of them: private and public. Please update.
1.0
uploaded traces information out-of-date - **[Submitted to the original trac issue database at 3.11pm, Sunday, 1st November 2009]** according to this wiki page: http://wiki.openstreetmap.org/wiki/Visibility_of_GPS_traces (linked e.g. from here: http://www.openstreetmap.org/traces/mine) there is now (for some time) 4 categories of traces: Identifiable Public Trackable Private but the list (traces/mine) shows still just 2 of them: private and public. Please update.
defect
uploaded traces information out of date according to this wiki page linked e g from here there is now for some time categories of traces identifiable public trackable private but the list traces mine shows still just of them private and public please update
1
82,015
31,857,095,636
IssuesEvent
2023-09-15 08:16:34
vector-im/element-call
https://api.github.com/repos/vector-im/element-call
opened
Latency can increase to ~10 seconds
T-Defect
### Steps to reproduce Reports of latency on a call getting up to 5 or 10 seconds, at which point obviously the call is unusable. It apparently starts off okay and then gets worse. That's all we have currently. Perhaps we have some buffers that are way too large in livekit? ### Outcome #### What did you expect? #### What happened instead? ### Operating system _No response_ ### Browser information _No response_ ### URL for webapp _No response_ ### Will you send logs? No
1.0
Latency can increase to ~10 seconds - ### Steps to reproduce Reports of latency on a call getting up to 5 or 10 seconds, at which point obviously the call is unusable. It apparently starts off okay and then gets worse. That's all we have currently. Perhaps we have some buffers that are way too large in livekit? ### Outcome #### What did you expect? #### What happened instead? ### Operating system _No response_ ### Browser information _No response_ ### URL for webapp _No response_ ### Will you send logs? No
defect
latency can increase to seconds steps to reproduce reports of latency on a call getting up to or seconds at which point obviously the call is unusable it apparently starts off okay and then gets worse that s all we have currently perhaps we have some buffers that are way too large in livekit outcome what did you expect what happened instead operating system no response browser information no response url for webapp no response will you send logs no
1
43,690
11,797,252,720
IssuesEvent
2020-03-18 12:23:08
KenQuin/Lists-Cola
https://api.github.com/repos/KenQuin/Lists-Cola
opened
Slow response times for Card_1_TwoHops_Hist_complete_w_PIN
Defect
This issue has been automatically created from Bot UI Automation. The goal is 3 seconds and the response time Uncached is 16934 milliseconds
1.0
Slow response times for Card_1_TwoHops_Hist_complete_w_PIN - This issue has been automatically created from Bot UI Automation. The goal is 3 seconds and the response time Uncached is 16934 milliseconds
defect
slow response times for card twohops hist complete w pin this issue has been automatically created from bot ui automation the goal is seconds and the response time uncached is milliseconds
1
59,371
11,959,134,651
IssuesEvent
2020-04-04 20:43:30
SharePoint/sp-dev-fx-webparts
https://api.github.com/repos/SharePoint/sp-dev-fx-webparts
closed
react-rxjs-event-emitter - RefrenceError: internalBinding is not defined
area:sample-code status:answered type:bug
## Category - [ ] Question - [X] Bug - [ ] Enhancement ## Authors @VelinGeorgiev @VesaJuvonen ## Expected or Desired Behavior Run `gulp serve` and be able to preview the webpart in the workbench. ## Observed Behavior Upon running `gulp serve` the following error log occurs: `internal/util/inspect.js:31 const types = internalBinding('types'); ^` `ReferenceError: internalBinding is not defined at internal/util/inspect.js:31:15 at req_ (C:\Users\manaf.ibrahim\Documents\Dev\Sharepoint\sp-dev-fx-webparts\samples\react-rxjs-event-emitter\node_modules\natives\index.js:137:5) at require (C:\Users\manaf.ibrahim\Documents\Dev\Sharepoint\sp-dev-fx-webparts\samples\react-rxjs-event-emitter\node_modules\natives\index.js:110:12) at util.js:25:21 at req_ (C:\Users\manaf.ibrahim\Documents\Dev\Sharepoint\sp-dev-fx-webparts\samples\react-rxjs-event-emitter\node_modules\natives\index.js:137:5) at require (C:\Users\manaf.ibrahim\Documents\Dev\Sharepoint\sp-dev-fx-webparts\samples\react-rxjs-event-emitter\node_modules\natives\index.js:110:12) at fs.js:42:21 at req_ (C:\Users\manaf.ibrahim\Documents\Dev\Sharepoint\sp-dev-fx-webparts\samples\react-rxjs-event-emitter\node_modules\natives\index.js:137:5) at Object.req [as require] (C:\Users\manaf.ibrahim\Documents\Dev\Sharepoint\sp-dev-fx-webparts\samples\react-rxjs-event-emitter\node_modules\natives\index.js:54:10) at Object.<anonymous> (C:\Users\manaf.ibrahim\Documents\Dev\Sharepoint\sp-dev-fx-webparts\samples\react-rxjs-event-emitter\node_modules\vinyl-fs\node_modules\graceful-fs\fs.js:1:37)` ## Steps to Reproduce Clone repository, run `npm i` then `gulp serve`.
1.0
react-rxjs-event-emitter - RefrenceError: internalBinding is not defined - ## Category - [ ] Question - [X] Bug - [ ] Enhancement ## Authors @VelinGeorgiev @VesaJuvonen ## Expected or Desired Behavior Run `gulp serve` and be able to preview the webpart in the workbench. ## Observed Behavior Upon running `gulp serve` the following error log occurs: `internal/util/inspect.js:31 const types = internalBinding('types'); ^` `ReferenceError: internalBinding is not defined at internal/util/inspect.js:31:15 at req_ (C:\Users\manaf.ibrahim\Documents\Dev\Sharepoint\sp-dev-fx-webparts\samples\react-rxjs-event-emitter\node_modules\natives\index.js:137:5) at require (C:\Users\manaf.ibrahim\Documents\Dev\Sharepoint\sp-dev-fx-webparts\samples\react-rxjs-event-emitter\node_modules\natives\index.js:110:12) at util.js:25:21 at req_ (C:\Users\manaf.ibrahim\Documents\Dev\Sharepoint\sp-dev-fx-webparts\samples\react-rxjs-event-emitter\node_modules\natives\index.js:137:5) at require (C:\Users\manaf.ibrahim\Documents\Dev\Sharepoint\sp-dev-fx-webparts\samples\react-rxjs-event-emitter\node_modules\natives\index.js:110:12) at fs.js:42:21 at req_ (C:\Users\manaf.ibrahim\Documents\Dev\Sharepoint\sp-dev-fx-webparts\samples\react-rxjs-event-emitter\node_modules\natives\index.js:137:5) at Object.req [as require] (C:\Users\manaf.ibrahim\Documents\Dev\Sharepoint\sp-dev-fx-webparts\samples\react-rxjs-event-emitter\node_modules\natives\index.js:54:10) at Object.<anonymous> (C:\Users\manaf.ibrahim\Documents\Dev\Sharepoint\sp-dev-fx-webparts\samples\react-rxjs-event-emitter\node_modules\vinyl-fs\node_modules\graceful-fs\fs.js:1:37)` ## Steps to Reproduce Clone repository, run `npm i` then `gulp serve`.
non_defect
react rxjs event emitter refrenceerror internalbinding is not defined category question bug enhancement authors velingeorgiev vesajuvonen expected or desired behavior run gulp serve and be able to preview the webpart in the workbench observed behavior upon running gulp serve the following error log occurs internal util inspect js const types internalbinding types referenceerror internalbinding is not defined at internal util inspect js at req c users manaf ibrahim documents dev sharepoint sp dev fx webparts samples react rxjs event emitter node modules natives index js at require c users manaf ibrahim documents dev sharepoint sp dev fx webparts samples react rxjs event emitter node modules natives index js at util js at req c users manaf ibrahim documents dev sharepoint sp dev fx webparts samples react rxjs event emitter node modules natives index js at require c users manaf ibrahim documents dev sharepoint sp dev fx webparts samples react rxjs event emitter node modules natives index js at fs js at req c users manaf ibrahim documents dev sharepoint sp dev fx webparts samples react rxjs event emitter node modules natives index js at object req c users manaf ibrahim documents dev sharepoint sp dev fx webparts samples react rxjs event emitter node modules natives index js at object c users manaf ibrahim documents dev sharepoint sp dev fx webparts samples react rxjs event emitter node modules vinyl fs node modules graceful fs fs js steps to reproduce clone repository run npm i then gulp serve
0
400,152
11,769,835,292
IssuesEvent
2020-03-15 16:33:36
pokt-network/posmint
https://api.github.com/repos/pokt-network/posmint
closed
Only panic in consensus breaking situations
high priority
Remove all panics in POSmint where it isn't a consensus breaking issue. Think about how clients can send messages and hit the rpc that could result in a panic. A client should never be able to kill a node from a panic.
1.0
Only panic in consensus breaking situations - Remove all panics in POSmint where it isn't a consensus breaking issue. Think about how clients can send messages and hit the rpc that could result in a panic. A client should never be able to kill a node from a panic.
non_defect
only panic in consensus breaking situations remove all panics in posmint where it isn t a consensus breaking issue think about how clients can send messages and hit the rpc that could result in a panic a client should never be able to kill a node from a panic
0
187,375
6,756,598,817
IssuesEvent
2017-10-24 07:46:34
threefoldfoundation/app_backend
https://api.github.com/repos/threefoldfoundation/app_backend
closed
Wallet on Dashboard
priority_minor state_verification type_feature
- see list of users + search - detail of user for transaction history + balance of tokens payment admins should be able to grant tokens in detail page of a user
1.0
Wallet on Dashboard - - see list of users + search - detail of user for transaction history + balance of tokens payment admins should be able to grant tokens in detail page of a user
non_defect
wallet on dashboard see list of users search detail of user for transaction history balance of tokens payment admins should be able to grant tokens in detail page of a user
0
22,821
3,972,371,188
IssuesEvent
2016-05-04 15:08:02
cockroachdb/cockroach
https://api.github.com/repos/cockroachdb/cockroach
reopened
stress: failed test in cockroach/sql/sql.test: TestParallel
Robot test-failure
Binary: cockroach/static-tests.tar.gz sha: https://github.com/cockroachdb/cockroach/commits/08d664015764b5b04fc09946d0588e16f8651cca Stress build found a failed test: ``` === RUN TestParallel W160413 06:06:16.533564 gossip/gossip.go:887 not connected to cluster; use --join to specify a connected node I160413 06:06:16.534119 storage/engine/rocksdb.go:137 opening in memory rocksdb instance I160413 06:06:16.534965 server/node.go:360 store store=0:0 ([]=) not bootstrapped W160413 06:06:16.534980 gossip/gossip.go:887 not connected to cluster; use --join to specify a connected node I160413 06:06:16.536546 storage/replica_command.go:1409 range 1: new leader lease replica {1 1 1} 1970-01-01 00:00:00 +0000 UTC 405702h6m17.536283877s I160413 06:06:16.537066 server/node.go:310 **** cluster {732dcf36-5507-40a2-a6bc-d56f9bc9196c} has been created I160413 06:06:16.537077 server/node.go:311 **** add additional nodes by specifying --join=127.0.0.1:36422 W160413 06:06:16.537084 gossip/gossip.go:887 not connected to cluster; use --join to specify a connected node I160413 06:06:16.537491 server/node.go:373 initialized store store=1:1 ([]=): {Capacity:8312655872 Available:6045941760 RangeCount:0} I160413 06:06:16.537532 server/node.go:285 node ID 1 initialized I160413 06:06:16.537563 storage/stores.go:286 read 0 node addresses from persistent storage I160413 06:06:16.537597 server/node.go:494 connecting to gossip network to verify cluster ID... 
I160413 06:06:16.537873 server/node.go:515 node connected via gossip and verified as part of cluster {"732dcf36-5507-40a2-a6bc-d56f9bc9196c"} I160413 06:06:16.537894 server/node.go:338 [node=1] Started node with [[]=] engine(s) and attributes [] I160413 06:06:16.537909 server/server.go:363 starting https server at 127.0.0.1:46789 I160413 06:06:16.537918 server/server.go:364 starting grpc/postgres server at 127.0.0.1:36422 I160413 06:06:16.538124 storage/split_queue.go:100 splitting range=1 [/Min-/Max) at keys [/Table/11/0 /Table/12/0 /Table/13/0 /Table/14/0] I160413 06:06:16.553762 storage/replica_command.go:1862 initiating a split of range=1 [/Min-/Max) at key /Table/11 I160413 06:06:16.569586 server/updates.go:147 No previous updates check time. I160413 06:06:16.590344 storage/replica_command.go:1409 range 2: new leader lease replica {1 1 1} 1970-01-01 00:00:00 +0000 UTC 405702h6m17.590173081s I160413 06:06:16.590446 storage/replica_command.go:1862 initiating a split of range=2 [/Table/11-/Max) at key /Table/12 I160413 06:06:16.599681 storage/replica_command.go:1409 range 3: new leader lease replica {1 1 1} 1970-01-01 00:00:00 +0000 UTC 405702h6m17.599545137s I160413 06:06:16.599755 storage/replica_command.go:1862 initiating a split of range=3 [/Table/12-/Max) at key /Table/13 I160413 06:06:16.610705 storage/replica_command.go:1409 range 4: new leader lease replica {1 1 1} 1970-01-01 00:00:00 +0000 UTC 405702h6m17.610513526s I160413 06:06:16.610779 storage/replica_command.go:1862 initiating a split of range=4 [/Table/13-/Max) at key /Table/14 I160413 06:06:16.649804 storage/replica_command.go:1409 range 5: new leader lease replica {1 1 1} 1970-01-01 00:00:00 +0000 UTC 405702h6m17.649618198s I160413 06:06:16.650043 storage/split_queue.go:100 splitting range=5 [/Table/14-/Max) at keys [/Table/50/0] I160413 06:06:16.650788 storage/replica_command.go:1862 initiating a split of range=5 [/Table/14-/Max) at key /Table/50 Running test partestdata/subquery_retry 
partestdata/subquery_retry/main:1: running setup partestdata/subquery_retry/setup:1 root: CREATE TABLE T (k INT) partestdata/subquery_retry/setup:4 root: INSERT INTO T VALUES (1) I160413 06:06:16.679935 storage/replica_command.go:1409 range 6: new leader lease replica {1 1 1} 1970-01-01 00:00:00 +0000 UTC 405702h6m17.679753137s partestdata/subquery_retry/setup: 2 partestdata/subquery_retry/main:3: running txn,txn,txn partestdata/subquery_retry/txn:6 root: INSERT INTO T VALUES ((SELECT MAX(k+1) FROM T)) partestdata/subquery_retry/txn:6 root: INSERT INTO T VALUES ((SELECT MAX(k+1) FROM T)) partestdata/subquery_retry/txn:6 root: INSERT INTO T VALUES ((SELECT MAX(k+1) FROM T)) partestdata/subquery_retry/txn:6 root: INSERT INTO T VALUES ((SELECT MAX(k+1) FROM T)) partestdata/subquery_retry/txn:6 root: INSERT INTO T VALUES ((SELECT MAX(k+1) FROM T)) partestdata/subquery_retry/txn:6 root: INSERT INTO T VALUES ((SELECT MAX(k+1) FROM T)) partestdata/subquery_retry/txn:6 root: INSERT INTO T VALUES ((SELECT MAX(k+1) FROM T)) partestdata/subquery_retry/txn:6 root: INSERT INTO T VALUES ((SELECT MAX(k+1) FROM T)) partestdata/subquery_retry/txn:6 root: INSERT INTO T VALUES ((SELECT MAX(k+1) FROM T)) partestdata/subquery_retry/txn:6 root: INSERT INTO T VALUES ((SELECT MAX(k+1) FROM T)) partestdata/subquery_retry/txn:6 root: INSERT INTO T VALUES ((SELECT MAX(k+1) FROM T)) partestdata/subquery_retry/txn:6 root: INSERT INTO T VALUES ((SELECT MAX(k+1) FROM T)) partestdata/subquery_retry/txn:6 root: INSERT INTO T VALUES ((SELECT MAX(k+1) FROM T)) partestdata/subquery_retry/txn:6 root: INSERT INTO T VALUES ((SELECT MAX(k+1) FROM T)) partestdata/subquery_retry/txn:6 root: INSERT INTO T VALUES ((SELECT MAX(k+1) FROM T)) partestdata/subquery_retry/txn:6 root: INSERT INTO T VALUES ((SELECT MAX(k+1) FROM T)) partestdata/subquery_retry/txn:6 root: INSERT INTO T VALUES ((SELECT MAX(k+1) FROM T)) partestdata/subquery_retry/txn:6 root: INSERT INTO T VALUES ((SELECT MAX(k+1) FROM T)) 
partestdata/subquery_retry/txn:6 root: INSERT INTO T VALUES ((SELECT MAX(k+1) FROM T)) partestdata/subquery_retry/txn:6 root: INSERT INTO T VALUES ((SELECT MAX(k+1) FROM T)) partestdata/subquery_retry/txn:6 root: INSERT INTO T VALUES ((SELECT MAX(k+1) FROM T)) partestdata/subquery_retry/txn:6 root: INSERT INTO T VALUES ((SELECT MAX(k+1) FROM T)) partestdata/subquery_retry/txn:6 root: INSERT INTO T VALUES ((SELECT MAX(k+1) FROM T)) partestdata/subquery_retry/txn:6 root: INSERT INTO T VALUES ((SELECT MAX(k+1) FROM T)) partestdata/subquery_retry/txn:6 root: INSERT INTO T VALUES ((SELECT MAX(k+1) FROM T)) partestdata/subquery_retry/txn:6 root: INSERT INTO T VALUES ((SELECT MAX(k+1) FROM T)) partestdata/subquery_retry/txn:6 root: INSERT INTO T VALUES ((SELECT MAX(k+1) FROM T)) partestdata/subquery_retry/txn:6 root: INSERT INTO T VALUES ((SELECT MAX(k+1) FROM T)) partestdata/subquery_retry/txn:6 root: INSERT INTO T VALUES ((SELECT MAX(k+1) FROM T)) partestdata/subquery_retry/txn:6 root: INSERT INTO T VALUES ((SELECT MAX(k+1) FROM T)) partestdata/subquery_retry/txn:6 root: INSERT INTO T VALUES ((SELECT MAX(k+1) FROM T)) partestdata/subquery_retry/txn:6 root: INSERT INTO T VALUES ((SELECT MAX(k+1) FROM T)) partestdata/subquery_retry/txn:6 root: INSERT INTO T VALUES ((SELECT MAX(k+1) FROM T)) partestdata/subquery_retry/txn:6 root: INSERT INTO T VALUES ((SELECT MAX(k+1) FROM T)) partestdata/subquery_retry/txn:6 root: INSERT INTO T VALUES ((SELECT MAX(k+1) FROM T)) partestdata/subquery_retry/txn:6 root: INSERT INTO T VALUES ((SELECT MAX(k+1) FROM T)) partestdata/subquery_retry/txn:6 root: INSERT INTO T VALUES ((SELECT MAX(k+1) FROM T)) partestdata/subquery_retry/txn:6 root: INSERT INTO T VALUES ((SELECT MAX(k+1) FROM T)) partestdata/subquery_retry/txn:6 root: INSERT INTO T VALUES ((SELECT MAX(k+1) FROM T)) partestdata/subquery_retry/txn:6 root: INSERT INTO T VALUES ((SELECT MAX(k+1) FROM T)) partestdata/subquery_retry/txn:6 root: INSERT INTO T VALUES ((SELECT MAX(k+1) FROM 
T)) partestdata/subquery_retry/txn:6 root: INSERT INTO T VALUES ((SELECT MAX(k+1) FROM T)) partestdata/subquery_retry/txn:6 root: INSERT INTO T VALUES ((SELECT MAX(k+1) FROM T)) partestdata/subquery_retry/txn:6 root: INSERT INTO T VALUES ((SELECT MAX(k+1) FROM T)) partestdata/subquery_retry/txn:6 root: INSERT INTO T VALUES ((SELECT MAX(k+1) FROM T)) partestdata/subquery_retry/txn:6 root: INSERT INTO T VALUES ((SELECT MAX(k+1) FROM T)) partestdata/subquery_retry/txn:6 root: INSERT INTO T VALUES ((SELECT MAX(k+1) FROM T)) partestdata/subquery_retry/txn:6 root: INSERT INTO T VALUES ((SELECT MAX(k+1) FROM T)) partestdata/subquery_retry/txn:6 root: INSERT INTO T VALUES ((SELECT MAX(k+1) FROM T)) partestdata/subquery_retry/txn:6 root: INSERT INTO T VALUES ((SELECT MAX(k+1) FROM T)) partestdata/subquery_retry/txn:6 root: INSERT INTO T VALUES ((SELECT MAX(k+1) FROM T)) I160413 06:06:17.535153 gossip/gossip.go:913 starting client to 127.0.0.1:36422 I160413 06:06:17.535631 gossip/client.go:83 closing client to node 1 (127.0.0.1:36422): gossip/client.go:177: stopping outgoing client to node 1 (127.0.0.1:36422); loopback connection W160413 06:06:20.038279 storage/replica.go:1173 unable to cancel expired Raft command ResolveIntent [/Table/51/1/132570077884416001/0,/Min), ResolveIntent [/Table/51/1/132570077884416001/1/1,/Min) W160413 06:06:20.046704 storage/intent_resolver.go:408 unable to resolve local intents; context deadline exceeded W160413 06:06:20.048378 storage/intent_resolver.go:408 unable to resolve local intents; context deadline exceeded W160413 06:06:20.049836 storage/intent_resolver.go:408 unable to resolve local intents; context deadline exceeded W160413 06:06:20.051015 storage/intent_resolver.go:408 unable to resolve local intents; context deadline exceeded W160413 06:06:20.054216 storage/intent_resolver.go:408 unable to resolve local intents; context deadline exceeded W160413 06:06:20.055950 storage/intent_resolver.go:408 unable to resolve local intents; 
context deadline exceeded W160413 06:06:20.057429 storage/intent_resolver.go:408 unable to resolve local intents; context deadline exceeded W160413 06:06:20.058744 storage/intent_resolver.go:408 unable to resolve local intents; context deadline exceeded W160413 06:06:20.059771 storage/intent_resolver.go:408 unable to resolve local intents; context deadline exceeded W160413 06:06:20.063544 storage/intent_resolver.go:408 unable to resolve local intents; context deadline exceeded W160413 06:06:20.073708 storage/intent_resolver.go:408 unable to resolve local intents; context deadline exceeded W160413 06:06:20.074963 storage/intent_resolver.go:408 unable to resolve local intents; context deadline exceeded W160413 06:06:20.075914 storage/intent_resolver.go:408 unable to resolve local intents; context deadline exceeded W160413 06:06:20.077020 storage/intent_resolver.go:408 unable to resolve local intents; context deadline exceeded W160413 06:06:20.078338 storage/intent_resolver.go:408 unable to resolve local intents; context deadline exceeded W160413 06:06:20.080193 storage/intent_resolver.go:408 unable to resolve local intents; context deadline exceeded W160413 06:06:20.083400 storage/intent_resolver.go:408 unable to resolve local intents; context deadline exceeded W160413 06:06:20.086381 storage/intent_resolver.go:408 unable to resolve local intents; context deadline exceeded W160413 06:06:20.087882 storage/intent_resolver.go:408 unable to resolve local intents; context deadline exceeded W160413 06:06:20.089358 storage/intent_resolver.go:408 unable to resolve local intents; context deadline exceeded W160413 06:06:20.107261 storage/intent_resolver.go:408 unable to resolve local intents; context deadline exceeded partestdata/subquery_retry/main:5: running final partestdata/subquery_retry/final:2 root: SELECT COUNT(k) - COUNT(DISTINCT k) from T; partestdata/subquery_retry/final: 1 I160413 06:06:20.110933 stopper.go:352 draining; tasks left: 2 server/node.go:741 I160413 
06:06:20.245589 stopper.go:352 draining; tasks left: 1 server/node.go:741 1 parallel tests passed --- FAIL: TestParallel (3.79s) logic_test.go:526: partestdata/subquery_retry/txn:6: expected success, but found pq: failed to send RPC: too many errors encountered (1 of 1 total): rpc error: code = 4 desc = context deadline exceeded logic_test.go:526: partestdata/subquery_retry/txn:6: expected success, but found pq: failed to send RPC: too many errors encountered (1 of 1 total): rpc error: code = 4 desc = context deadline exceeded logic_test.go:526: partestdata/subquery_retry/txn:6: expected success, but found pq: failed to send RPC: too many errors encountered (1 of 1 total): rpc error: code = 4 desc = context deadline exceeded ``` Run Details: ``` 0 runs so far, 0 failures, over 5s 0 runs so far, 0 failures, over 10s 0 runs so far, 0 failures, over 15s 0 runs so far, 0 failures, over 20s 0 runs so far, 0 failures, over 25s 2 runs so far, 0 failures, over 30s 4 runs so far, 0 failures, over 35s 8 runs so far, 0 failures, over 40s 8 runs so far, 0 failures, over 45s 8 runs so far, 0 failures, over 50s 8 runs so far, 0 failures, over 55s 8 runs so far, 0 failures, over 1m0s 10 runs so far, 0 failures, over 1m5s 14 runs so far, 0 failures, over 1m10s 15 runs so far, 0 failures, over 1m15s 16 runs so far, 0 failures, over 1m20s 16 runs so far, 0 failures, over 1m25s 16 runs so far, 0 failures, over 1m30s 18 runs so far, 0 failures, over 1m35s 18 runs so far, 0 failures, over 1m40s 23 runs so far, 0 failures, over 1m45s 24 runs so far, 0 failures, over 1m50s 24 runs so far, 0 failures, over 1m55s 24 runs so far, 0 failures, over 2m0s 24 runs so far, 0 failures, over 2m5s 25 runs so far, 0 failures, over 2m10s 26 runs so far, 0 failures, over 2m15s 30 runs so far, 0 failures, over 2m20s 32 runs so far, 0 failures, over 2m25s 32 runs so far, 0 failures, over 2m30s 32 runs so far, 0 failures, over 2m35s 32 runs so far, 0 failures, over 2m40s 35 runs so far, 0 failures, over 
2m45s 36 runs so far, 0 failures, over 2m50s 39 runs so far, 0 failures, over 2m55s 40 runs so far, 0 failures, over 3m0s 40 runs so far, 0 failures, over 3m5s 40 runs so far, 0 failures, over 3m10s 41 runs so far, 0 failures, over 3m15s 42 runs so far, 0 failures, over 3m20s 44 runs so far, 0 failures, over 3m25s 47 runs so far, 0 failures, over 3m30s 48 runs so far, 0 failures, over 3m35s 48 runs so far, 0 failures, over 3m40s 48 runs so far, 0 failures, over 3m45s 49 runs so far, 0 failures, over 3m50s 52 runs so far, 0 failures, over 3m55s 53 runs so far, 0 failures, over 4m0s 54 runs so far, 0 failures, over 4m5s 56 runs so far, 0 failures, over 4m10s 56 runs so far, 0 failures, over 4m15s 57 runs so far, 0 failures, over 4m20s 58 runs so far, 0 failures, over 4m25s 60 runs so far, 0 failures, over 4m30s 61 runs so far, 0 failures, over 4m35s 63 runs so far, 0 failures, over 4m40s 64 runs so far, 0 failures, over 4m45s 65 runs so far, 0 failures, over 4m50s 66 runs so far, 0 failures, over 4m55s 67 runs so far, 0 failures, over 5m0s 68 runs so far, 0 failures, over 5m5s 71 runs so far, 0 failures, over 5m10s 71 runs so far, 0 failures, over 5m15s 71 runs so far, 0 failures, over 5m20s 73 runs so far, 0 failures, over 5m25s 74 runs so far, 0 failures, over 5m30s 75 runs so far, 0 failures, over 5m35s 78 runs so far, 0 failures, over 5m40s 79 runs so far, 0 failures, over 5m45s 79 runs so far, 0 failures, over 5m50s 80 runs so far, 0 failures, over 5m55s 81 runs so far, 0 failures, over 6m0s 81 runs so far, 0 failures, over 6m5s 82 runs completed, 1 failures, over 6m7s FAIL ``` Please assign, take a look and update the issue accordingly.
1.0
stress: failed test in cockroach/sql/sql.test: TestParallel - Binary: cockroach/static-tests.tar.gz sha: https://github.com/cockroachdb/cockroach/commits/08d664015764b5b04fc09946d0588e16f8651cca Stress build found a failed test: ``` === RUN TestParallel W160413 06:06:16.533564 gossip/gossip.go:887 not connected to cluster; use --join to specify a connected node I160413 06:06:16.534119 storage/engine/rocksdb.go:137 opening in memory rocksdb instance I160413 06:06:16.534965 server/node.go:360 store store=0:0 ([]=) not bootstrapped W160413 06:06:16.534980 gossip/gossip.go:887 not connected to cluster; use --join to specify a connected node I160413 06:06:16.536546 storage/replica_command.go:1409 range 1: new leader lease replica {1 1 1} 1970-01-01 00:00:00 +0000 UTC 405702h6m17.536283877s I160413 06:06:16.537066 server/node.go:310 **** cluster {732dcf36-5507-40a2-a6bc-d56f9bc9196c} has been created I160413 06:06:16.537077 server/node.go:311 **** add additional nodes by specifying --join=127.0.0.1:36422 W160413 06:06:16.537084 gossip/gossip.go:887 not connected to cluster; use --join to specify a connected node I160413 06:06:16.537491 server/node.go:373 initialized store store=1:1 ([]=): {Capacity:8312655872 Available:6045941760 RangeCount:0} I160413 06:06:16.537532 server/node.go:285 node ID 1 initialized I160413 06:06:16.537563 storage/stores.go:286 read 0 node addresses from persistent storage I160413 06:06:16.537597 server/node.go:494 connecting to gossip network to verify cluster ID... 
I160413 06:06:16.537873 server/node.go:515 node connected via gossip and verified as part of cluster {"732dcf36-5507-40a2-a6bc-d56f9bc9196c"} I160413 06:06:16.537894 server/node.go:338 [node=1] Started node with [[]=] engine(s) and attributes [] I160413 06:06:16.537909 server/server.go:363 starting https server at 127.0.0.1:46789 I160413 06:06:16.537918 server/server.go:364 starting grpc/postgres server at 127.0.0.1:36422 I160413 06:06:16.538124 storage/split_queue.go:100 splitting range=1 [/Min-/Max) at keys [/Table/11/0 /Table/12/0 /Table/13/0 /Table/14/0] I160413 06:06:16.553762 storage/replica_command.go:1862 initiating a split of range=1 [/Min-/Max) at key /Table/11 I160413 06:06:16.569586 server/updates.go:147 No previous updates check time. I160413 06:06:16.590344 storage/replica_command.go:1409 range 2: new leader lease replica {1 1 1} 1970-01-01 00:00:00 +0000 UTC 405702h6m17.590173081s I160413 06:06:16.590446 storage/replica_command.go:1862 initiating a split of range=2 [/Table/11-/Max) at key /Table/12 I160413 06:06:16.599681 storage/replica_command.go:1409 range 3: new leader lease replica {1 1 1} 1970-01-01 00:00:00 +0000 UTC 405702h6m17.599545137s I160413 06:06:16.599755 storage/replica_command.go:1862 initiating a split of range=3 [/Table/12-/Max) at key /Table/13 I160413 06:06:16.610705 storage/replica_command.go:1409 range 4: new leader lease replica {1 1 1} 1970-01-01 00:00:00 +0000 UTC 405702h6m17.610513526s I160413 06:06:16.610779 storage/replica_command.go:1862 initiating a split of range=4 [/Table/13-/Max) at key /Table/14 I160413 06:06:16.649804 storage/replica_command.go:1409 range 5: new leader lease replica {1 1 1} 1970-01-01 00:00:00 +0000 UTC 405702h6m17.649618198s I160413 06:06:16.650043 storage/split_queue.go:100 splitting range=5 [/Table/14-/Max) at keys [/Table/50/0] I160413 06:06:16.650788 storage/replica_command.go:1862 initiating a split of range=5 [/Table/14-/Max) at key /Table/50 Running test partestdata/subquery_retry 
partestdata/subquery_retry/main:1: running setup partestdata/subquery_retry/setup:1 root: CREATE TABLE T (k INT) partestdata/subquery_retry/setup:4 root: INSERT INTO T VALUES (1) I160413 06:06:16.679935 storage/replica_command.go:1409 range 6: new leader lease replica {1 1 1} 1970-01-01 00:00:00 +0000 UTC 405702h6m17.679753137s partestdata/subquery_retry/setup: 2 partestdata/subquery_retry/main:3: running txn,txn,txn partestdata/subquery_retry/txn:6 root: INSERT INTO T VALUES ((SELECT MAX(k+1) FROM T)) partestdata/subquery_retry/txn:6 root: INSERT INTO T VALUES ((SELECT MAX(k+1) FROM T)) partestdata/subquery_retry/txn:6 root: INSERT INTO T VALUES ((SELECT MAX(k+1) FROM T)) partestdata/subquery_retry/txn:6 root: INSERT INTO T VALUES ((SELECT MAX(k+1) FROM T)) partestdata/subquery_retry/txn:6 root: INSERT INTO T VALUES ((SELECT MAX(k+1) FROM T)) partestdata/subquery_retry/txn:6 root: INSERT INTO T VALUES ((SELECT MAX(k+1) FROM T)) partestdata/subquery_retry/txn:6 root: INSERT INTO T VALUES ((SELECT MAX(k+1) FROM T)) partestdata/subquery_retry/txn:6 root: INSERT INTO T VALUES ((SELECT MAX(k+1) FROM T)) partestdata/subquery_retry/txn:6 root: INSERT INTO T VALUES ((SELECT MAX(k+1) FROM T)) partestdata/subquery_retry/txn:6 root: INSERT INTO T VALUES ((SELECT MAX(k+1) FROM T)) partestdata/subquery_retry/txn:6 root: INSERT INTO T VALUES ((SELECT MAX(k+1) FROM T)) partestdata/subquery_retry/txn:6 root: INSERT INTO T VALUES ((SELECT MAX(k+1) FROM T)) partestdata/subquery_retry/txn:6 root: INSERT INTO T VALUES ((SELECT MAX(k+1) FROM T)) partestdata/subquery_retry/txn:6 root: INSERT INTO T VALUES ((SELECT MAX(k+1) FROM T)) partestdata/subquery_retry/txn:6 root: INSERT INTO T VALUES ((SELECT MAX(k+1) FROM T)) partestdata/subquery_retry/txn:6 root: INSERT INTO T VALUES ((SELECT MAX(k+1) FROM T)) partestdata/subquery_retry/txn:6 root: INSERT INTO T VALUES ((SELECT MAX(k+1) FROM T)) partestdata/subquery_retry/txn:6 root: INSERT INTO T VALUES ((SELECT MAX(k+1) FROM T)) 
partestdata/subquery_retry/txn:6 root: INSERT INTO T VALUES ((SELECT MAX(k+1) FROM T)) partestdata/subquery_retry/txn:6 root: INSERT INTO T VALUES ((SELECT MAX(k+1) FROM T)) partestdata/subquery_retry/txn:6 root: INSERT INTO T VALUES ((SELECT MAX(k+1) FROM T)) partestdata/subquery_retry/txn:6 root: INSERT INTO T VALUES ((SELECT MAX(k+1) FROM T)) partestdata/subquery_retry/txn:6 root: INSERT INTO T VALUES ((SELECT MAX(k+1) FROM T)) partestdata/subquery_retry/txn:6 root: INSERT INTO T VALUES ((SELECT MAX(k+1) FROM T)) partestdata/subquery_retry/txn:6 root: INSERT INTO T VALUES ((SELECT MAX(k+1) FROM T)) partestdata/subquery_retry/txn:6 root: INSERT INTO T VALUES ((SELECT MAX(k+1) FROM T)) partestdata/subquery_retry/txn:6 root: INSERT INTO T VALUES ((SELECT MAX(k+1) FROM T)) partestdata/subquery_retry/txn:6 root: INSERT INTO T VALUES ((SELECT MAX(k+1) FROM T)) partestdata/subquery_retry/txn:6 root: INSERT INTO T VALUES ((SELECT MAX(k+1) FROM T)) partestdata/subquery_retry/txn:6 root: INSERT INTO T VALUES ((SELECT MAX(k+1) FROM T)) partestdata/subquery_retry/txn:6 root: INSERT INTO T VALUES ((SELECT MAX(k+1) FROM T)) partestdata/subquery_retry/txn:6 root: INSERT INTO T VALUES ((SELECT MAX(k+1) FROM T)) partestdata/subquery_retry/txn:6 root: INSERT INTO T VALUES ((SELECT MAX(k+1) FROM T)) partestdata/subquery_retry/txn:6 root: INSERT INTO T VALUES ((SELECT MAX(k+1) FROM T)) partestdata/subquery_retry/txn:6 root: INSERT INTO T VALUES ((SELECT MAX(k+1) FROM T)) partestdata/subquery_retry/txn:6 root: INSERT INTO T VALUES ((SELECT MAX(k+1) FROM T)) partestdata/subquery_retry/txn:6 root: INSERT INTO T VALUES ((SELECT MAX(k+1) FROM T)) partestdata/subquery_retry/txn:6 root: INSERT INTO T VALUES ((SELECT MAX(k+1) FROM T)) partestdata/subquery_retry/txn:6 root: INSERT INTO T VALUES ((SELECT MAX(k+1) FROM T)) partestdata/subquery_retry/txn:6 root: INSERT INTO T VALUES ((SELECT MAX(k+1) FROM T)) partestdata/subquery_retry/txn:6 root: INSERT INTO T VALUES ((SELECT MAX(k+1) FROM 
T)) partestdata/subquery_retry/txn:6 root: INSERT INTO T VALUES ((SELECT MAX(k+1) FROM T)) partestdata/subquery_retry/txn:6 root: INSERT INTO T VALUES ((SELECT MAX(k+1) FROM T)) partestdata/subquery_retry/txn:6 root: INSERT INTO T VALUES ((SELECT MAX(k+1) FROM T)) partestdata/subquery_retry/txn:6 root: INSERT INTO T VALUES ((SELECT MAX(k+1) FROM T)) partestdata/subquery_retry/txn:6 root: INSERT INTO T VALUES ((SELECT MAX(k+1) FROM T)) partestdata/subquery_retry/txn:6 root: INSERT INTO T VALUES ((SELECT MAX(k+1) FROM T)) partestdata/subquery_retry/txn:6 root: INSERT INTO T VALUES ((SELECT MAX(k+1) FROM T)) partestdata/subquery_retry/txn:6 root: INSERT INTO T VALUES ((SELECT MAX(k+1) FROM T)) partestdata/subquery_retry/txn:6 root: INSERT INTO T VALUES ((SELECT MAX(k+1) FROM T)) partestdata/subquery_retry/txn:6 root: INSERT INTO T VALUES ((SELECT MAX(k+1) FROM T)) I160413 06:06:17.535153 gossip/gossip.go:913 starting client to 127.0.0.1:36422 I160413 06:06:17.535631 gossip/client.go:83 closing client to node 1 (127.0.0.1:36422): gossip/client.go:177: stopping outgoing client to node 1 (127.0.0.1:36422); loopback connection W160413 06:06:20.038279 storage/replica.go:1173 unable to cancel expired Raft command ResolveIntent [/Table/51/1/132570077884416001/0,/Min), ResolveIntent [/Table/51/1/132570077884416001/1/1,/Min) W160413 06:06:20.046704 storage/intent_resolver.go:408 unable to resolve local intents; context deadline exceeded W160413 06:06:20.048378 storage/intent_resolver.go:408 unable to resolve local intents; context deadline exceeded W160413 06:06:20.049836 storage/intent_resolver.go:408 unable to resolve local intents; context deadline exceeded W160413 06:06:20.051015 storage/intent_resolver.go:408 unable to resolve local intents; context deadline exceeded W160413 06:06:20.054216 storage/intent_resolver.go:408 unable to resolve local intents; context deadline exceeded W160413 06:06:20.055950 storage/intent_resolver.go:408 unable to resolve local intents; 
context deadline exceeded W160413 06:06:20.057429 storage/intent_resolver.go:408 unable to resolve local intents; context deadline exceeded W160413 06:06:20.058744 storage/intent_resolver.go:408 unable to resolve local intents; context deadline exceeded W160413 06:06:20.059771 storage/intent_resolver.go:408 unable to resolve local intents; context deadline exceeded W160413 06:06:20.063544 storage/intent_resolver.go:408 unable to resolve local intents; context deadline exceeded W160413 06:06:20.073708 storage/intent_resolver.go:408 unable to resolve local intents; context deadline exceeded W160413 06:06:20.074963 storage/intent_resolver.go:408 unable to resolve local intents; context deadline exceeded W160413 06:06:20.075914 storage/intent_resolver.go:408 unable to resolve local intents; context deadline exceeded W160413 06:06:20.077020 storage/intent_resolver.go:408 unable to resolve local intents; context deadline exceeded W160413 06:06:20.078338 storage/intent_resolver.go:408 unable to resolve local intents; context deadline exceeded W160413 06:06:20.080193 storage/intent_resolver.go:408 unable to resolve local intents; context deadline exceeded W160413 06:06:20.083400 storage/intent_resolver.go:408 unable to resolve local intents; context deadline exceeded W160413 06:06:20.086381 storage/intent_resolver.go:408 unable to resolve local intents; context deadline exceeded W160413 06:06:20.087882 storage/intent_resolver.go:408 unable to resolve local intents; context deadline exceeded W160413 06:06:20.089358 storage/intent_resolver.go:408 unable to resolve local intents; context deadline exceeded W160413 06:06:20.107261 storage/intent_resolver.go:408 unable to resolve local intents; context deadline exceeded partestdata/subquery_retry/main:5: running final partestdata/subquery_retry/final:2 root: SELECT COUNT(k) - COUNT(DISTINCT k) from T; partestdata/subquery_retry/final: 1 I160413 06:06:20.110933 stopper.go:352 draining; tasks left: 2 server/node.go:741 I160413 
06:06:20.245589 stopper.go:352 draining; tasks left: 1 server/node.go:741 1 parallel tests passed --- FAIL: TestParallel (3.79s) logic_test.go:526: partestdata/subquery_retry/txn:6: expected success, but found pq: failed to send RPC: too many errors encountered (1 of 1 total): rpc error: code = 4 desc = context deadline exceeded logic_test.go:526: partestdata/subquery_retry/txn:6: expected success, but found pq: failed to send RPC: too many errors encountered (1 of 1 total): rpc error: code = 4 desc = context deadline exceeded logic_test.go:526: partestdata/subquery_retry/txn:6: expected success, but found pq: failed to send RPC: too many errors encountered (1 of 1 total): rpc error: code = 4 desc = context deadline exceeded ``` Run Details: ``` 0 runs so far, 0 failures, over 5s 0 runs so far, 0 failures, over 10s 0 runs so far, 0 failures, over 15s 0 runs so far, 0 failures, over 20s 0 runs so far, 0 failures, over 25s 2 runs so far, 0 failures, over 30s 4 runs so far, 0 failures, over 35s 8 runs so far, 0 failures, over 40s 8 runs so far, 0 failures, over 45s 8 runs so far, 0 failures, over 50s 8 runs so far, 0 failures, over 55s 8 runs so far, 0 failures, over 1m0s 10 runs so far, 0 failures, over 1m5s 14 runs so far, 0 failures, over 1m10s 15 runs so far, 0 failures, over 1m15s 16 runs so far, 0 failures, over 1m20s 16 runs so far, 0 failures, over 1m25s 16 runs so far, 0 failures, over 1m30s 18 runs so far, 0 failures, over 1m35s 18 runs so far, 0 failures, over 1m40s 23 runs so far, 0 failures, over 1m45s 24 runs so far, 0 failures, over 1m50s 24 runs so far, 0 failures, over 1m55s 24 runs so far, 0 failures, over 2m0s 24 runs so far, 0 failures, over 2m5s 25 runs so far, 0 failures, over 2m10s 26 runs so far, 0 failures, over 2m15s 30 runs so far, 0 failures, over 2m20s 32 runs so far, 0 failures, over 2m25s 32 runs so far, 0 failures, over 2m30s 32 runs so far, 0 failures, over 2m35s 32 runs so far, 0 failures, over 2m40s 35 runs so far, 0 failures, over 
2m45s 36 runs so far, 0 failures, over 2m50s 39 runs so far, 0 failures, over 2m55s 40 runs so far, 0 failures, over 3m0s 40 runs so far, 0 failures, over 3m5s 40 runs so far, 0 failures, over 3m10s 41 runs so far, 0 failures, over 3m15s 42 runs so far, 0 failures, over 3m20s 44 runs so far, 0 failures, over 3m25s 47 runs so far, 0 failures, over 3m30s 48 runs so far, 0 failures, over 3m35s 48 runs so far, 0 failures, over 3m40s 48 runs so far, 0 failures, over 3m45s 49 runs so far, 0 failures, over 3m50s 52 runs so far, 0 failures, over 3m55s 53 runs so far, 0 failures, over 4m0s 54 runs so far, 0 failures, over 4m5s 56 runs so far, 0 failures, over 4m10s 56 runs so far, 0 failures, over 4m15s 57 runs so far, 0 failures, over 4m20s 58 runs so far, 0 failures, over 4m25s 60 runs so far, 0 failures, over 4m30s 61 runs so far, 0 failures, over 4m35s 63 runs so far, 0 failures, over 4m40s 64 runs so far, 0 failures, over 4m45s 65 runs so far, 0 failures, over 4m50s 66 runs so far, 0 failures, over 4m55s 67 runs so far, 0 failures, over 5m0s 68 runs so far, 0 failures, over 5m5s 71 runs so far, 0 failures, over 5m10s 71 runs so far, 0 failures, over 5m15s 71 runs so far, 0 failures, over 5m20s 73 runs so far, 0 failures, over 5m25s 74 runs so far, 0 failures, over 5m30s 75 runs so far, 0 failures, over 5m35s 78 runs so far, 0 failures, over 5m40s 79 runs so far, 0 failures, over 5m45s 79 runs so far, 0 failures, over 5m50s 80 runs so far, 0 failures, over 5m55s 81 runs so far, 0 failures, over 6m0s 81 runs so far, 0 failures, over 6m5s 82 runs completed, 1 failures, over 6m7s FAIL ``` Please assign, take a look and update the issue accordingly.
non_defect
stress failed test in cockroach sql sql test testparallel binary cockroach static tests tar gz sha stress build found a failed test run testparallel gossip gossip go not connected to cluster use join to specify a connected node storage engine rocksdb go opening in memory rocksdb instance server node go store store not bootstrapped gossip gossip go not connected to cluster use join to specify a connected node storage replica command go range new leader lease replica utc server node go cluster has been created server node go add additional nodes by specifying join gossip gossip go not connected to cluster use join to specify a connected node server node go initialized store store capacity available rangecount server node go node id initialized storage stores go read node addresses from persistent storage server node go connecting to gossip network to verify cluster id server node go node connected via gossip and verified as part of cluster server node go started node with engine s and attributes server server go starting https server at server server go starting grpc postgres server at storage split queue go splitting range storage replica command go initiating a split of range min max at key table server updates go no previous updates check time storage replica command go range new leader lease replica utc storage replica command go initiating a split of range table max at key table storage replica command go range new leader lease replica utc storage replica command go initiating a split of range table max at key table storage replica command go range new leader lease replica utc storage replica command go initiating a split of range table max at key table storage replica command go range new leader lease replica utc storage split queue go splitting range storage replica command go initiating a split of range table max at key table running test partestdata subquery retry partestdata subquery retry main running setup partestdata subquery retry setup root create 
table t k int partestdata subquery retry setup root insert into t values storage replica command go range new leader lease replica utc partestdata subquery retry setup partestdata subquery retry main running txn txn txn partestdata subquery retry txn root insert into t values select max k from t partestdata subquery retry txn root insert into t values select max k from t partestdata subquery retry txn root insert into t values select max k from t partestdata subquery retry txn root insert into t values select max k from t partestdata subquery retry txn root insert into t values select max k from t partestdata subquery retry txn root insert into t values select max k from t partestdata subquery retry txn root insert into t values select max k from t partestdata subquery retry txn root insert into t values select max k from t partestdata subquery retry txn root insert into t values select max k from t partestdata subquery retry txn root insert into t values select max k from t partestdata subquery retry txn root insert into t values select max k from t partestdata subquery retry txn root insert into t values select max k from t partestdata subquery retry txn root insert into t values select max k from t partestdata subquery retry txn root insert into t values select max k from t partestdata subquery retry txn root insert into t values select max k from t partestdata subquery retry txn root insert into t values select max k from t partestdata subquery retry txn root insert into t values select max k from t partestdata subquery retry txn root insert into t values select max k from t partestdata subquery retry txn root insert into t values select max k from t partestdata subquery retry txn root insert into t values select max k from t partestdata subquery retry txn root insert into t values select max k from t partestdata subquery retry txn root insert into t values select max k from t partestdata subquery retry txn root insert into t values select max k from t 
partestdata subquery retry txn root insert into t values select max k from t partestdata subquery retry txn root insert into t values select max k from t partestdata subquery retry txn root insert into t values select max k from t partestdata subquery retry txn root insert into t values select max k from t partestdata subquery retry txn root insert into t values select max k from t partestdata subquery retry txn root insert into t values select max k from t partestdata subquery retry txn root insert into t values select max k from t partestdata subquery retry txn root insert into t values select max k from t partestdata subquery retry txn root insert into t values select max k from t partestdata subquery retry txn root insert into t values select max k from t partestdata subquery retry txn root insert into t values select max k from t partestdata subquery retry txn root insert into t values select max k from t partestdata subquery retry txn root insert into t values select max k from t partestdata subquery retry txn root insert into t values select max k from t partestdata subquery retry txn root insert into t values select max k from t partestdata subquery retry txn root insert into t values select max k from t partestdata subquery retry txn root insert into t values select max k from t partestdata subquery retry txn root insert into t values select max k from t partestdata subquery retry txn root insert into t values select max k from t partestdata subquery retry txn root insert into t values select max k from t partestdata subquery retry txn root insert into t values select max k from t partestdata subquery retry txn root insert into t values select max k from t partestdata subquery retry txn root insert into t values select max k from t partestdata subquery retry txn root insert into t values select max k from t partestdata subquery retry txn root insert into t values select max k from t partestdata subquery retry txn root insert into t values select max k from 
t partestdata subquery retry txn root insert into t values select max k from t partestdata subquery retry txn root insert into t values select max k from t gossip gossip go starting client to gossip client go closing client to node gossip client go stopping outgoing client to node loopback connection storage replica go unable to cancel expired raft command resolveintent table min resolveintent table min storage intent resolver go unable to resolve local intents context deadline exceeded storage intent resolver go unable to resolve local intents context deadline exceeded storage intent resolver go unable to resolve local intents context deadline exceeded storage intent resolver go unable to resolve local intents context deadline exceeded storage intent resolver go unable to resolve local intents context deadline exceeded storage intent resolver go unable to resolve local intents context deadline exceeded storage intent resolver go unable to resolve local intents context deadline exceeded storage intent resolver go unable to resolve local intents context deadline exceeded storage intent resolver go unable to resolve local intents context deadline exceeded storage intent resolver go unable to resolve local intents context deadline exceeded storage intent resolver go unable to resolve local intents context deadline exceeded storage intent resolver go unable to resolve local intents context deadline exceeded storage intent resolver go unable to resolve local intents context deadline exceeded storage intent resolver go unable to resolve local intents context deadline exceeded storage intent resolver go unable to resolve local intents context deadline exceeded storage intent resolver go unable to resolve local intents context deadline exceeded storage intent resolver go unable to resolve local intents context deadline exceeded storage intent resolver go unable to resolve local intents context deadline exceeded storage intent resolver go unable to resolve local intents 
context deadline exceeded storage intent resolver go unable to resolve local intents context deadline exceeded storage intent resolver go unable to resolve local intents context deadline exceeded partestdata subquery retry main running final partestdata subquery retry final root select count k count distinct k from t partestdata subquery retry final stopper go draining tasks left server node go stopper go draining tasks left server node go parallel tests passed fail testparallel logic test go partestdata subquery retry txn expected success but found pq failed to send rpc too many errors encountered of total rpc error code desc context deadline exceeded logic test go partestdata subquery retry txn expected success but found pq failed to send rpc too many errors encountered of total rpc error code desc context deadline exceeded logic test go partestdata subquery retry txn expected success but found pq failed to send rpc too many errors encountered of total rpc error code desc context deadline exceeded run details runs so far failures over runs so far failures over runs so far failures over runs so far failures over runs so far failures over runs so far failures over runs so far failures over runs so far failures over runs so far failures over runs so far failures over runs so far failures over runs so far failures over runs so far failures over runs so far failures over runs so far failures over runs so far failures over runs so far failures over runs so far failures over runs so far failures over runs so far failures over runs so far failures over runs so far failures over runs so far failures over runs so far failures over runs so far failures over runs so far failures over runs so far failures over runs so far failures over runs so far failures over runs so far failures over runs so far failures over runs so far failures over runs so far failures over runs so far failures over runs so far failures over runs so far failures over runs so far failures over runs so 
far failures over runs so far failures over runs so far failures over runs so far failures over runs so far failures over runs so far failures over runs so far failures over runs so far failures over runs so far failures over runs so far failures over runs so far failures over runs so far failures over runs so far failures over runs so far failures over runs so far failures over runs so far failures over runs so far failures over runs so far failures over runs so far failures over runs so far failures over runs so far failures over runs so far failures over runs so far failures over runs so far failures over runs so far failures over runs so far failures over runs so far failures over runs so far failures over runs so far failures over runs so far failures over runs so far failures over runs so far failures over runs so far failures over runs so far failures over runs so far failures over runs so far failures over runs completed failures over fail please assign take a look and update the issue accordingly
0
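The stress summary lines quoted in the record above follow a regular `N runs so far, M failures, over T` / `N runs completed, M failures, over T` shape. A minimal sketch of parsing them — the regex is an assumption inferred from the quoted lines, not part of the stress tool itself:

```python
import re

# Pattern assumed from the quoted log lines, e.g.
#   "8 runs so far, 0 failures, over 40s"
#   "82 runs completed, 1 failures, over 6m7s"
SUMMARY = re.compile(
    r"(\d+) runs (?:so far|completed), (\d+) failures?, over (\S+)"
)

def parse_summary(line):
    """Return (runs, failures, duration) or None if the line doesn't match."""
    m = SUMMARY.search(line)
    if not m:
        return None
    return int(m.group(1)), int(m.group(2)), m.group(3)
```

Feeding it the final line of the record above would yield the run count and failure count as integers, with the duration left as the tool's own string form.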
80,846
30,560,727,165
IssuesEvent
2023-07-20 14:27:18
vector-im/element-web
https://api.github.com/repos/vector-im/element-web
closed
Searching - multiple results for the same user
T-Defect X-Cannot-Reproduce X-Regression S-Minor Z-Backend A-Invite A-Identity-Server O-Uncommon X-Needs-Community-Testing
I'm using Synapse + mxisd with searching in LDAP (AD). Riot Desktop application version 1.4.0 and 1.4.1 (older version are OK), have issue with searching. Example: If I search user with substring "kbre", then i have doubled (sometimes tripled and more) results with the same user. I see in syslog: There are two results for user from mxisd: one LDAP search in display name and second by 3PID. But it is still the same user. Riot 1.3.x and older display it correctly, but Riot 1.4.x display double results. It is wrong. ![Výstřižek](https://user-images.githubusercontent.com/54726936/66022102-2aebf680-e4ed-11e9-8c90-5983df52d3d5.PNG) It is similar situation as this: https://github.com/vector-im/riot-android/issues/1984 - **Platform**: desktop - **OS**: Windows - **Version**: 1.4.1 <!-- check the user settings panel if unsure -->
1.0
Searching - multiple results for the same user - I'm using Synapse + mxisd with searching in LDAP (AD). Riot Desktop application version 1.4.0 and 1.4.1 (older version are OK), have issue with searching. Example: If I search user with substring "kbre", then i have doubled (sometimes tripled and more) results with the same user. I see in syslog: There are two results for user from mxisd: one LDAP search in display name and second by 3PID. But it is still the same user. Riot 1.3.x and older display it correctly, but Riot 1.4.x display double results. It is wrong. ![Výstřižek](https://user-images.githubusercontent.com/54726936/66022102-2aebf680-e4ed-11e9-8c90-5983df52d3d5.PNG) It is similar situation as this: https://github.com/vector-im/riot-android/issues/1984 - **Platform**: desktop - **OS**: Windows - **Version**: 1.4.1 <!-- check the user settings panel if unsure -->
defect
searching multiple results for the same user i m using synapse mxisd with searching in ldap ad riot desktop application version and older version are ok have issue with searching example if i search user with substring kbre then i have doubled sometimes tripled and more results with the same user i see in syslog there are two results for user from mxisd one ldap search in display name and second by but it is still the same user riot x and older display it correctly but riot x display double results it is wrong it is similar situation as this platform desktop os windows version
1
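Each record in this dump repeats a fixed column cycle (row index, event id, event type, timestamp, repo, repo URL, action, title, labels, body, a float column, combined text, string label, cleaned text, binary label). A minimal sketch of how one such row might be represented — the field names are assumptions based on the column header of this dump, not an official schema:

```python
from dataclasses import dataclass

@dataclass
class IssueRecord:
    # Field names assumed from the dump's column header; the float
    # column (shown as "1.0"/"True") is kept as an opaque value.
    row_id: int
    event_id: int
    event_type: str      # e.g. "IssuesEvent"
    created_at: str
    repo: str
    repo_url: str
    action: str          # "opened" / "closed" / "reopened"
    title: str
    labels: str
    body: str
    text_combine: str    # title + body
    label: str           # "defect" or "non_defect"
    text: str            # lower-cased, punctuation-stripped copy
    binary_label: int    # 1 for defect, 0 for non_defect

def label_to_binary(label):
    """Map the string label onto the binary column (mapping inferred
    from the records in this dump: defect -> 1, non_defect -> 0)."""
    return 1 if label == "defect" else 0
```

The mapping in `label_to_binary` matches every record visible here (`defect` rows carry a trailing `1`, `non_defect` rows a trailing `0`).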
139,378
11,260,516,190
IssuesEvent
2020-01-13 10:41:47
saltstack/salt
https://api.github.com/repos/saltstack/salt
closed
Create an acceptance test suite for salt
Needs Testcase Pending Discussion stale
Currently Salt is tested either by the CI which has unit and functional suites but no actual integration/acceptance suite or in production where bugs like [these](https://github.com/saltstack/salt/issues?q=is%3Aopen+is%3Aissue+label%3A%22High+Severity%22) happen. In order to ensure that salt works on all cases an acceptance test suite should be created. The suite will use an actual salt instance and will execute the different components that salt is made of such as highstates or modules. Some of these tests can be run as part of the CI process for each commit since they are supposed to run in a timely fashion e.g.: - When booting a machine with a salt minion/master/syndic is available and running - Salt master is accepting keys and all minions responds to test.ping - All CLI commands that salt provides are available and functional. - The common states are functioning correctly on the most common operating system salt is deployed on. Some of these tests can only be run on a nightly basis e.g.: - Run all states on every supported operating system to ensure they behave exactly the same. - Run the cloud related states and ensure that they can create an instance and operate it and that salt is installed and responding in that instance. And the rest of the tests can be run before release or before merging an important feature in order to check regressions e.g.: - Ensure that a memory leak has not occurred when salt performs X many many times. - Ensure that timeouts don't occur when a master has a lot of minions. 
In order to implement an acceptance test suite I'd use [test-kitchen](https://github.com/test-kitchen/test-kitchen/) which allows me to boot virtual machines using [Vagrant](https://github.com/test-kitchen/kitchen-vagrant), run [salt operations](https://github.com/simonmcc/kitchen-salt) and test them using [serverspec](http://serverspec.org/), [bats](https://github.com/sstephenson/bats) and any other testing software you can think of if you are able to implement a simple plugin with [kitchen-busser](https://github.com/test-kitchen/busser). I'm willing to contribute at least a part of this suite but it requires cooperation on your part as well. Each cloud provider you support should be tested in order to ensure that salt-cloud and all cloud related states work so API keys with limited permissions needs to be set up. You might decide that you don't want to share them publicly and that's fine but it means that saltstack is responsible for running these tests in order to ensure nothing breaks. It also requires some modifications to your CI process and what should be installed on the CI machines in order for these tests to be able to run. What do you guys think?
1.0
Create an acceptance test suite for salt - Currently Salt is tested either by the CI which has unit and functional suites but no actual integration/acceptance suite or in production where bugs like [these](https://github.com/saltstack/salt/issues?q=is%3Aopen+is%3Aissue+label%3A%22High+Severity%22) happen. In order to ensure that salt works on all cases an acceptance test suite should be created. The suite will use an actual salt instance and will execute the different components that salt is made of such as highstates or modules. Some of these tests can be run as part of the CI process for each commit since they are supposed to run in a timely fashion e.g.: - When booting a machine with a salt minion/master/syndic is available and running - Salt master is accepting keys and all minions responds to test.ping - All CLI commands that salt provides are available and functional. - The common states are functioning correctly on the most common operating system salt is deployed on. Some of these tests can only be run on a nightly basis e.g.: - Run all states on every supported operating system to ensure they behave exactly the same. - Run the cloud related states and ensure that they can create an instance and operate it and that salt is installed and responding in that instance. And the rest of the tests can be run before release or before merging an important feature in order to check regressions e.g.: - Ensure that a memory leak has not occurred when salt performs X many many times. - Ensure that timeouts don't occur when a master has a lot of minions. 
In order to implement an acceptance test suite I'd use [test-kitchen](https://github.com/test-kitchen/test-kitchen/) which allows me to boot virtual machines using [Vagrant](https://github.com/test-kitchen/kitchen-vagrant), run [salt operations](https://github.com/simonmcc/kitchen-salt) and test them using [serverspec](http://serverspec.org/), [bats](https://github.com/sstephenson/bats) and any other testing software you can think of if you are able to implement a simple plugin with [kitchen-busser](https://github.com/test-kitchen/busser). I'm willing to contribute at least a part of this suite but it requires cooperation on your part as well. Each cloud provider you support should be tested in order to ensure that salt-cloud and all cloud related states work so API keys with limited permissions needs to be set up. You might decide that you don't want to share them publicly and that's fine but it means that saltstack is responsible for running these tests in order to ensure nothing breaks. It also requires some modifications to your CI process and what should be installed on the CI machines in order for these tests to be able to run. What do you guys think?
non_defect
create an acceptance test suite for salt currently salt is tested either by the ci which has unit and functional suites but no actual integration acceptance suite or in production where bugs like happen in order to ensure that salt works on all cases an acceptance test suite should be created the suite will use an actual salt instance and will execute the different components that salt is made of such as highstates or modules some of these tests can be run as part of the ci process for each commit since they are supposed to run in a timely fashion e g when booting a machine with a salt minion master syndic is available and running salt master is accepting keys and all minions responds to test ping all cli commands that salt provides are available and functional the common states are functioning correctly on the most common operating system salt is deployed on some of these tests can only be run on a nightly basis e g run all states on every supported operating system to ensure they behave exactly the same run the cloud related states and ensure that they can create an instance and operate it and that salt is installed and responding in that instance and the rest of the tests can be run before release or before merging an important feature in order to check regressions e g ensure that a memory leak has not occurred when salt performs x many many times ensure that timeouts don t occur when a master has a lot of minions in order to implement an acceptance test suite i d use which allows me to boot virtual machines using run and test them using and any other testing software you can think of if you are able to implement a simple plugin with i m willing to contribute at least a part of this suite but it requires cooperation on your part as well each cloud provider you support should be tested in order to ensure that salt cloud and all cloud related states work so api keys with limited permissions needs to be set up you might decide that you don t want to share them 
publicly and that s fine but it means that saltstack is responsible for running these tests in order to ensure nothing breaks it also requires some modifications to your ci process and what should be installed on the ci machines in order for these tests to be able to run what do you guys think
0
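The SaltStack record above proposes acceptance checks such as "all minions respond to test.ping". A minimal sketch of evaluating that check from salt's JSON output — the output shape (a minion-name to boolean mapping, as produced by `salt '*' test.ping --out=json`) is an assumption about the outputter, and the function name is hypothetical:

```python
import json

def all_minions_responding(salt_json_output):
    """Given JSON assumed to be a minion-name -> bool mapping (the
    shape salt's JSON outputter produces for test.ping), report
    whether every known minion answered.  An empty result counts as
    a failure, since zero responding minions is not a healthy cluster."""
    result = json.loads(salt_json_output)
    return bool(result) and all(result.values())
```

In an acceptance suite this would sit behind the harness that actually shells out to the salt CLI; the parsing is kept pure here so it can be exercised without a running master.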
169,947
13,166,764,015
IssuesEvent
2020-08-11 09:06:32
WoWManiaUK/Blackwing-Lair
https://api.github.com/repos/WoWManiaUK/Blackwing-Lair
closed
[NPC] Montarr - (give both faction quests) - Thousand Needles
Confirmed By Tester Fixed Confirmed Fixed in Dev zone 40-50
**Links:** npc http://cata.cavernoftime.com/npc=45271 horde quest http://cata.cavernoftime.com/quest=25874 ally quest http://cata.cavernoftime.com/quest=25873 ![both quests](https://user-images.githubusercontent.com/39439201/78411482-19a4b380-7610-11ea-8e0c-01d82ccc4b50.jpg) **What is happening:** - npc give both quests **What should happen:** _npc shouldnt give both quests_
1.0
[NPC] Montarr - (give both faction quests) - Thousand Needles - **Links:** npc http://cata.cavernoftime.com/npc=45271 horde quest http://cata.cavernoftime.com/quest=25874 ally quest http://cata.cavernoftime.com/quest=25873 ![both quests](https://user-images.githubusercontent.com/39439201/78411482-19a4b380-7610-11ea-8e0c-01d82ccc4b50.jpg) **What is happening:** - npc give both quests **What should happen:** _npc shouldnt give both quests_
non_defect
montarr give both faction quests thousand needles links npc horde quest ally quest what is happening npc give both quests what should happen npc shouldnt give both quests
0
6,628
2,610,258,086
IssuesEvent
2015-02-26 19:22:20
chrsmith/dsdsdaadf
https://api.github.com/repos/chrsmith/dsdsdaadf
opened
深圳激光怎么样祛痤疮
auto-migrated Priority-Medium Type-Defect
``` 深圳激光怎么样祛痤疮【深圳韩方科颜全国热线400-869-1818,24 小时QQ4008691818】深圳韩方科颜专业祛痘连锁机构,机构以韩�� �秘方——韩方科颜这一国妆准字号治疗型权威,祛痘佳品,� ��方科颜专业祛痘连锁机构,采用韩国秘方配合专业“不反弹 ”健康祛痘技术并结合先进“先进豪华彩光”仪,开创国内�� �业治疗粉刺、痤疮签约包治先河,成功消除了许多顾客脸上� ��痘痘。 ``` ----- Original issue reported on code.google.com by `szft...@163.com` on 14 May 2014 at 8:47
1.0
深圳激光怎么样祛痤疮 - ``` 深圳激光怎么样祛痤疮【深圳韩方科颜全国热线400-869-1818,24 小时QQ4008691818】深圳韩方科颜专业祛痘连锁机构,机构以韩�� �秘方——韩方科颜这一国妆准字号治疗型权威,祛痘佳品,� ��方科颜专业祛痘连锁机构,采用韩国秘方配合专业“不反弹 ”健康祛痘技术并结合先进“先进豪华彩光”仪,开创国内�� �业治疗粉刺、痤疮签约包治先河,成功消除了许多顾客脸上� ��痘痘。 ``` ----- Original issue reported on code.google.com by `szft...@163.com` on 14 May 2014 at 8:47
defect
深圳激光怎么样祛痤疮 深圳激光怎么样祛痤疮【 , 】深圳韩方科颜专业祛痘连锁机构,机构以韩�� �秘方——韩方科颜这一国妆准字号治疗型权威,祛痘佳品,� ��方科颜专业祛痘连锁机构,采用韩国秘方配合专业“不反弹 ”健康祛痘技术并结合先进“先进豪华彩光”仪,开创国内�� �业治疗粉刺、痤疮签约包治先河,成功消除了许多顾客脸上� ��痘痘。 original issue reported on code google com by szft com on may at
1
214,091
24,039,455,985
IssuesEvent
2022-09-15 23:06:26
Azure/AKS
https://api.github.com/repos/Azure/AKS
opened
CVE-2021-25749: runAsNonRoot logic bypass for Windows containers
security announcement
### Issue Details See the GitHub issue for more details: [https://github.com/kubernetes/kubernetes/issues/112192](https://nam06.safelinks.protection.outlook.com/?url=https%3A%2F%2Fgithub.com%2Fkubernetes%2Fkubernetes%2Fissues%2F112192&data=05%7C01%7CMichael.Withrow%40microsoft.com%7Cebc9a4d1b1744d77a1da08da975ea5ce%7C72f988bf86f141af91ab2d7cd011db47%7C1%7C0%7C637988730072970444%7CUnknown%7CTWFpbGZsb3d8eyJWIjoiMC4wLjAwMDAiLCJQIjoiV2luMzIiLCJBTiI6Ik1haWwiLCJXVCI6Mn0%3D%7C3000%7C%7C%7C&sdata=SzczrxmcoaaXvV09XruaKB2%2BJO%2FlCu6RKTuhPa3nRLE%3D&reserved=0) Hello Kubernetes Community, A security issue was discovered in Kubernetes that could allow Windows workloads to run as ContainerAdministrator even when those workloads set the runAsNonRoot option to true . This issue has been rated low and assigned CVE-2021-25749 Am I vulnerable? All Kubernetes clusters with following versions, running Windows workloads with runAsNonRoot are impacted. Affected Versions • kubelet v1.20 - v1.21 • kubelet v1.22.0 - v1.22.13 • kubelet v1.23.0 - v1.23.10 • kubelet v1.24.0 - v1.24.4 How do I mitigate this vulnerability? There are no known mitigations to this vulnerability. Fixed Versions • kubelet v1.22.14 • kubelet v1.23.11 • kubelet v1.23.5 • kubelet v1.25.0 To upgrade, refer to this documentation. 
For core Kubernetes: [https://kubernetes.io/docs/tasks/administer-cluster/cluster-management/#upgrading-a-cluster](https://nam06.safelinks.protection.outlook.com/?url=https%3A%2F%2Fkubernetes.io%2Fdocs%2Ftasks%2Fadminister-cluster%2Fcluster-management%2F%23upgrading-a-cluster&data=05%7C01%7CMichael.Withrow%40microsoft.com%7Cebc9a4d1b1744d77a1da08da975ea5ce%7C72f988bf86f141af91ab2d7cd011db47%7C1%7C0%7C637988730072970444%7CUnknown%7CTWFpbGZsb3d8eyJWIjoiMC4wLjAwMDAiLCJQIjoiV2luMzIiLCJBTiI6Ik1haWwiLCJXVCI6Mn0%3D%7C3000%7C%7C%7C&sdata=tTmz7SeszGmq%2FXjUxtW7Ijb4SxSY%2BgfHD787GuCRQko%3D&reserved=0) Detection Kubernetes Audit logs may indicate if the user name was misspelled to bypass the restriction placed on which user is a pod allowed to run as. If you find evidence that this vulnerability has been exploited, please contact [security@kubernetes.io](mailto:security@kubernetes.io)
True
CVE-2021-25749: runAsNonRoot logic bypass for Windows containers - ### Issue Details See the GitHub issue for more details: [https://github.com/kubernetes/kubernetes/issues/112192](https://nam06.safelinks.protection.outlook.com/?url=https%3A%2F%2Fgithub.com%2Fkubernetes%2Fkubernetes%2Fissues%2F112192&data=05%7C01%7CMichael.Withrow%40microsoft.com%7Cebc9a4d1b1744d77a1da08da975ea5ce%7C72f988bf86f141af91ab2d7cd011db47%7C1%7C0%7C637988730072970444%7CUnknown%7CTWFpbGZsb3d8eyJWIjoiMC4wLjAwMDAiLCJQIjoiV2luMzIiLCJBTiI6Ik1haWwiLCJXVCI6Mn0%3D%7C3000%7C%7C%7C&sdata=SzczrxmcoaaXvV09XruaKB2%2BJO%2FlCu6RKTuhPa3nRLE%3D&reserved=0) Hello Kubernetes Community, A security issue was discovered in Kubernetes that could allow Windows workloads to run as ContainerAdministrator even when those workloads set the runAsNonRoot option to true . This issue has been rated low and assigned CVE-2021-25749 Am I vulnerable? All Kubernetes clusters with following versions, running Windows workloads with runAsNonRoot are impacted. Affected Versions • kubelet v1.20 - v1.21 • kubelet v1.22.0 - v1.22.13 • kubelet v1.23.0 - v1.23.10 • kubelet v1.24.0 - v1.24.4 How do I mitigate this vulnerability? There are no known mitigations to this vulnerability. Fixed Versions • kubelet v1.22.14 • kubelet v1.23.11 • kubelet v1.23.5 • kubelet v1.25.0 To upgrade, refer to this documentation. 
For core Kubernetes: [https://kubernetes.io/docs/tasks/administer-cluster/cluster-management/#upgrading-a-cluster](https://nam06.safelinks.protection.outlook.com/?url=https%3A%2F%2Fkubernetes.io%2Fdocs%2Ftasks%2Fadminister-cluster%2Fcluster-management%2F%23upgrading-a-cluster&data=05%7C01%7CMichael.Withrow%40microsoft.com%7Cebc9a4d1b1744d77a1da08da975ea5ce%7C72f988bf86f141af91ab2d7cd011db47%7C1%7C0%7C637988730072970444%7CUnknown%7CTWFpbGZsb3d8eyJWIjoiMC4wLjAwMDAiLCJQIjoiV2luMzIiLCJBTiI6Ik1haWwiLCJXVCI6Mn0%3D%7C3000%7C%7C%7C&sdata=tTmz7SeszGmq%2FXjUxtW7Ijb4SxSY%2BgfHD787GuCRQko%3D&reserved=0) Detection Kubernetes Audit logs may indicate if the user name was misspelled to bypass the restriction placed on which user is a pod allowed to run as. If you find evidence that this vulnerability has been exploited, please contact [security@kubernetes.io](mailto:security@kubernetes.io)
non_defect
cve runasnonroot logic bypass for windows containers issue details see the github issue for more details hello kubernetes community a security issue was discovered in kubernetes that could allow windows workloads to run as containeradministrator even when those workloads set the runasnonroot option to true this issue has been rated low and assigned cve am i vulnerable all kubernetes clusters with following versions running windows workloads with runasnonroot are impacted affected versions • kubelet • kubelet • kubelet • kubelet how do i mitigate this vulnerability there are no known mitigations to this vulnerability fixed versions • kubelet • kubelet • kubelet • kubelet to upgrade refer to this documentation for core kubernetes detection kubernetes audit logs may indicate if the user name was misspelled to bypass the restriction placed on which user is a pod allowed to run as if you find evidence that this vulnerability has been exploited please contact mailto security kubernetes io
0
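The CVE record above lists affected kubelet version ranges explicitly. A minimal sketch of checking a version against those ranges — the ranges are copied from the advisory text in the record, while the parsing is a simplified assumption (no pre-release or build-suffix handling):

```python
# Affected ranges copied from the advisory text above; the 999 patch
# bound is a sentinel meaning "every patch release of that minor".
AFFECTED = [
    ((1, 20, 0), (1, 21, 999)),   # v1.20 - v1.21 (all patches)
    ((1, 22, 0), (1, 22, 13)),    # v1.22.0 - v1.22.13
    ((1, 23, 0), (1, 23, 10)),    # v1.23.0 - v1.23.10
    ((1, 24, 0), (1, 24, 4)),     # v1.24.0 - v1.24.4
]

def parse_version(v):
    """Turn 'v1.22.13' into a comparable (1, 22, 13) tuple."""
    parts = v.lstrip("v").split(".")
    while len(parts) < 3:
        parts.append("0")
    return tuple(int(p) for p in parts[:3])

def is_affected(kubelet_version):
    """True if the kubelet version falls inside an affected range
    for CVE-2021-25749, per the ranges quoted above."""
    ver = parse_version(kubelet_version)
    return any(lo <= ver <= hi for lo, hi in AFFECTED)
```

Tuple comparison makes the range check a straight lexicographic test, so the boundary versions named in the advisory (e.g. v1.22.13 affected, v1.22.14 fixed) fall on the correct sides.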
33,569
7,166,019,815
IssuesEvent
2018-01-29 16:02:35
PowerDNS/pdns
https://api.github.com/repos/PowerDNS/pdns
closed
RPZ zones not loaded from server if unsuccesful on startup
defect rec
- Program: Recursor - Issue type: Bug report ### Short description When configuring the recursor to load RPZ zones from a server and that server is unavailable on startup, the recursor will not retry loading the zones later. ### Steps to reproduce 1. Have an authoritative server «rpz-server» serving an RPZ zone «rpz-zone», 2. Have a Lua config file for the recursor with the following contents: `rpzMaster("«rpz-server»", "«rpz-zone»", {defpol=Policy.Drop,refresh=8})`, 3. Make the authoritative server unavailable (pull the network cable for example), 4. Once the recursor has fully started up make the authoritative server available again, 5. Observer that nothing happens. ### Expected behaviour In step 5 of the previous section we would expect the recursor to try loading the RPZ zone again. ### Actual behaviour Nothing. ### Other information The `RPZIXFRTracker` is not started if «rpz-server» is not available on startup.
1.0
RPZ zones not loaded from server if unsuccesful on startup - - Program: Recursor - Issue type: Bug report ### Short description When configuring the recursor to load RPZ zones from a server and that server is unavailable on startup, the recursor will not retry loading the zones later. ### Steps to reproduce 1. Have an authoritative server «rpz-server» serving an RPZ zone «rpz-zone», 2. Have a Lua config file for the recursor with the following contents: `rpzMaster("«rpz-server»", "«rpz-zone»", {defpol=Policy.Drop,refresh=8})`, 3. Make the authoritative server unavailable (pull the network cable for example), 4. Once the recursor has fully started up make the authoritative server available again, 5. Observe that nothing happens. ### Expected behaviour In step 5 of the previous section we would expect the recursor to try loading the RPZ zone again. ### Actual behaviour Nothing. ### Other information The `RPZIXFRTracker` is not started if «rpz-server» is not available on startup.
defect
rpz zones not loaded from server if unsuccesful on startup program recursor issue type bug report short description when configuring the recursor to load rpz zones from a server and that server is unavailable on startup the recursor will not retry loading the zones later steps to reproduce have an authoritative server «rpz server» serving an rpz zone «rpz zone» have a lua config file for the recursor with the following contents rpzmaster «rpz server» «rpz zone» defpol policy drop refresh make the authoritative server unavailable pull the network cable for example once the recursor has fully started up make the authoritative server available again observe that nothing happens expected behaviour in step of the previous section we would expect the recursor to try loading the rpz zone again actual behaviour nothing other information the rpzixfrtracker is not started if «rpz server» is not available on startup
1
81,828
31,722,217,854
IssuesEvent
2023-09-10 14:39:05
cakephp/cakephp
https://api.github.com/repos/cakephp/cakephp
closed
5.x: Small regression on extractOriginal()
defect
### Description ```php $this->assertSame(['id' => null, 'body' => 'test save'], $entityBefore->extractOriginal(['id', 'body'])); ``` This used to be true. Now it is expected to be ```php $this->assertSame(['body' => 'test save'], $entityBefore->extractOriginal(['id', 'body'])); ``` It is weird that here, requesting two fields, one of them is dropped now. I wonder if we could revert here the original behavior? ### CakePHP Version 5.x ### PHP Version 8.1
1.0
5.x: Small regression on extractOriginal() - ### Description ```php $this->assertSame(['id' => null, 'body' => 'test save'], $entityBefore->extractOriginal(['id', 'body'])); ``` This used to be true. Now it is expected to be ```php $this->assertSame(['body' => 'test save'], $entityBefore->extractOriginal(['id', 'body'])); ``` It is weird that here, requesting two fields, one of them is dropped now. I wonder if we could revert here the original behavior? ### CakePHP Version 5.x ### PHP Version 8.1
defect
x small regression on extractoriginal description php this assertsame entitybefore extractoriginal this used to be true now it is expected to be php this assertsame entitybefore extractoriginal it is weird that here requesting two fields one of them is dropped now i wonder if we could revert here the original behavior cakephp version x php version
1
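Each record above pairs a raw `text_combine` field (title plus body) with a normalized `text` field. As a rough illustration of how the normalized column appears to be derived, here is a minimal sketch. The lowercase-and-strip rule is an assumption inferred from these three rows, and the real pipeline evidently differs in places — it preserves the « » guillemets in the PowerDNS row and drops some quoted array literals in the CakePHP row, neither of which this sketch models:

```python
import re

def clean_text(raw: str) -> str:
    """Approximate the `text` column: lowercase `text_combine`,
    replace every run of non-letter characters (digits, punctuation,
    markdown fences) with a single space, and trim the ends."""
    lowered = raw.lower()
    letters_only = re.sub(r"[^a-z]+", " ", lowered)
    return letters_only.strip()

# Checked against the CakePHP row above:
print(clean_text("5.x: Small regression on extractOriginal()"))
# x small regression on extractoriginal
```

For instance, `clean_text("mailto:security@kubernetes.io")` yields `mailto security kubernetes io`, matching the tail of the Kubernetes CVE row.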