column        dtype          range / classes
------------  -------------  -----------------
Unnamed: 0    int64          0 .. 832k
id            float64        2.49B .. 32.1B
type          stringclasses  1 value
created_at    stringlengths  19 .. 19
repo          stringlengths  5 .. 112
repo_url      stringlengths  34 .. 141
action        stringclasses  3 values
title         stringlengths  1 .. 757
labels        stringlengths  4 .. 664
body          stringlengths  3 .. 261k
index         stringclasses  10 values
text_combine  stringlengths  96 .. 261k
label         stringclasses  2 values
text          stringlengths  96 .. 232k
binary_label  int64          0 .. 1
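Under the schema above, one record can be sketched in plain Python. Only the column names and dtypes come from the listing; the field values below are invented for illustration.

```python
# A minimal sketch of one record under the schema above; values are invented.
record = {
    "Unnamed: 0": 7905,                    # int64, 0 .. ~832k
    "id": 2_611_064_643.0,                 # float64 GitHub event id
    "type": "IssuesEvent",                 # stringclasses: 1 value
    "created_at": "2015-02-27 00:30:11",   # stringlengths: always 19 chars
    "repo": "owner/name",
    "repo_url": "https://api.github.com/repos/owner/name",
    "action": "opened",                    # stringclasses: 3 values
    "title": "Example issue title",
    "labels": "Type-Defect",
    "body": "Example issue body",
    "index": "1.0",                        # stringclasses: 10 values
    "label": "defect",                     # stringclasses: defect / non_defect
}

# text_combine is "title - body"; binary_label encodes label as 0/1
record["text_combine"] = f'{record["title"]} - {record["body"]}'
record["binary_label"] = 1 if record["label"] == "defect" else 0
```

The `text` column appears to be the same combined text lowercased and stripped of punctuation, presumably for model input.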
row: 7,905
id: 2,611,064,643
type: IssuesEvent
created_at: 2015-02-27 00:30:11
repo: alistairreilly/andors-trail
repo_url: https://api.github.com/repos/alistairreilly/andors-trail
action: opened
title: Lootbag left on ground after fight
labels: auto-migrated Priority-Low Type-Defect
``` What steps will reproduce the problem? 1. Fight two (or more) monsters at the same time. 2. Kill one of them and make sure it leaves a lootbag. 3. Enter the inventory. 4. Kill the other monster. What is the expected output? What do you see instead? The first lootbag is left on the ground instead of appearing in the dialog. What version of the product are you using? On what device? Version 0.6.8 on HTC Hero (Stock 2.1 (slightly modified)) Please provide any additional information below. ``` Original issue reported on code.google.com by `johan.si...@gmail.com` on 24 Mar 2011 at 4:55
index: 1.0
label: defect
binary_label: 1
row: 423,598
id: 12,298,938,741
type: IssuesEvent
created_at: 2020-05-11 11:27:08
repo: AxonFramework/AxonFramework
repo_url: https://api.github.com/repos/AxonFramework/AxonFramework
action: opened
title: Full Concurrency Policy sequence change
labels: Ideal for Contribution Priority 3: Could Type: Enhancement
Currently, `FullConcurrencyPolicy` returns `null` for its sequence. Since messages are processed concurrently when their sequences are not equal, we should return the message identifier as the sequence for this policy. Proposed implementation:

```java
public class FullConcurrencyPolicy implements SequencingPolicy<EventMessage<?>> {

    @Override
    public Object getSequenceIdentifierFor(EventMessage<?> event) {
        return event.getIdentifier();
    }
}
```

Please check its usages and adjust them accordingly.
index: 1.0
label: non_defect
binary_label: 0
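The effect of a sequencing policy can be sketched outside Java. This is an illustrative Python model, not Axon code; the event fields and policy names are invented. Events whose policy output is equal share an ordered "lane"; returning a unique identifier per event (the proposed behaviour) puts every event in its own lane.

```python
# Illustrative model of sequencing policies; not Axon code.
from collections import defaultdict

def sequential_per_aggregate(event):
    # hypothetical analogue of a per-aggregate policy:
    # events for the same aggregate share a sequence, so they are ordered
    return event["aggregate_id"]

def full_concurrency(event):
    # the proposed behaviour: every event gets its own sequence identifier,
    # so no two events are ever forced into the same ordered lane
    return event["event_id"]

def lanes(events, policy):
    # group events by the policy's sequence identifier, preserving order
    groups = defaultdict(list)
    for e in events:
        groups[policy(e)].append(e["event_id"])
    return list(groups.values())

events = [
    {"event_id": "e1", "aggregate_id": "A"},
    {"event_id": "e2", "aggregate_id": "A"},
    {"event_id": "e3", "aggregate_id": "B"},
]
```

With the per-aggregate policy, `e1` and `e2` share a lane; with full concurrency, all three events land in separate lanes, which is why a `null` sequence (meaning "no sequence") and a unique identifier per message express the same intent differently.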
row: 60,710
id: 17,023,500,067
type: IssuesEvent
created_at: 2021-07-03 02:20:35
repo: tomhughes/trac-tickets
repo_url: https://api.github.com/repos/tomhughes/trac-tickets
action: closed
title: Island label not rendered right when island is represented as a way
labels: Component: mapnik Priority: minor Resolution: invalid Type: defect
**[Submitted to the original trac issue database at 4.00pm, Wednesday, 28th October 2009]** example: [http://www.openstreetmap.org/?lat=39.38512&lon=-83.89296&zoom=16&layers=B000FTF] The way's "name" is rendered along its outline. But the way is closed, and tagged place=island, so the renderer should understand that it's an area without an area=yes tag. Ideally, the label should be formatted exactly as if the island were represented with a single node. (The same ideal holds for other features that can be mapped as single nodes or as ways/multipolygons, though many don't...)
index: 1.0
label: defect
binary_label: 1
row: 27,949
id: 5,141,814,766
type: IssuesEvent
created_at: 2017-01-12 11:06:04
repo: bridgedotnet/Bridge
repo_url: https://api.github.com/repos/bridgedotnet/Bridge
action: closed
title: Invalid runtime exception in checked conversion to ulong
labels: defect
When I use the following code to convert an int to ulong inside a checked expression:

```c#
var a = 0;
var b = checked((ulong)a);
```

I get the following JS code:

```js
var a = 0;
var b = Bridge.Int.check(a, System.UInt64);
```

It throws `System.OverflowException` at runtime in `Bridge.Int` at:

```js
check: function (x, type) {
    if (System.Int64.is64Bit(x)) {
        return System.Int64.check(x, type);
    } else if (x instanceof System.Decimal) {
        return System.Decimal.toInt(x, type);
    }

    if (Bridge.isNumber(x) && !type.$is(x)) { // type.$is(x) returns false
        throw new System.OverflowException();
    }

    if (Bridge.Int.isInfinite(x)) {
        if (type === System.Int64 || type === System.UInt64) {
            return type.MinValue;
        }
        return type.min;
    }

    return x;
},
```

### Steps To Reproduce

[Deck](http://deck.net/1387d92a6177f8fdab399def8b09bbf6)

```cs
public class Program
{
    public static void Main()
    {
        var a = 0;
        var b = checked((ulong)a);
    }
}
```
index: 1.0
label: defect
binary_label: 1
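For reference, the semantics a checked int-to-ulong conversion should have can be sketched in Python. This is illustrative only, not Bridge code: a checked conversion should overflow only when the value is outside the target range, so 0 must convert cleanly rather than throw.

```python
# Illustrative sketch of checked int -> ulong semantics; not Bridge code.
# Range matches .NET's System.UInt64.
ULONG_MIN, ULONG_MAX = 0, 2**64 - 1

def checked_to_ulong(x: int) -> int:
    # only raise when the value genuinely cannot be represented as ulong
    if not (ULONG_MIN <= x <= ULONG_MAX):
        raise OverflowError(x)
    return x
```

By these semantics, `checked((ulong)0)` is in range and the reported `OverflowException` is incorrect, which matches the issue's claim that `type.$is(x)` wrongly returns false.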
row: 179,339
id: 21,566,742,522
type: IssuesEvent
created_at: 2022-05-02 00:02:46
repo: drakeg/udemy_django_vue
repo_url: https://api.github.com/repos/drakeg/udemy_django_vue
action: closed
title: CVE-2021-27290 (High) detected in ssri-5.3.0.tgz, ssri-6.0.1.tgz - autoclosed
labels: security vulnerability
## CVE-2021-27290 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>ssri-5.3.0.tgz</b>, <b>ssri-6.0.1.tgz</b></p></summary> <p> <details><summary><b>ssri-5.3.0.tgz</b></p></summary> <p>Standard Subresource Integrity library -- parses, serializes, generates, and verifies integrity metadata according to the SRI spec.</p> <p>Library home page: <a href="https://registry.npmjs.org/ssri/-/ssri-5.3.0.tgz">https://registry.npmjs.org/ssri/-/ssri-5.3.0.tgz</a></p> <p> Dependency Hierarchy: </details> <details><summary><b>ssri-6.0.1.tgz</b></p></summary> <p>Standard Subresource Integrity library -- parses, serializes, generates, and verifies integrity metadata according to the SRI spec.</p> <p>Library home page: <a href="https://registry.npmjs.org/ssri/-/ssri-6.0.1.tgz">https://registry.npmjs.org/ssri/-/ssri-6.0.1.tgz</a></p> <p> Dependency Hierarchy: </details> <p>Found in HEAD commit: <a href="https://github.com/drakeg/udemy_django_vue/commit/4ec6f55e03b63785c5b7b8e39eba942b3c9f2ae8">4ec6f55e03b63785c5b7b8e39eba942b3c9f2ae8</a></p> <p>Found in base branch: <b>master</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> ssri 5.2.2-8.0.0, fixed in 8.0.1, processes SRIs using a regular expression which is vulnerable to a denial of service. Malicious SRIs could take an extremely long time to process, leading to denial of service. This issue only affects consumers using the strict option. 
<p>Publish Date: 2021-03-12 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2021-27290>CVE-2021-27290</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://github.com/advisories/GHSA-vx3p-948g-6vhq">https://github.com/advisories/GHSA-vx3p-948g-6vhq</a></p> <p>Release Date: 2021-03-12</p> <p>Fix Resolution: 6.0.2</p> </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
index: True
label: non_defect
binary_label: 0
row: 104,591
id: 13,099,317,370
type: IssuesEvent
created_at: 2020-08-03 21:20:18
repo: StrangeLoopGames/EcoIssues
repo_url: https://api.github.com/repos/StrangeLoopGames/EcoIssues
action: opened
title: Solid Pollutant Containment Structures
labels: Category: Art Category: Balance Category: Design Category: Gameplay Category: UI
This document covers not just this epic but pollution in general, with sections about pollutant types, liquid pollutants, and many possible expansions to containment mechanics: [Pollution Design Document] [https://docs.google.com/document/d/1VOpHQVpoq4wsFdKKkwODrsuUAZHfu5aS_k9EC78g40w/edit?usp=sharing](url) In Eco, right now, containment mechanics for tailings are some of the least developed mechanics we have, and bear little relation to a simulation of containing pollutants: currently, players must cover the blocks *over* tailings to get them to stop polluting. In reality, tailings are usually contained in the opposite way, with an open top and sealed sides and bottom. There are also many classes of tailings, some of which are far less harmful when uncontained, but that will be covered in another Epic. Containment is not just about tailings, but about containment mechanics used for all physical pollutant blocks in Eco, especially garbage. Heavy industrial liquid pollutants will eventually require treatment into byproducts, including solid pollutants like a new Hazardous Waste block, since containing huge amounts of highly toxic liquids is not feasible beyond holding tanks. In all likelihood, then, containment mechanics will only apply to *solid* blocks in Eco, with liquid pollutants wreaking havoc wherever they are let into the environment without treatment. We might revise or alter that general trajectory later, but this Epic is concerned only with solid pollutant containment.
**Containment Structures** The main way players should contain solid pollutants in Eco like garbage and wet tailings is with Containment Structures: these are **pits (or walls and floors) that have sides and bottoms with *no gaps*, including on the corners, and are made of one of three materials: Clay, Plastic-Lined Clay, or Concrete.** **Materials and Layers** What materials a containment structure is made of is critical: it needs to be a material that is non-porous and thus stops leakage of pollutants, and of water that becomes saturated with pollutants while moving through the structure. **Clay** is the primary material required for containment structures since it blocks liquids and solids alike, and this should be what most containment structures are built from in Eco, since it is the cheapest and most readily available. Beyond clay, a plastic lining is another extremely useful part of containment structures for garbage. A new block will be added called 'Plastic Lined Clay' that should provide improved pollutant containment for garbage over plain clay. The other material to consider for containment structures is concrete, which should provide a bonus to large tailings containment structures. This represents an additional blocking layer for pollutants, but especially it represents the added *stability* tailings containment structures need to be safe. In the real world, disaster comes from tailings containment structures most often when they collapse spectacularly, though long term the pollution of the water table is probably comparably bad. **Multi-Layer Structures** In reality, containment structures are usually built with multiple kinds of layers all working together to ensure that rainwater does not break containment, pollutants do not escape into the water table, and gases or volatile liquids do not build up.
Single-walled containment structures can be built of Clay, Plastic Lined Clay, or Concrete, but advanced structures with maximum benefit **(see the document)** should require layers of Clay or Plastic Lined Clay *and* Concrete (for wet tailings), or multiple layers of Plastic Lined Clay (for garbage). These multi-layered structures are an improvement over the basic single-walled mechanics and aren't necessary for an MVP implementation of containment. **Overcapacity** A basic rule should be added to containment structure mechanics stipulating that any pollutant on the topmost block of the structure does not count as contained. **Grounding** An extension to the containment mechanics, probably reliant on code that can detect multiple layers, requiring an additional 5 layers of 'Solid' blocks in all directions to establish that the structure is well grounded and not in danger of collapse. This is particularly relevant for tailings blocks; garbage blocks could potentially be exempt from this mechanic when calculating how much they pollute. **Drainage** This expansion of containment mechanics means requiring pipes and pumps added to containment structures to avoid build-up of rainfall, etc., and leaching of pollutants from the bottoms of containment structures. Basically, it means hooking up pipes to the bottom of the structure, which must be powered with pumps and over time produce small amounts of Sewage (from garbage) or Industrial Sewage (from wet tailings) to be processed. **Caps** Eventually, Caps should be added to containment mechanics to further advance players' options for improved containment of pollutants, especially for garbage landfills. This is a secondary part of Containment mechanics and not covered here.
index: 1.0
label: non_defect
binary_label: 0
row: 29,822
id: 11,780,013,347
type: IssuesEvent
created_at: 2020-03-16 19:09:37
repo: tektoncd/dashboard
repo_url: https://api.github.com/repos/tektoncd/dashboard
action: closed
title: Use X-Frame-Options header to prevent Clickjacking
labels: security-medium
Dashboard should set the X-Frame-Options header in order to prevent "Clickjacking", also known as a "UI redress attack", in which an attacker uses multiple transparent or opaque layers to trick a user into clicking on a button or link on another page when they were intending to click on the top-level page. IBM staff can find more details via https://github.ibm.com/IBMCloudPak4Apps/icpa-security/issues/3.
True
Use X-Frame-Options header to prevent Clickjacking - Dashboard should set the X-Frame-Options header in order to prevent "Clickjacking", also known as a "UI redress attack", in which an attacker uses multiple transparent or opaque layers to trick a user into clicking on a button or link on another page when they were intending to click on the top-level page. IBM staff can find more details via https://github.ibm.com/IBMCloudPak4Apps/icpa-security/issues/3.
non_defect
use x frame options header to prevent clickjacking dashboard should set the x frame options header in order to prevent clickjacking also known as a ui redress attack in which an attacker uses multiple transparent or opaque layers to trick a user into clicking on a button or link on another page when they were intending to click on the top level page ibm staff can find more details via
0
81,525
30,922,362,534
IssuesEvent
2023-08-06 03:47:58
zed-industries/community
https://api.github.com/repos/zed-industries/community
opened
Buggy rename from sidebar
defect triage admin read
### Check for existing issues - [X] Completed ### Describe the bug / provide steps to reproduce it Reproduce steps: 1. Run "workspace::NewFile" by shortcut or command box. (DON'T create the new file from sidebar) 2. Run "workspace::Save" to save the file immediately with name `test.hml` in the Finder dialog. 3. Rename the file to `test.html` in the sidebar using the right click menu and confirm. Observation: 1. The file name is renamed to `test.html` in the sidebar. 2. The opened file / buffer is not rename to `test.html`, still `test.tml`. 3. Write some test in the opened buffer `test.hml` and save, a `test.hml` is created in the sidebar. 4. Clicking `test.html` from sidebar will not open the file `test.html`. 5. Close file / buffer `test.hml`, click `test.html` and an empty file is opened. Note that if the file `test.hml` is created from sidebar in reproduce steps 1. The issue does not happen. ### Environment Zed: v0.98.1 (preview) OS: macOS 13.2.1 Memory: 32 GiB Architecture: x86_64 ### If applicable, add mockups / screenshots to help explain present your vision of the feature _No response_ ### If applicable, attach your `~/Library/Logs/Zed/Zed.log` file to this issue. If you only need the most recent lines, you can run the `zed: open log` command palette action to see the last 1000. _No response_
1.0
Buggy rename from sidebar - ### Check for existing issues - [X] Completed ### Describe the bug / provide steps to reproduce it Reproduce steps: 1. Run "workspace::NewFile" by shortcut or command box. (DON'T create the new file from sidebar) 2. Run "workspace::Save" to save the file immediately with name `test.hml` in the Finder dialog. 3. Rename the file to `test.html` in the sidebar using the right click menu and confirm. Observation: 1. The file name is renamed to `test.html` in the sidebar. 2. The opened file / buffer is not rename to `test.html`, still `test.tml`. 3. Write some test in the opened buffer `test.hml` and save, a `test.hml` is created in the sidebar. 4. Clicking `test.html` from sidebar will not open the file `test.html`. 5. Close file / buffer `test.hml`, click `test.html` and an empty file is opened. Note that if the file `test.hml` is created from sidebar in reproduce steps 1. The issue does not happen. ### Environment Zed: v0.98.1 (preview) OS: macOS 13.2.1 Memory: 32 GiB Architecture: x86_64 ### If applicable, add mockups / screenshots to help explain present your vision of the feature _No response_ ### If applicable, attach your `~/Library/Logs/Zed/Zed.log` file to this issue. If you only need the most recent lines, you can run the `zed: open log` command palette action to see the last 1000. _No response_
defect
buggy rename from sidebar check for existing issues completed describe the bug provide steps to reproduce it reproduce steps run workspace newfile by shortcut or command box don t create the new file from sidebar run workspace save to save the file immediately with name test hml in the finder dialog rename the file to test html in the sidebar using the right click menu and confirm observation the file name is renamed to test html in the sidebar the opened file buffer is not rename to test html still test tml write some test in the opened buffer test hml and save a test hml is created in the sidebar clicking test html from sidebar will not open the file test html close file buffer test hml click test html and an empty file is opened note that if the file test hml is created from sidebar in reproduce steps the issue does not happen environment zed preview os macos memory gib architecture if applicable add mockups screenshots to help explain present your vision of the feature no response if applicable attach your library logs zed zed log file to this issue if you only need the most recent lines you can run the zed open log command palette action to see the last no response
1
74,917
7,452,151,116
IssuesEvent
2018-03-29 07:15:41
pravega/pravega
https://api.github.com/repos/pravega/pravega
closed
NPE in testReadWrite
area/client area/testing kind/bug priority/P0 status/needs-attention
**Problem description** We got the following stack trace in one of the Travis builds: ``` io.pravega.test.integration.ReadWriteTest > readWriteTest FAILED java.util.concurrent.ExecutionException: java.lang.NullPointerException at java.util.concurrent.CompletableFuture.reportGet(CompletableFuture.java:357) at java.util.concurrent.CompletableFuture.get(CompletableFuture.java:1895) at io.pravega.test.integration.ReadWriteTest.readWriteTest(ReadWriteTest.java:180) Caused by: java.lang.NullPointerException at io.pravega.client.stream.impl.EventStreamReaderImpl.acquireSegmentsIfNeeded(EventStreamReaderImpl.java:189) at io.pravega.client.stream.impl.EventStreamReaderImpl.updateGroupStateIfNeeded(EventStreamReaderImpl.java:157) at io.pravega.client.stream.impl.EventStreamReaderImpl.readNextEvent(EventStreamReaderImpl.java:91) at io.pravega.test.integration.ReadWriteTest.lambda$startNewReader$1(ReadWriteTest.java:238) ``` **Problem location** Reader group state manager. **Suggestions for an improvement** Track and fix NPE.
1.0
NPE in testReadWrite - **Problem description** We got the following stack trace in one of the Travis builds: ``` io.pravega.test.integration.ReadWriteTest > readWriteTest FAILED java.util.concurrent.ExecutionException: java.lang.NullPointerException at java.util.concurrent.CompletableFuture.reportGet(CompletableFuture.java:357) at java.util.concurrent.CompletableFuture.get(CompletableFuture.java:1895) at io.pravega.test.integration.ReadWriteTest.readWriteTest(ReadWriteTest.java:180) Caused by: java.lang.NullPointerException at io.pravega.client.stream.impl.EventStreamReaderImpl.acquireSegmentsIfNeeded(EventStreamReaderImpl.java:189) at io.pravega.client.stream.impl.EventStreamReaderImpl.updateGroupStateIfNeeded(EventStreamReaderImpl.java:157) at io.pravega.client.stream.impl.EventStreamReaderImpl.readNextEvent(EventStreamReaderImpl.java:91) at io.pravega.test.integration.ReadWriteTest.lambda$startNewReader$1(ReadWriteTest.java:238) ``` **Problem location** Reader group state manager. **Suggestions for an improvement** Track and fix NPE.
non_defect
npe in testreadwrite problem description we got the following stack trace in one of the travis builds io pravega test integration readwritetest readwritetest failed java util concurrent executionexception java lang nullpointerexception at java util concurrent completablefuture reportget completablefuture java at java util concurrent completablefuture get completablefuture java at io pravega test integration readwritetest readwritetest readwritetest java caused by java lang nullpointerexception at io pravega client stream impl eventstreamreaderimpl acquiresegmentsifneeded eventstreamreaderimpl java at io pravega client stream impl eventstreamreaderimpl updategroupstateifneeded eventstreamreaderimpl java at io pravega client stream impl eventstreamreaderimpl readnextevent eventstreamreaderimpl java at io pravega test integration readwritetest lambda startnewreader readwritetest java problem location reader group state manager suggestions for an improvement track and fix npe
0
25,595
4,417,701,335
IssuesEvent
2016-08-15 07:20:27
snowie2000/mactype
https://api.github.com/repos/snowie2000/mactype
closed
Chromium 40.0.2202.0 (32 and 64 bit) and Chrome 39.0.2171.36 m (32 and 64bit) are not affected by Mactype 1.12.1022 anymore
auto-migrated Priority-Medium Type-Defect
``` I used an older version of Chromium 64bit and the Mactype rendering was enabled. It was also enabled in Chrome, also an older version. For other reasons I decided to update Chromium 64bit to the newest version. After updating the text lost its Mactype rendering. I checked in the older version of Chrome and it was disabled there too. Now I've updated both Chromium and Chrome, both 32 and 64bit and I've reinstalled Mactype but the font rendering is gone. It's still present in my Windows install, I checked every browser I have and they all lack the rendering now. ``` Original issue reported on code.google.com by `mirelmir...@gmail.com` on 2 Nov 2014 at 10:51
1.0
Chromium 40.0.2202.0 (32 and 64 bit) and Chrome 39.0.2171.36 m (32 and 64bit) are not affected by Mactype 1.12.1022 anymore - ``` I used an older version of Chromium 64bit and the Mactype rendering was enabled. It was also enabled in Chrome, also an older version. For other reasons I decided to update Chromium 64bit to the newest version. After updating the text lost its Mactype rendering. I checked in the older version of Chrome and it was disabled there too. Now I've updated both Chromium and Chrome, both 32 and 64bit and I've reinstalled Mactype but the font rendering is gone. It's still present in my Windows install, I checked every browser I have and they all lack the rendering now. ``` Original issue reported on code.google.com by `mirelmir...@gmail.com` on 2 Nov 2014 at 10:51
defect
chromium and bit and chrome m and are not affected by mactype anymore i used an older version of chromium and the mactype rendering was enabled it was also enabled in chrome also an older version for other reasons i decided to update chromium to the newest version after updating the text lost its mactype rendering i checked in the older version of chrome and it was disabled there too now i ve updated both chromium and chrome both and and i ve reinstalled mactype but the font rendering is gone it s still present in my windows install i checked every browser i have and they all lack the rendering now original issue reported on code google com by mirelmir gmail com on nov at
1
339,751
30,471,709,296
IssuesEvent
2023-07-17 14:01:59
flutter/flutter
https://api.github.com/repos/flutter/flutter
opened
[google_maps_flutter] Android integration test crashing in CI: `glCreateShader failed`
a: tests platform-android p: maps package c: flake team-android fyi-ecosystem
I'm seeing flaky failures with the maps integration tests recently; example runs: - https://firebase.corp.google.com/project/flutter-cirrus/testlab/histories/bh.ccd6322d1559daa/matrices/7573375672107147278/executions/bs.4e441d9bcae26d4/issues - https://firebase.corp.google.com/project/flutter-cirrus/testlab/histories/bh.ccd6322d1559daa/matrices/8994558717084734910/executions/bs.b5c61781c34f7b94 The report is: ``` Test run failed to complete. Instrumentation run failed due to Process crashed. Fatal exception Fatal AndroidRuntime Exception detected. FATAL EXCEPTION: GL-Map Process: io.flutter.plugins.googlemapsexample, PID: 23876 ers: glCreateShader failed with return value 0and GL error code 0 GL info log: <no info log> Shader vert source: <unused source> at eor.b(:com.google.android.gms.policy_maps_core_dynamite@224312102@224312102065.485900589.485900589:3) at eor.a(:com.google.android.gms.policy_maps_core_dynamite@224312102@224312102065.485900589.485900589:1) at epn.<init>(:com.google.android.gms.policy_maps_core_dynamite@224312102@224312102065.485900589.485900589:2) at epo.<init>(:com.google.android.gms.policy_maps_core_dynamite@224312102@224312102065.485900589.485900589:1) at efd.e(:com.google.android.gms.policy_maps_core_dynamite@224312102@224312102065.485900589.485900589:17) at eom.run(:com.google.android.gms.policy_maps_core_dynamite@224312102@224312102065.485900589.485900589:35) ``` It's possible this is a regression on the maps side rather than a plugin issue? Filing for tracking and further investigation.
1.0
[google_maps_flutter] Android integration test crashing in CI: `glCreateShader failed` - I'm seeing flaky failures with the maps integration tests recently; example runs: - https://firebase.corp.google.com/project/flutter-cirrus/testlab/histories/bh.ccd6322d1559daa/matrices/7573375672107147278/executions/bs.4e441d9bcae26d4/issues - https://firebase.corp.google.com/project/flutter-cirrus/testlab/histories/bh.ccd6322d1559daa/matrices/8994558717084734910/executions/bs.b5c61781c34f7b94 The report is: ``` Test run failed to complete. Instrumentation run failed due to Process crashed. Fatal exception Fatal AndroidRuntime Exception detected. FATAL EXCEPTION: GL-Map Process: io.flutter.plugins.googlemapsexample, PID: 23876 ers: glCreateShader failed with return value 0and GL error code 0 GL info log: <no info log> Shader vert source: <unused source> at eor.b(:com.google.android.gms.policy_maps_core_dynamite@224312102@224312102065.485900589.485900589:3) at eor.a(:com.google.android.gms.policy_maps_core_dynamite@224312102@224312102065.485900589.485900589:1) at epn.<init>(:com.google.android.gms.policy_maps_core_dynamite@224312102@224312102065.485900589.485900589:2) at epo.<init>(:com.google.android.gms.policy_maps_core_dynamite@224312102@224312102065.485900589.485900589:1) at efd.e(:com.google.android.gms.policy_maps_core_dynamite@224312102@224312102065.485900589.485900589:17) at eom.run(:com.google.android.gms.policy_maps_core_dynamite@224312102@224312102065.485900589.485900589:35) ``` It's possible this is a regression on the maps side rather than a plugin issue? Filing for tracking and further investigation.
non_defect
android integration test crashing in ci glcreateshader failed i m seeing flaky failures with the maps integration tests recently example runs the report is test run failed to complete instrumentation run failed due to process crashed fatal exception fatal androidruntime exception detected fatal exception gl map process io flutter plugins googlemapsexample pid ers glcreateshader failed with return value gl error code gl info log shader vert source at eor b com google android gms policy maps core dynamite at eor a com google android gms policy maps core dynamite at epn com google android gms policy maps core dynamite at epo com google android gms policy maps core dynamite at efd e com google android gms policy maps core dynamite at eom run com google android gms policy maps core dynamite it s possible this is a regression on the maps side rather than a plugin issue filing for tracking and further investigation
0
441,596
12,727,699,537
IssuesEvent
2020-06-25 00:03:16
ahmedkaludi/accelerated-mobile-pages
https://api.github.com/repos/ahmedkaludi/accelerated-mobile-pages
closed
The below script is adding when the user using orbital theme and it is showing a validation error on the AMP Pages
NEXT UPDATE [Priority: HIGH] bug
The below script is adding when the user using orbital theme and it showing a validation error on the AMP Pages Script: <script>function orbital_expand_navbar() {var element = document.getElementById("search-navbar");if (element.classList.contains('expand-searchform')) {element.classList.remove("expand-searchform");return;}else {element.classList.add("expand-searchform");document.getElementById("search-input").focus();}}</script> Ref:https://secure.helpscout.net/conversation/1172829716/131356?folderId=3575684
1.0
The below script is adding when the user using orbital theme and it is showing a validation error on the AMP Pages - The below script is adding when the user using orbital theme and it showing a validation error on the AMP Pages Script: <script>function orbital_expand_navbar() {var element = document.getElementById("search-navbar");if (element.classList.contains('expand-searchform')) {element.classList.remove("expand-searchform");return;}else {element.classList.add("expand-searchform");document.getElementById("search-input").focus();}}</script> Ref:https://secure.helpscout.net/conversation/1172829716/131356?folderId=3575684
non_defect
the below script is adding when the user using orbital theme and it is showing a validation error on the amp pages the below script is adding when the user using orbital theme and it showing a validation error on the amp pages script function orbital expand navbar var element document getelementbyid search navbar if element classlist contains expand searchform element classlist remove expand searchform return else element classlist add expand searchform document getelementbyid search input focus ref
0
71,610
23,719,817,152
IssuesEvent
2022-08-30 14:32:11
matrix-org/synapse
https://api.github.com/repos/matrix-org/synapse
closed
`synapse_rate_limit_sleep_affected_hosts` registered twice
T-Defect X-Release-Blocker X-Regression A-Metrics
this is typically a sign that metrics will not be reported correctly: ``` 2022-08-26 11:49:42,388 - synapse.metrics - 117 - WARNING - sentinel - synapse_rate_limit_sleep_affected_hosts already registered, reregistering 2022-08-26 11:49:42,388 - synapse.metrics - 117 - WARNING - sentinel - synapse_rate_limit_reject_affected_hosts already registered, reregistering ``` These things were added in ~~#13534~~ https://github.com/matrix-org/synapse/pull/13541.
1.0
`synapse_rate_limit_sleep_affected_hosts` registered twice - this is typically a sign that metrics will not be reported correctly: ``` 2022-08-26 11:49:42,388 - synapse.metrics - 117 - WARNING - sentinel - synapse_rate_limit_sleep_affected_hosts already registered, reregistering 2022-08-26 11:49:42,388 - synapse.metrics - 117 - WARNING - sentinel - synapse_rate_limit_reject_affected_hosts already registered, reregistering ``` These things were added in ~~#13534~~ https://github.com/matrix-org/synapse/pull/13541.
defect
synapse rate limit sleep affected hosts registered twice this is typically a sign that metrics will not be reported correctly synapse metrics warning sentinel synapse rate limit sleep affected hosts already registered reregistering synapse metrics warning sentinel synapse rate limit reject affected hosts already registered reregistering these things were added in
1
763,436
26,756,353,983
IssuesEvent
2023-01-31 00:44:54
apcountryman/picolibrary
https://api.github.com/repos/apcountryman/picolibrary
closed
Add IPv4 facilities documentation
priority-normal status-awaiting_review type-feature
Add IPv4 facilities documentation. - [x] `::picolibrary::IPv4::Address` - [x] `::picolibrary::Output_Formatter<IPv4::Address>` - [x] `::picolibrary::Testing::Automated::random<IPv4::Address>()` - [x] `test-automated-picolibrary-ipv4-address`
1.0
Add IPv4 facilities documentation - Add IPv4 facilities documentation. - [x] `::picolibrary::IPv4::Address` - [x] `::picolibrary::Output_Formatter<IPv4::Address>` - [x] `::picolibrary::Testing::Automated::random<IPv4::Address>()` - [x] `test-automated-picolibrary-ipv4-address`
non_defect
add facilities documentation add facilities documentation picolibrary address picolibrary output formatter picolibrary testing automated random test automated picolibrary address
0
21,496
3,512,775,647
IssuesEvent
2016-01-11 05:04:37
Virtual-Labs/problem-solving-iiith
https://api.github.com/repos/Virtual-Labs/problem-solving-iiith
reopened
QA_Searching and Sorting_Indroduction
Category :Functionality Defect raised on: 26-11-2015 Developed by:IIIT Hyd Release Number Severity :S2 Status :Open Version Number :1.1
Defect Description: In the introduction page of "Searching and Sorting" experiment,when we click on the feedback link it is redirecting to an error page(404) instead feedback page should get opened. Actual Result: In the introduction page of "Searching and Sorting" experiment,when we click on the feedback link it is redirecting to an error page(404). Environment : OS: Windows 7, Ubuntu-16.04,Centos-6 Browsers: Firefox-42.0,Chrome-47.0,chromium-45.0 Bandwidth : 100Mbps Hardware Configuration:8GBRAM , Processor:i5 Test Step Link: https://github.com/Virtual-Labs/problem-solving-iiith/blob/master/test-cases/integration_test-cases/Searching%20and%20Sorting/Searching%20and%20Sorting_03_Introduction_p1.org ![404](https://cloud.githubusercontent.com/assets/14869397/11435860/dba4a612-9504-11e5-9c88-24d88b9d7622.png)
1.0
QA_Searching and Sorting_Indroduction - Defect Description: In the introduction page of "Searching and Sorting" experiment,when we click on the feedback link it is redirecting to an error page(404) instead feedback page should get opened. Actual Result: In the introduction page of "Searching and Sorting" experiment,when we click on the feedback link it is redirecting to an error page(404). Environment : OS: Windows 7, Ubuntu-16.04,Centos-6 Browsers: Firefox-42.0,Chrome-47.0,chromium-45.0 Bandwidth : 100Mbps Hardware Configuration:8GBRAM , Processor:i5 Test Step Link: https://github.com/Virtual-Labs/problem-solving-iiith/blob/master/test-cases/integration_test-cases/Searching%20and%20Sorting/Searching%20and%20Sorting_03_Introduction_p1.org ![404](https://cloud.githubusercontent.com/assets/14869397/11435860/dba4a612-9504-11e5-9c88-24d88b9d7622.png)
defect
qa searching and sorting indroduction defect description in the introduction page of searching and sorting experiment when we click on the feedback link it is redirecting to an error page instead feedback page should get opened actual result in the introduction page of searching and sorting experiment when we click on the feedback link it is redirecting to an error page environment os windows ubuntu centos browsers firefox chrome chromium bandwidth hardware configuration processor test step link
1
9,182
2,615,137,557
IssuesEvent
2015-03-01 06:11:08
chrsmith/reaver-wps
https://api.github.com/repos/chrsmith/reaver-wps
closed
feature request: os x support
auto-migrated Priority-Medium Type-Defect
``` Currently, only Linux platform is supported. How difficult would it be to port reaver-wps to OS X? ``` Original issue reported on code.google.com by `tester.t...@gmail.com` on 29 Dec 2011 at 9:35
1.0
feature request: os x support - ``` Currently, only Linux platform is supported. How difficult would it be to port reaver-wps to OS X? ``` Original issue reported on code.google.com by `tester.t...@gmail.com` on 29 Dec 2011 at 9:35
defect
feature request os x support currently only linux platform is supported how difficult would it be to port reaver wps to os x original issue reported on code google com by tester t gmail com on dec at
1
77,526
27,040,637,474
IssuesEvent
2023-02-13 04:52:19
vector-im/element-android
https://api.github.com/repos/vector-im/element-android
opened
URL previews do not work in E2EE rooms
T-Defect
### Steps to reproduce A URL preview option is present in the settings menu, but unlike the desktop clients, URL previews do not work in E2EE rooms, even when the option is enabled. ### Outcome See above for detail. ### Your phone model OnePlus 7 Pro ### Operating system version LineageOS 20.0 (Android 13) ### Application version and app store _No response_ ### Homeserver Synapse 1.74.0-1 ### Will you send logs? No ### Are you willing to provide a PR? No
1.0
URL previews do not work in E2EE rooms - ### Steps to reproduce A URL preview option is present in the settings menu, but unlike the desktop clients, URL previews do not work in E2EE rooms, even when the option is enabled. ### Outcome See above for detail. ### Your phone model OnePlus 7 Pro ### Operating system version LineageOS 20.0 (Android 13) ### Application version and app store _No response_ ### Homeserver Synapse 1.74.0-1 ### Will you send logs? No ### Are you willing to provide a PR? No
defect
url previews do not work in rooms steps to reproduce a url preview option is present in the settings menu but unlike the desktop clients url previews do not work in rooms even when the option is enabled outcome see above for detail your phone model oneplus pro operating system version lineageos android application version and app store no response homeserver synapse will you send logs no are you willing to provide a pr no
1
67,474
20,963,543,006
IssuesEvent
2022-03-28 02:39:16
openzfs/zfs
https://api.github.com/repos/openzfs/zfs
opened
Unable to compile on ARM64 with clang
Type: Defect
<!-- Please fill out the following template, which will help other contributors address your issue. --> <!-- Thank you for reporting an issue. *IMPORTANT* - Please check our issue tracker before opening a new issue. Additional valuable information can be found in the OpenZFS documentation and mailing list archives. Please fill in as much of the template as possible. --> ### System information <!-- add version after "|" character --> Type | Lenovo c630 w/Snapdragon sdm850 cortex.a75-cortex.a55 --- | --- Distribution Name | Arch Distribution Version | Kernel Version | 5.17.0 Architecture | ARM64 OpenZFS Version | Latest git + stable release <!-- Command to find OpenZFS version: zfs version Commands to find kernel version: uname -r # Linux freebsd-version -r # FreeBSD --> ### Describe the problem you're observing While building under GCC works fine, I am unable to with clang+lto or -lto ### Describe how to reproduce the problem Compile linux kernel with ZFS +clang or compile ZFS as a dkms module for a kernel compiled with clang ### Include any warning/errors/backtraces from the system logs <!-- *IMPORTANT* - Please mark logs and text output from terminal commands or else Github will not display them correctly. An example is provided below. 
Example: ``` this is an example how log text should be marked (wrap it with ```) ``` --> LTO [M] drivers/bluetooth/btrsi.lto.o LTO [M] drivers/target/target_core_user.lto.o GEN .version CHK include/generated/compile.h GEN .tmp_initcalls.lds GEN .tmp_symversions.lds LTO vmlinux.o ld.lld: error: couldn't allocate output register for constraint '{v0}' at line 2150099659 ld.lld: error: couldn't allocate output register for constraint '{v7}' at line 2150100117 ld.lld: error: couldn't allocate input reg for constraint '{v1}' at line 2150100832 ld.lld: error: couldn't allocate output register for constraint '{v0}' at line 2150101251 ld.lld: error: couldn't allocate output register for constraint '{v7}' at line 2150101709 ld.lld: error: couldn't allocate input reg for constraint '{v1}' at line 2150102484 ld.lld: error: couldn't allocate output register for constraint '{v0}' at line 2157696494 ld.lld: error: couldn't allocate input reg for constraint '{v0}' at line 2157700699 ld.lld: error: couldn't allocate output register for constraint '{v0}' at line 2157704887 ld.lld: error: couldn't allocate input reg for constraint '{v0}' at line 2157709206 ld.lld: error: couldn't allocate output register for constraint '{v0}' at line 2157714137 ld.lld: error: couldn't allocate output register for constraint '{v0}' at line 2157718730 ld.lld: error: couldn't allocate input reg for constraint '{v0}' at line 2157724053 ld.lld: error: couldn't allocate output register for constraint '{v0}' at line 2157728241 ld.lld: error: couldn't allocate output register for constraint '{v0}' at line 2157732948 ld.lld: error: couldn't allocate input reg for constraint '{v0}' at line 2157738385 ld.lld: error: couldn't allocate output register for constraint '{v16}' at line 2157781218 ld.lld: error: couldn't allocate output register for constraint '{v0}' at line 2157782350 ld.lld: error: couldn't allocate output register for constraint '{v4}' at line 2157786517 ld.lld: error: couldn't allocate output 
register for constraint '{v4}' at line 2157789877 ld.lld: error: too many errors emitted, stopping now (use -error-limit=0 to see all errors) make: *** [Makefile:1156: vmlinux] Error 1 ==> ERROR: A failure occurred in build(). Aborting... ``` ``` More detail is obtained when not using LTO, here is output without LTO and using with "make v=1" : CC fs/proc/inode.o fs/zfs/zcommon/zfs_fletcher_aarch64_neon.c:151:2: error: couldn't allocate output register for constraint '{v0}' NEON_INIT_LOOP(); ^ fs/zfs/zcommon/zfs_fletcher_aarch64_neon.c:75:6: note: expanded from macro 'NEON_INIT_LOOP' asm("eor %[ZERO].16b,%[ZERO].16b,%[ZERO].16b\n" \ ^ fs/zfs/zcommon/zfs_fletcher_aarch64_neon.c:154:3: error: couldn't allocate output register for constraint '{v7}' NEON_MAIN_LOOP(NEON_DONT_REVERSE); ^ fs/zfs/zcommon/zfs_fletcher_aarch64_neon.c:93:6: note: expanded from macro 'NEON_MAIN_LOOP' asm("ld1 { %[SRC].4s }, %[IP]\n" \ ^ fs/zfs/zcommon/zfs_fletcher_aarch64_neon.c:157:2: error: couldn't allocate input reg for constraint '{v1}' NEON_FINI_LOOP(); ^ fs/zfs/zcommon/zfs_fletcher_aarch64_neon.c:112:6: note: expanded from macro 'NEON_FINI_LOOP' asm("st1 { %[ACC0].4s },%[DST0]\n" \ ^ fs/zfs/zcommon/zfs_fletcher_aarch64_neon.c:190:2: error: couldn't allocate output register for constraint '{v0}' NEON_INIT_LOOP(); ^ fs/zfs/zcommon/zfs_fletcher_aarch64_neon.c:75:6: note: expanded from macro 'NEON_INIT_LOOP' asm("eor %[ZERO].16b,%[ZERO].16b,%[ZERO].16b\n" \ ^ fs/zfs/zcommon/zfs_fletcher_aarch64_neon.c:193:3: error: couldn't allocate output register for constraint '{v7}' NEON_MAIN_LOOP(NEON_DO_REVERSE); ^ fs/zfs/zcommon/zfs_fletcher_aarch64_neon.c:93:6: note: expanded from macro 'NEON_MAIN_LOOP' asm("ld1 { %[SRC].4s }, %[IP]\n" \ ^ fs/zfs/zcommon/zfs_fletcher_aarch64_neon.c:196:2: error: couldn't allocate input reg for constraint '{v1}' NEON_FINI_LOOP(); ^ fs/zfs/zcommon/zfs_fletcher_aarch64_neon.c:112:6: note: expanded from macro 'NEON_FINI_LOOP' asm("st1 { %[ACC0].4s },%[DST0]\n" \ ^ 6 
errors generated. make[3]: *** [scripts/Makefile.build:288: fs/zfs/zcommon/zfs_fletcher_aarch64_neon.o] Error 1 make[2]: *** [scripts/Makefile.build:550: fs/zfs/zcommon] Error 2 make[1]: *** [scripts/Makefile.build:550: fs/zfs] Error 2 make[1]: *** Waiting for unfinished jobs.... ``` ```
1.0
Unable to compile on ARM64 with clang - <!-- Please fill out the following template, which will help other contributors address your issue. --> <!-- Thank you for reporting an issue. *IMPORTANT* - Please check our issue tracker before opening a new issue. Additional valuable information can be found in the OpenZFS documentation and mailing list archives. Please fill in as much of the template as possible. --> ### System information <!-- add version after "|" character --> Type | Lenovo c630 w/Snapdragon sdm850 cortex.a75-cortex.a55 --- | --- Distribution Name | Arch Distribution Version | Kernel Version | 5.17.0 Architecture | ARM64 OpenZFS Version | Latest git + stable release <!-- Command to find OpenZFS version: zfs version Commands to find kernel version: uname -r # Linux freebsd-version -r # FreeBSD --> ### Describe the problem you're observing While building under GCC works fine, I am unable to with clang+lto or -lto ### Describe how to reproduce the problem Compile linux kernel with ZFS +clang or compile ZFS as a dkms module for a kernel compiled with clang ### Include any warning/errors/backtraces from the system logs <!-- *IMPORTANT* - Please mark logs and text output from terminal commands or else Github will not display them correctly. An example is provided below. 
Example: ``` this is an example how log text should be marked (wrap it with ```) ``` --> LTO [M] drivers/bluetooth/btrsi.lto.o LTO [M] drivers/target/target_core_user.lto.o GEN .version CHK include/generated/compile.h GEN .tmp_initcalls.lds GEN .tmp_symversions.lds LTO vmlinux.o ld.lld: error: couldn't allocate output register for constraint '{v0}' at line 2150099659 ld.lld: error: couldn't allocate output register for constraint '{v7}' at line 2150100117 ld.lld: error: couldn't allocate input reg for constraint '{v1}' at line 2150100832 ld.lld: error: couldn't allocate output register for constraint '{v0}' at line 2150101251 ld.lld: error: couldn't allocate output register for constraint '{v7}' at line 2150101709 ld.lld: error: couldn't allocate input reg for constraint '{v1}' at line 2150102484 ld.lld: error: couldn't allocate output register for constraint '{v0}' at line 2157696494 ld.lld: error: couldn't allocate input reg for constraint '{v0}' at line 2157700699 ld.lld: error: couldn't allocate output register for constraint '{v0}' at line 2157704887 ld.lld: error: couldn't allocate input reg for constraint '{v0}' at line 2157709206 ld.lld: error: couldn't allocate output register for constraint '{v0}' at line 2157714137 ld.lld: error: couldn't allocate output register for constraint '{v0}' at line 2157718730 ld.lld: error: couldn't allocate input reg for constraint '{v0}' at line 2157724053 ld.lld: error: couldn't allocate output register for constraint '{v0}' at line 2157728241 ld.lld: error: couldn't allocate output register for constraint '{v0}' at line 2157732948 ld.lld: error: couldn't allocate input reg for constraint '{v0}' at line 2157738385 ld.lld: error: couldn't allocate output register for constraint '{v16}' at line 2157781218 ld.lld: error: couldn't allocate output register for constraint '{v0}' at line 2157782350 ld.lld: error: couldn't allocate output register for constraint '{v4}' at line 2157786517 ld.lld: error: couldn't allocate output 
register for constraint '{v4}' at line 2157789877 ld.lld: error: too many errors emitted, stopping now (use -error-limit=0 to see all errors) make: *** [Makefile:1156: vmlinux] Error 1 ==> ERROR: A failure occurred in build(). Aborting... ``` ``` More detail is obtained when not using LTO, here is output without LTO and using with "make v=1" : CC fs/proc/inode.o fs/zfs/zcommon/zfs_fletcher_aarch64_neon.c:151:2: error: couldn't allocate output register for constraint '{v0}' NEON_INIT_LOOP(); ^ fs/zfs/zcommon/zfs_fletcher_aarch64_neon.c:75:6: note: expanded from macro 'NEON_INIT_LOOP' asm("eor %[ZERO].16b,%[ZERO].16b,%[ZERO].16b\n" \ ^ fs/zfs/zcommon/zfs_fletcher_aarch64_neon.c:154:3: error: couldn't allocate output register for constraint '{v7}' NEON_MAIN_LOOP(NEON_DONT_REVERSE); ^ fs/zfs/zcommon/zfs_fletcher_aarch64_neon.c:93:6: note: expanded from macro 'NEON_MAIN_LOOP' asm("ld1 { %[SRC].4s }, %[IP]\n" \ ^ fs/zfs/zcommon/zfs_fletcher_aarch64_neon.c:157:2: error: couldn't allocate input reg for constraint '{v1}' NEON_FINI_LOOP(); ^ fs/zfs/zcommon/zfs_fletcher_aarch64_neon.c:112:6: note: expanded from macro 'NEON_FINI_LOOP' asm("st1 { %[ACC0].4s },%[DST0]\n" \ ^ fs/zfs/zcommon/zfs_fletcher_aarch64_neon.c:190:2: error: couldn't allocate output register for constraint '{v0}' NEON_INIT_LOOP(); ^ fs/zfs/zcommon/zfs_fletcher_aarch64_neon.c:75:6: note: expanded from macro 'NEON_INIT_LOOP' asm("eor %[ZERO].16b,%[ZERO].16b,%[ZERO].16b\n" \ ^ fs/zfs/zcommon/zfs_fletcher_aarch64_neon.c:193:3: error: couldn't allocate output register for constraint '{v7}' NEON_MAIN_LOOP(NEON_DO_REVERSE); ^ fs/zfs/zcommon/zfs_fletcher_aarch64_neon.c:93:6: note: expanded from macro 'NEON_MAIN_LOOP' asm("ld1 { %[SRC].4s }, %[IP]\n" \ ^ fs/zfs/zcommon/zfs_fletcher_aarch64_neon.c:196:2: error: couldn't allocate input reg for constraint '{v1}' NEON_FINI_LOOP(); ^ fs/zfs/zcommon/zfs_fletcher_aarch64_neon.c:112:6: note: expanded from macro 'NEON_FINI_LOOP' asm("st1 { %[ACC0].4s },%[DST0]\n" \ ^ 6 
errors generated. make[3]: *** [scripts/Makefile.build:288: fs/zfs/zcommon/zfs_fletcher_aarch64_neon.o] Error 1 make[2]: *** [scripts/Makefile.build:550: fs/zfs/zcommon] Error 2 make[1]: *** [scripts/Makefile.build:550: fs/zfs] Error 2 make[1]: *** Waiting for unfinished jobs.... ``` ```
defect
unable to compile on with clang thank you for reporting an issue important please check our issue tracker before opening a new issue additional valuable information can be found in the openzfs documentation and mailing list archives please fill in as much of the template as possible system information type lenovo w snapdragon cortex cortex distribution name arch distribution version kernel version architecture openzfs version latest git stable release command to find openzfs version zfs version commands to find kernel version uname r linux freebsd version r freebsd describe the problem you re observing while building under gcc works fine i am unable to with clang lto or lto describe how to reproduce the problem compile linux kernel with zfs clang or compile zfs as a dkms module for a kernel compiled with clang include any warning errors backtraces from the system logs important please mark logs and text output from terminal commands or else github will not display them correctly an example is provided below example this is an example how log text should be marked wrap it with lto drivers bluetooth btrsi lto o lto drivers target target core user lto o gen version chk include generated compile h gen tmp initcalls lds gen tmp symversions lds lto vmlinux o ld lld error couldn t allocate output register for constraint at line ld lld error couldn t allocate output register for constraint at line ld lld error couldn t allocate input reg for constraint at line ld lld error couldn t allocate output register for constraint at line ld lld error couldn t allocate output register for constraint at line ld lld error couldn t allocate input reg for constraint at line ld lld error couldn t allocate output register for constraint at line ld lld error couldn t allocate input reg for constraint at line ld lld error couldn t allocate output register for constraint at line ld lld error couldn t allocate input reg for constraint at line ld lld error couldn t allocate output register for 
constraint at line ld lld error couldn t allocate output register for constraint at line ld lld error couldn t allocate input reg for constraint at line ld lld error couldn t allocate output register for constraint at line ld lld error couldn t allocate output register for constraint at line ld lld error couldn t allocate input reg for constraint at line ld lld error couldn t allocate output register for constraint at line ld lld error couldn t allocate output register for constraint at line ld lld error couldn t allocate output register for constraint at line ld lld error couldn t allocate output register for constraint at line ld lld error too many errors emitted stopping now use error limit to see all errors make error error a failure occurred in build aborting more detail is obtained when not using lto here is output without lto and using with make v cc fs proc inode o fs zfs zcommon zfs fletcher neon c error couldn t allocate output register for constraint neon init loop fs zfs zcommon zfs fletcher neon c note expanded from macro neon init loop asm eor n fs zfs zcommon zfs fletcher neon c error couldn t allocate output register for constraint neon main loop neon dont reverse fs zfs zcommon zfs fletcher neon c note expanded from macro neon main loop asm n fs zfs zcommon zfs fletcher neon c error couldn t allocate input reg for constraint neon fini loop fs zfs zcommon zfs fletcher neon c note expanded from macro neon fini loop asm n fs zfs zcommon zfs fletcher neon c error couldn t allocate output register for constraint neon init loop fs zfs zcommon zfs fletcher neon c note expanded from macro neon init loop asm eor n fs zfs zcommon zfs fletcher neon c error couldn t allocate output register for constraint neon main loop neon do reverse fs zfs zcommon zfs fletcher neon c note expanded from macro neon main loop asm n fs zfs zcommon zfs fletcher neon c error couldn t allocate input reg for constraint neon fini loop fs zfs zcommon zfs fletcher neon c note expanded 
from macro neon fini loop asm n errors generated make error make error make error make waiting for unfinished jobs
1
73,185
19,590,414,363
IssuesEvent
2022-01-05 12:18:51
envoyproxy/envoy
https://api.github.com/repos/envoyproxy/envoy
reopened
Newer release available org_llvm_releases_compiler_rt: llvmorg-13.0.0
area/build no stalebot dependencies
*WARNING* org_llvm_releases_compiler_rt has a newer release than llvmorg-12.0.1@<2021-07-09 05:26:48>:llvmorg-13.0.0@<2021-10-01 02:46:16>
1.0
Newer release available org_llvm_releases_compiler_rt: llvmorg-13.0.0 - *WARNING* org_llvm_releases_compiler_rt has a newer release than llvmorg-12.0.1@<2021-07-09 05:26:48>:llvmorg-13.0.0@<2021-10-01 02:46:16>
non_defect
newer release available org llvm releases compiler rt llvmorg warning org llvm releases compiler rt has a newer release than llvmorg llvmorg
0
818,720
30,701,353,365
IssuesEvent
2023-07-26 23:56:36
marbl/HG002-issues
https://api.github.com/repos/marbl/HG002-issues
closed
Issue: chr11_MATERNAL:39789935-39789936
priority shortread_homnonref element_evidence sbb_evidence correction_applied
### Assembly Region chr11_MATERNAL:39789935-39789936 ### Assembly Version v0.7 ### DeepVariant Call chr11_MATERNAL 39789935 . CA C 28.6 PASS . GT:GQ:DP:AD:VAF:PL 1/1:22:75:3,71:0.946667:28,23,0
1.0
Issue: chr11_MATERNAL:39789935-39789936 - ### Assembly Region chr11_MATERNAL:39789935-39789936 ### Assembly Version v0.7 ### DeepVariant Call chr11_MATERNAL 39789935 . CA C 28.6 PASS . GT:GQ:DP:AD:VAF:PL 1/1:22:75:3,71:0.946667:28,23,0
non_defect
issue maternal assembly region maternal assembly version deepvariant call maternal ca c pass gt gq dp ad vaf pl
0
240,837
7,806,443,597
IssuesEvent
2018-06-11 14:05:17
zephyrproject-rtos/zephyr
https://api.github.com/repos/zephyrproject-rtos/zephyr
closed
Rework ARC exception stack
area: ARC enhancement priority: high
Per @andrewboie in #8246: > Having 256 bytes reserved for a fatal exception stack does not sit well with me, if we go that route it will never get fixed and users will lose that RAM forever. We do have a double-fault stack for x86 but it's only 8 bytes and takes advantage of ancient i386 hardware task switching. We agreed to merge the change in 1.12 to fix the failure and close out the release, then rework in 1.13 to reclaim the RAM.
1.0
Rework ARC exception stack - Per @andrewboie in #8246: > Having 256 bytes reserved for a fatal exception stack does not sit well with me, if we go that route it will never get fixed and users will lose that RAM forever. We do have a double-fault stack for x86 but it's only 8 bytes and takes advantage of ancient i386 hardware task switching. We agreed to merge the change in 1.12 to fix the failure and close out the release, then rework in 1.13 to reclaim the RAM.
non_defect
rework arc exception stack per andrewboie in having bytes reserved for a fatal exception stack does not sit well with me if we go that route it will never get fixed and users will lose that ram forever we do have a double fault stack for but it s only bytes and takes advantage of ancient hardware task switching we agreed to merge the change in to fix the failure and close out the release then rework in to reclaim the ram
0
264,726
23,136,278,479
IssuesEvent
2022-07-28 14:34:11
elastic/kibana
https://api.github.com/repos/elastic/kibana
closed
Failing test: Chrome X-Pack UI Functional Tests - ML data_visualizer.x-pack/test/functional/apps/ml/data_visualizer/index_data_visualizer·ts - machine learning - data visualizer index based with farequote KQL saved search displays index details
blocker :ml failed-test skipped-test v8.4.0
A test failed on a tracked branch ``` Error: retry.tryForTime timeout: Error: retry.tryForTime timeout: Error: Expected total document count to be '34,415' (got '35,000') at Assertion.assert (node_modules/@kbn/expect/expect.js:100:11) at Assertion.eql (node_modules/@kbn/expect/expect.js:244:8) at /var/lib/buildkite-agent/builds/kb-n2-4-spot-c226823aee5eb1bd/elastic/kibana-on-merge/kibana/x-pack/test/functional/services/ml/data_visualizer_index_based.ts:29:29 at runMicrotasks (<anonymous>) at processTicksAndRejections (node:internal/process/task_queues:96:5) at runAttempt (test/common/services/retry/retry_for_success.ts:29:15) at retryForSuccess (test/common/services/retry/retry_for_success.ts:68:21) at RetryService.tryForTime (test/common/services/retry/retry.ts:22:12) at Object.assertTotalDocumentCount (x-pack/test/functional/services/ml/data_visualizer_index_based.ts:27:7) at /var/lib/buildkite-agent/builds/kb-n2-4-spot-c226823aee5eb1bd/elastic/kibana-on-merge/kibana/x-pack/test/functional/services/ml/data_visualizer_index_based.ts:40:9 at onFailure (test/common/services/retry/retry_for_success.ts:17:9) at retryForSuccess (test/common/services/retry/retry_for_success.ts:59:13) at RetryService.tryForTime (test/common/services/retry/retry.ts:22:12) at Object.assertTotalDocumentCount (x-pack/test/functional/services/ml/data_visualizer_index_based.ts:27:7) at /var/lib/buildkite-agent/builds/kb-n2-4-spot-c226823aee5eb1bd/elastic/kibana-on-merge/kibana/x-pack/test/functional/services/ml/data_visualizer_index_based.ts:40:9 at runAttempt (test/common/services/retry/retry_for_success.ts:29:15) at retryForSuccess (test/common/services/retry/retry_for_success.ts:68:21) at RetryService.tryForTime (test/common/services/retry/retry.ts:22:12) at Object.clickUseFullDataButton (x-pack/test/functional/services/ml/data_visualizer_index_based.ts:37:7) at Context.<anonymous> (x-pack/test/functional/apps/ml/data_visualizer/index_data_visualizer.ts:44:7) at onFailure 
(test/common/services/retry/retry_for_success.ts:17:9) at retryForSuccess (test/common/services/retry/retry_for_success.ts:59:13) at RetryService.tryForTime (test/common/services/retry/retry.ts:22:12) at Object.clickUseFullDataButton (x-pack/test/functional/services/ml/data_visualizer_index_based.ts:37:7) at Context.<anonymous> (x-pack/test/functional/apps/ml/data_visualizer/index_data_visualizer.ts:44:7) at Object.apply (node_modules/@kbn/test/target_node/functional_test_runner/lib/mocha/wrap_function.js:87:16) ``` First failure: [CI Build - main](https://buildkite.com/elastic/kibana-on-merge/builds/18781#0182343d-7335-4777-af5f-8aa64800eff3) <!-- kibanaCiData = {"failed-test":{"test.class":"Chrome X-Pack UI Functional Tests - ML data_visualizer.x-pack/test/functional/apps/ml/data_visualizer/index_data_visualizer·ts","test.name":"machine learning - data visualizer index based with farequote KQL saved search displays index details","test.failCount":1}} -->
2.0
Failing test: Chrome X-Pack UI Functional Tests - ML data_visualizer.x-pack/test/functional/apps/ml/data_visualizer/index_data_visualizer·ts - machine learning - data visualizer index based with farequote KQL saved search displays index details - A test failed on a tracked branch ``` Error: retry.tryForTime timeout: Error: retry.tryForTime timeout: Error: Expected total document count to be '34,415' (got '35,000') at Assertion.assert (node_modules/@kbn/expect/expect.js:100:11) at Assertion.eql (node_modules/@kbn/expect/expect.js:244:8) at /var/lib/buildkite-agent/builds/kb-n2-4-spot-c226823aee5eb1bd/elastic/kibana-on-merge/kibana/x-pack/test/functional/services/ml/data_visualizer_index_based.ts:29:29 at runMicrotasks (<anonymous>) at processTicksAndRejections (node:internal/process/task_queues:96:5) at runAttempt (test/common/services/retry/retry_for_success.ts:29:15) at retryForSuccess (test/common/services/retry/retry_for_success.ts:68:21) at RetryService.tryForTime (test/common/services/retry/retry.ts:22:12) at Object.assertTotalDocumentCount (x-pack/test/functional/services/ml/data_visualizer_index_based.ts:27:7) at /var/lib/buildkite-agent/builds/kb-n2-4-spot-c226823aee5eb1bd/elastic/kibana-on-merge/kibana/x-pack/test/functional/services/ml/data_visualizer_index_based.ts:40:9 at onFailure (test/common/services/retry/retry_for_success.ts:17:9) at retryForSuccess (test/common/services/retry/retry_for_success.ts:59:13) at RetryService.tryForTime (test/common/services/retry/retry.ts:22:12) at Object.assertTotalDocumentCount (x-pack/test/functional/services/ml/data_visualizer_index_based.ts:27:7) at /var/lib/buildkite-agent/builds/kb-n2-4-spot-c226823aee5eb1bd/elastic/kibana-on-merge/kibana/x-pack/test/functional/services/ml/data_visualizer_index_based.ts:40:9 at runAttempt (test/common/services/retry/retry_for_success.ts:29:15) at retryForSuccess (test/common/services/retry/retry_for_success.ts:68:21) at RetryService.tryForTime 
(test/common/services/retry/retry.ts:22:12) at Object.clickUseFullDataButton (x-pack/test/functional/services/ml/data_visualizer_index_based.ts:37:7) at Context.<anonymous> (x-pack/test/functional/apps/ml/data_visualizer/index_data_visualizer.ts:44:7) at onFailure (test/common/services/retry/retry_for_success.ts:17:9) at retryForSuccess (test/common/services/retry/retry_for_success.ts:59:13) at RetryService.tryForTime (test/common/services/retry/retry.ts:22:12) at Object.clickUseFullDataButton (x-pack/test/functional/services/ml/data_visualizer_index_based.ts:37:7) at Context.<anonymous> (x-pack/test/functional/apps/ml/data_visualizer/index_data_visualizer.ts:44:7) at Object.apply (node_modules/@kbn/test/target_node/functional_test_runner/lib/mocha/wrap_function.js:87:16) ``` First failure: [CI Build - main](https://buildkite.com/elastic/kibana-on-merge/builds/18781#0182343d-7335-4777-af5f-8aa64800eff3) <!-- kibanaCiData = {"failed-test":{"test.class":"Chrome X-Pack UI Functional Tests - ML data_visualizer.x-pack/test/functional/apps/ml/data_visualizer/index_data_visualizer·ts","test.name":"machine learning - data visualizer index based with farequote KQL saved search displays index details","test.failCount":1}} -->
non_defect
failing test chrome x pack ui functional tests ml data visualizer x pack test functional apps ml data visualizer index data visualizer·ts machine learning data visualizer index based with farequote kql saved search displays index details a test failed on a tracked branch error retry tryfortime timeout error retry tryfortime timeout error expected total document count to be got at assertion assert node modules kbn expect expect js at assertion eql node modules kbn expect expect js at var lib buildkite agent builds kb spot elastic kibana on merge kibana x pack test functional services ml data visualizer index based ts at runmicrotasks at processticksandrejections node internal process task queues at runattempt test common services retry retry for success ts at retryforsuccess test common services retry retry for success ts at retryservice tryfortime test common services retry retry ts at object asserttotaldocumentcount x pack test functional services ml data visualizer index based ts at var lib buildkite agent builds kb spot elastic kibana on merge kibana x pack test functional services ml data visualizer index based ts at onfailure test common services retry retry for success ts at retryforsuccess test common services retry retry for success ts at retryservice tryfortime test common services retry retry ts at object asserttotaldocumentcount x pack test functional services ml data visualizer index based ts at var lib buildkite agent builds kb spot elastic kibana on merge kibana x pack test functional services ml data visualizer index based ts at runattempt test common services retry retry for success ts at retryforsuccess test common services retry retry for success ts at retryservice tryfortime test common services retry retry ts at object clickusefulldatabutton x pack test functional services ml data visualizer index based ts at context x pack test functional apps ml data visualizer index data visualizer ts at onfailure test common services retry retry for success 
ts at retryforsuccess test common services retry retry for success ts at retryservice tryfortime test common services retry retry ts at object clickusefulldatabutton x pack test functional services ml data visualizer index based ts at context x pack test functional apps ml data visualizer index data visualizer ts at object apply node modules kbn test target node functional test runner lib mocha wrap function js first failure
0
140,439
18,901,489,236
IssuesEvent
2021-11-16 01:48:16
Srinivasanms16/EmployeeInformation
https://api.github.com/repos/Srinivasanms16/EmployeeInformation
closed
WS-2021-0039 (Low) detected in core-9.0.7.tgz - autoclosed
security vulnerability
## WS-2021-0039 - Low Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>core-9.0.7.tgz</b></p></summary> <p>Angular - the core framework</p> <p>Library home page: <a href="https://registry.npmjs.org/@angular/core/-/core-9.0.7.tgz">https://registry.npmjs.org/@angular/core/-/core-9.0.7.tgz</a></p> <p>Path to dependency file: EmployeeInformation/package.json</p> <p>Path to vulnerable library: /node_modules/@angular/core/package.json</p> <p> Dependency Hierarchy: - :x: **core-9.0.7.tgz** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/Srinivasanms16/EmployeeInformation/commit/ea3536ba0d959fc4f94bf484768cf26c70ed48cb">ea3536ba0d959fc4f94bf484768cf26c70ed48cb</a></p> <p>Found in base branch: <b>master</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/low_vul.png' width=19 height=20> Vulnerability Details</summary> <p> Cross-Site Scripting (XSS) vulnerability was found in @angular/core before 11.1.1. HTML doesn't specify any way to escape comment end text inside the comment. <p>Publish Date: 2021-01-26 <p>URL: <a href=https://github.com/angular/angular/commit/97ec6e48493bf9418971436d885470a66e71f045>WS-2021-0039</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>3.9</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: High - Privileges Required: High - User Interaction: Required - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: Low - Integrity Impact: Low - Availability Impact: Low </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. 
</p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://github.com/angular/angular/releases/tag/11.1.1">https://github.com/angular/angular/releases/tag/11.1.1</a></p> <p>Release Date: 2021-01-26</p> <p>Fix Resolution: @angular/core - 11.1.1</p> </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
True
WS-2021-0039 (Low) detected in core-9.0.7.tgz - autoclosed - ## WS-2021-0039 - Low Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>core-9.0.7.tgz</b></p></summary> <p>Angular - the core framework</p> <p>Library home page: <a href="https://registry.npmjs.org/@angular/core/-/core-9.0.7.tgz">https://registry.npmjs.org/@angular/core/-/core-9.0.7.tgz</a></p> <p>Path to dependency file: EmployeeInformation/package.json</p> <p>Path to vulnerable library: /node_modules/@angular/core/package.json</p> <p> Dependency Hierarchy: - :x: **core-9.0.7.tgz** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/Srinivasanms16/EmployeeInformation/commit/ea3536ba0d959fc4f94bf484768cf26c70ed48cb">ea3536ba0d959fc4f94bf484768cf26c70ed48cb</a></p> <p>Found in base branch: <b>master</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/low_vul.png' width=19 height=20> Vulnerability Details</summary> <p> Cross-Site Scripting (XSS) vulnerability was found in @angular/core before 11.1.1. HTML doesn't specify any way to escape comment end text inside the comment. <p>Publish Date: 2021-01-26 <p>URL: <a href=https://github.com/angular/angular/commit/97ec6e48493bf9418971436d885470a66e71f045>WS-2021-0039</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>3.9</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: High - Privileges Required: High - User Interaction: Required - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: Low - Integrity Impact: Low - Availability Impact: Low </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. 
</p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://github.com/angular/angular/releases/tag/11.1.1">https://github.com/angular/angular/releases/tag/11.1.1</a></p> <p>Release Date: 2021-01-26</p> <p>Fix Resolution: @angular/core - 11.1.1</p> </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
non_defect
ws low detected in core tgz autoclosed ws low severity vulnerability vulnerable library core tgz angular the core framework library home page a href path to dependency file employeeinformation package json path to vulnerable library node modules angular core package json dependency hierarchy x core tgz vulnerable library found in head commit a href found in base branch master vulnerability details cross site scripting xss vulnerability was found in angular core before html doesn t specify any way to escape comment end text inside the comment publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity high privileges required high user interaction required scope unchanged impact metrics confidentiality impact low integrity impact low availability impact low for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution angular core step up your open source security game with whitesource
0
167,041
14,099,843,000
IssuesEvent
2020-11-06 02:29:24
saltstack/salt
https://api.github.com/repos/saltstack/salt
opened
[DOCS] Is key_logfile meant to be a deprecated minion/master config option?
Documentation
**Description** Not sure what is going on here: https://github.com/saltstack/salt/blob/8afbb8e00556e0d491f6052308469e9f4a0c7672/salt/config/__init__.py#L1486-L1487 This option has a comment saying it was meant to be removed long ago, first introduced as a comment 7 years ago: - [Commit: 9a4ce0ea9d90f659d1a4036cb2c046171535292f](https://github.com/saltstack/salt/commit/9a4ce0ea9d90f659d1a4036cb2c046171535292f#diff-df84855281b5022d7d6868fbde71940627a923595f5f38d0be7c17ec7ee696cfR327) Either this is meant to be deprecated or it is meant to stay. **Suggested Fix** If it is meant to stay: - Remove the comment - Add the appropriate information needed to the minion/master confs that support it, and the associate .rst files If it is meant to go, create a deprecation path. - Create a deprecation path **Type of documentation** - minion/master configuration files - Salt Documentation (.rst) for master/minion configuration files **Location or format of documentation** Master: - https://github.com/saltstack/salt/blob/master/conf/master - https://github.com/saltstack/salt/blob/master/doc/ref/configuration/master.rst - Rendered HTML: https://docs.saltstack.com/en/latest/ref/configuration/master.html Minion: - https://github.com/saltstack/salt/blob/master/conf/minion - https://github.com/saltstack/salt/blob/master/doc/ref/configuration/minion.rst - Rendered HTML: https://docs.saltstack.com/en/latest/ref/configuration/minion.html **Additional context** Assigning Pedro since past-Pedro seemed to know something about this back in an earlier time of Salt land, when version numbers were much smaller and elsewhere on the periodic table. 
If it does go the deprecation path, it appears in several places as being used: ``` 20:24 $ grep -Inr "key_logfile" | grep -v "_build" salt/utils/parsers.py:2483: _logfile_config_setting_name_ = "key_logfile" salt/config/__init__.py:178: # key_logfile, pidfile: salt/config/__init__.py:709: "key_logfile": str, salt/config/__init__.py:1486: # XXX: Remove 'key_logfile' support in 2014.1.0 salt/config/__init__.py:1487: "key_logfile": os.path.join(salt.syspaths.LOGS_DIR, "key"), salt/config/__init__.py:2361: for config_key in ("log_file", "key_logfile", "syndic_log_file"): salt/config/__init__.py:3682: for config_key in ("log_file", "key_logfile"): salt/config/__init__.py:3886: for config_key in ("log_file", "key_logfile", "ssh_log_file"): doc/man/salt.7:16525:# key_logfile, pidfile, autosign_grains_dir: doc/man/salt.7:17655:#key_logfile: /var/log/salt/key doc/man/salt.7:18601:#key_logfile: /var/log/salt/key doc/man/salt.7:19312:#key_logfile: /var/log/salt/key conf/suse/master:40:# key_logfile, pidfile, autosign_grains_dir: conf/suse/master:1076:#key_logfile: /var/log/salt/key conf/minion:777:#key_logfile: /var/log/salt/key conf/master:42:# key_logfile, pidfile, autosign_grains_dir: conf/master:1172:#key_logfile: /var/log/salt/key conf/proxy:542:#key_logfile: /var/log/salt/key tests/unit/test_config.py:255: wfh.write("root_dir: /\n" "key_logfile: key\n") tests/unit/test_config.py:262: wfh.write("root_dir: /\n" "key_logfile: key\n") tests/unit/test_config.py:268: temp_config = "root_dir: /\n" "key_logfile: key\n" tests/unit/test_config.py:270: temp_config = "root_dir: c:\\\n" "key_logfile: key\n" tests/unit/test_config.py:282: self.assertEqual(config["key_logfile"], expect_path_join) tests/unit/test_config.py:284: self.assertNotEqual(config["key_logfile"], expect_sep_join) tests/unit/utils/test_parsers.py:569: if log_file_name == "key_logfile": tests/unit/utils/test_parsers.py:815: self.logfile_config_setting_name = "key_logfile" tests/unit/utils/test_parsers.py:823: 
self.key_logfile = "/tmp/key_logfile" tests/unit/utils/test_parsers.py:872: self.logfile_config_setting_name: "key_logfile", tests/unit/utils/test_parsers.py:924: if os.path.exists(self.key_logfile): tests/unit/utils/test_parsers.py:925: os.unlink(self.key_logfile) tests/integration/files/conf/master:18:key_logfile: key.log ```
1.0
[DOCS] Is key_logfile meant to be a deprecated minion/master config option? - **Description** Not sure what is going on here: https://github.com/saltstack/salt/blob/8afbb8e00556e0d491f6052308469e9f4a0c7672/salt/config/__init__.py#L1486-L1487 This option has a comment saying it was meant to be removed long ago, first introduced as a comment 7 years ago: - [Commit: 9a4ce0ea9d90f659d1a4036cb2c046171535292f](https://github.com/saltstack/salt/commit/9a4ce0ea9d90f659d1a4036cb2c046171535292f#diff-df84855281b5022d7d6868fbde71940627a923595f5f38d0be7c17ec7ee696cfR327) Either this is meant to be deprecated or it is meant to stay. **Suggested Fix** If it is meant to stay: - Remove the comment - Add the appropriate information needed to the minion/master confs that support it, and the associate .rst files If it is meant to go, create a deprecation path. - Create a deprecation path **Type of documentation** - minion/master configuration files - Salt Documentation (.rst) for master/minion configuration files **Location or format of documentation** Master: - https://github.com/saltstack/salt/blob/master/conf/master - https://github.com/saltstack/salt/blob/master/doc/ref/configuration/master.rst - Rendered HTML: https://docs.saltstack.com/en/latest/ref/configuration/master.html Minion: - https://github.com/saltstack/salt/blob/master/conf/minion - https://github.com/saltstack/salt/blob/master/doc/ref/configuration/minion.rst - Rendered HTML: https://docs.saltstack.com/en/latest/ref/configuration/minion.html **Additional context** Assigning Pedro since past-Pedro seemed to know something about this back in an earlier time of Salt land, when version numbers were much smaller and elsewhere on the periodic table. 
If it does go the deprecation path, it appears in several places as being used: ``` 20:24 $ grep -Inr "key_logfile" | grep -v "_build" salt/utils/parsers.py:2483: _logfile_config_setting_name_ = "key_logfile" salt/config/__init__.py:178: # key_logfile, pidfile: salt/config/__init__.py:709: "key_logfile": str, salt/config/__init__.py:1486: # XXX: Remove 'key_logfile' support in 2014.1.0 salt/config/__init__.py:1487: "key_logfile": os.path.join(salt.syspaths.LOGS_DIR, "key"), salt/config/__init__.py:2361: for config_key in ("log_file", "key_logfile", "syndic_log_file"): salt/config/__init__.py:3682: for config_key in ("log_file", "key_logfile"): salt/config/__init__.py:3886: for config_key in ("log_file", "key_logfile", "ssh_log_file"): doc/man/salt.7:16525:# key_logfile, pidfile, autosign_grains_dir: doc/man/salt.7:17655:#key_logfile: /var/log/salt/key doc/man/salt.7:18601:#key_logfile: /var/log/salt/key doc/man/salt.7:19312:#key_logfile: /var/log/salt/key conf/suse/master:40:# key_logfile, pidfile, autosign_grains_dir: conf/suse/master:1076:#key_logfile: /var/log/salt/key conf/minion:777:#key_logfile: /var/log/salt/key conf/master:42:# key_logfile, pidfile, autosign_grains_dir: conf/master:1172:#key_logfile: /var/log/salt/key conf/proxy:542:#key_logfile: /var/log/salt/key tests/unit/test_config.py:255: wfh.write("root_dir: /\n" "key_logfile: key\n") tests/unit/test_config.py:262: wfh.write("root_dir: /\n" "key_logfile: key\n") tests/unit/test_config.py:268: temp_config = "root_dir: /\n" "key_logfile: key\n" tests/unit/test_config.py:270: temp_config = "root_dir: c:\\\n" "key_logfile: key\n" tests/unit/test_config.py:282: self.assertEqual(config["key_logfile"], expect_path_join) tests/unit/test_config.py:284: self.assertNotEqual(config["key_logfile"], expect_sep_join) tests/unit/utils/test_parsers.py:569: if log_file_name == "key_logfile": tests/unit/utils/test_parsers.py:815: self.logfile_config_setting_name = "key_logfile" tests/unit/utils/test_parsers.py:823: 
self.key_logfile = "/tmp/key_logfile" tests/unit/utils/test_parsers.py:872: self.logfile_config_setting_name: "key_logfile", tests/unit/utils/test_parsers.py:924: if os.path.exists(self.key_logfile): tests/unit/utils/test_parsers.py:925: os.unlink(self.key_logfile) tests/integration/files/conf/master:18:key_logfile: key.log ```
non_defect
is key logfile meant to be a deprecated minion master config option description not sure what is going on here this option has a comment saying it was meant to be removed long ago first introduced as a comment years ago either this is meant to be deprecated or it is meant to stay suggested fix if it is meant to stay remove the comment add the appropriate information needed to the minion master confs that support it and the associate rst files if it is meant to go create a deprecation path create a deprecation path type of documentation minion master configuration files salt documentation rst for master minion configuration files location or format of documentation master rendered html minion rendered html additional context assigning pedro since past pedro seemed to know something about this back in an earlier time of salt land when version numbers were much smaller and elsewhere on the periodic table if it does go the deprecation path it appears in several places as being used grep inr key logfile grep v build salt utils parsers py logfile config setting name key logfile salt config init py key logfile pidfile salt config init py key logfile str salt config init py xxx remove key logfile support in salt config init py key logfile os path join salt syspaths logs dir key salt config init py for config key in log file key logfile syndic log file salt config init py for config key in log file key logfile salt config init py for config key in log file key logfile ssh log file doc man salt key logfile pidfile autosign grains dir doc man salt key logfile var log salt key doc man salt key logfile var log salt key doc man salt key logfile var log salt key conf suse master key logfile pidfile autosign grains dir conf suse master key logfile var log salt key conf minion key logfile var log salt key conf master key logfile pidfile autosign grains dir conf master key logfile var log salt key conf proxy key logfile var log salt key tests unit test config py wfh write root dir n 
key logfile key n tests unit test config py wfh write root dir n key logfile key n tests unit test config py temp config root dir n key logfile key n tests unit test config py temp config root dir c n key logfile key n tests unit test config py self assertequal config expect path join tests unit test config py self assertnotequal config expect sep join tests unit utils test parsers py if log file name key logfile tests unit utils test parsers py self logfile config setting name key logfile tests unit utils test parsers py self key logfile tmp key logfile tests unit utils test parsers py self logfile config setting name key logfile tests unit utils test parsers py if os path exists self key logfile tests unit utils test parsers py os unlink self key logfile tests integration files conf master key logfile key log
0
20,695
3,620,790,080
IssuesEvent
2016-02-08 21:23:58
kubernetes/kubernetes
https://api.github.com/repos/kubernetes/kubernetes
closed
Problem statement: interact with 3rd party API sources
area/extensibility kind/design priority/P1 team/CSI
For 1.2, we want to make API groups into a feature that enables people to build kubernetes APIs outside of our repository. The biggest missing piece of that is some mechanism by which kube-apiserver can find and report these resources for use by clients. There's at least three mechanisms we can use to solve this: * 3rd party storage (@brendandburns' system, partially implemented). * Webhooks (I started to write something in #11823) - it seems we at least need them for admission, but that's probably a separate topic. * 3rd party apiserver registration - something like this is probably necessary for this feature to be useful for OpenShift Nikhil was interested in driving this for 1.2, so I'm making this tracking issue. The first step is to write a proposal or a roadmap for this.
1.0
Problem statement: interact with 3rd party API sources - For 1.2, we want to make API groups into a feature that enables people to build kubernetes APIs outside of our repository. The biggest missing piece of that is some mechanism by which kube-apiserver can find and report these resources for use by clients. There's at least three mechanisms we can use to solve this: * 3rd party storage (@brendandburns' system, partially implemented). * Webhooks (I started to write something in #11823) - it seems we at least need them for admission, but that's probably a separate topic. * 3rd party apiserver registration - something like this is probably necessary for this feature to be useful for OpenShift Nikhil was interested in driving this for 1.2, so I'm making this tracking issue. The first step is to write a proposal or a roadmap for this.
non_defect
problem statement interact with party api sources for we want to make api groups into a feature that enables people to build kubernetes apis outside of our repository the biggest missing piece of that is some mechanism by which kube apiserver can find and report these resources for use by clients there s at least three mechanisms we can use to solve this party storage brendandburns system partially implemented webhooks i started to write something in it seems we at least need them for admission but that s probably a separate topic party apiserver registration something like this is probably necessary for this feature to be useful for openshift nikhil was interested in driving this for so i m making this tracking issue the first step is to write a proposal or a roadmap for this
0
1,523
2,603,966,957
IssuesEvent
2015-02-24 18:59:16
chrsmith/nishazi6
https://api.github.com/repos/chrsmith/nishazi6
opened
沈阳包皮里有水泡
auto-migrated Priority-Medium Type-Defect
``` 沈阳包皮里有水泡〓沈陽軍區政治部醫院性病〓TEL:024-3102330 8〓成立于1946年,68年專注于性傳播疾病的研究和治療。位于� ��陽市沈河區二緯路32號。是一所與新中國同建立共輝煌的歷� ��悠久、設備精良、技術權威、專家云集,是預防、保健、醫 療、科研康復為一體的綜合性醫院。是國家首批公立甲等部�� �醫院、全國首批醫療規范定點單位,是第四軍醫大學、東南� ��學等知名高等院校的教學醫院。曾被中國人民解放軍空軍后 勤部衛生部評為衛生工作先進單位,先后兩次榮立集體二等�� �。 ``` ----- Original issue reported on code.google.com by `q964105...@gmail.com` on 4 Jun 2014 at 7:07
1.0
沈阳包皮里有水泡 - ``` 沈阳包皮里有水泡〓沈陽軍區政治部醫院性病〓TEL:024-3102330 8〓成立于1946年,68年專注于性傳播疾病的研究和治療。位于� ��陽市沈河區二緯路32號。是一所與新中國同建立共輝煌的歷� ��悠久、設備精良、技術權威、專家云集,是預防、保健、醫 療、科研康復為一體的綜合性醫院。是國家首批公立甲等部�� �醫院、全國首批醫療規范定點單位,是第四軍醫大學、東南� ��學等知名高等院校的教學醫院。曾被中國人民解放軍空軍后 勤部衛生部評為衛生工作先進單位,先后兩次榮立集體二等�� �。 ``` ----- Original issue reported on code.google.com by `q964105...@gmail.com` on 4 Jun 2014 at 7:07
defect
沈阳包皮里有水泡 沈阳包皮里有水泡〓沈陽軍區政治部醫院性病〓tel: 〓 , 。位于� �� 。是一所與新中國同建立共輝煌的歷� ��悠久、設備精良、技術權威、專家云集,是預防、保健、醫 療、科研康復為一體的綜合性醫院。是國家首批公立甲等部�� �醫院、全國首批醫療規范定點單位,是第四軍醫大學、東南� ��學等知名高等院校的教學醫院。曾被中國人民解放軍空軍后 勤部衛生部評為衛生工作先進單位,先后兩次榮立集體二等�� �。 original issue reported on code google com by gmail com on jun at
1
64,714
18,843,279,996
IssuesEvent
2021-11-11 12:09:51
primefaces/primefaces
https://api.github.com/repos/primefaces/primefaces
closed
ChartJS: PieChart breaks when setting legend options
defect
**Describe the defect** In PF 11.0.0-RC2, setting a legend breaks the pie chart. I get the following console error: `Uncaught SyntaxError: Unexpected token ','` When I check the javascript, I see a leading comma before "legend": `"options":{"animation":{"animateRotate":false,"animateScale":false},"plugins":{,"legend":{"display":true,"position":"right","fullWidth":true,"reverse":false,"rtl":false,"` It's most probably a side-effect of #8028 resp #8029. **Reproducer** Use pie chart with some legend options: `Legend legend = new Legend(); legend.setPosition("right"); legend.setLabels(legendLabels); PieChartOptions options = new PieChartOptions(); options.setLegend(legend);` **Expected behavior** I see the legend on the right. **Actual behavior** The pie chart isn't displayed, and I see the above JS error. ```
1.0
ChartJS: PieChart breaks when setting legend options - **Describe the defect** In PF 11.0.0-RC2, setting a legend breaks the pie chart. I get the following console error: `Uncaught SyntaxError: Unexpected token ','` When I check the javascript, I see a leading comma before "legend": `"options":{"animation":{"animateRotate":false,"animateScale":false},"plugins":{,"legend":{"display":true,"position":"right","fullWidth":true,"reverse":false,"rtl":false,"` It's most probably a side-effect of #8028 resp #8029. **Reproducer** Use pie chart with some legend options: `Legend legend = new Legend(); legend.setPosition("right"); legend.setLabels(legendLabels); PieChartOptions options = new PieChartOptions(); options.setLegend(legend);` **Expected behavior** I see the legend on the right. **Actual behavior** The pie chart isn't displayed, and I see the above JS error. ```
defect
chartjs piechart breaks when setting legend options describe the defect in pf setting a legend breaks the pie chart i get the following console error uncaught syntaxerror unexpected token when i check the javascript i see a leading comma before legend options animation animaterotate false animatescale false plugins legend display true position right fullwidth true reverse false rtl false it s most probably a side effect of resp reproducer use pie chart with some legend options legend legend new legend legend setposition right legend setlabels legendlabels piechartoptions options new piechartoptions options setlegend legend expected behavior i see the legend on the right actual behavior the pie chart isn t displayed and i see the above js error
1
171,793
13,248,831,552
IssuesEvent
2020-08-19 19:41:24
kubeflow/pipelines
https://api.github.com/repos/kubeflow/pipelines
closed
The kubeflow-pipelines-tfx-python36 test is failing
area/testing kind/bug
``` + python3 -m pip install . --upgrade Processing /home/prow/go/src/github.com/kubeflow/pipelines/tfx ERROR: Command errored out with exit status 255: command: /usr/local/bin/python3 -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'/tmp/pip-req-build-vg98fja9/setup.py'"'"'; __file__='"'"'/tmp/pip-req-build-vg98fja9/setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' egg_info --egg-base /tmp/pip-pip-egg-info-lbr5xhyc cwd: /tmp/pip-req-build-vg98fja9/ Complete output (2 lines): Generating tfx/proto/example_gen_pb2.py... protoc is not installed nor found in ../src. Please compile it or install the binary package. ```
1.0
The kubeflow-pipelines-tfx-python36 test is failing - ``` + python3 -m pip install . --upgrade Processing /home/prow/go/src/github.com/kubeflow/pipelines/tfx ERROR: Command errored out with exit status 255: command: /usr/local/bin/python3 -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'/tmp/pip-req-build-vg98fja9/setup.py'"'"'; __file__='"'"'/tmp/pip-req-build-vg98fja9/setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' egg_info --egg-base /tmp/pip-pip-egg-info-lbr5xhyc cwd: /tmp/pip-req-build-vg98fja9/ Complete output (2 lines): Generating tfx/proto/example_gen_pb2.py... protoc is not installed nor found in ../src. Please compile it or install the binary package. ```
non_defect
the kubeflow pipelines tfx test is failing m pip install upgrade processing home prow go src github com kubeflow pipelines tfx error command errored out with exit status command usr local bin c import sys setuptools tokenize sys argv tmp pip req build setup py file tmp pip req build setup py f getattr tokenize open open file code f read replace r n n f close exec compile code file exec egg info egg base tmp pip pip egg info cwd tmp pip req build complete output lines generating tfx proto example gen py protoc is not installed nor found in src please compile it or install the binary package
0
30,303
6,096,976,218
IssuesEvent
2017-06-20 01:08:19
jsplumb/jsPlumb
https://api.github.com/repos/jsplumb/jsPlumb
closed
connection bug with group
defect
I have an issue with the coordinates of a connection when moving group. This bug can be seen [demo/groups](https://jsplumbtoolkit.com/community/demo/groups/index.html) ![jsplumb_group_bug](https://cloud.githubusercontent.com/assets/10080433/23354243/e07a62c0-fcd8-11e6-83cb-00ca38537d2d.gif)
1.0
connection bug with group - I have an issue with the coordinates of a connection when moving group. This bug can be seen [demo/groups](https://jsplumbtoolkit.com/community/demo/groups/index.html) ![jsplumb_group_bug](https://cloud.githubusercontent.com/assets/10080433/23354243/e07a62c0-fcd8-11e6-83cb-00ca38537d2d.gif)
defect
connection bug with group i have an issue with the coordinates of a connection when moving group this bug can be seen
1
80,265
30,203,427,656
IssuesEvent
2023-07-05 07:49:00
primefaces/primefaces
https://api.github.com/repos/primefaces/primefaces
opened
datePicker : updating @this blocks tabulation and popup doesn't work
:lady_beetle: defect :bangbang: needs-triage
### Describe the bug In the showcase, modify `datePickerJava8.xhtml` : add a blur event and update the datePicker like this : ``` <p:datePicker id="button" value="#{calendarJava8View.date7}" showIcon="true"> <p:ajax event="blur" update="@this" /> </p:datePicker> ``` Problems : - You can't go to the next field with the tabulation key - You can't enter a date with the dialog The reason I want to update the datePicker is that I'd like to enable "fast input". For example, the user enters 31122023 and it displays 31/12/2023 and I still want the button to appear. ### Reproducer _No response_ ### Expected behavior Updating the datePicker should not block the tabulation. The popup should work as usual. ### PrimeFaces edition None ### PrimeFaces version 12.0.0 ### Theme _No response_ ### JSF implementation Mojarra ### JSF version 2.3 ### Java version 11 ### Browser(s) _No response_
1.0
datePicker : updating @this blocks tabulation and popup doesn't work - ### Describe the bug In the showcase, modify `datePickerJava8.xhtml` : add a blur event and update the datePicker like this : ``` <p:datePicker id="button" value="#{calendarJava8View.date7}" showIcon="true"> <p:ajax event="blur" update="@this" /> </p:datePicker> ``` Problems : - You can't go to the next field with the tabulation key - You can't enter a date with the dialog The reason I want to update the datePicker is that I'd like to enable "fast input". For example, the user enters 31122023 and it displays 31/12/2023 and I still want the button to appear. ### Reproducer _No response_ ### Expected behavior Updating the datePicker should not block the tabulation. The popup should work as usual. ### PrimeFaces edition None ### PrimeFaces version 12.0.0 ### Theme _No response_ ### JSF implementation Mojarra ### JSF version 2.3 ### Java version 11 ### Browser(s) _No response_
defect
datepicker updating this blocks tabulation and popup doesn t work describe the bug in the showcase modify xhtml add a blur event and update the datepicker like this problems you can t go to the next field with the tabulation key you can t enter a date with the dialog the reason i want to update the datepicker is that i d like to enable fast input for example the user enters and it displays and i still want the button to appear reproducer no response expected behavior updating the datepicker should not block the tabulation the popup should work as usual primefaces edition none primefaces version theme no response jsf implementation mojarra jsf version java version browser s no response
1
347,779
31,274,097,984
IssuesEvent
2023-08-22 04:11:47
nasa/opera-sds-int
https://api.github.com/repos/nasa/opera-sds-int
closed
[New Feature]: Complete PGE smoke test automation
enhancement r2_rc10_testing
### Checked for duplicates Yes - I've already checked ### Alternatives considered Yes - and alternatives don't suffice ### Related problems _No response_ ### Describe the feature request We've been making significant progress towards being able to execute the PGE smoke test by a single script invocation: https://github.com/nasa/opera-sds-int/blob/main/r2_smoketest/run_r2_smoketest_validation.sh Two more changes need to be made: 1. @collinss-jpl will take out version in the .py and .sh scripts and check them into this repo this folder. Currently the script pulls down these scripts from artifactory. 2. I will add S3 polling to the above script that looks for the products periodically. The test run takes anywhere from 2-4 hrs so it would make for more efficient usage of everyone's time if we didn't have to manually poll to see if the runs were completed.
1.0
[New Feature]: Complete PGE smoke test automation - ### Checked for duplicates Yes - I've already checked ### Alternatives considered Yes - and alternatives don't suffice ### Related problems _No response_ ### Describe the feature request We've been making significant progress towards being able to execute the PGE smoke test by a single script invocation: https://github.com/nasa/opera-sds-int/blob/main/r2_smoketest/run_r2_smoketest_validation.sh Two more changes need to be made: 1. @collinss-jpl will take out version in the .py and .sh scripts and check them into this repo this folder. Currently the script pulls down these scripts from artifactory. 2. I will add S3 polling to the above script that looks for the products periodically. The test run takes anywhere from 2-4 hrs so it would make for more efficient usage of everyone's time if we didn't have to manually poll to see if the runs were completed.
non_defect
complete pge smoke test automation checked for duplicates yes i ve already checked alternatives considered yes and alternatives don t suffice related problems no response describe the feature request we ve been making significant progress towards being able to execute the pge smoke test by a single script invocation two more changes need to be made collinss jpl will take out version in the py and sh scripts and check them into this repo this folder currently the script pulls down these scripts from artifactory i will add polling to the above script that looks for the products periodically the test run takes anywhere from hrs so it would make for more efficient usage of everyone s time if we didn t have to manually poll to see if the runs were completed
0
67,438
20,961,612,153
IssuesEvent
2022-03-27 21:49:24
abedmaatalla/sipdroid
https://api.github.com/repos/abedmaatalla/sipdroid
closed
Call options - ask when sipdroid is connected
Priority-Medium Type-Defect auto-migrated
``` Enhancement : Under call options where you say how sipdroid has to behave with an outbound call, can you add an option where by sipdroid will only ask if it has an account registered ? For me this will avoid a bunch of funny faces when someone else uses.my phone and i dont have sip coverage Thanks ``` Original issue reported on code.google.com by `tazz...@gmail.com` on 9 May 2012 at 6:26 - Merged into: #901
1.0
Call options - ask when sipdroid is connected - ``` Enhancement : Under call options where you say how sipdroid has to behave with an outbound call, can you add an option where by sipdroid will only ask if it has an account registered ? For me this will avoid a bunch of funny faces when someone else uses.my phone and i dont have sip coverage Thanks ``` Original issue reported on code.google.com by `tazz...@gmail.com` on 9 May 2012 at 6:26 - Merged into: #901
defect
call options ask when sipdroid is connected enhancement under call options where you say how sipdroid has to behave with an outbound call can you add an option where by sipdroid will only ask if it has an account registered for me this will avoid a bunch of funny faces when someone else uses my phone and i dont have sip coverage thanks original issue reported on code google com by tazz gmail com on may at merged into
1
32,904
6,967,618,098
IssuesEvent
2017-12-10 11:35:12
Openki/Openki
https://api.github.com/repos/Openki/Openki
opened
Venue Create: can't save
Defect Urgent ☠
> User XYZ reports a problem on the page [""](https://openki.net/venue) > > Their report: > ------------------------------------------------------------------------ > > wenn ich versuche eine location hinzuzufügen dann kommt page not found > nachdem ich speichern gedrückt habe > ------------------------------------------------------------------------ > /end of report. > > > The running version is [Openki] v0.6.3 @ commit 786bd0db > It was deployed on Nov 27, 2017 10:00 PM, > and last restarted on Nov 27, 2017 10:00 PM. > Now it's Fri Dec 08 2017 19:40:35 GMT+0100 (CET). > User Agent: > Mozilla/5.0 (X11; Fedora; Linux x86_64; rv:57.0) Gecko/20100101 Firefox/57.0 > See you! bye.
1.0
Venue Create: can't save - > User XYZ reports a problem on the page [""](https://openki.net/venue) > > Their report: > ------------------------------------------------------------------------ > > wenn ich versuche eine location hinzuzufügen dann kommt page not found > nachdem ich speichern gedrückt habe > ------------------------------------------------------------------------ > /end of report. > > > The running version is [Openki] v0.6.3 @ commit 786bd0db > It was deployed on Nov 27, 2017 10:00 PM, > and last restarted on Nov 27, 2017 10:00 PM. > Now it's Fri Dec 08 2017 19:40:35 GMT+0100 (CET). > User Agent: > Mozilla/5.0 (X11; Fedora; Linux x86_64; rv:57.0) Gecko/20100101 Firefox/57.0 > See you! bye.
defect
venue create can t save user xyz reports a problem on the page their report wenn ich versuche eine location hinzuzufügen dann kommt page not found nachdem ich speichern gedrückt habe end of report the running version is commit it was deployed on nov pm and last restarted on nov pm now it s fri dec gmt cet user agent mozilla fedora linux rv gecko firefox see you bye
1
40,017
6,794,787,742
IssuesEvent
2017-11-01 13:36:43
hazelcast/hazelcast
https://api.github.com/repos/hazelcast/hazelcast
opened
ReplicatedMap API Doc reflects Member only behaviour
Team: Client Team: Core Team: Documentation
ReplicatedMap Api Doc contains member only behaviours/requirements that client cannot meet due to its technical difference. Most of the data providing methods like `Collection<V> values();` promise to return a lazy data set backed by the map where changes to the map might be reflected in the set. As all data exists on all members, such a behaviour easily provided. Whereas client provide simple proxy of a member and always return a clone of data. The documentation should be review with this in mind.
1.0
ReplicatedMap API Doc reflects Member only behaviour - ReplicatedMap Api Doc contains member only behaviours/requirements that client cannot meet due to its technical difference. Most of the data providing methods like `Collection<V> values();` promise to return a lazy data set backed by the map where changes to the map might be reflected in the set. As all data exists on all members, such a behaviour easily provided. Whereas client provide simple proxy of a member and always return a clone of data. The documentation should be review with this in mind.
non_defect
replicatedmap api doc reflects member only behaviour replicatedmap api doc contains member only behaviours requirements that client cannot meet due to its technical difference most of the data providing methods like collection values promise to return a lazy data set backed by the map where changes to the map might be reflected in the set as all data exists on all members such a behaviour easily provided whereas client provide simple proxy of a member and always return a clone of data the documentation should be review with this in mind
0
95,376
16,096,549,886
IssuesEvent
2021-04-27 01:13:27
Thezone1975/choosealicense.com
https://api.github.com/repos/Thezone1975/choosealicense.com
opened
CVE-2016-10541 (High) detected in shell-quote-0.0.1.tgz
security vulnerability
## CVE-2016-10541 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>shell-quote-0.0.1.tgz</b></p></summary> <p>quote and parse shell commands</p> <p>Library home page: <a href="https://registry.npmjs.org/shell-quote/-/shell-quote-0.0.1.tgz">https://registry.npmjs.org/shell-quote/-/shell-quote-0.0.1.tgz</a></p> <p>Path to dependency file: /choosealicense.com/assets/vendor/clipboard/package.json</p> <p>Path to vulnerable library: choosealicense.com/assets/vendor/clipboard/node_modules/shell-quote/package.json</p> <p> Dependency Hierarchy: - browserify-11.2.0.tgz (Root Library) - :x: **shell-quote-0.0.1.tgz** (Vulnerable Library) </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> The npm module "shell-quote" 1.6.0 and earlier cannot correctly escape ">" and "<" operator used for redirection in shell. Applications that depend on shell-quote may also be vulnerable. A malicious user could perform code injection. <p>Publish Date: 2018-05-31 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2016-10541>CVE-2016-10541</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.8</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: High - Integrity Impact: High - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. 
</p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2016-10541">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2016-10541</a></p> <p>Release Date: 2018-12-15</p> <p>Fix Resolution: 1.6.1</p> </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
True
CVE-2016-10541 (High) detected in shell-quote-0.0.1.tgz - ## CVE-2016-10541 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>shell-quote-0.0.1.tgz</b></p></summary> <p>quote and parse shell commands</p> <p>Library home page: <a href="https://registry.npmjs.org/shell-quote/-/shell-quote-0.0.1.tgz">https://registry.npmjs.org/shell-quote/-/shell-quote-0.0.1.tgz</a></p> <p>Path to dependency file: /choosealicense.com/assets/vendor/clipboard/package.json</p> <p>Path to vulnerable library: choosealicense.com/assets/vendor/clipboard/node_modules/shell-quote/package.json</p> <p> Dependency Hierarchy: - browserify-11.2.0.tgz (Root Library) - :x: **shell-quote-0.0.1.tgz** (Vulnerable Library) </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> The npm module "shell-quote" 1.6.0 and earlier cannot correctly escape ">" and "<" operator used for redirection in shell. Applications that depend on shell-quote may also be vulnerable. A malicious user could perform code injection. <p>Publish Date: 2018-05-31 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2016-10541>CVE-2016-10541</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>9.8</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: High - Integrity Impact: High - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. 
</p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2016-10541">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2016-10541</a></p> <p>Release Date: 2018-12-15</p> <p>Fix Resolution: 1.6.1</p> </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
non_defect
cve high detected in shell quote tgz cve high severity vulnerability vulnerable library shell quote tgz quote and parse shell commands library home page a href path to dependency file choosealicense com assets vendor clipboard package json path to vulnerable library choosealicense com assets vendor clipboard node modules shell quote package json dependency hierarchy browserify tgz root library x shell quote tgz vulnerable library vulnerability details the npm module shell quote and earlier cannot correctly escape and operator used for redirection in shell applications that depend on shell quote may also be vulnerable a malicious user could perform code injection publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with whitesource
0
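The CVE record above describes how the npm `shell-quote` package (1.6.0 and earlier) failed to escape the shell redirection operators `>` and `<`, enabling command injection. As a general illustration of what correct escaping looks like, using Python's stdlib `shlex.quote` rather than the npm package itself, note how a redirection operator forces the whole argument into single quotes:

```python
import shlex

# A benign-looking argument that actually contains a redirection operator.
malicious = "input.txt > /etc/passwd"

# Proper escaping wraps the whole token in single quotes, so the shell
# treats '>' as literal text instead of a redirection.
quoted = shlex.quote(malicious)
print(quoted)  # 'input.txt > /etc/passwd' (including the surrounding quotes)

# A token with no shell metacharacters is left untouched.
print(shlex.quote("input.txt"))  # input.txt
```

The vulnerable versions of `shell-quote` left `>` and `<` unquoted, so a crafted argument could redirect output or inject commands when the result was passed to a shell.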
79,411
28,182,524,508
IssuesEvent
2023-04-04 04:33:42
jccastillo0007/eFacturaT
https://api.github.com/repos/jccastillo0007/eFacturaT
opened
Desktop app on Windows 10 Pro runs one hour ahead and raises an error when trying to stamp invoices outside the allowed hours
defect
This is extremely strange. What is certain is that it is related to Saturday's daylight-saving change, which was not applied in Mexico but was elsewhere. The point is that the computer clock showed a certain time and the clock on the invoicing screen showed an hour ahead. I saw this in 4 cases today. I don't know whether it has to do with how we obtain the time or what is happening. In fact, from the moment it starts in the console, the log timestamp is already an hour ahead as well... from that you can already tell it won't work... What do you think it could be?
1.0
Desktop app on Windows 10 Pro runs one hour ahead and raises an error when trying to stamp invoices outside the allowed hours - This is extremely strange. What is certain is that it is related to Saturday's daylight-saving change, which was not applied in Mexico but was elsewhere. The point is that the computer clock showed a certain time and the clock on the invoicing screen showed an hour ahead. I saw this in 4 cases today. I don't know whether it has to do with how we obtain the time or what is happening. In fact, from the moment it starts in the console, the log timestamp is already an hour ahead as well... from that you can already tell it won't work... What do you think it could be?
defect
desktop app on windows pro runs one hour ahead and raises an error when trying to stamp invoices outside the allowed hours this is extremely strange what is certain is that it is related to saturday s daylight saving change which was not applied in mexico but was elsewhere the point is that the computer clock showed a certain time and the clock on the invoicing screen showed an hour ahead i saw this in cases today i don t know whether it has to do with how we obtain the time or what is happening in fact from the moment it starts in the console the log timestamp is already an hour ahead as well from that you can already tell it won t work what do you think it could be
1
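The record above describes a desktop client whose displayed clock ran one hour ahead after a daylight-saving change that Mexico no longer observes. A minimal sketch of that failure mode, using fixed UTC offsets from Python's stdlib (the -5/-6 offsets are assumptions for illustration, not taken from the application's code):

```python
from datetime import datetime, timedelta, timezone

utc_instant = datetime(2023, 4, 3, 18, 0, tzinfo=timezone.utc)

# Offset a stale DST rule would apply (assumed UTC-5), vs. the
# year-round UTC-6 that Mexico City kept after abolishing DST in 2022.
stale_rule = timezone(timedelta(hours=-5))
correct_rule = timezone(timedelta(hours=-6))

wall_stale = utc_instant.astimezone(stale_rule)
wall_correct = utc_instant.astimezone(correct_rule)

# Same instant in time, but the displayed wall clock differs by one hour --
# exactly the reported symptom (log timestamps an hour ahead from startup).
assert wall_stale == wall_correct  # aware datetimes compare by instant
print(wall_stale.hour - wall_correct.hour)  # 1
```

An application that converts UTC with a cached or outdated DST rule will show this one-hour skew even though the underlying instant is correct.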
79,721
28,503,818,540
IssuesEvent
2023-04-18 19:34:57
vector-im/element-desktop
https://api.github.com/repos/vector-im/element-desktop
opened
Element keeps showing unread messages, despite I read them all
T-Defect
### Steps to reproduce 1. Where are you starting? What can you see? I open the desktop client, read all threads and messages, but the threads icon is still with a 'button' on it, showing there are more messages to be read, despite I read them all. Three times :) 2. What do you click? Normal navigation inside the app in the threads. 3. More steps… Sometimes app restart helps. But this time it doesn't. Nor even clear cache and restart app does. ### Outcome #### What did you expect? Messages to be marked as read when they are read. #### What happened instead? Client shows me there are still things to read. ### Operating system MacOS ### Application version Version 1.11.29 (1.11.29) ### How did you install the app? Official site ### Homeserver 1.64.0; but I believe it was the same with 1.63.0 ### Will you send logs? No
1.0
Element keeps showing unread messages, despite I read them all - ### Steps to reproduce 1. Where are you starting? What can you see? I open the desktop client, read all threads and messages, but the threads icon is still with a 'button' on it, showing there are more messages to be read, despite I read them all. Three times :) 2. What do you click? Normal navigation inside the app in the threads. 3. More steps… Sometimes app restart helps. But this time it doesn't. Nor even clear cache and restart app does. ### Outcome #### What did you expect? Messages to be marked as read when they are read. #### What happened instead? Client shows me there are still things to read. ### Operating system MacOS ### Application version Version 1.11.29 (1.11.29) ### How did you install the app? Official site ### Homeserver 1.64.0; but I believe it was the same with 1.63.0 ### Will you send logs? No
defect
element keeps showing unread messages despite i read them all steps to reproduce where are you starting what can you see i open the desktop client read all threads and messages but the threads icon is still with a button on it showing there are more messages to be read despite i read them all three times what do you click normal navigation inside the app in the threads more steps… sometimes app restart helps but this time it doesn t nor even clear cache and restart app does outcome what did you expect messages to be marked as read when they are read what happened instead client shows me there are still things to read operating system macos application version version how did you install the app official site homeserver but i believe it was the same with will you send logs no
1
216,086
24,223,174,193
IssuesEvent
2022-09-26 12:34:29
LynRodWS/alcor
https://api.github.com/repos/LynRodWS/alcor
opened
CVE-2022-33681 (Medium) detected in pulsar-client-2.6.1.jar
security vulnerability
## CVE-2022-33681 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>pulsar-client-2.6.1.jar</b></p></summary> <p></p> <p>Library home page: <a href="https://github.com/apache/pulsar">https://github.com/apache/pulsar</a></p> <p>Path to dependency file: /services/data_plane_manager/pom.xml</p> <p>Path to vulnerable library: /canner/.m2/repository/org/apache/pulsar/pulsar-client/2.6.1/pulsar-client-2.6.1.jar</p> <p> Dependency Hierarchy: - :x: **pulsar-client-2.6.1.jar** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/LynRodWS/alcor/commit/3c4e28556738e020da91fd03c3aaa5d9a7c1cfed">3c4e28556738e020da91fd03c3aaa5d9a7c1cfed</a></p> <p>Found in base branch: <b>master</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary> <p> Delayed TLS hostname verification in the Pulsar Java Client and the Pulsar Proxy make each client vulnerable to a man in the middle attack. Connections from the Pulsar Java Client to the Pulsar Broker/Proxy and connections from the Pulsar Proxy to the Pulsar Broker are vulnerable. Authentication data is sent before verifying the server’s TLS certificate matches the hostname, which means authentication data could be exposed to an attacker. An attacker can only take advantage of this vulnerability by taking control of a machine 'between' the client and the server. The attacker must then actively manipulate traffic to perform the attack by providing the client with a cryptographically valid certificate for an unrelated host. Because the client sends authentication data before performing hostname verification, an attacker could gain access to the client’s authentication data. 
The client eventually closes the connection when it verifies the hostname and identifies the targeted hostname does not match a hostname on the certificate. Because the client eventually closes the connection, the value of the intercepted authentication data depends on the authentication method used by the client. Token based authentication and username/password authentication methods are vulnerable because the authentication data can be used to impersonate the client in a separate session. This issue affects Apache Pulsar Java Client versions 2.7.0 to 2.7.4; 2.8.0 to 2.8.3; 2.9.0 to 2.9.2; 2.10.0; 2.6.4 and earlier. <p>Publish Date: 2022-09-23 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-33681>CVE-2022-33681</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.5</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Local - Attack Complexity: Low - Privileges Required: None - User Interaction: Required - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://lists.apache.org/thread/fpo6x10trvn20hlk0dmnr5vlz5v4kl3d">https://lists.apache.org/thread/fpo6x10trvn20hlk0dmnr5vlz5v4kl3d</a></p> <p>Release Date: 2022-09-23</p> <p>Fix Resolution: 2.8.4</p> </p> </details> <p></p> *** <!-- REMEDIATE-OPEN-PR-START --> - [ ] Check this box to open an automated fix PR <!-- REMEDIATE-OPEN-PR-END -->
True
CVE-2022-33681 (Medium) detected in pulsar-client-2.6.1.jar - ## CVE-2022-33681 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>pulsar-client-2.6.1.jar</b></p></summary> <p></p> <p>Library home page: <a href="https://github.com/apache/pulsar">https://github.com/apache/pulsar</a></p> <p>Path to dependency file: /services/data_plane_manager/pom.xml</p> <p>Path to vulnerable library: /canner/.m2/repository/org/apache/pulsar/pulsar-client/2.6.1/pulsar-client-2.6.1.jar</p> <p> Dependency Hierarchy: - :x: **pulsar-client-2.6.1.jar** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/LynRodWS/alcor/commit/3c4e28556738e020da91fd03c3aaa5d9a7c1cfed">3c4e28556738e020da91fd03c3aaa5d9a7c1cfed</a></p> <p>Found in base branch: <b>master</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary> <p> Delayed TLS hostname verification in the Pulsar Java Client and the Pulsar Proxy make each client vulnerable to a man in the middle attack. Connections from the Pulsar Java Client to the Pulsar Broker/Proxy and connections from the Pulsar Proxy to the Pulsar Broker are vulnerable. Authentication data is sent before verifying the server’s TLS certificate matches the hostname, which means authentication data could be exposed to an attacker. An attacker can only take advantage of this vulnerability by taking control of a machine 'between' the client and the server. The attacker must then actively manipulate traffic to perform the attack by providing the client with a cryptographically valid certificate for an unrelated host. Because the client sends authentication data before performing hostname verification, an attacker could gain access to the client’s authentication data. 
The client eventually closes the connection when it verifies the hostname and identifies the targeted hostname does not match a hostname on the certificate. Because the client eventually closes the connection, the value of the intercepted authentication data depends on the authentication method used by the client. Token based authentication and username/password authentication methods are vulnerable because the authentication data can be used to impersonate the client in a separate session. This issue affects Apache Pulsar Java Client versions 2.7.0 to 2.7.4; 2.8.0 to 2.8.3; 2.9.0 to 2.9.2; 2.10.0; 2.6.4 and earlier. <p>Publish Date: 2022-09-23 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-33681>CVE-2022-33681</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.5</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Local - Attack Complexity: Low - Privileges Required: None - User Interaction: Required - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://lists.apache.org/thread/fpo6x10trvn20hlk0dmnr5vlz5v4kl3d">https://lists.apache.org/thread/fpo6x10trvn20hlk0dmnr5vlz5v4kl3d</a></p> <p>Release Date: 2022-09-23</p> <p>Fix Resolution: 2.8.4</p> </p> </details> <p></p> *** <!-- REMEDIATE-OPEN-PR-START --> - [ ] Check this box to open an automated fix PR <!-- REMEDIATE-OPEN-PR-END -->
non_defect
cve medium detected in pulsar client jar cve medium severity vulnerability vulnerable library pulsar client jar library home page a href path to dependency file services data plane manager pom xml path to vulnerable library canner repository org apache pulsar pulsar client pulsar client jar dependency hierarchy x pulsar client jar vulnerable library found in head commit a href found in base branch master vulnerability details delayed tls hostname verification in the pulsar java client and the pulsar proxy make each client vulnerable to a man in the middle attack connections from the pulsar java client to the pulsar broker proxy and connections from the pulsar proxy to the pulsar broker are vulnerable authentication data is sent before verifying the server’s tls certificate matches the hostname which means authentication data could be exposed to an attacker an attacker can only take advantage of this vulnerability by taking control of a machine between the client and the server the attacker must then actively manipulate traffic to perform the attack by providing the client with a cryptographically valid certificate for an unrelated host because the client sends authentication data before performing hostname verification an attacker could gain access to the client’s authentication data the client eventually closes the connection when it verifies the hostname and identifies the targeted hostname does not match a hostname on the certificate because the client eventually closes the connection the value of the intercepted authentication data depends on the authentication method used by the client token based authentication and username password authentication methods are vulnerable because the authentication data can be used to impersonate the client in a separate session this issue affects apache pulsar java client versions to to to and earlier publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity low 
privileges required none user interaction required scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution check this box to open an automated fix pr
0
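The Pulsar CVE above hinges on hostname verification happening only after authentication data has already been sent. As a general illustration (Python's stdlib `ssl`, not the Pulsar client), a properly configured client context enables hostname checking up front, so the handshake fails against a mismatched certificate before any application data, credentials included, is written:

```python
import ssl

# The default client context verifies both the certificate chain and that
# the certificate matches the hostname -- before any application data flows.
ctx = ssl.create_default_context()
print(ctx.check_hostname)                      # True
print(ctx.verify_mode == ssl.CERT_REQUIRED)    # True

# Disabling hostname checking reproduces the vulnerable pattern: the TLS
# session comes up against any valid certificate, and credentials sent
# immediately afterwards could reach a man-in-the-middle.
insecure = ssl.create_default_context()
insecure.check_hostname = False  # do NOT do this in production code
```

The fix in Pulsar 2.8.4 follows the same principle: verify the hostname during the handshake, not after authentication data has been exchanged.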
24,874
24,405,588,461
IssuesEvent
2022-10-05 07:40:37
lgarron/first-world
https://api.github.com/repos/lgarron/first-world
opened
Lightroom on macOS is flaky about accepting ⌘⇧E as a keyboard shortcut for `Export…`
Type: Bug Software: macOS Reason: Usability Software: Lightroom CC Level: Moderate Workaround: Functionality Compromise
This is for Adobe Lightroom (CC) from the Mac App Store (`com.adobe.mas.lightroomCC`). 1. Set the keyboard shortcut for `Export…` to `⌘⇧E` 2. Set the keyboard shortcut for `Edit in Photoshop…` to `⌘⇧P` (to avoid a conflict). 3. Open Lightroom, select an image, and try to use `⌘⇧E` The first few times I press `⌘⇧E`, it just straight-up doesn't work, and I get a small OS error beep as if it was an invalid shortcut. After a bit of using the app (e.g. going to "Edit" view and/or back for an image[^1]), it usually starts working, and then *keeps working until I switch to another app*, at which point the problem consistently returns when I switch back to Lightroom. I've never had an issue swapping out keyboard shortcuts like this. But older versions of Lightroom were also really finicky about shortcuts, so I'm inclined to guess this is a problem due to an assumption in Adobe's code. This wouldn't really surprise me, given what kinds of shenanigans apps can pull with menus: https://bugs.chromium.org/p/chromium/issues/detail?id=429371 [^1]: This issue can definitely happen both in the "Edit" view and also with photos selected in the "Photo Grid" view. It's not just restricted to a particular view.
True
Lightroom on macOS is flaky about accepting ⌘⇧E as a keyboard shortcut for `Export…` - This is for Adobe Lightroom (CC) from the Mac App Store (`com.adobe.mas.lightroomCC`). 1. Set the keyboard shortcut for `Export…` to `⌘⇧E` 2. Set the keyboard shortcut for `Edit in Photoshop…` to `⌘⇧P` (to avoid a conflict). 3. Open Lightroom, select an image, and try to use `⌘⇧E` The first few times I press `⌘⇧E`, it just straight-up doesn't work, and I get a small OS error beep as if it was an invalid shortcut. After a bit of using the app (e.g. going to "Edit" view and/or back for an image[^1]), it usually starts working, and then *keeps working until I switch to another app*, at which point the problem consistently returns when I switch back to Lightroom. I've never had an issue swapping out keyboard shortcuts like this. But older versions of Lightroom were also really finicky about shortcuts, so I'm inclined to guess this is a problem due to an assumption in Adobe's code. This wouldn't really surprise me, given what kinds of shenanigans apps can pull with menus: https://bugs.chromium.org/p/chromium/issues/detail?id=429371 [^1]: This issue can definitely happen both in the "Edit" view and also with photos selected in the "Photo Grid" view. It's not just restricted to a particular view.
non_defect
lightroom on macos is flaky about accepting ⌘⇧e as a keyboard shortcut for export… this is for adobe lightroom cc from the mac app store com adobe mas lightroomcc set the keyboard shortcut for export… to ⌘⇧e set the keyboard shortcut for edit in photoshop… to ⌘⇧p to avoid a conflict open lightroom select an image and try to use ⌘⇧e the first few times i press ⌘⇧e it just straight up doesn t work and i get a small os error beep as if it was an invalid shortcut after a bit of using the app e g going to edit view and or back for an image it usually starts working and then keeps working until i switch to another app at which point the problem consistently returns when i switch back to lightroom i ve never had an issue swapping out keyboard shortcuts like this but older versions of lightroom were also really finicky about shortcuts so i m inclined to guess this is a problem due to an assumption in adobe s code this wouldn t really surprise me given what kinds of shenanigans apps can pull with menus this issue can definitely happen both in the edit view and also with photos selected in the photo grid view it s not just restricted to a particular view
0
256,903
8,130,219,778
IssuesEvent
2018-08-17 17:41:50
Microsoft/PTVS
https://api.github.com/repos/Microsoft/PTVS
reopened
Code analysis features not updated after package is installed or removed
area:IntelliSense bug priority:P1
New Bottle Web Project Accept Virtual Env + Install requirements option app.py automatically opens Notice the squiggles on `import bottle` Wait for virtual env + package install to finish Result: Squiggles don't go away. No completions on `bottle.`. Close/reopen file doesn't help. You have to close/reopen solution. The reverse problem also exists. Now that you've reloaded the solution and get completions on `bottle`, uninstall the package. Result: Squiggles don't appear. Close/reopen file doesn't help. You have to close/reopen solution.
1.0
Code analysis features not updated after package is installed or removed - New Bottle Web Project Accept Virtual Env + Install requirements option app.py automatically opens Notice the squiggles on `import bottle` Wait for virtual env + package install to finish Result: Squiggles don't go away. No completions on `bottle.`. Close/reopen file doesn't help. You have to close/reopen solution. The reverse problem also exists. Now that you've reloaded the solution and get completions on `bottle`, uninstall the package. Result: Squiggles don't appear. Close/reopen file doesn't help. You have to close/reopen solution.
non_defect
code analysis features not updated after package is installed or removed new bottle web project accept virtual env install requirements option app py automatically opens notice the squiggles on import bottle wait for virtual env package install to finish result squiggles don t go away no completions on bottle close reopen file doesn t help you have to close reopen solution the reverse problem also exists now that you ve reloaded the solution and get completions on bottle uninstall the package result squiggles don t appear close reopen file doesn t help you have to close reopen solution
0
20,930
11,569,262,793
IssuesEvent
2020-02-20 17:14:10
yaxim-org/yaxim
https://api.github.com/repos/yaxim-org/yaxim
closed
Feature request: XEP-0313 (Message Archive Management)
feature request service xep
So I know this was [mentioned previously in a comment](https://github.com/pfleidi/yaxim/issues/73), but I figured I'd give it its own issue here. Carbons are great to have (yaxim is one of the few clients that support it) but clients will still miss messages sent while you are offline. Together with [XEP-0059](http://xmpp.org/extensions/xep-0059.html) this could be used for a sort of "infinite scrollback" similar to what Facebook, Hangouts, and the like have. (For a mobile client you probably do want this, else I believe you'd have to request all history from the server.)
1.0
Feature request: XEP-0313 (Message Archive Management) - So I know this was [mentioned previously in a comment](https://github.com/pfleidi/yaxim/issues/73), but I figured I'd give it its own issue here. Carbons are great to have (yaxim is one of the few clients that support it) but clients will still miss messages sent while you are offline. Together with [XEP-0059](http://xmpp.org/extensions/xep-0059.html) this could be used for a sort of "infinite scrollback" similar to what Facebook, Hangouts, and the like have. (For a mobile client you probably do want this, else I believe you'd have to request all history from the server.)
non_defect
feature request xep message archive management so i know this was but i figured i d give it its own issue here carbons are great to have yaxim is one of the few clients that support it but clients will still miss messages sent while you are offline together with this could be used for a sort of infinite scrollback similar to what facebook hangouts and the like have for a mobile client you probably do want this else i believe you d have to request all history from the server
0
38,260
5,170,680,905
IssuesEvent
2017-01-18 07:33:22
MajkiIT/polish-ads-filter
https://api.github.com/repos/MajkiIT/polish-ads-filter
closed
ckm.pl
reguły gotowe/testowanie reklama
`||nsaudience.pl^` `www.ckm.pl##.prenumerata-sklep-ckm` `||exs.pl/ads.js$script` `||adkontekst.pl^$image` `||vendimob.pl^` http://www.ckm.pl/m/dziewczyny/,18277,a.html
1.0
ckm.pl - `||nsaudience.pl^` `www.ckm.pl##.prenumerata-sklep-ckm` `||exs.pl/ads.js$script` `||adkontekst.pl^$image` `||vendimob.pl^` http://www.ckm.pl/m/dziewczyny/,18277,a.html
non_defect
ckm pl nsaudience pl exs pl ads js script adkontekst pl image vendimob pl
0
45,488
12,825,846,562
IssuesEvent
2020-07-06 15:34:50
carbon-design-system/ibm-security
https://api.github.com/repos/carbon-design-system/ibm-security
closed
Modal component frozen in Chrome when the Modal content is long
Defect severity 3
## Bug - <!-- Short description -->In Chrome browser, the Modal is frozen and not usable anymore when the Modal content is longer than around screen size. Issue seen in In this Modal component https://ibm-security.carbondesignsystem.com/?path=/story/components-modal--default **Expected behavior -** Modal should behave as expected regardless of the size of content in the Modal, so the styling should keep the same and Modal should be closable. <!-- Expected behavior --> **Actual behavior -** When put long content inside Modal, click something inside Modal, the Modal will not be responding anymore. Styling looks broken and Modal not closable. <!-- Actual behavior --> ### Steps for reproducing <!-- Please try to re-create the issue as a reduced test case using our CodeSandbox template - https://codesandbox.io/s/codesandbox-nmmqp --> [CodeSandbox](https://codesandbox.io/s/<!-- URL -->) 1. <!-- Step 1 --> Use Google Chrome and put a Modal with React on the page. 2. <!-- Step 2 --> Set long content (example: a list of more than 100 checkbox items) inside Modal. 3. <!-- Step 3 --> Click the bottom area or the last checkbox item inside the Modal, then the Modal will be frozen. 
### Screenshots #### <!-- Step 2 --> ![<!-- Screenshot of step 2 -->](<!-- Screenshot URL -->) <img width="924" alt="image" src="https://user-images.githubusercontent.com/24817328/83250280-c8dcb180-a19f-11ea-8d3a-fe2a8995ca5b.png"> #### <!-- Step 3 --> ![<!-- Screenshot of step 3 -->](<!-- Screenshot URL -->) <img width="924" alt="image" src="https://user-images.githubusercontent.com/24817328/83250464-10633d80-a1a0-11ea-9fc0-6a52891ec6e8.png"> ### Affected browsers [What's my browser?](http://www.whatsmyua.com) and [browserl.ist supported browsers](http://browserl.ist/?q=%3E+1%25%2C+not+IE+11) On Mac OSX 10 Chrome Version 81.0.4044.138 (Official Build) (64-bit) (not sure about other version, may affected) Seen on Development Environment (it is working fine with Firefox browser) ### Optional information **Version -** <!-- Version -->
1.0
Modal component frozen in Chrome when the Modal content is long - ## Bug - <!-- Short description -->In Chrome browser, the Modal is frozen and not usable anymore when the Modal content is longer than around screen size. Issue seen in In this Modal component https://ibm-security.carbondesignsystem.com/?path=/story/components-modal--default **Expected behavior -** Modal should behave as expected regardless of the size of content in the Modal, so the styling should keep the same and Modal should be closable. <!-- Expected behavior --> **Actual behavior -** When put long content inside Modal, click something inside Modal, the Modal will not be responding anymore. Styling looks broken and Modal not closable. <!-- Actual behavior --> ### Steps for reproducing <!-- Please try to re-create the issue as a reduced test case using our CodeSandbox template - https://codesandbox.io/s/codesandbox-nmmqp --> [CodeSandbox](https://codesandbox.io/s/<!-- URL -->) 1. <!-- Step 1 --> Use Google Chrome and put a Modal with React on the page. 2. <!-- Step 2 --> Set long content (example: a list of more than 100 checkbox items) inside Modal. 3. <!-- Step 3 --> Click the bottom area or the last checkbox item inside the Modal, then the Modal will be frozen. 
### Screenshots #### <!-- Step 2 --> ![<!-- Screenshot of step 2 -->](<!-- Screenshot URL -->) <img width="924" alt="image" src="https://user-images.githubusercontent.com/24817328/83250280-c8dcb180-a19f-11ea-8d3a-fe2a8995ca5b.png"> #### <!-- Step 3 --> ![<!-- Screenshot of step 3 -->](<!-- Screenshot URL -->) <img width="924" alt="image" src="https://user-images.githubusercontent.com/24817328/83250464-10633d80-a1a0-11ea-9fc0-6a52891ec6e8.png"> ### Affected browsers [What's my browser?](http://www.whatsmyua.com) and [browserl.ist supported browsers](http://browserl.ist/?q=%3E+1%25%2C+not+IE+11) On Mac OSX 10 Chrome Version 81.0.4044.138 (Official Build) (64-bit) (not sure about other version, may affected) Seen on Development Environment (it is working fine with Firefox browser) ### Optional information **Version -** <!-- Version -->
defect
modal component frozen in chrome when the modal content is long bug in chrome browser the modal is frozen and not usable anymore when the modal content is longer than around screen size issue seen in in this modal component expected behavior modal should behave as expected regardless of the size of content in the modal so the styling should keep the same and modal should be closable actual behavior when put long content inside modal click something inside modal the modal will not be responding anymore styling looks broken and modal not closable steps for reproducing url use google chrome and put a modal with react on the page set long content example a list of more than checkbox items inside modal click the bottom area or the last checkbox item inside the modal then the modal will be frozen screenshots img width alt image src img width alt image src affected browsers and on mac osx chrome version official build bit not sure about other version may affected seen on development environment it is working fine with firefox browser optional information version
1
199,458
15,039,453,627
IssuesEvent
2021-02-02 18:40:29
GlobantUy/STB-Bank
https://api.github.com/repos/GlobantUy/STB-Bank
closed
[Password] Password length validation
TestCase
**Preconditions:** ======================================================= Steps to execute | Expected result ------------ | ------------- 1: Access the loan simulator | 2: Go to the Login screen | 3: Enter a valid user | 4: Enter a password shorter than 8 characters| Below the "Password" field the user will be shown an error saying that the password entered has fewer characters than specified 5: Enter an 8-character password| The error message disappears 6: Delete a character from the password entered in Step 5| The error message reappears ======================================================= **Associated US:** #119
1.0
[Password] Password length validation - **Preconditions:** ======================================================= Steps to execute | Expected result ------------ | ------------- 1: Access the loan simulator | 2: Go to the Login screen | 3: Enter a valid user | 4: Enter a password shorter than 8 characters| Below the "Password" field the user will be shown an error saying that the password entered has fewer characters than specified 5: Enter an 8-character password| The error message disappears 6: Delete a character from the password entered in Step 5| The error message reappears ======================================================= **Associated US:** #119
non_defect
password length validation preconditions steps to execute expected result access the loan simulator go to the login screen enter a valid user enter a password shorter than characters below the password field the user will be shown an error saying that the password entered has fewer characters than specified enter an character password the error message disappears delete a character from the password entered in step the error message reappears associated us
0
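The test case above checks that a password shorter than 8 characters shows an error below the field and that the error clears at exactly 8 characters. A minimal sketch of the validation rule it exercises (function name and message text are illustrative, not taken from the project):

```python
MIN_PASSWORD_LENGTH = 8  # threshold taken from the test case above

def password_length_error(password: str):
    """Return an error message for too-short passwords, or None when valid."""
    if len(password) < MIN_PASSWORD_LENGTH:
        return f"Password must have at least {MIN_PASSWORD_LENGTH} characters"
    return None

# Steps 4-6 of the test case:
assert password_length_error("1234567") is not None   # 7 chars -> error shown
assert password_length_error("12345678") is None      # 8 chars -> error clears
assert password_length_error("1234567") is not None   # delete one -> error returns
```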
2,808
2,607,946,239
IssuesEvent
2015-02-26 00:33:23
chrsmithdemos/switchlist
https://api.github.com/repos/chrsmithdemos/switchlist
opened
SwitchList gives no feedback if all (or many) cars have no location set.
auto-migrated Priority-Medium Type-Defect
``` It's possible for a user to leave the location of all freight cars at "No Value", and be surprised when no cars show up in SwitchList. We ought to do a better job of showing this - either warning when some or all cars are in "No Value" ("Note: didn't assign 3 cars without a location), automatically placing new cars in "Workbench" and banning "No Value", etc. ``` ----- Original issue reported on code.google.com by `rwbowdi...@gmail.com` on 26 Apr 2013 at 4:35
1.0
SwitchList gives no feedback if all (or many) cars have no location set. - ``` It's possible for a user to leave the location of all freight cars at "No Value", and be surprised when no cars show up in SwitchList. We ought to do a better job of showing this - either warning when some or all cars are in "No Value" ("Note: didn't assign 3 cars without a location), automatically placing new cars in "Workbench" and banning "No Value", etc. ``` ----- Original issue reported on code.google.com by `rwbowdi...@gmail.com` on 26 Apr 2013 at 4:35
defect
switchlist gives no feedback if all or many cars have no location set it s possible for a user to leave the location of all freight cars at no value and be surprised when no cars show up in switchlist we ought to do a better job of showing this either warning when some or all cars are in no value note didn t assign cars without a location automatically placing new cars in workbench and banning no value etc original issue reported on code google com by rwbowdi gmail com on apr at
1
197,368
6,954,801,470
IssuesEvent
2017-12-07 03:35:40
dileep-kishore/microbial-ai
https://api.github.com/repos/dileep-kishore/microbial-ai
closed
Scaling rewards and balancing growth and exchange flux
Priority: High Status: Review Needed Type: Question
- [ ] Can the rewards be continuous? Or do they need to be discrete - [ ] Do the rewards need scaling? - [ ] Balance reward between microbial growth rate and minimal exchange fluxes
1.0
Scaling rewards and balancing growth and exchange flux - - [ ] Can the rewards be continuous? Or do they need to be discrete - [ ] Do the rewards need scaling? - [ ] Balance reward between microbial growth rate and minimal exchange fluxes
non_defect
scaling rewards and balancing growth and exchange flux can the rewards be continuous or do they need to be discrete do the rewards need scaling balance reward between microbial growth rate and minimal exchange fluxes
0
47,567
13,240,640,402
IssuesEvent
2020-08-19 06:46:44
benchabot/gitlabhq
https://api.github.com/repos/benchabot/gitlabhq
opened
CVE-2020-8161 (High) detected in rack-2.0.7.gem
security vulnerability
## CVE-2020-8161 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>rack-2.0.7.gem</b></p></summary> <p>Rack provides a minimal, modular and adaptable interface for developing web applications in Ruby. By wrapping HTTP requests and responses in the simplest way possible, it unifies and distills the API for web servers, web frameworks, and software in between (the so-called middleware) into a single method call. Also see https://rack.github.io/. </p> <p>Library home page: <a href="https://rubygems.org/gems/rack-2.0.7.gem">https://rubygems.org/gems/rack-2.0.7.gem</a></p> <p> Dependency Hierarchy: - thin-1.7.2.gem (Root Library) - :x: **rack-2.0.7.gem** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/benchabot/gitlabhq/commit/16cda14e4359f7411b389dcbf70ec966a6db2353">16cda14e4359f7411b389dcbf70ec966a6db2353</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> A directory traversal vulnerability exists in rack < 2.2.0 that allows an attacker to perform directory traversal in the Rack::Directory app that is bundled with Rack, which could result in information disclosure. 
<p>Publish Date: 2020-07-02 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-8161>CVE-2020-8161</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>8.6</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Changed - Impact Metrics: - Confidentiality Impact: High - Integrity Impact: None - Availability Impact: None </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://github.com/rack/rack/tree/2.2.0">https://github.com/rack/rack/tree/2.2.0</a></p> <p>Release Date: 2020-06-01</p> <p>Fix Resolution: 2.2.0,2.1.3</p> </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
True
CVE-2020-8161 (High) detected in rack-2.0.7.gem - ## CVE-2020-8161 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>rack-2.0.7.gem</b></p></summary> <p>Rack provides a minimal, modular and adaptable interface for developing web applications in Ruby. By wrapping HTTP requests and responses in the simplest way possible, it unifies and distills the API for web servers, web frameworks, and software in between (the so-called middleware) into a single method call. Also see https://rack.github.io/. </p> <p>Library home page: <a href="https://rubygems.org/gems/rack-2.0.7.gem">https://rubygems.org/gems/rack-2.0.7.gem</a></p> <p> Dependency Hierarchy: - thin-1.7.2.gem (Root Library) - :x: **rack-2.0.7.gem** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/benchabot/gitlabhq/commit/16cda14e4359f7411b389dcbf70ec966a6db2353">16cda14e4359f7411b389dcbf70ec966a6db2353</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> A directory traversal vulnerability exists in rack < 2.2.0 that allows an attacker to perform directory traversal in the Rack::Directory app that is bundled with Rack, which could result in information disclosure. 
<p>Publish Date: 2020-07-02 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-8161>CVE-2020-8161</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>8.6</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Changed - Impact Metrics: - Confidentiality Impact: High - Integrity Impact: None - Availability Impact: None </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://github.com/rack/rack/tree/2.2.0">https://github.com/rack/rack/tree/2.2.0</a></p> <p>Release Date: 2020-06-01</p> <p>Fix Resolution: 2.2.0,2.1.3</p> </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
non_defect
cve high detected in rack gem cve high severity vulnerability vulnerable library rack gem rack provides a minimal modular and adaptable interface for developing web applications in ruby by wrapping http requests and responses in the simplest way possible it unifies and distills the api for web servers web frameworks and software in between the so called middleware into a single method call also see library home page a href dependency hierarchy thin gem root library x rack gem vulnerable library found in head commit a href vulnerability details a directory traversal vulnerability exists in rack that allows an attacker perform directory traversal vulnerability in the rack directory app that is bundled with rack which could result in information disclosure publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope changed impact metrics confidentiality impact high integrity impact none availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with whitesource
0
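The fix metadata in the record above reduces to a version-ordering check: per the advisory, the fix landed in the 2.1.3 backport and in 2.2.0, so every rack release ordered before 2.1.3 is affected by CVE-2020-8161. A minimal sketch of that check, assuming plain dotted numeric version strings with no pre-release tags:

```python
def rack_is_vulnerable(version: str) -> bool:
    """Return True if a rack version predates the CVE-2020-8161 fixes.

    Assumes a plain "x.y.z" numeric version string. The (2, 1, 3)
    threshold comes from the advisory's fix resolution "2.2.0,2.1.3".
    """
    parts = tuple(int(p) for p in version.split("."))
    # Tuple comparison gives lexicographic version ordering, so any
    # release before the 2.1.3 backport (including all of 2.0.x) matches.
    return parts < (2, 1, 3)
```

For the rack-2.0.7.gem flagged in this record the check returns `True`; for the fixed releases 2.1.3 and 2.2.0 it returns `False`.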
50,616
13,187,625,673
IssuesEvent
2020-08-13 04:01:55
icecube-trac/tix3
https://api.github.com/repos/icecube-trac/tix3
closed
[dataio] filestager test broken by python 2.7.9 ssl change (Trac #1054)
Migrated from Trac combo core defect
On python 2.7.9, I get: `urlopen error [SSL: CERTIFICATE_VERIFY_FAILED]` This is a result of the ssl security features added in https://www.python.org/dev/peps/pep-0476, and our use of a self-signed cert for testing. This is a placeholder ticket to notify people that it's already being worked on. <details> <summary><em>Migrated from <a href="https://code.icecube.wisc.edu/ticket/1054">https://code.icecube.wisc.edu/ticket/1054</a>, reported by david.schultz and owned by david.schultz</em></summary> <p> ```json { "status": "closed", "changetime": "2019-01-11T21:45:42", "description": "On python 2.7.9, I get: `urlopen error [SSL: CERTIFICATE_VERIFY_FAILED]`\n\nThis is a result of the ssl security features added in https://www.python.org/dev/peps/pep-0476, and our use of a self-signed cert for testing.\n\nThis is a placeholder ticket to notify people that it's already being worked on.\n\n", "reporter": "david.schultz", "cc": "jvansanten", "resolution": "fixed", "_ts": "1547243142967943", "component": "combo core", "summary": "[dataio] filestager test broken by python 2.7.9 ssl change", "priority": "major", "keywords": "", "time": "2015-07-15T22:05:40", "milestone": "", "owner": "david.schultz", "type": "defect" } ``` </p> </details>
1.0
[dataio] filestager test broken by python 2.7.9 ssl change (Trac #1054) - On python 2.7.9, I get: `urlopen error [SSL: CERTIFICATE_VERIFY_FAILED]` This is a result of the ssl security features added in https://www.python.org/dev/peps/pep-0476, and our use of a self-signed cert for testing. This is a placeholder ticket to notify people that it's already being worked on. <details> <summary><em>Migrated from <a href="https://code.icecube.wisc.edu/ticket/1054">https://code.icecube.wisc.edu/ticket/1054</a>, reported by david.schultz and owned by david.schultz</em></summary> <p> ```json { "status": "closed", "changetime": "2019-01-11T21:45:42", "description": "On python 2.7.9, I get: `urlopen error [SSL: CERTIFICATE_VERIFY_FAILED]`\n\nThis is a result of the ssl security features added in https://www.python.org/dev/peps/pep-0476, and our use of a self-signed cert for testing.\n\nThis is a placeholder ticket to notify people that it's already being worked on.\n\n", "reporter": "david.schultz", "cc": "jvansanten", "resolution": "fixed", "_ts": "1547243142967943", "component": "combo core", "summary": "[dataio] filestager test broken by python 2.7.9 ssl change", "priority": "major", "keywords": "", "time": "2015-07-15T22:05:40", "milestone": "", "owner": "david.schultz", "type": "defect" } ``` </p> </details>
defect
filestager test broken by python ssl change trac on python i get urlopen error this is a result of the ssl security features added in and our use of a self signed cert for testing this is a placeholder ticket to notify people that it s already being worked on migrated from json status closed changetime description on python i get urlopen error n nthis is a result of the ssl security features added in and our use of a self signed cert for testing n nthis is a placeholder ticket to notify people that it s already being worked on n n reporter david schultz cc jvansanten resolution fixed ts component combo core summary filestager test broken by python ssl change priority major keywords time milestone owner david schultz type defect
1
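The PEP 476 change behind this ticket made Python verify HTTPS certificates by default, so test code that contacts a server with a self-signed certificate now fails with `CERTIFICATE_VERIFY_FAILED`. One common test-only workaround (never appropriate for production code) is to hand `urlopen` an explicitly unverified SSL context; the URL below is a placeholder, not from the ticket:

```python
import ssl
import urllib.request  # urllib2 on Python 2

# Build a context that skips certificate verification. This is only
# acceptable in test fixtures that talk to a known self-signed cert.
ctx = ssl.create_default_context()
ctx.check_hostname = False
ctx.verify_mode = ssl.CERT_NONE

# Example call (placeholder URL):
# urllib.request.urlopen("https://localhost:8443/", context=ctx)
```

An alternative for tests is to load the self-signed certificate itself via `ssl.create_default_context(cafile=...)`, which keeps verification on.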
52,119
13,211,390,475
IssuesEvent
2020-08-15 22:47:54
icecube-trac/tix4
https://api.github.com/repos/icecube-trac/tix4
opened
[BadDomList] Needs to take drop time of dropped doms into account (Trac #1710)
Incomplete Migration Migrated from Trac combo reconstruction defect
<details> <summary><em>Migrated from <a href="https://code.icecube.wisc.edu/projects/icecube/ticket/1710">https://code.icecube.wisc.edu/projects/icecube/ticket/1710</a>, reported by joertlin and owned by joertlin</em></summary> <p> ```json { "status": "closed", "changetime": "2019-02-13T14:12:58", "_ts": "1550067178841456", "description": "If getting dropped doms from I3Live, it is still needed to verify the drop time. Otherwise, also dropped doms that are dropped after the (good) run end time are included.", "reporter": "joertlin", "cc": "", "resolution": "fixed", "time": "2016-05-17T19:40:33", "component": "combo reconstruction", "summary": "[BadDomList] Needs to take drop time of dropped doms into account", "priority": "blocker", "keywords": "", "milestone": "", "owner": "joertlin", "type": "defect" } ``` </p> </details>
1.0
[BadDomList] Needs to take drop time of dropped doms into account (Trac #1710) - <details> <summary><em>Migrated from <a href="https://code.icecube.wisc.edu/projects/icecube/ticket/1710">https://code.icecube.wisc.edu/projects/icecube/ticket/1710</a>, reported by joertlinand owned by joertlin</em></summary> <p> ```json { "status": "closed", "changetime": "2019-02-13T14:12:58", "_ts": "1550067178841456", "description": "If getting dropped doms from I3Live, it is still needed to verify the drop time. Otherwise, also dropped doms that are dropped after the (good) run end time are included.", "reporter": "joertlin", "cc": "", "resolution": "fixed", "time": "2016-05-17T19:40:33", "component": "combo reconstruction", "summary": "[BadDomList] Needs to take drop time of dropped doms into account", "priority": "blocker", "keywords": "", "milestone": "", "owner": "joertlin", "type": "defect" } ``` </p> </details>
defect
needs to take drop time of dropped doms into account trac migrated from json status closed changetime ts description if getting dropped doms from it is still needed to verify the drop time otherwise also dropped doms that are dropped after the good run end time are included reporter joertlin cc resolution fixed time component combo reconstruction summary needs to take drop time of dropped doms into account priority blocker keywords milestone owner joertlin type defect
1
72,079
23,924,679,522
IssuesEvent
2022-09-09 20:51:42
department-of-veterans-affairs/va.gov-cms
https://api.github.com/repos/department-of-veterans-affairs/va.gov-cms
closed
FE SPIKE: Resources & Support page shows incorrect breadcrumbs
Defect VA.gov frontend Drupal engineering ⭐️ Public Websites
## Describe the defect https://dsva.slack.com/archives/CDHBKAL9W/p1660661901076319 The breadcrumbs at the top of this R&S article are showing incorrectly. It should be Home > Resources and support > Choosing a decision review option. But instead it's added Decision reviews and appeals after Home. I think this is because we have this article listed under "More resources" on the decision reviews landing page. I can't figure out if there's a way in Drupal to keep the article linked on that landing page but not have that extra step show up in the breadcrumbs. - [prod.cms.va.gov/resources/choosing-a-decision-review-option](http://prod.cms.va.gov/resources/choosing-a-decision-review-option) - https://www.va.gov/resources/choosing-a-decision-review-option/ Randi wasn't expecting "decision reviews and appeals" to show up in the breadcrumbs. It should be Home > Resources and support > Choosing a decision review. R&S shouldn't be a category within decision reviews and appeals Randi suspects that it's showing up that way because SWS has it as a link on the decision reviews landing page. But maybe it's something else? ## Additional context This work may blur the lines between Drupal & Front-end work. We know Drupal generates some breadcrumbs, from menu or from path. We believe some page breadcrumbs are custom in the front-end. This ticket should take into account both backend/frontend breadcrumb handling. 
## AC / Expected behavior - [ ] Document any findings about how breadcrumbs work in comments - [ ] Understand why this is happening - [ ] Determine how widespread this particular issue is - [ ] Meet with team/Dave to discuss next steps in context of all the breadcrumb woes ## Screenshots <img width="777" alt="Screen Shot 2022-08-16 at 10 58 33 AM" src="https://user-images.githubusercontent.com/85581471/185191277-456b66c1-a997-4820-ba02-20813f703627.png"> <img width="665" alt="Screen Shot 2022-08-16 at 11 22 15 AM" src="https://user-images.githubusercontent.com/85581471/185191439-bcb14973-26bc-4c15-a580-4e8d70f609bd.png"> ## Additional context Resources and Support is currently disabled as a top-level menu item, with nothing listed beneath it. ## Labels (You can delete this section once it's complete) - [x] Issue type (red) (defaults to "Defect") - [ ] CMS subsystem (green) - [ ] CMS practice area (blue) - [x] CMS workstream (orange) (not needed for bug tickets) - [ ] CMS-supported product (black) ### CMS Team Please check the team(s) that will do this work. - [ ] `Program` - [ ] `Platform CMS Team` - [ ] `Sitewide Crew` - [ ] `⭐️ Sitewide CMS` - [X] `⭐️ Public Websites` - [ ] `⭐️ Facilities` - [ ] `⭐️ User support`
1.0
FE SPIKE: Resources & Support page shows incorrect breadcrumbs - ## Describe the defect https://dsva.slack.com/archives/CDHBKAL9W/p1660661901076319 The breadcrumbs at the top of this R&S article are showing incorrectly. It should be Home > Resources and support > Choosing a decision review option. But instead it's added Decision reviews and appeals after Home. I think this is because we have this article listed under "More resources" on the decision reviews landing page. I can't figure out if there's a way in Drupal to keep the article linked on that landing page but not have that extra step show up in the breadcrumbs. - [prod.cms.va.gov/resources/choosing-a-decision-review-option](http://prod.cms.va.gov/resources/choosing-a-decision-review-option) - https://www.va.gov/resources/choosing-a-decision-review-option/ Randi wasn't expecting "decision reviews and appeals" to show up in the breadcrumbs. It should be Home > Resources and support > Choosing a decision review. R&S shouldn't be a category within decision reviews and appeals Randi suspects that it's showing up that way because SWS has it as a link on the decision reviews landing page. But maybe it's something else? ## Additional context This work may blur the lines between Drupal & Front-end work. We know Drupal generates some breadcrumbs, from menu or from path. We believe some page breadcrumbs are custom in the front-end. This ticket should take into account both backend/frontend breadcrumb handling. 
## AC / Expected behavior - [ ] Document any findings about how breadcrumbs work in comments - [ ] Understand why this is happening - [ ] Determine how widespread this particular issue is - [ ] Meet with team/Dave to discuss next steps in context of all the breadcrumb woes ## Screenshots <img width="777" alt="Screen Shot 2022-08-16 at 10 58 33 AM" src="https://user-images.githubusercontent.com/85581471/185191277-456b66c1-a997-4820-ba02-20813f703627.png"> <img width="665" alt="Screen Shot 2022-08-16 at 11 22 15 AM" src="https://user-images.githubusercontent.com/85581471/185191439-bcb14973-26bc-4c15-a580-4e8d70f609bd.png"> ## Additional context Resources and Support is currently disabled as a top-level menu item, with nothing listed beneath it. ## Labels (You can delete this section once it's complete) - [x] Issue type (red) (defaults to "Defect") - [ ] CMS subsystem (green) - [ ] CMS practice area (blue) - [x] CMS workstream (orange) (not needed for bug tickets) - [ ] CMS-supported product (black) ### CMS Team Please check the team(s) that will do this work. - [ ] `Program` - [ ] `Platform CMS Team` - [ ] `Sitewide Crew` - [ ] `⭐️ Sitewide CMS` - [X] `⭐️ Public Websites` - [ ] `⭐️ Facilities` - [ ] `⭐️ User support`
defect
fe spike resources support page shows incorrect breadcrumbs describe the defect the breadcrumbs at the top of this r s article are showing incorrectly it should be home resources and support choosing a decision review option but instead it s added decision reviews and appeals after home i think this is because we have this article listed under more resources on the decision reviews landing page i can t figure out if there s a way in drupal to keep the article linked on that landing page but not have that extra step show up in the breadcrumbs randi wasn t expecting decision reviews and appeals to show up in the breadcrumbs it should be home resources and support choosing a decision review r s shouldn t be a category within decision reviews and appeals randi suspects that it s showing up that way because sws has it as a link on the decision reviews landing page but maybe it s something else additional context this work may blur the lines between drupal front end work we know drupal generates some breadcrumbs from menu or from path we believe some page breadcrumbs are custom in the front end this ticket should take into account both backend frontend breadcrumb handling ac expected behavior document any findings about how breadcrumbs work in comments understand why this is happening determine how widespread this particular issue is meet with team dave to discuss next steps in context of all the breadcrumb woes screenshots img width alt screen shot at am src img width alt screen shot at am src additional context resources and support is currently disabled as a top level menu item with nothing listed beneath it labels you can delete this section once it s complete issue type red defaults to defect cms subsystem green cms practice area blue cms workstream orange not needed for bug tickets cms supported product black cms team please check the team s that will do this work program platform cms team sitewide crew ⭐️ sitewide cms ⭐️ public websites ⭐️ facilities ⭐️ user support
1
78,505
27,558,878,775
IssuesEvent
2023-03-07 20:11:53
scipy/scipy
https://api.github.com/repos/scipy/scipy
opened
BUG: ValueError: setting an array element with a sequence.
defect
### Describe your issue. When using [scipy.interpolate.make_smoothing_spline](https://docs.scipy.org/doc/scipy/reference/generated/scipy.interpolate.make_smoothing_spline.html#scipy.interpolate.make_smoothing_spline) if you pass inputs of length less than 5 you get a ValueError; the values of the inputs do not affect this. ### Reproducing Code Example ```python x = np.arange(4) y = np.ones(4) spl = make_smoothing_spline(x, y) ``` ### Error message ```shell ValueError: setting an array element with a sequence. ``` ### SciPy/NumPy/Python version and system information ```shell 1.10.1 1.21.0 sys.version_info(major=3, minor=8, micro=5, releaselevel='final', serial=0) Build Dependencies: blas: detection method: cmake found: true include directory: unknown lib directory: unknown name: OpenBLAS openblas configuration: unknown pc file directory: unknown version: 0.3.18 lapack: detection method: cmake found: true include directory: unknown lib directory: unknown name: OpenBLAS openblas configuration: unknown pc file directory: unknown version: 0.3.18 Compilers: c: commands: cc linker: ld.bfd name: gcc version: 10.2.1 c++: commands: c++ linker: ld.bfd name: gcc version: 10.2.1 cython: commands: cython linker: cython name: cython version: 0.29.33 fortran: commands: gfortran linker: ld.bfd name: gcc version: 10.2.1 pythran: include directory: /tmp/pip-build-env-q2fwe5jt/overlay/lib/python3.8/site-packages/pythran version: 0.12.1 Machine Information: build: cpu: x86_64 endian: little family: x86_64 system: linux cross-compiled: false host: cpu: x86_64 endian: little family: x86_64 system: linux Python Information: path: /opt/python/cp38-cp38/bin/python version: '3.8' ```
1.0
BUG: ValueError: setting an array element with a sequence. - ### Describe your issue. When using [scipy.interpolate.make_smoothing_spline](https://docs.scipy.org/doc/scipy/reference/generated/scipy.interpolate.make_smoothing_spline.html#scipy.interpolate.make_smoothing_spline) if you pass inputs of length less than 5 you get a ValueError; the values of the inputs do not affect this. ### Reproducing Code Example ```python x = np.arange(4) y = np.ones(4) spl = make_smoothing_spline(x, y) ``` ### Error message ```shell ValueError: setting an array element with a sequence. ``` ### SciPy/NumPy/Python version and system information ```shell 1.10.1 1.21.0 sys.version_info(major=3, minor=8, micro=5, releaselevel='final', serial=0) Build Dependencies: blas: detection method: cmake found: true include directory: unknown lib directory: unknown name: OpenBLAS openblas configuration: unknown pc file directory: unknown version: 0.3.18 lapack: detection method: cmake found: true include directory: unknown lib directory: unknown name: OpenBLAS openblas configuration: unknown pc file directory: unknown version: 0.3.18 Compilers: c: commands: cc linker: ld.bfd name: gcc version: 10.2.1 c++: commands: c++ linker: ld.bfd name: gcc version: 10.2.1 cython: commands: cython linker: cython name: cython version: 0.29.33 fortran: commands: gfortran linker: ld.bfd name: gcc version: 10.2.1 pythran: include directory: /tmp/pip-build-env-q2fwe5jt/overlay/lib/python3.8/site-packages/pythran version: 0.12.1 Machine Information: build: cpu: x86_64 endian: little family: x86_64 system: linux cross-compiled: false host: cpu: x86_64 endian: little family: x86_64 system: linux Python Information: path: /opt/python/cp38-cp38/bin/python version: '3.8' ```
defect
bug valueerror setting an array element with a sequence describe your issue when using if you pass inputs of length less that you get value error values of the inputs does not matter affect this reproducing code example python x np arange y np ones spl make smoothing spline x y error message shell valueerror setting an array element with a sequence scipy numpy python version and system information shell sys version info major minor micro releaselevel final serial build dependencies blas detection method cmake found true include directory unknown lib directory unknown name openblas openblas configuration unknown pc file directory unknown version lapack detection method cmake found true include directory unknown lib directory unknown name openblas openblas configuration unknown pc file directory unknown version compilers c commands cc linker ld bfd name gcc version c commands c linker ld bfd name gcc version cython commands cython linker cython name cython version fortran commands gfortran linker ld bfd name gcc version pythran include directory tmp pip build env overlay lib site packages pythran version machine information build cpu endian little family system linux cross compiled false host cpu endian little family system linux python information path opt python bin python version
1
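The SciPy report above amounts to a precondition violation: `make_smoothing_spline` fails for fewer than 5 samples but surfaces it as an opaque array-assignment `ValueError`. A defensive wrapper can raise a clearer error first; note that the `min_points=5` threshold and the function name below come from the behavior observed in the report, not from documented API guarantees:

```python
def check_spline_input(x, y, min_points=5):
    """Validate inputs before scipy.interpolate.make_smoothing_spline.

    min_points=5 mirrors the failure threshold observed in the report.
    """
    if len(x) != len(y):
        raise ValueError("x and y must have the same length")
    if len(x) < min_points:
        # Fail early with an explicit message instead of the confusing
        # "setting an array element with a sequence" error.
        raise ValueError(
            f"need at least {min_points} data points, got {len(x)}"
        )
```

With the report's four-point input this raises the explicit length error rather than the array-assignment message.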
52,690
13,224,934,785
IssuesEvent
2020-08-17 20:08:59
icecube-trac/tix4
https://api.github.com/repos/icecube-trac/tix4
closed
I3Db: all test/scripts failing (Trac #161)
Migrated from Trac combo core defect
Since the I3Db update for noiseRate/relativeDomEff, I3Db is now going log_fatal on older dates than IC59 times: ./I3Db/resources/scripts/dump_db_nofile.py FATAL: I3DbCalibrationService: Calibration Start Date (Tue May 19 23:38:00 2009 ) later than End Date (Tue May 19 23:38:00 2009 )- ?!? Traceback (most recent call last): File "./I3Db/resources/scripts/dump_db_nofile.py", line 77, in <module> tray.Execute(4) File "/data/i3home/blaufuss/icework/offline-software/trunk/build_release/lib/I3Tray.py", line 72, in Execute args[0].the_tray.Execute(args[1]) RuntimeError: I3DbCalibrationService: Calibration Start Date (Tue May 19 23:38:00 2009 ) later than End Date (Tue May 19 23:38:00 2009 )- ?!? This GCD file should be from 2007. It seems the date is getting set by the new tables when these values don't exist. For earlier years than IC59, can the std noise rate for "std" doms of 480 Hz <details> <summary><em>Migrated from <a href="https://code.icecube.wisc.edu/projects/icecube/ticket/161">https://code.icecube.wisc.edu/projects/icecube/ticket/161</a>, reported by blaufuss and owned by kohnen</em></summary> <p> ```json { "status": "closed", "changetime": "2014-11-23T03:37:56", "_ts": "1416713876862109", "description": "Since the I3Db update for noiseRate/relativeDomEff, I3Db is now going log_fatal on older dates than IC59 times:\n\n./I3Db/resources/scripts/dump_db_nofile.py\n\nFATAL: I3DbCalibrationService: Calibration Start Date (Tue May 19 23:38:00 2009\n) later than End Date (Tue May 19 23:38:00 2009\n)- ?!?\nTraceback (most recent call last):\n File \"./I3Db/resources/scripts/dump_db_nofile.py\", line 77, in <module>\n tray.Execute(4)\n File \"/data/i3home/blaufuss/icework/offline-software/trunk/build_release/lib/I3Tray.py\", line 72, in Execute\n args[0].the_tray.Execute(args[1])\nRuntimeError: I3DbCalibrationService: Calibration Start Date (Tue May 19 23:38:00 2009\n) later than End Date (Tue May 19 23:38:00 2009\n)- ?!?\n\nThis GCD file should be from 2007. It seems the date is getting set by the new tables when the these values don't exist. \n\nFor earlier years than IC59, can the std noise rate for \"std\" doms of 480 Hz", "reporter": "blaufuss", "cc": "kohnen@umh.ac.be", "resolution": "fixed", "time": "2009-06-12T20:10:07", "component": "combo core", "summary": "I3Db: all test/scripts failing", "priority": "normal", "keywords": "", "milestone": "", "owner": "kohnen", "type": "defect" } ``` </p> </details>
1.0
I3Db: all test/scripts failing (Trac #161) - Since the I3Db update for noiseRate/relativeDomEff, I3Db is now going log_fatal on older dates than IC59 times: ./I3Db/resources/scripts/dump_db_nofile.py FATAL: I3DbCalibrationService: Calibration Start Date (Tue May 19 23:38:00 2009 ) later than End Date (Tue May 19 23:38:00 2009 )- ?!? Traceback (most recent call last): File "./I3Db/resources/scripts/dump_db_nofile.py", line 77, in <module> tray.Execute(4) File "/data/i3home/blaufuss/icework/offline-software/trunk/build_release/lib/I3Tray.py", line 72, in Execute args[0].the_tray.Execute(args[1]) RuntimeError: I3DbCalibrationService: Calibration Start Date (Tue May 19 23:38:00 2009 ) later than End Date (Tue May 19 23:38:00 2009 )- ?!? This GCD file should be from 2007. It seems the date is getting set by the new tables when these values don't exist. For earlier years than IC59, can the std noise rate for "std" doms of 480 Hz <details> <summary><em>Migrated from <a href="https://code.icecube.wisc.edu/projects/icecube/ticket/161">https://code.icecube.wisc.edu/projects/icecube/ticket/161</a>, reported by blaufuss and owned by kohnen</em></summary> <p> ```json { "status": "closed", "changetime": "2014-11-23T03:37:56", "_ts": "1416713876862109", "description": "Since the I3Db update for noiseRate/relativeDomEff, I3Db is now going log_fatal on older dates than IC59 times:\n\n./I3Db/resources/scripts/dump_db_nofile.py\n\nFATAL: I3DbCalibrationService: Calibration Start Date (Tue May 19 23:38:00 2009\n) later than End Date (Tue May 19 23:38:00 2009\n)- ?!?\nTraceback (most recent call last):\n File \"./I3Db/resources/scripts/dump_db_nofile.py\", line 77, in <module>\n tray.Execute(4)\n File \"/data/i3home/blaufuss/icework/offline-software/trunk/build_release/lib/I3Tray.py\", line 72, in Execute\n args[0].the_tray.Execute(args[1])\nRuntimeError: I3DbCalibrationService: Calibration Start Date (Tue May 19 23:38:00 2009\n) later than End Date (Tue May 19 23:38:00 2009\n)- ?!?\n\nThis GCD file should be from 2007. It seems the date is getting set by the new tables when the these values don't exist. \n\nFor earlier years than IC59, can the std noise rate for \"std\" doms of 480 Hz", "reporter": "blaufuss", "cc": "kohnen@umh.ac.be", "resolution": "fixed", "time": "2009-06-12T20:10:07", "component": "combo core", "summary": "I3Db: all test/scripts failing", "priority": "normal", "keywords": "", "milestone": "", "owner": "kohnen", "type": "defect" } ``` </p> </details>
defect
all test scripts failing trac since the update for noiserate relativedomeff is now going log fatal on older dates than times resources scripts dump db nofile py fatal calibration start date tue may later than end date tue may traceback most recent call last file resources scripts dump db nofile py line in tray execute file data blaufuss icework offline software trunk build release lib py line in execute args the tray execute args runtimeerror calibration start date tue may later than end date tue may this gcd file should be from it seems the date is getting set by the new tables when the these values don t exist for earlier years than can the std noise rate for std doms of hz migrated from json status closed changetime ts description since the update for noiserate relativedomeff is now going log fatal on older dates than times n n resources scripts dump db nofile py n nfatal calibration start date tue may n later than end date tue may n ntraceback most recent call last n file resources scripts dump db nofile py line in n tray execute n file data blaufuss icework offline software trunk build release lib py line in execute n args the tray execute args nruntimeerror calibration start date tue may n later than end date tue may n n nthis gcd file should be from it seems the date is getting set by the new tables when the these values don t exist n nfor earlier years than can the std noise rate for std doms of hz reporter blaufuss cc kohnen umh ac be resolution fixed time component combo core summary all test scripts failing priority normal keywords milestone owner kohnen type defect
1
51,855
13,211,323,939
IssuesEvent
2020-08-15 22:18:48
icecube-trac/tix4
https://api.github.com/repos/icecube-trac/tix4
opened
gulliver-modules examples need to be fixed (Trac #1178)
Incomplete Migration Migrated from Trac combo reconstruction defect
<details> <summary><em>Migrated from <a href="https://code.icecube.wisc.edu/projects/icecube/ticket/1178">https://code.icecube.wisc.edu/projects/icecube/ticket/1178</a>, reported by kjmeagherand owned by kkrings</em></summary> <p> ```json { "status": "closed", "changetime": "2019-02-13T14:11:57", "_ts": "1550067117911749", "description": "the two python scripts in resources/examples are old and dont run: gulliview_demo.py trace.py\n\nIn addition, all of the python files in resources/scripts work fine but they all are icetray scripts so they should be moved to examples, or if they are unit tests moved to resources/test", "reporter": "kjmeagher", "cc": "", "resolution": "fixed", "time": "2015-08-19T11:07:23", "component": "combo reconstruction", "summary": "gulliver-modules examples need to be fixed", "priority": "blocker", "keywords": "", "milestone": "", "owner": "kkrings", "type": "defect" } ``` </p> </details>
1.0
gulliver-modules examples need to be fixed (Trac #1178) - <details> <summary><em>Migrated from <a href="https://code.icecube.wisc.edu/projects/icecube/ticket/1178">https://code.icecube.wisc.edu/projects/icecube/ticket/1178</a>, reported by kjmeagherand owned by kkrings</em></summary> <p> ```json { "status": "closed", "changetime": "2019-02-13T14:11:57", "_ts": "1550067117911749", "description": "the two python scripts in resources/examples are old and dont run: gulliview_demo.py trace.py\n\nIn addition, all of the python files in resources/scripts work fine but they all are icetray scripts so they should be moved to examples, or if they are unit tests moved to resources/test", "reporter": "kjmeagher", "cc": "", "resolution": "fixed", "time": "2015-08-19T11:07:23", "component": "combo reconstruction", "summary": "gulliver-modules examples need to be fixed", "priority": "blocker", "keywords": "", "milestone": "", "owner": "kkrings", "type": "defect" } ``` </p> </details>
defect
gulliver modules examples need to be fixed trac migrated from json status closed changetime ts description the two python scripts in resources examples are old and dont run gulliview demo py trace py n nin addition all of the python files in resources scripts work fine but they all are icetray scripts so they should be moved to examples or if they are unit tests moved to resources test reporter kjmeagher cc resolution fixed time component combo reconstruction summary gulliver modules examples need to be fixed priority blocker keywords milestone owner kkrings type defect
1
1,707
2,822,758,034
IssuesEvent
2015-05-21 02:22:01
grpc/grpc
https://api.github.com/repos/grpc/grpc
opened
Build process errors with latest protobuf update
buildgen Linux P0
Following #1660, fresh `git clone`s of `github.com/grpc/grpc.git` master (no prior system installation of `grpc` or `protobuf`) fail to build against GCC 4.8 on Linux (Ubuntu 14.04 and 15.04) with the following steps: ``` git clone https://github.com/grpc/grpc cd grpc git submodule update --init make ``` The error is: ``` <this is a placeholder for the error - will be filled in soon™> ``` Against GCC 4.9, the above works fine. Performing the following series of steps builds everything properly against GCC 4.8: ``` git clone https://github.com/grpc/grpc cd grpc git submodule update --init cd third_party/protobuf ./autogen.sh ./configure make sudo make install cd ../.. # cd to grpc root make ``` It is thus suspected that the `Makefile` for `grpc` is somehow polluting the configuration step for `protobuf` in a manner sensitive to GCC version numbers. CC @dgquintas, @jtattermusch
1.0
Build process errors with latest protobuf update - Following #1660, fresh `git clone`s of `github.com/grpc/grpc.git` master (no prior system installation of `grpc` or `protobuf`) fail to build against GCC 4.8 on Linux (Ubuntu 14.04 and 15.04) with the following steps: ``` git clone https://github.com/grpc/grpc cd grpc git submodule update --init make ``` The error is: ``` <this is a placeholder for the error - will be filled in soon™> ``` Against GCC 4.9, the above works fine. Performing the following series of steps builds everything properly against GCC 4.8: ``` git clone https://github.com/grpc/grpc cd grpc git submodule update --init cd third_party/protobuf ./autogen.sh ./configure make sudo make install cd ../.. # cd to grpc root make ``` It is thus suspected that the `Makefile` for `grpc` is somehow polluting the configuration step for `protobuf` in a manner sensitive to GCC version numbers. CC @dgquintas, @jtattermusch
non_defect
build process errors with latest protobuf update following fresh git clone s of github com grpc grpc git master no prior system installation of grpc or protobuf fail to build against gcc on linux ubuntu and with the following steps git clone cd grpc git submodule update init make the error is against gcc the above works fine performing the following series of steps builds everything properly against gcc git clone cd grpc git submodule update init cd third party protobuf autogen sh configure make sudo make install cd cd to grpc root make it is thus suspected that the makefile for grpc is somehow polluting the configuration step for protobuf in a manner sensitive to gcc version numbers cc dgquintas jtattermusch
0
57,111
15,691,598,564
IssuesEvent
2021-03-25 18:03:55
owncloud/ocis
https://api.github.com/repos/owncloud/ocis
closed
Single big file upload with desktop-client fails if OIDC token expires
Category:Defect OCIS-Fastlane Platform:Desktop Platform:oCIS Status:Bug-Analysis Status:Stale p3-medium
The issue seems to be that the upload aborts after KONNECTD_ACCESS_TOKEN_EXPIRATION which defaults to 10 minutes. Latest desktop sync-client testpilot daily. - Start ocis server with KONNECTD_ACCESS_TOKEN_EXPIRATION=60 (1 minute) - Set upload throttling to 10kb/s - ```fallocate -l 4G 4G.bin``` inside the clients data-dir - Upload does not complete and is restarted on every failure from the beginning. Client logs => This probably leads to the sync-abort as the request is (probably?) done with an expired token. ``` 12-02 16:51:40:442 [ warning sync.networkjob.propfind ]: *not* successful, http result code is 401 "" 12-02 16:51:40:442 [ debug sync.networkjob ] [ OCC::AbstractNetworkJob::slotFinished ]: Network job OCC::PropfindJob finished for "/" 12-02 16:51:40:443 [ info sync.httplogger ]: "5424b26a-9be6-462c-817c-8f036df39da8: Response: POST 0 https://localhost:9090/remote.php/dav/files ``` See attached mitmproxy log Server logs: ``` 2020-12-02 12:34:49.162768 I | http: proxy error: readfrom tcp 127.0.0.1:44974->127.0.0.1:9140: unexpected EOF 2020-12-02 12:34:49.165016 I | http: proxy error: unexpected EOF 2020-12-02T12:34:49Z ERR error doing GET request to data service error="Patch \"https://ocis.owncloud.works/data/eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJhdWQiOiJyZXZhIiwiZXhwIjoxNjU4NzUxOTUxLCJpYXQiOjE2MDY5MTE5NTEsInRhcmdldCI6Imh0dHA6Ly9sb2NhbGhvc3Q6OTE1NS9kYXRhL3R1cy8zMDk5OTIxMi0yMWI3LTQzYjYtYmZlZS1lNTY4M2Y2MjhmMjcifQ.BzTtgiI8_jWrmTTyP9nqj5Alu-TuzP_9jIG6fyEM1Do\": context canceled" pkg=rhttp service=storage traceid=9d24d20a11df0a269e4b3af7833b18f8 2020-12-02T12:34:49Z WRN srv/app/pkg/mod/github.com/cs3org/reva@v1.4.1-0.20201130061320-ac85e68e0600/internal/grpc/services/storageprovider/storageprovider.go:528 > path not found when stating pkg=rgrpc service=storage traceid=a51466e41eba5cef6919bacc5e9e339c 2020-12-02T12:34:49Z ERR srv/app/pkg/mod/github.com/cs3org/reva@v1.4.1-0.20201130061320-ac85e68e0600/internal/grpc/services/storageprovider/storageprovider.go:530 
> permission denied error="error: permission denied: 73abb2bc-d6c9-4161-9e53-0c35332e79c3" pkg=rgrpc service=storage traceid=a51466e41eba5cef6919bacc5e9e339c [tusd] 2020/12/02 12:34:49 event="ChunkWriteComplete" id="30999212-21b7-43b6-bfee-e5683f628f27" bytesWritten="632799232" [tusd] 2020/12/02 12:34:49 event="ResponseOutgoing" status="204" method="PATCH" path="/30999212-21b7-43b6-bfee-e5683f628f27" requestId="" 2020-12-02T12:34:49Z ERR http end="02/Dec/2020:12:34:49 +0000" host=127.0.0.1 method=POST pkg=rhttp proto=HTTP/1.1 service=storage size=0 start="02/Dec/2020:12:25:51 +0000" status=500 time_ns=537634144018 traceid=9d24d20a11df0a269e4b3af7833b18f8 uri=/remote.php/dav/files/einstein/ url=/remote.php/dav/files/einstein/ 2020-12-02T12:34:49Z ERR error doing PATCH request to data service error="Patch \"http://localhost:9155/data/tus/30999212-21b7-43b6-bfee-e5683f628f27\": unexpected EOF" pkg=rhttp service=storage traceid=9d24d20a11df0a269e4b3af7833b18f8 2020-12-02T12:34:49Z ERR http end="02/Dec/2020:12:34:49 +0000" host=127.0.0.1 method=PATCH pkg=rhttp proto=HTTP/1.1 service=storage size=0 start="02/Dec/2020:12:25:51 +0000" status=500 time_ns=537608042622 traceid=9d24d20a11df0a269e4b3af7833b18f8 uri=/data/eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJhdWQiOiJyZXZhIiwiZXhwIjoxNjU4NzUxOTUxLCJpYXQiOjE2MDY5MTE5NTEsInRhcmdldCI6Imh0dHA6Ly9sb2NhbGhvc3Q6OTE1NS9kYXRhL3R1cy8zMDk5OTIxMi0yMWI3LTQzYjYtYmZlZS1lNTY4M2Y2MjhmMjcifQ.BzTtgiI8_jWrmTTyP9nqj5Alu-TuzP_9jIG6fyEM1Do url=/data/eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJhdWQiOiJyZXZhIiwiZXhwIjoxNjU4NzUxOTUxLCJpYXQiOjE2MDY5MTE5NTEsInRhcmdldCI6Imh0dHA6Ly9sb2NhbGhvc3Q6OTE1NS9kYXRhL3R1cy8zMDk5OTIxMi0yMWI3LTQzYjYtYmZlZS1lNTY4M2Y2MjhmMjcifQ.BzTtgiI8_jWrmTTyP9nqj5Alu-TuzP_9jIG6fyEM1Do 2020-12-02T12:34:49Z WRN srv/app/pkg/mod/github.com/cs3org/reva@v1.4.1-0.20201130061320-ac85e68e0600/internal/grpc/services/storageprovider/storageprovider.go:528 > path not found when stating pkg=rgrpc service=storage 
traceid=dc2750ce6e0925812342b158aee9ea3b 2020-12-02T12:34:49Z ERR srv/app/pkg/mod/github.com/cs3org/reva@v1.4.1-0.20201130061320-ac85e68e0600/internal/grpc/services/storageprovider/storageprovider.go:530 > permission denied error="error: permission denied: 73abb2bc-d6c9-4161-9e53-0c35332e79c3" pkg=rgrpc service=storage traceid=dc2750ce6e0925812342b158aee9ea3b ```
1.0
Single big file upload with dekstop-client fails if OIDC token expires - The issue seems to be that the upload aborts after KONNECTD_ACCESS_TOKEN_EXPIRATION which defaults to 10 minutes. Latest desktop sync-client testpilot daily. - Start ocis server with KONNECTD_ACCESS_TOKEN_EXPIRATION=60 (1 minute) - Set upload throttling to 10kb/s - ```fallocate -l 4G 4G.bin``` inside the clients data-dir - Upload does not complete and is restarted on every failure from the beginning. Client logs => This probably leads to the sync-abort as the request is (probably?) done with an expired token. ``` 12-02 16:51:40:442 [ warning sync.networkjob.propfind ]: *not* successful, http result code is 401 "" 12-02 16:51:40:442 [ debug sync.networkjob ] [ OCC::AbstractNetworkJob::slotFinished ]: Network job OCC::PropfindJob finished for "/" 12-02 16:51:40:443 [ info sync.httplogger ]: "5424b26a-9be6-462c-817c-8f036df39da8: Response: POST 0 https://localhost:9090/remote.php/dav/files ``` See attached mitmproxy log Server logs: ``` 2020-12-02 12:34:49.162768 I | http: proxy error: readfrom tcp 127.0.0.1:44974->127.0.0.1:9140: unexpected EOF 2020-12-02 12:34:49.165016 I | http: proxy error: unexpected EOF 2020-12-02T12:34:49Z ERR error doing GET request to data service error="Patch \"https://ocis.owncloud.works/data/eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJhdWQiOiJyZXZhIiwiZXhwIjoxNjU4NzUxOTUxLCJpYXQiOjE2MDY5MTE5NTEsInRhcmdldCI6Imh0dHA6Ly9sb2NhbGhvc3Q6OTE1NS9kYXRhL3R1cy8zMDk5OTIxMi0yMWI3LTQzYjYtYmZlZS1lNTY4M2Y2MjhmMjcifQ.BzTtgiI8_jWrmTTyP9nqj5Alu-TuzP_9jIG6fyEM1Do\": context canceled" pkg=rhttp service=storage traceid=9d24d20a11df0a269e4b3af7833b18f8 2020-12-02T12:34:49Z WRN srv/app/pkg/mod/github.com/cs3org/reva@v1.4.1-0.20201130061320-ac85e68e0600/internal/grpc/services/storageprovider/storageprovider.go:528 > path not found when stating pkg=rgrpc service=storage traceid=a51466e41eba5cef6919bacc5e9e339c 2020-12-02T12:34:49Z ERR 
srv/app/pkg/mod/github.com/cs3org/reva@v1.4.1-0.20201130061320-ac85e68e0600/internal/grpc/services/storageprovider/storageprovider.go:530 > permission denied error="error: permission denied: 73abb2bc-d6c9-4161-9e53-0c35332e79c3" pkg=rgrpc service=storage traceid=a51466e41eba5cef6919bacc5e9e339c [tusd] 2020/12/02 12:34:49 event="ChunkWriteComplete" id="30999212-21b7-43b6-bfee-e5683f628f27" bytesWritten="632799232" [tusd] 2020/12/02 12:34:49 event="ResponseOutgoing" status="204" method="PATCH" path="/30999212-21b7-43b6-bfee-e5683f628f27" requestId="" 2020-12-02T12:34:49Z ERR http end="02/Dec/2020:12:34:49 +0000" host=127.0.0.1 method=POST pkg=rhttp proto=HTTP/1.1 service=storage size=0 start="02/Dec/2020:12:25:51 +0000" status=500 time_ns=537634144018 traceid=9d24d20a11df0a269e4b3af7833b18f8 uri=/remote.php/dav/files/einstein/ url=/remote.php/dav/files/einstein/ 2020-12-02T12:34:49Z ERR error doing PATCH request to data service error="Patch \"http://localhost:9155/data/tus/30999212-21b7-43b6-bfee-e5683f628f27\": unexpected EOF" pkg=rhttp service=storage traceid=9d24d20a11df0a269e4b3af7833b18f8 2020-12-02T12:34:49Z ERR http end="02/Dec/2020:12:34:49 +0000" host=127.0.0.1 method=PATCH pkg=rhttp proto=HTTP/1.1 service=storage size=0 start="02/Dec/2020:12:25:51 +0000" status=500 time_ns=537608042622 traceid=9d24d20a11df0a269e4b3af7833b18f8 uri=/data/eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJhdWQiOiJyZXZhIiwiZXhwIjoxNjU4NzUxOTUxLCJpYXQiOjE2MDY5MTE5NTEsInRhcmdldCI6Imh0dHA6Ly9sb2NhbGhvc3Q6OTE1NS9kYXRhL3R1cy8zMDk5OTIxMi0yMWI3LTQzYjYtYmZlZS1lNTY4M2Y2MjhmMjcifQ.BzTtgiI8_jWrmTTyP9nqj5Alu-TuzP_9jIG6fyEM1Do url=/data/eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJhdWQiOiJyZXZhIiwiZXhwIjoxNjU4NzUxOTUxLCJpYXQiOjE2MDY5MTE5NTEsInRhcmdldCI6Imh0dHA6Ly9sb2NhbGhvc3Q6OTE1NS9kYXRhL3R1cy8zMDk5OTIxMi0yMWI3LTQzYjYtYmZlZS1lNTY4M2Y2MjhmMjcifQ.BzTtgiI8_jWrmTTyP9nqj5Alu-TuzP_9jIG6fyEM1Do 2020-12-02T12:34:49Z WRN 
srv/app/pkg/mod/github.com/cs3org/reva@v1.4.1-0.20201130061320-ac85e68e0600/internal/grpc/services/storageprovider/storageprovider.go:528 > path not found when stating pkg=rgrpc service=storage traceid=dc2750ce6e0925812342b158aee9ea3b 2020-12-02T12:34:49Z ERR srv/app/pkg/mod/github.com/cs3org/reva@v1.4.1-0.20201130061320-ac85e68e0600/internal/grpc/services/storageprovider/storageprovider.go:530 > permission denied error="error: permission denied: 73abb2bc-d6c9-4161-9e53-0c35332e79c3" pkg=rgrpc service=storage traceid=dc2750ce6e0925812342b158aee9ea3b ```
defect
single big file upload with dekstop client fails if oidc token expires the issue seems to be that the upload aborts after konnectd access token expiration which defaults to minutes latest desktop sync client testpilot daily start ocis server with konnectd access token expiration minute set upload throttling to s fallocate l bin inside the clients data dir upload does not complete and is restarted on every failure from the beginning client logs this probably leads to the sync abort as the request is probably done with an expired token not successful http result code is network job occ propfindjob finished for response post see attached mitmproxy log server logs i http proxy error readfrom tcp unexpected eof i http proxy error unexpected eof err error doing get request to data service error patch context canceled pkg rhttp service storage traceid wrn srv app pkg mod github com reva internal grpc services storageprovider storageprovider go path not found when stating pkg rgrpc service storage traceid err srv app pkg mod github com reva internal grpc services storageprovider storageprovider go permission denied error error permission denied pkg rgrpc service storage traceid event chunkwritecomplete id bfee byteswritten event responseoutgoing status method patch path bfee requestid err http end dec host method post pkg rhttp proto http service storage size start dec status time ns traceid uri remote php dav files einstein url remote php dav files einstein err error doing patch request to data service error patch unexpected eof pkg rhttp service storage traceid err http end dec host method patch pkg rhttp proto http service storage size start dec status time ns traceid uri data tuzp url data tuzp wrn srv app pkg mod github com reva internal grpc services storageprovider storageprovider go path not found when stating pkg rgrpc service storage traceid err srv app pkg mod github com reva internal grpc services storageprovider storageprovider go permission denied error error 
permission denied pkg rgrpc service storage traceid
1
286,086
24,718,431,580
IssuesEvent
2022-10-20 08:51:00
MartinaB91/project5-task-app-front
https://api.github.com/repos/MartinaB91/project5-task-app-front
closed
Test: Filter tasks, All
test
_Test for check that a user can filter the tasks on all tasks that has the status "All"._ ## Story: #13 ## Testcases: |Test id: #96 | | |--------|------------------------------| |**Purpose:**|Check that a user can filter tasks on All| |**Requirements:**| As a **Family member** I want to **filter tasks** so that I can **view tasks that has a specific status.** | |**Data:**| Username: Tester Password: Only available for tester Family Member: Parent | |**Preconditions:**| Signed-in as User, Completed test #95 | |**Procedure step:**|**Expected result:**| |**Step 1:** Go to the scoreboard and choose filter "All" | One tasks that isn't assigned and one task that is assigned should be shown. |
1.0
Test: Filter tasks, All - _Test for check that a user can filter the tasks on all tasks that has the status "All"._ ## Story: #13 ## Testcases: |Test id: #96 | | |--------|------------------------------| |**Purpose:**|Check that a user can filter tasks on All| |**Requirements:**| As a **Family member** I want to **filter tasks** so that I can **view tasks that has a specific status.** | |**Data:**| Username: Tester Password: Only available for tester Family Member: Parent | |**Preconditions:**| Signed-in as User, Completed test #95 | |**Procedure step:**|**Expected result:**| |**Step 1:** Go to the scoreboard and choose filter "All" | One tasks that isn't assigned and one task that is assigned should be shown. |
non_defect
test filter tasks all test for check that a user can filter the tasks on all tasks that has the status all story testcases test id purpose check that a user can filter tasks on all requirements as a family member i want to filter tasks so that i can view tasks that has a specific status data username tester password only available for tester family member parent preconditions signed in as user completed test procedure step expected result step go to the scoreboard and choose filter all one tasks that isn t assigned and one task that is assigned should be shown
0
73,046
24,419,821,951
IssuesEvent
2022-10-05 19:15:02
hazelcast/hazelcast
https://api.github.com/repos/hazelcast/hazelcast
opened
JdbcSqlConnector issues
Type: Defect Team: SQL
1. the connector doesn't escape or quote identifiers when generating SQL. This enables SQL injection. For example here: https://github.com/hazelcast/hazelcast/blob/ee8ce2f87e78086c74d71b339d30eec110e28cde/hazelcast-sql/src/main/java/com/hazelcast/jet/sql/impl/connector/jdbc/JdbcSqlConnector.java#L150 or here: https://github.com/hazelcast/hazelcast/blob/master/hazelcast-sql/src/main/java/com/hazelcast/jet/sql/impl/connector/jdbc/SelectQueryBuilder.java#L70 and probably more. 2. This should rather by "unsupported column type", instead of "unknown" https://github.com/hazelcast/hazelcast/blob/ee8ce2f87e78086c74d71b339d30eec110e28cde/hazelcast-sql/src/main/java/com/hazelcast/jet/sql/impl/connector/jdbc/JdbcSqlConnector.java#L421 3. We should use the nodeEngine.getLogger() here. This way it won't contain hazelcast instance name: https://github.com/hazelcast/hazelcast/blob/ee8ce2f87e78086c74d71b339d30eec110e28cde/hazelcast-sql/src/main/java/com/hazelcast/jet/sql/impl/connector/jdbc/JdbcSqlConnector.java#L73 4. This is too wide-catching, the illegal-arg-exc is too unspecific. I think the whole try-catch can be removed, I guess it's supposed to "improve" the illegal-arg-exc thrown within `resolveType()` and that it's a leftover: https://github.com/hazelcast/hazelcast/blob/ee8ce2f87e78086c74d71b339d30eec110e28cde/hazelcast-sql/src/main/java/com/hazelcast/jet/sql/impl/connector/jdbc/JdbcSqlConnector.java#L105-L107 5. We throw IllegalStateException in multiple places, but I think this exception is for something else. We should throw `QueryException.error()`. 6. Officially? https://github.com/hazelcast/hazelcast/blob/ee8ce2f87e78086c74d71b339d30eec110e28cde/hazelcast-sql/src/main/java/com/hazelcast/jet/sql/impl/connector/jdbc/JdbcSqlConnector.java#L114 I didn't review everything. Questions: - why don't we use `connection.getMetaData().getColumns()` here, as we do in `readPrimaryKeyColumns`? 
https://github.com/hazelcast/hazelcast/blob/ee8ce2f87e78086c74d71b339d30eec110e28cde/hazelcast-sql/src/main/java/com/hazelcast/jet/sql/impl/connector/jdbc/JdbcSqlConnector.java#L150 - why do we use column type name, and not the constants here? https://github.com/hazelcast/hazelcast/blob/ee8ce2f87e78086c74d71b339d30eec110e28cde/hazelcast-sql/src/main/java/com/hazelcast/jet/sql/impl/connector/jdbc/JdbcSqlConnector.java#L161
1.0
JdbcSqlConnector issues - 1. the connector doesn't escape or quote identifiers when generating SQL. This enables SQL injection. For example here: https://github.com/hazelcast/hazelcast/blob/ee8ce2f87e78086c74d71b339d30eec110e28cde/hazelcast-sql/src/main/java/com/hazelcast/jet/sql/impl/connector/jdbc/JdbcSqlConnector.java#L150 or here: https://github.com/hazelcast/hazelcast/blob/master/hazelcast-sql/src/main/java/com/hazelcast/jet/sql/impl/connector/jdbc/SelectQueryBuilder.java#L70 and probably more. 2. This should rather by "unsupported column type", instead of "unknown" https://github.com/hazelcast/hazelcast/blob/ee8ce2f87e78086c74d71b339d30eec110e28cde/hazelcast-sql/src/main/java/com/hazelcast/jet/sql/impl/connector/jdbc/JdbcSqlConnector.java#L421 3. We should use the nodeEngine.getLogger() here. This way it won't contain hazelcast instance name: https://github.com/hazelcast/hazelcast/blob/ee8ce2f87e78086c74d71b339d30eec110e28cde/hazelcast-sql/src/main/java/com/hazelcast/jet/sql/impl/connector/jdbc/JdbcSqlConnector.java#L73 4. This is too wide-catching, the illegal-arg-exc is too unspecific. I think the whole try-catch can be removed, I guess it's supposed to "improve" the illegal-arg-exc thrown within `resolveType()` and that it's a leftover: https://github.com/hazelcast/hazelcast/blob/ee8ce2f87e78086c74d71b339d30eec110e28cde/hazelcast-sql/src/main/java/com/hazelcast/jet/sql/impl/connector/jdbc/JdbcSqlConnector.java#L105-L107 5. We throw IllegalStateException in multiple places, but I think this exception is for something else. We should throw `QueryException.error()`. 6. Officially? https://github.com/hazelcast/hazelcast/blob/ee8ce2f87e78086c74d71b339d30eec110e28cde/hazelcast-sql/src/main/java/com/hazelcast/jet/sql/impl/connector/jdbc/JdbcSqlConnector.java#L114 I didn't review everything. Questions: - why don't we use `connection.getMetaData().getColumns()` here, as we do in `readPrimaryKeyColumns`? 
https://github.com/hazelcast/hazelcast/blob/ee8ce2f87e78086c74d71b339d30eec110e28cde/hazelcast-sql/src/main/java/com/hazelcast/jet/sql/impl/connector/jdbc/JdbcSqlConnector.java#L150 - why do we use column type name, and not the constants here? https://github.com/hazelcast/hazelcast/blob/ee8ce2f87e78086c74d71b339d30eec110e28cde/hazelcast-sql/src/main/java/com/hazelcast/jet/sql/impl/connector/jdbc/JdbcSqlConnector.java#L161
defect
jdbcsqlconnector issues the connector doesn t escape or quote identifiers when generating sql this enables sql injection for example here or here and probably more this should rather by unsupported column type instead of unknown we should use the nodeengine getlogger here this way it won t contain hazelcast instance name this is too wide catching the illegal arg exc is too unspecific i think the whole try catch can be removed i guess it s supposed to improve the illegal arg exc thrown within resolvetype and that it s a leftover we throw illegalstateexception in multiple places but i think this exception is for something else we should throw queryexception error officially i didn t review everything questions why don t we use connection getmetadata getcolumns here as we do in readprimarykeycolumns why do we use column type name and not the constants here
1
79,501
9,907,909,209
IssuesEvent
2019-06-27 16:55:28
phetsims/circuit-construction-kit-common
https://api.github.com/repos/phetsims/circuit-construction-kit-common
closed
Add a pause/play button?
meeting:design type:question
While exploring https://github.com/phetsims/QA/issues/311 it occurred to me that a pause/play button could be useful. If a student want's to add reset multiple fuses, depending on the set up one may be blown out before the other could be restarted. In general, it could be useful for a student to fully set up a circuit before starting it. Of course, this could all be accomplished by the student with an appropriately placed switch, which may be more pedagogically useful anyway. Thought I'd add the idea for consideration.
1.0
Add a pause/play button? - While exploring https://github.com/phetsims/QA/issues/311 it occurred to me that a pause/play button could be useful. If a student want's to add reset multiple fuses, depending on the set up one may be blown out before the other could be restarted. In general, it could be useful for a student to fully set up a circuit before starting it. Of course, this could all be accomplished by the student with an appropriately placed switch, which may be more pedagogically useful anyway. Thought I'd add the idea for consideration.
non_defect
add a pause play button while exploring it occurred to me that a pause play button could be useful if a student want s to add reset multiple fuses depending on the set up one may be blown out before the other could be restarted in general it could be useful for a student to fully set up a circuit before starting it of course this could all be accomplished by the student with an appropriately placed switch which may be more pedagogically useful anyway thought i d add the idea for consideration
0
269,602
8,440,881,870
IssuesEvent
2018-10-18 08:40:24
strapi/strapi
https://api.github.com/repos/strapi/strapi
closed
Display fields component is not working in firefox
priority: low status: confirmed type: bug 🐛
<!-- ⚠️ If you do not respect this template your issue will be closed. --> <!-- ⚠️ Before writing your issue make sure you are using:--> <!-- Node 9.x.x --> <!-- npm 5.x.x --> <!-- The latest version of Strapi. --> **Informations** - **Node.js version**: 8.11.1 - **npm version**: 5.6.0 - **Strapi version**: v3.0.0-alpha.14 - **Database**: postgress - **Operating system**: windows 10 **What is the current behavior?** Display fields component just closes (without filed hiding) after click on any element in firefox. In chrome it works as expected. ![alt text](https://screenshotscdn.firefoxusercontent.com/images/41483ee0-f94d-404a-ae28-4c98769a7143.png) **Steps to reproduce the problem** Try to hide any field in firefox from ListView. **What is the expected behavior?** Element should not be closed after click on checkboxes. <!-- ⚠️ Make sure to browse the opened and closed issues before submitting your issue. -->
1.0
Display fields component is not working in firefox - <!-- ⚠️ If you do not respect this template your issue will be closed. --> <!-- ⚠️ Before writing your issue make sure you are using:--> <!-- Node 9.x.x --> <!-- npm 5.x.x --> <!-- The latest version of Strapi. --> **Informations** - **Node.js version**: 8.11.1 - **npm version**: 5.6.0 - **Strapi version**: v3.0.0-alpha.14 - **Database**: postgress - **Operating system**: windows 10 **What is the current behavior?** Display fields component just closes (without filed hiding) after click on any element in firefox. In chrome it works as expected. ![alt text](https://screenshotscdn.firefoxusercontent.com/images/41483ee0-f94d-404a-ae28-4c98769a7143.png) **Steps to reproduce the problem** Try to hide any field in firefox from ListView. **What is the expected behavior?** Element should not be closed after click on checkboxes. <!-- ⚠️ Make sure to browse the opened and closed issues before submitting your issue. -->
non_defect
display fields component is not working in firefox informations node js version npm version strapi version alpha database postgress operating system windows what is the current behavior display fields component just closes without filed hiding after click on any element in firefox in chrome it works as expected steps to reproduce the problem try to hide any field in firefox from listview what is the expected behavior element should not be closed after click on checkboxes
0
234,488
17,988,238,947
IssuesEvent
2021-09-15 00:06:06
aws/aws-sdk-js
https://api.github.com/repos/aws/aws-sdk-js
closed
AWS.Pricing - getProducts returning PriceList as Array<object> instead of Array<string>
documentation service-api closed-for-staleness
Confirm by changing [ ] to [x] below to ensure that it's a bug: - [x] I've gone through [Developer Guide](https://docs.aws.amazon.com/sdk-for-javascript/v2/developer-guide/welcome.html) and [API reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/latest/) - [x] I've checked [AWS Forums](https://forums.aws.amazon.com) and [StackOverflow](https://stackoverflow.com/questions/tagged/aws-sdk-js) for answers - [x] I've searched for [previous similar issues](https://github.com/aws/aws-sdk-js/issues) and didn't find any solution **Describe the bug** When using the method `getProducts` from the `AWS.Pricing` class, the [documentation](https://docs.aws.amazon.com/AWSJavaScriptSDK/latest/AWS/Pricing.html#getProducts-property) states that the expected return type is an object with one of the keys being `PriceList` and its respective value being an array of strings. However, when using this method, the type of the value associated with the `PriceList` key is, actually, an array of objects. This does not impact our ability of using this method, however, it does affect the type checking provided by Typescript. **Is the issue in the browser/Node.js?** Node.js **If on Node.js, are you running this on AWS Lambda?** No **Details of the browser/Node.js version** v14.8.0 **SDK version number** 2.740.0 **To Reproduce (observed behavior)** Code to reproduce the behavior: ``` const AWS = require('aws-sdk') const pricing = new AWS.Pricing({ region: 'us-east-1' }) const params = { Filters: [{ Field: 'location', Type: 'TERM_MATCH', Value: 'US East (N. 
Virginia)' }, { Field: 'preInstalledSw', Type: 'TERM_MATCH', Value: 'NA' }, { Field: 'operatingSystem', Type: 'TERM_MATCH', Value: 'Linux' }, { Field: 'tenancy', Type: 'TERM_MATCH', Value: 'Shared' }, { Field: 'usageType', Type: 'TERM_MATCH', Value: `BoxUsage:g4dn.12xlarge` }], FormatVersion: 'aws_v1', MaxResults: 1, ServiceCode: 'AmazonEC2' } pricing.getProducts(params, (err, data) => { if(err) console.log(err) else console.log(data) }) ``` Output: ``` { FormatVersion: 'aws_v1', PriceList: [ { product: [Object], serviceCode: 'AmazonEC2', terms: [Object], version: '20200825215021', publicationDate: '2020-08-25T21:50:21Z' } ], NextToken: 'vJJa+Vb2+2uBtIJHi3L58A==:4AUHa89lAtUYd6VwJB4KZVOC1zVwWEzLMfA/iEB6Hf3ubDdiCpqm2JKMBzZFgtjXjRWdj0w2MgOecpGmltvW+gaEs3h1GLQ6f1l6/R11dpPFr9BwEna/ej6MGemeTrHe3B8u414Ab9mfCgTuB8AaiuUl3CzvFbF3+XG/GwdILGru4UgxU1V4KiYNuyLMRwBt' } ``` **Expected behavior** We expect either the documentation to state that `PriceList` is an array of objects (with its corresponding type being set this way in the type definition for Typescript) or for `PriceList` being returned as a JSON-encoded string. **Screenshots** NA **Additional context** NA
1.0
AWS.Pricing - getProducts returning PriceList as Array<object> instead of Array<string> - Confirm by changing [ ] to [x] below to ensure that it's a bug: - [x] I've gone through [Developer Guide](https://docs.aws.amazon.com/sdk-for-javascript/v2/developer-guide/welcome.html) and [API reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/latest/) - [x] I've checked [AWS Forums](https://forums.aws.amazon.com) and [StackOverflow](https://stackoverflow.com/questions/tagged/aws-sdk-js) for answers - [x] I've searched for [previous similar issues](https://github.com/aws/aws-sdk-js/issues) and didn't find any solution **Describe the bug** When using the method `getProducts` from the `AWS.Pricing` class, the [documentation](https://docs.aws.amazon.com/AWSJavaScriptSDK/latest/AWS/Pricing.html#getProducts-property) states that the expected return type is an object with one of the keys being `PriceList` and its respective value being an array of strings. However, when using this method, the type of the value associated with the `PriceList` key is, actually, an array of objects. This does not impact our ability of using this method, however, it does affect the type checking provided by Typescript. **Is the issue in the browser/Node.js?** Node.js **If on Node.js, are you running this on AWS Lambda?** No **Details of the browser/Node.js version** v14.8.0 **SDK version number** 2.740.0 **To Reproduce (observed behavior)** Code to reproduce the behavior: ``` const AWS = require('aws-sdk') const pricing = new AWS.Pricing({ region: 'us-east-1' }) const params = { Filters: [{ Field: 'location', Type: 'TERM_MATCH', Value: 'US East (N. 
Virginia)' }, { Field: 'preInstalledSw', Type: 'TERM_MATCH', Value: 'NA' }, { Field: 'operatingSystem', Type: 'TERM_MATCH', Value: 'Linux' }, { Field: 'tenancy', Type: 'TERM_MATCH', Value: 'Shared' }, { Field: 'usageType', Type: 'TERM_MATCH', Value: `BoxUsage:g4dn.12xlarge` }], FormatVersion: 'aws_v1', MaxResults: 1, ServiceCode: 'AmazonEC2' } pricing.getProducts(params, (err, data) => { if(err) console.log(err) else console.log(data) }) ``` Output: ``` { FormatVersion: 'aws_v1', PriceList: [ { product: [Object], serviceCode: 'AmazonEC2', terms: [Object], version: '20200825215021', publicationDate: '2020-08-25T21:50:21Z' } ], NextToken: 'vJJa+Vb2+2uBtIJHi3L58A==:4AUHa89lAtUYd6VwJB4KZVOC1zVwWEzLMfA/iEB6Hf3ubDdiCpqm2JKMBzZFgtjXjRWdj0w2MgOecpGmltvW+gaEs3h1GLQ6f1l6/R11dpPFr9BwEna/ej6MGemeTrHe3B8u414Ab9mfCgTuB8AaiuUl3CzvFbF3+XG/GwdILGru4UgxU1V4KiYNuyLMRwBt' } ``` **Expected behavior** We expect either the documentation to state that `PriceList` is an array of objects (with its corresponding type being set this way in the type definition for Typescript) or for `PriceList` being returned as a JSON-encoded string. **Screenshots** NA **Additional context** NA
non_defect
aws pricing getproducts returning pricelist as array instead of array confirm by changing to below to ensure that it s a bug i ve gone through and i ve checked and for answers i ve searched for and didn t find any solution describe the bug when using the method getproducts from the aws pricing class the states that the expected return type is an object with one of the keys being pricelist and its respective value being an array of strings however when using this method the type of the value associated with the pricelist key is actually an array of objects this does not impact our ability of using this method however it does affect the type checking provided by typescript is the issue in the browser node js node js if on node js are you running this on aws lambda no details of the browser node js version sdk version number to reproduce observed behavior code to reproduce the behavior const aws require aws sdk const pricing new aws pricing region us east const params filters field location type term match value us east n virginia field preinstalledsw type term match value na field operatingsystem type term match value linux field tenancy type term match value shared field usagetype type term match value boxusage formatversion aws maxresults servicecode pricing getproducts params err data if err console log err else console log data output formatversion aws pricelist product servicecode terms version publicationdate nexttoken vjja xg expected behavior we expect either the documentation to state that pricelist is an array of objects with its corresponding type being set this way in the type definition for typescript or for pricelist being returned as a json encoded string screenshots na additional context na
0
193,477
6,885,288,547
IssuesEvent
2017-11-21 15:41:37
molgenis/molgenis
https://api.github.com/repos/molgenis/molgenis
closed
Selecting Questionnaire plugin for Chr6 displays error
2.0 bug mod:questionnaires priority-next
#### Reproduce - Import the Chr6 EMX questionnaire (will attach later if allowed, else ask me) - Select the 'Questionnaire' plugin in the 'Plugins' menu #### Expected - I see the Chr6 questionnaire and can start data entry #### Observed - Freemarker stacktrace: ``` org.springframework.web.util.NestedServletException: Request processing failed; nested exception is org.molgenis.data.validation.MolgenisValidationException: The attribute 'gender' of entity 'chromome6_i_l' can not be null. at org.springframework.web.servlet.FrameworkServlet.processRequest(FrameworkServlet.java:979) at org.springframework.web.servlet.FrameworkServlet.doGet(FrameworkServlet.java:858) at javax.servlet.http.HttpServlet.service(HttpServlet.java:687) ```
1.0
Selecting Questionnaire plugin for Chr6 displays error - #### Reproduce - Import the Chr6 EMX questionnaire (will attach later if allowed, else ask me) - Select the 'Questionnaire' plugin in the 'Plugins' menu #### Expected - I see the Chr6 questionnaire and can start data entry #### Observed - Freemarker stacktrace: ``` org.springframework.web.util.NestedServletException: Request processing failed; nested exception is org.molgenis.data.validation.MolgenisValidationException: The attribute 'gender' of entity 'chromome6_i_l' can not be null. at org.springframework.web.servlet.FrameworkServlet.processRequest(FrameworkServlet.java:979) at org.springframework.web.servlet.FrameworkServlet.doGet(FrameworkServlet.java:858) at javax.servlet.http.HttpServlet.service(HttpServlet.java:687) ```
non_defect
selecting questionnaire plugin for displays error reproduce import the emx questionnaire will attach later if allowed else ask me select the questionnaire plugin in the plugins menu expected i see the questionnaire and can start data entry observed freemarker stacktrace org springframework web util nestedservletexception request processing failed nested exception is org molgenis data validation molgenisvalidationexception the attribute gender of entity i l can not be null at org springframework web servlet frameworkservlet processrequest frameworkservlet java at org springframework web servlet frameworkservlet doget frameworkservlet java at javax servlet http httpservlet service httpservlet java
0
20,060
3,293,393,144
IssuesEvent
2015-10-30 18:43:24
mehlon/acme-sac
https://api.github.com/repos/mehlon/acme-sac
closed
fail to compile on ubuntu 12.04 x64
auto-migrated Priority-Medium Type-Defect
``` here is the error message when compiling codes from from the repo: here's the error from gcc: 9l -o o.emu asm-386.o os.o win-x11a.o emu.root.o lock.o devroot.o devcons.o devenv.o devmnt.o devpipe.o devprog.o devprof.o devsrv.o devdup.o devssl.o devcap.o devfs.o devcmd.o cmd.o devtab.o devindir.o devdraw.o devpointer.o devsnarf.o devwmsz.o devip.o ipif6-posix.o ipaux.o deveia.o devaudio.o audio-oss.o devmem.o srv.o alloc.o cache.o chan.o dev.o dial.o dis.o discall.o env.o error.o errstr.o exception.o exportfs.o inferno.o kproc-pthreads.o latin1.o main.o parse.o pgrp.o print.o proc.o qio.o random.o segflush-386.o sysfile.o uqid.o emu.o ../../lib/libinterp.a ../../lib/libmath.a ../../lib/libdraw.a ../../lib/libmemlayer.a ../../lib/libmemdraw.a ../../lib/libkeyring.a ../../lib/libsec.a ../../lib/libmp.a ../../lib/lib9.a -lm -lX11 -lXext -lpthread gcc: error: asm-386.o: No such file or directory gcc: error: ../../lib/libmath.a: No such file or directory ``` Original issue reported on code.google.com by `isaac...@gmail.com` on 22 May 2013 at 3:59
1.0
fail to compile on ubuntu 12.04 x64 - ``` here is the error message when compiling codes from from the repo: here's the error from gcc: 9l -o o.emu asm-386.o os.o win-x11a.o emu.root.o lock.o devroot.o devcons.o devenv.o devmnt.o devpipe.o devprog.o devprof.o devsrv.o devdup.o devssl.o devcap.o devfs.o devcmd.o cmd.o devtab.o devindir.o devdraw.o devpointer.o devsnarf.o devwmsz.o devip.o ipif6-posix.o ipaux.o deveia.o devaudio.o audio-oss.o devmem.o srv.o alloc.o cache.o chan.o dev.o dial.o dis.o discall.o env.o error.o errstr.o exception.o exportfs.o inferno.o kproc-pthreads.o latin1.o main.o parse.o pgrp.o print.o proc.o qio.o random.o segflush-386.o sysfile.o uqid.o emu.o ../../lib/libinterp.a ../../lib/libmath.a ../../lib/libdraw.a ../../lib/libmemlayer.a ../../lib/libmemdraw.a ../../lib/libkeyring.a ../../lib/libsec.a ../../lib/libmp.a ../../lib/lib9.a -lm -lX11 -lXext -lpthread gcc: error: asm-386.o: No such file or directory gcc: error: ../../lib/libmath.a: No such file or directory ``` Original issue reported on code.google.com by `isaac...@gmail.com` on 22 May 2013 at 3:59
defect
fail to compile on ubuntu here is the error message when compiling codes from from the repo here s the error from gcc o o emu asm o os o win o emu root o lock o devroot o devcons o devenv o devmnt o devpipe o devprog o devprof o devsrv o devdup o devssl o devcap o devfs o devcmd o cmd o devtab o devindir o devdraw o devpointer o devsnarf o devwmsz o devip o posix o ipaux o deveia o devaudio o audio oss o devmem o srv o alloc o cache o chan o dev o dial o dis o discall o env o error o errstr o exception o exportfs o inferno o kproc pthreads o o main o parse o pgrp o print o proc o qio o random o segflush o sysfile o uqid o emu o lib libinterp a lib libmath a lib libdraw a lib libmemlayer a lib libmemdraw a lib libkeyring a lib libsec a lib libmp a lib a lm lxext lpthread gcc error asm o no such file or directory gcc error lib libmath a no such file or directory original issue reported on code google com by isaac gmail com on may at
1
7,405
2,610,366,797
IssuesEvent
2015-02-26 19:58:30
chrsmith/scribefire-chrome
https://api.github.com/repos/chrsmith/scribefire-chrome
closed
Won't authenticate to my wordpress blog...
auto-migrated Priority-Medium Type-Defect
``` What's the problem? When I go through the connect a blog process, it seems to authenticate on the pop-up window, then comes up with a 'well this is embarrasing' message in the scribefire browser window... What browser are you using? Firefox 21 on linux What version of ScribeFire are you running? Latest linux.. ``` ----- Original issue reported on code.google.com by `nathansu...@gmail.com` on 27 Jun 2013 at 10:05
1.0
Won't authenticate to my wordpress blog... - ``` What's the problem? When I go through the connect a blog process, it seems to authenticate on the pop-up window, then comes up with a 'well this is embarrasing' message in the scribefire browser window... What browser are you using? Firefox 21 on linux What version of ScribeFire are you running? Latest linux.. ``` ----- Original issue reported on code.google.com by `nathansu...@gmail.com` on 27 Jun 2013 at 10:05
defect
won t authenticate to my wordpress blog what s the problem when i go through the connect a blog process it seems to authenticate on the pop up window then comes up with a well this is embarrasing message in the scribefire browser window what browser are you using firefox on linux what version of scribefire are you running latest linux original issue reported on code google com by nathansu gmail com on jun at
1
320,443
9,781,001,172
IssuesEvent
2019-06-07 18:30:26
kubernetes/minikube
https://api.github.com/repos/kubernetes/minikube
closed
xhyve suggests obsolete WantShowDriverDeprecationNotification setting
co/xhyve good first issue help wanted kind/bug priority/backlog r/2019q2
minikube v1.0.0 I use `minikube start --vm-driver xhyve` which tells me ``` ⚠️ The xhyve driver is deprecated and support for it will be removed in a future release. Please consider switching to the hyperkit driver, which is intended to replace the xhyve driver. See https://github.com/kubernetes/minikube/blob/master/docs/drivers.md#hyperkit-driver for more information. To disable this message, run [minikube config set WantShowDriverDeprecationNotification false] ``` but ``` $ minikube config set WantShowDriverDeprecationNotification false 💣 Set failed: Property name WantShowDriverDeprecationNotification not found ```
1.0
xhyve suggests obsolete WantShowDriverDeprecationNotification setting - minikube v1.0.0 I use `minikube start --vm-driver xhyve` which tells me ``` ⚠️ The xhyve driver is deprecated and support for it will be removed in a future release. Please consider switching to the hyperkit driver, which is intended to replace the xhyve driver. See https://github.com/kubernetes/minikube/blob/master/docs/drivers.md#hyperkit-driver for more information. To disable this message, run [minikube config set WantShowDriverDeprecationNotification false] ``` but ``` $ minikube config set WantShowDriverDeprecationNotification false 💣 Set failed: Property name WantShowDriverDeprecationNotification not found ```
non_defect
xhyve suggests obsolete wantshowdriverdeprecationnotification setting minikube i use minikube start vm driver xhyve which tells me ⚠️ the xhyve driver is deprecated and support for it will be removed in a future release please consider switching to the hyperkit driver which is intended to replace the xhyve driver see for more information to disable this message run but minikube config set wantshowdriverdeprecationnotification false 💣 set failed property name wantshowdriverdeprecationnotification not found
0
462,166
13,242,261,353
IssuesEvent
2020-08-19 09:28:54
weaveworks/eksctl
https://api.github.com/repos/weaveworks/eksctl
closed
eksctl command to update mapAccounts
help wanted kind/feature kind/help low-hanging-fruit priority/important-longterm
I want to know the eksctl command to update mapAccounts in aws-auth configmap. For mapUsers and mapRoles,we have eksctl create iamidentitymapping command. For mapAccounts,what is the eksctl command? If there is no command,then how to add multiple accounts under mapAccounts in a programmatic way? `kubectl patch configmap aws-auth -n kube-system --type merge -p '{"data":{"mapAccounts":"[\"12345\",\"56789\"]"}}` The Result is, ` mapAccounts: '["12345","56789"]'` Expected Result: ``` mapAccounts: | - "12345" - "56789" ```
1.0
eksctl command to update mapAccounts - I want to know the eksctl command to update mapAccounts in aws-auth configmap. For mapUsers and mapRoles,we have eksctl create iamidentitymapping command. For mapAccounts,what is the eksctl command? If there is no command,then how to add multiple accounts under mapAccounts in a programmatic way? `kubectl patch configmap aws-auth -n kube-system --type merge -p '{"data":{"mapAccounts":"[\"12345\",\"56789\"]"}}` The Result is, ` mapAccounts: '["12345","56789"]'` Expected Result: ``` mapAccounts: | - "12345" - "56789" ```
non_defect
eksctl command to update mapaccounts i want to know the eksctl command to update mapaccounts in aws auth configmap for mapusers and maproles we have eksctl create iamidentitymapping command for mapaccounts what is the eksctl command if there is no command then how to add multiple accounts under mapaccounts in a programmatic way kubectl patch configmap aws auth n kube system type merge p data mapaccounts the result is mapaccounts expected result mapaccounts
0
45,328
11,633,228,302
IssuesEvent
2020-02-28 07:42:56
GoogleContainerTools/skaffold
https://api.github.com/repos/GoogleContainerTools/skaffold
opened
Custom artifacts with a dependency on a Dockerfile should support inferred sync
area/sync build/custom
When a custom artifact as a dependency on a Dockerfile, we can assume the custom script is used to run something similar to `docker build`, with non supported flags or using a different tool. In that case, it should be safe to support inferred sync mode.
1.0
Custom artifacts with a dependency on a Dockerfile should support inferred sync - When a custom artifact as a dependency on a Dockerfile, we can assume the custom script is used to run something similar to `docker build`, with non supported flags or using a different tool. In that case, it should be safe to support inferred sync mode.
non_defect
custom artifacts with a dependency on a dockerfile should support inferred sync when a custom artifact as a dependency on a dockerfile we can assume the custom script is used to run something similar to docker build with non supported flags or using a different tool in that case it should be safe to support inferred sync mode
0
75,582
25,928,470,548
IssuesEvent
2022-12-16 07:44:53
SeleniumHQ/selenium
https://api.github.com/repos/SeleniumHQ/selenium
closed
[🐛 Bug]: DevTools - RemoteObject.Type.OBJECT properties in consoleEvents are not properly mapped and lost (null) on listener call
C-java I-defect I-issue-template C-devtools
### What happened? DevTools console entries fetched by Selenium DevTools support have value equals to "null" if are RemoteObject.Type.OBJECT Through debugging Selenium's code it seems that in this [place](https://github.com/SeleniumHQ/selenium/blob/7a469e02e14e44b0173b31b3b9baa1bfe57591ba/java/src/org/openqa/selenium/devtools/v106/V106Events.java#L68) mapping is incorrectly performed and getValue always defaults to null. I would like to log DevTools messages as 'they are' for further debugging in case of application defect occurence. Look at attached images - here is my tested application devtools console tab with Intellij's debug on breakpoint ![selenium_devtools_consoleEvent_serialization-highlighted](https://user-images.githubusercontent.com/24297270/206179523-305b9e31-4814-4291-a805-69b4892953f3.png) ModifiedArgs are later forwarded to customListener and original args are no longer available. ![selenium_devtools_consoleEvent_serialization-line_later](https://user-images.githubusercontent.com/24297270/206178638-5b27b01e-c6a9-4d60-b50a-a95db16bfe87.png) ### How can we reproduce the issue? 
```shell import io.github.bonigarcia.wdm.WebDriverManager; import org.openqa.selenium.JavascriptExecutor; import org.openqa.selenium.WebDriver; import org.openqa.selenium.chrome.ChromeDriver; import org.openqa.selenium.devtools.HasDevTools; import org.openqa.selenium.devtools.v106.log.Log; import org.openqa.selenium.devtools.v106.network.Network; import java.util.Optional; class DevToolsEventsTest { public static void main(String[] args) { WebDriver driver = null; try { //Init WebDriver WebDriverManager.chromedriver().setup(); driver = new ChromeDriver(); //Init DevTools ((HasDevTools) driver).maybeGetDevTools().ifPresent(devTools -> { devTools.createSessionIfThereIsNotOne(); devTools.send(Network.enable(Optional.empty(), Optional.empty(), Optional.empty())); devTools.send(Log.enable()); //Add Listeners devTools.getDomains().events().addConsoleListener(consoleEvent -> System.out.printf("[consoleEvent] %s%n", consoleEvent)); }); //Trigger consoleEvents ((JavascriptExecutor) driver).executeScript("console.log(\"simple\", 1, \"test\", true)"); ((JavascriptExecutor) driver).executeScript("console.log(\"array\", [1, 2, 3])"); ((JavascriptExecutor) driver).executeScript("console.log(\"object\", { a: 1 })"); ((JavascriptExecutor) driver).executeScript("console.log(\"nested object\", { a: { b: 1 } })"); Thread.sleep(5000); } catch (InterruptedException e) { throw new RuntimeException(e); } finally { if (driver != null) { driver.quit(); } } } } ``` ### Relevant log output ```shell > Task :DevToolsEventsTest.main() 12:39:07 [main] [] DEBUG ResolutionCache - Resolution chrome=108 in cache (valid until 13:07:27 09/12/2022 CET) 12:39:07 [main] [] DEBUG ResolutionCache - Resolution chrome108=108.0.5359.71 in cache (valid until 16:36:58 09/12/2022 CET) 12:39:07 [main] [] INFO WebDriverManager - Using chromedriver 108.0.5359.71 (resolved driver for Chrome 108) 12:39:07 [main] [] DEBUG WebDriverManager - Driver chromedriver 108.0.5359.71 found in cache 12:39:07 [main] [] INFO 
WebDriverManager - Exporting webdriver.chrome.driver as C:\Users\Wojciech\.cache\selenium\chromedriver\win32\108.0.5359.71\chromedriver.exe Starting ChromeDriver 108.0.5359.71 (1e0e3868ee06e91ad636a874420e3ca3ae3756ac-refs/branch-heads/5359@{#1016}) on port 54167 Only local connections are allowed. Please see https://chromedriver.chromium.org/security-considerations for suggestions on keeping ChromeDriver safe. ChromeDriver was started successfully. Dec 09, 2022 12:39:11 PM org.openqa.selenium.remote.ProtocolHandshake createSession INFO: Detected upstream dialect: W3C Dec 09, 2022 12:39:11 PM org.openqa.selenium.devtools.CdpVersionFinder findNearestMatch WARNING: Unable to find an exact match for CDP version 108, so returning the closest version found: 106 Dec 09, 2022 12:39:11 PM org.openqa.selenium.devtools.CdpVersionFinder findNearestMatch INFO: Found CDP implementation for version 108 of 106 [consoleEvent] 2022-12-09T11:39:12.035Z [log] [["simple", 1, "test", true]] [consoleEvent] 2022-12-09T11:39:12.095Z [log] [["object", null]] [consoleEvent] 2022-12-09T11:39:12.062Z [log] [["array", null]] [consoleEvent] 2022-12-09T11:39:12.141Z [log] [["nested object", null]] ``` ### Operating System Windows 10 ### Selenium version Java 4.5.0 ### What are the browser(s) and version(s) where you see this issue? Chrome 108 ### What are the browser driver(s) and version(s) where you see this issue? ChromeDriver 108.0.5359.71 ### Are you using Selenium Grid? No
1.0
[🐛 Bug]: DevTools - RemoteObject.Type.OBJECT properties in consoleEvents are not properly mapped and lost (null) on listener call - ### What happened? DevTools console entries fetched by Selenium DevTools support have value equals to "null" if are RemoteObject.Type.OBJECT Through debugging Selenium's code it seems that in this [place](https://github.com/SeleniumHQ/selenium/blob/7a469e02e14e44b0173b31b3b9baa1bfe57591ba/java/src/org/openqa/selenium/devtools/v106/V106Events.java#L68) mapping is incorrectly performed and getValue always defaults to null. I would like to log DevTools messages as 'they are' for further debugging in case of application defect occurence. Look at attached images - here is my tested application devtools console tab with Intellij's debug on breakpoint ![selenium_devtools_consoleEvent_serialization-highlighted](https://user-images.githubusercontent.com/24297270/206179523-305b9e31-4814-4291-a805-69b4892953f3.png) ModifiedArgs are later forwarded to customListener and original args are no longer available. ![selenium_devtools_consoleEvent_serialization-line_later](https://user-images.githubusercontent.com/24297270/206178638-5b27b01e-c6a9-4d60-b50a-a95db16bfe87.png) ### How can we reproduce the issue? 
```shell import io.github.bonigarcia.wdm.WebDriverManager; import org.openqa.selenium.JavascriptExecutor; import org.openqa.selenium.WebDriver; import org.openqa.selenium.chrome.ChromeDriver; import org.openqa.selenium.devtools.HasDevTools; import org.openqa.selenium.devtools.v106.log.Log; import org.openqa.selenium.devtools.v106.network.Network; import java.util.Optional; class DevToolsEventsTest { public static void main(String[] args) { WebDriver driver = null; try { //Init WebDriver WebDriverManager.chromedriver().setup(); driver = new ChromeDriver(); //Init DevTools ((HasDevTools) driver).maybeGetDevTools().ifPresent(devTools -> { devTools.createSessionIfThereIsNotOne(); devTools.send(Network.enable(Optional.empty(), Optional.empty(), Optional.empty())); devTools.send(Log.enable()); //Add Listeners devTools.getDomains().events().addConsoleListener(consoleEvent -> System.out.printf("[consoleEvent] %s%n", consoleEvent)); }); //Trigger consoleEvents ((JavascriptExecutor) driver).executeScript("console.log(\"simple\", 1, \"test\", true)"); ((JavascriptExecutor) driver).executeScript("console.log(\"array\", [1, 2, 3])"); ((JavascriptExecutor) driver).executeScript("console.log(\"object\", { a: 1 })"); ((JavascriptExecutor) driver).executeScript("console.log(\"nested object\", { a: { b: 1 } })"); Thread.sleep(5000); } catch (InterruptedException e) { throw new RuntimeException(e); } finally { if (driver != null) { driver.quit(); } } } } ``` ### Relevant log output ```shell > Task :DevToolsEventsTest.main() 12:39:07 [main] [] DEBUG ResolutionCache - Resolution chrome=108 in cache (valid until 13:07:27 09/12/2022 CET) 12:39:07 [main] [] DEBUG ResolutionCache - Resolution chrome108=108.0.5359.71 in cache (valid until 16:36:58 09/12/2022 CET) 12:39:07 [main] [] INFO WebDriverManager - Using chromedriver 108.0.5359.71 (resolved driver for Chrome 108) 12:39:07 [main] [] DEBUG WebDriverManager - Driver chromedriver 108.0.5359.71 found in cache 12:39:07 [main] [] INFO 
WebDriverManager - Exporting webdriver.chrome.driver as C:\Users\Wojciech\.cache\selenium\chromedriver\win32\108.0.5359.71\chromedriver.exe Starting ChromeDriver 108.0.5359.71 (1e0e3868ee06e91ad636a874420e3ca3ae3756ac-refs/branch-heads/5359@{#1016}) on port 54167 Only local connections are allowed. Please see https://chromedriver.chromium.org/security-considerations for suggestions on keeping ChromeDriver safe. ChromeDriver was started successfully. Dec 09, 2022 12:39:11 PM org.openqa.selenium.remote.ProtocolHandshake createSession INFO: Detected upstream dialect: W3C Dec 09, 2022 12:39:11 PM org.openqa.selenium.devtools.CdpVersionFinder findNearestMatch WARNING: Unable to find an exact match for CDP version 108, so returning the closest version found: 106 Dec 09, 2022 12:39:11 PM org.openqa.selenium.devtools.CdpVersionFinder findNearestMatch INFO: Found CDP implementation for version 108 of 106 [consoleEvent] 2022-12-09T11:39:12.035Z [log] [["simple", 1, "test", true]] [consoleEvent] 2022-12-09T11:39:12.095Z [log] [["object", null]] [consoleEvent] 2022-12-09T11:39:12.062Z [log] [["array", null]] [consoleEvent] 2022-12-09T11:39:12.141Z [log] [["nested object", null]] ``` ### Operating System Windows 10 ### Selenium version Java 4.5.0 ### What are the browser(s) and version(s) where you see this issue? Chrome 108 ### What are the browser driver(s) and version(s) where you see this issue? ChromeDriver 108.0.5359.71 ### Are you using Selenium Grid? No
defect
devtools remoteobject type object properties in consoleevents are not properly mapped and lost null on listener call what happened devtools console entries fetched by selenium devtools support have value equals to null if are remoteobject type object through debugging selenium s code it seems that in this mapping is incorrectly performed and getvalue always defaults to null i would like to log devtools messages as they are for further debugging in case of application defect occurence look at attached images here is my tested application devtools console tab with intellij s debug on breakpoint modifiedargs are later forwarded to customlistener and original args are no longer available how can we reproduce the issue shell import io github bonigarcia wdm webdrivermanager import org openqa selenium javascriptexecutor import org openqa selenium webdriver import org openqa selenium chrome chromedriver import org openqa selenium devtools hasdevtools import org openqa selenium devtools log log import org openqa selenium devtools network network import java util optional class devtoolseventstest public static void main string args webdriver driver null try init webdriver webdrivermanager chromedriver setup driver new chromedriver init devtools hasdevtools driver maybegetdevtools ifpresent devtools devtools createsessionifthereisnotone devtools send network enable optional empty optional empty optional empty devtools send log enable add listeners devtools getdomains events addconsolelistener consoleevent system out printf s n consoleevent trigger consoleevents javascriptexecutor driver executescript console log simple test true javascriptexecutor driver executescript console log array javascriptexecutor driver executescript console log object a javascriptexecutor driver executescript console log nested object a b thread sleep catch interruptedexception e throw new runtimeexception e finally if driver null driver quit relevant log output shell task devtoolseventstest main 
debug resolutioncache resolution chrome in cache valid until cet debug resolutioncache resolution in cache valid until cet info webdrivermanager using chromedriver resolved driver for chrome debug webdrivermanager driver chromedriver found in cache info webdrivermanager exporting webdriver chrome driver as c users wojciech cache selenium chromedriver chromedriver exe starting chromedriver refs branch heads on port only local connections are allowed please see for suggestions on keeping chromedriver safe chromedriver was started successfully dec pm org openqa selenium remote protocolhandshake createsession info detected upstream dialect dec pm org openqa selenium devtools cdpversionfinder findnearestmatch warning unable to find an exact match for cdp version so returning the closest version found dec pm org openqa selenium devtools cdpversionfinder findnearestmatch info found cdp implementation for version of operating system windows selenium version java what are the browser s and version s where you see this issue chrome what are the browser driver s and version s where you see this issue chromedriver are you using selenium grid no
1
11,430
2,651,518,126
IssuesEvent
2015-03-16 12:08:54
douglasdrumond/cloaked-computing-machine
https://api.github.com/repos/douglasdrumond/cloaked-computing-machine
opened
[CLOSED] visual block and line mode don't work
auto-migrated Priority-Medium Type-Defect
<a href="https://github.com/GoogleCodeExporter"><img src="https://avatars.githubusercontent.com/u/9614759?v=3" align="left" width="96" height="96" hspace="10"></img></a> **Issue by [GoogleCodeExporter](https://github.com/GoogleCodeExporter)** _Monday Mar 16, 2015 at 09:15 GMT_ _Originally opened as https://github.com/douglasdrumond/macvim/issues/16_ ---- ``` What steps will reproduce the problem? 1. enter text 2. use C-v or V to enter selection mode 3. move the cursor with the arrow keys. Selection is lost. Also for some reason entering a key with a selection puts you in replace mode. Ie you can't select then do esc y to copy Why? What version of the product are you using? On what operating system? 10.4.10 ``` Original issue reported on code.google.com by `georgeha...@gmail.com` on 16 Sep 2007 at 7:55
1.0
[CLOSED] visual block and line mode don't work - <a href="https://github.com/GoogleCodeExporter"><img src="https://avatars.githubusercontent.com/u/9614759?v=3" align="left" width="96" height="96" hspace="10"></img></a> **Issue by [GoogleCodeExporter](https://github.com/GoogleCodeExporter)** _Monday Mar 16, 2015 at 09:15 GMT_ _Originally opened as https://github.com/douglasdrumond/macvim/issues/16_ ---- ``` What steps will reproduce the problem? 1. enter text 2. use C-v or V to enter selection mode 3. move the cursor with the arrow keys. Selection is lost. Also for some reason entering a key with a selection puts you in replace mode. Ie you can't select then do esc y to copy Why? What version of the product are you using? On what operating system? 10.4.10 ``` Original issue reported on code.google.com by `georgeha...@gmail.com` on 16 Sep 2007 at 7:55
defect
visual block and line mode don t work issue by monday mar at gmt originally opened as what steps will reproduce the problem enter text use c v or v to enter selection mode move the cursor with the arrow keys selection is lost also for some reason entering a key with a selection puts you in replace mode ie you can t select then do esc y to copy why what version of the product are you using on what operating system original issue reported on code google com by georgeha gmail com on sep at
1
533,606
15,594,586,924
IssuesEvent
2021-03-18 14:03:41
tud-zih-energy/lo2s
https://api.github.com/repos/tud-zih-energy/lo2s
opened
Add a replacement for {PID} in the lo2s trace output name
enhancement low priority
Running lo2s with this command `lo2s -o lo2s_trace_{PID} -- <app>` should create an output trace with the PID of the root process. Similarly, when attaching to a process using `lo2s -o lo2s_trace_{PID} -p <pid>`, the output trace should contain the PID of the attached process. Obviously, in system-monitoring mode, the PID does not make sense, thus lo2s should complain about it and exit. From a technical standpoint, this is complicated, because the PID is only known once the observed process was spawned, but at that point down the initialization, the otf2 trace was already created. Thus, the name is already determined before the time the PID is knowable.
1.0
Add a replacement for {PID} in the lo2s trace output name - Running lo2s with this command `lo2s -o lo2s_trace_{PID} -- <app>` should create an output trace with the PID of the root process. Similarly, when attaching to a process using `lo2s -o lo2s_trace_{PID} -p <pid>`, the output trace should contain the PID of the attached process. Obviously, in system-monitoring mode, the PID does not make sense, thus lo2s should complain about it and exit. From a technical standpoint, this is complicated, because the PID is only known once the observed process was spawned, but at that point down the initialization, the otf2 trace was already created. Thus, the name is already determined before the time the PID is knowable.
non_defect
add a replacement for pid in the trace output name running with this command o trace pid should create an output trace with the pid of the root process similarly when attaching to a process using o trace pid p the output trace should contain the pid of the attached process obviously in system monitoring mode the pid does not make sense thus should complain about it and exit from a technical standpoint this is complicated because the pid is only known once the observed process was spawned but at that point down the initialization the trace was already created thus the name is already determined before the time the pid is knowable
0
387,043
11,455,048,195
IssuesEvent
2020-02-06 18:17:42
mozilla/DeepSpeech
https://api.github.com/repos/mozilla/DeepSpeech
reopened
Make model convertible by CoreML
Priority: P4 enhancement
It would be wonderful if DeepSpeech models could be converted to CoreML, for offline use in apps. Here is documentation to do just that. https://developer.apple.com/documentation/coreml/converting_trained_models_to_core_ml Thanks!
1.0
Make model convertible by CoreML - It would be wonderful if DeepSpeech models could be converted to CoreML, for offline use in apps. Here is documentation to do just that. https://developer.apple.com/documentation/coreml/converting_trained_models_to_core_ml Thanks!
non_defect
make model convertible by coreml it would be wonderful if deepspeech models could be converted to coreml for offline use in apps here is documentation to do just that thanks
0
2,615
2,633,673,787
IssuesEvent
2015-03-09 06:52:32
go-rat/language-design
https://api.github.com/repos/go-rat/language-design
opened
New feature: Treat `generic` as a normal predeclared type.
design work
In previous spec, `generic` itself is not a type. There is no way to declare a type constant to hold `generic` as a type. Now I decide to eliminate this exception. Following code will be valid in Rat. ```go g := generic // OK. g is an alias of generic. const T1 g = int // OK. The same as `const T generic = int`. // Gen is a type based on generic type G generic const T2 G = int // OK. T1 == T2 // Compile-time error. T1 and T2 have different types. // OK. Define a method on G. func (g G) IsInt() bool { return g == int } T2.IsInt() // OK. This function will be compiled to a bool constant `true`. ``` If receiver is generic, all its methods will return constant values. So calling any non-constant function/method will cause compile time error.
1.0
New feature: Treat `generic` as a normal predeclared type. - In previous spec, `generic` itself is not a type. There is no way to declare a type constant to hold `generic` as a type. Now I decide to eliminate this exception. Following code will be valid in Rat. ```go g := generic // OK. g is an alias of generic. const T1 g = int // OK. The same as `const T generic = int`. // Gen is a type based on generic type G generic const T2 G = int // OK. T1 == T2 // Compile-time error. T1 and T2 have different types. // OK. Define a method on G. func (g G) IsInt() bool { return g == int } T2.IsInt() // OK. This function will be compiled to a bool constant `true`. ``` If receiver is generic, all its methods will return constant values. So calling any non-constant function/method will cause compile time error.
non_defect
new feature treat generic as a normal predeclared type in previous spec generic itself is not a type there is no way to declare a type constant to hold generic as a type now i decide to eliminate this exception following code will be valid in rat go g generic ok g is an alias of generic const g int ok the same as const t generic int gen is a type based on generic type g generic const g int ok compile time error and have different types ok define a method on g func g g isint bool return g int isint ok this function will be compiled to a bool constant true if receiver is generic all its methods will return constant values so calling any non constant function method will cause compile time error
0
68,287
28,311,533,055
IssuesEvent
2023-04-10 15:50:26
amplication/amplication
https://api.github.com/repos/amplication/amplication
closed
As User - I want the service wizard to populate for me the server and admin 'Base Directories'
epic: Service Creation impact: User Experience
As User - I want the service wizard to populate for me the server and admin 'Base Directories' when I am selecting to use a monorepo. ![Image](https://user-images.githubusercontent.com/112330500/226311193-6d183d97-deaf-4f46-9907-e31475a27cd4.png) **Requirements:** 1. The service creation wizard should have an input field for the user to enter a name for the main folder. 2. The default value for the main folder name should be "apps". 3. The input field for the main folder name should have a watermark displaying "./" before the name the user needs to fill. 4. When the user enters the main folder name, the server base directory should be generated as "./[main-folder]/[service-name]". 5. When the user enters the main folder name, the admin-ui base directory should be generated as "./[main-folder]/[service-name]-admin". * Notice the append of "-admin" at the end
1.0
As User - I want the service wizard to populate for me the server and admin 'Base Directories' - As User - I want the service wizard to populate for me the server and admin 'Base Directories' when I am selecting to use a monorepo. ![Image](https://user-images.githubusercontent.com/112330500/226311193-6d183d97-deaf-4f46-9907-e31475a27cd4.png) **Requirements:** 1. The service creation wizard should have an input field for the user to enter a name for the main folder. 2. The default value for the main folder name should be "apps". 3. The input field for the main folder name should have a watermark displaying "./" before the name the user needs to fill. 4. When the user enters the main folder name, the server base directory should be generated as "./[main-folder]/[service-name]". 5. When the user enters the main folder name, the admin-ui base directory should be generated as "./[main-folder]/[service-name]-admin". * Notice the append of "-admin" at the end
non_defect
as user i want the service wizard to populate for me the server and admin base directories as user i want the service wizard to populate for me the server and admin base directories when i am selecting to use a monorepo requirements the service creation wizard should have an input field for the user to enter a name for the main folder the default value for the main folder name should be apps the input field for the main folder name should have a watermark displaying before the name the user needs to fill when the user enters the main folder name the server base directory should be generated as when the user enters the main folder name the admin ui base directory should be generated as admin notice the append of admin at the end
0
2,485
2,607,904,799
IssuesEvent
2015-02-26 00:15:10
chrsmithdemos/zen-coding
https://api.github.com/repos/chrsmithdemos/zen-coding
closed
Enhancement, quick, complex, multiplication
auto-migrated Priority-Medium Type-Defect
``` The ability to multiply the contents of something in parentheses would be great. IE. (input>div + a)*3 should turn into <input><div></div><a></a></input> <input><div></div><a></a></input> <input><div></div><a></a></input> ``` ----- Original issue reported on code.google.com by `cog...@gmail.com` on 17 May 2010 at 6:21 * Merged into: #155
1.0
Enhancement, quick, complex, multiplication - ``` The ability to multiply the contents of something in parentheses would be great. IE. (input>div + a)*3 should turn into <input><div></div><a></a></input> <input><div></div><a></a></input> <input><div></div><a></a></input> ``` ----- Original issue reported on code.google.com by `cog...@gmail.com` on 17 May 2010 at 6:21 * Merged into: #155
defect
enhancement quick complex multiplication the ability to multiply the contents of something in parentheses would be great ie input div a should turn into original issue reported on code google com by cog gmail com on may at merged into
1
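Comparing the combined title+body field with the preprocessed text field in the records above (e.g. the multiplication-enhancement record just shown) suggests the cleaning applied to this corpus: lowercasing, URL removal, stripping of punctuation and digits, and whitespace collapsing. This is a minimal sketch inferred from those pairs — the actual pipeline is not documented in the dump, and the `normalize` helper below is hypothetical:

```python
import re

# Rough reconstruction of the normalization that appears to turn the
# "text_combine" field into the cleaned "text" field (an inference from
# comparing the two fields in the surrounding records, not a documented
# pipeline): lowercase, drop URLs, strip non-letter characters (which also
# removes digits and version numbers), collapse whitespace.
def normalize(s: str) -> str:
    s = s.lower()
    s = re.sub(r"https?://\S+", " ", s)   # URLs are absent from the cleaned field
    s = re.sub(r"[^a-z\s]", " ", s)       # punctuation and digits are stripped
    return " ".join(s.split())

# e.g. the title of the record above:
normalize("Enhancement, quick, complex, multiplication")
# → "enhancement quick complex multiplication"
```

The output matches the cleaned title stored in that record, which is what motivated this particular reconstruction.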
540,544
15,813,309,447
IssuesEvent
2021-04-05 07:26:26
wso2/product-apim
https://api.github.com/repos/wso2/product-apim
closed
Unexpected error massage
API-M 4.0.0 General Priority/Normal Type/Bug
### Description: Gives the following response when an unexpected character is in the password ![Screenshot from 2021-03-30 20-09-58](https://user-images.githubusercontent.com/17605554/113008741-ee0dc800-9194-11eb-8568-a0ef894e963e.png) ### Steps to reproduce: 1. login to carbon as admin 2. create user with username abc@gmail.com, password abc@gmail.com ### Affected Product Version: <!-- Members can use Affected/*** labels --> ### Environment details (with versions): - OS: - Client: - Env (Docker/K8s): --- ### Optional Fields #### Related Issues: <!-- Any related issues from this/other repositories--> #### Suggested Labels: <!--Only to be used by non-members--> #### Suggested Assignees: <!--Only to be used by non-members-->
1.0
Unexpected error massage - ### Description: Gives the following response when an unexpected character is in the password ![Screenshot from 2021-03-30 20-09-58](https://user-images.githubusercontent.com/17605554/113008741-ee0dc800-9194-11eb-8568-a0ef894e963e.png) ### Steps to reproduce: 1. login to carbon as admin 2. create user with username abc@gmail.com, password abc@gmail.com ### Affected Product Version: <!-- Members can use Affected/*** labels --> ### Environment details (with versions): - OS: - Client: - Env (Docker/K8s): --- ### Optional Fields #### Related Issues: <!-- Any related issues from this/other repositories--> #### Suggested Labels: <!--Only to be used by non-members--> #### Suggested Assignees: <!--Only to be used by non-members-->
non_defect
unexpected error massage description gives the following response when an unexpected character is in the password steps to reproduce login to carbon as admin create user with username abc gmail com password abc gmail com affected product version environment details with versions os client env docker optional fields related issues suggested labels suggested assignees
0
17,136
9,628,992,413
IssuesEvent
2019-05-15 08:36:11
filestack/filestack-js
https://api.github.com/repos/filestack/filestack-js
closed
`.upload` freezes the browser for a while
performance
Hi @velveteer, I realized that while uploading a file over a certain size, it makes the whole browser unresponsive. Here is the video ss of the issue: http://recordit.co/KC8Yvpru7z. To demonstrate it, I scrolled the JS side up and down all the time, but as you can realize it didn't scroll like 304 secs, also it didn't update the `seconds` in the profiler. It jumps from 7th sec to 12th sec. My initial thought is that there is an expensive computation, which blocks the UI, before uploading the chunks. But, this is just a guess. Let me know if you need more information around it, like a deeper profile etc.
True
`.upload` freezes the browser for a while - Hi @velveteer, I realized that while uploading a file over a certain size, it makes the whole browser unresponsive. Here is the video ss of the issue: http://recordit.co/KC8Yvpru7z. To demonstrate it, I scrolled the JS side up and down all the time, but as you can realize it didn't scroll like 304 secs, also it didn't update the `seconds` in the profiler. It jumps from 7th sec to 12th sec. My initial thought is that there is an expensive computation, which blocks the UI, before uploading the chunks. But, this is just a guess. Let me know if you need more information around it, like a deeper profile etc.
non_defect
upload freezes the browser for a while hi velveteer i realized that while uploading a file over a certain size it makes the whole browser unresponsive here is the video ss of the issue to demonstrate it i scrolled the js side up and down all the time but as you can realize it didn t scroll like secs also it didn t update the seconds in the profiler it jumps from sec to sec my initial thought is that there is an expensive computation which blocks the ui before uploading the chunks but this is just a guess let me know if you need more information around it like a deeper profile etc
0
811,134
30,275,785,095
IssuesEvent
2023-07-07 19:29:42
metabase/metabase
https://api.github.com/repos/metabase/metabase
closed
Fix styling for heading cards in dashboard subscriptions
Priority:P2 Reporting/Pulses .Backend .Team/PixelPolice :police_officer:
### Describe the bug Heading cards appear as plain text in dashboard subscriptions "Visual Charts" ![image](https://github.com/metabase/metabase/assets/22608765/9144421e-4ad6-44a9-8d95-3d32691ccb1b) ### To Reproduce **Heading** 1. Go to any dashboard 2. Click on edit 3. Add a heading card and add some text to the heading 4. Create dashboard subscription 5. Send dashboard subscription to an email you can view 6. See heading in plain text ### Expected behavior Should have the same styling as it does on a dashboard ### Fix Criteria ### Information about your Metabase installation ```JSON - Master ``` ### Severity P2
1.0
Fix styling for heading cards in dashboard subscriptions - ### Describe the bug Heading cards appear as plain text in dashboard subscriptions "Visual Charts" ![image](https://github.com/metabase/metabase/assets/22608765/9144421e-4ad6-44a9-8d95-3d32691ccb1b) ### To Reproduce **Heading** 1. Go to any dashboard 2. Click on edit 3. Add a heading card and add some text to the heading 4. Create dashboard subscription 5. Send dashboard subscription to an email you can view 6. See heading in plain text ### Expected behavior Should have the same styling as it does on a dashboard ### Fix Criteria ### Information about your Metabase installation ```JSON - Master ``` ### Severity P2
non_defect
fix styling for heading cards in dashboard subscriptions describe the bug heading cards appear as plain text in dashboard subscriptions visual charts to reproduce heading go to any dashboard click on edit add a heading card and add some text to the heading create dashboard subscription send dashboard subscription to an email you can view see heading in plain text expected behavior should have the same styling as it does on a dashboard fix criteria information about your metabase installation json master severity
0
48,624
13,171,049,208
IssuesEvent
2020-08-11 16:02:22
mozilla/experimenter
https://api.github.com/repos/mozilla/experimenter
closed
Nimbus version comparisons in filter expressions don't work for non-Release
Defect Nimbus-Experimenter
During the initial launch test, we learned that the `versionCompare` filter treats versions like "80.0a1" (as used in Nightly) as less than "80.0" (as used in Release). Because of this we couldn't effectively target Nightly versions. We found a workaround, but it's something we should fix. When targeting version X and a non-Release channel, Experimenter should generate a filter expression that works.
1.0
Nimbus version comparisons in filter expressions don't work for non-Release - During the initial launch test, we learned that the `versionCompare` filter treats versions like "80.0a1" (as used in Nightly) as less than "80.0" (as used in Release). Because of this we couldn't effectively target Nightly versions. We found a workaround, but it's something we should fix. When targeting version X and a non-Release channel, Experimenter should generate a filter expression that works.
defect
nimbus version comparisons in filter expressions don t work for non release during the initial launch test we learned that the versioncompare filter treats versions like as used in nightly as less than as used in release because of this we couldn t effectively target nightly versions we found a workaround but it s something we should fix when targeting version x and a non release channel experimenter should generate a filter expression that works
1
110,047
9,428,540,241
IssuesEvent
2019-04-12 01:40:10
eatmyvenom/browserr
https://api.github.com/repos/eatmyvenom/browserr
closed
[FEATURE] Auto updater
Beauty In progress Needs platform tests New feature enhancement
## Current Behavior Currently the applications behavior is to have to re download for each update. I hate that ### Effects Makes the app annoying ## Wanted Behavior What I would like to see is an integrated auto updater ```NodeJS version : 11.1.0; ElectronJS version : 4.0.7; Browserr version : 1.0.0;```
1.0
[FEATURE] Auto updater - ## Current Behavior Currently the applications behavior is to have to re download for each update. I hate that ### Effects Makes the app annoying ## Wanted Behavior What I would like to see is an integrated auto updater ```NodeJS version : 11.1.0; ElectronJS version : 4.0.7; Browserr version : 1.0.0;```
non_defect
auto updater current behavior currently the applications behavior is to have to re download for each update i hate that effects makes the app annoying wanted behavior what i would like to see is an integrated auto updater nodejs version electronjs version browserr version
0
1,087
2,531,723,385
IssuesEvent
2015-01-23 10:06:24
HeikoBecker/Repograms
https://api.github.com/repos/HeikoBecker/Repograms
closed
Commit modularity metric
JavaScript metric need testing
**Metric description:** this metric captures the number of modules modified by a commit. Ideally, developers would use one commit per task. But, a poor developer may mix tasks or a poorly designed project may not provide sufficient modularity for a developer to make commits that are scoped to a single module. **Metric details:** the key challenge for this metric is to define the notion of a module for an arbitrary project being analyzed. In addition the list of modules might evolves during the life of the project. For a given commit we are gonna look at the directory paths (full path minus the filename) from all the files modified by the commit. For each combination of paths we compute to which extend they are similar by using the [Jaro–Winkler distance](http://en.wikipedia.org/wiki/Jaro%E2%80%93Winkler_distance). The metric value for the commit is the mean value of the distribution of similarity scores. **Pseudo algorithm:** ```java forall (Commit c : commits from the input repository){ int metricValue = 0; File[] filesMod = files modified by commit; if (filesMod.length == 1) { metricValue = 1; } else { List<double> similaritiesScores; for (int iFirstfile = 0; iFirstfile < (filesMod.length - 1); iFirstfile++) { for (int iSecondFile = (iFirstfile + 1); iSecondFile < filesMod.length; iSecondFile++) { // Extract directory paths from full paths String dm1 = extractDirPath(filesMod[iFirstfile]); String dm2 = extractDirPath(filesMod[iSecondFile]); similaritiesScores.add(computeJaroWinklerSimilarity(dm1, dm2)) } } metricValue = mean(similaritiesScores); } resultList.add(churn(c), metricValue); } ```` Input: one repository Output: list of pairs of values [(blen1,bmetric1),...,(blenN,bmetricN)], blen type is float, bmetric type is float. Post conditions: blen > 0, 0 <= bmetric >= 1, resultList.size() = repository.getCommits.size()
1.0
Commit modularity metric - **Metric description:** this metric captures the number of modules modified by a commit. Ideally, developers would use one commit per task. But, a poor developer may mix tasks or a poorly designed project may not provide sufficient modularity for a developer to make commits that are scoped to a single module. **Metric details:** the key challenge for this metric is to define the notion of a module for an arbitrary project being analyzed. In addition the list of modules might evolves during the life of the project. For a given commit we are gonna look at the directory paths (full path minus the filename) from all the files modified by the commit. For each combination of paths we compute to which extend they are similar by using the [Jaro–Winkler distance](http://en.wikipedia.org/wiki/Jaro%E2%80%93Winkler_distance). The metric value for the commit is the mean value of the distribution of similarity scores. **Pseudo algorithm:** ```java forall (Commit c : commits from the input repository){ int metricValue = 0; File[] filesMod = files modified by commit; if (filesMod.length == 1) { metricValue = 1; } else { List<double> similaritiesScores; for (int iFirstfile = 0; iFirstfile < (filesMod.length - 1); iFirstfile++) { for (int iSecondFile = (iFirstfile + 1); iSecondFile < filesMod.length; iSecondFile++) { // Extract directory paths from full paths String dm1 = extractDirPath(filesMod[iFirstfile]); String dm2 = extractDirPath(filesMod[iSecondFile]); similaritiesScores.add(computeJaroWinklerSimilarity(dm1, dm2)) } } metricValue = mean(similaritiesScores); } resultList.add(churn(c), metricValue); } ```` Input: one repository Output: list of pairs of values [(blen1,bmetric1),...,(blenN,bmetricN)], blen type is float, bmetric type is float. Post conditions: blen > 0, 0 <= bmetric >= 1, resultList.size() = repository.getCommits.size()
non_defect
commit modularity metric metric description this metric captures the number of modules modified by a commit ideally developers would use one commit per task but a poor developer may mix tasks or a poorly designed project may not provide sufficient modularity for a developer to make commits that are scoped to a single module metric details the key challenge for this metric is to define the notion of a module for an arbitrary project being analyzed in addition the list of modules might evolves during the life of the project for a given commit we are gonna look at the directory paths full path minus the filename from all the files modified by the commit for each combination of paths we compute to which extend they are similar by using the the metric value for the commit is the mean value of the distribution of similarity scores pseudo algorithm java forall commit c commits from the input repository int metricvalue file filesmod files modified by commit if filesmod length metricvalue else list similaritiesscores for int ifirstfile ifirstfile filesmod length ifirstfile for int isecondfile ifirstfile isecondfile filesmod length isecondfile extract directory paths from full paths string extractdirpath filesmod string extractdirpath filesmod similaritiesscores add computejarowinklersimilarity metricvalue mean similaritiesscores resultlist add churn c metricvalue input one repository output list of pairs of values blen type is float bmetric type is float post conditions blen resultlist size repository getcommits size
0
44,672
12,323,708,728
IssuesEvent
2020-05-13 12:37:54
jOOQ/jOOQ
https://api.github.com/repos/jOOQ/jOOQ
closed
DefaultRecordMapper incorrectly maps immutable Kotlin classes with defaulted properties
C: Functionality C: Integration: Kotlin E: All Editions P: Medium T: Defect
This Kotlin class cannot be mapped correctly by `DefaultRecordMapper`: ```kotlin data class T( val i: Int? = null , val j: String? = null ) ``` The `DefaultRecordMapper` finds a default constructor (no args constructor) and pretends the data class is mutable, then doesn't find any setters. The values are always `null`. The fact that this is a data class is probably irrelevant.
1.0
DefaultRecordMapper incorrectly maps immutable Kotlin classes with defaulted properties - This Kotlin class cannot be mapped correctly by `DefaultRecordMapper`: ```kotlin data class T( val i: Int? = null , val j: String? = null ) ``` The `DefaultRecordMapper` finds a default constructor (no args constructor) and pretends the data class is mutable, then doesn't find any setters. The values are always `null`. The fact that this is a data class is probably irrelevant.
defect
defaultrecordmapper incorrectly maps immutable kotlin classes with defaulted properties this kotlin class cannot be mapped correctly by defaultrecordmapper kotlin data class t val i int null val j string null the defaultrecordmapper finds a default constructor no args constructor and pretends the data class is mutable then doesn t find any setters the values are always null the fact that this is a data class is probably irrelevant
1
507,397
14,679,977,998
IssuesEvent
2020-12-31 08:40:29
k8smeetup/website-tasks
https://api.github.com/repos/k8smeetup/website-tasks
opened
/docs/reference/setup-tools/kubeadm/generated/_index.md
lang/zh priority/P0 sync/update version/master welcome
Source File: [/docs/reference/setup-tools/kubeadm/generated/_index.md](https://github.com/kubernetes/website/blob/master/content/en/docs/reference/setup-tools/kubeadm/generated/_index.md) Diff 命令参考: ```bash # 查看原始文档与翻译文档更新差异 git diff --no-index -- content/en/docs/reference/setup-tools/kubeadm/generated/_index.md content/zh/docs/reference/setup-tools/kubeadm/generated/_index.md # 跨分支持查看原始文档更新差异 git diff release-1.19 master -- content/en/docs/reference/setup-tools/kubeadm/generated/_index.md ```
1.0
/docs/reference/setup-tools/kubeadm/generated/_index.md - Source File: [/docs/reference/setup-tools/kubeadm/generated/_index.md](https://github.com/kubernetes/website/blob/master/content/en/docs/reference/setup-tools/kubeadm/generated/_index.md) Diff 命令参考: ```bash # 查看原始文档与翻译文档更新差异 git diff --no-index -- content/en/docs/reference/setup-tools/kubeadm/generated/_index.md content/zh/docs/reference/setup-tools/kubeadm/generated/_index.md # 跨分支持查看原始文档更新差异 git diff release-1.19 master -- content/en/docs/reference/setup-tools/kubeadm/generated/_index.md ```
non_defect
docs reference setup tools kubeadm generated index md source file diff 命令参考 bash 查看原始文档与翻译文档更新差异 git diff no index content en docs reference setup tools kubeadm generated index md content zh docs reference setup tools kubeadm generated index md 跨分支持查看原始文档更新差异 git diff release master content en docs reference setup tools kubeadm generated index md
0
58,023
3,087,081,846
IssuesEvent
2015-08-25 09:15:07
pavel-pimenov/flylinkdc-r5xx
https://api.github.com/repos/pavel-pimenov/flylinkdc-r5xx
closed
На вкладках иконки хабов не соответствуют теме
bug imported Priority-Medium
_From [toss.Alexey](https://code.google.com/u/toss.Alexey/) on September 28, 2013 15:05:54_ На вкладках иконки хабов не соответствуют теме. **Attachment:** [20130928_FLDCPP_hubicons.png](http://code.google.com/p/flylinkdc/issues/detail?id=1320) _Original issue: http://code.google.com/p/flylinkdc/issues/detail?id=1320_
1.0
На вкладках иконки хабов не соответствуют теме - _From [toss.Alexey](https://code.google.com/u/toss.Alexey/) on September 28, 2013 15:05:54_ На вкладках иконки хабов не соответствуют теме. **Attachment:** [20130928_FLDCPP_hubicons.png](http://code.google.com/p/flylinkdc/issues/detail?id=1320) _Original issue: http://code.google.com/p/flylinkdc/issues/detail?id=1320_
non_defect
на вкладках иконки хабов не соответствуют теме from on september на вкладках иконки хабов не соответствуют теме attachment original issue
0
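Across the records above, the textual `label` field maps consistently onto the `binary_label` target: `non_defect` → 0 and `defect` → 1. A minimal sketch of recovering those (label, target) pairs from rows of this dump — the two-column sample and field names below are illustrative stand-ins, since the dump does not name its columns inline:

```python
import csv
import io

# Abridged sample rows modeled on the flylinkdc (non_defect/0) and jOOQ
# (defect/1) records above; column names here are hypothetical.
sample = io.StringIO(
    "id,label,binary_label\n"
    "10585,non_defect,0\n"
    "44672,defect,1\n"
)
rows = list(csv.DictReader(sample))
pairs = [(r["label"], int(r["binary_label"])) for r in rows]
# Each textual label maps onto the binary target used for classification.
```

Under this reading, `binary_label` is simply the numeric encoding of `label`, which is what a defect-vs-non-defect classifier trained on this corpus would consume.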
10,585
15,501,633,958
IssuesEvent
2021-03-11 10:47:01
renovatebot/renovate
https://api.github.com/repos/renovatebot/renovate
closed
Update for private registry docker image in docker-compose file is detected, but no PR is created
priority-5-triage status:requirements type:bug
**What Renovate type, platform and version are you using?** GitHub private repositories private docker registry (artifactory) using app.renovatebot.com <!-- Tell us if you're using the hosted App, or if you are self-hosted Renovate yourself. Platform too (GitHub, GitLab, etc) plus which version of Renovate if you're self-hosted. --> **Describe the bug** renovate correctly detects a docker image in a docker-compose file, the expected update is mentioned in the logs, with a branchName. The pull request is not created. For public images mentioned in the same docker-compose file, PRs are created. <!-- A clear and concise description of what the bug is. --> **Relevant debug logs** In particular: `DEBUG: Processing 1 branch:` simply omits the branch with the private docker update ``` DEBUG: packageFiles with updates { "config": { "docker-compose": [ { "packageFile": "docker-compose.yml", "deps": [ { "depName": "elasticsearch", "currentValue": "6.4.2", "replaceString": "elasticsearch:6.4.2", "autoReplaceStringTemplate": "{{depName}}{{#if newValue}}:{{newValue}}{{/if}}{{#if newDigest}}@{{newDigest}}{{/if}}", "datasource": "docker", "depIndex": 0, "updates": [ { "currentVersion": "6.4.2", "newVersion": "6.8.14", "newValue": "6.8.14", "bucket": "non-major", "newMajor": 6, "newMinor": 8, "updateType": "minor", "isSingleVersion": true, "skippedOverVersions": [ "6.4.3", # skipped logs "6.8.13" ], "branchName": "renovate/docker-elasticsearch-6.x" }, { "currentVersion": "6.4.2", "newVersion": "7.11.1", "newValue": "7.11.1", "bucket": "major", "newMajor": 7, "newMinor": 11, "updateType": "major", "isSingleVersion": true, "skippedOverVersions": [ "7.0.0", # skipped logs "7.10.1" ], "branchName": "renovate/docker-elasticsearch-7.x" } ], "warnings": [], "sourceUrl": "https://github.com/elastic/elasticsearch", "fixedVersion": "6.4.2" }, { "depName": "dckr.skryv.com/docmod-server", "currentValue": "v5.22.0", "replaceString": "dckr.skryv.com/docmod-server:v5.22.0", 
"autoReplaceStringTemplate": "{{depName}}{{#if newValue}}:{{newValue}}{{/if}}{{#if newDigest}}@{{newDigest}}{{/if}}", "datasource": "docker", "depIndex": 1, "updates": [ { "currentVersion": "v5.22.0", "newVersion": "v6.0.0", "newValue": "v6.0.0", "bucket": "major", "newMajor": 6, "newMinor": 0, "updateType": "major", "isSingleVersion": true, "branchName": "renovate/docker-dckr.skryv.com-docmod-server-6.x" } ], "warnings": [], "fixedVersion": "v5.22.0" } ] } ] } } # skipped logs DEBUG: processRepo() DEBUG: Processing 1 branch: renovate/docker-elasticsearch-6.x DEBUG: Calculating hourly PRs remaining DEBUG: Retrieving PR list DEBUG: Retrieved 0 Pull Requests DEBUG: currentHourStart=2021-03-11T10:00:00.000+00:00 DEBUG: PR hourly limit remaining: 2 DEBUG: Calculating prConcurrentLimit (20) DEBUG: getBranchPr(renovate/docker-elasticsearch-6.x) DEBUG: findPr(renovate/docker-elasticsearch-6.x, undefined, open) DEBUG: 0 PRs are currently open ``` **Have you created a minimal reproduction repository?** I have reproduced this in a minimal repository, but it needs credentials to the the private repository that I cannot share. It just contains the following docker-compose file, and a base renovate config with credentials ``` version: '3' services: elasticsearch: image: elasticsearch:6.4.2 docmod-server: image: dckr.skryv.com/docmod-server:v5.22.0 ``` **Additional context** The only cause I can think of is that the image name (and therefore the branch name) contains dots and the forward slash, and that is somehow an issue.
1.0
Update for private registry docker image in docker-compose file is detected, but no PR is created - **What Renovate type, platform and version are you using?** GitHub private repositories private docker registry (artifactory) using app.renovatebot.com <!-- Tell us if you're using the hosted App, or if you are self-hosted Renovate yourself. Platform too (GitHub, GitLab, etc) plus which version of Renovate if you're self-hosted. --> **Describe the bug** renovate correctly detects a docker image in a docker-compose file, the expected update is mentioned in the logs, with a branchName. The pull request is not created. For public images mentioned in the same docker-compose file, PRs are created. <!-- A clear and concise description of what the bug is. --> **Relevant debug logs** In particular: `DEBUG: Processing 1 branch:` simply omits the branch with the private docker update ``` DEBUG: packageFiles with updates { "config": { "docker-compose": [ { "packageFile": "docker-compose.yml", "deps": [ { "depName": "elasticsearch", "currentValue": "6.4.2", "replaceString": "elasticsearch:6.4.2", "autoReplaceStringTemplate": "{{depName}}{{#if newValue}}:{{newValue}}{{/if}}{{#if newDigest}}@{{newDigest}}{{/if}}", "datasource": "docker", "depIndex": 0, "updates": [ { "currentVersion": "6.4.2", "newVersion": "6.8.14", "newValue": "6.8.14", "bucket": "non-major", "newMajor": 6, "newMinor": 8, "updateType": "minor", "isSingleVersion": true, "skippedOverVersions": [ "6.4.3", # skipped logs "6.8.13" ], "branchName": "renovate/docker-elasticsearch-6.x" }, { "currentVersion": "6.4.2", "newVersion": "7.11.1", "newValue": "7.11.1", "bucket": "major", "newMajor": 7, "newMinor": 11, "updateType": "major", "isSingleVersion": true, "skippedOverVersions": [ "7.0.0", # skipped logs "7.10.1" ], "branchName": "renovate/docker-elasticsearch-7.x" } ], "warnings": [], "sourceUrl": "https://github.com/elastic/elasticsearch", "fixedVersion": "6.4.2" }, { "depName": "dckr.skryv.com/docmod-server", 
"currentValue": "v5.22.0", "replaceString": "dckr.skryv.com/docmod-server:v5.22.0", "autoReplaceStringTemplate": "{{depName}}{{#if newValue}}:{{newValue}}{{/if}}{{#if newDigest}}@{{newDigest}}{{/if}}", "datasource": "docker", "depIndex": 1, "updates": [ { "currentVersion": "v5.22.0", "newVersion": "v6.0.0", "newValue": "v6.0.0", "bucket": "major", "newMajor": 6, "newMinor": 0, "updateType": "major", "isSingleVersion": true, "branchName": "renovate/docker-dckr.skryv.com-docmod-server-6.x" } ], "warnings": [], "fixedVersion": "v5.22.0" } ] } ] } } # skipped logs DEBUG: processRepo() DEBUG: Processing 1 branch: renovate/docker-elasticsearch-6.x DEBUG: Calculating hourly PRs remaining DEBUG: Retrieving PR list DEBUG: Retrieved 0 Pull Requests DEBUG: currentHourStart=2021-03-11T10:00:00.000+00:00 DEBUG: PR hourly limit remaining: 2 DEBUG: Calculating prConcurrentLimit (20) DEBUG: getBranchPr(renovate/docker-elasticsearch-6.x) DEBUG: findPr(renovate/docker-elasticsearch-6.x, undefined, open) DEBUG: 0 PRs are currently open ``` **Have you created a minimal reproduction repository?** I have reproduced this in a minimal repository, but it needs credentials to the the private repository that I cannot share. It just contains the following docker-compose file, and a base renovate config with credentials ``` version: '3' services: elasticsearch: image: elasticsearch:6.4.2 docmod-server: image: dckr.skryv.com/docmod-server:v5.22.0 ``` **Additional context** The only cause I can think of is that the image name (and therefore the branch name) contains dots and the forward slash, and that is somehow an issue.
non_defect
update for private registry docker image in docker compose file is detected but no pr is created what renovate type platform and version are you using github private repositories private docker registry artifactory using app renovatebot com describe the bug renovate correctly detects a docker image in a docker compose file the expected update is mentioned in the logs with a branchname the pull request is not created for public images mentioned in the same docker compose file prs are created relevant debug logs in particular debug processing branch simply omits the branch with the private docker update debug packagefiles with updates config docker compose packagefile docker compose yml deps depname elasticsearch currentvalue replacestring elasticsearch autoreplacestringtemplate depname if newvalue newvalue if if newdigest newdigest if datasource docker depindex updates currentversion newversion newvalue bucket non major newmajor newminor updatetype minor issingleversion true skippedoverversions skipped logs branchname renovate docker elasticsearch x currentversion newversion newvalue bucket major newmajor newminor updatetype major issingleversion true skippedoverversions skipped logs branchname renovate docker elasticsearch x warnings sourceurl fixedversion depname dckr skryv com docmod server currentvalue replacestring dckr skryv com docmod server autoreplacestringtemplate depname if newvalue newvalue if if newdigest newdigest if datasource docker depindex updates currentversion newversion newvalue bucket major newmajor newminor updatetype major issingleversion true branchname renovate docker dckr skryv com docmod server x warnings fixedversion skipped logs debug processrepo debug processing branch renovate docker elasticsearch x debug calculating hourly prs remaining debug retrieving pr list debug retrieved pull requests debug currenthourstart debug pr hourly limit remaining debug calculating prconcurrentlimit debug getbranchpr renovate docker elasticsearch x 
debug findpr renovate docker elasticsearch x undefined open debug prs are currently open have you created a minimal reproduction repository i have reproduced this in a minimal repository but it needs credentials to the the private repository that i cannot share it just contains the following docker compose file and a base renovate config with credentials version services elasticsearch image elasticsearch docmod server image dckr skryv com docmod server additional context the only cause i can think of is that the image name and therefore the branch name contains dots and the forward slash and that is somehow an issue
0
384,419
26,584,464,200
IssuesEvent
2023-01-22 21:14:37
TexZK/bytesparse
https://api.github.com/repos/TexZK/bytesparse
closed
Make `ImmutableMemory.copy()` a deep copy
documentation enhancement
See: https://github.com/TexZK/bytesparse/issues/19#issuecomment-1399594046_ For usability reasons, `copy()` should be better as a deep copy. Indeed, such a method is not mandatory for any `collection.abc` interfaces.
1.0
Make `ImmutableMemory.copy()` a deep copy - See: https://github.com/TexZK/bytesparse/issues/19#issuecomment-1399594046_ For usability reasons, `copy()` should be better as a deep copy. Indeed, such a method is not mandatory for any `collection.abc` interfaces.
non_defect
make immutablememory copy a deep copy see for usability reasons copy should be better as a deep copy indeed such a method is not mandatory for any collection abc interfaces
0
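The shallow-versus-deep distinction this record relies on can be sketched in Python; the dict-of-bytearrays stand-in below is illustrative only, not the bytesparse data model.

```python
import copy

# Stand-in for a memory object whose blocks are mutable buffers
# (illustrative only, not the bytesparse API).
blocks = {0: bytearray(b"abc")}

shallow = copy.copy(blocks)   # shares the nested bytearray
deep = copy.deepcopy(blocks)  # clones it recursively

blocks[0][0] = ord("X")       # mutate the original buffer

print(bytes(shallow[0]))  # b'Xbc' -- shallow copy sees the mutation
print(bytes(deep[0]))     # b'abc' -- deep copy is unaffected
```

This is why a deep `copy()` is friendlier for callers: the returned object cannot be changed out from under them by later mutations of the original.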
44,797
12,392,527,226
IssuesEvent
2020-05-20 14:08:18
primefaces/primeng
https://api.github.com/repos/primefaces/primeng
closed
Resizable Dialog is broken
defect
Resizer is not displayed at all inside the dialog since it moves to the bottom right of the page.
1.0
Resizable Dialog is broken - Resizer is not displayed at all inside the dialog since it moves to the bottom right of the page.
defect
resizable dialog is broken resizer is not displayed at all inside the dialog since it moves to the bottom right of the page
1
43,031
9,367,348,389
IssuesEvent
2019-04-03 05:09:59
joomla/joomla-cms
https://api.github.com/repos/joomla/joomla-cms
closed
Module styles from specific template only work when that template is used on that page
No Code Attached Yet
### Steps to reproduce the issue - Make sure you have the Beez & Protostar templates installed and enabled - Have the Beez template as the default template - Create a new "Custom" module and put it into a position where it can be seen on the front end of the website. - In the advanced tab choose the "well" Protostar module style ### Expected result - The "well" Protostar module style is being used ### Actual result - It is not being used ### System information (as much as possible) PHP Built On Windows NT TERROR 10.0 build 17763 (Windows 10) i586 Database Type mysql Database Version 5.5.5-10.1.31-MariaDB Database Collation utf8_general_ci Database Connection Collation utf8mb4_general_ci PHP Version 7.2.4 Web Server Apache/2.4.33 (Win32) OpenSSL/1.1.0g PHP/7.2.4 WebServer to PHP Interface apache2handler Joomla! Version Joomla! 3.9.4 Stable [ Amani ] 12-March-2019 15:00 GMT Joomla! Platform Version Joomla Platform 13.1.0 Stable [ Curiosity ] 24-Apr-2013 00:00 GMT User Agent Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/73.0.3683.86 Safari/537.36 ### Additional comments This is not really a bug, I presume that the system is meant to work this way. However it would be MUCH more useful if you could use module styles from ALL templates that are installed and enabled, not just the template that is currently being used on that page. The fact that they are all listed in the drop-down for you to choose from, misleads an admin into believing that they are available to use. Thank you :)
1.0
Module styles from specific template only work when that template is used on that page - ### Steps to reproduce the issue - Make sure you have the Beez & Protostar templates installed and enabled - Have the Beez template as the default template - Create a new "Custom" module and put it into a position where it can be seen on the front end of the website. - In the advanced tab choose the "well" Protostar module style ### Expected result - The "well" Protostar module style is being used ### Actual result - It is not being used ### System information (as much as possible) PHP Built On Windows NT TERROR 10.0 build 17763 (Windows 10) i586 Database Type mysql Database Version 5.5.5-10.1.31-MariaDB Database Collation utf8_general_ci Database Connection Collation utf8mb4_general_ci PHP Version 7.2.4 Web Server Apache/2.4.33 (Win32) OpenSSL/1.1.0g PHP/7.2.4 WebServer to PHP Interface apache2handler Joomla! Version Joomla! 3.9.4 Stable [ Amani ] 12-March-2019 15:00 GMT Joomla! Platform Version Joomla Platform 13.1.0 Stable [ Curiosity ] 24-Apr-2013 00:00 GMT User Agent Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/73.0.3683.86 Safari/537.36 ### Additional comments This is not really a bug, I presume that the system is meant to work this way. However it would be MUCH more useful if you could use module styles from ALL templates that are installed and enabled, not just the template that is currently being used on that page. The fact that they are all listed in the drop-down for you to choose from, misleads an admin into believing that they are available to use. Thank you :)
non_defect
module styles from specific template only work when that template is used on that page steps to reproduce the issue make sure you have the beez protostar templates installed and enabled have the beez template as the default template create a new custom module and put it into a position where it can be seen on the front end of the website in the advanced tab choose the well protostar module style expected result the well protostar module style is being used actual result it is not being used system information as much as possible php built on windows nt terror build windows database type mysql database version mariadb database collation general ci database connection collation general ci php version web server apache openssl php webserver to php interface joomla version joomla stable march gmt joomla platform version joomla platform stable apr gmt user agent mozilla windows nt applewebkit khtml like gecko chrome safari additional comments this is not really a bug i presume that the system is meant to work this way however it would be much more useful if you could use module styles from all templates that are installed and enabled not just the template that is currently being used on that page the fact that they are all listed in the drop down for you to choose from misleads an admin into believing that they are available to use thank you
0
74,656
25,241,613,567
IssuesEvent
2022-11-15 07:59:10
line/centraldogma
https://api.github.com/repos/line/centraldogma
closed
The equals implementation in AbstractCommand is not correct
defect
Here: https://github.com/line/centraldogma/blob/c59d2a1defdb7eda3a27959eca0630fd261034cd/server/src/main/java/com/linecorp/centraldogma/server/command/AbstractCommand.java#L62 It says: ```java if (!(this instanceof AbstractCommand)) { return false; } ``` This condition is always false, as `this` is always `AbstractCommand`. It should be instead: ```java if (!(obj instanceof AbstractCommand)) { return false; } ``` It's interesting that IntelliJ IDEA tried to warn about this, but the warning was suppressed with `@SuppressWarnings("EqualsWhichDoesntCheckParameterClass")`. After the fix, the suppression would be unnecessary.
1.0
The equals implementation in AbstractCommand is not correct - Here: https://github.com/line/centraldogma/blob/c59d2a1defdb7eda3a27959eca0630fd261034cd/server/src/main/java/com/linecorp/centraldogma/server/command/AbstractCommand.java#L62 It says: ```java if (!(this instanceof AbstractCommand)) { return false; } ``` This condition is always false, as `this` is always `AbstractCommand`. It should be instead: ```java if (!(obj instanceof AbstractCommand)) { return false; } ``` It's interesting that IntelliJ IDEA tried to warn about this, but the warning was suppressed with `@SuppressWarnings("EqualsWhichDoesntCheckParameterClass")`. After the fix, the suppression would be unnecessary.
defect
the equals implementation in abstractcommand is not correct here it says java if this instanceof abstractcommand return false this condition is always false as this is always abstractcommand it should be instead java if obj instanceof abstractcommand return false it s interesting that intellij idea tried to warn about this but the warning was suppressed with suppresswarnings equalswhichdoesntcheckparameterclass after the fix the suppression would be unnecessary
1
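The bug in this record has a direct analogue in Python: inside `Cls.__eq__`, a type check on `self` can never fail, so the guard must inspect the other operand. A minimal sketch, with illustrative class and field names rather than Central Dogma's:

```python
class AbstractCommand:
    def __init__(self, command_type):
        self.command_type = command_type

    def __eq__(self, obj):
        # The reported bug checked the equivalent of
        # `isinstance(self, AbstractCommand)`, which is always true here;
        # the guard has to look at `obj` instead.
        if not isinstance(obj, AbstractCommand):
            return False
        return self.command_type == obj.command_type


push = AbstractCommand("push")
print(push == AbstractCommand("push"))  # True
print(push == "push")                   # False: non-commands are rejected
```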
229,521
7,575,416,430
IssuesEvent
2018-04-24 01:34:28
SETI/pds-opus
https://api.github.com/repos/SETI/pds-opus
closed
new data import could be improved
A-Enhancement B-Import Pipeline Effort 1 Hard Priority 2
Originally reported by: **lisa ballard (Bitbucket: [basilleaf](https://bitbucket.org/basilleaf), GitHub: [basilleaf](https://github.com/basilleaf))** --- when new data is imported to the database it results in destroying the old database and building a complete new one, but this is a lot of time and cpu cycles and database load and doesn't need to be this way, and might be getting in the way of other queries hitting the database. It would also be nice to be able to do incremental imports without this multi day process. --- - Bitbucket: https://bitbucket.org/ringsnode/opus2/issue/117
1.0
new data import could be improved - Originally reported by: **lisa ballard (Bitbucket: [basilleaf](https://bitbucket.org/basilleaf), GitHub: [basilleaf](https://github.com/basilleaf))** --- when new data is imported to the database it results in destroying the old database and building a complete new one, but this is a lot of time and cpu cycles and database load and doesn't need to be this way, and might be getting in the way of other queries hitting the database. It would also be nice to be able to do incremental imports without this multi day process. --- - Bitbucket: https://bitbucket.org/ringsnode/opus2/issue/117
non_defect
new data import could be improved originally reported by lisa ballard bitbucket github when new data is imported to the database it results in destroying the old database and building a complete new one but this is a lot of time and cpu cycles and database load and doesn t need to be this way and might be getting in the way of other queries hitting the database it would also be nice to be able to do incremental imports without this multi day process bitbucket
0
187,970
15,112,704,444
IssuesEvent
2021-02-08 22:17:42
Dguipla/TFM-SemiSup
https://api.github.com/repos/Dguipla/TFM-SemiSup
closed
Get familiar with Spark and ML (general overview)
documentation research
Document in the report: - What Spark is - Why Spark - Spark architecture - Spark components - ... Afterwards, go deeper into the ML layer and specifically into the base classifiers we are going to use
1.0
Get familiar with Spark and ML (general overview) - Document in the report: - What Spark is - Why Spark - Spark architecture - Spark components - ... Afterwards, go deeper into the ML layer and specifically into the base classifiers we are going to use
non_defect
get familiar with spark and ml general overview document in the report what spark is why spark spark architecture spark components afterwards go deeper into the ml layer and specifically into the base classifiers we are going to use
0
73,461
24,644,929,060
IssuesEvent
2022-10-17 14:15:23
hazelcast/hazelcast
https://api.github.com/repos/hazelcast/hazelcast
closed
CDC "op is not a valid field name" during creation of ChangeRecord when "include.schema.changes" is set to true
Type: Defect Source: Internal Module: Jet Team: Platform
**Describe the bug** _com.hazelcast.jet.cdc.impl.ChangeRecordCdcSourceP#map_ expects that in _SourceRecord.value().values()_ will be "**op**" String to map operation. For MySqlConnector and DebeziumCdcSources with property "include.schema.changes" set to true, the above code returns exception and job is in failed state as first command cannot be mapped. Example operation which throws exception: ![image](https://user-images.githubusercontent.com/20545793/194873975-800eb8d0-6aa5-4add-a9aa-183b7ba3b971.png) Logs: ``` 14:13:28,919 ERROR |someTest| - [JoinSubmittedJobOperation] hz.friendly_banzai.cached.thread-6 - [127.0.0.1]:5701 [dev] [5.2.0-SNAPSHOT] Exception in ProcessorTasklet{08c4-b2db-9200-0001/mysql#0}: com.hazelcast.jet.JetException: Failed to connect to database com.hazelcast.jet.JetException: Exception in ProcessorTasklet{08c4-b2db-9200-0001/mysql#0}: com.hazelcast.jet.JetException: Failed to connect to database at com.hazelcast.jet.impl.execution.TaskletExecutionService.handleTaskletExecutionError(TaskletExecutionService.java:286) ~[classes/:?] at com.hazelcast.jet.impl.execution.TaskletExecutionService.access$600(TaskletExecutionService.java:80) ~[classes/:?] at com.hazelcast.jet.impl.execution.TaskletExecutionService$BlockingWorker.run(TaskletExecutionService.java:325) ~[classes/:?] at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) ~[?:1.8.0_345] at java.util.concurrent.FutureTask.run$$$capture(FutureTask.java:266) ~[?:1.8.0_345] at java.util.concurrent.FutureTask.run(FutureTask.java) ~[?:1.8.0_345] at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) ~[?:1.8.0_345] at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) ~[?:1.8.0_345] at java.lang.Thread.run(Thread.java:750) ~[?:1.8.0_345] Caused by: com.hazelcast.jet.JetException: Failed to connect to database at com.hazelcast.jet.cdc.impl.CdcSourceP.reconnect(CdcSourceP.java:214) ~[classes/:?] 
at com.hazelcast.jet.cdc.impl.CdcSourceP.complete(CdcSourceP.java:194) ~[classes/:?] at com.hazelcast.jet.impl.execution.ProcessorTasklet.complete(ProcessorTasklet.java:541) ~[classes/:?] at com.hazelcast.jet.impl.execution.ProcessorTasklet.stateMachineStep(ProcessorTasklet.java:421) ~[classes/:?] at com.hazelcast.jet.impl.execution.ProcessorTasklet.call(ProcessorTasklet.java:291) ~[classes/:?] at com.hazelcast.jet.impl.execution.TaskletExecutionService$BlockingWorker.run(TaskletExecutionService.java:315) ~[classes/:?] ... 6 more Caused by: org.apache.kafka.connect.errors.DataException: op is not a valid field name at org.apache.kafka.connect.data.Struct.lookupField(Struct.java:254) ~[connect-api-2.8.2.jar:?] at org.apache.kafka.connect.data.Struct.getCheckType(Struct.java:261) ~[connect-api-2.8.2.jar:?] at org.apache.kafka.connect.data.Struct.getString(Struct.java:158) ~[connect-api-2.8.2.jar:?] at com.hazelcast.jet.cdc.impl.ChangeRecordCdcSourceP.map(ChangeRecordCdcSourceP.java:67) ~[classes/:?] ``` **Expected behavior** Above change can be mapped to _Operation.UNSPECIFIED_ (this status already exist) **To Reproduce** Steps to reproduce the behavior: 1. Go to 'com.hazelcast.jet.cdc.DebeziumCdcIntegrationTest' 2. Change _'include.schema.changes'_ from _'false'_ to _'true'_ in _DebeziumCdcIntegrationTest#mySqlSource_ method 3. Run _DebeziumCdcIntegrationTest#mysql_ test **Additional context** Found during testing on 5.2.0-SNAPSHOT
1.0
CDC "op is not a valid field name" during creation of ChangeRecord when "include.schema.changes" is set to true - **Describe the bug** _com.hazelcast.jet.cdc.impl.ChangeRecordCdcSourceP#map_ expects that in _SourceRecord.value().values()_ will be "**op**" String to map operation. For MySqlConnector and DebeziumCdcSources with property "include.schema.changes" set to true, the above code returns exception and job is in failed state as first command cannot be mapped. Example operation which throws exception: ![image](https://user-images.githubusercontent.com/20545793/194873975-800eb8d0-6aa5-4add-a9aa-183b7ba3b971.png) Logs: ``` 14:13:28,919 ERROR |someTest| - [JoinSubmittedJobOperation] hz.friendly_banzai.cached.thread-6 - [127.0.0.1]:5701 [dev] [5.2.0-SNAPSHOT] Exception in ProcessorTasklet{08c4-b2db-9200-0001/mysql#0}: com.hazelcast.jet.JetException: Failed to connect to database com.hazelcast.jet.JetException: Exception in ProcessorTasklet{08c4-b2db-9200-0001/mysql#0}: com.hazelcast.jet.JetException: Failed to connect to database at com.hazelcast.jet.impl.execution.TaskletExecutionService.handleTaskletExecutionError(TaskletExecutionService.java:286) ~[classes/:?] at com.hazelcast.jet.impl.execution.TaskletExecutionService.access$600(TaskletExecutionService.java:80) ~[classes/:?] at com.hazelcast.jet.impl.execution.TaskletExecutionService$BlockingWorker.run(TaskletExecutionService.java:325) ~[classes/:?] 
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) ~[?:1.8.0_345] at java.util.concurrent.FutureTask.run$$$capture(FutureTask.java:266) ~[?:1.8.0_345] at java.util.concurrent.FutureTask.run(FutureTask.java) ~[?:1.8.0_345] at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) ~[?:1.8.0_345] at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) ~[?:1.8.0_345] at java.lang.Thread.run(Thread.java:750) ~[?:1.8.0_345] Caused by: com.hazelcast.jet.JetException: Failed to connect to database at com.hazelcast.jet.cdc.impl.CdcSourceP.reconnect(CdcSourceP.java:214) ~[classes/:?] at com.hazelcast.jet.cdc.impl.CdcSourceP.complete(CdcSourceP.java:194) ~[classes/:?] at com.hazelcast.jet.impl.execution.ProcessorTasklet.complete(ProcessorTasklet.java:541) ~[classes/:?] at com.hazelcast.jet.impl.execution.ProcessorTasklet.stateMachineStep(ProcessorTasklet.java:421) ~[classes/:?] at com.hazelcast.jet.impl.execution.ProcessorTasklet.call(ProcessorTasklet.java:291) ~[classes/:?] at com.hazelcast.jet.impl.execution.TaskletExecutionService$BlockingWorker.run(TaskletExecutionService.java:315) ~[classes/:?] ... 6 more Caused by: org.apache.kafka.connect.errors.DataException: op is not a valid field name at org.apache.kafka.connect.data.Struct.lookupField(Struct.java:254) ~[connect-api-2.8.2.jar:?] at org.apache.kafka.connect.data.Struct.getCheckType(Struct.java:261) ~[connect-api-2.8.2.jar:?] at org.apache.kafka.connect.data.Struct.getString(Struct.java:158) ~[connect-api-2.8.2.jar:?] at com.hazelcast.jet.cdc.impl.ChangeRecordCdcSourceP.map(ChangeRecordCdcSourceP.java:67) ~[classes/:?] ``` **Expected behavior** Above change can be mapped to _Operation.UNSPECIFIED_ (this status already exist) **To Reproduce** Steps to reproduce the behavior: 1. Go to 'com.hazelcast.jet.cdc.DebeziumCdcIntegrationTest' 2. 
Change _'include.schema.changes'_ from _'false'_ to _'true'_ in _DebeziumCdcIntegrationTest#mySqlSource_ method 3. Run _DebeziumCdcIntegrationTest#mysql_ test **Additional context** Found during testing on 5.2.0-SNAPSHOT
defect
cdc op is not a valid field name during creation of changerecord when include schema changes is set to true describe the bug com hazelcast jet cdc impl changerecordcdcsourcep map expects that in sourcerecord value values will be op string to map operation for mysqlconnector and debeziumcdcsources with property include schema changes set to true the above code returns exception and job is in failed state as first command cannot be mapped example operation which throws exception logs error sometest hz friendly banzai cached thread exception in processortasklet mysql com hazelcast jet jetexception failed to connect to database com hazelcast jet jetexception exception in processortasklet mysql com hazelcast jet jetexception failed to connect to database at com hazelcast jet impl execution taskletexecutionservice handletaskletexecutionerror taskletexecutionservice java at com hazelcast jet impl execution taskletexecutionservice access taskletexecutionservice java at com hazelcast jet impl execution taskletexecutionservice blockingworker run taskletexecutionservice java at java util concurrent executors runnableadapter call executors java at java util concurrent futuretask run capture futuretask java at java util concurrent futuretask run futuretask java at java util concurrent threadpoolexecutor runworker threadpoolexecutor java at java util concurrent threadpoolexecutor worker run threadpoolexecutor java at java lang thread run thread java caused by com hazelcast jet jetexception failed to connect to database at com hazelcast jet cdc impl cdcsourcep reconnect cdcsourcep java at com hazelcast jet cdc impl cdcsourcep complete cdcsourcep java at com hazelcast jet impl execution processortasklet complete processortasklet java at com hazelcast jet impl execution processortasklet statemachinestep processortasklet java at com hazelcast jet impl execution processortasklet call processortasklet java at com hazelcast jet impl execution taskletexecutionservice blockingworker run 
taskletexecutionservice java more caused by org apache kafka connect errors dataexception op is not a valid field name at org apache kafka connect data struct lookupfield struct java at org apache kafka connect data struct getchecktype struct java at org apache kafka connect data struct getstring struct java at com hazelcast jet cdc impl changerecordcdcsourcep map changerecordcdcsourcep java expected behavior above change can be mapped to operation unspecified this status already exist to reproduce steps to reproduce the behavior go to com hazelcast jet cdc debeziumcdcintegrationtest change include schema changes from false to true in debeziumcdcintegrationtest mysqlsource method run debeziumcdcintegrationtest mysql test additional context found during testing on snapshot
1
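The fallback the reporter asks for (mapping a record without an `op` field to `Operation.UNSPECIFIED` instead of throwing) can be sketched like this; the enum values and field names mirror the report, but the code is an illustration, not Hazelcast's implementation:

```python
from enum import Enum

class Operation(Enum):
    UNSPECIFIED = None
    INSERT = "c"
    UPDATE = "u"
    DELETE = "d"

def map_operation(record_value: dict) -> Operation:
    # Schema-change events carry no "op" field; instead of raising
    # (the reported DataException), fall back to UNSPECIFIED.
    code = record_value.get("op")
    for op in Operation:
        if op.value == code:
            return op
    return Operation.UNSPECIFIED

print(map_operation({"op": "u"}))                 # Operation.UPDATE
print(map_operation({"ddl": "ALTER TABLE ..."}))  # Operation.UNSPECIFIED
```

The key design choice is the tolerant `dict.get` lookup: a missing key yields `None`, which maps to the already-existing unspecified status rather than failing the whole job.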
35,399
7,727,767,464
IssuesEvent
2018-05-25 04:52:09
CenturyLinkCloud/mdw
https://api.github.com/repos/CenturyLinkCloud/mdw
closed
Missing package versions file prevents MDW startup
defect
``` [(s)20180523.13:06:56.236 ~12] Failed to check/upgrade db: assets/hc/nlp/.mdw/versions (No such file or directory) com.centurylink.mdw.cache.CachingException: assets/hc/nlp/.mdw/versions (No such file or directory) at com.centurylink.mdw.cache.impl.PackageCache.load(PackageCache.java:100) at com.centurylink.mdw.cache.impl.PackageCache.getPackageList(PackageCache.java:66) at com.centurylink.mdw.cache.impl.PackageCache.getPackages(PackageCache.java:72) at com.centurylink.mdw.dataaccess.DatabaseAccess.openConnection(DatabaseAccess.java:269) at com.centurylink.mdw.dataaccess.DbAccess.<init>(DbAccess.java:35) at com.centurylink.mdw.dataaccess.DbAccess.<init>(DbAccess.java:30) at com.centurylink.mdw.dataaccess.DatabaseAccess.checkAndUpgradeSchema(DatabaseAccess.java:233) at com.centurylink.mdw.hub.MdwMain.startup(MdwMain.java:79) at com.centurylink.mdw.hub.StartupListener.contextInitialized(StartupListener.java:46) at org.apache.catalina.core.StandardContext.listenerStart(StandardContext.java:4743) at org.apache.catalina.core.StandardContext.startInternal(StandardContext.java:5207) at org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:150) at org.apache.catalina.core.ContainerBase$StartChild.call(ContainerBase.java:1419) at org.apache.catalina.core.ContainerBase$StartChild.call(ContainerBase.java:1409) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:748) Caused by: com.centurylink.mdw.dataaccess.DataAccessException: assets/hc/nlp/.mdw/versions (No such file or directory) at com.centurylink.mdw.dataaccess.file.LoaderPersisterVcs.getPackageList(LoaderPersisterVcs.java:817) at com.centurylink.mdw.cache.impl.PackageCache.load(PackageCache.java:78) ... 
17 more Caused by: java.io.FileNotFoundException: assets/hc/nlp/.mdw/versions (No such file or directory) at java.io.FileInputStream.open0(Native Method) at java.io.FileInputStream.open(FileInputStream.java:195) at java.io.FileInputStream.<init>(FileInputStream.java:138) at com.centurylink.mdw.util.file.VersionProperties.<init>(VersionProperties.java:41) at com.centurylink.mdw.dataaccess.file.VersionControlGit.getVersionProps(VersionControlGit.java:259) at com.centurylink.mdw.dataaccess.file.VersionControlGit.getRevisionInVersionsFile(VersionControlGit.java:226) at com.centurylink.mdw.dataaccess.file.VersionControlGit.getRevision(VersionControlGit.java:218) at com.centurylink.mdw.dataaccess.file.PackageDir.getAssetFile(PackageDir.java:219) at com.centurylink.mdw.dataaccess.file.PackageDir.getAssetFile(PackageDir.java:205) at com.centurylink.mdw.dataaccess.file.LoaderPersisterVcs.loadProcesses(LoaderPersisterVcs.java:586) at com.centurylink.mdw.dataaccess.file.LoaderPersisterVcs.loadPackage(LoaderPersisterVcs.java:393) at com.centurylink.mdw.dataaccess.file.LoaderPersisterVcs.getPackageList(LoaderPersisterVcs.java:811) ... 18 more ```
1.0
Missing package versions file prevents MDW startup - ``` [(s)20180523.13:06:56.236 ~12] Failed to check/upgrade db: assets/hc/nlp/.mdw/versions (No such file or directory) com.centurylink.mdw.cache.CachingException: assets/hc/nlp/.mdw/versions (No such file or directory) at com.centurylink.mdw.cache.impl.PackageCache.load(PackageCache.java:100) at com.centurylink.mdw.cache.impl.PackageCache.getPackageList(PackageCache.java:66) at com.centurylink.mdw.cache.impl.PackageCache.getPackages(PackageCache.java:72) at com.centurylink.mdw.dataaccess.DatabaseAccess.openConnection(DatabaseAccess.java:269) at com.centurylink.mdw.dataaccess.DbAccess.<init>(DbAccess.java:35) at com.centurylink.mdw.dataaccess.DbAccess.<init>(DbAccess.java:30) at com.centurylink.mdw.dataaccess.DatabaseAccess.checkAndUpgradeSchema(DatabaseAccess.java:233) at com.centurylink.mdw.hub.MdwMain.startup(MdwMain.java:79) at com.centurylink.mdw.hub.StartupListener.contextInitialized(StartupListener.java:46) at org.apache.catalina.core.StandardContext.listenerStart(StandardContext.java:4743) at org.apache.catalina.core.StandardContext.startInternal(StandardContext.java:5207) at org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:150) at org.apache.catalina.core.ContainerBase$StartChild.call(ContainerBase.java:1419) at org.apache.catalina.core.ContainerBase$StartChild.call(ContainerBase.java:1409) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:748) Caused by: com.centurylink.mdw.dataaccess.DataAccessException: assets/hc/nlp/.mdw/versions (No such file or directory) at com.centurylink.mdw.dataaccess.file.LoaderPersisterVcs.getPackageList(LoaderPersisterVcs.java:817) at com.centurylink.mdw.cache.impl.PackageCache.load(PackageCache.java:78) ... 
17 more Caused by: java.io.FileNotFoundException: assets/hc/nlp/.mdw/versions (No such file or directory) at java.io.FileInputStream.open0(Native Method) at java.io.FileInputStream.open(FileInputStream.java:195) at java.io.FileInputStream.<init>(FileInputStream.java:138) at com.centurylink.mdw.util.file.VersionProperties.<init>(VersionProperties.java:41) at com.centurylink.mdw.dataaccess.file.VersionControlGit.getVersionProps(VersionControlGit.java:259) at com.centurylink.mdw.dataaccess.file.VersionControlGit.getRevisionInVersionsFile(VersionControlGit.java:226) at com.centurylink.mdw.dataaccess.file.VersionControlGit.getRevision(VersionControlGit.java:218) at com.centurylink.mdw.dataaccess.file.PackageDir.getAssetFile(PackageDir.java:219) at com.centurylink.mdw.dataaccess.file.PackageDir.getAssetFile(PackageDir.java:205) at com.centurylink.mdw.dataaccess.file.LoaderPersisterVcs.loadProcesses(LoaderPersisterVcs.java:586) at com.centurylink.mdw.dataaccess.file.LoaderPersisterVcs.loadPackage(LoaderPersisterVcs.java:393) at com.centurylink.mdw.dataaccess.file.LoaderPersisterVcs.getPackageList(LoaderPersisterVcs.java:811) ... 18 more ```
defect
missing package versions file prevents mdw startup failed to check upgrade db assets hc nlp mdw versions no such file or directory com centurylink mdw cache cachingexception assets hc nlp mdw versions no such file or directory at com centurylink mdw cache impl packagecache load packagecache java at com centurylink mdw cache impl packagecache getpackagelist packagecache java at com centurylink mdw cache impl packagecache getpackages packagecache java at com centurylink mdw dataaccess databaseaccess openconnection databaseaccess java at com centurylink mdw dataaccess dbaccess dbaccess java at com centurylink mdw dataaccess dbaccess dbaccess java at com centurylink mdw dataaccess databaseaccess checkandupgradeschema databaseaccess java at com centurylink mdw hub mdwmain startup mdwmain java at com centurylink mdw hub startuplistener contextinitialized startuplistener java at org apache catalina core standardcontext listenerstart standardcontext java at org apache catalina core standardcontext startinternal standardcontext java at org apache catalina util lifecyclebase start lifecyclebase java at org apache catalina core containerbase startchild call containerbase java at org apache catalina core containerbase startchild call containerbase java at java util concurrent futuretask run futuretask java at java util concurrent threadpoolexecutor runworker threadpoolexecutor java at java util concurrent threadpoolexecutor worker run threadpoolexecutor java at java lang thread run thread java caused by com centurylink mdw dataaccess dataaccessexception assets hc nlp mdw versions no such file or directory at com centurylink mdw dataaccess file loaderpersistervcs getpackagelist loaderpersistervcs java at com centurylink mdw cache impl packagecache load packagecache java more caused by java io filenotfoundexception assets hc nlp mdw versions no such file or directory at java io fileinputstream native method at java io fileinputstream open fileinputstream java at java io 
fileinputstream fileinputstream java at com centurylink mdw util file versionproperties versionproperties java at com centurylink mdw dataaccess file versioncontrolgit getversionprops versioncontrolgit java at com centurylink mdw dataaccess file versioncontrolgit getrevisioninversionsfile versioncontrolgit java at com centurylink mdw dataaccess file versioncontrolgit getrevision versioncontrolgit java at com centurylink mdw dataaccess file packagedir getassetfile packagedir java at com centurylink mdw dataaccess file packagedir getassetfile packagedir java at com centurylink mdw dataaccess file loaderpersistervcs loadprocesses loaderpersistervcs java at com centurylink mdw dataaccess file loaderpersistervcs loadpackage loaderpersistervcs java at com centurylink mdw dataaccess file loaderpersistervcs getpackagelist loaderpersistervcs java more
1
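The record above ends in a `FileNotFoundException` for a package's `.mdw/versions` metadata file, which aborts the whole MDW startup. A defensive loader that treats a missing versions file as "unversioned package" instead of a fatal error can be sketched in Python — this is a generic illustration of the pattern, not MDW's actual Java code, and the key=value property format is an assumption:

```python
from pathlib import Path

def load_versions(pkg_dir: str) -> dict:
    """Read a package's .mdw/versions properties file (asset -> version).

    Returns an empty mapping instead of raising when the file is missing,
    so a package without version metadata does not abort startup.
    """
    versions_file = Path(pkg_dir) / ".mdw" / "versions"
    if not versions_file.is_file():
        return {}  # missing metadata: treat the package as unversioned
    versions = {}
    for line in versions_file.read_text().splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blanks and comments
        key, _, value = line.partition("=")
        versions[key.strip()] = value.strip()
    return versions
```

The point of the sketch is the early `return {}`: the stack trace shows the exception propagating all the way from `VersionProperties` up through `PackageCache.load`, so handling the absent file at the lowest level keeps one broken package from blocking the server.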
68,278
28,311,346,731
IssuesEvent
2023-04-10 15:40:31
amplication/amplication
https://api.github.com/repos/amplication/amplication
closed
🐛 Bug Report: Onboarding process doesn't show build code and create git organzation.
type: bug epic: Service Creation
### What happened? When i start the onboarding process, I don't see the option to create a new Git organization and the build code process. ### What you expected to happen I can connect to new Git organizations and see the code build process. ### How to reproduce Assign with a new User => Go over the onboarding process and try to create a service. you will not see the "connect to git organization" and the code build process. ### Amplication version 1.4.3 ### Environment _No response_ ### Are you willing to submit PR? Yes I am willing to submit a PR!
1.0
🐛 Bug Report: Onboarding process doesn't show build code and create git organzation. - ### What happened? When i start the onboarding process, I don't see the option to create a new Git organization and the build code process. ### What you expected to happen I can connect to new Git organizations and see the code build process. ### How to reproduce Assign with a new User => Go over the onboarding process and try to create a service. you will not see the "connect to git organization" and the code build process. ### Amplication version 1.4.3 ### Environment _No response_ ### Are you willing to submit PR? Yes I am willing to submit a PR!
non_defect
🐛 bug report onboarding process doesn t show build code and create git organzation what happened when i start the onboarding process i don t see the option to create a new git organization and the build code process what you expected to happen i can connect to new git organizations and see the code build process how to reproduce assign with a new user go over the onboarding process and try to create a service you will not see the connect to git organization and the code build process amplication version environment no response are you willing to submit pr yes i am willing to submit a pr
0
26,315
4,676,685,220
IssuesEvent
2016-10-07 12:50:24
phingofficial/phing-issues-test
https://api.github.com/repos/phingofficial/phing-issues-test
opened
Missing the "else" part ... (Trac #18)
defect Incomplete Migration Migrated from Trac
Migrated from https://www.phing.info/trac/ticket/18 ```json { "status": "closed", "changetime": "2006-04-30T13:42:24", "description": "Hi folks,\n\ni saw this on some of my code reports this morning. Look at the folling if statment:\n\nif (x > 0) {\n //foo \n} else {\n // bar\n}\n\nThe code coverage report tells me that it handled the foo-part and the bar-part and also the if-part but tells me, it doesn't cover the else-part. This will lead to wrong coverage outputs even when it is summed up.\n\nThanks in advance\n\nNorman\n\n\n\n", "reporter": "norman@sefiroth.de", "cc": "", "resolution": "invalid", "_ts": "1146404544000000", "component": "", "summary": "Missing the \"else\" part ...", "priority": "minor", "keywords": "Coverage Report, missing else part", "version": "2.2.0RC1", "time": "2006-03-14T08:35:23", "milestone": "2.2.0", "owner": "", "type": "defect" } ```
1.0
Missing the "else" part ... (Trac #18) - Migrated from https://www.phing.info/trac/ticket/18 ```json { "status": "closed", "changetime": "2006-04-30T13:42:24", "description": "Hi folks,\n\ni saw this on some of my code reports this morning. Look at the folling if statment:\n\nif (x > 0) {\n //foo \n} else {\n // bar\n}\n\nThe code coverage report tells me that it handled the foo-part and the bar-part and also the if-part but tells me, it doesn't cover the else-part. This will lead to wrong coverage outputs even when it is summed up.\n\nThanks in advance\n\nNorman\n\n\n\n", "reporter": "norman@sefiroth.de", "cc": "", "resolution": "invalid", "_ts": "1146404544000000", "component": "", "summary": "Missing the \"else\" part ...", "priority": "minor", "keywords": "Coverage Report, missing else part", "version": "2.2.0RC1", "time": "2006-03-14T08:35:23", "milestone": "2.2.0", "owner": "", "type": "defect" } ```
defect
missing the else part trac migrated from json status closed changetime description hi folks n ni saw this on some of my code reports this morning look at the folling if statment n nif x n foo n else n bar n n nthe code coverage report tells me that it handled the foo part and the bar part and also the if part but tells me it doesn t cover the else part this will lead to wrong coverage outputs even when it is summed up n nthanks in advance n nnorman n n n n reporter norman sefiroth de cc resolution invalid ts component summary missing the else part priority minor keywords coverage report missing else part version time milestone owner type defect
1
9,918
8,246,057,747
IssuesEvent
2018-09-11 11:44:09
dfds/blaster
https://api.github.com/repos/dfds/blaster
closed
SSO to our services and APIs
Infrastructure enhancement
When a team is created in selfservice it should automatic have SSO to all other services Policy server structure https://wiki.build.dfds.com/infrastructure/access-flow - [ ] Talk to platform team - [ ] Look into Policy server - [ ] Look into Open policy - [ ] Create our "hello world in policy land" https://aws.amazon.com/cloud-directory/ https://aws.amazon.com/organizations/?nc2=h_m1 https://aws.amazon.com/cognito/?nc2=h_m1 https://aws.amazon.com/single-sign-on/?nc2=h_m1
1.0
SSO to our services and APIs - When a team is created in selfservice it should automatic have SSO to all other services Policy server structure https://wiki.build.dfds.com/infrastructure/access-flow - [ ] Talk to platform team - [ ] Look into Policy server - [ ] Look into Open policy - [ ] Create our "hello world in policy land" https://aws.amazon.com/cloud-directory/ https://aws.amazon.com/organizations/?nc2=h_m1 https://aws.amazon.com/cognito/?nc2=h_m1 https://aws.amazon.com/single-sign-on/?nc2=h_m1
non_defect
sso to our services and apis when a team is created in selfservice it should automatic have sso to all other services policy server structure talk to platform team look into policy server look into open policy create our hello world in policy land
0
53,091
13,260,881,928
IssuesEvent
2020-08-20 18:55:28
icecube-trac/tix4
https://api.github.com/repos/icecube-trac/tix4
closed
pythia6 Portfile: make gfortran the default compiler (Trac #684)
Migrated from Trac defect tools/ports
pythia6 tends to fail to link against libraries generated by modern compilers when compiled with f77. Unfortunately gnu make uses f77 as the default for $FC. This patch changes the Portfile to force FC=gfortran in case FC is not set to anything else. <details> <summary><em>Migrated from <a href="https://code.icecube.wisc.edu/projects/icecube/ticket/684">https://code.icecube.wisc.edu/projects/icecube/ticket/684</a>, reported by claudio.kopperand owned by nega</em></summary> <p> ```json { "status": "closed", "changetime": "2012-06-03T01:37:50", "_ts": "1338687470000000", "description": "pythia6 tends to fail to link against libraries generated by modern compilers when compiled with f77. Unfortunately gnu make uses f77 as the default for $FC. This patch changes the Portfile to force FC=gfortran in case FC is not set to anything else.", "reporter": "claudio.kopper", "cc": "", "resolution": "fixed", "time": "2012-06-02T23:02:35", "component": "tools/ports", "summary": "pythia6 Portfile: make gfortran the default compiler", "priority": "normal", "keywords": "pythia gfortran f77", "milestone": "", "owner": "nega", "type": "defect" } ``` </p> </details>
1.0
pythia6 Portfile: make gfortran the default compiler (Trac #684) - pythia6 tends to fail to link against libraries generated by modern compilers when compiled with f77. Unfortunately gnu make uses f77 as the default for $FC. This patch changes the Portfile to force FC=gfortran in case FC is not set to anything else. <details> <summary><em>Migrated from <a href="https://code.icecube.wisc.edu/projects/icecube/ticket/684">https://code.icecube.wisc.edu/projects/icecube/ticket/684</a>, reported by claudio.kopperand owned by nega</em></summary> <p> ```json { "status": "closed", "changetime": "2012-06-03T01:37:50", "_ts": "1338687470000000", "description": "pythia6 tends to fail to link against libraries generated by modern compilers when compiled with f77. Unfortunately gnu make uses f77 as the default for $FC. This patch changes the Portfile to force FC=gfortran in case FC is not set to anything else.", "reporter": "claudio.kopper", "cc": "", "resolution": "fixed", "time": "2012-06-02T23:02:35", "component": "tools/ports", "summary": "pythia6 Portfile: make gfortran the default compiler", "priority": "normal", "keywords": "pythia gfortran f77", "milestone": "", "owner": "nega", "type": "defect" } ``` </p> </details>
defect
portfile make gfortran the default compiler trac tends to fail to link against libraries generated by modern compilers when compiled with unfortunately gnu make uses as the default for fc this patch changes the portfile to force fc gfortran in case fc is not set to anything else migrated from json status closed changetime ts description tends to fail to link against libraries generated by modern compilers when compiled with unfortunately gnu make uses as the default for fc this patch changes the portfile to force fc gfortran in case fc is not set to anything else reporter claudio kopper cc resolution fixed time component tools ports summary portfile make gfortran the default compiler priority normal keywords pythia gfortran milestone owner nega type defect
1
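The Portfile fix in the record above boils down to "default `FC` to gfortran without clobbering an explicit choice". The same only-if-unset pattern can be sketched in Python, as a build-driver script might apply it before invoking make — illustrative only; the actual fix lives in the Portfile, equivalent to `FC ?= gfortran` in Makefile syntax:

```python
def default_fortran_compiler(env: dict) -> dict:
    """Return a copy of env with FC defaulting to gfortran.

    An FC already set by the caller is respected; only the absent or
    empty case falls back, mirroring `FC ?= gfortran` in a Makefile.
    """
    env = dict(env)  # do not mutate the caller's mapping
    if not env.get("FC"):
        env["FC"] = "gfortran"
    return env
```

Treating the empty string the same as "unset" matters here: GNU make would otherwise silently pick `f77`, which is exactly the linkage failure the ticket describes.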
371,125
25,938,625,322
IssuesEvent
2022-12-16 16:13:28
Sharonina/DEV002-social-network
https://api.github.com/repos/Sharonina/DEV002-social-network
opened
Crear prototipo Mobile en Figma
documentation
- [ ] Crear Figma - [ ] Unir todas las participantes al Figma - [ ] Terminar el prototipo
1.0
Crear prototipo Mobile en Figma - - [ ] Crear Figma - [ ] Unir todas las participantes al Figma - [ ] Terminar el prototipo
non_defect
crear prototipo mobile en figma crear figma unir todas las participantes al figma terminar el prototipo
0
45,473
12,815,022,552
IssuesEvent
2020-07-04 22:50:51
coin-or/pulp
https://api.github.com/repos/coin-or/pulp
closed
Memory leak when using PYGLPK in a loop
Priority-Medium Type-Defect auto-migrated
``` The attached Python script will solve a large number of (identical) problems in a loop using PYGLPK (before you ask - yes, unfortunately I have a use case for this ;]). Even though no intermediate results of any kind are stored, you can watch memory usage gradually rise. I would expect all data to be garbage collected instead, and for memory to remain constant. Suggested fix: the `PYGLPK` class calls `glpk.glp_create_prob()` in its `buildSolverModel` method. I suspect this allocates memory through the GLPK Python binding which isn't automatically cleared. Preliminary tests show that memory remains constant if I call `glpk.glp_delete_prob()` on the GLPK model representation (i.e. problem.solverModel) after solving has taken place. I would therefore suggest storing the `solverModel` on the PYGLPK instance also (it's currently only stored on the problem), adding a call to `glp_delete_prob` in the its `__del__` method such that this memory is cleared when the problem object is garbage collected. ``` Original issue reported on code.google.com by `ell...@gmail.com` on 11 Dec 2014 at 11:57 Attachments: - [memleak.py](https://storage.googleapis.com/google-code-attachments/pulp-or/issue-67/comment-0/memleak.py)
1.0
Memory leak when using PYGLPK in a loop - ``` The attached Python script will solve a large number of (identical) problems in a loop using PYGLPK (before you ask - yes, unfortunately I have a use case for this ;]). Even though no intermediate results of any kind are stored, you can watch memory usage gradually rise. I would expect all data to be garbage collected instead, and for memory to remain constant. Suggested fix: the `PYGLPK` class calls `glpk.glp_create_prob()` in its `buildSolverModel` method. I suspect this allocates memory through the GLPK Python binding which isn't automatically cleared. Preliminary tests show that memory remains constant if I call `glpk.glp_delete_prob()` on the GLPK model representation (i.e. problem.solverModel) after solving has taken place. I would therefore suggest storing the `solverModel` on the PYGLPK instance also (it's currently only stored on the problem), adding a call to `glp_delete_prob` in the its `__del__` method such that this memory is cleared when the problem object is garbage collected. ``` Original issue reported on code.google.com by `ell...@gmail.com` on 11 Dec 2014 at 11:57 Attachments: - [memleak.py](https://storage.googleapis.com/google-code-attachments/pulp-or/issue-67/comment-0/memleak.py)
defect
memory leak when using pyglpk in a loop the attached python script will solve a large number of identical problems in a loop using pyglpk before you ask yes unfortunately i have a use case for this even though no intermediate results of any kind are stored you can watch memory usage gradually rise i would expect all data to be garbage collected instead and for memory to remain constant suggested fix the pyglpk class calls glpk glp create prob in its buildsolvermodel method i suspect this allocates memory through the glpk python binding which isn t automatically cleared preliminary tests show that memory remains constant if i call glpk glp delete prob on the glpk model representation i e problem solvermodel after solving has taken place i would therefore suggest storing the solvermodel on the pyglpk instance also it s currently only stored on the problem adding a call to glp delete prob in the its del method such that this memory is cleared when the problem object is garbage collected original issue reported on code google com by ell gmail com on dec at attachments
1
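The fix this record proposes — free the natively allocated GLPK problem when the wrapper goes away — is the classic owned-native-resource pattern. A Python sketch using a stand-in allocator (the classes here are placeholders for the real PyGLPK binding, whose `glp_create_prob`/`glp_delete_prob` calls the ticket names):

```python
class NativeProblem:
    """Stand-in for a C-allocated GLPK problem (glp_create_prob)."""
    live = 0  # count of native objects not yet freed

    def __init__(self):
        NativeProblem.live += 1
        self.freed = False

    def delete(self):
        """Stand-in for glp_delete_prob: release the native memory once."""
        if not self.freed:
            self.freed = True
            NativeProblem.live -= 1


class SolverModel:
    """Wrapper that owns the native problem and frees it deterministically."""

    def __init__(self):
        self.problem = NativeProblem()

    def solve(self):
        pass  # solving logic elided

    def close(self):
        self.problem.delete()

    # Support `with SolverModel() as m:` so cleanup survives exceptions too
    def __enter__(self):
        return self

    def __exit__(self, *exc):
        self.close()

    def __del__(self):
        # Safety net mirroring the suggested PYGLPK.__del__ fix: free the
        # native object even if close() was never called explicitly.
        self.close()
```

Storing the native handle on the wrapper (rather than only on the problem object) and guarding `delete` against double-free is what lets the loop in the attached `memleak.py` run with flat memory: each iteration's native allocation is released before the next begins.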
119,500
10,055,802,231
IssuesEvent
2019-07-22 07:34:20
microsoft/AzureStorageExplorer
https://api.github.com/repos/microsoft/AzureStorageExplorer
closed
Remove 'Back'/'Next' buttons on 'Connect to Cosmos DB' dialog
:gear: cosmosdb 🧪 testing
**Storage Explorer Version**: 1.5.0 **Platform/OS Version**: Windows 10/MacOS High Sierra/ Linux Ubuntu 16.04 **Architecture**: ia32/x64 **Build Number**: 20180808.1 **Commit**: 93925d3c **Regression From**: Not a regression #### Steps to Reproduce: #### 1. Apply subscriptions -> Right click 'Cosmos DB Accounts(Preview)' -> Select 'Connect to Cosmos DB...'. 2. Check the popped dialog. #### Expected Experience: #### There is no 'Back'/'Next' buttons. #### Actual Experience: #### An extra 'Back'/'Next' buttons display. ![warning](https://user-images.githubusercontent.com/34729022/43889149-e9c2def0-9bb2-11e8-9cbe-88296595ed4d.png) **More info:** 1. Remove the default displayed warning message **'Connection string should not be empty'**. 2. **'Ok**' should display as '**OK**'.
1.0
Remove 'Back'/'Next' buttons on 'Connect to Cosmos DB' dialog - **Storage Explorer Version**: 1.5.0 **Platform/OS Version**: Windows 10/MacOS High Sierra/ Linux Ubuntu 16.04 **Architecture**: ia32/x64 **Build Number**: 20180808.1 **Commit**: 93925d3c **Regression From**: Not a regression #### Steps to Reproduce: #### 1. Apply subscriptions -> Right click 'Cosmos DB Accounts(Preview)' -> Select 'Connect to Cosmos DB...'. 2. Check the popped dialog. #### Expected Experience: #### There is no 'Back'/'Next' buttons. #### Actual Experience: #### An extra 'Back'/'Next' buttons display. ![warning](https://user-images.githubusercontent.com/34729022/43889149-e9c2def0-9bb2-11e8-9cbe-88296595ed4d.png) **More info:** 1. Remove the default displayed warning message **'Connection string should not be empty'**. 2. **'Ok**' should display as '**OK**'.
non_defect
remove back next buttons on connect to cosmos db dialog storage explorer version platform os version windows macos high sierra linux ubuntu architecture build number commit regression from not a regression steps to reproduce apply subscriptions right click cosmos db accounts preview select connect to cosmos db check the popped dialog expected experience there is no back next buttons actual experience an extra back next buttons display more info remove the default displayed warning message connection string should not be empty ok should display as ok
0
49,520
13,187,225,535
IssuesEvent
2020-08-13 02:44:48
icecube-trac/tix3
https://api.github.com/repos/icecube-trac/tix3
opened
[steamshovel] log viewer setting not saved (Trac #1587)
Incomplete Migration Migrated from Trac combo core defect
<details> <summary><em>Migrated from <a href="https://code.icecube.wisc.edu/ticket/1587">https://code.icecube.wisc.edu/ticket/1587</a>, reported by david.schultz and owned by hdembinski</em></summary> <p> ```json { "status": "closed", "changetime": "2019-02-13T14:11:26", "description": "In window->configuration, there is a setting to \"automatically open log viewer\"\n\nThis doesn't get saved in the session.", "reporter": "david.schultz", "cc": "cweaver", "resolution": "fixed", "_ts": "1550067086520250", "component": "combo core", "summary": "[steamshovel] log viewer setting not saved", "priority": "critical", "keywords": "", "time": "2016-03-16T02:29:29", "milestone": "", "owner": "hdembinski", "type": "defect" } ``` </p> </details>
1.0
[steamshovel] log viewer setting not saved (Trac #1587) - <details> <summary><em>Migrated from <a href="https://code.icecube.wisc.edu/ticket/1587">https://code.icecube.wisc.edu/ticket/1587</a>, reported by david.schultz and owned by hdembinski</em></summary> <p> ```json { "status": "closed", "changetime": "2019-02-13T14:11:26", "description": "In window->configuration, there is a setting to \"automatically open log viewer\"\n\nThis doesn't get saved in the session.", "reporter": "david.schultz", "cc": "cweaver", "resolution": "fixed", "_ts": "1550067086520250", "component": "combo core", "summary": "[steamshovel] log viewer setting not saved", "priority": "critical", "keywords": "", "time": "2016-03-16T02:29:29", "milestone": "", "owner": "hdembinski", "type": "defect" } ``` </p> </details>
defect
log viewer setting not saved trac migrated from json status closed changetime description in window configuration there is a setting to automatically open log viewer n nthis doesn t get saved in the session reporter david schultz cc cweaver resolution fixed ts component combo core summary log viewer setting not saved priority critical keywords time milestone owner hdembinski type defect
1
30,231
6,046,970,181
IssuesEvent
2017-06-12 13:28:38
primefaces/primeng
https://api.github.com/repos/primefaces/primeng
closed
p-autocomplete bug in field based object mapping
defect
There is template code: ``` <span [ngClass]="{'ui-autocomplete ui-widget':true,'ui-autocomplete-dd':dropdown}" [ngStyle]="style" [class]="styleClass"> <input *ngIf="!multiple" #in pInputText type="text" [ngStyle]="inputStyle" [class]="inputStyleClass" [value]="value ? (field ? resolveFieldData(value)||value : value) : null" ``` So if I have object as value and there is field in this object with undefined value then value is set to [value] with result [object Object] as string example: ``` p-autocomplete field="something" value: {something:undefined} ``` Something like: ``` [value]=\"value ? (field ? resolveFieldData(value)||'' : value) : null\" ```
1.0
p-autocomplete bug in field based object mapping - There is template code: ``` <span [ngClass]="{'ui-autocomplete ui-widget':true,'ui-autocomplete-dd':dropdown}" [ngStyle]="style" [class]="styleClass"> <input *ngIf="!multiple" #in pInputText type="text" [ngStyle]="inputStyle" [class]="inputStyleClass" [value]="value ? (field ? resolveFieldData(value)||value : value) : null" ``` So if I have object as value and there is field in this object with undefined value then value is set to [value] with result [object Object] as string example: ``` p-autocomplete field="something" value: {something:undefined} ``` Something like: ``` [value]=\"value ? (field ? resolveFieldData(value)||'' : value) : null\" ```
defect
p autocomplete bug in field based object mapping there is template code input ngif multiple in pinputtext type text inputstyle inputstyleclass value field resolvefielddata value value value null so if i have object as value and there is field in this object with undefined value then value is set to with result as string example p autocomplete field something value something undefined something like value field resolvefielddata value value null
1
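The template bug in the record above is a falsy-fallback problem: when the configured field resolves to `undefined`, `resolveFieldData(value)||value` falls back to the whole object, which the DOM stringifies as `[object Object]`. A Python analogue of the bug and of the suggested `||''` fix (function and field names are illustrative, not PrimeNG's actual implementation):

```python
def resolve_field_data(value, field):
    """Mimic the field lookup: return the named field's value, or None."""
    if isinstance(value, dict):
        return value.get(field)
    return None

def display_buggy(value, field):
    # `or value` falls back to the whole object when the field is unset —
    # the analogue of `resolveFieldData(value)||value` in the template.
    return str(resolve_field_data(value, field) or value)

def display_fixed(value, field):
    # `or ""` keeps the input empty instead of stringifying the object —
    # the analogue of the suggested `resolveFieldData(value)||''`.
    return str(resolve_field_data(value, field) or "")
```

The same caveat applies in both languages: any falsy field value (empty string, `0`) also trips the fallback, which is why an explicit empty-string default is safer than falling back to the object itself.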