Dataset schema (column name, dtype, and observed value stats):

| Column | Dtype | Observed values |
| --- | --- | --- |
| Unnamed: 0 | int64 | 0 to 832k |
| id | float64 | 2.49B to 32.1B |
| type | string | 1 class |
| created_at | string | 19 chars (fixed length) |
| repo | string | 7 to 112 chars |
| repo_url | string | 36 to 141 chars |
| action | string | 3 classes |
| title | string | 1 to 744 chars |
| labels | string | 4 to 574 chars |
| body | string | 9 to 211k chars |
| index | string | 10 classes |
| text_combine | string | 96 to 211k chars |
| label | string | 2 classes |
| text | string | 96 to 188k chars |
| binary_label | int64 | 0 or 1 |
Sample row:

Unnamed: 0 = 17,991
id = 24,010,646,572
type = IssuesEvent
created_at = 2022-09-14 18:29:17
repo = googleapis/repo-automation-bots
repo_url = https://api.github.com/repos/googleapis/repo-automation-bots
action = closed
title = migrate policy bot to yargs
labels = type: process priority: p2
body = It becomes difficult to catch up with meow updates. Related: https://github.com/googleapis/repo-automation-bots/pull/4372#issuecomment-1244453649
label = 1.0
text_combine = migrate policy bot to yargs - It becomes difficult to catch up with meow updates. Related: https://github.com/googleapis/repo-automation-bots/pull/4372#issuecomment-1244453649
index = process
text = migrate policy bot to yargs it becomes difficult to catch up with meow updates related
binary_label = 1
Sample row:

Unnamed: 0 = 9,624
id = 12,562,174,286
type = IssuesEvent
created_at = 2020-06-08 03:24:37
repo = pingcap/tidb
repo_url = https://api.github.com/repos/pingcap/tidb
action = opened
title = Add unit test for Corp Cache in TiDB side
labels = component/coprocessor type/enhancement
body = ## Development Task Currently, Corp Cache has no related unit test in TiDB side. We may need to implement Corp Cache protocol in mocktikv or unistore then write the unit test for it.
label = 1.0
text_combine = Add unit test for Corp Cache in TiDB side - ## Development Task Currently, Corp Cache has no related unit test in TiDB side. We may need to implement Corp Cache protocol in mocktikv or unistore then write the unit test for it.
index = process
text = add unit test for corp cache in tidb side development task currently corp cache has no related unit test in tidb side we may need to implement corp cache protocol in mocktikv or unistore then write the unit test for it
binary_label = 1
Sample row:

Unnamed: 0 = 1,093
id = 3,561,210,242
type = IssuesEvent
created_at = 2016-01-23 17:01:29
repo = csscomb/csscomb.js
repo_url = https://api.github.com/repos/csscomb/csscomb.js
action = closed
title = Sass: sorting bug with mixins with content
labels = bug gonzales preprocessors
body =
csscomb 3.0.0-5, note how include breakpoint was moved and content left in original place: Input file ``` html, body { height: 100%; } body { @include font-size(13); } .center-box { margin-left: auto; margin-right: auto; max-width: $page-width; .lt-ie9 & { width: $page-width; } @include breakpoint(max-width ($page-width - 1)) { margin-left: 15px; margin-right: 15px; } } #main-box { min-height: 100%; } /* Header ------------------------------------ */ #header { } /* Content ------------------------------------ */ #content-box { } /* Footer ------------------------------------ */ $footerHeight: 100px; #padder { padding: 0 0 $footerHeight; height: 0; overflow: hidden; } #footer { margin-top: -$footerHeight; height: $footerHeight; } ``` Output ``` html, body { height: 100%; } body { @include font-size(13); } .center-box { @include breakpoint; margin-right: auto; margin-left: auto; max-width: $page-width; .lt-ie9 & { width: $page-width; }(max-width ($page-width - 1)) { margin-right: 15px; margin-left: 15px; } } #main-box { min-height: 100%; } /* Header ------------------------------------ */ #header { } /* Content ------------------------------------ */ #content-box { } /* Footer ------------------------------------ */ $footerHeight: 100px; #padder { padding: 0 0 $footerHeight; height: 0; overflow: hidden; } #footer { margin-top: -$footerHeight; height: $footerHeight; } ``` Config: ``` { "exclude": [ ".git/**", ".hg/**", "node_modules/**" ], "always-semicolon": true, "color-case": "lower", "block-indent": "\t", "color-shorthand": true, "element-case": "lower", "leading-zero": true, "quotes": "single", "space-before-colon": "", "space-after-colon": " ", "space-before-combinator": " ", "space-after-combinator": " ", "space-between-declarations": "\n", "space-before-opening-brace": " ", "space-after-opening-brace": "\n", "space-before-selector-delimiter": "", "space-before-closing-brace": "\n", "strip-spaces": true, "unitless-zero": true, "sort-order-fallback": "abc", 
"sort-order": [ [ "$variables", "$include" ], [ "content", "position", "z-index", "top", "right", "bottom", "left", "margin", "margin-top", "margin-right", "margin-bottom", "margin-left", "border", "border-collapse", "border-width", "border-style", "border-color", "border-top", "border-top-width", "border-top-style", "border-top-color", "border-right", "border-right-width", "border-right-style", "border-right-color", "border-bottom", "border-bottom-width", "border-bottom-style", "border-bottom-color", "border-left", "border-left-width", "border-left-style", "border-left-color", "padding", "padding-top", "padding-right", "padding-bottom", "padding-left", "-webkit-box-sizing", "-moz-box-sizing", "box-sizing", "width", "min-width", "max-width", "height", "min-height", "max-height", "display", "visibility", "float", "clear", "overflow", "overflow-x", "overflow-y", "-ms-overflow-x", "-ms-overflow-y", "-webkit-overflow-scrolling", "clip", "zoom", "flex-direction", "flex-order", "flex-pack", "flex-align", "table-layout", "empty-cells", "caption-side", "border-spacing", "border-collapse", "list-style", "list-style-position", "list-style-type", "list-style-image", "background", "filter:progid:DXImageTransform.Microsoft.AlphaImageLoader", "background-color", "background-image", "background-repeat", "background-attachment", "background-position", "background-position-x", "-ms-background-position-x", "background-position-y", "-ms-background-position-y", "-webkit-background-clip", "-moz-background-clip", "background-clip", "background-origin", "-webkit-background-size", "-moz-background-size", "-o-background-size", "background-size", "color", "font", "font-family", "font-size", "font-weight", "font-style", "font-variant", "font-size-adjust", "font-stretch", "font-effect", "font-emphasize", "font-emphasize-position", "font-emphasize-style", "font-smooth", "line-height", "text-align", "-webkit-text-align-last", "-moz-text-align-last", "-ms-text-align-last", "text-align-last", 
"vertical-align", "white-space", "text-decoration", "text-emphasis", "text-emphasis-color", "text-emphasis-style", "text-emphasis-position", "text-indent", "-ms-text-justify", "text-justify", "text-transform", "letter-spacing", "word-spacing", "-ms-writing-mode", "text-outline", "text-transform", "text-wrap", "text-overflow", "-ms-text-overflow", "text-overflow-ellipsis", "text-overflow-mode", "-ms-word-wrap", "word-wrap", "word-break", "-ms-word-break", "-moz-tab-size", "-o-tab-size", "tab-size", "-webkit-hyphens", "-moz-hyphens", "hyphens", "quotes", "counter-reset", "counter-increment", "resize", "cursor", "pointer-events", "-webkit-user-select", "-moz-user-select", "-ms-user-select", "user-select", "nav-index", "nav-up", "nav-right", "nav-down", "nav-left", "opacity", "filter:progid:DXImageTransform.Microsoft.Alpha(Opacity", "-ms-filter:\\'progid:DXImageTransform.Microsoft.Alpha", "-ms-interpolation-mode", "-webkit-border-radius", "-moz-border-radius", "border-radius", "-webkit-border-top-left-radius", "-moz-border-radius-topleft", "border-top-left-radius", "-webkit-border-top-right-radius", "-moz-border-radius-topright", "border-top-right-radius", "-webkit-border-bottom-right-radius", "-moz-border-radius-bottomright", "border-bottom-right-radius", "-webkit-border-bottom-left-radius", "-moz-border-radius-bottomleft", "border-bottom-left-radius", "-webkit-border-image", "-moz-border-image", "-o-border-image", "border-image", "-webkit-border-image-source", "-moz-border-image-source", "-o-border-image-source", "border-image-source", "-webkit-border-image-slice", "-moz-border-image-slice", "-o-border-image-slice", "border-image-slice", "-webkit-border-image-width", "-moz-border-image-width", "-o-border-image-width", "border-image-width", "-webkit-border-image-outset", "-moz-border-image-outset", "-o-border-image-outset", "border-image-outset", "-webkit-border-image-repeat", "-moz-border-image-repeat", "-o-border-image-repeat", "border-image-repeat", "outline", 
"outline-width", "outline-style", "outline-color", "outline-offset", "box-decoration-break", "-webkit-box-shadow", "-moz-box-shadow", "box-shadow", "-webkit-box-shadow", "-moz-box-shadow", "box-shadow", "-webkit-box-shadow", "-moz-box-shadow", "box-shadow", "-webkit-box-shadow", "-moz-box-shadow", "box-shadow", "filter:progid:DXImageTransform.Microsoft.gradient", "-ms-filter:\\'progid:DXImageTransform.Microsoft.gradient", "text-shadow", "-webkit-transition", "-moz-transition", "-ms-transition", "-o-transition", "transition", "-webkit-transition-delay", "-moz-transition-delay", "-ms-transition-delay", "-o-transition-delay", "transition-delay", "-webkit-transition-timing-function", "-moz-transition-timing-function", "-ms-transition-timing-function", "-o-transition-timing-function", "transition-timing-function", "-webkit-transition-duration", "-moz-transition-duration", "-ms-transition-duration", "-o-transition-duration", "transition-duration", "-webkit-transition-property", "-moz-transition-property", "-ms-transition-property", "-o-transition-property", "transition-property", "-webkit-transform", "-moz-transform", "-ms-transform", "-o-transform", "transform", "-webkit-transform-origin", "-moz-transform-origin", "-ms-transform-origin", "-o-transform-origin", "transform-origin", "-webkit-animation", "-moz-animation", "-ms-animation", "-o-animation", "animation", "-webkit-animation-name", "-moz-animation-name", "-ms-animation-name", "-o-animation-name", "animation-name", "-webkit-animation-duration", "-moz-animation-duration", "-ms-animation-duration", "-o-animation-duration", "animation-duration", "-webkit-animation-play-state", "-moz-animation-play-state", "-ms-animation-play-state", "-o-animation-play-state", "animation-play-state", "-webkit-animation-timing-function", "-moz-animation-timing-function", "-ms-animation-timing-function", "-o-animation-timing-function", "animation-timing-function", "-webkit-animation-delay", "-moz-animation-delay", "-ms-animation-delay", 
"-o-animation-delay", "animation-delay", "-webkit-animation-iteration-count", "-moz-animation-iteration-count", "-ms-animation-iteration-count", "-o-animation-iteration-count", "animation-iteration-count", "-webkit-animation-iteration-count", "-moz-animation-iteration-count", "-ms-animation-iteration-count", "-o-animation-iteration-count", "animation-iteration-count", "-webkit-animation-direction", "-moz-animation-direction", "-ms-animation-direction", "-o-animation-direction", "animation-direction" ] ] } ``` <bountysource-plugin> --- Want to back this issue? **[Post a bounty on it!](https://www.bountysource.com/issues/3321043-sass-sorting-bug-with-mixins-with-content?utm_campaign=plugin&utm_content=tracker%2F214563&utm_medium=issues&utm_source=github)** We accept bounties via [Bountysource](https://www.bountysource.com/?utm_campaign=plugin&utm_content=tracker%2F214563&utm_medium=issues&utm_source=github). </bountysource-plugin>
label = 1.0
text_combine =
Sass: sorting bug with mixins with content - csscomb 3.0.0-5, note how include breakpoint was moved and content left in original place: Input file ``` html, body { height: 100%; } body { @include font-size(13); } .center-box { margin-left: auto; margin-right: auto; max-width: $page-width; .lt-ie9 & { width: $page-width; } @include breakpoint(max-width ($page-width - 1)) { margin-left: 15px; margin-right: 15px; } } #main-box { min-height: 100%; } /* Header ------------------------------------ */ #header { } /* Content ------------------------------------ */ #content-box { } /* Footer ------------------------------------ */ $footerHeight: 100px; #padder { padding: 0 0 $footerHeight; height: 0; overflow: hidden; } #footer { margin-top: -$footerHeight; height: $footerHeight; } ``` Output ``` html, body { height: 100%; } body { @include font-size(13); } .center-box { @include breakpoint; margin-right: auto; margin-left: auto; max-width: $page-width; .lt-ie9 & { width: $page-width; }(max-width ($page-width - 1)) { margin-right: 15px; margin-left: 15px; } } #main-box { min-height: 100%; } /* Header ------------------------------------ */ #header { } /* Content ------------------------------------ */ #content-box { } /* Footer ------------------------------------ */ $footerHeight: 100px; #padder { padding: 0 0 $footerHeight; height: 0; overflow: hidden; } #footer { margin-top: -$footerHeight; height: $footerHeight; } ``` Config: ``` { "exclude": [ ".git/**", ".hg/**", "node_modules/**" ], "always-semicolon": true, "color-case": "lower", "block-indent": "\t", "color-shorthand": true, "element-case": "lower", "leading-zero": true, "quotes": "single", "space-before-colon": "", "space-after-colon": " ", "space-before-combinator": " ", "space-after-combinator": " ", "space-between-declarations": "\n", "space-before-opening-brace": " ", "space-after-opening-brace": "\n", "space-before-selector-delimiter": "", "space-before-closing-brace": "\n", "strip-spaces": true, 
"unitless-zero": true, "sort-order-fallback": "abc", "sort-order": [ [ "$variables", "$include" ], [ "content", "position", "z-index", "top", "right", "bottom", "left", "margin", "margin-top", "margin-right", "margin-bottom", "margin-left", "border", "border-collapse", "border-width", "border-style", "border-color", "border-top", "border-top-width", "border-top-style", "border-top-color", "border-right", "border-right-width", "border-right-style", "border-right-color", "border-bottom", "border-bottom-width", "border-bottom-style", "border-bottom-color", "border-left", "border-left-width", "border-left-style", "border-left-color", "padding", "padding-top", "padding-right", "padding-bottom", "padding-left", "-webkit-box-sizing", "-moz-box-sizing", "box-sizing", "width", "min-width", "max-width", "height", "min-height", "max-height", "display", "visibility", "float", "clear", "overflow", "overflow-x", "overflow-y", "-ms-overflow-x", "-ms-overflow-y", "-webkit-overflow-scrolling", "clip", "zoom", "flex-direction", "flex-order", "flex-pack", "flex-align", "table-layout", "empty-cells", "caption-side", "border-spacing", "border-collapse", "list-style", "list-style-position", "list-style-type", "list-style-image", "background", "filter:progid:DXImageTransform.Microsoft.AlphaImageLoader", "background-color", "background-image", "background-repeat", "background-attachment", "background-position", "background-position-x", "-ms-background-position-x", "background-position-y", "-ms-background-position-y", "-webkit-background-clip", "-moz-background-clip", "background-clip", "background-origin", "-webkit-background-size", "-moz-background-size", "-o-background-size", "background-size", "color", "font", "font-family", "font-size", "font-weight", "font-style", "font-variant", "font-size-adjust", "font-stretch", "font-effect", "font-emphasize", "font-emphasize-position", "font-emphasize-style", "font-smooth", "line-height", "text-align", "-webkit-text-align-last", 
"-moz-text-align-last", "-ms-text-align-last", "text-align-last", "vertical-align", "white-space", "text-decoration", "text-emphasis", "text-emphasis-color", "text-emphasis-style", "text-emphasis-position", "text-indent", "-ms-text-justify", "text-justify", "text-transform", "letter-spacing", "word-spacing", "-ms-writing-mode", "text-outline", "text-transform", "text-wrap", "text-overflow", "-ms-text-overflow", "text-overflow-ellipsis", "text-overflow-mode", "-ms-word-wrap", "word-wrap", "word-break", "-ms-word-break", "-moz-tab-size", "-o-tab-size", "tab-size", "-webkit-hyphens", "-moz-hyphens", "hyphens", "quotes", "counter-reset", "counter-increment", "resize", "cursor", "pointer-events", "-webkit-user-select", "-moz-user-select", "-ms-user-select", "user-select", "nav-index", "nav-up", "nav-right", "nav-down", "nav-left", "opacity", "filter:progid:DXImageTransform.Microsoft.Alpha(Opacity", "-ms-filter:\\'progid:DXImageTransform.Microsoft.Alpha", "-ms-interpolation-mode", "-webkit-border-radius", "-moz-border-radius", "border-radius", "-webkit-border-top-left-radius", "-moz-border-radius-topleft", "border-top-left-radius", "-webkit-border-top-right-radius", "-moz-border-radius-topright", "border-top-right-radius", "-webkit-border-bottom-right-radius", "-moz-border-radius-bottomright", "border-bottom-right-radius", "-webkit-border-bottom-left-radius", "-moz-border-radius-bottomleft", "border-bottom-left-radius", "-webkit-border-image", "-moz-border-image", "-o-border-image", "border-image", "-webkit-border-image-source", "-moz-border-image-source", "-o-border-image-source", "border-image-source", "-webkit-border-image-slice", "-moz-border-image-slice", "-o-border-image-slice", "border-image-slice", "-webkit-border-image-width", "-moz-border-image-width", "-o-border-image-width", "border-image-width", "-webkit-border-image-outset", "-moz-border-image-outset", "-o-border-image-outset", "border-image-outset", "-webkit-border-image-repeat", 
"-moz-border-image-repeat", "-o-border-image-repeat", "border-image-repeat", "outline", "outline-width", "outline-style", "outline-color", "outline-offset", "box-decoration-break", "-webkit-box-shadow", "-moz-box-shadow", "box-shadow", "-webkit-box-shadow", "-moz-box-shadow", "box-shadow", "-webkit-box-shadow", "-moz-box-shadow", "box-shadow", "-webkit-box-shadow", "-moz-box-shadow", "box-shadow", "filter:progid:DXImageTransform.Microsoft.gradient", "-ms-filter:\\'progid:DXImageTransform.Microsoft.gradient", "text-shadow", "-webkit-transition", "-moz-transition", "-ms-transition", "-o-transition", "transition", "-webkit-transition-delay", "-moz-transition-delay", "-ms-transition-delay", "-o-transition-delay", "transition-delay", "-webkit-transition-timing-function", "-moz-transition-timing-function", "-ms-transition-timing-function", "-o-transition-timing-function", "transition-timing-function", "-webkit-transition-duration", "-moz-transition-duration", "-ms-transition-duration", "-o-transition-duration", "transition-duration", "-webkit-transition-property", "-moz-transition-property", "-ms-transition-property", "-o-transition-property", "transition-property", "-webkit-transform", "-moz-transform", "-ms-transform", "-o-transform", "transform", "-webkit-transform-origin", "-moz-transform-origin", "-ms-transform-origin", "-o-transform-origin", "transform-origin", "-webkit-animation", "-moz-animation", "-ms-animation", "-o-animation", "animation", "-webkit-animation-name", "-moz-animation-name", "-ms-animation-name", "-o-animation-name", "animation-name", "-webkit-animation-duration", "-moz-animation-duration", "-ms-animation-duration", "-o-animation-duration", "animation-duration", "-webkit-animation-play-state", "-moz-animation-play-state", "-ms-animation-play-state", "-o-animation-play-state", "animation-play-state", "-webkit-animation-timing-function", "-moz-animation-timing-function", "-ms-animation-timing-function", "-o-animation-timing-function", 
"animation-timing-function", "-webkit-animation-delay", "-moz-animation-delay", "-ms-animation-delay", "-o-animation-delay", "animation-delay", "-webkit-animation-iteration-count", "-moz-animation-iteration-count", "-ms-animation-iteration-count", "-o-animation-iteration-count", "animation-iteration-count", "-webkit-animation-iteration-count", "-moz-animation-iteration-count", "-ms-animation-iteration-count", "-o-animation-iteration-count", "animation-iteration-count", "-webkit-animation-direction", "-moz-animation-direction", "-ms-animation-direction", "-o-animation-direction", "animation-direction" ] ] } ``` <bountysource-plugin> --- Want to back this issue? **[Post a bounty on it!](https://www.bountysource.com/issues/3321043-sass-sorting-bug-with-mixins-with-content?utm_campaign=plugin&utm_content=tracker%2F214563&utm_medium=issues&utm_source=github)** We accept bounties via [Bountysource](https://www.bountysource.com/?utm_campaign=plugin&utm_content=tracker%2F214563&utm_medium=issues&utm_source=github). </bountysource-plugin>
index = process
text =
sass sorting bug with mixins with content csscomb note how include breakpoint was moved and content left in original place input file html body height body include font size center box margin left auto margin right auto max width page width lt width page width include breakpoint max width page width margin left margin right main box min height header header content content box footer footerheight padder padding footerheight height overflow hidden footer margin top footerheight height footerheight output html body height body include font size center box include breakpoint margin right auto margin left auto max width page width lt width page width max width page width margin right margin left main box min height header header content content box footer footerheight padder padding footerheight height overflow hidden footer margin top footerheight height footerheight config exclude git hg node modules always semicolon true color case lower block indent t color shorthand true element case lower leading zero true quotes single space before colon space after colon space before combinator space after combinator space between declarations n space before opening brace space after opening brace n space before selector delimiter space before closing brace n strip spaces true unitless zero true sort order fallback abc sort order variables include content position z index top right bottom left margin margin top margin right margin bottom margin left border border collapse border width border style border color border top border top width border top style border top color border right border right width border right style border right color border bottom border bottom width border bottom style border bottom color border left border left width border left style border left color padding padding top padding right padding bottom padding left webkit box sizing moz box sizing box sizing width min width max width height min height max height display visibility float clear overflow 
overflow x overflow y ms overflow x ms overflow y webkit overflow scrolling clip zoom flex direction flex order flex pack flex align table layout empty cells caption side border spacing border collapse list style list style position list style type list style image background filter progid dximagetransform microsoft alphaimageloader background color background image background repeat background attachment background position background position x ms background position x background position y ms background position y webkit background clip moz background clip background clip background origin webkit background size moz background size o background size background size color font font family font size font weight font style font variant font size adjust font stretch font effect font emphasize font emphasize position font emphasize style font smooth line height text align webkit text align last moz text align last ms text align last text align last vertical align white space text decoration text emphasis text emphasis color text emphasis style text emphasis position text indent ms text justify text justify text transform letter spacing word spacing ms writing mode text outline text transform text wrap text overflow ms text overflow text overflow ellipsis text overflow mode ms word wrap word wrap word break ms word break moz tab size o tab size tab size webkit hyphens moz hyphens hyphens quotes counter reset counter increment resize cursor pointer events webkit user select moz user select ms user select user select nav index nav up nav right nav down nav left opacity filter progid dximagetransform microsoft alpha opacity ms filter progid dximagetransform microsoft alpha ms interpolation mode webkit border radius moz border radius border radius webkit border top left radius moz border radius topleft border top left radius webkit border top right radius moz border radius topright border top right radius webkit border bottom right radius moz border radius bottomright 
border bottom right radius webkit border bottom left radius moz border radius bottomleft border bottom left radius webkit border image moz border image o border image border image webkit border image source moz border image source o border image source border image source webkit border image slice moz border image slice o border image slice border image slice webkit border image width moz border image width o border image width border image width webkit border image outset moz border image outset o border image outset border image outset webkit border image repeat moz border image repeat o border image repeat border image repeat outline outline width outline style outline color outline offset box decoration break webkit box shadow moz box shadow box shadow webkit box shadow moz box shadow box shadow webkit box shadow moz box shadow box shadow webkit box shadow moz box shadow box shadow filter progid dximagetransform microsoft gradient ms filter progid dximagetransform microsoft gradient text shadow webkit transition moz transition ms transition o transition transition webkit transition delay moz transition delay ms transition delay o transition delay transition delay webkit transition timing function moz transition timing function ms transition timing function o transition timing function transition timing function webkit transition duration moz transition duration ms transition duration o transition duration transition duration webkit transition property moz transition property ms transition property o transition property transition property webkit transform moz transform ms transform o transform transform webkit transform origin moz transform origin ms transform origin o transform origin transform origin webkit animation moz animation ms animation o animation animation webkit animation name moz animation name ms animation name o animation name animation name webkit animation duration moz animation duration ms animation duration o animation duration animation 
duration webkit animation play state moz animation play state ms animation play state o animation play state animation play state webkit animation timing function moz animation timing function ms animation timing function o animation timing function animation timing function webkit animation delay moz animation delay ms animation delay o animation delay animation delay webkit animation iteration count moz animation iteration count ms animation iteration count o animation iteration count animation iteration count webkit animation iteration count moz animation iteration count ms animation iteration count o animation iteration count animation iteration count webkit animation direction moz animation direction ms animation direction o animation direction animation direction want to back this issue we accept bounties via
binary_label = 1
Sample row:

Unnamed: 0 = 18,686
id = 24,594,945,554
type = IssuesEvent
created_at = 2022-10-14 07:29:17
repo = GoogleCloudPlatform/fda-mystudies
repo_url = https://api.github.com/repos/GoogleCloudPlatform/fda-mystudies
action = closed
title = [DID] The below entered name is not getting de-identified
labels = Bug P1 Response datastore Process: Fixed Process: Tested dev
body = 1. The below entered name is not getting de-identified 'My name is Naga' 2. Locations not getting redacted ![image](https://user-images.githubusercontent.com/71445210/182391718-43c2065d-4219-4b3c-8b03-542977d350b0.png)
label = 2.0
text_combine = [DID] The below entered name is not getting de-identified - 1. The below entered name is not getting de-identified 'My name is Naga' 2. Locations not getting redacted ![image](https://user-images.githubusercontent.com/71445210/182391718-43c2065d-4219-4b3c-8b03-542977d350b0.png)
index = process
text = the below entered name is not getting de identified the below entered name is not getting de identified my name is naga locations not getting redacted
binary_label = 1
Sample row:

Unnamed: 0 = 14,589
id = 17,703,525,472
type = IssuesEvent
created_at = 2021-08-25 03:12:34
repo = tdwg/dwc
repo_url = https://api.github.com/repos/tdwg/dwc
action = closed
title = Change term: footprintSRS
labels = Term - change Class - Location normative Process - complete
body =
## Change term * Term identifier (URL of the term to change): http://rs.tdwg.org/dwc/terms/#footprintSRS * Justification (why is this change necessary?): currently footprintSRS must be provided using the Well-Known Text (WKT) representation of the Spatial Reference System (SRS) for the footprintWKT. This representation is very long (e.g. ESRI WKT for EPSG:28992), thus making it more prone to being written with error compared to writing the shorter EPSG code. For example, publishers might try to provide the Human-Readable OGC WKT using newline characters which can break tabular data. Furthermore, confusion can arise when deciding to use the OGC WKT or ESRI WKT representation. For all these reasons, allowing the SRS for the footprintWKT to also be provided using the EPSG code (assuming it is present in http://epsg.io) will make it easier for publishers to fill in footprintSRS, and for users of the data to understand it. * Submitter: Kyle Braak I suggest the following changes (leave blank whatever would not change): * Term name (in lowerCamelCase): footprintSRS * Class (e.g. Location, Taxon): Location * Definition of the term: **The ellipsoid, geodetic datum, or spatial reference system (SRS) upon which the geometry given in footprintWKT is based.** * Usage comments (recommendations regarding content, etc.): **Recommended best practice is to use the EPSG code of the SRS, if known. Otherwise use a controlled vocabulary for the name or code of the geodetic datum, if known. Otherwise use a controlled vocabulary for the name or code of the ellipsoid, if known. If none of these is known, use the value `unknown`. It is also permitted to provide the SRS in Well-Known-Text, especially if no EPSG code provides the necessary values for the attributes of the SRS. 
Do not use this term to describe the SRS of the decimalLatitude and decimalLongitude, nor of any verbatim coordinates - use the geodeticDatum and verbatimSRS instead.** * Examples: **`epsg:4326`**, `GEOGCS["GCS_WGS_1984", DATUM["D_WGS_1984", SPHEROID["WGS_1984",6378137,298.257223563]], PRIMEM["Greenwich",0], UNIT["Degree",0.0174532925199433]]` (WKT for the standard WGS84 Spatial Reference System EPSG:4326) * Refines (identifier of the broader term this term refines, if applicable): * Replaces (identifier of the existing term that would be deprecated and replaced by this term, if applicable): http://rs.tdwg.org/dwc/terms/version/footprintSRS-2018-09-06 * ABCD 2.06 (XPATH of the equivalent term in ABCD, if applicable): not in ABCD Original comment: **Term name**: [footprintSRS](http://rs.tdwg.org/dwc/terms/#footprintSRS) **Term change recommendation**: to allow footprintSRS _to also be_ provided using the EPSG code as a controlled vocabulary (e.g. "EPSG:4326"). **Term change justification**: currently footprintSRS must be provided using the Well-Known Text (WKT) representation of the Spatial Reference System (SRS) for the footprintWKT. This representation is very long (e.g. [ESRI WKT for EPSG:28992](http://spatialreference.org/ref/epsg/amersfoort-rd-new/esriwkt/)), thus making it more prone to being written with error compared to writing the shorter EPSG code. For example, publishers might try to provide the [Human-Readable OGC WKT](http://spatialreference.org/ref/epsg/wgs-84/prettywkt/) using newline characters which can break tabular data. Furthermore, confusion can arise when deciding to use the [OGC WKT](http://spatialreference.org/ref/epsg/wgs-84/ogcwkt/) or [ESRI WKT](http://spatialreference.org/ref/epsg/wgs-84/esriwkt/) representation. 
For all these reasons, allowing the SRS for the footprintWKT to also be provided using the EPSG code (assuming it is present in http://epsg.io) will make it easier for publishers to fill in footprintSRS, and for users of the data to understand it.
label = 1.0
text_combine =
Change term: footprintSRS - ## Change term * Term identifier (URL of the term to change): http://rs.tdwg.org/dwc/terms/#footprintSRS * Justification (why is this change necessary?): currently footprintSRS must be provided using the Well-Known Text (WKT) representation of the Spatial Reference System (SRS) for the footprintWKT. This representation is very long (e.g. ESRI WKT for EPSG:28992), thus making it more prone to being written with error compared to writing the shorter EPSG code. For example, publishers might try to provide the Human-Readable OGC WKT using newline characters which can break tabular data. Furthermore, confusion can arise when deciding to use the OGC WKT or ESRI WKT representation. For all these reasons, allowing the SRS for the footprintWKT to also be provided using the EPSG code (assuming it is present in http://epsg.io) will make it easier for publishers to fill in footprintSRS, and for users of the data to understand it. * Submitter: Kyle Braak I suggest the following changes (leave blank whatever would not change): * Term name (in lowerCamelCase): footprintSRS * Class (e.g. Location, Taxon): Location * Definition of the term: **The ellipsoid, geodetic datum, or spatial reference system (SRS) upon which the geometry given in footprintWKT is based.** * Usage comments (recommendations regarding content, etc.): **Recommended best practice is to use the EPSG code of the SRS, if known. Otherwise use a controlled vocabulary for the name or code of the geodetic datum, if known. Otherwise use a controlled vocabulary for the name or code of the ellipsoid, if known. If none of these is known, use the value `unknown`. It is also permitted to provide the SRS in Well-Known-Text, especially if no EPSG code provides the necessary values for the attributes of the SRS. 
Do not use this term to describe the SRS of the decimalLatitude and decimalLongitude, nor of any verbatim coordinates - use the geodeticDatum and verbatimSRS instead.** * Examples: **`epsg:4326`**, `GEOGCS["GCS_WGS_1984", DATUM["D_WGS_1984", SPHEROID["WGS_1984",6378137,298.257223563]], PRIMEM["Greenwich",0], UNIT["Degree",0.0174532925199433]]` (WKT for the standard WGS84 Spatial Reference System EPSG:4326) * Refines (identifier of the broader term this term refines, if applicable): * Replaces (identifier of the existing term that would be deprecated and replaced by this term, if applicable): http://rs.tdwg.org/dwc/terms/version/footprintSRS-2018-09-06 * ABCD 2.06 (XPATH of the equivalent term in ABCD, if applicable): not in ABCD Original comment: **Term name**: [footprintSRS](http://rs.tdwg.org/dwc/terms/#footprintSRS) **Term change recommendation**: to allow footprintSRS _to also be_ provided using the EPSG code as a controlled vocabulary (e.g. "EPSG:4326"). **Term change justification**: currently footprintSRS must be provided using the Well-Known Text (WKT) representation of the Spatial Reference System (SRS) for the footprintWKT. This representation is very long (e.g. [ESRI WKT for EPSG:28992](http://spatialreference.org/ref/epsg/amersfoort-rd-new/esriwkt/)), thus making it more prone to being written with error compared to writing the shorter EPSG code. For example, publishers might try to provide the [Human-Readable OGC WKT](http://spatialreference.org/ref/epsg/wgs-84/prettywkt/) using newline characters which can break tabular data. Furthermore, confusion can arise when deciding to use the [OGC WKT](http://spatialreference.org/ref/epsg/wgs-84/ogcwkt/) or [ESRI WKT](http://spatialreference.org/ref/epsg/wgs-84/esriwkt/) representation. 
For all these reasons, allowing the SRS for the footprintWKT to also be provided using the EPSG code (assuming it is present in http://epsg.io) will make it easier for publishers to fill in footprintSRS, and for users of the data to understand it.
process
change term footprintsrs change term term identifier url of the term to change justification why is this change necessary currently footprintsrs must be provided using the well known text wkt representation of the spatial reference system srs for the footprintwkt this representation is very long e g esri wkt for epsg thus making it more prone to being written with error compared to writing the shorter epsg code for example publishers might try to provide the human readable ogc wkt using newline characters which can break tabular data furthermore confusion can arise when deciding to use the ogc wkt or esri wkt representation for all these reasons allowing the srs for the footprintwkt to also be provided using the epsg code assuming it is present in will make it easier for publishers to fill in footprintsrs and for users of the data to understand it submitter kyle braak i suggest the following changes leave blank whatever would not change term name in lowercamelcase footprintsrs class e g location taxon location definition of the term the ellipsoid geodetic datum or spatial reference system srs upon which the geometry given in footprintwkt is based usage comments recommendations regarding content etc recommended best practice is to use the epsg code of the srs if known otherwise use a controlled vocabulary for the name or code of the geodetic datum if known otherwise use a controlled vocabulary for the name or code of the ellipsoid if known if none of these is known use the value unknown it is also permitted to provide the srs in well known text especially if no epsg code provides the necessary values for the attributes of the srs do not use this term to describe the srs of the decimallatitude and decimallongitude nor of any verbatim coordinates use the geodeticdatum and verbatimsrs instead examples epsg geogcs primem unit wkt for the standard spatial reference system epsg refines identifier of the broader term this term refines if applicable replaces identifier of 
the existing term that would be deprecated and replaced by this term if applicable abcd xpath of the equivalent term in abcd if applicable not in abcd original comment term name term change recommendation to allow footprintsrs to also be provided using the epsg code as a controlled vocabulary e g epsg term change justification currently footprintsrs must be provided using the well known text wkt representation of the spatial reference system srs for the footprintwkt this representation is very long e g thus making it more prone to being written with error compared to writing the shorter epsg code for example publishers might try to provide the using newline characters which can break tabular data furthermore confusion can arise when deciding to use the or representation for all these reasons allowing the srs for the footprintwkt to also be provided using the epsg code assuming it is present in will make it easier for publishers to fill in footprintsrs and for users of the data to understand it
1
1,802
3,123,069,268
IssuesEvent
2015-09-07 02:32:04
orientechnologies/orientdb
https://api.github.com/repos/orientechnologies/orientdb
closed
OrientDB takes 10x more time to run on graph mode
in progress performance waiting reply
I am trying to run our load tests on a 2.1.1 cluster of 3 nodes. The operations / group of operations that usually takes 1.5 sec in standalone mode, is now taking 15sec + in clustered mode. Is this normal?
True
OrientDB takes 10x more time to run on graph mode - I am trying to run our load tests on a 2.1.1 cluster of 3 nodes. The operations / group of operations that usually takes 1.5 sec in standalone mode, is now taking 15sec + in clustered mode. Is this normal?
non_process
orientdb takes more time to run on graph mode i am trying to run our load tests on a cluster of nodes the operations group of operations that usually takes sec in standalone mode is now taking in clustered mode is this normal
0
791,017
27,846,794,151
IssuesEvent
2023-03-20 15:58:28
bounswe/bounswe2023group1
https://api.github.com/repos/bounswe/bounswe2023group1
closed
Standardization of Meeting Notes 6
Priority/Low Type/Wiki Effort/Low State/Assigned
The action items table of meeting notes 6 is not in the standard form.
1.0
Standardization of Meeting Notes 6 - The action items table of meeting notes 6 is not in the standard form.
non_process
standardization of meeting notes the action items table of meeting notes is not in the standard form
0
2,944
5,923,237,816
IssuesEvent
2017-05-23 07:21:39
orbardugo/Hahot-Hameshulash
https://api.github.com/repos/orbardugo/Hahot-Hameshulash
closed
Create ZFR Wiki page
in process
## Checklist: - [x] Version Control - [x] Create prototype #6 - [x] [User Manual](https://github.com/orbardugo/Hahot-Hameshulash/wiki/user-manual) - [x] [Readme - for new development](https://github.com/orbardugo/Hahot-Hameshulash/blob/master/README.md#development) - [x] [Create new labels](https://github.com/orbardugo/Hahot-Hameshulash/labels) - [x] Create schedule tasks - new project - [x] Next Iteration's planning
1.0
Create ZFR Wiki page - ## Checklist: - [x] Version Control - [x] Create prototype #6 - [x] [User Manual](https://github.com/orbardugo/Hahot-Hameshulash/wiki/user-manual) - [x] [Readme - for new development](https://github.com/orbardugo/Hahot-Hameshulash/blob/master/README.md#development) - [x] [Create new labels](https://github.com/orbardugo/Hahot-Hameshulash/labels) - [x] Create schedule tasks - new project - [x] Next Iteration's planning
process
create zfr wiki page checklist version control create prototype create schedule tasks new project next iteration s planning
1
18,231
24,297,915,658
IssuesEvent
2022-09-29 11:40:19
saibrotech/mentoria
https://api.github.com/repos/saibrotech/mentoria
closed
Complete the Santander Code 2022 selection process
processo seletivo
https://letscode.com.br/processos-seletivos/santander-coders Stages - [x] Register for "Web Full Stack" - [x] Complete the online course - 10/09 to 18/09 - [ ] Take the logic test - 21/09 - [ ] Take part in the group dynamic with specialists - 26/09 to 30/09 - [ ] Complete the Coding Tank - 11/10 to 21/10 - [ ] Check the result - 26/10
1.0
Complete the Santander Code 2022 selection process - https://letscode.com.br/processos-seletivos/santander-coders Stages - [x] Register for "Web Full Stack" - [x] Complete the online course - 10/09 to 18/09 - [ ] Take the logic test - 21/09 - [ ] Take part in the group dynamic with specialists - 26/09 to 30/09 - [ ] Complete the Coding Tank - 11/10 to 21/10 - [ ] Check the result - 26/10
process
complete the santander code selection process stages register for web full stack complete the online course take the logic test take part in the group dynamic with specialists complete the coding tank check the result
1
8,012
4,128,258,870
IssuesEvent
2016-06-10 04:52:35
openshift/origin
https://api.github.com/repos/openshift/origin
closed
New build strategy to trigger external CI (Jenkins)
component/build kind/enhancement priority/P2
In order to support things like the fabric8 CD pipelines, we'd like to add a new build strategy that triggers external CI via a webhook. This build strategy would be configured just with a URL & optionally a secret to use to trigger the build in an external CI server. This would also involve adding the creation of a Jenkins job via `oc new-app` if the source repository contains the standard `Jenkinsfile` used for pipeline configuration. /cc @jstrachan @rawlingsj
1.0
New build strategy to trigger external CI (Jenkins) - In order to support things like the fabric8 CD pipelines, we'd like to add a new build strategy that triggers external CI via a webhook. This build strategy would be configured just with a URL & optionally a secret to use to trigger the build in an external CI server. This would also involve adding the creation of a Jenkins job via `oc new-app` if the source repository contains the standard `Jenkinsfile` used for pipeline configuration. /cc @jstrachan @rawlingsj
non_process
new build strategy to trigger external ci jenkins in order to support things like the cd pipelines we d like to add a new build strategy that triggers external ci via a webhook this build strategy would be configured just with a url optionally a secret to use to trigger the build in an external ci server this would also involve adding the creation of a jenkins job via oc new app if the source repository contains the standard jenkinsfile used for pipeline configuration cc jstrachan rawlingsj
0
18,353
24,480,289,074
IssuesEvent
2022-10-08 18:39:40
RobertCraigie/prisma-client-py
https://api.github.com/repos/RobertCraigie/prisma-client-py
closed
Schema copying error when generating to the root directory
bug/2-confirmed kind/bug process/candidate priority/high level/unknown topic: generation
<!-- Thanks for helping us improve Prisma Client Python! 🙏 Please follow the sections in the template and provide as much information as possible about your problem, e.g. by enabling additional logging output. See https://prisma-client-py.readthedocs.io/en/stable/reference/logging/ for how to enable additional logging output. --> ## Bug description <!-- A clear and concise description of what the bug is. --> https://discord.com/channels/933860922039099444/933860923117043718/1027579126703468596 ## How to reproduce <!-- Steps to reproduce the behavior: 1. Go to '...' 2. Change '....' 3. Run '....' 4. See error --> TODO ## Expected behavior <!-- A clear and concise description of what you expected to happen. --> This should not crash. ## Environment & setup <!-- In which environment does the problem occur --> - OS: <!--[e.g. Mac OS, Windows, Debian, CentOS, ...]--> MacOS - Database: <!--[PostgreSQL, MySQL, MariaDB or SQLite]--> SQLite - Python version: <!--[Run `python -V` to see your Python version]--> 3.9
1.0
Schema copying error when generating to the root directory - <!-- Thanks for helping us improve Prisma Client Python! 🙏 Please follow the sections in the template and provide as much information as possible about your problem, e.g. by enabling additional logging output. See https://prisma-client-py.readthedocs.io/en/stable/reference/logging/ for how to enable additional logging output. --> ## Bug description <!-- A clear and concise description of what the bug is. --> https://discord.com/channels/933860922039099444/933860923117043718/1027579126703468596 ## How to reproduce <!-- Steps to reproduce the behavior: 1. Go to '...' 2. Change '....' 3. Run '....' 4. See error --> TODO ## Expected behavior <!-- A clear and concise description of what you expected to happen. --> This should not crash. ## Environment & setup <!-- In which environment does the problem occur --> - OS: <!--[e.g. Mac OS, Windows, Debian, CentOS, ...]--> MacOS - Database: <!--[PostgreSQL, MySQL, MariaDB or SQLite]--> SQLite - Python version: <!--[Run `python -V` to see your Python version]--> 3.9
process
schema copying error when generating to the root directory thanks for helping us improve prisma client python 🙏 please follow the sections in the template and provide as much information as possible about your problem e g by enabling additional logging output see for how to enable additional logging output bug description how to reproduce steps to reproduce the behavior go to change run see error todo expected behavior this should not crash environment setup os macos database sqlite python version
1
12,788
15,167,585,579
IssuesEvent
2021-02-12 18:01:16
wordpress-mobile/gutenberg-mobile
https://api.github.com/repos/wordpress-mobile/gutenberg-mobile
closed
Test plan for 16.7 Beta version of WordPress Android app
release-process
Due to the recent issue regarding the merged [PR](https://github.com/wordpress-mobile/WordPress-Android/pull/13699) into `WordPress-Android` repository, we have to execute a specific test plan to ensure that it didn't break anything. **This plan will be executed only on Android.** - [x] [Writing flow tests](https://github.com/wordpress-mobile/test-cases/tree/trunk/test-cases/gutenberg/writing-flow) - @fluiddot - [x] Random tests from the [sanity check test suites](https://github.com/wordpress-mobile/test-cases/blob/trunk/test-suites/gutenberg/sanity-tests.md) (preferably one test from each section to cover a wide range) @ceyhun - [x] Smoke test the editor flow in general by doing: @fluiddot - Entering exiting the editor - Rotate the device in various cases - Open and dismiss the bottom sheet - Background the app and foreground it again
1.0
Test plan for 16.7 Beta version of WordPress Android app - Due to the recent issue regarding the merged [PR](https://github.com/wordpress-mobile/WordPress-Android/pull/13699) into `WordPress-Android` repository, we have to execute a specific test plan to ensure that it didn't break anything. **This plan will be executed only on Android.** - [x] [Writing flow tests](https://github.com/wordpress-mobile/test-cases/tree/trunk/test-cases/gutenberg/writing-flow) - @fluiddot - [x] Random tests from the [sanity check test suites](https://github.com/wordpress-mobile/test-cases/blob/trunk/test-suites/gutenberg/sanity-tests.md) (preferably one test from each section to cover a wide range) @ceyhun - [x] Smoke test the editor flow in general by doing: @fluiddot - Entering exiting the editor - Rotate the device in various cases - Open and dismiss the bottom sheet - Background the app and foreground it again
process
test plan for beta version of wordpress android app due to the recent issue regarding the merged into wordpress android repository we have to execute a specific test plan to ensure that it didn t break anything this plan will be executed only on android fluiddot random tests from the preferably one test from each section to cover a wide range ceyhun smoke test the editor flow in general by doing fluiddot entering exiting the editor rotate the device in various cases open and dismiss the bottom sheet background the app and foreground it again
1
64,699
6,917,612,577
IssuesEvent
2017-11-29 09:13:08
eclipse/californium
https://api.github.com/repos/eclipse/californium
closed
Can I run "cf-secure" on Android?
bug retest - validate PR
I want to run "cf-secure example" by running Android local server. Is it possible to change "JKS" to "BKS"?
1.0
Can I run "cf-secure" on Android? - I want to run "cf-secure example" by running Android local server. Is it possible to change "JKS" to "BKS"?
non_process
can i run cf secure on android i want to run cf secure example by running android local server is it possible to change jks to bks
0
3,327
6,445,428,179
IssuesEvent
2017-08-13 04:38:21
nodejs/node
https://api.github.com/repos/nodejs/node
closed
invalid floating point uid or gid for spawn/execSync causes uv to assert and abort node
child_process confirmed-bug v4.x v6.x v7.x
* **Version**: 0.12 to v8.0.0-pre ``` > child_process.spawnSync("cat", {uid: 3.5}) node: ../deps/uv/src/unix/core.c:166: uv_close: Assertion `0' failed. zsh: abort (core dumped) ./node ``` Also ``` > child_process.execSync("date", {uid: 3.5}) node: ../deps/uv/src/unix/core.c:161: uv_close: Assertion `0' failed. zsh: abort (core dumped) % ./node --version v8.0.0-pre ``` EDIT: git aborts, too
1.0
invalid floating point uid or gid for spawn/execSync causes uv to assert and abort node - * **Version**: 0.12 to v8.0.0-pre ``` > child_process.spawnSync("cat", {uid: 3.5}) node: ../deps/uv/src/unix/core.c:166: uv_close: Assertion `0' failed. zsh: abort (core dumped) ./node ``` Also ``` > child_process.execSync("date", {uid: 3.5}) node: ../deps/uv/src/unix/core.c:161: uv_close: Assertion `0' failed. zsh: abort (core dumped) % ./node --version v8.0.0-pre ``` EDIT: git aborts, too
process
invalid floating point uid or gid for spawn execsync causes uv to assert and abort node version to pre child process spawnsync cat uid node deps uv src unix core c uv close assertion failed zsh abort core dumped node also child process execsync date uid node deps uv src unix core c uv close assertion failed zsh abort core dumped node version pre edit git aborts too
1
696,126
23,885,740,719
IssuesEvent
2022-09-08 07:31:10
kubernetes/website
https://api.github.com/repos/kubernetes/website
closed
Grammar problem with Kubernetes Components concept
priority/awaiting-more-evidence lifecycle/stale language/en needs-triage
The subtopic for [Addons](https://kubernetes.io/docs/concepts/overview/components/#addons) has a possible grammatical error. The second sentence starts with 'Because'. A likely fix would be to replace 'Because' with 'As'.
1.0
Grammar problem with Kubernetes Components concept - The subtopic for [Addons](https://kubernetes.io/docs/concepts/overview/components/#addons) has a possible grammatical error. The second sentence starts with 'Because'. A likely fix would be to replace 'Because' with 'As'.
non_process
grammar problem with kubernetes components concept the subtopic for has a possible grammatical error the second sentence starts with because a likely fix would be to replace because with as
0
4,617
7,461,450,392
IssuesEvent
2018-03-31 03:04:24
dotnet/corefx
https://api.github.com/repos/dotnet/corefx
closed
NETFX x86 Release Build not running a set of tests
area-System.Diagnostics.Process test bug
Can't workout which ones, but open any recent PR and in the logs will be ``` xUnit.net Console Runner (64-bit Desktop .NET 4.0.30319.42000) Copyright (C) .NET Foundation. usage: xunit.console <assemblyFile> [configFile] [assemblyFile [configFile]...] [options] [reporter] [resultFormat filename [...]] Note: Configuration files must end in .json (for JSON) or .config (for XML) Valid options: -nologo : do not show the copyright message -nocolor : do not output results with colors -noappdomain : do not use app domains to run test code -failskips : convert skipped tests into failures -parallel option : set parallelization based on option : none - turn off all parallelization : collections - only parallelize collections : assemblies - only parallelize assemblies : all - parallelize assemblies & collections -maxthreads count : maximum thread count for collection parallelization : default - run with default (1 thread per CPU thread) : unlimited - run with unbounded thread count : (number) - limit task thread pool size to 'count' -noshadow : do not shadow copy assemblies -wait : wait for input after completion -diagnostics : enable diagnostics messages for all test assemblies -internaldiagnostics : enable internal diagnostics messages for all test assemblies -debug : launch the debugger to debug the tests -serialize : serialize all test cases (for diagnostic purposes only) -trait "name=value" : only run tests with matching name/value traits : if specified more than once, acts as an OR operation -notrait "name=value" : do not run tests with matching name/value traits : if specified more than once, acts as an AND operation -method "name" : run a given test method (should be fully specified; : i.e., 'MyNamespace.MyClass.MyTestMethod') : if specified more than once, acts as an OR operation xUnit.net Console Runner (64-bit Desktop .NET 4.0.30319.42000) -class "name" : run all methods in a given test class (should be fully : specified; i.e., 'MyNamespace.MyClass') : if specified 
more than once, acts as an OR operation -namespace "name" : run all methods in a given namespace (i.e., : 'MyNamespace.MySubNamespace') : if specified more than once, acts as an OR operation -noautoreporters : do not allow reporters to be auto-enabled by environment : (for example, auto-detecting TeamCity or AppVeyor) Result formats: (optional, choose one or more) -xml <filename> : output results to xUnit.net v2 XML file -xmlv1 <filename> : output results to xUnit.net v1 XML file -html <filename> : output results to HTML file -nunit <filename> : output results to NUnit v2.5 XML file ``` So its outputting the usage rather than running
1.0
NETFX x86 Release Build not running a set of tests - Can't workout which ones, but open any recent PR and in the logs will be ``` xUnit.net Console Runner (64-bit Desktop .NET 4.0.30319.42000) Copyright (C) .NET Foundation. usage: xunit.console <assemblyFile> [configFile] [assemblyFile [configFile]...] [options] [reporter] [resultFormat filename [...]] Note: Configuration files must end in .json (for JSON) or .config (for XML) Valid options: -nologo : do not show the copyright message -nocolor : do not output results with colors -noappdomain : do not use app domains to run test code -failskips : convert skipped tests into failures -parallel option : set parallelization based on option : none - turn off all parallelization : collections - only parallelize collections : assemblies - only parallelize assemblies : all - parallelize assemblies & collections -maxthreads count : maximum thread count for collection parallelization : default - run with default (1 thread per CPU thread) : unlimited - run with unbounded thread count : (number) - limit task thread pool size to 'count' -noshadow : do not shadow copy assemblies -wait : wait for input after completion -diagnostics : enable diagnostics messages for all test assemblies -internaldiagnostics : enable internal diagnostics messages for all test assemblies -debug : launch the debugger to debug the tests -serialize : serialize all test cases (for diagnostic purposes only) -trait "name=value" : only run tests with matching name/value traits : if specified more than once, acts as an OR operation -notrait "name=value" : do not run tests with matching name/value traits : if specified more than once, acts as an AND operation -method "name" : run a given test method (should be fully specified; : i.e., 'MyNamespace.MyClass.MyTestMethod') : if specified more than once, acts as an OR operation xUnit.net Console Runner (64-bit Desktop .NET 4.0.30319.42000) -class "name" : run all methods in a given test class (should be fully : 
specified; i.e., 'MyNamespace.MyClass') : if specified more than once, acts as an OR operation -namespace "name" : run all methods in a given namespace (i.e., : 'MyNamespace.MySubNamespace') : if specified more than once, acts as an OR operation -noautoreporters : do not allow reporters to be auto-enabled by environment : (for example, auto-detecting TeamCity or AppVeyor) Result formats: (optional, choose one or more) -xml <filename> : output results to xUnit.net v2 XML file -xmlv1 <filename> : output results to xUnit.net v1 XML file -html <filename> : output results to HTML file -nunit <filename> : output results to NUnit v2.5 XML file ``` So its outputting the usage rather than running
process
netfx release build not running a set of tests can t workout which ones but open any recent pr and in the logs will be xunit net console runner bit desktop net copyright c net foundation usage xunit console note configuration files must end in json for json or config for xml valid options nologo do not show the copyright message nocolor do not output results with colors noappdomain do not use app domains to run test code failskips convert skipped tests into failures parallel option set parallelization based on option none turn off all parallelization collections only parallelize collections assemblies only parallelize assemblies all parallelize assemblies collections maxthreads count maximum thread count for collection parallelization default run with default thread per cpu thread unlimited run with unbounded thread count number limit task thread pool size to count noshadow do not shadow copy assemblies wait wait for input after completion diagnostics enable diagnostics messages for all test assemblies internaldiagnostics enable internal diagnostics messages for all test assemblies debug launch the debugger to debug the tests serialize serialize all test cases for diagnostic purposes only trait name value only run tests with matching name value traits if specified more than once acts as an or operation notrait name value do not run tests with matching name value traits if specified more than once acts as an and operation method name run a given test method should be fully specified i e mynamespace myclass mytestmethod if specified more than once acts as an or operation xunit net console runner bit desktop net class name run all methods in a given test class should be fully specified i e mynamespace myclass if specified more than once acts as an or operation namespace name run all methods in a given namespace i e mynamespace mysubnamespace if specified more than once acts as an or operation noautoreporters do not allow reporters to be auto enabled by environment for 
example auto detecting teamcity or appveyor result formats optional choose one or more xml output results to xunit net xml file output results to xunit net xml file html output results to html file nunit output results to nunit xml file so its outputting the usage rather than running
1
7,538
10,617,478,765
IssuesEvent
2019-10-12 19:23:12
cetic/tsorage
https://api.github.com/repos/cetic/tsorage
closed
Add user and/or token as observation tags
enhancement processing
In a typical use case, a token is used in order to authenticate a submitted message containing new observations. In addition to check whether the message should be accepted or refused, ~~1. The token could be added as one of the dynamic tag associated with the observations belonging to the message.~~ 2. The user associated with the token could be added as one of the dynamic tag associated with the observations belonging to the message. For instance, if a message is submitted with the tagset {"status": "ok"}, then, the tagset should be altered in order to become {"status": "ok", "user_id": "mgoeminne"} If such tag names are already mentioned in the original tagset, their values are replaced in order to reflect the actual token / user ids. The append / update of the token / user ids should be determined by the configuration file of the interface system.
1.0
Add user and/or token as observation tags - In a typical use case, a token is used in order to authenticate a submitted message containing new observations. In addition to check whether the message should be accepted or refused, ~~1. The token could be added as one of the dynamic tag associated with the observations belonging to the message.~~ 2. The user associated with the token could be added as one of the dynamic tag associated with the observations belonging to the message. For instance, if a message is submitted with the tagset {"status": "ok"}, then, the tagset should be altered in order to become {"status": "ok", "user_id": "mgoeminne"} If such tag names are already mentioned in the original tagset, their values are replaced in order to reflect the actual token / user ids. The append / update of the token / user ids should be determined by the configuration file of the interface system.
process
add user and or token as observation tags in a typical use case a token is used in order to authenticate a submitted message containing new observations in addition to check whether the message should be accepted or refused the token could be added as one of the dynamic tag associated with the observations belonging to the message the user associated with the token could be added as one of the dynamic tag associated with the observations belonging to the message for instance if a message is submitted with the tagset status ok then the tagset should be altered in order to become status ok user id mgoeminne if such tag names are already mentioned in the original tagset their values are replaced in order to reflect the actual token user ids the append update of the token user ids should be determined by the configuration file of the interface system
1
4,966
7,806,219,383
IssuesEvent
2018-06-11 13:29:10
cptechinc/soft-dpluso
https://api.github.com/repos/cptechinc/soft-dpluso
closed
Customer Template
PHP Processwire
Break Customer Template and Break it into 3 1. Customer 2. Cust Index 3. Customer Add
1.0
Customer Template - Break Customer Template and Break it into 3 1. Customer 2. Cust Index 3. Customer Add
process
customer template break customer template and break it into customer cust index customer add
1
14,009
16,814,685,329
IssuesEvent
2021-06-17 05:32:01
e4exp/paper_manager_abstract
https://api.github.com/repos/e4exp/paper_manager_abstract
opened
GroupBERT: Enhanced Transformer Architecture with Efficient Grouped Structures
2021 BERT Efficient Natural Language Processing
- https://arxiv.org/abs/2106.05822 - 2021 Attention-based language models have become a key component of state-of-the-art natural language processing systems. However, these models require large amounts of computation due to long training times, dense operations, and huge parameter counts. In this work, we make several changes to the structure of the Transformer layer to achieve a more efficient architecture. First, we add a convolution module to complement the self-attention module, separating the learning of local and global interactions. Second, grouped transformations reduce the computational cost of the dense feed-forward layers and convolutions while preserving the model's expressiveness. We apply the resulting architecture to language representation learning and demonstrate its superior performance compared to BERT models of various scales. Furthermore, we show improved efficiency in terms of both floating-point operations (FLOPs) and training time.
1.0
GroupBERT: Enhanced Transformer Architecture with Efficient Grouped Structures - - https://arxiv.org/abs/2106.05822 - 2021 Attention-based language models have become a key component of state-of-the-art natural language processing systems. However, these models require large amounts of computation due to long training times, dense operations, and huge parameter counts. In this work, we make several changes to the structure of the Transformer layer to achieve a more efficient architecture. First, we add a convolution module to complement the self-attention module, separating the learning of local and global interactions. Second, grouped transformations reduce the computational cost of the dense feed-forward layers and convolutions while preserving the model's expressiveness. We apply the resulting architecture to language representation learning and demonstrate its superior performance compared to BERT models of various scales. Furthermore, we show improved efficiency in terms of both floating-point operations (FLOPs) and training time.
process
groupbert enhanced transformer architecture with efficient grouped structures attention based language models have become a key component of state of the art natural language processing systems however these models require large amounts of computation due to long training times dense operations and huge parameter counts in this work we make several changes to the structure of the transformer layer to achieve a more efficient architecture first we add a convolution module to complement the self attention module separating the learning of local and global interactions second grouped transformations reduce the computational cost of the dense feed forward layers and convolutions while preserving the model s expressiveness we apply the resulting architecture to language representation learning and demonstrate its superior performance compared to bert models of various scales furthermore we show improved efficiency in terms of both floating point operations flops and training time
1
70,706
15,099,069,782
IssuesEvent
2021-02-08 01:20:09
TechnoConserve/personal_website
https://api.github.com/repos/TechnoConserve/personal_website
closed
CVE-2019-8331 (Medium) detected in bootstrap-3.3.7.min.js
security vulnerability
## CVE-2019-8331 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>bootstrap-3.3.7.min.js</b></p></summary> <p>The most popular front-end framework for developing responsive, mobile first projects on the web.</p> <p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/twitter-bootstrap/3.3.7/js/bootstrap.min.js">https://cdnjs.cloudflare.com/ajax/libs/twitter-bootstrap/3.3.7/js/bootstrap.min.js</a></p> <p>Path to dependency file: personal_website/photo_blog/templates/base.html</p> <p>Path to vulnerable library: personal_website/photo_blog/templates/base.html</p> <p> Dependency Hierarchy: - :x: **bootstrap-3.3.7.min.js** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/TechnoConserve/personal_website/commit/ae1b99a0f747fe02c68548331e8e39042b28bc81">ae1b99a0f747fe02c68548331e8e39042b28bc81</a></p> <p>Found in base branch: <b>main</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary> <p> In Bootstrap before 3.4.1 and 4.3.x before 4.3.1, XSS is possible in the tooltip or popover data-template attribute. 
<p>Publish Date: 2019-02-20 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-8331>CVE-2019-8331</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.1</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: Required - Scope: Changed - Impact Metrics: - Confidentiality Impact: Low - Integrity Impact: Low - Availability Impact: None </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://github.com/twbs/bootstrap/pull/28236">https://github.com/twbs/bootstrap/pull/28236</a></p> <p>Release Date: 2019-02-20</p> <p>Fix Resolution: bootstrap - 3.4.1,4.3.1;bootstrap-sass - 3.4.1,4.3.1</p> </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
True
CVE-2019-8331 (Medium) detected in bootstrap-3.3.7.min.js - ## CVE-2019-8331 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>bootstrap-3.3.7.min.js</b></p></summary> <p>The most popular front-end framework for developing responsive, mobile first projects on the web.</p> <p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/twitter-bootstrap/3.3.7/js/bootstrap.min.js">https://cdnjs.cloudflare.com/ajax/libs/twitter-bootstrap/3.3.7/js/bootstrap.min.js</a></p> <p>Path to dependency file: personal_website/photo_blog/templates/base.html</p> <p>Path to vulnerable library: personal_website/photo_blog/templates/base.html</p> <p> Dependency Hierarchy: - :x: **bootstrap-3.3.7.min.js** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/TechnoConserve/personal_website/commit/ae1b99a0f747fe02c68548331e8e39042b28bc81">ae1b99a0f747fe02c68548331e8e39042b28bc81</a></p> <p>Found in base branch: <b>main</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary> <p> In Bootstrap before 3.4.1 and 4.3.x before 4.3.1, XSS is possible in the tooltip or popover data-template attribute. 
<p>Publish Date: 2019-02-20 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-8331>CVE-2019-8331</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.1</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: Required - Scope: Changed - Impact Metrics: - Confidentiality Impact: Low - Integrity Impact: Low - Availability Impact: None </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://github.com/twbs/bootstrap/pull/28236">https://github.com/twbs/bootstrap/pull/28236</a></p> <p>Release Date: 2019-02-20</p> <p>Fix Resolution: bootstrap - 3.4.1,4.3.1;bootstrap-sass - 3.4.1,4.3.1</p> </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
non_process
cve medium detected in bootstrap min js cve medium severity vulnerability vulnerable library bootstrap min js the most popular front end framework for developing responsive mobile first projects on the web library home page a href path to dependency file personal website photo blog templates base html path to vulnerable library personal website photo blog templates base html dependency hierarchy x bootstrap min js vulnerable library found in head commit a href found in base branch main vulnerability details in bootstrap before and x before xss is possible in the tooltip or popover data template attribute publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction required scope changed impact metrics confidentiality impact low integrity impact low availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution bootstrap bootstrap sass step up your open source security game with whitesource
0
42,772
5,474,827,211
IssuesEvent
2017-03-11 03:52:20
rust-lang/rust
https://api.github.com/repos/rust-lang/rust
closed
LLVM Assertion: Both operands to ICmp instruction are not of the same type!
E-needstest I-ICE
Unfortunately, I don't know what in hyper is causing this error. ``` rustc: /home/rustbuild/src/rust-buildbot/slave/nightly-dist-rustc-linux/build/src/llvm/include/llvm/IR/Instructions.h:997: void llvm::ICmpInst::AssertOK(): Assertion `getOperand(0)->getType() == getOperand(1)->getType() && "Both operands to ICmp instruction are not of the same type!"' failed. ```
1.0
LLVM Assertion: Both operands to ICmp instruction are not of the same type! - Unfortunately, I don't know what in hyper is causing this error. ``` rustc: /home/rustbuild/src/rust-buildbot/slave/nightly-dist-rustc-linux/build/src/llvm/include/llvm/IR/Instructions.h:997: void llvm::ICmpInst::AssertOK(): Assertion `getOperand(0)->getType() == getOperand(1)->getType() && "Both operands to ICmp instruction are not of the same type!"' failed. ```
non_process
llvm assertion both operands to icmp instruction are not of the same type unfortunately i don t know what in hyper is causing this error rustc home rustbuild src rust buildbot slave nightly dist rustc linux build src llvm include llvm ir instructions h void llvm icmpinst assertok assertion getoperand gettype getoperand gettype both operands to icmp instruction are not of the same type failed
0
2,904
5,889,444,304
IssuesEvent
2017-05-17 12:55:35
LOVDnl/LOVD3
https://api.github.com/repos/LOVDnl/LOVD3
opened
Reform the submission process
cat: submission process feature request
The current submission process was designed as a "wizard" type of process where the user was taken through the process step by step, which was hopefully simpler than to have all the submission information together on one screen. However, several submitters have now indicated they would like to see all data together on one screen, and furthermore it will probably cause less errors with submission because the user can move around more freely around the submission process. The submission process is to be redesigned as follows: - The basis is one page where all data is displayed, similar to the individual's VE. However, places where there is no data are to be filled with clear links to adding this data. - The order in which data is added is still similar; individual data is needed before any other data, then screenings or phenotypes (the latter only if a disease has been linked), then finally variants can only be added once screenings have been defined, at least one variant should be added to be able to submit the actual submission for curation. - After each data entry form, you are always returned to this overview page.
1.0
Reform the submission process - The current submission process was designed as a "wizard" type of process where the user was taken through the process step by step, which was hopefully simpler than to have all the submission information together on one screen. However, several submitters have now indicated they would like to see all data together on one screen, and furthermore it will probably cause less errors with submission because the user can move around more freely around the submission process. The submission process is to be redesigned as follows: - The basis is one page where all data is displayed, similar to the individual's VE. However, places where there is no data are to be filled with clear links to adding this data. - The order in which data is added is still similar; individual data is needed before any other data, then screenings or phenotypes (the latter only if a disease has been linked), then finally variants can only be added once screenings have been defined, at least one variant should be added to be able to submit the actual submission for curation. - After each data entry form, you are always returned to this overview page.
process
reform the submission process the current submission process was designed as a wizard type of process where the user was taken through the process step by step which was hopefully simpler than to have all the submission information together on one screen however several submitters have now indicated they would like to see all data together on one screen and furthermore it will probably cause less errors with submission because the user can move around more freely around the submission process the submission process is to be redesigned as follows the basis is one page where all data is displayed similar to the individual s ve however places where there is no data are to be filled with clear links to adding this data the order in which data is added is still similar individual data is needed before any other data then screenings or phenotypes the latter only if a disease has been linked then finally variants can only be added once screenings have been defined at least one variant should be added to be able to submit the actual submission for curation after each data entry form you are always returned to this overview page
1
85,272
10,434,791,401
IssuesEvent
2019-09-17 15:53:26
open-contracting/ocdskit
https://api.github.com/repos/open-contracting/ocdskit
closed
Add heredocs for undocumented library methods
documentation
* [x] mapping_sheet * [x] get_schema_fields * [x] Add first parameter to documented parameters in combine.py
1.0
Add heredocs for undocumented library methods - * [x] mapping_sheet * [x] get_schema_fields * [x] Add first parameter to documented parameters in combine.py
non_process
add heredocs for undocumented library methods mapping sheet get schema fields add first parameter to documented parameters in combine py
0
11,484
14,355,631,644
IssuesEvent
2020-11-30 10:22:25
DevExpress/testcafe-hammerhead
https://api.github.com/repos/DevExpress/testcafe-hammerhead
closed
Constructor Worker requires 'new' in firefox
AREA: client BROWSER: Firefox FREQUENCY: level 1 SYSTEM: client side processing TYPE: bug
### What is your Test Scenario? The main use of my app is to view different types of documents such as pdf, office files, images etc. It uses Web Worker to run some scripts that help processing the document so that it can be rendered/viewed. ### What is the Current behavior? When the app is running in the localhost, the Web Worker can be instantiated correctly so that the document can be processed and rendered. However if I'm using testcafe to test the app, the console will error `Constructor Worker requires 'new'`. After tracking the error down a bit, I think it is coming from `window.ts`: ``` if (constructorIsCalledWithoutNewKeyword(this, window.Worker)) nativeMethods.Worker.apply(this, arguments); ``` ### What is the Expected behavior? The Web Worker can be instantiated without any error. ### What is your web application and your TestCafe test code? Please clone the repo from: https://github.com/ZhijieZhang/testcafe_sample, run npm install and then npm start. You will see the web application opened in a new tab. You can check the TestCafe test code in `test.js` and start the test by `npm run test`. ### Your Environment details: testcafe version: 1.1.0 node.js version: 9.9.0 command-line arguments: npm run test browser name and version: Firefox 66.0 platform and version: macOs 10.13.6
1.0
Constructor Worker requires 'new' in firefox - ### What is your Test Scenario? The main use of my app is to view different types of documents such as pdf, office files, images etc. It uses Web Worker to run some scripts that help processing the document so that it can be rendered/viewed. ### What is the Current behavior? When the app is running in the localhost, the Web Worker can be instantiated correctly so that the document can be processed and rendered. However if I'm using testcafe to test the app, the console will error `Constructor Worker requires 'new'`. After tracking the error down a bit, I think it is coming from `window.ts`: ``` if (constructorIsCalledWithoutNewKeyword(this, window.Worker)) nativeMethods.Worker.apply(this, arguments); ``` ### What is the Expected behavior? The Web Worker can be instantiated without any error. ### What is your web application and your TestCafe test code? Please clone the repo from: https://github.com/ZhijieZhang/testcafe_sample, run npm install and then npm start. You will see the web application opened in a new tab. You can check the TestCafe test code in `test.js` and start the test by `npm run test`. ### Your Environment details: testcafe version: 1.1.0 node.js version: 9.9.0 command-line arguments: npm run test browser name and version: Firefox 66.0 platform and version: macOs 10.13.6
process
constructor worker requires new in firefox what is your test scenario the main use of my app is to view different types of documents such as pdf office files images etc it uses web worker to run some scripts that help processing the document so that it can be rendered viewed what is the current behavior when the app is running in the localhost the web worker can be instantiated correctly so that the document can be processed and rendered however if i m using testcafe to test the app the console will error constructor worker requires new after tracking the error down a bit i think it is coming from window ts if constructoriscalledwithoutnewkeyword this window worker nativemethods worker apply this arguments what is the expected behavior the web worker can be instantiated without any error what is your web application and your testcafe test code please clone the repo from run npm install and then npm start you will see the web application opened in a new tab you can check the testcafe test code in test js and start the test by npm run test your environment details testcafe version node js version command line arguments npm run test browser name and version firefox platform and version macos
1
13,293
15,768,035,826
IssuesEvent
2021-03-31 16:45:55
googleapis/google-cloud-go
https://api.github.com/repos/googleapis/google-cloud-go
closed
all: audit startup time
help wanted type: process
Figure out a way to audit startup time (primarily time spent in init). Maybe using the profiler or statements added before/after init funcs. This is more of a nice-to-have, since I haven't heard anyone mention this (other than in passing from the core Go team). Cold starts are increasingly important for serverless platforms, which scale up on demand.
1.0
all: audit startup time - Figure out a way to audit startup time (primarily time spent in init). Maybe using the profiler or statements added before/after init funcs. This is more of a nice-to-have, since I haven't heard anyone mention this (other than in passing from the core Go team). Cold starts are increasingly important for serverless platforms, which scale up on demand.
process
all audit startup time figure out a way to audit startup time primarily time spent in init maybe using the profiler or statements added before after init funcs this is more of a nice to have since i haven t heard anyone mention this other than in passing from the core go team cold starts are increasingly important for serverless platforms which scale up on demand
1
23,768
2,663,198,330
IssuesEvent
2015-03-20 02:07:54
certtools/intelmq
https://api.github.com/repos/certtools/intelmq
closed
After reboot /var/run/intelmq
bug high priority
After reboot, if the manager throught intelmqctl try to execute any command, it will not work because /var/run/intelmq doest not exist and www-data doesnt have perms to create it.
1.0
After reboot /var/run/intelmq - After reboot, if the manager throught intelmqctl try to execute any command, it will not work because /var/run/intelmq doest not exist and www-data doesnt have perms to create it.
non_process
after reboot var run intelmq after reboot if the manager throught intelmqctl try to execute any command it will not work because var run intelmq doest not exist and www data doesnt have perms to create it
0
22,397
31,142,288,680
IssuesEvent
2023-08-16 01:44:28
cypress-io/cypress
https://api.github.com/repos/cypress-io/cypress
closed
Flaky test: Timed out retrying after 4000ms: expected undefined to be an object
OS: linux process: flaky test topic: flake ❄️ stage: flake stale
### Link to dashboard or CircleCI failure https://dashboard.cypress.io/projects/ypt4pf/runs/37673/test-results/be1a324d-2fba-4156-86ca-87b6577de629 ### Link to failing test in GitHub https://github.com/cypress-io/cypress/blob/develop/packages/driver/cypress/e2e/commands/xhr.cy.js#L2399 ### Analysis <img width="429" alt="Screen Shot 2022-08-10 at 9 28 40 AM" src="https://user-images.githubusercontent.com/26726429/183963541-f8efa18a-1643-487e-9b57-183f1b5b524b.png"> ### Cypress Version 10.4.0 ### Other Search for this issue number in the codebase to find the test(s) skipped until this issue is fixed
1.0
Flaky test: Timed out retrying after 4000ms: expected undefined to be an object - ### Link to dashboard or CircleCI failure https://dashboard.cypress.io/projects/ypt4pf/runs/37673/test-results/be1a324d-2fba-4156-86ca-87b6577de629 ### Link to failing test in GitHub https://github.com/cypress-io/cypress/blob/develop/packages/driver/cypress/e2e/commands/xhr.cy.js#L2399 ### Analysis <img width="429" alt="Screen Shot 2022-08-10 at 9 28 40 AM" src="https://user-images.githubusercontent.com/26726429/183963541-f8efa18a-1643-487e-9b57-183f1b5b524b.png"> ### Cypress Version 10.4.0 ### Other Search for this issue number in the codebase to find the test(s) skipped until this issue is fixed
process
flaky test timed out retrying after expected undefined to be an object link to dashboard or circleci failure link to failing test in github analysis img width alt screen shot at am src cypress version other search for this issue number in the codebase to find the test s skipped until this issue is fixed
1
105,558
16,652,828,922
IssuesEvent
2021-06-05 01:31:44
cfscode/react-photoswipe
https://api.github.com/repos/cfscode/react-photoswipe
opened
CVE-2016-10540 (High) detected in minimatch-0.2.14.tgz, minimatch-2.0.10.tgz
security vulnerability
## CVE-2016-10540 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>minimatch-0.2.14.tgz</b>, <b>minimatch-2.0.10.tgz</b></p></summary> <p> <details><summary><b>minimatch-0.2.14.tgz</b></p></summary> <p>a glob matcher in javascript</p> <p>Library home page: <a href="https://registry.npmjs.org/minimatch/-/minimatch-0.2.14.tgz">https://registry.npmjs.org/minimatch/-/minimatch-0.2.14.tgz</a></p> <p>Path to dependency file: react-photoswipe/package.json</p> <p>Path to vulnerable library: react-photoswipe/node_modules/globule/node_modules/minimatch/package.json</p> <p> Dependency Hierarchy: - gulp-3.9.1.tgz (Root Library) - vinyl-fs-0.3.14.tgz - glob-watcher-0.0.6.tgz - gaze-0.5.2.tgz - globule-0.1.0.tgz - :x: **minimatch-0.2.14.tgz** (Vulnerable Library) </details> <details><summary><b>minimatch-2.0.10.tgz</b></p></summary> <p>a glob matcher in javascript</p> <p>Library home page: <a href="https://registry.npmjs.org/minimatch/-/minimatch-2.0.10.tgz">https://registry.npmjs.org/minimatch/-/minimatch-2.0.10.tgz</a></p> <p>Path to dependency file: react-photoswipe/package.json</p> <p>Path to vulnerable library: react-photoswipe/node_modules/minimatch/package.json</p> <p> Dependency Hierarchy: - babel-core-5.8.38.tgz (Root Library) - :x: **minimatch-2.0.10.tgz** (Vulnerable Library) </details> <p>Found in base branch: <b>master</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> Minimatch is a minimal matching utility that works by converting glob expressions into JavaScript `RegExp` objects. The primary function, `minimatch(path, pattern)` in Minimatch 3.0.1 and earlier is vulnerable to ReDoS in the `pattern` parameter. 
<p>Publish Date: 2018-05-31 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2016-10540>CVE-2016-10540</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://nodesecurity.io/advisories/118">https://nodesecurity.io/advisories/118</a></p> <p>Release Date: 2016-06-20</p> <p>Fix Resolution: Update to version 3.0.2 or later.</p> </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
True
CVE-2016-10540 (High) detected in minimatch-0.2.14.tgz, minimatch-2.0.10.tgz - ## CVE-2016-10540 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>minimatch-0.2.14.tgz</b>, <b>minimatch-2.0.10.tgz</b></p></summary> <p> <details><summary><b>minimatch-0.2.14.tgz</b></p></summary> <p>a glob matcher in javascript</p> <p>Library home page: <a href="https://registry.npmjs.org/minimatch/-/minimatch-0.2.14.tgz">https://registry.npmjs.org/minimatch/-/minimatch-0.2.14.tgz</a></p> <p>Path to dependency file: react-photoswipe/package.json</p> <p>Path to vulnerable library: react-photoswipe/node_modules/globule/node_modules/minimatch/package.json</p> <p> Dependency Hierarchy: - gulp-3.9.1.tgz (Root Library) - vinyl-fs-0.3.14.tgz - glob-watcher-0.0.6.tgz - gaze-0.5.2.tgz - globule-0.1.0.tgz - :x: **minimatch-0.2.14.tgz** (Vulnerable Library) </details> <details><summary><b>minimatch-2.0.10.tgz</b></p></summary> <p>a glob matcher in javascript</p> <p>Library home page: <a href="https://registry.npmjs.org/minimatch/-/minimatch-2.0.10.tgz">https://registry.npmjs.org/minimatch/-/minimatch-2.0.10.tgz</a></p> <p>Path to dependency file: react-photoswipe/package.json</p> <p>Path to vulnerable library: react-photoswipe/node_modules/minimatch/package.json</p> <p> Dependency Hierarchy: - babel-core-5.8.38.tgz (Root Library) - :x: **minimatch-2.0.10.tgz** (Vulnerable Library) </details> <p>Found in base branch: <b>master</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> Minimatch is a minimal matching utility that works by converting glob expressions into JavaScript `RegExp` objects. The primary function, `minimatch(path, pattern)` in Minimatch 3.0.1 and earlier is vulnerable to ReDoS in the `pattern` parameter. 
<p>Publish Date: 2018-05-31 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2016-10540>CVE-2016-10540</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://nodesecurity.io/advisories/118">https://nodesecurity.io/advisories/118</a></p> <p>Release Date: 2016-06-20</p> <p>Fix Resolution: Update to version 3.0.2 or later.</p> </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
non_process
cve high detected in minimatch tgz minimatch tgz cve high severity vulnerability vulnerable libraries minimatch tgz minimatch tgz minimatch tgz a glob matcher in javascript library home page a href path to dependency file react photoswipe package json path to vulnerable library react photoswipe node modules globule node modules minimatch package json dependency hierarchy gulp tgz root library vinyl fs tgz glob watcher tgz gaze tgz globule tgz x minimatch tgz vulnerable library minimatch tgz a glob matcher in javascript library home page a href path to dependency file react photoswipe package json path to vulnerable library react photoswipe node modules minimatch package json dependency hierarchy babel core tgz root library x minimatch tgz vulnerable library found in base branch master vulnerability details minimatch is a minimal matching utility that works by converting glob expressions into javascript regexp objects the primary function minimatch path pattern in minimatch and earlier is vulnerable to redos in the pattern parameter publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution update to version or later step up your open source security game with whitesource
0
256,361
19,409,669,263
IssuesEvent
2021-12-20 08:05:41
fremtind/jokul
https://api.github.com/repos/fremtind/jokul
closed
Component docs for Accordion
📚 documentation
**Must have** - [x] Examples of correct and incorrect use - [x] Live code example **Want to have** - [ ] Examples of use in the teams - [ ] Control questions for use - [x] Links to relevant components
1.0
Component docs for Accordion - **Must have** - [x] Examples of correct and incorrect use - [x] Live code example **Want to have** - [ ] Examples of use in the teams - [ ] Control questions for use - [x] Links to relevant components
non_process
component docs for accordion must have examples of correct and incorrect use live code example want to have examples of use in the teams control questions for use links to relevant components
0
29,027
13,037,850,230
IssuesEvent
2020-07-28 14:24:06
cityofaustin/atd-data-tech
https://api.github.com/repos/cityofaustin/atd-data-tech
closed
Document requirements for Project Tracking in AMD's Data Tracker
Product: AMD Data Tracker Product: Mobility Project Database Service: Apps Service: Product Workgroup: AMD
I will follow DTS' process for tracking requirements (i.e. user stories) for Knack apps, otherwise, I'm happy to get building in a non-production copy of Data Tracker. Will plan to deliver the requirements in the upcoming sprint; ideally, we can do it all (requirements + development) in the same sprint. See #3042 for initial research/discussion, including proposed spreadsheet to track projects in Data Tracker.
2.0
Document requirements for Project Tracking in AMD's Data Tracker - I will follow DTS' process for tracking requirements (i.e. user stories) for Knack apps, otherwise, I'm happy to get building in a non-production copy of Data Tracker. Will plan to deliver the requirements in the upcoming sprint; ideally, we can do it all (requirements + development) in the same sprint. See #3042 for initial research/discussion, including proposed spreadsheet to track projects in Data Tracker.
non_process
document requirements for project tracking in amd s data tracker i will follow dts process for tracking requirements i e user stories for knack apps otherwise i m happy to get building in a non production copy of data tracker will plan to deliver the requirements in the upcoming sprint ideally we can do it all requirements development in the same sprint see for initial research discussion including proposed spreadsheet to track projects in data tracker
0
56,990
6,535,917,841
IssuesEvent
2017-08-31 16:07:42
learn-co-curriculum/js-object-oriented-constructor-functions-readme
https://api.github.com/repos/learn-co-curriculum/js-object-oriented-constructor-functions-readme
closed
"we can create as many Puppies as we want" and then code example only shows one puppy being made
Test
And of course we can create as many objects we want with our constructor function. ``` function Puppy(name, age, color, size) { this.name = name this.age = age this.color = color this.size = size } let snoopy = new Puppy('snoopy', 3, 'white', 'medium') // {name: 'snoopy', age: 3, color: 'white', size: 'medium'} ``` Maybe create two or three puppies in this code snippet
1.0
"we can create as many Puppies as we want" and then code example only shows one puppy being made - And of course we can create as many objects we want with our constructor function. ``` function Puppy(name, age, color, size) { this.name = name this.age = age this.color = color this.size = size } let snoopy = new Puppy('snoopy', 3, 'white', 'medium') // {name: 'snoopy', age: 3, color: 'white', size: 'medium'} ``` Maybe create two or three puppies in this code snippet
non_process
we can create as many puppies as we want and then code example only shows one puppy being made and of course we can create as many objects we want with our constructor function function puppy name age color size this name name this age age this color color this size size let snoopy new puppy snoopy white medium name snoopy age color white size medium maybe create two or three puppies in this code snippet
0
9,848
12,838,132,603
IssuesEvent
2020-07-07 16:54:59
pystatgen/sgkit
https://api.github.com/repos/pystatgen/sgkit
closed
Tools for enforcing coding standards
process + tools
Which tools should we use for enforcing coding standards? Here is a table summarising what (related projects) [Zarr](https://zarr.readthedocs.io/en/stable/contributing.html#code-standards), [Dask](https://docs.dask.org/en/latest/develop.html#code-formatting), and [Xarray](https://xarray.pydata.org/en/latest/contributing.html#contributing-to-the-code-base) use. | |Zarr|Dask|Xarray| |-|----|----|------| |Formatting|-|Black|Black| |Linting|Flake8|Flake8|Flake8| |Imports|-|-|isort| |Type checking|-|-|Mypy| |Code coverage|Coveralls|Coveralls|Codecov| |CI|Travis|Travis|Azure Pipelines| I would like to propose we use: * [Black](https://black.readthedocs.io/en/stable/) * [Flake8](https://flake8.pycqa.org/en/latest/) * [isort](https://timothycrosley.github.io/isort/) * [Mypy](https://mypy.readthedocs.io/en/stable/) * [Coveralls](https://coveralls.io/) * [GitHub Actions](https://help.github.com/en/actions) GitHub Actions is the only one that isn't used by any of the three other related projects, but it seems to be a popular choice for new projects due to its close integration with GitHub.
1.0
Tools for enforcing coding standards - Which tools should we use for enforcing coding standards? Here is a table summarising what (related projects) [Zarr](https://zarr.readthedocs.io/en/stable/contributing.html#code-standards), [Dask](https://docs.dask.org/en/latest/develop.html#code-formatting), and [Xarray](https://xarray.pydata.org/en/latest/contributing.html#contributing-to-the-code-base) use. | |Zarr|Dask|Xarray| |-|----|----|------| |Formatting|-|Black|Black| |Linting|Flake8|Flake8|Flake8| |Imports|-|-|isort| |Type checking|-|-|Mypy| |Code coverage|Coveralls|Coveralls|Codecov| |CI|Travis|Travis|Azure Pipelines| I would like to propose we use: * [Black](https://black.readthedocs.io/en/stable/) * [Flake8](https://flake8.pycqa.org/en/latest/) * [isort](https://timothycrosley.github.io/isort/) * [Mypy](https://mypy.readthedocs.io/en/stable/) * [Coveralls](https://coveralls.io/) * [GitHub Actions](https://help.github.com/en/actions) GitHub Actions is the only one that isn't used by any of the three other related projects, but it seems to be a popular choice for new projects due to its close integration with GitHub.
process
tools for enforcing coding standards which tools should we use for enforcing coding standards here is a table summarising what related projects and use zarr dask xarray formatting black black linting imports isort type checking mypy code coverage coveralls coveralls codecov ci travis travis azure pipelines i would like to propose we use github actions is the only one that isn t used by any of the three other related projects but it seems to be a popular choice for new projects due to its close integration with github
1
98,413
11,082,665,964
IssuesEvent
2019-12-13 12:42:51
phpDocumentor/phpDocumentor
https://api.github.com/repos/phpDocumentor/phpDocumentor
closed
phpdoc v3 config regressions
documentation
This issue is to collect v3 config regressions. Related to #2056 I discovered that phpdoc v3 config format might have an issue with the definition of visibility. Either we need to check how we can keep the current defined format and document the new format or we should fallback to the visibility format of phpdoc v2.
1.0
phpdoc v3 config regressions - This issue is to collect v3 config regressions. Related to #2056 I discovered that phpdoc v3 config format might have an issue with the definition of visibility. Either we need to check how we can keep the current defined format and document the new format or we should fallback to the visibility format of phpdoc v2.
non_process
phpdoc config regressions this issue is to collect config regressions related to i discovered that phpdoc config format might have an issue with the definition of visibility either we need to check how we can keep the current defined format and document the new format or we should fallback to the visibility format of phpdoc
0
14,651
17,776,547,949
IssuesEvent
2021-08-30 20:02:01
GoogleCloudPlatform/professional-services-data-validator
https://api.github.com/repos/GoogleCloudPlatform/professional-services-data-validator
opened
PoC Random Row validation
type: process priority: p0 Release
Proof of concept random row validation. This includes random sampling a source table (with a TABLESAMPLE or an ORDER BY/RAND) and retrieving a specific number of rows with LIMIT. Then, using the primary keys retrieved in the source query, build a query on the target DB with a WHERE/IN statement to sample the same rows as the source.
1.0
PoC Random Row validation - Proof of concept random row validation. This includes random sampling a source table (with a TABLESAMPLE or an ORDER BY/RAND) and retrieving a specific number of rows with LIMIT. Then, using the primary keys retrieved in the source query, build a query on the target DB with a WHERE/IN statement to sample the same rows as the source.
process
poc random row validation proof of concept random row validation this includes random sampling a source table with a tablesample or an order by rand and retrieving a specific number of rows with limit then using the primary keys retrieved in the source query build a query on the target db with a where in statement to sample the same rows as the source
1
39,371
12,663,417,192
IssuesEvent
2020-06-18 01:17:23
TIBCOSoftware/ASAssets_Utilities
https://api.github.com/repos/TIBCOSoftware/ASAssets_Utilities
opened
CVE-2020-2934 (Medium) detected in mysql-connector-java-5.1.14.jar
security vulnerability
## CVE-2020-2934 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>mysql-connector-java-5.1.14.jar</b></p></summary> <p>MySQL JDBC Type 4 driver</p> <p>Library home page: <a href="http://dev.mysql.com/doc/connector-j/en/">http://dev.mysql.com/doc/connector-j/en/</a></p> <p>Path to vulnerable library: _depth_0/ASAssets_Utilities/Release/archive/Utilities_2014Q303/files/conf/adapters/system/mysql_5_0/mysql-connector-java-5.1.14-bin.jar,_depth_0/ASAssets_Utilities/Release/archive/Utilities_2014Q3/files/conf/adapters/system/mysql_5_0/mysql-connector-java-5.1.14-bin.jar,_depth_0/ASAssets_Utilities/Release/archive/Utilities_2014Q301/files/conf/adapters/system/mysql_5_0/mysql-connector-java-5.1.14-bin.jar,_depth_0/ASAssets_Utilities/Release/archive/Utilities_2015Q3/files/conf/adapters/system/mysql_5_0/mysql-connector-java-5.1.14-bin.jar,_depth_0/ASAssets_Utilities/Release/archive/Utilities_2017Q2/files/conf/adapters/system/mysql_5_0/mysql-connector-java-5.1.14-bin.jar,_depth_0/ASAssets_Utilities/Release/archive/Utilities_2014Q2/files/conf/adapters/system/mysql_5_0/mysql-connector-java-5.1.14-bin.jar,_depth_0/ASAssets_Utilities/Release/archive/Utilities_2014Q4/files/conf/adapters/system/mysql_5_0/mysql-connector-java-5.1.14-bin.jar,_depth_0/ASAssets_Utilities/Release/archive/Utilities_2015Q4/files/conf/adapters/system/mysql_5_0/mysql-connector-java-5.1.14-bin.jar,_depth_0/ASAssets_Utilities/Release/archive/Utilities_2016Q1/files/conf/adapters/system/mysql_5_0/mysql-connector-java-5.1.14-bin.jar,_depth_0/ASAssets_Utilities/Release/archive/Utilities_2014Q302/files/conf/adapters/system/mysql_5_0/mysql-connector-java-5.1.14-bin.jar,_depth_0/ASAssets_Utilities/Release/archive/Utilities_2015Q2/files/conf/adapters/system/mysql_5_0/mysql-connector-java-5.1.14-bin.jar,_depth_0/ASAssets_Utilities/Release/archive/Utilities_2014Q305/files/conf/adapters/system/mysql_5_0/mysql-connector-java-5.1.14-bin.jar,_depth_0/ASAssets_Utilities/Release/archive/Utilities_2015Q1/files/conf/adapters/system/mysql_5_0/mysql-connector-java-5.1.14-bin.jar,_depth_0/ASAssets_Utilities/Release/archive/Utilities_2014Q304/files/conf/adapters/system/mysql_5_0/mysql-connector-java-5.1.14-bin.jar,_depth_0/ASAssets_Utilities/Release/archive/Utilities_2015Q401/files/conf/adapters/system/mysql_5_0/mysql-connector-java-5.1.14-bin.jar,_depth_1/ASAssets_Utilities/Release/archive/Utilities_2017Q4/Utilities_2017Q4/Utilities_2017Q4/files/conf/adapters/system/mysql_5_0/mysql-connector-java-5.1.14-bin.jar,_depth_0/ASAssets_Utilities/Release/archive/Utilities_2014Q307/files/conf/adapters/system/mysql_5_0/mysql-connector-java-5.1.14-bin.jar,_depth_0/ASAssets_Utilities/Release/archive/Utilities_2014Q306/files/conf/adapters/system/mysql_5_0/mysql-connector-java-5.1.14-bin.jar</p> <p> Dependency Hierarchy: - :x: **mysql-connector-java-5.1.14.jar** (Vulnerable Library) </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary> <p> Vulnerability in the MySQL Connectors product of Oracle MySQL (component: Connector/J). Supported versions that are affected are 8.0.19 and prior and 5.1.48 and prior. Difficult to exploit vulnerability allows unauthenticated attacker with network access via multiple protocols to compromise MySQL Connectors. Successful attacks require human interaction from a person other than the attacker. Successful attacks of this vulnerability can result in unauthorized update, insert or delete access to some of MySQL Connectors accessible data as well as unauthorized read access to a subset of MySQL Connectors accessible data and unauthorized ability to cause a partial denial of service (partial DOS) of MySQL Connectors. CVSS 3.0 Base Score 5.0 (Confidentiality, Integrity and Availability impacts). CVSS Vector: (CVSS:3.0/AV:N/AC:H/PR:N/UI:R/S:U/C:L/I:L/A:L). <p>Publish Date: 2020-04-15 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-2934>CVE-2020-2934</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.0</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: High - Privileges Required: None - User Interaction: Required - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: Low - Integrity Impact: Low - Availability Impact: Low </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://www.oracle.com/security-alerts/cpuapr2020.html">https://www.oracle.com/security-alerts/cpuapr2020.html</a></p> <p>Release Date: 2020-04-15</p> <p>Fix Resolution: mysql:mysql-connector-java:5.1.49,8.0.20</p> </p> </details> <p></p> *** <!-- REMEDIATE-OPEN-PR-START --> - [ ] Check this box to open an automated fix PR <!-- REMEDIATE-OPEN-PR-END --> <!-- <REMEDIATE>{"isOpenPROnVulnerability":false,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"Java","groupId":"mysql","packageName":"mysql-connector-java","packageVersion":"5.1.14","isTransitiveDependency":false,"dependencyTree":"mysql:mysql-connector-java:5.1.14","isMinimumFixVersionAvailable":true,"minimumFixVersion":"mysql:mysql-connector-java:5.1.49,8.0.20"}],"vulnerabilityIdentifier":"CVE-2020-2934","vulnerabilityDetails":"Vulnerability in the MySQL Connectors product of Oracle MySQL (component: Connector/J). Supported versions that are affected are 8.0.19 and prior and 5.1.48 and prior. Difficult to exploit vulnerability allows unauthenticated attacker with network access via multiple protocols to compromise MySQL Connectors. Successful attacks require human interaction from a person other than the attacker. Successful attacks of this vulnerability can result in unauthorized update, insert or delete access to some of MySQL Connectors accessible data as well as unauthorized read access to a subset of MySQL Connectors accessible data and unauthorized ability to cause a partial denial of service (partial DOS) of MySQL Connectors. CVSS 3.0 Base Score 5.0 (Confidentiality, Integrity and Availability impacts). CVSS Vector: (CVSS:3.0/AV:N/AC:H/PR:N/UI:R/S:U/C:L/I:L/A:L).","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-2934","cvss3Severity":"medium","cvss3Score":"5.0","cvss3Metrics":{"A":"Low","AC":"High","PR":"None","S":"Unchanged","C":"Low","UI":"Required","AV":"Network","I":"Low"},"extraData":{}}</REMEDIATE> -->
True
CVE-2020-2934 (Medium) detected in mysql-connector-java-5.1.14.jar - ## CVE-2020-2934 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>mysql-connector-java-5.1.14.jar</b></p></summary> <p>MySQL JDBC Type 4 driver</p> <p>Library home page: <a href="http://dev.mysql.com/doc/connector-j/en/">http://dev.mysql.com/doc/connector-j/en/</a></p> <p>Path to vulnerable library: _depth_0/ASAssets_Utilities/Release/archive/Utilities_2014Q303/files/conf/adapters/system/mysql_5_0/mysql-connector-java-5.1.14-bin.jar,_depth_0/ASAssets_Utilities/Release/archive/Utilities_2014Q3/files/conf/adapters/system/mysql_5_0/mysql-connector-java-5.1.14-bin.jar,_depth_0/ASAssets_Utilities/Release/archive/Utilities_2014Q301/files/conf/adapters/system/mysql_5_0/mysql-connector-java-5.1.14-bin.jar,_depth_0/ASAssets_Utilities/Release/archive/Utilities_2015Q3/files/conf/adapters/system/mysql_5_0/mysql-connector-java-5.1.14-bin.jar,_depth_0/ASAssets_Utilities/Release/archive/Utilities_2017Q2/files/conf/adapters/system/mysql_5_0/mysql-connector-java-5.1.14-bin.jar,_depth_0/ASAssets_Utilities/Release/archive/Utilities_2014Q2/files/conf/adapters/system/mysql_5_0/mysql-connector-java-5.1.14-bin.jar,_depth_0/ASAssets_Utilities/Release/archive/Utilities_2014Q4/files/conf/adapters/system/mysql_5_0/mysql-connector-java-5.1.14-bin.jar,_depth_0/ASAssets_Utilities/Release/archive/Utilities_2015Q4/files/conf/adapters/system/mysql_5_0/mysql-connector-java-5.1.14-bin.jar,_depth_0/ASAssets_Utilities/Release/archive/Utilities_2016Q1/files/conf/adapters/system/mysql_5_0/mysql-connector-java-5.1.14-bin.jar,_depth_0/ASAssets_Utilities/Release/archive/Utilities_2014Q302/files/conf/adapters/system/mysql_5_0/mysql-connector-java-5.1.14-bin.jar,_depth_0/ASAssets_Utilities/Release/archive/Utilities_2015Q2/files/conf/adapters/system/mysql_5_0/mysql-connector-java-5.1.14-bin.jar,_depth_0/ASAssets_Utilities/Release/archive/Utilities_2014Q305/files/conf/adapters/system/mysql_5_0/mysql-connector-java-5.1.14-bin.jar,_depth_0/ASAssets_Utilities/Release/archive/Utilities_2015Q1/files/conf/adapters/system/mysql_5_0/mysql-connector-java-5.1.14-bin.jar,_depth_0/ASAssets_Utilities/Release/archive/Utilities_2014Q304/files/conf/adapters/system/mysql_5_0/mysql-connector-java-5.1.14-bin.jar,_depth_0/ASAssets_Utilities/Release/archive/Utilities_2015Q401/files/conf/adapters/system/mysql_5_0/mysql-connector-java-5.1.14-bin.jar,_depth_1/ASAssets_Utilities/Release/archive/Utilities_2017Q4/Utilities_2017Q4/Utilities_2017Q4/files/conf/adapters/system/mysql_5_0/mysql-connector-java-5.1.14-bin.jar,_depth_0/ASAssets_Utilities/Release/archive/Utilities_2014Q307/files/conf/adapters/system/mysql_5_0/mysql-connector-java-5.1.14-bin.jar,_depth_0/ASAssets_Utilities/Release/archive/Utilities_2014Q306/files/conf/adapters/system/mysql_5_0/mysql-connector-java-5.1.14-bin.jar</p> <p> Dependency Hierarchy: - :x: **mysql-connector-java-5.1.14.jar** (Vulnerable Library) </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary> <p> Vulnerability in the MySQL Connectors product of Oracle MySQL (component: Connector/J). Supported versions that are affected are 8.0.19 and prior and 5.1.48 and prior. Difficult to exploit vulnerability allows unauthenticated attacker with network access via multiple protocols to compromise MySQL Connectors. Successful attacks require human interaction from a person other than the attacker. Successful attacks of this vulnerability can result in unauthorized update, insert or delete access to some of MySQL Connectors accessible data as well as unauthorized read access to a subset of MySQL Connectors accessible data and unauthorized ability to cause a partial denial of service (partial DOS) of MySQL Connectors. CVSS 3.0 Base Score 5.0 (Confidentiality, Integrity and Availability impacts). CVSS Vector: (CVSS:3.0/AV:N/AC:H/PR:N/UI:R/S:U/C:L/I:L/A:L). <p>Publish Date: 2020-04-15 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-2934>CVE-2020-2934</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.0</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: High - Privileges Required: None - User Interaction: Required - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: Low - Integrity Impact: Low - Availability Impact: Low </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://www.oracle.com/security-alerts/cpuapr2020.html">https://www.oracle.com/security-alerts/cpuapr2020.html</a></p> <p>Release Date: 2020-04-15</p> <p>Fix Resolution: mysql:mysql-connector-java:5.1.49,8.0.20</p> </p> </details> <p></p> *** <!-- REMEDIATE-OPEN-PR-START --> - [ ] Check this box to open an automated fix PR <!-- REMEDIATE-OPEN-PR-END --> <!-- <REMEDIATE>{"isOpenPROnVulnerability":false,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"Java","groupId":"mysql","packageName":"mysql-connector-java","packageVersion":"5.1.14","isTransitiveDependency":false,"dependencyTree":"mysql:mysql-connector-java:5.1.14","isMinimumFixVersionAvailable":true,"minimumFixVersion":"mysql:mysql-connector-java:5.1.49,8.0.20"}],"vulnerabilityIdentifier":"CVE-2020-2934","vulnerabilityDetails":"Vulnerability in the MySQL Connectors product of Oracle MySQL (component: Connector/J). Supported versions that are affected are 8.0.19 and prior and 5.1.48 and prior. Difficult to exploit vulnerability allows unauthenticated attacker with network access via multiple protocols to compromise MySQL Connectors. Successful attacks require human interaction from a person other than the attacker. Successful attacks of this vulnerability can result in unauthorized update, insert or delete access to some of MySQL Connectors accessible data as well as unauthorized read access to a subset of MySQL Connectors accessible data and unauthorized ability to cause a partial denial of service (partial DOS) of MySQL Connectors. CVSS 3.0 Base Score 5.0 (Confidentiality, Integrity and Availability impacts). CVSS Vector: (CVSS:3.0/AV:N/AC:H/PR:N/UI:R/S:U/C:L/I:L/A:L).","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-2934","cvss3Severity":"medium","cvss3Score":"5.0","cvss3Metrics":{"A":"Low","AC":"High","PR":"None","S":"Unchanged","C":"Low","UI":"Required","AV":"Network","I":"Low"},"extraData":{}}</REMEDIATE> -->
non_process
cve medium detected in mysql connector java jar cve medium severity vulnerability vulnerable library mysql connector java jar mysql jdbc type driver library home page a href path to vulnerable library depth asassets utilities release archive utilities files conf adapters system mysql mysql connector java bin jar depth asassets utilities release archive utilities files conf adapters system mysql mysql connector java bin jar depth asassets utilities release archive utilities files conf adapters system mysql mysql connector java bin jar depth asassets utilities release archive utilities files conf adapters system mysql mysql connector java bin jar depth asassets utilities release archive utilities files conf adapters system mysql mysql connector java bin jar depth asassets utilities release archive utilities files conf adapters system mysql mysql connector java bin jar depth asassets utilities release archive utilities files conf adapters system mysql mysql connector java bin jar depth asassets utilities release archive utilities files conf adapters system mysql mysql connector java bin jar depth asassets utilities release archive utilities files conf adapters system mysql mysql connector java bin jar depth asassets utilities release archive utilities files conf adapters system mysql mysql connector java bin jar depth asassets utilities release archive utilities files conf adapters system mysql mysql connector java bin jar depth asassets utilities release archive utilities files conf adapters system mysql mysql connector java bin jar depth asassets utilities release archive utilities files conf adapters system mysql mysql connector java bin jar depth asassets utilities release archive utilities files conf adapters system mysql mysql connector java bin jar depth asassets utilities release archive utilities files conf adapters system mysql mysql connector java bin jar depth asassets utilities release archive utilities utilities utilities files conf adapters system mysql mysql connector java bin jar depth asassets utilities release archive utilities files conf adapters system mysql mysql connector java bin jar depth asassets utilities release archive utilities files conf adapters system mysql mysql connector java bin jar dependency hierarchy x mysql connector java jar vulnerable library vulnerability details vulnerability in the mysql connectors product of oracle mysql component connector j supported versions that are affected are and prior and and prior difficult to exploit vulnerability allows unauthenticated attacker with network access via multiple protocols to compromise mysql connectors successful attacks require human interaction from a person other than the attacker successful attacks of this vulnerability can result in unauthorized update insert or delete access to some of mysql connectors accessible data as well as unauthorized read access to a subset of mysql connectors accessible data and unauthorized ability to cause a partial denial of service partial dos of mysql connectors cvss base score confidentiality integrity and availability impacts cvss vector cvss av n ac h pr n ui r s u c l i l a l publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity high privileges required none user interaction required scope unchanged impact metrics confidentiality impact low integrity impact low availability impact low for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution mysql mysql connector java check this box to open an automated fix pr isopenpronvulnerability false ispackagebased true isdefaultbranch true packages vulnerabilityidentifier cve vulnerabilitydetails vulnerability in the mysql connectors product of oracle mysql component connector j supported versions that are affected are and prior and and prior difficult to exploit vulnerability allows unauthenticated attacker with network access via multiple protocols to compromise mysql connectors successful attacks require human interaction from a person other than the attacker successful attacks of this vulnerability can result in unauthorized update insert or delete access to some of mysql connectors accessible data as well as unauthorized read access to a subset of mysql connectors accessible data and unauthorized ability to cause a partial denial of service partial dos of mysql connectors cvss base score confidentiality integrity and availability impacts cvss vector cvss av n ac h pr n ui r s u c l i l a l vulnerabilityurl
0
21,052
27,997,634,677
IssuesEvent
2023-03-27 09:32:07
nodejs/node
https://api.github.com/repos/nodejs/node
closed
node {16,17,18} doesn't respect spawn timeout and hangs
child_process
### Version 16.* ### Platform linux `Linux f092196b2008 5.10.104-linuxkit #1 SMP Thu Mar 17 17:08:06 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux` ### What steps will reproduce the bug? I've created a demo using docker so you can easily swap node versions and see the bug. That being said, I'm sharing the tarball as bas64 because I wasn't sure if you'd like to download tarballs...that being said you may also not want to decode b64 🤷‍♂️ download here if you'd like: [poc<via wetransfer>](https://wetransfer.com/downloads/4395c16ad0eaf6ff6492d67b56060af420220706154704/4edc045984510764c4a4f8980d82ffe720220706154716/0d769f?utm_campaign=WT_email_tracking&utm_content=general&utm_medium=download_button&utm_source=notify_recipient_email) or run the following: ```sh base64 -d <<< H4sIACurxWIAA+1YbWvbMBDOZ/+KqzfqZCSyZcsxBLoLjpJp+fRKchuVA7HcQLfB912y9ZxSdkuAJi4Xey5PvE9cDAmJGiAX/3UGo2ZKMJcTiW8YGH8gJ90G40e6F+sY9W+ECB7OGNJ3Ek7+SxFYlJFjHX5x06wyr/nuzL/BHvdBjhVTOY+Xnn+32zZQ5baw1BMjCIc75gpjynudiZhOqa5acQ8Oqc56EMCnQLeSidAS7M8M9piPPcyavwlkP1Bp3LEElpVDMmHLiEP8B/7S/4T38eS/x7xSM3/TeDToP8NFOd7kvQiYVPD+N4ffP2wNwCbZ4Vh7PYPjgFJyhsf948Gxwf9vf0jOAGtE2YbTJbG9Bf6IUw4rVXg5QHZKpFnUx7PEiqqKQYfUf8RQhx9/+O6/tsIkL0kcHUx1um/G9zKv9dV+o+xV+v/JhDxVBRwLbLwMj28SqM57EBOf85YTptWNJFV31mW84gKYbUMQ3nzhKKEj5vWofoNS8cIIdm3GqFppdnUap9YTI4cJonVtpCdnY9l2+lc0HzIBbVO2zJmETPek24TmrPCakPBppTPih6WG/jOm7fuhtMBVCl6RWW45963/wU6NRX/B/AY/cdBIPnvup5b6/8mUOY/C6PzcEzlLcDTp4+xTv89n9zov0PU/e/X+r8ZXBsga/lwSs0emPIomG1lkDItGE+VDSMHOaU1piLKWVYsekrjNGT62+odoK2lo5AdKoA0ZDld3AfKmUYTDtYXmiQcRjmfgkgozaS437htWbC9rV8mgGzdrQaXg811gHN6dcnzWEU4OdWWcFZMeH4zsYRFNBV6XXuHu6Yxr++MP6Dk/3J/q4mx9v3vBff03wscXPN/E7hbYqljICs6SbtLVdLR4qgsyZrN1s77O66fOY9hynNVAFqtti7ZuurTAtuGEBJeyLqwpty/jnvvfykGTx/j8e9/3/fk/e8iVM2EbuOV879GjRqvF78BY21wXQAgAAA= > out.tar.gz ``` then `tar -zxf out.tar.gz` and you'll see the following files: ```sh Dockerfile build-n-run.sh index.js node_modules pkg ``` you can run the `build-n-run.sh` that I've created (which is a silly 2liner), or build and run it yourself ### Steps to Reproduce 1. Run something like: ```js const {spawnSync} = require('child_process') console.log('Spawning...') spawnSync('npm',['install','./pkg','--verbose'], {stdio:'inherit', timeout:1000*3}) console.log('spawner bye') ``` and make the package preinstall script for `pkg` to hang (via long `setTimeout` that will make it busy) 3. Expect `ETIMEOUT`, but get nothing
1.0
node {16,17,18} doesn't respect spawn timeout and hangs - ### Version 16.* ### Platform linux `Linux f092196b2008 5.10.104-linuxkit #1 SMP Thu Mar 17 17:08:06 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux` ### What steps will reproduce the bug? I've created a demo using docker so you can easily swap node versions and see the bug. That being said, I'm sharing the tarball as bas64 because I wasn't sure if you'd like to download tarballs...that being said you may also not want to decode b64 🤷‍♂️ download here if you'd like: [poc<via wetransfer>](https://wetransfer.com/downloads/4395c16ad0eaf6ff6492d67b56060af420220706154704/4edc045984510764c4a4f8980d82ffe720220706154716/0d769f?utm_campaign=WT_email_tracking&utm_content=general&utm_medium=download_button&utm_source=notify_recipient_email) or run the following: ```sh base64 -d <<< H4sIACurxWIAA+1YbWvbMBDOZ/+KqzfqZCSyZcsxBLoLjpJp+fRKchuVA7HcQLfB912y9ZxSdkuAJi4Xey5PvE9cDAmJGiAX/3UGo2ZKMJcTiW8YGH8gJ90G40e6F+sY9W+ECB7OGNJ3Ek7+SxFYlJFjHX5x06wyr/nuzL/BHvdBjhVTOY+Xnn+32zZQ5baw1BMjCIc75gpjynudiZhOqa5acQ8Oqc56EMCnQLeSidAS7M8M9piPPcyavwlkP1Bp3LEElpVDMmHLiEP8B/7S/4T38eS/x7xSM3/TeDToP8NFOd7kvQiYVPD+N4ffP2wNwCbZ4Vh7PYPjgFJyhsf948Gxwf9vf0jOAGtE2YbTJbG9Bf6IUw4rVXg5QHZKpFnUx7PEiqqKQYfUf8RQhx9/+O6/tsIkL0kcHUx1um/G9zKv9dV+o+xV+v/JhDxVBRwLbLwMj28SqM57EBOf85YTptWNJFV31mW84gKYbUMQ3nzhKKEj5vWofoNS8cIIdm3GqFppdnUap9YTI4cJonVtpCdnY9l2+lc0HzIBbVO2zJmETPek24TmrPCakPBppTPih6WG/jOm7fuhtMBVCl6RWW45963/wU6NRX/B/AY/cdBIPnvup5b6/8mUOY/C6PzcEzlLcDTp4+xTv89n9zov0PU/e/X+r8ZXBsga/lwSs0emPIomG1lkDItGE+VDSMHOaU1piLKWVYsekrjNGT62+odoK2lo5AdKoA0ZDld3AfKmUYTDtYXmiQcRjmfgkgozaS437htWbC9rV8mgGzdrQaXg811gHN6dcnzWEU4OdWWcFZMeH4zsYRFNBV6XXuHu6Yxr++MP6Dk/3J/q4mx9v3vBff03wscXPN/E7hbYqljICs6SbtLVdLR4qgsyZrN1s77O66fOY9hynNVAFqtti7ZuurTAtuGEBJeyLqwpty/jnvvfykGTx/j8e9/3/fk/e8iVM2EbuOV879GjRqvF78BY21wXQAgAAA= > out.tar.gz ``` then `tar -zxf out.tar.gz` and you'll see the following files: ```sh Dockerfile build-n-run.sh index.js node_modules pkg ``` you can run the `build-n-run.sh` that I've created (which is a silly 2liner), or build and run it yourself ### Steps to Reproduce 1. Run something like: ```js const {spawnSync} = require('child_process') console.log('Spawning...') spawnSync('npm',['install','./pkg','--verbose'], {stdio:'inherit', timeout:1000*3}) console.log('spawner bye') ``` and make the package preinstall script for `pkg` to hang (via long `setTimeout` that will make it busy) 3. Expect `ETIMEOUT`, but get nothing
process
node doesn t respect spawn timeout and hangs version platform linux linux linuxkit smp thu mar utc gnu linux what steps will reproduce the bug i ve created a demo using docker so you can easily swap node versions and see the bug that being said i m sharing the tarball as because i wasn t sure if you d like to download tarballs that being said you may also not want to decode 🤷‍♂️ download here if you d like or run the following sh d out tar gz then tar zxf out tar gz and you ll see the following files sh dockerfile build n run sh index js node modules pkg you can run the build n run sh that i ve created which is a silly or build and run it yourself steps to reproduce run something like js const spawnsync require child process console log spawning spawnsync npm stdio inherit timeout console log spawner bye and make the package preinstall script for pkg to hang via long settimeout that will make it busy expect etimeout but get nothing
1
8,260
11,425,620,278
IssuesEvent
2020-02-03 20:12:20
NationalSecurityAgency/ghidra
https://api.github.com/repos/NationalSecurityAgency/ghidra
closed
dsPIC30F program space strings
Feature: Processor/other
I compiled a simple dsPIC30F binary using the following code: ``` #include <stdio.h> __prog__ const char __attribute__((space(prog))) str[] = "Hello!"; int main() { printf("%s\n", str); } ``` Because I am forcing the string into program space, it ends up looking like this in Ghidra: ![ss](https://user-images.githubusercontent.com/5378554/73675863-40328680-4681-11ea-99f6-197fb260b9a7.png) Note the extra 0x00's in between some of the string's characters. This is due to the weird size and wordsize of the PIC program (ROM) space. Is there any way for Ghidra's string datatype to be made aware of these various features of the address space the string is contained in? Similar question for the string search feature.
1.0
dsPIC30F program space strings - I compiled a simple dsPIC30F binary using the following code: ``` #include <stdio.h> __prog__ const char __attribute__((space(prog))) str[] = "Hello!"; int main() { printf("%s\n", str); } ``` Because I am forcing the string into program space, it ends up looking like this in Ghidra: ![ss](https://user-images.githubusercontent.com/5378554/73675863-40328680-4681-11ea-99f6-197fb260b9a7.png) Note the extra 0x00's in between some of the string's characters. This is due to the weird size and wordsize of the PIC program (ROM) space. Is there any way for Ghidra's string datatype to be made aware of these various features of the address space the string is contained in? Similar question for the string search feature.
process
program space strings i compiled a simple binary using the following code include prog const char attribute space prog str hello int main printf s n str because i am forcing the string into program space it ends up looking like this in ghidra note the extra s in between some of the string s characters this is due to the weird size and wordsize of the pic program rom space is there any way for ghidra s string datatype to be made aware of these various features of the address space the string is contained in similar question for the string search feature
1
2,092
4,928,880,201
IssuesEvent
2016-11-27 15:13:48
brucemiller/LaTeXML
https://api.github.com/repos/brucemiller/LaTeXML
closed
Cannot use non-html5 equations
bug postprocessing
First off, excellent work with this package. I am able to convert a rather complex document to html5 with lots of equations and images. Ultimately my target is to get back to a Word document from a latex document. I think I am not far off. That said, latexml always borks when I try to tell it to use svg or png equations (with ``--mathsvg`` or ``--mathimages`` respectively). The HTML5 equations will obviously be no good in Word. I always get this error: ``` Fatal:perl:die Perl died Postprocessing LaTeXML::Post::MathML::Presentation Paper.html Wide character in subroutine entry at /usr/share/perl5/LaTeXML/Post.pm line 1188. ``` It would be nicer if the error would not kill LateXML, rather it would just skip that equation.
1.0
Cannot use non-html5 equations - First off, excellent work with this package. I am able to convert a rather complex document to html5 with lots of equations and images. Ultimately my target is to get back to a Word document from a latex document. I think I am not far off. That said, latexml always borks when I try to tell it to use svg or png equations (with ``--mathsvg`` or ``--mathimages`` respectively). The HTML5 equations will obviously be no good in Word. I always get this error: ``` Fatal:perl:die Perl died Postprocessing LaTeXML::Post::MathML::Presentation Paper.html Wide character in subroutine entry at /usr/share/perl5/LaTeXML/Post.pm line 1188. ``` It would be nicer if the error would not kill LateXML, rather it would just skip that equation.
process
cannot use non equations first off excellent work with this package i am able to convert a rather complex document to with lots of equations and images ultimately my target is to get back to a word document from a latex document i think i am not far off that said latexml always borks when i try to tell it to use svg or png equations with mathsvg or mathimages respectively the equations will obviously be no good in word i always get this error fatal perl die perl died postprocessing latexml post mathml presentation paper html wide character in subroutine entry at usr share latexml post pm line it would be nicer if the error would not kill latexml rather it would just skip that equation
1
12,468
7,885,614,472
IssuesEvent
2018-06-27 13:01:04
angular/angular
https://api.github.com/repos/angular/angular
closed
Allow prod mode to be enabled via build-time environment variables + support dead-code elimination
comp: core & compiler comp: packaging comp: performance fixed by Ivy freq3: high severity3: broken type: feature
other frameworks support DCE, dead-code elimination, with environment variables for switching between `"development"`, `"production"`, and `"test"` modes ![screen shot 2015-12-24 at 6 25 08 pm](https://cloud.githubusercontent.com/assets/1016365/11999133/b4d73780-aa6b-11e5-8b9d-0c6a447bf80b.jpg) can we do the same for `enableProdMode` ``` if (process.env.NODE_ENV !== 'production') { console.log("I'm in development"); enableDevMode(); } ``` webpack(w/ DefinePlugin) and browserify(w/ envify) will replace `process.env.NODE_ENV` with `"production"` ``` if ("production" !== 'production') { console.log("I'm in development"); enableDevMode(); } ``` which would then be replaced with ``` if (false) { console.log("I'm in development"); enableDevMode(); } ``` and if they had UglifyJS then the statement would be removed. It would be great to have a list of standard environment variables that all libraries could use for universal modules for example. - ENV: `'development' || 'production' || 'test'` - PLATFORM: `'node' || 'worker' || 'browser' || 'browser-ui' || 'atom' || 'recreate-user-agent-standard'` related: https://github.com/mishoo/UglifyJS2#conditional-compilation [![github-tipe-logo](https://user-images.githubusercontent.com/1016365/34912701-7edec34c-f89c-11e7-8c89-bed6cef064b5.png)](https://tipe.io?ref=github-comment)
True
Allow prod mode to be enabled via build-time environment variables + support dead-code elimination - other frameworks support DCE, dead-code elimination, with environment variables for switching between `"development"`, `"production"`, and `"test"` modes ![screen shot 2015-12-24 at 6 25 08 pm](https://cloud.githubusercontent.com/assets/1016365/11999133/b4d73780-aa6b-11e5-8b9d-0c6a447bf80b.jpg) can we do the same for `enableProdMode` ``` if (process.env.NODE_ENV !== 'production') { console.log("I'm in development"); enableDevMode(); } ``` webpack(w/ DefinePlugin) and browserify(w/ envify) will replace `process.env.NODE_ENV` with `"production"` ``` if ("production" !== 'production') { console.log("I'm in development"); enableDevMode(); } ``` which would then be replaced with ``` if (false) { console.log("I'm in development"); enableDevMode(); } ``` and if they had UglifyJS then the statement would be removed. It would be great to have a list of standard environment variables that all libraries could use for universal modules for example. - ENV: `'development' || 'production' || 'test'` - PLATFORM: `'node' || 'worker' || 'browser' || 'browser-ui' || 'atom' || 'recreate-user-agent-standard'` related: https://github.com/mishoo/UglifyJS2#conditional-compilation [![github-tipe-logo](https://user-images.githubusercontent.com/1016365/34912701-7edec34c-f89c-11e7-8c89-bed6cef064b5.png)](https://tipe.io?ref=github-comment)
non_process
allow prod mode to be enabled via build time environment variables support dead code elimination other frameworks support dce dead code elimination with environment variables for switching between development production and test modes can we do the same for enableprodmode if process env node env production console log i m in development enabledevmode webpack w defineplugin and browserify w envify will replace process env node env with production if production production console log i m in development enabledevmode which would then be replaced with if false console log i m in development enabledevmode and if they had uglifyjs then the statement would be removed it would be great to have a list of standard environment variables that all libraries could use for universal modules for example env development production test platform node worker browser browser ui atom recreate user agent standard related
0
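The build-time substitution that the record above relies on can be sketched in a few lines. `definePass` here is a hypothetical stand-in for what webpack's DefinePlugin or browserify's envify does (a textual replacement pass), shown only to illustrate why a minifier can afterwards prove the branch unreachable and drop it:

```javascript
// Sketch (assumed simplification, not the real plugin implementation):
// replace every occurrence of the env lookup with a string literal, so the
// condition becomes a compile-time constant.
function definePass(source, env) {
  return source.split('process.env.NODE_ENV').join(JSON.stringify(env));
}

const source =
  "if (process.env.NODE_ENV !== 'production') { enableDevMode(); }";

const prodBuild = definePass(source, 'production');
// After substitution the condition is a constant comparison:
//   if ("production" !== 'production') { enableDevMode(); }
// which a minifier such as UglifyJS/Terser folds to `if (false)` and removes.
console.log(prodBuild);
```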
310,851
9,524,849,368
IssuesEvent
2019-04-28 07:26:22
projectacrn/acrn-hypervisor
https://api.github.com/repos/projectacrn/acrn-hypervisor
closed
"ST_PERF_DS3_Resume_to_IVE_Android_OS_UI" test result value is low.
priority: P3-Medium status: Assigned type: bug
Kernel Version/Android Version | 4.19.19-quilt-2e5dc0ac-00135-gdfb7dfd764a1 (190208T031538Z) / 9 AOSP Version | PPR1.181005.003 ABL Version | 1906_GP20 IFWI Version | 3.1.55.2278a IOC Version | 4.0.14 ACRN-Hypervisor | 2019w05.3.150000p_156 SOS version: 27700 SOS Kernel Version | 4.19.19-8-.iot-lts2018-sos #1 SMP PREEMPT Mon Feb 11 cAVS FW Version | 9.22.1.3472 Test environment settings Connect 1 HDMI(on HDMI port2) and 1 eDP panel Execution steps Freshly flash the USER image on DUT (Device Under Test) and perform System Setup & Configuration section's steps. Boot the device to home screen and wait 5 minutes. Press 'ignition' button let the DUT go to S3 mode and wait about 1 minutes. open mobile video Start measurement by pressing the 'ignition' button. Until the screen lights up and stop video copy video from mobile to host execute "python ffmpeg.py -i *.MOV" calculating time Expected result less than 2 seconds Actual result more than 2seconds
1.0
"ST_PERF_DS3_Resume_to_IVE_Android_OS_UI" test result value is low. - Kernel Version/Android Version | 4.19.19-quilt-2e5dc0ac-00135-gdfb7dfd764a1 (190208T031538Z) / 9 AOSP Version | PPR1.181005.003 ABL Version | 1906_GP20 IFWI Version | 3.1.55.2278a IOC Version | 4.0.14 ACRN-Hypervisor | 2019w05.3.150000p_156 SOS version: 27700 SOS Kernel Version | 4.19.19-8-.iot-lts2018-sos #1 SMP PREEMPT Mon Feb 11 cAVS FW Version | 9.22.1.3472 Test environment settings Connect 1 HDMI(on HDMI port2) and 1 eDP panel Execution steps Freshly flash the USER image on DUT (Device Under Test) and perform System Setup & Configuration section's steps. Boot the device to home screen and wait 5 minutes. Press 'ignition' button let the DUT go to S3 mode and wait about 1 minutes. open mobile video Start measurement by pressing the 'ignition' button. Until the screen lights up and stop video copy video from mobile to host execute "python ffmpeg.py -i *.MOV" calculating time Expected result less than 2 seconds Actual result more than 2seconds
non_process
st perf resume to ive android os ui test result value is low kernel version android version quilt aosp version abl version ifwi version ioc version acrn hypervisor sos version sos kernel version iot sos smp preempt mon feb cavs fw version test environment settings connect hdmi on hdmi and edp panel execution steps freshly flash the user image on dut device under test and perform system setup configuration section s steps boot the device to home screen and wait minutes press ignition button let the dut go to mode and wait about minutes open mobile video start measurement by pressing the ignition button until the screen lights up and stop video copy video from mobile to host execute python ffmpeg py i mov calculating time expected result less than seconds actual result more than
0
52,196
3,022,220,698
IssuesEvent
2015-07-31 19:01:06
Microsoft/TypeScript
https://api.github.com/repos/Microsoft/TypeScript
closed
tsc no longer reports a non-zero error code when reporting errors
Bug High Priority
Try installing the nightly with `npm install -g typescript@next` and compile the following file: ```TypeScript var asdf = 123 asdf = '123' ``` If you try to get the error code from 20150730 (or really any nightly that's available right now), it'll be `0`. If you switch back to 1.5 with `npm install -g typescript@latest`, you'll get an error code of `2`.
1.0
tsc no longer reports a non-zero error code when reporting errors - Try installing the nightly with `npm install -g typescript@next` and compile the following file: ```TypeScript var asdf = 123 asdf = '123' ``` If you try to get the error code from 20150730 (or really any nightly that's available right now), it'll be `0`. If you switch back to 1.5 with `npm install -g typescript@latest`, you'll get an error code of `2`.
non_process
tsc no longer reports a non zero error code when reporting errors try installing the nightly with npm install g typescript next and compile the following file typescript var asdf asdf if you try to get the error code from or really any nightly that s available right now it ll be if you switch back to with npm install g typescript latest you ll get an error code of
0
181,889
14,891,924,642
IssuesEvent
2021-01-21 01:38:10
executablebooks/jupyter-book
https://api.github.com/repos/executablebooks/jupyter-book
opened
Document how to disable the download button
documentation
A few folks have asked whether they can *disable* the page download button. This is possible in the theme, as documented here: https://sphinx-book-theme.readthedocs.io/en/latest/configure.html?highlight=use_download_button#download-page-button but, it's not documented in jupyter book. We should either document it or add some kind of configuration for it!
1.0
Document how to disable the download button - A few folks have asked whether they can *disable* the page download button. This is possible in the theme, as documented here: https://sphinx-book-theme.readthedocs.io/en/latest/configure.html?highlight=use_download_button#download-page-button but, it's not documented in jupyter book. We should either document it or add some kind of configuration for it!
non_process
document how to disable the download button a few folks have asked whether they can disable the page download button this is possible in the theme as documented here but it s not documented in jupyter book we should either document it or add some kind of configuration for it
0
133,460
12,541,958,373
IssuesEvent
2020-06-05 13:16:53
OpenMined/SwiftSyft
https://api.github.com/repos/OpenMined/SwiftSyft
closed
Create a better Readme
Priority: 2 - High :cold_sweat: Severity: 4 - Low :sunglasses: Status: In Progress :star2: Type: Documentation :books:
[Based on this readme template](https://github.com/OpenMined/.github/blob/master/README-TEMPLATE.md), we should improve our readmes across all OpenMined projects. More specifically, you should fill out the template **at the minimum**. - [ ] Don't worry about the logo, I'll get this to you. - [ ] Change all badges to reflect your repo, include other badges as desired, but use those at the minimum. You can generate more here: https://shields.io/ - [ ] Change the title - [ ] Write a detailed description of what your library intends to accomplish. I would also advise that you provide links to the following papers: https://ai.googleblog.com/2017/04/federated-learning-collaborative.html, https://arxiv.org/pdf/1902.01046.pdf, https://research.google/pubs/pub47246/. I would also explain that the system is driven by developing a model in PySyft, hosting it in PyGrid, and then downloading it using a worker library. Be sure to also link to the other worker libraries that aren't yours, so we can cross-promote our work! - [ ] Fill out a list of features that your library supports. A suggested list includes: PySyft plan execution, optional third-party JWT authentication, wifi detection, charge detection, sleep/wake detection, protocols for secure aggregation (put mark this as "in progress"), and a list of environments this library is expected to work in. That's a short list, you can add or remove what you please. - [ ] Installation section should be updated and specify the appropriate package manager (with a link to our deployment page on that package manager) - [ ] Usage section should be comprised of the implementation code for the MNIST example. Make sure to clean these up first. I've created an issue for this elsewhere - do that one first. - [ ] Fill out some basic contributing information to tell people how to run your library locally, what the local development instructions are, etc. - [ ] Fill out the list of contributors from All Contributors. 
Build this into your workflow and expect to use their Github issue commands in the future to make adding people to the readme easier.
1.0
Create a better Readme - [Based on this readme template](https://github.com/OpenMined/.github/blob/master/README-TEMPLATE.md), we should improve our readmes across all OpenMined projects. More specifically, you should fill out the template **at the minimum**. - [ ] Don't worry about the logo, I'll get this to you. - [ ] Change all badges to reflect your repo, include other badges as desired, but use those at the minimum. You can generate more here: https://shields.io/ - [ ] Change the title - [ ] Write a detailed description of what your library intends to accomplish. I would also advise that you provide links to the following papers: https://ai.googleblog.com/2017/04/federated-learning-collaborative.html, https://arxiv.org/pdf/1902.01046.pdf, https://research.google/pubs/pub47246/. I would also explain that the system is driven by developing a model in PySyft, hosting it in PyGrid, and then downloading it using a worker library. Be sure to also link to the other worker libraries that aren't yours, so we can cross-promote our work! - [ ] Fill out a list of features that your library supports. A suggested list includes: PySyft plan execution, optional third-party JWT authentication, wifi detection, charge detection, sleep/wake detection, protocols for secure aggregation (put mark this as "in progress"), and a list of environments this library is expected to work in. That's a short list, you can add or remove what you please. - [ ] Installation section should be updated and specify the appropriate package manager (with a link to our deployment page on that package manager) - [ ] Usage section should be comprised of the implementation code for the MNIST example. Make sure to clean these up first. I've created an issue for this elsewhere - do that one first. - [ ] Fill out some basic contributing information to tell people how to run your library locally, what the local development instructions are, etc. - [ ] Fill out the list of contributors from All Contributors. 
Build this into your workflow and expect to use their Github issue commands in the future to make adding people to the readme easier.
non_process
create a better readme we should improve our readmes across all openmined projects more specifically you should fill out the template at the minimum don t worry about the logo i ll get this to you change all badges to reflect your repo include other badges as desired but use those at the minimum you can generate more here change the title write a detailed description of what your library intends to accomplish i would also advise that you provide links to the following papers i would also explain that the system is driven by developing a model in pysyft hosting it in pygrid and then downloading it using a worker library be sure to also link to the other worker libraries that aren t yours so we can cross promote our work fill out a list of features that your library supports a suggested list includes pysyft plan execution optional third party jwt authentication wifi detection charge detection sleep wake detection protocols for secure aggregation put mark this as in progress and a list of environments this library is expected to work in that s a short list you can add or remove what you please installation section should be updated and specify the appropriate package manager with a link to our deployment page on that package manager usage section should be comprised of the implementation code for the mnist example make sure to clean these up first i ve created an issue for this elsewhere do that one first fill out some basic contributing information to tell people how to run your library locally what the local development instructions are etc fill out the list of contributors from all contributors build this into your workflow and expect to use their github issue commands in the future to make adding people to the readme easier
0
21,533
29,828,798,027
IssuesEvent
2023-06-18 02:00:09
lizhihao6/get-daily-arxiv-noti
https://api.github.com/repos/lizhihao6/get-daily-arxiv-noti
opened
New submissions for Fri, 16 Jun 23
event camera white balance isp compression image signal processing image signal process raw raw image events camera color contrast events AWB
## Keyword: events ### Neural World Models for Computer Vision - **Authors:** Anthony Hu - **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Artificial Intelligence (cs.AI); Robotics (cs.RO) - **Arxiv link:** https://arxiv.org/abs/2306.09179 - **Pdf link:** https://arxiv.org/pdf/2306.09179 - **Abstract** Humans navigate in their environment by learning a mental model of the world through passive observation and active interaction. Their world model allows them to anticipate what might happen next and act accordingly with respect to an underlying objective. Such world models hold strong promises for planning in complex environments like in autonomous driving. A human driver, or a self-driving system, perceives their surroundings with their eyes or their cameras. They infer an internal representation of the world which should: (i) have spatial memory (e.g. occlusions), (ii) fill partially observable or noisy inputs (e.g. when blinded by sunlight), and (iii) be able to reason about unobservable events probabilistically (e.g. predict different possible futures). They are embodied intelligent agents that can predict, plan, and act in the physical world through their world model. In this thesis we present a general framework to train a world model and a policy, parameterised by deep neural networks, from camera observations and expert demonstrations. We leverage important computer vision concepts such as geometry, semantics, and motion to scale world models to complex urban driving scenes. First, we propose a model that predicts important quantities in computer vision: depth, semantic segmentation, and optical flow. We then use 3D geometry as an inductive bias to operate in the bird's-eye view space. We present for the first time a model that can predict probabilistic future trajectories of dynamic agents in bird's-eye view from 360{\deg} surround monocular cameras only. Finally, we demonstrate the benefits of learning a world model in closed-loop driving. 
Our model can jointly predict static scene, dynamic scene, and ego-behaviour in an urban driving environment. ## Keyword: event camera ### E-Calib: A Fast, Robust and Accurate Calibration Toolbox for Event Cameras - **Authors:** Mohammed Salah, Abdulla Ayyad, Muhammad Humais, Daniel Gehrig, Abdelqader Abusafieh, Lakmal Seneviratne, Davide Scaramuzza, Yahya Zweiri - **Subjects:** Computer Vision and Pattern Recognition (cs.CV) - **Arxiv link:** https://arxiv.org/abs/2306.09078 - **Pdf link:** https://arxiv.org/pdf/2306.09078 - **Abstract** Event cameras triggered a paradigm shift in the computer vision community delineated by their asynchronous nature, low latency, and high dynamic range. Calibration of event cameras is always essential to account for the sensor intrinsic parameters and for 3D perception. However, conventional image-based calibration techniques are not applicable due to the asynchronous, binary output of the sensor. The current standard for calibrating event cameras relies on either blinking patterns or event-based image reconstruction algorithms. These approaches are difficult to deploy in factory settings and are affected by noise and artifacts degrading the calibration performance. To bridge these limitations, we present E-Calib, a novel, fast, robust, and accurate calibration toolbox for event cameras utilizing the asymmetric circle grid, for its robustness to out-of-focus scenes. The proposed method is tested in a variety of rigorous experiments for different event camera models, on circle grids with different geometric properties, and under challenging illumination conditions. The results show that our approach outperforms the state-of-the-art in detection success rate, reprojection error, and estimation accuracy of extrinsic parameters. 
## Keyword: events camera There is no result ## Keyword: white balance There is no result ## Keyword: color contrast There is no result ## Keyword: AWB There is no result ## Keyword: ISP ### Dynamic Clustering Transformer Network for Point Cloud Segmentation - **Authors:** Dening Lu, Jun Zhou, Kyle Yilin Gao, Dilong Li, Jing Du, Linlin Xu, Jonathan Li - **Subjects:** Computer Vision and Pattern Recognition (cs.CV) - **Arxiv link:** https://arxiv.org/abs/2306.08073 - **Pdf link:** https://arxiv.org/pdf/2306.08073 - **Abstract** Point cloud segmentation is one of the most important tasks in computer vision with widespread scientific, industrial, and commercial applications. The research thereof has resulted in many breakthroughs in 3D object and scene understanding. Previous methods typically utilized hierarchical architectures for feature representation. However, the commonly used sampling and grouping methods in hierarchical networks are only based on point-wise three-dimensional coordinates, ignoring local semantic homogeneity of point clusters. Additionally, the prevalent Farthest Point Sampling (FPS) method is often a computational bottleneck. To address these issues, we propose a novel 3D point cloud representation network, called Dynamic Clustering Transformer Network (DCTNet). It has an encoder-decoder architecture, allowing for both local and global feature learning. Specifically, we propose novel semantic feature-based dynamic sampling and clustering methods in the encoder, which enables the model to be aware of local semantic homogeneity for local feature aggregation. Furthermore, in the decoder, we propose an efficient semantic feature-guided upsampling method. Our method was evaluated on an object-based dataset (ShapeNet), an urban navigation dataset (Toronto-3D), and a multispectral LiDAR dataset, verifying the performance of DCTNet across a wide variety of practical engineering applications. 
The inference speed of DCTNet is 3.8-16.8$\times$ faster than existing State-of-the-Art (SOTA) models on the ShapeNet dataset, while achieving an instance-wise mIoU of $86.6\%$, the current top score. Our method similarly outperforms previous methods on the other datasets, verifying it as the new State-of-the-Art in point cloud segmentation. ### AVIS: Autonomous Visual Information Seeking with Large Language Models - **Authors:** Ziniu Hu, Ahmet Iscen, Chen Sun, Kai-Wei Chang, Yizhou Sun, David A Ross, Cordelia Schmid, Alireza Fathi - **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Artificial Intelligence (cs.AI); Computation and Language (cs.CL) - **Arxiv link:** https://arxiv.org/abs/2306.08129 - **Pdf link:** https://arxiv.org/pdf/2306.08129 - **Abstract** In this paper, we propose an autonomous information seeking visual question answering framework, AVIS. Our method leverages a Large Language Model (LLM) to dynamically strategize the utilization of external tools and to investigate their outputs, thereby acquiring the indispensable knowledge needed to provide answers to the posed questions. Responding to visual questions that necessitate external knowledge, such as "What event is commemorated by the building depicted in this image?", is a complex task. This task presents a combinatorial search space that demands a sequence of actions, including invoking APIs, analyzing their responses, and making informed decisions. We conduct a user study to collect a variety of instances of human decision-making when faced with this task. This data is then used to design a system comprised of three components: an LLM-powered planner that dynamically determines which tool to use next, an LLM-powered reasoner that analyzes and extracts key information from the tool outputs, and a working memory component that retains the acquired information throughout the process. The collected user behavior serves as a guide for our system in two key ways. 
First, we create a transition graph by analyzing the sequence of decisions made by users. This graph delineates distinct states and confines the set of actions available at each state. Second, we use examples of user decision-making to provide our LLM-powered planner and reasoner with relevant contextual instances, enhancing their capacity to make informed decisions. We show that AVIS achieves state-of-the-art results on knowledge-intensive visual question answering benchmarks such as Infoseek and OK-VQA. ## Keyword: image signal processing There is no result ## Keyword: image signal process There is no result ## Keyword: compression ### Neural Network Compression using Binarization and Few Full-Precision Weights - **Authors:** Franco Maria Nardini, Cosimo Rulli, Salvatore Trani, Rossano Venturini - **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Machine Learning (cs.LG) - **Arxiv link:** https://arxiv.org/abs/2306.08960 - **Pdf link:** https://arxiv.org/pdf/2306.08960 - **Abstract** Quantization and pruning are known to be two effective Deep Neural Networks model compression methods. In this paper, we propose Automatic Prune Binarization (APB), a novel compression technique combining quantization with pruning. APB enhances the representational capability of binary networks using a few full-precision weights. Our technique jointly maximizes the accuracy of the network while minimizing its memory impact by deciding whether each weight should be binarized or kept in full precision. We show how to efficiently perform a forward pass through layers compressed using APB by decomposing it into a binary and a sparse-dense matrix multiplication. Moreover, we design two novel efficient algorithms for extremely quantized matrix multiplication on CPU, leveraging highly efficient bitwise operations. The proposed algorithms are 6.9x and 1.5x faster than available state-of-the-art solutions. 
We perform an extensive evaluation of APB on two widely adopted model compression datasets, namely CIFAR10 and ImageNet. APB shows to deliver better accuracy/memory trade-off compared to state-of-the-art methods based on i) quantization, ii) pruning, and iii) combination of pruning and quantization. APB outperforms quantization also in the accuracy/efficiency trade-off, being up to 2x faster than the 2-bits quantized model with no loss in accuracy. ### Robustness Analysis on Foundational Segmentation Models - **Authors:** Madeline Chantry Schiappa, Sachidanand VS, Yunhao Ge, Ondrej Miksik, Yogesh S. Rawat, Vibhav Vineet - **Subjects:** Computer Vision and Pattern Recognition (cs.CV) - **Arxiv link:** https://arxiv.org/abs/2306.09278 - **Pdf link:** https://arxiv.org/pdf/2306.09278 - **Abstract** Due to the increase in computational resources and accessibility of data, an increase in large, deep learning models trained on copious amounts of data using self-supervised or semi-supervised learning have emerged. These "foundation" models are often adapted to a variety of downstream tasks like classification, object detection, and segmentation with little-to-no training on the target dataset. In this work, we perform a robustness analysis of Visual Foundation Models (VFMs) for segmentation tasks and compare them to supervised models of smaller scale. We focus on robustness against real-world distribution shift perturbations.We benchmark four state-of-the-art segmentation architectures using 2 different datasets, COCO and ADE20K, with 17 different perturbations with 5 severity levels each. 
We find interesting insights that include (1) VFMs are not robust to compression-based corruptions, (2) while the selected VFMs do not significantly outperform or exhibit more robustness compared to non-VFM models, they remain competitively robust in zero-shot evaluations, particularly when non-VFM are under supervision and (3) selected VFMs demonstrate greater resilience to specific categories of objects, likely due to their open-vocabulary training paradigm, a feature that non-VFM models typically lack. We posit that the suggested robustness evaluation introduces new requirements for foundational models, thus sparking further research to enhance their performance. ## Keyword: RAW There is no result ## Keyword: raw image There is no result
2.0
New submissions for Fri, 16 Jun 23 - ## Keyword: events ### Neural World Models for Computer Vision - **Authors:** Anthony Hu - **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Artificial Intelligence (cs.AI); Robotics (cs.RO) - **Arxiv link:** https://arxiv.org/abs/2306.09179 - **Pdf link:** https://arxiv.org/pdf/2306.09179 - **Abstract** Humans navigate in their environment by learning a mental model of the world through passive observation and active interaction. Their world model allows them to anticipate what might happen next and act accordingly with respect to an underlying objective. Such world models hold strong promises for planning in complex environments like in autonomous driving. A human driver, or a self-driving system, perceives their surroundings with their eyes or their cameras. They infer an internal representation of the world which should: (i) have spatial memory (e.g. occlusions), (ii) fill partially observable or noisy inputs (e.g. when blinded by sunlight), and (iii) be able to reason about unobservable events probabilistically (e.g. predict different possible futures). They are embodied intelligent agents that can predict, plan, and act in the physical world through their world model. In this thesis we present a general framework to train a world model and a policy, parameterised by deep neural networks, from camera observations and expert demonstrations. We leverage important computer vision concepts such as geometry, semantics, and motion to scale world models to complex urban driving scenes. First, we propose a model that predicts important quantities in computer vision: depth, semantic segmentation, and optical flow. We then use 3D geometry as an inductive bias to operate in the bird's-eye view space. We present for the first time a model that can predict probabilistic future trajectories of dynamic agents in bird's-eye view from 360{\deg} surround monocular cameras only. 
Finally, we demonstrate the benefits of learning a world model in closed-loop driving. Our model can jointly predict static scene, dynamic scene, and ego-behaviour in an urban driving environment. ## Keyword: event camera ### E-Calib: A Fast, Robust and Accurate Calibration Toolbox for Event Cameras - **Authors:** Mohammed Salah, Abdulla Ayyad, Muhammad Humais, Daniel Gehrig, Abdelqader Abusafieh, Lakmal Seneviratne, Davide Scaramuzza, Yahya Zweiri - **Subjects:** Computer Vision and Pattern Recognition (cs.CV) - **Arxiv link:** https://arxiv.org/abs/2306.09078 - **Pdf link:** https://arxiv.org/pdf/2306.09078 - **Abstract** Event cameras triggered a paradigm shift in the computer vision community delineated by their asynchronous nature, low latency, and high dynamic range. Calibration of event cameras is always essential to account for the sensor intrinsic parameters and for 3D perception. However, conventional image-based calibration techniques are not applicable due to the asynchronous, binary output of the sensor. The current standard for calibrating event cameras relies on either blinking patterns or event-based image reconstruction algorithms. These approaches are difficult to deploy in factory settings and are affected by noise and artifacts degrading the calibration performance. To bridge these limitations, we present E-Calib, a novel, fast, robust, and accurate calibration toolbox for event cameras utilizing the asymmetric circle grid, for its robustness to out-of-focus scenes. The proposed method is tested in a variety of rigorous experiments for different event camera models, on circle grids with different geometric properties, and under challenging illumination conditions. The results show that our approach outperforms the state-of-the-art in detection success rate, reprojection error, and estimation accuracy of extrinsic parameters. 
## Keyword: events camera There is no result ## Keyword: white balance There is no result ## Keyword: color contrast There is no result ## Keyword: AWB There is no result ## Keyword: ISP ### Dynamic Clustering Transformer Network for Point Cloud Segmentation - **Authors:** Dening Lu, Jun Zhou, Kyle Yilin Gao, Dilong Li, Jing Du, Linlin Xu, Jonathan Li - **Subjects:** Computer Vision and Pattern Recognition (cs.CV) - **Arxiv link:** https://arxiv.org/abs/2306.08073 - **Pdf link:** https://arxiv.org/pdf/2306.08073 - **Abstract** Point cloud segmentation is one of the most important tasks in computer vision with widespread scientific, industrial, and commercial applications. The research thereof has resulted in many breakthroughs in 3D object and scene understanding. Previous methods typically utilized hierarchical architectures for feature representation. However, the commonly used sampling and grouping methods in hierarchical networks are only based on point-wise three-dimensional coordinates, ignoring local semantic homogeneity of point clusters. Additionally, the prevalent Farthest Point Sampling (FPS) method is often a computational bottleneck. To address these issues, we propose a novel 3D point cloud representation network, called Dynamic Clustering Transformer Network (DCTNet). It has an encoder-decoder architecture, allowing for both local and global feature learning. Specifically, we propose novel semantic feature-based dynamic sampling and clustering methods in the encoder, which enables the model to be aware of local semantic homogeneity for local feature aggregation. Furthermore, in the decoder, we propose an efficient semantic feature-guided upsampling method. Our method was evaluated on an object-based dataset (ShapeNet), an urban navigation dataset (Toronto-3D), and a multispectral LiDAR dataset, verifying the performance of DCTNet across a wide variety of practical engineering applications. 
The inference speed of DCTNet is 3.8-16.8$\times$ faster than existing State-of-the-Art (SOTA) models on the ShapeNet dataset, while achieving an instance-wise mIoU of $86.6\%$, the current top score. Our method similarly outperforms previous methods on the other datasets, verifying it as the new State-of-the-Art in point cloud segmentation. ### AVIS: Autonomous Visual Information Seeking with Large Language Models - **Authors:** Ziniu Hu, Ahmet Iscen, Chen Sun, Kai-Wei Chang, Yizhou Sun, David A Ross, Cordelia Schmid, Alireza Fathi - **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Artificial Intelligence (cs.AI); Computation and Language (cs.CL) - **Arxiv link:** https://arxiv.org/abs/2306.08129 - **Pdf link:** https://arxiv.org/pdf/2306.08129 - **Abstract** In this paper, we propose an autonomous information seeking visual question answering framework, AVIS. Our method leverages a Large Language Model (LLM) to dynamically strategize the utilization of external tools and to investigate their outputs, thereby acquiring the indispensable knowledge needed to provide answers to the posed questions. Responding to visual questions that necessitate external knowledge, such as "What event is commemorated by the building depicted in this image?", is a complex task. This task presents a combinatorial search space that demands a sequence of actions, including invoking APIs, analyzing their responses, and making informed decisions. We conduct a user study to collect a variety of instances of human decision-making when faced with this task. This data is then used to design a system comprised of three components: an LLM-powered planner that dynamically determines which tool to use next, an LLM-powered reasoner that analyzes and extracts key information from the tool outputs, and a working memory component that retains the acquired information throughout the process. The collected user behavior serves as a guide for our system in two key ways. 
First, we create a transition graph by analyzing the sequence of decisions made by users. This graph delineates distinct states and confines the set of actions available at each state. Second, we use examples of user decision-making to provide our LLM-powered planner and reasoner with relevant contextual instances, enhancing their capacity to make informed decisions. We show that AVIS achieves state-of-the-art results on knowledge-intensive visual question answering benchmarks such as Infoseek and OK-VQA. ## Keyword: image signal processing There is no result ## Keyword: image signal process There is no result ## Keyword: compression ### Neural Network Compression using Binarization and Few Full-Precision Weights - **Authors:** Franco Maria Nardini, Cosimo Rulli, Salvatore Trani, Rossano Venturini - **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Machine Learning (cs.LG) - **Arxiv link:** https://arxiv.org/abs/2306.08960 - **Pdf link:** https://arxiv.org/pdf/2306.08960 - **Abstract** Quantization and pruning are known to be two effective Deep Neural Networks model compression methods. In this paper, we propose Automatic Prune Binarization (APB), a novel compression technique combining quantization with pruning. APB enhances the representational capability of binary networks using a few full-precision weights. Our technique jointly maximizes the accuracy of the network while minimizing its memory impact by deciding whether each weight should be binarized or kept in full precision. We show how to efficiently perform a forward pass through layers compressed using APB by decomposing it into a binary and a sparse-dense matrix multiplication. Moreover, we design two novel efficient algorithms for extremely quantized matrix multiplication on CPU, leveraging highly efficient bitwise operations. The proposed algorithms are 6.9x and 1.5x faster than available state-of-the-art solutions. 
We perform an extensive evaluation of APB on two widely adopted model compression datasets, namely CIFAR10 and ImageNet. APB shows to deliver better accuracy/memory trade-off compared to state-of-the-art methods based on i) quantization, ii) pruning, and iii) combination of pruning and quantization. APB outperforms quantization also in the accuracy/efficiency trade-off, being up to 2x faster than the 2-bits quantized model with no loss in accuracy. ### Robustness Analysis on Foundational Segmentation Models - **Authors:** Madeline Chantry Schiappa, Sachidanand VS, Yunhao Ge, Ondrej Miksik, Yogesh S. Rawat, Vibhav Vineet - **Subjects:** Computer Vision and Pattern Recognition (cs.CV) - **Arxiv link:** https://arxiv.org/abs/2306.09278 - **Pdf link:** https://arxiv.org/pdf/2306.09278 - **Abstract** Due to the increase in computational resources and accessibility of data, an increase in large, deep learning models trained on copious amounts of data using self-supervised or semi-supervised learning have emerged. These "foundation" models are often adapted to a variety of downstream tasks like classification, object detection, and segmentation with little-to-no training on the target dataset. In this work, we perform a robustness analysis of Visual Foundation Models (VFMs) for segmentation tasks and compare them to supervised models of smaller scale. We focus on robustness against real-world distribution shift perturbations.We benchmark four state-of-the-art segmentation architectures using 2 different datasets, COCO and ADE20K, with 17 different perturbations with 5 severity levels each. 
We find interesting insights that include (1) VFMs are not robust to compression-based corruptions, (2) while the selected VFMs do not significantly outperform or exhibit more robustness compared to non-VFM models, they remain competitively robust in zero-shot evaluations, particularly when non-VFM are under supervision and (3) selected VFMs demonstrate greater resilience to specific categories of objects, likely due to their open-vocabulary training paradigm, a feature that non-VFM models typically lack. We posit that the suggested robustness evaluation introduces new requirements for foundational models, thus sparking further research to enhance their performance. ## Keyword: RAW There is no result ## Keyword: raw image There is no result
process
new submissions for fri jun keyword events neural world models for computer vision authors anthony hu subjects computer vision and pattern recognition cs cv artificial intelligence cs ai robotics cs ro arxiv link pdf link abstract humans navigate in their environment by learning a mental model of the world through passive observation and active interaction their world model allows them to anticipate what might happen next and act accordingly with respect to an underlying objective such world models hold strong promises for planning in complex environments like in autonomous driving a human driver or a self driving system perceives their surroundings with their eyes or their cameras they infer an internal representation of the world which should i have spatial memory e g occlusions ii fill partially observable or noisy inputs e g when blinded by sunlight and iii be able to reason about unobservable events probabilistically e g predict different possible futures they are embodied intelligent agents that can predict plan and act in the physical world through their world model in this thesis we present a general framework to train a world model and a policy parameterised by deep neural networks from camera observations and expert demonstrations we leverage important computer vision concepts such as geometry semantics and motion to scale world models to complex urban driving scenes first we propose a model that predicts important quantities in computer vision depth semantic segmentation and optical flow we then use geometry as an inductive bias to operate in the bird s eye view space we present for the first time a model that can predict probabilistic future trajectories of dynamic agents in bird s eye view from deg surround monocular cameras only finally we demonstrate the benefits of learning a world model in closed loop driving our model can jointly predict static scene dynamic scene and ego behaviour in an urban driving environment keyword event camera e calib a 
fast robust and accurate calibration toolbox for event cameras authors mohammed salah abdulla ayyad muhammad humais daniel gehrig abdelqader abusafieh lakmal seneviratne davide scaramuzza yahya zweiri subjects computer vision and pattern recognition cs cv arxiv link pdf link abstract event cameras triggered a paradigm shift in the computer vision community delineated by their asynchronous nature low latency and high dynamic range calibration of event cameras is always essential to account for the sensor intrinsic parameters and for perception however conventional image based calibration techniques are not applicable due to the asynchronous binary output of the sensor the current standard for calibrating event cameras relies on either blinking patterns or event based image reconstruction algorithms these approaches are difficult to deploy in factory settings and are affected by noise and artifacts degrading the calibration performance to bridge these limitations we present e calib a novel fast robust and accurate calibration toolbox for event cameras utilizing the asymmetric circle grid for its robustness to out of focus scenes the proposed method is tested in a variety of rigorous experiments for different event camera models on circle grids with different geometric properties and under challenging illumination conditions the results show that our approach outperforms the state of the art in detection success rate reprojection error and estimation accuracy of extrinsic parameters keyword events camera there is no result keyword white balance there is no result keyword color contrast there is no result keyword awb there is no result keyword isp dynamic clustering transformer network for point cloud segmentation authors dening lu jun zhou kyle yilin gao dilong li jing du linlin xu jonathan li subjects computer vision and pattern recognition cs cv arxiv link pdf link abstract point cloud segmentation is one of the most important tasks in computer vision with 
widespread scientific industrial and commercial applications the research thereof has resulted in many breakthroughs in object and scene understanding previous methods typically utilized hierarchical architectures for feature representation however the commonly used sampling and grouping methods in hierarchical networks are only based on point wise three dimensional coordinates ignoring local semantic homogeneity of point clusters additionally the prevalent farthest point sampling fps method is often a computational bottleneck to address these issues we propose a novel point cloud representation network called dynamic clustering transformer network dctnet it has an encoder decoder architecture allowing for both local and global feature learning specifically we propose novel semantic feature based dynamic sampling and clustering methods in the encoder which enables the model to be aware of local semantic homogeneity for local feature aggregation furthermore in the decoder we propose an efficient semantic feature guided upsampling method our method was evaluated on an object based dataset shapenet an urban navigation dataset toronto and a multispectral lidar dataset verifying the performance of dctnet across a wide variety of practical engineering applications the inference speed of dctnet is times faster than existing state of the art sota models on the shapenet dataset while achieving an instance wise miou of the current top score our method similarly outperforms previous methods on the other datasets verifying it as the new state of the art in point cloud segmentation avis autonomous visual information seeking with large language models authors ziniu hu ahmet iscen chen sun kai wei chang yizhou sun david a ross cordelia schmid alireza fathi subjects computer vision and pattern recognition cs cv artificial intelligence cs ai computation and language cs cl arxiv link pdf link abstract in this paper we propose an autonomous information seeking visual question 
answering framework avis our method leverages a large language model llm to dynamically strategize the utilization of external tools and to investigate their outputs thereby acquiring the indispensable knowledge needed to provide answers to the posed questions responding to visual questions that necessitate external knowledge such as what event is commemorated by the building depicted in this image is a complex task this task presents a combinatorial search space that demands a sequence of actions including invoking apis analyzing their responses and making informed decisions we conduct a user study to collect a variety of instances of human decision making when faced with this task this data is then used to design a system comprised of three components an llm powered planner that dynamically determines which tool to use next an llm powered reasoner that analyzes and extracts key information from the tool outputs and a working memory component that retains the acquired information throughout the process the collected user behavior serves as a guide for our system in two key ways first we create a transition graph by analyzing the sequence of decisions made by users this graph delineates distinct states and confines the set of actions available at each state second we use examples of user decision making to provide our llm powered planner and reasoner with relevant contextual instances enhancing their capacity to make informed decisions we show that avis achieves state of the art results on knowledge intensive visual question answering benchmarks such as infoseek and ok vqa keyword image signal processing there is no result keyword image signal process there is no result keyword compression neural network compression using binarization and few full precision weights authors franco maria nardini cosimo rulli salvatore trani rossano venturini subjects computer vision and pattern recognition cs cv machine learning cs lg arxiv link pdf link abstract quantization and 
pruning are known to be two effective deep neural networks model compression methods in this paper we propose automatic prune binarization apb a novel compression technique combining quantization with pruning apb enhances the representational capability of binary networks using a few full precision weights our technique jointly maximizes the accuracy of the network while minimizing its memory impact by deciding whether each weight should be binarized or kept in full precision we show how to efficiently perform a forward pass through layers compressed using apb by decomposing it into a binary and a sparse dense matrix multiplication moreover we design two novel efficient algorithms for extremely quantized matrix multiplication on cpu leveraging highly efficient bitwise operations the proposed algorithms are and faster than available state of the art solutions we perform an extensive evaluation of apb on two widely adopted model compression datasets namely and imagenet apb shows to deliver better accuracy memory trade off compared to state of the art methods based on i quantization ii pruning and iii combination of pruning and quantization apb outperforms quantization also in the accuracy efficiency trade off being up to faster than the bits quantized model with no loss in accuracy robustness analysis on foundational segmentation models authors madeline chantry schiappa sachidanand vs yunhao ge ondrej miksik yogesh s rawat vibhav vineet subjects computer vision and pattern recognition cs cv arxiv link pdf link abstract due to the increase in computational resources and accessibility of data an increase in large deep learning models trained on copious amounts of data using self supervised or semi supervised learning have emerged these foundation models are often adapted to a variety of downstream tasks like classification object detection and segmentation with little to no training on the target dataset in this work we perform a robustness analysis of visual 
foundation models vfms for segmentation tasks and compare them to supervised models of smaller scale we focus on robustness against real world distribution shift perturbations we benchmark four state of the art segmentation architectures using different datasets coco and with different perturbations with severity levels each we find interesting insights that include vfms are not robust to compression based corruptions while the selected vfms do not significantly outperform or exhibit more robustness compared to non vfm models they remain competitively robust in zero shot evaluations particularly when non vfm are under supervision and selected vfms demonstrate greater resilience to specific categories of objects likely due to their open vocabulary training paradigm a feature that non vfm models typically lack we posit that the suggested robustness evaluation introduces new requirements for foundational models thus sparking further research to enhance their performance keyword raw there is no result keyword raw image there is no result
1
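The record above (and each record that follows) ends with a string label and a binary target. Reading these fields off the rows in this chunk, `process` always pairs with `1` and `non_process` with `0`. A minimal sketch of that encoding follows; the field names come from the column list in the file header, and the dataset's own build script is assumed, not shown here:

```python
def encode_label(label: str) -> int:
    """Map the dataset's string label to its binary target.

    The mapping is read off the records themselves:
    'process' rows carry binary_label 1, 'non_process' rows carry 0.
    """
    mapping = {"process": 1, "non_process": 0}
    return mapping[label]
```

For example, `encode_label("process")` returns `1`, matching the `label`/`binary_label` pair of the record above.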
335,599
10,163,753,183
IssuesEvent
2019-08-07 09:58:34
meumobi/IRmobi
https://api.github.com/repos/meumobi/IRmobi
opened
If I change lang home headlines are not loaded/refreshed
fix high priority items
### Expected behaviour Tell us what should happen ### Actual behaviour Tell us what happens instead ### Steps to reproduce 1. Install App 2. Change lang => should observe home content is not loaded 3. ### Expected responses - Why it happens - How to fix it - How to test
1.0
If I change lang home headlines are not loaded/refreshed - ### Expected behaviour Tell us what should happen ### Actual behaviour Tell us what happens instead ### Steps to reproduce 1. Install App 2. Change lang => should observe home content is not loaded 3. ### Expected responses - Why it happens - How to fix it - How to test
non_process
if i change lang home headlines are not loaded refreshed expected behaviour tell us what should happen actual behaviour tell us what happens instead steps to reproduce install app change lang should observe home content is not loaded expected responses why it happens how to fix it how to test
0
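Comparing the `text_combine` field with the lowercased `text` field in this record (and the others in this chunk) suggests the cleaning step: lowercase, replace punctuation with spaces, and drop any token containing a digit (e.g. `loaded/refreshed` becomes `loaded refreshed`, and `CIFAR10` disappears entirely in a later record). The dataset's actual preprocessing code is not included here, so the sketch below is an inference from those field pairs:

```python
import re

def clean_text(raw: str) -> str:
    """Reproduce the visible relationship between text_combine and text.

    Lowercase the input, turn every run of non-alphanumeric characters
    into a single space, then discard tokens that contain a digit.
    """
    tokens = re.sub(r"[^a-z0-9]+", " ", raw.lower()).split()
    return " ".join(t for t in tokens if not any(c.isdigit() for c in t))
```

For example, `clean_text("If I change lang home headlines are not loaded/refreshed")` yields `"if i change lang home headlines are not loaded refreshed"`, matching the start of this record's `text` field.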
12,148
3,587,110,962
IssuesEvent
2016-01-30 03:16:25
hoch/WAAX
https://api.github.com/repos/hoch/WAAX
closed
Generate plug-ins documentation with JSDoc.
Documentation
Plug-ins JS files have JSDoc documentation inside, but never have been used.
1.0
Generate plug-ins documentation with JSDoc. - Plug-ins JS files have JSDoc documentation inside, but never have been used.
non_process
generate plug ins documentation with jsdoc plug ins js files have jsdoc documentation inside but never have been used
0
53,693
13,195,803,683
IssuesEvent
2020-08-13 19:22:53
dhall-lang/dhall-haskell
https://api.github.com/repos/dhall-lang/dhall-haskell
opened
Test setup for preserved whitespace on AST
build parser
From #1991 we noticed that we don't have tests to ensure that the preserved whitespace on some of our AST constructors is what we expect on other packages (`dhall-docs`, for instance). To tackle this, we can have a golden test setup, similar to what we have in other packages (see `dhall-docs` and `dhall-json` error messages tests) that take the output generated from `dhall haskell-syntax-tree --noted` command and compare to the golden source files. The test cases should handle the following constructors: - [ ] `Let` bindings - [ ] `Lam` bindings - [ ] `RecordField` whitespaces - [ ] (After merging #1991) `FieldAccess` whitespaces
1.0
Test setup for preserved whitespace on AST - From #1991 we noticed that we don't have tests to ensure that the preserved whitespace on some of our AST constructors is what we expect on other packages (`dhall-docs`, for instance). To tackle this, we can have a golden test setup, similar to what we have in other packages (see `dhall-docs` and `dhall-json` error messages tests) that take the output generated from `dhall haskell-syntax-tree --noted` command and compare to the golden source files. The test cases should handle the following constructors: - [ ] `Let` bindings - [ ] `Lam` bindings - [ ] `RecordField` whitespaces - [ ] (After merging #1991) `FieldAccess` whitespaces
non_process
test setup for preserved whitespace on ast from we noticed that we don t have tests to ensure that the preserved whitespace on some of our ast constructors is what we expect on other packages dhall docs for instance to tackle this we can have a golden test setup similar to what we have in other packages see dhall docs and dhall json error messages tests that take the output generated from dhall haskell syntax tree noted command and compare to the golden source files the test cases should handle the following constructors let bindings lam bindings recordfield whitespaces after merging fieldaccess whitespaces
0
40,319
6,819,561,257
IssuesEvent
2017-11-07 10:39:08
smithfarm/dochazka-rest
https://api.github.com/repos/smithfarm/dochazka-rest
closed
doc: employee/search/nick/:key does not work as documented (fix documentation)
bug documentation
The documentation of `employee/search/nick/:key` says: Look up employee profiles using a search key, which can optionally contain a wildcard ('%'). For example: ``` GET employee/search/nick/foo% ``` would return a list of employees whose nick starts with 'foo'. However, what actually happens is Error 400 :-( It is possible to do: ``` GET employee/search/nick/foo ``` which gets translated into `%foo%` - not exactly what was advertised.
1.0
doc: employee/search/nick/:key does not work as documented (fix documentation) - The documentation of `employee/search/nick/:key` says: Look up employee profiles using a search key, which can optionally contain a wildcard ('%'). For example: ``` GET employee/search/nick/foo% ``` would return a list of employees whose nick starts with 'foo'. However, what actually happens is Error 400 :-( It is possible to do: ``` GET employee/search/nick/foo ``` which gets translated into `%foo%` - not exactly what was advertised.
non_process
doc employee search nick key does not work as documented fix documentation the documentation of employee search nick key says look up employee profiles using a search key which can optionally contain a wildcard for example get employee search nick foo would return a list of employees whose nick starts with foo however what actually happens is error it is possible to do get employee search nick foo which gets translated into foo not exactly what was advertised
0
6,238
9,182,238,092
IssuesEvent
2019-03-05 12:19:27
chuminh712/BookStorage---Group-2
https://api.github.com/repos/chuminh712/BookStorage---Group-2
opened
Interface Mockup Design
In Process
Design Interface Mockup for project: - Login page - Manage page - Report page
1.0
Interface Mockup Design - Design Interface Mockup for project: - Login page - Manage page - Report page
process
interface mockup design design interface mockup for project login page manage page report page
1
16,741
21,900,523,725
IssuesEvent
2022-05-20 13:00:33
camunda/zeebe-process-test
https://api.github.com/repos/camunda/zeebe-process-test
opened
Favor Embedded over Testcontainers
kind/feature team/process-automation
**Description** I don't know if there are other aspects, but from my impression the Embedded tests are easier to use and should be mentioned first in the documentation. Testcontainers should become the first alternative/second position.
1.0
Favor Embedded over Testcontainers - **Description** I don't know if there are other aspects, but from my impression the Embedded tests are easier to use and should be mentioned first in the documentation. Testcontainers should become the first alternative/second position.
process
favor embedded over testcontainers description i don t know if there are other aspects but from my impression the embedded tests are easier to use and should be mentioned first in the documentation testcontainers should become the first alternative second position
1
7,944
11,137,525,995
IssuesEvent
2019-12-20 19:36:04
openopps/openopps-platform
https://api.github.com/repos/openopps/openopps-platform
closed
Move USAJOBS data pull from Apply button to Next Steps "Continue"
Apply Process Approved Requirements Ready State Dept.
Who: Applicants What: Move data pull from USAJOBS to Next Steps "continue" Why: to give space to explain to the applicant how we pull the data from USAJOBS Acceptance Criteria: - Currently the USAJOBS one profile data is pulled for a student when they select "Apply" - Change the data pull to when they click "Continue" on the Next Steps page Related Tickets: 4118 - Move USAJOBS data pull from apply button to next steps continue 4009 - update next steps text 4119 - Modal to say we're pulling from USAJOBS
1.0
Move USAJOBS data pull from Apply button to Next Steps "Continue" - Who: Applicants What: Move data pull from USAJOBS to Next Steps "continue" Why: to give space to explain to the applicant how we pull the data from USAJOBS Acceptance Criteria: - Currently the USAJOBS one profile data is pulled for a student when they select "Apply" - Change the data pull to when they click "Continue" on the Next Steps page Related Tickets: 4118 - Move USAJOBS data pull from apply button to next steps continue 4009 - update next steps text 4119 - Modal to say we're pulling from USAJOBS
process
move usajobs data pull from apply button to next steps continue who applicants what move data pull from usajobs to next steps continue why to give space to explain to the applicant how we pull the data from usajobs acceptance criteria currently the usajobs one profile data is pulled for a student when they select apply change the data pull to when they click continue on the next steps page related tickets move usajobs data pull from apply button to next steps continue update next steps text modal to say we re pulling from usajobs
1
26,771
6,799,001,375
IssuesEvent
2017-11-02 08:44:24
bunq/sdk_csharp
https://api.github.com/repos/bunq/sdk_csharp
opened
Add missing fields for cvc endpoint
needs code regeneration
## Steps to reproduce: 1. Try and get `id`, `created` or `updated` from `CardGeneratedCvc2` ## What should happen: 1. These fields are there with getters/setters ## What happens: 1. There are no such fields ## Logs - `no logs` ## Extra info: - Tested on [0.12.1](https://github.com/bunq/sdk_csharp/releases/tag/0.12.1)
1.0
Add missing fields for cvc endpoint - ## Steps to reproduce: 1. Try and get `id`, `created` or `updated` from `CardGeneratedCvc2` ## What should happen: 1. These fields are there with getters/setters ## What happens: 1. There are no such fields ## Logs - `no logs` ## Extra info: - Tested on [0.12.1](https://github.com/bunq/sdk_csharp/releases/tag/0.12.1)
non_process
add missing fields for cvc endpoint steps to reproduce try and get id created or updated from what should happen these fields are there with getters setters what happens there are no such fields logs no logs extra info tested on
0
85,861
10,688,391,431
IssuesEvent
2019-10-22 18:08:01
iterative/dvc.org
https://api.github.com/repos/iterative/dvc.org
opened
engine: create title entry in CONTENTS nav (right side bar)
design docs engine enhancement good first issue
Often there's a long intro in our docs before getting to the first H2 header (example below) but the CONTENTS navigation element in the right side bar does not list the document's H1 title, so it's not possible to scroll to the very top from there, which I think would be a logical feature. ![image](https://user-images.githubusercontent.com/1477535/67315649-9d3c6f00-f4cc-11e9-9869-3562716c30a6.png) > https://dvc.org/doc/tutorials/versioning
1.0
engine: create title entry in CONTENTS nav (right side bar) - Often there's a long intro in our docs before getting to the first H2 header (example below) but the CONTENTS navigation element in the right side bar does not list the document's H1 title, so it's not possible to scroll to the very top from there, which I think would be a logical feature. ![image](https://user-images.githubusercontent.com/1477535/67315649-9d3c6f00-f4cc-11e9-9869-3562716c30a6.png) > https://dvc.org/doc/tutorials/versioning
non_process
engine create title entry in contents nac right side bar often there s a long intro in our docs before getting to the first header example below but the contents navigation element in the right side bar does not list the document s title so it s not possible to scroll to the very top from there which i think would be a logical feature
0
630,790
20,118,043,467
IssuesEvent
2022-02-07 21:50:13
status-im/status-desktop
https://api.github.com/repos/status-im/status-desktop
closed
Rendering issue in chat list when cursor is between community info and first category
bug ui Chat priority 2: medium
Super weird, when I move the cursor out of the chat list and get closer to the community info, all chat names disappear: <img width="387" alt="Screenshot 2021-09-13 at 15 59 05" src="https://user-images.githubusercontent.com/445106/133097244-1fd7dc73-2ff0-453a-b67e-f4df7a97ee98.png"> <img width="349" alt="Screenshot 2021-09-13 at 15 59 16" src="https://user-images.githubusercontent.com/445106/133097258-d213e04e-cad9-4a39-8d0c-5a5ab305f4a6.png">
1.0
Rendering issue in chat list when cursor is between community info and first category - Super weird, when I move the cursor out of the chat list and get closer to the community info, all chat names disappear: <img width="387" alt="Screenshot 2021-09-13 at 15 59 05" src="https://user-images.githubusercontent.com/445106/133097244-1fd7dc73-2ff0-453a-b67e-f4df7a97ee98.png"> <img width="349" alt="Screenshot 2021-09-13 at 15 59 16" src="https://user-images.githubusercontent.com/445106/133097258-d213e04e-cad9-4a39-8d0c-5a5ab305f4a6.png">
non_process
rendering issue in chat list when cursor is between community info and first category super weird when i move the cursor out of the chat list and get closer to the community info all chat names disappear img width alt screenshot at src img width alt screenshot at src
0
244,554
20,676,852,931
IssuesEvent
2022-03-10 10:06:11
esi-neuroscience/syncopy
https://api.github.com/repos/esi-neuroscience/syncopy
closed
Tests for SpectralData plotting routines
Tests
Similar to the `AnalogData` plotting routines, the implemented plotting functionality for `SpectralData` objects needs to have its own testing suite. This is currently not a priority but should be done as soon as time permits.
1.0
Tests for SpectralData plotting routines - Similar to the `AnalogData` plotting routines, the implemented plotting functionality for `SpectralData` objects needs to have its own testing suite. This is currently not a priority but should be done as soon as time permits.
non_process
tests for spectraldata plotting routines similar to the analogdata plotting routines the implemented plotting functionality for spectraldata objects needs to have its own testing suite this is currently not a priority but should be done as soon as time permits
0
475,474
13,710,969,541
IssuesEvent
2020-10-02 02:50:54
dgcnz/tcc1
https://api.github.com/repos/dgcnz/tcc1
closed
Wu, 2019. A Comprehensive Survey on Graph Neural Networks
priority 1 reading
> Deep learning has revolutionized many machine learning tasks in recent years, ranging from image classification and video processing to speech recognition and natural language understanding. The data in these tasks are typically represented in the Euclidean space. However, there is an increasing number of applications where data are generated from non-Euclidean domains and are represented as graphs with complex relationships and interdependency between objects. The complexity of graph data has imposed significant challenges on existing machine learning algorithms. Recently, many studies on extending deep learning approaches for graph data have emerged. In this survey, we provide a comprehensive overview of graph neural networks (GNNs) in data mining and machine learning fields. We propose a new taxonomy to divide the state-of-the-art graph neural networks into four categories, namely recurrent graph neural networks, convolutional graph neural networks, graph autoencoders, and spatial-temporal graph neural networks. We further discuss the applications of graph neural networks across various domains and summarize the open source codes, benchmark data sets, and model evaluation of graph neural networks. Finally, we propose potential research directions in this rapidly growing field.
1.0
Wu, 2019. A Comprehensive Survey on Graph Neural Networks - > Deep learning has revolutionized many machine learning tasks in recent years, ranging from image classification and video processing to speech recognition and natural language understanding. The data in these tasks are typically represented in the Euclidean space. However, there is an increasing number of applications where data are generated from non-Euclidean domains and are represented as graphs with complex relationships and interdependency between objects. The complexity of graph data has imposed significant challenges on existing machine learning algorithms. Recently, many studies on extending deep learning approaches for graph data have emerged. In this survey, we provide a comprehensive overview of graph neural networks (GNNs) in data mining and machine learning fields. We propose a new taxonomy to divide the state-of-the-art graph neural networks into four categories, namely recurrent graph neural networks, convolutional graph neural networks, graph autoencoders, and spatial-temporal graph neural networks. We further discuss the applications of graph neural networks across various domains and summarize the open source codes, benchmark data sets, and model evaluation of graph neural networks. Finally, we propose potential research directions in this rapidly growing field.
non_process
wu a comprehensive survey on graph neural networks deep learning has revolutionized many machine learning tasks in recent years ranging from image classification and video processing to speech recognition and natural language understanding the data in these tasks are typically represented in the euclidean space however there is an increasing number of applications where data are generated from non euclidean domains and are represented as graphs with complex relationships and interdependency between objects the complexity of graph data has imposed significant challenges on existing machine learning algorithms recently many studies on extending deep learning approaches for graph data have emerged in this survey we provide a comprehensive overview of graph neural networks gnns in data mining and machine learning fields we propose a new taxonomy to divide the state of the art graph neural networks into four categories namely recurrent graph neural networks convolutional graph neural networks graph autoencoders and spatial temporal graph neural networks we further discuss the applications of graph neural networks across various domains and summarize the open source codes benchmark data sets and model evaluation of graph neural networks finally we propose potential research directions in this rapidly growing field
0
13,581
16,115,273,297
IssuesEvent
2021-04-28 06:28:06
unicode-org/icu4x
https://api.github.com/repos/unicode-org/icu4x
opened
Add checklist for PRs
C-process S-tiny T-docs-tests discuss
In the PR template, we should make a checklist of to-do items that aren't covered elsewhere (CI). Suggestions: 1. Documentation coverage on all new exported types 2. Adherence to each section of the style guide 3. Others?
1.0
Add checklist for PRs - In the PR template, we should make a checklist of to-do items that aren't covered elsewhere (CI). Suggestions: 1. Documentation coverage on all new exported types 2. Adherence to each section of the style guide 3. Others?
process
add checklist for prs in the pr template we should make a checklist of to do items that aren t covered elsewhere ci suggestions documentation coverage on all new exported types adherence to each section of the style guide others
1
41,240
6,896,446,072
IssuesEvent
2017-11-23 17:52:57
coala/documentation
https://api.github.com/repos/coala/documentation
closed
Docs: Rename section "List of module (generated)" -> "API Reference"
area/documentation status/STALE
_From @Makman2 on July 30, 2016 16:52_ If someone needs the API, he/she looks for the word "API". Also "(generated)" does not really help ^^ _Copied from original issue: coala-analyzer/coala#2575_
1.0
Docs: Rename section "List of module (generated)" -> "API Reference" - _From @Makman2 on July 30, 2016 16:52_ If someone needs the API, he/she looks for the word "API". Also "(generated)" does not really help ^^ _Copied from original issue: coala-analyzer/coala#2575_
non_process
docs rename section list of module generated api reference from on july if someone needs the api he she looks for the word api also generated does not really help copied from original issue coala analyzer coala
0
9,613
12,552,080,819
IssuesEvent
2020-06-06 16:54:08
tikv/tikv
https://api.github.com/repos/tikv/tikv
closed
UCP: Add Tracing to Coprocessor
challenge-program-2 component/coprocessor difficulty/hard sig/coprocessor status/help-wanted
## Description It is usually difficult to know which part costs a lot of time for a single Coprocessor request. Although TiKV provides per-executor time statistics (called `ExecSummary`) for TiDB's `EXPLAIN ANALYZE` requests, it is still too rough. For example, we cannot know how long it spends in the MVCC layer itself, or the RocksDB itself. Further more, since TiDB aggregates all requests' ExecSummary together to display a single `EXPLAIN ANALYZE` result, users are not able to know per-request statistics. This task is to implement tracing at the lowest possible performance cost. It should be able to record how long each key step takes, since the request is received. This will be very helpful to trace performance spikes and performance issues for minor requests. Only requests that enables tracing should be traced. When tracing is not enabled there should be no performance impact. Goals: - Discover or design & implement a general tracing library / layer (in tikv's `components` directory): - Nearly zero-cost (i.e. performance impact for TPC-H < 1%) when tracing is **not** dynamically enabled (you may consider using generics to minimize the runtime cost) - Low-cost when tracing is enabled (i.e. performance impact for TPC-H < 5%). - Compatible with OpenTracing - Can be easily used to trace across thread boundaries without too many modification - Add tracing to Coprocessor EndPoint (`coprocessor/`) and Coprocessor Executors (`components/tidb_query`). - The effort to add tracing should be small, so that we can easily apply this mechanism to the rest of the TiKV. - Tracing context should be able to be passed into thread pools. You don't need to pass tracing context to lower levels like MVCC in this task. There could be future tasks (if time permits) for it. ## Score - 6351 The score can be adjusted to higher if the actual amount of work is notably more than we expected. 
## Mentor(s) - @andylokandy - @sticnarf - @breeswish ## Recommended Skills - Professional Rust programming - System programming - OpenTracing
2.0
UCP: Add Tracing to Coprocessor - ## Description It is usually difficult to know which part costs a lot of time for a single Coprocessor request. Although TiKV provides per-executor time statistics (called `ExecSummary`) for TiDB's `EXPLAIN ANALYZE` requests, it is still too rough. For example, we cannot know how long it spends in the MVCC layer itself, or the RocksDB itself. Further more, since TiDB aggregates all requests' ExecSummary together to display a single `EXPLAIN ANALYZE` result, users are not able to know per-request statistics. This task is to implement tracing at the lowest possible performance cost. It should be able to record how long each key step takes, since the request is received. This will be very helpful to trace performance spikes and performance issues for minor requests. Only requests that enables tracing should be traced. When tracing is not enabled there should be no performance impact. Goals: - Discover or design & implement a general tracing library / layer (in tikv's `components` directory): - Nearly zero-cost (i.e. performance impact for TPC-H < 1%) when tracing is **not** dynamically enabled (you may consider using generics to minimize the runtime cost) - Low-cost when tracing is enabled (i.e. performance impact for TPC-H < 5%). - Compatible with OpenTracing - Can be easily used to trace across thread boundaries without too many modification - Add tracing to Coprocessor EndPoint (`coprocessor/`) and Coprocessor Executors (`components/tidb_query`). - The effort to add tracing should be small, so that we can easily apply this mechanism to the rest of the TiKV. - Tracing context should be able to be passed into thread pools. You don't need to pass tracing context to lower levels like MVCC in this task. There could be future tasks (if time permits) for it. ## Score - 6351 The score can be adjusted to higher if the actual amount of work is notably more than we expected. 
## Mentor(s) - @andylokandy - @sticnarf - @breeswish ## Recommended Skills - Professional Rust programming - System programming - OpenTracing
process
ucp add tracing to coprocessor description it is usually difficult to know which part costs a lot of time for a single coprocessor request although tikv provides per executor time statistics called execsummary for tidb s explain analyze requests it is still too rough for example we cannot know how long it spends in the mvcc layer itself or the rocksdb itself further more since tidb aggregates all requests execsummary together to display a single explain analyze result users are not able to know per request statistics this task is to implement tracing at the lowest possible performance cost it should be able to record how long each key step takes since the request is received this will be very helpful to trace performance spikes and performance issues for minor requests only requests that enables tracing should be traced when tracing is not enabled there should be no performance impact goals discover or design implement a general tracing library layer in tikv s components directory nearly zero cost i e performance impact for tpc h when tracing is not dynamically enabled you may consider using generics to minimize the runtime cost low cost when tracing is enabled i e performance impact for tpc h compatible with opentracing can be easily used to trace across thread boundaries without too many modification add tracing to coprocessor endpoint coprocessor and coprocessor executors components tidb query the effort to add tracing should be small so that we can easily apply this mechanism to the rest of the tikv tracing context should be able to be passed into thread pools you don t need to pass tracing context to lower levels like mvcc in this task there could be future tasks if time permits for it score the score can be adjusted to higher if the actual amount of work is notably more than we expected mentor s andylokandy sticnarf breeswish recommended skills professional rust programming system programming opentracing
1
112,289
17,087,437,280
IssuesEvent
2021-07-08 13:33:31
keep-network/coverage-pools
https://api.github.com/repos/keep-network/coverage-pools
closed
harikari is error-prone
:eyeglasses: security-audit
Severity: Informational ## Description `harikari` destructs the Auction's contract: https://github.com/keep-network/coverage-pools/blob/49ba7e9dfb5fb421c2b5db18496479d25d374072/contracts/Auction.sol#L215-L219 When the contract is destructed, the code and variables are still available, until the top-level transaction finished. As a result, an attacker might still be able to interact with a contract marked as destructed. While we did not find a way to directly exploit this behavior, further investigation will be required in case of code update. ## Exploit Scenario A bug is found in the `Auctionner`, which allows `earlyClose` to be called multiple times within the same transaction. Eve exploits the bug and makes a profit ## Recommendations Short term, either remove `harikari` and implement a `isClose` variable/modifier, or makes sure no one can call an `Auction` contract once the `harikari` has been executed. Long term, do not use `selfdestruct`, unless a specific need is required. Take in consider that the selfdestruct gas-refund might be removed in an upcoming EVM's update (https://eips.ethereum.org/EIPS/eip-3298)
True
harikari is error-prone - Severity: Informational ## Description `harikari` destructs the Auction's contract: https://github.com/keep-network/coverage-pools/blob/49ba7e9dfb5fb421c2b5db18496479d25d374072/contracts/Auction.sol#L215-L219 When the contract is destructed, the code and variables are still available, until the top-level transaction finished. As a result, an attacker might still be able to interact with a contract marked as destructed. While we did not find a way to directly exploit this behavior, further investigation will be required in case of code update. ## Exploit Scenario A bug is found in the `Auctionner`, which allows `earlyClose` to be called multiple times within the same transaction. Eve exploits the bug and makes a profit ## Recommendations Short term, either remove `harikari` and implement a `isClose` variable/modifier, or makes sure no one can call an `Auction` contract once the `harikari` has been executed. Long term, do not use `selfdestruct`, unless a specific need is required. Take in consider that the selfdestruct gas-refund might be removed in an upcoming EVM's update (https://eips.ethereum.org/EIPS/eip-3298)
non_process
harikari is error prone severity informational description harikari destructs the auction s contract when the contract is destructed the code and variables are still available until the top level transaction finished as a result an attacker might still be able to interact with a contract marked as destructed while we did not find a way to directly exploit this behavior further investigation will be required in case of code update exploit scenario a bug is found in the auctionner which allows earlyclose to be called multiple times within the same transaction eve exploits the bug and makes a profit recommendations short term either remove harikari and implement a isclose variable modifier or makes sure no one can call an auction contract once the harikari has been executed long term do not use selfdestruct unless a specific need is required take in consider that the selfdestruct gas refund might be removed in an upcoming evm s update
0
61,423
25,518,310,457
IssuesEvent
2022-11-28 18:10:47
carbon-design-system/carbon-platform
https://api.github.com/repos/carbon-design-system/carbon-platform
closed
[Content sync]: carbon-website ref refs/heads/gh-action-test
role: dev 🤖 service: web-app 🌎
A pull request on carbon-website was just merged. It contains .mdx content changes that may need to be synced to platform: - https://github.com/carbon-design-system/carbon-website/commit/3dae97c4e3e1
1.0
[Content sync]: carbon-website ref refs/heads/gh-action-test - A pull request on carbon-website was just merged. It contains .mdx content changes that may need to be synced to platform: - https://github.com/carbon-design-system/carbon-website/commit/3dae97c4e3e1
non_process
carbon website ref refs heads gh action test a pull request on carbon website was just merged it contains mdx content changes that may need to be synced to platform
0
20,623
27,293,519,924
IssuesEvent
2023-02-23 18:20:43
googleapis/nodejs-dataform
https://api.github.com/repos/googleapis/nodejs-dataform
closed
Your .repo-metadata.json file has a problem 🤒
type: process repo-metadata: lint
You have a problem with your .repo-metadata.json file: Result of scan 📈: * api_shortname field missing from .repo-metadata.json ☝️ Once you address these problems, you can close this issue. ### Need help? * [Schema definition](https://github.com/googleapis/repo-automation-bots/blob/main/packages/repo-metadata-lint/src/repo-metadata-schema.json): lists valid options for each field. * [API index](https://github.com/googleapis/googleapis/blob/master/api-index-v1.json): for gRPC libraries **api_shortname** should match the subdomain of an API's **hostName**. * Reach out to **go/github-automation** if you have any questions.
1.0
Your .repo-metadata.json file has a problem 🤒 - You have a problem with your .repo-metadata.json file: Result of scan 📈: * api_shortname field missing from .repo-metadata.json ☝️ Once you address these problems, you can close this issue. ### Need help? * [Schema definition](https://github.com/googleapis/repo-automation-bots/blob/main/packages/repo-metadata-lint/src/repo-metadata-schema.json): lists valid options for each field. * [API index](https://github.com/googleapis/googleapis/blob/master/api-index-v1.json): for gRPC libraries **api_shortname** should match the subdomain of an API's **hostName**. * Reach out to **go/github-automation** if you have any questions.
process
your repo metadata json file has a problem 🤒 you have a problem with your repo metadata json file result of scan 📈 api shortname field missing from repo metadata json ☝️ once you address these problems you can close this issue need help lists valid options for each field for grpc libraries api shortname should match the subdomain of an api s hostname reach out to go github automation if you have any questions
1
5,114
7,888,067,312
IssuesEvent
2018-06-27 20:41:51
rubberduck-vba/Rubberduck
https://api.github.com/repos/rubberduck-vba/Rubberduck
closed
VB6 VBForm parse errors
parse-tree-preprocessing vb6-specific
In testing RD against real-world VB6 forms, I'm finding additional parse failures with form headers. As I find them, I'll update this issue so they can be tackled collectively.
1.0
VB6 VBForm parse errors - In testing RD against real-world VB6 forms, I'm finding additional parse failures with form headers. As I find them, I'll update this issue so they can be tackled collectively.
process
vbform parse errors in testing rd against real world forms i m finding additional parse failures with form headers as i find them i ll update this issue so they can be tackled collectively
1
100,813
4,103,702,160
IssuesEvent
2016-06-04 21:25:49
Bernie-2016/ground-control
https://api.github.com/repos/Bernie-2016/ground-control
closed
Fix google analytics loading error
newbie-friendly priority-low status-needs-discussion
`Failed to load resource: the server responded with a status of 404 () -- https://www.googletagmanager.com/gtm.js?id=GTM-WZL6ZL`
1.0
Fix google analytics loading error - `Failed to load resource: the server responded with a status of 404 () -- https://www.googletagmanager.com/gtm.js?id=GTM-WZL6ZL`
non_process
fix google analytics loading error failed to load resource the server responded with a status of
0
273,516
23,760,654,747
IssuesEvent
2022-09-01 08:35:52
apache/pulsar
https://api.github.com/repos/apache/pulsar
closed
Flaky-test: PulsarSinksTest.testKinesis
component/test flaky-tests
### Search before asking - [X] I searched in the [issues](https://github.com/apache/pulsar/issues) and found nothing similar. ### Example failure https://github.com/apache/pulsar/runs/8111865921?check_suite_focus=true ### Exception stacktrace ``` Error: Tests run: 18, Failures: 1, Errors: 0, Skipped: 10, Time elapsed: 895.805 s <<< FAILURE! - in TestSuite Error: testKinesis(org.apache.pulsar.tests.integration.io.sinks.PulsarSinksTest) Time elapsed: 38.973 s <<< FAILURE! java.util.concurrent.ExecutionException: software.amazon.awssdk.core.exception.SdkClientException: Unable to parse date : 1.661945619138E9 at java.base/java.util.concurrent.CompletableFuture.reportGet(CompletableFuture.java:396) at java.base/java.util.concurrent.CompletableFuture.get(CompletableFuture.java:2073) at org.apache.pulsar.tests.integration.io.sinks.KinesisSinkTester.addMoreRecords(KinesisSinkTester.java:237) at org.apache.pulsar.tests.integration.io.sinks.KinesisSinkTester.internalValidateSinkResult(KinesisSinkTester.java:212) at org.apache.pulsar.tests.integration.io.sinks.KinesisSinkTester.lambda$validateSinkResult$1(KinesisSinkTester.java:188) at org.awaitility.core.AssertionCondition.lambda$new$0(AssertionCondition.java:53) at org.awaitility.core.ConditionAwaiter$ConditionPoller.call(ConditionAwaiter.java:248) at org.awaitility.core.ConditionAwaiter$ConditionPoller.call(ConditionAwaiter.java:235) at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264) at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136) at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635) at java.base/java.lang.Thread.run(Thread.java:833) Caused by: software.amazon.awssdk.core.exception.SdkClientException: Unable to parse date : 1.661945619138E9 at software.amazon.awssdk.core.exception.SdkClientException$BuilderImpl.build(SdkClientException.java:97) at 
software.amazon.awssdk.protocols.core.StringToInstant.lambda$safeParseDate$0(StringToInstant.java:77) at software.amazon.awssdk.protocols.core.StringToInstant.convert(StringToInstant.java:56) at software.amazon.awssdk.protocols.core.StringToInstant.convert(StringToInstant.java:32) at software.amazon.awssdk.protocols.json.internal.unmarshall.JsonProtocolUnmarshaller$SimpleTypeJsonUnmarshaller.unmarshall(JsonProtocolUnmarshaller.java:160) at software.amazon.awssdk.protocols.json.internal.unmarshall.JsonProtocolUnmarshaller.unmarshallStructured(JsonProtocolUnmarshaller.java:210) at software.amazon.awssdk.protocols.json.internal.unmarshall.JsonProtocolUnmarshaller.unmarshallStructured(JsonProtocolUnmarshaller.java:114) at software.amazon.awssdk.protocols.json.internal.unmarshall.JsonProtocolUnmarshaller.lambda$unmarshallList$2(JsonProtocolUnmarshaller.java:143) at java.base/java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:197) at java.base/java.util.ArrayList$ArrayListSpliterator.forEachRemaining(ArrayList.java:1625) at java.base/java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:509) at java.base/java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:499) at java.base/java.util.stream.ReduceOps$ReduceOp.evaluateSequential(ReduceOps.java:921) at java.base/java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234) at java.base/java.util.stream.ReferencePipeline.collect(ReferencePipeline.java:682) at software.amazon.awssdk.protocols.json.internal.unmarshall.JsonProtocolUnmarshaller.unmarshallList(JsonProtocolUnmarshaller.java:145) at software.amazon.awssdk.protocols.json.internal.unmarshall.JsonProtocolUnmarshaller.unmarshallStructured(JsonProtocolUnmarshaller.java:210) at software.amazon.awssdk.protocols.json.internal.unmarshall.JsonProtocolUnmarshaller.unmarshall(JsonProtocolUnmarshaller.java:197) at 
software.amazon.awssdk.protocols.json.internal.unmarshall.JsonProtocolUnmarshaller.unmarshall(JsonProtocolUnmarshaller.java:168) at software.amazon.awssdk.protocols.json.internal.unmarshall.JsonResponseHandler.handle(JsonResponseHandler.java:79) at software.amazon.awssdk.protocols.json.internal.unmarshall.JsonResponseHandler.handle(JsonResponseHandler.java:36) at software.amazon.awssdk.protocols.json.internal.unmarshall.AwsJsonResponseHandler.handle(AwsJsonResponseHandler.java:43) at software.amazon.awssdk.core.internal.handler.BaseClientHandler.lambda$resultTransformationResponseHandler$5(BaseClientHandler.java:232) at software.amazon.awssdk.core.internal.http.async.AsyncResponseHandler.lambda$prepare$0(AsyncResponseHandler.java:88) at java.base/java.util.concurrent.CompletableFuture$UniCompose.tryFire(CompletableFuture.java:1150) at java.base/java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:510) at java.base/java.util.concurrent.CompletableFuture.complete(CompletableFuture.java:2147) at software.amazon.awssdk.core.internal.http.async.AsyncResponseHandler$BaosSubscriber.onComplete(AsyncResponseHandler.java:129) at software.amazon.awssdk.http.nio.netty.internal.ResponseHandler.runAndLogError(ResponseHandler.java:171) at software.amazon.awssdk.http.nio.netty.internal.ResponseHandler.access$500(ResponseHandler.java:68) at software.amazon.awssdk.http.nio.netty.internal.ResponseHandler$PublisherAdapter$1.onComplete(ResponseHandler.java:287) at com.typesafe.netty.HandlerPublisher.complete(HandlerPublisher.java:408) at com.typesafe.netty.HandlerPublisher.handlerRemoved(HandlerPublisher.java:395) at io.netty.channel.AbstractChannelHandlerContext.callHandlerRemoved(AbstractChannelHandlerContext.java:946) at io.netty.channel.DefaultChannelPipeline.callHandlerRemoved0(DefaultChannelPipeline.java:637) at io.netty.channel.DefaultChannelPipeline.remove(DefaultChannelPipeline.java:477) at 
io.netty.channel.DefaultChannelPipeline.remove(DefaultChannelPipeline.java:423) at com.typesafe.netty.http.HttpStreamsHandler.removeHandlerIfActive(HttpStreamsHandler.java:328) at com.typesafe.netty.http.HttpStreamsHandler.handleReadHttpContent(HttpStreamsHandler.java:189) at com.typesafe.netty.http.HttpStreamsHandler.channelRead(HttpStreamsHandler.java:165) at com.typesafe.netty.http.HttpStreamsClientHandler.channelRead(HttpStreamsClientHandler.java:148) at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357) at software.amazon.awssdk.http.nio.netty.internal.LastHttpContentHandler.channelRead(LastHttpContentHandler.java:43) at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357) at software.amazon.awssdk.http.nio.netty.internal.http2.Http2ToHttpInboundAdapter.onDataRead(Http2ToHttpInboundAdapter.java:66) at software.amazon.awssdk.http.nio.netty.internal.http2.Http2ToHttpInboundAdapter.channelRead0(Http2ToHttpInboundAdapter.java:44) at software.amazon.awssdk.http.nio.netty.internal.http2.Http2ToHttpInboundAdapter.channelRead0(Http2ToHttpInboundAdapter.java:38) at io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:99) at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) at 
io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357) at io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357) at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at io.netty.handler.codec.http2.AbstractHttp2StreamChannel$Http2ChannelUnsafe.doRead0(AbstractHttp2StreamChannel.java:901) at io.netty.handler.codec.http2.AbstractHttp2StreamChannel.fireChildRead(AbstractHttp2StreamChannel.java:555) at io.netty.handler.codec.http2.Http2MultiplexHandler.channelRead(Http2MultiplexHandler.java:180) at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357) at io.netty.handler.codec.http2.Http2FrameCodec.onHttp2Frame(Http2FrameCodec.java:707) at io.netty.handler.codec.http2.Http2FrameCodec$FrameListener.onDataRead(Http2FrameCodec.java:646) at io.netty.handler.codec.http2.Http2FrameListenerDecorator.onDataRead(Http2FrameListenerDecorator.java:36) at io.netty.handler.codec.http2.Http2EmptyDataFrameListener.onDataRead(Http2EmptyDataFrameListener.java:49) at 
io.netty.handler.codec.http2.DefaultHttp2ConnectionDecoder$FrameReadListener.onDataRead(DefaultHttp2ConnectionDecoder.java:307) at io.netty.handler.codec.http2.Http2InboundFrameLogger$1.onDataRead(Http2InboundFrameLogger.java:48) at io.netty.handler.codec.http2.DefaultHttp2FrameReader.readDataFrame(DefaultHttp2FrameReader.java:415) at io.netty.handler.codec.http2.DefaultHttp2FrameReader.processPayloadState(DefaultHttp2FrameReader.java:250) at io.netty.handler.codec.http2.DefaultHttp2FrameReader.readFrame(DefaultHttp2FrameReader.java:159) at io.netty.handler.codec.http2.Http2InboundFrameLogger.readFrame(Http2InboundFrameLogger.java:41) at io.netty.handler.codec.http2.DefaultHttp2ConnectionDecoder.decodeFrame(DefaultHttp2ConnectionDecoder.java:173) at io.netty.handler.codec.http2.DecoratingHttp2ConnectionDecoder.decodeFrame(DecoratingHttp2ConnectionDecoder.java:63) at io.netty.handler.codec.http2.Http2ConnectionHandler$FrameDecoder.decode(Http2ConnectionHandler.java:378) at io.netty.handler.codec.http2.Http2ConnectionHandler.decode(Http2ConnectionHandler.java:438) at io.netty.handler.codec.ByteToMessageDecoder.decodeRemovalReentryProtection(ByteToMessageDecoder.java:510) at io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:449) at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:279) at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357) at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) at 
io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:722) at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:658) at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:584) at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:496) at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:995) at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) ... 1 more Caused by: java.lang.NumberFormatException: For input string: "1.661945619138E9" at java.base/java.lang.NumberFormatException.forInputString(NumberFormatException.java:67) at java.base/java.lang.Long.parseLong(Long.java:711) at java.base/java.lang.Long.parseLong(Long.java:836) at software.amazon.awssdk.utils.DateUtils.parseUnixTimestampMillisInstant(DateUtils.java:146) at software.amazon.awssdk.protocols.core.StringToInstant.lambda$safeParseDate$0(StringToInstant.java:72) ... 99 more ``` ### Are you willing to submit a PR? - [ ] I'm willing to submit a PR!
2.0
Flaky-test: PulsarSinksTest.testKinesis - ### Search before asking - [X] I searched in the [issues](https://github.com/apache/pulsar/issues) and found nothing similar. ### Example failure https://github.com/apache/pulsar/runs/8111865921?check_suite_focus=true ### Exception stacktrace ``` Error: Tests run: 18, Failures: 1, Errors: 0, Skipped: 10, Time elapsed: 895.805 s <<< FAILURE! - in TestSuite Error: testKinesis(org.apache.pulsar.tests.integration.io.sinks.PulsarSinksTest) Time elapsed: 38.973 s <<< FAILURE! java.util.concurrent.ExecutionException: software.amazon.awssdk.core.exception.SdkClientException: Unable to parse date : 1.661945619138E9 at java.base/java.util.concurrent.CompletableFuture.reportGet(CompletableFuture.java:396) at java.base/java.util.concurrent.CompletableFuture.get(CompletableFuture.java:2073) at org.apache.pulsar.tests.integration.io.sinks.KinesisSinkTester.addMoreRecords(KinesisSinkTester.java:237) at org.apache.pulsar.tests.integration.io.sinks.KinesisSinkTester.internalValidateSinkResult(KinesisSinkTester.java:212) at org.apache.pulsar.tests.integration.io.sinks.KinesisSinkTester.lambda$validateSinkResult$1(KinesisSinkTester.java:188) at org.awaitility.core.AssertionCondition.lambda$new$0(AssertionCondition.java:53) at org.awaitility.core.ConditionAwaiter$ConditionPoller.call(ConditionAwaiter.java:248) at org.awaitility.core.ConditionAwaiter$ConditionPoller.call(ConditionAwaiter.java:235) at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264) at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136) at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635) at java.base/java.lang.Thread.run(Thread.java:833) Caused by: software.amazon.awssdk.core.exception.SdkClientException: Unable to parse date : 1.661945619138E9 at software.amazon.awssdk.core.exception.SdkClientException$BuilderImpl.build(SdkClientException.java:97) at 
software.amazon.awssdk.protocols.core.StringToInstant.lambda$safeParseDate$0(StringToInstant.java:77) at software.amazon.awssdk.protocols.core.StringToInstant.convert(StringToInstant.java:56) at software.amazon.awssdk.protocols.core.StringToInstant.convert(StringToInstant.java:32) at software.amazon.awssdk.protocols.json.internal.unmarshall.JsonProtocolUnmarshaller$SimpleTypeJsonUnmarshaller.unmarshall(JsonProtocolUnmarshaller.java:160) at software.amazon.awssdk.protocols.json.internal.unmarshall.JsonProtocolUnmarshaller.unmarshallStructured(JsonProtocolUnmarshaller.java:210) at software.amazon.awssdk.protocols.json.internal.unmarshall.JsonProtocolUnmarshaller.unmarshallStructured(JsonProtocolUnmarshaller.java:114) at software.amazon.awssdk.protocols.json.internal.unmarshall.JsonProtocolUnmarshaller.lambda$unmarshallList$2(JsonProtocolUnmarshaller.java:143) at java.base/java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:197) at java.base/java.util.ArrayList$ArrayListSpliterator.forEachRemaining(ArrayList.java:1625) at java.base/java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:509) at java.base/java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:499) at java.base/java.util.stream.ReduceOps$ReduceOp.evaluateSequential(ReduceOps.java:921) at java.base/java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234) at java.base/java.util.stream.ReferencePipeline.collect(ReferencePipeline.java:682) at software.amazon.awssdk.protocols.json.internal.unmarshall.JsonProtocolUnmarshaller.unmarshallList(JsonProtocolUnmarshaller.java:145) at software.amazon.awssdk.protocols.json.internal.unmarshall.JsonProtocolUnmarshaller.unmarshallStructured(JsonProtocolUnmarshaller.java:210) at software.amazon.awssdk.protocols.json.internal.unmarshall.JsonProtocolUnmarshaller.unmarshall(JsonProtocolUnmarshaller.java:197) at 
software.amazon.awssdk.protocols.json.internal.unmarshall.JsonProtocolUnmarshaller.unmarshall(JsonProtocolUnmarshaller.java:168) at software.amazon.awssdk.protocols.json.internal.unmarshall.JsonResponseHandler.handle(JsonResponseHandler.java:79) at software.amazon.awssdk.protocols.json.internal.unmarshall.JsonResponseHandler.handle(JsonResponseHandler.java:36) at software.amazon.awssdk.protocols.json.internal.unmarshall.AwsJsonResponseHandler.handle(AwsJsonResponseHandler.java:43) at software.amazon.awssdk.core.internal.handler.BaseClientHandler.lambda$resultTransformationResponseHandler$5(BaseClientHandler.java:232) at software.amazon.awssdk.core.internal.http.async.AsyncResponseHandler.lambda$prepare$0(AsyncResponseHandler.java:88) at java.base/java.util.concurrent.CompletableFuture$UniCompose.tryFire(CompletableFuture.java:1150) at java.base/java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:510) at java.base/java.util.concurrent.CompletableFuture.complete(CompletableFuture.java:2147) at software.amazon.awssdk.core.internal.http.async.AsyncResponseHandler$BaosSubscriber.onComplete(AsyncResponseHandler.java:129) at software.amazon.awssdk.http.nio.netty.internal.ResponseHandler.runAndLogError(ResponseHandler.java:171) at software.amazon.awssdk.http.nio.netty.internal.ResponseHandler.access$500(ResponseHandler.java:68) at software.amazon.awssdk.http.nio.netty.internal.ResponseHandler$PublisherAdapter$1.onComplete(ResponseHandler.java:287) at com.typesafe.netty.HandlerPublisher.complete(HandlerPublisher.java:408) at com.typesafe.netty.HandlerPublisher.handlerRemoved(HandlerPublisher.java:395) at io.netty.channel.AbstractChannelHandlerContext.callHandlerRemoved(AbstractChannelHandlerContext.java:946) at io.netty.channel.DefaultChannelPipeline.callHandlerRemoved0(DefaultChannelPipeline.java:637) at io.netty.channel.DefaultChannelPipeline.remove(DefaultChannelPipeline.java:477) at 
io.netty.channel.DefaultChannelPipeline.remove(DefaultChannelPipeline.java:423) at com.typesafe.netty.http.HttpStreamsHandler.removeHandlerIfActive(HttpStreamsHandler.java:328) at com.typesafe.netty.http.HttpStreamsHandler.handleReadHttpContent(HttpStreamsHandler.java:189) at com.typesafe.netty.http.HttpStreamsHandler.channelRead(HttpStreamsHandler.java:165) at com.typesafe.netty.http.HttpStreamsClientHandler.channelRead(HttpStreamsClientHandler.java:148) at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357) at software.amazon.awssdk.http.nio.netty.internal.LastHttpContentHandler.channelRead(LastHttpContentHandler.java:43) at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357) at software.amazon.awssdk.http.nio.netty.internal.http2.Http2ToHttpInboundAdapter.onDataRead(Http2ToHttpInboundAdapter.java:66) at software.amazon.awssdk.http.nio.netty.internal.http2.Http2ToHttpInboundAdapter.channelRead0(Http2ToHttpInboundAdapter.java:44) at software.amazon.awssdk.http.nio.netty.internal.http2.Http2ToHttpInboundAdapter.channelRead0(Http2ToHttpInboundAdapter.java:38) at io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:99) at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) at 
io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357) at io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286) at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357) at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at io.netty.handler.codec.http2.AbstractHttp2StreamChannel$Http2ChannelUnsafe.doRead0(AbstractHttp2StreamChannel.java:901) at io.netty.handler.codec.http2.AbstractHttp2StreamChannel.fireChildRead(AbstractHttp2StreamChannel.java:555) at io.netty.handler.codec.http2.Http2MultiplexHandler.channelRead(Http2MultiplexHandler.java:180) at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357) at io.netty.handler.codec.http2.Http2FrameCodec.onHttp2Frame(Http2FrameCodec.java:707) at io.netty.handler.codec.http2.Http2FrameCodec$FrameListener.onDataRead(Http2FrameCodec.java:646) at io.netty.handler.codec.http2.Http2FrameListenerDecorator.onDataRead(Http2FrameListenerDecorator.java:36) at io.netty.handler.codec.http2.Http2EmptyDataFrameListener.onDataRead(Http2EmptyDataFrameListener.java:49) at 
io.netty.handler.codec.http2.DefaultHttp2ConnectionDecoder$FrameReadListener.onDataRead(DefaultHttp2ConnectionDecoder.java:307) at io.netty.handler.codec.http2.Http2InboundFrameLogger$1.onDataRead(Http2InboundFrameLogger.java:48) at io.netty.handler.codec.http2.DefaultHttp2FrameReader.readDataFrame(DefaultHttp2FrameReader.java:415) at io.netty.handler.codec.http2.DefaultHttp2FrameReader.processPayloadState(DefaultHttp2FrameReader.java:250) at io.netty.handler.codec.http2.DefaultHttp2FrameReader.readFrame(DefaultHttp2FrameReader.java:159) at io.netty.handler.codec.http2.Http2InboundFrameLogger.readFrame(Http2InboundFrameLogger.java:41) at io.netty.handler.codec.http2.DefaultHttp2ConnectionDecoder.decodeFrame(DefaultHttp2ConnectionDecoder.java:173) at io.netty.handler.codec.http2.DecoratingHttp2ConnectionDecoder.decodeFrame(DecoratingHttp2ConnectionDecoder.java:63) at io.netty.handler.codec.http2.Http2ConnectionHandler$FrameDecoder.decode(Http2ConnectionHandler.java:378) at io.netty.handler.codec.http2.Http2ConnectionHandler.decode(Http2ConnectionHandler.java:438) at io.netty.handler.codec.ByteToMessageDecoder.decodeRemovalReentryProtection(ByteToMessageDecoder.java:510) at io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:449) at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:279) at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357) at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) at 
io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:722) at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:658) at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:584) at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:496) at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:995) at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) ... 1 more Caused by: java.lang.NumberFormatException: For input string: "1.661945619138E9" at java.base/java.lang.NumberFormatException.forInputString(NumberFormatException.java:67) at java.base/java.lang.Long.parseLong(Long.java:711) at java.base/java.lang.Long.parseLong(Long.java:836) at software.amazon.awssdk.utils.DateUtils.parseUnixTimestampMillisInstant(DateUtils.java:146) at software.amazon.awssdk.protocols.core.StringToInstant.lambda$safeParseDate$0(StringToInstant.java:72) ... 99 more ``` ### Are you willing to submit a PR? - [ ] I'm willing to submit a PR!
non_process
flaky test pulsarsinkstest testkinesis search before asking i searched in the and found nothing similar example failure exception stacktrace error tests run failures errors skipped time elapsed s failure in testsuite error testkinesis org apache pulsar tests integration io sinks pulsarsinkstest time elapsed s failure java util concurrent executionexception software amazon awssdk core exception sdkclientexception unable to parse date at java base java util concurrent completablefuture reportget completablefuture java at java base java util concurrent completablefuture get completablefuture java at org apache pulsar tests integration io sinks kinesissinktester addmorerecords kinesissinktester java at org apache pulsar tests integration io sinks kinesissinktester internalvalidatesinkresult kinesissinktester java at org apache pulsar tests integration io sinks kinesissinktester lambda validatesinkresult kinesissinktester java at org awaitility core assertioncondition lambda new assertioncondition java at org awaitility core conditionawaiter conditionpoller call conditionawaiter java at org awaitility core conditionawaiter conditionpoller call conditionawaiter java at java base java util concurrent futuretask run futuretask java at java base java util concurrent threadpoolexecutor runworker threadpoolexecutor java at java base java util concurrent threadpoolexecutor worker run threadpoolexecutor java at java base java lang thread run thread java caused by software amazon awssdk core exception sdkclientexception unable to parse date at software amazon awssdk core exception sdkclientexception builderimpl build sdkclientexception java at software amazon awssdk protocols core stringtoinstant lambda safeparsedate stringtoinstant java at software amazon awssdk protocols core stringtoinstant convert stringtoinstant java at software amazon awssdk protocols core stringtoinstant convert stringtoinstant java at software amazon awssdk protocols json internal unmarshall 
jsonprotocolunmarshaller simpletypejsonunmarshaller unmarshall jsonprotocolunmarshaller java at software amazon awssdk protocols json internal unmarshall jsonprotocolunmarshaller unmarshallstructured jsonprotocolunmarshaller java at software amazon awssdk protocols json internal unmarshall jsonprotocolunmarshaller unmarshallstructured jsonprotocolunmarshaller java at software amazon awssdk protocols json internal unmarshall jsonprotocolunmarshaller lambda unmarshalllist jsonprotocolunmarshaller java at java base java util stream referencepipeline accept referencepipeline java at java base java util arraylist arraylistspliterator foreachremaining arraylist java at java base java util stream abstractpipeline copyinto abstractpipeline java at java base java util stream abstractpipeline wrapandcopyinto abstractpipeline java at java base java util stream reduceops reduceop evaluatesequential reduceops java at java base java util stream abstractpipeline evaluate abstractpipeline java at java base java util stream referencepipeline collect referencepipeline java at software amazon awssdk protocols json internal unmarshall jsonprotocolunmarshaller unmarshalllist jsonprotocolunmarshaller java at software amazon awssdk protocols json internal unmarshall jsonprotocolunmarshaller unmarshallstructured jsonprotocolunmarshaller java at software amazon awssdk protocols json internal unmarshall jsonprotocolunmarshaller unmarshall jsonprotocolunmarshaller java at software amazon awssdk protocols json internal unmarshall jsonprotocolunmarshaller unmarshall jsonprotocolunmarshaller java at software amazon awssdk protocols json internal unmarshall jsonresponsehandler handle jsonresponsehandler java at software amazon awssdk protocols json internal unmarshall jsonresponsehandler handle jsonresponsehandler java at software amazon awssdk protocols json internal unmarshall awsjsonresponsehandler handle awsjsonresponsehandler java at software amazon awssdk core internal handler 
baseclienthandler lambda resulttransformationresponsehandler baseclienthandler java at software amazon awssdk core internal http async asyncresponsehandler lambda prepare asyncresponsehandler java at java base java util concurrent completablefuture unicompose tryfire completablefuture java at java base java util concurrent completablefuture postcomplete completablefuture java at java base java util concurrent completablefuture complete completablefuture java at software amazon awssdk core internal http async asyncresponsehandler baossubscriber oncomplete asyncresponsehandler java at software amazon awssdk http nio netty internal responsehandler runandlogerror responsehandler java at software amazon awssdk http nio netty internal responsehandler access responsehandler java at software amazon awssdk http nio netty internal responsehandler publisheradapter oncomplete responsehandler java at com typesafe netty handlerpublisher complete handlerpublisher java at com typesafe netty handlerpublisher handlerremoved handlerpublisher java at io netty channel abstractchannelhandlercontext callhandlerremoved abstractchannelhandlercontext java at io netty channel defaultchannelpipeline defaultchannelpipeline java at io netty channel defaultchannelpipeline remove defaultchannelpipeline java at io netty channel defaultchannelpipeline remove defaultchannelpipeline java at com typesafe netty http httpstreamshandler removehandlerifactive httpstreamshandler java at com typesafe netty http httpstreamshandler handlereadhttpcontent httpstreamshandler java at com typesafe netty http httpstreamshandler channelread httpstreamshandler java at com typesafe netty http httpstreamsclienthandler channelread httpstreamsclienthandler java at io netty channel abstractchannelhandlercontext invokechannelread abstractchannelhandlercontext java at io netty channel abstractchannelhandlercontext invokechannelread abstractchannelhandlercontext java at io netty channel abstractchannelhandlercontext 
firechannelread abstractchannelhandlercontext java at software amazon awssdk http nio netty internal lasthttpcontenthandler channelread lasthttpcontenthandler java at io netty channel abstractchannelhandlercontext invokechannelread abstractchannelhandlercontext java at io netty channel abstractchannelhandlercontext invokechannelread abstractchannelhandlercontext java at io netty channel abstractchannelhandlercontext firechannelread abstractchannelhandlercontext java at software amazon awssdk http nio netty internal ondataread java at software amazon awssdk http nio netty internal java at software amazon awssdk http nio netty internal java at io netty channel simplechannelinboundhandler channelread simplechannelinboundhandler java at io netty channel abstractchannelhandlercontext invokechannelread abstractchannelhandlercontext java at io netty channel abstractchannelhandlercontext invokechannelread abstractchannelhandlercontext java at io netty channel abstractchannelhandlercontext firechannelread abstractchannelhandlercontext java at io netty handler timeout idlestatehandler channelread idlestatehandler java at io netty channel abstractchannelhandlercontext invokechannelread abstractchannelhandlercontext java at io netty channel abstractchannelhandlercontext invokechannelread abstractchannelhandlercontext java at io netty channel abstractchannelhandlercontext firechannelread abstractchannelhandlercontext java at io netty channel defaultchannelpipeline headcontext channelread defaultchannelpipeline java at io netty channel abstractchannelhandlercontext invokechannelread abstractchannelhandlercontext java at io netty channel abstractchannelhandlercontext invokechannelread abstractchannelhandlercontext java at io netty channel defaultchannelpipeline firechannelread defaultchannelpipeline java at io netty handler codec java at io netty handler codec firechildread java at io netty handler codec channelread java at io netty channel abstractchannelhandlercontext 
invokechannelread abstractchannelhandlercontext java at io netty channel abstractchannelhandlercontext invokechannelread abstractchannelhandlercontext java at io netty channel abstractchannelhandlercontext firechannelread abstractchannelhandlercontext java at io netty handler codec java at io netty handler codec framelistener ondataread java at io netty handler codec ondataread java at io netty handler codec ondataread java at io netty handler codec framereadlistener ondataread java at io netty handler codec ondataread java at io netty handler codec readdataframe java at io netty handler codec processpayloadstate java at io netty handler codec readframe java at io netty handler codec readframe java at io netty handler codec decodeframe java at io netty handler codec decodeframe java at io netty handler codec framedecoder decode java at io netty handler codec decode java at io netty handler codec bytetomessagedecoder decoderemovalreentryprotection bytetomessagedecoder java at io netty handler codec bytetomessagedecoder calldecode bytetomessagedecoder java at io netty handler codec bytetomessagedecoder channelread bytetomessagedecoder java at io netty channel abstractchannelhandlercontext invokechannelread abstractchannelhandlercontext java at io netty channel abstractchannelhandlercontext invokechannelread abstractchannelhandlercontext java at io netty channel abstractchannelhandlercontext firechannelread abstractchannelhandlercontext java at io netty channel defaultchannelpipeline headcontext channelread defaultchannelpipeline java at io netty channel abstractchannelhandlercontext invokechannelread abstractchannelhandlercontext java at io netty channel abstractchannelhandlercontext invokechannelread abstractchannelhandlercontext java at io netty channel defaultchannelpipeline firechannelread defaultchannelpipeline java at io netty channel nio abstractniobytechannel niobyteunsafe read abstractniobytechannel java at io netty channel nio nioeventloop 
processselectedkey nioeventloop java at io netty channel nio nioeventloop processselectedkeysoptimized nioeventloop java at io netty channel nio nioeventloop processselectedkeys nioeventloop java at io netty channel nio nioeventloop run nioeventloop java at io netty util concurrent singlethreadeventexecutor run singlethreadeventexecutor java at io netty util internal threadexecutormap run threadexecutormap java more caused by java lang numberformatexception for input string at java base java lang numberformatexception forinputstring numberformatexception java at java base java lang long parselong long java at java base java lang long parselong long java at software amazon awssdk utils dateutils parseunixtimestampmillisinstant dateutils java at software amazon awssdk protocols core stringtoinstant lambda safeparsedate stringtoinstant java more are you willing to submit a pr i m willing to submit a pr
0
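The root cause in the Kinesis record above is that the response carried the timestamp as a scientific-notation value ("1.661945619138E9"), which the AWS SDK's `DateUtils.parseUnixTimestampMillisInstant` hands straight to `Long.parseLong`. A minimal sketch (Python here, purely for illustration) of why the strict integer parse rejects the value while a float parse recovers a plausible date — reading the value as epoch seconds is an assumption, since the SDK method expected milliseconds:

```python
from datetime import datetime, timezone

raw = "1.661945619138E9"  # scientific-notation value from the stack trace

# A strict integer parse (analogous to java.lang.Long.parseLong) rejects it:
strict_parse_failed = False
try:
    int(raw)
except ValueError:
    strict_parse_failed = True

# Parsing as a float first recovers a number; read as epoch *seconds*
# (an assumption) it lands in late August 2022, which matches when the
# flaky run occurred.
ts = datetime.fromtimestamp(float(raw), tz=timezone.utc)
print(strict_parse_failed, ts.year, ts.month)
```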
1,377
3,933,825,577
IssuesEvent
2016-04-25 20:28:26
moxie-leean/wp-plugin
https://api.github.com/repos/moxie-leean/wp-plugin
closed
Link wp-plugin with the generators
process
In order to generate a plugin dir with the name of the project and updated namespaces as well.
1.0
Link wp-plugin with the generators - In order to generate a plugin dir with the name of the project and updated namespaces as well.
process
link wp plugin with the generators in order to generate a plugin dir with the name of the project and updated namespaces as well
1
17,868
23,812,912,899
IssuesEvent
2022-09-05 01:12:39
lynnandtonic/nestflix.fun
https://api.github.com/repos/lynnandtonic/nestflix.fun
closed
Add American Dreamz
suggested title in process
Please add as much of the following info as you can: Title: American Dreamz Type (film/tv show): reality TV show Film or show in which it appears: American Dreamz (movie) Is the parent film/show streaming anywhere? Don't know. About when in the parent film/show does it appear? All throughout. Actual footage of the film/show can be seen (yes/no)? Yes. https://m.imdb.com/title/tt0465142/
1.0
Add American Dreamz - Please add as much of the following info as you can: Title: American Dreamz Type (film/tv show): reality TV show Film or show in which it appears: American Dreamz (movie) Is the parent film/show streaming anywhere? Don't know. About when in the parent film/show does it appear? All throughout. Actual footage of the film/show can be seen (yes/no)? Yes. https://m.imdb.com/title/tt0465142/
process
add american dreamz please add as much of the following info as you can title american dreamz type film tv show reality tv show film or show in which it appears american dreamz movie is the parent film show streaming anywhere don t know about when in the parent film show does it appear all throughout actual footage of the film show can be seen yes no yes
1
791,500
27,865,529,561
IssuesEvent
2023-03-21 10:00:04
pastas/pastas
https://api.github.com/repos/pastas/pastas
closed
[DEVELOPMENT] add tests for freq != "D"
development priority 1
I talked with @rubencalje and we think it would be a good idea to add some tests for frequencies other than daily to our test suite. Also the relationship between the obtained parameters when solving on different frequencies might be interesting to explore in a notebook. (We discussed this because I was working on models with freq="14D" but I was not entirely sure this always worked as well as I'd hoped. I've now decided to solve on a daily basis and getting a sample every 14 days from my oseries.)
1.0
[DEVELOPMENT] add tests for freq != "D" - I talked with @rubencalje and we think it would be a good idea to add some tests for frequencies other than daily to our test suite. Also the relationship between the obtained parameters when solving on different frequencies might be interesting to explore in a notebook. (We discussed this because I was working on models with freq="14D" but I was not entirely sure this always worked as well as I'd hoped. I've now decided to solve on a daily basis and getting a sample every 14 days from my oseries.)
non_process
add tests for freq d i talked with rubencalje and we think it would be a good idea to add some tests for frequencies other than daily to our test suite also the relationship between the obtained parameters when solving on different frequencies might be interesting to explore in a notebook we discussed this because i was working on models with freq but i was not entirely sure this always worked as well as i d hoped i ve now decided to solve on a daily basis and getting a sample every days from my oseries
0
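The workaround described in the pastas record above — solve on a daily basis and take a sample every 14 days from the oseries — can be sketched with pandas; the series and values here are invented for illustration:

```python
import pandas as pd

# Hypothetical daily observation series (56 days of made-up values)
idx = pd.date_range("2023-01-01", periods=56, freq="D")
daily = pd.Series(range(56), index=idx)

# Keep one sample per 14-day window, as described above
every_14d = daily.resample("14D").first()
print(len(every_14d), every_14d.values.tolist())
```

This keeps the model frequency at "D" while thinning the observations, avoiding the uncertainty around solving directly with freq="14D".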
12,777
15,162,840,842
IssuesEvent
2021-02-12 11:14:06
MaowImpl/Optionals
https://api.github.com/repos/MaowImpl/Optionals
closed
Clean processor/transformer code (1.0.0-beta.6)
enhancement good first issue javac processor
With the addition of `@AllOptional`, the code base has become a bit messy. My proposal is that we go ahead and fix a few of the structure issues with the project. One major issue with `@AllOptional` is how it uses a very last-minute method for applying `@Optional` transformations, namely an `override` boolean/int value in the new `OptionalTransformer` class. Preferably, this could be replaced with a flags system so that it's not as connected to the logic of `@AllOptional` and can be extended without too much extra effort in the future.
1.0
Clean processor/transformer code (1.0.0-beta.6) - With the addition of `@AllOptional`, the code base has become a bit messy. My proposal is that we go ahead and fix a few of the structure issues with the project. One major issue with `@AllOptional` is how it uses a very last-minute method for applying `@Optional` transformations, namely an `override` boolean/int value in the new `OptionalTransformer` class. Preferably, this could be replaced with a flags system so that it's not as connected to the logic of `@AllOptional` and can be extended without too much extra effort in the future.
process
clean processor transformer code beta with the addition of alloptional the code base has become a bit messy my proposal is that we go ahead and fix a few of the structure issues with the project one major issue with alloptional is how it uses a very last minute method for applying optional transformations that of which being an override boolean int value in the new optionaltransformer class preferably this could be replaced with a flags system so that it s not as connected to the logic of alloptional and can be extended without too much extra effort in the future
1
22,216
30,768,100,478
IssuesEvent
2023-07-30 14:58:14
km4ack/73Linux
https://api.github.com/repos/km4ack/73Linux
closed
VARA Start False Error Message
in process
I did a fresh build of RPi4 32bit Bullseye. I then installed 73 Linux and starting selecting apps to be installed. After the install I proceeded to test. I had to do a second install attempt of the VARA apps and it work with the second install. That was a noted possibility. When ran the VARA HF and VARA FM, I got an error message "The VARA Modem FAILED to Start", but the modems did load and performed as expected. Upon examining the start-vara-fm and start-vara-hf scripts I found and an issue that had been identified several months ago. The scripts test for modem start using this piece of code PIDVARA=$(ps aux | grep -i box86 | grep -i varafm). The problem is that box86 does not show using PS. Upon testing using .wine in place of box86 things work as expected.
1.0
VARA Start False Error Message - I did a fresh build of RPi4 32bit Bullseye. I then installed 73 Linux and starting selecting apps to be installed. After the install I proceeded to test. I had to do a second install attempt of the VARA apps and it work with the second install. That was a noted possibility. When ran the VARA HF and VARA FM, I got an error message "The VARA Modem FAILED to Start", but the modems did load and performed as expected. Upon examining the start-vara-fm and start-vara-hf scripts I found and an issue that had been identified several months ago. The scripts test for modem start using this piece of code PIDVARA=$(ps aux | grep -i box86 | grep -i varafm). The problem is that box86 does not show using PS. Upon testing using .wine in place of box86 things work as expected.
process
vara start false error message i did a fresh build of bullseye i then installed linux and starting selecting apps to be installed after the install i proceeded to test i had to do a second install attempt of the vara apps and it work with the second install that was a noted possibility when ran the vara hf and vara fm i got an error message the vara modem failed to start but the modems did load and performed as expected upon examining the start vara fm and start vara hf scripts i found and an issue that had been identified several months ago the scripts test for modem start using this piece of code pidvara ps aux grep i grep i varafm the problem is that does not show using ps upon testing using wine in place of things work as expected
1
3,583
6,620,745,676
IssuesEvent
2017-09-21 16:33:01
cptechinc/soft-6-ecomm
https://api.github.com/repos/cptechinc/soft-6-ecomm
closed
Contact Page
enhancement PHP PHP Backend Processwire
https://github.com/cptechinc/soft-6-ecomm/blob/2fdc3e82cf5ee6ba1abedd84a279464b03dda584/site/templates/contact.php Let's get all these values into the processwire so we can update them on the processwire and echo them out on this side.
1.0
Contact Page - https://github.com/cptechinc/soft-6-ecomm/blob/2fdc3e82cf5ee6ba1abedd84a279464b03dda584/site/templates/contact.php Let's get all these values into the processwire so we can update them on the processwire and echo them out on this side.
process
contact page let s get all these values into the processwire so we can update them on the processwire and echo them out on this side
1
86,524
15,755,674,905
IssuesEvent
2021-03-31 02:11:53
attesch/zencart
https://api.github.com/repos/attesch/zencart
opened
CVE-2020-7608 (Medium) detected in yargs-parser-5.0.0.tgz, yargs-parser-11.1.1.tgz
security vulnerability
## CVE-2020-7608 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>yargs-parser-5.0.0.tgz</b>, <b>yargs-parser-11.1.1.tgz</b></p></summary> <p> <details><summary><b>yargs-parser-5.0.0.tgz</b></p></summary> <p>the mighty option parser used by yargs</p> <p>Library home page: <a href="https://registry.npmjs.org/yargs-parser/-/yargs-parser-5.0.0.tgz">https://registry.npmjs.org/yargs-parser/-/yargs-parser-5.0.0.tgz</a></p> <p>Path to dependency file: /zencart/admin/includes/template/javascript/gridstack.js-master/package.json</p> <p>Path to vulnerable library: zencart/admin/includes/template/javascript/gridstack.js-master/node_modules/yargs-parser/package.json</p> <p> Dependency Hierarchy: - grunt-sass-2.1.0.tgz (Root Library) - node-sass-4.12.0.tgz - sass-graph-2.2.4.tgz - yargs-7.1.0.tgz - :x: **yargs-parser-5.0.0.tgz** (Vulnerable Library) </details> <details><summary><b>yargs-parser-11.1.1.tgz</b></p></summary> <p>the mighty option parser used by yargs</p> <p>Library home page: <a href="https://registry.npmjs.org/yargs-parser/-/yargs-parser-11.1.1.tgz">https://registry.npmjs.org/yargs-parser/-/yargs-parser-11.1.1.tgz</a></p> <p>Path to dependency file: /zencart/package.json</p> <p>Path to vulnerable library: zencart/node_modules/yargs-parser/package.json</p> <p> Dependency Hierarchy: - laravel-mix-4.0.16.tgz (Root Library) - yargs-12.0.5.tgz - :x: **yargs-parser-11.1.1.tgz** (Vulnerable Library) </details> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary> <p> yargs-parser could be tricked into adding or modifying properties of Object.prototype using a "__proto__" payload. 
<p>Publish Date: 2020-03-16 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-7608>CVE-2020-7608</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.3</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Local - Attack Complexity: Low - Privileges Required: Low - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: Low - Integrity Impact: Low - Availability Impact: Low </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://github.com/yargs/yargs-parser/commit/63810ca1ae1a24b08293a4d971e70e058c7a41e2">https://github.com/yargs/yargs-parser/commit/63810ca1ae1a24b08293a4d971e70e058c7a41e2</a></p> <p>Release Date: 2020-06-05</p> <p>Fix Resolution: 5.0.1;13.1.2;15.0.1;18.1.1</p> </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
True
CVE-2020-7608 (Medium) detected in yargs-parser-5.0.0.tgz, yargs-parser-11.1.1.tgz - ## CVE-2020-7608 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>yargs-parser-5.0.0.tgz</b>, <b>yargs-parser-11.1.1.tgz</b></p></summary> <p> <details><summary><b>yargs-parser-5.0.0.tgz</b></p></summary> <p>the mighty option parser used by yargs</p> <p>Library home page: <a href="https://registry.npmjs.org/yargs-parser/-/yargs-parser-5.0.0.tgz">https://registry.npmjs.org/yargs-parser/-/yargs-parser-5.0.0.tgz</a></p> <p>Path to dependency file: /zencart/admin/includes/template/javascript/gridstack.js-master/package.json</p> <p>Path to vulnerable library: zencart/admin/includes/template/javascript/gridstack.js-master/node_modules/yargs-parser/package.json</p> <p> Dependency Hierarchy: - grunt-sass-2.1.0.tgz (Root Library) - node-sass-4.12.0.tgz - sass-graph-2.2.4.tgz - yargs-7.1.0.tgz - :x: **yargs-parser-5.0.0.tgz** (Vulnerable Library) </details> <details><summary><b>yargs-parser-11.1.1.tgz</b></p></summary> <p>the mighty option parser used by yargs</p> <p>Library home page: <a href="https://registry.npmjs.org/yargs-parser/-/yargs-parser-11.1.1.tgz">https://registry.npmjs.org/yargs-parser/-/yargs-parser-11.1.1.tgz</a></p> <p>Path to dependency file: /zencart/package.json</p> <p>Path to vulnerable library: zencart/node_modules/yargs-parser/package.json</p> <p> Dependency Hierarchy: - laravel-mix-4.0.16.tgz (Root Library) - yargs-12.0.5.tgz - :x: **yargs-parser-11.1.1.tgz** (Vulnerable Library) </details> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary> <p> yargs-parser could be tricked into adding or modifying properties of Object.prototype using a "__proto__" payload. 
<p>Publish Date: 2020-03-16 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-7608>CVE-2020-7608</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.3</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Local - Attack Complexity: Low - Privileges Required: Low - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: Low - Integrity Impact: Low - Availability Impact: Low </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://github.com/yargs/yargs-parser/commit/63810ca1ae1a24b08293a4d971e70e058c7a41e2">https://github.com/yargs/yargs-parser/commit/63810ca1ae1a24b08293a4d971e70e058c7a41e2</a></p> <p>Release Date: 2020-06-05</p> <p>Fix Resolution: 5.0.1;13.1.2;15.0.1;18.1.1</p> </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
non_process
cve medium detected in yargs parser tgz yargs parser tgz cve medium severity vulnerability vulnerable libraries yargs parser tgz yargs parser tgz yargs parser tgz the mighty option parser used by yargs library home page a href path to dependency file zencart admin includes template javascript gridstack js master package json path to vulnerable library zencart admin includes template javascript gridstack js master node modules yargs parser package json dependency hierarchy grunt sass tgz root library node sass tgz sass graph tgz yargs tgz x yargs parser tgz vulnerable library yargs parser tgz the mighty option parser used by yargs library home page a href path to dependency file zencart package json path to vulnerable library zencart node modules yargs parser package json dependency hierarchy laravel mix tgz root library yargs tgz x yargs parser tgz vulnerable library vulnerability details yargs parser could be tricked into adding or modifying properties of object prototype using a proto payload publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity low privileges required low user interaction none scope unchanged impact metrics confidentiality impact low integrity impact low availability impact low for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with whitesource
0
3,087
6,101,731,591
IssuesEvent
2017-06-20 15:08:32
dita-ot/dita-ot
https://api.github.com/repos/dita-ot/dita-ot
closed
cite with keyref to peer HTML broken in XHTML
bug P2 preprocess/keyref
Using the hierarchy.ditamap sample, I've added this key definition: ``` xml <topicref scope="peer" format="html" keys="citekey" href="../com.ibm.sample/sample.html#targetid" navtitle="Peer HTML file"/> ``` In changingtheoil.xml, I've added these references: ``` xml <p>Testing cite <cite keyref="citekey"/> and xref <xref keyref="citekey"/></p> ``` The xref element resolves correctly into a link to the HTML file, with the proper link text. The cite element generates `<a href="..%5C.html#targetid"><cite class="cite"/></a>` in the XHTML output, along with the following warning: ``` [xslt] C:\DITA-OT1.8.M2\plugins\org.dita.xhtml\xsl\xslhtml\dita2htmlImpl.xsl:4243: Error! java.io.FileNotFoundException: C:\DITA-OT1.8.M2-\temp\temp20131023105911072\tasks\..\..\com.ibm.sample\sample.html (The system cannot find the path specified.) Cause: java.io.FileNotFoundException: C:\DITA-OT1.8.M2\temp\temp20131023105911072\tasks\..\..\com.ibm.sample\sample.html (The system cannot find the path specified.) ``` It looks like the keyref module does not do anything for cite, which makes some sense because cite does not have the scope/format/href attributes. It relies on the dita2html code, which seems to have some bad assumptions in the mode="find-keyref-target" -- it assumes the target is DITA, and assumes that a "." in the href is an extension to a local file.
1.0
cite with keyref to peer HTML broken in XHTML - Using the hierarchy.ditamap sample, I've added this key definition: ``` xml <topicref scope="peer" format="html" keys="citekey" href="../com.ibm.sample/sample.html#targetid" navtitle="Peer HTML file"/> ``` In changingtheoil.xml, I've added these references: ``` xml <p>Testing cite <cite keyref="citekey"/> and xref <xref keyref="citekey"/></p> ``` The xref element resolves correctly into a link to the HTML file, with the proper link text. The cite element generates `<a href="..%5C.html#targetid"><cite class="cite"/></a>` in the XHTML output, along with the following warning: ``` [xslt] C:\DITA-OT1.8.M2\plugins\org.dita.xhtml\xsl\xslhtml\dita2htmlImpl.xsl:4243: Error! java.io.FileNotFoundException: C:\DITA-OT1.8.M2-\temp\temp20131023105911072\tasks\..\..\com.ibm.sample\sample.html (The system cannot find the path specified.) Cause: java.io.FileNotFoundException: C:\DITA-OT1.8.M2\temp\temp20131023105911072\tasks\..\..\com.ibm.sample\sample.html (The system cannot find the path specified.) ``` It looks like the keyref module does not do anything for cite, which makes some sense because cite does not have the scope/format/href attributes. It relies on the dita2html code, which seems to have some bad assumptions in the mode="find-keyref-target" -- it assumes the target is DITA, and assumes that a "." in the href is an extension to a local file.
process
cite with keyref to peer html broken in xhtml using the hierarchy ditamap sample i ve added this key definition xml topicref scope peer format html keys citekey href com ibm sample sample html targetid navtitle peer html file in changingtheoil xml i ve added these references xml testing cite and xref the xref element resolves correctly into a link to the html file with the proper link text the cite element generates in the xhtml output along with the following warning c dita plugins org dita xhtml xsl xslhtml xsl error java io filenotfoundexception c dita temp tasks com ibm sample sample html the system cannot find the path specified cause java io filenotfoundexception c dita temp tasks com ibm sample sample html the system cannot find the path specified it looks like the keyref module does not do anything for cite which makes some sense because cite does not have the scope format href attributes it relies on the code which seems to have some bad assumptions in the mode find keyref target it assumes the target is dita and assumes that a in the href is an extension to a local file
1
1,243
3,779,431,368
IssuesEvent
2016-03-18 08:15:55
sci-visus/visus-issues
https://api.github.com/repos/sci-visus/visus-issues
closed
add support for new Visus Array operations
Feature Request Processing
Binarization and Bitwise operations Automatic Normalization Thresholding Histogram Equilization and Weighting This is not a complete list, but I wanted to start collecting ideas. Essentially many of these are motivated by trying to write scripts that loop over the arrays to perform some operation. As this isn't possible, collective versions of these operations are necessary.
1.0
add support for new Visus Array operations - Binarization and Bitwise operations Automatic Normalization Thresholding Histogram Equilization and Weighting This is not a complete list, but I wanted to start collecting ideas. Essentially many of these are motivated by trying to write scripts that loop over the arrays to perform some operation. As this isn't possible, collective versions of these operations are necessary.
process
add support for new visus array operations binarization and bitwise operations automatic normalization thresholding histogram equilization and weighting this is not a complete list but i wanted to start collecting ideas essentially many of these are motivated by trying to write scripts that loop over the arrays to perform some operation as this isn t possible collective versions of these operations are necessary
1
17,932
23,932,100,179
IssuesEvent
2022-09-10 17:55:48
apache/arrow-rs
https://api.github.com/repos/apache/arrow-rs
closed
Object Store Fails to Compile on Master (quick-xml)
bug development-process
**Describe the bug** <!-- A clear and concise description of what the bug is. --> Master is currently failing to compile as quick-xml has yanked version 0.24.0 **To Reproduce** <!-- Steps to reproduce the behavior: --> Attempt to compile master **Expected behavior** <!-- A clear and concise description of what you expected to happen. --> Master should compile **Additional context** <!-- Add any other context about the problem here. --> I have raised https://github.com/tafia/quick-xml/issues/475 as an upstream fix, yanking a crate without providing a semver compatible fix is a bit of a smell
1.0
Object Store Fails to Compile on Master (quick-xml) - **Describe the bug** <!-- A clear and concise description of what the bug is. --> Master is currently failing to compile as quick-xml has yanked version 0.24.0 **To Reproduce** <!-- Steps to reproduce the behavior: --> Attempt to compile master **Expected behavior** <!-- A clear and concise description of what you expected to happen. --> Master should compile **Additional context** <!-- Add any other context about the problem here. --> I have raised https://github.com/tafia/quick-xml/issues/475 as an upstream fix, yanking a crate without providing a semver compatible fix is a bit of a smell
process
object store fails to compile on master quick xml describe the bug a clear and concise description of what the bug is master is currently failing to compile as quick xml has yanked version to reproduce steps to reproduce the behavior attempt to compile master expected behavior a clear and concise description of what you expected to happen master should compile additional context add any other context about the problem here i have raised as an upstream fix yanking a crate without providing a semver compatible fix is a bit of a smell
1
8,752
27,172,208,080
IssuesEvent
2023-02-17 20:33:15
OneDrive/onedrive-api-docs
https://api.github.com/repos/OneDrive/onedrive-api-docs
closed
"ItemNotFound" when uploading file
type:bug Needs: Attention :wave: automation:Closed
using Graph.NET Nuget package 1.17.0 in my UWP app ### Expected behavior I am able to upload a file to OneDrive (into the apps app folder). ### Actual behavior An exception is thrown (see below) - but only sometimes! I feel like it mostly (or only) happens the first time the app tries to upload something to its (newly created) app folder. It usually (but not always!) succeeds on a second attempt. > Microsoft.Graph.ServiceException: Code: itemNotFoundMessage: Item does not existInner error at Microsoft.Graph.HttpProvider.<SendAsync>d__19.MoveNext() + 0x5ac--- End of stack trace from previous location where exception was thrown --- at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw() + 0x21 at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task) + 0x5c at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task) + 0x44 at System.Runtime.CompilerServices.TaskAwaiter.ValidateEnd(Task) + 0x1c at Microsoft.Graph.BaseRequest.<SendRequestAsync>d__36.MoveNext() + 0x475--- End of stack trace from previous location where exception was thrown --- at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw() + 0x21 at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task) + 0x5c at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task) + 0x44 at System.Runtime.CompilerServices.TaskAwaiter.ValidateEnd(Task) + 0x1c at Microsoft.Graph.BaseRequest.<SendAsync>d__32`1.MoveNext() + 0x12f--- End of stack trace from previous location where exception was thrown --- at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw() + 0x21 at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task) + 0x5c at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task) + 0x44 at System.Runtime.CompilerServices.TaskAwaiter.ValidateEnd(Task) + 0x1c at Diarium.OneDriveHelper.<UploadFile>d__15.MoveNext() + 0x2ad--- End of stack trace from 
previous location where exception was thrown --- at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw() + 0x21 at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task) + 0x5c at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task) + 0x44 at System.Runtime.CompilerServices.TaskAwaiter.ValidateEnd(Task) + 0x1c at System.Runtime.CompilerServices.ConfiguredTaskAwaitable.ConfiguredTaskAwaiter.GetResult() + 0xb at Diarium.OneDriveHelper.<Sync>d__1.MoveNext() + 0x150e ### Steps to reproduce the behavior ``` static async Task UploadFile(GraphServiceClient graphClient, string filePath, MemoryStream stream) { if (stream.Length > 4194304) { var session = await graphClient.Drive.Special.AppRoot.ItemWithPath(filePath).CreateUploadSession().Request().PostAsync(); await new ChunkedUploadProvider(session, graphClient, stream).UploadAsync(); } else { await graphClient.Drive.Special.AppRoot.ItemWithPath(filePath).Content.Request().PutAsync<DriveItem>(stream); } } ``` Issue occured on both the "beta" and "v1.0" endpoint This issue was also raised here: https://github.com/microsoftgraph/msgraph-sdk-dotnet/issues/385
1.0
"ItemNotFound" when uploading file - using Graph.NET Nuget package 1.17.0 in my UWP app ### Expected behavior I am able to upload a file to OneDrive (into the apps app folder). ### Actual behavior An exception is thrown (see below) - but only sometimes! I feel like it mostly (or only) happens the first time the app tries to upload something to its (newly created) app folder. It usually (but not always!) succeeds on a second attempt. > Microsoft.Graph.ServiceException: Code: itemNotFoundMessage: Item does not existInner error at Microsoft.Graph.HttpProvider.<SendAsync>d__19.MoveNext() + 0x5ac--- End of stack trace from previous location where exception was thrown --- at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw() + 0x21 at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task) + 0x5c at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task) + 0x44 at System.Runtime.CompilerServices.TaskAwaiter.ValidateEnd(Task) + 0x1c at Microsoft.Graph.BaseRequest.<SendRequestAsync>d__36.MoveNext() + 0x475--- End of stack trace from previous location where exception was thrown --- at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw() + 0x21 at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task) + 0x5c at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task) + 0x44 at System.Runtime.CompilerServices.TaskAwaiter.ValidateEnd(Task) + 0x1c at Microsoft.Graph.BaseRequest.<SendAsync>d__32`1.MoveNext() + 0x12f--- End of stack trace from previous location where exception was thrown --- at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw() + 0x21 at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task) + 0x5c at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task) + 0x44 at System.Runtime.CompilerServices.TaskAwaiter.ValidateEnd(Task) + 0x1c at Diarium.OneDriveHelper.<UploadFile>d__15.MoveNext() + 
0x2ad--- End of stack trace from previous location where exception was thrown --- at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw() + 0x21 at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task) + 0x5c at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task) + 0x44 at System.Runtime.CompilerServices.TaskAwaiter.ValidateEnd(Task) + 0x1c at System.Runtime.CompilerServices.ConfiguredTaskAwaitable.ConfiguredTaskAwaiter.GetResult() + 0xb at Diarium.OneDriveHelper.<Sync>d__1.MoveNext() + 0x150e ### Steps to reproduce the behavior ``` static async Task UploadFile(GraphServiceClient graphClient, string filePath, MemoryStream stream) { if (stream.Length > 4194304) { var session = await graphClient.Drive.Special.AppRoot.ItemWithPath(filePath).CreateUploadSession().Request().PostAsync(); await new ChunkedUploadProvider(session, graphClient, stream).UploadAsync(); } else { await graphClient.Drive.Special.AppRoot.ItemWithPath(filePath).Content.Request().PutAsync<DriveItem>(stream); } } ``` Issue occured on both the "beta" and "v1.0" endpoint This issue was also raised here: https://github.com/microsoftgraph/msgraph-sdk-dotnet/issues/385
non_process
itemnotfound when uploading file using graph net nuget package in my uwp app expected behavior i am able to upload a file to onedrive into the apps app folder actual behavior an exception is thrown see below but only sometimes i feel like it mostly or only happens the first time the app tries to upload something to its newly created app folder it usually but not always succeeds on a second attempt microsoft graph serviceexception code itemnotfoundmessage item does not existinner error at microsoft graph httpprovider d movenext end of stack trace from previous location where exception was thrown at system runtime exceptionservices exceptiondispatchinfo throw at system runtime compilerservices taskawaiter throwfornonsuccess task at system runtime compilerservices taskawaiter handlenonsuccessanddebuggernotification task at system runtime compilerservices taskawaiter validateend task at microsoft graph baserequest d movenext end of stack trace from previous location where exception was thrown at system runtime exceptionservices exceptiondispatchinfo throw at system runtime compilerservices taskawaiter throwfornonsuccess task at system runtime compilerservices taskawaiter handlenonsuccessanddebuggernotification task at system runtime compilerservices taskawaiter validateend task at microsoft graph baserequest d movenext end of stack trace from previous location where exception was thrown at system runtime exceptionservices exceptiondispatchinfo throw at system runtime compilerservices taskawaiter throwfornonsuccess task at system runtime compilerservices taskawaiter handlenonsuccessanddebuggernotification task at system runtime compilerservices taskawaiter validateend task at diarium onedrivehelper d movenext end of stack trace from previous location where exception was thrown at system runtime exceptionservices exceptiondispatchinfo throw at system runtime compilerservices taskawaiter throwfornonsuccess task at system runtime compilerservices taskawaiter 
handlenonsuccessanddebuggernotification task at system runtime compilerservices taskawaiter validateend task at system runtime compilerservices configuredtaskawaitable configuredtaskawaiter getresult at diarium onedrivehelper d movenext steps to reproduce the behavior static async task uploadfile graphserviceclient graphclient string filepath memorystream stream if stream length var session await graphclient drive special approot itemwithpath filepath createuploadsession request postasync await new chunkeduploadprovider session graphclient stream uploadasync else await graphclient drive special approot itemwithpath filepath content request putasync stream issue occured on both the beta and endpoint this issue was also raised here
0
18,718
24,609,612,020
IssuesEvent
2022-10-14 19:52:10
openxla/stablehlo
https://api.github.com/repos/openxla/stablehlo
opened
Update ODS to reflect the updated sideeffect modelling
Process
### Request description Need to apply the relevant changes once the llvm_version is bumped to https://github.com/llvm/llvm-project/commit/86771d0b65ee13242f89b8dfdf3c66f738eae4e5.
1.0
Update ODS to reflect the updated sideeffect modelling - ### Request description Need to apply the relevant changes once the llvm_version is bumped to https://github.com/llvm/llvm-project/commit/86771d0b65ee13242f89b8dfdf3c66f738eae4e5.
process
update ods to reflect the updated sideeffect modelling request description need to apply the relevant changes once the llvm version is bumped to
1
9,950
12,977,226,855
IssuesEvent
2020-07-21 20:15:05
googleapis/java-cloud-bom
https://api.github.com/repos/googleapis/java-cloud-bom
opened
Add CI check that ensures all included clients are using compatible google-cloud-shared-dependencies versions
type: process
It can start as a non-blocking CI check, but it should block any release from going out as any released version of the google-cloud-bom should converge for these dependencies. If the CI check fails, it should enumerate the clients that need to update their shared-dependencies version.
1.0
Add CI check that ensures all included clients are using compatible google-cloud-shared-dependencies versions - It can start as a non-blocking CI check, but it should block any release from going out as any released version of the google-cloud-bom should converge for these dependencies. If the CI check fails, it should enumerate the clients that need to update their shared-dependencies version.
process
add ci check that ensures all included clients are using compatible google cloud shared dependencies versions it can start as a non blocking ci check but it should block any release from going out as any released version of the google cloud bom should converge for these dependencies if the ci check fails it should enumerate the clients that need to update their shared dependencies version
1
11,686
14,542,862,018
IssuesEvent
2020-12-15 16:12:01
MicrosoftDocs/azure-devops-docs
https://api.github.com/repos/MicrosoftDocs/azure-devops-docs
closed
Variables in Variable Group not accessible when declared at stage level
Pri1 devops-cicd-process/tech devops/prod doc-bug
I have defined a variable group and set permissions to all pipelines, And I updated my pipeline to something like this: ``` - stage: DeployStage displayName: Deploy dependsOn: - SetupStage - BuildStage variables: - group: Dev jobs: - deployment: DeployAzureFunctions strategy: runOnce: deploy: steps: <...> - task: AzureFunctionApp@1 displayName: "Deploy Function App" inputs: azureSubscription: $(azureSubscription) <...> ``` I have the variable `azureSubscription` defined in the variable group, but I get this error ``` There was a resource authorization issue: "The pipeline is not valid. Job DeployAzureFunctions: Step input azureSubscription references service connection $(azureSubscription) which could not be found. The service connection does not exist or has not been authorized for use. For authorization details, refer to https://aka.ms/yamlauthz. ``` However, when I add the variable group at the root level of my pipeline (before `Stages`) everything works fine --- #### Document Details ⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.* * ID: dd7e0bd3-1f7d-d7b6-cc72-5ef63c31b46a * Version Independent ID: dae87abd-b73d-9120-bcdb-6097d4b40f2a * Content: [Define variables - Azure Pipelines](https://docs.microsoft.com/en-us/azure/devops/pipelines/process/variables?view=azure-devops&tabs=yaml%2Cbatch#expansion-of-variables) * Content Source: [docs/pipelines/process/variables.md](https://github.com/MicrosoftDocs/azure-devops-docs/blob/master/docs/pipelines/process/variables.md) * Product: **devops** * Technology: **devops-cicd-process** * GitHub Login: @juliakm * Microsoft Alias: **jukullam**
1.0
Variables in Variable Group not accessible when declared at stage level - I have defined a variable group and set permissions to all pipelines, And I updated my pipeline to something like this: ``` - stage: DeployStage displayName: Deploy dependsOn: - SetupStage - BuildStage variables: - group: Dev jobs: - deployment: DeployAzureFunctions strategy: runOnce: deploy: steps: <...> - task: AzureFunctionApp@1 displayName: "Deploy Function App" inputs: azureSubscription: $(azureSubscription) <...> ``` I have the variable `azureSubscription` defined in the variable group, but I get this error ``` There was a resource authorization issue: "The pipeline is not valid. Job DeployAzureFunctions: Step input azureSubscription references service connection $(azureSubscription) which could not be found. The service connection does not exist or has not been authorized for use. For authorization details, refer to https://aka.ms/yamlauthz. ``` However, when I add the variable group at the root level of my pipeline (before `Stages`) everything works fine --- #### Document Details ⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.* * ID: dd7e0bd3-1f7d-d7b6-cc72-5ef63c31b46a * Version Independent ID: dae87abd-b73d-9120-bcdb-6097d4b40f2a * Content: [Define variables - Azure Pipelines](https://docs.microsoft.com/en-us/azure/devops/pipelines/process/variables?view=azure-devops&tabs=yaml%2Cbatch#expansion-of-variables) * Content Source: [docs/pipelines/process/variables.md](https://github.com/MicrosoftDocs/azure-devops-docs/blob/master/docs/pipelines/process/variables.md) * Product: **devops** * Technology: **devops-cicd-process** * GitHub Login: @juliakm * Microsoft Alias: **jukullam**
process
variables in variable group not accessible when declared at stage level i have defined a variable group and set permissions to all pipelines and i updated my pipeline to something like this stage deploystage displayname deploy dependson setupstage buildstage variables group dev jobs deployment deployazurefunctions strategy runonce deploy steps task azurefunctionapp displayname deploy function app inputs azuresubscription azuresubscription i have the variable azuresubscription defined in the variable group but i get this error there was a resource authorization issue the pipeline is not valid job deployazurefunctions step input azuresubscription references service connection azuresubscription which could not be found the service connection does not exist or has not been authorized for use for authorization details refer to however when i add the variable group at the root level of my pipeline before stages everything works fine document details ⚠ do not edit this section it is required for docs microsoft com ➟ github issue linking id version independent id bcdb content content source product devops technology devops cicd process github login juliakm microsoft alias jukullam
1
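The workaround noted at the end of the Azure DevOps record above (the pipeline works when the variable group is declared at the root, because `$(azureSubscription)` must resolve at compile time, before resource authorization runs) can be sketched roughly as follows. Only the `Dev` group, the stage/deployment names, and the `AzureFunctionApp@1` task come from the issue; the rest of the skeleton is an assumed minimal pipeline, not the reporter's full file:

```yaml
# Minimal sketch (assumed skeleton): declaring the group at the root lets
# $(azureSubscription) resolve before service-connection authorization is checked.
variables:
  - group: Dev            # contains azureSubscription

stages:
  - stage: DeployStage
    displayName: Deploy
    jobs:
      - deployment: DeployAzureFunctions
        strategy:
          runOnce:
            deploy:
              steps:
                - task: AzureFunctionApp@1
                  displayName: "Deploy Function App"
                  inputs:
                    azureSubscription: $(azureSubscription)
```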
50,412
21,097,215,058
IssuesEvent
2022-04-04 11:28:30
gradido/gradido
https://api.github.com/repos/gradido/gradido
closed
🔧 [Refactor] text for toast expand link copied
service: wallet frontend refactor
## 🔧 Refactor ticket Request from Margret: If a link is copied to the clipboard then the hint text should be extended. "Link has been copied to the clipboard. You can now paste it into an email or message." "Link wurde in die Zwischenablage kopiert. Du kannst ihn jetzt in eine E-Mail oder Nachricht einfügen." <img width="813" alt="Bildschirmfoto_2022-04-02_um_09 42 20" src="https://user-images.githubusercontent.com/1324583/161419929-d8e60311-c6de-469f-a40b-466ca42ad5a6.png">
1.0
🔧 [Refactor] text for toast expand link copied - ## 🔧 Refactor ticket Request from Margret: If a link is copied to the clipboard then the hint text should be extended. "Link has been copied to the clipboard. You can now paste it into an email or message." "Link wurde in die Zwischenablage kopiert. Du kannst ihn jetzt in eine E-Mail oder Nachricht einfügen." <img width="813" alt="Bildschirmfoto_2022-04-02_um_09 42 20" src="https://user-images.githubusercontent.com/1324583/161419929-d8e60311-c6de-469f-a40b-466ca42ad5a6.png">
non_process
🔧 text for toast expand link copied 🔧 refactor ticket request from margret if a link is copied to the clipboard then the hint text should be extended link has been copied to the clipboard you can now paste it into an email or message link wurde in die zwischenablage kopiert du kannst ihn jetzt in eine e mail oder nachricht einfügen img width alt bildschirmfoto um src
0
39,408
8,641,164,894
IssuesEvent
2018-11-24 14:52:38
dart-lang/linter
https://api.github.com/repos/dart-lang/linter
opened
[1.0.0] review experimental lints
code health docs
Pre-release (and really ASAP), we should review the lints currently tagged as experimental and assess: * `prefer_void_to_null` * `avoid_positional_boolean_parameters` * `prefer_foreach` * `literal_only_boolean_expressions` What's keeping them experimental? My sense is that where possible we should bump to stable or deprecate. Thoughts? /cc @alexeieleusis @MichaelRFairhurst @srawlins @a14n @bwilkerson
1.0
[1.0.0] review experimental lints - Pre-release (and really ASAP), we should review the lints currently tagged as experimental and assess: * `prefer_void_to_null` * `avoid_positional_boolean_parameters` * `prefer_foreach` * `literal_only_boolean_expressions` What's keeping them experimental? My sense is that where possible we should bump to stable or deprecate. Thoughts? /cc @alexeieleusis @MichaelRFairhurst @srawlins @a14n @bwilkerson
non_process
review experimental lints pre release and really asap we should review the lints currently tagged as experimental and assess prefer void to null avoid positional boolean parameters prefer foreach literal only boolean expressions what s keeping them experimental my sense is that where possible we should bump to stable or deprecate thoughts cc alexeieleusis michaelrfairhurst srawlins bwilkerson
0
5,553
8,394,169,313
IssuesEvent
2018-10-09 23:13:27
aspnet/IISIntegration
https://api.github.com/repos/aspnet/IISIntegration
closed
Change the template for 50x error page
2 - Working diagnostics in-process out-of-process shim
Folks are currently get confused and think the issues are with ANCM instead of the application.
2.0
Change the template for 50x error page - Folks are currently get confused and think the issues are with ANCM instead of the application.
process
change the template for error page folks are currently get confused and think the issues are with ancm instead of the application
1
149,335
11,890,396,004
IssuesEvent
2020-03-28 18:08:52
rust-lang/rust
https://api.github.com/repos/rust-lang/rust
closed
Some compiler tests fail on ARM
A-testsuite C-bug O-ARM
I've built Rust on an Odroid XU3 board (arm-linux-gnueabihf), then run `make check`, and found the following failing tests: ``` [run-make] run-make/atomic-lock-free [run-make] run-make/issue-14500 [run-make] run-make/issue-24445 [run-make] run-make/lto-smoke-c ``` [Here's the full `make check` log](https://gist.github.com/mmatyas/b775b83908cbdfc6551ff79d8bf0b0ef), the failures are at the bottom. I used Rust cde0fa5f673c99e8d534123187ee554452513dc3.
1.0
Some compiler tests fail on ARM - I've built Rust on an Odroid XU3 board (arm-linux-gnueabihf), then run `make check`, and found the following failing tests: ``` [run-make] run-make/atomic-lock-free [run-make] run-make/issue-14500 [run-make] run-make/issue-24445 [run-make] run-make/lto-smoke-c ``` [Here's the full `make check` log](https://gist.github.com/mmatyas/b775b83908cbdfc6551ff79d8bf0b0ef), the failures are at the bottom. I used Rust cde0fa5f673c99e8d534123187ee554452513dc3.
non_process
some compiler tests fail on arm i ve built rust on an odroid board arm linux gnueabihf then run make check and found the following failing tests run make atomic lock free run make issue run make issue run make lto smoke c the failures are at the bottom i used rust
0
12,011
14,738,371,167
IssuesEvent
2021-01-07 04:33:59
kdjstudios/SABillingGitlab
https://api.github.com/repos/kdjstudios/SABillingGitlab
closed
Stockton A/r Reconciliation Report not loading
anc-ops anc-process anc-report anp-important ant-bug ant-support has attachment
In GitLab by @kdjstudios on May 16, 2018, 10:27 **Submitted by:** "Martin Villegas" <martin.villegas@answernet.com> **Helpdesk:** http://www.servicedesk.answernet.com/profiles/ticket/2018-05-16-45685/conversation **Server:** Internal **Client/Site:** Stockton **Account:** NA **Issue:** I am trying to view Stockton’s A/R Reconciliation report but get the following message: ![image](/uploads/fbced78b03865783b70ae2d4b2c94c0c/image.png)
1.0
Stockton A/r Reconciliation Report not loading - In GitLab by @kdjstudios on May 16, 2018, 10:27 **Submitted by:** "Martin Villegas" <martin.villegas@answernet.com> **Helpdesk:** http://www.servicedesk.answernet.com/profiles/ticket/2018-05-16-45685/conversation **Server:** Internal **Client/Site:** Stockton **Account:** NA **Issue:** I am trying to view Stockton’s A/R Reconciliation report but get the following message: ![image](/uploads/fbced78b03865783b70ae2d4b2c94c0c/image.png)
process
stockton a r reconciliation report not loading in gitlab by kdjstudios on may submitted by martin villegas helpdesk server internal client site stockton account na issue i am trying to view stockton’s a r reconciliation report but get the following message uploads image png
1
22,462
31,238,401,337
IssuesEvent
2023-08-20 15:01:01
pyanodon/pybugreports
https://api.github.com/repos/pyanodon/pybugreports
closed
"Refined syngas using methanol from canisters" is inconsistent
bug needs investigation mod:pycoalprocessing
### Mod source PyAE Beta ### Which mod are you having an issue with? - [ ] pyalienlife - [ ] pyalternativeenergy - [X] pycoalprocessing - [ ] pyfusionenergy - [ ] pyhightech - [ ] pyindustry - [ ] pypetroleumhandling - [ ] pypostprocessing - [ ] pyrawores ### Operating system >=Windows 10 ### What kind of issue is this? - [ ] Compatibility - [ ] Locale (names, descriptions, unknown keys) - [ ] Graphical - [ ] Crash - [ ] Progression - [X] Balance - [ ] Pypostprocessing failure - [X] Other ### What is the problem? the recipe "Refined syngas using methanol from canisters" is the only one that uses "gas canister" all the other recipes use "fuel canister" The "gas canister" is also filled in a automated factory and not in a barrelling machine. Suggestion: change the recipe so it uses the methanol in a fuel cannister and remove the gas canister entirely. bonus remark: this recipe also exists in non-cannister form, but locked behind py3 science ... ### Steps to reproduce _No response_ ### Additional context ![afbeelding](https://user-images.githubusercontent.com/108619933/185779861-000a164a-b70b-404c-8637-4ce67f764163.png) ### Log file _No response_
1.0
"Refined syngas using methanol from canisters" is inconsistent - ### Mod source PyAE Beta ### Which mod are you having an issue with? - [ ] pyalienlife - [ ] pyalternativeenergy - [X] pycoalprocessing - [ ] pyfusionenergy - [ ] pyhightech - [ ] pyindustry - [ ] pypetroleumhandling - [ ] pypostprocessing - [ ] pyrawores ### Operating system >=Windows 10 ### What kind of issue is this? - [ ] Compatibility - [ ] Locale (names, descriptions, unknown keys) - [ ] Graphical - [ ] Crash - [ ] Progression - [X] Balance - [ ] Pypostprocessing failure - [X] Other ### What is the problem? the recipe "Refined syngas using methanol from canisters" is the only one that uses "gas canister" all the other recipes use "fuel canister" The "gas canister" is also filled in a automated factory and not in a barrelling machine. Suggestion: change the recipe so it uses the methanol in a fuel cannister and remove the gas canister entirely. bonus remark: this recipe also exists in non-cannister form, but locked behind py3 science ... ### Steps to reproduce _No response_ ### Additional context ![afbeelding](https://user-images.githubusercontent.com/108619933/185779861-000a164a-b70b-404c-8637-4ce67f764163.png) ### Log file _No response_
process
refined syngas using methanol from canisters is inconsistent mod source pyae beta which mod are you having an issue with pyalienlife pyalternativeenergy pycoalprocessing pyfusionenergy pyhightech pyindustry pypetroleumhandling pypostprocessing pyrawores operating system windows what kind of issue is this compatibility locale names descriptions unknown keys graphical crash progression balance pypostprocessing failure other what is the problem the recipe refined syngas using methanol from canisters is the only one that uses gas canister all the other recipes use fuel canister the gas canister is also filled in a automated factory and not in a barrelling machine suggestion change the recipe so it uses the methanol in a fuel cannister and remove the gas canister entirely bonus remark this recipe also exists in non cannister form but locked behind science steps to reproduce no response additional context log file no response
1
24,220
2,667,010,582
IssuesEvent
2015-03-22 04:45:29
NewCreature/EOF
https://api.github.com/repos/NewCreature/EOF
closed
"Split Lyric" menu item is not disabled when the function should not be available
bug imported Priority-Medium
_From [xander4j...@yahoo.com](https://code.google.com/u/111302640723734240985/) on May 10, 2010 01:13:06_ "Split Lyric" is always available in the Note menu (SHIFT+L seems to be properly suppressed), even when PART VOCALS isn't active, even when there are no lyric tokens in PART VOCALS. If there are no lyrics, and split lyric is activated, EOF displays gibberish in the Split Lyric dialog window (such as "g^^^^^^"). EOF allows the lyric token to be changed, and when undo is performed to reverse the effect of this, EOF crashes. It's possible that various other operations after the glitched lyric split occurs could cause a crash as well. _Original issue: http://code.google.com/p/editor-on-fire/issues/detail?id=1_
1.0
"Split Lyric" menu item is not disabled when the function should not be available - _From [xander4j...@yahoo.com](https://code.google.com/u/111302640723734240985/) on May 10, 2010 01:13:06_ "Split Lyric" is always available in the Note menu (SHIFT+L seems to be properly suppressed), even when PART VOCALS isn't active, even when there are no lyric tokens in PART VOCALS. If there are no lyrics, and split lyric is activated, EOF displays gibberish in the Split Lyric dialog window (such as "g^^^^^^"). EOF allows the lyric token to be changed, and when undo is performed to reverse the effect of this, EOF crashes. It's possible that various other operations after the glitched lyric split occurs could cause a crash as well. _Original issue: http://code.google.com/p/editor-on-fire/issues/detail?id=1_
non_process
split lyric menu item is not disabled when the function should not be available from on may split lyric is always available in the note menu shift l seems to be properly suppressed even when part vocals isn t active even when there are no lyric tokens in part vocals if there are no lyrics and split lyric is activated eof displays gibberish in the split lyric dialog window such as g eof allows the lyric token to be changed and when undo is performed to reverse the effect of this eof crashes it s possible that various other operations after the glitched lyric split occurs could cause a crash as well original issue
0
15,061
18,764,317,755
IssuesEvent
2021-11-05 20:48:03
ORNL-AMO/AMO-Tools-Desktop
https://api.github.com/repos/ORNL-AMO/AMO-Tools-Desktop
closed
TH Integration - Heat Cascading
Process Heating Treasure Hunt
Heat cascade: May setup Primary and Secondary process side-by-side, but nothing good for inputs for "Baseline" vs. "Modification" For results: Maybe current results can be on the right side somewhere, or just skip most of them. Suite issue in to fix something and add an "hourlySavings" Annual BL energy = firingRate2 x hours2 mod energy = firingRate2 x hours2 - energySavings savings = energySavings
1.0
TH Integration - Heat Cascading - Heat cascade: May setup Primary and Secondary process side-by-side, but nothing good for inputs for "Baseline" vs. "Modification" For results: Maybe current results can be on the right side somewhere, or just skip most of them. Suite issue in to fix something and add an "hourlySavings" Annual BL energy = firingRate2 x hours2 mod energy = firingRate2 x hours2 - energySavings savings = energySavings
process
th integration heat cascading heat cascade may setup primary and secondary process side by side but nothing good for inputs for baseline vs modification for results maybe current results can be on the right side somewhere or just skip most of them suite issue in to fix something and add an hourlysavings annual bl energy x mod energy x energysavings savings energysavings
1
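The savings arithmetic in the Heat Cascading record above (baseline = firingRate2 × hours2, modified = firingRate2 × hours2 − energySavings, savings = energySavings) can be checked with a tiny script. The function name and the sample numbers are invented for illustration; only the three formulas come from the ticket:

```python
def cascade_energy(firing_rate_2, hours_2, energy_savings):
    """Mirror of the ticket's formulas (names are hypothetical):
    baseline = firingRate2 * hours2
    modified = firingRate2 * hours2 - energySavings
    savings  = energySavings
    """
    baseline = firing_rate_2 * hours_2
    modified = baseline - energy_savings
    # savings falls out as the difference, which equals energy_savings
    return baseline, modified, baseline - modified

# Example with made-up numbers: firing rate 10 for 8,000 hours, saving 5,000.
baseline, modified, savings = cascade_energy(10.0, 8000.0, 5000.0)
print(baseline, modified, savings)  # → 80000.0 75000.0 5000.0
```

Note that the "savings" result is the input `energySavings` by construction, matching the ticket's last line.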
9,600
12,544,240,445
IssuesEvent
2020-06-05 16:53:13
googleapis/google-cloud-cpp
https://api.github.com/repos/googleapis/google-cloud-cpp
closed
Move (back) to a Monorepo
type: process
**TL;DR:** All the `google-cloud-cpp-*` GitHub repos will be combined into **this** repo as a single "monorepo" in Q2-2020. We apologize for any confusion this may cause our customers, but we think this will be much simpler for our customers going forward. This should **not** result in any build breaks for our customers, but in order for customers to pick up the newest monorepo versions of our libraries, they will need to make minimal changes to their build scripts. # Background This GitHub repo is currently a "monorepo" in the sense that it contains the code for several client libraries. It contains the [GCS client code](https://github.com/googleapis/google-cloud-cpp/tree/master/google/cloud/storage), the [Bigtable client code](https://github.com/googleapis/google-cloud-cpp/tree/master/google/cloud/bigtable), and the beginnings of a [Firestore client](https://github.com/googleapis/google-cloud-cpp/tree/master/google/cloud/firestore). However, our plan has been to change to a multi-repo approach where each client library lives in its own GitHub repository. This plan was outlined in https://github.com/googleapis/google-cloud-cpp/issues/3350. We started doing the multi-repo approach about a year ago with our [-spanner](https://github.com/googleapis/google-cloud-cpp-spanner) repo. At this point we now have the following GitHub repos: * https://github.com/googleapis/google-cloud-cpp (this "mono" repo) * https://github.com/googleapis/google-cloud-cpp-spanner * https://github.com/googleapis/google-cloud-cpp-common * https://github.com/googleapis/google-cloud-cpp-pubsub * https://github.com/googleapis/google-cloud-cpp-bigquery These separate repos are becoming exceedingly difficult to manage, there's lots of duplication across repos, and redundant work to keep things in sync, and the separate repos make it difficult for customers to explore all the C++ clients that we have available. This latter point will become an even bigger problem as the number of supported client libraries increases. Given the new data we have from the experience of managing multiple repos for about a year, we have decided to reverse this decision and to instead move back to a monorepo. # The Plan Our plan is to combine all of the "google-cloud-cpp-*" projects into a single GitHub repo, which will be this repo. The code, samples, and documentation for each client library will live in its own directory at `google/cloud/$library`, for example, `google/cloud/bigtable`, `google/cloud/storage`, and `google/cloud/spanner`. Our plan is to begin and complete this work in Q2-2020 (roughly April - June, 2020). The work to do this is captured in the following milestones: 1. [Optimize CI Builds](https://github.com/googleapis/google-cloud-cpp/milestone/14) 1. [Prepare to be a monorepo](https://github.com/googleapis/google-cloud-cpp/milestone/15) 1. [Prepare builds for incoming repos](https://github.com/googleapis/google-cloud-cpp/milestone/16) 1. [Move google-cloud-cpp-pubsub](https://github.com/googleapis/google-cloud-cpp/milestone/17) 1. [Move google-cloud-cpp-bigquery](https://github.com/googleapis/google-cloud-cpp/milestone/18) 1. [Move google-cloud-cpp-spanner](https://github.com/googleapis/google-cloud-cpp/milestone/19) 1. [Move google-cloud-cpp-common](https://github.com/googleapis/google-cloud-cpp/milestone/20) # Expected User Impact *We apologize for any confusion or difficulty this change may cause our customers. We are committed to minimizing this friction. We also believe that the end result will be simpler, clearer, and better for our current and new customers going forward.* We know this change impacts existing users, but we believe users will be able to migrate to this new scheme with small amounts of effort. We anticipate that **no** changes to the C++ code will be needed. We expect that some changes to the build and/or packaging scripts will be needed. ## Users Downloading from GitHub We expect that a number of our customers download the source from GitHub and then incorporate the code into their build scripts. Existing releases of `google-cloud-cpp` and the other `google-cloud-cpp-*` repos will continue to work, and will have exactly the same content as before. Therefore, customers who have pinned their scripts to use a release will have **no impact** until such time as they decide to upgrade to the new version in the new "monorepo". Users who download `master` will break immediately, but we hope this is a small number, and that they are cognizant of the risks of depending on a non-release branch. ### Changes for CMake Users depending on pre-installed libraries No changes to their `CMakeLists.txt` files should be needed for these customers. Some changes to their build / installation scripts would be needed at the time they decide to upgrade. ### Changes for CMake Users with Super Builds These customers will need to change their top-level `CMakeLists.txt` to add new external projects. ### Bazel Users Bazel users will need to change their `WORKSPACE` file to introduce `google-cloud-cpp` as a new dependency. ## Package Maintainers Package maintainers who use `google-cloud-cpp-spanner` will need to create a new package for `google-cloud-cpp`, if one does not already exist.
1.0
Move (back) to a Monorepo - **TL;DR:** All the `google-cloud-cpp-*` GitHub repos will be combined into **this** repo as a single "monorepo" in Q2-2020. We apologize for any confusion this may cause our customers, but we think this will be much simpler for our customers going forward. This should **not** result in any build breaks for our customers, but in order for customers to pick up the newest monorepo versions of our libraries, they will need to make minimal changes to their build scripts. # Background This GitHub repo is currently a "monorepo" in the sense that it contains the code for several client libraries. It contains the [GCS client code](https://github.com/googleapis/google-cloud-cpp/tree/master/google/cloud/storage), the [Bigtable client code](https://github.com/googleapis/google-cloud-cpp/tree/master/google/cloud/bigtable), and the beginnings of a [Firestore client](https://github.com/googleapis/google-cloud-cpp/tree/master/google/cloud/firestore). However, our plan has been to change to a multi-repo approach where each client library lives in its own GitHub repository. This plan was outlined in https://github.com/googleapis/google-cloud-cpp/issues/3350. We started doing the multi-repo approach about a year ago with our [-spanner](https://github.com/googleapis/google-cloud-cpp-spanner) repo. At this point we now have the following GitHub repos: * https://github.com/googleapis/google-cloud-cpp (this "mono" repo) * https://github.com/googleapis/google-cloud-cpp-spanner * https://github.com/googleapis/google-cloud-cpp-common * https://github.com/googleapis/google-cloud-cpp-pubsub * https://github.com/googleapis/google-cloud-cpp-bigquery These separate repos are becoming exceedingly difficult to manage, there's lots of duplication across repos, and redundant work to keep things in sync, and the separate repos make it difficult for customers to explore all the C++ clients that we have available. This latter point will become an even bigger problem as the number of supported client libraries increases. Given the new data we have from the experience of managing multiple repos for about a year, we have decided to reverse this decision and to instead move back to a monorepo. # The Plan Our plan is to combine all of the "google-cloud-cpp-*" projects into a single GitHub repo, which will be this repo. The code, samples, and documentation for each client library will live in its own directory at `google/cloud/$library`, for example, `google/cloud/bigtable`, `google/cloud/storage`, and `google/cloud/spanner`. Our plan is to begin and complete this work in Q2-2020 (roughly April - June, 2020). The work to do this is captured in the following milestones: 1. [Optimize CI Builds](https://github.com/googleapis/google-cloud-cpp/milestone/14) 1. [Prepare to be a monorepo](https://github.com/googleapis/google-cloud-cpp/milestone/15) 1. [Prepare builds for incoming repos](https://github.com/googleapis/google-cloud-cpp/milestone/16) 1. [Move google-cloud-cpp-pubsub](https://github.com/googleapis/google-cloud-cpp/milestone/17) 1. [Move google-cloud-cpp-bigquery](https://github.com/googleapis/google-cloud-cpp/milestone/18) 1. [Move google-cloud-cpp-spanner](https://github.com/googleapis/google-cloud-cpp/milestone/19) 1. [Move google-cloud-cpp-common](https://github.com/googleapis/google-cloud-cpp/milestone/20) # Expected User Impact *We apologize for any confusion or difficulty this change may cause our customers. We are committed to minimizing this friction. We also believe that the end result will be simpler, clearer, and better for our current and new customers going forward.* We know this change impacts existing users, but we believe users will be able to migrate to this new scheme with small amounts of effort. We anticipate that **no** changes to the C++ code will be needed. We expect that some changes to the build and/or packaging scripts will be needed. ## Users Downloading from GitHub We expect that a number of our customers download the source from GitHub and then incorporate the code into their build scripts. Existing releases of `google-cloud-cpp` and the other `google-cloud-cpp-*` repos will continue to work, and will have exactly the same content as before. Therefore, customers who have pinned their scripts to use a release will have **no impact** until such time as they decide to upgrade to the new version in the new "monorepo". Users who download `master` will break immediately, but we hope this is a small number, and that they are cognizant of the risks of depending on a non-release branch. ### Changes for CMake Users depending on pre-installed libraries No changes to their `CMakeLists.txt` files should be needed for these customers. Some changes to their build / installation scripts would be needed at the time they decide to upgrade. ### Changes for CMake Users with Super Builds These customers will need to change their top-level `CMakeLists.txt` to add new external projects. ### Bazel Users Bazel users will need to change their `WORKSPACE` file to introduce `google-cloud-cpp` as a new dependency. ## Package Maintainers Package maintainers who use `google-cloud-cpp-spanner` will need to create a new package for `google-cloud-cpp`, if one does not already exist.
process
move back to a monorepo tl dr all the google cloud cpp github repos will be combined into this repo as a single monorepo in we apologize for any confusion this may cause our customers but we think this will be much simpler for our customers going forward this should not result in any build breaks for our customers but in order for customers to pick up the newest monorepo versions of our libraries they will need to make minimal changes to their build scripts background this github repo is currently a monorepo in the sense that it contains the code for several client libraries it contains the the and the beginnings of a however our plan has been to change to a multi repo approach where each client library lives in its own github repository this plan was outlined in we started doing the multi repo approach about a year ago with our repo at this point we now have the following github repos this mono repo these separate repos are becoming exceedingly difficult to manage there s lots of duplication across repos and redundant work to keep things in sync and the separate repos make it difficult for customers to explore all the c clients that we have available this latter point will become an even bigger problem as the number of supported client libraries increases given the new data we have from the experience of managing multiple repos for about a year we have decided to reverse this decision and to instead move back to a monorepo the plan our plan is to combine all of the google cloud cpp projects into a single github repo which will be this repo the code samples and documentation for each client library will live in its own directory at google cloud library for example google cloud bigtable google cloud storage and google cloud spanner our plan is to begin and complete this work in roughly april june the work to do this is captured in the following milestones expected user impact we apologize for any confusion or difficulty this change may cause our customers we are committed to minimizing this friction we also believe that the end result will be simpler clearer and better for our current and new customers going forward we know this change impacts existing users but we believe users will be able to migrate to this new scheme with small amounts of effort we anticipate that no changes to the c code will be needed we expect that some changes to the build and or packaging scripts will be needed users downloading from github we expect that a number of our customers download the source from github and then incorporate the code into their build scripts existing releases of google cloud cpp and the other google cloud cpp repos will continue to work and will have exactly the same content as before therefore customers who have pinned their scripts to use a release will have no impact until such time as they decide to upgrade to the new version in the new monorepo users who download master will break immediately but we hope this is a small number and that they are cognizant of the risks of depending on a non release branch changes for cmake users depending on pre installed libraries no changes to their cmakelists txt files should be needed for these customers some changes to their build installation scripts would be needed at the time they decide to upgrade changes for cmake users with super builds these customers will need to change their top level cmakelists txt to add new external projects bazel users bazel users will need to change their workspace file to introduce google cloud cpp as a new dependency package maintainers package maintainers who use google cloud cpp spanner will need to create a new package for google cloud cpp if one does not already exist
1
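The monorepo record above says Bazel users will need to introduce `google-cloud-cpp` as a new dependency in their `WORKSPACE` file. A rough sketch of what that change typically looks like is below; the release tag is deliberately left as a placeholder, and the `google_cloud_cpp_deps` macro name is an assumption based on common Bazel dependency conventions, not something stated in the record:

```starlark
# WORKSPACE — rough sketch only; tag and macro names are placeholders/assumptions.
load("@bazel_tools//tools/build_defs/repo:http.bzl", "http_archive")

http_archive(
    name = "google_cloud_cpp",
    # Pin a released tag rather than `master`, as the record advises.
    url = "https://github.com/googleapis/google-cloud-cpp/archive/<release-tag>.tar.gz",
    strip_prefix = "google-cloud-cpp-<release-tag>",
)

# Assumed helper that pulls in the library's transitive dependencies.
load("@google_cloud_cpp//bazel:google_cloud_cpp_deps.bzl", "google_cloud_cpp_deps")
google_cloud_cpp_deps()
```

Pinning a tag matches the record's warning that anyone tracking `master` would break immediately when the repos merged.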
6,885
2,867,724,209
IssuesEvent
2015-06-05 14:54:59
AAndharia/PSB
https://api.github.com/repos/AAndharia/PSB
closed
Timesheet Page
CR Approved Ready For Testing
"There should be option to show time sheet entries by Files too. If it is shown by Files then in place of File Number on current List View page, show the Initials Lawyer."
1.0
Timesheet Page - "There should be option to show time sheet entries by Files too. If it is shown by Files then in place of File Number on current List View page, show the Initials Lawyer."
non_process
timesheet page there should be option to show time sheet entries by files too if it is shown by files then in place of file number on current list view page show the initials lawyer
0
429,739
12,427,008,716
IssuesEvent
2020-05-25 00:28:07
eclipse-ee4j/glassfish
https://api.github.com/repos/eclipse-ee4j/glassfish
closed
Stateful Security Context for secured EJB remote method calls
ERR: Assignee Priority: Minor Stale Type: New Feature
Kind request to implement stateful security context for secured EJB remote method calls. **Reason:** Currently AppservPasswordLoginModule.authenticateUser() of a login module securing an EJB is called for every remote call to a method of this EJB. It would be better (in terms of performance) if after the first succesful authentication a 'session' is generated and used for any following EJB call to make the security context stateful instead of stateless as described by the [CORBA Common Secure Interoperability (CSIv2)](http://www.omg.org/technology/documents/corba_spec_catalog.htm#CSIv2). #### Affected Versions [3.1.1]
1.0
Stateful Security Context for secured EJB remote method calls - Kind request to implement stateful security context for secured EJB remote method calls. **Reason:** Currently AppservPasswordLoginModule.authenticateUser() of a login module securing an EJB is called for every remote call to a method of this EJB. It would be better (in terms of performance) if after the first succesful authentication a 'session' is generated and used for any following EJB call to make the security context stateful instead of stateless as described by the [CORBA Common Secure Interoperability (CSIv2)](http://www.omg.org/technology/documents/corba_spec_catalog.htm#CSIv2). #### Affected Versions [3.1.1]
non_process
stateful security context for secured ejb remote method calls kind request to implement stateful security context for secured ejb remote method calls reason currently appservpasswordloginmodule authenticateuser of a login module securing an ejb is called for every remote call to a method of this ejb it would be better in terms of performance if after the first succesful authentication a session is generated and used for any following ejb call to make the security context stateful instead of stateless as described by the affected versions
0
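The GlassFish record above asks for a stateful security context: authenticate once, then reuse a session for subsequent EJB remote calls instead of re-running the login module every time. As an editorial illustration only — this is a hedged sketch, not GlassFish's CSIv2 implementation, and all names (`issue_token`, `check_token`, `SECRET`) are hypothetical — the core idea is a signed, expiring session token:

```python
import hashlib
import hmac
import os
import time

# Hedged sketch, NOT GlassFish's actual CSIv2 code: after one successful
# authentication, the server issues a signed, expiring token so later
# remote calls can skip the full login-module round trip.
SECRET = os.urandom(32)  # per-server signing key (illustrative)

def issue_token(user, ttl=300):
    expires = int(time.time()) + ttl
    payload = "{}:{}".format(user, expires)
    sig = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    return "{}:{}".format(payload, sig)

def check_token(token):
    # Split off the signature, recompute it, and verify expiry.
    payload, _, sig = token.rpartition(":")
    expected = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    _user, _, expires = payload.partition(":")
    return hmac.compare_digest(sig, expected) and int(expires) > time.time()
```

The performance win the reporter asks for comes from `check_token` being a cheap HMAC comparison, versus a full `authenticateUser()` call on every invocation.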
19,799
26,186,673,655
IssuesEvent
2023-01-03 02:00:07
lizhihao6/get-daily-arxiv-noti
https://api.github.com/repos/lizhihao6/get-daily-arxiv-noti
opened
New submissions for Tue, 3 Jan 23
event camera white balance isp compression image signal processing image signal process raw raw image events camera color contrast events AWB
## Keyword: events There is no result ## Keyword: event camera ### An Event-based Algorithm for Simultaneous 6-DOF Camera Pose Tracking and Mapping - **Authors:** Masoud Dayani Najafabadi, Mohammad Reza Ahmadzadeh - **Subjects:** Computer Vision and Pattern Recognition (cs.CV) - **Arxiv link:** https://arxiv.org/abs/2301.00618 - **Pdf link:** https://arxiv.org/pdf/2301.00618 - **Abstract** Compared to regular cameras, Dynamic Vision Sensors or Event Cameras can output compact visual data based on a change in the intensity in each pixel location asynchronously. In this paper, we study the application of current image-based SLAM techniques to these novel sensors. To this end, the information in adaptively selected event windows is processed to form motion-compensated images. These images are then used to reconstruct the scene and estimate the 6-DOF pose of the camera. We also propose an inertial version of the event-only pipeline to assess its capabilities. We compare the results of different configurations of the proposed algorithm against the ground truth for sequences of two publicly available event datasets. We also compare the results of the proposed event-inertial pipeline with the state-of-the-art and show it can produce comparable or more accurate results provided the map estimate is reliable. 
## Keyword: events camera There is no result ## Keyword: white balance There is no result ## Keyword: color contrast There is no result ## Keyword: AWB There is no result ## Keyword: ISP ### Rethinking the Video Sampling and Reasoning Strategies for Temporal Sentence Grounding - **Authors:** Jiahao Zhu, Daizong Liu, Pan Zhou, Xing Di, Yu Cheng, Song Yang, Wenzheng Xu, Zichuan Xu, Yao Wan, Lichao Sun, Zeyu Xiong - **Subjects:** Computer Vision and Pattern Recognition (cs.CV) - **Arxiv link:** https://arxiv.org/abs/2301.00514 - **Pdf link:** https://arxiv.org/pdf/2301.00514 - **Abstract** Temporal sentence grounding (TSG) aims to identify the temporal boundary of a specific segment from an untrimmed video by a sentence query. All existing works first utilize a sparse sampling strategy to extract a fixed number of video frames and then conduct multi-modal interactions with query sentence for reasoning. However, we argue that these methods have overlooked two indispensable issues: 1) Boundary-bias: The annotated target segment generally refers to two specific frames as corresponding start and end timestamps. The video downsampling process may lose these two frames and take the adjacent irrelevant frames as new boundaries. 2) Reasoning-bias: Such incorrect new boundary frames also lead to the reasoning bias during frame-query interaction, reducing the generalization ability of model. To alleviate above limitations, in this paper, we propose a novel Siamese Sampling and Reasoning Network (SSRN) for TSG, which introduces a siamese sampling mechanism to generate additional contextual frames to enrich and refine the new boundaries. Specifically, a reasoning strategy is developed to learn the inter-relationship among these frames and generate soft labels on boundaries for more accurate frame-query reasoning. Such mechanism is also able to supplement the absent consecutive visual semantics to the sampled sparse frames for fine-grained activity understanding. 
Extensive experiments demonstrate the effectiveness of SSRN on three challenging datasets. ## Keyword: image signal processing There is no result ## Keyword: image signal process There is no result ## Keyword: compression There is no result ## Keyword: RAW ### PCRLv2: A Unified Visual Information Preservation Framework for Self-supervised Pre-training in Medical Image Analysis - **Authors:** Hong-Yu Zhou, Chixiang Lu, Chaoqi Chen, Sibei Yang, Yizhou Yu - **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Machine Learning (cs.LG) - **Arxiv link:** https://arxiv.org/abs/2301.00772 - **Pdf link:** https://arxiv.org/pdf/2301.00772 - **Abstract** Recent advances in self-supervised learning (SSL) in computer vision are primarily comparative, whose goal is to preserve invariant and discriminative semantics in latent representations by comparing siamese image views. However, the preserved high-level semantics do not contain enough local information, which is vital in medical image analysis (e.g., image-based diagnosis and tumor segmentation). To mitigate the locality problem of comparative SSL, we propose to incorporate the task of pixel restoration for explicitly encoding more pixel-level information into high-level semantics. We also address the preservation of scale information, a powerful tool in aiding image understanding but has not drawn much attention in SSL. The resulting framework can be formulated as a multi-task optimization problem on the feature pyramid. Specifically, we conduct multi-scale pixel restoration and siamese feature comparison in the pyramid. In addition, we propose non-skip U-Net to build the feature pyramid and develop sub-crop to replace multi-crop in 3D medical imaging. 
The proposed unified SSL framework (PCRLv2) surpasses its self-supervised counterparts on various tasks, including brain tumor segmentation (BraTS 2018), chest pathology identification (ChestX-ray, CheXpert), pulmonary nodule detection (LUNA), and abdominal organ segmentation (LiTS), sometimes outperforming them by large margins with limited annotations. ## Keyword: raw image There is no result
2.0
New submissions for Tue, 3 Jan 23 - ## Keyword: events There is no result ## Keyword: event camera ### An Event-based Algorithm for Simultaneous 6-DOF Camera Pose Tracking and Mapping - **Authors:** Masoud Dayani Najafabadi, Mohammad Reza Ahmadzadeh - **Subjects:** Computer Vision and Pattern Recognition (cs.CV) - **Arxiv link:** https://arxiv.org/abs/2301.00618 - **Pdf link:** https://arxiv.org/pdf/2301.00618 - **Abstract** Compared to regular cameras, Dynamic Vision Sensors or Event Cameras can output compact visual data based on a change in the intensity in each pixel location asynchronously. In this paper, we study the application of current image-based SLAM techniques to these novel sensors. To this end, the information in adaptively selected event windows is processed to form motion-compensated images. These images are then used to reconstruct the scene and estimate the 6-DOF pose of the camera. We also propose an inertial version of the event-only pipeline to assess its capabilities. We compare the results of different configurations of the proposed algorithm against the ground truth for sequences of two publicly available event datasets. We also compare the results of the proposed event-inertial pipeline with the state-of-the-art and show it can produce comparable or more accurate results provided the map estimate is reliable. 
## Keyword: events camera There is no result ## Keyword: white balance There is no result ## Keyword: color contrast There is no result ## Keyword: AWB There is no result ## Keyword: ISP ### Rethinking the Video Sampling and Reasoning Strategies for Temporal Sentence Grounding - **Authors:** Jiahao Zhu, Daizong Liu, Pan Zhou, Xing Di, Yu Cheng, Song Yang, Wenzheng Xu, Zichuan Xu, Yao Wan, Lichao Sun, Zeyu Xiong - **Subjects:** Computer Vision and Pattern Recognition (cs.CV) - **Arxiv link:** https://arxiv.org/abs/2301.00514 - **Pdf link:** https://arxiv.org/pdf/2301.00514 - **Abstract** Temporal sentence grounding (TSG) aims to identify the temporal boundary of a specific segment from an untrimmed video by a sentence query. All existing works first utilize a sparse sampling strategy to extract a fixed number of video frames and then conduct multi-modal interactions with query sentence for reasoning. However, we argue that these methods have overlooked two indispensable issues: 1) Boundary-bias: The annotated target segment generally refers to two specific frames as corresponding start and end timestamps. The video downsampling process may lose these two frames and take the adjacent irrelevant frames as new boundaries. 2) Reasoning-bias: Such incorrect new boundary frames also lead to the reasoning bias during frame-query interaction, reducing the generalization ability of model. To alleviate above limitations, in this paper, we propose a novel Siamese Sampling and Reasoning Network (SSRN) for TSG, which introduces a siamese sampling mechanism to generate additional contextual frames to enrich and refine the new boundaries. Specifically, a reasoning strategy is developed to learn the inter-relationship among these frames and generate soft labels on boundaries for more accurate frame-query reasoning. Such mechanism is also able to supplement the absent consecutive visual semantics to the sampled sparse frames for fine-grained activity understanding. 
Extensive experiments demonstrate the effectiveness of SSRN on three challenging datasets. ## Keyword: image signal processing There is no result ## Keyword: image signal process There is no result ## Keyword: compression There is no result ## Keyword: RAW ### PCRLv2: A Unified Visual Information Preservation Framework for Self-supervised Pre-training in Medical Image Analysis - **Authors:** Hong-Yu Zhou, Chixiang Lu, Chaoqi Chen, Sibei Yang, Yizhou Yu - **Subjects:** Computer Vision and Pattern Recognition (cs.CV); Machine Learning (cs.LG) - **Arxiv link:** https://arxiv.org/abs/2301.00772 - **Pdf link:** https://arxiv.org/pdf/2301.00772 - **Abstract** Recent advances in self-supervised learning (SSL) in computer vision are primarily comparative, whose goal is to preserve invariant and discriminative semantics in latent representations by comparing siamese image views. However, the preserved high-level semantics do not contain enough local information, which is vital in medical image analysis (e.g., image-based diagnosis and tumor segmentation). To mitigate the locality problem of comparative SSL, we propose to incorporate the task of pixel restoration for explicitly encoding more pixel-level information into high-level semantics. We also address the preservation of scale information, a powerful tool in aiding image understanding but has not drawn much attention in SSL. The resulting framework can be formulated as a multi-task optimization problem on the feature pyramid. Specifically, we conduct multi-scale pixel restoration and siamese feature comparison in the pyramid. In addition, we propose non-skip U-Net to build the feature pyramid and develop sub-crop to replace multi-crop in 3D medical imaging. 
The proposed unified SSL framework (PCRLv2) surpasses its self-supervised counterparts on various tasks, including brain tumor segmentation (BraTS 2018), chest pathology identification (ChestX-ray, CheXpert), pulmonary nodule detection (LUNA), and abdominal organ segmentation (LiTS), sometimes outperforming them by large margins with limited annotations. ## Keyword: raw image There is no result
process
new submissions for tue jan keyword events there is no result keyword event camera an event based algorithm for simultaneous dof camera pose tracking and mapping authors masoud dayani najafabadi mohammad reza ahmadzadeh subjects computer vision and pattern recognition cs cv arxiv link pdf link abstract compared to regular cameras dynamic vision sensors or event cameras can output compact visual data based on a change in the intensity in each pixel location asynchronously in this paper we study the application of current image based slam techniques to these novel sensors to this end the information in adaptively selected event windows is processed to form motion compensated images these images are then used to reconstruct the scene and estimate the dof pose of the camera we also propose an inertial version of the event only pipeline to assess its capabilities we compare the results of different configurations of the proposed algorithm against the ground truth for sequences of two publicly available event datasets we also compare the results of the proposed event inertial pipeline with the state of the art and show it can produce comparable or more accurate results provided the map estimate is reliable keyword events camera there is no result keyword white balance there is no result keyword color contrast there is no result keyword awb there is no result keyword isp rethinking the video sampling and reasoning strategies for temporal sentence grounding authors jiahao zhu daizong liu pan zhou xing di yu cheng song yang wenzheng xu zichuan xu yao wan lichao sun zeyu xiong subjects computer vision and pattern recognition cs cv arxiv link pdf link abstract temporal sentence grounding tsg aims to identify the temporal boundary of a specific segment from an untrimmed video by a sentence query all existing works first utilize a sparse sampling strategy to extract a fixed number of video frames and then conduct multi modal interactions with query sentence for reasoning 
however we argue that these methods have overlooked two indispensable issues boundary bias the annotated target segment generally refers to two specific frames as corresponding start and end timestamps the video downsampling process may lose these two frames and take the adjacent irrelevant frames as new boundaries reasoning bias such incorrect new boundary frames also lead to the reasoning bias during frame query interaction reducing the generalization ability of model to alleviate above limitations in this paper we propose a novel siamese sampling and reasoning network ssrn for tsg which introduces a siamese sampling mechanism to generate additional contextual frames to enrich and refine the new boundaries specifically a reasoning strategy is developed to learn the inter relationship among these frames and generate soft labels on boundaries for more accurate frame query reasoning such mechanism is also able to supplement the absent consecutive visual semantics to the sampled sparse frames for fine grained activity understanding extensive experiments demonstrate the effectiveness of ssrn on three challenging datasets keyword image signal processing there is no result keyword image signal process there is no result keyword compression there is no result keyword raw a unified visual information preservation framework for self supervised pre training in medical image analysis authors hong yu zhou chixiang lu chaoqi chen sibei yang yizhou yu subjects computer vision and pattern recognition cs cv machine learning cs lg arxiv link pdf link abstract recent advances in self supervised learning ssl in computer vision are primarily comparative whose goal is to preserve invariant and discriminative semantics in latent representations by comparing siamese image views however the preserved high level semantics do not contain enough local information which is vital in medical image analysis e g image based diagnosis and tumor segmentation to mitigate the locality problem of 
comparative ssl we propose to incorporate the task of pixel restoration for explicitly encoding more pixel level information into high level semantics we also address the preservation of scale information a powerful tool in aiding image understanding but has not drawn much attention in ssl the resulting framework can be formulated as a multi task optimization problem on the feature pyramid specifically we conduct multi scale pixel restoration and siamese feature comparison in the pyramid in addition we propose non skip u net to build the feature pyramid and develop sub crop to replace multi crop in medical imaging the proposed unified ssl framework surpasses its self supervised counterparts on various tasks including brain tumor segmentation brats chest pathology identification chestx ray chexpert pulmonary nodule detection luna and abdominal organ segmentation lits sometimes outperforming them by large margins with limited annotations keyword raw image there is no result
1
107,618
16,761,611,888
IssuesEvent
2021-06-13 22:31:17
gms-ws-demo/nibrs
https://api.github.com/repos/gms-ws-demo/nibrs
closed
CVE-2019-5427 (High) detected in c3p0-0.9.1.1.jar - autoclosed
security vulnerability
## CVE-2019-5427 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>c3p0-0.9.1.1.jar</b></p></summary> <p>c3p0 is an easy-to-use library for augmenting traditional (DriverManager-based) JDBC drivers with JNDI-bindable DataSources, including DataSources that implement Connection and Statement Pooling, as described by the jdbc3 spec and jdbc2 std extension.</p> <p>Library home page: <a href="http://c3p0.sourceforge.net">http://c3p0.sourceforge.net</a></p> <p>Path to dependency file: nibrs/tools/nibrs-fbi-service/pom.xml</p> <p>Path to vulnerable library: /home/wss-scanner/.m2/repository/c3p0/c3p0/0.9.1.1/c3p0-0.9.1.1.jar,/home/wss-scanner/.m2/repository/c3p0/c3p0/0.9.1.1/c3p0-0.9.1.1.jar,/home/wss-scanner/.m2/repository/c3p0/c3p0/0.9.1.1/c3p0-0.9.1.1.jar,nibrs/tools/nibrs-fbi-service/target/nibrs-fbi-service-1.0.0/WEB-INF/lib/c3p0-0.9.1.1.jar,/home/wss-scanner/.m2/repository/c3p0/c3p0/0.9.1.1/c3p0-0.9.1.1.jar</p> <p> Dependency Hierarchy: - :x: **c3p0-0.9.1.1.jar** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/gms-ws-demo/nibrs/commit/9fb1c19bd26c2113d1961640de126a33eacdc946">9fb1c19bd26c2113d1961640de126a33eacdc946</a></p> <p>Found in base branch: <b>master</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> c3p0 version < 0.9.5.4 may be exploited by a billion laughs attack when loading XML configuration due to missing protections against recursive entity expansion when loading configuration. 
<p>Publish Date: 2019-04-22 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-5427>CVE-2019-5427</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-5427">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-5427</a></p> <p>Release Date: 2019-04-22</p> <p>Fix Resolution: com.mchange:c3p0:0.9.5.4</p> </p> </details> <p></p> *** <!-- REMEDIATE-OPEN-PR-START --> - [ ] Check this box to open an automated fix PR <!-- REMEDIATE-OPEN-PR-END --> <!-- <REMEDIATE>{"isOpenPROnVulnerability":false,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"Java","groupId":"c3p0","packageName":"c3p0","packageVersion":"0.9.1.1","packageFilePaths":["/tools/nibrs-fbi-service/pom.xml","/tools/nibrs-flatfile/pom.xml","/tools/nibrs-validate-common/pom.xml"],"isTransitiveDependency":false,"dependencyTree":"c3p0:c3p0:0.9.1.1","isMinimumFixVersionAvailable":true,"minimumFixVersion":"com.mchange:c3p0:0.9.5.4"}],"baseBranches":["master"],"vulnerabilityIdentifier":"CVE-2019-5427","vulnerabilityDetails":"c3p0 version \u003c 0.9.5.4 may be exploited by a billion laughs attack when loading XML configuration due to missing protections against recursive entity 
expansion when loading configuration.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-5427","cvss3Severity":"high","cvss3Score":"7.5","cvss3Metrics":{"A":"High","AC":"Low","PR":"None","S":"Unchanged","C":"None","UI":"None","AV":"Network","I":"None"},"extraData":{}}</REMEDIATE> -->
True
CVE-2019-5427 (High) detected in c3p0-0.9.1.1.jar - autoclosed - ## CVE-2019-5427 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>c3p0-0.9.1.1.jar</b></p></summary> <p>c3p0 is an easy-to-use library for augmenting traditional (DriverManager-based) JDBC drivers with JNDI-bindable DataSources, including DataSources that implement Connection and Statement Pooling, as described by the jdbc3 spec and jdbc2 std extension.</p> <p>Library home page: <a href="http://c3p0.sourceforge.net">http://c3p0.sourceforge.net</a></p> <p>Path to dependency file: nibrs/tools/nibrs-fbi-service/pom.xml</p> <p>Path to vulnerable library: /home/wss-scanner/.m2/repository/c3p0/c3p0/0.9.1.1/c3p0-0.9.1.1.jar,/home/wss-scanner/.m2/repository/c3p0/c3p0/0.9.1.1/c3p0-0.9.1.1.jar,/home/wss-scanner/.m2/repository/c3p0/c3p0/0.9.1.1/c3p0-0.9.1.1.jar,nibrs/tools/nibrs-fbi-service/target/nibrs-fbi-service-1.0.0/WEB-INF/lib/c3p0-0.9.1.1.jar,/home/wss-scanner/.m2/repository/c3p0/c3p0/0.9.1.1/c3p0-0.9.1.1.jar</p> <p> Dependency Hierarchy: - :x: **c3p0-0.9.1.1.jar** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/gms-ws-demo/nibrs/commit/9fb1c19bd26c2113d1961640de126a33eacdc946">9fb1c19bd26c2113d1961640de126a33eacdc946</a></p> <p>Found in base branch: <b>master</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> c3p0 version < 0.9.5.4 may be exploited by a billion laughs attack when loading XML configuration due to missing protections against recursive entity expansion when loading configuration. 
<p>Publish Date: 2019-04-22 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-5427>CVE-2019-5427</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-5427">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-5427</a></p> <p>Release Date: 2019-04-22</p> <p>Fix Resolution: com.mchange:c3p0:0.9.5.4</p> </p> </details> <p></p> *** <!-- REMEDIATE-OPEN-PR-START --> - [ ] Check this box to open an automated fix PR <!-- REMEDIATE-OPEN-PR-END --> <!-- <REMEDIATE>{"isOpenPROnVulnerability":false,"isPackageBased":true,"isDefaultBranch":true,"packages":[{"packageType":"Java","groupId":"c3p0","packageName":"c3p0","packageVersion":"0.9.1.1","packageFilePaths":["/tools/nibrs-fbi-service/pom.xml","/tools/nibrs-flatfile/pom.xml","/tools/nibrs-validate-common/pom.xml"],"isTransitiveDependency":false,"dependencyTree":"c3p0:c3p0:0.9.1.1","isMinimumFixVersionAvailable":true,"minimumFixVersion":"com.mchange:c3p0:0.9.5.4"}],"baseBranches":["master"],"vulnerabilityIdentifier":"CVE-2019-5427","vulnerabilityDetails":"c3p0 version \u003c 0.9.5.4 may be exploited by a billion laughs attack when loading XML configuration due to missing protections against recursive entity 
expansion when loading configuration.","vulnerabilityUrl":"https://vuln.whitesourcesoftware.com/vulnerability/CVE-2019-5427","cvss3Severity":"high","cvss3Score":"7.5","cvss3Metrics":{"A":"High","AC":"Low","PR":"None","S":"Unchanged","C":"None","UI":"None","AV":"Network","I":"None"},"extraData":{}}</REMEDIATE> -->
non_process
cve high detected in jar autoclosed cve high severity vulnerability vulnerable library jar is an easy to use library for augmenting traditional drivermanager based jdbc drivers with jndi bindable datasources including datasources that implement connection and statement pooling as described by the spec and std extension library home page a href path to dependency file nibrs tools nibrs fbi service pom xml path to vulnerable library home wss scanner repository jar home wss scanner repository jar home wss scanner repository jar nibrs tools nibrs fbi service target nibrs fbi service web inf lib jar home wss scanner repository jar dependency hierarchy x jar vulnerable library found in head commit a href found in base branch master vulnerability details version may be exploited by a billion laughs attack when loading xml configuration due to missing protections against recursive entity expansion when loading configuration publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution com mchange check this box to open an automated fix pr isopenpronvulnerability false ispackagebased true isdefaultbranch true packages istransitivedependency false dependencytree isminimumfixversionavailable true minimumfixversion com mchange basebranches vulnerabilityidentifier cve vulnerabilitydetails version may be exploited by a billion laughs attack when loading xml configuration due to missing protections against recursive entity expansion when loading configuration vulnerabilityurl
0
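The CVE-2019-5427 record above concerns a billion-laughs attack: a DTD whose entities expand recursively until memory is exhausted. As an illustrative sketch only — this is not c3p0's actual fix (which was to upgrade to 0.9.5.4), and `looks_like_entity_bomb` is a hypothetical helper — one cheap pre-parse guard is to reject configuration documents that declare suspiciously many entities:

```python
import re

# Hedged sketch, NOT c3p0's fix: before handing XML configuration to a
# DTD-expanding parser, reject documents whose DTD declares suspiciously
# many entities -- the signature of a billion-laughs bomb.
def looks_like_entity_bomb(xml_text, max_entities=10):
    return len(re.findall(r"<!ENTITY\b", xml_text)) > max_entities

# Build a classic billion-laughs payload: each entity expands to ten
# copies of the previous one, so full expansion is 10**19 "lol"s.
defs = ['<!ENTITY lol0 "lol">']
for i in range(1, 20):
    defs.append('<!ENTITY lol{} "{}">'.format(i, "&lol{};".format(i - 1) * 10))
bomb = '<?xml version="1.0"?><!DOCTYPE x [{}]><x>&lol19;</x>'.format("".join(defs))
```

A more robust defense is to disable DTD processing entirely in the parser, which is what hardened XML libraries do by default.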
15,959
20,175,455,196
IssuesEvent
2022-02-10 14:10:54
ooi-data/CE09OSSM-MFD37-01-OPTAAC000-recovered_host-optaa_dj_dcl_instrument_recovered
https://api.github.com/repos/ooi-data/CE09OSSM-MFD37-01-OPTAAC000-recovered_host-optaa_dj_dcl_instrument_recovered
opened
🛑 Processing failed: GroupNotFoundError
process
## Overview `GroupNotFoundError` found in `processing_task` task during run ended on 2022-02-10T14:10:53.563696. ## Details Flow name: `CE09OSSM-MFD37-01-OPTAAC000-recovered_host-optaa_dj_dcl_instrument_recovered` Task name: `processing_task` Error type: `GroupNotFoundError` Error message: group not found at path '' <details> <summary>Traceback</summary> ``` Traceback (most recent call last): File "/srv/conda/envs/notebook/lib/python3.9/site-packages/ooi_harvester/processor/pipeline.py", line 165, in processing final_path = finalize_data_stream( File "/srv/conda/envs/notebook/lib/python3.9/site-packages/ooi_harvester/processor/__init__.py", line 64, in finalize_data_stream final_group = zarr.open_group(final_store, mode='r+') File "/srv/conda/envs/notebook/lib/python3.9/site-packages/zarr/hierarchy.py", line 1168, in open_group raise GroupNotFoundError(path) zarr.errors.GroupNotFoundError: group not found at path '' ``` </details>
1.0
🛑 Processing failed: GroupNotFoundError - ## Overview `GroupNotFoundError` found in `processing_task` task during run ended on 2022-02-10T14:10:53.563696. ## Details Flow name: `CE09OSSM-MFD37-01-OPTAAC000-recovered_host-optaa_dj_dcl_instrument_recovered` Task name: `processing_task` Error type: `GroupNotFoundError` Error message: group not found at path '' <details> <summary>Traceback</summary> ``` Traceback (most recent call last): File "/srv/conda/envs/notebook/lib/python3.9/site-packages/ooi_harvester/processor/pipeline.py", line 165, in processing final_path = finalize_data_stream( File "/srv/conda/envs/notebook/lib/python3.9/site-packages/ooi_harvester/processor/__init__.py", line 64, in finalize_data_stream final_group = zarr.open_group(final_store, mode='r+') File "/srv/conda/envs/notebook/lib/python3.9/site-packages/zarr/hierarchy.py", line 1168, in open_group raise GroupNotFoundError(path) zarr.errors.GroupNotFoundError: group not found at path '' ``` </details>
process
🛑 processing failed groupnotfounderror overview groupnotfounderror found in processing task task during run ended on details flow name recovered host optaa dj dcl instrument recovered task name processing task error type groupnotfounderror error message group not found at path traceback traceback most recent call last file srv conda envs notebook lib site packages ooi harvester processor pipeline py line in processing final path finalize data stream file srv conda envs notebook lib site packages ooi harvester processor init py line in finalize data stream final group zarr open group final store mode r file srv conda envs notebook lib site packages zarr hierarchy py line in open group raise groupnotfounderror path zarr errors groupnotfounderror group not found at path
1
68,226
7,090,338,995
IssuesEvent
2018-01-12 08:35:00
Exa-Networks/exabgp
https://api.github.com/repos/Exa-Networks/exabgp
closed
Exabgp produces invalid JSON
bug fixed-need-testing
##### ISSUE TYPE - Bug Report ##### OS CentOS 6.9, with exabgp running under pypy 2.0.2 ##### VERSION ``` ExaBGP : 3.4.15 Python : 2.7.3 (f66246c46ca30b26a5c73e4cc95dd6235c966b8f, Jul 30 2013, 09:27:06) [PyPy 2.0.2 with GCC 4.4.7 20120313 (Red Hat 4.4.7-3)] Uname : #1 SMP Wed Jul 12 14:17:22 UTC 2017 ``` ##### ENVIRONMENT ``` [exabgp.api] highres = true [exabgp.cache] attributes = false nexthops = false [exabgp.log] configuration = false destination = 'syslog' ``` ##### CONFIGURATION ``` group multi_neighbor { process receive-routes { encoder json; run /opt/exabgp/bin/queuer.py /export/exabgp/message-queues-rabbit/4 /export/exabgp/message-queues-kafka/4; neighbor-changes; receive { update; parsed; packets; consolidate; open; keepalive; notification; } } process control-socket { run /opt/exabgp/bin/control_socket.run /var/run/exabgp/control_instance_4.sock; } process beacon { run /opt/exabgp/bin/beaconhandler.py /opt/exabgp/etc/exabgp/beacon_instance_4.conf /var/run/exabgp/beacon_4.state /opt/exabgp/etc/exabgp/bgp-beacon.def; } family { ipv4 unicast; ipv6 unicast; } neighbor 2001:7f8:24::5a { router-id 91.206.52.253; local-address 2001:7f8:24::fd; local-as 12654; peer-as 59414; static { } } } ``` There are more peers in the config, but I've only left in the one that's triggering this bug. ##### SUMMARY When exabgp receives an update from this peer, the resulting JSON contains control characters. When exabgp passes this JSON to the "receive-routes" process, that process dies with an exception. ##### STEPS TO REPRODUCE I don't know if this can be easily reproduced. However, I'm attaching a file here containing the JSON produced from an update from this peer. The thing of note is that there is a key called "raw", and this contains control characters. [rrc20-exabgp-broken-json.txt](https://github.com/Exa-Networks/exabgp/files/1496002/rrc20-exabgp-broken-json.txt) ##### EXPECTED RESULTS Properly escaped JSON. 
##### ACTUAL RESULTS The JSON structure contains a key called "raw" with control characters in it. ##### IMPORTANCE This is causing a problem in production, because this update causes the receive-routes process to die, and this in turn causes exabgp to die.
1.0
Exabgp produces invalid JSON - ##### ISSUE TYPE - Bug Report ##### OS CentOS 6.9, with exabgp running under pypy 2.0.2 ##### VERSION ``` ExaBGP : 3.4.15 Python : 2.7.3 (f66246c46ca30b26a5c73e4cc95dd6235c966b8f, Jul 30 2013, 09:27:06) [PyPy 2.0.2 with GCC 4.4.7 20120313 (Red Hat 4.4.7-3)] Uname : #1 SMP Wed Jul 12 14:17:22 UTC 2017 ``` ##### ENVIRONMENT ``` [exabgp.api] highres = true [exabgp.cache] attributes = false nexthops = false [exabgp.log] configuration = false destination = 'syslog' ``` ##### CONFIGURATION ``` group multi_neighbor { process receive-routes { encoder json; run /opt/exabgp/bin/queuer.py /export/exabgp/message-queues-rabbit/4 /export/exabgp/message-queues-kafka/4; neighbor-changes; receive { update; parsed; packets; consolidate; open; keepalive; notification; } } process control-socket { run /opt/exabgp/bin/control_socket.run /var/run/exabgp/control_instance_4.sock; } process beacon { run /opt/exabgp/bin/beaconhandler.py /opt/exabgp/etc/exabgp/beacon_instance_4.conf /var/run/exabgp/beacon_4.state /opt/exabgp/etc/exabgp/bgp-beacon.def; } family { ipv4 unicast; ipv6 unicast; } neighbor 2001:7f8:24::5a { router-id 91.206.52.253; local-address 2001:7f8:24::fd; local-as 12654; peer-as 59414; static { } } } ``` There are more peers in the config, but I've only left in the one that's triggering this bug. ##### SUMMARY When exabgp receives an update from this peer, the resulting JSON contains control characters. When exabgp passes this JSON to the "receive-routes" process, that process dies with an exception. ##### STEPS TO REPRODUCE I don't know if this can be easily reproduced. However, I'm attaching a file here containing the JSON produced from an update from this peer. The thing of note is that there is a key called "raw", and this contains control characters. [rrc20-exabgp-broken-json.txt](https://github.com/Exa-Networks/exabgp/files/1496002/rrc20-exabgp-broken-json.txt) ##### EXPECTED RESULTS Properly escaped JSON. 
##### ACTUAL RESULTS The JSON structure contains a key called "raw" with control characters in it. ##### IMPORTANCE This is causing a problem in production, because this update causes the receive-routes process to die, and this in turn causes exabgp to die.
non_process
exabgp produces invalid json issue type bug report os centos with exabgp running under pypy version exabgp python jul uname smp wed jul utc environment highres true attributes false nexthops false configuration false destination syslog configuration group multi neighbor process receive routes encoder json run opt exabgp bin queuer py export exabgp message queues rabbit export exabgp message queues kafka neighbor changes receive update parsed packets consolidate open keepalive notification process control socket run opt exabgp bin control socket run var run exabgp control instance sock process beacon run opt exabgp bin beaconhandler py opt exabgp etc exabgp beacon instance conf var run exabgp beacon state opt exabgp etc exabgp bgp beacon def family unicast unicast neighbor router id local address fd local as peer as static there are more peers in the config but i ve only left in the one that s triggering this bug summary when exabgp receives an update from this peer the resulting json contains control characters when exabgp passes this json to the receive routes process that process dies with an exception steps to reproduce i don t know if this can be easily reproduced however i m attaching a file here containing the json produced from an update from this peer the thing of note is that there is a key called raw and this contains control characters expected results properly escaped json actual results the json structure contains a key called raw with control characters in it importance this is causing a problem in production because this update causes the receive routes process to die and this in turn causes exabgp to die
0
6,951
10,113,526,002
IssuesEvent
2019-07-30 16:56:21
material-components/material-components-ios
https://api.github.com/repos/material-components/material-components-ios
closed
[Examples] Convert to Swift 4.0
[FlexibleHeader] app:Catalog app:Pesto app:Shrine skill:Swift type:Process
When Xcode 8 support is dropped. The majority of the work can be gleaned from #2089 <!-- Auto-generated content below, do not modify --> --- #### Internal data - Associated internal bug: [b/117179037](http://b/117179037)
1.0
[Examples] Convert to Swift 4.0 - When Xcode 8 support is dropped. The majority of the work can be gleaned from #2089 <!-- Auto-generated content below, do not modify --> --- #### Internal data - Associated internal bug: [b/117179037](http://b/117179037)
process
convert to swift when xcode support is dropped the majority of the work can be gleaned from internal data associated internal bug
1
7,791
10,948,750,245
IssuesEvent
2019-11-26 09:31:17
Open-EO/openeo-processes
https://api.github.com/repos/Open-EO/openeo-processes
closed
Add clone_cube
help wanted new process question
Suggestion from @claxn: > Moreover, a function clone_raster_cube (i.e. copying all properties from a raster cube and creating an empty one) could be helpful in case a user only has to change one property instead of specifying all spatial attributes. Does that sound helpful to everybody? I assume the values would all be set to no data (null)? Could be someting like: `clone_cube(raster_cube cube, ?boolean clone_values = false) : raster_cube` clone values = false => Set all values to null (no data) clone values = true => Copy also the pixel values
1.0
Add clone_cube - Suggestion from @claxn: > Moreover, a function clone_raster_cube (i.e. copying all properties from a raster cube and creating an empty one) could be helpful in case a user only has to change one property instead of specifying all spatial attributes. Does that sound helpful to everybody? I assume the values would all be set to no data (null)? Could be someting like: `clone_cube(raster_cube cube, ?boolean clone_values = false) : raster_cube` clone values = false => Set all values to null (no data) clone values = true => Copy also the pixel values
process
add clone cube suggestion from claxn moreover a function clone raster cube i e copying all properties from a raster cube and creating an empty one could be helpful in case a user only has to change one property instead of specifying all spatial attributes does that sound helpful to everybody i assume the values would all be set to no data null could be someting like clone cube raster cube cube boolean clone values false raster cube clone values false set all values to null no data clone values true copy also the pixel values
1
117,912
17,568,424,167
IssuesEvent
2021-08-14 06:48:35
veritem/springboot-template
https://api.github.com/repos/veritem/springboot-template
closed
CVE-2016-1000343 (High) detected in bcprov-jdk15on-1.50.jar
security vulnerability
## CVE-2016-1000343 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>bcprov-jdk15on-1.50.jar</b></p></summary> <p>The Bouncy Castle Crypto package is a Java implementation of cryptographic algorithms. This jar contains JCE provider and lightweight API for the Bouncy Castle Cryptography APIs for JDK 1.5 to JDK 1.7.</p> <p>Library home page: <a href="http://www.bouncycastle.org/java.html">http://www.bouncycastle.org/java.html</a></p> <p>Path to dependency file: springboot-template/pom.xml</p> <p>Path to vulnerable library: /home/wss-scanner/.m2/repository/org/bouncycastle/bcprov-jdk15on/1.50/bcprov-jdk15on-1.50.jar</p> <p> Dependency Hierarchy: - passay-1.0.jar (Root Library) - cryptacular-1.0.jar - :x: **bcprov-jdk15on-1.50.jar** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/veritem/springboot-template/commit/01ac1a359f8ea94a04ca4a65820afb352f5bbc25">01ac1a359f8ea94a04ca4a65820afb352f5bbc25</a></p> <p>Found in base branch: <b>main</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> In the Bouncy Castle JCE Provider version 1.55 and earlier the DSA key pair generator generates a weak private key if used with default values. If the JCA key pair generator is not explicitly initialised with DSA parameters, 1.55 and earlier generates a private value assuming a 1024 bit key size. In earlier releases this can be dealt with by explicitly passing parameters to the key pair generator. 
<p>Publish Date: 2018-06-04 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2016-1000343>CVE-2016-1000343</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: High - Integrity Impact: None - Availability Impact: None </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2016-1000343">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2016-1000343</a></p> <p>Release Date: 2018-06-04</p> <p>Fix Resolution: org.bouncycastle:bcprov-debug-jdk14:1.56,org.bouncycastle:bcprov-ext-jdk15on:1.56,org.bouncycastle:bcprov-jdk14:1.56,org.bouncycastle:bcprov-jdk15on:1.56,org.bouncycastle:bcprov-ext-debug-jdk15on:1.56</p> </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
True
CVE-2016-1000343 (High) detected in bcprov-jdk15on-1.50.jar - ## CVE-2016-1000343 - High Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>bcprov-jdk15on-1.50.jar</b></p></summary> <p>The Bouncy Castle Crypto package is a Java implementation of cryptographic algorithms. This jar contains JCE provider and lightweight API for the Bouncy Castle Cryptography APIs for JDK 1.5 to JDK 1.7.</p> <p>Library home page: <a href="http://www.bouncycastle.org/java.html">http://www.bouncycastle.org/java.html</a></p> <p>Path to dependency file: springboot-template/pom.xml</p> <p>Path to vulnerable library: /home/wss-scanner/.m2/repository/org/bouncycastle/bcprov-jdk15on/1.50/bcprov-jdk15on-1.50.jar</p> <p> Dependency Hierarchy: - passay-1.0.jar (Root Library) - cryptacular-1.0.jar - :x: **bcprov-jdk15on-1.50.jar** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/veritem/springboot-template/commit/01ac1a359f8ea94a04ca4a65820afb352f5bbc25">01ac1a359f8ea94a04ca4a65820afb352f5bbc25</a></p> <p>Found in base branch: <b>main</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary> <p> In the Bouncy Castle JCE Provider version 1.55 and earlier the DSA key pair generator generates a weak private key if used with default values. If the JCA key pair generator is not explicitly initialised with DSA parameters, 1.55 and earlier generates a private value assuming a 1024 bit key size. In earlier releases this can be dealt with by explicitly passing parameters to the key pair generator. 
<p>Publish Date: 2018-06-04 <p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2016-1000343>CVE-2016-1000343</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Network - Attack Complexity: Low - Privileges Required: None - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: High - Integrity Impact: None - Availability Impact: None </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2016-1000343">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2016-1000343</a></p> <p>Release Date: 2018-06-04</p> <p>Fix Resolution: org.bouncycastle:bcprov-debug-jdk14:1.56,org.bouncycastle:bcprov-ext-jdk15on:1.56,org.bouncycastle:bcprov-jdk14:1.56,org.bouncycastle:bcprov-jdk15on:1.56,org.bouncycastle:bcprov-ext-debug-jdk15on:1.56</p> </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
non_process
cve high detected in bcprov jar cve high severity vulnerability vulnerable library bcprov jar the bouncy castle crypto package is a java implementation of cryptographic algorithms this jar contains jce provider and lightweight api for the bouncy castle cryptography apis for jdk to jdk library home page a href path to dependency file springboot template pom xml path to vulnerable library home wss scanner repository org bouncycastle bcprov bcprov jar dependency hierarchy passay jar root library cryptacular jar x bcprov jar vulnerable library found in head commit a href found in base branch main vulnerability details in the bouncy castle jce provider version and earlier the dsa key pair generator generates a weak private key if used with default values if the jca key pair generator is not explicitly initialised with dsa parameters and earlier generates a private value assuming a bit key size in earlier releases this can be dealt with by explicitly passing parameters to the key pair generator publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact none availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution org bouncycastle bcprov debug org bouncycastle bcprov ext org bouncycastle bcprov org bouncycastle bcprov org bouncycastle bcprov ext debug step up your open source security game with whitesource
0
9,135
12,203,170,369
IssuesEvent
2020-04-30 10:08:35
MHRA/products
https://api.github.com/repos/MHRA/products
closed
Integration tests for the doc-index-updater
Automated Tests :robot: EPIC - Auto Batch Process :oncoming_automobile:
# Integration tests for the doc-index-updater This is all about creating integration tests which should have been covered in #434, #435, #444, #456. ## User want As a Products team member I want to know when someone has broken functionality in the doc-index-updater So that I can prevent it from causing issues in production ### Customer acceptance criteria People uploading documents to Sentinel want the upload service to be robust, which means well-tested. ### Technical acceptance criteria Tests should cover: - [ ] Spinning up a redis server as part of the - [ ] Setting a status using the status setting endpoint - [ ] Getting a job status using the get status endpoint - [ ] Calling the delete endpoint and seeing the job status updated - [ ] Calling the create endpoint and seeing the job status updated ### Testing acceptance criteria - [ ] Tests exist. - [ ] They run. - [ ] They pass. - [ ] They're understandable. - [ ] They fail when code is changed that breaks the functionality they purport to cover. ### Exit Criteria met - [x] Backlog - [x] Discovery - [x] DUXD - [ ] Development - [ ] Quality Assurance - [ ] Release and Validate
1.0
Integration tests for the doc-index-updater - # Integration tests for the doc-index-updater This is all about creating integration tests which should have been covered in #434, #435, #444, #456. ## User want As a Products team member I want to know when someone has broken functionality in the doc-index-updater So that I can prevent it from causing issues in production ### Customer acceptance criteria People uploading documents to Sentinel want the upload service to be robust, which means well-tested. ### Technical acceptance criteria Tests should cover: - [ ] Spinning up a redis server as part of the - [ ] Setting a status using the status setting endpoint - [ ] Getting a job status using the get status endpoint - [ ] Calling the delete endpoint and seeing the job status updated - [ ] Calling the create endpoint and seeing the job status updated ### Testing acceptance criteria - [ ] Tests exist. - [ ] They run. - [ ] They pass. - [ ] They're understandable. - [ ] They fail when code is changed that breaks the functionality they purport to cover. ### Exit Criteria met - [x] Backlog - [x] Discovery - [x] DUXD - [ ] Development - [ ] Quality Assurance - [ ] Release and Validate
process
integration tests for the doc index updater integration tests for the doc index updater this is all about creating integration tests which should have been covered in user want as a products team member i want to know when someone has broken functionality in the doc index updater so that i can prevent it from causing issues in production customer acceptance criteria people uploading documents to sentinel want the upload service to be robust which means well tested technical acceptance criteria tests should cover spinning up a redis server as part of the setting a status using the status setting endpoint getting a job status using the get status endpoint calling the delete endpoint and seeing the job status updated calling the create endpoint and seeing the job status updated testing acceptance criteria tests exist they run they pass they re understandable they fail when code is changed that breaks the functionality they purport to cover exit criteria met backlog discovery duxd development quality assurance release and validate
1
8,031
11,210,731,834
IssuesEvent
2020-01-06 13:56:24
prisma/prisma2
https://api.github.com/repos/prisma/prisma2
closed
Error reporting zip and MacOS catalina
bug/2-confirmed kind/bug process/candidate topic: errors
The error reports that we upload as a part of the `prisma2` CLI are not working on MacOS catalina. An attempt to unzip them yields: ![image](https://user-images.githubusercontent.com/746482/71820061-3b80af80-308e-11ea-9892-4e3c2549197f.png)
1.0
Error reporting zip and MacOS catalina - The error reports that we upload as a part of the `prisma2` CLI are not working on MacOS catalina. An attempt to unzip them yields: ![image](https://user-images.githubusercontent.com/746482/71820061-3b80af80-308e-11ea-9892-4e3c2549197f.png)
process
error reporting zip and macos catalina the error reports that we upload as a part of the cli are not working on macos catalina an attempt to unzip them yields
1
150,572
19,604,212,140
IssuesEvent
2022-01-06 07:07:33
snykiotcubedev/arangodb-3.7.6
https://api.github.com/repos/snykiotcubedev/arangodb-3.7.6
opened
WS-2019-0209 (Medium) detected in marked-0.6.2.tgz
security vulnerability
## WS-2019-0209 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>marked-0.6.2.tgz</b></p></summary> <p>A markdown parser built for speed</p> <p>Library home page: <a href="https://registry.npmjs.org/marked/-/marked-0.6.2.tgz">https://registry.npmjs.org/marked/-/marked-0.6.2.tgz</a></p> <p>Path to dependency file: /js/node/package.json</p> <p>Path to vulnerable library: /js/node/node_modules/marked/package.json</p> <p> Dependency Hierarchy: - :x: **marked-0.6.2.tgz** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/snykiotcubedev/arangodb-3.7.6/commit/fce8f85f1c2f070c8e6a8e76d17210a2117d3833">fce8f85f1c2f070c8e6a8e76d17210a2117d3833</a></p> <p>Found in base branch: <b>main</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary> <p> marked before 0.7.0 vulnerable to Redos attack by he _label subrule that may significantly degrade parsing performance of malformed input. <p>Publish Date: 2019-07-04 <p>URL: <a href=https://github.com/markedjs/marked/pull/1515>WS-2019-0209</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.5</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Local - Attack Complexity: Low - Privileges Required: Low - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. 
</p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://www.npmjs.com/advisories/1076">https://www.npmjs.com/advisories/1076</a></p> <p>Release Date: 2019-07-04</p> <p>Fix Resolution: 0.7.0</p> </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
True
WS-2019-0209 (Medium) detected in marked-0.6.2.tgz - ## WS-2019-0209 - Medium Severity Vulnerability <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>marked-0.6.2.tgz</b></p></summary> <p>A markdown parser built for speed</p> <p>Library home page: <a href="https://registry.npmjs.org/marked/-/marked-0.6.2.tgz">https://registry.npmjs.org/marked/-/marked-0.6.2.tgz</a></p> <p>Path to dependency file: /js/node/package.json</p> <p>Path to vulnerable library: /js/node/node_modules/marked/package.json</p> <p> Dependency Hierarchy: - :x: **marked-0.6.2.tgz** (Vulnerable Library) <p>Found in HEAD commit: <a href="https://github.com/snykiotcubedev/arangodb-3.7.6/commit/fce8f85f1c2f070c8e6a8e76d17210a2117d3833">fce8f85f1c2f070c8e6a8e76d17210a2117d3833</a></p> <p>Found in base branch: <b>main</b></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary> <p> marked before 0.7.0 vulnerable to Redos attack by he _label subrule that may significantly degrade parsing performance of malformed input. <p>Publish Date: 2019-07-04 <p>URL: <a href=https://github.com/markedjs/marked/pull/1515>WS-2019-0209</a></p> </p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.5</b>)</summary> <p> Base Score Metrics: - Exploitability Metrics: - Attack Vector: Local - Attack Complexity: Low - Privileges Required: Low - User Interaction: None - Scope: Unchanged - Impact Metrics: - Confidentiality Impact: None - Integrity Impact: None - Availability Impact: High </p> For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>. 
</p> </details> <p></p> <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary> <p> <p>Type: Upgrade version</p> <p>Origin: <a href="https://www.npmjs.com/advisories/1076">https://www.npmjs.com/advisories/1076</a></p> <p>Release Date: 2019-07-04</p> <p>Fix Resolution: 0.7.0</p> </p> </details> <p></p> *** Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
non_process
ws medium detected in marked tgz ws medium severity vulnerability vulnerable library marked tgz a markdown parser built for speed library home page a href path to dependency file js node package json path to vulnerable library js node node modules marked package json dependency hierarchy x marked tgz vulnerable library found in head commit a href found in base branch main vulnerability details marked before vulnerable to redos attack by he label subrule that may significantly degrade parsing performance of malformed input publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity low privileges required low user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with whitesource
0